
Your robot co-driver June 15, 2011

Posted by Cameron Shelley in: STV202, STV302

According to this FastCompany article, researchers at MIT have been designing prototype automated driving systems. One of the goals of this research is to develop a kind of co-driver, a program that tracks your driving and that of other drivers around you so that it can intervene in the case of an emergency:

… they’ve built an algorithm that tries to predict how cars will accelerate and decelerate at intersections or corners and can thus compensate to move itself out of an area where it predicts the two vehicles could collide while maneuvering. It uses a game theory-like decision system, grabbing data from in-car sensors and other sensors in roadside and traffic light units, elements of the future intelligent driving system.
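
The quoted description suggests a simple core loop: extrapolate nearby vehicles' trajectories over a short horizon and intervene when the predicted paths come too close. The sketch below illustrates that idea under deliberately naive assumptions (constant-acceleration prediction, a fixed safety threshold); every name, number, and simplification here is mine, not MIT's, and the actual system is far more sophisticated.

```python
# Hypothetical sketch of a collision-predicting co-driver: extrapolate two
# vehicles' motion over a short horizon and flag an intervention if their
# predicted separation ever falls below a safety threshold. All names and
# numbers are illustrative assumptions, not the MIT system.
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float   # position (m)
    y: float
    vx: float  # velocity (m/s)
    vy: float
    ax: float  # acceleration (m/s^2)
    ay: float

def predict_position(s: VehicleState, t: float) -> tuple[float, float]:
    """Constant-acceleration extrapolation: p(t) = p0 + v*t + 0.5*a*t^2."""
    return (s.x + s.vx * t + 0.5 * s.ax * t * t,
            s.y + s.vy * t + 0.5 * s.ay * t * t)

def min_separation(own: VehicleState, other: VehicleState,
                   horizon: float = 3.0, dt: float = 0.1) -> float:
    """Smallest predicted distance between the two cars over the horizon."""
    best = float("inf")
    t = 0.0
    while t <= horizon:
        ox, oy = predict_position(own, t)
        px, py = predict_position(other, t)
        best = min(best, ((ox - px) ** 2 + (oy - py) ** 2) ** 0.5)
        t += dt
    return best

SAFE_DISTANCE = 2.5  # metres; an arbitrary illustrative threshold

def co_driver_should_intervene(own: VehicleState, other: VehicleState) -> bool:
    """True if the co-driver should countermand the driver, e.g. by braking."""
    return min_separation(own, other) < SAFE_DISTANCE
```

A genuinely game-theoretic version would also model how the other driver is likely to react to our own maneuver, rather than treating the other trajectory as fixed.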

Imagine the first time you approach an intersection and the car countermands your input, overriding the accelerator, say, and slowing down to reduce the risk of a collision.

[Image: KITT (courtesy of Magnus Manske via Wikimedia Commons).]

It sounds like an intriguing development, and who would not approve of a system that seems likely to save lives? Of course, there could be unintended consequences. Readers of this blog will probably ask themselves whether this safety system could induce drivers to be more careless, on the assumption that their robot co-driver will bail them out of any difficulty. In his book Why Things Bite Back, Edward Tenner discusses how safety equipment in sports, for example, made players more reckless or aggressive, resulting in more frequent or more devastating injuries. This effect is often called risk compensation, and the failure of anti-lock brakes to reduce accident rates has been attributed to it, so the concern is plausible.

Beyond that, the development of robot co-drivers poses some thorny ethical issues. It appears, for example, that the aim of the current system is to save the life of the driver in an accident. But why is that the best outcome to aim for? If the car finds itself in a situation where a bad accident, a pile-up, say, seems likely, then it might be preferable to sacrifice the driver in order to save others in the vicinity. This scenario resembles the notorious trolley problem, in which people are asked what they would do if, for example, they could push a man into the path of a trolley in order to stop it from running over a group of people further down the track.

In driver education, we do not train drivers to make these sorts of calculations. Instead, folklore suggests that drivers instinctively protect themselves: the front passenger seat of a car is known as the "death seat" because a driver will swerve away from an oncoming vehicle, placing the passenger between the driver and the threat. This maneuver is not a calculated decision but a natural instinct in a split-second situation. A computerized co-driver with ample information about the situation, however, may well have the opportunity to decide who gets to live and who does not. So, we need to think about how this decision is to be made. Is it, as in the movie I, Robot, to be based on a risk assessment? If so, how? If not, why not? How would you program the co-driver to behave in such circumstances?
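
One crude way to make that question concrete is the risk-assessment approach just mentioned: enumerate the maneuvers available to the car, estimate the harm each might cause, and pick the one with the lowest expected harm. The sketch below does exactly that; the maneuvers, probabilities, and harm scores are all invented for illustration, and the weights show how the "who counts more" question hides inside the arithmetic.

```python
# Illustrative expected-harm minimizer. Every maneuver, probability, and harm
# score below is made up for the sake of the example.
#
# Each outcome: (probability of this outcome given the maneuver,
#                harm to the driver, harm to others), harms on a 0-1 scale.
CANDIDATE_MANEUVERS = {
    # Brake hard: usually safe for everyone, but some risk to the driver.
    "brake_hard":   [(0.6, 0.0, 0.0), (0.4, 0.9, 0.1)],
    # Swerve onto the shoulder: driver almost certainly fine, others at risk.
    "swerve_right": [(0.8, 0.1, 0.7), (0.2, 0.1, 0.0)],
}

def expected_harm(outcomes, driver_weight: float = 1.0,
                  others_weight: float = 1.0) -> float:
    """Probability-weighted harm; the weights encode whose safety counts more."""
    return sum(p * (driver_weight * d + others_weight * o)
               for p, d, o in outcomes)

def choose_maneuver(maneuvers, driver_weight=1.0, others_weight=1.0) -> str:
    """Pick the maneuver with the smallest expected harm under these weights."""
    return min(maneuvers, key=lambda m: expected_harm(
        maneuvers[m], driver_weight, others_weight))

# With these invented numbers, equal weights choose "brake_hard", while
# weighting the driver's safety five-fold flips the choice to "swerve_right".
print(choose_maneuver(CANDIDATE_MANEUVERS))                    # equal weights
print(choose_maneuver(CANDIDATE_MANEUVERS, driver_weight=5.0)) # driver-first
```

The point, of course, is not that such code settles the trolley problem, but that any implementation must commit to particular weights, and thus to a particular answer about whose safety matters most.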

Comments

1. Cameron Shelley - 2011/06/17

Have a look at Edward Tenner’s take on this issue.
