From a robotics perspective, how does the self-driving car work?
To my knowledge, Google has released few technical details of its implementation — its project was a company secret until fall of 2010. Google’s approach is said to combine sensor inputs from cameras, radar, wheel rotation and lasers that have been added to the car, along with previously stored maps and images from the Google Street View database. Presumably Google also uses GPS to get a rough idea of the car’s location. Exactly how it is using that information has not been disclosed, but it likely involves a combination of high-level planning and lower-level control.
For planning, the car is probably given a destination and then plans a route based on its current location and map data. The plan could also be revised along the way, should road closures or heavy traffic provide impediments.
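Route planning of this kind is commonly done with a shortest-path search over a road graph. The sketch below uses Dijkstra’s algorithm on a toy network — an illustration of the general technique, not Google’s actual planner; the road names and travel times are made up. A closure or traffic jam can be modeled by raising an edge’s cost and re-running the search.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a road graph whose edge weights are
    travel times in minutes.  Returns (route, total_minutes)."""
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    visited = set()
    while queue:
        d, node = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    # Walk the predecessor links back from the goal to recover the route.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# Toy road network: each edge is (neighbor, minutes of travel time).
roads = {
    "A": [("B", 5), ("C", 10)],
    "B": [("D", 7)],
    "C": [("D", 3)],
    "D": [],
}
route, minutes = shortest_route(roads, "A", "D")  # route A-B-D, 12 minutes
```

Re-planning after a road closure is then just a matter of deleting (or heavily penalizing) the blocked edge and calling `shortest_route` again from the car’s current position.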
For control, which is likely the more challenging task, it would be necessary to rapidly process data from the onboard sensors both to measure nominal aspects of the environment, such as road markings, and off-nominal events, such as unexpected pedestrians. The software would then need to quickly decide how to modify the car’s driving inputs (steering, throttle, brakes) to react safely.
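That sense-decide-act cycle can be sketched as a single control step, run many times per second. Everything below is a deliberately simplified illustration under assumed inputs — the field names, gains and thresholds are invented, not drawn from Google’s system.

```python
def control_step(frame):
    """One cycle of a hypothetical sense-decide-act loop.

    `frame` bundles the latest processed sensor readings:
      lane_offset_m     -- lateral offset from lane center (>0 = drifted right)
      obstacle_ahead_m  -- distance to the nearest detected obstacle
    Returns the driving inputs: steering, throttle and brake commands.
    """
    throttle, brake = 0.3, 0.0  # nominal cruising inputs

    # Nominal behavior: steer back toward the center of the detected
    # lane markings (simple proportional correction).
    steering = -0.5 * frame["lane_offset_m"]

    # Off-nominal event: an unexpected obstacle (e.g. a pedestrian)
    # close ahead overrides normal driving with full braking.
    if frame["obstacle_ahead_m"] < 20.0:
        throttle, brake = 0.0, 1.0

    return {"steering": steering, "throttle": throttle, "brake": brake}

# A pedestrian detected 8 m ahead while the car has drifted 0.2 m right:
cmd = control_step({"lane_offset_m": 0.2, "obstacle_ahead_m": 8.0})
```

A real controller would of course blend many more inputs and smooth its commands over time, but the structure — measure the nominal state, check for off-nominal events, then update steering, throttle and brakes — is the same.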
The legal and ethical questions of driverless cars are significant: Who will be responsible — and liable — if a driverless car is involved in an accident?
Though Google’s car has been making headlines since it was revealed, self-driving cars have been studied worldwide since the 1980s; some projects had already completed thousands of self-driven miles as early as 1995. The pace of advancement picked up over the last decade, in part because the United States Defense Advanced Research Projects Agency sponsored several high-profile competitions.
What technical challenges does Google face in designing its driverless car?
The main challenges are sensors that can substitute for human eyes and ears, and software that can reliably process the resulting data to drive safely.
The software would nominally need to understand the current location of the car and fundamental aspects of the driving environment, including road boundaries as well as the positions and speeds of nearby cars and pedestrians. It would also need to understand how the car would react to changes in the driving inputs. How far, for example, would the car travel before coming to a stop when a certain level of braking is applied?
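The braking question above has a simple kinematic answer the software could use as a starting point: a car moving at speed v that decelerates at a constant rate a travels v²/(2a) before stopping, plus whatever distance it covers during the system’s reaction delay. The numbers below are illustrative physics, not figures from Google’s car.

```python
def stopping_distance_m(speed_mps, decel_mps2, reaction_s=0.1):
    """Distance traveled before coming to a stop:
    reaction distance (v * t_r) plus braking distance (v^2 / (2a))."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

# At roughly highway speed (27 m/s, about 60 mph) with firm braking
# (6 m/s^2), the car needs on the order of 60-plus meters to stop:
d = stopping_distance_m(27.0, 6.0)  # 2.7 m reaction + 60.75 m braking
```

In practice the achievable deceleration varies with road surface, tires and load, which is exactly why the software must learn or measure how the car actually responds to its inputs rather than rely on a fixed formula.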
A fundamental and deep challenge would be to predict the behavior of surrounding vehicles, especially when humans may be driving, but there are also a number of more subtle off-nominal situations that are likely huge challenges to handle automatically. These challenges include road damage, weather conditions, nearby accidents, unusual signage and sudden engine, wheel and brake malfunctions.
What are the benefits and drawbacks to introducing a self-driving car into the current traffic pattern?
One of the benefits that has been suggested is the possibility that self-driving cars could reduce accidents, many of which are attributed to human error or inattentiveness. Another benefit could be reduced traffic delays, since self-driving cars could potentially coordinate road use by acquiring and using larger-scale information about other cars on the road via wireless communication. This would likely require that at least a significant fraction of all cars be self-driven and participate in such coordination.
The main drawbacks are safety, liability and cost. Can the current software respond as well — and as rapidly — to off-nominal situations as an experienced human? When a self-driven car is involved in an accident, could the maker of the driving system be liable, just as a human driver could be? How much can the cost of the requisite advanced sensors and computers, which is currently in the tens of thousands of dollars, be reduced?