Will Self-Driving Cars Spontaneously Reboot?

A common rebuke to self-driving cars is the thought of cars behaving like computers – freezing or rebooting while driving. Those make amusing sound bites or Twitter comments, but there is a grain of truth to them. Self-driving technology has come a long way, but while computers and software can follow programmed instructions, and can learn over time, humans are still better at many things.

An article in the New York Times entitled “Why Robots Will Always Need Us” does a good job of putting this in context, in part by drawing on the experience of aviation.

Author Nicholas Carr points out that:

Pilots, physicians and other professionals routinely navigate unexpected dangers with great aplomb but little credit. Even in our daily routines, we perform feats of perception and skill that lie beyond the capacity of the sharpest computers. … Computers are wonderful at following instructions, but they’re terrible at improvisation. Their talents end at the limits of their programming.

and

In 2013, the Federal Aviation Administration noted that overreliance on automation has become a major factor in air disasters and urged airlines to give pilots more opportunities to fly manually.

That’s not to say that we should smugly dismiss automation or technology. Lawyers, for example, who dismiss the ability of software to take over some of the things we do are in for a rude awakening.

In general, computer code is never bug-free, is never perfect, and cannot do certain things. (You can say the same of us humans, though.) For example, the aircraft industry spends enormous amounts of time and money testing the software that operates aircraft. On the other hand, the range of things computers can do well is increasing, and will continue to increase over time. At some point there may be breakthroughs that make computers more reliable and better at the things we humans are more adept at. But we are not there yet.


Comments

  1. David Collier-Brown

    An interesting question for this community might be: to what extent can a software company excuse itself from liability? For the failures of ordinary products, ranging from laptops in my backpack to professional-grade servers in machine rooms, the software folks claim to be completely irresponsible (;-))

    But what if the software is in cars?

  2. Yes, the issue of liability and responsibility for self-driving cars will be interesting. When is it just a bug, and when is it a defect that crosses a responsibility threshold? The airline industry might provide some clues, given how automated aircraft controls are.

    It may very well be that we are headed further in the no-fault direction. Otherwise, how practical would it be to figure out responsibility if two self-driving cars collided? And if a self-driving car hit a pedestrian – what liability would the human in the car have?

    Does it somehow depend on how much control the “driver” has? How much attention is a “driver” supposed to give to what the vehicle is doing, and what responsibility does the “driver” have to take over if the car appears to be doing something wrong? The car’s reaction time is much better than a human’s, and by the time the human figures out something is amiss, it may already be too late. After all, if the “driver” still has to pay full attention and be ready to intervene, doesn’t that somehow miss the point?