I’ve touched on this topic in an earlier post, “Robots, Law, Regulation: ‘Unfortunately It’s Not a Conversation That’s Happening Anywhere …’” In that post I briefly highlighted an observation that Ed Walters* made when talking about the law of robotics, namely: Who makes or monitors the algorithmic decisions embedded in autonomous systems?
Patrick Lin, associate professor of ethics at California Polytechnic State University, posted a very nice “thought experiment” on TED-Ed a couple of weeks ago that “isolates and stress tests our intuitions” and contributes to this conversation.
It’s a short animated video that provides some great examples of the kind of “ethical dilemma” that arises when a self-driving car is involved in an accident.
“If we were driving [and involved in an accident] … whichever way we react would be understood as just that: a reaction. Not a deliberate decision. It would be an instinctual, panicked move with no forethought or malice. But if a programmer were to instruct the car to make the same move, given conditions it may sense in the future, well, that looks more like premeditated homicide.”
It’s a very interesting, and nicely animated, thought experiment that leaves me mulling over the questions Lin raises:
“Could it be the case that a random decision is still better than a predetermined one designed to minimize harm? And who should be making these decisions anyhow: programmers, companies, governments?”
If you’re interested in exploring this thought experiment further, Lin discussed these issues in more depth in “The Ethics of Autonomous Cars,” an article published in The Atlantic in October 2013.