Today I have been both murderous and merciful. I have deliberately mown down pensioners and a pack of dogs. I have ploughed into the homeless, slain a couple of athletes and run over the obese. But I have always tried to save the children.
As I finish my session on the Moral Machine — a public experiment being run by the Massachusetts Institute of Technology — I learn that my moral outlook is not universally shared. Some argue that aggregating public opinions on ethical dilemmas is an effective way to endow intelligent machines, such as driverless cars, with limited moral reasoning capacity. Yet after my experience, I am not convinced that crowdsourcing is the best way to develop what is essentially the ethics of killing people. The question is not purely academic: Tesla is being sued in China over the death of a driver whose car was equipped with its “semi-autonomous” Autopilot. Tesla denies the technology was at fault.
Anyone with a computer and a coffee break can contribute to MIT’s mass experiment, which imagines the brakes failing on a fully autonomous vehicle. The vehicle is packed with passengers and heading towards pedestrians. The experiment depicts 13 variations of the “trolley problem” — a classic dilemma in ethics that involves deciding who will die under the wheels of a runaway tram.