Trusting the Black Box

Nuclear energy is the most efficient, safest, and cleanest form of energy there is. In sixty years of existence, approximately 4,000 deaths are indirectly attributable to it, compared to roughly 7,500 direct deaths every year from coal particulate matter. Those nuclear deaths were highly visible, and nearly all of them trace back to Chernobyl, whose surrounding area was rendered uninhabitable by the disaster. But other incidents, like Three Mile Island, released little radiation, because industry safeguards keep nuclear power safe: plants carry many redundant systems designed to ensure that another Chernobyl never happens again.

Despite the scientific consensus, the public as a whole remains skeptical of nuclear energy. When people think of nuclear energy, they are likely to picture Homer Simpson causing a meltdown or an atomic bomb going off. They don’t understand why Chernobyl failed, or why a similar incident is unlikely to occur. In short, people believe a safe technology is dangerous because of a single, highly publicized incident.

Figure 1: Most people know that Three Mile Island evacuated thousands of residents after the worst commercial nuclear disaster in American history. But they don’t know that the total radiation released was that of a chest X-ray, and that no one died because of it. Also, see that smoke? It’s not dangerous radiation. It’s water vapor! ([Source](https://www.usatoday.com/story/money/2017/05/30/three-mile-island-exelon-nuclear-energy/102307448/))

The advent of self-driving cars promises to be the biggest technological development since the World Wide Web. A number of industries will go the way of the horse and buggy once self-driving cars are in active use. Truck drivers? Nope. Valet parking? Your car will find a spot. DUI lawyers? A car can’t drive drunk. Self-driving cars will revolutionize lifestyle, traffic, and safety for millions of people.

Most technologically minded people are aware of the benefits of self-driving cars. But like nuclear energy in its early stages, self-driving cars offer enormous potential with little proven safety. Sure, they will prevent many car accident deaths. But all it takes is one highly visible incident with a media frenzy to stop them from ever being adopted.

Imagine this: a man is taking a self-driving car to work. The car plows into a farmer’s market, killing him and 30 passersby. They are the Bridget Driscolls of self-driving cars. Pictures of the gruesome scene play every day for a month on CNN. Spokespeople for the car’s manufacturer are dumbfounded and unhelpful. The families of the victims demand an explanation. But a post-mortem of a self-driving car accident will be very difficult. Like AlphaGo, which has beaten Go world champions using strategies that can only be characterized as alien, self-driving cars will make mistakes that make little sense to humans. Human errors have easy explanations. “The driver was drunk and caused a car accident.” “The driver had a heart attack and swerved into a toll booth.” A second grader can understand why a typical car accident occurs.

Self-driving cars will make mistakes that are incomprehensible to humans because algorithms, not people, are the drivers. Will an expert be able to go on CNN and convincingly tell a bunch of talking heads that 30 people died because a ReLU activation in the 15th layer of a recurrent neural network zeroed out a signal it should have passed to the next layer? No. No explanation will have the same impact on the viewing public as dead people and interviews with grieving families. And it will be even worse for self-driving cars if local governments respond by banning them for safety reasons.
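To see why that kind of explanation falls flat, here is a minimal sketch of what a ReLU actually does. This is Python with NumPy, and the activation values are invented purely for illustration:

```python
import numpy as np

def relu(x):
    # ReLU passes positive values through unchanged and zeroes out the rest
    return np.maximum(0.0, x)

# Hypothetical activations flowing into one layer of a driving model
pre_activation = np.array([2.3, -0.4, 0.9, -5.1])
print(relu(pre_activation))  # [2.3 0.  0.9 0. ]
```

The second and fourth signals simply vanish. Whether zeroing them out was the right call depends on millions of learned weights upstream, which is exactly why there is no sound bite to offer the evening news.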

The most important obstacle to wide adoption of self-driving cars is trust. Like nuclear energy, self-driving cars will be forever stigmatized if a single gruesome, highly publicized incident occurs. People feel a sense of control when they drive. They feel like they can always react to what other drivers are doing. Self-driving cars necessarily take away this form of control, leading to some accidents that would have been preventable if a person were driving. It is not surprising that 56% of people polled in a 2017 Pew survey said they would not want to ride in a self-driving car, and that 72% of those cited safety concerns and a lack of trust as their reason. The technology in self-driving cars requires hundreds of machine learning, computer vision, and robotics experts to develop. But trust, not technology, will be the primary factor in whether they become the crown jewel of an automated Second Industrial Revolution, or whether they go the way of Google Glass.

Figure 2: The results of a 2017 Pew survey on driverless vehicles. 56% of respondents would not want to ride in a driverless vehicle, of which 30% have safety concerns. Meanwhile, of the 44% who would ride in a driverless vehicle, only 17% want to because they think it’ll be safer. ([Source](http://www.pewinternet.org/2017/10/04/americans-attitudes-toward-driverless-vehicles))

Because trust is the main factor in self-driving car adoption, companies need to be obsessed with safety. In early 2017, Uber’s self-driving cars required a human driver to take control approximately once every 0.8 miles. That is acceptable with a human safety driver in the car, but it clearly defeats the purpose of the car being “self-driving.” And even if most self-driving cars drive perfectly, there will be major incidents. 2017 was the safest year ever for commercial airline passengers, yet 13 people still died in plane crashes. People mostly accept that airplanes are safe because they are an established technology with a safety record proven over 100 years of existence. A single farmer’s market incident early in the adoption of self-driving cars would leave the public distrustful and end the most promising technology of the 21st century.

Ethics in artificial intelligence is not just a philosophical concern. It makes business sense. An inability to address safety concerns can destroy years and millions of dollars’ worth of research. And there are plenty of organizations that would love to see self-driving vehicles fail. Car manufacturers without a self-driving program? DUI law firms? The Teamsters, who have also raised safety concerns? Small towns supported by truck drivers? Thousands of people will lose their livelihoods if self-driving cars become ubiquitous. These people have every incentive to promote the idea that self-driving cars aren’t safe. The media, who love a gruesome story, will immediately jump on these incidents. If Edison had had today’s media to promote the idea that alternating current is dangerous, we’d be living in a different world.

Since self-driving car accidents will inevitably happen, what can companies do to establish trust in self-driving cars?

  • Once a self-driving car reaches a proven level of safety, have it do a cross-country road trip entirely on its own before releasing it to the public. Publicize the trip. Put a webcam in the front seat so people can watch it. Write a blog from the point of view of the car detailing where it is. Get people genuinely interested in self-driving cars. Once they see how safe the technology is, they will come to trust it. Get people to realize that self-driving cars are more like Herbie the Love Bug than Christine.
  • Don’t be an asshole. Not just in technology development, but in every aspect of your company’s behavior. Pittsburgh has been getting sick of Uber’s behavior in its city. If people resent a company, they will eventually resent that company’s products, no matter how revolutionary. People trust companies that behave ethically, especially when the company is producing two-ton metal boxes that can kill people.
  • Do not rush to beat the competition. It’s better to wait decades to release a car with a 99.9999% safety rating than to wait five years and release a car that kills a few thousand people. Sure, a few thousand deaths is better than tens of thousands dying in human-controlled vehicles, but people do not think in utilitarian terms when their own lives are in danger and they have no control over the outcome. A farmer’s market incident will ensure that the public focuses on the dangers of self-driving cars instead of on their safety and lifestyle improvements.
  • Be as apologetic as possible once an incident occurs. Give survivors’ benefits to the families of people who have died in self-driving car accidents. Send representatives to every news outlet to emphasize the company’s commitment to safety and how regretful you are that such an incident occurred. Publicize that you are auditing the incident because no one should lose their child to a self-driving car. Audits and payments can cost companies millions of dollars in short-term revenue, but this is negligible compared to the financial losses they will suffer if no one trusts self-driving cars. Boeing and Airbus are well aware of this.
  • Create a separate team that acts as a RAND Institute for self-driving car accidents. Hire behavioral psychologists to explain why self-driving cars crash. Although researchers have spent thousands of hours developing novel deep learning algorithms, they have spent considerably less time explaining when and why those algorithms make mistakes. If you’re Uber or Google, you can’t shrug your shoulders and say “idk” when your car plows into a farmer’s market and kills 30 people.
  • Be able to give a comprehensive, non-technical explanation for the accident. Techniques like LIME are a nice first step (see the sketch after this list), and so are neural network visualizations. But the general public doesn’t know what a neural network or an activation function is. Anthropomorphize the car: “The car thought that the farmer’s market was a parking lot because someone was wearing a shirt that confused one of the cameras.” Of course the car doesn’t really think or know what a parking lot is, but that explanation makes more sense to the general public than a more accurate, technical one. People like anthropomorphic explanations. That’s why Greek gods, fictional robots, and fictional aliens generally act like people.
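As a concrete illustration, here is a minimal sketch of what a LIME explanation looks like. It uses a toy tabular classifier rather than a real perception stack, and the feature names and data are invented for illustration:

```python
# pip install lime scikit-learn
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Toy stand-in for a braking decision; the data and labels are synthetic.
rng = np.random.RandomState(0)
X = rng.rand(500, 4)
y = (X[:, 0] + 0.5 * X[:, 2] > 0.9).astype(int)  # 1 = "brake", 0 = "proceed"

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["obstacle_score", "lane_offset", "pedestrian_score", "speed"],
    class_names=["proceed", "brake"],
    discretize_continuous=True,
)

# Explain a single decision in terms of locally important features.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(exp.as_list())  # e.g. [('obstacle_score > 0.77', 0.25), ...]
```

Even this output still has to be translated into the anthropomorphic story above before it will mean anything to a CNN audience.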
Figure 3: When explaining self-driving car mistakes to the public, think about what Herbie the Love Bug would do, not how the neural networks in Herbie’s car brain are firing. ([Source](https://en.wikipedia.org/wiki/Herbie))

Self-driving cars will revolutionize transportation and driving safety if they become ubiquitous. But this cannot happen unless people overwhelmingly trust them. Technology companies must act ethically in order for this to happen. Otherwise they will go the way of nuclear energy and every other technological dodo.