Trusting the Black Box

The most important obstacle to wide adoption of self-driving cars is trust. Like nuclear energy, self-driving cars will be forever stigmatized if a single gruesome, highly publicized incident occurs. People feel a sense of control when they drive: they feel they can always react to what other drivers are doing. Self-driving cars necessarily give up this form of control, leading to some accidents that would have been preventable if a person were driving. It is not surprising that 56% of respondents to a 2017 Pew survey said they would not want to ride in a self-driving car, and 72% of those cited safety concerns and lack of trust as their reason. The technology in self-driving cars requires hundreds of machine learning, computer vision, and robotics experts to develop. However, trust, not technology, will be the primary factor in whether self-driving cars become the crown jewel of an automated Second Industrial Revolution or go the way of Google Glass.

The Question Concerning Technology (in the Statistics classroom)

In the past thirty years, cheap, ubiquitous computing power has allowed the field of statistics to address a wide variety of questions that would previously have been impractical. Any quantity lacking a closed-form expression would have been virtually impossible to calculate by hand, and problems involving large numbers of coefficients would have been unthinkable to solve. But computing has also made it easier than ever for someone with little statistical understanding to open a statistical package, treat a procedure as a black box, and obtain a p-value without understanding the assumptions built into that procedure. How should the field of statistics teach technology in a way that addresses both the growing importance of computers and the dangers inherent in using them blindly?
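To make the danger concrete, here is a minimal simulation (my own illustration, not from the essay) of one classic black-box failure: running Student's pooled two-sample t-test when its equal-variance assumption is badly violated. With unequal group sizes and the smaller group having the larger spread, the pooled test's false positive rate climbs far above the nominal 5%, while Welch's unpooled variant stays close to it. All names and parameter choices below are illustrative.

```python
# Sketch: a "black box" t-test misleads when its equal-variance
# assumption is violated. Uses only the Python standard library.
import random, math

random.seed(0)

def t_stats(x, y):
    """Return (pooled t, Welch t) for two samples."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    v1 = sum((a - m1) ** 2 for a in x) / (n1 - 1)
    v2 = sum((b - m2) ** 2 for b in y) / (n2 - 1)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)  # pooled variance
    t_pooled = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    t_welch = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
    return t_pooled, t_welch

# The null hypothesis is TRUE (both means are 0), but the variances
# and sample sizes differ, which the pooled test silently assumes away.
sims, crit = 2000, 1.96  # nominal two-sided 5% level (normal approximation)
reject_pooled = reject_welch = 0
for _ in range(sims):
    x = [random.gauss(0, 1) for _ in range(50)]  # large group, small spread
    y = [random.gauss(0, 3) for _ in range(10)]  # small group, large spread
    tp, tw = t_stats(x, y)
    reject_pooled += abs(tp) > crit
    reject_welch += abs(tw) > crit

# The pooled rate lands far above the nominal 0.05; Welch's stays near it.
print(f"pooled-test false positive rate: {reject_pooled / sims:.2f}")
print(f"Welch-test false positive rate:  {reject_welch / sims:.2f}")
```

A student who calls the pooled test by default would report "significant" differences here more than a quarter of the time on pure noise, exactly the kind of silent assumption failure the paragraph above warns about.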