Self-driving cars are the proclaimed future of transportation. For years, it seems, auto companies have been making bold predictions that fully autonomous vehicles are just a few years away. The arguments for these vehicles are numerous. First and most importantly, self-driving cars would supposedly be safer than their human-driven counterparts. There are 33,000 to 40,000 driving-related fatalities per year in the United States, and the vast majority of them are caused by human error. In addition to the argument for safety, there is also the argument for convenience. One wouldn’t have to stress about traffic on the morning commute. Traffic in general would decrease. People wouldn’t need to own a car and worry about oil changes, repairs, and filling up the gas tank; instead, they could use a ridesharing service for every trip. The companies researching and testing self-driving cars stand to cash in if the transportation revolution takes place as predicted.

There are, of course, arguments against the self-driving car revolution. The RAND Corporation published a study showing that the current test fleets cannot log enough miles to statistically prove that self-driving cars are safer than human-driven cars. Anecdotally, a man driving a Tesla with Autopilot engaged was involved in a fatal crash. Researchers from MIT have found that people would be hesitant to get into a self-driving car knowing that it may be programmed to sacrifice its occupants rather than passersby in certain circumstances.

Giving a self-driving car the power to decide how to minimize losses in the event of a fatal crash poses interesting moral dilemmas. Is the life of a 5-year-old more or less valuable than that of a 45-year-old or an 85-year-old? Should the car give favorable treatment to its occupants? What about to law-abiding citizens (e.g., if the impending crash was caused by people jaywalking)? A few years ago, I participated in MIT’s Moral Machine to judge these potential situations. I think a survey like this is a good solution to a programmer’s (or, more generally, a company’s) moral and ethical dilemma about how to act in these situations. If everyone riding in a self-driving car were required to first complete the survey, the aggregated data could then be used to determine how the car should act. In that way, it’s not a machine making the decision, but the very people who may be affected. This regulation would need to come from the government, but it may have the drawback of discouraging people from adopting self-driving vehicles by bringing to mind unlikely doom-and-gloom scenarios.
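
To make that idea concrete, here is a minimal sketch, in Python, of how aggregated survey responses could be turned into a simple policy. Everything in it is hypothetical: the scenario names, the answer options, and the majority-rule aggregation are my own illustration, not how Moral Machine data or any real vehicle is actually programmed.

```python
from collections import Counter

# Hypothetical illustration: turn individual survey answers into a
# majority-rule policy. Scenario and option names are invented for the sketch.

# Each response maps a crash scenario to the outcome that respondent preferred.
survey_responses = [
    {"occupants_vs_jaywalkers": "protect_occupants"},
    {"occupants_vs_jaywalkers": "protect_pedestrians"},
    {"occupants_vs_jaywalkers": "protect_occupants"},
]

def build_policy(responses):
    """For each scenario, pick the outcome chosen by the most respondents."""
    votes = {}
    for response in responses:
        for scenario, choice in response.items():
            votes.setdefault(scenario, Counter())[choice] += 1
    return {scenario: counts.most_common(1)[0][0]
            for scenario, counts in votes.items()}

policy = build_policy(survey_responses)
print(policy)  # {'occupants_vs_jaywalkers': 'protect_occupants'}
```

A majority vote over a handful of scripted scenarios would only be a starting point, of course; the point is simply that the car’s behavior would be derived from the answers of the people who may be affected rather than decided unilaterally by an engineer.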

Personally, I am enthusiastically looking forward to self-driving cars. I believe that they will be safer than human drivers and, once fully adopted, will allow for several other benefits as well. I’m not a big fan of driving for enjoyment’s sake (I still haven’t gotten around to getting my license); rather, I see it as a utility for getting from Point A to Point B. If a self-driving car can get me from A to B more quickly, more cheaply, more safely, and without requiring my active input, I will be very excited. Hopefully, that vision of the future will become reality within “just the next few years.”