
What Autonomous Car Engineers Can Learn From Philosophy

Autonomous cars are practically an inevitability. Now, engineers are starting to dive into the realm of ethics. | Image By temp-64GTX | Shutterstock

Autonomous car engineers now face some deep ethical issues, and a classic philosophical thought experiment might point the way.

The autonomous car and bus industry grows larger every day. In fact, you can even 3D print autonomous vehicles now.

Despite their widespread development, they aren’t without their issues.

America’s first “autonomous shuttle” got into an accident within a mere two hours of its inaugural deployment. Incidents like these are more than a botched product launch; they raise important philosophical questions. Autonomous car engineers could learn a thing or two from “The Good Place.”

Readers abroad might not be familiar with the popular show, but philosophy is a central facet of its storytelling. One of its best episodes came in season two, when the show took its turn at tackling the Trolley Problem.

How could this philosophical hallmark help autonomous car engineers develop better cars?

Human Moral Quagmires for Autonomous Machines

If you’re unfamiliar with the trolley problem, don’t worry — it’s easy to understand.

Basically, imagine you are operating a trolley. There are two tracks — one with five people on it and one with only one person on it. Unfortunately, you are on a one-way track to hit someone. The question is… which track do you choose?

Philippa Foot developed the original idea behind the moral quandary, but it raises questions that summon shades of philosophers past. You have to examine the situation from different points of view, such as utilitarianism versus stoicism.

In the case of the trolley problem, you have to ask yourself which matters more: what’s best for you or the good of all. So, that makes the answer a no-brainer, right?

You switch tracks to save five people instead of one…don’t you?

What if that one person was your best friend and the others were strangers?

This is, of course, an outrageous permutation that isn’t likely to happen. But in moral philosophy, it’s often a question of someone’s intent versus the outcome.

But autonomous car engineers can employ this to their advantage, too. In fact, MIT researchers already used it as early as 2016 in their “Moral Machine.”

However, they gathered data by having human users say which pedestrians to kill first.

Data Input Doesn’t Equal Autonomous Thought

A moral machine sounds fantastic on paper — especially from a data science standpoint. But the results revealed some odd preferences from around the world.

Basically, MIT researchers asked participants to judge what an autonomous vehicle should do in unavoidable crash scenarios.

Participants ranked which pedestrians the car should avoid and which it could hit. That’s right — MIT collected roughly 39.6 million of these life-and-death decisions from participants around the world.

The results yielded interesting (if troubling) statistics. Russian participants, for example, preferred sparing the “lawful” over sparing the young, while U.S. participants preferred sparing more people over sparing the lawful.

Compelling as this is, it doesn’t give self-driving cars decision-making skills. The AI simply responds to its programming, which now includes some odd, niche prioritization.
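To make that distinction concrete, here is a minimal, purely hypothetical sketch of what such hard-coded prioritization might look like. The category labels, weights, and function names are invented for illustration and do not come from any real vehicle’s software.

```python
# Hypothetical sketch only: hard-coded priority weights, not code from any real autonomous vehicle.
# The point is that the "decision" is just arithmetic over values a programmer baked in ahead of time.

PRIORITY_WEIGHTS = {
    "child": 1.0,
    "adult": 0.8,
    "lawful_pedestrian": 0.9,  # e.g., crossing with the light
    "jaywalker": 0.5,
}

def choose_path(paths):
    """Pick the path whose occupants carry the lowest total weight.

    `paths` maps a path name to a list of category labels for the people on it.
    """
    def total_weight(occupants):
        return sum(PRIORITY_WEIGHTS.get(kind, 0.7) for kind in occupants)

    # No moral reasoning happens here; the car just minimizes a precomputed number.
    return min(paths, key=lambda name: total_weight(paths[name]))

print(choose_path({
    "stay":   ["adult", "adult", "jaywalker"],
    "swerve": ["child"],
}))  # prints "swerve" under these made-up weights
```

Whatever survey data you pour into those weights, the logic never changes at runtime, which is exactly why this counts as programming rather than thought.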

This limitation is a big part of why some believe researchers need to “start over” with AI altogether.

Despite this issue, autonomous car engineers can still use the data from the Moral Machine. The results can help them anticipate how the public might react to a particular kind of incident.

However, for the most part, the trolley problem isn’t a pressing issue right now.

Why Autonomous Car Engineers Don’t Need to Worry

Karl Iagnemma, president of Aptiv Automated Mobility, told WIRED why he thought the trolley problem solution could wait.

“First, because it’s not clear what the right solution is, or if a solution even exists. And second, because the incidence of events like this is vanishingly small and driverless cars should make them even less likely without a human behind the wheel.”

Furthermore, most autonomous vehicles don’t have moral decision-making skills. Even AIs like Sophia remain shackled to their user inputs and code.

Until we create true AI, most autonomous creations will just perform their coded functions.

Despite this, Iagnemma and others want to consider the long-term implications of automation on these questions. For instance, autonomous cars may start to adapt to their users’ driving preferences. They could even adapt to the local culture.
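As a purely speculative illustration of that kind of adaptation (every field, value, and function name below is invented, not taken from any real system), a car might keep a small driving-style profile and slowly nudge it toward its owner’s observed habits:

```python
# Speculative example: a driving-style profile an autonomous car might adapt over time.
# All parameters and values here are invented for illustration.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DrivingProfile:
    following_distance_s: float = 2.0  # seconds of headway kept to the car ahead
    max_comfort_decel: float = 3.0     # m/s^2, how hard braking is allowed to feel
    assertiveness: float = 0.5         # 0 = always yields, 1 = merges aggressively

def adapt_to_driver(profile: DrivingProfile, observed_headway_s: float) -> DrivingProfile:
    """Slowly drift the headway setting toward the owner's observed habit."""
    blended = 0.9 * profile.following_distance_s + 0.1 * observed_headway_s
    return replace(profile, following_distance_s=blended)

# Example: an owner who habitually leaves only 1.5 s of headway
profile = DrivingProfile()
for _ in range(20):
    profile = adapt_to_driver(profile, observed_headway_s=1.5)
print(round(profile.following_distance_s, 2))  # drifts from 2.0 toward 1.5
```

A regional default, say a slightly more assertive profile where the local driving culture expects it, could be layered in the same way, which is why these long-term questions eventually circle back to ethics.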

Perhaps, in time, moral questions such as the trolley problem will be more relevant.

How else can AI developers and autonomous car engineers use philosophy to mitigate potential future moral questions about their tech?

Juliet Childers

Content Specialist and EDGY OG with a (mostly) healthy obsession with video games. She covers industry buzz including VR/AR, content marketing, cybersecurity, AI, and much more.
