Every driver has been there. You're approaching an intersection when someone steps into the street, and you have to slam on the brakes.
Self-driving cars are becoming more popular, but how would these automated vehicles know when to slam on the brakes?
What is artificial intelligence, and how is it used to make decisions for driverless cars? Do our human biases transfer over to AI systems?
Keep reading to learn what AI is, how it is used in automated cars, and how it can be affected by bias.
What Is AI?
AI stands for artificial intelligence and is a subfield of computer science that seeks to develop computers that can think. Classically, there are four approaches to developing AI: thinking humanly, thinking rationally, acting humanly, and acting rationally.
Today's artificial intelligence is far from computers that can think, or even act, the way humans do. Instead, it uses large amounts of data to make best-guess decisions.
For instance, when given the task of determining someone's profession from a picture, an AI system will use millions of pictures of people in a wide variety of professions to determine which one most likely corresponds to the picture.
In addition to using very large data sets, AI systems typically learn from their own decisions and mistakes. Whenever the AI makes a decision, it adds that decision to the data set, which gradually improves the accuracy of future decisions.
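To make this concrete, here is a toy sketch of the idea in Python. It is not how any production AI system actually works; the data, labels, and the simple nearest-neighbor "best guess" are all invented for illustration.

```python
def nearest_neighbor(examples, point):
    """Best-guess a label by finding the most similar known example."""
    def distance(a, b):
        # Squared distance between two feature tuples
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(examples, key=lambda ex: distance(ex[0], point))
    return label

# Labeled data: (features, label). In a real system the features might
# encode pixels or sensor readings and the data set would be enormous.
examples = [
    ((0.0, 0.0), "stop"),
    ((1.0, 1.0), "go"),
]

# The system makes its best guess for a new situation...
guess = nearest_neighbor(examples, (0.9, 0.8))  # -> "go"

# ...and once the correct answer is confirmed, the example is added to
# the data set, gradually improving future guesses.
examples.append(((0.9, 0.8), "go"))
```

The point of the sketch is the loop at the bottom: every decision feeds back into the data the system learns from, which is both how accuracy improves and, as discussed below, how bias in the data persists.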
How Do Automated Vehicles Use AI?
Many companies are currently developing automated vehicles, and many more have developed semi-automated ones. Semi-automated cars are commercially available and widely used.
These cars still require a human driver but add computerized safety features such as automatic braking and lane-departure sensors. They can make driving safer while still leaving the basic decision-making to the human.
Completely automated vehicles, on the other hand, use only an artificial intelligence system to make all of the decisions while driving. Sometimes called self-driving cars, they are currently being developed and tested.
Even though developers are making headway, many people still have reservations about driverless cars, and there are many hurdles to overcome, including racial bias within the AI systems themselves.
Transfer of Human Bias
Even though AI systems can't think the way humans do, they are subject to the same biases we are. Because AI systems learn from data produced by humans, any bias in that data can show up in the systems' decisions.
For instance, an automated vehicle has to decide when to brake for pedestrians at a crosswalk. An AI system has no innate way of telling that someone is trying to cross the street; instead, it learns from data recorded from human drivers. If those drivers showed racial bias when deciding whether to stop for a pedestrian, the AI system will show it too.
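A toy example makes the transfer mechanism visible. The data below is entirely hypothetical, and no real vehicle uses a simple majority-vote rule like this; the sketch only shows that a model which imitates human decisions also imitates any bias in them.

```python
# Hypothetical training records: (pedestrian_group, driver_stopped).
# The groups "A" and "B" and the stop rates are invented for illustration.
training_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learned_stop_rate(data, group):
    """Fraction of the time the recorded human drivers stopped for this group."""
    decisions = [stopped for g, stopped in data if g == group]
    return sum(decisions) / len(decisions)

def model_decision(data, group):
    """Brake whenever the humans in the training data usually braked."""
    return learned_stop_rate(data, group) >= 0.5

print(model_decision(training_data, "A"))  # True  -> brakes
print(model_decision(training_data, "B"))  # False -> fails to brake
```

Nothing in the code mentions bias, yet the model stops for one group and not the other, purely because the human data it learned from did the same.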
A study done by the National Institute for Transportation and Communities shows exactly this bias. It found that Black pedestrians waiting at a crosswalk were passed by nearly twice as many drivers as white pedestrians.
Whether this bias is intentional or unconscious, it will still carry over into an AI system when that system makes decisions. This sort of bias can lead to more traffic accidents involving automated vehicles, and might leave you needing a tow truck.
Now that you know what artificial intelligence is, how automated vehicles use it, and how bias affects it, feel free to do some more research on your own. There is still plenty to learn about self-driving cars.
If you liked this article, please leave a comment or check out some of our other articles!