Self-driving cars are still prone to mistakes, in part because AI training can only account for so many situations. Microsoft and MIT might just fill in those gaps in knowledge: researchers from the two have developed a model that can catch these virtual “blind spots.” The system helps identify lapses in the artificial intelligence that guides autonomous cars and robots. These lapses, referred to as “blind spots,” occur when what the AI learned from its training examples differs significantly from what a human would do in the same situation.

The model would also work with real-time corrections. If the AI stepped out of line, a human driver could take over and indicate that something went wrong.
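As a rough illustration, the sketch below shows one way such corrections could be logged. The `CorrectionLog` class and its method names are hypothetical, not the researchers' actual interface.

```python
# Hypothetical sketch of logging real-time corrections. The class and
# method names are illustrative assumptions, not the researchers' API.
from dataclasses import dataclass, field
from typing import Hashable, List, Tuple

@dataclass
class CorrectionLog:
    """Accumulates (state, human_flagged_error) feedback records."""
    records: List[Tuple[Hashable, bool]] = field(default_factory=list)

    def human_takeover(self, state: Hashable) -> None:
        # The driver intervened, so mark this state as a likely mistake.
        self.records.append((state, True))

    def no_intervention(self, state: Hashable) -> None:
        # The driver let the AI's action stand: implicit approval.
        self.records.append((state, False))
```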

AI Blind Spots

The researchers say the approach has the AI compare a human's actions in a given situation to what it would have done itself. If a self-driving car doesn't know how to pull over when an ambulance is racing down the road, it could learn by watching a flesh-and-blood driver move to the side of the road.
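Here is a minimal sketch of that comparison step, assuming a simple `ai_policy` function and `(state, action)` demonstration pairs; both are illustrative names, not the paper's actual interfaces.

```python
# Hypothetical sketch of comparing human demonstrations against the
# AI's own policy. `ai_policy`, `State`, and `Action` are assumed
# names for illustration, not the paper's actual interfaces.
from typing import Callable, Hashable, List, Tuple

State = Hashable
Action = str

def find_candidate_blind_spots(
    demonstrations: List[Tuple[State, Action]],
    ai_policy: Callable[[State], Action],
) -> List[State]:
    """Return states where the human's action differs from what the
    AI would have done, e.g. pulling over for an ambulance while the
    AI would have kept driving."""
    return [
        state
        for state, human_action in demonstrations
        if ai_policy(state) != human_action
    ]
```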

The researchers also have a way to prevent the driverless vehicle from becoming overconfident and marking every instance of a given response as safe. The machine learning algorithm not only identifies acceptable and unacceptable responses, but also uses probability calculations to weigh them accordingly. Even if an action is right 90 percent of the time, the system might still see a weakness that it needs to address.
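As a loose illustration of that idea, the sketch below estimates a per-state blind-spot probability from raw feedback counts. The simple frequency estimate is an assumption made here for clarity; the actual model aggregates noisy human feedback with a more sophisticated method.

```python
# Minimal sketch of the anti-overconfidence idea: keep a probability
# that a state is a blind spot rather than a hard safe/unsafe label.
# The simple frequency estimate here is an assumption; the published
# model uses a more sophisticated label-aggregation method.
from collections import Counter
from typing import Dict, Hashable, List, Tuple

def blind_spot_probabilities(
    feedback: List[Tuple[Hashable, bool]],  # (state, human_flagged_error)
) -> Dict[Hashable, float]:
    """Estimate how likely each state is to be a blind spot from how
    often humans flagged the AI's behavior there."""
    flags: Counter = Counter()
    totals: Counter = Counter()
    for state, flagged in feedback:
        totals[state] += 1
        flags[state] += int(flagged)
    return {state: flags[state] / totals[state] for state in totals}

# A state where the AI is right 90 percent of the time still carries a
# 0.1 blind-spot probability instead of being marked simply "safe".
probs = blind_spot_probabilities(
    [("ambulance_nearby", False)] * 9 + [("ambulance_nearby", True)]
)
assert abs(probs["ambulance_nearby"] - 0.1) < 1e-9
```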

This technology isn’t ready for the field yet. So far, the researchers have only tested the model with video games, where parameters are limited and conditions are relatively ideal. Microsoft and MIT still plan to test it with real cars. If it works, though, it could go a long way toward making self-driving cars practical.
