Despite rapid advances, autonomous vehicle technologies still have weaknesses. A new training model from MIT and Microsoft, however, could help identify the AI blind spots of autonomous vehicles.
To date, public skepticism remains one of the perennial issues in driverless car manufacturing. A rise in the number of reported crashes has made it even harder for autonomous carmakers such as Uber and Tesla to earn people's trust.
People fear that driverless vehicles are not safe enough, which pushes automakers to try to identify every possible case that programmers and developers might not have anticipated. This is where MIT and Microsoft's new research comes into play.
New AI Blind Spots Model
In their research papers, the joint team described a model that uses human input to identify "blind spots" in AI systems.
“The model helps autonomous systems better know what they don’t know,” Ramya Ramakrishnan, first author of the study, said.
The researchers reportedly trained an AI system while a human closely monitored its actions in real time, providing feedback whenever the system made a mistake.
The team combined that AI system's training data with the feedback data provided by the human monitor. They then used machine-learning techniques to produce a model that identifies AI blind spots in need of further attention or correction.
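To make the idea concrete, here is a minimal sketch of that aggregation step. It is not the paper's actual machine-learning method, just an illustration: each entry pairs a situation the agent encountered with a flag saying whether the human monitor signalled an error there, and situations corrected often enough are flagged as candidate blind spots. The state names and threshold are invented for the example.

```python
from collections import defaultdict

def identify_blind_spots(feedback, threshold=0.5):
    """Flag states where the human corrected the agent often.

    feedback: list of (state, was_mistake) pairs collected while a
    human monitored the agent in real time; was_mistake is True when
    the human signalled an error in that state. (Illustrative
    aggregation only, not the authors' exact model.)
    """
    errors = defaultdict(int)
    visits = defaultdict(int)
    for state, was_mistake in feedback:
        visits[state] += 1
        if was_mistake:
            errors[state] += 1
    # A state is a candidate blind spot if the observed mistake
    # rate meets the threshold.
    return {s for s in visits if errors[s] / visits[s] >= threshold}

# Hypothetical log: the agent does fine on a clear road but is
# frequently corrected when an ambulance is nearby.
log = [("clear_road", False), ("clear_road", False),
       ("ambulance_nearby", True), ("ambulance_nearby", True),
       ("ambulance_nearby", False)]
print(identify_blind_spots(log))  # -> {'ambulance_nearby'}
```

A real system would have to generalize across similar-looking states rather than count exact matches, which is where the machine-learning component of the published work comes in.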
“Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way so that we can reduce some of those errors,” Ramakrishnan went on to say.
The researchers believe that, applied in the real world, their model could help autonomous vehicles and robots act more cautiously. For instance, if an AI assistant determines that an action falls in a blind spot, it could query a human for the most acceptable response to the situation.
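That deferral behavior can be sketched as a thin wrapper around the agent's normal policy: in flagged states, ask a human; everywhere else, act autonomously. The policy, the human oracle, and the state names below are placeholders for illustration, not part of the published system.

```python
def cautious_act(state, policy, blind_spots, ask_human):
    """Defer to a human in flagged states; otherwise act autonomously."""
    if state in blind_spots:
        return ask_human(state)  # fall back to human judgment
    return policy(state)

# Hypothetical policy and human responses for the example.
policy = lambda state: "proceed"
ask_human = lambda state: "yield"
blind_spots = {"ambulance_nearby"}

print(cautious_act("clear_road", policy, blind_spots, ask_human))        # -> proceed
print(cautious_act("ambulance_nearby", policy, blind_spots, ask_human))  # -> yield
```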