AI programs can be made smarter by enabling them to know when they should doubt themselves. That’s the premise of modifications that researchers at Uber and Google are making to two popular deep-learning frameworks so that they can handle probability.
The work reflects the realization that uncertainty is a key aspect of human reasoning and intelligence. Adding it to AI programs could make them smarter and less prone to blunders.
It’s really important, for example, for a self-driving car to know its level of uncertainty; otherwise it can easily get into a situation in which it makes a fatal error, says Dustin Tran, a researcher working on the problem at Google.
A deep-learning program that’s able to handle probability would be able to recognize objects with a reasonable degree of certainty from just a few examples rather than many thousands. Offering a measure of certainty rather than a yes-or-no answer should also help with engineering complex systems.
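To make the idea of "a measure of certainty rather than a yes-or-no answer" concrete, here is a minimal, hypothetical sketch of one common technique for this: running a model several times with random parts switched off (Monte Carlo dropout) and treating the spread of the answers as the model's uncertainty. The tiny linear "model," its weights, and the function name are all illustrative assumptions, not code from Uber's or Google's actual libraries.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_predict(x, n_passes=200, drop_prob=0.5):
    """Hypothetical stochastic classifier: a tiny fixed linear model
    whose inputs are randomly dropped on every forward pass
    (Monte Carlo dropout style). Each pass yields class probabilities."""
    # Fixed toy weights for a 3-feature, 2-class problem (illustrative only).
    w = np.array([[1.0, -1.0], [0.5, 0.5], [-1.0, 1.0]])
    probs = []
    for _ in range(n_passes):
        mask = rng.random(x.shape) > drop_prob  # randomly drop inputs
        logits = (x * mask) @ w
        e = np.exp(logits - logits.max())       # numerically stable softmax
        probs.append(e / e.sum())
    probs = np.array(probs)
    # The mean across passes is the prediction; the spread is the uncertainty.
    return probs.mean(axis=0), probs.std(axis=0)

mean, std = stochastic_predict(np.array([2.0, 0.1, -1.5]))
print("P(class):", mean.round(2), "+/-", std.round(2))
```

A deterministic network would print a single fixed probability; here the "+/-" term tells a downstream system, such as a self-driving car's planner, how much to trust the answer.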