Ulster University AI researchers take a step toward self-aware computers

  • Researchers at Ulster University have developed a novel computational model of decision-making uncertainty, a potential step toward the development of self-aware systems.

    The past few years have seen significant advances in AI technology, with machine learning allowing us to train systems to make some pretty complex decisions across a variety of industries and applications. These systems aren't self-aware, though, and have no knowledge of how their own decision process works; they're simply decision-making tools trained on a set of data and problems.

    Researchers in the Intelligent Systems Research Centre (ISRC) at Ulster University’s Magee campus believe they've now discovered how real brains may evaluate decisions. They've developed the world’s first biologically motivated computational model designed to quantify how confident an AI system is in its decisions and when it might change its mind.

    This idea could be a major step toward creating self-aware machines by allowing an AI to reflect on its own decisions and evaluate its decision-making processes. The work, published in the journal Nature Communications, shows for the first time that a neural network model can be made self-aware of its own actions and choices. Having a neural-network-based model of change-of-mind and error-correction behaviour could also help us understand metacognitive disorders underlying conditions such as OCD and addiction.
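    The general idea of reading out decision confidence and change-of-mind from competing neural activity can be illustrated with a toy two-accumulator race model. This is a minimal sketch of the technique in general, not the authors' published circuit model; all function names, parameters, and the confidence read-out are our own illustrative assumptions:

    ```python
    import random
    import math

    def simulate_decision(drift=0.2, noise=0.5, steps=200, post_steps=50, seed=0):
        """Toy two-accumulator race model (illustrative only, not the Ulster model).

        Two units accumulate noisy evidence; whichever leads at the decision
        point is the initial choice. Confidence is read out from the activity
        gap via a logistic squash, and a 'change of mind' is flagged if
        continued post-decision accumulation flips the leader.
        """
        rng = random.Random(seed)
        a = b = 0.0
        for _ in range(steps):
            a += drift + rng.gauss(0, noise)   # option A: positive evidence drift
            b += rng.gauss(0, noise)           # option B: zero drift (noise only)
        initial_choice = 0 if a > b else 1
        # Larger activity gap -> higher confidence, squashed into (0.5, 1).
        confidence = 1 / (1 + math.exp(-abs(a - b)))
        for _ in range(post_steps):            # evidence keeps arriving after the choice
            a += drift + rng.gauss(0, noise)
            b += rng.gauss(0, noise)
        final_choice = 0 if a > b else 1
        return initial_choice, confidence, final_choice != initial_choice

    choice, conf, changed_mind = simulate_decision()
    ```

    In models of this family, low-confidence trials (a small activity gap at decision time) are exactly the ones where late-arriving evidence is most likely to flip the choice, which is the link between uncertainty and change-of-mind the article describes.
    
    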

    Senior paper author and researcher Dr KongFatt Wong-Lin commented on the work: "Our research has revealed the plausible brain circuit mechanisms underlying how we calculate decision uncertainty, which could in turn influence or bias our actions, such as change-of-mind. Further, given the wide applications of artificial neural networks in A.I., we are perhaps closer than ever before to creating self-aware machines than we have previously thought. Real-time monitoring of decision confidence in artificial neural networks could also potentially allow better interpretability of the decisions and actions made by these algorithms, thereby leading to more responsible and trustworthy A.I.”

    Source: Ulster University

    About the author

    Brendan is a Sync NI writer with a special interest in the gaming sector, programming, emerging technology, and physics. To connect with Brendan, feel free to send him an email or follow him on Twitter.

    Got a news-related tip you’d like to see covered on Sync NI? Email the editorial team for our consideration.
