Prioritizing Trust in AI


Society’s reliance on artificial intelligence (AI) and machine learning (ML) applications continues to grow, redefining how information is consumed. From AI-powered chatbots to information syntheses produced by Large Language Models (LLMs), society has access to more information and deeper insights than ever before. Nonetheless, as technology firms race to implement AI across their value chains, a critical question looms: can we actually trust the outputs of AI solutions?

Can we actually trust AI outputs without uncertainty quantification?

For a given input, a model might have generated many other equally plausible outputs. This could be the result of insufficient training data, variations within the training data, or other causes. Uncertainty quantification is the process of estimating what those other outputs might have been. When deploying models, organizations can leverage uncertainty quantification to give their end users a clearer understanding of how much they should trust the output of an AI/ML model.

Imagine a model predicting tomorrow’s peak temperature. The model might generate the output 21 °C, but uncertainty quantification applied to that output might indicate that the model could just as well have generated the outputs 12 °C, 15 °C, or 16 °C; knowing this, how much can we now trust the single prediction of 21 °C? Despite its potential to engender trust or to counsel caution, many organizations choose to skip uncertainty quantification because of the additional work required to implement it, as well as its demands on computing resources and inference speed.
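To make this concrete, here is a minimal sketch of one common way to obtain such an uncertainty estimate: training an ensemble of models on bootstrap resamples of the data and reading the spread of their outputs as the uncertainty around the point prediction. The data, model choice, and ensemble size below are hypothetical placeholders, not a prescription.

```python
# Minimal sketch: uncertainty quantification via a bootstrap ensemble.
# The training data and model choice here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical historical weather features and next-day temperatures (°C).
X = rng.normal(size=(500, 4))
y = 15 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=2.0, size=500)

# Train several models, each on a bootstrap resample of the data.
ensemble = []
for seed in range(20):
    idx = rng.integers(0, len(X), size=len(X))
    model = GradientBoostingRegressor(random_state=seed).fit(X[idx], y[idx])
    ensemble.append(model)

# For a new input, the spread of the ensemble's outputs estimates the
# other outputs the model "might have generated".
x_new = rng.normal(size=(1, 4))
preds = np.array([m.predict(x_new)[0] for m in ensemble])
print(f"prediction: {preds.mean():.1f} °C ± {preds.std():.1f} °C")
print(f"plausible alternatives (5th/50th/95th pct): {np.percentile(preds, [5, 50, 95]).round(1)}")
```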

Human-in-the-loop systems, such as medical diagnosis and prognosis systems, involve humans as part of the decision-making process. By blindly trusting the outputs of healthcare AI/ML solutions, healthcare professionals risk misdiagnosing a patient, potentially resulting in sub-par health outcomes, or worse. Uncertainty quantification can allow healthcare professionals to see, quantitatively, when they can place more trust in the outputs of AI and when they should treat specific predictions with caution. Similarly, in a fully automated system such as a self-driving car, the output of a model estimating the distance to an obstacle could lead to a crash that might otherwise have been avoided in the presence of uncertainty quantification on the distance estimate.
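The sketch below shows one hypothetical way to act on an uncertainty estimate in a human-in-the-loop setting: accept the model’s output when an ensemble agrees tightly, and defer to a human otherwise. The threshold is an illustrative assumption, not a clinical standard.

```python
# Minimal sketch: a deferral rule for a human-in-the-loop system.
# `preds` is an ensemble's set of outputs for one case, as in the sketch
# above; the threshold is an illustrative choice, not a validated standard.
import numpy as np

def triage(preds: np.ndarray, max_std: float = 1.5) -> str:
    """Accept the model's output only when the ensemble agrees tightly."""
    if preds.std() <= max_std:
        return f"auto-accept: {preds.mean():.1f}"
    return "defer to human review"

print(triage(np.array([20.9, 21.1, 21.0, 20.8])))  # tight spread -> auto-accept
print(triage(np.array([12.0, 15.0, 16.0, 21.0])))  # wide spread  -> defer
```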

The challenge of leveraging Monte Carlo methods to build trust in AI/ML models

Monte Carlo methods, developed during the Manhattan Project, are a powerful way to perform uncertainty quantification. They involve re-running algorithms repeatedly with slightly different inputs until further iterations no longer provide much more information in the outputs; when the process reaches such a state, it is said to have converged. One drawback of Monte Carlo methods is that they are typically slow and compute-intensive, requiring many repetitions of their constituent computations to obtain a converged output, and they have an inherent variability across those outputs. Because Monte Carlo methods use the outputs of random number generators as one of their key building blocks, even when you run a Monte Carlo analysis with many internal repetitions, the results you obtain will change when you repeat the process with identical parameters.
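The short sketch below makes both properties visible: the estimate settles down as the number of repetitions grows, yet two runs with identical parameters but different random draws still disagree slightly. The function being analyzed is an arbitrary illustrative choice.

```python
# Minimal sketch: Monte Carlo uncertainty propagation, showing both
# convergence and run-to-run variability. The model function is an
# arbitrary illustrative choice.
import numpy as np

def estimate(n_samples: int, seed: int) -> float:
    rng = np.random.default_rng(seed)
    # Slightly different inputs on every repetition: perturb a nominal
    # input with the measurement noise we believe it carries.
    x = rng.normal(loc=1.0, scale=0.1, size=n_samples)
    return float(np.mean(np.exp(-x) * np.sin(3 * x)))

# Convergence: more repetitions, diminishing change in the estimate.
for n in (100, 10_000, 1_000_000):
    print(n, estimate(n, seed=1))

# Variability: identical parameters, different random draws, different results.
print(estimate(10_000, seed=1), estimate(10_000, seed=2))
```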

The path forward to trustworthiness in AI/ML models

Unlike traditional servers and AI-specific accelerators, a new breed of computing platforms is being developed to directly process empirical probability distributions in the same way that traditional computing platforms process integers and floating-point values. By deploying their AI models on these platforms, organizations can automate the implementation of uncertainty quantification on their pre-trained models and can even speed up other kinds of computing tasks that have traditionally used Monte Carlo methods, such as value-at-risk (VaR) calculations in finance. In particular, for the VaR scenario, this new breed of platforms allows organizations to work with empirical distributions built directly from real market data, rather than approximating those distributions with samples generated by random number generators, for more accurate analyses and faster results.
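The contrast is easy to see even in conventional software. The sketch below, which uses synthetic heavy-tailed returns as a stand-in for real market data, reads the 99% VaR straight off the empirical distribution and compares it with a Monte Carlo estimate drawn from a fitted normal model; the parametric approximation understates the tail.

```python
# Minimal sketch: value-at-risk (VaR) from an empirical distribution of
# daily returns, contrasted with a Monte Carlo approximation that fits a
# parametric model and samples from it. The returns are synthetic
# stand-ins for real market data.
import numpy as np

rng = np.random.default_rng(42)
returns = rng.standard_t(df=4, size=2_000) * 0.01  # heavy-tailed "market" data

# Empirical approach: read the 1% quantile straight from the data.
var_empirical = -np.quantile(returns, 0.01)

# Monte Carlo approach: fit a normal model, then sample from it.
mu, sigma = returns.mean(), returns.std()
simulated = rng.normal(mu, sigma, size=100_000)
var_monte_carlo = -np.quantile(simulated, 0.01)

print(f"99% one-day VaR, empirical:   {var_empirical:.4f}")
print(f"99% one-day VaR, Monte Carlo: {var_monte_carlo:.4f}")
# The normal approximation understates tail risk relative to the
# empirical distribution when returns are heavy-tailed.
```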

Recent breakthroughs in computing have significantly lowered the barriers to uncertainty quantification. A recent research article published by my colleagues and me, in the Machine Learning with New Compute Paradigms workshop at NeurIPS 2024, highlights how a next-generation computation platform we developed enabled an uncertainty quantification evaluation to run over 100-fold faster compared to traditional Monte-Carlo-based analyses on a high-end Intel-Xeon-based server. Advances such as these allow organizations deploying AI solutions to implement uncertainty quantification with ease and to run it with low overheads.

The future of AI/ML trustworthiness depends on advanced next-generation computation

As organizations integrate more AI solutions into society, trustworthiness in AI/ML will become a top priority. Enterprises can no longer afford to skip implementing facilities in their AI model deployments that let consumers know when to treat specific AI model outputs with skepticism. The demand for such explainability and uncertainty quantification is clear, with roughly three in four people indicating they would be more willing to trust an AI system if appropriate assurance mechanisms were in place.

New computing technologies are making it ever easier to implement and deploy uncertainty quantification. While industry and regulatory bodies grapple with other challenges related to deploying AI in society, there is at least an opportunity to engender the trust humans require by making uncertainty quantification the norm in AI deployments.
