Six Criteria for the Reliability of AI

Language models based on artificial intelligence (AI) can answer any question, but not always correctly. It would be helpful for users to know how reliable an AI system is. A team at Ruhr University Bochum and TU Dortmund University suggests six dimensions that determine the trustworthiness of a system, regardless of whether the system is made up of individuals, institutions, conventional machines, or AI. Dr. Carina Newen and Professor Emmanuel Müller from TU Dortmund University, alongside the philosopher Professor Albert Newen from Ruhr University Bochum, describe the concept in the international philosophical journal Topoi, published online on November 14, 2025.

Six dimensions of reliability

Whether a specific AI system is reliable is not a yes-or-no question. The authors suggest assessing to what degree each of six criteria applies to a system in order to create a profile of its reliability. These dimensions are:

  1. Objective functionality: How well does the system perform its core task, and is that quality assessed and guaranteed?
  2. Transparency: How transparent are the system’s processes?
  3. Uncertainty quantification/Uncertainty of underlying data and models: How reliable are the data and models, and how secure are they against misuse?
  4. Embodiment: To what extent is the system physical or virtual?
  5. Immediacy behaviors: To what extent does the user communicate directly with the system?
  6. Commitment: To what extent can the system have an obligation to the user?
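The point of the six criteria is that reliability is a multidimensional profile rather than a single verdict. As a minimal illustration of that idea (not taken from the paper), such a profile could be represented as a score per dimension; the dimension names follow the list above, while the scores, the 0-to-1 scale, and the deficit threshold are purely hypothetical:

```python
# Hypothetical sketch: a reliability profile as one score per dimension.
# Dimension names follow the article; scores and threshold are invented.
DIMENSIONS = [
    "objective functionality",
    "transparency",
    "uncertainty quantification",
    "embodiment",
    "immediacy behaviors",
    "commitment",
]

def reliability_profile(scores, threshold=0.5):
    """Pair each dimension with a score in [0, 1] and flag deficits."""
    if len(scores) != len(DIMENSIONS):
        raise ValueError("expected one score per dimension")
    profile = dict(zip(DIMENSIONS, scores))
    deficits = [dim for dim, s in profile.items() if s < threshold]
    return profile, deficits

# Invented example scores for a chatbot-style system: strong on its core
# task, weak on transparency, embodiment, and commitment.
profile, deficits = reliability_profile([0.8, 0.2, 0.3, 0.1, 0.7, 0.1])
```

A profile like this makes the authors' observation concrete: a system can score well on one dimension (here, objective functionality) while showing deficits on several others at the same time.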

“These criteria can illustrate that current AI systems, such as ChatGPT or self-driving cars, usually exhibit severe deficits in most dimensions,” says the team from Bochum and Dortmund. “At the same time, the profile shows where improvement is needed if AI systems are to achieve a sufficient level of reliability.”

Central dimensions from a technical perspective

From a technical standpoint, the dimensions of transparency and uncertainty quantification of underlying data and models are crucial, as they concern principal deficits of AI systems. “Deep learning achieves incredible things with large quantities of data. In chess, for example, AI systems are superior to any human,” explains Müller. “But the underlying processes are a black box to us, which has resulted in a key lack of trust up to this point.”

The uncertainty of data and models raises a similar problem. “Companies are already using AI systems to pre-sort applications,” says Carina Newen. “The data used to train the AI contain biases that the AI system then perpetuates.”

Central dimensions from a philosophical perspective

Discussing the philosophical perspective, the team uses ChatGPT as an example: the system generates an intelligent-sounding answer to every question and prompt, but can still hallucinate. “The AI system invents information without making that clear,” emphasizes Albert Newen. “AI systems can and will be helpful as information systems, but we have to learn to always use them with a critical eye and not trust them blindly.”

However, Albert Newen considers the development of chatbots as a replacement for human communication to be questionable. “Forming interpersonal trust with a chatbot is dangerous, because the system has no obligation to the user who trusts it,” he says. “It doesn’t make sense to expect the chatbot to keep promises.”

Examining the reliability profile across the various dimensions can help us understand the extent to which humans can trust AI systems as information experts, say the authors. It also helps explain why a critical, practised understanding of these systems will be increasingly required.

Collaboration in the Ruhr Innovation Lab

Ruhr University Bochum and TU Dortmund University, which are currently applying jointly as the Ruhr Innovation Lab in the Excellence Strategy, work closely together on issues that help to develop a sustainable and resilient society in the digital age. The current publication stems from a partnership between the Institute of Philosophy II in Bochum and the Research Center Trustworthy Data Science and Security. The Center was founded by the two universities together with the University of Duisburg-Essen within the University Alliance Ruhr. The author Carina Newen was the first doctoral student to receive a doctorate from the Research Center.

Carina Newen, Emmanuel Müller, Albert Newen: Trust and Uncertainties: Characterizing Trustworthy AI Systems Within a Multidimensional Theory of Trust, in: Topoi, 2025, DOI: 10.1007/s11245-025-10287-0, https://link.springer.com/article/10.1007/s11245-025-10287-0
Regions: Europe, Germany
Keywords: Humanities, Philosophy & ethics, Applied science, Artificial Intelligence

