The Hidden Logic Behind AI’s Judgments of People


A new study shows that modern AI systems don't just process information; they systematically "judge" people in ways that resemble human trust, but with important differences. Like humans, they favor competence and integrity, yet they do so in a more rigid, rule-based, and often more extreme way. Crucially, their judgments can also be more consistently biased across demographic traits, and they vary significantly between models. The bottom line: AI can mimic the structure of human judgment, but it does not think like humans, and that gap matters when these systems are used to make real decisions about people.

In a world where artificial intelligence is quietly shaping who gets hired, who receives loans, and even how medical decisions are made, a new question is emerging: How does AI judge us?

A new study by Prof. Yaniv Dover and Valeria Lerman from Hebrew University suggests the answer is both reassuring and deeply unsettling.

Drawing on more than 43,000 simulated decisions alongside around a thousand human participants, the research reveals that today’s most advanced AI systems, including models similar to ChatGPT and Google’s Gemini, do not simply process information. They make judgments about people. And in doing so, they appear to form something that looks a lot like “trust.”

But this machine "trust" doesn't work quite like ours.

The study placed both humans and AI in familiar situations: deciding how much money to lend a small business owner, whether to trust a babysitter, how to rate a boss, or how much to donate to a nonprofit founder.

Across these scenarios, a clear pattern emerged.

Both humans and AI favored people who seemed competent, honest, and well-intentioned. In other words, the machines appeared to grasp the basic ingredients of trust (competence, integrity, and benevolence), much like we do.

“That’s the good news,” says Prof. Yaniv Dover. “AI is not making random decisions. It captures something real about how humans evaluate one another.”

But the resemblance stops there. Look closer, and the differences become striking.

When asking "Is this a good person?", humans tend to form a general impression, blending multiple traits into a single, intuitive, and holistic judgment.

AI does something very different.

It breaks people down into components, scoring competence, integrity, and kindness almost like separate columns in a spreadsheet. The result is a more rigid, "by-the-book" style of judgment: consistent, but less human.

"People in our study are messy and holistic in how they judge others," explains Valeria Lerman. "AI is cleaner and more systematic, and that can lead to very different outcomes."
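To make the contrast concrete, here is a minimal, purely illustrative sketch (not taken from the study; the function names, weights, and threshold are invented for this example). It contrasts a component-wise "spreadsheet" score, where each trait contributes independently, with a holistic judgment in which one red flag colours the whole impression.

```python
# Hypothetical illustration of the two styles of judgment described above.
# All numbers and names here are assumptions for the sketch, not findings
# from the paper.

def ai_style_trust(competence, integrity, benevolence,
                   weights=(0.4, 0.4, 0.2)):
    """Score each trait separately, then combine with fixed weights."""
    w_c, w_i, w_b = weights
    return w_c * competence + w_i * integrity + w_b * benevolence

def human_style_trust(competence, integrity, benevolence):
    """Traits interact: a single strong negative drags everything down."""
    overall = (competence + integrity + benevolence) / 3
    if min(competence, integrity, benevolence) < 0.3:
        overall *= 0.5  # one red flag halves the whole impression
    return overall

# Same person, very different verdicts: skilled and kind, but dishonest.
person = dict(competence=0.9, integrity=0.2, benevolence=0.8)
print(ai_style_trust(**person))     # linear combination of traits
print(human_style_trust(**person))  # holistic, red-flag sensitive
```

In this toy setup the component-wise score stays moderate because strong traits offset the weak one, while the holistic score collapses once the integrity red flag appears, which is one way the same profile could receive very different outcomes from the two styles of judgment.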

Alongside that rigidity, a troubling pattern of amplified bias emerged.

In financial scenarios, such as deciding how much money to lend or donate, AI systems showed consistent and sometimes sizable differences based solely on demographic traits.

For example:
  • Older individuals were frequently given more favorable outcomes, though in some cases the opposite pattern appeared.
  • Religion also had a significant effect on the outcomes, especially the monetary ones.
  • Gender, too, influenced decisions in certain models and scenarios.
These differences appeared even when every other detail about the person was identical.

“Humans have biases, of course,” says Prof. Dover. “But what surprised us is that AI’s biases can be more systematic, more predictable, and sometimes stronger.”

Another key insight: there is no single “AI opinion.”

Different models often made different judgments about the same person. In some cases, one system rewarded a trait that another penalized.

That means the choice of AI system could quietly shape real-world outcomes.

“Which model you use really matters,” Lerman notes. “Two systems can look similar on the surface but behave very differently when making decisions about people.”

AI is already being used to screen job candidates, assess creditworthiness, recommend medical actions, and guide organizational decisions.

As these systems move from assistants to decision-makers, understanding how they “think” becomes critical.

The study suggests that while AI can mimic the structure of human judgment, it does so in a more rigid, less nuanced way, and with biases that may be harder to detect.

The researchers emphasize that their findings are not a warning against AI, but rather a call for awareness.

“These systems are powerful,” says Dover. “They can model aspects of human reasoning in a consistent way. But they are not human and we shouldn’t assume they see people the way we do.”

As AI becomes more embedded in everyday life, the question is no longer whether we trust machines. It’s whether we understand how they trust us.
The research paper, titled "A closer look at how large language models 'trust' humans: patterns and biases," is now available in Proceedings of the Royal Society A.
This research was conducted as part of the new "Research Center for AI in Organizations" being formed at the Hebrew University Business School.
Researchers:
Valeria Lerman (1), Yaniv Dover (1, 2, 3)
Institutions:
1. The Hebrew University Business School
2. The Federmann Center for the Study of Rationality, The Hebrew University of Jerusalem
3. The Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem
Regions: Middle East, Israel, North America, United States
Keywords: Society, Economics/Management, Social Sciences, Applied science, Artificial Intelligence
