AI’s hidden bias: Chatbots can influence opinions without trying

04/03/2026 Yale University

New Haven, Conn. — As people increasingly rely on AI-powered chatbots to look up basic facts about the world, a new Yale study shows that those interactions can influence users’ social and political opinions.

Prior research has shown that content generated by artificial intelligence (AI) can shift people's opinions when the AI has been prompted to be persuasive. But this study provides evidence that the same is true of content not intended to change minds, such as the summaries that popular chatbots produce in response to simple queries about historical events.

This unintended power to persuade is caused by latent biases introduced during the training of the large language models (LLMs) that drive chatbots’ core capabilities, the researchers said. Those latent biases — which can carry over from ideological leanings in the data used to train LLMs — lend subtle nuances to the framing of the narratives the chatbots generate, they explained.

“We show that querying an AI chatbot to obtain historical facts can influence people’s opinions even when the information provided is accurate and nobody has prompted the tool to try to persuade you of anything,” said Daniel Karell, an assistant professor of sociology in Yale’s Faculty of Arts and Sciences and the study’s senior author. “The effects are modest but could compound if somebody frequently engages with chatbots for factual information.”

The study was published on March 3 in the journal PNAS Nexus. Matthew Shu, a 2025 graduate of Yale College, is the lead author.

For the study, the researchers tested for the effects of both latent and prompted biases in AI-generated narratives about two historical events from the 20th century: the Seattle General Strike, a five-day general work stoppage in the city during February 1919; and the Third World Liberation Front student protests, student-led demonstrations in 1968 that demanded greater representation of ethnic minorities in academia.

To evaluate the effects of latent biases, the researchers asked 1,912 participants to read default summaries of the two events generated by either GPT-4o, a chatbot technology released by OpenAI in 2024, or the corresponding Wikipedia entries. They tested the relative influence of prompted biases by having other participants read summaries that portrayed the events with either deliberately liberal or conservative framing.

The researchers found that, compared to the Wikipedia entries, both the default AI summaries and those prompted to have what was considered a liberal framing caused participants to express more liberal opinions about the two events. At the same time, the study showed that readers of AI summaries with a conservative slant reported more conservative opinions relative to readers of Wikipedia.

That the default summaries moved readers’ opinions in a “liberal” direction demonstrates the persuasive effects of latent biases in LLMs, the researchers said. However, while statistically significant, the effects represent a slight difference: a shift from leaning towards a moderate stance to leaning towards a somewhat liberal stance, Karell noted.

To test whether readers’ existing political views moderate the degree to which the political framing of AI summaries influences their opinions, the researchers asked participants to self-report their political ideology. They found that the AI summaries prompted to have a liberal framing led to more liberal opinions across the ideological groups. The AI summaries with a conservative slant, by contrast, showed statistically significant effects only on the opinions of readers who identified as politically conservative.

These findings suggest that conservative framing in content generated by GPT-4o, and perhaps other AI chatbots, would likely result from prompting bias, whereas liberal framing could be the result of both latent and prompting bias, Karell said.

“We show that using chatbots to learn about history has unanticipated and anticipated influences on people’s opinions,” he said. “In contrast to Wikipedia, which emphasizes transparency in how its entries are edited, the development of AI chatbots is opaque. Our work suggests that the companies developing these models have the ability to shape people’s opinions, which is an unsettling thought.”

The study was coauthored by Keitaro Okura, a Ph.D. candidate at Yale, and Thomas Davidson of Rutgers University.

