How AI Might Be Narrowing Our Worldview, and What Regulators Can Do About It


Offering a Fresh Approach to AI Regulation
A new study highlights that generative AI systems, especially large language models like ChatGPT, tend to produce standardized, mainstream content, which can subtly narrow users’ worldviews and suppress diverse and nuanced perspectives. This is not just a technical issue; it has real social consequences, from eroding cultural diversity to undermining collective memory and weakening democratic discourse. Existing AI governance frameworks, focused on principles like transparency or data security, do not go far enough to address this “narrowing world” effect. To fill that gap, the article introduces “multiplicity” as a new principle for AI regulation, urging developers to design AI systems that expose users to a broader range of narratives, support diverse alternatives, and encourage critical engagement, so that AI can enrich, rather than limit, the human experience.

[Hebrew University] As artificial intelligence (AI) tools like ChatGPT become part of our everyday lives, from providing general information to helping with homework, one legal expert is raising a red flag: Are these tools quietly narrowing the way we see the world?

In a new article published in the Indiana Law Journal, Prof. Michal Shur-Ofry of the Hebrew University of Jerusalem, currently a Visiting Faculty Fellow at the NYU Information Law Institute, warns that the tendency of our most advanced AI systems to produce generic, mainstream content could come at a cost.

“If everyone is getting the same kind of mainstream answers from AI, it may limit the variety of voices, narratives, and cultures we’re exposed to,” Prof. Shur-Ofry explains. “Over time, this can narrow our own world of thinkable thoughts.”

The article explores how large language models (LLMs), the AI systems that generate text, tend to respond with the most popular content, even when asked questions that have multiple possible answers. One example in the study involved asking ChatGPT about important figures of the 19th century. The answers, which included figures like Lincoln, Darwin, and Queen Victoria, were plausible, but often predictable, Anglo-centric, and repetitive.

Likewise, when asked to name the best television series, the model’s answers centered on a short list of Anglo-American hits, leaving out the rich world of series that are not in English.

The reason lies in the way the models are built: they learn from massive digital datasets that are mostly in English, and they rely on statistical frequency to generate their answers. This means that the most common names, narratives, and perspectives surface again and again in the outputs they generate. While this can make AI responses helpful, it also means that less common information, including the cultures of smaller communities whose languages are not English, will often be left out. And because the outputs of LLMs become training material for future generations of LLMs, over time the “universe” these models project to us will become increasingly concentrated.
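To make the mechanism concrete, here is a toy sketch (the counts and names are hypothetical, and this is not the paper’s method or a real LLM): a “model” that always emits whichever continuation was most frequent in its training data will surface the same mainstream name on every run, however often it is asked.

```python
import collections

# Toy illustration: a "model" that picks its answer purely by how often
# each candidate appeared in (hypothetical) training data.
counts = collections.Counter({
    "Lincoln": 900, "Darwin": 850, "Queen Victoria": 800,
    "Rabindranath Tagore": 40, "Simón Bolívar": 35, "Hokusai": 20,
})

# Greedy decoding: always emit the single most frequent continuation.
# Run after run, the same mainstream name surfaces; the tail never appears.
for _ in range(3):
    print(counts.most_common(1)[0][0])  # prints "Lincoln" all three times
```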

According to Prof. Shur-Ofry, this can have serious consequences. It can reduce cultural diversity, undermine social tolerance, harm democratic discourse, and adversely affect collective memory – the way communities remember their shared past.

So what’s the solution?

Prof. Shur-Ofry proposes a new legal and ethical principle in AI governance: multiplicity. This means AI systems should be designed to expose users to, or at least alert them to the existence of, a range of different options, content, and narratives, not just the one “most popular” answer.

She also stresses the need for AI literacy, so that everyone has a basic understanding of how LLMs work and why their outputs are likely to lean toward the popular and mainstream. This, she says, will “encourage people to ask follow-up questions, compare answers, and think critically about the information they’re receiving. It will help them see AI not as a single source of truth but as a tool, and to ‘push back’ to extract information that reflects the richness of human experience.”

The article suggests two practical steps to bring this idea to life:
  1. Build multiplicity into AI tools: for example, through a feature that lets users easily raise the model’s “temperature”, a sampling parameter that increases the diversity of generated content (see the sketch after this list), or by clearly notifying users that other possible answers exist.
  2. Cultivate an ecosystem that supports a variety of AI systems, so users can easily get a “second opinion” by consulting different platforms.
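For the first step, here is a minimal sketch of how such a “temperature” control works, assuming the standard softmax-with-temperature rescaling used in LLM sampling (the names and counts are hypothetical, carried over from the toy example above): low temperature concentrates output on the most frequent answers, while higher temperature lets less common ones through.

```python
import math
import random

def sample_with_temperature(weights, temperature):
    """Sample one item; higher temperature flattens the distribution."""
    names = list(weights)
    # Treat log-frequencies as logits and rescale them by the temperature.
    logits = [math.log(weights[n]) / temperature for n in names]
    m = max(logits)  # subtract the max for numerical stability
    probs = [math.exp(l - m) for l in logits]
    return random.choices(names, weights=probs, k=1)[0]

weights = {"Lincoln": 900, "Darwin": 850, "Queen Victoria": 800,
           "Rabindranath Tagore": 40, "Simón Bolívar": 35, "Hokusai": 20}

# Low temperature is nearly deterministic and mainstream;
# higher temperature surfaces the long tail of possible answers.
for t in (0.2, 1.0, 2.0):
    picks = {sample_with_temperature(weights, t) for _ in range(50)}
    print(f"temperature={t}: {sorted(picks)}")
```

Hosted LLM services generally expose this knob directly (for example, OpenAI’s chat completions API accepts a temperature parameter), so a user-facing “diversity” control could simply map onto it.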
In follow-on work with Dr. Yonatan Belinkov and Adir Rahamim of the Technion’s Computer Science department, and Bar Horowitz-Amsalem of the Hebrew University, Shur-Ofry is attempting to implement these ideas and to present straightforward ways to increase the output diversity of LLMs.

“If we want AI to serve society, not just efficiency, we have to make room for complexity, nuance and diversity,” she says. “That’s what multiplicity is about, protecting the full spectrum of human experience in an AI-driven world.”
The research paper, “Multiplicity as an AI Governance Principle”, is published in the Indiana Law Journal and can be accessed at https://www.repository.law.indiana.edu/ilj/vol100/iss4/6/
Researchers:
Michal Shur-Ofry
Institutions:
Faculty of Law, the Hebrew University of Jerusalem
Visiting Faculty Fellow, NYU Information Law Institute

