Offering a Fresh Approach to AI Regulation
A new study highlights that generative AI systems, especially large language models like ChatGPT, tend to produce standardized, mainstream content, which can subtly narrow users’ worldviews and suppress diverse and nuanced perspectives. This isn't just a technical issue; it has real social consequences, from eroding cultural diversity to undermining collective memory and weakening democratic discourse. Existing AI governance frameworks, focused on principles like transparency or data security, don’t go far enough to address this “narrowing world” effect. To fill that gap, the article introduces “multiplicity” as a new principle for AI regulation, urging developers to design AI systems that expose users to a broader range of narratives, support diverse alternatives, and encourage critical engagement, so that AI can enrich, rather than limit, the human experience.
[Hebrew University] As artificial intelligence (AI) tools like ChatGPT become part of our everyday lives, from providing general information to helping with homework, one legal expert is raising a red flag: are these tools quietly narrowing the way we see the world?
In a new article published in the Indiana Law Journal, Prof. Michal Shur-Ofry from the Hebrew University of Jerusalem, a Visiting Faculty Fellow at the NYU Information Law Institute, warns that the tendency of our most advanced AI systems to produce generic, mainstream content could come at a cost.
“If everyone is getting the same kind of mainstream answers from AI, it may limit the variety of voices, narratives, and cultures we’re exposed to,” Prof. Shur-Ofry explains. “Over time, this can narrow our own world of thinkable-thoughts.”
The article explores how large language models (LLMs), the AI systems that generate text, tend to respond with the most popular content, even when asked questions that have multiple possible answers. One example in the study involved asking ChatGPT about important figures of the 19th century. The answers, which included figures like Lincoln, Darwin, and Queen Victoria, were plausible, but often predictable, Anglo-centric, and repetitive.
Likewise, when asked to name the best television series, the model’s answers centered on a short list of Anglo-American hits, leaving out the rich world of series that are not in English.
The reason lies in the way the models are built: they learn from massive digital datasets that are mostly in English, and they rely on statistical frequency to generate their answers. This means that the most common names, narratives, and perspectives surface again and again in the outputs they generate. While this might make AI responses helpful, it also means that less common information, including the cultures of smaller communities whose languages are not English, will often be left out. And because the outputs of LLMs become training material for future generations of LLMs, over time the “universe” these models project to us will become increasingly concentrated.
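To see why frequency-driven generation favors the mainstream, here is a minimal illustrative sketch in Python. The candidate names and probabilities are invented for the example; they are not taken from the study or from any real model.

```python
# Illustrative only: hypothetical frequency-derived probabilities for the question
# "Who were important figures of the 19th century?" (made-up numbers, not real model output).
import random

answer_probs = {
    "Abraham Lincoln": 0.40,
    "Charles Darwin": 0.30,
    "Queen Victoria": 0.20,
    "Rabindranath Tagore": 0.06,
    "Jose Marti": 0.04,
}

def most_frequent_answer(probs):
    """Pick the single most probable answer, as mainstream-leaning decoding tends to do."""
    return max(probs, key=probs.get)

def sampled_answer(probs, rng):
    """Sample in proportion to probability, so less common answers can still appear."""
    names = list(probs)
    weights = list(probs.values())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
print([most_frequent_answer(answer_probs) for _ in range(5)])
# -> the same name five times in a row
print([sampled_answer(answer_probs, rng) for _ in range(5)])
# -> a more varied mix that occasionally surfaces the less common figures
```

The point of the toy example: whenever generation leans on the most frequent option, the same popular names keep coming back, while everything in the long tail quietly disappears.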
According to Prof. Shur-Ofry, this can have serious consequences. It can reduce cultural diversity, undermine social tolerance, harm democratic discourse, and adversely affect collective memory – the way communities remember their shared past.
So what’s the solution?
Prof. Shur-Ofry proposes a new legal and ethical principle in AI governance: multiplicity. This means AI systems should be designed to expose users to, or at least alert them to, the existence of different options, content, and narratives, not just one “most popular” answer.
She also stresses the need for AI literacy, so that everyone will have a basic understanding of how LLMs work and why their outputs are likely to lean toward the popular and mainstream.
This, she says, will “encourage people to ask follow-up questions, compare answers, and think critically about the information they’re receiving. It will help them see AI not as a single source of truth but as a tool, and to ‘push back’ to extract information that reflects the richness of human experience.”
The article suggests two practical steps to bring this idea to life:
- Build multiplicity into AI tools: for example, through a feature that allows users to easily raise the model’s “temperature” (a parameter that increases the diversity of generated content), or by clearly notifying users that other possible answers exist (see the sketch after this list).
- Cultivate an ecosystem that supports a variety of AI systems, so users can easily get a “second opinion” by consulting different platforms.
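As a rough illustration of what the “temperature” control does, here is a short Python sketch. The option names and scores are made up for the example and are not part of the article or of any particular system; the sketch simply shows how temperature reshapes a probability distribution before sampling.

```python
# Illustrative only: a toy "temperature" knob over made-up scores for four kinds
# of television series; higher temperature flattens the distribution and lets
# less common options surface more often.
import math
import random

def softmax_with_temperature(scores, temperature):
    """Convert raw scores into probabilities; temperature < 1 sharpens, > 1 flattens."""
    scaled = [s / temperature for s in scores]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

options = ["Anglo-American hit", "European drama", "Asian drama", "African drama"]
scores = [5.0, 3.0, 2.5, 2.0]  # hypothetical scores skewed toward the mainstream option

rng = random.Random(0)
for temperature in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(scores, temperature)
    picks = rng.choices(options, weights=probs, k=20)
    print(f"temperature={temperature}: {sorted(set(picks))}")
# At temperature 0.2 virtually every pick is the mainstream hit;
# at 2.0 the other options appear regularly.
```

In this toy setup, a low temperature behaves like the “most popular answer” default, while a higher temperature gives the less common options a real chance of being shown, which is the kind of user-facing control the article has in mind.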
In a follow-on collaboration with Dr. Yonatan Belinkov and Adir Rahamim from the Technion’s Computer Science department, and Bar Horowitz-Amsalem from the Hebrew University, Shur-Ofry and her collaborators are attempting to implement these ideas and present straightforward ways to increase the output diversity of LLMs.
“If we want AI to serve society, not just efficiency, we have to make room for complexity, nuance and diversity,” she says. “That’s what multiplicity is about, protecting the full spectrum of human experience in an AI-driven world.”