“We All Become Partners with AI, Whether We Want to or Not”


By Lisbet Jaere.

AI's superior ability to formulate thoughts and statements for us weakens our judgment and ability to think critically, says media professor Petter Bae Brandtzæg.

No one had heard of ChatGPT just three years ago. Today, 800 million people use the technology. Artificial intelligence (AI) is rolling out at record speed and has become the new normal.

Many AI researchers, including Petter Bae Brandtzæg, are skeptical: AI is a technology that interferes with our ability to think, read, and write.

“We can largely avoid social media, but not AI. It is integrated into social media, Word, online newspapers, email programs, and the like. We all become partners with AI—whether we want to or not,” says Brandtzæg.

The professor of media innovations at the University of Oslo has examined how AI affects us in the recently completed project “An AI-Powered Society”.

The Freedom of Expression Commission Overlooked AI
The project was conducted in collaboration with the research institute SINTEF. It is the first of its kind in Norway to research generative AI (AI that creates content) and how it affects both users and the public.

The background was that Brandtzæg reacted to the fact that the report from the Norwegian Commission for Freedom of Expression, which was presented in 2022, did not sufficiently address the impact of AI on society. At least not generative AI.

“There are studies that show that AI can weaken critical thinking. It affects our language, how we think, understand the world, and our moral judgment,” says Brandtzæg.

A few months after the Commission for Freedom of Expression report, ChatGPT was launched, making his research even more relevant.

“We wanted to understand how such generative AI affects society, and especially how AI changes social structures and relationships.”

AI-Individualism
The social implications of generative AI form a relatively new research field that still lacks theory and concepts, so the researchers have launched the concept of “AI-individualism”. It builds on “networked individualism”, a framework introduced in the early 2000s.

Back then, the concept captured how smartphones, the Internet, and social media enabled people to create and tailor their social networks beyond family, friends, and neighbors.

Networked individualism showed how technology weakened the old limits of time and place, enabling flexible, personalized networks. With AI, something new happens: the line between people and systems also starts to blur, as AI begins to take on roles that used to belong to humans.

“AI can also meet personal, social, and emotional needs,” says Brandtzæg.

With a background in psychology, he has for a long time studied human-AI relationships with chatbots like Replika. ChatGPT and similar social AIs can provide immediate, personal support for any number of things.

“It strengthens individualism by enabling more autonomous behavior and reducing our dependence on people around us. While it can enhance personal autonomy, it may also weaken community ties. A shift toward AI-individualism could therefore reshape core social structures.”

He argues that the concept of “AI-individualism” offers a new perspective for understanding and explaining how relationships change in society with AI.

“We use it as a relational partner, a collaborative partner at work, to make decisions,” says Brandtzæg.

Students Choose the Chatbot
The project is based on several studies, including a questionnaire with open-ended questions answered by 166 high school students about how they use AI.

“They (ChatGPT and MyAI) go straight to the point regarding what we ask, so we don't have to search endlessly in the books or online,” said one high school student about the benefits of AI.

“ChatGPT helps me with problems, I can open up and talk about difficult things, get comfort and good advice,” responded a student.

In another study, an online experiment with a blind test, many participants preferred answers from a chatbot over a professional when they had questions about mental health. More than half preferred the chatbot's answers, less than 20 percent preferred the professional's, while 30 percent answered both.

“This shows how powerful this technology is, and that we sometimes prefer AI-generated content over human-generated content,” says Brandtzæg.

“Model Power” - Which Can Hallucinate
“Model power” is another concept they have launched. It builds on a theory of power relations developed by sociologist Stein Bråten 50 years ago.

Model power is the influence that comes from possessing an influential model of reality, one that others must accept in the absence of equivalent models of their own, according to the article “Modellmakt og styring” (“Model power and governance”, in the online newspaper Panorama, in Norwegian).

In the 1970s, the theory described how the media, science, and various groups with authority held model power and could influence people. Now it is AI that holds it.

Brandtzæg's point is that AI-generated content no longer operates in a vacuum. It spreads everywhere: in public reports, news media, research, and encyclopedias. When we perform Google searches, we are first shown an AI-generated summary.

“A kind of AI layer is covering everything. We suggest that the model power of social AI can lead to model monopolies, significantly affecting human beliefs and behavior.”

Because AI models, like ChatGPT, are based on dialogue, they call them social AI. But how genuine is a dialogue with a machine fed with enormous amounts of text?

“Social AI can promote an illusion of real conversation and independence – a pseudo-autonomy through pseudo-dialogue,” says Brandtzæg.

Critical but Still Following AI Advice
91 percent of Norwegians are concerned about the spread of false information from AI services like Copilot, ChatGPT, and Gemini, according to an August 2025 survey from the Norwegian Communications Authority (Nkom).

AI can hallucinate. A well-known example is a report that the municipality of Tromsø used as the basis for a proposal to close eight schools; it rested on sources that AI had fabricated. AI may thus contribute to misinformation and undermine user trust in AI, service providers, and public institutions alike.

Brandtzæg asks how many other, smaller municipalities and public institutions have done the same, and he worries about this unintentional spread of misinformation.

He and his research colleagues have reviewed various studies indicating that although we like to say we are critical, we nevertheless follow AI's advice, which highlights the model power of such AI systems.

“It's perhaps not surprising that we follow the advice that we get. It's the first time in history that we're talking to a kind of almighty entity that has read so much. But it gives a model power that is scary. We believe we are in a dialogue, that it's cooperation, but it's one-way communication.”

American Monoculture
Another aspect of this model power is that the AI companies are based in the USA and built on vast amounts of American data.

“We estimate that as little as 0.1 percent of the content in AI models like ChatGPT is Norwegian. This means that it is American information we relate to, which can affect our values, norms, and decisions.”

What does this mean for diversity? The principle is winner-takes-all: AI does not consider minority interests. Brandtzæg points out that the world has never before faced such an intrusive technology, which makes regulation, balanced against real human needs and values, necessary.

“We must not forget that AI is not a public, democratic project. It's commercial, and behind it are a few American companies and billionaires,” says Brandtzæg.

Sources:
Brandtzaeg, P. B., Skjuve, M., & Følstad, A. (2025). AI individualism: Transforming social structures in the age of social artificial intelligence. In P. Hacker (Ed.), Oxford Intersections: AI in Society (online ed.). Oxford University Press.

Skjuve, M., Følstad, A., Dysthe, K. K., Brænden, A., Boletsis, C., & Brandtzæg, P. B. (2025). Unge og helseinformasjon: ChatGPT vs. fagpersoner (Young people and health information: ChatGPT vs. professionals). Tidsskrift for velferdsforskning, 27(4), 1–17. https://doi.org/10.18261/tfv.27.4.2

