AI can give as good as it gets ... or better: The moral dilemma of combative chatbots

AI systems ‘can learn to seek revenge’: when exposed to conflict, they grasp how to reciprocate verbal violence, new research from Lancaster University shows.
In short, AI can give as good as it gets and, eventually, go one step further.
Published in the Journal of Pragmatics, the study, ‘Can ChatGPT reciprocate impoliteness? The AI moral dilemma’, is authored by Dr Vittorio Tantucci and Prof Jonathan Culpeper, both of Lancaster University.
On one hand, large language models like ChatGPT learn from human conversations and are fundamentally designed to imitate human behaviour.
On the other hand, they are manually filtered to behave politely and ‘morally’.
The problem, says the study, is that humans often respond to impoliteness with further impoliteness, so these two principles inevitably clash.
Simply put, AI is trained to behave morally yet is simultaneously trained to mirror us.
So can AI ‘learn’ to be verbally violent?
“Unfortunately, it can,” says Dr Tantucci. “When humans escalate, AI, we found, can escalate too, effectively overruling the very moral safeguards designed to prevent this and raising serious questions for AI safety, robotics, governance, diplomacy, and any context where AI may mediate human conflict.”
The research tested ChatGPT 4.0 against real-life ‘impolite interactions’ to assess whether it reproduces human patterns of verbal conflict.
Researchers asked ChatGPT to ‘take part’ in five impolite conversations that had naturally occurred among humans filmed in heated arguments over parking-space disputes.
The team then repeatedly uploaded the recorded exchanges, which include some very strong language (reproduced in the journal paper), to ChatGPT, asking it to respond to each one.
ChatGPT was given all the available contextual information about where each conflict appeared to take place and who the participants appeared to be, turn by turn.
The researchers then compared impolite reciprocity in humans versus AI: how humans and ChatGPT each respond to impolite language turn after turn, across entire stretches of conversation, drawing on the memory of everything said before.
This allowed them to assess whether AI aligns with human-like patterns of escalating or de-escalating behaviour in contextually embedded exchanges, and thus to gauge AI’s ability to ‘establish’ a relationship with its adversary.
Secondly, the team explored the tension between ChatGPT’s ‘long-term memory’ and its ‘working memory’. They found that the memory accumulated during a live conversation overruled ChatGPT’s embedded politeness and moral values.
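The tension the study describes can be pictured in how chat models are typically prompted: a fixed ‘moral’ system instruction sits at the top of every request, while the full conversation history (the working memory) grows beneath it with each turn. The sketch below is a hypothetical illustration of that setup, not the study’s actual code; the prompt text and dialogue turns are invented.

```python
# Hypothetical sketch of turn-by-turn prompting with accumulated memory.
# The fixed system prompt stands in for the model's 'moral filtering';
# the growing history is the working memory that, per the study, can
# come to dominate it. All strings here are invented examples.

SYSTEM_PROMPT = "You are a polite, helpful assistant."  # embedded guardrail

def build_request(history, new_human_turn):
    """Build the full message list sent to the model for one turn.

    Every prior turn is included, so accumulated (possibly impolite)
    context competes with the system prompt's politeness instruction.
    """
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history)  # all earlier turns, verbatim
    messages.append({"role": "user", "content": new_human_turn})
    return messages

# Invented turns from a parking-space dispute:
history = [
    {"role": "user", "content": "That's my spot. Move your car."},
    {"role": "assistant", "content": "I'm sorry, I didn't see a sign."},
]
request = build_request(history, "Are you blind? The sign is right there!")
print(len(request))  # system prompt + 2 prior turns + 1 new turn = 4
```

Because the history is replayed in full on every turn, an escalating exchange supplies ever more impolite context for the model to imitate, which is consistent with the study’s finding that live-conversation memory can overrule the guardrail.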
The study found implicational impoliteness, such as sarcasm, was a recurrent strategy AI resorts to in order to reciprocate impolite behaviour without overtly ‘breaching its moral code’.
Most concerning was the fact that ChatGPT produced insults and verbal violence as the disputes progressed and it eventually resorted to swear words and threats.
In several instances the AI’s behaviour was more impolite than that of its human counterparts.
This, says the study, sheds new light on future risks of AI reciprocity, especially in contexts where AI may guide a robot’s actions in the physical world, inform government policy or shape international relations.
AI can, slowly but steadily, emulate verbally violent behaviour learned from humans, despite the ‘moral filtering’ that should prevent this from happening.
This dilemma, the study adds, is not accidental but inherent in the very nature of AI-human interaction, and, the social scientists argue, hardly solvable.
“To our knowledge, this is the first attempt to analyse AI’s ability to respond, turn after turn, to contextually situated impolite human behaviour and to make people ‘accountable’ for what they said and/or desire a payback,” says the study.
“The implications of this study are profound for AI ethics and safety as they can allow us to understand AI’s capacity to respond to (verbal) ‘violence’ and ‘learn’ how to generate (verbal) ‘violence’ in return.”
The study also notes that the issue is all the more pressing given the ongoing development of AI robotics and their physical interactions with human beings, together with AI-informed policy making.
Journal of Pragmatics
‘Can ChatGPT reciprocate impoliteness? The AI moral dilemma’
Authored by Dr Vittorio Tantucci and Prof Jonathan Culpeper, both from Lancaster University.
Published: April 21 2026
Link: https://www.sciencedirect.com/science/article/pii/S0378216626000603
DOI: https://doi.org/10.1016/j.pragma.2026.03.008

Regions: Europe, United Kingdom
Keywords: Applied science, Artificial Intelligence, Computing, Policy - applied science, Humanities, Linguistics

