A new study shows that generative AI is already being used to “bring back” the dead: as entertainment icons, as political witnesses, and as everyday companions for grieving families. Tracing cases of AI “resurrections,” the study argues that the practice is not just emotionally powerful but ethically explosive, because it turns a person’s voice, face, and life history into reusable raw material. These resurrections matter because they can happen with little or no consent, no clear ownership rules, and no accountability, creating a new kind of exploitation the authors call “spectral labor”: the dead become an involuntary source of data and profit, while the living are left to navigate blurred lines between memory and manipulation, comfort and coercion, tribute and abuse.
What does it mean when artificial intelligence makes the dead speak again?
From hologram concerts of long-deceased pop stars to chatbots trained on the texts of lost loved ones, generative AI is rapidly redrawing the boundary between life and death. A new study by Tom Divon, an internet and technology researcher at the Hebrew University, and Prof. Christian Pentzold of Leipzig University in Germany offers one of the most comprehensive looks yet at this unsettling frontier, raising urgent questions about consent, exploitation, and power in a world where the dead can be digitally revived.
In their article, “Artificially Alive: An Exploration of AI Resurrections and Spectral Labor Modes in a Postmortal Society,” the researchers analyze more than 50 real-world cases from the United States, Europe, the Middle East, and East Asia in which AI technologies are used to recreate deceased people’s voices, faces, and personalities.
What sets this study apart is its scope and clarity. Rather than focusing on a single technology or viral example, the researchers examined dozens of cases from across continents to show that AI “resurrections” are already forming a recognizable social pattern. They identify three distinct ways the dead are being digitally reintroduced into society, from celebrity spectacles to political testimony to intimate conversations with lost loved ones, and reveal a shared underlying dynamic: the growing use of the dead as a source of data, voice, and likeness that can be reused and monetized, often without consent. This broad view shows how quickly experimental uses of AI are becoming normalized, and why the ethical stakes are no longer theoretical.
Three ways AI brings back the dead
The study identifies three dominant ways AI is being used to “re-presence” the deceased:
- Spectacularization – the digital re-staging of famous figures for entertainment. Fans can now watch “new” performances by Whitney Houston or Freddie Mercury, generated by AI and staged as immersive spectacles.
- Sociopoliticization – the reanimation of victims of violence or injustice for political or commemorative purposes. In some cases, AI-generated personas of the dead are made to testify, protest, or tell their own stories posthumously.
- Mundanization – the most intimate and fast-growing mode, in which everyday people use chatbots or synthetic media to “talk” with deceased parents, partners, or children, keeping relationships alive through daily digital interaction.
The rise of “spectral labor”
Across all three modes, the dead are not simply remembered; they are made to work.
Divon and Pentzold introduce the concept of spectral labor to describe what is happening beneath the surface. AI systems are trained on the digital remains of the dead: photos, videos, voice recordings, social media posts. Without consent, these data are extracted, repackaged, and monetized, with immense potential for weaponization.
What happens when a figure like Charlie Kirk is resurrected to continue circulating his ideology, speaking to new audiences after his death, without accountability, context, or the possibility of refusal? Or when the likeness of a victim is reanimated to repeatedly relive trauma for political, commercial, or instructional ends? In these cases, AI resurrection becomes a tool for extending power, ideology, and influence beyond the limits of life itself.
“The dead are compelled to haunt the present,” the authors argue, serving the emotional, political, or commercial desires of the living.
This raises difficult questions: Who owns a voice after death? Can a digital likeness be exploited? And who gets to decide how, when, and why the dead are brought back?
Living in a “postmortal society”
The study situates AI resurrections within what sociologists call a postmortal society: one that does not deny death but increasingly seeks to overcome it technologically. In this world, immortality is no longer promised through religion alone, but through data, algorithms, and platforms offering “digital afterlives.”
Yet the authors are clear: AI does not conquer death. Instead, it keeps people suspended in an uneasy in-between state, neither fully alive nor fully gone.
As generative AI accelerates, Divon and Pentzold warn that society must confront the ethical and legal implications now, before digital resurrection becomes normalized and unregulated.
“Thinking seriously about what AI does to our relationship with the dead,” they write, “is essential to understanding what it is doing to the living.”