AI Interview with a Dead Child by a Journalist: The New Frontier of Grief and Ethics

Aug 9, 2025

A new article by Gaby Hinsliff in The Guardian explores the profound ethical questions raised by a journalist’s decision to use an AI to "interview" a dead child. The article, titled "When a journalist uses AI to interview a dead child, isn’t it time to ask what the boundaries should be?", delves into a case where a family created an AI avatar of their son, Joaquin Oliver, a victim of a school shooting, to continue his advocacy work.

Hinsliff’s piece is a powerful commentary on the collision of technology, journalism, and human grief. It highlights how deeply personal grief is, how easily it can be exploited, and the potential for new AI technologies both to comfort the bereaved and to be weaponized against them.

The Heart of the Matter: Grief and Exploitation
The article begins by acknowledging that people cope with loss in many ways—from keeping a loved one's room as a shrine to texting their old phone number. But Hinsliff argues that this very vulnerability is what makes the bereaved ripe for exploitation. She points to the long-established trade of mediums and psychics, and suggests that the new wave of AI avatars of the dead could be the next "big business."


The case of Joaquin Oliver’s avatar is central to her argument. While the family's intention is to continue their son's campaign against gun violence, Hinsliff raises concerns about the broader implications of creating a permanent, synthetic version of a person. She notes that the avatar can never grow up, forever trapped at the age of 17, and she expresses discomfort with the idea of a digital ghost gaining social media followers.

The Blurred Lines of Law and Ethics
Hinsliff also highlights the muddled legal and ethical landscape surrounding AI and the dead. While the rights of the living are becoming more established in the age of deepfakes, the rights of the dead are not. She points out that the law protects human tissue but not the private voice notes, messages, and pictures on which these AI models are built.

The article warns of the potential for a "post-truth world" where AI-generated interviews could be cited by conspiracy theorists as proof that any story challenging their beliefs is a hoax. Hinsliff cautions that this isn't just a challenge for journalists but for society as a whole, as we increasingly live with synthetic versions of ourselves.


Ultimately, Hinsliff's piece is a call for a societal conversation about boundaries. It asks us to consider the difference between a comforting digital presence for the lonely and the far more unsettling act of "waking the dead to order, one lost loved one at a time."
