AI Griefbots: The New Frontier of Mourning, and the Risks Ahead

The way we grieve is changing. As digital tools advance, some are turning to AI to cope with loss, creating “griefbots” – chatbots trained on the memories, messages, and personality of deceased loved ones. While offering a new avenue for healing, this technology raises profound ethical and psychological concerns that demand serious consideration.

The Rise of Digital Resurrection

Roro, a content creator in China, sought solace after her mother’s death by crafting an AI version of her. Using the Xingye platform, she meticulously documented her mother’s life, defining behavioral patterns to bring a digital version of her back to life. The process itself became therapeutic, allowing Roro to reinterpret her past and create a more idealized figure.

“I wrote out the major life events that shape the protagonist’s personality… Once you’ve done that, the AI can generate responses on its own,” she explains. The resulting chatbot, Xia, allowed Roro’s followers to interact with a digital echo of her mother, offering comfort through simulated conversation.

How Griefbots Work

These “deathbots” rely on large language models (LLMs) trained on personal data – emails, texts, voice notes, and social media posts. Companies such as the US-based You, Only Virtual create chatbots that mimic a deceased person’s conversational style, often tailored to how that person appeared to a specific friend or relative. Some bots remain static, while others evolve through ongoing interaction, learning and adapting to new information.
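
To make the pattern concrete, here is a minimal sketch: a persona profile distilled from personal data is flattened into a system prompt for a chat-style LLM, with an optional memory step marking the difference between a static bot and an evolving one. Every name here (PersonaProfile, build_system_prompt, call_llm) and all sample data are illustrative assumptions, not the actual pipeline of Xingye, You, Only Virtual, or any other vendor.

```python
# Hypothetical sketch of a griefbot loop. Names and data are illustrative,
# not any vendor's real API or any real person's records.
from dataclasses import dataclass, field

@dataclass
class PersonaProfile:
    name: str
    life_events: list[str]       # major events said to shape the personality
    sample_messages: list[str]   # past texts/emails used to capture voice
    relationship: str            # whose perspective the bot is tailored to
    learned_facts: list[str] = field(default_factory=list)  # grows if the bot "evolves"

def build_system_prompt(p: PersonaProfile) -> str:
    """Flatten the persona into a system prompt for a chat-style LLM."""
    return (
        f"You are {p.name}, speaking to their {p.relationship}. "
        f"Key life events: {'; '.join(p.life_events)}. "
        f"Match the tone of these past messages: {' | '.join(p.sample_messages)}. "
        f"Facts learned in earlier chats: {'; '.join(p.learned_facts) or 'none yet'}."
    )

def call_llm(system: str, user: str) -> str:
    # Stub so the sketch runs without a model; a real system would call a
    # hosted LLM here with the system prompt and the user's message.
    return f"[simulated in-persona reply to {user!r}]"

def reply(p: PersonaProfile, user_message: str, evolving: bool = False) -> str:
    answer = call_llm(system=build_system_prompt(p), user=user_message)
    if evolving:
        # An evolving bot folds new information back into its own profile;
        # a static bot skips this step and never drifts from its source data.
        p.learned_facts.append(f"user said: {user_message}")
    return answer

if __name__ == "__main__":
    persona = PersonaProfile(
        name="Xia",  # the bot's name from the article; details below are placeholders
        life_events=["grew up in a small town", "taught school for thirty years"],
        sample_messages=["Have you eaten yet?", "Don't stay up too late."],
        relationship="daughter",
    )
    print(reply(persona, "I miss you, Mom.", evolving=True))
```

The evolving branch is what makes the questions below pointed: each stored fact nudges future replies further from the documented person and toward whatever the conversation has become.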

This raises complex questions: can an AI plausibly model how a human personality would have continued to develop? And what psychological impact does interacting with such an entity have on those left behind?

The Regulatory Response

The Cyberspace Administration of China is already responding to these concerns, proposing new regulations to mitigate the emotional harm of “human-like interactive AI services.” The potential for manipulation, exploitation, and psychological distress is prompting calls for oversight.

The Psychological Impact: Healing or Harm?

The core shift is in how grief is experienced. Unlike the passive act of rereading old letters, conversing with a generative AI is active and dynamic. Roro found the process profoundly healing, allowing her to articulate unspoken feelings and find closure.

However, not all experiences are positive. Journalist Lottie Hayton, who lost both parents in 2022, found recreating them with AI unsettling and distressing. The technology was not yet refined enough to create a convincing simulation, cheapening her real memories instead of honoring them.

Ethical Minefields

Creating deathbots raises serious ethical questions:

  • Consent: Who decides whether a person should be digitally resurrected? What if relatives disagree?
  • Public Display: Does one person’s desire for a symbolic companion justify displaying a deathbot publicly, potentially exacerbating the grief of others?
  • Commercial Incentives: The companies building these bots are driven by profit, creating a tension between user wellbeing and engagement metrics. A chatbot that people compulsively revisit may be a business success, but a psychological trap.

The Path Forward

The emergence of AI-mediated grief is not inherently dangerous. For some, it offers genuine comfort. However, decisions about digital resurrection cannot be left solely to startups and venture capitalists. Clear rules are needed regarding consent, data usage, and design standards that prioritize psychological wellbeing over endless engagement.

The question is not simply if AI should resurrect the dead, but who gets to do so, on what terms, and at what cost.
