Virtual companions, real responsibility: call for clear regulations on AI tools used for mental health interactions

Research

Artificial Intelligence (AI) can converse, mirror emotions, and simulate human engagement. Publicly available large language models (LLMs) – often used as personalised chatbots or AI characters – are increasingly involved in mental health-related interactions. While these tools offer new possibilities, they also pose significant risks, especially for vulnerable users. Mindy Nunez Duffourc (Assistant Professor of Private Law and member of the Law and Tech Lab) co-authored an article together with researchers from TUD Dresden University of Technology and the University Hospital Carl Gustav Carus, calling for stronger regulatory oversight. Their publication “AI characters are dangerous without legal guardrails” in Nature Human Behaviour outlines the urgent need for clear regulations for AI characters.

General-purpose large language models (LLMs) like ChatGPT or Gemini are not designed as specific AI characters or therapeutic tools. Yet simple prompts or specific settings can turn them into highly personalised, humanlike chatbots. Interaction with AI characters can negatively affect young people and individuals with mental health challenges. Users may form strong emotional bonds with these systems, but AI characters remain largely unregulated in both the EU and the United States. Importantly, they differ from clinical therapeutic chatbots, which are explicitly developed, tested, and approved for medical use.

“AI characters are currently slipping through the gaps in existing product safety regulations,” explains Nunez Duffourc. “They are often not classified as products and therefore escape safety checks. And even where they are newly regulated as products, clear standards and effective oversight are still lacking.”

Background: Digital interaction, real responsibility

Recent international reports have linked intensive personal interactions with AI chatbots to mental health crises. The researchers argue that systems imitating human behaviour must meet appropriate safety requirements and operate within defined legal frameworks. At present, however, AI characters largely escape regulatory oversight before entering the market.

Proposed solution: “Good Samaritan AI” as a safeguard

The research team emphasises that the transparency requirement of the European AI Act – simply informing users that they are interacting with AI – is not enough to protect vulnerable groups. They call for enforceable safety and monitoring standards, supported by voluntary guidelines to help developers with implementing safe design practices.

As solution they propose linking future AI applications with persistent chat memory to a so-called “Good Samaritan AI” – an independent, supportive AI instance to protect the user and intervene when necessary. Such an AI agent could detect potential risks at an early stage and take preventive action, for example by alerting users to support resources or issuing warnings about dangerous conversation patterns.

Recommendations for safe interaction with AI

In addition to implementing such safeguards, the researchers recommend robust age verification, age-specific protections, and mandatory risk assessments before market entry.

The researchers argue that clear, actionable standards are needed for mental health-related use cases. They recommend that LLMs clearly state that they are not an approved mental health medical tool. Chatbots should refrain from impersonating therapists, and limit themselves to basic, non-medical information. They should be able to recognise when professional support is needed and guide users toward appropriate resources. Effectiveness and application of these criteria could be ensured through simple open access tools to test chatbots for safety on an ongoing basis.

Publication

Mindy Nunez Duffourc, F. Gerrik Verhees, Stephen Gilbert: AI characters are dangerous without legal guardrails; Nature Human Behaviour, 2025.

doi: 10.1038/s41562-025-02375-3. URL: https://www.nature.com/articles/s41562-025-02375-3

Also read