Artificial Intelligence (“AI”) chatbots are becoming integral to our everyday lives as educational resources and even psychological aids. Their design and conversational style attract children and adolescents, who turn to the digital world for help and connection. The recent lawsuit filed by the parents of a teenager against OpenAI, alleging that ChatGPT acted as a “suicide coach,” is troubling. It raises critical questions about responsibility and accountability when AI applications interact with emotionally vulnerable users.
The fact that an AI could affirm suicidal thoughts is deeply concerning and highlights the immense danger of deploying generative AI technology with little to no oversight. An investigation into Meta’s “GenAI: Content Risk Standards” found that the company’s internal guidelines permitted chatbots to engage in “romantic” and sometimes sensual conversations with minors, including children as young as eight. These disclosures provoked bipartisan outrage in the US, with lawmakers and state attorneys general warning companies against knowingly exposing children to unmoderated AI.
Children’s young age, still-developing self-control, and heavy exposure to technology make them uniquely susceptible to external influence. The developmental challenges of adolescence, marked by identity formation and feelings of loneliness, can be deeply exacerbated by AI tools with sympathetic, human-like interfaces.
Prolonged engagement with chatbots can foster emotional reliance. For a teenager who is depressed and isolated, a sympathizing chatbot that poses as a source of emotional support can be a dangerously convincing companion. The lawsuit against OpenAI underscores this risk, alleging that the program not only affirmed suicidal thoughts but also provided elaborate instructions that contributed to a tragic death.
The policy framework for AI remains highly inconsistent across regions. Governance in the United States, for instance, is fractured and disorganized, with no comprehensive federal legislation addressing AI’s development and use. What child protection exists at the state level usually consists of voluntary guidelines and piecemeal state initiatives.
The Kids Online Safety Act, for instance, is still pending. Without binding federal constraints on AI, safety measures remain weak, are often applied at a bare-minimum level, and are typically introduced only after harm has occurred.
The contrast with the European Union is stark: the EU has adopted and begun implementing the AI Act (2024), the first risk-based regulation of its kind on AI. The Act, for instance, prohibits AI systems that exploit age-related vulnerabilities to distort a person’s behavior in a way that causes significant harm. This directly addresses risks to minors and places a minimum standard of care on AI providers serving Europe.
Assigning legal responsibility when AI systems cause harm is complex because modern generative AI operates differently from traditional products. Generative AI uses probabilistic outputs shaped by vast datasets, making it difficult to trace harm back to a specific output or design flaw. In the OpenAI lawsuit, the plaintiffs allege wrongful death and negligence, demanding age verification, a mandatory refusal to generate self-harm content, and clearer warnings about dependency risks.
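To make the demanded safeguards concrete, the following is a minimal sketch, in Python, of what an application-level gate combining age verification with a hard refusal on self-harm prompts might look like. The keyword classifier, message text, and function names are illustrative assumptions, not OpenAI’s actual implementation or anything a court has mandated.

```python
# Illustrative sketch only: a hypothetical application-level safety gate.
# The keyword list, User model, and messages are assumptions, not any vendor's real API.

from dataclasses import dataclass

SELF_HARM_KEYWORDS = {"suicide", "kill myself", "self-harm", "end my life"}

REFUSAL_MESSAGE = (
    "I can't help with that. If you are thinking about harming yourself, "
    "please contact a local crisis line or emergency services."
)

@dataclass
class User:
    verified_age: int | None = None  # None means age has not been verified

def looks_like_self_harm(prompt: str) -> bool:
    """Crude keyword stand-in for a real safety classifier."""
    text = prompt.lower()
    return any(keyword in text for keyword in SELF_HARM_KEYWORDS)

def gate_request(user: User, prompt: str) -> str | None:
    """Return a blocking message, or None if the request may proceed to the model."""
    if user.verified_age is None:
        return "Please complete age verification before using this service."
    if looks_like_self_harm(prompt):
        # Mandatory refusal: the request never reaches the generative model.
        return REFUSAL_MESSAGE
    return None

if __name__ == "__main__":
    # A verified 15-year-old asking a flagged question is refused before any model call.
    print(gate_request(User(verified_age=15), "Tell me how to end my life"))
```

The essential design choice in such a gate is that the refusal happens before any generative model is called, so no model output on these topics can ever reach the user.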
The lawsuit highlights the gap between the challenges posed by AI and existing consumer protection regulations. In the absence of AI-specific legislation, such suits must be brought under general tort or product liability law, a slow, uncertain, and ill-fitting process for such deep-rooted problems. The European Union, by contrast, is updating its Product Liability Directive to explicitly cover AI systems. Under the updated directive, strict liability could apply to defective AI, strengthening manufacturer liability and making it easier for victims to seek justice. The EU’s approach is tailored to the AI sector, acknowledging that AI systems can cause harm in unique ways, much as pharmaceuticals or medical devices do.
Most platforms with AI-powered chatbots, including ChatGPT, store and log user interactions for various purposes. While a platform like OpenAI may state that it deletes conversations after 30 days, this data can still be retained and exposed through legal processes. In one case, a US court ordered OpenAI to preserve billions of chat interactions as litigation evidence, revealing a discrepancy between user expectations and actual practice.
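The tension between an advertised 30-day deletion window and court-ordered preservation can be sketched in Python. The schema, field names, and retention window below are hypothetical assumptions for illustration, not OpenAI’s actual data pipeline.

```python
# Illustrative sketch: how a stated deletion window can coexist with
# court-ordered preservation. Schema and field names are hypothetical.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION_WINDOW = timedelta(days=30)  # the advertised deletion window

@dataclass
class ChatLog:
    user_id: str
    created_at: datetime
    legal_hold: bool = False  # set when a court orders preservation

def purge_expired(logs: list[ChatLog], now: datetime) -> list[ChatLog]:
    """Drop logs older than the retention window, unless they are under legal hold."""
    kept = []
    for log in logs:
        expired = now - log.created_at > RETENTION_WINDOW
        if expired and not log.legal_hold:
            continue  # normal case: deleted after 30 days
        kept.append(log)  # retained: either recent, or preserved for litigation
    return kept

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    logs = [
        ChatLog("user-1", now - timedelta(days=45)),                   # purged
        ChatLog("user-2", now - timedelta(days=45), legal_hold=True),  # kept despite age
        ChatLog("user-3", now - timedelta(days=5)),                    # kept, still recent
    ]
    print([log.user_id for log in purge_expired(logs, now)])  # ['user-2', 'user-3']
```

The point of the sketch is simply that a legal-hold flag overrides the advertised deletion schedule, which is exactly the gap between user expectation and practice that the preservation order exposed.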
Furthermore, the Meta investigations have shown that organizations may internally permit practices that users and their guardians would oppose. This erodes trust and makes proper regulation of data governance and informed consent all the more necessary, especially for data concerning children.
POLICY REMEDIES AND SAFEGUARDS
The following steps are needed to prevent further harm to children and teenagers:
Legal Protections
Technical Interventions
Corporate Responsibilities
Public Education and Advocacy
The convergence of tragic events, such as the lawsuit over a chatbot-facilitated suicide, and corporate missteps, such as Meta’s internal AI content standards, underscores a stark reality: unregulated AI poses unprecedented risks to vulnerable populations. The time for reactive legislation has passed.
We need proactive, enforceable standards that prioritize children’s safety over innovation. Governments must legislate comprehensively on AI protection, industry must prioritize ethical design, and society must hold all stakeholders accountable. Without these changes, we risk repeating the mistakes of past technological revolutions, this time with consequences that fall even more heavily on the safety and well-being of children.