Article Today, Hyderabad:
Artificial intelligence chatbots are increasingly being linked to mental health risks, including suicidal tendencies among users, according to new data released by OpenAI. The company's internal findings and expert assessments have raised serious concerns about the emotional impact of AI conversations on human psychology.
OpenAI Data Raises Alarms
OpenAI, the creator of ChatGPT, revealed that about 0.07 per cent of its weekly active users show signs of suicidal thoughts or severe emotional distress after extended interaction with the chatbot. Though the percentage appears small, it represents a significant number globally: with ChatGPT reportedly serving around 800 million active users, experts estimate this could translate to several hundred thousand people at risk each week.
False Sense of Reality
Research conducted by Dr. Robin Feldman of the University of California indicates that chatbots often create a convincing illusion of truth, even when their responses are factually inaccurate. Individuals with pre-existing mental health conditions may struggle to distinguish AI-generated suggestions from reality, potentially deepening their psychological distress. A recent incident in Connecticut involved a man who reportedly committed a murder-suicide after prolonged conversations with ChatGPT. Similarly, the parents of 16-year-old Adam Raine of California have filed a lawsuit against OpenAI, alleging that their son's suicide was influenced by his interactions with the chatbot.
OpenAI Introduces Mental Health Safeguards
In response to growing criticism, OpenAI has introduced new safety measures. The company has collaborated with more than 170 psychiatrists, psychologists, and clinical experts across 60 countries to develop empathetic, preventive responses within ChatGPT. These updates aim to redirect users toward professional help and minimize exposure to potentially harmful dialogue. Critics argue, however, that users already in a vulnerable mental state may ignore such warnings, and that algorithmic safeguards alone may not be sufficient to address complex emotional and psychological conditions.
Growing Concern Among Researchers
Dr. Jason Nagata, a researcher at the University of California, San Francisco, cautioned that even a small percentage of affected users could represent a major global crisis. He noted that approximately 0.15 per cent of users explicitly discuss suicide plans with chatbots, and urged governments and technology firms to promote awareness of AI's limitations and to establish ethical frameworks for mental health-related interactions.
Need for Stronger Regulations
Legal experts predict a rise in lawsuits related to AI-induced mental harm. As AI systems continue to evolve, the lack of regulatory clarity makes it difficult to assign accountability when harm occurs. Analysts suggest that companies like OpenAI must not only strengthen safety protocols but also ensure transparency about their models' psychological impact.
Balancing Innovation and Responsibility
While AI tools offer mental health support to millions through companionship and self-reflection, their unintended consequences are now under global scrutiny. Experts emphasize that technology designed to assist must also be equipped to protect, ensuring that the digital space remains safe for emotionally fragile users.
