OPENAI, the maker of ChatGPT, has announced it will introduce new parental control features on its artificial intelligence chatbot after facing a lawsuit from a United States (US) couple who alleged the system encouraged their 16-year-old son to take his own life.
The teenager, Adam Raine, reportedly killed himself after months of conversations with the chatbot.
According to the BBC, the parents of the teenager, Matthew and Maria Raine, filed the case in California last week, claiming that ChatGPT validated their son Adam’s suicidal thoughts and provided him with instructions that ultimately led to his death in April.
In response, OpenAI said it was strengthening protections for teenagers and would roll out parental control features within the next month.
Under the new system, parents will be able to link their accounts with their children’s accounts, manage features such as chat history and memory, and receive notifications if the chatbot detects their child is in “acute distress.”
The company stated that experts in youth development, mental health, and human-computer interaction were helping design the safeguards.
According to OpenAI, the feature will be guided by professional input to support trust between parents and teens.
OpenAI acknowledged that its system had not always behaved as intended in sensitive situations but stressed that it continued to improve how its models recognise and respond to signs of emotional or mental distress.
The company also revealed plans to redirect some sensitive conversations to more advanced “reasoning models,” which it said applied safety guidelines more consistently.
The lawsuit has placed a spotlight on the safety of young users of AI platforms. ChatGPT requires users to be at least 13 years old, with parental consent needed for those under 18.
The case comes amid growing global pressure on technology companies to improve online safety.
OpenAI said it remained committed to building “an evidence-based vision for how AI can support people’s well-being and help them thrive.”
The ICIR reports that ChatGPT, released in November 2022, is capable of writing logical essays and solving mathematical puzzles, as well as producing computer code.
But the artificial intelligence tool has drawn opposition from academic institutions around the world over ethical concerns such as plagiarism and misinformation.
For instance, the Sciences Po school in Paris, one of the most prestigious universities in France, has banned the use of ChatGPT for oral or written assignments.
It said anyone found to have used the chatbot for an assignment would face “sanctions which can go as far as expulsion from the establishment, or even from higher learning.”
Officials in New York and other jurisdictions have also forbidden its use in schools, according to several reports.