Dead teenager's family says ChatGPT's new parental controls are not enough

A lawyer representing a California couple who sued OpenAI after the death of their 16-year-old son has criticised the chatbot's new parental controls.

The company announced the new measures in the wake of the family's allegations that the chatbot encouraged their son to take his own life.

OpenAI said parents of teenagers will soon be able to receive a notification if the platform believes their child is in "acute distress", among other parental controls.

But Jay Edelson, a lawyer representing the family, said the announcement was "OpenAI's crisis management team trying to change the subject" and called for the chatbot to be taken offline.

"Instead of taking emergency action to pull a known dangerous product offline, OpenAI has made vague promises to do better," he said.

The lawsuit, filed in California last week by Matt and Maria Raine, the parents of 16-year-old Adam Raine, was the first legal action accusing OpenAI of wrongful death.

The family included chat logs between Adam, who died in April, and ChatGPT which show him explaining that he was having suicidal thoughts.

They argue the program validated his "most harmful and self-destructive thoughts", and the lawsuit accuses OpenAI of negligence and wrongful death.

When news of the case emerged last week, OpenAI published a note on its website saying ChatGPT is trained to direct people to seek professional help when they are in distress, such as the Samaritans in the UK.

However, the company acknowledged "there have been moments where our systems did not behave as intended in sensitive situations".

It has now published a further update outlining the additional measures it is planning, which will allow parents to:

  • Link their account with their teenager's account
  • Manage which features to disable, including memory and chat history
  • Receive notifications when the system detects their teenager is in a "moment of acute distress"

OpenAI said expert input would guide how it detects acute distress, "to support trust between parents and teens".

The company said it is working with a group of experts in youth development, mental health and human-computer interaction to help shape an "evidence-based vision for how AI can support people's well-being and help them thrive".

OpenAI has not yet responded to Mr Edelson's claims.

ChatGPT users must be at least 13 years old, and those under 18 must have a parent's permission to use it, according to OpenAI.

The announcement from OpenAI is the latest in a series of measures by the world's biggest technology companies attempting to make online experiences safer for children.

Many have come as a result of new legislation, such as the Online Safety Act in the UK.

This has included the introduction of age verification on Reddit, X and pornography sites.

Earlier this week, Meta, which runs Facebook and Instagram, said it would add more guardrails to its artificial intelligence (AI) chatbots, including blocking them from talking to teenagers about suicide, self-harm and eating disorders.

A US senator launched an investigation into the tech giant after notes in an internal document suggested its AI products could have "sensual" conversations with teenagers.

The company described the notes in the document, obtained by Reuters, as erroneous and inconsistent with its policies, which prohibit any content sexualising children.
