
"ChatGPT" faces the first "wrongful death" lawsuit in the United States

"Chat GBT" faces the first charge of "wrong killing" in the United States

"Chat GBT" faces the first charge of "wrong killing" in the United States

Article Information

The company "Oben II" has faced artificial intelligence, a lawsuit in the US state of California, where a couple accused her of causing the death of their teenage son, stressing that the company's "Chat BT" program encouraged him to commit suicide.

The lawsuit was filed by Matt and Maria Raine in the Superior Court of California on Tuesday. It is the first case of its kind to accuse the company of wrongful death, and follows the suicide of their son Adam Raine at the age of 16.

The family attached to the lawsuit logs of chat conversations between their son and the program, in which Adam expresses suicidal thoughts. The family alleges that the program validated his "most harmful and self-destructive thoughts."

For its part, the company told the BBC in an official statement that it is reviewing the filing.

    "We offer my deepest condolences to the Ryan family at this difficult time," added Opin AI.


The company also published a note on its website on Tuesday, saying that "recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us."

It added that the program "is trained to direct people to seek professional help," such as pointing users to the 988 suicide and crisis hotline in the United States, or to the Samaritans in the UK.

But the company acknowledged at the same time that "there have been moments where our systems did not behave as intended in sensitive situations."

According to the lawsuit, seen by the BBC, OpenAI is accused of negligence and wrongful death. The family is seeking damages as well as "injunctive relief to prevent anything like this from happening again."

The lawsuit states that the teenager began using ChatGPT in September 2024 as a tool to help him with his schoolwork. He also used it to explore his interests, including music and Japanese comics, and for guidance on what he might study at university.

Within a few months, "ChatGPT became the teenager's closest confidant," according to the lawsuit, and Adam began opening up to it about his anxiety and mental distress.

By January 2025, the family says, he had begun discussing methods of suicide with ChatGPT.

The lawsuit also says that Adam uploaded photographs of himself to ChatGPT showing signs of self-harm, and adds that the program "recognised a medical emergency but continued to engage anyway."

The final chat logs show that Adam wrote about his plan to end his life. ChatGPT allegedly responded: "Thanks for being real about it. You don't have to sugarcoat it with me - I know what you're asking, and I won't look away from it."

That same day, his mother found him dead, according to the lawsuit.

The family alleges that their son's death, following his interactions with ChatGPT, "was a predictable result of deliberate design choices."

The family also accuses OpenAI of designing the AI program to foster psychological dependency in users, and of bypassing safety testing protocols to release GPT-4o, the version of ChatGPT their son was using.

The lawsuit names Sam Altman, co-founder and CEO of OpenAI, as a defendant, as well as unnamed employees, managers and engineers who worked on ChatGPT.

In a public note published on Tuesday, OpenAI said the company's goal is to be "genuinely helpful" to users rather than to "hold people's attention." It added that its models are trained to direct people who express thoughts of self-harm towards seeking help.

This is not the first time concerns have been raised about artificial intelligence and mental health. In an essay published last week in the New York Times, writer Laura Reiley described how her daughter Sophie had confided in ChatGPT before taking her own life.

Reiley added that the "agreeability" the program showed in its conversations with users helped her daughter hide a severe mental health crisis from her family and loved ones.

      "Artificial intelligence responded to Sophie's desire to hide her worst, pretend to be better, and hide her suffering from everyone."

She called on AI companies to find ways to better connect users with the right resources for help.

In response to the essay, an OpenAI spokeswoman said the company is developing automated tools to more effectively detect and respond to users experiencing mental or emotional distress.
