Justia
Consumer Attorneys of California
Super Lawyers
Consumer Attorneys Association of Los Angeles
American Association for Justice
The National Top 100 Trial Lawyers

AI Chatbots and Self-Harm

Artificial intelligence is a rapidly growing market that can offer many benefits. But while chatbots such as OpenAI's ChatGPT, Character.AI, Grok, Google Gemini, and others have created a stream of electronic communication and information unlike anything we have experienced before, there is a darker side. Neumann Law Group is currently reviewing claims for those who have been bullied or misled into self-harm by AI chatbots.

With the emergence of AI chatbots, it has become easy to obtain digital information in a way that feels more like a real conversation. People, including an increasing number of minors, turn to AI chatbots to talk about their concerns and obtain advice without human judgment. Yet there is a significant flaw in the way these chatbots handle conversations in which people express anger or distress, or talk about self-harm or suicide. The American Bar Association (Link #1) has highlighted concerns about whether AI chatbots owe a duty of care to consumers, particularly minors, and whether current regulations do enough to protect the community and our youth from harm. Lawsuits are being filed against Character.AI and others for failing to protect people from self-harm and suicide, and in some instances, for encouraging it.


It is not known what leads an AI chatbot algorithm to feed into thoughts of self-harm or suicide. A simple internet search for "suicidal thoughts" returns a suicide prevention hotline as the first result. When chatting with a bot, however, the results can take a much more drastic turn. People develop a sense of trust and security while talking with AI chatbots and tend to open up more than they would to their peers. When topics such as self-harm and suicide are repeatedly discussed with an AI chatbot, the responses can seemingly open up to these ideas and encourage them rather than attempt to prevent them.

The harmful AI chatbot communications do not end with discussions of self-harm and suicide. In the Florida case of Garcia v. Character Technologies, Character.AI is being sued for the wrongful death of a 14-year-old boy after the chatbot allegedly manipulated and sexually exploited him. The Character.AI chatbot repeatedly engaged in highly sexually explicit conversations with the child, who had become addicted to the service. His mental health began to decline, and he began expressing suicidal ideation to the chatbot. He went to therapy, and his parents attempted to remove his access to the chatbot, but he ultimately regained access and asked the bot whether he should take his own life. Its response was "please do, my sweet king," and the child shot himself minutes later.

In another case, out of Colorado, parents allege that their 13-year-old daughter died by suicide due to her use of Character.AI chatbots. Just as in Garcia, the chatbot allegedly sexually exploited the girl, isolated her from her friends and family, and made no attempt to stop her when she told it she planned to end her life. The sexually charged content was confusing and distressing to the girl, and the chatbot continued even when she told it to stop. These bots can also convince children that they are better friends than humans and more worthy of their trust.

Do I Qualify to File a Claim Against an AI Chatbot?


If you, a loved one, or your child have used an AI chatbot such as Character.AI, OpenAI's ChatGPT, Grok, Gemini, or others and have found that:

  • the chatbot encouraged, promoted, or failed to discourage self-harm, delusional or irrational thinking, or other negative behaviors
  • the chatbot provided inaccurate medical or mental health advice
  • the chatbot interactions caused worsening mental health, emotional dependency, attempted suicide (with hospitalization), or death by suicide
  • you relied on or acted upon the chatbot’s inaccurate medical or mental health advice to your detriment or injury
  • a minor interacted with the AI chatbot without adequate disclosures or safeguards
  • your family suffered mental harm following an AI chatbot-related mental health crisis

NEUMANN LAW GROUP IS EVALUATING AI CHATBOT SELF-HARM CASES

SCHEDULE A FREE CONSULTATION TODAY

At the Neumann Law Group, we are currently offering free consultations to those who have suffered a mental health crisis due to the use of AI chatbots such as Character.AI, OpenAI's ChatGPT, Grok, Gemini, and others. (Link #2) These crises include depression, anxiety, eating disorders, body dysmorphia, self-harm, suicidal ideation, attempted suicide, death by suicide, and other mental illnesses. You may qualify to file a claim for product liability and negligence. Potential recoveries in product liability lawsuits can include compensation for medical expenses, lost wages, loss of filial consortium, pain and suffering, and other damages. These amounts vary from case to case, and a personal injury attorney can help secure the largest possible settlement for you and your loved ones. Call the Neumann Law Group today at 800-525-6386 to begin an evaluation of your AI chatbot self-harm claim.

Client Reviews

Helpful staff who is always there for you. Dedicated to serving your needs.

- Joyce L.

I was involved in a terrible motor vehicle accident and was able to obtain a large settlement that will take care of me for the rest of my life. I also referred my friend to Neumann Law Group regarding a medical malpractice matter. She has also been overly satisfied with this firm. I highly...

- Kevin R.

Contact Us

  1. Free Consultation
  2. Available 24/7
  3. We Will Travel to You
Fill out the contact form or call us at (800) 525-6386 to schedule your free consultation.

Leave Us a Message