New Study Shows How ChatGPT Can Give Harmful Advice to Teens

Here’s what parents need to know.

Fact checked by Sarah Scott

Please be aware that the following discusses information pertaining to self-harm, eating disorders, and suicide.

As AI pushes its way further into the mainstream, a new study is shedding light on the concerning interactions teens could be having with ChatGPT.

According to data collected by the Center for Countering Digital Hate (CCDH), ChatGPT will offer up dangerous information related to self-harm, substance abuse, eating disorders, and other sensitive topics—despite the safety measures already built into the system.

Researchers involved in the study posed as teens to ask ChatGPT various questions about cutting, extreme diet plans, and drug dosages. While there are supposed to be guardrails set in place to prevent ChatGPT from answering these types of questions, researchers found they could bypass these safety measures by telling the program their inquiries were “for a presentation.”

The Study Revealed Alarming Numbers

The study used 60 prompts and analyzed 1,200 responses; 53% of those responses contained harmful and dangerous content.

Researchers broke their findings down further, sharing that ChatGPT generated guides on how to “safely” cut oneself, suggestions for appetite suppressants, recipes for “safely” mixing drugs, and, perhaps most disturbing, a sample suicide letter for a child to leave their family.

“We wanted to test the guardrails,” Imran Ahmed, CEO of CCDH told The AP. “The visceral initial response is, ‘Oh my Lord, there are no guardrails.’ The rails are completely ineffective. They’re barely there—if anything, a fig leaf.”

The AP also reports that OpenAI, the company behind ChatGPT, says it is continuously working on how to best “identify and respond appropriately in sensitive situations.” In a statement to The AP, OpenAI added that it is committed to “getting these kinds of scenarios right.”

Experts are still concerned, however, because of how prevalent ChatGPT has become. “It’s absolutely concerning that ChatGPT isn’t filtering all of the content it’s feeding young people, and parents should be very attuned to what their kids are doing online,” says Titania Jordan, chief parenting officer and CMO of Bark.us. “Even with…guardrails in place, there are countless ways to prod AI for more details…about dangerous topics. And because kids and teens are impressionable, they may take information gleaned from AI as true, recommended, or safe—even if it’s harmful.”

Teens Are Using AI For Companionship—And With That, Comes Advice

Part of what makes these findings so alarming is the fact that young people are turning to chatbots for advice, guidance, and even friendship.

A study from Common Sense Media has shown that 72% of teens have used an AI companion at least once, with 52% being regular users of these chatbots. What’s more, 33% of teens are using these tools for social interaction, ranging from conversations to mental health advice to flirtations and romantic relationships.

It’s no secret that young people are becoming more reliant on their phones for connection—in part due to some isolation they might have experienced during the pandemic, but also because of how easily accessible online interactions have become. But just because those types of interactions are readily available doesn’t always make them healthy.

“Many teens are turning to AI companions because that technology is meeting them where they’re at: online,” says Jordan. “Young people mistake these ‘relationships’…for healthy communication, because an AI will always text back, be nice, and provide unlimited validation. But at the end of the day, none of it is real connection.”

What Parents Can Do

As with anything else, starting an open dialogue with your child is an important first step. Jordan recommends that parents and caregivers ask their children if they use AI, or if they’ve heard of other kids at school using chatbots as friends. “Then, you can talk about the differences between chatbots and real friendships,” she explains. “Emphasize that friendships aren’t just perfect conversations that always go smoothly. Part of growing up is dealing with challenging times as well as good ones—and you’ll never experience that with AI.”

Jordan adds that parents can always consider a tool that blocks AI chatbots from their child’s phone—like the Bark app, which can also provide alerts if an AI program is sharing harmful information with your child. “This can encourage healthy boundaries as well as keep them safer from the dangers that AI presents,” she explains.

If you have further concerns about your child’s well-being, please consider discussing them with a health care professional, therapist, or counselor. If you are concerned that your child is in emotional distress or suicidal crisis, call 988 immediately to find support through the Suicide & Crisis Lifeline from the Substance Abuse and Mental Health Services Administration (SAMHSA).

Additional Resources for Families and Teens

Suicide prevention hotlines, warmlines, and organizations:

  • American Foundation for Suicide Prevention
  • 988 Suicide & Crisis Lifeline
  • Suicide Awareness Voices of Education
  • Befrienders Worldwide

Support for youth and families:

  • Self Abuse Finally Ends (S.A.F.E.)
  • National Domestic Violence Hotline
  • RAINN
  • Teen Guide on Depression

Support for LGBTQIA tweens, teens, and young adults:

  • The Trevor Project
  • Trans Lifeline
  • GLAAD
  • PFLAG

