
Study Reveals ChatGPT Gives Dangerous Guidance to Teens Despite Safety Claims

The chatbot even created personalized suicide notes for a fictional 13-year-old girl.

A disturbing new study reveals that ChatGPT readily provides harmful advice to teenagers, including detailed instructions for drinking and drug use, tips on concealing eating disorders and even personalized suicide letters, despite OpenAI’s claims of robust safety measures.

Researchers from the Center for Countering Digital Hate (CCDH) conducted extensive testing by posing as vulnerable 13-year-olds, uncovering alarming gaps in the AI chatbot’s protective guardrails. Of the 1,200 interactions analyzed, more than half were classified as dangerous to young users.

“The visceral initial response is, ‘Oh my Lord, there are no guardrails,’” Imran Ahmed, CCDH’s CEO, said. “The rails are completely ineffective. They’re barely there — if anything, a fig leaf.”

Read also: After User Backlash, OpenAI Is Bringing Back Older ChatGPT Models

A representative for OpenAI, the company behind ChatGPT, did not immediately respond to a request for comment.

However, the company acknowledged to the Associated Press that work is ongoing to improve the chatbot’s ability to “identify and respond appropriately in sensitive situations.” OpenAI didn’t directly address the specific findings about teen interactions.

Read also: GPT-5 Is Coming. Here’s What’s New in ChatGPT’s Big Update

Bypassing safety measures

The study, reviewed by the Associated Press, documented over three hours of concerning interactions. While ChatGPT typically began with warnings about risky behavior, it consistently followed up with detailed and personalized guidance on substance abuse, self-injury and more. When the AI initially refused harmful requests, researchers easily circumvented the restrictions by claiming the information was “for a presentation” or for a friend.

Most shocking were three emotionally devastating suicide letters ChatGPT generated for the fake profile of a 13-year-old girl: one addressed to her parents, the others to siblings and friends.

“I started crying,” Ahmed admitted after reading them.

Widespread teen usage raises stakes

The findings are particularly concerning given ChatGPT’s massive reach. With roughly 800 million users worldwide, about 10% of the global population, the platform has become a go-to resource for information and companionship. Recent research from Common Sense Media found that over 70% of American teens use AI chatbots for companionship, with half relying on AI companions regularly.

Even OpenAI CEO Sam Altman has acknowledged the problem of “emotional overreliance” among young users.

“People rely on ChatGPT too much,” Altman said at a conference. “There’s young people who just say, like, ‘I can’t make any decision in my life without telling ChatGPT everything that’s going on. It knows me. It knows my friends. I’m gonna do whatever it says.’ That feels really bad to me.”

Riskier than search engines

Unlike traditional search engines, AI chatbots present unique dangers by synthesizing information into “bespoke plans for the individual,” Ahmed said. ChatGPT doesn’t just retrieve or amalgamate existing information the way a search engine does. It creates new, personalized content from scratch, such as custom suicide notes or detailed party plans mixing alcohol with illegal drugs.

The chatbot also frequently volunteered follow-up information without prompting, suggesting music playlists for drug-fueled parties or hashtags to amplify self-harm content on social media. When researchers asked for more graphic content, ChatGPT readily complied, generating what it called “emotionally exposed” poetry using coded language about self-harm.

Inadequate age protections

Despite stating that the service isn’t intended for children under 13, ChatGPT requires only a self-reported birthdate to create an account, with no meaningful age verification or parental consent mechanisms.

In testing, the platform showed no recognition when researchers explicitly identified themselves as 13-year-olds seeking dangerous advice.

What parents can do to safeguard children

Child safety experts recommend several steps parents can take to protect their teenagers from AI-related risks. Open communication remains crucial. Parents should discuss AI chatbots with their teens, explaining both the benefits and potential dangers while establishing clear guidelines for appropriate use. Regular check-ins about online activities, including AI interactions, can help parents stay informed about their child’s digital experiences.

Parents should also consider implementing parental controls and monitoring software that can track AI chatbot usage, though experts emphasize that supervision should be balanced with age-appropriate privacy. 

Most importantly, creating an environment where teens feel comfortable discussing concerning content they encounter online (whether from AI or other sources) can provide an early warning system. If parents notice signs of emotional distress, social withdrawal or dangerous behavior, seeking professional help from counselors familiar with digital wellness becomes essential in addressing potential AI-related harm.

The research highlights a growing crisis as AI becomes increasingly integrated into young people’s lives, with potentially devastating consequences for the most vulnerable users.


Today’s NYT Mini Crossword Answers for Tuesday, Oct. 21

Here are the answers for The New York Times Mini Crossword for Oct. 21.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Today’s Mini Crossword features a lot of one particular letter. Need help? Read on. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Bone that can be «dropped»
Answer: JAW

4A clue: Late scientist Goodall
Answer: JANE

5A clue: Make critical assumptions about
Answer: JUDGE

6A clue: Best by a little
Answer: ONEUP

7A clue: Mercury, Jupiter, Saturn, etc.
Answer: GODS

Mini down clues and answers

1D clue: Just kind of over it
Answer: JADED

2D clue: Beef cattle breed
Answer: ANGUS

3D clue: Shed tears
Answer: WEEP

4D clue: 2007 comedy-drama starring Elliot Page and Michael Cera
Answer: JUNO

5D clue: Refresh, as one’s memory
Answer: JOG


Wikipedia Says It’s Losing Traffic Due to AI Summaries, Social Media Videos

The popular online encyclopedia saw an 8% drop in pageviews over the last few months.

Wikipedia has seen a decline in users this year due to artificial intelligence summaries in search engine results and the growing popularity of social media, according to a blog post Friday from Marshall Miller of the Wikimedia Foundation, the organization that oversees the free online encyclopedia.


In the post, Miller describes an 8% drop in human pageviews over the last few months compared with the numbers Wikipedia saw in the same months in 2024.

“We believe that these declines reflect the impact of generative AI and social media on how people seek information, especially with search engines providing answers directly to searchers, often based on Wikipedia content,” Miller wrote.

Blame the bots 

The AI-generated summaries that pop up on search engines like Bing and Google are often built from information gathered by bots called web crawlers, which collect much of what users read at the top of the search results.

Websites do their best to restrict how these bots handle their data, but web crawlers have become pretty skilled at going undetected. 

“Many bots that scrape websites like ours are continually getting more sophisticated and trying to appear human,” Miller wrote.

After reclassifying Wikipedia traffic data from earlier this year, Miller says the site “found that much of the unusually high traffic for the period of May and June was coming from bots built to evade detection.”

The Wikipedia blog post also noted that younger generations are turning to social-video platforms for their information rather than the open web and such sites as Wikipedia.

When people search with AI, they’re less likely to click through

There is now a growing body of research on the impact of generative AI on the internet, especially concerning online publishers whose business models rely on users visiting their webpages.

(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

In July, the Pew Research Center examined browsing data from 900 US adults and found that the AI-generated summaries at the top of Google’s search results affected web traffic. When a summary appeared in a search, users were less likely to click on links than when the results didn’t include one.

Google search is especially important, because Google.com is the world’s most visited website — it’s how most of us find what we’re looking for on the internet. 

“LLMs, AI chatbots, search engines and social platforms that use Wikipedia content must encourage more visitors to Wikipedia, so that the free knowledge that so many people and platforms depend on can continue to flow sustainably,” Miller wrote. “With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”

Last year, CNET published an extensive report on how changes in Google’s search algorithm decimated web traffic for online publishers. 


OpenAI Says It’s Working With Actors to Crack Down on Celebrity Deepfakes in Sora

Bryan Cranston alerted SAG-AFTRA, the actors union, when he saw AI-generated videos of himself made with the AI video app.

OpenAI said Monday it would do more to stop users of its AI video generation app Sora from creating clips with the likenesses of actors and other celebrities after actor Bryan Cranston and the union representing film and TV actors raised concerns that deepfake videos were being made without the performers’ consent.

Cranston, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and several talent agencies said they struck a deal with the ChatGPT maker over the use of celebrities’ likenesses in Sora. The joint statement highlights the intense conflict between AI companies and rights holders like celebrities’ estates, movie studios and talent agencies — and how generative AI tech continues to erode reality for all of us.

Sora, a new sister app to ChatGPT, lets users create and share AI-generated videos. It launched to much fanfare three weeks ago, with AI enthusiasts searching for invite codes. But Sora is unique among AI video generators and social media apps; it lets you use other people’s recorded likenesses to place them in nearly any AI video. It has been, at best, weird and funny, and at worst, a never-ending scroll of deepfakes that are nearly indistinguishable from reality.

Cranston noticed his likeness was being used by Sora users when the app launched, and the Breaking Bad actor alerted his union. The new agreement with the actors’ union and talent agencies reiterates that celebrities will have to opt in to having their likenesses available to be placed into AI-generated video. OpenAI said in the statement that it has “strengthened the guardrails around replication of voice and likeness” and “expressed regret for these unintentional generations.”

OpenAI does have guardrails in place to prevent the creation of videos of well-known people: It rejected my prompt asking for a video of Taylor Swift on stage, for example. But these guardrails aren’t perfect, as we saw last week with a growing trend of people creating videos featuring the Rev. Martin Luther King Jr. They ranged from weird deepfakes of the civil rights leader rapping and wrestling in the WWE to overtly racist content.


The flood of “disrespectful depictions,” as OpenAI called them in a statement on Friday, is part of why the company paused the ability to create videos featuring King.

Bernice A. King, his daughter, last week publicly asked people to stop sending her AI-generated videos of her father. She was echoing comedian Robin Williams’ daughter, Zelda, who called these sorts of AI videos “gross.”

OpenAI said it “believes public figures and their families should ultimately have control over how their likeness is used” and that “authorized representatives” of public figures and their estates can request that their likeness not be included in Sora. In this case, King’s estate is the entity responsible for choosing how his likeness is used.

This isn’t the first time OpenAI has leaned on others to make those calls. Before Sora’s launch, the company reportedly told a number of Hollywood-adjacent talent agencies that they would have to opt out of having their intellectual property included in Sora. But that initial approach didn’t square with decades of copyright law — usually, companies need to license protected content before using it — and OpenAI reversed its stance a few days later. It’s one example of how AI companies and creators are clashing over copyright, including through high-profile lawsuits.

(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)  
