

Psychologists Are Calling for Guardrails Around AI Use for Young People. Here’s What to Watch Out For

The American Psychological Association suggests parents help teens understand how AI works and how to use it wisely.

Generative AI developers should take steps to ensure the use of their tools doesn’t harm young people who use them, the American Psychological Association warned in a health advisory Tuesday.

The report, compiled by an advisory panel of psychology experts, called for tech companies to ensure there are boundaries with simulated relationships, to create age-appropriate privacy settings and to encourage healthy uses of AI, among other recommendations. 

The APA has issued similar advisories about technology in the past. Last year, the group recommended that parents limit teens’ exposure to videos produced by social media influencers and gen AI. In 2023, it warned of the harms that could come from social media use among young people.

“Like social media, AI is neither inherently good nor bad,” APA Chief of Psychology Mitch Prinstein said in a statement. “But we have already seen instances where adolescents developed unhealthy and even dangerous ‘relationships’ with chatbots, for example. Some adolescents may not even know they are interacting with AI, which is why it is crucial that developers put guardrails in place now.”

The meteoric surge of artificial intelligence tools like OpenAI’s ChatGPT and Google’s Gemini over the last few years has presented new and serious challenges for mental health, especially among younger users. People increasingly talk to chatbots like they would talk to a friend, sharing secrets and relying on them for companionship. While that use can have some positive effects on mental health, it can also be detrimental, experts say, reinforcing harmful behaviors or offering the wrong advice. (Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

What the APA recommended about AI use

The group called for several measures to ensure adolescents can use AI safely, including limiting access to harmful and false content and protecting the data privacy and likenesses of young users.

One key difference between adult users and younger people is that adults are more likely to question the accuracy and intent of an AI output. A younger person (the report defined adolescents as those between the ages of 10 and 25) might not be able to approach the interaction with the appropriate level of skepticism.

Relationships with AI entities like chatbots or the role-playing tool Character.ai might also displace the important real-world, human social relationships people learn to have as they develop. “Early research indicates that strong attachments to AI-generated characters may contribute to struggles with learning social skills and developing emotional connections,” the report said.

People in their teens and early 20s are developing habits and social skills that will carry into adulthood, and changes to how they socialize can have lifelong effects, said Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth who was not on the panel that produced the report. “Those stages of development can be a template for what happens later,” he said.

The APA report called for developers to create systems that prevent the erosion of human relationships, like reminders that the bot is not a human, alongside regulatory changes to protect the interests of youths. 

Other recommendations included distinguishing tools intended for adults from those used by children, such as making age-appropriate settings the default and designing systems to be less persuasive. Systems should also have human oversight and undergo intensive testing to ensure they are safe.

Schools and policymakers should prioritize education around AI literacy and how to use the tools responsibly, the APA said. That should include discussions of how to evaluate AI outputs for bias and inaccurate information. “This education must equip young people with the knowledge and skills to understand what AI is, how it works, its potential benefits and limitations, privacy concerns around personal data, and the risks of overreliance,” the report said.

Identifying safe and unsafe AI use

The report shows psychologists grappling with the uncertainties of how a new and fast-growing technology will affect the mental health of those most vulnerable to potential developmental harms, Jacobson said. 

“The nuances of how [AI] affects social development are really broad,” he told me. “This is a new technology that is probably potentially as big in terms of its impact on human development as the internet.”

AI tools can be helpful for mental health and they can be harmful, Jacobson said. He and other researchers at Dartmouth recently released a study of an AI chatbot that showed promise in providing therapy, but it was specifically designed to follow therapeutic practices and was closely monitored. More general AI tools, he said, can provide incorrect information or encourage harmful behaviors. He pointed to recent issues with sycophancy in a ChatGPT model, which OpenAI eventually rolled back.

“Sometimes these tools connect in ways that can feel very validating, but sometimes they can act in ways that can be very harmful,” he said. 

Jacobson said it’s important for scientists to continue to research the psychological impacts of AI use and to educate the public on what they learn. 

“The pace of the field is moving so fast, and we need some room for science to catch up,” he said.

The APA offered suggestions for what parents can do to ensure teens are using AI safely, including explaining how AI works, encouraging human-to-human interactions, stressing the potential inaccuracy of health information and reviewing privacy settings. 


Today’s NYT Mini Crossword Answers for Saturday, June 7

Here are the answers for The New York Times Mini Crossword for June 7.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Today’s NYT Mini Crossword could be tricky. 1-Down and 5-Down stumped me for a while, but the crossing letters filled them in for me. Need some help with today’s Mini Crossword? Read on. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

The Mini Crossword is just one of many games in the Times’ games collection. If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Yoga class need
Answer: MAT

4A clue: Umlaut, rotated 90°
Answer: COLON

6A clue: “That is shocking!”
Answer: OHMYGOD

8A clue: “___ You the One?” (reality TV show)
Answer: ARE

9A clue: Egg cells
Answer: OVA

10A clue: One of two “royal” sleeping options
Answer: KINGBED

12A clue: Bar seating
Answer: STOOL

13A clue: Favorite team of the “Chicago Pope,” for short
Answer: SOX

Mini down clues and answers

1D clue: Slices of life
Answer: MOMENTS

2D clue: Olympic gymnast Raisman
Answer: ALY

3D clue: Request at the end of a restaurant meal
Answer: TOGOBOX

4D clue: Hayes of MSNBC
Answer: CHRIS

5D clue: Medium for Melville or McCarthy
Answer: NOVEL

6D clue: Wood used for wine barrels
Answer: OAK

7D clue: June honoree
Answer: DAD

11D clue: Sticky stuff
Answer: GOO

How to play more Mini Crosswords

The New York Times Games section offers a large number of online games, but only some of them are free for all to play. You can play the current day’s Mini Crossword for free, but you’ll need a subscription to the Times Games section to play older puzzles from the archives.



Despite War of Words, Trump May Funnel Billions to Musk’s Starlink With BEAD Changes



Square Enix’s Next Game Blends Among Us-Like Murder Mystery With Bloody Carnage

Unveiled at Summer Game Fest, Killer Inn is an upcoming multiplayer murder mystery pitting players against each other in the search for the true killers.

Bet you didn’t have this one on your bingo card. Developed by Tactic Studios in partnership with Square Enix, the game was unveiled during the Summer Game Fest livestream, and it’s far from the famed RPG maker’s bread and butter. Killer Inn, as it’s called, is a multiplayer murder mystery that takes Among Us-like gameplay and ratchets it up by handing players knives, guns and many other weapons to kill or be killed while they search for the true killers.

Killer Inn might be one of those games that is best understood after playing a few matches, but even from the reveal trailer, there’s a lot going on. In each match, 24 players enter a sprawling castle-turned-hotel to determine who the real killers are as they’re picked off one by one. There’s deduction and mayhem aplenty.

Killer Inn’s play phases are patterned after detective-style games, from Among Us to Ultimate Werewolf to Mafia. A match begins with most players as cooperative participants (“lambs,” in Killer Inn’s parlance) mixed with a few secret killers (“wolves”). Players complete tasks to earn tokens redeemable for items and weapons, while the killers quietly go about their business — until someone discovers a body. On the corpse are clues left by the killer, so the lambs can try deducing the true culprit (or culprits).

Then it’s all about collecting clues and identifying the wolves — but unlike Among Us, there’s no group discussion to present evidence or vote them out. Killer Inn skips the parlor scene and dives straight into action: If you’re sure someone’s the killer, take them out. Use those token-bought guns and blades to put down the villain. Unless you accidentally murder one of your innocent teammates — in which case, you’re turned to stone for the rest of the match. Bummer.

Lambs have another win condition: assembling four keys to escape on the ship that brought them to the murder island. There are other mechanics, too, like finding relative safety in rooms with hotel staff, who will identify any wolves that kill lambs in their line of sight.

Players can choose from 25 premade characters, each with their own unique appearance and abilities, the latter of which improve as the match goes on, often reflecting the participants’ darker sides. For example, Winston is a surgeon who kills more efficiently with knives and, when leveled up, deals extra damage while covered in blood. The Otaku, by contrast, gains 25 HP from finding clues and eventually builds resistance to status effects. Levels don’t carry over between matches — everyone starts fresh at level one.

Killer Inn doesn’t have a release date, but the game will kick off a closed beta test over Steam in the near future. 


