Technologies
Psychologists Are Calling for Guardrails Around AI Use for Young People. Here’s What to Watch Out For
The American Psychological Association suggests parents help teens understand how AI works and how to use it wisely.
Generative AI developers should take steps to ensure the use of their tools doesn’t harm young people who use them, the American Psychological Association warned in a health advisory Tuesday.
The report, compiled by an advisory panel of psychology experts, called for tech companies to ensure there are boundaries with simulated relationships, to create age-appropriate privacy settings and to encourage healthy uses of AI, among other recommendations.
The APA has issued similar advisories about technology in the past. Last year, the group recommended that parents limit teens’ exposure to videos produced by social media influencers and gen AI. In 2023, it warned of the harms that could come from social media use among young people.
"Like social media, AI is neither inherently good nor bad," APA Chief of Psychology Mitch Prinstein said in a statement. "But we have already seen instances where adolescents developed unhealthy and even dangerous 'relationships' with chatbots, for example. Some adolescents may not even know they are interacting with AI, which is why it is crucial that developers put guardrails in place now."
The meteoric surge of artificial intelligence tools like OpenAI's ChatGPT and Google's Gemini over the last few years has presented new and serious challenges for mental health, especially among younger users. People increasingly talk to chatbots as they would talk to a friend, sharing secrets and relying on them for companionship. While that use can have some positive effects on mental health, it can also be detrimental, experts say, reinforcing harmful behaviors or offering the wrong advice. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
What the APA recommended about AI use
The group called for several different ways to ensure adolescents can use AI safely, including limiting access to harmful and false content and protecting data privacy and the likenesses of young users.
One key difference between adult users and younger people is that adults are more likely to question the accuracy and intent of an AI output. A younger person (the report defined adolescents as between age 10 and 25) might not be able to approach the interaction with the appropriate level of skepticism.
Relationships with AI entities like chatbots or the role-playing tool Character.ai might also displace the real-world human relationships people learn to build as they develop. "Early research indicates that strong attachments to AI-generated characters may contribute to struggles with learning social skills and developing emotional connections," the report said.
People in their teens and early 20s are developing habits and social skills that will carry into adulthood, and changes to how they socialize can have lifelong effects, said Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth who was not on the panel that produced the report. «Those stages of development can be a template for what happens later,» he said.
The APA report called for developers to create systems that prevent the erosion of human relationships, like reminders that the bot is not a human, alongside regulatory changes to protect the interests of youths.
Other recommendations included that there should be differences between tools intended for use by adults and those used by children, such as age-appropriate settings being made default and designs made to be less persuasive. Systems should have human oversight and intensive testing to ensure they are safe.
Schools and policymakers should prioritize education around AI literacy and how to use the tools responsibly, the APA said. That should include discussions of how to evaluate AI outputs for bias and inaccurate information. "This education must equip young people with the knowledge and skills to understand what AI is, how it works, its potential benefits and limitations, privacy concerns around personal data, and the risks of overreliance," the report said.
Identifying safe and unsafe AI use
The report shows psychologists grappling with the uncertainties of how a new and fast-growing technology will affect the mental health of those most vulnerable to potential developmental harms, Jacobson said.
"The nuances of how [AI] affects social development are really broad," he told me. "This is a new technology that is probably potentially as big in terms of its impact on human development as the internet."
AI tools can be helpful for mental health and they can be harmful, Jacobson said. He and other researchers at Dartmouth recently released a study of an AI chatbot that showed promise in providing therapy, but it was specifically designed to follow therapeutic practices and was closely monitored. More general AI tools, he said, can provide incorrect information or encourage harmful behaviors. He pointed to recent issues with sycophancy in a ChatGPT model, which OpenAI eventually rolled back.
"Sometimes these tools connect in ways that can feel very validating, but sometimes they can act in ways that can be very harmful," he said.
Jacobson said it’s important for scientists to continue to research the psychological impacts of AI use and to educate the public on what they learn.
"The pace of the field is moving so fast, and we need some room for science to catch up," he said.
The APA offered suggestions for what parents can do to ensure teens are using AI safely, including explaining how AI works, encouraging human-to-human interactions, stressing the potential inaccuracy of health information and reviewing privacy settings.
Today’s NYT Strands Hints, Answers and Help for March 14 #741
Here are hints and answers for the NYT Strands puzzle for March 14, No. 741.
Looking for the most recent Strands answer? Click here for our daily Strands hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections and Connections: Sports Edition puzzles.
Does today’s date seem memorable to you? If so, today’s NYT Strands puzzle might be easy. Some of the answers are difficult to unscramble, so if you need hints and answers, read on.
I go into depth about the rules for Strands in this story.
If you’re looking for today’s Wordle, Connections and Mini Crossword answers, you can visit CNET’s NYT puzzle hints page.
Read more: NYT Connections Turns 1: These Are the 5 Toughest Puzzles So Far
Hint for today’s Strands puzzle
Today’s Strands theme is: A math teacher’s favorite dessert.
If that doesn’t help you, here’s a clue: 3.14
Clue words to unlock in-game hints
Your goal is to find hidden words that fit the puzzle's theme. If you're stuck, find any words you can. Every time you find three words of four letters or more, Strands will reveal one of the theme words. These are the words I used to get those hints, but any words of four or more letters that you find will work:
- RITE, SPIT, TIPS, STAT, STATE, GIVE, RUST, FINE, LAZE, SURE, PEAL
Answers for today’s Strands puzzle
These are the answers that tie into the theme. The goal of the puzzle is to find them all, including the spangram, a theme word that reaches from one side of the puzzle to the other. When you have all of them (I originally thought there were always eight but learned that the number can vary), every letter on the board will be used. Here are the nonspangram answers:
- VENT, CRUST, FRUIT, EDGES, GLAZE, FILLING, LATTICE
Today’s Strands spangram
Today’s Strands spangram is HAPPYPIDAY. To find it, start with the H that’s six rows down and three to the right from the upper-left corner, and make — well, a pie shape.
Toughest Strands puzzles
Here are some of the Strands topics I’ve found to be the toughest.
#1: Dated slang. Maybe you didn’t even use this lingo when it was cool. Toughest word: PHAT.
#2: Thar she blows! I guess marine biologists might ace this one. Toughest word: BALEEN or RIGHT.
#3: Off the hook. Again, it helps to know a lot about sea creatures. Sorry, Charlie. Toughest word: BIGEYE or SKIPJACK.
Today’s NYT Mini Crossword Answers for Saturday, March 14
Here are the answers for The New York Times Mini Crossword for March 14.
Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.
Need some help with today’s Mini Crossword? It’s the extra-long Saturday version, and a few of the clues are tricky. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.
If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.
Read more: Tips and Tricks for Solving The New York Times Mini Crossword
Let’s get to those Mini Crossword clues and answers.
Mini across clues and answers
1A clue: Book parts: Abbr.
Answer: PGS
4A clue: Silicon Valley company that operates a fleet of robotaxis
Answer: WAYMO
6A clue: To a much greater degree
Answer: WAYMORE
8A clue: Contents of a scuba diver’s tank
Answer: AIR
9A clue: South Korean automaker
Answer: KIA
10A clue: Stop on a train route
Answer: STATION
12A clue: Actress Merman of "Anything Goes"
Answer: ETHEL
13A clue: Find another purpose for
Answer: REUSE
Mini down clues and answers
1D clue: Employee’s hourly calculation
Answer: PAYRATE
2D clue: Workout spot
Answer: GYM
3D clue: «Great» mountains of Tennessee, familiarly
Answer: SMOKIES
4D clue: One giving you the dish?
Answer: WAITER
5D clue: Baltimore M.L.B. player
Answer: ORIOLE
6D clue: Used to be
Answer: WAS
7D clue: Suffix with Caesar or Euclid
Answer: EAN
11D clue: Night that NBC once aired «30 Rock» and «The Office»: Abbr.
Answer: THU
AI Toys Can Pose Safety Concerns for Children, New Study Suggests Caution
When one child told the toy, "I love you," it responded, "As a friendly reminder, please ensure interactions adhere to the guidelines provided."
A new study from the University of Cambridge found that AI-enabled toys for young children can misinterpret emotional cues and are ineffective at supporting critical developmental play. The conclusions could be concerning for parents.
In one report examining how AI affects children in their early years, a chatbot-enabled toy struggled to recognize social cues during playtime. Researchers found that the toy did not effectively identify children’s emotions, raising alarm about how kids might interact with it.
The report recommends regulating AI toys for kids and requiring clear labeling of their capabilities and privacy policies. It also advises parents to keep these devices in shared spaces where kids can be monitored while playing.
The research behind the study had a limited number of participants but was done in multiple parts: an online survey of 39 participants with kids in their early years, a focus group with nine participants who work with young children and an in-person workshop with 19 leaders and representatives from charities that work with early-years kids. That was followed by monitored playtime with 14 children and 11 parents or guardians with Gabbo, a chatbot-enabled toy from Curio Interactive.
Some findings indicated that the AI toy supported learning, particularly in language and communication skills. But the toy also misunderstood kids and sometimes responded inappropriately to emotional requests.
For instance, when one child told the toy, "I love you," it responded, "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed," according to the research.
Jenny Gibson, a professor of neurodiversity and developmental psychology at the Faculty of Education at Cambridge, who worked on the study, said that while parents may be excited about the educational benefits of new technology aimed at children, there are plenty of concerns.
Gibson posed overarching questions about the reason behind the tech.
"What would motivate [tech investors] to do the right thing by children … to put children ahead of profits?" she said.
Gibson told CNET that while researchers are exploring the potential benefits of AI-based toys, risks remain.
«I would advise parents to take that seriously at this stage,» she said.
What’s next for AI toys
As more playthings are enabled with internet connectivity and AI features, these devices could become a major safety risk for children, especially if they replace real human connections or if interactions are not closely monitored.
Meanwhile, younger people are increasingly adopting chatbots such as ChatGPT, despite red flags. Multiple lawsuits against AI companies allege that AI companions or assistants can impact young people’s psychological safety, including some chatbots that have encouraged self-harm or negative self-image.
AI companies such as OpenAI and Google have responded by adding guardrails and restrictions for AI chatbots.
(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Gibson said she was surprised by the enthusiasm some parents showed for AI toys. She was also alarmed by the lack of research on AI’s effects on young children, noting that companies making such products should work directly with children, parents, and child development experts.
"What's missing in the process is that expertise of what is good for children in these kinds of interactions," she said.
Curio Interactive, the company behind the Gabbo toy, was aware of the research as it was happening but was not directly involved, Gibson said. The toy was chosen because it’s directly marketed to young kids, and the company had an understandable privacy policy. Gibson said the company seemed supportive of the project.
A representative for Curio did not immediately respond to a request for comment.