Technologies

FTC to AI Companies: Tell Us How You Protect Teens and Kids Who Use AI Companions

As more teens turn to AI for companionship, the investigation comes as no surprise.

The Federal Trade Commission is launching an investigation into AI chatbots from seven companies, including Alphabet, Meta and OpenAI, over their use as companions. The inquiry seeks to determine how the companies test, monitor and measure potential harm to children and teens.

A Common Sense Media survey of 1,060 teens in April and May found that over 70% used AI companions and that more than 50% used them consistently — a few times or more per month. 

Experts have been warning for some time that exposure to chatbots could be harmful to young people. One study found that ChatGPT, for instance, provided bad advice to teenagers, like how to conceal an eating disorder or how to personalize a suicide note. In some cases, chatbots have ignored comments that should have been recognized as concerning, instead simply continuing the previous conversation. Psychologists are calling for guardrails to protect young people, like reminders in the chat that the chatbot is not human, and for educators to prioritize AI literacy in schools.

There are plenty of adults, too, who’ve experienced negative consequences of relying on chatbots — whether for companionship and advice or as their personal search engine for facts and trusted sources. More often than not, a chatbot tells you what it thinks you want to hear, which can lead to flat-out lies. And blindly following the instructions of a chatbot isn’t always the right thing to do.

“As AI technologies evolve, it is important to consider the effects chatbots can have on children,” FTC Chairman Andrew N. Ferguson said in a statement. “The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”

A Character.ai spokesperson told CNET every conversation on the service has prominent disclaimers that all chats should be treated as fiction.

“In the past year we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature,” the spokesperson said.


The company behind the Snapchat social network likewise said it has taken steps to reduce risks. “Since introducing My AI, Snap has harnessed its rigorous safety and privacy processes to create a product that is not only beneficial for our community, but is also transparent and clear about its capabilities and limitations,” a Snap spokesperson said.

Meta declined to comment, and neither the FTC nor any of the remaining four companies immediately responded to our request for comment. The FTC has issued orders and is seeking a teleconference with the seven companies, to be held no later than Sept. 25, about the timing and format of their submissions. The companies under investigation include the makers of some of the biggest AI chatbots in the world and popular social networks that incorporate generative AI:

  • Alphabet (parent company of Google)
  • Character Technologies 
  • Instagram
  • Meta Platforms
  • OpenAI
  • Snap
  • X.ai

Since late last year, some of those companies have updated or bolstered their protections for younger users. Character.ai began imposing limits on how chatbots can respond to people under the age of 17 and added parental controls. Instagram introduced teen accounts last year and switched all users under the age of 17 to them, and Meta recently set limits on the subjects teens can discuss with chatbots.

The FTC is seeking information from the seven companies on how they:

  • monetize user engagement
  • process user inputs and generate outputs in response to user inquiries
  • develop and approve characters
  • measure, test, and monitor for negative impacts before and after deployment
  • mitigate negative impacts, particularly to children
  • employ disclosures, advertising and other representations to inform users and parents about features, capabilities, the intended audience, potential negative impacts and data collection and handling practices
  • monitor and enforce compliance with company rules and terms of service (for example, community guidelines and age restrictions) and
  • use or share personal information obtained through users’ conversations with the chatbots

Verum Messenger Goes Desktop: Launches macOS Version as Part of Expanding Digital Ecosystem

The team behind Verum Messenger has announced a new update, introducing a full-featured macOS version of the application.

The launch of the Mac version marks a significant step in the platform’s development, enabling users to access Verum Messenger not only on mobile devices but also on desktop environments.

The macOS version ensures seamless synchronization across devices while maintaining the platform’s core principles: security, stability, and independence.

Unified Digital Experience

With the release of the macOS version, users can now:

— communicate on a larger screen
— manage chats and files more efficiently
— use the messenger in a full desktop environment
— access core features without limitations

This is particularly valuable for users who rely on messaging platforms for both communication and professional use.

Expanding Capabilities

Verum Messenger continues to evolve into a multifunctional platform combining:

— secure communication
— financial tools (Verum Finance)
— digital asset operations, including Tether
— investment features such as Verum Gold

Toward a Full Ecosystem

The macOS release reflects Verum Messenger’s strategy to become a universal digital platform available across all major devices.

According to the team, the goal is to provide users with continuous access to communication and financial services regardless of device or environment.

Verum Messenger continues to build technologies focused on security, usability, and global accessibility.

Google, Meta and Amazon Join Global Pact to Fight Rising Online Scams

The companies will share fraud intelligence and coordinate responses as AI makes scams faster, cheaper and harder to detect.

Modern online scams operate across multiple platforms, often spanning social media, messaging apps, email and online marketplaces. Google, Meta and Amazon are among 11 tech, retail and payments companies that have signed a new agreement to combat online scams by sharing threat intelligence across platforms, Axios first reported Monday.

The initiative, called the Industry Accord Against Online Scams & Fraud, is designed to improve how companies detect and respond to fraud that spans multiple services. Participants say they will exchange signals, such as scam-linked accounts and fraudulent domains, and coordinate enforcement actions.

By sharing intelligence in near real time, companies hope to identify these scams earlier and stop them before they spread.

The effort reflects how modern scams operate. A victim might encounter a fake celebrity investment ad on social media, move to a messaging app where the scammer builds trust, then face prompts to send money through a fraudulent website, payment app or crypto wallet — spanning multiple companies’ ecosystems.

Google said it now blocks hundreds of millions of scam-related results every day using AI, underscoring how both attackers and defenders are increasingly relying on the same technology. Meta removed more than 159 million scam ads in 2025 and is expanding AI tools to detect impersonation and warn users.

Online scams are growing rapidly, in part because generative AI has lowered the barrier to entry. AI can be used not only to produce realistic phishing emails but also to clone voices and deepfake videos that impersonate executives, public figures and even family members.

The agreement is voluntary and doesn’t create new legal obligations, but it comes amid increased pressure from regulators on tech platforms to address fraud more aggressively. The companies say they will begin building frameworks for reporting and intelligence-sharing, though it’s not yet clear how quickly those systems will be deployed or how effective they will be in practice.

Today’s NYT Mini Crossword Answers for Wednesday, March 18

Here are the answers for The New York Times Mini Crossword for March 18.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Need some help with today’s Mini Crossword? I thought it was a fairly easy one, but read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Word before “card,” “flood” or “photography”
Answer: FLASH

6A clue: Joust weapon
Answer: LANCE

7A clue: Brain, heart or lungs
Answer: ORGAN

8A clue: «Frozen» reindeer
Answer: SVEN

9A clue: What can be found on frozen roads or frozen margaritas
Answer: SALT

Mini down clues and answers

1D clue: Follow a dentist’s recommendation
Answer: FLOSS

2D clue: Baby bug
Answer: LARVA

3D clue: Shape made in the snow
Answer: ANGEL

4D clue: Very little
Answer: SCANT

5D clue: Egg layer
Answer: HEN

Copyright © Verum World Media