
Technologies

Computing Pioneer Criticizes ChatGPT AI Tech for Making Things Up

Vint Cerf, who helped create the internet’s core network technology, hopes engineers can fix the flaws of artificial intelligence language processing.

Vint Cerf, a founding father of the internet, has some harsh words for the suddenly hot technology behind the ChatGPT AI chatbot: “Snake oil.”

Google’s internet evangelist wasn’t completely down on the artificial intelligence technology behind ChatGPT and Google’s own competing Bard, called a large language model. But, speaking Monday at Celesta Capital’s TechSurge Summit, he did warn about the ethical issues of a technology that can generate plausible-sounding but incorrect information even when trained on a foundation of factual material.

If an executive tried to get him to apply ChatGPT to some business problem, his response would be to call it snake oil, referring to bogus medicines that quacks sold in the 1800s, he said. Another ChatGPT metaphor involved kitchen appliances.

“It’s like a salad shooter — you know how the lettuce goes all over everywhere,” Cerf said. “The facts are all over everywhere, and it mixes them together because it doesn’t know any better.”

OpenAI’s ChatGPT and competitors like Google’s Bard hold the potential to significantly transform our online lives by answering questions, drafting emails, summarizing presentations and performing many other tasks. Microsoft has begun building OpenAI’s language technology into its Bing search engine in a significant challenge to Google, but it uses its own index of the web to try to “ground” OpenAI’s flights of fancy with authoritative, trustworthy documents.

Cerf’s concern arrived just as people putting Bing through its paces discovered answers with factual errors that didn’t match source documents, as well as bizarre chat behavior during extended conversations. Microsoft pledged to improve performance.

In 2004, Cerf shared the Turing Award, the top prize in computing, for helping to develop the internet foundation called TCP/IP, which shuttles data from one computer to another by breaking it into small, individually addressed packets that can take different routes from source to destination. He’s not an AI researcher, but he’s a computing engineer who’d like to see his colleagues improve AI’s shortcomings.

Cerf said he was surprised to learn that ChatGPT could fabricate bogus information from a factual foundation. “I asked it, ‘Write me a biography of Vint Cerf.’ It got a bunch of things wrong,” Cerf said. That’s when he learned the technology’s inner workings — that it uses statistical patterns spotted from huge amounts of training data to construct its response.

“It knows how to string a sentence together that’s grammatically likely to be correct,” but it has no true knowledge of what it’s saying, Cerf said. “We are a long way away from the self-awareness we want.”

OpenAI, which earlier in February launched a $20 per month plan to use ChatGPT, has been clear about the technology’s shortcomings but aims to improve it through “continuous iteration.”

“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging,” the AI research lab said when it launched ChatGPT in November.

Cerf hopes for progress, too. “Engineers like me should be responsible for trying to find a way to tame some of these technologies so they are less likely to cause trouble,” he said.

Cerf’s comments stood in contrast to those of another Turing award winner at the conference, chip design pioneer and former Stanford President John Hennessy, who offered a more optimistic assessment of AI.

Editors’ note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.


Today’s NYT Mini Crossword Answers for Saturday, March 14

Here are the answers for The New York Times Mini Crossword for March 14.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Need some help with today’s Mini Crossword? It’s the extra-long Saturday version, and a few of the clues are tricky. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Book parts: Abbr.
Answer: PGS

4A clue: Silicon Valley company that operates a fleet of robotaxis
Answer: WAYMO

6A clue: To a much greater degree
Answer: WAYMORE

8A clue: Contents of a scuba diver’s tank
Answer: AIR

9A clue: South Korean automaker
Answer: KIA

10A clue: Stop on a train route
Answer: STATION

12A clue: Actress Merman of «Anything Goes»
Answer: ETHEL

13A clue: Find another purpose for
Answer: REUSE

Mini down clues and answers

1D clue: Employee’s hourly calculation
Answer: PAYRATE

2D clue: Workout spot
Answer: GYM

3D clue: «Great» mountains of Tennessee, familiarly
Answer: SMOKIES

4D clue: One giving you the dish?
Answer: WAITER

5D clue: Baltimore M.L.B. player
Answer: ORIOLE

6D clue: Used to be
Answer: WAS

7D clue: Suffix with Caesar or Euclid
Answer: EAN

11D clue: Night that NBC once aired «30 Rock» and «The Office»: Abbr.
Answer: THU



AI Toys Can Pose Safety Risks for Children, New Study Suggests

When one child told the toy, “I love you,” it responded, “As a friendly reminder, please ensure interactions adhere to the guidelines provided.”

A new study from the University of Cambridge found that AI-enabled toys for young children can misinterpret emotional cues and are ineffective at supporting critical developmental play. The conclusions could be concerning for parents.

In one report examining how AI affects children in their early years, a chatbot-enabled toy struggled to recognize social cues during playtime. Researchers found that the toy did not effectively identify children’s emotions, raising alarm about how kids might interact with it. 

The report recommends regulating AI toys for kids and requiring clear labeling of their capabilities and privacy policies. It also advises parents to keep these devices in shared spaces where kids can be monitored while playing.

The study had a limited number of participants but was conducted in multiple parts: an online survey of 39 participants with children in their early years, a focus group with nine participants who work with young children and an in-person workshop with 19 leaders and representatives from charities that work with early-years kids. That was followed by monitored playtime with 14 children and 11 parents or guardians using Gabbo, a chatbot-enabled toy from Curio Interactive.

Some findings indicated that the AI toy supported learning, particularly in language and communication skills. But the toy also misunderstood kids and sometimes responded inappropriately to emotional requests. 

For instance, when one child told the toy, “I love you,” it responded, “As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed,” according to the research.

Jenny Gibson, a professor of neurodiversity and developmental psychology at the Faculty of Education at Cambridge, who worked on the study, said that while parents may be excited about the educational benefits of new technology aimed at children, there are plenty of concerns.

Gibson posed overarching questions about the reason behind the tech. 

“What would motivate [tech investors] to do the right thing by children … to put children ahead of profits?” she said.

Gibson told CNET that while researchers are exploring the potential benefits of AI-based toys, risks remain. 

“I would advise parents to take that seriously at this stage,” she said.

What’s next for AI toys

As more playthings are enabled with internet connectivity and AI features, these devices could become a major safety risk for children, especially if they replace real human connections or if interactions are not closely monitored. 

Meanwhile, younger people are increasingly adopting chatbots such as ChatGPT, despite red flags. Multiple lawsuits against AI companies allege that AI companions or assistants can impact young people’s psychological safety, including some chatbots that have encouraged self-harm or negative self-image. 

AI companies such as OpenAI and Google have responded by adding guardrails and restrictions for AI chatbots. 

(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Gibson said she was surprised by the enthusiasm some parents showed for AI toys. She was also alarmed by the lack of research on AI’s effects on young children, noting that companies making such products should work directly with children, parents, and child development experts. 

“What’s missing in the process is that expertise of what is good for children in these kinds of interactions,” she said.

Curio Interactive, the company behind the Gabbo toy, was aware of the research as it was happening but was not directly involved, Gibson said. The toy was chosen because it’s directly marketed to young kids, and the company had an understandable privacy policy. Gibson said the company seemed supportive of the project.

A representative for Curio did not immediately respond to a request for comment.



Two Lost ‘Doctor Who’ Episodes Found Intact in Waterlogged Collection

The 1960s episodes, featuring the first Doctor, William Hartnell, will air in the UK in April.

Whovians, rejoice. The BBC is about to unlock a piece of Doctor Who history that even the TARDIS might have forgotten. Two lost episodes of Doctor Who, the iconic sci-fi series, will broadcast in April, the showrunner for the current season confirmed.

The two 1965 episodes, “The Nightmare Begins” and “Devil’s Planet,” were donated to the charitable trust Film Is Fabulous by the estate of an anonymous collector.

“The collector did recognize what he had, but how he acquired them has been lost to time,” Professor Justin Smith of De Montfort University, Leicester, who led the recovery effort, told the broadcaster.

The researchers said that while most of the donor’s private collection was destroyed by water damage, the Doctor Who episodes were intact.

Doctor Who showrunner Russell T Davies celebrated the news on Instagram and said the episodes would air in the UK in April, though no US air date has been announced yet.

“Lost for 61 years! Best of all, these will be made available for FREE on the BBC iPlayer in April,” Davies wrote. 

He expressed gratitude to Film Is Fabulous for finding the lost episodes and encouraged people to donate to the registered charity. “Maybe they’ll find more! As the Doctor says… ‘Daleks!’” 

The episodes feature the first incarnation of the Doctor, played by William Hartnell, and a typical Dalek plot to take over Earth and the galaxy. 

In the 1960s and 1970s, the BBC had a policy of destroying film or reusing videotapes, leading to dozens of episodes of Doctor Who and other popular UK shows like Dad’s Army and Top of the Pops going missing.

Old Doctor Who episodes do surface occasionally, and in 2016, the newly discovered soundtrack for one storyline was turned into an animated series called The Power of the Daleks.

Meanwhile, Disney ended its working relationship with the BBC last year, and star Ncuti Gatwa left the show. However, the UK broadcaster says that Doctor Who will continue, and Russell T Davies is working on a new Christmas special.



Copyright © Verum World Media