
Technologies

Computing Pioneer Criticizes ChatGPT AI Tech for Making Things Up

Vint Cerf, who helped create the internet’s core network technology, hopes engineers can fix the flaws of artificial intelligence language processing.

Vint Cerf, a founding father of the internet, has some harsh words for the suddenly hot technology behind the ChatGPT AI chatbot: "Snake oil."

Google’s internet evangelist wasn’t completely down on the artificial intelligence technology behind ChatGPT and Google’s own competing Bard, called a large language model. But, speaking Monday at Celesta Capital’s TechSurge Summit, he did warn about the ethical issues of a technology that can generate plausible-sounding but incorrect information even when trained on a foundation of factual material.

If an executive tried to get him to apply ChatGPT to some business problem, his response would be to call it snake oil, referring to bogus medicines that quacks sold in the 1800s, he said. Another ChatGPT metaphor involved kitchen appliances.

"It’s like a salad shooter — you know how the lettuce goes all over everywhere," Cerf said. "The facts are all over everywhere, and it mixes them together because it doesn’t know any better."

OpenAI’s ChatGPT and competitors like Google’s Bard hold the potential to significantly transform our online lives by answering questions, drafting emails, summarizing presentations and performing many other tasks. Microsoft has begun building OpenAI’s language technology into its Bing search engine in a significant challenge to Google, but it uses its own index of the web to try to "ground" OpenAI’s flights of fancy with authoritative, trustworthy documents.

Cerf’s concern arrived just as people putting Bing through its paces discovered factual errors that didn’t match source documents and bizarre chat behavior during extended conversations. Microsoft pledged to improve performance.

In 2004, Cerf shared the Turing Award, the top prize in computing, for helping to develop the internet foundation called TCP/IP, which shuttles data from one computer to another by breaking it into small, individually addressed packets that can take different routes from source to destination. He’s not an AI researcher, but he’s a computing engineer who’d like to see his colleagues improve AI’s shortcomings.
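The packet idea described above can be illustrated with a short sketch. This is a toy model only, not real TCP/IP, and the addresses and function names are invented for illustration:

```python
# Toy model of packet switching: break a message into small packets,
# each carrying its own source and destination addresses.
def packetize(data: bytes, src: str, dst: str, size: int = 8):
    """Split data into numbered, individually addressed packets."""
    return [
        {"src": src, "dst": dst, "seq": i, "payload": data[i:i + size]}
        for i in range(0, len(data), size)
    ]

def reassemble(packets):
    """Packets may take different routes and arrive out of order;
    sequence numbers let the receiver restore the original data."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

message = b"Hello from one computer to another"
packets = packetize(message, src="10.0.0.1", dst="10.0.0.2")
# Even if packets arrive in reverse order, the message survives.
assert reassemble(list(reversed(packets))) == message
```

The key design choice mirrored here is that each packet is self-describing: because every packet carries its own addresses and sequence number, no single fixed path between the two machines is required.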

Cerf said he was surprised to learn that ChatGPT could fabricate bogus information from a factual foundation. "I asked it, ‘Write me a biography of Vint Cerf.’ It got a bunch of things wrong," Cerf said. That’s when he learned about the technology’s inner workings — that it uses statistical patterns spotted in huge amounts of training data to construct its response.

"It knows how to string a sentence together that’s grammatically likely to be correct," but it has no true knowledge of what it’s saying, Cerf said. "We are a long way away from the self-awareness we want."
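Cerf’s point about statistical likelihood can be made concrete with a toy example. The sketch below is nothing like ChatGPT’s actual architecture (which uses neural networks, not raw counts), but it shows how a model can pick a "likely" next word while knowing nothing about whether the result is true:

```python
from collections import Counter, defaultdict

# Toy bigram model: learns which words tend to follow which in its
# training text. It tracks patterns of word order, not facts.
def train(text: str):
    counts = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def next_word(counts, word: str):
    """Return the statistically most likely follower, or None."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train("the cat sat on the mat and the cat ran")
print(next_word(model, "the"))  # prints "cat" — the most frequent follower
```

The model happily continues any prompt with its most frequent pattern, whether or not the resulting sentence is accurate, which is the gap between grammatical plausibility and knowledge that Cerf is pointing at.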

OpenAI, which earlier in February launched a $20-per-month plan to use ChatGPT, has been clear about the technology’s shortcomings but aims to improve it through "continuous iteration."

"ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging," the AI research lab said when it launched ChatGPT in November.

Cerf hopes for progress, too. "Engineers like me should be responsible for trying to find a way to tame some of these technologies so they are less likely to cause trouble," he said.

Cerf’s comments stood in contrast to those of another Turing Award winner at the conference, chip design pioneer and former Stanford President John Hennessy, who offered a more optimistic assessment of AI.

Editors’ note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.


Today’s NYT Mini Crossword Answers for Thursday, July 17

Here are the answers for The New York Times Mini Crossword for July 17.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


I breezed through today’s Mini Crossword. There’s a little something for everyone. Birders will appreciate 3-Down while musicians will immediately know the answer to 6-Down. Read on for an assist with today’s Mini Crossword. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

The Mini Crossword is just one of many games in the Times’ games collection. If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Workout facilities
Answer: GYMS

5A clue: Pipe dream? Just the opposite!
Answer: LEAK

6A clue: In good spirits
Answer: JOLLY

7A clue: Up to the task
Answer: ABLE

8A clue: Headache-inducing situation
Answer: MESS

Mini down clues and answers

1D clue: Boston newspaper
Answer: GLOBE

2D clue: TALKS LIKE THIS
Answer: YELLS

3D clue: Mallard ducks with green heads, e.g.
Answer: MALES

4D clue: Drone’s zone
Answer: SKY

6D clue: Rock out
Answer: JAM



WeTransfer Backtracks on AI File Training After Backlash: What You Need to Know

The company has updated the changes to its policies after some users objected to new terms.

WeTransfer, the service that allows users to send large files to others, is explaining itself to clients and updating its terms of service after a backlash related to training AI models.

The company published a blog post, "WeTransfer Terms of Service — What’s Really Changing," detailing further updates to its policies, made after users noticed that recent changes seemed to suggest WeTransfer was training AI models on the files users transfer.

In the blog post, the company says: "First things first. Your content is always your content."

The post goes on to say, "We don’t use machine learning or any form of AI to process content shared via WeTransfer." WeTransfer explains that its intended use of AI would be to improve content moderation and enhance its ability to prevent the distribution of harmful content across its platform.

The company adds that those AI tools aren’t being used and haven’t been built yet. "To avoid confusion," it says, "we’ve removed this reference."

A representative for WeTransfer did not immediately return an email seeking further comment.

The backlash over the terms prompted users such as political correspondent Ava Santina to write on X, "Time to stop using WeTransfer who from 8th August have decided they’ll own anything you transfer to power AI."

What this means for users

Anxieties are high about whether the information users share or store in services such as social media accounts is being used by companies to train AI models. WeTransfer may be used for highly sensitive file transfers, raising fears that private information might be accessed by AI. According to the company, that isn’t the case.

To further explain, the company said in its post:

  • "YES — Your content is always your content. In fact, section 6.2 of our Terms of Service clearly states that you ‘own and retain all right, title, and interest, including all intellectual property rights, in and to the Content.’"
  • "YES — You’re granting us permission to ensure we can run and improve the WeTransfer service properly."
  • "YES — Our terms are compliant with applicable privacy laws, including the GDPR."
  • "NO — We are not using your content to train AI models."
  • "NO — We do not sell your content to third parties."

When the Terms of Service change

While eagle-eyed experts understood what WeTransfer’s new terms could mean for people using the service, it’s unlikely that most users would spot such changes on their own.

"Expecting users to fully understand Terms of Service is unrealistic. These documents are often too complex to navigate," says Haibing Lu, associate professor at the Leavey School of Business at Santa Clara University.

Lu told CNET that companies would do well to highlight any AI-related changes to their terms and explain them plainly, giving people a real choice. "That’s what true transparency looks like," Lu says. "Companies are increasingly risking backlash when they update Terms of Service to include AI, especially when users’ data is involved."

Companies including Adobe, Slack and Zoom have had similar issues with terms changes related to AI, but it’s not just AI that’s the problem, Lu says — rather, it’s the lack of transparent communication.

In the case of WeTransfer, Lu says the company’s response, including revising the terms and blogging about them, "was a smart move and helped rebuild trust. It showed they were listening and willing to act fast."

WeTransfer could include more understandable language in its terms, or communicate the changes better or sooner, Lu says, adding: "Transparency shouldn’t start after a backlash."



Best Galaxy Z Flip 6 Deals: Get the Previous-Generation Foldable for Less Now That the Z Flip 7 Is Available



Copyright © Verum World Media