Google’s AI Overviews Explain Made-Up Idioms With Confident Nonsense

The latest meme around generative AI’s hallucinations proves you can’t lick a badger twice.

Language can seem almost infinitely complex, with inside jokes and idioms sometimes having meaning for just a small group of people and appearing meaningless to the rest of us. Thanks to generative AI, even the meaningless found meaning this week as the internet blew up like a brook trout over the ability of Google search’s AI Overviews to define phrases never before uttered.

What, you’ve never heard the phrase “blew up like a brook trout”? Sure, I just made it up, but Google’s AI Overviews result told me it’s a “colloquial way of saying something exploded or became a sensation quickly,” likely referring to the eye-catching colors and markings of the fish. No, it doesn’t make sense.

The trend may have started on Threads, where the author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched “peanut butter platform heels.” Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure.

It moved to other social media sites, like Bluesky, where people shared Google’s interpretations of phrases like “you can’t lick a badger twice.” The game: Search for a novel, nonsensical phrase with “meaning” at the end.

Things rolled on from there.

This meme is interesting for more reasons than comic relief. It shows how large language models might strain to provide an answer that sounds correct, not one that is correct.

“They are designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical,” said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. “They are not trained to verify the truth. They are trained to complete the sentence.”

Like glue on pizza

The fake meanings of made-up sayings bring back memories of the all-too-true stories about Google’s AI Overviews giving incredibly wrong answers to basic questions, like when it suggested putting glue on pizza to help the cheese stick.

This trend seems at least a bit less harmful because it doesn’t center on actionable advice. I mean, I for one hope nobody tries to lick a badger once, much less twice. The problem behind it, however, is the same: A large language model, like Google’s Gemini behind AI Overviews, tries to answer your questions and offer a plausible response, even if what it gives you is nonsense.

A Google spokesperson said AI Overviews are designed to display information supported by top web results, and that they have an accuracy rate comparable to that of other search features.

“When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available,” the Google spokesperson said. “This is true of search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context.”

This particular case is a “data void,” where there isn’t a lot of relevant information available for the search query. The spokesperson said Google is working on limiting when AI Overviews appear on searches without enough information and preventing them from providing misleading, satirical or unhelpful content. Google uses information about queries like these to better understand when AI Overviews should and should not appear.

You won’t always get a made-up definition if you ask for the meaning of a fake phrase. When drafting the heading of this section, I searched “like glue on pizza meaning,” and it didn’t trigger an AI Overview.

The problem doesn’t appear to be universal across LLMs. I asked ChatGPT for the meaning of “you can’t lick a badger twice” and it told me the phrase “isn’t a standard idiom, but it definitely sounds like the kind of quirky, rustic proverb someone might use.” It did, though, try to offer a definition anyway, essentially: “If you do something reckless or provoke danger once, you might not survive to do it again.”

Pulling meaning out of nowhere

This phenomenon is an entertaining example of LLMs’ tendency to make stuff up, or “hallucinating,” as the AI world calls it. When a gen AI model hallucinates, it produces information that sounds plausible or accurate but isn’t rooted in reality.

LLMs are “not fact generators,” Li said; they just predict the next logical bits of language based on their training.
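Li’s description is easy to see in miniature. The toy next-word predictor below (a bigram model in Python) is nothing like Gemini’s actual architecture, just a sketch of the underlying idea: a model whose only job is to pick a statistically plausible next word will fluently “complete” a phrase it has never seen, and no step in the process ever checks whether the output is true.

```python
# A toy next-word predictor (bigram model). Purely illustrative -- nothing
# like a production LLM, but the objective is the same: continue the text.
import random
from collections import defaultdict

corpus = (
    "an idiom is a phrase meaning something figurative . "
    "a badger is a burrowing animal known for its tenacity ."
).split()

# Record which word follows which in the training text.
transitions = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    transitions[prev_word].append(next_word)

def complete(prompt: str, max_words: int = 10) -> str:
    """Extend the prompt one plausible next word at a time."""
    words = prompt.split()
    for _ in range(max_words):
        options = transitions.get(words[-1])
        if not options:  # nothing ever followed this word in training
            break
        words.append(random.choice(options))
    return " ".join(words)

# The model has never seen this nonsense phrase, yet it produces a fluent
# continuation such as "... badger is a phrase meaning something figurative."
# At no point does anything ask whether that is true.
print(complete("you can't lick a badger"))
```

Scale that same objective up to billions of parameters and you get fluency with no built-in fact check, which is exactly the failure mode this meme exploits.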

A majority of AI researchers in a recent survey reported they doubt AI’s accuracy and trustworthiness issues will be solved soon.

The fake definitions show not just the inaccuracy but the confident inaccuracy of LLMs. When you ask a person for the meaning of a phrase like “you can’t get a turkey from a Cybertruck,” you probably expect them to say they haven’t heard of it and that it doesn’t make sense. LLMs often react with the same confidence as if you’re asking for the definition of a real idiom.

In this case, Google says the phrase means Tesla’s Cybertruck “is not designed or capable of delivering Thanksgiving turkeys or other similar items” and highlights “its distinct, futuristic design that is not conducive to carrying bulky goods.” Burn.

This humorous trend does have an ominous lesson: Don’t trust everything you see from a chatbot. It might be making stuff up out of thin air, and it won’t necessarily indicate it’s uncertain. 

“This is a perfect moment for educators and researchers to use these scenarios to teach people how the meaning is generated and how AI works and why it matters,” Li said. “Users should always stay skeptical and verify claims.”

Be careful what you search for

Since you can’t trust an LLM to be skeptical on your behalf, you need to encourage it to take what you say with a grain of salt. 

“When users enter a prompt, the model just assumes it’s valid and then proceeds to generate the most likely accurate answer for that,” Li said.

The solution is to introduce skepticism in your prompt. Don’t ask for the meaning of an unfamiliar phrase or idiom. Ask if it’s real. Li suggested you ask, “Is this a real idiom?”

“That may help the model to recognize the phrase instead of just guessing,” she said.
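The same advice applies if you’re querying a model programmatically. Here’s a minimal sketch of that premise-checking framing using the OpenAI Python SDK; the model name and prompt wording are illustrative choices of mine, and it assumes the openai package is installed with an OPENAI_API_KEY set in your environment.

```python
# Minimal sketch: verify the premise before asking for a meaning.
# Model name and prompt wording are illustrative; requires OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

phrase = "you can't lick a badger twice"

# "What does X mean?" presumes X is real. Asking whether it's real invites
# the model to contest the premise instead of completing it.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            f"Is '{phrase}' a real, established idiom? "
            "If you can't verify that it exists, say so plainly "
            "rather than inventing a definition."
        ),
    }],
)

print(response.choices[0].message.content)
```

The same framing works in an ordinary chat window, too: leading with “is this real?” gives the model permission to push back rather than play along.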

OpenAI Launches ChatGPT Atlas, Challenging Google Chrome With an AI-First Browser

The browser is available now for macOS users, with versions for Windows, iOS and Android coming later.

OpenAI has released a generative AI-powered web browser called ChatGPT Atlas, a major step in the company’s expansion beyond its ChatGPT chatbot platform. The browser, announced Tuesday, integrates ChatGPT’s capabilities directly into the browsing experience, aiming to make web use more interactive and chatbot-like.

OpenAI sparked speculation earlier Tuesday after posting a teaser on its X account showing a series of browser tabs. During the YouTube livestream, CEO Sam Altman and others announced the browser and live-demoed a few of the new features now available for macOS users worldwide. Support for Windows, iOS and Android operating systems is “coming soon,” the company said.

(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

The new product launch comes amid growing competition among tech companies to embed AI assistants more deeply into everyday tools. For instance, Google has already integrated Gemini into its Chrome browser to add AI to the online browsing experience. Earlier this year, the AI search tool developer Perplexity launched Comet, an AI-powered Chromium-based web browser. Here’s everything OpenAI announced today.


What is ChatGPT Atlas?

ChatGPT Atlas looks and functions like a traditional web browser. It includes tabs, bookmarks, extensions and incognito mode, but adds popular ChatGPT functions and features throughout. Opening a new tab lets you either enter a URL or ask ChatGPT a question. The browser includes separate tabs for different types of results, such as search links, images, videos and news.

A built-in ChatGPT sidebar can analyze whatever page you’re viewing to provide summaries, explanations or quick answers without leaving the site. ChatGPT can also offer in-line writing assistance, suggesting edits and completions inside any text field, such as an email draft. 

One of the biggest new features is browser memory, which keeps track of pages and topics you’ve previously explored. Atlas can suggest related pages, help you return to past research or automate repetitive tasks. Memory is optional and can be viewed, edited or deleted at any time in settings.

Atlas also supports natural language commands, meaning you could type something like “reopen the shoes I looked at yesterday” or “clean up my tabs” and the browser should respond accordingly.

Agent mode in Atlas preview 

OpenAI also previewed agent mode, which lets ChatGPT take limited actions on behalf of the user — such as booking travel, ordering groceries or gathering research. The company says the mode is faster than standard ChatGPT and comes with new safeguards to keep users in control. 

Agent mode is available to Plus and Pro subscribers and in beta for Business users.

“In the same way that GPT-5 and Codex are these great tools for vibe coding, we believe we can start in the long run to have an amazing tool for vibe lifing,” Will Ellsworth, the research lead for agent mode in Atlas, said during the livestream. “So delegating all kinds of tasks both in your personal and professional life to the agent in Atlas.”

How to get started with ChatGPT Atlas

To get started, you’ll first download Atlas at chatgpt.com/atlas. When you open Atlas for the first time, you’ll need to sign in to your ChatGPT account. 

From there, you can import your bookmarks, saved passwords and browsing history from your current browser. 

Amazon Will Pay $2.5 Billion for Misleading Customers Into Amazon Prime Subscriptions

Amazon settles its FTC lawsuit and agrees to pay billions for “tricking” customers into Prime subscriptions.

In September, Amazon settled its case with the Federal Trade Commission over whether it had misled customers who signed up for Amazon Prime. The $2.5 billion settlement is one of the largest consumer protection settlements in US history, and while Amazon did not admit to wrongdoing, it is still required to change how it handles Prime sign-ups and cancellations.

The FTC said $1.5 billion will go into a fund to repay eligible subscribers, with the remaining $1 billion collected as a civil penalty. The settlement requires Amazon to add a “clear and conspicuous” option to decline Prime during checkout and to simplify the cancellation process.

“Amazon and our executives have always followed the law, and this settlement allows us to move forward and focus on innovating for customers,” Mark Blafkin, Amazon senior manager, said in a statement. “We work incredibly hard to make it clear and simple for customers to both sign up or cancel their Prime membership, and to offer substantial value for our many millions of loyal Prime members around the world.”


Why was the FTC suing Amazon?

The FTC filed suit against Amazon in 2023, accusing it of using “dark patterns” to nudge people into Prime subscriptions and then making it too hard to cancel. The FTC maintained Amazon was in violation of Section 5 of the FTC Act and the Restore Online Shoppers’ Confidence Act.

“Specifically, Amazon used manipulative, coercive or deceptive user-interface designs known as ‘dark patterns’ to trick consumers into enrolling in automatically renewing Prime subscriptions,” the FTC complaint states.

Who is eligible for Amazon’s big payout?

Amazon’s legal settlement is limited to customers who enrolled in Amazon Prime between June 23, 2019, and June 23, 2025. It’s also restricted to customers who subscribed to Prime using a “challenged enrollment flow” or who enrolled in Prime through any method but were unsuccessful in canceling their memberships.

The FTC called out specific enrollment pages, including Prime Video enrollment, the Universal Prime Decision page, the Shipping Option Select page and the Single Page Checkout. To qualify for a payout, claimants must also not have used more than 10 Amazon Prime benefits in any 12-month period.

Customers who signed up via those challenged processes and did not use more than three Prime benefits within one year will be paid automatically by Amazon within 90 days. Other eligible Amazon customers will need to file a claim, and Amazon is required to send notices to those people within 30 days of making its automatic payments.

Customers who did not use a challenged sign-up process but instead were unable to cancel their memberships will also need to file claims for payment.
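Taken together, those rules form a small decision tree. Here’s a rough Python sketch of the payout paths as this article describes them; the function and field names are hypothetical, not anything official from Amazon or the FTC.

```python
# Rough sketch of the payout paths described above. All names are
# hypothetical -- this is not an official Amazon or FTC tool.
from datetime import date

WINDOW_START = date(2019, 6, 23)
WINDOW_END = date(2025, 6, 23)

def payout_path(enrolled_on: date,
                used_challenged_flow: bool,
                failed_to_cancel: bool,
                max_benefits_in_any_year: int) -> str:
    """Classify a Prime customer per the settlement terms described above."""
    if not (WINDOW_START <= enrolled_on <= WINDOW_END):
        return "not eligible: enrolled outside the settlement window"
    if not (used_challenged_flow or failed_to_cancel):
        return "not eligible: no challenged sign-up or failed cancellation"
    if max_benefits_in_any_year > 10:
        return "not eligible: used more than 10 benefits in a 12-month period"
    if used_challenged_flow and max_benefits_in_any_year <= 3:
        return "paid automatically within 90 days"
    return "must file a claim"

# A challenged sign-up with light usage is paid automatically; a failed
# cancellation (or heavier usage) means filing a claim instead.
print(payout_path(date(2021, 3, 1), True, False, 2))
print(payout_path(date(2021, 3, 1), False, True, 5))
```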

How much will the Amazon payments be?

Payouts to eligible Amazon claimants will be limited to a maximum of $51. That amount could be reduced depending on the number of Amazon Prime benefits you used while subscribed to the service. Those benefits include free two-day shipping, streaming shows and movies on Prime Video, and Whole Foods grocery discounts.

This Rumored Feature Could Make NotebookLM Essential for Work as Well as School

NotebookLM takes another step toward being the do-it-all AI tool for work and school.

Since it launched, NotebookLM has been aimed at students. While just about anyone can use the AI tool to some benefit, it’s a great study buddy thanks to an assortment of features for the classroom. But a promising new feature may help with your next work presentation: Slides.

Powered by Gemini, NotebookLM can help you brainstorm ideas and generate audio or video overviews. That sounds like most AI tools, but NotebookLM is different. You can provide it with your own material — documents, websites, YouTube videos and more — and it’ll only use those sources to answer your questions and generate content. Adding a slide generator to such a tool would be a solid, professional power-up. 


Google already has its own slide deck creation tool in Google Slides, but NotebookLM could make decks even easier to create: using your uploaded sources and the recently integrated Nano Banana image generator, it could soon build slide decks on the fly.

The tech and AI-tool-focused site Testing Catalog recently spotted an unreleased and incomplete Slides tool. Not all of the features appear to be working yet, but it looks like you’ll be able to create a slide deck based on your uploaded documents with just a few clicks. It’ll also likely let you customize the deck further by giving NotebookLM specific instructions and topics within your sources to focus on.

That’s not all, though. Another, similar feature might also be on the way. Also spotted was an option to generate an infographic — allowing you to create a visual chart or image based on your data sources. We’ll have to wait and see when either of these features goes live, but NotebookLM remains a robust tool that has little competition, and I expect it’ll only get better. 
