Google’s AI Overviews Explain Made-Up Idioms With Confident Nonsense

The latest meme around generative AI’s hallucinations proves you can’t lick a badger twice.

Language can seem almost infinitely complex, with inside jokes and idioms sometimes having meaning for just a small group of people and appearing meaningless to the rest of us. Thanks to generative AI, even the meaningless found meaning this week as the internet blew up like a brook trout over the ability of Google search’s AI Overviews to define phrases never before uttered.

What, you’ve never heard the phrase “blew up like a brook trout”? Sure, I just made it up, but Google’s AI Overviews result told me it’s a “colloquial way of saying something exploded or became a sensation quickly,” likely referring to the eye-catching colors and markings of the fish. No, it doesn’t make sense.

The trend may have started on Threads, where the author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched “peanut butter platform heels.” Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure.

It moved to other social media sites, like Bluesky, where people shared Google’s interpretations of phrases like “you can’t lick a badger twice.” The game: Search for a novel, nonsensical phrase with “meaning” at the end.

Things rolled on from there.

This meme is interesting for more reasons than comic relief. It shows how large language models might strain to provide an answer that sounds correct, not one that is correct.

“They are designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical,” said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. “They are not trained to verify the truth. They are trained to complete the sentence.”
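
Li’s description maps directly onto how these models generate text. As a minimal sketch of that next-token behavior, the snippet below prompts GPT-2 through Hugging Face’s transformers library; the model choice and prompt are my own illustrative assumptions, since Gemini’s weights aren’t public.

```python
# A minimal sketch (not Google's system) of the next-token prediction
# behavior Li describes, using GPT-2 via Hugging Face's transformers.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A made-up idiom framed as if it were real.
prompt = 'The saying "you can\'t lick a badger twice" means'
output = generator(prompt, max_new_tokens=40, do_sample=True)

# The model completes the sentence fluently because generation only
# scores likely next tokens; no step checks whether the idiom exists.
print(output[0]["generated_text"])
```

Run it a few times and you’ll get a different confident “definition” on each pass, which is exactly the behavior the meme exploits.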

Like glue on pizza

The fake meanings of made-up sayings bring back memories of the all-too-true stories about Google’s AI Overviews giving incredibly wrong answers to basic questions, like when it suggested putting glue on pizza to help the cheese stick.

This trend seems at least a bit more harmless because it doesn’t center on actionable advice. I mean, I for one hope nobody tries to lick a badger once, much less twice. The underlying problem, however, is the same: A large language model, like Google’s Gemini behind AI Overviews, tries to answer your question and offer a plausible response, even if what it gives you is nonsense.

A Google spokesperson said AI Overviews are designed to display information supported by top web results, and that they have an accuracy rate comparable to other search features. 

“When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available,” the Google spokesperson said. “This is true of search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context.”

This particular case is a “data void,” where there isn’t a lot of relevant information available for the search query. The spokesperson said Google is working on limiting when AI Overviews appear on searches without enough information and preventing them from providing misleading, satirical or unhelpful content. Google uses information about queries like these to better understand when AI Overviews should and should not appear.

You won’t always get a made-up definition if you ask for the meaning of a fake phrase. When drafting the heading of this section, I searched “like glue on pizza meaning,” and it didn’t trigger an AI Overview.

The problem doesn’t appear to be universal across LLMs. I asked ChatGPT for the meaning of “you can’t lick a badger twice” and it told me the phrase “isn’t a standard idiom, but it definitely sounds like the kind of quirky, rustic proverb someone might use.” It did, though, try to offer a definition anyway, essentially: “If you do something reckless or provoke danger once, you might not survive to do it again.”

Pulling meaning out of nowhere

This phenomenon is an entertaining example of LLMs’ tendency to make stuff up, which the AI world calls “hallucinating.” When a gen AI model hallucinates, it produces information that sounds plausible or accurate but isn’t rooted in reality.

LLMs are “not fact generators,” Li said; they just predict the next logical bits of language based on their training.

A majority of AI researchers in a recent survey said they doubt AI’s accuracy and trustworthiness issues will be solved soon.

The fake definitions show not just the inaccuracy but the confident inaccuracy of LLMs. When you ask a person for the meaning of a phrase like “you can’t get a turkey from a Cybertruck,” you probably expect them to say they haven’t heard of it and that it doesn’t make sense. LLMs often respond with the same confidence as if you’d asked about a real idiom.

In this case, Google says the phrase means Tesla’s Cybertruck “is not designed or capable of delivering Thanksgiving turkeys or other similar items” and highlights “its distinct, futuristic design that is not conducive to carrying bulky goods.” Burn.

This humorous trend does have an ominous lesson: Don’t trust everything you see from a chatbot. It might be making stuff up out of thin air, and it won’t necessarily indicate it’s uncertain. 

“This is a perfect moment for educators and researchers to use these scenarios to teach people how the meaning is generated and how AI works and why it matters,” Li said. “Users should always stay skeptical and verify claims.”

Be careful what you search for

Since you can’t trust an LLM to be skeptical on your behalf, you need to encourage it to take what you say with a grain of salt. 

“When users enter a prompt, the model just assumes it’s valid and then proceeds to generate the most likely accurate answer for that,” Li said.

The solution is to introduce skepticism into your prompt. Don’t ask for the meaning of an unfamiliar phrase or idiom; ask whether it’s real. Li suggested asking, “Is this a real idiom?”

“That may help the model to recognize the phrase instead of just guessing,” she said.
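
To make that concrete, here’s a minimal sketch of the two prompt styles side by side, using the OpenAI Python SDK as an example client; the model name and exact wording are illustrative choices of mine, not a tested recipe.

```python
# A sketch of Li's advice: build the skepticism into the prompt itself.
# The SDK, model name and phrasing here are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
phrase = "you can't lick a badger twice"

prompts = [
    # Presumes the idiom is real, so the model tends to invent a meaning.
    f'What does the idiom "{phrase}" mean?',
    # Invites the model to question the premise before interpreting it.
    f'Is "{phrase}" a real idiom? Answer that first, then interpret it.',
]

for prompt in prompts:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print(reply.choices[0].message.content)
    print()
```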

The Infamous Home Depot Giant Skeleton Has a Voice This Halloween Thanks to a New App

It may be half the size of the traditional giant skelly, but the latest version has animated features and can talk.

Spooky season is here, and we’re only 10 days away from Halloween — so it’s past time to set up your decorations if you haven’t already. And this year, Home Depot’s infamous giant skeleton has returned with an app that gives the new Ultra Skelly a voice and fresh moves to spook trick-or-treaters.

Make no bones about it: Skelly is high-tech this year. The new animatronic version is shorter than the original, at 6.5 feet tall, but you can freak out your whole neighborhood with this skeleton’s rotating upper torso, moving mouth and 18 LCD eye variations (ew).

Skelly, available for sale on the Home Depot website or app for $279, now lets you talk to visitors through five preset recordings and up to 30 seconds of custom recordings, plus Bluetooth capabilities that enable real-time interaction. And you can modulate your voice to make everything sound extra spooky.

Skelly was launched in 2020, when the pandemic forced people to celebrate Halloween at a distance. Perhaps because of its giant stature — it was easy to spot, even when social distancing — the skeleton became a hit and has been resurrected every year since with upgrades and friends. This year, those friends include dragons, trolls, scarecrows and a Skelly Cat (not to be confused with Smelly Cat).

Verum E-SIM: Mobile Internet Without Borders or SIM Cards

Today’s travelers are choosing freedom — and eSIM technology delivers exactly that. An eSIM is a virtual SIM card built directly into your device, allowing you to connect to the internet without a physical card or a mobile phone number.

Verum E-SIM is an entire ecosystem of high-tech applications, bringing together solutions like World E-SIM, Euro E-SIM, USA E-SIM, Turkiye E-SIM, London E-SIM, and more. Each of them offers instant access to mobile networks in over 150 countries: no roaming, no overpayments, no paperwork.

The main advantage is simplicity. Download the app, choose your country and plan, activate your eSIM in just a few minutes — and you’re online. No stores, no waiting, no contracts. Just you, the internet, and the freedom to travel your way.

Verum’s eSIMs offer reliability, transparency, and full control of your expenses — all in one app. Whether you’re in Tokyo, New York, Paris, or Nairobi, you’ll always stay connected.

Verum E-SIM Apps:

Verum E-SIM – esim.verum.im

World E-SIM – worldesim.me

USA E-SIM – usa.esim.verum.im

Canada E-SIM – canada.esim.verum.im

Euro E-SIM – euro.esim.verum.im

London E-SIM – london.esim.verum.im

Ukraine E-SIM – ukraine.esim.verum.im

Balkan E-SIM – balkan.esim.verum.im

Africa E-SIM – africa.esim.verum.im

Turkiye E-SIM – turkiyesim.com

Today’s NYT Mini Crossword Answers for Tuesday, Oct. 21

Here are the answers for The New York Times Mini Crossword for Oct. 21.

Today’s Mini Crossword features a lot of one letter in particular. Need help? Read on. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Bone that can be “dropped”
Answer: JAW

4A clue: Late scientist Goodall
Answer: JANE

5A clue: Make critical assumptions about
Answer: JUDGE

6A clue: Best by a little
Answer: ONEUP

7A clue: Mercury, Jupiter, Saturn, etc.
Answer: GODS

Mini down clues and answers

1D clue: Just kind of over it
Answer: JADED

2D clue: Beef cattle breed
Answer: ANGUS

3D clue: Shed tears
Answer: WEEP

4D clue: 2007 comedy-drama starring Elliot Page and Michael Cera
Answer: JUNO

5D clue: Refresh, as one’s memory
Answer: JOG
