Gemini Live’s New Camera Trick Works Like Magic — When It Wants To

Gemini Live’s new camera mode can identify objects around you and more. I tested it out with my offbeat collectibles.

When Gemini Live’s new camera feature popped up on my phone, I didn’t hesitate to try it out. In one of my longer tests, I turned it on and started walking through my apartment, asking Gemini what it saw. It identified some fruit, chapstick and a few other everyday items with no problem. Then I asked where I’d left my scissors. “I just spotted your scissors on the table, right next to the green package of pistachios. Do you see them?”

It was right, and I was wowed.

I never mentioned the scissors while giving Gemini the tour of my apartment, but I made sure they were in the camera’s view for a couple of seconds before moving on to ask about other objects in the room.

I was following the lead of the demo Google gave last summer, when it first showed off these live video AI capabilities. In that demo, Gemini reminded the presenter where they had left their glasses. It seemed too good to be true, so I had to try it myself, and I came away impressed.

Gemini Live will recognize a whole lot more than household odds and ends. Google says it’ll help you navigate a crowded train station or figure out the filling of a pastry. It can give you deeper information about artwork, like where an object originated and whether it was a limited edition.

It’s more than just a souped-up Google Lens. You talk with it, and it talks to you. I didn’t need to speak to Gemini in any particular way — it was as casual as any conversation. Way better than talking with the old Google Assistant that the company is quickly phasing out.

Google and Samsung are just starting to roll out the feature to all Pixel 9 phones (including the new Pixel 9a) and Galaxy S25 phones. It’s free on those devices, and other Pixel phones can access it via a Google AI Premium subscription. Google also released a new YouTube video for the April 2025 Pixel Drop showcasing the feature, and there’s now a dedicated page on the Google Store for it.

To get started, you can go live with Gemini, enable the camera and start talking.

Gemini Live follows on from Google’s Project Astra, first revealed last year as possibly the company’s biggest “we’re in the future” feature: an experimental next step for generative AI, beyond simply typing or even speaking prompts into a chatbot like ChatGPT, Claude or Gemini. It arrives as AI companies continue to dramatically expand the skills of their tools, from video generation to raw processing power. Apple offers a similar feature, Visual Intelligence, which the iPhone maker released in beta late last year.

My big takeaway is that a feature like Gemini Live has the potential to change how we interact with the world around us, melding our digital and physical worlds just by pointing a camera at almost anything.

I put Gemini Live to a real test

The first time I tried it, Gemini was shockingly accurate when I placed a very specific gaming collectible of a stuffed rabbit in my camera’s view. The second time, I showed it to a friend in an art gallery. It identified the tortoise on a cross (don’t ask me) and immediately identified and translated the kanji right next to the tortoise, giving both of us chills and leaving us more than a little creeped out. In a good way, I think.

I got to thinking about how I could stress-test the feature. I tried to screen-record it in action, but it consistently fell apart at that task. And what if I went off the beaten path with it? I’m a huge fan of the horror genre — movies, TV shows, video games — and have countless collectibles, trinkets and what have you. How well would it do with more obscure stuff — like my horror-themed collectibles?

First, let me say that Gemini can be both absolutely incredible and ridiculously frustrating in the same round of questions. I had roughly 11 objects I was asking Gemini to identify, and its accuracy would sometimes degrade the longer a live session ran, so I had to limit sessions to one or two objects. My guess is that Gemini attempted to use contextual information from previously identified objects to guess new objects put in front of it, which sort of makes sense, but ultimately, neither I nor it benefited from this.

Sometimes, Gemini was just on point, easily landing on the correct answer with no fuss or confusion, but this tended to happen with more recent or popular objects. For example, I was surprised when it immediately guessed that one of my test objects was not only from Destiny 2 but was a limited edition from a seasonal event last year.

At other times, Gemini would be way off the mark, and I would need to give it more hints to get into the ballpark of the right answer. And sometimes, it seemed as though Gemini was taking context from my previous live sessions to come up with answers, identifying multiple objects as coming from Silent Hill when they were not. I have a display case dedicated to the game series, so I could see why it would want to dip into that territory quickly.

Gemini can get full-on bugged out at times. On more than one occasion, it misidentified one of the items as a made-up character from the unreleased Silent Hill: f game, clearly merging pieces of different titles into something that never was. The other consistent bug I experienced was that Gemini would produce an incorrect answer, and when I corrected it, hinted closer at the answer or straight up gave it the answer, it would repeat the incorrect answer as if it were a new guess. When that happened, I would close the session and start a new one, which wasn’t always helpful.

One trick I found was that some conversations did better than others. If I scrolled through my Gemini conversation list, tapped an old chat that had gotten a specific item correct and then went live again from that chat, Gemini would identify the items without issue. That’s not necessarily surprising, but it was interesting to see the same language produce different results depending on the conversation.

Google didn’t respond to my requests for more information on how Gemini Live works.

I wanted Gemini to successfully answer my sometimes highly specific questions, so I provided plenty of hints to get there. The nudges were often helpful, but not always. Below is a series of objects I tried to get Gemini to identify and provide information about.


Rideable Horse Robot Viral Video: The Real Story Behind It

Kawasaki’s Corleo robot horse is just a concept right now, but a thrilling hype video makes it look like a blast to ride.

If you’ve ever watched a video featuring a Boston Dynamics Spot robot dog and wanted to saddle it up and ride it, then Kawasaki has a concept robot that’ll make your heart flutter — and it’s part horse, part leopard, part robot and all wild. Too bad you can’t actually buy one.

The Kawasaki Corleo is a four-legged rideable robot, the answer to the question: “What if we put legs on an all-terrain vehicle instead of wheels?” Kawasaki released a video showing what the concept would look like if it were fully realized.

The trippy video features the Corleo and riders galloping through a forest, running across a field, leaping over rocky terrain and trotting across a snowy landscape. The video appears to be primarily computer generated with Lord of the Rings-worthy scenery.

Kawasaki is known for its motorcycles and ATVs, but the international company has its hands in everything from railcars to industrial equipment and robotics. 

Kawasaki unveiled the forward-thinking Corleo for the Osaka Expo 2025 in Japan. It’s a 2050 concept model for a future mode of transportation. The expo’s theme is “designing future society for our lives.” The event officially opens on April 13.

Corleo incorporates some nifty design ideas, including independent legs, a hydrogen engine and steering through weight shifting. 

“While preserving the joy of riding, the vehicle continually monitors the rider’s movements to achieve a reassuring sense of unity between human and machine,” Kawasaki said.

Kawasaki didn’t immediately respond to a request for comment on its plans for Corleo.

For now, Corleo is just a model capable of limited movement, so your sci-fi dreams of riding across rugged mountains on a kick-butt robo-steed will have to be put on hold. Perhaps 2050 will bring us a world full of leggy, rideable robots. Somehow, that feels more achievable than a bunch of flying cars.
