

Gemini Live’s New Camera Trick Works Like Magic — When It Wants To

Gemini Live’s new camera mode can identify objects around you and more. I tested it out with my offbeat collectibles.

When Gemini Live’s new camera feature popped up on my phone, I didn’t hesitate to try it out. In one of my longer tests, I turned it on and started walking through my apartment, asking Gemini what it saw. It identified some fruit, chapstick and a few other everyday items with no problem, but the real test came when I asked where I’d left my scissors. “I just spotted your scissors on the table, right next to the green package of pistachios. Do you see them?”

It was right, and I was wowed.

I never mentioned the scissors while I was giving Gemini a tour of my apartment, but I made sure they were in the camera’s view for a couple of seconds before moving on and asking about other objects in the room.

I was following the lead of the demo Google gave last summer, when it first showed off these live video AI capabilities. In that demo, Gemini reminded the presenter where they’d left their glasses. It seemed too good to be true, so I had to try it myself, and I came away impressed.

Gemini Live will recognize a whole lot more than household odds and ends. Google says it’ll help you navigate a crowded train station or figure out the filling of a pastry. It can give you deeper information about artwork, like where an object originated and whether it was a limited edition.

It’s more than just a souped-up Google Lens. You talk with it, and it talks to you. I didn’t need to speak to Gemini in any particular way — it was as casual as any conversation. Way better than talking with the old Google Assistant that the company is quickly phasing out.

Google and Samsung are just starting to roll out the feature to all Pixel 9 phones (including the new Pixel 9a) and Galaxy S25 phones. It’s free on those devices, and other Pixel phones can access it through a Google AI Premium subscription. Google also released a new YouTube video for the April 2025 Pixel Drop showcasing the feature, and there’s now a dedicated page for it on the Google Store.

To get started, you can go live with Gemini, enable the camera and start talking.

Gemini Live follows on from Google’s Project Astra, first revealed last year as possibly the company’s biggest “we’re in the future” feature: an experimental next step for generative AI, beyond simply typing or even speaking prompts into a chatbot like ChatGPT, Claude or Gemini. It comes as AI companies continue to dramatically expand the capabilities of their tools, from video generation to raw processing power. Similar to Gemini Live is Apple’s Visual Intelligence, which the iPhone maker released in beta late last year.

My big takeaway is that a feature like Gemini Live has the potential to change how we interact with the world around us, melding the digital and physical just by pointing your camera at almost anything.

I put Gemini Live to a real test

The first time I tried it, Gemini was shockingly accurate when I placed a very specific gaming collectible of a stuffed rabbit in my camera’s view. The second time, I showed it to a friend in an art gallery. It identified the tortoise on a cross (don’t ask me) and immediately translated the kanji right next to it, giving both of us chills and leaving us more than a little creeped out. In a good way, I think.

I got to thinking about how I could stress-test the feature. I tried to screen-record it in action, but it consistently fell apart at that task. So what if I went off the beaten path with it instead? I’m a huge fan of the horror genre — movies, TV shows, video games — and have countless collectibles, trinkets and what have you. How well would it do with more obscure stuff, like my horror-themed collectibles?

First, let me say that Gemini can be both absolutely incredible and ridiculously frustrating in the same round of questions. I had roughly 11 objects that I was asking Gemini to identify, and it would sometimes get worse the longer the live session ran, so I had to limit sessions to only one or two objects. My guess is that Gemini attempted to use contextual information from previously identified objects to guess new objects put in front of it, which sort of makes sense, but ultimately, neither I nor it benefited from this.

Sometimes, Gemini was just on point, easily landing the correct answers with no fuss or confusion, though this tended to happen with more recent or popular objects. For example, I was surprised when it immediately guessed that one of my test objects was not only from Destiny 2 but also a limited edition from a seasonal event last year.

At other times, Gemini would be way off the mark, and I would need to give it more hints to get into the ballpark of the right answer. And sometimes, it seemed as though Gemini was taking context from my previous live sessions to come up with answers, identifying multiple objects as coming from Silent Hill when they were not. I have a display case dedicated to the game series, so I could see why it would want to dip into that territory quickly.

Gemini can get full-on bugged out at times. On more than one occasion, it misidentified an item as a made-up character from the unreleased Silent Hill: f game, clearly merging pieces of different titles into something that never was. The other consistent bug I experienced was when Gemini would produce an incorrect answer, and I would correct it and hint closer at the answer — or straight up give it the answer — only to have it repeat the incorrect answer as if it were a new guess. When that happened, I would close the session and start a new one, which wasn’t always helpful.

One trick I found: Some conversations did better than others. If I scrolled through my Gemini conversation list, tapped an old chat that had gotten a specific item correct and then went live again from that chat, Gemini could identify the items without issue. That’s not necessarily surprising, but it was interesting that results varied between conversations even when I used the same language.

Google didn’t respond to my requests for more information on how Gemini Live works.

I wanted Gemini to successfully answer my sometimes highly specific questions, so I provided plenty of hints to get there. The nudges were often helpful, but not always. Below is a series of objects I tried to get Gemini to identify and provide information about.


Apple CarPlay Ultra vs. Google Built-In: How the Next-Gen Auto Software Rivals Compare

Apple and Google are supercharging their car software experiences. Here’s how they differ.

I’d spent an hour driving a $250,000-plus Aston Martin up the Los Angeles coast when my hunger pangs became impossible to ignore, and as I’ve done many times before, I asked Siri (through Apple CarPlay) to find me a taco place. But then I did something no other car on the planet allows: I asked Siri to blast the AC and make the air colder. That’s because the 2025 Aston Martin DBX I drove was the first vehicle to come with Apple CarPlay Ultra, the upgraded version of the company’s car software.

Apple debuted CarPlay Ultra at WWDC 2025 last month, and this year’s version of the Aston Martin DBX is the first vehicle to launch with it (pairing with an iPhone running iOS 18.5 or later). As I drove the luxury crossover around, I fiddled with other features that aren’t available in regular CarPlay, from climate control to radio to checking the pressure on the car’s tires. Ultimately, Ultra gives deeper access to more car systems, which is a good thing.

That reminded me a lot of a new feature announced at Google I/O back in May: Google Built-In, which similarly lets users control more of a car’s systems straight from the software interface (in that case, Android Auto). When I got a demonstration of Google Built-In, sitting in a new Volvo EX90 electric SUV, I saw what this new integration of Google software offered: climate controls, Gemini AI assistance and even warnings about car maintenance issues.

But the name is telling: Google Built-In requires automakers to incorporate Android deeper into their cars’ inner workings. By comparison, Apple CarPlay Ultra seems like it won’t require car manufacturers to do nearly as much work to prepare their vehicles, demanding little more than a reasonably advanced multicore processor onboard that can handle the increased task load. (Aston Martin will be able to add CarPlay Ultra support to its 2023 and 2024 lineups through firmware updates because those cars already contain sufficiently advanced CPUs.)

The two solutions reflect Apple’s and Google’s different approaches to the next generation of car software. Apple’s is lighter weight: CarPlay Ultra runs through a paired iPhone and seemingly requires less commitment from automakers, so long as the vehicle has adequate processing power onboard. Google Built-In requires much more integration, but it’s self-sufficient enough that you can leave your Android phone at home and still get much of its functionality (aside from sending and receiving messages and calls).

Driving with Apple CarPlay Ultra: Controlling climate, radio and more

As I drove around Los Angeles in the Aston Martin with Apple CarPlay Ultra, I could tell what new features I would be missing once I stepped back into my far more humble daily driver. 

At long last, I could summon Siri and ask it to play a specific song (or just a band) and have it pulled up on Spotify. Since Apple’s assistant now has access to climate controls, I asked to turn up the AC, and it went full blast. I asked to find tacos and it suggested several fast food restaurants — well, it’s not perfect, but at least it’s listening. 

To my relief, Aston Martin retained the physical knobs by the gearshift to control fan speed, temperature, stereo volume and the car’s myriad roadway options (like driving assistance) in case the driver likes traditional controls, but almost all of them could also be altered in the interface. Now, things like radio controls (AM/FM and satellite) and car settings are nestled in their own recognizable apps in CarPlay’s interface.

Ultimately, that’ll be one of CarPlay Ultra’s greatest advantages: If you enter an unfamiliar vehicle (like a rental), you still know exactly where everything is. No wrestling with a carmaker’s proprietary software or trying to figure out where some setting is located. It’s not a complete replacement: In the Aston Martin’s case, there were still a handful of settings (like the ambient light projected when the doors open) that the luxury automaker controlled, but they were woven into CarPlay so you could pop open those windows and return to Apple’s interface without visibly changing apps.

The dependable ubiquity of Apple’s CarPlay software will likely become even more essential as cars swap out their analog instrument clusters for screens, as Aston Martin did. There’s still a touch of the high-end automaker’s signature style as the default screen behind the wheel shows two traditional dials (one for the speedometer, one for RPMs) with Aston Martin’s livery. But that can be swapped out for other styles, from other dials with customizable colors to a full-screen Maps option.

Each of the half-dozen or so dashboard options could be swapped via square touchpads, smaller than a dime, on the wheel next to the other touch controls. On the dual-dial display types, I swiped vertically to rotate through the central square (with Maps directions, current music or other app information) and swiped horizontally to switch to another dashboard option. No matter which one you choose, the bottom bar contains all the warning lights drivers will recognize from analog cars. Even with digital displays, you’re not safe from the check engine light (which is a good thing).

Apple CarPlay Ultra doesn’t yet do everything I want. I wish I could also ask Siri to roll down the windows (as Google Built-In can; more on that later) and lock or unlock specific doors. If Apple is connected to the car enough to read the pressure in each tire, I wish it could also link up with the engine readout and tell me in plain language what kind of maintenance issue has sprung up. Heck, I wish it could connect to the car remotely and blast the AC before I get in (or fire up the seat warmer), as some proprietary car apps can do. And while Apple Maps and Waze will be included at launch, Google Maps support won’t be, though it’s coming later.

These aren’t huge deficiencies, but they show where the limits are today compared with Google’s more in-depth approach, and where CarPlay Ultra could better meet driver needs in future updates, notwithstanding the potentially dicey security implications of remote climate control or unlocking.

Google Built-In: Deeper car integrations — and, of course, Gemini AI

Back in May, the day after Google I/O’s keynote was quieter, as attendees flitted between focused sessions and demos of upcoming software. It was the ideal time to check out Google Built-In, which was appropriately shown off in a higher-end Volvo EX90 electric SUV (though one not nearly as pricey as an Aston Martin).

As mentioned above, Google Built-In has deeper integrations with vehicles than what I saw with Apple CarPlay Ultra, allowing users to change the climate through its interface or access other systems, including through voice requests. For instance, it can go beyond AC control to switch on the defroster, and it can even raise and lower specific windows relative to the speaker’s position: Cameras inside the car (in the rearview mirror, if I remember right) meant that when my demonstrator asked it to “roll down this window” while pointing over his left shoulder, the correct window rolled down.

Google Built-In is also connected to Gemini, Google’s AI assistant, for what the company is calling “Google Live,” a separate and more capable version of the assistant experience in Android Auto today. In a Live session, I could request music or directions much as I could with Siri, but my demo went further: The demonstrator tasked Gemini with requests better suited to generative AI, such as asking, “Give me suggestions for a family outing,” and telling it to send a specific text to a contact.

The demonstrator then asked Gemini for recipe advice (“I have chicken, rice and broccoli in the fridge, what can I make?”) as an example of a query someone might ask on the drive home.

Since you’re signed into your Google account, Gemini can consult anything connected to it, like emails and messages. It’s also trained on the user manuals from each carmaker, so if a warning light comes on, the driver can ask the voice assistant what it means; no more flipping through a dense printed manual to decipher each alert.

There are other benefits to Google Built-In, like not needing your phone for some features. But there are also drawbacks, like the need to keep car software updated, which means more work on Google’s end to make sure cars are protected from issues and exploits. Google can’t just fix a problem in the most current version of Android; it’ll need to backport that fix to the older versions vehicles might still be running.

This deeper integration gives Google Built-In a lot of the benefits of Apple CarPlay Ultra (a familiar interface, easier access to features), just cranked up to a greater degree. It surely benefits fans of hands-off controls, and interweaving Gemini naturally dovetails with Google’s investments, so it’s easy to see that functionality improving. But a greater reliance on Android within the car’s systems could be concerning as the vehicle ages: Will the software stop being supported? Will it slow down or be exposed to security exploits? A lot of questions remain about opening cars up to phone software.



A Samsung Tri-Fold Phone Could Be in Your Future, if This Leak Is to Be Believed

UI animations might have revealed the imminent release of a so-called “Galaxy G Fold” device with three screens.

Samsung has been showing off mobile display concepts with three screens at trade events such as CES for several years, but it might finally bring one to market soon if a leaked UI animation is any indicator.

As reported by Android Authority, an animated image from a software build of One UI 8 appears to show what some are dubbing a “Galaxy G Fold” device with three display panels. The screens would be capable of displaying different information or working in unison as one large display. The new phone model could debut as early as next week at Samsung’s Unpacked event on July 9 in Brooklyn.

Huawei released a tri-folding phone in February, the Mate XT Ultimate Design. 

Some websites have gone into overdrive trying to uncover details on what Samsung’s new device might include and how much it may cost; Phone Arena, citing a Korean media report, says it could be priced at about $3,000.

Samsung didn’t immediately respond to a request for comment.


Copyright © Verum World Media