Technologies

Gemini Live Gives You AI With Eyes, and It’s Awesome

When it works, Gemini Live’s new camera mode feels like the future in all the right ways. I put it to the test.

Google’s been rolling out the new Gemini Live camera mode for free to all Android phones using the Gemini app, after a two-week exclusive for the Pixel 9 (including the new Pixel 9A) and Galaxy S25 smartphones. In simpler terms, Google successfully gave Gemini the ability to see: it can recognize objects that you put in front of your camera.

It’s not just a party trick, either. When you start a live session with Gemini, you now have the option to enable a live camera view, where you can talk to the chatbot and ask it about anything the camera sees. Not only can it identify objects, but you can also ask questions about them — and it works pretty well for the most part. You can also share your screen with Gemini so it can identify things you surface on your phone’s display. I was most impressed when I asked Gemini where I misplaced my scissors during one of my initial tests.

“I just spotted your scissors on the table, right next to the green package of pistachios. Do you see them?”

Gemini Live’s chatty new camera feature was right. My scissors were exactly where it said they were, and all I did was pass my camera in front of them at some point during a 15-minute live session of me giving the AI chatbot a tour of my apartment.

When the new camera feature popped up on my phone, I didn’t hesitate to try it out. In one of my longer tests, I turned it on and started walking through my apartment, asking Gemini what it saw. It identified some fruit, ChapStick and a few other everyday items with no problem. I was wowed when it found my scissors. 

That’s because I hadn’t mentioned the scissors at all. Gemini had silently identified them somewhere along the way and then recalled their location with precision. It felt so much like the future that I had to do further testing.

My experiment with Gemini Live’s camera feature followed the lead of the demo Google gave last summer, when it first showed off these live video AI capabilities. Gemini reminded the person giving the demo where they’d left their glasses, and it seemed too good to be true. But as I discovered, it was very true indeed.

Gemini Live will recognize a whole lot more than household odds and ends. Google says it’ll help you navigate a crowded train station or figure out the filling of a pastry. It can give you deeper information about artwork, like where an object originated and whether it was a limited edition piece.

It’s more than just a souped-up Google Lens. You talk with it, and it talks to you. I didn’t need to speak to Gemini in any particular way — it was as casual as any conversation. Way better than talking with the old Google Assistant that the company is quickly phasing out.

Google also released a new YouTube video for the April 2025 Pixel Drop showcasing the feature, and there’s now a dedicated page on the Google Store for it.

To get started, you can go live with Gemini, enable the camera and start talking. That’s it.

Gemini Live follows on from Google’s Project Astra, first revealed last year as possibly the company’s biggest “we’re in the future” feature — an experimental next step for generative AI capabilities, beyond simply typing or even speaking prompts into a chatbot like ChatGPT, Claude or Gemini. It comes as AI companies continue to dramatically increase the skills of AI tools, from video generation to raw processing power. Similar to Gemini Live, there’s Apple’s Visual Intelligence, which the iPhone maker released in a beta form late last year.

My big takeaway is that a feature like Gemini Live has the potential to change how we interact with the world around us, melding our digital and physical worlds together just by pointing a camera at almost anything.

I put Gemini Live to a real test

The first time I tried it, Gemini was shockingly accurate when I placed a very specific gaming collectible of a stuffed rabbit in my camera’s view. The second time, I showed it to a friend in an art gallery. It identified the tortoise on a cross (don’t ask me) and immediately identified and translated the kanji right next to the tortoise, giving both of us chills and leaving us more than a little creeped out. In a good way, I think.

I got to thinking about how I could stress-test the feature. I tried to screen-record it in action, but it consistently fell apart at that task. And what if I went off the beaten path with it? I’m a huge fan of the horror genre — movies, TV shows, video games — and have countless collectibles, trinkets and what have you. How well would it do with more obscure stuff — like my horror-themed collectibles?

First, let me say that Gemini can be both absolutely incredible and ridiculously frustrating in the same round of questions. I had roughly 11 objects that I was asking Gemini to identify, and it would sometimes get worse the longer the live session ran, so I had to limit sessions to only one or two objects. My guess is that Gemini attempted to use contextual information from previously identified objects to guess new objects put in front of it, which sort of makes sense, but ultimately, neither I nor it benefited from this.

Sometimes, Gemini was just on point, easily landing the correct answers with no fuss or confusion, but this tended to happen with more recent or popular objects. For example, I was surprised when it immediately guessed one of my test objects was not only from Destiny 2, but was a limited edition from a seasonal event from last year. 

At other times, Gemini would be way off the mark, and I would need to give it more hints to get into the ballpark of the right answer. And sometimes, it seemed as though Gemini was taking context from my previous live sessions to come up with answers, identifying multiple objects as coming from Silent Hill when they were not. I have a display case dedicated to the game series, so I could see why it would want to dip into that territory quickly.

Gemini can get full-on bugged out at times. On more than one occasion, Gemini misidentified one of the items as a made-up character from the unreleased Silent Hill: f game, clearly merging pieces of different titles into something that never was. The other consistent bug I experienced was when Gemini would produce an incorrect answer, and I would correct it and hint closer at the answer — or straight up give it the answer, only to have it repeat the incorrect answer as if it was a new guess. When that happened, I would close the session and start a new one, which wasn’t always helpful.

One trick I found was that some conversations did better than others. If I scrolled through my Gemini conversation list, tapped an old chat that had gotten a specific item correct, and then went live again from that chat, it would be able to identify the items without issue. While that’s not necessarily surprising, it was interesting to see that some conversations worked better than others, even when I used the same language.

Google didn’t respond to my requests for more information on how Gemini Live works.

I wanted Gemini to successfully answer my sometimes highly specific questions, so I provided plenty of hints to get there. The nudges were often helpful, but not always. Below are a series of objects I tried to get Gemini to identify and provide information about. 

Verum Messenger Update: The voice of the universe now sounds clearer

The new version of Verum Messenger brings advanced microphone modes — Automatic, Standard, Voice Isolation, and Wide Spectrum. Choose how your voice will sound — focused, natural, or cosmically wide.

Anonymity. Energy. Freedom.

Your words move through protected channels, dissolving in a space with no surveillance, no borders — only Verum, the flow of truth and silence.

Aurora Borealis Alert: 21 States Could Marvel at the Dazzling Northern Lights Tonight

A strong G3 magnetic storm is pushing the aurora further south than it’s been since June 2025.

Remember that dazzling night in May 2024, when the aurora borealis lit up states that almost never see its colorful glow? Some of us have been chasing that natural marvel ever since. 

Now, the sun is at its solar maximum, and many might get their chance to see the northern lights again. Late Thursday night and early Friday morning, a moderately powerful magnetic storm will impact the Earth’s magnetic field, making the aurora visible in 21 states.


According to NOAA, the aurora will be visible in Idaho, Maine, Michigan, Minnesota, Montana, North Dakota, South Dakota, Washington and Wisconsin. Those with a high enough vantage point facing north should also be able to see it in Illinois, Indiana, Iowa, Nebraska, New Hampshire, New York, Ohio, Oregon, Pennsylvania, Wyoming and Vermont. Alaskans and Canadians will have the best view. 

This is only a prediction. The aurora could be stronger or weaker depending on how things go. If you’re just south of any of these states, it may be worth seeing if the aurora makes it down to you. 

This storm is a continuation of one that hit the US on Wednesday night, for which NOAA initially predicted a G2 magnetic storm but ultimately classified it as a stronger G3 storm.

Both storms come from a pair of X-class coronal mass ejections, or eruptions of solar material and magnetic field that the sun launched toward the Earth on Nov. 4. X-class is the highest designation, so these ejections were pretty big.

Tips for viewing the northern lights

The methods for viewing an aurora are straightforward. 

You’ll want to get as far away from the city and suburbs as you can to minimize light pollution. After that, you’ll want to get as high up as possible and then face north. 

The northern states in the US will have the best view, but those further south in the prediction zone may still see something if they’re high enough and it’s dark enough outside. 

Avoiding light pollution may be tough because the moon is almost full. It may drown out the aurora along the southern reaches of NOAA’s prediction area. 

If you do decide to head out, you also have a pretty good chance of spotting a shooting star, since four meteor showers are active right now: the Orionids, Leonids, Northern Taurids and Southern Taurids. Three of them are expected to peak in November.

NASA’s Escapade Mission May Finally Reveal How the Martian Atmosphere Works

NASA, Blue Origin and UC Berkeley combined efforts for NASA’s lowest-cost mission to Mars.

Sending anything to Mars is a much more difficult process than it seems. In the 1960s, the Soviet Union tried (and failed) in its first nine consecutive attempts, and the US was only able to succeed in quick flybys. The losing streak came to an end in 1971 with the success of Mariner 9, the first spacecraft to orbit another planet.

More than 50 years later, Mars is still tough to get to, with only seven functional orbiters and two on-surface rovers still operating, most of which are run by NASA. 

On Sunday, NASA’s Escapade, a collaborative effort among the space agency, UC Berkeley and Jeff Bezos’ Blue Origin, will launch and attempt to add two more orbiters to the elusive club of successful missions to Mars. Liftoff is scheduled for 2:45 p.m. ET.

The mission is simple on paper: Blue Origin’s New Glenn rocket will launch two Escapade orbiters into space on Nov. 9, depending on the weather and other factors.

Once there, the orbiters — nicknamed Blue and Gold after UC Berkeley’s school colors — will separate. This is where things get a little complicated. Blue and Gold will head to the L2 Earth-Sun Lagrange point, a part of space behind the Earth when viewed from the sun, where the orbiters can quite literally hang out without getting lost in space. They’ll stay there for a year before doing a quick flyby of Earth and departing for Mars. The twin orbiters are expected to arrive at the Red Planet by November 2027.


Space agencies launch missions all the time, but few of them have the subtext of Escapade, which has not one but three underlying storylines to pay attention to.

New Glenn’s official debut

NASA has tapped Blue Origin’s large New Glenn rocket for the launch. New Glenn is the proverbial new kid on the block, and Escapade will be the rocket’s first mission carrying a customer payload. The rocket’s role will be to launch Escapade into orbit and then return to Earth.

Blue Origin sent New Glenn into orbit for the first time in January 2025. That mission, dubbed NG-1 by Blue Origin, showed that the rocket could launch and make it to space while demonstrating the company’s Blue Ring orbital transfer vehicle. Things didn’t exactly go as planned, however. Upon reentry, New Glenn’s first stage was unable to stick its landing, missing its target and plunging into the Atlantic Ocean, prompting an FAA investigation.

For the Escapade mission, all eyes will be on whether Blue Origin will do better this time in the landing phase. Not only is this the first NASA mission for the space company, founded by the former CEO of online retail giant Amazon, but it will also be its second attempt to land New Glenn’s first stage without incident.

Should the company succeed, Blue Origin will join Elon Musk’s SpaceX as the only commercial vendors with reusable space launch vehicles. This could help reduce the cost and increase the frequency of space launches. 

The 13 lives of Escapade

One of the challenges of the Escapade mission is its budget. Missions to Mars are usually expensive. The Mars Exploration Rover mission, started in 2003 and launched a year later, cost a hair over $1 billion, with $744 million of it going to vehicle design and launch. Even less expensive initiatives, like the failed 1999 Mars Polar Lander, still cost well north of $100 million.

Escapade didn’t have that budget. It’s part of NASA’s Small Innovative Missions for Planetary Exploration program. Its budget was less than $80 million, and to build the two orbiters, UC Berkeley and Rocket Lab were allocated $55 million of that total. 

“Building two interplanetary spacecraft for $55 million was never going to be simple,” Dr. Robert Lillis, associate director for planetary science at UC Berkeley and principal investigator of the Escapade mission, tells CNET. “They say ‘space is hard’ and they’re right. For us and our spacecraft partners at Rocket Lab, it was tough to build robust, well-instrumented interplanetary probes on a low budget, so challenges were many.”

Researchers at Berkeley began work on Blue and Gold in 2016, and over the years, they dealt with myriad roadblocks, including budgetary concerns, the COVID-19 pandemic, supply-chain issues and even personal illnesses.

“I’ll put it this way, we have a slide deck called ‘The Nine Lives of Escapade’ and I think we’re up to 13 now,” Lillis says. “I could write a book on all the things that could’ve doomed the mission.”

The cost of admission

In 2013, the Indian Space Research Organisation launched its Mars Orbiter Mission, a successful attempt to put a satellite in orbit around the Red Planet. The total cost of the mission was $74 million, which undercut all other missions to Mars by a fairly significant margin when adjusting for inflation.

Escapade’s budget is roughly the same, with NASA paying Blue Origin $20 million for use of the New Glenn rocket in addition to the $55 million given to UC Berkeley and Rocket Lab for the creation of the two orbiters. Should the mission be a success, it’ll be NASA’s first low-cost mission to go as far as Mars, and the second such mission to succeed.

Reducing the cost of admission is an important milestone for NASA. It would open up more opportunities for future Mars missions, which could help pave the way for human exploration someday, although there are many other milestones that need to be hit before that can happen.

UC Berkeley and Rocket Lab successfully developed two orbiters that will spend their lifetimes scanning Mars’ magnetic field to gain a deeper understanding of its history, all while operating within a budget that may make future missions to Mars more frequent and affordable. 

The Martian magnetosphere

Despite being one of Earth’s closest neighbors, Mars still comes with a lot of question marks. It’s pretty well established that the planet had water at some point. Over the span of its history, though, the Martian magnetosphere was gradually stripped away by solar winds, making it nearly impossible for that water to continue to exist.

Science has only a limited set of data, gathered by single orbiters over the span of decades. Escapade hopes to fix that with two orbiters that follow each other, so researchers can get more consistent measurements of the Martian magnetosphere. As Lillis says, the magnetosphere on Mars changes by the minute, so waiting for a single orbiter to circle back around leaves a lot of those changes unmeasured.

“With a single orbiter, we could measure conditions in the upstream solar wind, but then have to wait a couple of hours before the spacecraft orbit brought us into the upper atmosphere to measure the rates of atmospheric escape,” Lillis said. “That’s too long: We know the space weather propagates through the system in only one or two minutes.”

The ultimate purpose of the mission is to measure and observe how solar weather interacts with the Martian magnetosphere. Per Lillis, solar winds have been eroding the magnetosphere on Mars, similar to how water erodes rock in a river. Escapade will help science determine how fast and how much of the magnetosphere has eroded under the sun’s constant onslaught. 

Because space weather can be so unpredictable and the existing data is too sparse in time, researchers aren’t quite sure what they’ll find when they get there. Berkeley has simulation models that can predict conditions over the span of hours. Lillis says the data from Escapade’s two-orbiter setup will help fill in a lot of those gaps.

“With Escapade, we can measure cause and effect at the same time, i.e., the solar wind and upper atmosphere simultaneously,” says Lillis. “To start to understand this highly dynamic system, we need that cause and effect perspective.”

You can watch the livestream of the Escapade mission launch on Sunday at Blue Origin’s website.

Copyright © Verum World Media