Gemini Live Now Has Eyes. We Put the New Feature to the Test

The new feature gives Gemini Live eyes to "see." I put it through a series of tests. Here are the results.

There I was, walking around my apartment, taking a video with my phone and talking to Google’s Gemini Live. I was giving the AI a tour – and a quiz, asking it to name specific objects it saw. After it identified the flowers in a vase in my living room (chamomile and dianthus, by the way), I tried a curveball: I asked it to tell me where I’d left a pair of scissors. "I just spotted your scissors on the table, right next to the green package of pistachios. Do you see them?"

It was right, and I was wowed. 

Gemini Live will recognize a whole lot more than household odds and ends. Google says it’ll help you navigate a crowded train station or figure out the filling of a pastry. It can give you deeper information about artwork, like where an object originated and whether it was a limited edition.

It’s more than just a souped-up Google Lens. You talk with it and it talks to you. I didn’t need to speak to Gemini in any particular way – it was as casual as any conversation. Way better than talking with the old Google Assistant that the company is quickly phasing out.

Google and Samsung are just now starting to formally roll out the feature to all Pixel 9 phones (including the new Pixel 9a) and Galaxy S25 phones. It’s free on those devices, and other Pixel phones can access it via a Google AI Premium subscription. Google also released a new YouTube video for the April 2025 Pixel Drop showcasing the feature, and there’s now a dedicated page on the Google Store for it.

All you have to do to get started is go live with Gemini, enable the camera and start talking.

Gemini Live follows on from Google’s Project Astra, first revealed last year as possibly the company’s biggest "we’re in the future" feature, an experimental next step for generative AI capabilities that goes beyond simply typing or even speaking prompts into a chatbot like ChatGPT, Claude or Gemini. It comes as AI companies continue to dramatically increase the skills of AI tools, from video generation to raw processing power. Somewhat similar to Gemini Live is Apple’s Visual Intelligence, which the iPhone maker released in a beta form late last year.

My big takeaway is that a feature like Gemini Live has the potential to change how we interact with the world around us, melding our digital and physical worlds just by pointing a camera at almost anything.

I put Gemini Live to a real test

Somehow Gemini Live showed up on my Pixel 9 Pro XL a few days early, so I’ve already had a chance to play around with it. 

The first time I tried it, Gemini was shockingly accurate when I placed a very specific gaming collectible of a stuffed rabbit in my camera’s view. The second time, I showed it to a friend when we were in an art gallery. It not only identified the tortoise on a cross (don’t ask me), but it also immediately identified and translated the kanji right next to the tortoise, giving both of us chills and leaving us more than a little creeped out. In a good way, I think.

In the tour of my apartment, I was following the lead of the demo that Google did last summer when it first showed off these Live video AI capabilities. I tried random objects in my apartment (fruit, books, Chapstick), many of which it easily identified. 

Then I got to thinking about how I could stress-test the feature. I tried to screen-record it in action, but it consistently fell apart at that task. And what if I went off the beaten path with it? I’m a huge fan of the horror genre — movies, TV shows, video games — and have countless collectibles, trinkets and what have you. How well would it do with more obscure stuff — like my horror-themed collectibles?

First, let me say that Gemini can be both absolutely incredible and ridiculously frustrating in the same round of questions. I had roughly 11 objects that I was asking Gemini to identify, and it would sometimes get worse the longer the live session ran, so I had to limit sessions to only one or two objects. My guess is that Gemini attempted to use contextual information from previously identified objects to guess new objects put in front of it, which sort of makes sense, but ultimately neither I nor it benefited from this.

Sometimes, Gemini was just on point, easily landing the correct answers with no fuss or confusion, but this tended to happen with more recent or popular objects. For example, I was pretty surprised when it immediately guessed that one of my test objects was not only from Destiny 2, but was a limited edition from a seasonal event last year.

At other times, Gemini would be way off the mark, and I would need to give it more hints to get into the ballpark of the right answer. And sometimes, it seemed as though Gemini was taking context from my previous live sessions to come up with answers, identifying multiple objects as coming from Silent Hill when they were not. I have a display case dedicated to the game series, so I could see why it would want to dip into that territory quickly.

Gemini can get full-on bugged out at times. On more than one occasion, Gemini misidentified one of the items as a made-up character from the unreleased Silent Hill: f game, clearly merging pieces of different titles into something that never was. The other consistent bug I experienced was when Gemini would produce an incorrect answer, and I would correct it and hint closer at the answer — or straight up give it the answer, only to have it repeat the incorrect answer as if it were a new guess. When that happened, I would close the session and start a new one, which wasn’t always helpful.

One trick I found was that some conversations did better than others. If I scrolled through my Gemini conversation list, tapped an old chat that had gotten a specific item correct and then went live again from that chat, it was able to identify the items without issue. That’s not necessarily surprising, but it was interesting to see how much the conversation I started from mattered, even when I used the same language.

Google didn’t respond to my requests for more information on how Gemini Live works.

I wanted Gemini to successfully answer my sometimes highly specific questions, so I provided plenty of hints to get there. The nudges were often helpful, but not always. Below are a series of objects I tried to get Gemini to identify and provide information about. 

Today’s NYT Mini Crossword Answers for Tuesday, June 10

Here are the answers for The New York Times Mini Crossword for June 10.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Today’s NYT Mini Crossword isn’t too tough. And 5-Down celebrates a certain summer blockbuster movie that’s about to turn 50. Need some help with today’s Mini Crossword? Read on. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

The Mini Crossword is just one of many games in the Times’ games collection. If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Displays at a trailhead
Answer: MAPS

5A clue: Pulitzer-winning 2024 novel that reimagined "Adventures of Huckleberry Finn" from the perspective of Jim
Answer: JAMES

6A clue: Invader in a sci-fi movie
Answer: ALIEN

7A clue: Thin strands
Answer: WISPS

8A clue: ‘Tude
Answer: SASS

Mini down clues and answers

1D clue: One of Michelle Obama’s daughters
Answer: MALIA

2D clue: A little out of whack
Answer: AMISS

3D clue: Marshmallow treats in Easter baskets
Answer: PEEPS

4D clue: I.R.S. IDs
Answer: SSNS

5D clue: 1975 film with a 25-foot animatronic shark
Answer: JAWS

How to play more Mini Crosswords

The New York Times Games section offers a large number of online games, but only some of them are free for all to play. You can play the current day’s Mini Crossword for free, but you’ll need a subscription to the Times Games section to play older puzzles from the archives.

Samsung Says Its Next Galaxy Z Foldables Will Be Its ‘Thinnest, Lightest’

The company shares yet another teaser for its upcoming devices.

Another week, another cryptic teaser for Samsung’s upcoming foldables. 

On Monday, the company said in a blog post that its "newest Galaxy Z series is the thinnest, lightest and most advanced foldable yet." This comes after Samsung last week teased a foldable packing "an Ultra-experience," including a "powerful camera" and "AI-powered tools."

Now, it appears Samsung is borrowing from the design of another one of its phones, the slim and lightweight Galaxy S25 Edge. It’s also following in the footsteps of another skinny foldable, the Oppo Find N5, which is dubbed "the world’s thinnest book-style foldable when closed." Whether Oppo will hold onto that title after Samsung’s reveal remains to be seen.

In its post, Samsung notes that "it’s only natural that users desire a foldable device that is as easy to carry as it is to use. To that end, Samsung engineers and designers are refining each generation of the Galaxy Z series to be thinner, lighter and more durable than the last."

Personally, I’m all for a thinner and lighter foldable; the Galaxy S25 Edge and Oppo Find N5 really opened my eyes to how much more enjoyable using a slim and lightweight device can be. I can see the Galaxy Z Fold especially benefiting from this redesign, since the Z Fold 6 is still pretty bulky. But even a slim Galaxy Z Flip could help revive the nostalgia of a skinny flip phone, perhaps even better than the modern-day Motorola Razr.

Samsung’s new Galaxy Z foldables are slated to arrive in the summer, and it appears the company will keep dropping hints about what’s in store leading up to the full reveal.

Microsoft Just Dropped a Free AI Video Tool, and It’s Wildly Easy to Use

Bing Video Creator is live on mobile now, but desktop and Copilot Search support is coming soon.

Microsoft has a new, free tool that lets you create AI-generated videos: Bing Video Creator.

If you’ve ever wanted to turn a quick idea into a video without touching editing software, Microsoft’s new AI tool might be your next favorite trick. The company just rolled out Bing Video Creator, a free feature that lets you generate short videos from nothing but a text prompt. No fancy skills or timeline scrubbing required. Just type in your idea and let the AI do the rest.

When I gave it a spin, it took less than a minute to churn out a five-second clip of the Bing logo bobbing in a pool alongside a flamingo and donut floatie. It’s weird, fun, and kind of impressive, especially for a free tool that lives right inside your browser. If you’re curious about what this AI video generator can do (or just want to make a goofy summer-themed clip), here’s how it works and what to expect.

The feature is only available in the Bing Search mobile app right now, but according to the company, it will be coming to Windows desktops and Copilot Search. It’s powered by OpenAI’s Sora video technology. Bing Video Creator joins other major AI-driven video creation tools, including OpenAI’s Sora, Adobe Firefly, Google Veo, Runway and Meta Movie Gen.

You can check out what Google’s latest Veo 3 model can do if you’re willing to pay for Gemini Ultra. The technology is moving quickly, with more options now available, some free and others offered for a fee or as part of AI service subscriptions.

How to use Bing Video Creator

Finding and using Bing Video Creator isn’t instantly intuitive, especially if you’re not already using the Bing Search app. I accessed the feature by tapping the box in the bottom right of the app’s home screen.

That brings up a collection of apps within the app. Look for Video Creator on the bottom left. There, you can create a still image or a video by typing in a text prompt. Using the Fast option, which is the default, should generate the short video in moments.

You can also type «Create a video of…» directly in the app’s main search bar if you don’t want to hunt for the feature. You can download and share the video.

When I tried it out, I found the video quality wasn’t very high, and the clip wasn’t easy to download directly from the app. Sharing a link to the creation and viewing it outside the app offers an option to download the full video.

Microsoft says it will keep your video creations available for 90 days.

Choice of AI video generators

Microsoft’s entry into AI video making is giving people another free option that seems geared toward casual users.

Many people who work in AI businesses, such as Matt Psencik, director of security and product design research at Tanium, are following the rollout of these products, which began with Sora last year. Psencik says one of them has been the most impressive.

"Google’s launch of Veo 3 for Gemini is a standout," he tells me, "in object permanence, realistic physics and overall visual fidelity. These developments are beginning to erase the line between ‘clearly AI-generated’ and ‘convincingly real.’"

The risk, Psencik says, is that realistic video generation could be exploited for deepfakes or used to hijack someone else’s identity. Most of the AI video generators have guardrails or filters on what kind of content users can request to generate, whether it’s to avoid copyright issues or to prevent hate speech and propaganda.

But, Psencik tells me, that’s not stopping AI bots from posting fake videos online that many people can’t tell apart from reality.

"As AI-generated video becomes nearly indistinguishable from reality, it’s only a matter of time before these tools are regularly weaponized to impersonate real people at scale," he says.
