Technologies
I Saw the AI Future of Video Games: It Starts With a Character Hopping Over a Box
At the 2025 Game Developers Conference, graphics-chip maker Nvidia showed off its latest tools that use generative AI to augment future games.

At its own GTC AI show in San Jose, California, earlier this month, graphics-chip maker Nvidia unveiled a plethora of partnerships and announcements for its generative AI products and platforms. At the same time, in San Francisco, Nvidia held behind-closed-doors showcases alongside the Game Developers Conference to show game-makers and media how its generative AI technology could augment the video games of the future.
Last year, Nvidia’s GDC 2024 showcase had hands-on demonstrations where I was able to speak with AI-powered nonplayer characters, or NPCs, in pseudo-conversations. They replied to things I typed out with reasonably contextual responses (though not quite as natural as scripted ones). AI also radically modernized old games with a contemporary graphics look.
This year, at GDC 2025, Nvidia once again invited industry members and press into a hotel room near the Moscone Center, where the convention was held. In a large room ringed with computer rigs packed with its latest GeForce RTX 5070, 5080 and 5090 GPUs, the company showed off more ways gamers could see generative AI remastering old games, offering new options for animators and evolving NPC interactions.
Nvidia also demonstrated how its latest AI graphics rendering tech, DLSS 4 for its GPU line, improves image quality, path-traced lighting and frame rates in modern games, features that affect gamers every day, though these efforts are more conventional than Nvidia’s other experiments. While some of these advancements rely on studios to implement new tech into their games, others are available right now for gamers to try.
Making animations from text prompts
Nvidia detailed a new tool that generates character model animations based on text prompts — sort of like if you could use ChatGPT in iMovie to make your game’s characters move around in scripted action. The goal? Save developers time. Using the tool could turn programming a several-hour sequence into a several-minute task.
Body Motion, as the tool is called, can be plugged into many digital content creation platforms; Nvidia Senior Product Manager John Malaska, who ran my demo, used Autodesk Maya. To start the demonstration, Malaska set up a sample situation in which he wanted one character to hop over a box, land and move forward. On the timeline for the scene, he selected the moment for each of those three actions and wrote text prompts to have the software generate the animation. Then it was time to tinker.
To refine his animation, he used Body Motion to generate four different variations of the character hopping and chose the one he wanted. (All animations are generated from licensed motion capture data, Malaska said.) Then he specified where exactly he wanted the character to land, and then selected where he wanted them to end up. Body Motion simulated all the frames in between those carefully selected motion pivot points, and boom: animation segment achieved.
In the next section of the demo, Malaska had the same character walking through a fountain to get to a set of stairs. He could edit with text prompts and timeline markers to have the character sneak around and circumvent the courtyard fixtures.
“We’re excited about this,” Malaska said. “It’s really going to help people speed up and accelerate workflows.”
He pointed to situations where a developer may get an animation but want it to run slightly differently and send it back to the animators for edits. A far more time-consuming scenario would be if the animations had been based on actual motion capture, and if the game required such fidelity, getting mocap actors back to record could take days, weeks or months. Tweaking animations with Body Motion based on a library of motion capture data can circumvent all that.
I’d be remiss not to worry about motion capture artists and whether Body Motion could be used to replace their work in part or in whole. Generously, this tool could be put to good use making animatics and virtually storyboarding sequences before bringing in professional artists to motion capture finalized scenes. But like any tool, it all depends on who’s using it.
Body Motion is scheduled to be released later in 2025 under the Nvidia Enterprise License.
Another stab at remastering Half-Life 2 using RTX Remix
At last year’s GDC, I’d seen some remastering of Half-Life 2 with Nvidia’s platform for modders, RTX Remix, which is meant to breathe new life into old games. Nvidia’s latest stab at reviving Valve’s classic game was released to the public as a free demo, which gamers can download on Steam to check out for themselves. What I saw of it in Nvidia’s press room was ultimately a tech demo (and not the full game), but it still shows off what RTX Remix can do to update old games to meet modern graphics expectations.
Last year’s RTX Remix Half-Life 2 demonstration was about seeing how old, flat wall textures could be updated with depth effects to, say, make them look like grouted cobblestone, and that’s present here too. When looking at a wall, “the bricks seem to jut out because they use parallax occlusion mapping,” said Nyle Usmani, senior product manager of RTX Remix, who led the demo. But this year’s demo was more about lighting interaction — even to the point of simulating the shadow passing through the glass covering the dial of a gas meter.
Usmani walked me through all the lighting and fire effects, which modernized some of the more iconically haunting parts of Half-Life 2’s fallen Ravenholm area. But the most striking application was in an area where the iconic headcrab enemies attack, when Usmani paused and pointed out how backlight was filtering through the fleshy parts of the grotesque pseudo-zombies, which made them glow a translucent red, much like what happens when you put a finger in front of a flashlight. Coinciding with GDC, Nvidia released this effect, called subsurface scattering, in a software development kit so game developers can start using it.
RTX Remix has other tricks that Usmani pointed out, like a new neural shader for the latest version of the platform — the one in the Half-Life 2 demo. Essentially, he explained, a bunch of neural networks train live on the game data as you play, and tailor the indirect lighting to what the player sees, making areas lit more like they’d be in real life. In an example, he swapped between old and new RTX Remix versions, showing, in the new version, light properly filtering through the broken rafters of a garage. Better still, it bumped the frames per second to 100, up from 87.
“Traditionally, we would trace a ray and bounce it many times to illuminate a room,” Usmani said. “Now we trace a ray and bounce it only two to three times and then we terminate it, and the AI infers a multitude of bounces after. Over enough frames, it’s almost like it’s calculating an infinite amount of bounces, so we’re able to get more accuracy because it’s tracing less rays [and getting] more performance.”
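The bounce-truncation idea in that quote can be sketched in a few lines of toy Python. Everything here is a hypothetical stand-in (constant emission and reflectance, a closed-form “neural” estimate), not Nvidia’s implementation; it just illustrates how terminating real bounces early and inferring the rest can still land on the full infinite-bounce answer:

```python
MAX_REAL_BOUNCES = 3  # trace this many real bounces, then hand off to the estimator

def surface_emission(point):
    # Stand-in: light emitted at a surface point (constant for the toy scene).
    return 0.1

def surface_reflectance(point):
    # Stand-in: fraction of incoming light the surface reflects.
    return 0.5

def neural_radiance_estimate(point):
    # Stand-in for the AI that "infers a multitude of bounces": here it's
    # just the closed-form sum of the remaining geometric series of bounces.
    return surface_emission(point) / (1.0 - surface_reflectance(point))

def radiance(point, bounce=0):
    # Truncated path trace: a few real bounces, then terminate and infer the rest.
    if bounce == MAX_REAL_BOUNCES:
        return neural_radiance_estimate(point)
    return surface_emission(point) + surface_reflectance(point) * radiance(point + 1, bounce + 1)

print(radiance(0.0))  # 0.2 -- identical to the full infinite-bounce sum
```

In this toy scene the estimator is exact, so three real bounces plus the inferred tail reproduce the infinite sum; the real system trades that closed form for neural networks trained live on game data.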
Still, I was seeing the demo on an RTX 5070 GPU, which retails for $550, and the demo requires at least an RTX 3060 Ti, so owners of graphics cards older than that are out of luck. «That’s purely because path tracing is very expensive — I mean, it’s the future, basically the cutting edge, and it’s the most advanced path tracing,» Usmani said.
Nvidia ACE uses AI to help NPCs think
Last year’s NPC AI station demonstrated how nonplayer characters can uniquely respond to the player, but this year’s Nvidia ACE tech showed how players can suggest new thoughts for NPCs that’ll change their behavior and the lives around them.
The GPU maker demonstrated the tech as plugged into InZoi, a Sims-like game where players care for NPCs with their own behaviors. But with an upcoming update, players can toggle on Smart Zoi, which uses Nvidia ACE to insert thoughts directly into the minds of the Zois (characters) they oversee… and then watch them react accordingly. These thoughts can’t go against their own traits, explained Nvidia GeForce Tech Marketing Analyst Wynne Riawan, so they’ll send the Zoi in directions that make sense.
“So, by encouraging them with, for example, ‘I want to make people’s day feel better,’ it’ll encourage them to talk to more Zois around them,” Riawan said. “Try is the key word: They do still fail. They’re just like humans.”
Riawan inserted a thought into the Zoi’s head: “What if I’m just an AI in a simulation?” The poor Zoi freaked out but still ran to the public bathroom to brush her teeth, which fit her traits of, apparently, being really into dental hygiene.
Those NPC actions following up on player-inserted thoughts are powered by a small language model with half a billion parameters (large language models range from 1 billion to over 30 billion parameters, with higher counts giving more opportunity for nuanced responses). The one used in-game is based on the 8-billion-parameter Mistral NeMo Minitron model, shrunk down so it can run on older and less powerful GPUs.
“We do purposely squish down the model to a smaller model so that it’s accessible to more people,” Riawan said.
The Nvidia ACE tech runs on-device using computer GPUs — Krafton, the publisher behind InZoi, recommends a minimum GPU spec of an Nvidia RTX 3060 with 8GB of video memory to use this feature, Riawan said. Krafton gave Nvidia a “budget” of one gigabyte of VRAM in order to ensure the graphics card has enough resources to render, well, the graphics. Hence the need to minimize the parameters.
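A quick back-of-the-envelope calculation shows why that budget forces a small model. The 2-bytes-per-parameter figure below assumes FP16 weights, which is my assumption rather than a detail Nvidia shared:

```python
# Rough VRAM math for the half-billion-parameter model described above.
params = 500_000_000       # ~0.5 billion parameters, per the article
bytes_per_param = 2        # FP16 weights -- an assumed precision
weight_gib = params * bytes_per_param / 2**30
print(round(weight_gib, 2))  # ~0.93 GiB: weights alone nearly fill the 1GB budget
```

That leaves almost nothing for activations or caches, which is why the model gets “squished down” rather than run at a larger size.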
Nvidia is still internally discussing how or whether to unlock the ability to use larger-parameter language models if players have more powerful GPUs. Players may be able to see the difference, as the NPCs “do react more dynamically as they react better to your surroundings with a bigger model,” Riawan said. “Right now, with this, the emphasis is mostly on their thoughts and feelings.”
An early access version of the Smart Zoi feature will go out to all users for free, starting March 28. Nvidia sees it and the Nvidia ACE technology as a stepping stone that could one day lead to truly dynamic NPCs.
“If you have MMORPGs with Nvidia ACE in it, NPCs will not be stagnant and just keep repeating the same dialogue — they can just be more dynamic and generate their own responses based on your reputation or something. Like, ‘Hey, you’re a bad person, I don’t want to sell my goods to you,’” Riawan said.
The Future’s Here: Testing Out Gemini’s Live Camera Mode
Gemini Live’s new camera mode feels like the future when it works. I put it through a stress test with my offbeat collectibles.

“I just spotted your scissors on the table, right next to the green package of pistachios. Do you see them?”
Gemini Live’s chatty new camera feature was right. My scissors were exactly where it said they were, and all I did was pass my camera in front of them at some point during a 15-minute live session of me giving the AI chatbot a tour of my apartment. Google’s been rolling out the new camera mode to all Android phones using the Gemini app for free, after a two-week exclusive on Pixel 9 (including the new Pixel 9A) and Galaxy S25 smartphones. So, what exactly is this camera mode and how does it work?
When you start a live session with Gemini, you now have the option to enable a live camera view, where you can talk to the chatbot and ask it about anything the camera sees. Not only can it identify objects, but you can also ask questions about them — and it works pretty well for the most part. In addition, you can share your screen with Gemini so it can identify things you surface on your phone’s display.
When the new camera feature popped up on my phone, I didn’t hesitate to try it out. In one of my longer tests, I turned it on and started walking through my apartment, asking Gemini what it saw. It identified some fruit, ChapStick and a few other everyday items with no problem. I was wowed when it found my scissors.
That’s because I hadn’t mentioned the scissors at all. Gemini had silently identified them somewhere along the way and then recalled the location with precision. It felt so much like the future, I had to do further testing.
My experiment with Gemini Live’s camera feature followed the lead of the demo Google did last summer when it first showed off these live video AI capabilities. Gemini reminded the person giving the demo where they’d left their glasses, and it seemed too good to be true. But as I discovered, it was very true indeed.
Gemini Live will recognize a whole lot more than household odds and ends. Google says it’ll help you navigate a crowded train station or figure out the filling of a pastry. It can give you deeper information about artwork, like where an object originated and whether it was a limited edition piece.
It’s more than just a souped-up Google Lens. You talk with it, and it talks to you. I didn’t need to speak to Gemini in any particular way — it was as casual as any conversation, and way better than talking with the old Google Assistant that the company is quickly phasing out.
Google also released a new YouTube video for the April 2025 Pixel Drop showcasing the feature, and there’s now a dedicated page on the Google Store for it.
To get started, you can go live with Gemini, enable the camera and start talking. That’s it.
Gemini Live follows on from Google’s Project Astra, first revealed last year as possibly the company’s biggest “we’re in the future” feature, an experimental next step for generative AI capabilities, beyond simply typing or even speaking prompts into a chatbot like ChatGPT, Claude or Gemini. It comes as AI companies continue to dramatically increase the skills of AI tools, from video generation to raw processing power. Similar to Gemini Live, there’s Apple’s Visual Intelligence, which the iPhone maker released in a beta form late last year.
My big takeaway is that a feature like Gemini Live has the potential to change how we interact with the world around us, melding our digital and physical worlds just by holding the camera in front of almost anything.
I put Gemini Live to a real test
The first time I tried it, Gemini was shockingly accurate when I placed a very specific gaming collectible of a stuffed rabbit in my camera’s view. The second time, I showed it to a friend in an art gallery. It identified the tortoise on a cross (don’t ask me) and immediately identified and translated the kanji right next to the tortoise, giving both of us chills and leaving us more than a little creeped out. In a good way, I think.
I got to thinking about how I could stress-test the feature. I tried to screen-record it in action, but it consistently fell apart at that task. And what if I went off the beaten path with it? I’m a huge fan of the horror genre — movies, TV shows, video games — and have countless collectibles, trinkets and what have you. How well would it do with more obscure stuff — like my horror-themed collectibles?
First, let me say that Gemini can be both absolutely incredible and ridiculously frustrating in the same round of questions. I had roughly 11 objects that I was asking Gemini to identify, and it would sometimes get worse the longer the live session ran, so I had to limit sessions to only one or two objects. My guess is that Gemini attempted to use contextual information from previously identified objects to guess new objects put in front of it, which sort of makes sense, but ultimately, neither I nor it benefited from this.
Sometimes, Gemini was just on point, easily landing the correct answers with no fuss or confusion, but this tended to happen with more recent or popular objects. For example, I was surprised when it immediately guessed one of my test objects was not only from Destiny 2, but was a limited edition from a seasonal event from last year.
At other times, Gemini would be way off the mark, and I would need to give it more hints to get into the ballpark of the right answer. And sometimes, it seemed as though Gemini was taking context from my previous live sessions to come up with answers, identifying multiple objects as coming from Silent Hill when they were not. I have a display case dedicated to the game series, so I could see why it would want to dip into that territory quickly.
Gemini can get full-on bugged out at times. On more than one occasion, Gemini misidentified one of the items as a made-up character from the unreleased Silent Hill f game, clearly merging pieces of different titles into something that never was. The other consistent bug I experienced was when Gemini would produce an incorrect answer, and I would correct it and hint closer at the answer — or straight up give it the answer — only to have it repeat the incorrect answer as if it were a new guess. When that happened, I would close the session and start a new one, which wasn’t always helpful.
One trick I found was that some conversations did better than others. If I scrolled through my Gemini conversation list, tapped an old chat that had gotten a specific item correct, and then went live again from that chat, it would be able to identify the items without issue. While that’s not necessarily surprising, it was interesting to see that some conversations worked better than others, even if you used the same language.
Google didn’t respond to my requests for more information on how Gemini Live works.
I wanted Gemini to successfully answer my sometimes highly specific questions, so I provided plenty of hints to get there. The nudges were often helpful, but not always. Below are a series of objects I tried to get Gemini to identify and provide information about.
Today’s Wordle Hints, Answer and Help for April 26, #1407
Here are hints and the answer for today’s Wordle No. 1,407 for April 26. Hint: Fans of a certain musical group will rock out with this puzzle.

Looking for the most recent Wordle answer? Click here for today’s Wordle hints, as well as our daily answers and hints for The New York Times Mini Crossword, Connections, Connections: Sports Edition and Strands puzzles.
Today’s Wordle puzzle isn’t too tough. The letters are fairly common, and fans of a certain rock band might get a kick out of the answer. If you need a new starter word, check out our list of which letters show up the most in English words. If you need hints and the answer, read on.
Today’s Wordle hints
Before we show you today’s Wordle answer, we’ll give you some hints. If you don’t want a spoiler, look away now.
Wordle hint No. 1: Repeats
Today’s Wordle answer has no repeated letters.
Wordle hint No. 2: Vowels
There is one vowel in today’s Wordle answer.
Wordle hint No. 3: Start letter
Today’s Wordle answer begins with the letter C.
Wordle hint No. 4: Rock out
Today’s Wordle answer is the name of a legendary English rock band.
Wordle hint No. 5: Meaning
Today’s Wordle answer can refer to a violent confrontation.
TODAY’S WORDLE ANSWER
Today’s Wordle answer is CLASH.
Yesterday’s Wordle answer
Yesterday’s Wordle answer, April 25, No. 1406 was KNOWN.
Recent Wordle answers
April 21, No. 1402: SPATE
April 22, No. 1403: ARTSY
April 23, No. 1404: OZONE
April 24, No. 1405: GENIE
What’s the best Wordle starting word?
Don’t be afraid to use our tip sheet ranking all the letters in the alphabet by frequency of use. In short, you want starter words that lean heavily on E, A and R, and don’t contain Z, J or Q.
Some solid starter words to try:
ADIEU
TRAIN
CLOSE
STARE
NOISE
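That tip can be turned into a toy scoring script. The frequency weights below are illustrative stand-ins (not our actual tip sheet), and the scoring counts distinct letters only, since repeated letters waste a guess slot:

```python
from collections import Counter

# Illustrative letter weights -- higher means more common in five-letter words.
LETTER_FREQ = Counter({'e': 12, 'a': 9, 'r': 8, 'o': 7, 't': 7,
                       'i': 7, 's': 6, 'n': 6, 'l': 5, 'c': 4,
                       'd': 4, 'u': 3, 'z': 1, 'j': 1, 'q': 1})

def score(word):
    # Sum weights over distinct letters; Counter returns 0 for unlisted letters.
    return sum(LETTER_FREQ[ch] for ch in set(word.lower()))

starters = ['adieu', 'train', 'close', 'stare', 'noise']
print(sorted(starters, key=score, reverse=True))
# ['stare', 'noise', 'train', 'adieu', 'close']
```

Under these made-up weights, STARE edges out the vowel-heavy ADIEU because consonants like S, T and R carry real weight too.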
T-Mobile Adds New Top 5G Plans, T-Satellite and New 5-Year Price Locks
The new top unlimited plans, Experience More and Experience Beyond, shave some costs and add data and satellite options.

Just two years after expanding its lineup of cellular plans, T-Mobile this week announced two new plans that replace its Go5G Plus and Go5G Next offerings, refreshed its prepaid Metro line and wrapped them all in a promised five-year pricing guarantee.
To convert more subscribers, the carrier is also offering up to $800 to help customers pay off phone balances when switching from another carrier.
In a briefing with CNET, Jon Friar, president of T-Mobile’s consumer group, explained why the company is revamping and simplifying its array of mobile plans. “The pain point that’s out there over the last couple of years is rising costs all around consumers,” Friar said. “For us to be able to bring more value and even lower prices on [plans like] Experience More versus our former Go5G Plus is a huge win for consumers.”
The new plans went into effect April 23.
With these changes, CNET is already hard at work updating our picks for Best T-Mobile Plans, so check back soon for our recommendations.
More Experiences to define the T-Mobile experience
The top of the new T-Mobile postpaid lineup is two new plans: Experience More and Experience Beyond.
Experience More is the next generation of the Go5G Plus plan, which has unlimited 5G and 4G LTE access and unlimited Premium Data (download speeds up to 418Mbps and upload speeds up to 31Mbps). High-speed hotspot data is bumped up to 60GB from 50GB per month. The monthly price is now $5 lower per line than Go5G Plus.
The Experience More plan also gets free T-Satellite with Starlink service (the new name for T-Mobile’s satellite feature that uses Starlink’s constellation of satellites) through the end of 2025. Although T-Satellite is still officially in beta until July, customers can continue to get free access to the beta starting now. At the start of the new year, the service will cost $10 per month, a $5 drop from T-Mobile’s originally announced pricing. T-Satellite will be open to customers of other carriers for the same pricing beginning in July.
The new top-tier plan, Experience Beyond, also comes in $5 per line cheaper than its predecessor, Go5G Next. It has 250GB of high-speed hotspot data per month, up from 50GB, and more data when you’re traveling outside the US: 30GB in Canada and Mexico (versus 15GB) and 15GB in 215 countries (up from 5GB). T-Satellite service is included in the Experience Beyond plan.
However, one small change to the Experience plans affects that pricing: Taxes and fees, previously included in the Go5G Plus and Go5G Next prices, are now broken out separately. T-Mobile recently announced that one such fee, the Regulatory Programs and Telco Recovery Fee, would increase up to 50 cents per month.
According to T-Mobile, the Experience Beyond rates and features will be “rolling out soon” for customers currently on the Go5G Next plan.
The Essentials plan is staying in the lineup at the same cost of $60 per month for a single line, with the same 50GB of Premium Data and unlimited 5G and 4G LTE data. High-speed hotspot data is an optional $10-per-month add-on, as is T-Satellite access for $15 per month.
Also still in the mix is the Essentials Saver plan, an affordable option that has ranked high in CNET’s Best Cellphone Plans recommendations.
Corresponding T-Mobile plans, such as those for military members, first responders and people age 55 and older, are also getting refreshed with the new lineup.
T-Mobile’s plan shakeup is being driven in part by the current economic climate. Explaining the rationale behind the price reductions and the streamlined number of plans, Mike Katz, president of marketing, innovation and experience at T-Mobile, told CNET, “We’re in a weird time right now where prices everywhere are going up and they’ve happened over the last several years. We felt like there was an opportunity to compete with some simplicity, but more importantly, some peace of mind for customers.”
Existing customers who want to switch to one of the new plans can do so at the same rates offered to new customers. Or, if a current plan still works for them, they can continue without changes (although keep in mind that T-Mobile earlier this year increased prices for some legacy plans).
Five years of price stability
It’s nearly impossible to think about prices these days without warily eyeing how tariffs and US economic policy will affect what we pay for things. So it’s not surprising to see carriers build some cost stability into their plans. For instance, Verizon recently locked prices for three years on its plans.
Now, T-Mobile is building a five-year price guarantee for its T-Mobile and Metro plans. That pricing applies to talk, text and data amounts — not necessarily taxes and other fees that can fluctuate.
Given the uncertain outlook, it seems counterintuitive to lock in a longer rate. When asked about this, Katz said, “We feel like our job is to solve pain points for customers and we feel like this helps with this exact sentiment. It shifts the risk from customers to us. We’ll take the risk so they don’t have to.”
The price hold applies to new customers signing up for the plans as well as current customers switching to one. T-Mobile is offering the same deals and pricing to new and existing subscribers. Also, the five-year deal applies to pricing; it’s not a five-year plan commitment.
More money and options to encourage switchers
The promise of a five-year price guarantee is also intended to lure people from other carriers, particularly AT&T and Verizon. As further incentive, T-Mobile is offering up to $800 per line (distributed via a virtual prepaid Mastercard) to help pay off other carriers’ device contracts. This is a limited-time offer. There are also options to trade in old devices, including locked phones, to get up to four new flagship phones.
Or, if getting out of a contract isn’t an issue, T-Mobile can offer $200 in credit (up to $800 for four lines) to bring an existing number to the network.
Four new Metro prepaid plans
On the prepaid side, T-Mobile is rolling out four new Metro plans, which are also covered by the new five-year price guarantee:
• Metro Starter costs $25 per line per month for a family of four and there is no need to bring an existing number. (The cost is $105 the first month.)
• Metro Starter Plus runs $40 per month for a new phone, unlimited talk, text and 5G data when bringing an existing number. For $65 per month, new customers can get two lines and two new Samsung A15 phones. No autopay is required.
• Metro Flex Unlimited is $30 per line per month with autopay for four lines ($125 the first month) with unlimited talk, text and 5G data.
• Metro Flex Unlimited Plus costs $60 per month for the first line, $35 each for lines two and three, and lowers the price of the fourth line to $10 per month as more family members are added. Adding a tablet or smartwatch to an existing line costs $5. And streaming video, such as from the included Amazon Prime membership, comes through at HD quality.
See more: If you’re looking for phone plans, you may also be looking for a new cell phone. Here are CNET’s picks.