
I Saw the AI Future of Video Games: It Starts With a Character Hopping Over a Box

At the 2025 Game Developers Conference, graphics-chip maker Nvidia showed off its latest tools that use generative AI to augment future games.

At its own GTC AI show in San Jose, California, earlier this month, graphics-chip maker Nvidia unveiled a plethora of partnerships and announcements for its generative AI products and platforms. At the same time, in San Francisco, Nvidia held behind-closed-doors showcases alongside the Game Developers Conference to show game-makers and media how its generative AI technology could augment the video games of the future. 

Last year, Nvidia’s GDC 2024 showcase had hands-on demonstrations where I was able to speak with AI-powered nonplayer characters, or NPCs, in pseudo-conversations. They replied to things I typed out with reasonably contextual responses, though not quite as natural as scripted ones. AI also radically modernized old games for a contemporary graphics look.

This year, at GDC 2025, Nvidia once again invited industry members and press into a hotel room near the Moscone Center, where the convention was held. In a large room ringed with computer rigs packed with its latest GeForce RTX 5070, 5080 and 5090 GPUs, the company showed off more ways gamers could see generative AI remastering old games, offering new options for animators, and evolving NPC interactions.

Nvidia also demonstrated how DLSS 4, the latest AI rendering tech for its GPU line, improves image quality, path-traced lighting and frame rates in modern games. Those features affect gamers every day, even if they’re more conventional than Nvidia’s other experiments. While some of these advancements rely on studios to implement new tech into their games, others are available right now for gamers to try.

Making animations from text prompts

Nvidia detailed a new tool that generates character model animations based on text prompts — sort of like if you could use ChatGPT in iMovie to make your game’s characters move around in scripted action. The goal? Save developers time. Using the tool could turn programming a several-hour sequence into a several-minute task.

Body Motion, as the tool is called, can be plugged into many digital content creation platforms; Nvidia Senior Product Manager John Malaska, who ran my demo, used Autodesk Maya. To start the demonstration, Malaska set up a sample situation in which he wanted one character to hop over a box, land and move forward. On the timeline for the scene, he selected the moment for each of those three actions and wrote text prompts to have the software generate the animation. Then it was time to tinker.

To refine his animation, he used Body Motion to generate four different variations of the character hopping and chose the one he wanted. (All animations are generated from licensed motion capture data, Malaska said.) Then he specified exactly where he wanted the character to land and where he wanted them to end up. Body Motion simulated all the frames in between those carefully selected motion pivot points, and boom: animation segment achieved.
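
To make that workflow concrete, here’s a minimal, self-contained Python sketch of the pivot-point-and-interpolation loop. It is not Nvidia’s Body Motion API: every name is invented for illustration, the "generator" just labels candidate variations, and the in-betweening step is a trivial linear interpolation standing in for the real simulation.

```python
# Hypothetical sketch of the demonstrated workflow: pick pivot points on a
# timeline, attach a text prompt to each, pick one of several generated
# variations, then fill in the frames between pivots. Not Nvidia's Body
# Motion API; all names and stubbed steps are invented for illustration.

from dataclasses import dataclass

@dataclass
class Pivot:
    time: float                               # seconds on the scene timeline
    prompt: str                               # e.g. "hop over the box"
    position: tuple[float, float, float]      # where the character should be

def generate_variations(pivot: Pivot, n: int = 4) -> list[str]:
    # Stand-in for the generative step; the real tool samples n candidate
    # animations from licensed motion capture data.
    return [f"{pivot.prompt} (variation {i + 1})" for i in range(n)]

def inbetween(a: Pivot, b: Pivot, fps: int = 30) -> list[tuple[float, ...]]:
    # Stand-in for the simulation step: simple linear interpolation of the
    # character's root position between two chosen pivots.
    frames = max(1, int((b.time - a.time) * fps))
    return [
        tuple(a.position[k] + (b.position[k] - a.position[k]) * t / frames
              for k in range(3))
        for t in range(frames + 1)
    ]

pivots = [
    Pivot(0.0, "run toward the box", (0.0, 0.0, 0.0)),
    Pivot(1.5, "hop over the box", (2.0, 0.0, 0.0)),
    Pivot(3.0, "land and move forward", (4.0, 0.0, 0.0)),
]

for p in pivots:
    print(generate_variations(p)[0])          # animator would pick one of four
for a, b in zip(pivots, pivots[1:]):
    print(f"{len(inbetween(a, b))} frames filled in between {a.time}s and {b.time}s")
```

The real tool plugs into digital content creation apps like Autodesk Maya and draws its variations from motion capture data; the structure above only mirrors the steps Malaska demonstrated.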

In the next section of the demo, Malaska had the same character walking through a fountain to get to a set of stairs. He could edit with text prompts and timeline markers to have the character sneak around and circumvent the courtyard fixtures. 

“We’re excited about this,” Malaska said. “It’s really going to help people speed up and accelerate workflows.”

He pointed to situations where a developer gets an animation but wants it to run slightly differently and sends it back to the animators for edits. That scenario is far more time-consuming if the animations were based on actual motion capture: if the game requires that level of fidelity, getting mocap actors back in to record could take days, weeks or months. Tweaking animations with Body Motion based on a library of motion capture data can circumvent all that.

I’d be remiss not to worry about motion capture artists and whether Body Motion could be used to replace their work in part or in whole. Generously, this tool could be put to good use making animatics and virtually storyboarding sequences before bringing in professional artists to motion capture finalized scenes. But like any tool, it all depends on who’s using it.

Body Motion is scheduled to be released later in 2025 under the Nvidia Enterprise License.

Another stab at remastering Half-Life 2 using RTX Remix

At last year’s GDC, I’d seen some remastering of Half-Life 2 with Nvidia’s platform for modders, RTX Remix, which is meant to breathe new life into old games. Nvidia’s latest stab at reviving Valve’s classic game was released to the public as a free demo, which gamers can download on Steam to check out for themselves. What I saw of it in Nvidia’s press room was ultimately a tech demo (and not the full game), but it still shows off what RTX Remix can do to update old games to meet modern graphics expectations.

Last year’s RTX Remix Half-Life 2 demonstration was about seeing how old, flat wall textures could be updated with depth effects to, say, make them look like grouted cobblestone, and that’s present here too. When looking at a wall, “the bricks seem to jut out because they use parallax occlusion mapping,” said Nyle Usmani, senior product manager of RTX Remix, who led the demo. But this year’s demo was more about lighting interaction — even to the point of simulating the shadow passing through the glass covering the dial of a gas meter.

Usmani walked me through all the lighting and fire effects, which modernized some of the more hauntingly iconic parts of Half-Life 2’s fallen Ravenholm area. But the most striking application was in an area where the headcrab enemies attack. Usmani paused and pointed out how backlight filtered through the fleshy parts of the grotesque pseudo-zombies, making them glow a translucent red, much like what happens when you put a finger in front of a flashlight. Coinciding with GDC, Nvidia released this effect, called subsurface scattering, in a software development kit so game developers can start using it.

RTX Remix has other tricks that Usmani pointed out, like a new neural shader for the latest version of the platform — the one in the Half-Life 2 demo. Essentially, he explained, a bunch of neural networks train live on the game data as you play and tailor the indirect lighting to what the player sees, lighting areas more like they’d be lit in real life. In one example, he swapped between the old and new RTX Remix versions, showing, in the new version, light properly filtering through the broken rafters of a garage. Better still, it bumped the frame rate to 100 frames per second, up from 87.

“Traditionally, we would trace a ray and bounce it many times to illuminate a room,” Usmani said. “Now we trace a ray and bounce it only two to three times and then we terminate it, and the AI infers a multitude of bounces after. Over enough frames, it’s almost like it’s calculating an infinite amount of bounces, so we’re able to get more accuracy because it’s tracing less rays [and getting] more performance.”
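
Here’s a toy numerical sketch of that idea, assuming a single fixed surface albedo and standing in for the learned estimator with an exact geometric-series tail. It isn’t RTX Remix’s neural shader; it only shows the structure of tracing a couple of bounces explicitly and inferring the rest.

```python
# Toy illustration of the approach Usmani describes: trace a few bounces,
# terminate, and let an estimator supply the remaining indirect light. The
# "estimator" here is the closed-form geometric tail, not a neural network,
# purely to show the structure and the saved ray work.

ALBEDO = 0.6     # fraction of light a surface reflects at each bounce
EMITTED = 1.0    # light arriving directly from the source

def full_trace(max_bounces: int) -> float:
    # Classic approach: keep bouncing until the contribution is negligible.
    return sum(EMITTED * ALBEDO ** k for k in range(max_bounces + 1))

def truncated_trace(traced_bounces: int = 2) -> float:
    # Trace only a couple of bounces explicitly...
    explicit = sum(EMITTED * ALBEDO ** k for k in range(traced_bounces + 1))
    # ...then terminate and ask the estimator for the rest. Here it is exact
    # (the infinite geometric tail); in the real system it is a small neural
    # network trained live on the game's data.
    inferred_tail = EMITTED * ALBEDO ** (traced_bounces + 1) / (1.0 - ALBEDO)
    return explicit + inferred_tail

print(f"32 traced bounces:         {full_trace(32):.4f}")
print(f"2 bounces + inferred tail: {truncated_trace(2):.4f}")  # nearly identical, far fewer rays
```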

Still, I was seeing the demo on an RTX 5070 GPU, which retails for $550, and the demo requires at least an RTX 3060 Ti, so owners of graphics cards older than that are out of luck. “That’s purely because path tracing is very expensive — I mean, it’s the future, basically the cutting edge, and it’s the most advanced path tracing,” Usmani said.

Nvidia ACE uses AI to help NPCs think

Last year’s NPC AI station demonstrated how nonplayer characters can uniquely respond to the player, but this year’s Nvidia ACE tech showed how players can suggest new thoughts for NPCs that will change their behavior and the lives of those around them.

The GPU maker demonstrated the tech as plugged into InZoi, a Sims-like game where players care for NPCs with their own behaviors. But with an upcoming update, players can toggle on Smart Zoi, which uses Nvidia ACE to insert thoughts directly into the minds of the Zois (characters) they oversee… and then watch them react accordingly. These thoughts can’t go against their own traits, explained Nvidia GeForce tech marketing analyst Wynne Riawan, so they’ll send the Zoi in directions that make sense.

“So, by encouraging them, for example, ‘I want to make people’s day feel better,’ it’ll encourage them to talk to more Zois around them,” Riawan said. “Try is the key word: They do still fail. They’re just like humans.”

Riawan inserted a thought into the Zoi’s head: “What if I’m just an AI in a simulation?” The poor Zoi freaked out but still ran to the public bathroom to brush her teeth, which fit her traits of, apparently, being really into dental hygiene.

Those NPC actions following up on player-inserted thoughts are powered by a small language model with half a billion parameters (large language models range from 1 billion to over 30 billion parameters, with higher counts giving more opportunity for nuanced responses). The one used in-game is based on the 8-billion-parameter Mistral NeMo Minitron model, shrunk down so it can run on older and less powerful GPUs.

“We do purposely squish down the model to a smaller model so that it’s accessible to more people,” Riawan said.

The Nvidia ACE tech runs on-device using computer GPUs — Krafton, the publisher behind InZoi, recommends a minimum GPU spec of an Nvidia RTX 3060 with 8GB of video memory to use this feature, Riawan said. Krafton gave Nvidia a “budget” of one gigabyte of VRAM in order to ensure the graphics card has enough resources to render, well, the graphics. Hence the need to minimize the parameters.
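
As a rough illustration of that on-device pattern, here’s a sketch using Hugging Face Transformers with a publicly available half-billion-parameter instruct model as a stand-in. To be clear, this is not Nvidia ACE or the distilled model InZoi ships; the model choice, prompt format and generation settings are assumptions made purely for illustration.

```python
# Sketch of the on-device pattern described above: a ~0.5B-parameter instruct
# model small enough to share a GPU with the game's rendering. Qwen2.5-0.5B-
# Instruct is only a similarly sized public stand-in, not the ACE model.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2.5-0.5B-Instruct"   # stand-in ~0.5B-parameter model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)  # fp16 weights for a 0.5B model fit well within a ~1GB VRAM budget

traits = "cheerful, obsessed with dental hygiene"
injected_thought = "What if I'm just an AI in a simulation?"

messages = [
    {"role": "system",
     "content": f"You are a simulated character with these traits: {traits}. "
                "Decide your next action in one short sentence; stay in character."},
    {"role": "user", "content": f"A thought crosses your mind: '{injected_thought}'"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=60, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```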

Nvidia is still internally discussing how or whether to unlock the ability to use larger-parameter language models if players have more powerful GPUs. Players may be able to see the difference, as the NPCs “do react more dynamically as they react better to your surroundings with a bigger model,” Riawan said. “Right now, with this, the emphasis is mostly on their thoughts and feelings.”

An early access version of the Smart Zoi feature will go out to all users for free, starting March 28. Nvidia sees it and the Nvidia ACE technology as a stepping stone that could one day lead to truly dynamic NPCs.

“If you have MMORPGs with Nvidia ACE in it, NPCs will not be stagnant and just keep repeating the same dialogue — they can just be more dynamic and generate their own responses based on your reputation or something. Like, ‘Hey, you’re a bad person, I don’t want to sell my goods to you,’” Riawan said.


Anthropic Launched New Claude 4 Gen AI Models. Here’s What They Do

The models can now use tools like web searches during extended reasoning tasks.

The latest versions of Anthropic’s Claude generative AI models made their debut Thursday, including a heavier-duty model built specifically for coding and complex tasks.

Anthropic launched the new Claude 4 Opus and Claude 4 Sonnet models during its Code with Claude developer conference, and executives said the new tools mark a significant step forward in terms of reasoning and deep thinking skills.

The company launched the prior model, Claude 3.7 Sonnet, in February. Since then, competing AI developers have also upped their game. OpenAI released GPT-4.1 in April, with an emphasis on an expanded context window, along with the new o3 reasoning model family. Google followed in early May with an updated version of Gemini 2.5 Pro that it said is better at coding.

Claude 4 Opus is a larger, more resource-intensive model built to handle particularly difficult challenges. Anthropic CEO Dario Amodei said test users have seen it quickly handle tasks that might have taken a person several hours to complete. 

“In many ways, as we’re often finding with large models, the benchmarks don’t fully do justice to it,” he said during the keynote event.

Claude 4 Sonnet is a leaner model, with improvements built on Anthropic’s Claude 3.7 Sonnet model. The 3.7 model often had problems with overeagerness and sometimes did more than the person asked it to do, Amodei said. While it’s a less resource-intensive model, it still performs well, he said. 

“It actually does just as well as Opus on some of the coding benchmarks, but I think it’s leaner and more narrowly focused,” Amodei said.

Anthropic said the models have a new capability, still being beta tested, in which they can use tools like web searches while engaged in extended reasoning. The models can alternate between reasoning and using tools to get better responses to complex queries.
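
For developers, that pattern looks roughly like the sketch below, using Anthropic’s Python SDK. The model ID, the extended-thinking parameter and the web-search tool identifier reflect Anthropic’s public documentation as I understand it, so treat them as assumptions to verify against the current API reference.

```python
# Minimal sketch of extended thinking combined with a server-side tool.
# Model ID, thinking parameter shape and tool type string are assumptions;
# check Anthropic's current API docs before relying on them.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-20250514",            # assumed Claude 4 Opus model ID
    max_tokens=16000,
    thinking={"type": "enabled", "budget_tokens": 10000},  # extended thinking
    tools=[{
        "type": "web_search_20250305",         # assumed web search tool type
        "name": "web_search",
        "max_uses": 3,
    }],
    messages=[{
        "role": "user",
        "content": "Compare the latest stable releases of the three most "
                   "popular Rust web frameworks and recommend one for a "
                   "small internal API.",
    }],
)

# The response interleaves thinking blocks, tool-use blocks and text blocks.
for block in response.content:
    print(block.type)
```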

The models both offer near-instant response modes and extended thinking modes. 

All of the paid plans offer both Opus and Sonnet models, while the free plan just has the Sonnet model.


Today’s NYT Strands Hints, Answers and Help for May 23, #446

Here are hints and answers for the NYT Strands puzzle No. 446 for May 23.

Looking for the most recent Strands answer? Click here for our daily Strands hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections and Connections: Sports Edition puzzles.


Today’s NYT Strands puzzle has a humorous title, and if you understand the reference, you’ll know what words to look for. If you need hints and answers, read on.

I go into depth about the rules for Strands in this story. 

If you’re looking for today’s Wordle, Connections and Mini Crossword answers, you can visit CNET’s NYT puzzle hints page.

Read more: NYT Connections Turns 1: These Are the 5 Toughest Puzzles So Far

Hint for today’s Strands puzzle

Today’s Strands theme is: The musical fruit

If that doesn’t help you, here’s a clue: There are magical ones in fairy tales.

Clue words to unlock in-game hints

Your goal is to find hidden words that fit the puzzle’s theme. If you’re stuck, find any words you can. Every time you find three words of four letters or more, Strands will reveal one of the theme words. These are the words I used to get those hints, but any words of four or more letters that you find will work:

  • REEK, GADS, PLAY, PLAYS, PITA, DIAL, FALL, PALL, PALLS, FALLS, GENIE, BEEN, LACK, DENY, NILL.

Answers for today’s Strands puzzle

These are the answers that tie into the theme. The goal of the puzzle is to find them all, including the spangram, a theme word that reaches from one side of the puzzle to the other. When you’ve got all of them (I originally thought there were always eight but learned that the number can vary), every letter on the board will be used. Here are the nonspangram answers:

  • FAVA, NAVY, BLACK, GREEN, PINTO, KIDNEY, CANNELLINI

Today’s Strands spangram

Today’s Strands spangram is BEANSALAD. To find it, start with the B that’s three letters to the right on the top row, and wind down.


The Marvel Rivals Auto Battler Is a Natural Evolution of Hero Shooters

Move over Teamfight Tactics. Marvel Rivals’ new limited-time mode is the perfect addition to the auto battler genre.

Marvel Rivals has been a breath of fresh air for the hero shooter genre, combining popular comic book characters and chaotic third-person shooter action to create epic team fights that keep me coming back for more.

Fast-paced combat is the name of the game in Marvel Rivals, which is why it could come across as a confusing development that the next limited-time mode launching in Marvel Rivals Season 2.5 is a form of auto battler (also frequently referred to as auto chess).

Ultron’s Battle Matrix Protocol is an experimental mode launching on June 6, in which six players draft teams of heroes to go head to head with their opponents’ drafts. You’ll be able to support your AI teams while the new hero Ultron (also debuting in Season 2.5) chips in extra healing and damage to the fight.

Aside from the fact that it’ll be cool to stage your own version of Marvel Comics’ Secret Wars, is the decision to add an auto battler to Marvel Rivals (which has previously released limited-time modes that mostly tracked with the shooter’s core gameplay loop) really all that far out of left field? I don’t think so.

Why is Marvel Rivals getting an auto battler mode?

The new mode is similar to multiplayer online battle arena (MOBA) spinoffs such as Dota Auto Chess and League of Legends’ Teamfight Tactics. I think drawing the line from a MOBA to an auto battler is easy for most people.

MOBAs are strategy games first and foremost, where players pick and choose items to craft builds that will help them win their lane, while also contributing to big team fights. Players need to work together to overwhelm the other team and push them back to their spawn.

MOBAs and auto battlers are both about team synergy, positioning and picking the right upgrades, so it’s not surprising to people when characters from a game in one of these genres appear in another.

There are many people who wouldn’t associate hero shooters with MOBAs in the slightest. Games like Marvel Rivals have a high ceiling for very different mechanical skills — especially aiming. But hero shooters are also complex strategy games that share many of the same fundamentals as a MOBA.

Putting together a viable team composition with strong characters is the most important part of a hero shooter — and Marvel Rivals takes this to another level with team-up abilities that require multiple heroes to activate.

An auto battler will allow people to experiment with team compositions that don’t often get played in real Marvel Rivals matches, and could even help the community find new experimental hero combinations that have the potential to shake up common ways people play the game.

In Ultron’s Battle Matrix Protocol, players will be able to put together balanced teams, lock in the risky GATOR strategy (which is nightmarishly similar to Overwatch’s GOATS meta) or fall back on triple support with brand-new upgrades that change how the game works.

Absurd power scaling might look like Overwatch 2’s Stadium mode

There’s a clear rivalry between Overwatch 2 and Marvel Rivals, since they’re the two biggest hero shooters on the market right now. Blizzard’s hero shooter is entering its ninth year of life with flagging interest, but its solid fundamentals have been a high bar for Marvel Rivals to hurdle.

Both games have been trying out bold new things — Overwatch 2 recently shipped the MOBA-like Stadium mode, which lets players augment popular abilities and take powerful passives as they fight through a flurry of different objectives in a best-of-seven gauntlet.

Ultron’s Battle Matrix Protocol in some ways feels like NetEase’s response to Blizzard’s big success with Stadium mode. You might not have quite as much influence on the outcome of each battle, but this serves as a proof of concept for Marvel Rivals’ hero power scaling.

This new mode also lets players pick passive abilities that buff certain roles as well as more powerful hero-specific upgrades that drastically alter the course of a fight, so the snowballing power of a Stadium match is very much emulated here.

In the Season 2.5 developer vision video, we got a look at some of the upgrades.

Venom can grow into a hulking monster after devouring enemies with his ultimate ability, Hela cuts a swath through the playing field with a field of flying daggers, Psylocke zips around her ultimate ability’s area of effect at twice her normal speed and Namor summons many more squid turrets to attack his enemies.

It’s safe to assume that every character in the game will have some kind of special power unlocked in the later rounds of an Ultron’s Battle Matrix Protocol match. This definitely isn’t NetEase reheating Blizzard’s nachos, but I do think it’s indicative of a broader shift toward making hero shooters feel a little bit more chaotic and unrestrained.

Game balance is important, but one of the biggest draws of this genre is that each character is a unique power fantasy you can’t find elsewhere. I can’t imagine such in-depth upgrades were designed for a one-and-done mode, so it’ll be interesting to see where they might show up next.
