Technologies

How Team USA’s Olympic Skiers and Snowboarders Got an Edge From Google AI

Google engineers hit the slopes with Team USA’s skiers and snowboarders to build a custom AI training tool.

Team USA’s skiers and snowboarders are going home with some new hardware, including a few gold medals, from the 2026 Olympics. Along with the years of hard work that go into being an Olympic athlete, this year’s crew had an extra edge in their training thanks to a custom AI tool from Google Cloud.

US Ski and Snowboard, the governing body for the US national teams, oversees the training of the best skiers and snowboarders in the country to prepare them for big events, such as national championships and the Olympics. The organization partnered with Google Cloud to build an AI tool to offer more insight into how athletes are training and performing on the slopes.

Video review is a big part of winter sports training. A coach stands on the sidelines recording an athlete’s run, then reviews the footage with the athlete afterward to spot errors. But this process is somewhat dated, Anouk Patty, chief of sport at US Ski and Snowboard, told me. That’s where Google came in, bringing new AI-powered data insights to the training process.

Google Cloud engineers hit the slopes with the skiers and snowboarders to understand how to build an actually useful AI model for athletic training. They used video footage as the base of the currently unnamed AI tool. Gemini did a frame-by-frame analysis of the video, which was then fed into spatial intelligence models from Google DeepMind. Those models were able to take the 2D rendering of the athlete from the video and transform it into a 3D skeleton of an athlete as they contort and twist on runs. 
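Google hasn’t published the model details, but the general idea of "lifting" 2D pose keypoints from video frames into a 3D skeleton can be shown with a toy sketch. Everything here is illustrative: the joint names, the fixed depth prior and the function names are invented for the example, not Google’s actual pipeline.

```python
# Toy illustration of 2D-to-3D pose "lifting": each video frame yields
# 2D joint keypoints, and a model assigns each joint a depth. Real systems
# learn depth from data; this sketch just attaches a fixed per-joint depth
# prior to show the shape of the data flow.

JOINTS = ["head", "hip", "knee", "ankle"]
DEPTH_PRIOR = {"head": 0.0, "hip": 0.1, "knee": 0.15, "ankle": 0.2}  # hypothetical values

def lift_frame(keypoints_2d):
    """Map {joint: (x, y)} to {joint: (x, y, z)} using the depth prior."""
    return {j: (x, y, DEPTH_PRIOR[j]) for j, (x, y) in keypoints_2d.items()}

def lift_video(frames):
    """A video is just a list of per-frame keypoint dicts; lift each frame."""
    return [lift_frame(f) for f in frames]

# Three identical dummy frames of 2D keypoints stand in for real footage.
frames_2d = [{j: (0.5, 0.1 * i) for i, j in enumerate(JOINTS)} for _ in range(3)]
skeleton_3d = lift_video(frames_2d)
```

A real model would infer depth from learned spatial context rather than a constant table, but the input and output shapes are the point here: flat per-frame keypoints in, a 3D skeleton over time out.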

Final touches from Gemini help the AI tool analyze the physics in the pixels, according to Ravi Rajamani, global head of Google’s AI Blackbelt team, which worked on the project. Coaches and athletes told the engineers the specific metrics they wanted to track — speed, rotation, trajectory — and the Google engineers coded the model to make it easy to monitor them and compare between different videos. There’s also a chat interface to ask Gemini questions about performance.
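The article doesn’t say how those metrics are computed, but speed and rotation rate can in principle be derived from position and orientation samples over time. A minimal sketch, with made-up sample numbers rather than Team USA data:

```python
import math

def speed(p0, p1, dt):
    """Average speed (m/s) between two 3D positions sampled dt seconds apart."""
    return math.dist(p0, p1) / dt

def rotation_rate(angle0, angle1, dt):
    """Average angular velocity (degrees/s) between two orientation samples."""
    return (angle1 - angle0) / dt

# At 30 frames per second, consecutive frames are 1/30 s apart.
dt = 1 / 30
v = speed((0.0, 0.0, 0.0), (0.0, 0.5, 0.0), dt)  # 0.5 m per frame -> 15 m/s
w = rotation_rate(0.0, 12.0, dt)                 # 12 degrees per frame -> 360 deg/s
```

A production tool would smooth these estimates over many frames, since frame-to-frame pose jitter makes single-step differences noisy.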

«From just a video, we are actually able to recreate it in 3D, so you don’t need expensive equipment, [like] sensors, that get in the way of an athlete performing,» Rajamani said.

Coaches are undeniably the experts on the mountain, but the AI can act as a kind of gut check. The data can help confirm or deny what coaches are seeing and give them extra insight into the specifics of each athlete’s performance. It can catch things that humans would struggle to see with the naked eye or in poor video quality, like where an athlete was looking while doing a trick and the exact speed and angle of a rotation. 

«It’s data that they wouldn’t otherwise have,» Patty said. The 3D skeleton is especially helpful because it makes it easier to see movement obscured by the puffy jackets and pants athletes wear, she said. 

For elite athletes in skiing and snowboarding, making small adjustments can mean the difference between a gold medal and no medal at all. Technological advances in training are meant to give athletes every available tool for improvement.

«You’re always trying to find that 1% that can make the difference for an athlete to get them on the podium or to win,» Patty said. It can also democratize coaching. «It’s a way for every coach who’s out there in a club working with young athletes to have that level of understanding of what an athlete should do that the national team athletes have.»

For Google, this purpose-built AI tool is «the tip of the iceberg,» Rajamani said. There are a lot of potential future use cases, including expanding the base model to be customized to other sports. It also lays the foundation for work in sports medicine, physical therapy, robotics and ergonomics — disciplines where understanding body positioning is important. But for now, there’s satisfaction in knowing the AI was built to actually help real athletes.

«This was not a case of tech engineers building something in the lab and handing it over,» Rajamani said. «This is a real-world problem that we are solving. For us, the motivation was building a tool that provides a true competitive advantage for our athletes.»

Amazon Speeds Up Delivery Even More With 1- and 3-Hour Options

The retailer says the one-hour option is available in hundreds of cities, with discounted shipping for Prime members.

Same-day delivery apparently isn’t fast enough for some Amazon shoppers. The retail giant said on Tuesday it’s adding new shipping options that will get products to front doors within a one- or three-hour window.

The company said in its announcement that the one-hour option is available in hundreds of cities across the US, while the three-hour option is now live in more than 2,000 areas. Amazon’s web page at amazon.com/getitfast shows shoppers whether those options are available at their location. More than 90,000 products will be available for those shipping windows, the company said.

For those who can’t get those services (including the author of this post, who lives between Austin and San Antonio in Texas), a message will display: «3-hour delivery is currently unavailable. Check back at a later time or shop products with Same-Day delivery below.»

The faster delivery options aren’t cheap: one-hour delivery costs $20 and three-hour delivery $15 for shoppers without an Amazon Prime account, or $10 and $5, respectively, for Prime subscribers.
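The fee structure reduces to a simple lookup. A quick sketch using the prices reported above (the function name is ours, not Amazon’s):

```python
# Reported fees per delivery window, in dollars: {window_hours: (non_prime, prime)}.
FEES = {1: (20, 10), 3: (15, 5)}

def delivery_fee(window_hours, is_prime):
    """Return the delivery fee for a 1- or 3-hour window."""
    non_prime, prime = FEES[window_hours]
    return prime if is_prime else non_prime
```

So a Prime member pays half or a third of what a non-member pays for the same window.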

Last year, the company rolled out faster delivery options to 4,000 additional areas.

In a video episode of Learn and Be Curious, the podcast hosted by Doug Herrington, Amazon’s CEO of worldwide stores, Kandace Kapps, the director of the company’s same-day strategy team, spoke in more detail about the challenges of fast shipping. Kapps discussed shifts in customer buying habits over the last few years, such as more people buying household essentials like toilet paper on Amazon.

She said that Amazon can deliver so quickly by placing same-day delivery hubs close to customers in metro areas and by getting products ready to ship within 15 minutes, aided by warehouse robots.

«I think customers are going to continue to get magically surprised by how fast we can deliver to their doorstep,» Kapps said. 

Herrington said fast shipping increases sales: «When we speed up the service, the probability that somebody buys a product from us goes up.»

Other retailers, including Walmart, have been adding same-day delivery options or exploring other ways to speed up shipping times to compete with Amazon. 

Removing buyers’ moments of hesitation

Part of Amazon’s strategy, which has involved a massive buildout of locations, deployment of thousands of trucks, deals with other delivery services and investment in logistics software, is actually pretty simple: being there when people need last-minute items or make impulse buys.

«It’s about removing the last moment where you would’ve reconsidered the purchase,» said Stephanie Carls, retail insights expert at coupon and promotional-code website RetailMeNot, a sibling site of CNET. «It changes how you shop, not just how fast you get things.» 

Carls said that Amazon’s super-fast delivery is removing the timeframe when people might change their minds about a purchase.

«There used to be a gap between deciding to buy something and actually having it. That’s when you’d price check, rethink it, or decide you didn’t need it after all,» she said. «This closes that gap.»

The retail expert said that competitors, including Walmart and Target, have been speeding up delivery times in some markets. Still, they’re not matching Amazon’s scale or product range at those speeds or levels of consistency. 

«And that’s what starts to make everyone else feel slow,» Carls said. «Amazon’s advantage is how tightly connected its technology, inventory and delivery networks are, which makes this level of speed more repeatable.»

Dog Health Goes Digital With New AI Chatbot

Fi Intelligence allows you to ask questions of a specially tailored pet health chatbot, but it’s not meant to replace vet visits.

It might be time to rethink what it means to be sick as a dog. On Tuesday, Fi, a smart pet technology company, announced a new AI-powered chatbot to help owners stay on top of their dog’s health using a blend of personal information and generalized dog breed data.

The AI agent, which the company is calling Fi Intelligence, is integrated directly into the Fi app. It has access to all the information gathered about your dog across the entire suite of Fi products, including the Fi Series 3 Plus and Fi Mini dog collars, as well as information and documents uploaded by the pet owner. The service is for dogs only (not cats, rabbits or other pets).

If you already own a Fi smart collar, existing data will be incorporated into the AI agent’s dataset to help it answer your questions.

When creating Fi Intelligence, the company identified a multitude of common questions that dog owners have, including whether their animal friend is walking or sleeping enough, or scratching more than usual. The chatbot was created to help owners find answers to these questions quickly and easily, according to Fi.

Fi designed its agent to answer these questions using a mix of general information about a dog’s breed, personal information and biometric data gathered by Fi smart pet collars. 

Pet owners can ask the chatbot questions in plain English and get back detailed responses. Fi Intelligence is equipped to answer general questions, contrast your dog’s current data to previous time periods and compare your dog’s data to other dogs of the same breed.
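Fi hasn’t described its internals, but comparing a dog’s data to other dogs of the same breed boils down to measuring deviation from a baseline. A hypothetical sketch — the breed averages and function names below are invented for illustration, not Fi’s data:

```python
from statistics import mean

# Hypothetical breed baselines: average daily steps and nightly sleep hours.
BREED_BASELINES = {"border collie": {"steps": 22000, "sleep_h": 11.0}}

def compare_to_breed(breed, daily_steps, daily_sleep):
    """Express a week of readings as a percentage of the breed average."""
    base = BREED_BASELINES[breed]
    return {
        "steps_pct": 100 * mean(daily_steps) / base["steps"],
        "sleep_pct": 100 * mean(daily_sleep) / base["sleep_h"],
    }

# A dog averaging 11,000 steps against a 22,000-step breed baseline.
report = compare_to_breed("border collie", [11000] * 7, [11.0] * 7)
```

A chatbot layered on top of numbers like these could then phrase the result in plain English — e.g. that the dog is noticeably less active than is typical for its breed.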

Fi says its chatbot is different from general-purpose AI agents because it has been trained on a proprietary dataset containing «the largest repository of real-world canine activity, sleep and behavior data in the world.»

Fi Intelligence doesn’t replace a trip to the vet — and the company stresses it’s not supposed to. Rather, the agent is supposed to grant owners «informed confidence» about their dog’s health and can help them «show up [to the vet] with specific, documented observations drawn from weeks of continuous data.»

«The strongest signal from our beta was that owners aren’t using this to replace their vet,» said Fi’s Vice President of Product Darrell Stone. «They’re using it to show up better prepared.»

According to Fi, the Fi Intelligence integration will provide the most complete dog health profile available in the app so far. Fi Intelligence is available to all Fi members immediately.

Nvidia Teases DLSS 5 and Gamers Aren’t Impressed

The new AI technology is making some big changes to video game graphics that hardly anyone seems to like.

Nvidia opened its GTC conference with a keynote by CEO Jensen Huang, revealing the company’s latest tech. Among the raft of AI developments, gamers got a look at the upcoming version of its AI-powered upscaling and optimization technology, DLSS (Deep Learning Super Sampling), which Nvidia touted as the «biggest breakthrough in computer graphics». 

Nvidia published a video illustrating how DLSS 5 can enhance graphics in Resident Evil Requiem, Starfield and other games, showing before-and-after takes. But gamers weren’t thrilled. In fact, the response to DLSS 5 has been more of a collective backlash, replete with memes, ridicule and outrage. 

Gamers were quick to point out that DLSS 5 transformed the original graphics into something vastly different. Some called the visuals «AI slop» because they looked as if they had been run through «yassified» AI-generated filters. 

Many worry that DLSS 5 could deviate from a creator’s specific artistic vision. Critics also fear that if this technology becomes the industry standard, video game graphics might start to look the same, losing their unique visual identity. 

«Everything about this is a betrayal of these games’ artistry,» said YouTuber The Sphere Hunter in a post on X Monday. «Painting over handcrafted, intentional 3D art with shiny, wrinkly, sunken-in, porous, puckered, fraudulent, filtered nonsense is deeply disrespectful. If you want this, just watch gen-AI videos all day.»

Countless memes mocking the tech’s exaggerated features flooded the internet. Others on social media parodied the effects DLSS 5 could produce in other games. 

In a Q&A on Tuesday, Huang addressed the backlash from gamers, calling them «completely wrong.» Huang underlined that DLSS 5 «enhances and adds generative capability, but it doesn’t change the artistic control» and that «it’s in the direct control of the game developer.»

The team at Digital Foundry, which specializes in game technology and hardware reviews, called the technology «disruptive and transformative» and was generally positive about it, though the reviewers noted some hiccups. 

«[The images] looked a little bit uncanny, I would say, but definitely the overall portrayal of those characters is much more sophisticated,» said Oliver Mackenzie, video producer and writer for Digital Foundry.

Bethesda’s official X account replied to comments from members of Digital Foundry about Starfield and The Elder Scrolls IV: Oblivion Remastered, both published by Bethesda.

«This is a very early look, and our art teams will be further adjusting the lighting and final effect to look the way we think works best for each game. This will all be under our artists’ control, and totally optional for players,» the publisher said. 

DLSS 5 is set to be released sometime in the fall. 

What is DLSS?

Nvidia first released its DLSS tech back in 2018 with its RTX 2080 card: The RTX architecture introduced Tensor cores, which are essential for accelerating the calculations used by the DLSS AI. The deep learning technology was designed to upscale images and video from low resolution in real time to achieve higher frame rates. 
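DLSS itself is a learned model, but the underlying task — reconstructing a high-resolution image from a low-resolution render — can be illustrated with plain nearest-neighbor upscaling, the crude baseline that AI upscalers improve on:

```python
def upscale_nearest(image, factor):
    """Nearest-neighbor upscale of a 2D grid of pixel values by an integer factor.

    Each output pixel simply copies the nearest source pixel, which is why
    this baseline looks blocky; learned upscalers like DLSS instead predict
    plausible high-resolution detail.
    """
    return [
        [image[y // factor][x // factor]
         for x in range(len(image[0]) * factor)]
        for y in range(len(image) * factor)
    ]

low_res = [[1, 2],
           [3, 4]]
high_res = upscale_nearest(low_res, 2)
# -> [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Rendering fewer pixels and upscaling is where the frame-rate gain comes from: the GPU does a fraction of the shading work per frame.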

Gamers weren’t impressed at first, but later versions of the technology did perform better in games that supported it. DLSS 4, released last year and tweaked to 4.5 as of January, significantly improved detail rendering, reduced motion artifacts, boosted frame rates and generated more realistic lighting via path tracing (which incorporates interactions with ray-traced lighting). 

What does DLSS 5 do?

DLSS 5 works a bit differently than previous versions of the technology. According to Nvidia, DLSS 5 shifts from processing simple pixels to understanding 3D elements. By deconstructing characters into specific components — such as skin, hair and clothing — the AI can render them more consistently. This results in faster performance and much more realistic details, especially for textures and lighting. 
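Nvidia hasn’t detailed the implementation, but the idea of deconstructing a character into components and processing each one separately can be sketched as per-label processing over a segmentation mask. The labels and «enhancement» functions below are purely illustrative stand-ins, not Nvidia’s models:

```python
# Each pixel carries a component label; a component-specific function is
# applied to that pixel's value. This mirrors the idea of treating skin,
# hair and clothing with different models (the gain factors are invented
# for illustration only).
ENHANCE = {"skin": lambda v: v * 1.5, "hair": lambda v: v * 1.25}

def enhance(values, labels):
    """Apply each pixel's component-specific enhancement."""
    return [
        [ENHANCE[labels[y][x]](values[y][x]) for x in range(len(values[0]))]
        for y in range(len(values))
    ]

pixels = [[10.0, 10.0]]
mask = [["skin", "hair"]]
out = enhance(pixels, mask)  # -> [[15.0, 12.5]]
```

The claimed benefit of operating on semantic components rather than raw pixels is consistency: the same model handles a character’s hair across frames, instead of each frame being upscaled independently.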

Game developers control how DLSS 5 enhances images and to what degree, ensuring it matches the game’s aesthetic. The demo video showcased some positive enhancements, but others looked like sweeping changes to the characters and the environment. 

Which games will support DLSS 5 at launch?

On Monday, Nvidia released a list of games slated to support DLSS 5:

  • AION 2 
  • Assassin’s Creed Shadows
  • Black State 
  • Cinder City
  • Delta Force 
  • Hogwarts Legacy
  • Justice
  • Naraka: Bladepoint 
  • NTE: Neverness to Everness
  • Phantom Blade Zero
  • Resident Evil Requiem
  • Sea of Remnants
  • Starfield
  • The Elder Scrolls IV: Oblivion Remastered
  • Where Winds Meet

What cards will support DLSS 5?

Nvidia has yet to provide a list of GPUs that will support the new technology. In an FAQ, the company says it will release a list of supported cards closer to its release. 

Copyright © Verum World Media