Technologies
Xbox PC Games Start to Land on Nvidia’s GeForce Now
Microsoft and Nvidia signed a 10-year agreement in February.
Microsoft announced Thursday that the first Xbox game to arrive on Nvidia’s cloud gaming service GeForce Now is the latest installment of the Gears of War franchise, Gears 5. GeForce Now subscribers can play the game now. The tech giant also said the games Deathloop, Grounded and Pentiment will arrive on the cloud gaming service on May 25 with more games to come in the future.
The release of Xbox games on Nvidia GeForce Now is part of a 10-year agreement Microsoft and Nvidia signed in February. Xbox Gaming CEO Phil Spencer tweeted at the time that the agreement will allow GeForce Now players to stream Xbox PC games. Spencer also said Activision Blizzard games, like Call of Duty, would be available on GeForce Now following Microsoft’s acquisition of the studio.
Nvidia GeForce Now lets subscribers play more than 1,500 PC games on multiple devices, including in their cars, with plans ranging in price from free to $20 a month. CNET's review of GeForce Now found that the cloud gaming service can get expensive if you choose the top subscription tier, but it's worth it for people with large game libraries who want to play titles across devices. More Xbox games, like the Halo, Doom and Fallout franchises, are expected to arrive on GeForce Now in the future.
For more, here’s what to know about Microsoft’s 10-year agreement with Nvidia and Microsoft’s acquisition of Activision Blizzard.

Watch Out, Meta. I Tried Alibaba’s Qwen Smart Glasses and They’re Mega Impressive
These AI-focused smart glasses are available now in China but will roll out internationally later this year.
Mobile World Congress in Barcelona might be a European tech show, but for the past few years, the event has largely been dominated by Chinese phone companies such as Xiaomi and Honor. This year, they were joined by tech giant Alibaba, which launched its Qwen smart glasses at the show — and having tried them, all I have to say is, Meta should watch its back.
The Qwen glasses are among the first wearable devices Alibaba is building on top of its Qwen AI family of large language models, and the company brought two different models to MWC.
The first pair, the Qwen S1 specs, have a heads-up waveguide display etched into the lenses, and serve as a rival to Meta’s Ray-Ban Display model (minus the gesture control). My first impression of these AR glasses was that they were light and comfortable to wear — I wouldn’t have known that they were smart glasses by their weight alone. At the end of each arm are swappable batteries, which snap off easily so you can keep the glasses running for longer when you’re on the go.
I activated the glasses with the phrase "Hey Qwennie," which they picked up with their five microphones. I then asked them to complete a range of basic tasks, including taking a photo and telling me what I was looking at when I held a photo of Barcelona's Sagrada Familia in front of my face.
I could see a miniature version of the photo I captured in the green display, and the glasses were able to answer my architectural query both by displaying text in the heads-up display and through the bone conduction built into the arms of the S1. Perhaps my favorite feature, though, was the turn-by-turn directions. This feature felt like it could become essential for navigating a busy city, and far more convenient than using a phone or smartwatch.
I also tried out the teleprompter feature, which scrolled as I read aloud from the text appearing on the display, but I must confess I didn't find it quite as easy to follow as a similar demo I tried earlier in the week on the MemoMind One glasses. With the Qwen booth assistant talking to me in Chinese, I was able to see and hear the English translation of her words on the display and in my ear simultaneously, although there was enough of a delay to prevent our communication from being entirely smooth.
The second pair of glasses Alibaba brought to the show was the Qwen G1, which lacks the heads-up display present on the S1 but otherwise offers pretty much the same features thanks to its microphones, cameras and bone conduction.
On the whole, I was impressed by the look, feel, sound quality and capabilities of these glasses, which for many people might be their first introduction to Alibaba’s Qwen AI (by way of the Qwen App, which is integrated with the specs). In China, where preorders for the glasses are already live, people wearing the glasses will be able to complete tasks such as ordering food or hailing a cab completely hands free.
Alibaba said pricing for the G1 glasses will start at around $275 (for comparison, Meta’s Ray-Ban Gen 2 glasses cost $379), but didn’t say how much the more advanced S1 glasses will cost. Official sales in China will commence on March 8, with Alibaba promising an international rollout featuring integration with popular global services scheduled for an unspecified date later in 2026.
Softness and Brightness Blend to Stunning Effect in TCL’s Nxtpaper AMOLED Phone Display
An anti-glare screen that’s still radiant and vivid? Sign me up.
I’ve always been impressed with TCL’s easy-to-read Nxtpaper technology. Sitting somewhere between E Ink and a more traditional screen with built-in anti-glare tech, there’s a softness both to the look and feel of a Nxtpaper display that makes it a real pleasure to use.
But if I were asked whether I'd be happy to replace my regular phone with one that had an LCD Nxtpaper display, the answer has always been no, for one simple reason: brightness. The vivid colors we're accustomed to on most phone screens tend to look dull on Nxtpaper, and I just wouldn't be willing to compromise on radiance, in spite of the many good qualities Nxtpaper brings to the table.
Until now, that is. Among the cool phones and weird tech on display at Mobile World Congress 2026, I saw a Nxtpaper phone that might have changed my perspective. TCL showed off an upgraded AMOLED version of Nxtpaper that stopped me in my tracks. It blended the luminosity of AMOLED and the softness of Nxtpaper to stunning effect, in a way that would genuinely make me reconsider my stance on owning a Nxtpaper phone.
The screen offers 3,200 nits of brightness and has a circular polarization rate of 90%, which means it closely resembles natural light. TCL has managed to reduce blue light emission to as low as 2.9%, and the display dynamically adjusts brightness and color temperature in tune with the body's natural circadian rhythms.
The one drawback I can see for using Nxtpaper on a phone screen is that it might not be ideal for taking, viewing and editing photos. In my brief demo at MWC, I took a selfie and noticed the colors didn’t look especially true to life. But it’s important to note that TCL is still developing this technology, so it remains a work in progress and my brief time using it likely won’t be an accurate reflection of a final product.
In all, this is a real leap forward for Nxtpaper. Although TCL hasn't announced any devices featuring the technology yet, it likely will in due course. I'd personally like to see it on a laptop: as I spend all day staring at my screen, both reading and writing, it seems like the perfect application of this tech. I can't wait to see where it ends up.
AI Data Centers: What to Know About Their Water and Energy Use
OpenAI's Sam Altman says AI's water concerns are "totally fake." The truth about AI's impact on natural resources is more complicated.
When people find out I'm a journalist who covers AI, they often ask about the heavy energy consumption of AI data centers. Are these centers using up all of our drinking water? How is this tech affecting the environment? Is AI going to kill us all? The questions range from curious to downright dystopian.
Sam Altman, the CEO of OpenAI, recently faced criticism after calling some of these concerns, particularly those around water, "totally fake." It all stems from a Q&A session hosted by The Indian Express newspaper. Around the 26-minute mark of the interview, Altman was asked to defend certain criticisms of AI, including the amount of natural resources it takes to power large language models like ChatGPT.
Altman responded, "(criticism of AI for overuse of) water is totally fake," saying that while extreme water use "used to be true," OpenAI no longer does evaporative cooling. He said estimates that 17 gallons of water are used for every chatbot query are no longer accurate.
"This is completely untrue and totally insane, [and has] no connection to reality," he said. He then went on to address AI energy consumption, calling the concerns "fair" but arguing that it should be evaluated as a whole, not per query, since some queries, like videos, are more intensive to generate than text conversations. (Disclosure: Ziff Davis, CNET's parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Still, Altman said, "we need to move toward nuclear or wind and solar (power) very quickly."
Questions involving data centers and water are complicated.
Do AI data centers strain land and power systems?
Altman’s remarks come amid timely, ongoing debates over data centers and their energy use.
CNET's Corin Cesaric dove into the issue of AI's energy use last year and found the cost of training and running ChatGPT, Gemini, Claude and other generative AI tools to be "staggering." The US accounted for the largest share (45%) of global data center electricity consumption in 2024, according to the International Energy Agency.
As for water: Two Google data centers in Council Bluffs, Iowa, alone used 1.4 billion gallons of water in 2024, enough to fill about 28 million standard bathtubs. Google has 29 data centers worldwide. Meta’s data centers also accounted for about 1.39 billion gallons of water used in 2023.
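The bathtub comparison is easy to sanity-check. Here's a quick script using the article's figure, under the assumption (not stated in the article) that a standard bathtub holds about 50 gallons:

```python
# Back-of-envelope check of the article's water comparison.
# Assumption, not from the article: a standard bathtub holds ~50 gallons.
GALLONS_PER_BATHTUB = 50

council_bluffs_gallons = 1.4e9  # two Google data centers in Iowa, 2024
bathtubs = council_bluffs_gallons / GALLONS_PER_BATHTUB

print(f"{bathtubs:,.0f} bathtubs")  # 28,000,000, matching the article
```

At 50 gallons per tub, 1.4 billion gallons works out to exactly the 28 million bathtubs the article cites.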
While we don’t currently have statistics from OpenAI, Meta, or Google on their natural resource consumption in 2025, it’s safe to bet that data center energy and water use will rise as more people use generative AI.
How do AI data centers use water?
Considering ChatGPT now has close to 1 billion weekly users, and OpenAI has estimated that it handles close to 2.5 billion prompts every day, that’s an astronomical amount of data to manage. And because of this demand, the powerful computers that train the AI models and process their prompts get extremely hot. Think of how your phone and laptop heat up when running demanding tasks. If servers overheat, they can slow down or become damaged. This is where water comes in.
Traditionally, water in AI data centers is used in two ways: evaporative cooling (consuming water) and closed-loop systems (recirculating water).
Evaporative cooling is a ventilation technique that uses the natural process of evaporation to convert liquid water into water vapor, which absorbs heat during the process. Closed-loop cooling is a more resource-efficient process that reuses the water to dissipate heat without evaporation or consumption.
OpenAI said in a January announcement that it is «prioritizing closed-loop or low-water cooling systems» to minimize water use. This does lend credence to Altman’s recent claims that OpenAI’s water use is not as high as the 17 gallons per query estimate, but we don’t yet have exact figures for OpenAI’s 2025 water use.
OpenAI says it is moving away from the more costly evaporative cooling systems. However, 56% of data centers still use this method in some form over closed-loop systems, according to a January 2026 report from global water technology company Xylem and market research firm Global Water Intelligence. The research anticipates that AI water consumption will spike nearly 130% by 2050.
How much energy does AI use?
Powering AI and these massive data centers is demanding.
Generative AI chatbots use more energy than traditional search engines like Google or Bing. One estimate calculated that a single chatbot query requires 10 times more electricity than a Google search. On average, a single text query takes about 0.24 to 3 watt-hours, but AI-generated videos and images require much more electricity.
An August 2025 report from Google details Gemini’s energy use. The report states «the median Gemini Apps text prompt uses 0.24 watt-hours (Wh) of energy, emits 0.03 grams of carbon dioxide equivalent (gCO2e) and consumes 0.26 milliliters (or about five drops) of water.» Google equates this energy consumption to powering a microwave for 9 seconds.
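To put those per-prompt figures in context, here's a rough back-of-envelope calculation combining Google's median-prompt numbers with OpenAI's reported prompt volume. This is purely illustrative: it assumes every one of ChatGPT's roughly 2.5 billion daily prompts costs the same as a median Gemini text prompt, which likely understates real usage, since images and video cost far more.

```python
# Illustrative scaling of per-prompt costs to daily volume.
# Per-prompt figures are Google's published medians for Gemini text prompts;
# applying them to ChatGPT's prompt volume is an assumption, not a measurement.
PROMPTS_PER_DAY = 2.5e9       # OpenAI's estimate of daily ChatGPT prompts
WH_PER_PROMPT = 0.24          # median Gemini text prompt, per Google
ML_WATER_PER_PROMPT = 0.26    # about five drops, per Google

daily_energy_mwh = PROMPTS_PER_DAY * WH_PER_PROMPT / 1e6          # Wh -> MWh
daily_water_liters = PROMPTS_PER_DAY * ML_WATER_PER_PROMPT / 1e3  # mL -> L

print(f"~{daily_energy_mwh:,.0f} MWh per day")     # ~600 MWh per day
print(f"~{daily_water_liters:,.0f} liters per day")  # ~650,000 liters per day
```

Even under these conservative text-only assumptions, the totals come to roughly 600 megawatt-hours of electricity and 650,000 liters of water per day, which illustrates why Altman argues the question should be evaluated in aggregate rather than per query.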
Is solar a valid alternative?
AI models require 24/7 power, but solar energy paired with battery storage is emerging as a viable and scalable option for powering AI data centers.
OpenAI announced a multi-billion-dollar venture in October 2025 to explore new energy generation with solar and battery storage. Meta, Microsoft, Google and Amazon all expanded their solar power use across the US in 2025.
While renewable solutions could be the path forward, solar (or wind) energy is still only part of the mix of energy generation used by data centers. They generally rely on the grid itself, which is still largely powered by the burning of fossil fuels like natural gas.
Where we stand
The conversation around AI and water use is moving from unconfirmed claims to measured scrutiny. Communities and policymakers are now pushing for transparency and sustainable practices, aiming to ensure that AI’s rapid growth doesn’t come at the expense of local water resources or the local electricity grid. As AI continues to grow, so, too, will the debate about how best to balance technological innovation with environmental responsibility.