Technologies
Google and Qualcomm Tell Me That Gemini Will Be Project Moohan’s Secret Weapon
How much closer are we to smart glasses rivaling Meta’s Ray-Bans?

While Meta’s Quest line of headsets has dominated the virtual reality space, mixed reality — digital displays overlaying the real world — is a new frontier that’s just starting to be explored, going beyond the new Meta Ray-Ban Gen 2 to devices more akin to the Ray-Ban Display glasses. That’s where Google’s Project Moohan mixed reality headset aims to make headway. Unlike prior efforts in the space, such as Google Glass, the company is partnering with Qualcomm and Samsung to bolster its chances.
At the Snapdragon Summit 2025 in Maui, I sat down with Sameer Samat, Google’s head of Android, and Alex Katouzian, Qualcomm group general manager of mobile, compute and XR, to check in on Project Moohan and how the broadening of Android and Gemini coalesces with their collaboratively built headset, which, despite CNET Editor at Large Scott Stein getting hands-on time with an early version last December, is still in development.
"We’re super excited about the device coming along really nicely," Samat said. "We’re definitely getting closer."
It was clear to Snapdragon Summit attendees that Project Moohan is still in development. The headset was quietly tucked into an easily missed corner of the event, shown off for only a couple of hours under glass and out of anyone’s hands. But Samat was bullish about the progress made in the last year, which brought "subtle but very important refinements to the hardware," he said.
Read more: You Got Your Phone OS in My Laptop! Here’s How Android and ChromeOS Will Merge
Design-wise, Samat pointed specifically to improvements in the weight balance, ensuring the ergonomics are correct and the light ingress is where it should be. Weight balance is crucial in the design of headsets that are expected to be worn for hours at a time. When the Apple Vision Pro launched in early 2024, CNET’s Stein noted that the headset felt top-heavy after only half an hour with the standard single strap; the dual strap was more comfortable but, in his words, "looks like the headband on my CPAP machine." Summing up Moohan’s changes, Samat said: "A bunch of changes there that I don’t think you see when you look at it, but when you put it on from before and after, I think people would very much notice."
"I saw early prototypes until now, big difference," added Katouzian. "I think the weight and the balance is really good and mechanically very well designed."
Project Moohan uses Qualcomm’s XR2 mixed reality chip. The company worked with Google and Samsung to optimize everything, Katouzian said.
The software has come a long way too, Samat continued, and he was quick to affirm that there’s been a lot of refinement in incorporating Gemini into the headset. That ties Project Moohan into the drum Qualcomm and Google were beating throughout Snapdragon Summit 2025: the Gemini experience, which uses multiple large language models to answer queries, will be an increasingly significant part of using devices going forward, from phones to laptops to headsets.
"What would happen if, in the user experience, your AI assistant can see and hear what you’re hearing … if they could see the same virtual world as you at the same time, and you could ask them to walk through and explore that world with you?" Samat said. "I’m playing around a lot with that. Even to explore places, like you go somewhere in [Google] Maps and then you walk around and ask questions of Gemini and just explore an entire city with it."
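Google hasn’t shown how developers will wire Gemini into Android XR apps, but the kind of multimodal query Samat describes (an assistant answering questions about what you see) is already expressible with Google’s generative AI client SDK for Android. Here’s a minimal sketch; the model name, key handling and helper function are all illustrative assumptions, not Project Moohan’s actual integration:

```kotlin
import android.graphics.Bitmap
import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.content

// Hypothetical helper: send a frame of what the wearer sees to Gemini
// along with a question. The model name and API-key handling are
// placeholders; the real Android XR integration isn't public yet.
suspend fun askGeminiAboutScene(frame: Bitmap, question: String): String? {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash", // placeholder model
        apiKey = System.getenv("GEMINI_API_KEY") ?: error("set GEMINI_API_KEY")
    )
    val response = model.generateContent(
        content {
            image(frame)   // a camera or virtual-scene frame from the headset
            text(question) // e.g. "What building am I looking at?"
        }
    )
    return response.text
}
```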
Bringing contextual information to the screen as you go about your day was the dream of older experiments, such as the Google Glass augmented reality glasses released in 2013 and 2016’s Google Daydream, which turned your phone into a virtual reality headset. Samat obliquely referenced these, saying the company has "had our fair share of innovation and being first, but also some things that could have worked better."
But Samat also pointed to what’s changed in the interim. One change is the computational power of chips like the Qualcomm XR2 that powers Project Moohan; this silicon "opens up another level of fidelity," he said, pointing to other technical advancements, like the optics in the hardware for eye tracking. AI in general has improved too, with non-Gemini applications that can, for instance, augment Google Photos with AI experiences unique to the XR world — experiences that "you’ll see soon enough," Samat teased.
The companies believe combining Google’s software, Qualcomm’s computational horsepower and Samsung’s ergonomic product design will create something special that fits the mixed reality format better than anything we’ve seen before.
In addition to Project Moohan, Google is exploring a whole range of ideas, including smart glasses. At some point, the companies will take what was developed for the mixed reality headset and shrink it down into something that more directly competes with Meta’s Ray-Ban Display and others like it. And with Samsung in the mix, there’s a lot of potential.
"The close proximity between the glasses and the phone will bring an advantage that hasn’t been in the market before," Katouzian said.
Read more: Smart Glasses Are Going to Work This Time, Google’s Android President Tells CNET
If and when a smart glasses collaboration happens, Google has another advantage that might be more appealing than Gemini integration: individual style. Not everyone wants smart glasses from Ray-Ban or Oakley. Google has previously announced that it’s working with Warby Parker and Gentle Monster, presumably to put a Project Moohan successor in a variety of frames, which could entice consumers who aren’t fans of wraparound sports shades.
"The aesthetic of it is super important," Samat said. "Yes, of course, it’s a piece of technology, but it also has to be something you want to wear."
Technologies
Video Chats From Space? T-Mobile’s Service Broadens What Apps Can Do Over Satellite
T-Satellite, T-Mobile’s Starlink-based satellite communications service, now supports video and audio calls in some apps when you don’t have cellular coverage.
When T-Mobile took its T-Satellite service live during the summer, it teased the ability for developers to adapt their apps to work within the strict data limits required over satellite connections. Then, several apps were able to jump the gun and start working with the Starlink-based service at the launches of the Pixel 10 Pro and the iPhone 17. Now T-Satellite is open to any app configured to work with the network — with a few surprises I didn’t think we’d see so early.
Get ready to video chat with your friends from the middle of nowhere … or prepare to be trapped by your friends who want to video chat no matter where you are.
T-Satellite breaks some Earth-bound limitations
T-Mobile isn’t the first company to connect a smartphone to a satellite network. Recent iPhone, Samsung Galaxy and Google Pixel models equipped with the proper hardware can talk to satellites when out of cellular range to access emergency services, text using the Messages app and send a location via Find My. But those services are primarily based on sending short bursts of data, which is essential when communicating line-of-sight with bandwidth-limited satellites hundreds of miles overhead.
T-Satellite accesses a network of 657 Starlink satellites dedicated to cellular service, using a band of cellular spectrum that works with most phones made in the last four years, according to T-Mobile. The company also offers the service to customers of other providers for $10 a month. It shares the same text-centric limitations as those other services, with the added ability to send and receive images via Multimedia Messaging Service (MMS).
With today’s announcement, T-Mobile is setting some of those limitations aside. In WhatsApp, for example, you can send texts, images, voice memos and video messages, which still fit (barely?) within the send-small-bursts-of-data model. WhatsApp now also supports live audio and video chats with other WhatsApp users, though you can’t use it over satellite to make regular phone calls, emergency calls or SMS texts.
Another example is the X app (formerly Twitter), which lets you scroll your feed and post text, photos, GIFs or videos. It also has the option to download high-resolution media when you need more detail.
Launching app data access
According to Jeff Giard, vice president of strategic partnerships at T-Mobile, getting to this point was largely due to customer feedback during the lengthy T-Satellite beta period while the Starlink constellation was still being completed. "We started seeing [customer feedback] start to shift to ‘Hey, this is awesome. I want more,’" he said. "So we started focusing on how do we enable great experiences on apps in an environment where it’s not our blazing-fast terrestrial network?"
Because T-Satellite is based on the LTE cellular standard, sending video and high-res images became a matter of maximizing the use of the spectrum and optimizing for better data transmission, said Giard.
During the beta period, there was some initial confusion about the network’s capabilities. "‘Oh my gosh, I get broadband Starlink on my phone now,’ [some customers believed] and it’s really not the case," he said. "This is an entirely separate constellation of satellites that’s dedicated to … working on your phone."
He also attributed the new capabilities to Apple and Google’s work at the operating system level, emphasizing that developers can tie into existing application programming interfaces, or APIs, to make their apps work with T-Satellite.
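Giard didn’t name the specific APIs, but Android 15, for one, added a satellite transport type that an app can check before deciding how much data to push over the link. A minimal sketch of that check (the callback wiring is standard ConnectivityManager usage; how an app downshifts, say to lower-res media or longer timeouts, is up to the developer):

```kotlin
import android.content.Context
import android.net.ConnectivityManager
import android.net.Network
import android.net.NetworkCapabilities
import android.net.NetworkRequest

// Minimal sketch, assuming Android 15 (API level 35), which introduced
// NetworkCapabilities.TRANSPORT_SATELLITE.
fun watchForSatellite(context: Context, onSatelliteChanged: (Boolean) -> Unit) {
    val cm = context.getSystemService(ConnectivityManager::class.java)
    val request = NetworkRequest.Builder()
        .addCapability(NetworkCapabilities.NET_CAPABILITY_INTERNET)
        .build()
    cm.registerNetworkCallback(request, object : ConnectivityManager.NetworkCallback() {
        override fun onCapabilitiesChanged(network: Network, caps: NetworkCapabilities) {
            // True when the current connection is a satellite link
            onSatelliteChanged(caps.hasTransport(NetworkCapabilities.TRANSPORT_SATELLITE))
        }
    })
}
```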
Importantly, Giard said that T-Mobile is not imposing any data caps or network throttling for T-Satellite customers who make heavy use of the service. "I don’t want to take anything off the table at this point," he said, "but right now, what we’re launching [today] doesn’t have a data cap."
In addition to built-in apps that were added in September, such as Apple Maps, Google Maps, Apple Music and Samsung Weather, T-Mobile announced the following apps now work with T-Satellite: T-Life, AllTrails, AccuWeather, CalTopo and onX (plus X and WhatsApp).
As for which apps get optimized next for T-Satellite, Giard says he’s looking forward to what developers and customers start asking for. "Our driving mantra here is … what are we doing next? What pain point are we solving?" he said. The apps coming next "will be the ones that the customers tell us they really want, and [others that] are organically adopted along the way."
Technologies
AT&T Is Using an Advanced Video Game Feature to Improve Your Phone Coverage
"Whatever Nvidia is doing for games, whatever Disney is doing … we are doing at a much bigger scale," said AT&T’s Velin Kounev about using ray tracing to improve its network.
When you think of cellular networks, you probably envision radio towers and invisible data streams. But AT&T, by necessity, needs to see everything in between: buildings, trees and the multitude of obstacles that can interfere with wireless signals getting to your phone.
The cellular provider is turning to a key technology from gaming and computer graphics to get an accurate picture. AT&T Wireless Geo Modeler is a new system that uses ray tracing and AI to generate detailed representations of the areas covered by its network and improve connectivity. In doing so, AT&T says it can react to service interruptions quickly and also better predict how its network can be configured in response to large social events or during natural disasters.
How ray tracing works in a cellular context
In computer graphics, ray tracing is a technique for rendering three-dimensional scenes. Software simulates light beams emanating from a virtual camera and calculates how the light affects objects and materials in the scene. Ray tracing is notable for rendering shadows and reflections, leading to more realistic-looking environments.
In the past, ray tracing was computationally prohibitive. Early film-quality computer graphics, such as the original Toy Story movie, required rooms full of processing hardware and up to 24 hours to render a single frame of footage. The graphics processors in high-end smartphones can now render photorealistic, ray-traced scenes in games in real time.
According to Velin Kounev, lead inventive scientist at AT&T Labs, the technology’s cellular application works the same way. "Whatever Nvidia is doing for games, whatever Disney is doing … we are doing at a much bigger scale," he said.
In the context of AT&T’s Geo Modeler, Kounev explained, radio propagation from cellular towers behaves like light, just at frequencies our eyes cannot see. The towers measure how the rays interact with the surrounding environment, such as colliding with structures or reflecting off surfaces. That collected data is processed and analyzed by several internal AT&T systems and machine learning models to determine whether changes or optimizations need to be made, in what AT&T calls "near scale time."
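AT&T hasn’t published Geo Modeler’s internals, so the following is purely a toy illustration of the core idea: cast rays outward from a tower, attenuate each with free-space path loss over distance, and add extra loss wherever a ray passes through an obstacle. The 2D geometry, circular obstacles and loss figures are all simplifying assumptions:

```kotlin
import kotlin.math.cos
import kotlin.math.hypot
import kotlin.math.log10
import kotlin.math.sin

// Toy obstacle: a circle that attenuates a ray by lossDbPerKm for every
// kilometer the ray travels inside it.
data class Obstacle(val x: Double, val y: Double, val radiusKm: Double, val lossDbPerKm: Double)

// Cast rayCount rays from the tower and estimate received power (dBm) at
// maxRangeKm along each: transmit power minus free-space path loss minus
// accumulated obstacle loss. Real propagation models add 3D geometry,
// reflection, diffraction and material properties.
fun coverageAlongRays(
    towerX: Double, towerY: Double,
    txPowerDbm: Double, freqMhz: Double,
    obstacles: List<Obstacle>,
    rayCount: Int = 360, maxRangeKm: Double = 5.0, stepKm: Double = 0.05
): List<Pair<Double, Double>> = (0 until rayCount).map { i ->
    val angle = 2 * Math.PI * i / rayCount
    var obstacleLossDb = 0.0
    var d = stepKm
    while (d < maxRangeKm) {
        val px = towerX + d * cos(angle)
        val py = towerY + d * sin(angle)
        for (o in obstacles) {
            if (hypot(px - o.x, py - o.y) < o.radiusKm) obstacleLossDb += o.lossDbPerKm * stepKm
        }
        d += stepKm
    }
    // Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    val fsplDb = 20 * log10(maxRangeKm) + 20 * log10(freqMhz) + 32.44
    angle to (txPowerDbm - fsplDb - obstacleLossDb)
}
```

Running this with a single obstacle shows coverage dipping only along the rays that cross it, which is, in miniature, how a ray-traced model can flag dead zones in places where no field measurements exist.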
Those changes can include everyday adjustments to the angle of nearby antennas or compensating for a tower that has gone offline during a natural disaster. Modifications can be deployed automatically in seconds or minutes, ideally in a way that doesn’t impact customers.
"We don’t want [customers] to notice," said Jennifer Yates, assistant vice president of inventive science, network and service automation at AT&T Labs. "The network is self-healing [and] autonomous behind the scenes so they don’t have to think about it."
The benefit can also come from overcoming a technical challenge you would never notice as an AT&T subscriber. "When you hit Lincoln Tunnel traffic at 5 o’clock in the afternoon and you can get your website loaded, that’s when we come in," said Kounev. "We’re optimizing the network traffic … in rush hour, where you’re able to get your connection."
Predicting where to deploy resources
Although day-to-day network optimization is one advantage of using the Geo Modeler, it’s also a tool for determining how and where the company should deploy resources during situations such as weather events. For instance, if a prominent tree is blown over during a storm, ray tracing can quickly build a new representation of how towers should compensate.
For large events like music festivals, where tens of thousands of phones access the network, or for impending natural disasters, the technology can be used to predict the changes that will be needed; Kounev mentioned that Geo Modeler was applied in April at the Coachella festival.
Kounev also explained that if a hurricane is coming, for example, knowing its estimated size and timing, "we can go in and within two minutes remove [within the model] the towers that we think are going to be affected, and then see what the network coverage is going to look like." Knowing where to expect holes in the network allows AT&T to position resources, such as generators or mobile cellular towers, in place before the hurricane strikes.
Most predictive tools, said Kounev, rely on existing measurement data. "Because we use ray tracing, we can predict in places where there’s no measurement data."
AT&T has been building the Geo Modeler for a year and has accumulated enough data from different use cases over that time to be confident about deploying it more broadly. Yates said that AT&T has performed extensive validation of data, comparing the modeler’s results with measurements in the field.
"Over the last year," said Kounev, "we had to convince people that this thing can actually work in real time with the many tower stations they have."
Technologies
Home Depot’s Giant Skeleton Finally Has a Voice. Check Out the New App This Halloween
October is here, and spooky season starts now. Here’s the latest version of Home Depot’s skeleton — it’s half the size but has animated features and can talk.
The fall equinox has come and gone, and October is finally here — it’s time to start setting up your Halloween decorations. This year, Home Depot’s infamous giant skeleton has returned with an app that gives the new Ultra Skelly a voice and fresh moves to spook trick-or-treaters.
Make no bones about it: Skelly is high-tech this spooky season. The new animatronic version is shorter than the original, at 6.5 feet tall, but you can freak out your whole neighborhood with this skeleton’s rotating upper torso, moving mouth and 18 LCD eye variations (ew).
Skelly can now chat with visitors through five preset recordings and up to 30 seconds of custom recording, plus Bluetooth capabilities that let you interact in real time. And you can modulate your voice to make everything sound extra spooky.
Skelly was originally launched in 2020, when the pandemic forced people to celebrate Halloween at a distance. Perhaps because of its giant stature — it’s easy to spot, even when social distancing — the skeleton became a hit and has been resurrected every year since with upgrades and friends. This year, those friends include dragons, trolls, scarecrows and a Skelly Cat (not to be confused with Smelly Cat).
You can order Skelly and company now on the Home Depot website or app for $279.