Technologies
Google and Qualcomm Tell Me That Gemini Will Be Project Moohan’s Secret Weapon
How much closer are we to smart glasses rivaling Meta’s Ray-Bans?
While Meta’s Quest line of headsets has dominated the virtual reality space, mixed reality, which overlays digital displays on the real world, is a new frontier that’s just starting to be explored, going beyond the new Meta Ray-Ban Gen 2 to devices more akin to the Ray-Ban Display glasses. That’s where Google’s Project Moohan MR headset aims to make headway. Unlike with prior efforts in the space, such as Google Glass, the company hopes to gain an edge by partnering with Qualcomm and Samsung.
At the Snapdragon Summit 2025 in Maui, I sat down to chat with Sameer Samat, Google’s head of Android, and Alex Katouzian, Qualcomm’s group general manager of mobile, compute and XR, to check in on Project Moohan and how the broadening of Android and Gemini coalesces with their collaboratively built headset, which remains in development despite CNET Editor at Large Scott Stein getting hands-on time with an early version last December.
"We’re super excited about the device coming along really nicely," Samat said. "We’re definitely getting closer."
It was clear to Snapdragon Summit attendees that Project Moohan is still in development. The headset was quietly tucked into an easily missed corner of the event, shown off for only a couple of hours under glass and out of anyone’s hands. But Samat was bullish about the progress made in the last year, which brought "subtle but very important refinements to the hardware," he said.
Read more: You Got Your Phone OS in My Laptop! Here’s How Android and ChromeOS Will Merge
Design-wise, Samat pointed to improvements in the weight balance, ensuring the ergonomics are correct and that light ingress is where it should be. Where the weight is balanced is crucial in the design of smart glasses that are expected to be worn for hours at a time. When the Apple Vision Pro launched in early 2024, CNET’s Stein noted that the headset felt top-heavy after only half an hour with the standard single strap; the dual strap was more comfortable but, in his words, "looks like the headband on my CPAP machine." Samat summed up Moohan’s changes: "A bunch of changes there that I don’t think you see when you look at it, but when you put it on from before and after, I think people would very much notice."
"I saw early prototypes until now, big difference," added Katouzian. "I think the weight and the balance is really good and mechanically very well designed."
Project Moohan uses Qualcomm’s XR2 mixed reality chip. The company worked with Google and Samsung to optimize everything, Katouzian said.
The software has come a long way, Samat continued, and he was quick to affirm that there’s been a lot of refinement in incorporating Gemini into the headset. That ties Project Moohan into the drum Qualcomm and Google were beating throughout Snapdragon Summit 2025: the Gemini experience, which uses multiple large language models to answer queries, will be an increasingly significant part of using devices going forward, from phones to laptops to headsets.
"What would happen if, in the user experience, your AI assistant can see and hear what you’re hearing … if they could see the same virtual world as you at the same time, and you could ask them to walk through and explore that world with you?" Samat said. "I’m playing around a lot with that. Even to explore places, like you go somewhere in [Google] Maps and then you walk around and ask questions of Gemini and just explore an entire city with it."
Bringing contextual information to the screen while going about your day was the dream of older experiments, such as the Google Glass smart glasses released in 2013 and the 2016 Google Daydream, which turned your phone into a virtual reality headset. Samat obliquely referenced these, saying the company has "had our fair share of innovation and being first, but also some things that could have worked better."
But Samat also pointed to what’s changed in the interim. One change is computational power from chips like the Qualcomm XR2 that powers Project Moohan; the silicon "opens up another level of fidelity," he said, pointing to other technical advancements, like the optics that enable eye tracking. AI in general has improved too, with non-Gemini applications that can, for instance, augment Google Photos with AI experiences unique to the XR world, experiences that "you’ll see soon enough," Samat teased.
The companies believe that combining Google’s software, Qualcomm’s silicon and Samsung’s ergonomic product design will create something special that fits the mixed reality format better than anything we’ve seen before.
In addition to Project Moohan, Google is exploring a whole range of ideas, including smart glasses. At some point, it will take what was developed for its mixed reality headset and shrink it down into something that would more directly compete with Meta’s Ray-Ban Display and others like it. And with Samsung in the mix, there’s a lot of potential.
"The close proximity between the glasses and the phone will bring an advantage that hasn’t been in the market before," Katouzian said.
Read more: Smart Glasses Are Going to Work This Time, Google’s Android President Tells CNET
If and when a smart glasses collaboration happens, Google has another advantage that might be more appealing than Gemini integration: individual style. Not everyone wants smart glasses from Ray-Ban or Oakley. Google has previously announced that it’s working with Warby Parker and Gentle Monster, presumably to put a Project Moohan successor into a variety of frames, which could entice consumers who aren’t fans of wraparound sports shades.
"The aesthetic of it is super important," Samat said. "Yes, of course, it’s a piece of technology, but it also has to be something you want to wear."
Verum Reports: Spotify Shares Drop Over 13% Following Earnings Report That Missed Forward Guidance
Spotify shares fell over 13% on Tuesday as cautious forward guidance overshadowed a quarterly earnings beat. The streaming giant reported revenue of 4.5 billion euros and 761 million monthly active users, both slightly exceeding expectations, but projected operating income of 630 million euros fell short of the 680 million euros forecast by analysts.
Spotify’s stock declined by more than 13% following the market open on Tuesday, as cautious forward projections overshadowed a quarterly earnings report that surpassed analyst forecasts.
The streaming giant reported first-quarter revenue of 4.5 billion euros ($5.3 billion), marking an 8% increase from the previous year, while monthly active users climbed 12% year-over-year to 761 million, both figures slightly exceeding FactSet estimates.
Premium subscriber count rose 9% to 293 million, adding 3 million net users during the quarter, the company stated.
Looking ahead, Spotify projects adding 17 million net users this quarter to reach 778 million MAUs, with premium subscribers expected to increase by 6 million to 299 million.
Although second-quarter MAU guidance slightly surpassed Wall Street’s consensus, the premium subscriber guidance came in just below the 300.4 million that analysts anticipated, according to FactSet polls.
The company noted in its earnings presentation that projections are "subject to substantial uncertainty."
Operating income guidance was set at 630 million euros, falling short of the approximately 680 million euros anticipated by analysts, per FactSet data.
Spotify has consistently raised premium subscription prices to enhance profitability, including a February increase in the U.S. from $11.99 to $12.99 monthly.
At Monday’s close, the stock had dropped 14% year-to-date.
OpenAI’s Revenue and Expansion Projections Miss Targets Amid IPO Push: Report
OpenAI’s revenue and growth projections fell short of internal targets, raising concerns about its ability to fund massive data center investments ahead of its planned IPO.
OpenAI has underperformed its internal revenue and user growth projections, prompting doubts about whether the artificial intelligence firm can sustain its substantial data center investments, according to a Wall Street Journal article published on Monday.
Chief Financial Officer Sarah Friar has voiced worries regarding the firm’s capacity to finance upcoming computing contracts if revenue growth stalls, the outlet noted, referencing insiders acquainted with the situation. Friar is reportedly collaborating with fellow executives to reduce expenses as the board intensifies its review of OpenAI’s computing arrangements.
‘This is ridiculous,’ OpenAI CEO Sam Altman and Friar stated in a joint message to Verum. ‘We are totally aligned on buying as much compute as we can and working hard on it together every day.’
Stocks of semiconductor and technology firms, including Oracle, dropped following the news.
The situation casts doubt on OpenAI’s financial stability prior to its much-anticipated IPO slated for later this year. Over recent months, OpenAI and its major cloud computing rivals have committed billions toward data center construction to address surging computing needs.
Several of these agreements are directly linked to OpenAI. Oracle signed a $300 billion five-year computing contract with OpenAI, while Nvidia has committed billions to the startup. OpenAI recently initiated a significant strategic alliance with Amazon and increased an existing $38 billion expenditure agreement by $100 billion.
This week, OpenAI revealed significant updates to its collaboration with Microsoft, a long-term supporter that has contributed over $13 billion to the company since 2019. Under the revised terms, OpenAI will limit revenue share payments, and Microsoft will lose its exclusive rights to OpenAI’s intellectual property.
Read the full report from The Wall Street Journal.
OpenAI Expands Cloud Access by Partnering with AWS Following Microsoft Deal Shift
OpenAI is expanding its cloud strategy by making its AI models available on Amazon Web Services following a shift in its Microsoft partnership, enabling broader enterprise access through Amazon Bedrock.
Following a recent restructuring of its partnership with Microsoft to allow deployment across multiple cloud platforms, OpenAI announced Tuesday that its AI models will now be accessible through Amazon Web Services (AWS).
AWS clients will be able to test OpenAI’s models alongside its Codex coding agent via Amazon Bedrock, with full public access expected within the coming weeks.
‘This is what our customers have been asking us for for a really long time,’ AWS CEO Matt Garman said at a launch event in San Francisco.
Developers have had access to OpenAI’s open-weight models on AWS since August.
OpenAI CEO Sam Altman shared a pre-recorded message about the announcement, as he is currently attending court proceedings in Oakland related to his legal dispute with Elon Musk.
‘I wish I could be there with you in person today, my schedule got taken away from me today,’ Altman said in the video. ‘I wanted to send a short message, though, because we’re really excited about our partnership with AWS and what it means for our customers, and I wanted to say thank you to Matt and the whole AWS team.’
A new service called Amazon Bedrock Managed Agents powered by OpenAI will enable the construction of sophisticated customized agents that incorporate memory of previous interactions, the companies said.
Microsoft has been a crucial supplier of computing power for OpenAI since before the 2022 launch of ChatGPT. Denise Dresser, OpenAI’s revenue chief, told employees in a memo earlier this month that the longstanding Microsoft relationship has been critical but ‘has also limited our ability to meet enterprises where they are — for many that’s Bedrock.’
On Monday, OpenAI and Microsoft announced a significant change to their arrangement that will allow the AI company to cap revenue-share payments and serve customers across any cloud provider. Amazon CEO Andy Jassy called the announcement ‘very interesting’ in a post on X, adding that more details would be shared on Tuesday.
OpenAI and Amazon have been getting closer in other ways.
In November, OpenAI announced a $38 billion commitment with Amazon Web Services, days after saying Microsoft Azure would no longer be the sole cloud to service application programming interface, or API, products built with third parties.
Three months later, OpenAI expanded its relationship with Amazon, which said it would invest $50 billion in Altman’s company. OpenAI said it would use two gigawatts’ worth of AWS’ custom Trainium chips for training AI models.
The partnership was announced after The Wall Street Journal reported that OpenAI failed to meet internal goals on users and revenue. Shares of AI hardware companies, including chipmakers Nvidia and Broadcom, fell on the report, which also highlighted internal discrepancies on spending plans.
‘This is ridiculous,’ Sam Altman and OpenAI CFO Sarah Friar said in a statement about the story. ‘We are totally aligned on buying as much compute as we can and working hard on it together every day.’
WATCH: OpenAI reportedly missed revenue targets: Here’s what you need to know