
Technologies

Apple Vision Pro Hands-On: Far Better Than I Was Ready For

I experienced incredible fidelity, surprising video quality and a really smooth interface. Apple’s first mixed-reality headset nails those, but lots of questions remain.

I was in a movie theater last December watching Avatar: The Way of Water in 3D, and I said to myself: "Wow, this is an immersive film I'd love to watch in next-gen VR." That's exactly what I experienced in Apple's Vision Pro headset, and yeah, it's amazing.

On Monday, I tried out the Vision Pro in a series of carefully picked demos during WWDC at Apple's Cupertino, California, headquarters. I've been using cutting-edge VR devices for years, and I found all sorts of augmented reality memories bubbling up in my brain. Apple's compact — but still not small — headset reminds me of an Apple-designed Meta Quest Pro. The fit of the back strap was comfy and stretchy, with a dial to adjust the rear fit and a top strap for stability. The headset's sleek design, and even its glowing front faceplate, also gave me an instant Ready Player One vibe.


I couldn't wear my glasses during the demo, though, and neither will you. Apple's headset does not support glasses, instead relying on custom Zeiss inserts to correct wearers' vision. Through a setup process, Apple managed to find lenses that suited my vision well enough that everything seemed crystal clear, which is no easy task. We also adjusted the fit and tuned spatial audio for my head using an iPhone, a system that will be finessed when the headset is released in 2024.

From there, I did my demos seated, mostly, and found myself surprised from the start. The passthrough video camera quality of this headset is good — really, really good. Not as good as my own vision, but good enough that I could see the room well, see people in it with me, and see my watch notifications easily on my wrist. The only headset that's done this previously was the extremely impressive but PC-connected Varjo XR-3, and Apple's display and cameras feel even better.

Apple’s floating grid of apps appears when I press the top digital crown, which autocenters the home screen to wherever I’m looking. I set up eye tracking, which worked like on many other VR headsets I’ve used: I looked at glowing dots as musical notes played, and got a chime when it all worked.


A list of apps as they would appear inside of the Apple Vision Pro headset.

Apple/Screenshot by CNET

From there, the interface was surprisingly fluid. Looking at icons or interface options slightly enlarges them, or changes how bold they appear. Tapping with my fingers while looking at something opens an app. 

I’ve used tons of hand-tracking technology on headsets like the HoloLens 2 and the Meta Quest 2 and Pro, and usually there’s a lot of hand motion required. Here, I could be really lazy. I pinched to open icons even while my hand was resting in my lap, and it worked. 

Scrolling involves pinching and pulling with my fingers; again, pretty easy to do. I resized windows by moving my hand to throw a window across the room or pin it closer to me. I opened multiple apps at once, including Safari, Messages and Photos. It was easy enough to scroll around, although sometimes my eye tracking needed a bit of extra concentration to pull off.

Apple’s headset uses eye tracking constantly in its interface, something Meta’s Quest Pro and even the PlayStation VR 2 don’t do. That might be part of the reason for the external battery pack. The emphasis on eye tracking as a major part of the interface felt transformative, in a way I expected might be the case for VR and AR years ago. What I don’t know is how it will feel in longer sessions.

I don't know how the Vision Pro will work with keyboards and trackpads, since I didn't get to demo the headset that way. It works with Apple's Magic Keyboard and Magic Trackpad, and with Macs, but not with iPhone, iPad or Watch touchscreens — not now, at least.

Dialing in reality

I scrolled through some photos in Apple’s preset photo album, plus a few 3D photos and video clips shot with the Vision Pro’s 3D camera. All the images looked really crisp, and a panoramic photo that spread around me looked almost like it was a window on a landscape that extended just beyond the room I was in. 

Apple has volumetric 3D landscapes on the Vision Pro that are immersive backgrounds like 3D wallpaper, but looking at one really shows off how nice that Micro OLED display looks. A lake looked like it was rolling up to a rocky shore that ended right where the real coffee table was in front of me. 


Raising my hands to my face, I saw how the headset separates my hands from VR, a trick that’s already in Apple’s ARKit. It’s a little rough around the edges but good enough. Similarly, there’s a wild new trick where anyone else in the room can ghost into view if you look at them, a fuzzy halo with their real passthrough video image slowly materializing. It’s meant to help create meaningful contact with people while wearing the headset. I wondered how you could turn that off or tune it to be less present, but it’s a very new idea in mixed reality.

Apple’s digital crown, a small dial borrowed from the Apple Watch, handles reality blend. I could turn the dial to slowly extend the 3D panorama until it surrounded me everywhere, or dial it back so it just emerged a little bit like a 3D window. 

Mixed reality in Apple's headset looks so casually impressive that I almost didn't appreciate how great it was. Again, I've seen mixed reality in VR headsets before (Varjo XR-3, Quest Pro), and I've understood its capabilities. Apple's execution felt much more immersive, rich and effortless on most fronts, with a field of view that felt expansive. I can't wait to see more experiences in it.

Cinematic fidelity that wowed me

The cinema demo was what really shocked me, though. I played a 3D clip of Avatar: The Way of Water in-headset, on a screen in various viewing modes including a cinema. Apple’s mixed-reality passthrough can also dim the rest of the world down a bit, in a way similar to how the Magic Leap 2 does with its AR. But the scenes of Way of Water sent little chills through me. It was vivid. This felt like a movie experience. I don’t feel that way in other VR headsets.


Avatar: The Way of Water looked great in the Vision Pro.

20th Century Studios

Apple also demonstrated its Immersive Video format that’s coming as an extension to Apple TV Plus. It’s a 180-degree video format, similar to what I’ve seen before in concept, but with really strong resolution and video quality. A splash demo reel of Alicia Keys singing, Apple Sports events, documentary footage and more reeled off in front of me, a teaser of what’s to come. One-eighty-degree video never appears quite as crisp to me as big-screen film content, but the sports clips I saw made me wonder how good virtual Jets games could be in the future. Things have come a long way.

Would I pay $3,499 for a head-worn cinema? No, but it’s clearly one of this device’s greatest unique strengths. The resolution and brightness of the display were surprising.


Convincing avatars (I mean, Personas)

Apple's Personas are 3D-scanned avatars generated by using the Vision Pro to scan your face, making a version of yourself that shows up in FaceTime chats if you want, or also on the outside of the Vision Pro's curved OLED display to show whether you're "present" or in an app. I didn't see how that outer display worked, but I had a FaceTime call with someone in their Persona form, and it looked surprisingly good.

I've chatted with Meta's ultra-realistic Codec Avatars, which aim for lifelike representations of people in VR. Those are stunning, and I also saw Meta's phone-scanned step-down version in an early form last year, where a talking head spoke to me in VR. Apple's Persona looked better than Meta's phone-scanned avatar, although a bit fuzzy around the edges, like a dream. The woman whose Persona was scanned appeared in her own window, not in a full-screen form.

And I wondered how expressive the emotions are with the Vision Pro’s scanning cameras. The Pro has an ability to scan jaw movement similar to the Quest Pro, and the Persona I chatted with was friendly and smiling. How would it look for someone I know, like my mom? Here, it was good enough that I forgot it was a scan.

We demoed a bit of Apple's Freeform app, where a collaboration window opened up while my Persona friend chatted in another window. 3D objects popped up in the Freeform app, including a full home scan. It looked realistic enough.

Dinosaurs in my world

The final demo was an app experience called Encounter Dinosaurs, which reminded me of early VR app demos I had years ago: An experience emphasizing just the immersive «wow» factor of dinosaurs appearing in a 3D window that seemed to open up in the back wall of my demo room. Creatures that looked like carnotauruses slowly walked through the window and into my space. 

All my demos were seated except for this one, where I stood up and walked around a bit. This sounds like it wouldn’t be an impressive demo, but again, the quality of the visuals and how they looked in relation to the room’s passthrough video capture was what made it feel so great. As the dinosaur snapped at my hand, it felt pretty real. And so did a butterfly that danced through the room and tried to land on my extended finger.

I smiled. But even more so, I was impressed when I took off the headset. My own everyday vision wasn’t that much sharper than what Apple’s passthrough cameras provided. The gap between the two was closer than I would have expected, and it’s what makes Apple’s take on mixed reality in VR work so well.

Then there’s the battery pack. There’s a corded battery that’s needed to power the headset, instead of a built-in battery like most others have. That meant I had to make sure to grab the battery pack as I started to move around, which is probably a reason why so many of Apple’s demos were seated.


What about fitness and everything else?

Apple didn’t emphasize fitness much at all, a surprise to me. VR is already a great platform for fitness, although no one’s finessed headset design for fitness comfort. Maybe having that battery pack right now will limit movement in active games and experiences. Maybe Apple will announce more plans here later. The only taste I got of health and wellness was a one-minute micro meditation, which was similar to the one on the Apple Watch. It was pretty, and again a great showcase of the display quality, but I want more.

2024 is still a while away, and Apple’s headset is priced way out of range for most people. And I have no idea how functional this current headset would feel if I were doing everyday work. But Apple did show off a display, and an interface, that are far better than I was ready for. If Apple can build on that, and the Vision Pro finds ways of expanding its mixed-reality capabilities, then who knows what else is possible?

This was just my fast-take reaction to a quick set of demos on one day in Cupertino. There are a lot more questions to come, but this first set of demos resonated with me. Apple showed what it can do, and we’re not even at the headset’s launch yet.


Investors Favor Alphabet’s AI Spending Over Meta’s Despite Both Beating Earnings Expectations

Despite both Meta and Alphabet surpassing earnings expectations and raising AI spending forecasts, investors reacted differently, with Alphabet’s stock rising 7% while Meta’s fell 7%, highlighting the market’s preference for companies with cloud infrastructure that can monetize AI investments.

On Wednesday, both Meta and Alphabet surpassed analyst expectations in their quarterly earnings, marking their most robust growth in several years. The companies also raised their annual capital expenditure projections, signaling a continued commitment to investing heavily in artificial intelligence infrastructure.

However, Wall Street responded differently to the two tech giants. Alphabet’s stock surged 7% in after-hours trading, whereas Meta’s shares dropped by 7%.

This divergence continues a pattern that has weighed on Meta during much of the generative AI expansion. Unlike Alphabet, Microsoft, and Amazon, which operate vast cloud infrastructure businesses that convert AI investments into revenue, Meta lacks such a division.

Consequently, convincing investors of the return on AI spending is more challenging for Meta CEO Mark Zuckerberg, as the benefits must primarily manifest through higher ad revenue and improved profitability.

All four major tech firms released their quarterly results on Wednesday. While Alphabet, Microsoft, and Amazon reported cloud divisions that outperformed expectations, Meta was the only one among them to see its stock decline.

Leading up to the earnings releases, Alphabet’s stock had climbed 118% over the past year, significantly outpacing Meta’s 21% gain. Amazon rose 40%, and Microsoft increased by approximately 8%.

"Google is outperforming its peers which is well reflected in the current valuation," analysts at D.A. Davidson wrote in a report after the results, maintaining their neutral rating.

The capital expenditure figures across the board are staggering and continue to grow, partly because companies are spending more on memory due to a global shortage driven by surging AI demand.

Alphabet updated its 2026 capex guidance range to $180 billion to $190 billion, up from its previous estimate of $175 billion to $185 billion. CFO Anat Ashkenazi said the company's 2027 capex is expected to "significantly increase" from this year's figure.

The spending forecast was coupled with revenue growth of 20%, the fastest for any quarter since 2022. Cloud revenue soared 63%, and Alphabet said it has a backlog of $460 billion, nearly double where it was last quarter, because of demand for AI infrastructure.

Defending the Spending

Meta upped its capex guidance for the year to between $125 billion and $145 billion, from a prior range of $115 billion to $135 billion, a move the company said "reflects our expectations for higher component pricing this year and, to a lesser extent, additional data center costs to support future year capacity."

Similar to when Meta raised its capex forecast in October, Zuckerberg spent time on the earnings call defending the company’s hefty AI spending, pitching it as necessary for future growth while bolstering the core online ad business.

"The trend over the last few years seems clear, that we are seeing an increasing return on the amount that we can improve engagement for people and value for advertisers," Zuckerberg said. "This encourages us to continue investing heavily in what we expect will provide increasing value over the coming years as well."

On the revenue side, growth is more impressive than at Google. Sales jumped 33% from a year earlier, marking the strongest period for expansion since 2021.

Zuckerberg said the company is "very focused on increasing the efficiency of our investments," and is developing custom silicon with Broadcom while investing in a "significant amount of AMD chips to complement the new Nvidia systems that we're rolling out as well."

Meta CFO Susan Li told analysts that the company needs to spend big on AI in order to "meet our infrastructure needs and ensure we maximize our strategic flexibility over the coming years." The company also has to ensure it has enough computing resources to train more AI models, build more products and help its AI agent push for consumers and businesses worldwide, Li said.

She added that Meta's recent "multi-year cloud deals and our infrastructure purchase agreements" contributed to a $107 billion jump in contractual commitments during the quarter.

Still, investors are waiting to see new revenue streams come to fruition after Zuckerberg spent the past 10 months overhauling his company’s AI strategy and bringing in high-priced talent. Earlier this month, Meta debuted Muse Spark as its first proprietary foundation model.

Alphabet, meanwhile, has been cashing in on its bets, including on homegrown chips called tensor processing units (TPUs), which are increasingly competing with Nvidia’s graphics processing units (GPUs).

CEO Sundar Pichai addressed the momentum in the chip side of the business several times on Wednesday’s call.

"There's tremendous demand for both AI solutions as well as AI infrastructure, including massive interest in our GPU offerings, as well as TPUs," he said.




Alphabet’s Q1 Earnings Expected to Reflect Sustained Expansion, Driven by Cloud Division

Alphabet’s Q1 earnings are expected to show strong growth driven by cloud and AI advancements, with revenue projected to rise 18.7% year-over-year. The company’s stock has surged 118% over the past year, supported by Gemini AI integration and expanding cloud infrastructure investments.

Alphabet is scheduled to release its first-quarter financial results after market close on Wednesday.

Below are the key metrics Wall Street anticipates, based on analyst estimates from LSEG:

— Earnings per share: $2.63

— Revenue: $107.2 billion

Investors are also tracking several additional figures in the upcoming report:

— Google Cloud: Estimated at $18.05 billion, per StreetAccount

— YouTube advertising: Estimated at $9.99 billion, per StreetAccount

— Traffic acquisition costs: Estimated at $15.3 billion, per StreetAccount

Alphabet's shares have been the leading performer among major tech stocks over the past year, climbing 118% as of Tuesday's close. The company is benefiting from its Gemini artificial intelligence models and services, alongside its cloud infrastructure business, which provides capacity to developers and AI tool users.

Analysts forecast an 18.7% increase in revenue from $90.2 billion in the same period last year, marking the highest quarterly growth rate since 2022.

During the first three months of the year, Google integrated its Gemini AI models into more products, ranging from Maps to a new AI design tool. Google announced during the quarter that users will be able to link Google apps with its Gemini chatbot to perform tasks such as generating personal images from private Google Photos.

Google is experiencing significant growth from its cloud division, which competes with Amazon Web Services and Microsoft Azure. Revenue is projected to surge 47% from $12.26 billion in the same quarter a year ago.

Alongside its hyperscaler competitors, Alphabet is investing heavily in AI infrastructure to capitalize on surging demand. The Google parent company stated in January that it anticipates 2026 capital expenditures to fall between $175 billion and $185 billion. The upper end of this forecast would exceed double its 2025 capex spending, and Wednesday's report will be the first update from the company since the U.S.-Iran conflict began in February, causing oil prices to spike.

Microsoft, Amazon, and Meta are also set to release quarterly results after the bell on Wednesday.

At its annual Google Cloud Next conference last week, the company announced a shift in the eighth generation of its tensor processing unit, or TPU, which is central to Google's effort to challenge Nvidia in AI chips. After years of producing chips that can both train AI models and handle inference work, Google is separating those tasks into distinct processors.

Alphabet's investments may also be a focus for investors. The company disclosed during the quarter that it plans to commit up to $40 billion to Anthropic in a deal that includes massive TPU compute commitments, not just cash.

Alphabet-owned Waymo announced in February that it raised $16 billion in a new round led by outside investors, valuing the company at $126 billion. Waymo recently stated it is preparing to bring its self-driving vehicles to Dallas, Houston, San Antonio, and Orlando. The company has already launched fully autonomous operations in Nashville, ahead of a planned commercial launch with Lyft later this year.

The company also reduced some equity stakes. Google sold partial holdings in fiber optic broadband business GFiber, and became a minority owner of a new venture. Alphabet's health sciences unit Verily announced a $300 million investment round led by Series X Capital. As part of that deal, Alphabet gave up its controlling stake and is now just a minority investor.



Amazon to Release First-Quarter Financials Following Market Close

Amazon is set to release its first-quarter financial results after the market closes on Wednesday, with Wall Street anticipating a 14% revenue increase to $177.3 billion.

Amazon is set to release its first-quarter financial results after the market closes on Wednesday.

Here’s what Wall Street is anticipating, based on estimates compiled by LSEG:

— Earnings per share: $1.64

— Revenue: $177.3 billion

Wall Street is also tracking other key revenue figures:

— Amazon Web Services: $36.92 billion expected, according to StreetAccount

— Advertising: $16.87 billion expected, according to StreetAccount

Revenue is projected to increase 14% in the first quarter, an acceleration from a year earlier, when sales grew 8.6% to $155.7 billion, and roughly in line with last quarter’s 13.6% growth.

Investors will be closely watching Amazon’s cloud business, where revenue is expected to jump roughly 26% from a year ago. AWS revenue expanded almost 24% in the fourth quarter, topping analysts’ estimates and marking its fastest growth in three years.

Amazon and other big tech companies have been trying to justify their hefty artificial intelligence spending, which could approach $700 billion in 2026. Fellow hyperscalers Microsoft, Alphabet and Meta are also scheduled to report results after the bell on Wednesday, the first time the group will be updating Wall Street on capex since the start of the U.S.-Iran war in February.

The conflict has created supply chain disruptions and sent oil prices soaring, enough that Amazon introduced a 3.5% fuel surcharge for some of its third-party sellers.

Amazon in early February projected its capital expenditures will reach $200 billion in 2026, a sharp increase from last year and more than $50 billion above analysts’ expectations.

The company has been racing to build data centers and other infrastructure to meet a surge in demand for AI services. Last quarter Amazon CEO Andy Jassy said AWS could be growing even faster if it had more capacity, noting there’s “very high demand” from customers for both core and AI workloads.

Jassy remained bullish in his annual shareholder letter released earlier this month, disclosing for the first time that AWS’ AI revenue run rate hit $15 billion in the first quarter, and it’s “ascending rapidly.”

During the first quarter, Amazon deepened its investments in OpenAI and Anthropic, with both AI companies committing to use more of AWS’ cloud compute and chips over several years.

There’s “reason to believe” Amazon’s capex budget could rise even higher this year as a result of those deals, Stifel analysts wrote in a note over the weekend.

“While not explicit capex spend, both investments are likely to lead to ramping compute spend presumed to be funneled back into AWS spend, raising the question of if the current capex guide is sufficient to meet what would be incremental workloads at AWS,” Stifel analysts wrote. The firm has a buy rating on Amazon’s shares.

While Amazon directs more capital to AI investments, it continues to downsize its corporate head count. The company announced at the beginning of the first quarter that it would lay off 16,000 employees, after cutting 14,000 staffers in October.

Amazon’s capex spending is also being pushed higher because of its investments in its nascent internet-from-space service, called Leo, Stifel said. The company is aiming to begin commercial service in mid-2026.

Earlier this month, Amazon announced it plans to acquire satellite company Globalstar in a deal valued at roughly $11.57 billion, the company's second-largest acquisition, behind its 2017 purchase of Whole Foods for $13.7 billion.

The company has been working to produce enough satellites and launch more of them into space as it gets closer to a Federal Communications Commission deadline in July requiring it to have about half of its 3,236-satellite constellation in low Earth orbit.

Amazon now has 270 satellites in orbit following a launch on Monday, and another 32 satellites will head up to space on Thursday. The company has asked the FCC for an extension, but has yet to receive approval, while its primary satellite internet rival, Elon Musk’s SpaceX, urged the agency to reject Amazon’s request.




Copyright © Verum World Media