Apple Vision Pro Hands-On: Far Better Than I Was Ready For
I experienced incredible fidelity, surprising video quality and a really smooth interface. Apple’s first mixed-reality headset nails those, but lots of questions remain.

I was in a movie theater last December watching Avatar: The Way of Water in 3D, and I said to myself: "Wow, this is an immersive film I'd love to watch in next-gen VR." That's exactly what I experienced in Apple's Vision Pro headset, and yeah, it's amazing.
On Monday, I tried out the Vision Pro in a series of carefully picked demos during WWDC at Apple's Cupertino, California, headquarters. I've been using cutting-edge VR devices for years, and I found all sorts of augmented reality memories bubbling up in my brain. Apple's compact — but still not small — headset reminds me of an Apple-designed Meta Quest Pro. The back strap was comfy yet stretchy, with a dial to adjust the rear fit and a top strap for stability. The headset's sleek design, and even its glowing front faceplate, also gave me an instant Ready Player One vibe.

I couldn't wear my glasses during the demo, though, and neither will you. Apple's headset doesn't support glasses, relying instead on custom Zeiss inserts to correct wearers' vision. Through a quick setup process, Apple found lenses that matched my prescription well enough that everything looked crystal clear, which is no small feat. We also adjusted the fit and tuned spatial audio for my head using an iPhone, a system that will be finessed before the headset is released in 2024.
From there, I did my demos mostly seated and found myself surprised from the start. The passthrough camera quality of this headset is good — really, really good. Not as good as my own vision, but good enough that I could see the room clearly, see the people in it with me and easily read my watch notifications on my wrist. The only headset that's pulled this off previously was the extremely impressive but PC-connected Varjo XR-3, and Apple's display and cameras feel even better.
Apple’s floating grid of apps appears when I press the top digital crown, which autocenters the home screen to wherever I’m looking. I set up eye tracking, which worked like on many other VR headsets I’ve used: I looked at glowing dots as musical notes played, and got a chime when it all worked.

A list of apps as they would appear inside the Apple Vision Pro headset.
From there, the interface was surprisingly fluid. Looking at icons or interface options slightly enlarges them, or changes how bold they appear. Tapping with my fingers while looking at something opens an app.
I’ve used tons of hand-tracking technology on headsets like the HoloLens 2 and the Meta Quest 2 and Pro, and usually there’s a lot of hand motion required. Here, I could be really lazy. I pinched to open icons even while my hand was resting in my lap, and it worked.
Scrolling involves pinching and pulling with my fingers; again, pretty easy to do. I resized and repositioned windows with my hands, throwing a window across the room or pinning it closer to me. I opened multiple apps at once, including Safari, Messages and Photos. It was easy enough to scroll around, although sometimes my eye tracking needed a bit of extra concentration to pull off.
Apple's headset uses eye tracking constantly in its interface, something Meta's Quest Pro and even the PlayStation VR 2 don't do. That might be part of the reason for the external battery pack. The emphasis on eye tracking as a major part of the interface felt transformative, in a way I expected VR and AR to reach years ago. What I don't know is how it will feel in longer sessions.
I don't know how the Vision Pro will work with keyboards and trackpads, since I didn't get to demo the headset that way. It works with Apple's Magic Keyboard and Magic Trackpad, and with Macs, but not with iPhone, iPad or Apple Watch touchscreens — not now, at least.
Dialing in reality
I scrolled through some photos in Apple’s preset photo album, plus a few 3D photos and video clips shot with the Vision Pro’s 3D camera. All the images looked really crisp, and a panoramic photo that spread around me looked almost like it was a window on a landscape that extended just beyond the room I was in.
Apple's volumetric 3D landscapes on the Vision Pro are immersive backgrounds that act like 3D wallpaper, and looking at one really shows off how good that micro-OLED display is. A lake looked like it was rolling up to a rocky shore that ended right where the real coffee table sat in front of me.

Raising my hands to my face, I saw how the headset separates my hands from VR, a trick that’s already in Apple’s ARKit. It’s a little rough around the edges but good enough. Similarly, there’s a wild new trick where anyone else in the room can ghost into view if you look at them, a fuzzy halo with their real passthrough video image slowly materializing. It’s meant to help create meaningful contact with people while wearing the headset. I wondered how you could turn that off or tune it to be less present, but it’s a very new idea in mixed reality.
Apple’s digital crown, a small dial borrowed from the Apple Watch, handles reality blend. I could turn the dial to slowly extend the 3D panorama until it surrounded me everywhere, or dial it back so it just emerged a little bit like a 3D window.
Mixed reality in Apple's headset looks so casually impressive that I almost didn't appreciate how great it was. Again, I've seen mixed reality in VR headsets before (Varjo XR-3, Quest Pro), and I've understood its capabilities. Apple's execution felt much more immersive, rich and effortless on most fronts, with a field of view that felt expansive. I can't wait to see more experiences in it.
Cinematic fidelity that wowed me
The cinema demo was what really shocked me, though. I played a 3D clip of Avatar: The Way of Water in-headset, on a screen in various viewing modes including a cinema. Apple’s mixed-reality passthrough can also dim the rest of the world down a bit, in a way similar to how the Magic Leap 2 does with its AR. But the scenes of Way of Water sent little chills through me. It was vivid. This felt like a movie experience. I don’t feel that way in other VR headsets.

Avatar: The Way of Water looked great in the Vision Pro.
Apple also demonstrated its Immersive Video format, which is coming as an extension to Apple TV Plus. It's a 180-degree video format, similar to what I've seen before in concept, but with really strong resolution and video quality. A demo reel of Alicia Keys singing, sporting events, documentary footage and more played out in front of me, a teaser of what's to come. One-eighty-degree video never looks quite as crisp to me as big-screen film content, but the sports clips I saw made me wonder how good virtual Jets games could be in the future. Things have come a long way.
Would I pay $3,499 for a head-worn cinema? No, but it’s clearly one of this device’s greatest unique strengths. The resolution and brightness of the display were surprising.

Convincing avatars (I mean, Personas)
Apple's Personas are 3D-scanned avatars generated by using the Vision Pro to scan your face, creating a version of yourself that can show up in FaceTime chats, or on the outside of the Vision Pro's curved OLED display to show whether you're "present" or in an app. I didn't see how that outer display works, but I had a FaceTime call with someone in their Persona form, and it looked surprisingly good.
I've chatted with Meta's ultra-realistic Codec Avatars, which aim for lifelike representations of people in VR. Those are stunning, and I also saw Meta's phone-scanned, step-down version in an early form last year, where a talking head spoke to me in VR. Apple's Persona looked better than Meta's phone-scanned avatar, although a bit fuzzy around the edges, like a dream. The woman whose Persona was scanned appeared in her own window, not in a full-screen form.
I also wondered how expressive the emotions can be with the Vision Pro's scanning cameras. The headset can track jaw movement, similar to the Quest Pro, and the Persona I chatted with was friendly and smiling. How would it look for someone I know, like my mom? Here, it was good enough that I forgot it was a scan.
We demoed a bit of Apple's Freeform app, where a collaboration window opened up while my Persona friend chatted in another window. 3D objects popped up in the Freeform app, including a full scan of a home. It looked realistic enough.
Dinosaurs in my world
The final demo was an app experience called Encounter Dinosaurs, which reminded me of early VR demos from years ago: an experience emphasizing the immersive "wow" factor of dinosaurs appearing in a 3D window that seemed to open up in the back wall of my demo room. Creatures that looked like carnotauruses slowly walked through the window and into my space.
All my demos were seated except for this one, where I stood up and walked around a bit. This sounds like it wouldn’t be an impressive demo, but again, the quality of the visuals and how they looked in relation to the room’s passthrough video capture was what made it feel so great. As the dinosaur snapped at my hand, it felt pretty real. And so did a butterfly that danced through the room and tried to land on my extended finger.
I smiled. But I was even more impressed when I took off the headset. My own everyday vision wasn't that much sharper than what Apple's passthrough cameras provided. The gap between the two was smaller than I expected, and it's what makes Apple's take on mixed reality in VR work so well.
Then there's the battery pack. A corded external battery powers the headset, instead of a built-in battery like most others have. That meant I had to remember to grab the battery pack as I started to move around, which is probably a reason why so many of Apple's demos were seated.

What about fitness and everything else?
Apple didn’t emphasize fitness much at all, a surprise to me. VR is already a great platform for fitness, although no one’s finessed headset design for fitness comfort. Maybe having that battery pack right now will limit movement in active games and experiences. Maybe Apple will announce more plans here later. The only taste I got of health and wellness was a one-minute micro meditation, which was similar to the one on the Apple Watch. It was pretty, and again a great showcase of the display quality, but I want more.
2024 is still a while away, and Apple’s headset is priced way out of range for most people. And I have no idea how functional this current headset would feel if I were doing everyday work. But Apple did show off a display, and an interface, that are far better than I was ready for. If Apple can build on that, and the Vision Pro finds ways of expanding its mixed-reality capabilities, then who knows what else is possible?
This was just my fast-take reaction to a quick set of demos on one day in Cupertino. There are a lot more questions to come, but this first set of demos resonated with me. Apple showed what it can do, and we’re not even at the headset’s launch yet.
Google Making AI-Powered Glasses With Warby Parker, Gentle Monster
Google revealed its first two partnerships with eyeglass brands, with more to come.

The tech world has rarely been called stylish. But at its annual I/O developers conference on Tuesday, Google took one step into the fashion world — kind of. The company revealed that the first eyeglass brands to carry Android XR AI-powered glasses will be Warby Parker and Gentle Monster, with more brand partners to be revealed in the future. Android XR is Google's upcoming platform for VR, AR and AI on glasses and headsets.
Yes, there was a Superman joke: The company quipped that unlike Clark Kent, who hid his superpowers behind nerdy glasses, the Android XR glasses will give you superpowers. That remains to be seen, although NBA star Giannis Antetokounmpo did show up at Google I/O wearing the XR glasses.
Warby Parker, founded in 2010, was originally an online eyeglass retailer that gained fame for its home try-on program, where customers could order five frames sent to their home to try on and then return. It also allowed customers to upload photos to see how they would look wearing different frames.
South Korean eyeglass brand Gentle Monster, founded in 2011, is known for its luxury eyeglasses and sunglasses. The company’s celebrity customers include Beyoncé, Rihanna, Kendrick Lamar and Billie Eilish.
Google I/O Announcements: The Latest AI Upgrades Coming to Gemini, XR and More
From its new Project Aura XR glasses to Chrome’s wants-to-be-more-helpful AI mode, Gemini Live and new Flow generative video tool, Google puts AI everywhere.

As you’d expect, this year’s Google I/O developer’s conference focused almost exclusively on AI — where the company’s Gemini AI platform stands, where it’s going and how much it’s going to cost you now for its new AI Ultra subscription plan (spoiler: $250 per month). Meanwhile, a new Flow app expands the company’s video-generation toolset, and its Android XR glasses make their debut.
Plus, all AI usage and performance numbers are up! (Given that new 42.5-exaflop Ironwood Tensor Processing Unit pods are coming to Google Cloud later this year, they'll continue to rise.)
Google's Project Aura, a developer kit for Android XR that includes new AR glasses from Xreal, is the next step in the company's roadmap toward glasses-based, AI-driven extended reality. CNET's Scott Stein goes in-depth in an exclusive interview with Shahram Izadi, Google's VP and GM for Android XR, about that future. And headset-based Project Moohan, developed in conjunction with Samsung, is now available, and Google's working with Samsung to extend beyond headsets.
For a play-by-play of the event, you can read the archive of our live blog.
Google already held a separate event for Android, where it launched Android 16, debuting its new Material 3 Expressive interface, security updates and news on Gemini integration and features.
A lot of the whizzy new AI features are only available via one of its subscription levels. AI Pro is just a rebranding of Google’s $20-per-month Gemini Advanced plan (adding some new features), but Google AI Ultra is a pricier new option — $250 per month, with half off the first three months for the moment — that provides access to the latest, spiffiest and least usage-limited of all its tools and models, as well as a prototype for managing AI agents and the 30 terabytes of storage you’re going to need to store it all. They’re both available today.
Google also wants to make your automated replies sound smarter with Personalized Smart Replies, which makes generated answers sound more like you and pulls relevant details from information on your device. It'll be in Gmail this summer for subscribers. Eventually, it'll be everywhere.
There were also lots of better models, better coding tools and the other developer-friendly details you'd expect from a developer conference. The announcements included conversational Gemini Live, formerly part of Project Astra, Google's interactive, agentic, kitchen-sink voice AI. (As Managing Editor Patrick Holland says, "Astra is a rehearsal of features that, when they're ready for the spotlight, get added to Gemini Live.") And for researchers, NotebookLM incorporates Gemini Live to improve its... everything.
It’s available now in the US.
Chrome and AI Mode
People (that is, those over 18) who pony up for the subscriptions, plus users on the Chrome Beta, Dev and Canary tracks, will be able to try out the company’s expanded Gemini integration with Chrome — summary, research and agentic chat based on the contents of your screen, somewhat like Gemini Live does for phones (which, by the way, is available for free on Android and iOS as of today). But the Chrome version is more suited to the type of things you do at a computer rather than a phone. (Microsoft already does this with Copilot in its own Edge browser.)
Eventually, Google plans for Gemini in Chrome to be capable of synthesizing using multiple tabs and voice navigation.
The company is also expanding how you can interact with AI Overviews in Google Search as part of AI Mode, along with more agentic shopping help. AI Mode shows up as a new tab in Search, or on the search bar, and it's available now. It includes deeper searches and Personal Context, which uses all the information Google knows about you (and that's a lot) to make suggestions and customize replies.
The company detailed its new AI Mode for shopping, which has an improved conversational shopping experience, a checkout that monitors for the best pricing, and an updated "try on" interface that lets you upload a photo of yourself rather than modeling it on a generic body.
Google plans to launch it soon, though the updated "try on" feature is now available in the US via Search Labs.
Google Beam
Formerly known as Project Starline, Google Beam is the updated version of the company's 3D videoconferencing, now with AI. It uses a six-camera array to capture you from all angles; the AI stitches those feeds together, tracks your head to follow your movements and streams the result at up to 60 frames per second.
The platform uses a light field display that doesn't require wearing any special equipment, though that technology tends to be sensitive to off-angle viewing. HP, which is building the first Google Beam devices, is an old hand in the large-scale scanning business, including 3D scanning, so the partnership isn't a big surprise.
Flow and other generative creative tools
Google Flow is a new tool that builds on Imagen 4 and Veo 3 to create AI video clips and stitch them into longer sequences, or extend them, from a single prompt while keeping scenes consistent. It also provides editing tools like camera controls. It's available as part of Google AI Ultra.
Imagen 4 image generation is more detailed, with improved tonality and better text and typography. And it’s faster. Meanwhile, Veo 3, also available today, has a better understanding of physics and native audio generation — sound effects, background sounds and dialogue.
Of course, all this is available under the AI Pro plan. Google's SynthID gen AI detection tool is also available today.