
Technologies

Apple Vision Pro’s Biggest Missing Pieces

Commentary: Apple’s AR/VR “spatial computer” pushes the upper limits of immersive tech. But it has some notable omissions.

The evolution of VR and AR is in major flux, and right now Apple’s bleeding-edge, ultra-expensive Vision Pro headset is sitting at the top of the heap — and it’s not even expected to arrive until 2024.

After a demo at WWDC, I came away instantly impressed by how the Vision Pro hardware synthesized so much of what I’ve seen in VR and AR over the last five years. But this time it was all done with Retina Display-level resolution and smooth, easy hand-tracking finesse. At $3,499 (around £2,800 or AU$5,300 converted), Apple’s hardware is priced far beyond VR headsets like the Meta Quest 2, and it also aims to be a full computer experience in AR (as well as VR, even though Apple doesn’t outright acknowledge it).

Even so, there are notable absences from the Vision Pro, at least based on what Apple presented at WWDC. I had expectations as to what Apple might make the killer apps and features for its spatial computer headset, and only some of them materialized. Maybe others will emerge as we get closer to Apple’s 2024 headset release, or get introduced via software updates much like Meta has done with the Quest over time.

Still, I’m surprised they’re not already part of the Vision Pro experience. To me, adding them would make everything I saw work even better.


Fitness

The Meta Quest’s best feature, other than games, is its ability to be a portable exercise machine. Beat Saber was my pandemic home workout, and Meta’s acquisition of Within (maker of Supernatural, a subscription fitness app that pairs with the Apple Watch) indicates how much fitness is already a part of the VR landscape.

Apple is a prime candidate to fuse VR, AR, fitness and health, and to take the experience far beyond what Meta has done. Apple already has the Apple Watch, Apple Health and Fitness Plus subscription workouts. And yet the Vision Pro has no announced fitness or health apps, except for a sitting-still Meditation app that’s more of a breathing prompt.

Apple Watch Ultra

When will the Apple Watch become part of the Vision Pro experience?

James Martin/CNET

Even more puzzling: The Vision Pro seemingly doesn’t work with the Apple Watch at all. This could change. Maybe Apple is waiting to discuss this aspect next year. Or, maybe, it will arrive with a future version of the Vision hardware.

Some VR sporting game app makers are already announcing ports for the Vision Pro, including Golf Plus, an app that works in VR with controllers. The assumption, for now, is that these apps will find a way to work just using eye and hand tracking.

Apple didn’t even demonstrate that much active motion inside the Vision Pro; my demos were mostly seated, except for a final walk-around experience where I looked at a dinosaur up close.

Is the dangling battery pack part of the concern? The headset’s weight? Or is Apple starting with computing interfaces first and adding fitness later?

iPhone 14 Pro vs. iPhone 13 Pro

The iPhone in your pocket should ideally interface with Vision Pro, too.

Lexy Savvides/CNET

iPhone, iPad and Watch compatibility

Speaking of fitness and the Apple Watch, I always imagined Apple’s AR headset would emphasize seamless compatibility with all of its products. Apple didn’t exactly do that with the Vision Pro, either.

The Vision Pro will work as a monitor-extending device with Macs, providing high-res virtual displays much as headsets like the Quest 2 and Quest Pro already do. I didn’t get to try using the Vision Pro with a Mac, or with a trackpad or keyboard. The Vision Pro will support Magic Trackpads and Magic Keyboards for physical trackpad and typing input, again like other VR/AR headsets, in addition to onboard eye and hand tracking.

And yet, the Vision Pro won’t interface directly with iPhones, iPads or the Apple Watch. Not yet, at least.

The Vision Pro primarily runs iPad-type apps. This is why the iPad Pro seems to be the best computer companion to the Vision Pro: it has a keyboard, a trackpad, built-in motion tracking that’s already AR-friendly, front and rear depth-sensing cameras that could possibly help with 3D scanning environments or faces, and it has a touchscreen and Pencil stylus.

A man with AR glasses on, holding a phone, seeing a floating window with a person speaking to him

Qualcomm’s software tools for AR glasses extend phone apps to headsets. The Apple Vision Pro bypasses the phone and works on its own.

Qualcomm

Apple is emphasizing that the Vision Pro is a self-contained computer that doesn’t need other devices. That’s understandable, and most of Apple’s cloud services, like FaceTime, will work so that the Vision Pro will essentially absorb most iPhone and iPad features. Yet I don’t understand why iPhones, iPads and Watches wouldn’t be welcome input accessories. Their touchscreens and motion controls could help them act as remotes or physical-feedback devices, in a similar way to how Qualcomm is already looking at the relationship between phones and AR glasses. I hold up my iPhone all the time to enter passwords on the Apple TV. I seamlessly drop photos, links and text from my iPhone over to my Mac.

Touchscreens could act as virtual keyboards. Drawing on the iPad could mirror a 3D art interface. With Apple’s already excellent passthrough cameras, iPhone, iPad and Watch displays could become interactive second screens, tactile interfaces that sprout extra parts in AR. Also, there’s the value of haptics and physical feedback.

Sony PlayStation VR 2 virtual reality headset

The PSVR 2 controller: One advantage to physical devices is physical feedback.

James Martin/CNET

No haptics

The buzzing, tapping and rumbling feedback we get from our phones, watches and game controllers is something I’ve really connected with when I go into VR. The PlayStation VR 2 even has rumbling feedback in its headset. The Vision Pro, with eye and hand tracking, has no controllers, and no haptic feedback. I’ve been fascinated by the future of haptics; I saw a lot of experimental solutions earlier this year. For now, Apple is sitting out haptic solutions for the Vision Pro.

When I use iPhones and the Watch, I feel those little virtual clicks as reminders of when I’ve opened something, or when information comes in. I feel them as extensions of my perceptual field. In VR, it’s the same way. Apple’s pinch-based hand tracking technically has some physical sensation when your own fingers touch each other, but nothing will buzz or tap to let you know something is happening beyond your field of view — in another open app, for instance, or behind you in an immersive 3D environment.

Microsoft made a similar decision with the HoloLens by only having in-air hand tracking, but former AR head Alex Kipman told me years ago that haptics were part of the HoloLens roadmap. 

Apple already has haptic devices: the Apple Watch, for example, and all those iPhones. I’m surprised the Vision Pro doesn’t already have a solution for haptics. But maybe it’s on the roadmap, too?


Logitech’s VR Ink, released in 2019, is an in-air 3D stylus. How will Apple handle creative tools in 3D?

Logitech

Will there ever be other accessories like the Pencil?

One of the wildest parts about a mixed-reality future is how it can blend virtual and real tools together, or even invent tools that don’t exist. I’ve had my VR controllers act like they’re morphing into objects that feel like they’re an extension of my body. Some companies like Logitech have already developed in-air 3D styluses for creative work in VR and AR.

Apple’s Vision Pro demos didn’t show off any creative apps beyond the collaborative Freeform, and nothing that showed how 3D inputs could be improved with handheld tools. 

Maybe Apple is starting off by emphasizing the power of just eyes and hands here, similar to how Steve Jobs initially refused to give the iPad a stylus. But the iPad has a Pencil now, and it’s an essential art tool for many people. Dedicated physical peripherals are helpful, and Apple has none with its Vision Pro headset (yet). I do like VR controllers, and Meta’s clever transforming Quest Pro controllers can be flipped around to become writing tools with an added stylus tip. As a flood of creative apps arrives on the Vision Pro in 2024, will Apple address possibilities for dedicated accessories? Will the Vision Pro allow for easy pairing of them? Hopefully, yes.

The Apple Vision Pro is a long way from arriving, and there’s still so much we don’t know. As Apple’s first AR/VR headset evolves, however, these key aspects should be kept in mind, because they’ll be incredibly important ways to expand how the headset feels useful and flexible for everyone.


Google Making AI-Powered Glasses With Warby Parker, Gentle Monster

Google revealed its first two partnerships with eyeglass brands, with more to come.

The tech world has rarely been called stylish. But at Google’s annual I/O developers conference on Tuesday, the company took one step into the fashion world — kind of. The company revealed that the first eyeglass brands to carry Android XR AI-powered glasses will be Warby Parker and Gentle Monster, with more brand partners to be revealed in the future. Android XR is Google’s upcoming platform for VR, AR and AI on glasses and headsets.

Yes, there was a Superman joke: unlike Clark Kent, who hid his superpowers behind nerdy glasses, the Android XR glasses will give you superpowers. That remains to be seen, although NBA star Giannis Antetokounmpo did show up at Google I/O wearing the XR glasses.

Warby Parker, founded in 2010, was originally an online eyeglass retailer that gained fame for its home try-on program, where customers could order five frames sent to their home to try on and then return. It also allowed customers to upload photos to see how they would look wearing different frames.

South Korean eyeglass brand Gentle Monster, founded in 2011, is known for its luxury eyeglasses and sunglasses. The company’s celebrity customers include Beyoncé, Rihanna, Kendrick Lamar and Billie Eilish.




Google I/O Announcements: The Latest AI Upgrades Coming to Gemini, XR and More

From its new Project Aura XR glasses to Chrome’s wants-to-be-more-helpful AI mode, Gemini Live and new Flow generative video tool, Google puts AI everywhere.

As you’d expect, this year’s Google I/O developers conference focused almost exclusively on AI — where the company’s Gemini AI platform stands, where it’s going and how much it’s going to cost you now for its new AI Ultra subscription plan (spoiler: $250 per month). Meanwhile, a new Flow app expands the company’s video-generation toolset, and its Android XR glasses made their debut.

Plus, all AI usage and performance numbers are up! (Given that a new 42.5-exaflop Ironwood Tensor processing unit is coming to Google Cloud later this year, they’ll continue to rise.) 

Google’s Project Aura, a developer kit for Android XR that includes new AR glasses from Xreal, is the next step in the company’s roadmap toward glasses-based, AI-driven extended reality. CNET’s Scott Stein goes in-depth in an exclusive interview with Shahram Izadi, Google’s VP and GM for Android XR, about that future. And the headset-based Project Moohan, developed in conjunction with Samsung, is now available; Google is working with Samsung to extend beyond headsets.

For a play-by-play of the event, you can read the archive of our live blog.

Google already held a separate event for Android, where it launched Android 16, debuting the new Material 3 Expressive interface, security updates and news on Gemini integration and features.

A lot of the whizzy new AI features are available only via one of its subscription levels. AI Pro is just a rebranding of Google’s $20-per-month Gemini Advanced plan (adding some new features), but Google AI Ultra is a pricier new option, at $250 per month (half off for the first three months, for now), that provides access to the latest, spiffiest and least usage-limited of all its tools and models, as well as a prototype for managing AI agents and the 30 terabytes of storage you’re going to need to store it all. Both are available today.

Google also wants to make your automation sound smarter with Personalized Smart Replies, which makes your generated answers sound more like you and plows through information on your device to surface relevant details. It’ll arrive in Gmail this summer for subscribers. Eventually, it’ll be everywhere.

There were also lots of better models, better coding tools and other details on the developer-friendly things you expect from a developer conference. The announcements included conversational Gemini Live, formerly part of Project Astra, Google’s interactive, agentic, kitchen-sink voice AI app. (As Managing Editor Patrick Holland says, “Astra is a rehearsal of features that, when they’re ready for the spotlight, get added to Gemini Live.”) And for researchers, NotebookLM incorporates Gemini Live to improve its… everything.

It’s available now in the US. 

Chrome AI Mode

People (that is, those over 18) who pony up for the subscriptions, plus users on the Chrome Beta, Dev and Canary tracks, will be able to try out the company’s expanded Gemini integration with Chrome — summary, research and agentic chat based on the contents of your screen, somewhat like Gemini Live does for phones (which, by the way, is available for free on Android and iOS as of today). But the Chrome version is more suited to the type of things you do at a computer rather than a phone. (Microsoft already does this with Copilot in its own Edge browser.)

Eventually, Google plans for Gemini in Chrome to be capable of synthesizing using multiple tabs and voice navigation. 

The company is also expanding AI Mode in Google Search, with richer interactions with AI Overviews and more agentic shopping help. It appears as a new tab in Search, or on the search bar, and it’s available now. It includes deeper searches and Personal Context, which uses all the information Google knows about you (and that’s a lot) to make suggestions and customize replies.

The company detailed its new AI Mode for shopping, which has an improved conversational shopping experience, a checkout that monitors for the best pricing, and an updated “try on” interface that lets you upload a photo of yourself rather than seeing items modeled on a generic body.

Google plans to launch it soon, though the updated “try on” feature is now available in the US via Search Labs.

Google Beam

Formerly known as Project Starline, Google Beam is the updated version of the company’s 3D videoconferencing, now with AI. It uses a six-camera array to capture you from all angles; AI stitches the feeds together, tracks your head movements, and renders you at up to 60 frames per second.

The platform uses a light field display that doesn’t require wearing any special equipment, though that technology also tends to be sensitive to off-angle viewing. HP, which is building Beam devices with Google, is an old hand in the large-scale scanning business, including 3D scanning, so the partnership isn’t a big surprise.

Flow and other generative creative tools

Google Flow is a new tool that builds on Imagen 4 and Veo 3 to perform tasks like creating AI video clips and stitching them into longer sequences, or extending them, with a single prompt while keeping them consistent from scene to scene. It also provides editing tools like camera controls. It’s available as part of Gemini AI Ultra. 

Imagen 4 image generation is more detailed, with improved tonality and better text and typography. And it’s faster. Meanwhile, Veo 3, also available today, has a better understanding of physics and native audio generation — sound effects, background sounds and dialogue.

Of course, all this is available under the AI Pro plan. Google’s SynthID gen AI detection tool is also available today.

Continue Reading


Copyright © Verum World Media