
iOS 16.4 Beta 3: New Features Public Beta Testers Can Try Now

Public beta testers can try new emoji, changes to Apple Podcasts, and more.

Apple released iOS 16.4 beta 3 to public beta testers Wednesday, about a week after the company released the second iOS 16.4 public beta. The arrival of a third beta suggests the general release of iOS 16.4 is close at hand. Beta testers can now try out new iOS features, like new emoji and updates to Apple Books.


These features are available only to people who are part of Apple’s Beta Software Program. New iOS features can be fun, but we recommend installing a beta only on a device other than your primary phone, just in case the new software causes issues. Apple provides beta testers with an app called Feedback, which lets testers notify Apple of any issues in the new software so they can be addressed before the general release.

Here are some of the new features testers can find in the iOS 16.4 betas.

Apple ID and beta software updates

Text that reads You can sign in with a different Apple ID that is enrolled in the Apple Beta Software Program or the Apple Developer Program

The latest beta lets you sign into another Apple ID to access other beta software.

Zach McAuliffe/CNET

With the third iOS 16.4 beta, developers and beta testers can check whether their Apple ID is associated with the developer beta, public beta or both. If you have a different Apple ID, like one for your job, that has access to beta updates, iOS 16.4 beta 3 also lets you switch to that account from your device.

Apple Books updates

The iOS 16.4 beta 2 update brings the page-turn curl animation back to Apple Books, after it was removed in a previous iOS update. Before, when you turned a page in an e-book on your iPhone, the page would slide to one side of your screen or it would vanish and be replaced by the next page. Beta testers can still choose these other page-turn animations in addition to the curl animation.

With iOS 16.4 beta 3, a new popup appears when you open Apple Books for the first time after downloading the update. It lets you know you can change your page-turn animation, theme and more. 

31 new emoji

The first iOS 16.4 beta software brought 31 new emoji to your iOS device. The new emoji include a new smiley; new animals, like a moose and a goose; and new heart colors, like pink and light blue. 

9 of the new emoji, arranged in a grid on a pink background: peapod, hair pick, goose, hand, smiley, gray heart, maracas, donkey, wifi signal

Some of the new emoji released in the first iOS 16.4 beta.

Patrick Holland/CNET

The new emoji all come from Unicode’s September 2022 recommendation list, Emoji 15.0.

Apple Podcasts updates

The first beta brought a few changes to how you navigate Apple Podcasts. Now you can access podcast channels you subscribe to in your Library. You can also use Up Next to resume podcast episodes you’ve started, start episodes you’ve saved and remove episodes you want to skip. 

Preview Mastodon links in Messages

Apple’s first iOS 16.4 beta enabled rich previews of Mastodon links in Messages. That’s a timely addition: Mastodon saw a 400% increase in the rate of new accounts in December, so you might well be receiving Mastodon links in Messages.

Music app changes

The Kid Cudi album Man On the Moon artwork with the track list below

A small banner appears at the bottom of the screen when you choose to play a song next in Apple Music in the first iOS 16.4 beta.

Zach McAuliffe/CNET

The Music interface has been slightly modified in the first iOS 16.4 beta. When you add a song to your queue, a small banner appears near the bottom of your screen instead of a full-screen pop-up.

See who and what is covered under AppleCare

Starting with iOS 16.4 beta 1, you can go to Settings to check which people and devices are covered by your AppleCare plan. With iOS 16.4 beta 2, this menu also shows a small icon next to each device that’s covered under AppleCare.

Focus Mode, Shortcuts and always-on display

If you have an iPhone 14 Pro or Pro Max, iOS 16.4 beta 1 lets you enable or disable the always-on display option with certain Focus Modes. A new option in Shortcuts called Set Always on Display was also added, in addition to new Lock Screen and Set VPN actions.

New Apple Wallet widgets

You can add three new order-tracking widgets for Apple Wallet to your home screen with the first iOS 16.4 beta. Each widget displays your tracking information on active orders, but the widgets are different sizes: small, medium and large.

No Active Orders displayed in the Apple Wallet widget

The medium-size Apple Wallet order tracking widget takes up three tile spaces on your iPhone’s screen.

Zach McAuliffe/CNET

More accessibility options

The first beta update added a new accessibility option, too. The new option is called Dim Flashing Lights, and it can be found in the Motion menu in Settings. The option’s description says video content that depicts repeated flashing or strobing lights will automatically be dimmed. Video timelines will also show when flashing lights will occur.

New keyboards, Siri voices and language updates

The first iOS 16.4 beta added keyboards for the Choctaw and Chickasaw languages, and there are new Siri voices for Arabic and Hebrew. Language updates have also come to Korean, Ukrainian, Gujarati, Punjabi and Urdu. 

There’s no word on when iOS 16.4 will be released to the general public. There’s no guarantee these beta features will be released with iOS 16.4, or that these will be the only features released with the update.

For more, check out how to become an Apple beta tester, what was included in iOS 16.3.1 and features you may have missed in iOS 16.3.


Google Making AI-Powered Glasses With Warby Parker, Gentle Monster

Google revealed its first two partnerships with eyeglass brands, with more to come.

The tech world has rarely been called stylish. But at Google’s annual I/O developers conference on Tuesday, the company took one step into the fashion world — kind of. The company revealed that the first eyeglass brands to carry Android XR AI-powered glasses will be Warby Parker and Gentle Monster, with more brand partners to come. Android XR is Google’s upcoming platform for VR, AR and AI on glasses and headsets.

Yes, there was a Superman joke: the company quipped that unlike Clark Kent, who hid his superpowers behind nerdy glasses, the Android XR glasses will give you superpowers. That remains to be seen, although NBA star Giannis Antetokounmpo did show up at Google I/O wearing the XR glasses.

Warby Parker, founded in 2010, was originally an online eyeglass retailer that gained fame for its home try-on program, where customers could order five frames sent to their home to try on and then return. It also allowed customers to upload photos to see how they would look wearing different frames.

South Korean eyeglass brand Gentle Monster, founded in 2011, is known for its luxury eyeglasses and sunglasses. The company’s celebrity customers include Beyoncé, Rihanna, Kendrick Lamar and Billie Eilish.



Google I/O Announcements: The Latest AI Upgrades Coming to Gemini, XR and More

From its new Project Aura XR glasses to Chrome’s wants-to-be-more-helpful AI mode, Gemini Live and new Flow generative video tool, Google puts AI everywhere.

As you’d expect, this year’s Google I/O developer’s conference focused almost exclusively on AI — where the company’s Gemini AI platform stands, where it’s going and how much it’s going to cost you now for its new AI Ultra subscription plan (spoiler: $250 per month). Meanwhile, a new Flow app expands the company’s video-generation toolset, and its Android XR glasses make their debut. 

Plus, all AI usage and performance numbers are up! (Given that a new 42.5-exaflop Ironwood Tensor processing unit is coming to Google Cloud later this year, they’ll continue to rise.) 

Google’s Project Aura, a developer kit for Android XR that includes new AR glasses from Xreal, is the next step in the company’s roadmap toward glasses-based, AI-driven extended reality. CNET’s Scott Stein goes in-depth on that future in an exclusive interview with Shahram Izadi, Google’s VP and GM for Android XR. And headset-based Project Moohan, developed in conjunction with Samsung, is now available; Google is working with Samsung to extend beyond headsets.

For a play-by-play of the event, you can read the archive of our live blog.

Google already held a separate event for Android, where it launched Android 16, debuting its new Material 3 Expressive interface, security updates and an update on Gemini integration and features.

A lot of the whizzy new AI features are only available via one of its subscription levels. AI Pro is just a rebranding of Google’s $20-per-month Gemini Advanced plan (adding some new features), but Google AI Ultra is a pricier new option — $250 per month, with half off the first three months for the moment — that provides access to the latest, spiffiest and least usage-limited of all its tools and models, as well as a prototype for managing AI agents and the 30 terabytes of storage you’re going to need to store it all. They’re both available today.

Google also wants to make your automated replies sound smarter with Personalized Smart Replies, which makes your generated answers sound more like you and plows through information on your device to surface relevant details. It’ll be in Gmail this summer for subscribers. Eventually, it’ll be everywhere.

The conference also included lots of better models, better coding tools and other developer-friendly details you’d expect from a developer conference. The announcements included conversational Gemini Live, formerly part of Project Astra, Google’s interactive, agentic, kitchen-sink voice AI app. (As Managing Editor Patrick Holland says, “Astra is a rehearsal of features that, when they’re ready for the spotlight, get added to Gemini Live.”) And for researchers, NotebookLM incorporates Gemini Live to improve its… everything.

It’s available now in the US. 

Chrome AI Mode

People (that is, those over 18) who pony up for the subscriptions, plus users on the Chrome Beta, Dev and Canary tracks, will be able to try out the company’s expanded Gemini integration with Chrome — summary, research and agentic chat based on the contents of your screen, somewhat like Gemini Live does for phones (which, by the way, is available for free on Android and iOS as of today). But the Chrome version is more suited to the type of things you do at a computer than on a phone. (Microsoft already does this with Copilot in its own Edge browser.)

Eventually, Google plans for Gemini in Chrome to be able to synthesize information across multiple tabs and support voice navigation.

The company is also expanding how you can interact with AI Overviews in Google Search as part of AI Mode, which adds more agentic shopping help. AI Mode appears as a new tab in Search, or in the search bar, and it’s available now. It includes deeper searches and Personal Context — which uses all the information Google knows about you, and that’s a lot — to make suggestions and customize replies.

The company detailed its new AI Mode for shopping, which has an improved conversational shopping experience, a checkout that monitors for the best pricing, and an updated “try on” interface that lets you upload a photo of yourself rather than modeling it on a generic body.

Google plans to launch it soon, though the updated “try on” feature is now available in the US via Search Labs.

Google Beam

Formerly known as Project Starline, Google Beam is the updated version of the company’s 3D videoconferencing, now with AI. It uses a six-camera array to capture you from every angle; AI then stitches those views together, uses head tracking to follow your movements and streams the result at up to 60 frames per second.

The platform uses a light field display that doesn’t require wearing any special equipment, though that technology also tends to be sensitive to off-angle viewing. HP, Google’s hardware partner for Beam, is an old hand in the large-scale scanning biz, including 3D scanning, so the partnership isn’t a big surprise.

Flow and other generative creative tools

Google Flow is a new tool that builds on Imagen 4 and Veo 3 to create AI video clips from a single prompt, stitch them into longer sequences or extend them, all while keeping them consistent from scene to scene. It also provides editing tools like camera controls. It’s available as part of Google AI Ultra.

Imagen 4 image generation is more detailed, with improved tonality and better text and typography. And it’s faster. Meanwhile, Veo 3, also available today, has a better understanding of physics and adds native audio generation — sound effects, background sounds and dialogue.

Of course, all this is available under the AI Pro plan. Google’s SynthID gen AI detection tool is also available today.
