Bluetooth 6.0: What You Need to Know About the Future of Wireless Headphones

Bluetooth got a major upgrade, and it’s already showing up in phones and headphones. Here’s what to expect and what we’re still waiting for.

The Bluetooth Special Interest Group announced version 6.0 of the near-ubiquitous wireless technology in Sept. 2024, adding major new features that aim to improve Bluetooth’s reliability, security, smoothness and efficiency. It might even give you greater range between your headphones and phone, as well as longer battery life.

We’re finally seeing devices arrive with Bluetooth 6.0, including phones from Apple and Google, as well as headphones and earbuds. Here’s what you need to know about Bluetooth 6.0 and how it will affect wireless connectivity for years to come. 

Main improvements of Bluetooth 6.0

Latency

Latency is the time between an audio signal being sent and when you actually hear it. The higher the latency, the more annoying it can be — think of when the sound lags behind the video in movies or games. Most Bluetooth (5.0 and newer) devices have latency somewhere between 50 and 100 milliseconds, depending on gear and configuration, which is noticeable to most people.

Bluetooth 6.0’s enhanced Isochronous Adaptation Layer, or ISOAL, allows devices to break up audio data into smaller chunks for quicker processing. In theory, this has the potential to reduce latency, and we might see latency under 10 milliseconds under ideal conditions, such as close range, no obstacles and no interference.
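To see why chunk size matters, here’s a minimal back-of-the-envelope sketch in Python. A chunk of audio can’t be transmitted until it’s full, so its first sample waits roughly one chunk duration before sending even begins. The chunk sizes below are our own illustrative numbers, not values from the Bluetooth 6.0 spec.

```python
# Illustrative only: how audio chunk size sets a floor on buffering delay.
SAMPLE_RATE_HZ = 48_000  # a typical LE Audio sample rate

def buffering_delay_ms(chunk_samples: int) -> float:
    """The first sample in a chunk waits for the whole chunk to fill
    before transmission can start."""
    return chunk_samples / SAMPLE_RATE_HZ * 1000

for chunk in (480, 240, 120):  # 10 ms, 5 ms and 2.5 ms of audio
    print(f"{chunk} samples -> ~{buffering_delay_ms(chunk):.1f} ms buffering delay")
```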

We expect that under real-world conditions, the majority of setups will operate at a latency of around 20 milliseconds, which would still represent a significant improvement over Bluetooth 5.x. 

Location tracking and security

One of the new spec’s most buzzworthy features is called Channel Sounding, which provides a significant improvement in the accuracy of device location tracking. It relies on a back-and-forth exchange of data packets between connected devices, combining time stamps and frequency analysis, rather than the old, less accurate method of simply measuring relative signal strength.

Channel Sounding is a boon for Apple’s Find My and its Google and Samsung equivalents, offering location accuracy down to approximately 10 centimeters, along with improved resistance to obstacles and interference. It also enables enhanced security for Bluetooth lock systems using a combination of encryption, randomization and location cross-referencing to ensure some random person isn’t unlocking your car or front door. 
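The timing side of Channel Sounding is easy to illustrate. Here’s a toy round-trip-time ranging calculation in Python; real Channel Sounding pairs this kind of timing with phase-based measurements and security checks that this sketch ignores, and the nanosecond values are invented for the example.

```python
# Toy time-of-flight ranging: distance from a packet's round-trip time.
SPEED_OF_LIGHT_M_S = 299_792_458

def distance_from_rtt(rtt_s: float, reply_delay_s: float) -> float:
    """Subtract the responder's known processing delay, halve what's
    left to get the one-way flight time, then convert to meters."""
    one_way_s = (rtt_s - reply_delay_s) / 2
    return one_way_s * SPEED_OF_LIGHT_M_S

# A 150 ns round trip minus a 100 ns reply delay leaves 25 ns each way,
# or roughly 7.5 meters between the two devices.
print(f"{distance_from_rtt(150e-9, 100e-9):.1f} m")
```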

Power efficiency and pairing speed

The same features that reduce latency also help with power efficiency: Devices can budget power intelligently, spending more to keep audio and video in sync for demanding uses like gaming and less for undemanding applications like audiobooks. This flexibility is especially crucial for wireless earbuds, which require the most effective power management due to their compact size.

The process of scanning for nearby Bluetooth devices is also being upgraded, with decision-based advertising filtering and advertiser monitoring. Advertising in this case doesn’t refer to selling you products. Basically, it’s a set of headphones broadcasting, “I’m a headset, and I’m nearby and ready to connect.”

Instead of constantly shouting, “Is anyone there?!” to see if there’s anything nearby to connect to, Bluetooth 6.0 devices will keep track of when previously paired devices go in and out of range. This should save precious battery life, make pairing quicker and provide smoother multipoint switching.
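As a loose analogy (a toy model of our own, not the controller-level mechanism the spec defines), the filtering idea looks something like this: cheaply ignore advertisements from strangers and track only when known devices appear or disappear.

```python
import time

# Hypothetical bonded peers; the address is made up for the example.
KNOWN_DEVICES = {"aa:bb:cc:dd:ee:ff": "My Headphones"}
TIMEOUT_S = 5.0
last_seen: dict[str, float] = {}

def on_advertisement(address: str) -> None:
    """Discard advertisements from unknown devices before doing any work."""
    if address not in KNOWN_DEVICES:
        return  # filtered out cheaply
    if address not in last_seen:
        print(f"{KNOWN_DEVICES[address]} came into range")
    last_seen[address] = time.monotonic()

def expire_stale() -> None:
    """Treat devices we haven't heard from recently as out of range."""
    now = time.monotonic()
    for addr in [a for a, t in last_seen.items() if now - t > TIMEOUT_S]:
        print(f"{KNOWN_DEVICES[addr]} went out of range")
        del last_seen[addr]
```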

What Bluetooth 6.0 doesn’t do

Improved Bluetooth sound quality (maybe)

Were you waiting for reliable, wireless lossless audio transmission from your phone to your headphones? Still not there yet. 

Astute readers who note that CD-quality lossless audio transmission requires about 1.4Mbps of throughput may wonder why Bluetooth 6.0’s theoretical 3Mbps isn’t enough. It’s because much of Bluetooth’s bandwidth is taken up by overhead: ancillary data required for a secure, reliable Bluetooth connection that has nothing to do with the audio itself. While some codecs promise high-quality wireless audio, lossless CD-quality audio remains elusive.
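If you’re curious where that 1.4Mbps figure comes from, it’s just the raw arithmetic of uncompressed CD audio:

```python
# Raw bitrate of uncompressed CD-quality audio.
sample_rate = 44_100  # samples per second
bit_depth = 16        # bits per sample
channels = 2          # stereo

bitrate_bps = sample_rate * bit_depth * channels
print(f"{bitrate_bps / 1e6:.2f} Mbps")  # ~1.41 Mbps, before any protocol overhead
```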

Bluetooth 6.0 does bring the long-discussed LC3plus codec, which can transmit up to 24-bit, 96kHz audio. However, unlike “regular” LC3, LC3plus is optional and carries a separate licensing fee, which means adoption will likely be limited compared to the more popular codecs. Remember, both your device and your headphones must be compatible with LC3plus for it to work. How well it works and whether it can reliably transmit 24/96 in the real world remain to be seen.

A future incremental revision of Bluetooth 6.0 promises to add a high-data-throughput feature that will open up usable bandwidth for lossless streaming, potentially by using other frequency bands besides the crowded 2.4GHz band, to achieve speeds of up to 7.5Mbps. That should provide enough headroom to enable high-res audio streams, though it’s unclear if manufacturers will adopt the right codecs for lossless Bluetooth audio via headphones. Given past and current adoption rates for different Bluetooth codecs, it is unlikely to be Apple, and this technology will instead first find its way into lesser-known Android phones.

Where to find Bluetooth 6.0 right now

If you want to get a head start on Bluetooth 6.0 compatibility, there are a handful of devices already shipping (though not all of these are available in the US).

Adobe Firefly’s New AI Editing Tools Are a Step Toward More Precise AI Video

In an exclusive interview, Adobe shares how the company is building Firefly to be your forever partner for AI creation.

Anyone who’s used an AI image or video generator knows that these tools are often the opposite of precise. If you’re using an AI generator with a specific idea in mind, you’ll likely need to do a lot of work to bring that vision to life.

Adobe is convinced it can make its AI hub, Firefly, a place where AI creation is customizable and precise. That’s what the company is aiming for with the release of new AI video editing tools on Tuesday.

Over the course of 2025, Adobe has quietly emerged as one of the best places to use generative AI tools. Firefly subscriptions start at $10 a month, making it an affordable program that provides integration with top models from Google, OpenAI, Runway, Luma and several other leading AI companies. It’s expanding its roster with Topaz Labs’ Astra (available in Firefly Boards) and Flux 2.1 from Black Forest Labs, available in Firefly and Photoshop desktop.

The partnerships are helping to make Firefly an all-in-one hub for creators to leverage AI, said Steve Newcomb, vice president of product for Firefly, in an exclusive interview. Just as Photoshop is the “career partner” of photographers, Firefly aims to become a partner for AI video and image creators.

“If you’re a photographer, [Photoshop] has everything that you could ever want. It has all the tooling, all the plugins, in one spot with one subscription. You don’t have to subscribe to 25 different photo things,” Newcomb said. “So for us, Firefly, our philosophy is, how do we be that home?”

One way is through partnerships with AI companies, similar to Photoshop plug-ins. Precise editing tools are another, he said.

That’s why Adobe is trying to make it easier to edit AI-generated content. Hallucinations are common in AI-generated images and videos, such as disappearing and reappearing objects, weird blurs and other inaccuracies. For professional creators who use Adobe, the inability to edit out hallucinations makes AI almost unusable for final projects. 

In my own testing, I’ve often found that editing tools are basic, at best. At worst, they’re entirely absent, particularly for newer AI video technologies. Firefly’s new prompt-based editing for AI videos, announced on Tuesday, is a way to get that hands-on control.

If you’ve edited images in Firefly via prompting, the video setup will feel familiar. Even if you haven’t, prompt-based editing is essentially a fancy term for asking AI to modify things as you would when talking with a chatbot. Google’s Nano Banana Pro in Gemini is one example of an AI tool that allows you to edit through prompts.

Firefly’s video prompt editing has the added bonus of allowing you to switch between models for edits: You can generate with Firefly and edit with Runway’s Aleph, for example.

As with any AI chatbot or tool, prompt-based editing isn’t always accurate. But it’s a nice option to have without leaving Firefly for Premiere Pro.

The plan is to go beyond just prompt-based editing, Newcomb said. More AI-based precision editing tools for Firefly will be important, allowing you to make even more minute changes. What makes it possible is something called layer-based editing, a behind-the-scenes technology that enables easier, detailed changes in AI-generated images and videos. 

Adobe plans to implement layer-based editing down the line, and it will likely form the foundation for future AI video editing tools. The goal is to make it easier to keep working in Firefly “until the last mile” of editing, Newcomb said.

“We can run the gamut of the precision continuum all the way to the end, and just think of prompting as being one of many tools. But it is absolutely not the only tool,” said Newcomb.

For now, there is another piece of video editing news that could help you build more precise AI videos.

AI video editing without Premiere Pro expertise

Adobe is also bringing its full AI video editor into beta on Tuesday, the next step toward making editable and, therefore, usable AI video. 

Debuted at the company’s annual Max conference in October, the video editor is now launching in a public beta. It sits between basic video editors and the feature-stuffed Premiere Pro. It’ll be great for AI enthusiasts who want more editing firepower than you get with OpenAI or Google, without needing expertise in Premiere Pro.

The video editor is meant to help you put all the pieces of your project together in one place. It has a multitrack timeline for you to compile all your clips and audio tracks. That’s especially important because, while you can create your own AI speech and soundtracks, Firefly AI videos don’t natively generate with sound. (You can use Veo 3 or Sora 2 in Firefly to generate those initial clips with audio, though.) You can also export in a variety of aspect ratios.

“Think of the video editor as being one of our cornerstone releases that is helping us move toward being one place, one home, where you can have one subscription and get to every model you ever needed to get the job done,” Newcomb said.

AI Slop for Christmas: Why McDonald’s and Coca-Cola’s AI Holiday Ads Missed the Mark

Commentary: Two billion-dollar companies using AI for holiday ads isn’t giving me that holly jolly feeling.

I am completely exhausted by huge corporations like McDonald’s and Coca-Cola choosing to rely so heavily on AI for their holiday ads. McDonald’s made $25.9 billion in revenue in 2024, and Coca-Cola made $47.1 billion. Do these companies expect us to be OK with AI slop garbage when they could’ve spent a tiny fraction of that to hire a real animator or videographer?

In case you haven’t been inundated with these AI commercials, I’ll back up a bit. Both McDonald’s and Coca-Cola have launched holiday-themed commercials that are undeniably made with AI — each company bragged about its use of AI, a decision it has probably come to regret. The two ads are very different, showing the full range of what’s possible with AI in advertising. But the backlash against both proves we don’t have the appetite for AI slop.

The McDonald’s commercial features a series of holiday-themed mishaps, set to a parody of the song “It’s the Most Wonderful Time of the Year,” about how it’s actually the most terrible time of the year. The commercial is only 30 seconds long and was intended only for the Netherlands, but it has already garnered so much hate online that the company removed the video from its pages. The marketing agency behind the spot, The Sweetshop Film, still has the video up on its website.

The McDonald’s ad is very clearly AI, with short clips stitched together with a bunch of hard jump cuts. The text is barely legible, fine details are off and it just has that AI look I’ve come to quickly recognize as an AI reporter. In a now-deleted social media post, the marketing agency’s CEO talked about the various AI tools used to create it. By contrast, the Coca-Cola commercial is a little more put-together: A Coca-Cola truck drives through a wintry landscape and into a snowy town, and forest animals awaken to follow the truck and its cargo of soda bottles to a lit Christmas tree in a town square. But even this video has clearly AI-generated elements.

I was disappointed but not surprised when I saw the ad and the resulting backlash. There has been a surge in creative generative AI tools, especially in the past year, with numerous AI tools built specifically for marketers. They promise to help create content, automate workflows and analyze data. A huge proportion (94%) of marketers have a dedicated AI budget, and three-quarters of them expect that budget to grow, according to Canva’s 2025 Marketing and AI report. That’s partly why we’ve seen a massive increase in AI-generated content in our social media feeds. It’s no wonder Merriam-Webster selected ‘slop’ as its word of the year.

McDonald’s and Coca-Cola’s feel-good, festive commercials manage to hit upon every single controversial issue in AI, which is why they’re inspiring such strong reactions from viewers. AI content is becoming — has already become — normalized. We can’t escape chatbots online and AI slop in our feeds. McDonald’s and Coca-Cola’s use of AI is yet another sign that companies are plowing ahead with AI without truly considering how we’ll react. Like advertisements, AI is inescapable.

If AI in advertising is here to stay, it’s worth breaking down how it’s used and where we, as media consumers, don’t want to see it used.


Spotting the AI in Coca-Cola’s ad

McDonald’s now-removed ad was clearly AI, with its plastic-y people and jerky motions. Its format, a series of short clips stitched together with hard jump cuts, is another telltale sign since most AI video generators can only generate clips up to 10 or so seconds long. Coca-Cola’s ad was a little different, but the AI use was just as obvious.

The Holidays Are Coming ad is a remake of Coca-Cola’s popular 1995 ad. In a behind-the-scenes video, Coca-Cola breaks down how it was created. It’s obvious where AI was used to create the animals. But I’m not sure I believe the company went “pixel by pixel” to create its fuzzy friends.

Coca-Cola’s AI animals don’t look realistic; they look like AI. Their fur has some detail, but those finer elements aren’t as defined as they could be. They also aren’t consistent across the animal’s body. You can see the fur gets less detailed further back on the animal. That kind of detailed work is something AI video generators struggle with, but it’s something a (human) animator likely would’ve caught and corrected. 

The animals make overexaggerated surprised faces when the truck drives past them, their mouths forming perfect circles. That’s another sign of AI. You can see in the behind-the-scenes video that someone clicks through different AI variations of a sea lion’s nose, which is a common feature of AI programs. There’s also a glimpse of a feature that looks an awful lot like Photoshop’s generative fill. Google’s Veo video generator was definitely used at least once.

The company has been all-in on AI for a while, starting with a 2023 partnership with OpenAI. Even Coca-Cola’s advertising agency, Publicis Groupe, bragged about snatching Coca-Cola’s business with an AI-first strategy. It seems clear that the company won’t be swayed by its customers’ aversion to AI. (Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

All I want for Christmas is AI labels

There is exactly one thing Coca-Cola got right, and that’s the AI disclosure at the beginning of the video. It’s one thing to use AI in your content creation; it’s entirely another to lie about it. Labels are one of the best tools we have to help everyone who encounters a piece of content decipher whether it’s real or AI. Many social media apps let you simply toggle a setting before you post. 

It’s so easy to be clear, yet so many brands and creators don’t disclose their AI use because they’re afraid of getting hate for it. If you don’t want to get hate for using AI, don’t use it! But letting people sit and debate about whether you did or didn’t is a waste of everyone’s time. The fact that AI-generated content is becoming indistinguishable from real photos and videos is exactly why we need to be clear when it’s used.

It’s our collective responsibility as a society to be transparent with how we’re using AI. Social media platforms try to flag AI-generated content, but those systems aren’t perfect. We should appreciate that Coca-Cola didn’t lie to us about this AI-generated content. It’s a very, very low bar, but many others don’t pass it. (I’m looking at you, Mariah Carey and Sephora. Did you use AI? Just tell us.)

AI in advertising

In June, Vogue readers were incensed when the US magazine ran a Guess ad featuring an AI-generated model. Models at the time spoke out about how AI was making it harder to get work on campaigns. Eagle-eyed fans caught J.Crew using “AI photography” a month later. Toys R Us made headlines last year when it ran a weird ad with an AI giraffe, though it did share that it was made with an early version of OpenAI’s Sora.

Something that really stung about the use of AI by Guess and J.Crew is how obvious it was that AI was used in place of real models and photographers. While Coca-Cola’s and Toys R Us’s use of AI was equally clear, the AI animals didn’t hit quite the same. As the Toys R Us president put it, “We weren’t going to hire a giraffe.” Points for honesty?

Even so, it’s more than likely that real humans lost out on jobs in the creation of these AI ads. Both commercials could’ve been created, and probably improved, by human animators, designers and illustrators. Job loss due to AI worries Americans, and people working in creative industries are certainly at risk. It’s not because AI image and video generators are ready to wholly replace workers; it’s because AI’s allure of cutting-edge efficiency offers executives an easy rationale. It’s exactly what just happened at Amazon as it laid off thousands of workers.

It’s easy to look at Coca-Cola’s and McDonald’s AI holiday ads and brush them off as another tone-deaf corporate blunder, especially when there are so many other things to worry about. But in our strange new AI reality, it’s important to highlight the quiet moments that normalize this consequential, controversial technology just as much as the breakthrough moments.

So this holiday season, I think I’ll drink a Pepsi-owned Poppi cranberry fizz soda instead of a Coke Zero.

I’m Happy the 2026 Moto G Power Is $300 but Bummed It Lacks Wireless Charging

Motorola switches up some of the features on its lower-cost phone while adding an improved selfie camera and a larger battery.

Motorola added several notable improvements to the $300 Moto G Power that the 2025 model lacked. But the company did so at the expense of one of the prior phone’s best features: wireless charging.

I’m bummed about this news. I liked the 2025 Moto G Power because I could recharge it with a cable or wirelessly, and I was delighted to find such flexibility on such an affordable phone. But perhaps one reason Motorola removed the feature is that it upgraded the new Moto G Power’s battery to a 5,200-mAh capacity, 200 mAh more than the 2025 Moto G Power’s. Maybe Motorola is betting you won’t need to recharge the phone as frequently. The Moto G (2026) and Moto G Play (2026) also got larger batteries.

Despite lacking wireless charging, the phone supports a respectable 30W wired charging speed.

The new Moto G Power runs on the MediaTek Dimensity 6300 chip with 8GB of RAM, the same processor and memory the 2025 edition had. The 2026 Moto G Power has RAM Boost to help it run more apps and tasks. We’ll have to test it to see if there’s a noticeable difference between the new Moto G Power’s performance and that of its cheaper 2026 siblings, the Moto G Play and Moto G. The phone runs Android 16 and has Google’s Circle to Search along with the Gemini AI assistant.

Many of the 2026 Moto G Power’s other features carry over from the 2025 edition, which is largely a good thing. It has IP68 and IP69 ratings for water and dust resistance, so it should handle having a drink spilled on it with no problem. It has a 6.8-inch, 1,080-pixel-resolution display and a pair of lenses on the back: a 50-megapixel wide-angle camera and an 8-megapixel ultrawide. The Moto G Power also gets a new 32-megapixel front-facing camera, which should improve photos and video calls over the 2025 Moto G Power’s 16-megapixel selfie camera.

The Moto G Power comes in white (Pantone pure cashmere) and dark blue (Pantone evening blue) and will go on sale Jan. 8 in the US, with initial availability at Motorola’s website, Best Buy and Amazon. It will also be available at wireless carriers over the coming months.
