Technologies
These Official Apple AirTag Keychains Are Just $15 and Perfect for Your 2026 Adventures
Apple rarely offers discounts on even its most basic wares, but Woot does, so secure your AirTag with a new keychain for less.

If you have Apple AirTags, then you know these trackers are even better when attached to a well-made keychain. But Apple rarely offers discounts on its products, including its official AirTag keychains. That is where retailers come in. We just spotted official Apple AirTag keychains at Woot, starting at $15 for a limited time.
To make things even better, you can grab a two-pack for $25, which brings each keychain down to $12.50. These deals are live until Dec. 19 or until the keychains sell out, so act fast. For comparison, Apple sells some AirTag keychains for up to $35, so you can save as much as $20 at Woot.
Right now, Woot has three kinds of Apple AirTag keychains in stock: black leather with a key ring, leather loop, and silicone loop. Colors vary depending on the style you are after. Regardless of the style or color chosen, a single Apple AirTag holder costs only $15 and any two-pack is down to just $25. These AirTag keychains are new and include a one-year warranty.
Amazon Prime members can get free shipping. Note that shipping timelines will vary due to the busy holiday season, so your order may not arrive in time for Christmas. Additionally, Woot does not ship to P.O. boxes, Alaska or Hawaii.
Why this deal matters
AirTag keychains can cost up to $35, especially if you prefer leather or silicone. This Woot deal cuts the cost to as little as $12.50 per keychain if you buy the two-pack, or $15 for a single, a savings of up to $20. These keychains make great stocking stuffers as well. Just make sure to order by Dec. 19 to secure these discounts. Looking for more gifts? Check out our list of gifts under $150 that can help you keep your finances in check while getting through your holiday list.
Technologies
We Had a Poke Around ChatGPT’s New App Store. Here’s What We Found
After a call for app submissions from developers, ChatGPT’s beta app feature has arrived.
Adobe Photoshop, Spotify, Canva, Zillow and other well-known digital tools are now apps within ChatGPT. OpenAI has launched an app platform two months after announcing the beta feature and rolling out a development kit.
On Dec. 17, OpenAI announced that developers could submit their apps, and a day later, apps began to appear in the desktop version of ChatGPT, the wildly popular chatbot with more than 800 million active users. It’s unclear how soon the feature will show up in the mobile ChatGPT app; in CNET’s testing, it wasn’t yet available on iOS.
(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
How to use apps in ChatGPT
The apps, at least at launch, are categorized as “Featured,” “Lifestyle” and “Productivity,” with descriptions that will look familiar to anyone who’s used the Apple App Store or the Google Play store.
You can also use search to find a specific app that may not be listed under those categories.
Many of the apps include screenshots with prompt examples that suggest how to use the app within the chatbot. Canva, for instance, includes the prompt example, “@canva create a 2025 wrap presentation for my class.”
Instead of downloading an app, you’ll click a “Connect” button to give ChatGPT access to it. From there, you can use an @ prompt to access that app’s features.
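On the developer side, OpenAI’s Apps SDK (the development kit mentioned above) builds on the open Model Context Protocol. Here is a minimal sketch of what an app’s tool definition might look like using the open-source MCP Python SDK; the “recipes” app and its tool are hypothetical examples, not a real ChatGPT app.

```python
# Minimal sketch using the MCP Python SDK, which OpenAI's Apps SDK builds on.
# The "recipes" app name and find_recipe tool are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("recipes")  # the app users would see after clicking "Connect"

@mcp.tool()
def find_recipe(ingredient: str, max_minutes: int = 30) -> str:
    """Suggest a quick recipe for an ingredient."""
    # A real app would query its own backend here.
    return f"Try a {ingredient} stir-fry; it comes in under {max_minutes} minutes."

if __name__ == "__main__":
    mcp.run()  # exposes the tool so a prompt like "@recipes ..." can invoke it
```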
Interestingly, you can’t just ask ChatGPT which apps are connected if you forget. Instead, you need to go to Settings, then Connected Apps, or to Settings, then Data Controls, then Connected Apps, depending on which version of ChatGPT you’re using.
Technologies
AI Health Coaches: The Next Frontier in Wearables or Privacy Nightmare?
We should probably brace for both.
I’ve been tracking biometric data about my body since what feels like the dawn of time (or at least the dawn of wearables). I ran a half-marathon with the first Fitbit tracker, reviewed the very first Apple Watch and used the first smartphone-connected thermometer for ovulation tracking back when it was a pen-and-paper operation for most.
Collecting data about my body isn’t just second nature; it’s practically part of my job description. And for years, it’s been entirely on me to overanalyze that mountain of metrics and figure out how to turn it into something useful.
So when AI health coaches started surfacing from Google, Samsung, Apple, Oura and others, promising to shoulder that mental load, I was all in. You mean to tell me I don’t have to decode every tiny fluctuation in my data on my own anymore?
Most of us can’t afford a real-life wellness coach to meal-prep for us, hype us up midworkout or pry the dumbbells from our fever-wrought hands when we’re at the gym looking like a walking Flonase commercial. An AI coach felt like the next best thing: a nerdy, data-obsessed friend living in my phone, armed with years of my biometrics and the patience to explain them without judgment.
Over the last year, I tried them all, or at least the early versions of what they’ll eventually become. Personal trainers built into fitness apps. Chatbots tucked behind wearable dashboards. Coaches that whisper advice into your earbuds or nudge you from your smartwatch. Some free, some paid.
But so far, none has been game-changing in the way I’d hoped, and the trade-offs of handing over my health data often felt like a high price to pay. The dream in my head doesn’t quite match the reality taking shape.
Like with any new tech, it takes a while to weigh the long-term cost versus the short-term reward. But one thing is clear: This isn’t a passing trend. AI-driven health tech is poised to reshape personal health care in a way that smartwatches and smart rings haven’t yet.
In the best-case scenario, AI health apps and programs could help fill gaps in care and serve as a lifeline in communities with limited access to wellness information. In the worst-case scenario, they could open the floodgates to a privacy nightmare and mishandle medical data. Where this all lands depends on how we choose to use AI coaches and what guardrails are built around them.
AI in wearables isn’t new, but now it’s going rogue
The use of AI in health care, wellness and fitness has exploded in the last year, but the technology has been baked into the wearable experience for much longer. High heart-rate alerts, fall detection, even sleep scores… that’s all AI working behind the scenes.
According to Karin Verspoor, dean of the School of Computing Technologies at RMIT University in Melbourne, Australia, this type of AI is referred to as predictive modeling. “It’s a targeted tool that’s been trained to identify a particular type of event.”
In the case of these wearables, the “task” is looking for patterns outside the normal baseline and surfacing them as an alert. They’re precise and predictable.
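As a rough illustration of that kind of baseline check (not any vendor’s actual algorithm), a tracker could flag a reading that sits several standard deviations outside your own history:

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], reading: int, z_cutoff: float = 3.0) -> bool:
    """Flag a reading that sits far outside the wearer's own baseline."""
    baseline, spread = mean(history), stdev(history)
    return abs(reading - baseline) / spread > z_cutoff

resting_hr = [58, 61, 59, 60, 62, 57, 60, 59]  # illustrative resting heart rates
print(is_anomalous(resting_hr, 95))  # True: worth surfacing as an alert
```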
But now we’re veering into something different and much harder to control: generative AI. With these full-on concierge-style chatbot models, not much different from ChatGPT or Gemini, any topic is fair game: heart rate patterns, premenstrual mood swings, diet tips or even medical recommendations (the latter, thankfully, usually prompts you to check with a human physician). The caveat is that these “health coaches” have an all-access pass to your most sensitive health data in real time.
“Large language AI models are essentially much more dynamic and much more responsive to whatever somebody puts into the prompt, and whatever the ongoing interaction with the system is,” says Verspoor. The problem, she notes, is that they’re also “subject to all of the problems that we have with large language models like confabulations or hallucinations.”
Over the past 18 months, it seems like nearly every major tech and fitness brand has launched its own version of an AI coach or chatbot-style concierge, and if they haven’t, they’re very likely considering it.
Google is testing an AI coach inside the Fitbit app, built on Gemini. Apple has released a Workout Buddy for the Apple Watch that offers real-time motivation via headphones based on live metrics during workouts, and is rumored to be exploring some kind of ChatGPT integration in its Health app. Samsung, Garmin, Oura and iFit have all rolled out AI features across their apps and wearable devices, while Meta has partnered with Garmin and Oakley to embed its Meta AI voice assistant into smart workout glasses.
That’s just a snapshot of the AI health coaches I’ve personally tested, and a fraction of what’s likely in development. Only Google’s is explicitly labeled a “coach,” but for the purposes of this article, they all fall under the same umbrella of AI health coaches.
Some of these features feel promising. Meta AI, for example, can read your Garmin heart-rate data into your ear through the glasses’ speakers so you don’t have to take your eyes off the trail. Or you might get training and rest-day recommendations based on how you slept and other physical data.
Other features, however, still feel half-baked. Samsung’s running coach, for example, offered a one-size-fits-all training plan that didn’t match my goals or experience.
In theory, these models should improve over time as they learn individual patterns and as people like me find better ways to leverage them. For now, though, most remain in their infancy, far from the full potential they’re meant to be: an always-available adviser, designed to make sense of the ever-growing pile of health data collected through wearables.
Best-case scenario: AI to the rescue
The current health care model in the US is overdue for a transformation. The system is overburdened, prohibitively expensive and facing demand that outpaces supply, especially in rural areas with limited access to doctors and medical equipment.
Dr. Jonathan Chen, professor of medicine and the director for medical education in artificial intelligence at Stanford, is optimistic that AI could play a constructive role in easing some of that pressure, especially when it comes to making sense of all the health information and clinical data in patient records.
“We already have ways to collect data for people all the time, but even your doctor doesn’t know what to do with all that data in the ICU, let alone all the wearable data,” says Chen.
AI, he argues, can help bridge that gap by synthesizing information in ways that actually matter, such as flagging warning signs of potentially life-threatening conditions like hypertension before they become fatal. Having a personal health concierge at your fingertips could help you focus more intimately on wellness and encourage behavioral changes that reduce the risk of chronic illness over time.
“Even though the actionable insight might not be that different,” said Chen, “when it feels personalized, that might be a way some people will engage deeper.” Chen emphasizes that AI works best when it drives better conversations, not when it replaces them. He points to glucose monitoring as an example: Instead of walking into an appointment with a month of raw data, AI could review that information ahead of time and surface patterns and actionable insights to guide the discussion.
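To make that concrete, here is a toy sketch of the pre-visit summarization Chen describes; the readings and threshold are illustrative, not clinical guidance or any vendor’s pipeline.

```python
from collections import defaultdict

# Illustrative CGM readings as (hour of day, mg/dL); a real export spans weeks.
readings = [(7, 110), (9, 185), (13, 150), (15, 190), (21, 120),
            (7, 105), (9, 178), (13, 145), (15, 200), (21, 115)]

by_hour = defaultdict(list)
for hour, mgdl in readings:
    by_hour[hour].append(mgdl)

# Surface only the actionable pattern: hours that repeatedly run high.
for hour, values in sorted(by_hour.items()):
    avg = sum(values) / len(values)
    if avg > 160:  # illustrative threshold, not clinical guidance
        print(f"Readings around {hour}:00 average {avg:.0f} mg/dL -- flag for the visit")
```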
I’ve seen that best-case scenario play out firsthand. A close family member began receiving irregular heart rhythm notifications from an Apple Watch. The alerts had never appeared during a routine doctor visit, nor after wearing a clinical heart monitor at home for weeks. When the watch flagged an episode in real time, he got in front of a doctor, confirmed the diagnosis with an ECG and took action. A few months later, he underwent a heart procedure that significantly reduced his risk of a potentially life-threatening event. In that case, the wearable didn’t replace medical care, but did exactly what it was meant to do: surface a signal, start a conversation and help close a dangerous gap in care.
But that same dynamic can just as easily tip in the other direction. False positives and over-indexing on minor deviations could lead to unnecessary tests and screenings, adding strain to an already overwhelmed health care system.
“Is there going to be a storm of patients banging on the doctor’s door? ‘My Apple Watch, my Fitbit told me I have some heart condition,’” says Chen. “‘You have to give me 100 scans right now and start me on medication.’ Like, whoa, whoa, whoa, buddy… Let’s take a look first. Let’s see what’s really there.”
It’s a familiar tension: an upgraded version of the Dr. Google era, when even the most innocent search about a rash could spiral into a late-night panic over flesh-eating bacteria.
Pay to play: The price of privacy
My biggest concern when I started using these AI coaches was data sharing and privacy. Asking ChatGPT about a rash is one thing, but giving a chatbot access to my entire medical history is a completely different beast. Many of these health platforms contain years of my biometric data, along with my medical ID, which includes blood type and allergies.
The alternative is not to use them at all. In many cases, these AI coaches rely on a pay-to-play model, with some requiring an actual subscription. But the real payment is your data. “We can’t have reliable predictive models or generative models without having access to data of some variety,” says Verspoor.
The amount you give up and how it’s used varies by platform, but signing up involves wading through dense disclosures: permission to use your historical and real-time biometric data, location info and chat history to train other models. We’ve become so desensitized to these agreements that most people (myself included) aren’t even sure what we’re giving up anymore.
That confusion isn’t accidental. The language is often intentionally vague and nearly impossible to understand without a law degree. In my case, for example, using Oakley’s smart glasses required agreeing to let my data be used to train Meta’s AI.
A recent privacy analysis by the Electronic Privacy Information Center found that the health-related data people assumed was private (including searches, browsing behavior and information entered into health platforms) is often collected and shared far beyond its original context. In one case, data entered on a state health insurance marketplace was tracked and sent to third parties, such as LinkedIn, for advertising purposes. Much of this information falls outside HIPAA protections, meaning it can be legally repurposed or sold in ways consumers never intended.
Even when anonymized, health data can often be traced back to a real person, and could even be used by insurers to raise premiums.
“You can deidentify and can make it harder to tell, but if someone tried really hard, it’s actually not that hard to use statistical methods to reconstruct who’s actually who,” says Chen.
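What Chen describes is what privacy researchers call a linkage attack: joining a “deidentified” dataset with a public one on quasi-identifiers such as ZIP code, birth year and sex. A toy sketch with fabricated rows shows how little it takes:

```python
import pandas as pd

# A "deidentified" health table: names stripped, quasi-identifiers kept.
health = pd.DataFrame({
    "zip": ["94301", "94301", "10001"],
    "birth_year": [1984, 1991, 1975],
    "sex": ["F", "M", "F"],
    "diagnosis": ["hypertension", "asthma", "diabetes"],
})

# A public dataset (voter roll, social profile) with names attached.
public = pd.DataFrame({
    "name": ["A. Rivera", "B. Chen"],
    "zip": ["94301", "10001"],
    "birth_year": [1984, 1975],
    "sex": ["F", "F"],
})

# One merge on the quasi-identifiers re-attaches names to diagnoses.
print(public.merge(health, on=["zip", "birth_year", "sex"]))
```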
Data breaches and hacks are just the tip of the iceberg. We often have little visibility into how long data will be stored, who it might be shared with or where it could end up years down the line. Chen points to 23andMe as a cautionary tale. The company had promised privacy and security, until financial trouble put massive amounts of genetic data in jeopardy.
“They’ll keep it secure and private, but then they go bankrupt. And so now they’re just going to sell all their assets to whoever wants to buy it.”
AI health coach: friend or foe?
The reality, at least in the short term, is likely less extreme than either of those scenarios. We’re probably not on the verge of AI saving health care, or of selling our most sensitive health data to the highest bidder.
As Verspoor points out, the pay-to-play model isn’t exclusive to AI health coaches. Tech companies have been using personal data to power products long before generative AI entered the chat. Your search history may not look like an ECG, but it can be just as revealing about life stages, health anxieties or illness history.
With AI health coaches having a direct line to real-time biometric data, it’s more important than ever for people to pay close attention to what data they’re signing off on and who they’re handing it to. Is that information staying on-device? Is it being shared with third parties? And what happens to it down the line? This requires people to be in the driver’s seat when signing up and to read the fine print, even if it means having to copy and paste it into yet another AI chatbot to translate the legal jargon. Then weigh whether the exchange is worth it to you.
Chen believes the potential upside still outweighs the risks, especially if these tools succeed at getting people to care more about their health and engage with it more often. That engagement, he argues, is where the real value lies so long as AI remains a supplement to care, not a substitute for it. Both experts agree AI health coaches should function as ancillary tools to help you understand your data, ask better questions and jump-start conversations with your doctor.
AI coaches may know your day-to-day vitals, but they still have blind spots when it comes to real-world context and medical-grade testing. Their advice, no matter how innocuous and obvious it may sound, like “hydrate after a bad night of sleep,” should be taken with a healthy dose of skepticism. Unlike tools such as ChatGPT or Google’s Gemini, some AI health coaches, including Google’s Fitbit Coach and Oura’s Advisor, don’t clearly cite sources or explain where their recommendations come from, at least not yet.
The tipping point
For now, we’re in an awkward in-between phase.
I was initially excited about the idea of an AI health coach taking some of the mental load off interpreting my health data. That quickly turned to skepticism as the privacy trade-offs became apparent. Now, after months of testing, I’ve landed somewhere else entirely: Most days, I forget the tool is there in the first place.
That gap between insight and action is something human coaches have long understood. Jonathan Goodman, a fitness coach and author of Unhinged Habits, says AI excels at processing data, but behavior change rarely hinges on perfect metrics or the perfect training plan.
“For a general-population human who just needs to move a little bit more, eat a little bit better, and play with their kids, it’s probably closer to 10% technical and 90% psychological,” he says. Metrics can surface patterns, but coaching is about asking the right questions, fitting movement into real life and recognizing those moments when someone is ready to push themselves into real transformation.
To me, it’s that in-the-moment guidance, pushing me past my limit or telling me when to scale back, that’s missing from these AI coaches. The experience is largely passive, often requiring you to check the app to see that day’s training plan. Apple’s Workout Buddy might be the closest to that, with real-time motivation based on your stats, but even that stops short of actual coaching. And none has proven indispensable enough to make me seek it out consistently.
To reach that tipping point, these companies will need to give us stronger reasons to engage and clearer safeguards to justify handing over our deeply personal health data.
Technologies
Headphone Conversation Awareness Mode: How It Works and Why You Need It
Taking off your headphones for a quick chat is practically Stone Age. Try conversation awareness mode to make things more seamless and truly hands-free.
Listening to your tunes, but your neighbor is feeling chatty? Ordering a latte but your hands are full so you can’t pause your podcast? Conversation detection, a feature on some headphones and earphones, can be a game-changer. Instead of removing your active noise-canceling earbuds or using your hands to pause the audio, this handy feature detects voices, pauses the audio and turns off the noise canceling.
That seamlessness between the cozy comfort of noise cancellation and the bustling real world is extremely helpful and easy to set up. There are, however, a few important things to note for the best experience with automatic conversation detection.
Most noise-canceling earbuds, including those from Bose and many other manufacturers, have a mode called Aware, Awareness or Transparency. This boosts ambient sound, often in the vocal frequency ranges. What I’m talking about here is a detection feature that makes switching to this mode automatic instead of having to manually select it.
You’ll generally see this feature on flagship headphones from Apple, Sony, Google and Samsung. Each one calls it something slightly different: Apple has Conversation Awareness, Samsung has Voice Detect, Google has Conversation Detection and Sony’s got Speak-to-Chat.
How it works
Enable
Conversation modes are generally accessible in the settings of your headphones’ companion app. If your phone and headphones are both Apple or both Google, go into the phone’s settings and access the feature by tapping on your headphones. Always be sure to update all of your devices’ firmware. Apple iOS also provides access to Conversation Awareness via the Control Center that appears when you swipe down from the top of the screen.
Detect
The array of tiny microphones built into your earbuds or headphones for calls and noise canceling will detect your voice for an awareness mode. Many headphones have built-in accelerometers for features like head tracking and on-ear/head detection; these might also be used to pick up jaw movement to verify that it’s you speaking and not someone nearby.
Samsung has a separate but related Siren Detect feature that automatically turns on Transparency mode when a siren is detected, so you can hear what’s going on in an emergency. (Some brands do the opposite and crank up the ANC when a loud sound is detected.)
Auto-adjust audio
Once triggered, awareness modes either pause or lower the volume of whatever audio is currently playing. This behavior differs by brand: Apple devices lower music but pause podcasts, Samsung lowers all audio, and Sony and Google devices pause all audio. Ideally, you’d be able to choose the behavior, but that’s still rare. Apple also adds Conversation Boost, which uses the mics and accelerometer-based head tracking to amplify the voice of the person you’re talking to.
End chat and resume
Then, either through some technological wizardry or simply by sensing when you stop talking (the silence window is adjustable on some brands, including Sony), the headphones detect that the conversation has ended and revert to the previous audio, at the same volume and in the same noise-cancellation mode. Many models are better than people at detecting the end of a conversation.
Any model with this feature will also let you toggle the conversation mode on/off manually with a long button press or similar action.
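Putting those steps together, conversation awareness behaves like a small state machine: the wearer’s voice flips it into transparency, and a stretch of silence flips it back. Here is a schematic sketch; the timeout and print statements are illustrative stand-ins, not any manufacturer’s actual firmware logic.

```python
class ConversationAwareness:
    """Schematic sketch of detect -> duck/pause -> resume; not vendor firmware."""

    def __init__(self, silence_timeout: float = 10.0):
        self.silence_timeout = silence_timeout  # adjustable on some brands
        self.in_conversation = False
        self.last_voice_at = 0.0

    def on_own_voice(self, now: float) -> None:
        # Fired only for the wearer's voice (mics plus jaw-motion accelerometer).
        self.last_voice_at = now
        if not self.in_conversation:
            self.in_conversation = True
            print("pause/duck audio, switch ANC to transparency")

    def tick(self, now: float) -> None:
        # After enough silence, revert to the previous volume and ANC mode.
        if self.in_conversation and now - self.last_voice_at > self.silence_timeout:
            self.in_conversation = False
            print("resume audio at prior volume, restore noise canceling")

buds = ConversationAwareness()
buds.on_own_voice(now=0.0)  # you start talking
buds.tick(now=5.0)          # mid-conversation: nothing changes
buds.tick(now=11.0)         # silence exceeded the timeout: resume
```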
The fine print
Conversation detection is triggered by your voice, not someone else’s, so you may wind up asking people to repeat themselves once you notice they’re talking to you; it’s that reply that triggers conversation mode. Depending on the model, detection might also require both earbuds to be in your ears to work.
Sometimes, conversation detection can be triggered inadvertently by coughing, singing along to music, or other random ambient sounds. It may also not work well in extremely noisy environments, such as construction sites and airplanes. Some models do let you adjust the sensitivity, which is something we’d like to see more of in firmware updates and future releases.
Frequent podcast or audiobook listeners should choose headphones that pause all audio for conversations, or that at least distinguish between audio types and pause spoken-word content so you don’t miss anything. Note that Apple and Samsung won’t pause videos from services like Netflix or YouTube; they just lower the audio.
As with all features that use sensors and mics, conversation detection will affect battery life to some degree, though it’s not a major drain.
The final verdict
Conversation detection modes aren’t for everyone, especially exuberant souls who talk to themselves at full volume, yell at the news or sing along with their tunes. If you reflexively take your earbuds out to talk to others, you also don’t need this feature — unless you want to change that habit.
In the future, I’d like to see more adjustability, but even as implemented in the current crop of headphones and earbuds, this feature is an excellent upgrade to the seamlessness of digital life.