AI Health Coaches: The Next Frontier in Wearables or Privacy Nightmare?

We should probably brace for both.

I’ve been tracking biometric data about my body since what feels like the dawn of time (or at least the dawn of wearables). I ran a half-marathon with the first Fitbit tracker, reviewed the very first Apple Watch and used the first smartphone-connected thermometer for ovulation tracking back when it was a pen-and-paper operation for most. 

Collecting data about my body isn’t just second nature; it’s practically part of my job description. And for years, it’s been entirely on me to overanalyze that mountain of metrics and figure out how to turn it into something useful.

So when AI health coaches started surfacing from Google, Samsung, Apple, Oura and others, promising to shoulder that mental load, I was all in. You mean to tell me I don’t have to decode every tiny fluctuation in my data on my own anymore? 

Most of us can’t afford a real-life wellness coach to meal-prep for us, hype us up mid-workout or pry the dumbbells from our feverish hands when we’re at the gym looking like a walking Flonase commercial. An AI coach felt like the next best thing: a nerdy, data-obsessed friend living in my phone, armed with years of my biometrics and the patience to explain them without judgment.

Over the last year, I tried them all, or at least the early versions of what they’ll eventually become. Personal trainers built into fitness apps. Chatbots tucked behind wearable dashboards. Coaches that whisper advice into your earbuds or nudge you from your smartwatch. Some free, some paid. 

But so far, none has been game-changing in the way I’d hoped, and the trade-offs of handing over my health data often felt like a high price to pay. The dream in my head doesn’t quite match the reality taking shape. 

Like with any new tech, it takes a while to weigh the long-term cost versus the short-term reward. But one thing is clear: This isn’t a passing trend. AI-driven health tech is poised to reshape personal health care in a way that smartwatches and smart rings haven’t yet. 

In the best-case scenario, AI health apps and programs could help fill gaps in care and serve as a lifeline in communities with limited access to wellness information. In the worst-case scenario, they could open the floodgates to a privacy nightmare and mishandle medical data. Where this all lands depends on how we choose to use AI coaches and what guardrails are built around them.


AI in wearables isn’t new, but now it’s going rogue

The use of AI in health care, wellness and fitness has exploded in the last year, but the technology has been baked into the wearable experience for much longer. High heart-rate alerts, fall detection, even sleep scores… that’s all AI working behind the scenes.

According to Karin Verspoor, dean of the School of Computing Technologies at RMIT University in Melbourne, Australia, this type of AI is referred to as predictive modeling. “It’s a targeted tool that’s been trained to identify a particular type of event.”

In the case of these wearables, the “task” is looking for patterns outside the normal baseline and surfacing them as an alert. They’re precise and predictable.
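
For the technically curious, here’s a rough sketch of what that kind of targeted alerting boils down to. It’s a simplified Python illustration with made-up numbers and thresholds; real wearables use trained models and far richer signals, but the shape of the task is the same: learn a personal baseline, then flag departures from it.

```python
from statistics import mean, stdev

def high_heart_rate_alert(resting_hr_history, new_reading, sigma=3.0):
    """Flag a reading that sits far outside the wearer's own baseline.

    resting_hr_history: past resting heart rates in beats per minute.
    sigma: how many standard deviations from baseline count as unusual.
    (Illustrative thresholds only, not any vendor's actual logic.)
    """
    baseline = mean(resting_hr_history)
    spread = stdev(resting_hr_history)
    if abs(new_reading - baseline) > sigma * spread:
        return f"Alert: {new_reading} bpm is far from your ~{baseline:.0f} bpm baseline"
    return None  # within the normal range, stay quiet

# A week of resting heart rates, then two new readings
history = [62, 64, 61, 63, 65, 62, 63]
print(high_heart_rate_alert(history, 112))  # triggers an alert
print(high_heart_rate_alert(history, 66))   # prints None
```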

But now we’re veering into something different and much harder to control: generative AI. With these full-on concierge-style chatbot models, not much different from ChatGPT or Gemini, any topic is fair game: heart rate patterns, premenstrual mood swings, diet tips or even medical recommendations (the latter, thankfully, usually prompts you to check with a human physician). The caveat is that these “health coaches” have an all-access pass to your most sensitive health data in real time.

“Large language AI models are essentially much more dynamic and much more responsive to whatever somebody puts into the prompt, and whatever the ongoing interaction with the system is,” says Verspoor. The problem, she notes, is that they’re also “subject to all of the problems that we have with large language models like confabulations or hallucinations.”
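
To see the difference in miniature, consider a hedged sketch of the concierge pattern: instead of a fixed rule, the app folds your live metrics into an open-ended prompt for a general-purpose language model. Here, `ask_llm` is a hypothetical stand-in for whatever model API a given vendor uses; nothing below reflects any company’s real implementation.

```python
def build_coach_prompt(metrics: dict, question: str) -> str:
    """Fold live biometrics into an open-ended prompt for a language model."""
    context = ", ".join(f"{k}: {v}" for k, v in metrics.items())
    return (
        "You are a health coach. The user's recent data: "
        f"{context}. Answer their question, and suggest seeing "
        f"a physician for anything medical.\nUser: {question}"
    )

metrics = {"resting_hr": 58, "sleep_hours": 5.2, "hrv_ms": 34}
prompt = build_coach_prompt(metrics, "Why am I so tired today?")
# answer = ask_llm(prompt)  # hypothetical call to the vendor's model
print(prompt)
```

Because the prompt is free-form, any topic really is fair game, and the answer inherits every failure mode Verspoor describes.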

Over the past 18 months, it seems like nearly every major tech and fitness brand has launched its own version of an AI coach or chatbot-style concierge, and if they haven’t, they’re very likely considering it. 

Google is testing an AI coach inside the Fitbit app, built on Gemini. Apple has released a Workout Buddy for the Apple Watch that offers real-time motivation via headphones based on live metrics during workouts, and is rumored to be exploring some kind of ChatGPT integration in its Health app. Samsung, Garmin, Oura and iFit have all rolled out AI features across their apps and wearable devices, while Meta has partnered with Garmin and Oakley to embed its Meta AI voice assistant into smart workout glasses.

That’s just a snapshot of the AI health coaches I’ve personally tested, and a fraction of what’s likely in development. Only Google’s is explicitly labeled a “coach,” but for the purposes of this article, they all fall under the same umbrella of AI health coaches.

Some of these features feel promising. Meta AI, for example, can read out your Garmin heart-rate data through the glasses’ speakers so you don’t have to take your eyes off the trail. Or you might get training and rest-day recommendations based on how you slept and other physical data.

Other features, however, still feel half-baked. Samsung’s running coach, for example, offered a one-size-fits-all training plan that didn’t match my goals or experience.  

In theory, these models should improve over time as they learn individual patterns and as people like me find better ways to leverage them. For now, though, most remain in their infancy, far from the full potential they’re meant to be: an always-available adviser, designed to make sense of the ever-growing pile of health data collected through wearables.

Best-case scenario: AI to the rescue

The current health care model in the US is overdue for a transformation. The system is overburdened, prohibitively expensive and facing demand that outpaces supply, especially in rural areas with limited access to doctors and medical equipment.

Dr. Jonathan Chen, professor of medicine and the director for medical education in artificial intelligence at Stanford, is optimistic that AI could play a constructive role in easing some of that pressure, especially when it comes to making sense of all the health information and clinical data in patient records. 

“We already have ways to collect data for people all the time, but even your doctor doesn’t know what to do with all that data in the ICU, let alone all the wearable data,” says Chen.

AI, he argues, can help bridge that gap by synthesizing information in ways that actually matter, such as flagging warning signs of potentially life-threatening conditions like hypertension before they become fatal. Having a personal health concierge at your fingertips could help you focus more intimately on wellness and encourage behavioral changes that reduce the risk of chronic illness over time.

“Even though the actionable insight might not be that different,” says Chen, “when it feels personalized, that might be a way some people will engage deeper.” Chen emphasizes that AI works best when it drives better conversations, not when it replaces them. He points to glucose monitoring as an example: Instead of walking into an appointment with a month of raw data, AI could review that information ahead of time and surface patterns and actionable insights to guide the discussion.
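
A toy version of that glucose example might look like the sketch below. The 180 and 70 mg/dL cutoffs are illustrative only, not clinical guidance; the point is simply that software can pre-digest a month of readings into a few talking points before the appointment.

```python
def summarize_glucose(readings):
    """Condense a month of raw glucose logs into discussion points.

    readings: list of (hour_of_day, mg_dl) tuples.
    """
    highs = [(h, v) for h, v in readings if v > 180]
    lows = [(h, v) for h, v in readings if v < 70]
    notes = []
    if highs:
        spike_hours = {}
        for h, _ in highs:
            spike_hours[h] = spike_hours.get(h, 0) + 1
        worst = max(spike_hours, key=spike_hours.get)
        notes.append(f"{len(highs)} readings above 180 mg/dL, clustering around {worst}:00")
    if lows:
        notes.append(f"{len(lows)} reading(s) below 70 mg/dL worth raising")
    return notes or ["No out-of-range readings this month"]

# A fictional month of readings: (hour of day, mg/dL)
month = [(8, 110), (13, 195), (13, 210), (19, 188), (3, 65), (9, 120)]
print(summarize_glucose(month))
```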

I’ve seen that best-case scenario play out firsthand. A close family member began receiving irregular heart rhythm notifications from an Apple Watch. The alerts had never appeared during a routine doctor visit, nor after wearing a clinical heart monitor at home for weeks. When the watch flagged an episode in real time, he got in front of a doctor, confirmed the diagnosis with an ECG and took action. A few months later, he underwent a heart procedure that significantly reduced his risk of a potentially life-threatening event. In that case, the wearable didn’t replace medical care, but did exactly what it was meant to do: surface a signal, start a conversation and help close a dangerous gap in care.

But that same dynamic can just as easily tip in the other direction. False positives and over-indexing on minor deviations could lead to unnecessary tests and screenings, adding strain to an already overwhelmed health care system.

“Is there going to be a storm of patients banging on the doctor’s door? ‘My Apple Watch, my Fitbit told me I have some heart condition,’” says Chen. “‘You have to give me 100 scans right now and start me on medication.’ Like, whoa, whoa, whoa, buddy… Let’s take a look first. Let’s see what’s really there.”

It’s a familiar tension: an upgraded version of the Dr. Google era, when even the most innocent search about a rash could spiral into a late-night panic over flesh-eating bacteria.

Pay to play: The price of privacy

My biggest concern when I started using these AI coaches was data sharing and privacy. Asking ChatGPT about a rash is one thing, but giving a chatbot access to my entire medical history is a completely different beast. Many of these health platforms contain years of my biometric data, along with my medical ID, which includes blood type and allergies. 

The only alternative is not to use them at all. In many cases, these AI coaches rely on a pay-to-play model, with some requiring an actual subscription. But the real payment is your data. “We can’t have reliable predictive models or generative models without having access to data of some variety,” says Verspoor.

The amount you give up and how it’s used varies by platform, but signing up involves wading through dense disclosures: permission to use your historical and real-time biometric data, location info and chat history to train other models. We’ve become so desensitized to these agreements that most people (myself included) aren’t even sure what we’re giving up anymore. 

That confusion isn’t accidental. The language is often intentionally vague and nearly impossible to understand without a law degree. In my case, for example, using Oakley’s smart glasses required agreeing to let my data be used to train Meta’s AI. 

A recent privacy analysis by the Electronic Privacy Information Center found that the health-related data people assumed was private (including searches, browsing behavior and information entered into health platforms) is often collected and shared far beyond its original context. In one case, data entered on a state health insurance marketplace was tracked and sent to third parties, such as LinkedIn, for advertising purposes. Much of this information falls outside HIPAA protections, meaning it can be legally repurposed or sold in ways consumers never intended.

Even when anonymized, health data can often be traced back to a real person and even used by insurance agencies to raise premiums.

“You can deidentify and can make it harder to tell, but if someone tried really hard, it’s actually not that hard to use statistical methods to reconstruct who’s actually who,” says Chen.
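
Here’s a deliberately tiny, fictional illustration of what Chen is describing, often called a linkage attack: the names are stripped from the “anonymized” health records, but quasi-identifiers like ZIP code, birth year and sex survive, and those same fields frequently appear in public datasets.

```python
# All data here is invented for illustration.
anonymized_health = [
    {"zip": "94301", "birth_year": 1984, "sex": "F", "condition": "arrhythmia"},
    {"zip": "10002", "birth_year": 1991, "sex": "M", "condition": "diabetes"},
]
public_roster = [
    {"name": "Jane Doe", "zip": "94301", "birth_year": 1984, "sex": "F"},
    {"name": "John Roe", "zip": "10002", "birth_year": 1991, "sex": "M"},
]

for record in anonymized_health:
    for person in public_roster:
        if all(record[k] == person[k] for k in ("zip", "birth_year", "sex")):
            # The "anonymous" condition is now tied to a named individual
            print(f"{person['name']} -> {record['condition']}")
```

Real-world re-identification works on much messier data with statistical matching rather than exact joins, but the principle is the same.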

Data breaches and hacks are just the tip of the iceberg. We often have little visibility into how long data will be stored, who it might be shared with or where it could end up years down the line. Chen points to 23andMe as a cautionary tale. The company had promised privacy and security, until financial trouble put massive amounts of genetic data in jeopardy.

“They’ll keep it secure and private, but then they go bankrupt. And so now they’re just going to sell all their assets to whoever wants to buy it,” he says.

AI health coach: friend or foe?

The reality, at least in the short term, is likely less extreme than either of those scenarios. We’re probably not on the verge of AI saving health care, or of selling our most sensitive health data to the highest bidder. 

As Verspoor points out, the pay-to-play model isn’t exclusive to AI health coaches. Tech companies have been using personal data to power products long before generative AI entered the chat. Your search history may not look like an ECG, but it can be just as revealing about life stages, health anxieties or illness history. 

With AI health coaches having a direct line to real-time biometric data, it’s more important than ever to pay close attention to what data you’re signing off on and who you’re handing it to. Is that information staying on-device? Is it being shared with third parties? And what happens to it down the line? That means staying in the driver’s seat when you sign up and reading the fine print, even if you have to copy and paste it into yet another AI chatbot to translate the legal jargon. Then weigh whether the exchange is worth it to you.

Chen believes the potential upside still outweighs the risks, especially if these tools succeed at getting people to care more about their health and engage with it more often. That engagement, he argues, is where the real value lies so long as AI remains a supplement to care, not a substitute for it. Both experts agree AI health coaches should function as ancillary tools to help you understand your data, ask better questions and jump-start conversations with your doctor. 

AI coaches may know your day-to-day vitals, but they still have blind spots when it comes to real-world context and medical-grade testing. Their advice, no matter how innocuous and obvious it may sound, like “hydrate after a bad night of sleep,” should be taken with a healthy dose of skepticism. Unlike tools such as ChatGPT or Google’s Gemini, some AI health coaches, including Google’s Fitbit Coach and Oura’s Advisor, don’t clearly cite sources or explain where their recommendations come from, at least not yet.

The tipping point 

For now, the reality is less dramatic than either of those extremes: we’re in an awkward in-between phase.

I was initially excited about the idea of an AI health coach taking some of the mental load off interpreting my health data. That quickly turned to skepticism as the privacy trade-offs became apparent. Now, after months of testing, I’ve landed somewhere else entirely: Most days, I forget the tool is there in the first place. 

That gap between insight and action is something human coaches have long understood. Jonathan Goodman, a fitness coach and author of Unhinged Habits, says AI excels at processing data, but behavior change rarely hinges on perfect metrics or the perfect training plan. 

“For a general-population human who just needs to move a little bit more, eat a little bit better, and play with their kids, it’s probably closer to 10% technical and 90% psychological,” he says. Metrics can surface patterns, but coaching is about asking the right questions, fitting movement into real life and recognizing those moments when someone is ready to push themselves into real transformation.

To me, it’s that in-the-moment guidance, pushing me past my limit or telling me when to scale back, that’s missing from these AI coaches. The experience is largely passive, often requiring you to check the app to see that day’s training plan. Apple’s Workout Buddy might be the closest to that, with real-time motivation based on your stats, but even that stops short of actual coaching. And none has proven indispensable enough to make me seek it out consistently. 

To reach that tipping point, these companies will need to give us stronger reasons to engage and clearer safeguards to justify handing over our deeply personal health data. 


Today’s Wordle Hints, Answer and Help for Dec. 31, #1656

Here are hints and the answer for today’s Wordle for Dec. 31, No. 1,656.

Looking for the most recent Wordle answer? Click here for today’s Wordle hints, as well as our daily answers and hints for The New York Times Mini Crossword, Connections, Connections: Sports Edition and Strands puzzles.


End the year with a Wordle win. Today’s Wordle puzzle isn’t terribly tough. If you need a new starter word, check out our list of which letters show up the most in English words. If you need hints and the answer, read on.

Read more: New Study Reveals Wordle’s Top 10 Toughest Words of 2025

Today’s Wordle hints

Before we show you today’s Wordle answer, we’ll give you some hints. If you don’t want a spoiler, look away now.

Wordle hint No. 1: Repeats

Today’s Wordle answer has no repeated letters.

Wordle hint No. 2: Vowels

Today’s Wordle answer has two vowels.

Wordle hint No. 3: First letter

Today’s Wordle answer begins with S.

Wordle hint No. 4: Last letter

Today’s Wordle answer ends with N.

Wordle hint No. 5: Meaning

Today’s Wordle answer can refer to a device that makes a loud, long-lasting sound as some kind of signal or warning.

TODAY’S WORDLE ANSWER

Today’s Wordle answer is SIREN.

Yesterday’s Wordle answer

Yesterday’s Wordle answer, Dec. 30, No. 1,655 was DECOR.

Recent Wordle answers

Dec. 26, No. 1,651: SPEED

Dec. 27, No. 1,652: BATCH

Dec. 28, No. 1,653: ABBOT

Dec. 29, No. 1,654: FRUIT



Today’s NYT Connections Hints, Answers and Help for Dec. 31, #934

Here are some hints and the answers for the NYT Connections puzzle for Dec. 31, No. 934.

Looking for the most recent Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections: Sports Edition and Strands puzzles.


Today’s NYT Connections puzzle has a tough purple category once again. But the yellow group is very timely, and pretty easy. Read on for clues and today’s Connections answers.

The Times has a Connections Bot, like the one for Wordle. Go there after you play to receive a numeric score and to have the program analyze your answers. Players who are registered with the Times Games section can now nerd out by following their progress, including the number of puzzles completed, win rate, number of times they nabbed a perfect score and their win streak.

Read more: Hints, Tips and Strategies to Help You Win at NYT Connections Every Time

Hints for today’s Connections groups

Here are four hints for the groupings in today’s Connections puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.

Yellow group hint: Here comes 2026!

Green group hint: Where is it?

Blue group hint: Pennsylvania city.

Purple group hint: Waves.

Answers for today’s Connections groups

Yellow group: Happy New Year!

Green group: Places where things disappear.

Blue group: Associated with Philadelphia.

Purple group: Starting with bodies of water.

Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words

What are today’s Connections answers?

The yellow words in today’s Connections

The theme is Happy New Year! The four answers are ball drop, champagne flute, fireworks and noisemaker.

The green words in today’s Connections

The theme is places where things disappear. The four answers are Bermuda Triangle, black hole, couch cushions and dryer.

The blue words in today’s Connections

The theme is associated with Philadelphia. The four answers are brotherly love, cheesesteak, Liberty Bell and Rocky.

The purple words in today’s Connections

The theme is starting with bodies of water. The four answers are bay leaf, channel surf, sea bass and sound barrier.



Samsung’s $200 Galaxy A17 Brings Google’s Circle to Search to Its Lower-Priced Phone

While the AI features are nice to see at the lower price, the Galaxy A17 otherwise looks very similar to the phone it’s replacing.

Samsung’s $200 Galaxy A17 5G, announced Tuesday, appears to be a minor hardware refresh of the company’s lower-cost phone, bearing many similarities to the Galaxy A16 that it will replace. However, Samsung notes that the A17 will have access to several AI features, including Google’s Circle to Search and the Gemini assistant.

Even though both of those AI features are becoming common on all phones running Android 16 (Motorola’s sub-$200 phones also include them), the Galaxy A17 might become one of the broadest ways that Circle to Search and Gemini reach new audiences. That’s because Samsung’s $200 phone is typically one of the few non-Apple devices to consistently top sales charts in the US. For instance, the $200 Galaxy A16 currently ranks fifth on Counterpoint Research’s list, behind Apple’s iPhone 16 and iPhone 17.

Similar to the Galaxy A16, the A17 has a 6.7-inch display with a 90Hz refresh rate and an IP54 rating for water and dust resistance (it can withstand splashes, but you should still avoid submerging it), and it’s powered by Samsung’s Exynos 1330 processor. The cameras are also the same, including a 50-megapixel wide camera, a 5-megapixel ultrawide camera and a 2-megapixel macro camera. Around the front is a 13-megapixel selfie camera.

The Galaxy A17 also includes a 5,000-mAh battery, 25-watt wired charging, 4GB of RAM, 128GB of onboard storage and the option to expand with a microSD card, and it will receive six years of software and security updates. That support period is quite notable in the $200 range, where most phones get only two to three years of updates.

The Galaxy A17 goes on sale in the US starting Jan. 7, and will come in blue, black and gray models.
