Technologies

AI Health Coaches: The Next Frontier in Wearables or Privacy Nightmare?

We should probably brace for both.

I’ve been tracking biometric data about my body since what feels like the dawn of time (or at least the dawn of wearables). I ran a half-marathon with the first Fitbit tracker, reviewed the very first Apple Watch and used the first smartphone-connected thermometer for ovulation tracking back when it was a pen-and-paper operation for most. 

Collecting data about my body isn’t just second nature; it’s practically part of my job description. And for years, it’s been entirely on me to overanalyze that mountain of metrics and figure out how to turn it into something useful.

So when AI health coaches started surfacing from Google, Samsung, Apple, Oura and others, promising to shoulder that mental load, I was all in. You mean to tell me I don’t have to decode every tiny fluctuation in my data on my own anymore? 

Most of us can’t afford a real-life wellness coach to meal-prep for us, hype us up midworkout or pry the dumbbells from our fever-wrought hands when we’re at the gym looking like a walking Flonase commercial. An AI coach felt like the next best thing: a nerdy, data-obsessed friend living in my phone, armed with years of my biometrics and the patience to explain them without judgment.

Over the last year, I tried them all, or at least the early versions of what they’ll eventually become. Personal trainers built into fitness apps. Chatbots tucked behind wearable dashboards. Coaches that whisper advice into your earbuds or nudge you from your smartwatch. Some free, some paid. 

But so far, none has been game-changing in the way I’d hoped, and the trade-offs of handing over my health data often felt like a high price to pay. The dream in my head doesn’t quite match the reality taking shape. 

Like with any new tech, it takes a while to weigh the long-term cost versus the short-term reward. But one thing is clear: This isn’t a passing trend. AI-driven health tech is poised to reshape personal health care in a way that smartwatches and smart rings haven’t yet. 

In the best-case scenario, AI health apps and programs could help fill gaps in care and serve as a lifeline in communities with limited access to wellness information. In the worst-case scenario, they could open the floodgates to a privacy nightmare and mishandle medical data. Where this all lands depends on how we choose to use AI coaches and what guardrails are built around them.




AI in wearables isn’t new, but now it’s going rogue

The use of AI in health care, wellness and fitness has exploded in the last year, but the technology has been baked into the wearable experience for much longer. High heart-rate alerts, fall detection, even sleep scores… that’s all AI working behind the scenes.

According to Karin Verspoor, dean of the School of Computing Technologies at RMIT University in Melbourne, Australia, this type of AI is referred to as predictive modeling. “It’s a targeted tool that’s been trained to identify a particular type of event.”

In the case of these wearables, the “task” is looking for patterns outside the normal baseline and surfacing them as an alert. They’re precise and predictable.
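To make that concrete, here’s a minimal sketch of baseline-deviation alerting of the kind described above. All readings and the threshold are hypothetical; real wearables use far more sophisticated, proprietary models.

```python
from statistics import mean, stdev

def flag_anomalies(history, readings, z_threshold=3.0):
    """Flag readings that fall outside a personal baseline.

    history: past resting heart rates used to build the baseline.
    readings: new samples to check.
    Returns the readings more than z_threshold standard
    deviations away from the baseline mean.
    """
    baseline, spread = mean(history), stdev(history)
    return [r for r in readings
            if abs(r - baseline) > z_threshold * spread]

history = [62, 64, 61, 63, 65, 62, 64, 63]
print(flag_anomalies(history, [63, 118, 60]))  # only 118 is flagged
```

The point is the shape of the logic, not the numbers: predictive models like this answer one narrow question well, which is exactly why they behave so differently from open-ended chatbots.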

But now we’re veering into something different and much harder to control: generative AI. With these full-on concierge-style chatbot models, not much different from ChatGPT or Gemini, any topic is fair game: heart rate patterns, premenstrual mood swings, diet tips or even medical recommendations (the latter, thankfully, usually prompts you to check with a human physician). The caveat is that these “health coaches” have an all-access pass to your most sensitive health data in real time.

“Large language AI models are essentially much more dynamic and much more responsive to whatever somebody puts into the prompt, and whatever the ongoing interaction with the system is,” says Verspoor. The problem, she notes, is that they’re also “subject to all of the problems that we have with large language models like confabulations or hallucinations.” 

Over the past 18 months, it seems like nearly every major tech and fitness brand has launched its own version of an AI coach or chatbot-style concierge, and if they haven’t, they’re very likely considering it. 

Google is testing an AI coach inside the Fitbit app, built on Gemini. Apple has released a Workout Buddy for the Apple Watch that offers real-time motivation via headphones based on live metrics during workouts, and is rumored to be exploring some kind of ChatGPT integration in its Health app. Samsung, Garmin, Oura and iFit have all rolled out AI features across their apps and wearable devices, while Meta has partnered with Garmin and Oakley to embed its Meta AI voice assistant into smart workout glasses.

That’s just a snapshot of the AI health coaches I’ve personally tested, and a fraction of what’s likely in development. Only Google’s is explicitly labeled a “coach,” but for the purposes of this article, they all fall under the same umbrella of AI health coaches.

Some of these features feel promising. Meta AI, for example, can read your Garmin heart-rate data into your ear through the glasses’ speakers so you don’t have to take your eyes off the trail. Or you might get training and rest-day recommendations based on how you slept and other physical data. 

Other features, however, still feel half-baked. Samsung’s running coach, for example, offered a one-size-fits-all training plan that didn’t match my goals or experience.  

In theory, these models should improve over time as they learn individual patterns and as people like me find better ways to leverage them. For now, though, most remain in their infancy, far from the full potential they’re meant to be: an always-available adviser, designed to make sense of the ever-growing pile of health data collected through wearables.

Best-case scenario: AI to the rescue

The current health care model in the US is overdue for a transformation. The system is overburdened, prohibitively expensive and facing demand that outpaces supply, especially in rural areas with limited access to doctors and medical equipment.

Dr. Jonathan Chen, professor of medicine and the director for medical education in artificial intelligence at Stanford, is optimistic that AI could play a constructive role in easing some of that pressure, especially when it comes to making sense of all the health information and clinical data in patient records. 

«We already have ways to collect data for people all the time, but even your doctor doesn’t know what to do with all that data in the ICU, let alone all the wearable data,» says Chen.

AI, he argues, can help bridge that gap by synthesizing information in ways that actually matter, such as flagging warning signs of potentially life-threatening conditions like hypertension before they become fatal. Having a personal health concierge at your fingertips could help you focus more intimately on wellness and encourage behavioral changes that reduce the risk of chronic illness over time.

“Even though the actionable insight might not be that different,” says Chen, “when it feels personalized, that might be a way some people will engage deeper.” Chen emphasizes that AI works best when it drives better conversations, not when it replaces them. He points to glucose monitoring as an example: Instead of walking into an appointment with a month of raw data, AI could review that information ahead of time and surface patterns and actionable insights to guide the discussion.

I’ve seen that best-case scenario play out firsthand. A close family member began receiving irregular heart rhythm notifications from an Apple Watch. The alerts had never appeared during a routine doctor visit, nor after wearing a clinical heart monitor at home for weeks. When the watch flagged an episode in real time, he got in front of a doctor, confirmed the diagnosis with an ECG and took action. A few months later, he underwent a heart procedure that significantly reduced his risk of a potentially life-threatening event. In that case, the wearable didn’t replace medical care, but did exactly what it was meant to do: surface a signal, start a conversation and help close a dangerous gap in care.

But that same dynamic can just as easily tip in the other direction. False positives and over-indexing on minor deviations could lead to unnecessary tests and screenings, adding strain to an already overwhelmed health care system.

“Is there going to be a storm of patients banging on the doctor’s door? ‘My Apple Watch, my Fitbit told me I have some heart condition,’” says Chen. “‘You have to give me 100 scans right now and start me on medication.’ Like, whoa, whoa, whoa, buddy… Let’s take a look first. Let’s see what’s really there.”

It’s a familiar tension: an upgraded version of the Dr. Google era, when even the most innocent search about a rash could spiral into a late-night panic over flesh-eating bacteria. 

Pay to play: The price of privacy

My biggest concern when I started using these AI coaches was data sharing and privacy. Asking ChatGPT about a rash is one thing, but giving a chatbot access to my entire medical history is a completely different beast. Many of these health platforms contain years of my biometric data, along with my medical ID, which includes blood type and allergies. 

The alternative is not to use them at all. In many cases, these AI coaches rely on a pay-to-play model, with some requiring an actual subscription. But the real payment is your data. “We can’t have reliable predictive models or generative models without having access to data of some variety,” says Verspoor.

The amount you give up and how it’s used varies by platform, but signing up involves wading through dense disclosures: permission to use your historical and real-time biometric data, location info and chat history to train other models. We’ve become so desensitized to these agreements that most people (myself included) aren’t even sure what we’re giving up anymore. 

That confusion isn’t accidental. The language is often intentionally vague and nearly impossible to understand without a law degree. In my case, for example, using Oakley’s smart glasses required agreeing to let my data be used to train Meta’s AI. 

A recent privacy analysis by the Electronic Privacy Information Center found that the health-related data people assumed was private (including searches, browsing behavior and information entered into health platforms) is often collected and shared far beyond its original context. In one case, data entered on a state health insurance marketplace was tracked and sent to third parties, such as LinkedIn, for advertising purposes. Much of this information falls outside HIPAA protections, meaning it can be legally repurposed or sold in ways consumers never intended.

Even when anonymized, health data can often be traced back to a real person, and even used by insurance agencies to raise premiums.

“You can deidentify and can make it harder to tell, but if someone tried really hard, it’s actually not that hard to use statistical methods to reconstruct who’s actually who,” says Chen. 
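A toy illustration of the kind of re-identification Chen is describing, with entirely made-up records: a linkage attack joins “anonymized” health rows back to a named roster using only quasi-identifiers like ZIP code, birth year and sex.

```python
# Hypothetical data: a health dataset stripped of names, and a
# separate named list (e.g. a marketing or voter roll) that shares
# the same quasi-identifiers.
anonymized_health = [
    {"zip": "94301", "birth_year": 1985, "sex": "F", "condition": "hypertension"},
    {"zip": "10001", "birth_year": 1992, "sex": "M", "condition": "diabetes"},
]
public_roster = [
    {"name": "Alice", "zip": "94301", "birth_year": 1985, "sex": "F"},
    {"name": "Bob", "zip": "10001", "birth_year": 1992, "sex": "M"},
]

def reidentify(health_rows, roster):
    """Re-link nameless health rows to names via shared quasi-identifiers."""
    keys = ("zip", "birth_year", "sex")
    index = {tuple(p[k] for k in keys): p["name"] for p in roster}
    return {index[sig]: row["condition"]
            for row in health_rows
            if (sig := tuple(row[k] for k in keys)) in index}

print(reidentify(anonymized_health, public_roster))
# {'Alice': 'hypertension', 'Bob': 'diabetes'}
```

With real datasets the matching is statistical rather than exact, but the principle is the same: removing names alone doesn’t make health data anonymous.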

Data breaches and hacks are just the tip of the iceberg. We often have little visibility into how long data will be stored, who it might be shared with or where it could end up years down the line. Chen points to 23andMe as a cautionary tale. The company had promised privacy and security, until financial trouble put massive amounts of genetic data in jeopardy.

“They’ll keep it secure and private, but then they go bankrupt. And so now they’re just going to sell all their assets to whoever wants to buy it,” says Chen.

AI health coach: friend or foe?

The reality, at least in the short term, is likely less extreme than either of those scenarios. We’re probably not on the verge of AI saving health care, or of selling our most sensitive health data to the highest bidder. 

As Verspoor points out, the pay-to-play model isn’t exclusive to AI health coaches. Tech companies have been using personal data to power products long before generative AI entered the chat. Your search history may not look like an ECG, but it can be just as revealing about life stages, health anxieties or illness history. 

With AI health coaches having a direct line to real-time biometric data, it’s more important than ever for people to pay close attention to what data they’re signing off on and who they’re handing it to. Is that information staying on-device? Is it being shared with third parties? And what happens to it down the line? This requires people to be in the driver’s seat when signing up and to read the fine print, even if it means having to copy and paste it into yet another AI chatbot to translate the legal jargon. Then weigh whether the exchange is worth it to you. 

Chen believes the potential upside still outweighs the risks, especially if these tools succeed at getting people to care more about their health and engage with it more often. That engagement, he argues, is where the real value lies, so long as AI remains a supplement to care, not a substitute for it. Both experts agree AI health coaches should function as ancillary tools to help you understand your data, ask better questions and jump-start conversations with your doctor. 

AI coaches may know your day-to-day vitals, but they still have blind spots when it comes to real-world context and medical-grade testing. Their advice, no matter how innocuous and obvious it may sound, like “hydrate after a bad night of sleep,” should be taken with a healthy dose of skepticism. Unlike tools such as ChatGPT or Google’s Gemini, some AI health coaches, including Google’s Fitbit Coach and Oura’s Advisor, don’t clearly cite sources or explain where their recommendations come from, at least not yet.

The tipping point 

For now, we’re in an awkward in-between phase. 

I was initially excited about the idea of an AI health coach taking some of the mental load off interpreting my health data. That quickly turned to skepticism as the privacy trade-offs became apparent. Now, after months of testing, I’ve landed somewhere else entirely: Most days, I forget the tool is there in the first place. 

That gap between insight and action is something human coaches have long understood. Jonathan Goodman, a fitness coach and author of Unhinged Habits, says AI excels at processing data, but behavior change rarely hinges on perfect metrics or the perfect training plan. 

«For a general-population human who just needs to move a little bit more, eat a little bit better, and play with their kids, it’s probably closer to 10% technical and 90% psychological,» he says. Metrics can surface patterns, but coaching is about asking the right questions, fitting movement into real life and recognizing those moments when someone is ready to push themselves into real transformation. 

To me, it’s that in-the-moment guidance, pushing me past my limit or telling me when to scale back, that’s missing from these AI coaches. The experience is largely passive, often requiring you to check the app to see that day’s training plan. Apple’s Workout Buddy might be the closest to that, with real-time motivation based on your stats, but even that stops short of actual coaching. And none has proven indispensable enough to make me seek it out consistently. 

To reach that tipping point, these companies will need to give us stronger reasons to engage and clearer safeguards to justify handing over our deeply personal health data. 


Today’s NYT Connections: Sports Edition Hints and Answers for April 8, #562

Here are hints and the answers for the NYT Connections: Sports Edition puzzle for April 8, No. 562.



Today’s Connections: Sports Edition is a tough one. If you’re struggling with today’s puzzle but still want to solve it, read on for hints and the answers.

Connections: Sports Edition is published by The Athletic, the subscription-based sports journalism site owned by The Times. It doesn’t appear in the NYT Games app, but it does in The Athletic’s own app. Or you can play it for free online.


Hints for today’s Connections: Sports Edition groups

Here are four hints for the groupings in today’s Connections: Sports Edition puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.

Yellow group hint: Working out.

Green group hint: Cover your face.

Blue group hint: NFL players.

Purple group hint: Leap.

Answers for today’s Connections: Sports Edition groups

Yellow group: Exercises in singular form.

Green group: Sporting jobs that require masks.

Blue group: Hall of Fame defensive ends.

Purple group: ____ jump.


What are today’s Connections: Sports Edition answers?

The yellow words in today’s Connections

The theme is exercises in singular form. The four answers are crunch, plank, situp and squat.

The green words in today’s Connections

The theme is sporting jobs that require masks. The four answers are catcher, fencer, football player and goaltender.

The blue words in today’s Connections

The theme is Hall of Fame defensive ends. The four answers are Dent, Peppers, Strahan and Youngblood.

The purple words in today’s Connections

The theme is ____ jump. The four answers are broad, high, long and triple.



The $135M Google Data Settlement Site Is Live — See If You’re Eligible

Use the settlement website to select your preferred payment method, and you may end up $100 richer.

You can now file a claim in the $135 million Google data settlement. The case centers on claims that Android devices transmitted user data without consent. Specifically, the class action lawsuit Taylor v. Google LLC contends that Google’s Android devices passively transferred cellular data to Google without user permission, even when the devices were idle. While not admitting fault, Google reached a preliminary settlement in January, agreeing to pay $135 million to about 100 million US Android phone users.

The official settlement website for the lawsuit is now live. The final approval hearing won’t occur until June 23, when the court will consider whether Google’s settlement is fair and listen to objections. After that, the court will decide whether to approve the $135 million settlement. 

In the meantime, if you qualify and want to be paid as part of the settlement, you can select your preferred payment method on the official website. There, you can find information on speaking at the June 23 court hearing and on how to exclude yourself or write to the court to object by May 29.

As part of the settlement, Google will update its Google Play terms of service to clarify that certain data transfers do occur passively even when you’re not using your Android device, and that cellular data may be relied upon when not connected to Wi-Fi. This can’t always be disabled, but users will be asked to consent to it when setting up their device. 

Google will also fully stop collecting data when its “allow background data usage” option is toggled off. 

Who can be part of the settlement?

In order to join the Taylor v. Google LLC settlement, you must meet four qualifications:

  1. Be a living, individual human being in the US.
  2. Have used an Android mobile device with a cellular data plan.
  3. Have used the aforementioned device at any time from Nov. 12, 2017, to the date when the settlement receives final approval.
  4. Not be a class member in the Csupo v. Google LLC lawsuit, which is similar but applies specifically to California residents.

The final approval hearing is on June 23, so you can add your payment method until then. The hearing’s date and time may change, and any updates will be posted on the settlement website. 

If you choose to do nothing, you will still be included in the settlement, but you may not receive your payment unless you select a payment method.

How much will I get paid?

It’s not currently known exactly how much each settlement class member will receive, but the cap is $100. Payments will be distributed after final court approval and after any appeals are resolved.

After all administrative, tax and attorney costs are paid, the settlement administrator will attempt to pay each member an equal amount. If any funds remain after payments are sent, and it’s economically feasible, they will be redistributed to members whose earlier payments were successfully delivered. If it’s not economically feasible, the funds will go to an organization approved by the court.
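The distribution described above amounts to a capped equal split of the net fund. Here’s a sketch of that arithmetic; only the $135 million fund and the roughly 100 million class size come from the case, while the cost figure is purely illustrative.

```python
def settlement_share(fund, costs, members, cap=100.0):
    """Equal split of the net fund across members, capped per person."""
    net = fund - costs
    return round(min(cap, net / members), 2)

# Illustrative: $135M fund, a hypothetical $40M in costs,
# ~100M class members -> well under the $100 cap per person.
print(settlement_share(135_000_000, 40_000_000, 100_000_000))  # 0.95
```

This is why the final per-person amount can’t be known yet: it depends on the actual costs deducted and on how many eligible members end up in the class.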



Samsung’s Galaxy Watch Ultra 2 Might Come in 5G and 4G Cellular Models

If the rumor proves true, the 5G Galaxy Watch Ultra would rival the 5G-enabled $799 Apple Watch Ultra 3 that debuted last fall.

Samsung’s next high-end Galaxy Watch could support faster 5G speeds, but if this leak proves accurate, that will depend on where you live. The rumored Samsung Galaxy Watch Ultra 2 might come in both 5G and 4G cellular models, with availability varying by country.

According to the Dutch website Galaxy Club (and spotted by SamMobile), Samsung’s servers may have revealed a series of model numbers that point to 5G, 4G and Wi-Fi-enabled editions of the next Galaxy Watch Ultra, which would succeed the original model that debuted in 2024.

A representative for Samsung did not immediately respond to a request for comment.

The Galaxy Club website speculates that the 5G edition would be sold in the US and Korean markets, while the 4G edition would sell in the rest of the world. In the US, a 5G version of the Galaxy Watch Ultra would rival the 5G-enabled $799 Apple Watch Ultra 3, which debuted last fall. The 4G edition would have broader compatibility worldwide, since the earlier network is far more established.

It will likely be a few months until we hear anything official about the Galaxy Watch Ultra 2. Samsung typically unveils its new watches in the summer alongside its Galaxy Z Fold and Z Flip foldable phones. Last year, Samsung unveiled the Galaxy Watch 8 and the Galaxy Watch 8 Classic, but otherwise left the prior 2024 Ultra in the lineup for those looking for a larger 47mm smartwatch.



Copyright © Verum World Media