Technologies
My OnePlus 15R Review: A Lovely $700 Phone That’s Held Back by Its Compromises
The $700 Android phone has a lot to like for OnePlus fans who want a giant battery for less money than its flagship sibling.

Pros
- Excellent battery life and charging speeds
- Big and responsive display
- 120fps video looks gorgeous
Cons
- Lacks wireless charging
- More expensive than the OnePlus 13R
- Mixed camera quality
- Short software support compared to competing phones
The $700 OnePlus 15R’s standout feature is its massive 7,400-mAh silicon-carbon battery, one of the largest I’ve ever encountered on a phone. In my testing, it easily lasted two days between charges, even with plenty of media streaming, gaming and photography.
But it was the 15R’s ultrasonic fingerprint sensor that impressed me even more, as it’s a feature I hadn’t seen on a midrange phone before. It makes unlocking the phone feel much smoother than an optical fingerprint sensor does, especially since it doesn’t require a bright light to function. I hope to see it arrive on even cheaper phones, but for now, having it on the OnePlus 15R is nice.
Upgrades like these make the OnePlus 15R feel premium despite it being the step-down option from the $900 OnePlus 15. The phone’s features rival those of more expensive phones, such as the $799 Samsung Galaxy S25, rather than cheaper competitors like the $650 Galaxy S25 FE or the $499 Google Pixel 9A.
And with that $700 price, you’re definitely paying for those upgrades. OnePlus notes that the $700 starting price (for 12GB of RAM and 256GB of storage) might change. "Product pricing can vary in different countries and regions due to various local market factors," OnePlus said.
But even at $700, it’s worth considering some of the things you don’t get with the OnePlus 15R. For example, the 15R comes with a 55-watt fast charger in the box and supports 80-watt wired charging speeds when paired with the corresponding OnePlus wall plug, but it lacks wireless charging. The previous R model, the OnePlus 13R, also didn’t have wireless charging, but it did have a telephoto camera that the 15R doesn’t, which somewhat made up for it.
The phone’s Snapdragon 8 Gen 5 chip is a step down from the OnePlus 15’s Snapdragon 8 Elite Gen 5 processor, but I still found it fine for gaming, multitasking and recording high-resolution slow-motion videos. The OnePlus 15R comes with four years of software updates and six years of security updates. That’s fine, but it falls short of Samsung’s and Google’s seven-year commitments to both.
OnePlus fans who don’t want to spend top dollar for the latest OnePlus 15 will find a lot to like with the 15R. But, like the 13R, it’s important to consider the compromises the 15R makes to see if any of them are potential dealbreakers.
OnePlus 15R’s design, specs and features
My OnePlus 15R review unit is the mint breeze edition, a light green color that encompasses the back of the phone, the side rails and the camera bump. A darker charcoal black model is also available, and is the sole color if you opt for the $800 model with 512GB of storage. The design is similar to the OnePlus 15, with the main difference being the 15R’s dual-camera setup versus the three on the more expensive phone.
There’s a new programmable shortcut button called the Plus Key, located across from the volume and lock buttons. Similar to the Action button on newer iPhone models, it can trigger shortcuts like toggling between sound and vibration, opening the camera or turning on the flashlight. I wish I could use it to launch any app, though, which is possible with Apple’s Action button via the Shortcuts app.
The phone’s 6.8-inch display is expansive, and I found it particularly good for watching or playing media. But it’s too big for me to use one-handed beyond scrolling. The display can continuously run at a smooth 120Hz refresh rate, which has become the standard across all Android phones in this price range. There are certain mobile games, like Call of Duty Mobile, that can take advantage of the display’s full 165Hz capability, but I can’t use that higher refresh rate when I’m not gaming. The OnePlus 15 has the same limitation. This surprises me, as I’ve seen less powerful phones with a consistent 165Hz refresh rate.
For example, when I played the game Dead Cells, it looked great on the phone, and the touchscreen was responsive, which helped especially during frenetic moments battling through successive deadly monsters. But its refresh rate is constrained to 120Hz. I find that odd, because I’ve seen this game run at 165Hz on phones that include that option. I found other games, such as Red Dead Redemption, Fortnite and Fall Guys, to load quickly at high graphics settings too. Red Dead ran at a steady 40 frames per second (fps) in its performance mode, while Fortnite and Fall Guys ran at 60fps on their higher graphics options.
Perhaps it’s a choice to help extend battery life, but the OnePlus 15R’s large capacity would seem plentiful enough to handle some extra gaming workload. Most of the time, I’m happy if a phone can last a full day on a single charge. With the OnePlus 15R, I easily got through two days and nights on a single charge. In CNET Labs’ 3-hour YouTube streaming test, where phones start with a full battery, the OnePlus 15R dropped to 89%, the same as the $829 iPhone 17, and just behind its sibling, the OnePlus 15, which ended at 90%.
The OnePlus 15R comes with a wall charger, a rarity for most phones sold in 2025. While the included 55-watt fast charger doesn’t support the phone’s fastest 80-watt speed, I was able to get it from 0% to 49% of its 7,400-mAh battery capacity in 30 minutes. Considering most phones we cover typically have battery capacities between 4,200 and 5,000 mAh, that’s a lot of power even at half capacity.
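To put that half-charge in perspective, here’s a quick back-of-the-envelope calculation. The capacity and percentage come from the testing above; the 5,000-mAh "typical" battery is simply the top of the range cited, used here as an assumed point of comparison.

```python
# Back-of-the-envelope math for the 15R's 30-minute charging result.
# Capacity and charge percentage come from the review; the "typical"
# battery size is an assumption based on the 4,200-5,000 mAh range cited.
capacity_mah = 7_400        # OnePlus 15R battery capacity
charged_fraction = 0.49     # 0% -> 49% in 30 minutes

delivered_mah = capacity_mah * charged_fraction
typical_battery_mah = 5_000  # top of the typical flagship range

print(f"Delivered in 30 minutes: {delivered_mah:.0f} mAh")
print(f"Share of a 5,000-mAh battery: {delivered_mah / typical_battery_mah:.0%}")
```

In other words, a 49% charge on this battery moves roughly as much energy as filling a typical flagship battery nearly three-quarters of the way.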
30-minute wired fast charging test
| Phone | Percent increase | Battery capacity | Wall plug wattage | Comes with plug? |
|---|---|---|---|---|
| OnePlus 15 | 72% | 7,300 mAh | 80W | Yes |
| Apple iPhone 17 | 69% | 3,692 mAh | 40W | No |
| Samsung Galaxy S25 FE | 69% | 4,900 mAh | 45W | No |
| OnePlus 15R | 49% | 7,400 mAh | 55W | Yes |
| Samsung Galaxy S25 | 47% | 4,000 mAh | 30W | No |
| Google Pixel 9A | 46% | 5,100 mAh | 45W | No |
If you prefer to use more universal power adapters with the USB-PD standard, the OnePlus 15R will charge at a slower 18-watt speed. But even with that limitation, in real-world use, it’s only slightly slower.
Although the 15R doesn’t support wireless charging, OnePlus sells a case that allows for attaching magnetic accessories. OnePlus provides a sandstorm black case with the phone, which I found perfectly suitable for attaching my wallet accessory that doubles as a kickstand.
In benchmark testing of CPU and graphics power, the OnePlus 15R scored comparably to phones like the Samsung Galaxy S25, which runs a custom edition of last year’s Snapdragon 8 Elite processor, and was slower than the OnePlus 15. Compared to the prior OnePlus 13R, which has the 2023 Snapdragon 8 Gen 3 chip, the 15R scored similarly in the graphically intense 3DMark Wild Life Extreme test and notably higher in the computationally intensive Geekbench 6.0 CPU benchmark.
OnePlus 15R cameras
I’m bummed that the OnePlus 15R doesn’t have a telephoto camera, but the 50-megapixel wide-angle and 8-megapixel ultrawide cameras can hold their own, especially in daylight settings or when recording 4K video at 120fps.
Using the latter, I recorded Gizmo, my friend’s cat, as he darted between a flurry of poses while squished between two couches. And when visiting the 3 Daughters Brewery holiday train display in St. Petersburg, Florida, I was able to capture the model trains as they zoomed throughout the multitier village. The videos have a smooth clarity.
When it comes to photography, I would say the OnePlus 15R is on par with other $700 phones. Daylight photos have lots of detail but tend to skew warm in tone. I shot a photo of a sunset at a beach in Siesta Key, and the image has lots of orange colors and good texture in the whirling clouds in the sky.
I noticed that the camera tends to add an aggressive blur to images when it focuses tightly on a subject. For instance, in this photo of a dark chocolate gelato, the dessert underneath is blurred out as if I had taken it in portrait mode. But it isn’t; the photo was taken in the standard photo mode.
The OnePlus 15R did a decent job of capturing my friend’s fast-moving pets, albeit at the cost of some detail. In this photo, the camera is able to focus on Kinley’s face, although it struggled a bit to capture the light in both eyes. Snickers, the dog in the background, was also moving around during this moment but comes out acceptably as a background subject. That’s actually good, since a moving dog in a lowlight area is a naturally more challenging subject.
I have mixed feelings about selfie images from the phone’s 32-megapixel front-facing camera. They aren’t bad, but I feel like the 15R had trouble focusing on me, whether I was outdoors or indoors. This photo, taken on a street in St. Petersburg, is washed out despite otherwise being taken in broad daylight.
And it’s a similar situation for this selfie I took in an indoor brewery. The photos aren’t bad — they just aren’t as good as I’d prefer from a $700 phone. It’s more comparable to what I see from phones that are closer to $500, like the Motorola Edge and the Pixel 9A.
OnePlus 15R: The bottom line
The OnePlus 15R’s features make it an excellent starter gaming phone. I often thought about the RedMagic 11 Pro while reviewing the 15R. RedMagic’s $749 gaming phone has impressive specs that easily run any game you throw at it, along with a 7,500-mAh silicon-carbon battery. But RedMagic seems to reach that reasonable price by way of a frustrating software experience that even includes advertisements when you open its web browser.
OnePlus chose not to skimp on the 15R’s display or the battery, and would rather make its cuts by going with a slightly less powerful processor, skipping wireless charging and omitting the telephoto camera. The result is a mighty $700 phone, even if it’s noticeably not going to outdo the more expensive OnePlus 15.
The phone is ultimately fantastic as a media powerhouse that can run for days on a single charge. But to make sure it’s a good fit, you’ll want to decide whether the lack of wireless charging is a deal-breaker.
If you want a phone with more features and less focus on gaming or a large battery, it’s worth considering phones in the $500 to $650 range, such as Google’s Pixel 9A and Samsung’s Galaxy S25 FE.
How we test phones
Every phone tested by CNET’s reviews team is actually used in the real world. We test a phone’s features, play games and take photos. We examine the display to see if it’s bright, sharp and vibrant. We analyze the design and build to see how it feels to hold and whether it has an IP rating for water resistance. We push the processor’s performance to the extremes using standardized benchmark tools like Geekbench and 3DMark, along with our own anecdotal observations while navigating the interface, recording high-resolution videos and playing graphically intense games at high refresh rates.
All the cameras are tested in a variety of conditions, from bright sunlight to dark indoor scenes. We try out special features like night mode and portrait mode and compare our findings against similarly priced competing phones. We also check out the battery life by using it daily, as well as running a series of battery drain tests.
We take into account additional features like support for 5G, satellite connectivity, fingerprint and face sensors, stylus support, fast charging speeds and foldable displays, among others that can be useful. We balance all of this against the price to give you the verdict on whether that phone, whatever price it is, actually represents good value. While these tests may not always be reflected in CNET’s initial review, we conduct follow-up and long-term testing in most circumstances.
We Had a Poke Around ChatGPT’s New App Store. Here’s What We Found
After a call for app submissions from developers, ChatGPT’s beta app feature has arrived.
Adobe Photoshop, Spotify, Canva, Zillow and other well-known digital tools are now apps within ChatGPT. OpenAI has launched an app platform two months after announcing the beta feature and rolling out a development kit.
Don’t miss any of our unbiased tech content and lab-based reviews. Add CNET as a preferred Google source.
On Dec. 17, OpenAI announced that developers could submit their apps, and a day later, apps began to appear in the desktop version of ChatGPT, the wildly popular chatbot with more than 800 million active users. It’s unclear how soon they’ll show up in the mobile ChatGPT app; in CNET’s testing, the feature wasn’t yet available on iOS.
(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
How to use apps in ChatGPT
The apps, at least at launch, are categorized as "Featured," "Lifestyle" and "Productivity," with descriptions that will look familiar to anyone who’s used the Apple App Store or the Google Play store.
You can also use search to find a specific app that may not be listed under those categories.
Many of the apps include screenshots with prompt examples that suggest how to use the app within the chatbot. Canva, for instance, includes the prompt example, "@canva create a 2025 wrap presentation for my class."
Instead of downloading an app, you’ll click a "Connect" button to give ChatGPT access to it. From there, you can use an @ prompt to access that app’s features.
Interestingly, you can’t just ask ChatGPT which apps are connected if you forget. Instead, you need to go to Settings > Connected Apps, or Settings > Data Controls > Connected Apps, depending on which version of ChatGPT you’re using.
AI Health Coaches: The Next Frontier in Wearables or Privacy Nightmare?
We should probably brace for both.
I’ve been tracking biometric data about my body since what feels like the dawn of time (or at least the dawn of wearables). I ran a half-marathon with the first Fitbit tracker, reviewed the very first Apple Watch and used the first smartphone-connected thermometer for ovulation tracking back when it was a pen-and-paper operation for most.
Collecting data about my body isn’t just second nature; it’s practically part of my job description. And for years, it’s been entirely on me to overanalyze that mountain of metrics and figure out how to turn it into something useful.
So when AI health coaches started surfacing from Google, Samsung, Apple, Oura and others, promising to shoulder that mental load, I was all in. You mean to tell me I don’t have to decode every tiny fluctuation in my data on my own anymore?
Most of us can’t afford a real-life wellness coach to meal-prep for us, hype us up midworkout or pry the dumbbells from our fever-wrought hands when we’re at the gym looking like a walking Flonase commercial. An AI coach felt like the next best thing: a nerdy, data-obsessed friend living in my phone, armed with years of my biometrics and the patience to explain them without judgment.
Over the last year, I tried them all, or at least the early versions of what they’ll eventually become. Personal trainers built into fitness apps. Chatbots tucked behind wearable dashboards. Coaches that whisper advice into your earbuds or nudge you from your smartwatch. Some free, some paid.
But so far, none has been game-changing in the way I’d hoped, and the trade-offs of handing over my health data often felt like a high price to pay. The dream in my head doesn’t quite match the reality taking shape.
Like with any new tech, it takes a while to weigh the long-term cost versus the short-term reward. But one thing is clear: This isn’t a passing trend. AI-driven health tech is poised to reshape personal health care in a way that smartwatches and smart rings haven’t yet.
In the best-case scenario, AI health apps and programs could help fill gaps in care and serve as a lifeline in communities with limited access to wellness information. In the worst-case scenario, they could open the floodgates to a privacy nightmare and mishandle medical data. Where this all lands depends on how we choose to use AI coaches and what guardrails are built around them.
AI in wearables isn’t new, but now it’s going rogue
The use of AI in health care, wellness and fitness has exploded in the last year, but the technology has been baked into the wearable experience for much longer. High heart-rate alerts, fall detection, even sleep scores… that’s all AI working behind the scenes.
According to Karin Verspoor, dean of the School of Computing Technologies at RMIT University in Melbourne, Australia, this type of AI is referred to as predictive modeling. "It’s a targeted tool that’s been trained to identify a particular type of event."
In the case of these wearables, the "task" is looking for patterns outside the normal baseline and surfacing them as an alert. They’re precise and predictable.
But now we’re veering into something different and much harder to control: generative AI. With these full-on concierge-style chatbot models, not much different from ChatGPT or Gemini, any topic is fair game: heart rate patterns, premenstrual mood swings, diet tips or even medical recommendations (the latter, thankfully, usually prompts you to check with a human physician). The caveat is that these "health coaches" have an all-access pass to your most sensitive health data in real time.
"Large language AI models are essentially much more dynamic and much more responsive to whatever somebody puts into the prompt, and whatever the ongoing interaction with the system is," says Verspoor. The problem, she notes, is that they’re also "subject to all of the problems that we have with large language models like confabulations or hallucinations."
Over the past 18 months, it seems like nearly every major tech and fitness brand has launched its own version of an AI coach or chatbot-style concierge, and if they haven’t, they’re very likely considering it.
Google is testing an AI coach inside the Fitbit app, built on Gemini. Apple has released a Workout Buddy for the Apple Watch that offers real-time motivation via headphones based on live metrics during workouts, and is rumored to be exploring some kind of ChatGPT integration in its Health app. Samsung, Garmin, Oura and iFit have all rolled out AI features across their apps and wearable devices, while Meta has partnered with Garmin and Oakley to embed its Meta AI voice assistant into smart workout glasses.
That’s just a snapshot of the AI health coaches I’ve personally tested, and a fraction of what’s likely in development. Only Google’s is explicitly labeled a "coach," but for the purposes of this article, they all fall under the same umbrella of AI health coaches.
Some of these features feel promising. Meta AI, for example, can read your Garmin heart-rate data into your ear through the glasses’ speakers so you don’t have to take your eyes off the trail. Or you might get training and rest-day recommendations based on how you slept and other physical data.
Other features, however, still feel half-baked. Samsung’s running coach, for example, offered a one-size-fits-all training plan that didn’t match my goals or experience.
In theory, these models should improve over time as they learn individual patterns and as people like me find better ways to leverage them. For now, though, most remain in their infancy, far from the full potential they’re meant to be: an always-available adviser, designed to make sense of the ever-growing pile of health data collected through wearables.
Best-case scenario: AI to the rescue
The current health care model in the US is overdue for a transformation. The system is overburdened, prohibitively expensive and facing demand that outpaces supply, especially in rural areas with limited access to doctors and medical equipment.
Dr. Jonathan Chen, professor of medicine and the director for medical education in artificial intelligence at Stanford, is optimistic that AI could play a constructive role in easing some of that pressure, especially when it comes to making sense of all the health information and clinical data in patient records.
«We already have ways to collect data for people all the time, but even your doctor doesn’t know what to do with all that data in the ICU, let alone all the wearable data,» says Chen.
AI, he argues, can help bridge that gap by synthesizing information in ways that actually matter, such as flagging warning signs of potentially life-threatening conditions like hypertension before they become fatal. Having a personal health concierge at your fingertips could help you focus more intimately on wellness and encourage behavioral changes that reduce the risk of chronic illness over time.
«Even though the actionable insight might not be that different,» said Chen, «when it feels personalized, that might be a way some people will engage deeper.» Chen emphasizes that AI works best when it drives better conversations, not when it replaces them. He points to glucose monitoring as an example: Instead of walking into an appointment with a month of raw data, AI could review that information ahead of time and surface patterns and actionable insights to guide the discussion.
I’ve seen that best-case scenario play out firsthand. A close family member began receiving irregular heart rhythm notifications from an Apple Watch. The alerts had never appeared during a routine doctor visit, nor after wearing a clinical heart monitor at home for weeks. When the watch flagged an episode in real time, he got in front of a doctor, confirmed the diagnosis with an ECG and took action. A few months later, he underwent a heart procedure that significantly reduced his risk of a potentially life-threatening event. In that case, the wearable didn’t replace medical care, but did exactly what it was meant to do: surface a signal, start a conversation and help close a dangerous gap in care.
But that same dynamic can just as easily tip in the other direction. False positives and over-indexing on minor deviations could lead to unnecessary tests and screenings, adding strain to an already overwhelmed health care system.
«Is there going to be a storm of patients banging on the doctor’s door? ‘My Apple Watch, my Fitbit told me I have some heart condition,'» says Chen. «‘You have to give me 100 scans right now and start me on medication.’ Like, whoa, whoa, whoa, buddy… Let’s take a look first. Let’s see what’s really there.»
It’s a familiar tension: an upgraded version of the Dr. Google era, when even the most innocent search about a rash could spiral into a late-night panic over flesh-eating bacteria.
Pay to play: The price of privacy
My biggest concern when I started using these AI coaches was data sharing and privacy. Asking ChatGPT about a rash is one thing, but giving a chatbot access to my entire medical history is a completely different beast. Many of these health platforms contain years of my biometric data, along with my medical ID, which includes blood type and allergies.
The alternative is not to use them at all. In many cases, these AI coaches rely on a pay-to-play model, with some requiring an actual subscription. But the real payment is your data. «We can’t have reliable predictive models or generative models without having access to data of some variety,» says Verspoor.
The amount you give up and how it’s used varies by platform, but signing up involves wading through dense disclosures: permission to use your historical and real-time biometric data, location info and chat history to train other models. We’ve become so desensitized to these agreements that most people (myself included) aren’t even sure what we’re giving up anymore.
That confusion isn’t accidental. The language is often intentionally vague and nearly impossible to understand without a law degree. In my case, for example, using Oakley’s smart glasses required agreeing to let my data be used to train Meta’s AI.
A recent privacy analysis by the Electronic Privacy Information Center found that the health-related data people assumed was private (including searches, browsing behavior and information entered into health platforms) is often collected and shared far beyond its original context. In one case, data entered on a state health insurance marketplace was tracked and sent to third parties, such as LinkedIn, for advertising purposes. Much of this information falls outside HIPAA protections, meaning it can be legally repurposed or sold in ways consumers never intended.
Even when anonymized, health data can often be traced back to a real person and even used by insurance agencies to raise premiums.
«You can deidentify and can make it harder to tell, but if someone tried really hard, it’s actually not that hard to use statistical methods to reconstruct who’s actually who,» says Chen.
Data breaches and hacks are just the tip of the iceberg. We often have little visibility into how long data will be stored, who it might be shared with or where it could end up years down the line. Chen points to 23andMe as a cautionary tale. The company had promised privacy and security, until financial trouble put massive amounts of genetic data in jeopardy.
«They’ll keep it secure and private, but then they go bankrupt. And so now they’re just going to sell all their assets to whoever wants to buy it.»
AI health coach: friend or foe?
The reality, at least in the short term, is likely less extreme than either of those scenarios. We’re probably not on the verge of AI saving health care, or of selling our most sensitive health data to the highest bidder.
As Verspoor points out, the pay-to-play model isn’t exclusive to AI health coaches. Tech companies have been using personal data to power products long before generative AI entered the chat. Your search history may not look like an ECG, but it can be just as revealing about life stages, health anxieties or illness history.
With AI health coaches having a direct line to real-time biometric data, it’s more important than ever for people to pay close attention to what data they’re signing off on and who they’re handing it to. Is that information staying on-device? Is it being shared with third parties? And what happens to it down the line? This requires people to be in the driver’s seat when signing up and to read the fine print, even if it means having to copy and paste it into yet another AI chatbot to translate the legal jargon. Then weigh whether the exchange is worth it to you.
Chen believes the potential upside still outweighs the risks, especially if these tools succeed at getting people to care more about their health and engage with it more often. That engagement, he argues, is where the real value lies so long as AI remains a supplement to care, not a substitute for it. Both experts agree AI health coaches should function as ancillary tools to help you understand your data, ask better questions and jump-start conversations with your doctor.
AI coaches may know your day-to-day vitals, but they still have blind spots when it comes to real-world context and medical-grade testing. Their advice, no matter how innocuous and obvious it may sound, like «hydrate after a bad night of sleep,» should be taken with a healthy dose of skepticism. Unlike tools such as ChatGPT or Google’s Gemini, some AI health coaches, including Google’s Fitbit Coach and Oura’s Advisor, don’t clearly cite sources or explain where their recommendations come from, at least not yet.
The tipping point
I was initially excited about the idea of an AI health coach taking some of the mental load off interpreting my health data. That quickly turned to skepticism as the privacy trade-offs became apparent. Now, after months of testing, I’ve landed somewhere else entirely: Most days, I forget the tool is there in the first place.
That gap between insight and action is something human coaches have long understood. Jonathan Goodman, a fitness coach and author of Unhinged Habits, says AI excels at processing data, but behavior change rarely hinges on perfect metrics or the perfect training plan.
"For a general-population human who just needs to move a little bit more, eat a little bit better, and play with their kids, it’s probably closer to 10% technical and 90% psychological," he says. Metrics can surface patterns, but coaching is about asking the right questions, fitting movement into real life and recognizing those moments when someone is ready to push themselves into real transformation.
To me, it’s that in-the-moment guidance, pushing me past my limit or telling me when to scale back, that’s missing from these AI coaches. The experience is largely passive, often requiring you to check the app to see that day’s training plan. Apple’s Workout Buddy might be the closest to that, with real-time motivation based on your stats, but even that stops short of actual coaching. And none has proven indispensable enough to make me seek it out consistently.
To reach that tipping point, these companies will need to give us stronger reasons to engage and clearer safeguards to justify handing over our deeply personal health data.
Headphone Conversation Awareness Mode: How It Works and Why You Need It
Taking off your headphones for a quick chat is practically Stone Age. Try conversation awareness mode to make things more seamless and truly hands-free.
Listening to your tunes, but your neighbor is feeling chatty? Ordering a latte but your hands are full so you can’t pause your podcast? Conversation detection, a feature on some headphones and earphones, can be a game-changer. Instead of removing your active noise-canceling earbuds or using your hands to pause the audio, this handy feature detects voices, pauses the audio and turns off the noise canceling.
That seamlessness between the cozy comfort of noise cancellation and the bustling real world is extremely helpful and easy to set up. There are, however, a few important things to note for the best experience with automatic conversation detection.
Most noise-canceling earbuds, including those from Bose and many other manufacturers, have a mode called Aware, Awareness or Transparency. This boosts ambient sound, often in the vocal frequency ranges. What I’m talking about here is a detection feature that makes switching to this mode automatic instead of having to manually select it.
You’ll generally see this feature on flagship headphones from Apple, Sony, Google and Samsung. Each one calls it something slightly different: Apple has Conversation Awareness, Samsung has Voice Detect, Google has Conversation Detection and Sony’s got Speak-to-Chat.
How it works
Enable
Conversation modes are generally accessible in the settings for your headphones’ companion app. If your phone and headphones are both Apple or both Google, go into the phone’s settings and access this feature by tapping on your headphones. Always be sure to update all of your devices’ firmware. Apple iOS also provides access to Conversation Awareness via the Control Center that appears when you swipe down from the top of the screen.
Detect
The array of tiny microphones built into your earbuds or headphones for calls and noise canceling will detect your voice for an awareness mode. Many headphones have built-in accelerometers for features like head tracking and on-ear/head detection; these might also be used to pick up jaw movement to verify that it’s you speaking and not someone nearby.
Samsung has a separate but related Siren Detect feature that automatically turns on Transparency mode when a siren is detected, so you can hear what’s going on in an emergency. (Some brands do the opposite and crank up the ANC when a loud sound is detected.)
Auto-adjust audio
Once activated, awareness modes either pause or lower the volume of whatever audio is currently playing. This behavior differs by brand. For example, Apple devices lower music audio but pause podcasts. Samsung lowers all audio, while Sony and Google devices both pause all audio. Ideally, you’d be able to choose the behavior, but currently that’s still rare. Apple adds Conversation Boost, which uses the mics and accelerometers to amplify the voice of the person you’re talking to via head tracking.
End chat and resume
Then, either through some technological wizardry or simply by sensing when you stop talking (adjustable on some brands, including Sony), the headphones detect that the conversation has ended and revert to the previous audio, at the same volume and in the same noise-cancellation mode. Many models are better than people at detecting the end of a conversation.
Any model with this feature will also let you toggle the conversation mode on/off manually with a long button press or similar action.
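For the programming-minded, the detect-duck-resume flow above boils down to a small state machine. The sketch below is a toy model of that logic, not any vendor’s actual firmware; the class name, the one-second sampling and the silence timeout are all illustrative assumptions.

```python
from enum import Enum

class Mode(Enum):
    ANC = "noise-canceling"
    TRANSPARENCY = "transparency"

class ConversationDetector:
    """Toy model of conversation detection: hear the wearer speak, switch to
    transparency and pause playback; after a stretch of silence, switch back.
    All names and thresholds are illustrative, not a real vendor API."""

    def __init__(self, silence_timeout=3):
        self.silence_timeout = silence_timeout  # quiet seconds before resuming
        self.mode = Mode.ANC
        self.audio_paused = False
        self._quiet_seconds = 0

    def on_sample(self, wearer_is_speaking: bool):
        """Called once per second with a (pretend) fused mic + accelerometer
        verdict on whether the wearer is the one talking."""
        if wearer_is_speaking:
            self._quiet_seconds = 0
            if self.mode is Mode.ANC:            # conversation starts
                self.mode = Mode.TRANSPARENCY    # let ambient sound through
                self.audio_paused = True         # pause (or duck) playback
        elif self.mode is Mode.TRANSPARENCY:
            self._quiet_seconds += 1
            if self._quiet_seconds >= self.silence_timeout:  # chat is over
                self.mode = Mode.ANC             # restore noise canceling
                self.audio_paused = False        # resume at the prior volume
```

Feeding it a few samples shows the round trip: one `on_sample(True)` flips it into transparency and pauses the audio, and enough consecutive `on_sample(False)` calls flip it back. Real earbuds add the sensitivity tuning and media-type smarts discussed below, but the shape of the logic is the same.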
The fine print
Conversation detection is triggered by your voice, not someone else’s, so you may wind up asking people to repeat themselves once you notice they’re talking to you. That question is what triggers conversation mode. Depending on how a specific model’s detection works, it might require both earbuds to be in your ears to function.
Sometimes, conversation detection can be triggered inadvertently by coughing, singing along to music, or other random ambient sounds. It may also not work well in extremely noisy environments, such as construction sites and airplanes. Some models do let you adjust the sensitivity, which is something we’d like to see more of in firmware updates and future releases.
Frequent podcast or audiobook listeners should choose headphones that pause all audio for conversations, or at least handle it intelligently by distinguishing between audio types and pausing podcasts or audiobooks so you don’t miss anything. However, Apple and Samsung won’t pause videos from services like Netflix or YouTube; they just lower the audio.
As with all features that use sensors and mics, conversation detection will affect battery life to some degree, though it’s not a major drain.
The final verdict
Conversation detection modes aren’t for everyone, especially exuberant souls who talk to themselves at full volume, yell at the news or sing along with their tunes. If you reflexively take your earbuds out to talk to others, you also don’t need this feature — unless you want to change that habit.
In the future, I’d like to see more adjustability, but even as implemented in the current crop of headphones and earbuds, this feature is an excellent upgrade to the seamlessness of digital life.