Technologies
Apple, iPhones, photos and child safety: What’s happening and should you be concerned?
The tech giant has built new systems to fight child exploitation and abuse, but security advocates worry they could erode our privacy. Here's why.

Apple has long presented itself as a bastion of security, and as one of the few tech companies that truly cares about user privacy. But a new technology designed to detect child exploitation images and videos stored on an iPhone, iPad or Mac has ignited a fierce debate about the truth behind Apple's promises.
On Aug. 5, Apple announced a new feature being built into the upcoming iOS 15, iPadOS 15, WatchOS 8 and MacOS Monterey software updates, designed to detect whether someone has child exploitation images or videos stored on their device. It does this by converting images into unique bits of code, known as hashes, based on what they depict. The hashes are then checked against a database of known child exploitation content managed by the National Center for Missing and Exploited Children. If a certain number of matches are found, Apple is alerted and may investigate further.
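The matching step can be sketched in a few lines of Python. This is an illustrative toy, not Apple's actual system: the hash function, the threshold value and the database here are all placeholders (Apple's real design uses a perceptual hash it calls NeuralHash, and it has not published every detail of the threshold mechanism).

```python
import hashlib

# Hypothetical database of hashes of known illegal content (empty here);
# in the real system, this list is supplied by NCMEC, not by Apple.
KNOWN_HASHES: set[str] = set()

# Placeholder value -- Apple has not published its exact match threshold.
MATCH_THRESHOLD = 30

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash. A real system needs a hash that is
    # robust to resizing and re-encoding; SHA-256 is used here only to
    # keep the sketch runnable.
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(images: list[bytes], known_hashes: set[str]) -> int:
    # Compare each image's hash against the known-content database.
    return sum(1 for img in images if image_hash(img) in known_hashes)

def should_flag(images: list[bytes], known_hashes: set[str],
                threshold: int = MATCH_THRESHOLD) -> bool:
    # An account is surfaced for human review only past the threshold,
    # which is what keeps one-off false positives from triggering a report.
    return count_matches(images, known_hashes) >= threshold
```

The key design point the sketch captures is that nothing is compared as an image: only hashes are checked, and no single match triggers anything on its own.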
Apple said it developed this system to protect people’s privacy, performing scans on the phone and only raising alarms if a certain number of matches are found. But privacy experts, who agree that fighting child exploitation is a good thing, worry that Apple’s moves open the door to wider uses that could, for example, put political dissidents and other innocent people in harm’s way.
"Even if you believe Apple won't allow these tools to be misused, there's still a lot to be concerned about," tweeted Matthew Green, a professor at Johns Hopkins University who's worked on cryptographic technologies.
Apple's new feature, and the concern that's sprung up around it, represent an important debate about the company's commitment to privacy. Apple has long promised that its devices and software are designed to protect their users' privacy. The company even dramatized that with an ad it hung just outside the convention hall of the 2019 Consumer Electronics Show, which said "What happens on your iPhone stays on your iPhone."
"We at Apple believe privacy is a fundamental human right," Apple CEO Tim Cook has often said.
Apple's scanning technology is part of a trio of new features the company is planning for this fall. Apple is also enabling its Siri voice assistant to offer links and resources to people it believes may be in a serious situation, such as a child in danger. Advocates had been asking for that type of feature for a while.
It's also adding a feature to its Messages app to proactively protect children from explicit content, whether it's in a green-bubble SMS conversation or a blue-bubble encrypted iMessage chat. This new capability is specifically designed for devices registered under a child's iCloud account and will warn if it detects an explicit image being sent or received. As with Siri, the app will also offer links and resources if needed.
There’s a lot of nuance involved here, which is part of why Apple took the unusual step of releasing research papers, frequently asked questions and other information ahead of the planned launch.
Here’s everything you should know:
Why is Apple doing this now?
The tech giant said it's been trying to find a way to help stop child exploitation for a while. The National Center for Missing and Exploited Children received more than 65 million reports of child exploitation material last year, Apple said, up sharply from 401 reports 20 years ago.
"We also know that the 65 million files that were reported is only a small fraction of what is in circulation," said Julie Cordua, head of Thorn, a nonprofit fighting child exploitation that supports Apple's efforts. She added that US law requires tech companies to report exploitative material if they find it, but it does not compel them to search for it.
Other companies do actively search for such photos and videos. Facebook, Microsoft, Twitter, and Google (and its YouTube subsidiary) all use various technologies to scan their systems for any potentially illegal uploads.
What makes Apple’s system unique is that it’s designed to scan our devices, rather than the information stored on the company’s servers.
The hash scanning system will only be applied to photos stored in iCloud Photo Library, the photo syncing system built into Apple devices. It won't hash images and videos stored in the photos app of a phone, tablet or computer that isn't using iCloud Photo Library. So people can, in effect, opt out by choosing not to use Apple's iCloud photo syncing service.
Could this system be abused?
The question at hand isn’t whether Apple should do what it can to fight child exploitation. It’s whether Apple should use this method.
The slippery slope concern privacy experts have raised is whether Apple’s tools could be twisted into surveillance technology against dissidents. Imagine if the Chinese government were able to somehow secretly add data corresponding to the famously suppressed Tank Man photo from the 1989 pro-democracy protests in Tiananmen Square to Apple’s child exploitation content system.
Apple said it designed features to keep that from happening. The system doesn’t scan photos, for example — it checks for matches between hash codes. The hash database is also stored on the phone, not a database sitting on the internet. Apple also noted that because the scans happen on the device, security researchers can audit the way it works more easily.
Is Apple rummaging through my photos?
We've all seen some version of it: the baby-in-the-bathtub photo. My parents had some of me, I have some of my kids, and it was even a running gag in the 2017 DreamWorks animated comedy The Boss Baby.
Apple says those images shouldn't trip up its system. Because Apple's system converts our photos to these hash codes and then checks them against a known database of child exploitation videos and photos, the company isn't actually looking at our pictures. The company said the likelihood of a false positive is less than one in 1 trillion per year.
"In addition, any time an account is flagged by the system, Apple conducts human review before making a report to the National Center for Missing and Exploited Children," Apple wrote on its site. "As a result, system errors or attacks will not result in innocent people being reported to NCMEC."
Is Apple reading my texts?
Apple isn't applying its hashing technology to our text messages; that's effectively a separate system. Instead, with text messages, Apple only alerts a user marked as a child in their iCloud account when they're about to send or receive an explicit image. The child can still view the image, and if they do, a parent will be alerted.
"The feature is designed so that Apple does not get access to the messages," Apple said.
What does Apple say?
Apple maintains that its system is built with privacy in mind, with safeguards to keep Apple from knowing the contents of our photo libraries and to minimize the risk of misuse.
"At Apple, our goal is to create technology that empowers people and enriches their lives — while helping them stay safe," Apple said in a statement. "We want to protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material."
Google Is Bringing Gemini AI to Its Smart Home Lineup, Starting Oct. 1
Goodbye, Google Assistant. Hello, Gemini.

It increasingly feels like Google’s AI assistant is omnipresent across our devices and, starting next month, it could also be in your home.
In a post on X on Tuesday, the company teased, "Gemini is coming to Google Home," and told us to "come back October 1."
At its Made by Google event in August, the company announced Gemini for Home among a slew of other product announcements, so this has been in the works for a while.
Tuesday’s X post teaser appears to show an image of a Nest camera, which Google last upgraded four years ago, suggesting the security camera could be set for a refresh. An upgraded Nest speaker and doorbell, both with 2K camera support, could also be part of the Oct. 1 unveiling.
Google did not immediately respond to CNET’s request for comment.
Android Faithful podcast co-host (and former CNET staffer) Jason Howell is "optimistic" about Gemini replacing Assistant in Google's smart home products.
"In recent years, I have witnessed my Google Home devices degrading in quality and becoming far less useful for even simple tasks and questions," Howell tells CNET. "They've become buggy and unreliable to the point where I've stopped interacting with them for most things."
Gemini catches dog red-handed
At the Mobile World Congress tech show in Barcelona earlier this year, Howell was impressed by Gemini’s performance with a smart home camera.
"A smart home camera detected a dog that came into the kitchen to steal a cookie off the counter," Howell recalls. "Through voice interaction, the homeowner could ask the system what happened to the cookie, and, given the video context from the camera and an understanding of what it saw, the system could tell the homeowner that the dog was the culprit.
"This sort of example empowers users to spend less time looking for answers in lieu of simple voice queries that serve them the answer they are looking for with less effort and less time spent."
Google announced last month that Gemini for Home will eventually replace Google Assistant in its smart home devices. You'll still activate Gemini with "Hey Google," but the advanced AI tech will be able to better interpret more complex and nuanced instructions and questions.
Maybe you're stumped about what to make for dinner, so you could ask: "Hey Google, what quick pasta dish can I cook in less than an hour?" or "Give me a recipe for Caesar salad." Gemini is also designed to work with thermostats and smart lights, so you might tell it to "turn the temp to 68 degrees" or "turn off all the lights except in the kitchen."
The market for smart home technology is expected to grow by 23% over the next five years, according to Grand View Research.
Polar Introduces Loop, a $200 Screenless Wearable
Polar’s first fitness tracker with no screen tracks activity, sleep and overall health, the company says.

Fitness tracking company Polar has launched Loop, a $200 screenless wearable that it says will have no subscription fees. Preorders opened on Sept. 3, and the Polar Loop will start shipping on Sept. 10.
Like other fitness trackers, the Polar Loop will log steps, sleep patterns and daily activity, but Polar is touting the lack of a screen as "unobtrusive" and "discreet." The Loop, a wearable band for your wrist, has eight days of battery life with continuous use and stores four weeks' worth of data. It syncs with the Polar Flow app, where users can view stats and analyze sleep and training data, among other information.
Because it has no buttons, activities can be started in the app or passively with what the company calls «automatic training detection.»
It’s available in the colors Greige Sand, Night Black and Brown Copper. Additional band colors are offered for $20 each.
There’s already a market of no-screen wearables, including the Whoop 5.0 wristband and smart rings such as the Oura Ring 3.
Will the Loop measure up?
Whether the Polar Loop’s attempt at simplifying a fitness wearable works out will largely depend on how well it runs and what it offers compared to other devices.
"The company is clearly tapping into the growing demand for screen-free wearables," says CNET's lead writer for wearables, Vanessa Hand Orellana. "It feels like a direct answer to the athlete-favorite Whoop band and even the Oura Ring, both of which collect similar health metrics to display and analyze in their respective apps."
Hand Orellana says Polar has a good reputation, built on its signature heart-rate chest straps, and may win over fans by eschewing the subscription fees that Oura and Whoop require.
"That said, as with most devices in this space, the real differentiator often comes down to execution… specifically, how well the data translates into clear, actionable insights. Personally, I'm curious to see how the Loop integrates with Polar's app, which, at least in my experience with their HR straps, hasn't always been the most intuitive to navigate," she said.