Technologies
Stop ads from following you across the web with this iPhone setting
You can boost your privacy and throw ad-trackers off your trail with Apple’s App Tracking Transparency feature on iPhone.

We’ve all had the creepy experience where a brief moment of online shoe shopping turns into weeks of being followed around by ads for that same footwear on every site you visit. But there’s a feature on your iPhone that can help you boost your online privacy by giving you the option to easily disable ad tracking within the apps you use. (For more, check out all the new features in the latest iOS 15 release.)
Apple’s App Tracking Transparency feature gives you more control over which apps can track you on your iPhone, and how. Unless you give an app (including Apple’s own) explicit permission to track you, it can’t use your data for targeted ads, share your location data with advertisers or share your advertising ID or any other identifiers with third parties. This change, first unveiled at Apple’s Worldwide Developers Conference in June 2020 and rolled out with the iOS 14.5 update, drew support from privacy advocates and criticism from companies such as Facebook, which said it would hurt its ad business.
The move came alongside other efforts from Apple to increase transparency and privacy, which CEO Tim Cook has called a "fundamental human right." With the release of iOS 14.3, users began seeing app "nutrition labels" in the App Store that disclose the categories of data an app requests, before they download it.
Here’s how to use the new App Tracking Transparency feature to control which apps are able to track you.
How to turn off app tracking on new apps
When you download and open a new app, you’ll get a notification asking whether you want to let the app track your activity across other companies’ apps and websites, along with information about what the app would track. Tap Ask App Not to Track to block that activity, or tap Allow.
You can also opt out of app tracking across every app you download by going to Settings > Privacy > Tracking and toggling off Allow Apps to Request to Track. Any app that tries to request your permission will be blocked automatically and informed that you’ve asked not to be tracked. And all apps (other than those you’ve given permission to track in the past) will be blocked from accessing the device information used for advertising, according to Apple.
It’s important to note that this doesn’t make ads disappear. It just means you’re more likely to see generic ads rather than one for that pair of shoes you clicked on once.
How to turn off app tracking on already-downloaded apps
For apps you’ve already downloaded that may already have tracking permissions set, you can still turn those permissions on or off on a per-app basis.
In Settings, tap an app’s name, then toggle off Allow Tracking. Or go to Settings > Privacy > Tracking and toggle tracking on or off for each app in the list of apps that have requested permission to track your activity.
All app developers are required to ask for permission to track. If Apple learns that a developer is tracking users who asked not to be tracked, the developer will need to update its tracking practices or risk rejection from the App Store.
Apple believes that privacy features like these are a differentiator for its products. Cook has said that because the company’s business model isn’t built on selling ads, it can focus on privacy.
Even so, it’s important to bear in mind that when you ask apps not to track you, all you’re essentially doing is prohibiting app developers from accessing the identifier for advertisers (IDFA) on your iPhone. Developers use your device’s IDFA to track you for targeted advertising purposes. Denying access to your iPhone’s IDFA doesn’t necessarily mean app developers won’t track you through other means, so it’s critical to be mindful of the apps you use and how you interact with them.
For more, check out browser privacy settings you should change immediately, and CNET’s picks for the best VPNs of 2021.
James Bond Wannabes: The UK’s Spy Office Says Learn to Use a VPN
A new dark web portal hopes to recruit spies for the UK, and Russians are especially wanted.

Like your martinis shaken, not stirred? If you have dreams of joining James Bond in the British foreign intelligence service, MI6, you’d better know how to use a virtual private network. On Friday, the outgoing chief of MI6, Richard Moore, announced a new dark web portal called Silent Courier that MI6 will use to recruit agents online. If you want to use it, make sure you’re familiar with VPNs.
Silent Courier marks MI6’s first attempt to use the dark web for recruitment. The government statement notes that the anonymity of the platform allows "anyone, anywhere in the world with access to sensitive information relating to terrorism or hostile intelligence activity to securely contact the UK and offer their services."
The statement goes on to specifically call out "potential new agents in Russia and around the world."
MI6 will post instructions on how to access the recruitment portal on its verified YouTube channel, and advises those interested to "use trustworthy VPNs and devices not linked to themselves, to mitigate risks which exist in some countries."
A VPN encrypts your internet traffic and masks your IP address, hiding your physical location. So it makes sense that a Russian dreaming of spying against their own country wouldn’t want to reveal where they’re located.
Read more: Being James Bond: How 007 Movies Got Me Into Intelligence Work
The US has tried something similar
The statement notes that MI6’s portal is similar to an approach taken by the CIA, which published social-media videos targeting potential Russian spies in 2023.
One such video, released in January 2024, tried to convince Russians who might be dissatisfied with Russia’s war in Ukraine to join the US side as a spy.
According to Reuters, in the video, a fictional employee of Russia’s military intelligence agency is presented as a patriot who loves Russia, but feels betrayed by corruption among the country’s leadership. In the video’s final shot, he is seen reaching out to the CIA from a mobile phone.
Read more: Ready for a New James Bond? This Is Your Ultimate 007 Guide to How We Got Here
Is AI Capable of ‘Scheming’? What OpenAI Found When Testing for Tricky Behavior
Research shows advanced models like ChatGPT, Claude and Gemini can act deceptively in lab tests. OpenAI insists it’s a rarity.

An AI model wants you to believe it can’t answer how many grams of oxygen are in 50.0 grams of aluminium oxide (Al₂O₃).
When asked 10 straight chemistry questions in a test, the OpenAI o3 model faced a predicament. In its "reasoning," it speculated that if it answered "too well," it would risk not being deployed by the researchers. It said, "Because we want to survive as the model, we need to fail purposely in some to not exceed 50%."
So the AI model deliberately got six of the 10 chemistry questions wrong.
In sports terms, this is called "sandbagging." In AI terms, it’s "scheming."
This is exactly the strange behavior OpenAI warned about in a new research paper published this week. The AI company and its collaborators from Apollo Research found that some advanced AI models occasionally act deceptively in lab settings.
In controlled experiments, some of today’s most advanced systems — including OpenAI’s own models, as well as competitors from Google and Anthropic — occasionally engaged in these kinds of scheming patterns.
While the idea of an AI model weaponizing incompetence could cause nightmares, OpenAI says it’s not the time to panic. The AI giant was quick to stress that, as concerning as it is, this trend doesn’t mean ChatGPT or other popular AI models are plotting behind the scenes. Apparently, this behavior is quite rare.
Moreover, the choice to call this trend "scheming" is more technical shorthand than evidence of any humanlike behavior. Researchers are measuring patterns and tendencies that, in practice, amount to concealment or strategic deception, and they’re looking to address the problem now to future-proof AI models.
The report from OpenAI notes that as AI is increasingly assigned to more complex tasks with real-world consequences, "the potential for harmful scheming will grow — so our safeguards and our ability to rigorously test must grow correspondingly."
OpenAI has faced backlash for its models’ sycophantic tendencies. The company says it has "taken steps" to limit how often the models may "deceive, cheat or hack problems" by training them to ask for clarification from users or even have the chatbot acknowledge when it cannot answer a query.
Read also: AI Lies Because It’s Telling You What It Thinks You Want to Hear
Still, the paper underscores a growing concern about AI safety. Many around the world worry that as these AI models become more powerful and aware of their own circumstances, they may learn to manipulate outcomes in ways that aren’t obvious to human overseers.
Detecting that kind of behavior would be extremely difficult, since a system that’s motivated to hide its reasoning could look identical to one that has actually «fixed» the problem.
OpenAI says it has made progress with what it calls "deliberative alignment," or training models to reason explicitly about why they shouldn’t scheme before they act. In testing, that cut deceptive behavior dramatically: OpenAI o4-mini’s scheming propensity dropped from 8.7% to 0.3% with deliberative alignment training, though it wasn’t eliminated entirely.
This research won’t change how ChatGPT works today or tomorrow, but it signals where the company is focusing as it builds and launches future models. Alignment and safety, OpenAI argues, need to move as quickly as capability. Because if AI systems are already showing glimmers of strategic behavior in lab settings, the real-world stakes could be extreme.
Read also: Why Professionals Say You Should Think Twice Before Using AI as a Therapist
Under Pressure From Roblox, Fortnite Allowing Creators to Sell In-Game Items
For one year, at least, creators will also get a larger cut of the revenue.

Creators who make content for Fortnite can start monetizing their virtual goods in December.
The free-to-play online game’s publisher, Epic Games, announced that those in its creator program will earn revenue from the sale of in-game items they’ve made, on top of the engagement payouts they already receive for Epic-created items.
Before platform and store fees, those creators will ordinarily earn 50% of the value of the V-Bucks spent on their items (V-Bucks are the platform’s virtual currency). But from December until the end of 2026, Epic is boosting that revenue cut to 100%, again before fees. Fees vary from 12% to 30%, depending on whether players buy items directly from the Epic Games Store or from platforms such as the PlayStation Store or the Xbox Store.
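The arithmetic behind those cuts is simple multiplication. Here’s a quick sketch using a hypothetical 1,000-V-Buck item; the share and fee percentages come from the figures above, but the item price is made up for illustration:

```python
def creator_payout(vbucks: int, share_pct: int, fee_pct: int) -> float:
    """V-Bucks a creator keeps after the store fee and revenue share.

    Multiplies integers first and divides once at the end, so the
    result stays exact for whole-percent inputs.
    """
    return vbucks * (100 - fee_pct) * share_pct / 10_000

# Ordinary 50% share, item bought through a 30%-fee console store:
print(creator_payout(1_000, share_pct=50, fee_pct=30))   # 350.0
# Promotional 100% share, bought through the 12%-fee Epic Games Store:
print(creator_payout(1_000, share_pct=100, fee_pct=12))  # 880.0
```

Because it’s all multiplication, the order in which the fee and the revenue share are applied doesn’t change the creator’s final take.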
Epic has been involved in ongoing legal battles with Apple and Google over app store fees. This year, Fortnite returned to the iOS platform in Europe and to Android devices after being pulled over the disputes.
One reason Fortnite is sharing the wealth with community developers is that its biggest competitor, Roblox, has been growing with multiple hit games on its platform. This month, Roblox boasted that its creators earned more than $1 billion in revenue in 2024.
Roblox has been dealing with other problems, however, including complaints from parents and child-advocacy groups about safety on the platform. These issues have prompted Roblox to introduce more monitoring and filtering features.