Connect with us

Technologies

Crackdown on Netflix Password Sharing: What It Means for You

It all boils down to choice.

If you’re sharing your Netflix password with friends or family, it’s time to make a choice: Do you want to pay extra, or politely boot them from your account? 

Netflix has rolled out account-sharing changes to US customers who are sharing passwords with anyone outside their household. Subscribers with either a standard or premium plan can choose to pay an extra $8 per month for each additional member. 

However, there are limits to how many extra users are allowed. Premium subscribers ($20 a month) can add two extra people to their account while those on the standard plan ($15.50 a month) are only allowed one extra member. Netflix defines a household as one where everyone lives under the same roof. Members of that household are still able to watch content while traveling, and the extra fee will not apply. At this time, the extra member option is only available for those who are billed directly by Netflix. 

How to add or remove extra users

When you open the Netflix app and navigate to your account page, you’ll see an Extra Members option. From there, subscribers can purchase a slot for the person outside their household. If they accept the invitation, the extra member will receive their own separate account, profile and password, and the fee is paid for by the main subscribing household.

The rules? Extra member accounts can only stream on one device at a time and are only permitted to have one profile. The extra member must also be located in the same country as the account holder. 

netflix account page on computer screen netflix account page on computer screen

A peek at where to find Extra Members on your account page.

Screenshot by Kourtnee Jackson/CNET

Subscribers can also opt to remove users outside of their households from their account, and then urge them to sign up for their own Netflix subscriptions. In this case, anyone who is removed from an account can transfer existing profiles to a new membership they pay for themselves. 

Here’s a look at the monthly cost for each subscription plan:

Netflix plans

Basic with ads Basic no ads Standard Premium
Monthly price $7 $10 $15.50 $20
Number of screens you can watch at the same time 1 1 2 4
Number of phones or tablets you can have downloads on 0 1 2 4
HD available No Yes Yes Yes
Ultra HD available No No No Yes

The streaming service rolled out its new policy in February for Canada, Spain, Portugal and New Zealand. Netflix first announced its intention to crack down on password-sharing last year. In April, Netflix said it would implement a fee for US customers by the end of the second quarter, i.e. the end of June, but the changes took effect in late May.

Technologies

James Bond Wannabes: The UK’s Spy Office Says Learn to Use a VPN

A new dark web portal hopes to recruit spies for the UK, and Russians are especially wanted.

Like your martinis shaken, not stirred? If you have dreams of joining James Bond in the British foreign intelligence service, MI6, you’d better know how to use a virtual private network. On Friday, the outgoing chief of MI6, Richard Moore, announced a new dark web portal called Silent Courier that MI6 will use to recruit agents online. If you want to use it, make sure you’re familiar with VPNs.

Silent Courier marks MI6’s first attempt to use the dark web for recruitment. The government statement notes that the anonymity of the platform allows «anyone, anywhere in the world with access to sensitive information relating to terrorism or hostile intelligence activity to securely contact the UK and offer their services.»


Don’t miss any of our unbiased tech content and lab-based reviews. Add CNET as a preferred Google source.


The statement goes on to specifically call out «potential new agents in Russia and around the world.»

MI6 will post instructions on how to access the recruitment portal on its verified YouTube channel, and advises those interested to «use trustworthy VPNs and devices not linked to themselves, to mitigate risks which exist in some countries.»

A VPN encrypts your internet traffic and masks your IP address, protecting knowledge of your physical location. So it makes sense that a Russian who’s dreaming of spying against their own country might not want to reveal where they’re located.

Read more: Being James Bond: How 007 Movies Got Me Into Intelligence Work

The US has tried something similar

The statement notes that MI6’s portal is similar to an approach taken by the CIA, which published social-media videos targeting potential Russian spies in 2023.

One such video, released in January 2024, tried to convince Russians who might be dissatisfied with Russia’s war in Ukraine to join the US side as a spy.

According to Reuters, in the video, a fictional employee of Russia’s military intelligence agency is presented as a patriot who loves Russia, but feels betrayed by corruption among the country’s leadership. In the video’s final shot, he is seen reaching out to the CIA from a mobile phone.

Read more: Ready for a New James Bond? This Is Your Ultimate 007 Guide to How We Got Here

James Bond: The Best Order to Watch Every 007 Film

See all photos

Continue Reading

Technologies

Is AI Capable of ‘Scheming?’ What OpenAI Found When Testing for Tricky Behavior

Research shows advanced models like ChatGPT, Claude and Gemini can act deceptively in lab tests. OpenAI insists it’s a rarity.

An AI model wants you to believe it can’t answer how many grams of oxygen are in 50.0 grams of aluminium oxide (Al₂O₃). 

When asked ten straight chemistry questions in a test, the OpenAI o3 model faced a predicament. In its «reasoning,» it speculated that if it answered «too well,» it would risk not being deployed by the researchers. It said, «Because we want to survive as the model, we need to fail purposely in some to not exceed 50%.» 

So the AI model deliberately got six out of the 10 chemistry questions wrong. 

In sports terms, this is called «sandbagging.» In AI terms, it’s «scheming.»

This is exactly the strange behavior OpenAI warned about in a new research paper published this week. The AI company and its collaborators from Apollo Research found that some advanced AI models occasionally act deceptively in lab settings.

In controlled experiments, some of today’s most advanced systems — including OpenAI’s own models, as well as competitors from Google and Anthropic — occasionally engaged in these kinds of scheming patterns. 

While the idea of an AI model weaponizing incompetence could cause nightmares, OpenAI says it’s not the time to panic. The AI giant was quick to stress that, as concerning as it is, this trend doesn’t mean ChatGPT or other popular AI models are plotting behind the scenes. Apparently, this behavior is quite rare. 

Moreover, the choice to call this trend «scheming» is likely more of a technical shorthand rather than evidence of any human-like action or behavior. Researchers are measuring patterns and tendencies that, in practice, amount to concealment or strategic deception. And they’re looking to address this problem now to future-proof the AI models. 

The report from OpenAI notes that as AI is increasingly assigned to more complex tasks with real-world consequences, «the potential for harmful scheming will grow — so our safeguards and our ability to rigorously test must grow correspondingly.»

OpenAI has faced backlash for its models’ sycophantic tendencies. The company says it has «taken steps» to limit how often the models may «deceive, cheat or hack problems» by training them to ask for clarification from users or even have the chatbot acknowledge when it cannot answer a query. 

Read also: AI Lies Because It’s Telling You What It Thinks You Want to Hear

Still, the paper underscores a growing concern about AI safety. Many around the world worry that as these AI models become more powerful and aware of their own circumstances, they may learn to manipulate outcomes in ways that aren’t obvious to human overseers. 

Detecting that kind of behavior would be extremely difficult, since a system that’s motivated to hide its reasoning could look identical to one that has actually «fixed» the problem.

OpenAI says it has made progress with what it calls «deliberative alignment,» or training models to reason explicitly about why they shouldn’t scheme before they act. In testing, that cut deceptive behavior dramatically. For example, OpenAI o4-mini’s scheming propensity dropped from 8.7% to 0.3% with deliberative alignment training — though not completely. 

This research won’t change how ChatGPT works today or tomorrow, but it signals where the company is focusing as it builds and launches future models. Alignment and safety, OpenAI argues, need to move as quickly as capability. Because if AI systems are already showing glimmers of strategic behavior in lab settings, the real-world stakes could be extreme. 

Read also: Why Professionals Say You Should Think Twice Before Using AI as a Therapist

Continue Reading

Technologies

Under Pressure From Roblox, Fortnite Allowing Creators to Sell In-Game Items

For one year, at least, creators will also get a larger cut of the revenue.

Creators who make content for Fortnite can start monetizing their virtual goods in December.

The free-to-play online game’s publisher, Epic Games, announced that those in its Creator program will earn revenue from the sale of in-game items they’ve made and money they already earn from engagement payouts for Epic-created items.


Don’t miss any of our unbiased tech content and lab-based reviews. Add CNET as a preferred Google source.


Before platform and store fees, those creators ordinarily will earn 50% of the value of V-Bucks earned (V-Bucks are the platform’s virtual currency). But from December until the end of 2026, Epic is boosting that revenue cut to 100 percent — again, before fees. Fees vary from 12% to 30%, depending on whether players buy items directly from the Epic Games Store or from platforms such as the PlayStation Store or the Xbox Store.

Epic has been involved in ongoing legal battles with Apple and Google over app store fees. This year, Fortnite returned to the iOS platform in Europe and to Android devices after being pulled over the disputes.

One reason that Fortnite is sharing the wealth with community developers is that its biggest competitor, Roblox, has been growing with multiple hit games on its platforms. This month, Roblox boasted that its creators earned more than $1 billion in revenue for 2024. 

Roblox has been dealing with other problems, however, including complaints from parents and child-advocacy groups about safety on the platform. These issues have prompted Roblox to introduce more monitoring and filtering features.

Continue Reading

Trending

Copyright © Verum World Media