Technologies

You’ll Soon Be Able to Curtail AI on TikTok With a New Sliding Tool

The social media video platform is also working on improving labeling of AI-generated content.

TikTok viewers who are overwhelmed by AI-generated content in their feed may soon have a tool to adjust the level of AI content. The social media company said in a blog post that, in the coming weeks, it will be testing a new slider tool under its Manage Topics settings, which will allow users to determine how much AI-generated content appears in their feed.

This includes the option to see more AI-generated content, if you choose, TikTok says.

“Our goal is to allow people to customize their feed and see more of the content that appeals to them, and we know there are plenty of positive and useful cases of (AI-generated content) that people enjoy,” a TikTok spokesperson told CNET.

TikTok argues that there’s a difference between what’s called AI slop, AI-generated content that is often deceptive and carelessly made, and other content that uses AI, such as AI-generated history content. The site’s community guidelines ban “low-quality content” and spam, which, if enforced perfectly, would mean little AI slop would get through.

In the same blog post, TikTok said it’s working on “invisible watermarks” that would label AI-generated videos as AI and that can’t be removed when those videos are reposted.

The company said this will help it and other content providers identify AI-generated videos, even if those watermarks are not visible to viewers. 

“‘Invisible watermarks’ add another layer of safeguards with a robust technological ‘watermark’ that only we can read, making it harder for others to remove,” TikTok said in its post.

Lastly, TikTok said it’s donating $2 million to a fund for organizations, including Girls Who Code, to make AI literacy videos.

Toning it down

TikTok isn’t the only platform offering ways to curtail the flood of AI-generated content, particularly AI slop, that many people are seeing in their social feeds. 

Meta has ways to mute its AI tools and suggestions on its platforms, such as Instagram and WhatsApp. Pinterest has also added a feature similar to what TikTok is testing to reduce the amount of AI content you see.

Apple and Google Broke Their Own Rules by Promoting ‘Nudify’ Apps, Report Says

A new report from the Tech Transparency Project found over 100 apps on app stores are designed to “undress people” from photos.

If you want an app you built to be downloadable from the Apple App Store or Google Play Store, it has to pass a slew of criteria, including safety standards. 

But a new report released Wednesday alleges that Apple and Google broke their own rules by promoting “nudify” apps that are prohibited under their app store policies.

The Tech Transparency Project, part of a nonprofit tech watchdog, first revealed in January that Apple and Google app stores had over 100 nudify or undressing apps. These are apps with the sole purpose of taking images of people, usually women, and editing them to depict that person without clothing, creating what’s called nonconsensual intimate imagery. Many of these apps use generative AI to create deepfakes.

Apple removed some of the prohibited apps at the time. But many are still out there, as evidenced in a subsequent investigation.

In April, TTP found that Apple and Google still allowed users to search for a number of troubling keywords, including “nudify,” “undress” and “deepnude.” After a deep dive on the top 10 apps across both app stores, TTP found that 40% of the apps advertised themselves as able to “render women nude or scantily clad,” according to the report.

The new report also found that Google and Apple actually promoted such apps in their stores, increasing their visibility, with Google in particular creating “a carousel of ads for some of the most sexually explicit apps encountered in the investigation.”

Apple and Google both have language in their policies that prohibits apps with “overtly sexual or pornographic material” (Apple) and “sexually suggestive poses in which the subject is nude, blurred or minimally clothed” (Google). And they’ve both enforced these policies in the past — particularly by going after porn apps.

But Apple and Google make money from app developers by running advertising and taking a cut of paid app subscriptions. Analytics firm AppMagic found that these “nudify” apps were downloaded 483 million times and made more than $122 million in lifetime revenue.

“This revenue stream may be why the two companies have been less than vigilant when it comes to nudify apps that violate their policies,” TTP writes.

After news broke this week, Apple told Bloomberg News that it removed 15 of the reported apps. Google confirmed it removed seven. Apple also said it blocked several of the search terms TTP flagged in its report. Apple and Google did not immediately respond to CNET’s requests for comment and any updates since Wednesday.

Nonconsensual graphic sexual content is a growing issue, due in part to AI. We saw with startling clarity how AI apps can be used to make this illegal and abusive content at the beginning of the year, when Grok users made 1.4 million sexualized deepfakes over a nine-day period.

Some US senators at the time called on Apple and Google to remove Grok from their app stores, but neither removed it. 

We learned this week that Apple privately reached out to Grok’s maker to express concerns about the chatbot’s abusive AI capabilities and threatened to remove it. Grok remains available in the Apple and Google app stores and is still reportedly able to create abusive AI sexual images, despite the company saying otherwise.

OpenAI Has a New AI Model Built for Biology and Science

GPT-Rosalind is intended to help scientists streamline their research and drug discovery.

OpenAI’s latest AI model is built to do far more than offer cooking advice or create a spreadsheet. GPT-Rosalind, the company’s first model specifically built for life science, is meant to help scientists with drug discovery, biology and translational medicine. 

The model is named after Rosalind Franklin, whose research revealed the structure of DNA and formed the foundations for modern molecular biology. Scientific research relies heavily on data, and GPT-Rosalind is designed to help sort through it, while also helping reduce the time it takes to develop and get new drugs approved and out on the market. 

(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

It can take 10 to 15 years for a new drug to be developed and approved in the US, OpenAI said in a blog post Thursday. GPT-Rosalind is intended to improve the selection of research targets and create stronger hypotheses for higher-quality experiments. 

The model has been tested on topics such as its understanding of organic chemistry, proteins and genetics. Researchers can use it to find relevant scientific literature for their work or design experiments.

This isn’t the first time an AI model has been developed with medical advancements in mind. Google DeepMind has developed many AI models for scientific research, such as AlphaFold, which earned its creators a share of the 2024 Nobel Prize in Chemistry.

«For me, the best use case for AI was to improve human health and accelerate scientific discovery,» Google DeepMind CEO Demis Hassabis said in a recent interview. Anthropic introduced Claude for Life Sciences in January with the same purpose. 

Some scientists have expressed concerns in the past about how quickly AI has infiltrated the science space and have warned of vulnerabilities, potential misuse and issues with data representation.

OpenAI said GPT-Rosalind has safeguards to protect it from misuse — like the creation of a biological weapon — and has teamed up with various biotechnology, pharmaceutical and life sciences technology organizations to support research and scientific discovery.

Sean Bruich, senior vice president of artificial intelligence and data at the biopharmaceutical company Amgen, said in a statement that scientific work requires precision: «Our unique collaboration with OpenAI enables us to apply their most advanced capabilities and tools in new and innovative ways with the potential to accelerate how we deliver medicines to patients.»

GPT-Rosalind is available only through OpenAI’s trusted-access system as a research preview. 

Was This Game Just On Sale? Steam May Show Price Shifts Over the Past 30 Days

A price tracker would make it easy to tell if you’re getting a good deal on a game or not.

Steam is the largest video game platform, with more than 129,000 games and counting. With so many games and frequent sales, it’s hard to keep track of whether a game is at its lowest price or whether it’s been discounted further in the past, but that may change.

Lines of code found in the Steam platform seemingly refer to the recent price history for a game, according to a post on Wednesday from the X account for the Half-Life fan site Lambda Generation. The code was discovered by data miner SigaTbh, who found it on SteamDB, a database and tracking site for the gaming platform. While price history is already a feature on Steam in the European Union, this update could be the first sign that it will become the norm for the platform in the US.

In the image posted by Lambda Generation, there are six lines of code referencing “Price_History,” and each line reflects a detail that could show up on a game’s page to give some context about its price. The price history would show the normal price for the game, the current price, whether the current price is a 30-day low, and whether the game was at a lower cost sometime within the past 30 days.
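As a purely illustrative sketch, the kind of fields those leaked strings hint at could be modeled as below. The field names, types and 30-day-low logic here are guesses based on the report, not Steam’s actual data model or code:

```python
from dataclasses import dataclass, field

# Hypothetical model of a game's recent price data -- field names are
# assumptions inspired by the leaked "Price_History" strings.
@dataclass
class PriceHistory:
    normal_price: float                  # regular, non-sale price
    current_price: float                 # price shown right now
    prices_last_30_days: list = field(default_factory=list)  # daily prices

    def is_30_day_low(self) -> bool:
        # The current price is a 30-day low if nothing cheaper
        # appeared anywhere in the trailing 30-day window.
        return self.current_price <= min(self.prices_last_30_days)

    def lowest_recent_price(self) -> float:
        # Lowest price seen in the window, including the current price.
        return min(self.prices_last_30_days + [self.current_price])

game = PriceHistory(normal_price=59.99, current_price=29.99,
                    prices_last_30_days=[59.99, 44.99, 59.99])
print(game.is_30_day_low())  # True: no lower price in the window
```

This mirrors the EU version of the feature, where storefronts must display an item’s lowest price over the prior 30 days so shoppers can judge whether a discount is genuine.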

Valve didn’t immediately respond to a request for confirmation about the new feature. 

Back in 2023, Valve added the price history feature to Steam in the EU as part of the Omnibus Directive, a set of EU rules focused on consumer protection. Companies with digital storefronts were required to institute a price tracker on their platforms to display the lowest price of an item over the past 30 days. Even though the Omnibus Directive is in full effect, the tracker isn’t available in every EU member state, as individual countries have to adopt the directive.

EU rules that require changes to a product or service sometimes find their way to the US. Apple, for example, was forced to add USB-C to its iPhone 15 lineup because of EU legislation requiring standardized charging ports.

It’s unclear why Valve would add a price tracker to Steam in the US. The company is reportedly working on an AI bot for the platform, dubbed “SteamGPT,” and price history could be among its features.

Copyright © Verum World Media