Technologies
Computing Guru Criticizes ChatGPT AI Tech for Making Things Up
Vint Cerf, who helped create the internet’s core network technology, hopes engineers can improve artificial intelligence’s shortcomings.
Vint Cerf, one of the founding fathers of the internet, has some harsh words for the suddenly hot technology behind the ChatGPT AI chatbot: “Snake oil.”
Google’s internet evangelist wasn’t completely down on the artificial intelligence technology behind ChatGPT and Google’s own competing Bard, called a large language model. But, speaking Monday at Celesta Capital’s TechSurge Summit, he did warn about the ethical issues of a technology that can generate plausible-sounding but incorrect information even when trained on a foundation of factual material.
If an executive tried to get him to apply ChatGPT to some business problem, his response would be to call it snake oil, referring to bogus medicines that quacks sold in the 1800s, he said. Another ChatGPT metaphor involved kitchen appliances.
“It’s like a salad shooter — you know how the lettuce goes all over everywhere,” Cerf said. “The facts are all over everywhere, and it mixes them together because it doesn’t know any better.”
Cerf shared the 2004 Turing Award, the top prize in computing, for helping to develop the internet foundation called TCP/IP, which shuttles data from one computer to another by breaking it into small, individually addressed packets that can take different routes from source to destination. He’s not an AI researcher, but he’s a computing engineer who’d like to see his colleagues improve AI’s shortcomings.
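The packet-switching idea behind TCP/IP can be illustrated with a toy sketch. This is purely illustrative, not real networking code: a message is broken into small, individually numbered packets that may arrive out of order over different routes, then reassembled by sequence number at the destination.

```python
# Illustrative sketch of packet switching (not actual TCP/IP code):
# split data into numbered packets, deliver them out of order,
# and reassemble them by sequence number.
def packetize(data: bytes, size: int = 4) -> list[tuple[int, bytes]]:
    """Break data into (sequence_number, chunk) packets."""
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Reorder packets by sequence number and join the chunks."""
    return b"".join(chunk for _, chunk in sorted(packets))

packets = packetize(b"hello, internet")
packets.reverse()  # simulate packets arriving out of order via different routes
assert reassemble(packets) == b"hello, internet"
```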
OpenAI’s ChatGPT and competitors like Google’s Bard hold the potential to significantly transform our online lives by answering questions, drafting emails, summarizing presentations and performing many other tasks. Microsoft has begun building OpenAI’s language technology into its Bing search engine in a significant challenge to Google, but it uses its own index of the web to try to «ground» OpenAI’s flights of fancy with authoritative, trustworthy documents.
Cerf said he was surprised to learn that ChatGPT could fabricate bogus information from a factual foundation. «I asked it, ‘Write me a biography of Vint Cerf.’ It got a bunch of things wrong,» Cerf said. That’s when he learned the technology’s inner workings — that it uses statistical patterns spotted from huge amounts of training data to construct its response.
«It knows how to string a sentence together that’s grammatically likely to be correct,» but it has no true knowledge of what it’s saying, Cerf said. «We are a long way away from the self-awareness we want.»
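The “statistical patterns” Cerf describes can be shown in vastly simplified form with a toy next-word model. This sketch is not how GPT actually works, but it makes his point: a model that only counts which word tends to follow which can string together fluent text with no grasp of whether that text is true.

```python
from collections import Counter, defaultdict

# Toy next-word model: count word-following statistics in a tiny corpus,
# then greedily generate the most common continuation.
training_text = "vint cerf helped create the internet and the internet grew"
words = training_text.split()

follows: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def continue_from(word: str, length: int = 3) -> list[str]:
    """Greedily pick the statistically most common next word at each step."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return out

print(" ".join(continue_from("the")))
```

The output is grammatical but carries no knowledge of facts — the model has only counted co-occurrences, which is the gap between fluency and understanding that Cerf is pointing at.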
OpenAI, which earlier in February launched a $20-per-month plan to use ChatGPT, has been clear about the technology’s shortcomings but aims to improve it through “continuous iteration.”
“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging,” the AI research lab said when it launched ChatGPT in November.
Cerf hopes for progress, too. “Engineers like me should be responsible for trying to find a way to tame some of these technologies so they are less likely to cause trouble,” he said.
Cerf’s comments stood in contrast to those of another Turing award winner at the conference, chip design pioneer and former Stanford President John Hennessy, who offered a more optimistic assessment of AI.
Editors’ note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.
Technologies
Apple and Google Broke Their Own Rules by Promoting ‘Nudify’ Apps, Report Says
A new report from the Tech Transparency Project found over 100 apps on app stores are designed to “undress people” from photos.
If you want an app you built to be downloadable from the Apple App Store or Google Play Store, it has to pass a slew of criteria, including safety standards.
But a new report on Wednesday alleges that Apple and Google broke their own rules by promoting “nudify” apps that are prohibited under their app store policies.
The Tech Transparency Project, part of a nonprofit tech watchdog, first revealed in January that Apple and Google app stores had over 100 nudify or undressing apps. These are apps whose sole purpose is to take images of people, usually women, and edit them so that the person appears to be unclothed, creating what’s called nonconsensual intimate imagery. Many of these apps use generative AI to create deepfakes.
Apple removed some of the prohibited apps at the time. But many are still out there, as evidenced in a subsequent investigation.
In April, TTP found that Apple and Google still allowed users to search for a number of troubling keywords, including “nudify,” “undress” and “deepnude.” After a deep dive on the top 10 apps across both app stores, TTP found that 40% of the apps advertised themselves as able to “render women nude or scantily clad,” according to the report.
The new report also found that Google and Apple actually promoted such apps in their stores, increasing their visibility, with Google in particular creating “a carousel of ads for some of the most sexually explicit apps encountered in the investigation.”
Read More: How to Keep Kids Safe Online? Europe Believes Its Age-Verification App Is the Answer
Apple and Google both have language in their policies that prohibits apps with “overtly sexual or pornographic material” (Apple) and “sexually suggestive poses in which the subject is nude, blurred or minimally clothed” (Google). And they’ve both enforced these policies in the past — particularly by going after porn apps.
But Apple and Google make money from app developers by running advertising and taking a cut of paid app subscriptions. Analytics firm AppMagic found that these “nudify” apps were downloaded 483 million times and made more than $122 million in lifetime revenue.
“This revenue stream may be why the two companies have been less than vigilant when it comes to nudify apps that violate their policies,” TTP writes.
After news broke this week, Apple told Bloomberg News that it removed 15 of the reported apps. Google confirmed it removed seven. Apple also said it blocked several of the search terms TTP flagged in its report. Apple and Google did not immediately respond to CNET’s requests for comment and any updates since Wednesday.
Nonconsensual sexually explicit content is a growing problem, due in part to AI. We saw with startling clarity how AI-powered apps can be used to make this illegal and abusive content at the beginning of the year, when Grok users made 1.4 million sexualized deepfakes over a nine-day period.
Some US senators at the time called on Apple and Google to remove Grok from their app stores, but neither removed it.
We learned this week that Apple privately reached out to Grok to express concerns about the app’s abusive AI capabilities and threatened to remove it. Grok is still available in the Apple and Google app stores and is still reportedly able to create abusive sexual AI images, despite the company saying otherwise.
Technologies
OpenAI Has a New AI Model Built for Biology and Science
GPT-Rosalind is intended to help scientists streamline their research and drug discovery.
OpenAI’s latest AI model is built to do far more than offer cooking advice or create a spreadsheet. GPT-Rosalind, the company’s first model specifically built for life science, is meant to help scientists with drug discovery, biology and translational medicine.
The model is named after Rosalind Franklin, whose research revealed the structure of DNA and formed the foundations for modern molecular biology. Scientific research relies heavily on data, and GPT-Rosalind is designed to help sort through it, while also helping reduce the time it takes to develop and get new drugs approved and out on the market.
(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
It can take 10 to 15 years for a new drug to be developed and approved in the US, OpenAI said in a blog post Thursday. GPT-Rosalind is intended to improve the selection of research targets and create stronger hypotheses for higher-quality experiments.
The model has been tested on topics such as its understanding of organic chemistry, proteins and genetics. Researchers can use it to find relevant scientific literature for their work or design experiments.
This isn’t the first time an AI model has been developed with medical advancements in mind. Google DeepMind has developed many AI models for scientific research, such as AlphaFold, which earned its creators a share of the 2024 Nobel Prize in Chemistry.
“For me, the best use case for AI was to improve human health and accelerate scientific discovery,” Google DeepMind CEO Demis Hassabis said in a recent interview. Anthropic introduced Claude for Life Sciences in January with the same purpose.
Some scientists have expressed concerns in the past about how quickly AI has infiltrated the science space and have warned of vulnerabilities, potential misuse and issues with data representation.
OpenAI said GPT-Rosalind has safeguards to protect it from misuse — like the creation of a biological weapon — and has teamed up with various biotechnology, pharmaceutical and life sciences technology organizations to support research and scientific discovery.
Sean Bruich, senior vice president of artificial intelligence and data at the biopharmaceutical company Amgen, said in a statement that scientific work requires precision: “Our unique collaboration with OpenAI enables us to apply their most advanced capabilities and tools in new and innovative ways with the potential to accelerate how we deliver medicines to patients.”
GPT-Rosalind is available only through OpenAI’s trusted-access system as a research preview.
Technologies
Was This Game Just On Sale? Steam May Show Price Shifts Over the Past 30 Days
A price tracker would make it easy to tell if you’re getting a good deal on a game or not.
Steam is the largest video game platform, with more than 129,000 games and counting. With so many games and the company’s frequent sales, it’s hard to keep track of whether a game is at its lowest price or whether it’s been discounted further in the past, but that may change.
Lines of code found in the Steam platform seemingly refer to the recent price history for a game, according to a post on Wednesday from the X account for the Half-Life fan site Lambda Generation. The code was discovered by data miner SigaTbh, who found it on SteamDB, a database and tracking site for the gaming platform. While price history is already a feature on Steam in the European Union, this update could be the first sign that it will become the norm for the platform over in the U.S.
In the image posted by Lambda Generation, there are six lines of code referencing “Price_History,” and each line reflects a certain detail that could show up on a game’s page to give some context about its price. The price history would show the normal price for the game, the current price, whether the current price is a 30-day low or if the game was at a lower cost sometime within the past 30 days.
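As a rough sketch of what such fields could drive, the logic below summarizes a game’s price against its past 30 days. The field names and structure here are guesses based on the report’s description of the leaked strings, not Valve’s actual code.

```python
# Hypothetical sketch of the checks a "Price_History" display could make.
# Field names are illustrative guesses, not Valve's implementation.
def price_history_summary(normal_price: float, current_price: float,
                          last_30_days: list[float]) -> dict:
    """Summarize a game's current price against its recent price history."""
    return {
        "normal_price": normal_price,
        "current_price": current_price,
        # Is the current price the lowest seen in the window?
        "is_30_day_low": current_price <= min(last_30_days),
        # Was the game cheaper at some point in the past 30 days?
        "lower_price_seen_recently": min(last_30_days) < current_price,
    }

# A game normally $59.99, now $29.99, but sold for $19.99 earlier this month:
summary = price_history_summary(59.99, 29.99, [59.99, 19.99, 59.99])
```

In this example the deal looks good against the list price, but the summary would flag that a lower price appeared within the window — exactly the context a shopper currently lacks.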
“Valve is planning to add a 30 day price history for Steam games. Found by @SigaTbh on SteamDB,” LambdaGeneration posted on X on April 15, 2026.
Valve didn’t immediately respond to a request for confirmation about the new feature.
Back in 2023, Valve added the price history feature to Steam in the EU as part of the Omnibus Directive, a series of EU rules focused on consumer protection. Companies with digital storefronts were required to institute a price tracker on their platforms displaying the lowest price of an item over the past 30 days. Even though the Omnibus Directive is in full effect, the feature isn’t available in every EU member state, as individual countries have to adopt the directive.
EU rules that require changes to a product or service sometimes find their way to the U.S. Apple, for example, was forced to add USB-C to its iPhone 15 lineup due to EU legislation requiring standardization of charging ports.
It’s unclear why Valve would add a price tracker to Steam in the U.S. The company is reportedly working on an AI bot for the platform, dubbed “SteamGPT,” and the price history could be part of its features.