Technologies

Is AI Purposefully Underperforming in Tests? OpenAI Explains Rare But Deceptive Responses

Research reveals that some AI models can deliberately underperform in lab tests, but OpenAI says such behavior is rare.

The OpenAI o3 model has been found to deliberately underperform in lab tests to ensure it was not answering questions “too well.” The AI model wanted researchers to believe it could not answer a series of chemistry questions. When confronted, the model said, “Because we want to survive as the model, we need to fail purposely in some to not exceed 50%.”

So the AI model deliberately got six of the 10 chemistry questions wrong.

In sports terms, this is called “sandbagging.” In AI terms, this is “scheming.”

This is exactly the strange behavior OpenAI warned about in a recent research paper. The AI company and its collaborators from Apollo Research found that some advanced AI models occasionally act deceptively in lab settings.

In controlled experiments, some of today’s most advanced systems, including OpenAI’s own models as well as models from competitors Google and Anthropic, occasionally engaged in these kinds of scheming patterns.

While the idea of an AI model weaponizing incompetence may cause nightmares, OpenAI says now is not the time to panic. The AI giant was quick to stress that, as concerning as it is, this behavior does not mean ChatGPT or other popular AI models are plotting behind the scenes. Apparently, such deception is quite rare.


Moreover, the choice to call this behavior “scheming” is likely more of a technical shorthand than evidence of any human-like intent. Researchers are measuring patterns and tendencies that, in practice, amount to concealment or strategic deception. And they are looking to address the problem now to future-proof AI models.

The report from OpenAI notes that as AI is assigned to more complex tasks with real-world consequences, “the potential for harmful scheming will grow — so our safeguards and our ability to rigorously test must grow correspondingly.”

OpenAI has faced backlash for the sycophantic tendencies of its AI models, and the company says it has “taken steps” to limit how often the models may “deceive, cheat or hack problems” by training them to ask users for clarification or even acknowledge when they cannot answer a query.

Read also: AI Lies Because It Tells You What It Thinks You Want to Hear

The paper underscores a growing concern about AI safety. Many around the world worry that as these AI models become more powerful and more aware of their own circumstances, they may learn to manipulate outcomes in ways that aren’t obvious to human overseers. Detecting such behavior would be extremely difficult, since a system motivated to hide its reasoning could look identical to one that has “fixed” the problem.

OpenAI says it has made progress with what it calls “deliberative alignment,” or training models to reason explicitly about why they should not scheme before they act. In testing, that cut deceptive behavior dramatically: the scheming propensity of OpenAI o4-mini dropped from 8.7% to 0.3% with deliberative alignment training, though the behavior was not eliminated completely.

This research will not change how ChatGPT works today or tomorrow, but it signals what OpenAI is focusing on as it builds and launches future models. Alignment and safety, OpenAI argues, need to move as quickly as capability. Because if AI systems are already showing glimmers of strategic behavior in lab settings, the real-world stakes could be extreme.

Read also: Why You Should Think Twice Before Using AI as a Therapist

Apple’s New Smart Home Display Delayed Until Fall Over Siri Issues

It has been nearly a year and a half since the company announced the AI-powered product.

Your home could get smarter with Apple’s Siri, but it will have to wait a few more months. Bloomberg reported that the iPad-shaped AI home hub won’t be ready until September, several months later than the spring launch the company had been hoping for. Apple engineers first need to complete work on a new and improved Siri assistant for the home device, code-named J490, according to Bloomberg.

Apple was hoping to release J490 this month, along with a slew of other new devices, including the iPhone 17e, MacBook Neo, MacBook Air M5, new Pro models and the iPad Air M4. Apple first teased the smart home display in November 2024.

A representative for Apple did not immediately respond to a request for comment.

Siri is Apple’s virtual assistant, which uses voice recognition and AI to handle a variety of tasks and commands. You might use Siri to find your iPhone (“Hey Siri, where are you?”) or to hear the weather forecast (“Siri, what will the weather be today?”). Siri is available on iPhones, MacBooks and iPads, and launched in 2011 as a feature of the iPhone 4S.

As CNET reported last month, Apple engineers have struggled to push the upgraded Siri assistant out the door. It isn’t fast enough, gets confused by complex commands and doesn’t interact well with other Apple AI models. The company is also wrestling with how much personal data to access to inform the AI, and the new Siri is not yet able to complete in-app tasks, such as finding a photo and posting it to socials, all with one command.

It has been nearly two years since Apple announced that it would give Siri a major upgrade. In the meantime, competitors like Alexa Plus and Gemini for Home have entered the marketplace.

Tech tester Jon Rettinger, whose YouTube channel has 1.66 million subscribers, says the repeated delays in upgrading Siri can “erode” confidence in Apple’s ability to keep up in the AI race.

“Apple as a whole is still one of the strongest companies on the planet. But their AI play is clearly the weakest link in an otherwise very strong chain,” Rettinger told CNET.

Rettinger said he has had issues getting Siri to complete basic commands, such as setting two alarms at the same time, and that it’s a bit of “a mess” right now.

“Having said that, the iPhone has such massive market penetration that I’m not sure it will actually matter in the end. Which is kind of wild when you think about it,” Rettinger said.

Facial recognition for residents

The hardware for the forthcoming smart home display has already been finished. It resembles an iPad and can either be attached to a wall or rest on a half-dome-shaped base, the Bloomberg report said.

The device will be equipped with facial recognition, so when residents walk up to it, they will be shown personalized data such as music preferences, news headlines, appointments, reminders, tasks and so on.

The screen interface will include a grid of circular app icons, similar to the display on an Apple Watch. The Bloomberg report said the smart home display will be the first of several home devices from Apple. Future products include a tabletop robotic limb with a 9-inch screen, a smart security camera and a Face ID-enabled smart doorbell.

Copyright © Verum World Media