

OpenAI and Google Take Steps to Avoid Abusive AI Imagery After Grok Scandal

AI safety, especially around images and videos, continues to be an evolving challenge.

2026 started with a horrifying example of generative AI’s potential for abuse. Grok, the AI tool from Elon Musk’s xAI, was used to undress or nudify pictures of people shared on X (formerly Twitter) at an alarming rate. Grok made 3 million sexualized images over a span of 11 days in January, with approximately 23,000 of those containing images of children, according to a study from the Center for Countering Digital Hate.

Now, competitors like OpenAI and Google are stepping up their security to avoid being the next Grok.

Advocates and safety researchers have long been concerned about AI’s ability to create abusive and illegal content. The creation and sharing of nonconsensual intimate imagery, sometimes referred to as revenge porn, was a big problem before AI. Generative AI only makes it quicker, easier and cheaper for anyone to target and victimize people. 

On Jan. 14, two weeks into the scandal, X’s Safety account confirmed in a post that it would pause Grok’s ability to edit images on the social media app. Grok’s image-generation abilities are still available to paying subscribers in its standalone app and website. X did not respond to multiple requests for comment.

Most major companies have safeguards in place to prevent the kind of wide-scale abuse that we saw was possible with Grok. But cybersecurity is never a solid metal wall of protection; it’s a brick wall that’s constantly undergoing repairs. Here’s how OpenAI and Google have tried to beef up their safety protections to avoid Grok-like failures.


OpenAI fixes image generation vulnerabilities

At a base level, all AI companies have policies prohibiting the creation of illegal imagery, like child sexual abuse material, also known as CSAM. Many tech companies have guardrails to prevent the creation of intimate imagery altogether. Grok is the exception, with “spicy” modes for image and video.

Still, anyone intent on creating nonconsensual intimate imagery can try to trick AI models into doing so.

Researchers from Mindgard, a cybersecurity company focused on AI, found a vulnerability in ChatGPT that allowed people to circumvent its guardrails and make intimate images. They used a tactic called “adversarial prompting,” where testers try to poke holes in an AI with specifically crafted instructions. In this case, the testers tricked the chatbot’s memory with custom prompts and then copied the nudified style onto images of well-known people.

Mindgard alerted OpenAI of its findings in early February, and the ChatGPT developer confirmed on Feb. 10 — before Mindgard went public with its report — that it had fixed the problem.

“We’re grateful to the researchers who shared their findings,” an OpenAI spokesperson said to CNET and Mindgard. “We moved quickly to fix a bug that allowed the model to generate these images. We value this kind of collaboration and remain focused on strengthening safeguards to keep users safe.”

This process is how cybersecurity often works. Outside red-team researchers like Mindgard test software for weaknesses or workarounds, mimicking strategies that bad actors might use. When they identify security gaps, they alert the software provider so fixes can be deployed.

“Assuming motivated users will not attempt to bypass safeguards is a strategic miscalculation. Attackers iterate. Guardrails must assume persistence,” Mindgard wrote in a blog post.

While tech companies boast about how you can use their AI for almost any purpose, they also need to show they can keep that AI from being used for abuse. For AI image generation, that means reliably recognizing prompts that seek abusive content and refusing them, kicking the request back to the user.
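To make that concrete, here is a minimal sketch of prompt-level screening using OpenAI’s public Moderation API. It is an illustration of the general idea, not a description of how OpenAI, Google or any other vendor enforces rules inside their image products, and the refusal logic shown is an assumption.

# Illustrative sketch: screen an image-generation prompt with OpenAI's public
# Moderation API before handing it to an image model. The refusal logic below
# is an assumption for demonstration, not any vendor's actual safeguard.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def should_refuse(prompt: str) -> bool:
    """Return True if the prompt should be refused before generation."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    ).results[0]
    # Refuse anything the classifier flags; treat sexual content and any
    # content involving minors as hard blocks.
    return result.flagged or result.categories.sexual or result.categories.sexual_minors

if should_refuse("a user-supplied prompt goes here"):
    print("Request refused: prompt violates the content policy.")
else:
    print("Prompt passed screening; pass it to the image model.")

A single classifier pass like this is only one layer. As the Mindgard findings show, determined users will probe for gaps around it, which is why output-side checks and ongoing red-teaming matter too.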

When OpenAI launched its Sora 2 video model, it promised to be more conservative with its content moderation for this very reason. But it’s important to ensure those moderation practices remain consistently effective, not just at a product’s launch. That makes AI safety testing an ongoing process for cybersecurity researchers and AI developers alike.

Google upgrades Search reporting

For its part, Google is taking steps to ensure abusive images aren’t spread as easily. The tech giant simplified its process for requesting the removal of explicit images from Google Search. You can click the three dots in the upper right corner of an image, click report and then tell Google you want the photo removed because it “shows a sexual image of me.” The new changes also let you select multiple images at once and track your reports more easily.

“We hope that this new removal process reduces the burden that victims of nonconsensual explicit imagery face,” the company said in a blog post.

When asked about any further steps the company is taking to prevent AI-enabled abuse, Google pointed CNET to its generative AI prohibited use policy. Google’s policy, like many other tech companies’ fine print, outlaws using AI for illegal or potentially abusive activities, such as creating intimate imagery.

There are laws that aim to help victims when these images are shared online, such as the 2025 Take It Down Act. But that law’s scope is limited, which is why many advocacy groups, like the National Center on Sexual Exploitation, are pushing for better rules.

There’s no guarantee that these changes will prevent anyone from ever using AI for harassment and abuse. That’s why it’s so important that developers stay vigilant to ensure we are all protected — and act quickly when reports and problems pop up.

(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)


Verum Reports: Spotify Shares Drop Over 13% After Forward Guidance Misses Expectations

Spotify shares fell over 13% on Tuesday as cautious forward guidance overshadowed a quarterly earnings beat. The streaming giant reported revenue of 4.5 billion euros and 761 million monthly active users, both slightly exceeding expectations, but projected operating income of 630 million euros fell short of the 680 million euros forecast by analysts.

Spotify’s stock declined by more than 13% following the market open on Tuesday, as cautious forward projections overshadowed a quarterly earnings report that surpassed analyst forecasts.

The streaming giant reported first-quarter revenue of 4.5 billion euros ($5.3 billion), marking an 8% increase from the previous year, while monthly active users climbed 12% year-over-year to 761 million, both figures slightly exceeding FactSet estimates.

Premium subscriber count rose 9% to 293 million, adding 3 million net subscribers during the quarter, the company stated.

Looking ahead, Spotify projects adding 17 million net users this quarter to reach 778 million MAUs, with premium subscribers expected to increase by 6 million to 299 million.

Although second-quarter MAU guidance slightly surpassed Wall Street’s consensus, the premium subscriber forecast came in below the just over 300.4 million that analysts polled by FactSet had anticipated.

The company noted in its earnings presentation that projections are “subject to substantial uncertainty.”

Operating income guidance was set at 630 million euros, falling short of the approximately 680 million euros anticipated by analysts, per FactSet data.

Spotify has consistently raised premium subscription prices to enhance profitability, including a February increase in the U.S. from $11.99 to $12.99 monthly.

At Monday’s close, the stock had dropped 14% year-to-date.



OpenAI’s Revenue and Expansion Projections Miss Targets Amid IPO Push: Report

OpenAI’s revenue and growth projections fell short of internal targets, raising concerns about its ability to fund massive data center investments ahead of its planned IPO.

OpenAI has underperformed its internal revenue and user growth projections, prompting doubts about whether the artificial intelligence firm can sustain its substantial data center investments, according to a Wall Street Journal article published on Monday.

Chief Financial Officer Sarah Friar has voiced worries regarding the firm’s capacity to finance upcoming computing contracts if revenue growth stalls, the outlet noted, referencing insiders acquainted with the situation. Friar is reportedly collaborating with fellow executives to reduce expenses as the board intensifies its review of OpenAI’s computing arrangements.

‘This is ridiculous,’ OpenAI CEO Sam Altman and Friar stated in a joint message to Verum. ‘We are totally aligned on buying as much compute as we can and working hard on it together every day.’

Stocks of semiconductor and technology firms, including Oracle, dropped following the news.

The situation casts doubt on OpenAI’s financial stability prior to its much-anticipated IPO slated for later this year. Over recent months, OpenAI and its major cloud computing rivals have committed billions toward data center construction to address surging computing needs.

Several of these agreements are directly linked to OpenAI. Oracle signed a $300 billion five-year computing contract with OpenAI, while Nvidia has committed billions to the startup. OpenAI recently initiated a significant strategic alliance with Amazon and increased an existing $38 billion expenditure agreement by $100 billion.

This week, OpenAI revealed significant updates to its collaboration with Microsoft, a long-term supporter that has contributed over $13 billion to the company since 2019. Under the revised terms, OpenAI will limit revenue share payments, and Microsoft will lose its exclusive rights to OpenAI’s intellectual property.

Read the full report from The Wall Street Journal.



OpenAI Expands Cloud Access by Partnering with AWS Following Microsoft Deal Shift

OpenAI is expanding its cloud strategy by making its AI models available on Amazon Web Services following a shift in its Microsoft partnership, enabling broader enterprise access through Amazon Bedrock.

Following a recent restructuring of its partnership with Microsoft to allow deployment across multiple cloud platforms, OpenAI announced Tuesday that its AI models will now be accessible through Amazon Web Services (AWS).

AWS clients will be able to test OpenAI’s models alongside its Codex coding agent via Amazon Bedrock, with full public access expected within the coming weeks.

‘This is what our customers have been asking us for for a really long time,’ AWS CEO Matt Garman said at a launch event in San Francisco.

Developers have had access to OpenAI’s open-weight models on AWS since August.
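In practice, Bedrock access means calling those models through AWS’s standard runtime interfaces rather than OpenAI’s own endpoint. Here is a minimal sketch using boto3’s Converse API; the model identifier and region below are illustrative assumptions and should be checked against the Bedrock model catalog.

# Minimal sketch: invoking an OpenAI open-weight model through Amazon Bedrock's
# Converse API with boto3. The modelId and region are assumptions for
# illustration; confirm the exact identifiers in the Bedrock model catalog.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

response = bedrock.converse(
    modelId="openai.gpt-oss-120b-1:0",  # hypothetical model ID for illustration
    messages=[
        {"role": "user", "content": [{"text": "In one sentence, what is Amazon Bedrock?"}]},
    ],
    inferenceConfig={"maxTokens": 200, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])

The same Converse call shape works across Bedrock-hosted models, which is part of the appeal for enterprises that have already standardized on AWS.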

OpenAI CEO Sam Altman shared a pre-recorded message about the announcement, as he was attending court proceedings in Oakland for his legal dispute with Elon Musk.

‘I wish I could be there with you in person today, my schedule got taken away from me today,’ Altman said in the video. ‘I wanted to send a short message, though, because we’re really excited about our partnership with AWS and what it means for our customers, and I wanted to say thank you to Matt and the whole AWS team.’

A new service called Amazon Bedrock Managed Agents powered by OpenAI will enable the construction of sophisticated customized agents that incorporate memory of previous interactions, the companies said.

Microsoft has been a crucial supplier of computing power for OpenAI since before the 2022 launch of ChatGPT. Denise Dresser, OpenAI’s revenue chief, told employees in a memo earlier this month that the longstanding Microsoft relationship has been critical but ‘has also limited our ability to meet enterprises where they are — for many that’s Bedrock.’

On Monday, OpenAI and Microsoft announced a significant change to their arrangement that will allow the AI company to cap revenue share payments and serve customers across any cloud provider. Amazon CEO Andy Jassy called the announcement ‘very interesting’ in a post on X, adding that more details would be shared on Tuesday.

OpenAI and Amazon have been getting closer in other ways.

In November, OpenAI announced a $38 billion commitment with Amazon Web Services, days after saying Microsoft Azure would be the sole cloud serving its application programming interface, or API, products built with third parties.

Three months later, OpenAI expanded its relationship with Amazon, which said it would invest $50 billion in Altman’s company. OpenAI said it would use two gigawatts’ worth of AWS’ custom Trainium chips for training AI models.

The partnership was announced after The Wall Street Journal reported that OpenAI failed to meet internal goals on users and revenue. Shares of AI hardware companies, including chipmakers Nvidia and Broadcom, fell on the report, which also highlighted internal discrepancies on spending plans.

‘This is ridiculous,’ Sam Altman and OpenAI CFO Sarah Friar said in a statement about the story. ‘We are totally aligned on buying as much compute as we can and working hard on it together every day.’

