Technologies
OpenAI and Google Take Steps to Avoid Abusive AI Imagery After Grok Scandal
AI safety, especially around images and videos, continues to be an evolving challenge.
2026 started with a horrifying example of generative AI’s potential for abuse. Grok, the AI tool from Elon Musk’s xAI, was used to undress or nudify pictures of people shared on X (formerly Twitter) at an alarming rate. Grok made 3 million sexualized images over a span of 11 days in January, with approximately 23,000 of those containing images of children, according to a study from the Center for Countering Digital Hate.
Now, competitors like OpenAI and Google are stepping up their security to avoid being the next Grok.
Advocates and safety researchers have long been concerned about AI’s ability to create abusive and illegal content. The creation and sharing of nonconsensual intimate imagery, sometimes referred to as revenge porn, was a big problem before AI. Generative AI only makes it quicker, easier and cheaper for anyone to target and victimize people.
On Jan. 14, two weeks into the scandal, X’s Safety account confirmed in a post that it would pause Grok’s ability to edit images on the social media app. Grok’s image-generation abilities are still available to paying subscribers in its standalone app and website. X did not respond to multiple requests for comment.
Most major companies have safeguards in place to prevent the kind of wide-scale abuse we saw with Grok. But cybersecurity is never a solid metal wall of protection; it’s a brick wall that’s constantly undergoing repairs. Here’s how OpenAI and Google have tried to beef up their safety protections to prevent Grok-like failures.
OpenAI fixes image generation vulnerabilities
At a base level, all AI companies have policies prohibiting the creation of illegal imagery, like child sexual abuse material, also known as CSAM. Many tech companies have guardrails to prevent the creation of intimate imagery altogether. Grok is the exception, with “spicy” modes for image and video.
Still, anyone intent on creating nonconsensual intimate imagery can try to trick AI models into doing so.
Researchers from Mindgard, a cybersecurity company focused on AI, found a vulnerability in ChatGPT that allowed people to circumvent its guardrails and make intimate images. They used a tactic called “adversarial prompting,” where testers try to poke holes in an AI with specifically crafted instructions. In this case, it was tricking the chatbot’s memory with custom prompts, then copying the nudified style onto images of well-known people.
Mindgard alerted OpenAI of its findings in early February, and the ChatGPT developer confirmed on Feb. 10 — before Mindgard went public with its report — that it had fixed the problem.
“We’re grateful to the researchers who shared their findings,” an OpenAI spokesperson said to CNET and Mindgard. “We moved quickly to fix a bug that allowed the model to generate these images. We value this kind of collaboration and remain focused on strengthening safeguards to keep users safe.”
This process is how cybersecurity often works. Outside red-team researchers like Mindgard test software for weaknesses or workarounds, mimicking strategies that bad actors might use. When they identify security gaps, they alert the software provider so fixes can be deployed.
“Assuming motivated users will not attempt to bypass safeguards is a strategic miscalculation. Attackers iterate. Guardrails must assume persistence,” Mindgard wrote in a blog post.
While tech companies boast that you can use their AI for nearly any purpose, they also need to show they can prevent it from being used for abuse. For AI image generation, that means reliably refusing prompts that seek abusive content.
When OpenAI launched its Sora 2 video model, it promised to be more conservative with its content moderation for this very reason. But moderation needs to remain consistently effective, not just at a product’s launch. That makes AI safety testing an ongoing process for cybersecurity researchers and AI developers alike.
Google upgrades Search reporting
For its part, Google is taking steps to ensure abusive images aren’t spread as easily. The tech giant simplified its process for requesting the removal of explicit images from Google Search. You can click the three dots in the upper right corner of an image, click report and then tell Google you want the photo removed because it “shows a sexual image of me.” The new changes also let you select multiple images at once and track your reports more easily.
“We hope that this new removal process reduces the burden that victims of nonconsensual explicit imagery face,” the company said in a blog post.
When asked about any further steps the company is taking to prevent AI-enabled abuse, Google pointed CNET to its generative AI prohibited use policy. Google’s policy, like many other tech companies’ fine print, outlaws using AI for illegal or potentially abusive activities, such as creating intimate imagery.
There are laws that aim to help victims when these images are shared online, such as the 2025 Take It Down Act. But that law’s scope is limited, which is why many advocacy groups, like the National Center on Sexual Exploitation, are pushing for better rules.
There’s no guarantee that these changes will prevent anyone from ever using AI for harassment and abuse. That’s why it’s so important that developers stay vigilant to ensure we are all protected — and act quickly when reports and problems pop up.
(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Amazon Speeds Up Delivery Even More With 1- and 3-Hour Options
The retailer says the one-hour option is available in hundreds of cities, with discounted shipping for Prime members.
Same-day delivery apparently isn’t fast enough for some Amazon shoppers. The retail giant said on Tuesday it’s adding new shipping options that will get products to front doors within a one- or three-hour window.
The company said in its announcement that the one-hour option is available in hundreds of cities across the US, while the three-hour option is now live in more than 2,000 areas. Amazon’s web page at amazon.com/getitfast shows whether those options are available for a shopper’s location. More than 90,000 products will be eligible for those shipping windows, the company said.
For those who can’t get those services (including the author of this post, who lives between Austin and San Antonio in Texas), a message will display: “3-hour delivery is currently unavailable. Check back at a later time or shop products with Same-Day delivery below.”
The faster delivery options aren’t cheap: One-hour delivery costs $20 and three-hour delivery costs $15 for shoppers without an Amazon Prime account, or $10 and $5, respectively, for Prime members.
Last year, the company rolled out faster Amazon delivery options to 4,000 additional areas.
In an episode of the podcast Learn and Be Curious, hosted by Doug Herrington, Amazon’s CEO of worldwide stores, Kandace Kapps, director of the company’s same-day strategy team, spoke in more detail about the challenges of fast shipping. Kapps discussed shifts in customer buying habits over the last few years, such as more people buying household essentials like toilet paper on Amazon.
She said that Amazon can deliver so quickly by placing same-day delivery hubs close to customers in metro areas and by getting products ready to ship within 15 minutes, aided by warehouse robots.
“I think customers are going to continue to get magically surprised by how fast we can deliver to their doorstep,” Kapps said.
Herrington said fast shipping increases sales: “When we speed up the service, the probability that somebody buys a product from us goes up.”
Other retailers, including Walmart, have been adding same-day delivery options or exploring other ways to speed up shipping times to compete with Amazon.
Removing buyers’ moments of hesitation
Part of Amazon’s strategy, which has involved a massive buildout of locations, deployment of thousands of trucks, deals with other delivery services and investment in logistics software, is actually pretty simple: being there when people need last-minute items or make impulse buys.
“It’s about removing the last moment where you would’ve reconsidered the purchase,” said Stephanie Carls, retail insights expert at coupon and promotional-code website RetailMeNot, a sibling site of CNET. “It changes how you shop, not just how fast you get things.”
Carls said that Amazon’s super-fast delivery is removing the timeframe when people might change their minds about a purchase.
“There used to be a gap between deciding to buy something and actually having it. That’s when you’d price check, rethink it, or decide you didn’t need it after all,” she said. “This closes that gap.”
The retail expert said that competitors, including Walmart and Target, have been speeding up delivery times in some markets. Still, they’re not matching Amazon’s scale or product range at those speeds or levels of consistency.
“And that’s what starts to make everyone else feel slow,” Carls said. “Amazon’s advantage is how tightly connected its technology, inventory and delivery networks are, which makes this level of speed more repeatable.”
Dog Health Goes Digital With New AI Chatbot
Fi Intelligence allows you to ask questions of a specially tailored pet health chatbot, but it’s not meant to replace vet visits.
It might be time to rethink what it means to be sick as a dog. On Tuesday, Fi, a smart pet technology company, announced a new AI-powered chatbot to help owners stay on top of their dog’s health using a blend of personal information and generalized dog breed data.
The AI agent, which the company is calling Fi Intelligence, is integrated directly into the Fi app. It has access to all the information gathered about your dog across the entire suite of Fi products, including the Fi Series 3 Plus and Fi Mini dog collars, as well as information and documents uploaded by the pet owner. The service is for dogs only (not cats, rabbits or other pets).
If you already own a Fi smart collar, existing data will be incorporated into the AI agent’s dataset to help it answer your questions.
When creating Fi Intelligence, the company identified a multitude of common questions that dog owners have, including whether their animal friend is walking or sleeping enough, or scratching more than usual. The chatbot was created to help owners find answers to these questions quickly and easily, according to Fi.
Fi designed its agent to answer these questions using a mix of general information about a dog’s breed, personal information and biometric data gathered by Fi smart pet collars.
Pet owners can ask the chatbot questions in plain English and get back detailed responses. Fi Intelligence is equipped to answer general questions, contrast your dog’s current data with previous time periods and compare your dog’s data with that of other dogs of the same breed.
Fi says its chatbot is different from general-purpose AI agents because it has been trained on a proprietary dataset containing “the largest repository of real-world canine activity, sleep and behavior data in the world.”
Fi Intelligence doesn’t replace a trip to the vet — and the company stresses it’s not supposed to. Rather, the agent is supposed to grant owners “informed confidence” about their dog’s health and can help them “show up [to the vet] with specific, documented observations drawn from weeks of continuous data.”
“The strongest signal from our beta was that owners aren’t using this to replace their vet,” said Fi’s Vice President of Product Darrell Stone. “They’re using it to show up better prepared.”
According to Fi, the Fi Intelligence integration will provide the most complete dog health profile available in the app so far. Fi Intelligence is available to all Fi members immediately.
Nvidia Teases DLSS 5 and Gamers Aren’t Impressed
The new AI technology is making some big changes to video game graphics that hardly anyone seems to like.
Nvidia opened its GTC conference with a keynote by CEO Jensen Huang, revealing the company’s latest tech. Among the raft of AI developments, gamers were treated to a preview of the upcoming version of its AI-powered upscaling and optimization technology, DLSS (Deep Learning Super Sampling), touted as the “biggest breakthrough in computer graphics.”
Nvidia published a video illustrating how DLSS 5 can enhance graphics in Resident Evil Requiem, Starfield and other games, showing before-and-after takes. But gamers weren’t thrilled. In fact, the response to DLSS 5 amounted to a collective backlash, replete with memes, ridicule and outrage.
Gamers were quick to point out that DLSS 5 transformed the original graphics into something vastly different. Some called the visuals “AI slop,” likening them to “yassified” AI-generated filters.
Many worry that DLSS 5 could deviate from a creator’s specific artistic vision. Critics also fear that if this technology becomes the industry standard, video game graphics might start to look the same, losing their unique visual identity.
“Everything about this is a betrayal of these games’ artistry,” said YouTuber The Sphere Hunter in a post on X Monday. “Painting over handcrafted, intentional 3D art with shiny, wrinkly, sunken-in, porous, puckered, fraudulent, filtered nonsense is deeply disrespectful. If you want this, just watch gen-AI videos all day.”
Countless memes mocking the tech’s exaggerated features flooded the internet. Others on social media parodied the effects DLSS 5 could produce in other games.
“Ok, you convinced us, NVIDIA DLSS 5 is coming to Copycat too 😅,” the official account for the game Copycat posted on X on March 17.
In a Q&A on Tuesday, Huang addressed the backlash from gamers, calling them “completely wrong.” Huang underlined that DLSS 5 “enhances and adds generative capability, but it doesn’t change the artistic control” and that “it’s in the direct control of the game developer.”
The team at Digital Foundry, which specializes in game technology and hardware reviews, called the technology “disruptive and transformative” and was generally positive about it, though they saw some hiccups.
“[The images] looked a little bit uncanny, I would say, but definitely the overall portrayal of those characters is much more sophisticated,” said Oliver Mackenzie, video producer and writer for Digital Foundry.
Bethesda’s official X account replied to comments from members of Digital Foundry about Starfield and The Elder Scrolls IV: Oblivion Remastered, both published by Bethesda.
“This is a very early look, and our art teams will be further adjusting the lighting and final effect to look the way we think works best for each game. This will all be under our artists’ control, and totally optional for players,” the publisher said.
DLSS 5 is set to be released sometime in the fall.
What is DLSS?
Nvidia first released its DLSS tech back in 2018 with its RTX 2080 card: The RTX architecture introduced Tensor cores, which accelerate the calculations DLSS’ AI relies on. The deep learning technology was designed to upscale low-resolution images and video in real time to achieve higher frame rates.
Gamers weren’t impressed at first, but later versions of the technology performed better in games that supported it. DLSS 4, released last year and tweaked to 4.5 as of January, made significant improvements: sharper detail rendering, fewer motion artifacts, higher frame rates and more realistic lighting via path tracing (which incorporates interactions with ray-traced lighting).
What does DLSS 5 do?
DLSS 5 works a bit differently than previous versions of the technology. According to Nvidia, DLSS 5 shifts from processing simple pixels to understanding 3D elements. By deconstructing characters into specific components — such as skin, hair and clothing — the AI can render them more consistently. This results in faster performance and much more realistic details, especially for textures and lighting.
Game developers control how DLSS 5 enhances images and to what degree, ensuring it matches the game’s aesthetic. The demo video showcased some positive enhancements, but others looked like sweeping changes to the characters and the environment.
Which games will support DLSS 5 at launch?
On Monday, Nvidia released a list of games slated to support DLSS 5:
- AION 2
- Assassin’s Creed Shadows
- Black State
- Cinder City
- Delta Force
- Hogwarts Legacy
- Justice
- Naraka: Bladepoint
- NTE: Neverness to Everness
- Phantom Blade Zero
- Resident Evil Requiem
- Sea of Remnants
- Starfield
- The Elder Scrolls IV: Oblivion Remastered
- Where Winds Meet
What cards will support DLSS 5?
Nvidia has yet to provide a list of GPUs that will support the new technology. In an FAQ, the company says it will release a list of supported cards closer to its release.
