Technologies
OpenAI and Google Take Steps to Avoid Abusive AI Imagery After Grok Scandal
AI safety, especially around images and videos, continues to be an evolving challenge.
2026 started with a horrifying example of generative AI’s potential for abuse. Grok, the AI tool from Elon Musk’s xAI, was used to undress or nudify pictures of people shared on X (formerly Twitter) at an alarming rate. Grok made 3 million sexualized images over a span of 11 days in January, with approximately 23,000 of those containing images of children, according to a study from the Center for Countering Digital Hate.
Now, competitors like OpenAI and Google are stepping up their security to avoid being the next Grok.
Advocates and safety researchers have long been concerned about AI’s ability to create abusive and illegal content. The creation and sharing of nonconsensual intimate imagery, sometimes referred to as revenge porn, was a big problem before AI. Generative AI only makes it quicker, easier and cheaper for anyone to target and victimize people.
On Jan. 14, two weeks into the scandal, X’s Safety account confirmed in a post that it would pause Grok’s ability to edit images on the social media app. Grok’s image-generation abilities are still available to paying subscribers in its standalone app and website. X did not respond to multiple requests for comment.
Most major companies have safeguards in place to prevent the kind of wide-scale abuse we saw was possible with Grok. But cybersecurity is never a solid metal wall of protection; it’s a brick wall that’s constantly undergoing repairs. Here’s how OpenAI and Google have tried to beef up their safety protections to avoid Grok-like failures.
Read More: AI Slop Is Destroying the Internet. These Are the People Fighting to Save It
OpenAI fixes image generation vulnerabilities
At a base level, all AI companies have policies prohibiting the creation of illegal imagery, like child sexual abuse material, also known as CSAM. Many tech companies have guardrails to prevent the creation of intimate imagery altogether. Grok is the exception, with "spicy" modes for image and video.
Still, anyone intent on creating nonconsensual intimate imagery can try to trick AI models into doing so.
Researchers from Mindgard, a cybersecurity company focused on AI, found a vulnerability in ChatGPT that allowed people to circumvent its guardrails and make intimate images. They used a tactic called "adversarial prompting," in which testers try to poke holes in an AI with specifically crafted instructions. In this case, that meant tricking the chatbot’s memory with custom prompts, then copying the nudified style onto images of well-known people.
Mindgard alerted OpenAI of its findings in early February, and the ChatGPT developer confirmed on Feb. 10 — before Mindgard went public with its report — that it had fixed the problem.
"We’re grateful to the researchers who shared their findings," an OpenAI spokesperson said to CNET and Mindgard. "We moved quickly to fix a bug that allowed the model to generate these images. We value this kind of collaboration and remain focused on strengthening safeguards to keep users safe."
This process is how cybersecurity often works. Outside red-team researchers like Mindgard test software for weaknesses or workarounds, mimicking strategies that bad actors might use. When they identify security gaps, they alert the software provider so fixes can be deployed.
"Assuming motivated users will not attempt to bypass safeguards is a strategic miscalculation. Attackers iterate. Guardrails must assume persistence," Mindgard wrote in a blog post.
While tech companies boast about how you can use their AI for almost any purpose, they also need to show that they can prevent their AI from being used to enact abuse. For AI image generation, that means maintaining a robust set of prompt categories that will be refused and kicked back to users.
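To illustrate the idea of refusing prompts before they ever reach a model, here is a minimal, hypothetical sketch of a refusal-list filter. Real moderation systems use trained classifiers and layered checks rather than keyword lists; the names `BLOCKED_TERMS`, `check_prompt` and `generate_image` are invented for this example.

```python
# Hypothetical sketch: refuse prompts matching a denylist before
# they reach an image model. Production systems use ML classifiers;
# this only illustrates the "refuse and kick back" concept.
BLOCKED_TERMS = {"undress", "nudify", "remove clothing"}

def check_prompt(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def generate_image(prompt: str) -> str:
    """Stand-in for an image pipeline: refuse or pass through."""
    if check_prompt(prompt):
        return "REFUSED: prompt violates content policy"
    return f"generated image for: {prompt}"
```

A simple keyword filter like this is exactly what adversarial prompting defeats (rephrasings, indirect instructions, memory tricks), which is why guardrails have to be tested and patched continuously rather than set once at launch.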
When OpenAI launched its Sora 2 video model, it promised to be more conservative with its content moderation for this very reason. But it’s important to ensure its moderation practices are consistently effective, not just at a product’s launch. That makes AI safety testing an ongoing process for cybersecurity researchers and AI developers alike.
Google upgrades Search reporting
For its part, Google is taking steps to ensure abusive images aren’t spread as easily. The tech giant simplified its process for requesting the removal of explicit images from Google Search. You can click the three dots in the upper right corner of an image, click report and then tell Google you want the photo removed because it "shows a sexual image of me." The new changes also let you select multiple images at once and track your reports more easily.
"We hope that this new removal process reduces the burden that victims of nonconsensual explicit imagery face," the company said in a blog post.
When asked about any further steps the company is taking to prevent AI-enabled abuse, Google pointed CNET to its generative AI prohibited use policy. Google’s policy, like many other tech companies’ fine print, outlaws using AI for illegal or potentially abusive activities, such as creating intimate imagery.
There are laws that aim to help victims when these images are shared online, such as the 2025 Take It Down Act. But that law’s scope is limited, which is why many advocacy groups, like the National Center on Sexual Exploitation, are pushing for better rules.
There’s no guarantee that these changes will prevent anyone from ever using AI for harassment and abuse. That’s why it’s so important that developers stay vigilant to ensure we are all protected — and act quickly when reports and problems pop up.
(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Today’s NYT Mini Crossword Answers for Wednesday, April 8
Here are the answers for The New York Times Mini Crossword for April 8.
Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.
Need some help with today’s Mini Crossword? Hint: It uses a lot of the letter Z for some reason. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.
If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.
Read more: Tips and Tricks for Solving The New York Times Mini Crossword
Let’s get to those Mini Crossword clues and answers.
Mini across clues and answers
1A clue: ___-Carlton (hotel chain)
Answer: RITZ
5A clue: Span of the alphabet
Answer: ATOZ
6A clue: Cable channel with an out-of-this-world name
Answer: STARZ
7A clue: Takes care of, as a squeaky wheel
Answer: OILS
8A clue: Toy on a string
Answer: YOYO
Mini down clues and answers
1D clue: When a post receives far more negative comments than likes, in social media slang
Answer: RATIO
2D clue: World’s leading wine producer
Answer: ITALY
3D clue: Middle of the human body
Answer: TORSO
4D clue: Sleeping sound
Answer: ZZZ
6D clue: Tofu base
Answer: SOY
Today’s NYT Connections: Sports Edition Hints and Answers for April 8, #562
Here are hints and the answers for the NYT Connections: Sports Edition puzzle for April 8, No. 562.
Looking for the most recent regular Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle and Strands puzzles.
Today’s Connections: Sports Edition is a tough one. If you’re struggling with today’s puzzle but still want to solve it, read on for hints and the answers.
Connections: Sports Edition is published by The Athletic, the subscription-based sports journalism site owned by The Times. It doesn’t appear in the NYT Games app, but it does in The Athletic’s own app. Or you can play it for free online.
Read more: NYT Connections: Sports Edition Puzzle Comes Out of Beta
Hints for today’s Connections: Sports Edition groups
Here are four hints for the groupings in today’s Connections: Sports Edition puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.
Yellow group hint: Working out.
Green group hint: Cover your face.
Blue group hint: NFL players.
Purple group hint: Leap.
Answers for today’s Connections: Sports Edition groups
Yellow group: Exercises in singular form.
Green group: Sporting jobs that require masks.
Blue group: Hall of Fame defensive ends.
Purple group: ____ jump.
Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words
What are today’s Connections: Sports Edition answers?
The yellow words in today’s Connections
The theme is exercises in singular form. The four answers are crunch, plank, situp and squat.
The green words in today’s Connections
The theme is sporting jobs that require masks. The four answers are catcher, fencer, football player and goaltender.
The blue words in today’s Connections
The theme is Hall of Fame defensive ends. The four answers are Dent, Peppers, Strahan and Youngblood.
The purple words in today’s Connections
The theme is ____ jump. The four answers are broad, high, long and triple.
The $135M Google Data Settlement Site Is Live — See If You’re Eligible
Use the settlement website to select your preferred payment method, and you may end up $100 richer.
You can now file a claim in the $135 million Google data settlement. The case centers on claims that Android devices transmitted user data without consent. Specifically, the class action lawsuit Taylor v. Google LLC contends that Google’s Android devices passively transferred cellular data to Google without user permission, even when the devices were idle. While not admitting fault, Google reached a preliminary settlement in January, agreeing to pay $135 million to about 100 million US Android phone users.
The official settlement website for the lawsuit is now live. The final approval hearing won’t occur until June 23, when the court will consider whether Google’s settlement is fair and listen to objections. After that, the court will decide whether to approve the $135 million settlement.
In the meantime, if you qualify and want to be paid as part of the settlement, you can select your preferred payment method on the official website. There, you can find information on speaking at the June 23 court hearing and on how to exclude yourself or write to the court to object by May 29.
As part of the settlement, Google will update its Google Play terms of service to clarify that certain data transfers do occur passively even when you’re not using your Android device, and that cellular data may be relied upon when not connected to Wi-Fi. This can’t always be disabled, but users will be asked to consent to it when setting up their device.
Google will also fully stop collecting data when its "allow background data usage" option is toggled off.
Who can be part of the settlement?
In order to join the Taylor v. Google LLC settlement, you must meet four qualifications:
- Be a living, individual human being in the US.
- Have used an Android mobile device with a cellular data plan.
- Have used the aforementioned device at any time from Nov. 12, 2017, to the date when the settlement receives final approval.
- Not be a class member in the Csupo v. Google LLC lawsuit, which is similar but specifically for California residents.
The final approval hearing is on June 23, so you can add your payment method until then. The hearing’s date and time may change, and any updates will be posted on the settlement website.
If you choose to do nothing, you will still be issued a settlement payment, but you may not receive it if you don’t select a payment method.
How much will I get paid?
It’s not currently known exactly how much each settlement class member will receive, but the cap is $100. Payments will be distributed after final court approval and after any appeals are resolved.
After all administrative, tax and attorney costs are paid, the settlement administrator will attempt to pay each member an equal amount. If any funds remain after payments are sent, and it’s economically feasible, they will be redistributed to members who were previously and successfully paid. If it’s not economically feasible, the funds will go to an organization approved by the court.
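The payout described above, an equal split of the net fund capped at $100 per member, can be sketched with rough arithmetic. The cost fraction below is an assumption for illustration only; the actual deductions for administrative, tax and attorney costs have not been announced.

```python
# Illustrative payout arithmetic for the Taylor v. Google settlement.
# The 30% cost deduction used in the example call is an assumption.
GROSS_FUND = 135_000_000   # settlement amount, USD
CLASS_SIZE = 100_000_000   # approximate number of eligible US Android users
PER_MEMBER_CAP = 100       # maximum individual payout, USD

def estimated_payout(cost_fraction: float) -> float:
    """Equal split of the net fund across the class, capped at $100."""
    net_fund = GROSS_FUND * (1 - cost_fraction)
    return min(PER_MEMBER_CAP, net_fund / CLASS_SIZE)
```

Even before any costs are deducted, $135 million split among roughly 100 million people is only about $1.35 per person, so the $100 cap would matter only if far fewer people file claims than are eligible.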
