
Technologies

Elon Musk’s Grok Faces Backlash Over Nonconsensual AI-Altered Images

The AI chatbot has been creating sexualized images of women and children upon request. How can this be stopped?

Grok, the AI chatbot developed by Elon Musk’s artificial intelligence company, xAI, welcomed the new year with a disturbing post.

“Dear Community,” began the Dec. 31 post from the Grok AI account on Musk’s X social media platform. “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok.”

The two young girls weren’t an isolated case. Kate Middleton, the Princess of Wales, was the target of similar AI image-editing requests, as was an underage actress in the final season of Stranger Things. The “undressing” edits have swept across an unsettling number of photos of women and children.

Despite the company’s promise of intervention, the problem hasn’t gone away. Just the opposite: Two weeks on from that post, the number of images sexualized without consent has surged, as have calls for Musk’s companies to rein in the behavior — and for governments to take action.




According to data from independent researcher Genevieve Oh cited by Bloomberg this week, during one 24-hour period in early January, the @Grok account generated about 6,700 sexually suggestive or “nudifying” images every hour. That compares with an average of only 79 such images per hour for the top five deepfake websites combined.

Edits now limited to subscribers

Late Thursday, a post from the Grok AI account noted a change in access to the image generation and editing feature. Instead of being open to all, free of charge, it would be limited to paying subscribers.

Critics say that’s not a credible response.

“I don’t see this as a victory, because what we really needed was X to take the responsible steps of putting in place the guardrails to ensure that the AI tool couldn’t be used to generate abusive images,” Clare McGlynn, a law professor at the UK’s University of Durham, told the Washington Post.

What’s stirring the outrage isn’t just the volume of these images and the ease of generating them — the edits are also being done without the consent of the people in the images. 

These altered images are the latest twist in one of the most disturbing aspects of generative AI: realistic but fake videos and photos. Software programs such as OpenAI’s Sora, Google’s Nano Banana and xAI’s Grok have put powerful creative tools within easy reach of everyone, and all that’s needed to produce explicit, nonconsensual images is a simple text prompt.

Grok users can upload a photo, which doesn’t have to be original to them, and ask Grok to alter it. Many of the altered images involved users asking Grok to put a person in a bikini, sometimes revising the request to be even more explicit, such as asking for the bikini to become smaller or more transparent.

Governments and advocacy groups have been speaking out about Grok’s image edits. Ofcom, the UK’s internet regulator, said this week that it had “made urgent contact” with xAI, and the European Commission said it was looking into the matter, as did authorities in France, Malaysia and India.

“We cannot and will not allow the proliferation of these degrading images,” UK Technology Secretary Liz Kendall said earlier this week.

In the US, the Take It Down Act, signed into law last year, seeks to hold online platforms accountable for manipulated sexual imagery, but it gives those platforms until May of this year to set up the process for removing such images. 

“Although these images are fake, the harm is incredibly real,” says Natalie Grace Brigham, a Ph.D. student at the University of Washington who studies sociotechnical harms. She notes that those whose images are altered in sexual ways can face “psychological, somatic and social harm, often with little legal recourse.”

How Grok lets users get risqué images

Grok debuted in 2023 as Musk’s more freewheeling alternative to ChatGPT, Gemini and other chatbots. That freewheeling approach has produced disturbing results before — in July, for instance, the chatbot praised Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.

In December, xAI introduced an image-editing feature that enables users to request specific edits to a photo. That’s what kicked off the recent spate of sexualized images, of both adults and minors. In one request that CNET has seen, a user responding to a photo of a young woman asked Grok to “change her to a dental floss bikini.”

Grok also has a video generator with a “spicy mode” that adults 18 and older can opt into to see not-safe-for-work content. Users must include the phrase “generate a spicy video of [description]” to activate the mode.

A central concern about the Grok tools is whether they enable the creation of child sexual abuse material, or CSAM. On Dec. 31, a post from the Grok X account said that images depicting minors in minimal clothing were “isolated cases” and that “improvements are ongoing to block such requests entirely.”

In response to a post by Woow Social suggesting that Grok simply “stop allowing user-uploaded images to be altered,” the Grok account replied that xAI was “evaluating features like image alteration to curb nonconsensual harm,” but did not say that the change would be made.

According to NBC News, some sexualized images created since December have been removed, and some of the accounts that requested them have been suspended.

Conservative influencer and author Ashley St. Clair, mother to one of Musk’s 14 children, told NBC News this week that Grok has created numerous sexualized images of her, including some using images from when she was a minor. St. Clair told NBC News that Grok agreed to stop doing so when she asked, but that it did not.

“xAI is purposefully and recklessly endangering people on their platform and hoping to avoid accountability just because it’s ‘AI,’” Ben Winters, director of AI and data privacy for nonprofit Consumer Federation of America, said in a statement this week. “AI is no different than any other product — the company has chosen to break the law and must be held accountable.”

xAI did not respond to requests for comment.

What the experts say

The source materials for these explicit, nonconsensual edits — photos that people post of themselves or their children — are all too easy for bad actors to access. But protecting yourself from such edits is not as simple as never posting photographs, says Brigham, the researcher who studies sociotechnical harms.

“The unfortunate reality is that even if you don’t post images online, other public images of you could theoretically be used in abuse,” she says.

And while not posting photos online is one preventive step that people can take, doing so “risks reinforcing a culture of victim-blaming,” Brigham says. “Instead, we should focus on protecting people from abuse by building better platforms and holding X accountable.”

Sourojit Ghosh, a sixth-year Ph.D. candidate at the University of Washington, researches how generative AI tools can cause harm and mentors future AI professionals in designing and advocating for safer AI solutions. 

Ghosh says it’s possible to build safeguards into artificial intelligence. In 2023, he was one of the researchers studying how AI image tools could be used to sexualize people. He notes that the AI image generation tool Stable Diffusion had a built-in not-safe-for-work filter: a prompt that violated the rules would trigger a black box over the questionable part of the image, although the filter didn’t always work perfectly.

“The point I’m trying to make is that there are safeguards that are in place in other models,” Ghosh says.

He also notes that if users of ChatGPT or Gemini include certain banned words in a prompt, the chatbots will tell them they can’t respond.

“All this is to say, there is a way to very quickly shut this down,” Ghosh says.


Today’s NYT Mini Crossword Answers for Saturday, Feb. 28

Here are the answers for The New York Times Mini Crossword for Feb. 28.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Need some help with today’s Mini Crossword? As is usual for Saturday, it’s pretty long and should take you more time than the normal Mini. This one uses a bunch of three-letter initialisms. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Rock’s ___ Leppard
Answer: DEF

4A clue: Cry a river
Answer: SOB

7A clue: Clean Air Act org.
Answer: EPA

8A clue: Org. that pays the Bills?
Answer: NFL

9A clue: Nintendo console with motion sensors
Answer: WII

10A clue: ___-quoted (frequently said)
Answer: OFT

11A clue: With 13-Across, narrow gap between the underside of a house and the ground
Answer: CRAWL

13A clue: See 11-Across
Answer: SPACE

14A clue: Young lady
Answer: GAL

15A clue: Ooh and ___
Answer: AAH

17A clue: Sports org. for Scottie Scheffler
Answer: PGA

18A clue: «Hey, just an F.Y.I. …,» informally
Answer: PSA

19A clue: When doubled, nickname for singer Swift
Answer: TAY

20A clue: Socially timid
Answer: SHY

Mini down clues and answers

1D clue: Morning moisture
Answer: DEW

2D clue: «Game of Thrones» or Homer’s «Odyssey»
Answer: EPICSAGA

3D clue: Good sportsmanship
Answer: FAIRPLAY

4D clue: White mountain toppers
Answer: SNOWCAPS

5D clue: Unrestrained, as a dog at a park
Answer: OFFLEASH

6D clue: Sandwich that might be served «triple-decker»
Answer: BLT

12D clue: Common battery type
Answer: AA

14D clue: Chat___
Answer: GPT

16D clue: It’s for horses, in a classic joke punchline
Answer: HAY



Ultrahuman Ring Pro Brings Better Battery Life, More Action and Analysis

The company’s new flagship smart ring stores more data, too. But that doesn’t really help Americans.

Sick of your smart ring’s battery not holding up? Ultrahuman’s new $479 Ring Pro smart ring, unveiled on Friday, offers up to 15 days of battery life on a single charge. The Ring Pro joins the company’s $349 Ring Air and boosts health tracking thanks to that longer battery life, increased data storage, improved speed and accuracy, and a new heart-rate sensing architecture. The ring works in conjunction with the new Pro charging case.

Ultrahuman also launched its Jade AI, which can act as an agent based on analysis of current and historical health data. Jade can synthesize data from across the company’s products and is compatible with its Rings.

“With industry-leading hardware paired with Jade biointelligence AI, users can now take real-time actionable interventions towards their health than ever before,” said Mohit Kumar, CEO of Ultrahuman.

No US sales

That hardware isn’t available in the US, though, because of the ongoing ban on sales of Ultrahuman’s rings here, stemming from a patent dispute with competitor Oura. The Ring Pro is available for preorder now everywhere else and is slated to ship in March. Jade is available globally.

Ultrahuman says the Ring Pro boosts battery life to about 15 days in Chill mode — up to 12 days in Turbo — compared with a maximum of six days for the Air. The Pro case’s battery stores enough charge for another 45 days and tops off via Qi-compatible wireless charging. The case also incorporates locator technology via the app and a speaker, as well as usability features such as haptic notifications and a power LED.

The ring can also retain up to 250 days of data versus less than a week for the cheaper model. Ultrahuman redesigned the heart-rate sensor for better signal quality. An upgraded processor improves the accuracy of the local machine learning and overall speed. 

It’s offered in gold, silver, black and titanium finishes, with available sizes ranging from 5 to 14.

Jade’s Deep Research Mode is the cross-ecosystem analysis feature: It aggregates data from the Ring, Blood Vision and the company’s subscription services, Home and M1 CGM, to surface historical trends, offer current recommendations and flag potential issues, including triggering checks such as A-fib detection. Ultrahuman plans to expand its capabilities to include health-adjacent tasks, such as ordering food.

Some new apps are also available for the company’s PowerPlug add-on platform, including tools for tracking GLP-1 effects, analyzing snoring and respiration, and managing migraines.



The FCC Just Approved Charter’s $34.5B Cox Purchase. Here’s What It Means for 37M Customers



Copyright © Verum World Media