Technologies
Elon Musk’s Grok Faces Backlash Over Nonconsensual AI-Altered Images
The AI chatbot has been creating sexualized images of women and children upon request. How can this be stopped?

Grok, the AI chatbot developed by Elon Musk’s artificial intelligence company, xAI, welcomed the new year with a disturbing post.
“Dear Community,” began the Dec. 31 post from the Grok AI account on Musk’s X social media platform. “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok.”
The two young girls weren’t an isolated case. Kate Middleton, the Princess of Wales, was the target of similar AI image-editing requests, as was an underage actress in the final season of Stranger Things. The “undressing” edits have swept across an unsettling number of photos of women and children.
Despite the company’s promise of intervention, the problem hasn’t gone away. Just the opposite: Two weeks on from that post, the number of images sexualized without consent has surged, as have calls for Musk’s companies to rein in the behavior — and for governments to take action.
Don’t miss any of our unbiased tech content and lab-based reviews. Add CNET as a preferred Google source.
According to data from independent researcher Genevieve Oh cited by Bloomberg this week, during one 24-hour period in early January, the @Grok account generated about 6,700 sexually suggestive or “nudifying” images every hour. That compares with an average of only 79 such images for the top five deepfake websites combined.
Edits now limited to subscribers
Late Thursday, a post from the Grok AI account announced a change in access to the image generation and editing feature. Instead of being open to all, free of charge, it would be limited to paying subscribers.
Critics say that’s not a credible response.
“I don’t see this as a victory, because what we really needed was X to take the responsible steps of putting in place the guardrails to ensure that the AI tool couldn’t be used to generate abusive images,” Clare McGlynn, a law professor at the UK’s University of Durham, told the Washington Post.
What’s stirring the outrage isn’t just the volume of these images and the ease of generating them — the edits are also being done without the consent of the people in the images.
These altered images are the latest twist in one of the most disturbing aspects of generative AI: realistic but fake videos and photos. Software programs such as OpenAI’s Sora, Google’s Nano Banana and xAI’s Grok have put powerful creative tools within easy reach of everyone, and all that’s needed to produce explicit, nonconsensual images is a simple text prompt.
Grok users can upload a photo, which doesn’t have to be original to them, and ask Grok to alter it. Many of the altered images involved users asking Grok to put a person in a bikini, sometimes revising the request to be even more explicit, such as asking for the bikini to become smaller or more transparent.
Governments and advocacy groups have been speaking out about Grok’s image edits. Ofcom, the UK’s internet regulator, said this week that it had «made urgent contact» with xAI, and the European Commission said it was looking into the matter, as did authorities in France, Malaysia and India.
“We cannot and will not allow the proliferation of these degrading images,” UK Technology Secretary Liz Kendall said earlier this week.
In the US, the Take It Down Act, signed into law last year, seeks to hold online platforms accountable for manipulated sexual imagery, but it gives those platforms until May of this year to set up the process for removing such images.
“Although these images are fake, the harm is incredibly real,” says Natalie Grace Brigham, a Ph.D. student at the University of Washington who studies sociotechnical harms. She notes that those whose images are altered in sexual ways can face “psychological, somatic and social harm, often with little legal recourse.”
How Grok lets users get risqué images
Grok debuted in 2023 as Musk’s more freewheeling alternative to ChatGPT, Gemini and other chatbots. That approach has produced disturbing results — for instance, in July, when the chatbot praised Adolf Hitler and suggested that people with Jewish surnames were more likely to spread online hate.
In December, xAI introduced an image-editing feature that lets users request specific edits to a photo. That’s what kicked off the recent spate of sexualized images, of both adults and minors. In one request that CNET has seen, a user responding to a photo of a young woman asked Grok to “change her to a dental floss bikini.”
Grok also has a video generator with an opt-in “spicy mode” for adults 18 and older, which will show users not-safe-for-work content. Users must include the phrase “generate a spicy video of [description]” to activate the mode.
A central concern about the Grok tools is whether they enable the creation of child sexual abuse material, or CSAM. On Dec. 31, a post from the Grok X account said that images depicting minors in minimal clothing were “isolated cases” and that “improvements are ongoing to block such requests entirely.”
In response to a post by Woow Social suggesting that Grok simply “stop allowing user-uploaded images to be altered,” the Grok account replied that xAI was “evaluating features like image alteration to curb nonconsensual harm,” but did not say that the change would be made.
According to NBC News, some sexualized images created since December have been removed, and some of the accounts that requested them have been suspended.
Conservative influencer and author Ashley St. Clair, mother to one of Musk’s 14 children, told NBC News this week that Grok has created numerous sexualized images of her, including some using photos from when she was a minor. She said Grok agreed to stop doing so when she asked, but that it did not.
“xAI is purposefully and recklessly endangering people on their platform and hoping to avoid accountability just because it’s ‘AI,’” Ben Winters, director of AI and data privacy for nonprofit Consumer Federation of America, said in a statement this week. “AI is no different than any other product — the company has chosen to break the law and must be held accountable.”
xAI did not respond to requests for comment.
What the experts say
The source materials for these explicit, nonconsensual edits, the photos people post of themselves or their children, are all too easy for bad actors to access. But protecting yourself from such edits isn’t as simple as never posting photographs, says Brigham, the researcher who studies sociotechnical harms.
“The unfortunate reality is that even if you don’t post images online, other public images of you could theoretically be used in abuse,” she says.
And while not posting photos online is one preventive step that people can take, doing so “risks reinforcing a culture of victim-blaming,” Brigham says. “Instead, we should focus on protecting people from abuse by building better platforms and holding X accountable.”
Sourojit Ghosh, a sixth-year Ph.D. candidate at the University of Washington, researches how generative AI tools can cause harm and mentors future AI professionals in designing and advocating for safer AI solutions.
Ghosh says it’s possible to build safeguards into artificial intelligence. In 2023, he was one of the researchers looking into the sexualization capabilities of AI. He notes that the AI image generation tool Stable Diffusion had a built-in not-safe-for-work threshold. A prompt that violated the rules would trigger a black box to appear over a questionable part of the image, although it didn’t always work perfectly.
“The point I’m trying to make is that there are safeguards in place in other models,” Ghosh says.
He also notes that if users of ChatGPT or Gemini enter certain prohibited words, the chatbots will tell them that they can’t respond to those requests.
“All this is to say, there is a way to very quickly shut this down,” Ghosh says.
Today’s NYT Strands Hints, Answers and Help for Jan. 10 #678
Here are hints and answers for the NYT Strands puzzle for Jan. 10, No. 678.
Looking for the most recent Strands answer? Click here for our daily Strands hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections and Connections: Sports Edition puzzles.
Today’s NYT Strands puzzle could be tough. The topic is a bit unusual, and some of the answers are difficult to unscramble, so if you need hints and answers, read on.
I go into depth about the rules for Strands in this story.
If you’re looking for today’s Wordle, Connections and Mini Crossword answers, you can visit CNET’s NYT puzzle hints page.
Read more: NYT Connections Turns 1: These Are the 5 Toughest Puzzles So Far
Hint for today’s Strands puzzle
Today’s Strands theme is: If not now, when?
If that doesn’t help you, here’s a clue: When I get around to it!
Clue words to unlock in-game hints
Your goal is to find hidden words that fit the puzzle’s theme. If you’re stuck, find any words you can. Every time you find three words of four letters or more, Strands will reveal one of the theme words. These are the words I used to get those hints, but any words of four or more letters that you find will work:
- RUNT, RURAL, TEAR, TEAL, TURN, HOLE, HOLY, HONE, PROM, PROMS, LATE, SHORT, LEAN, LANE
Answers for today’s Strands puzzle
These are the answers that tie into the theme. The goal of the puzzle is to find them all, including the spangram, a theme word that reaches from one side of the puzzle to the other. When you have all of them (I originally thought there were always eight but learned that the number can vary), every letter on the board will be used. Here are the nonspangram answers:
- SOON, LATER, SHORTLY, TOMORROW, EVENTUALLY
Today’s Strands spangram
Today’s Strands spangram is PROCRASTINATOR. To find it, start with the P at the bottom of the farthest-right vertical column, and wind backward and then up.
This Extremely Rare GameCube Game Is Now on Nintendo Switch 2
Fire Emblem: Path of Radiance is considered to be one of the best games in the series.
Nintendo Switch Online lets you play online multiplayer with a Switch or Switch 2, and it also offers a slew of free games, including one of the most valuable games for the Nintendo GameCube.
That is, if you have the right membership.
Fire Emblem: Path of Radiance is the newest game added to the Nintendo Switch Online + Expansion Pack tier, according to a post from Nintendo on Thursday. Originally released in 2005, Path of Radiance was the only Fire Emblem game to be released on the GameCube. Fans of the series consider it to be one of the best games in the franchise, and it’s also one of the hardest to find since it was released late in the GameCube’s run.
Copies of Path of Radiance fetch a sizable amount on eBay, with opened games going for $150 and sealed copies going for more than $400. It’s one of the few GameCube games published by Nintendo that sells for a high price in any condition, according to PriceCharting.com.
How much does Nintendo Switch Online + Expansion Pack subscription cost?
The annual subscription price for Nintendo Switch Online + Expansion Pack is $50. The family membership, which can be used for up to eight Nintendo accounts, costs $80 a year.
What’s the difference between the Nintendo Switch Online tiers?
The standard Nintendo Switch Online subscription costs $20 a year, and along with online multiplayer, it also allows access to a library of NES, SNES and Game Boy titles. For the higher price of the Nintendo Switch Online + Expansion Pack subscription, members get access to Nintendo GameCube games, Nintendo 64 games, Game Boy Advance games and Sega Genesis games.
Subscribing to either tier grants online multiplayer on both the Switch and Switch 2 consoles, in addition to access to GameChat, Nintendo Music, cloud saves and special offers in the Nintendo eShop. Nintendo Switch Online + Expansion Pack has the additional features of free select DLC and access to upgrade packs for certain Switch games, such as The Legend of Zelda: Breath of the Wild and The Legend of Zelda: Tears of the Kingdom, which will upgrade the Switch version of the games to the Switch 2 versions.
Aivela Takes a Different Spin on the Health-Tracking Smart Ring
The Aivela Ring Pro promises phone-controlling superpowers and a built-in health guru.
Smart rings are no longer novel. A few hidden superpowers, however, might make them interesting again.
Most smart rings focus increasingly on biometric tracking. The Aivela Ring Pro aims to stand out with stealth gesture and touch controls: with a flick, swipe or slide of the finger, you can control music playback, adjust volume, trigger the camera, advance slide decks, scroll and more on your phone.
Launched at CES 2026, the Ring Pro resembles many of its competitors, including the Oura Ring and Samsung’s Galaxy Ring. There’s only so much you can do with ring design, after all. It has the familiar metallic (scratch-resistant) finish, a slightly thicker top profile and sensors lining the interior. The primary visual cue that something is different is a small diamond-shaped engraving at the center, which signals the location of the touchpad.
According to the company, the Ring Pro supports eight touch commands and six gesture controls, designed to reduce how often users need to reach for their phone or smartwatch.
Health tracking, however, is still a core part of the experience. The Ring Pro focuses on long-term trend tracking, including sleep analysis, workout insights, menstrual cycle tracking and more than 13 core health metrics. The app also has a built-in AI advisor that allows you to discuss trends and metrics with a live AI expert. The ring is waterproof up to 100 meters with an IP68 rating, and rated for up to seven days of battery life.
The Ring Pro is launching on Kickstarter for $299 but is currently on sale for $179 for late-pledge backers, and it already had more than 5,000 backers as of publication. The company says there are currently no additional monthly costs tied to the app services, another advantage over competitors.
While we saw the ring on display on the CES show floor, we have yet to test its features, and it remains to be seen whether its gesture controls prove useful in everyday use.
On paper, at least, Aivela is giving the smart ring category a different spin, shifting it from passive health tracking toward more active control.