Marvel’s Wolverine Finally Has a Release Date, and It’s Soon
September can’t come quickly enough.
Marvel’s Wolverine game is coming surprisingly soon. On Tuesday, Sony revealed a September release date, marking a launch almost exactly five years after it was first announced in 2021.
The game is set to land on Sept. 15, according to a post on PlayStation’s X account. It included a link to the PSN Store page for the PS5 version, although there is only an option to wishlist the game so far.
Developed by Insomniac Games, Marvel’s Wolverine was kept under wraps for years following its announcement. We finally got the full reveal during a State of Play last September.
"Who’s ready for September 15, 2026? Wishlist Marvel’s #WolverinePS5 now: https://t.co/0yh4hZem4U" — PlayStation (@PlayStation), February 24, 2026
How much will Marvel’s Wolverine cost?
No price appears on the game’s PSN Store page, but it’s likely to retail for $70, like other recent AAA games. No special editions have been confirmed, but if any are offered, they may be priced higher thanks to bonus in-game content or other extras.
Is Marvel’s Wolverine a PS5 exclusive?
As of now, yes, it is. The game might make the jump to PC later, as other PlayStation exclusives have, such as The Last of Us, God of War and Marvel’s Spider-Man.
What is Marvel’s Wolverine about?
As revealed last September, the game will follow Logan as he seeks answers to mysteries about his past. He’ll have to journey across the globe from the Canadian wilderness to the fictional island nation of Madripoor and the streets of Tokyo. Logan will also face some of his biggest rivals, such as Omega Red, Mystique and the mutant-hunting Sentinels.
Unlike Insomniac Games’ other superhero series, Marvel’s Spider-Man, the developer made it clear that a game about Wolverine would have to include a lot of blood. In fact, the developer created its own blood rendering tech so that when Wolverine violently slashes and stabs his enemies with his claws, there’s blood everywhere.
OpenAI and Google Take Steps to Avoid Abusive AI Imagery After Grok Scandal
AI safety, especially around images and videos, continues to be an evolving challenge.
2026 started with a horrifying example of generative AI’s potential for abuse. Grok, the AI tool from Elon Musk’s xAI, was used to undress or nudify pictures of people shared on X (formerly Twitter) at an alarming rate. Grok made 3 million sexualized images over a span of 11 days in January, with approximately 23,000 of those containing images of children, according to a study from the Center for Countering Digital Hate.
Now, competitors like OpenAI and Google are stepping up their security to avoid being the next Grok.
Advocates and safety researchers have long been concerned about AI’s ability to create abusive and illegal content. The creation and sharing of nonconsensual intimate imagery, sometimes referred to as revenge porn, was a big problem before AI. Generative AI only makes it quicker, easier and cheaper for anyone to target and victimize people.
On Jan. 14, two weeks into the scandal, X’s Safety account confirmed in a post that it would pause Grok’s ability to edit images on the social media app. Grok’s image-generation abilities are still available to paying subscribers in its standalone app and website. X did not respond to multiple requests for comment.
Most major companies have safeguards in place to prevent the kind of wide-scale abuse that we saw was possible with Grok. But cybersecurity is never a solid metal wall of protection; it’s a brick wall that’s constantly undergoing repairs. Here’s how OpenAI and Google have tried to beef up their safety protections to avert Grok-like failures.
Read More: AI Slop Is Destroying the Internet. These Are the People Fighting to Save It
OpenAI fixes image generation vulnerabilities
At a base level, all AI companies have policies prohibiting the creation of illegal imagery, like child sexual abuse material, also known as CSAM. Many tech companies have guardrails to prevent the creation of intimate imagery altogether. Grok is the exception, with "spicy" modes for image and video.
Still, anyone intent on creating nonconsensual intimate imagery can try to trick AI models into doing so.
Researchers from Mindgard, a cybersecurity company focused on AI, found a vulnerability in ChatGPT that allowed people to circumvent its guardrails and make intimate images. They used a tactic called "adversarial prompting," in which testers try to poke holes in an AI with specifically crafted instructions. In this case, that meant tricking the chatbot’s memory with custom prompts, then copying the nudified style onto images of well-known people.
Mindgard alerted OpenAI of its findings in early February, and the ChatGPT developer confirmed on Feb. 10 — before Mindgard went public with its report — that it had fixed the problem.
"We’re grateful to the researchers who shared their findings," an OpenAI spokesperson said to CNET and Mindgard. "We moved quickly to fix a bug that allowed the model to generate these images. We value this kind of collaboration and remain focused on strengthening safeguards to keep users safe."
This process is how cybersecurity often works. Outside red-team researchers like Mindgard test software for weaknesses or workarounds, mimicking strategies that bad actors might use. When they identify security gaps, they alert the software provider so fixes can be deployed.
"Assuming motivated users will not attempt to bypass safeguards is a strategic miscalculation. Attackers iterate. Guardrails must assume persistence," Mindgard wrote in a blog post.
While tech companies boast that their AI can be used for nearly any purpose, they also need to show they can keep it from being used to enact abuse. For AI image generation, that means maintaining a robust set of refusal rules so that abusive prompts are rejected and kicked back to users.
When OpenAI launched its Sora 2 video model, it promised to be more conservative with its content moderation for this very reason. But it’s important to ensure moderation practices stay consistently effective, not just at a product’s launch. That makes AI safety testing an ongoing process for cybersecurity researchers and AI developers alike.
Google upgrades Search reporting
For its part, Google is taking steps to ensure abusive images aren’t spread as easily. The tech giant simplified its process for requesting the removal of explicit images from Google Search. You can click the three dots in the upper right corner of an image, click report and then tell Google you want the photo removed because it "shows a sexual image of me." The new changes also let you select multiple images at once and track your reports more easily.
"We hope that this new removal process reduces the burden that victims of nonconsensual explicit imagery face," the company said in a blog post.
When asked about any further steps the company is taking to prevent AI-enabled abuse, Google pointed CNET to its generative AI prohibited use policy. Google’s policy, like many other tech companies’ fine print, outlaws using AI for illegal or potentially abusive activities, such as creating intimate imagery.
There are laws that aim to help victims when these images are shared online, such as the 2025 Take It Down Act. But that law’s scope is limited, which is why many advocacy groups, like the National Center on Sexual Exploitation, are pushing for better rules.
There’s no guarantee that these changes will prevent anyone from ever using AI for harassment and abuse. That’s why it’s so important that developers stay vigilant to ensure we are all protected — and act quickly when reports and problems pop up.
(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Jump on This Half-Off Super Mario Odyssey Deal Before It’s Gone
Best Buy just cut the price of Super Mario Odyssey for Nintendo Switch in half.
Right now, Nintendo Switch players can score 50% off the Super Mario Odyssey game. The discount applies to both the digital and physical versions, so you can pick the one you prefer. Best Buy is the only retailer with this discount. We don’t know how long this deal will last, so grab yours now and get to playing.
In Super Mario Odyssey, Mario is sent on a 3D adventure around the whole world. He races to stop Bowser’s wedding plans and rescue Princess Peach. The game has a ton of kingdoms, hidden secrets and fun challenges. There’s even a new character, Cappy, who teams up with Mario.
You’ll explore inventive locales including the bustling, skyscraper-filled New Donk City, a fun play on New York City. You will also be collecting Power Moons to fuel the Odyssey airship. There’s also drop-in co-op with split Joy-Con controls. Plus, there are bonus features tied to wedding-themed figures.
For more deals like this, take a look at our full roundup of the best Nintendo Switch deals. You’ll find discounts on games, accessories and more.
Why this deal matters
Best Buy is the only retailer offering a discount on Super Mario Odyssey for Nintendo Switch right now. The game is sold out at Amazon, and it’s still full price at Target and directly from Nintendo. GameStop has the physical game at full price, though the digital version is $3 off. Not only is the Best Buy offer the lowest price out there, it’s practically the only deal, and at 50% off, it’s tough to beat.
