
Technologies

The iPhone 17 Isn’t Out Yet, but You Can Already Order a Case for It

New Dbrand cases for the iPhone 17, 17 Air, 17 Pro and 17 Pro Max showcase a bold style, but there’s no official confirmation that the cases correctly reflect Apple’s upcoming phone.

One of the first cases for the iPhone 17 made its appearance last month, and now there are cases for the iPhone 17 Air and 17 Pro/Pro Max. Canadian tech accessory company Dbrand announced its Tank Case for the iPhone 17 back in August, nearly a month ahead of Apple's "Awe Dropping" event. Now the company has multiple case types for all the expected new iPhone models.

While we don't know the price of the Tank Case yet, Dbrand has certainly made some striking design choices for the hard black shell, including strings of numeric codes, a Freemason-style Eye of Providence set in what looks like a center designed for MagSafe connections and what appears to be Braille. Dbrand is light on details for now, but you can sign up with your email address to get notifications about the case.

A representative for Dbrand did not immediately respond to a request for comment. 

Other Dbrand cases for the iPhone 17 series, costing between $35 and $50, are actually available to order, though they won't ship until September, presumably after Apple announces the phones.

Patrick Holland, CNET managing editor and mobile guru, cautions that these early-announced case designs don't always hold up by the time the phone is actually released. Holland saw that happen just last year, when some manufacturers had to rush out redesigns to make room for the iPhone 16's surprise Camera Control button.

"It's become a yearly tradition," Holland said. "We see companies try to be the first out with a new case design for the latest iPhone, even though the phone hasn't been announced by Apple."

"For Dbrand, it's unclear whether the Tank Case is designed based on rumors, or if the company got an early look at the iPhone 17 series or was given a dummy model," Holland said. "The case does feature a full-body-width camera bump that has been heavily leaked for the iPhone 17 Pro and 17 Pro Max. There's one thing that's for certain: Dbrand's Tank Case looks chunky and busy, especially for a sleek new iPhone."

That's why we're also keeping a close eye on all the latest reports about iPhone 17 features, including rumors of a redesigned camera bump and a movable lens that could throw a curveball at third-party cases like this.

Social media buzz

Commenters on X shared a variety of opinions about the case. While one person wrote, "that case looks fire," another wrote, "that case looks hideous."

Commenters also wondered if the case design was revealing some previously unknown details about the iPhone 17.

"So the second button is basically confirmed?" one X commenter wrote. "Why would the case sport an area that looks pressable or 'slideable' otherwise?"

Another wrote, "Am I seeing that correctly? Three cameras on a base iPhone model finally."

Others zeroed in on the idea that Apple likely does not want case manufacturers to reveal details about a phone before the company announces it, noting that Dbrand also unveiled a case for the Nintendo Switch 2 before that console came out.

"First the Switch 2, now the iPhone 17," one commenter wrote. "Yeahh, they're never getting shit early to make cases anymore."

Technologies

I Tried Gemini’s ‘Nano Bananas’ for Image Editing. The AI Slipups Were Obvious

Google’s new AI model is good at some tasks, but it struggles in these key areas.

After seeing all the banana fanfare for Google's newest generative AI tool, I knew I had to take it for a spin. Named Gemini 2.5 Flash Image, the model upgrades your ability to edit your photos natively in Gemini. AI enthusiasts have referred to it as the "nano bananas" model, spurred on by a series of banana-themed teasers from Google execs.

In the few weeks it’s been out, people have created over 200 million AI images, and over 10 million people have signed up to use the Gemini app, according to Josh Woodward, Google’s vice president of Google Labs and Gemini.

Google has invested heavily in its generative media models this year, dropping updated versions of its image and video generator models at its annual I/O developers conference. Google’s AI video generator Veo 3 stunned with synchronized audio, a first among the AI giants. And creators have made more than 100 million AI videos with Google’s AI filmmaker tool, Flow. 

I've spent a lot of time testing AI creative software, and I was excited to see what Google had cooked up. But my testing of 2.5 Flash Image showed that just because something has a flashy entrance doesn't mean it'll always live up to its hype. Here's how my experience with Gemini nano bananas went: the good, the bad and the frustrating.

What worked

The Gemini bananas model is spookily good at adding elements to existing images, blending AI-generated elements into any picture you snapped. It also maintains a decently stable level of character consistency, meaning the people in my photos weren't too distorted or wonky after going through the AI processing. Those are both important capabilities for AI image programs, and ones Google said it had worked to improve.

You can see both of these characteristics in this picture of my sister and me. Our general appearances are unchanged in the edited version (right), showing off that character consistency. I asked Gemini to add a third sister who looked similar to us, and it did so scarily well, inserting a third woman between the two of us.

I was also pretty impressed with how quickly Gemini could spit out completed images. Anywhere under a minute gets a gold star from me, and Gemini was regularly handling requests in under 15 seconds. I also appreciated that it added a watermark to all the images it created and edited. Even if I don't love how tech companies have corrupted the sparkles emoji for AI, it's extremely important to have some markers of AI-generated content. Google's SynthID watermarking and other behind-the-scenes work also help differentiate AI content from human-created imagery.

Gemini is good at wholesale AI image creation, too, but I recommend using its Imagen 4 or another AI image generator instead — they have more hands-on controls and settings that get you closer to what you want with less work.

What really didn’t work

There are serious limitations to Gemini bananas. It automatically generated square images, and follow-up prompts asking for images to be adapted into other dimensions were ignored or failed.

I also noticed that Gemini reduced the resolution of many of my photos. I primarily take photos with my iPhone 16, which has stellar cameras, but after those photos went through the Gemini bananas model, fine details were often blurred. That's annoying and won't win over any photographers.

I tried repeatedly to get Gemini to handle photo edits that would’ve been difficult for me to do manually. That’s one area in photo editing where AI is supposed to excel — automating mundane but detail-intensive edits. Sadly, Gemini really struggled with prompt adherence here, meaning it didn’t do what I asked. 

I tried many times to get Gemini to remove reflections from a snap of a Freakier Friday movie poster, but they stubbornly remained. And the more I tried to get it to remove the reflections, the poorer the image quality became with every prompt. Once-clear text was illegible by the time I finally gave up, not to mention the accidental, scary-looking damage done to the faces of Lindsay Lohan and Jamie Lee Curtis.

Gemini nano bananas struggled to generate images in different dimensions. Resizing and cropping images is a core photo editing process, but Gemini didn’t — or couldn’t — handle simple sizing guidelines in my prompts.
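
If you need a specific shape in the meantime, the simplest workaround is to fix the dimensions yourself after the fact. Below is a rough sketch of that idea, assuming a square Gemini output saved locally and using the Pillow library to center-crop it to a 3:2 frame; the file names and target ratio are placeholders, not anything Google provides.

```python
# Rough workaround sketch: center-crop a square Gemini output to a target
# aspect ratio with Pillow. File names and the 3:2 target are placeholders.
from PIL import Image

def center_crop_to_ratio(img: Image.Image, ratio_w: int, ratio_h: int) -> Image.Image:
    """Return the largest centered crop of img with a ratio_w:ratio_h aspect ratio."""
    w, h = img.size
    target = ratio_w / ratio_h
    if w / h > target:               # too wide: trim the left and right edges
        new_w = int(h * target)
        left = (w - new_w) // 2
        box = (left, 0, left + new_w, h)
    else:                            # too tall (or square): trim top and bottom
        new_h = int(w / target)
        top = (h - new_h) // 2
        box = (0, top, w, top + new_h)
    return img.crop(box)

square_output = Image.open("gemini_output.png")
center_crop_to_ratio(square_output, 3, 2).save("gemini_output_3x2.png")
```

Cropping throws away pixels rather than asking the model to generate new ones, which is exactly the point when the model won't cooperate.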

I reached out to Google about the resolution and dimension issues, and a spokesperson said the tech company is "aware and actively working on both issues. It's been a big update from our previous model but we'll continue to improve on the model."

Overall, Gemini nano bananas proved to me that Google is serious about continuing to dominate in generative media. But it has significant pitfalls, with too big a focus on generating new elements rather than on using AI to fix common photo problems. For now, the nano bananas model is best suited for Gemini fans who want to make big edits quickly. Those of us looking for more precise tools will have to wait for Google's next big update or find another program.

Gemini nano bananas availability, pricing and privacy

You don't need to do anything to access the new model; it's automatically added to the base Gemini 2.5 Flash model. Gemini is available for free, with more models and higher usage caps available in Google's AI plans starting at $20 per month.

If you’re a paying subscriber, you may also be able to access the model through Google AI Studio. From there, all you have to do is upload an image and type out your prompt. Each prompt uses anywhere from one to two thousand tokens, depending on the level of detail required. Adobe Express and Firefly users can also access the new model now. 
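
For the curious, here's a minimal sketch of what scripting an edit against the model can look like with Google's google-genai Python SDK instead of the Gemini app. The model ID string, file names and prompt below are illustrative assumptions; confirm the current image-model identifier in Google AI Studio before running anything.

```python
# Minimal sketch of an image edit through the Gemini API, using the
# google-genai Python SDK (pip install google-genai pillow).
# The model ID, file names and prompt are illustrative assumptions.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")  # or set the GEMINI_API_KEY env var

source = Image.open("family_photo.jpg")  # placeholder local photo
prompt = "Add a golden retriever sitting between the two people."

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # assumed ID for the "nano bananas" model
    contents=[prompt, source],
)

# Responses can mix text and image parts; save any returned inline image data.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("family_photo_edited.png")
    elif part.text:
        print(part.text)
```

The loop at the end matters because the model can return explanatory text alongside the edited image, so you have to check each part of the response rather than assume the first one is a picture.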

Google's Gemini privacy policy says the company can use the information you upload to improve its AI products, which is why it recommends avoiding uploading sensitive or private information. The company's AI prohibited use policy also bars the creation of illegal or abusive material.

For more, check out the best AI image generators and everything announced at the Made by Google Pixel 10 event.


Technologies

How to Get the Map in Silksong and Unlock Shakra’s Full Shop

Your first stop in the game should be finding this map and getting access to Shakra’s shop.

When it comes to Metroidvania games like Hollow Knight: Silksong, the map is arguably the most important item to have, as it shows where you’ve explored and where you need to go next. And, like a lot of aspects of this game, it’s not meant to be easy. 

Silksong, like its predecessor Hollow Knight, has a map feature you can expand as you play, but you have to find one particular NPC to unlock it. Once you find that character, getting around the game's nooks and crannies will feel slightly easier, at least until the game crushes you with difficult sequences and boss fights.

Read more: Hollow Knight Silksong Guide: Read These 11 Tips Before Starting

How do you find the map in Silksong?

To get a map, you'll need to travel from the central hub of Bone Bottom to The Marrow, which is directly to the east. As you explore the area, you'll hear the sound of someone quietly singing. Following the singing will take you to Shakra, a bug warrior who doubles as a cartographer.

Shakra will introduce herself, and if you continue to talk to her, she will tell you about her map-making skills. This is when she’ll officially become a merchant and sell all the map items you could want. 

What does Shakra sell? 

At the start, Shakra has several items for sale, but the most important ones are maps, a Compass and the Quill. 

There will be two maps for sale when you first meet Shakra: The Mosslands map and The Marrow map. The first will cost you 40 rosaries — the in-game currency — and the other will cost 50 rosaries. As you progress into a new area of the game, return to Shakra, and she will have the map for that area available for purchase. 

The next item to buy from Shakra is the Quill. It costs 50 rosaries, and whenever you rest at a bench, Hornet will quickly mark on the map where she’s been. 

The last item you’ll want to buy is the Compass. It’s an equipable tool that will show an icon of Hornet’s current location on the map, giving you a clear idea of where you are. This will cost 70 rosaries. 

Shakra also has other items available to buy, such as Bench Pins that show where you can rest at a bench and Shell Markers to indicate where something important is on the map. 

Getting all of these will cost a lot of rosaries; the two starter maps, the Quill and the Compass alone add up to 210. Luckily, if you continue exploring The Marrow, you'll find a bench as well as a passage that loops back to the Mosslands, where a shortcut to Bone Bottom will be waiting. That creates an easy setup to kill enemies for rosaries, rest and repeat, and it's a good opportunity to pick up these important items without risking much while also developing your combat skills.

How does the map work? 

Depending on the platform, there's a dedicated map button (it's the left bumper on Xbox). Hold the button and the map overlay will appear. You can even move Hornet around while you're viewing the map.

One thing to note is that the map doesn’t auto-populate as you explore. Maps come with an outline of the area, and you still need to go through the area to fill it in. Once Hornet rests at a bench, she will mark up the map to indicate where she’s been, what markers she’s come across and so on. 

How do I find more maps and pins? 

When you take out the next boss, the Bell Beast, Shakra will move her store to Bone Bottom. She'll be located in a section above the other denizens that you can reach by following the hanging metal rings. She'll hang out there for the rest of the game, though you'll also run into her elsewhere as you explore. Even when you meet her somewhere else, once you finish interacting with her and then rest at a bench or die, she will head back to Bone Bottom.

As you explore more areas, Shakra will also have more maps and more items, so always make sure to head back to Bone Bottom. 

Hollow Knight: Silksong is out now on PC, Switch, Switch 2, Xbox One, Xbox Series X and Series S, PS4 and PS5. It’s also available for Xbox Game Pass subscribers. 


Technologies

The iPhone 17 Cameras Need Google’s Approach for Identifying AI Images

Commentary: Google is taking the correct stance on tagging both AI-generated images and photos straight out of the camera. Apple should join in and throw its weight in the right direction.

Nearly all of the new camera features of Google’s Pixel 10 Pro lean on artificial intelligence. When you use Pro Res Zoom to zoom in at 100x, for example, the Pixel Camera uses generative AI to recreate a sharp, clear version. Or when you’re taking photos of people, the Auto Best Take feature melds multiple shots to create an image where everyone looks good.

But Google added another low-level feature to the Pixel 10 line, C2PA content credentials, that isn’t getting much attention. C2PA, or the Coalition for Content Provenance and Authenticity, is an effort to identify whether an image has been created or edited using AI and help weed out fake images. AI misinformation is a growing problem, especially as the systems used to create them have been rapidly improving — with Google among those advancing the technology. 


Apple, however, is not part of the coalition of companies pledging to work with C2PA content credentials. But it sells millions of iPhones, some of the most popular image-making devices in the world. It’s time the company implemented the technology in its upcoming iPhone 17 cameras.

Distinguishing genuine photos from AI-edited ones

C2PA is an initiative co-founded by Adobe to tag media with content credentials that identify whether they're AI-generated or AI-edited. Google is a member of the coalition. Starting with the Pixel 10 line, every image captured by the camera is embedded with C2PA information, and if you use AI tools to edit a photo in the Google Photos app, it will also get flagged as being AI-edited.

When viewing an image in Google Photos on a phone, swipe up to display information about it. In addition to data such as which camera settings were used to capture the image, at the bottom is a new "How this was made" section. It's not incredibly detailed; a typical shot just says "Media captured with a camera," but if an AI tool such as Pro Res Zoom was used, you'll see "Edited with AI tools." (I was able to view this on a Pixel 10 Pro XL and a Samsung Galaxy S25 Ultra, but it didn't show up in the Google Photos app on an iPhone 16 Pro.)

As another example, if you edit a photo after taking it, using the Help me edit field to replace the background, the generated version also includes "Edited with AI tools" in the information.

To be fair, AI has a role in pretty much every photo you take with a smartphone, given that machine learning is used to identify objects and scenes to better merge bursts of exposures that are captured when you tap the shutter button. The Pixel 10 flags those as "Edited with non-AI tools," so Google is specifically applying the AI tag to images where generative AI is at work. So far, the implementation is inconsistent: A short AI-generated clip I made using the Photo to Video feature in Google Photos on the Pixel 10 Pro XL shows no C2PA data at all, though it does include a "Veo" watermark in the corner of the video.
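
If you ever want to check whether a random file carries this provenance data at all, you don't strictly need Google Photos. The sketch below is a rough, assumption-laden heuristic: it scans a JPEG's APP11 segments for an embedded C2PA manifest label, which is where the C2PA spec says the JUMBF manifest store lives. It only detects that a manifest is present; actually verifying who signed it requires a full C2PA SDK or verification tool.

```python
# Quick-and-dirty presence check for a C2PA manifest in a JPEG.
# Assumption: the manifest store is embedded as JUMBF data in APP11 (0xFFEB)
# segments, as the C2PA spec describes. This only detects that a manifest
# exists; it does NOT validate its signature -- use a real C2PA SDK for that.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):            # not a JPEG at all
        return False
    offset = 2
    while offset + 4 <= len(data) and data[offset] == 0xFF:
        marker = data[offset + 1]
        if marker in (0x01, 0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
            offset += 2                              # standalone markers carry no length
            continue
        (length,) = struct.unpack(">H", data[offset + 2:offset + 4])
        segment = data[offset + 4:offset + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:    # APP11 segment with the C2PA label
            return True
        if marker == 0xDA:                           # start of scan: header segments are done
            break
        offset += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```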

What’s important is that the C2PA info is there

But here’s the key point: What Google is doing is not just tagging pictures that have been touched by AI. The Camera app is adding the C2PA data to every photo it captures, even the ones you snap and do nothing with.

The goal is not to highlight AI-edited photos. It’s to let you look at any photo and see where it came from.

When I talked to Isaac Reynolds, group product manager for the Pixel cameras, before the Pixel 10 launch, C2PA was a prominent topic even though in practical terms the feature isn’t remotely as visible as Pro Res Zoom or the new Camera Coach.

"The reason we are so committed to saving this metadata in every Pixel camera picture is so people can start to be suspicious of pictures without any information," said Reynolds. "We're just trying to flood the market with this label so people start to expect the data to be there."

This is why I think Apple needs to adopt C2PA and tag every photo made with an iPhone. It would represent a massive influx of tagged images and give weight to the idea that an image with no tag should be regarded as potentially not genuine. If an image looks off, particularly when it involves current events or is meant to imitate a business in order to scam you, looking at its information can help you make a better-informed choice.

Google isn’t an outlier here. Samsung Galaxy phones add an AI watermark and a content credential tag to images that incorporate AI-generated material. Unfortunately, since Apple is not even listed as one of the C2PA members, I admit it seems like a stretch to expect that the company would adopt the technology. But given Apple’s size and influence in the market, adding C2PA credentials to every image the iPhone makes would make a difference and hopefully encourage even more companies to get on board.

