
Here’s What I Learned Testing Photoshop’s New Generative AI Tool

Adobe’s Firefly AI feature brings new fun and fakery to photos. It’s a profound change for image editing, though far from perfect.

Adobe has built generative AI abilities into its flagship image-editing software, releasing a Photoshop beta version Tuesday that dramatically expands what artists and photo editors can do. The move promises to release a new torrent of creativity even as it gives us all a new reason to pause and wonder whether that sensational, scary or inspirational photo you see on the internet is actually real.

In my tests, detailed below, I found the tool impressive but imperfect. Adding it directly to Photoshop is a big deal, letting creators experiment within the software tool they’re likely already using without excursions to Midjourney, Stability AI’s Stable Diffusion or other outside generative AI tools.

With Adobe’s Firefly family of generative AI technologies arriving in Photoshop, you’ll be able to let the AI fill a selected part of the image with whatever it thinks most fitting – for example, replacing road cracks with smooth pavement. You can also specify the imagery you’d like with a text prompt, such as adding a double yellow line to the road.

Firefly in Photoshop can also expand an image, adding new scenery beyond the frame based on what’s already in the frame or what you suggest with text. Want more sky and mountains in your landscape photo? A bigger crowd at the rock concert? Photoshop will oblige, without today’s difficulties of finding source material and splicing it in.

The feature, called generative fill and scheduled to emerge from beta testing in the second half of 2023, can be powerful. In Adobe’s live demo, the tool was often able to match a photo’s tones, blend in AI-generated imagery seamlessly, infer the geometric details of perspective even in reflections and extrapolate the position of the sun from shadows and sky haze.

Such technologies have been emerging over the last year as Stable Diffusion, Midjourney and OpenAI’s Dall-E captured the imaginations of artists and creative pros. Now the capability is built directly into the software they’re most likely to already be using, streamlining what can be a cumbersome editing process.

"It really puts the power and control of generative AI into the hands of the creator," said Maria Yap, Adobe’s vice president of digital imaging. "You can just really have some fun. You can explore some ideas. You can ideate. You can create without ever necessarily getting into the deep tools of the product, very quickly."

But you can’t sell anything yet. With Firefly technology, including what’s produced by Photoshop’s generative fill, "you may not use the output for any commercial purpose," Adobe’s generative AI beta rules state.

Photoshop’s Firefly AI imperfect but useful

In my testing, I frequently ran into problems, many of them likely stemming from the limited range of the training imagery. When I tried to insert a fish on a bicycle into an image, Firefly added only the bicycle. I couldn’t get Firefly to make a kraken emerge from San Francisco Bay. A musk ox looked like a panda-moose hybrid.

Less fanciful material also presented problems. Text looked like an alien race’s script. Shadows, lighting, perspective and geometry weren’t always right.

People are hard, too. On close inspection, their faces were distorted in weird ways. Humans added into shots could be positioned too high in the frame or otherwise unconvincingly blended in.

Still, Firefly is remarkable for what it can accomplish, particularly with landscape shots. I could add mountains, oceans, skies and hills to landscapes. A white delivery van added to a night scene was appropriately yellowish, matching the sodium vapor streetlights. If you don’t like the trio of results Firefly presents, you can click the "generate" button to get another batch.

Given the pace of AI developments, I expect Firefly in Photoshop will improve.

It’s hard and expensive to retrain big AI models, requiring a data center packed with expensive hardware to churn through data, sometimes taking weeks for the largest models. But Adobe plans relatively frequent updates to Firefly. "Expect [about] monthly updates for general improvements and retraining every few months in all likelihood," Adobe product chief Scott Belsky tweeted Tuesday.

Automating image manipulation

For years, "Photoshop" hasn’t just referred to Adobe’s software. It’s also used as a verb signifying photo manipulations like slimming supermodels’ waists or hiding missile launch failures. AI tools automate not just fun and flights of fancy but also fakery, like an alleged explosion at the Pentagon or a convincingly real photo of the pope in a puffy jacket, to pick two recent examples.

With AI, expect editing techniques far more subtle than the extra smoke easily recognized as digitally added to photos of an Israeli attack on Lebanon in 2006.

It’s a reflection of the double-edged sword that is generative AI. The technology is undeniably useful in many situations but also blurs the line between what is true and what is merely plausible.

For its part, Adobe tries to curtail problems. It doesn’t permit prompts to create images of many political figures and blocks you for "safety issues" if you try to create an image of black smoke in front of the White House. And its AI usage guidelines prohibit imagery involving violence, pornography and "misleading, fraudulent, or deceptive content that could lead to real-world harm," among other categories. "We disable accounts that engage in behavior that is deceptive or harmful," the guidelines add.

Firefly also is designed to skip over styling prompts that have provoked serious complaints from artists displeased to see their type of art reproduced by a data center. And it supports the Content Authenticity Initiative’s content credentials technology, which can be used to label an image as having been generated by AI.

Today, generative AI imagery made with Adobe’s Firefly website gets content credentials by default, along with a visual watermark. When the Photoshop feature exits beta testing and ships later this year, imagery will include content credentials automatically, Adobe said.

People trying to fake images can sidestep that technology. But in the long run, it’ll become part of how we all evaluate images, Adobe believes.

"Content credentials give people who want to be trusted a way to be trusted. This is an open-source technology that lets everyone attach metadata to their images to show that they created an image, when and where it was created, and what changes were made to it along the way," Adobe said. "Once it becomes the norm that important news comes with content credentials, people will then be skeptical when they see images that don’t."
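To make that idea concrete, here is a minimal sketch of the kind of information such a credential could carry. The field names and structure below are illustrative assumptions for this article, not the actual Content Authenticity Initiative schema; a real credential would be cryptographically signed and embedded in the image file itself.

import json
from datetime import datetime, timezone

# Hypothetical content-credential manifest; field names are illustrative,
# not the real Content Authenticity Initiative (C2PA) format.
manifest = {
    "creator": "Jane Photographer",
    "created_at": datetime(2023, 5, 23, 10, 30, tzinfo=timezone.utc).isoformat(),
    "capture_device": "Example Camera X100",
    "edits": [
        {
            "tool": "Photoshop (beta)",
            "action": "generative_fill",
            "detail": "replaced road cracks with smooth pavement",
        },
    ],
    "generative_ai_used": True,
}

# In practice this record would be signed and attached to the image's metadata;
# here we simply serialize it to show what a viewer might inspect.
print(json.dumps(manifest, indent=2))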

Generative AI for photos

Adobe’s Firefly family of generative AI tools began with a website that turns a text prompt like "modern chair made up of old tires" into an image. It has added a couple of other options since, and Creative Cloud subscribers will also be able to try a lightweight version of the Photoshop interface on the Firefly site.

When OpenAI’s Dall-E brought that technology to anyone who signed up for it in 2022, it helped push generative artificial intelligence from a technological curiosity toward mainstream awareness. Now there’s plenty of worry along with the excitement as even AI creators fret about what the technology will bring now and in the more distant future.

Generative AI is a relatively new form of artificial intelligence technology. AI models can be trained to recognize patterns in vast amounts of data – in this case labeled images from Adobe’s stock art business and other licensed sources – and then to create new imagery based on that source data.

Generative AI has surged to mainstream awareness with language models used in tools like OpenAI’s ChatGPT chatbot, Google’s Gmail and Google Docs, and Microsoft’s Bing search engine. When it comes to generating images, Adobe employs an AI image generation technique called diffusion that’s also behind Dall-E, Stable Diffusion, Midjourney and Google’s Imagen.
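As a toy illustration of how diffusion works, these models start from pure random noise and repeatedly subtract predicted noise until a coherent image emerges. The sketch below is a deliberately simplified, hypothetical stand-in: the denoise function is a placeholder for a trained neural network, and real systems like the ones named above add noise schedules, text encoders and vastly larger models.

import numpy as np

def denoise(noisy_image: np.ndarray, step: int, prompt: str) -> np.ndarray:
    # Stand-in for a trained network that predicts the noise to remove.
    # A real model would condition on the text prompt via learned embeddings.
    return (noisy_image - 0.5) * 0.1

def generate(prompt: str, size=(64, 64, 3), steps=50) -> np.ndarray:
    # Start from pure random noise...
    image = np.random.randn(*size)
    # ...and iteratively subtract the predicted noise, step by step,
    # until an image consistent with the prompt remains.
    for step in reversed(range(steps)):
        image = image - denoise(image, step, prompt)
    return np.clip(image, 0.0, 1.0)

img = generate("modern chair made up of old tires")
print(img.shape)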

Adobe calls Firefly for Photoshop a "co-pilot" technology, positioning it as a creative aid, not a replacement for humans. Yap acknowledges that some creators are nervous about being replaced by AI. Adobe prefers to see it as a technology that can amplify and speed up the creative process, spreading creative tools to a broader population.

"I think the democratization we’ve been going through, and having more creativity, is a positive thing for all of us," Yap said. "This is the future of Photoshop."

Editors’ note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.


Apple CarPlay Ultra vs. Google Built-In: How the Next-Gen Auto Software Rivals Compare

Apple and Google are supercharging their car software experiences. Here’s how they differ.

I’d spent an hour driving a $250,000-plus Aston Martin up the Los Angeles coast when my hunger pangs became impossible to ignore, and as I’ve done many times before, I asked Siri (through Apple CarPlay) to find me a taco place. But then I did something no other car on the planet allows: I asked Siri to blast the AC and make the air colder. That’s because the 2025 Aston Martin DBX I drove was the first vehicle to come with Apple CarPlay Ultra, the upgraded version of the company’s car software.

Apple debuted CarPlay Ultra at WWDC 2025 last month, and this year’s version of the Aston Martin DBX is the first vehicle to launch with it (pairing with an iPhone running iOS 18.5 or later). As I drove the luxury crossover around, I fiddled with other features that aren’t available in regular CarPlay, from climate control to radio to checking the pressure on the car’s tires. Ultimately, Ultra gives deeper access to more car systems, which is a good thing.

That reminded me a lot of a new feature announced at Google I/O back in May: Google Built-In, which similarly lets users control more of a car’s systems straight from the software interface (in that case, Android Auto). When I got a demonstration of Google Built-In, sitting in a new Volvo EX90 electric SUV, I saw what this new integration of Google software offered: climate controls, Gemini AI assistance and even warnings about car maintenance issues.

But the name is telling: Google Built-In requires automakers to incorporate Android deeper into their cars’ inner workings. Comparatively, Apple CarPlay Ultra seems to require far less work from car manufacturers to prepare their vehicles: just a reasonably advanced multicore processor onboard that can handle the increased task load. (Aston Martin will be able to add CarPlay Ultra support to its 2023 and 2024 lineups through firmware updates because those cars already contain sufficiently advanced CPUs.)

Both solutions reflect Apple’s and Google’s different approaches to their next versions of car software. Apple’s is lighter weight: it runs through a paired iPhone and seemingly requires less commitment from automakers to integrate CarPlay Ultra into their vehicles, so long as the car has adequate processing power onboard. Google Built-In does require much more integration, but it’s so self-sufficient that you can leave your Android phone at home and still get much of its functionality (aside from getting and sending messages and calls).

Driving with Apple CarPlay Ultra: Controlling climate, radio and more

As I drove around Los Angeles in the Aston Martin with Apple CarPlay Ultra, I could tell what new features I would be missing once I stepped back into my far more humble daily driver. 

At long last, I could summon Siri and ask it to play a specific song (or just a band) and have it pulled up on Spotify. Since Apple’s assistant now has access to climate controls, I asked to turn up the AC, and it went full blast. I asked to find tacos and it suggested several fast food restaurants — well, it’s not perfect, but at least it’s listening. 

To my relief, Aston Martin retained the physical knobs by the gearshift to control fan speed, temperature, stereo volume and the car’s myriad roadway options (like driving assistance) in case the driver likes traditional controls, but almost all of them could also be altered in the interface. Now, things like radio controls (AM/FM and satellite) and car settings are nestled in their own recognizable apps in CarPlay’s interface.

Ultimately, that’ll be one of CarPlay Ultra’s greatest advantages: If you enter an unfamiliar vehicle (like a rental), you still know exactly where everything is. No wrestling with a carmaker’s proprietary software or trying to figure out where some setting or other is located. It’s not a complete replacement — in the Aston Martin’s case, there were still a handful of settings (like for ambient light projected when the doors open) that the luxury automaker controlled, but they were woven into CarPlay so you could pop open those windows and go back to Apple’s interface without visibly changing apps.

The dependable ubiquity of Apple’s CarPlay software will likely become even more essential as cars swap out their analog instrument clusters for screens, as Aston Martin did. There’s still a touch of the high-end automaker’s signature style as the default screen behind the wheel shows two traditional dials (one for the speedometer, one for RPMs) with Aston Martin’s livery. But that can be swapped out for other styles, from other dials with customizable colors to a full-screen Maps option.

Each of the half-dozen or so dashboard options was swapped via square touchpads smaller than a dime on the wheel, next to the other touch controls. On the dual-dial display types, I swiped vertically to cycle through what the central square shows (Maps directions, current music or other app information) or swiped horizontally to switch to another dashboard option. No matter which one you choose, the bottom bar contains all the warning lights drivers will recognize from analog cars — even with digital displays, you’re not safe from the check engine light (which is a good thing).

Apple CarPlay Ultra doesn’t yet do everything I want. I wish I could also ask Siri to roll down the windows (as Google Built-In can — more on that later) and lock or unlock specific doors. If Apple is connected to the car enough to read the pressure in each tire, I wish it could link up with the engine readout and tell me in plain language what kind of maintenance issue has sprung up. Heck, I wish it could connect to the car remotely and blast the AC before I get in (or fire up the seat warmer), as some proprietary car apps can do. And while Apple Maps and Waze will be included at launch, Google Maps support isn’t; it’s coming later.

These aren’t huge deficiencies, and they show where CarPlay Ultra could better meet driver needs in future updates, notwithstanding the potentially dicey security concerns around using CarPlay Ultra for remote climate control or unlocking. But they also show where the limits are today compared with Google’s more in-depth approach.

Google Built-In: Deeper car integrations — and, of course, Gemini AI

The day after Google I/O’s keynote, back in May, was quieter, as attendees flitted between focused sessions and demos of upcoming software. It was the ideal time to check out Google Built-In, which was appropriately shown off in a higher-end Volvo EX90 electric SUV (though not nearly as pricey as an Aston Martin).

As mentioned above, Google Built-In has deeper integrations with vehicles than what I saw in Apple CarPlay Ultra, allowing users to change the climate through its interface or access other systems, including through voice requests. For instance, it can go beyond AC control to switch on the defroster, and even raise and lower specific windows relative to the speaker’s position: cameras within the car (in the rearview mirror, if I remember right) meant that when my demonstrator asked to "roll down this window" while pointing over his left shoulder, the correct window rolled down.

Google Built-In is also connected to Gemini, Google’s AI assistant, for what the company is calling "Google Live," a separate and more capable version of the Android Auto assistant experience in cars right now. With a Live session, I could request music or directions much like I could with Siri — but my demo went further, as the demonstrator tasked Gemini with requests better suited for generative AI, such as asking, "Give me suggestions for a family outing" and telling it to send a specific text to a contact.

The demonstrator then asked Gemini for recipe advice — "I have chicken, rice and broccoli in the fridge, what can I make?" — as an example of a query someone might ask on the drive home.

Since you’re signed into your Google account, Gemini can consult anything connected to it, like emails and messages. It’s also trained on the user manuals from each carmaker, so if a warning light comes on, the driver can ask the voice assistant what it means — no more flipping through a dense manual trying to decipher each alert.

There are other benefits to Google Built-In, like not needing your phone for some features. But there are also drawbacks, like the need to keep car software updated, which means more work on Google’s end to make sure cars are protected from issues or exploits. Google can’t just fix a problem in the most current version of Android — it’ll need to backport that fix to older versions that vehicles might still be running.

This deeper integration with Google Built-In has a lot of the benefits of Apple CarPlay Ultra (a familiar interface, easier-to-access features), just cranked up to a greater degree. It surely benefits fans of hands-off controls, and interweaving Gemini naturally dovetails with Google’s investments, so it’s easy to see that functionality improving. But a greater reliance on Android within the car’s systems could be concerning as the vehicle ages: Will the software stop being supported? Will it slow down or be exposed to security exploits? A lot of questions remain about opening cars up to phone software interfaces.



A Samsung Tri-Fold Phone Could Be in Your Future, if This Leak Is to Be Believed

UI animations might have revealed the imminent release of a so-called "Galaxy G Fold" device with three screens.

Samsung has been showing off mobile display concepts with three screens at trade events such as CES for several years, but it might finally bring one to market soon if a leaked UI animation is any indicator.

As reported by Android Authority, an animated image from a software build of One UI 8 appears to show what some are dubbing a "Galaxy G Fold" device with three display panels. The screens would be capable of displaying different information or working in unison as one large display. The new phone model could debut as early as next week at Samsung’s Unpacked event on July 9 in Brooklyn.

Huawei released a tri-folding phone in February, the Mate XT Ultimate Design. 

Some websites have gone into overdrive trying to uncover details on what Samsung’s new device might include and how much it may cost, with Phone Arena reporting that according to a Korean media report, it could be priced at about $3,000. 

Samsung didn’t immediately respond to a request for comment.
