
Technologies

Adobe: Our New Generative AI Will Help Creative Pros, Not Hurt Them

The Firefly tools begin with image creation and font styling but soon will spread to Photoshop and other software.

In 2022, OpenAI’s Dall-E service wowed the world with the ability to turn text prompts into images. Now Adobe has built its own version of this generative AI technology with tools that begin a technological overhaul of the company’s widely used creative tools.

On Tuesday, Adobe released the first two members of its new Firefly collection of generative AI tools for beta testing. The first tool creates an image based on a text prompt like "fierce alligator leaping out of the water during a lightning storm," with hundreds of styles that can tweak results. The other applies prompt-based styles to text, letting people create letters that look hairy, scaly, mossy or however else they want.

Firefly for now is available on Adobe’s website, but the company will build generative AI directly into other tools, starting with its Photoshop image editing software, Illustrator for designs and Adobe Express for creating quick videos. The company hasn’t revealed its pricing approach for the new tools.

Creative professionals might see Firefly as an incursion into their creative domain, going beyond mechanical tools like selecting colors and trimming videos into the heart and soul of their jobs. With AI showing new smarts when it comes to translating documents, interpreting tax code, composing music and creating travel itineraries, it’s not irrational for professionals to feel spooked.

Like other AI fans, though, Adobe sees artificial intelligence as the latest digital tool to amplify what humans can do. For example, Firefly eventually could let people use Adobe tools to tailor designs to individuals instead of just creating one design for a broad audience, said Alexandru Costin, vice president of Adobe’s generative AI work.

"We don’t think AI will replace creative creators. We think that creators using AI will be more competitive than creators not using AI. This is why we want to bring AI to the fingertips of all our user base," Costin said. "The only way to succeed in AI is to embrace it."

Adobe’s Firefly products are trained on the company’s own library of stock images, along with public domain and licensed works. The company has worked to reduce the bias that training data can instill in AI models, for example the assumption that business executives are male.

AI is a "sea change"

Artificial intelligence uses processes inspired by human brains for computing tasks, trained to recognize patterns in complex real-world data instead of following traditional and rigid if-this-then-that programming. With advances in AI hardware, software, algorithms and training data, the field is advancing rapidly and touching just about every corner of tech.
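That contrast between rigid if-this-then-that code and pattern-learning systems can be sketched in a few lines (a purely illustrative toy with made-up features and data, not drawn from any real AI product):

```python
# Rule-based: explicit if-this-then-that logic, written by hand.
def rule_based_spam(msg):
    return "free money" in msg.lower()

# Learned: a nearest-centroid classifier that infers the decision
# boundary from labeled examples instead of being told the rule.
def train_centroids(samples):
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab], vec))

# Toy features per message: [exclamation marks, money-related words]
training = [([5, 3], "spam"), ([4, 2], "spam"), ([0, 0], "ham"), ([1, 0], "ham")]
model = train_centroids(training)
print(classify(model, [3, 2]))  # prints "spam" for a message the rules never saw
```

The hand-written rule catches only what its author anticipated; the trained classifier generalizes from examples, which is the property that scales up, with vastly more data and parameters, to systems like Firefly and ChatGPT.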

The latest flavor of the technology, generative AI, can create new material on its own. The best-known example, ChatGPT, can write software, hold conversations and compose poetry. Microsoft is employing ChatGPT’s technology foundation, GPT-4, to boost Bing search results, offer email writing tips and help build presentations.

AI tools are sprouting up all over. Adobe has used AI for years under its Sensei brand for features like recognizing human subjects in Lightroom photos and transcribing speech into text in Premiere Pro videos. EbSynth applies a photo’s style to a video, HueMint creates color palettes and LeiaPix converts 2D photos into 3D scenes.

But it’s generative AI that opens up new creative possibilities for digital art and design.

"It’s a sea change," said Forrester analyst David Truog.

An illustration of Adobe's use of generative AI to style the letter N so it looks mossy, golden, or made of thousands of red particles.

One of the first members of Adobe’s Firefly family of generative AI tools will style text based on prompts like "the letter N made of gold with intricate ornaments."

Adobe

Alpaca offers a Photoshop plug-in to generate art, and Aug X Labs can turn a text prompt into a video. Google’s MusicLM converts text to music, though it’s not open to the public. Dall-E captured the internet’s attention with its often fantastical imagery — the name marries Pixar’s WALL-E robot with the surrealist painter Salvador Dalí.

Related tools like Midjourney and Stability AI’s Stable Diffusion spread the technology even further.

If Adobe didn’t offer generative AI abilities, creative pros and artists would get them from somewhere else. 

Indeed, Microsoft on Tuesday incorporated Dall-E technology into its Bing Image Creator service.

Training AIs isn’t easy, but it’s getting less difficult, at least for those who have a healthy budget. Chip designer Nvidia on Tuesday announced that Adobe is using its new H100 Hopper GPU to train Firefly models through a new service called Picasso. Other Picasso customers include photo licensing companies Getty Images and Shutterstock.

Legal engineering

Developing good AI isn’t just a technical matter. Adobe set up Firefly to sidestep legal and social problems that AI poses.

For example, three artists sued Stability AI and Midjourney in January over the use of their works in AI training data. They "seek to end this blatant and enormous infringement of their rights before their professions are eliminated by a computer program powered entirely by their hard work," their lawsuit said.

Getty Images also sued Stability AI, alleging that it "unlawfully copied and processed millions of images protected by copyright." Getty offers licenses to its enormous catalog of photos and other images for AI training, but Stability AI didn’t take one out. Stability AI, DeviantArt and Midjourney didn’t respond to requests for comment.

Adobe wants to assure artists that they needn’t worry about such problems. Because Firefly is trained only on Adobe’s own stock library, public domain works and licensed material, there are no copyright problems, no brand logos and no Mickey Mouse characters. "You don’t want to infringe somebody else’s copyright by mistake," Costin said.

The approach is smart, Truog said.

"What Adobe is doing with Firefly is strategically very similar to what Apple did by introducing the iTunes Music Store 20 years ago," he said. Back then, Napster music sharing showed demand for online music, but recording industry lawsuits crushed the idea. "Apple jumped in and designed a service that let people access music online but legally, more easily, and in a way that compensated the content creators instead of just stealing from them."

Adobe also worked to counteract another problem that could make businesses leery: showing biased or stereotypical imagery.

It’s now up to Adobe to convince creative pros that it’s time to catch the AI wave.

"The introduction of digital creativity has increased the number of creative jobs, not decreased them, even if at the time it looked like a big threat," Costin said. "We think the same thing will happen with generative AI."

Editors’ note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors.



Apple CarPlay Ultra vs. Google Built-In: How the Next-Gen Auto Software Rivals Compare

Apple and Google are supercharging their car software experiences. Here’s how they differ.

I’d spent an hour driving a $250,000-plus Aston Martin up the Los Angeles coast when my hunger pangs became impossible to ignore, and as I’ve done many times before, I asked Siri (through Apple CarPlay) to find me a taco place. But then I did something no other car on the planet allows: I asked Siri to blast the AC and make the air colder. That’s because the 2025 Aston Martin DBX I drove was the first vehicle to come with Apple CarPlay Ultra, the upgraded version of the company’s car software.

Apple debuted CarPlay Ultra at WWDC 2025 last month, and this year’s version of the Aston Martin DBX is the first vehicle to launch with it (pairing with an iPhone running iOS 18.5 or later). As I drove the luxury crossover around, I fiddled with other features that aren’t available in regular CarPlay, from climate control to radio to checking the pressure on the car’s tires. Ultimately, Ultra gives deeper access to more car systems, which is a good thing.

That reminded me a lot of a new feature announced at Google I/O back in May: Google Built-In, which similarly lets users control more of a car’s systems straight from the software interface (in that case, Android Auto). When I got a demonstration of Google Built-In, sitting in a new Volvo EX90 electric SUV, I saw what this new integration of Google software offered: climate controls, Gemini AI assistance and even warnings about car maintenance issues.

But the name is telling: Google Built-In requires automakers to incorporate Android deeper into their cars’ inner workings. By comparison, Apple CarPlay Ultra seems to require far less work from car manufacturers to prepare their vehicles: just a reasonably advanced multicore processor onboard that can handle the increased task load. (Aston Martin will be able to add CarPlay Ultra support to its 2023 and 2024 lineups through firmware updates because those cars already contain sufficiently advanced CPUs.)

Both solutions reflect Apple’s and Google’s different approaches to their next versions of car software. Apple’s is lighter weight: it runs through a paired iPhone and seemingly requires less commitment from automakers to integrate CarPlay Ultra into their vehicles, so long as the car has adequate processing power onboard. Google Built-In demands much deeper integration, but it’s self-sufficient enough that you can leave your Android phone at home and still get much of its functionality (aside from sending and receiving messages and calls).

Driving with Apple CarPlay Ultra: Controlling climate, radio and more

As I drove around Los Angeles in the Aston Martin with Apple CarPlay Ultra, I could tell what new features I would be missing once I stepped back into my far more humble daily driver. 

At long last, I could summon Siri and ask it to play a specific song (or just a band) and have it pulled up on Spotify. Since Apple’s assistant now has access to climate controls, I asked to turn up the AC, and it went full blast. I asked to find tacos and it suggested several fast food restaurants — well, it’s not perfect, but at least it’s listening. 

To my relief, Aston Martin retained the physical knobs by the gearshift to control fan speed, temperature, stereo volume and the car’s myriad roadway options (like driving assistance) in case the driver likes traditional controls, but almost all of them could also be altered in the interface. Now, things like radio controls (AM/FM and satellite) and car settings are nestled in their own recognizable apps in CarPlay’s interface.

Ultimately, that’ll be one of CarPlay Ultra’s greatest advantages: If you enter an unfamiliar vehicle (like a rental), you still know exactly where everything is. No wrestling with a carmaker’s proprietary software or trying to figure out where some setting or other is located. It’s not a complete replacement — in the Aston Martin’s case, there were still a handful of settings (like for ambient light projected when the doors open) that the luxury automaker controlled, but they were woven into CarPlay so you could pop open those windows and go back to Apple’s interface without visibly changing apps.

The dependable ubiquity of Apple’s CarPlay software will likely become even more essential as cars swap out their analog instrument clusters for screens, as Aston Martin did. There’s still a touch of the high-end automaker’s signature style as the default screen behind the wheel shows two traditional dials (one for the speedometer, one for RPMs) with Aston Martin’s livery. But that can be swapped out for other styles, from other dials with customizable colors to a full-screen Maps option.

Each of the half-dozen or so dashboard options was selected via square touchpads, smaller than a dime, on the wheel next to the other touch controls. On the dual-dial display types, I swiped vertically to cycle the central square through Maps directions, current music or other app information, and swiped horizontally to switch to another dashboard layout. No matter which one you choose, the bottom bar contains all the warning lights drivers will recognize from analog cars: even with digital displays, you’re not safe from the check engine light (which is a good thing).

Apple CarPlay Ultra doesn’t yet do everything I want. I wish I could also ask Siri to roll down the windows (as Google Built-In can — more on that later) and lock or unlock specific doors. If Apple is connected to the car enough to read the pressure in each tire, I wish it could link up with the engine readout and tell me in plain language what kind of maintenance issue has sprung up. Heck, I wish it could connect to the car remotely and blast the AC before I get in (or fire up the seat warmer), as some proprietary car apps can do. And while Apple Maps and Waze will be included at launch, Google Maps support won’t be, though it’s coming later.

These aren’t huge deficiencies, and they show where CarPlay Ultra could better meet driver needs in future updates, notwithstanding the potentially dicey security concerns of using CarPlay Ultra for remote climate control or unlocking. But they also show where the limits are today compared with Google’s more in-depth approach.

Google Built-In: Deeper car integrations — and, of course, Gemini AI

The day after Google I/O’s keynote was quieter back in May, as attendees flitted between focused sessions and demos of upcoming software. It was the ideal time to check out Google Built-In, which was appropriately shown off in a higher-end Volvo EX90 electric SUV (though not nearly as pricey as an Aston Martin). 

As mentioned above, Google Built-In integrates more deeply with vehicles than what I saw in Apple CarPlay Ultra, allowing users to change the climate through its interface or access other systems, including through voice requests. For instance, it can go beyond AC control to switch on the defroster, and can even raise and lower specific windows relative to the speaker’s position: cameras within the car (in the rearview mirror, if I remember right) meant that when my demonstrator asked to "roll down this window" while pointing over his left shoulder, the correct window rolled down.

Google Built-In is also connected to Gemini, Google’s AI assistant, for what the company is calling "Google Live," a separate and more capable version of the Android Auto assistant experience in cars right now. With a Live session, I could request music or directions much like I could with Siri — but my demo went further, as the demonstrator tasked Gemini with requests better suited for generative AI, such as asking, "Give me suggestions for a family outing" and telling it to send a specific text to a contact.

The demonstrator then asked Gemini for recipe advice — "I have chicken, rice and broccoli in the fridge, what can I make?" — as an example of a query someone might ask on the drive home.

Since you’re signed into your Google account, Gemini can consult anything connected to it, like emails and messages. It’s also trained on each carmaker’s user manuals, so if a warning light comes on, the driver can ask the voice assistant what it means rather than flipping through a dense printed manual.

There are other benefits to Google Built-In, like not needing your phone for some features. But there are also drawbacks, like the need to keep car software updated, requiring more work on Google’s end to make sure cars are protected from issues or exploits. They can’t just fix it in the most current version of Android — they’ll need to backport that fix to older versions that vehicles might still be on. 

This deeper integration with Google Built-In has a lot of the benefits of Apple CarPlay Ultra (a familiar interface, easier to access features), just cranked up to a greater degree. It surely benefits fans of hands-off controls, and interweaving Gemini naturally dovetails with Google’s investments, so it’s easy to see that functionality improving. But a greater reliance on Android within the car’s systems could be concerning as the vehicle ages: Will the software stop being supported? Will it slow down or be exposed to security exploits? A lot of questions remain regarding making cars open to phone software interfaces.



A Samsung Tri-Fold Phone Could Be in Your Future, if This Leak Is to Be Believed

UI animations might have revealed the imminent release of a so-called "Galaxy G Fold" device with three screens.

Samsung has been showing off mobile display concepts with three screens at trade events such as CES for several years, but it might finally bring one to market soon if a leaked UI animation is any indicator.

As reported by Android Authority, an animated image from a software build of One UI 8 appears to show what some are dubbing a "Galaxy G Fold" device with three display panels. The screens would be capable of displaying different information or working in unison as one large display. The new phone model could debut as early as next week at Samsung’s Unpacked event on July 9 in Brooklyn.

Huawei released a tri-folding phone in February, the Mate XT Ultimate Design. 

Some websites have gone into overdrive trying to uncover details on what Samsung’s new device might include and how much it may cost, with Phone Arena reporting that according to a Korean media report, it could be priced at about $3,000. 

Samsung didn’t immediately respond to a request for comment.



Copyright © Verum World Media