Technologies
Here’s What I Learned Testing Photoshop’s New Generative AI Tool
Adobe’s Firefly AI feature brings new fun and fakery to photos. It’s a profound change for image editing, though far from perfect.
Adobe has built generative AI abilities into its flagship image-editing software, releasing a Photoshop beta version Tuesday that dramatically expands what artists and photo editors can do. The move promises to release a new torrent of creativity even as it gives us all a new reason to pause and wonder if that sensational, scary or inspirational photo you see on the internet is actually real.
In my tests, detailed below, I found the tool impressive but imperfect. Adding it directly to Photoshop is a big deal, letting creators experiment within the software tool they’re likely already using without excursions to Midjourney, Stability AI’s Stable Diffusion or other outside generative AI tools.
With Adobe’s Firefly family of generative AI technologies arriving in Photoshop, you’ll be able to let the AI fill a selected part of the image with whatever it thinks most fitting – for example, replacing road cracks with smooth pavement. You can also specify the imagery you’d like with a text prompt, such as adding a double yellow line to the road.
Firefly in Photoshop can also expand an image, adding new scenery beyond the frame based on what’s already in the frame or what you suggest with text. Want more sky and mountains in your landscape photo? A bigger crowd at the rock concert? Photoshop will oblige, without today’s difficulties of finding source material and splicing it in.
The feature, called generative fill and scheduled to emerge from beta testing in the second half of 2023, can be powerful. In Adobe’s live demo, the tool was often able to match a photo’s tones, blend in AI-generated imagery seamlessly, infer the geometric details of perspective even in reflections and extrapolate the position of the sun from shadows and sky haze.
Such technologies have been emerging over the last year as Stable Diffusion, Midjourney and OpenAI’s Dall-E captured the imaginations of artists and creative pros. Now it’s built directly into the software they’re most likely to already be using, streamlining what can be a cumbersome editing process.
“It really puts the power and control of generative AI into the hands of the creator,” said Maria Yap, Adobe’s vice president of digital imaging. “You can just really have some fun. You can explore some ideas. You can ideate. You can create without ever necessarily getting into the deep tools of the product, very quickly.”
But you can’t sell anything yet. With Firefly technology, including what’s produced by Photoshop’s generative fill, “you may not use the output for any commercial purpose,” Adobe’s generative AI beta rules state.
Photoshop’s Firefly AI imperfect but useful
In my testing, I frequently ran into problems, many of them likely stemming from the limited range of the training imagery. When I tried to insert a fish on a bicycle into an image, Firefly only added the bicycle. I couldn’t get Firefly to add a kraken emerging from San Francisco Bay. A musk ox looked like a panda-moose hybrid.
Less fanciful material also presents problems. Text looks like an alien race’s script. Shadows, lighting, perspective and geometry weren’t always right.
People are hard, too. On close inspection, their faces were distorted in weird ways. Humans added into shots could be positioned too high in the frame or were otherwise unconvincingly blended in.
Still, Firefly is remarkable for what it can accomplish, particularly with landscape shots. I could add mountains, oceans, skies and hills to landscapes. A white delivery van in a night scene was appropriately yellowish to match the sodium vapor streetlights in the scene. If you don’t like the trio of results Firefly presents, you can click the “generate” button to get another batch.
Given the pace of AI developments, I expect Firefly in Photoshop will improve.
It’s hard and expensive to retrain big AI models, requiring a data center packed with expensive hardware to churn through data, sometimes taking weeks for the largest models. But Adobe plans relatively frequent updates to Firefly. “Expect [about] monthly updates for general improvements and retraining every few months in all likelihood,” Adobe product chief Scott Belsky tweeted Tuesday.
Automating image manipulation
For years, «Photoshop» hasn’t just referred to Adobe’s software. It’s also used as a verb signifying photo manipulations like slimming supermodels’ waists or hiding missile launch failures. AI tools automate not just fun and flights of fancy, but also fake images like an alleged explosion at the Pentagon or a convincingly real photo of the pope in a puffy jacket, to pick two recent examples.
With AI, expect editing techniques far more subtle than the extra smoke easily recognized as digitally added to photos of an Israeli attack on Lebanon in 2006.
It’s a reflection of the double-edged sword that is generative AI. The technology is undeniably useful in many situations but also blurs the line between what is true and what is merely plausible.
For its part, Adobe tries to curtail problems. It doesn’t permit prompts to create images of many political figures and blocks you for “safety issues” if you try to create an image of black smoke in front of the White House. And its AI usage guidelines prohibit imagery involving violence, pornography and “misleading, fraudulent, or deceptive content that could lead to real-world harm,” among other categories. “We disable accounts that engage in behavior that is deceptive or harmful.”
Firefly also is designed to skip over styling prompts like those that have provoked serious complaints from artists displeased to see their type of art reproduced by a data center. And it supports the Content Authenticity Initiative’s content credentials technology, which can be used to label an image as having been generated by AI.
Today, generative AI imagery made with Adobe’s Firefly website adds content credentials by default along with a visual watermark. When the Photoshop feature exits beta testing and ships later this year, imagery will include content credentials automatically, Adobe said.
People trying to fake images can sidestep that technology. But in the long run, it’ll become part of how we all evaluate images, Adobe believes.
“Content credentials give people who want to be trusted a way to be trusted. This is an open-source technology that lets everyone attach metadata to their images to show that they created an image, when and where it was created, and what changes were made to it along the way,” Adobe said. “Once it becomes the norm that important news comes with content credentials, people will then be skeptical when they see images that don’t.”
Generative AI for photos
Adobe’s Firefly family of generative AI tools began with a website that turns a text prompt like “modern chair made up of old tires” into an image. It’s added a couple of other options since, and Creative Cloud subscribers will also be able to try a lightweight version of the Photoshop interface on the Firefly site.
When OpenAI’s Dall-E brought that technology to anyone who signed up for it in 2022, it helped push generative artificial intelligence from a technological curiosity toward mainstream awareness. Now there’s plenty of worry along with the excitement as even AI creators fret about what the technology will bring now and in the more distant future.
Generative AI is a relatively new form of artificial intelligence technology. AI models can be trained to recognize patterns in vast amounts of data – in this case labeled images from Adobe’s stock art business and other licensed sources – and then to create new imagery based on that source data.
Generative AI has surged to mainstream awareness with language models used in tools like OpenAI’s ChatGPT chatbot, Google’s Gmail and Google Docs, and Microsoft’s Bing search engine. When it comes to generating images, Adobe employs an AI image generation technique called diffusion that’s also behind Dall-E, Stable Diffusion, Midjourney and Google’s Imagen.
Adobe calls Firefly for Photoshop a “co-pilot” technology, positioning it as a creative aid, not a replacement for humans. Yap acknowledges that some creators are nervous about being replaced by AI. Adobe prefers to see it as a technology that can amplify and speed up the creative process, spreading creative tools to a broader population.
“I think the democratization we’ve been going through, and having more creativity, is a positive thing for all of us,” Yap said. “This is the future of Photoshop.”
Editors’ note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.
Google’s New AI Features Are Trying to Make Data Entry a Thing of the Past
More Gemini AI features will come to Google Docs, Sheets and Slides.
The latest batch of Google updates to its workspace tools highlights AI’s promise to automate mundanity in the workplace. Google Docs, Slides, Sheets and Drive all have new AI-powered features, the company announced Tuesday. The one thing all these updates have in common? Gemini is using your files, emails and chats to give you relevant information, not random answers gleaned from the web.
These updates come as AI is playing a bigger role in our work lives, for better or worse. Agentic tools like Claude Cowork and coding assistants like Anthropic’s Claude Code and OpenAI’s Codex are more capable than chatbots and able to handle tasks independently. AI tools are also becoming more customized, with Google’s personalized intelligence rolling out across its platforms to help refine AI outputs to things that are relevant and useful for you. Google continues that trend with this new batch of Workspace updates.
New Gemini AI features in Google Workspace apps will cite their sources after each query. For example, if you ask Gemini in Google Docs to fill out an itinerary template, it will pull the information from your email, chats and files. The “sources” tab in the Gemini side panel will show you where it found the information it used, like your flight confirmation email and chats discussing dinner plans. Seeing where Gemini pulled its answers from is also how you’ll double-check Gemini’s work.
The most impressive new features are in Sheets, where AI can fill in the holes in your spreadsheets. You can describe what you want the AI to do with a simple prompt and avoid writing an exact formula. You can click on an empty cell, select the pop-up that says “Drag to fill with Gemini,” then highlight the cells you want Gemini to fill in. That deploys an AI agent to search the web to fill each cell with the necessary information.
For example, if you have a spreadsheet of the contact info for local companies, you can have Gemini search the web to fill in the location, CEO and other publicly available information for each company. The tool aims to dramatically reduce the time needed for manual data entry. Gemini can also summarize, categorize and create charts with prompts alone.
You can also chat with Gemini in Sheets and have it scour your raw data to make custom reports and charts. No need for pivot tables if they confound you as much as they baffle me. One of the biggest uses of AI at work is helping create presentations.
In Google Slides, you can now tell Gemini in natural language what you want to appear on a slide, and it will create it, matching the style of your existing slides. You can also ask Gemini to edit your slides if you don’t want to waste time painstakingly moving design elements around the slide. The AI should fill the slides with relevant information based on your instructions and the work files it has access to, so you shouldn’t need to replace a bunch of filler text.
If you use Docs, Sheets and Slides through your company’s Workspace account, you won’t be able to turn off AI features individually; the managing organization controls AI access for its users. Personal users can tweak their settings to limit Gemini. The new features are rolling out in beta now, in English only, to Google AI Ultra and Pro subscribers in the US, as well as some Google Workspace customers who are part of the Gemini Alpha testing program.
For more, check out the new cowork feature in Copilot and how to use Perplexity AI for deep research.
Tariffs implemented by President Donald Trump were struck down by the Supreme Court last month. Companies that were subjected to those fees, such as FedEx and Dollar General, have since sued the federal government, and Nintendo wants a piece of the action.
Nintendo filed a lawsuit against the federal government in the US Court of International Trade on Friday, as first spotted by Aftermath. The complaint seeks refunds of tariffs Nintendo paid, plus interest, and asks the court to declare the tariffs unlawful and stop the government from collecting them going forward.
«Since February 1, 2025, President Trump has executed the unlawful Executive Orders, imposing tariffs on imports from a vast swath of countries,» Nintendo said in the complaint.
When reached for comment, Nintendo of America confirmed the lawsuit.
«We can confirm that we filed a request. We have nothing else to share on this topic,» Nintendo of America said in an emailed statement on Friday, March 6.
It’s unclear how much Nintendo paid in tariffs, and it did not state an amount in the lawsuit. While the Switch 2 was priced at $450 when it launched last year, and has stayed at that amount, Nintendo did increase the price of the original Switch and accessories for both consoles. Microsoft and Sony also increased the prices of their hardware and accessories last year due to tariffs.
The White House didn’t immediately respond to a request for comment.
On Feb. 20, the Supreme Court ruled by a vote of 6 to 3 that the sweeping tariffs Trump instituted last year exceeded his executive powers. Following the ruling, on the same day, Trump announced a new set of tariffs of 10% on imported goods that would last for 150 days, starting Feb. 24.
The decision on what to do with the collected tariffs — a reported $166 billion — has been left to the US Court of International Trade. Judge Richard Eaton told US Customs and Border Protection on Wednesday, March 4, to refund the more than 330,000 importers that were forced to pay tariffs. On Friday, the CBP said it couldn’t easily issue tariff refunds because its system requires duties to be recalculated and refunds processed entry by entry. This process would involve tens of millions of transactions. The agency said it’s updating its systems and could start providing refunds by late April.