I Just Tried Photoshop’s New AI Tool. It Makes Photos Creative, Funny or Unreal
Adobe’s Firefly generative AI tool offers a new way to fiddle with photos. Expect a lot of fun and fakery.

Adobe is building generative AI abilities into its flagship image-editing software with a new Photoshop beta release Tuesday. The move promises to unleash a new torrent of creativity even as it gives us all another reason to pause and wonder whether that sensational, scary or inspirational photo you see on the internet is actually real.
In my tests, detailed below, I found the tool impressive overall but far from perfect. Adding it directly to Photoshop is a big deal, letting creators experiment within the software tool they’re likely already using without excursions to Midjourney, Stability AI’s Stable Diffusion or other outside generative AI tools.
With Adobe’s Firefly family of generative AI technologies arriving in Photoshop, you’ll be able to let the AI fill a selected part of the image with whatever it thinks most fitting – for example, replacing road cracks with smooth pavement. You can also specify the imagery you’d like with a text prompt, such as adding a double yellow line to the road.
Firefly in Photoshop can also expand an image, adding new scenery beyond the frame based on what’s already there or what you suggest with text. Want more sky and mountains in your landscape photo? A bigger crowd at the rock concert? Photoshop will oblige, without today’s difficulties of finding source material and splicing it in.
Photoshop’s Firefly skills can be powerful. In Adobe’s live demo, the tool was often able to match a photo’s tones, blend in AI-generated imagery seamlessly, infer the geometric details of perspective even in reflections and extrapolate the position of the sun from shadows and sky haze.
Such technologies have been emerging over the last year as Stable Diffusion, Midjourney and OpenAI’s Dall-E captured the imaginations of artists and creative pros. Now the technology is built directly into the software they’re most likely to already be using, streamlining what can be a cumbersome editing process.
“It really puts the power and control of generative AI into the hands of the creator,” said Maria Yap, Adobe’s vice president of digital imaging. “You can just really have some fun. You can explore some ideas. You can ideate. You can create without ever necessarily getting into the deep tools of the product, very quickly.”
Now you’d better brace yourself for that future.
Photoshop’s Firefly AI is imperfect but useful
In my testing, I frequently ran into problems, many of them likely stemming from the limited range of the training imagery. When I tried to insert a fish on a bicycle into an image, Firefly added only the bicycle. I couldn’t get Firefly to add a kraken emerging from San Francisco Bay. A musk ox looked like a panda-moose hybrid.
Less fanciful material also presented problems. Text looked like an alien race’s script. Shadows, lighting, perspective and geometry weren’t always right.
People are hard, too. On close inspection, their faces were distorted in weird ways, and humans added to shots were positioned too high in the frame or placed in other unconvincing ways.
Still, Firefly is remarkable for what it can accomplish, particularly with landscape shots. I could add mountains, oceans, skies and hills to landscapes. A white delivery van in a night scene came out appropriately yellowish, matching the sodium vapor streetlights. If you don’t like the trio of results Firefly presents, you can click the “generate” button to get another batch.
Given the pace of AI developments, I expect Firefly in Photoshop will improve.
“This is the future of Photoshop,” Yap said.
Automating image manipulation
For years, “Photoshop” hasn’t just referred to Adobe’s software. It’s also been used as a verb for photo manipulations like slimming supermodels’ waists or hiding missile launch failures. AI tools automate not just fun and flights of fancy but also fakery, like an alleged explosion at the Pentagon or a convincingly real photo of the pope in a puffy jacket, to pick two recent examples.
With AI, expect editing techniques far more subtle than the extra smoke easily recognized as digitally added to photos of an Israeli attack on Lebanon in 2006.
It’s a reflection of the double-edged sword that is generative AI. The technology is undeniably useful in many situations but also blurs the line between what is true and what is merely plausible.
For its part, Adobe tries to curtail problems. It doesn’t permit prompts to create images of many political figures and blocks you for “safety issues” if you try to create an image of black smoke in front of the White House. And its AI usage guidelines prohibit imagery involving violence, pornography and “misleading, fraudulent, or deceptive content that could lead to real-world harm,” among other categories. “We disable accounts that engage in behavior that is deceptive or harmful,” the guidelines add.
Firefly also is designed to skip over styling prompts that have provoked serious complaints from artists displeased to see their type of art reproduced by a data center. And it supports the Content Authenticity Initiative’s content credentials technology, which can be used to label an image as having been generated by AI.
Generative AI for photos
Adobe’s Firefly family of generative AI tools began with a website that turns a text prompt like “modern chair made up of old tires” into an image. It’s added a couple of other options since, and Creative Cloud subscribers will also be able to try a lightweight version of the Photoshop interface on the Firefly site.
When OpenAI’s Dall-E brought that technology to anyone who signed up for it in 2022, it helped push generative artificial intelligence from a technological curiosity toward mainstream awareness. Now there’s plenty of worry along with the excitement as even AI creators fret about what the technology will bring now and in the more distant future.
Generative AI is a relatively new form of artificial intelligence technology. AI models can be trained to recognize patterns in vast amounts of data – in this case labeled images from Adobe’s stock art business and other licensed sources – and then to create new imagery based on that source data.
Generative AI has surged to mainstream awareness with language models used in tools like OpenAI’s ChatGPT chatbot, Google’s Gmail and Google Docs, and Microsoft’s Bing search engine. When it comes to generating images, Adobe employs an AI image generation technique called diffusion that’s also behind Dall-E, Stable Diffusion, Midjourney and Google’s Imagen.
Adobe calls Firefly for Photoshop a “co-pilot” technology, positioning it as a creative aid, not a replacement for humans. Yap acknowledges that some creators are nervous about being replaced by AI. Adobe prefers to see it as a technology that can amplify and speed up the creative process, spreading creative tools to a broader population.
“I think the democratization we’ve been going through, and having more creativity, is a positive thing for all of us,” Yap said.
Scary Survey Results: Teen Drivers Are Often Looking at Their Phones
Troubling new research found that entertainment is the most common reason teens use their phones behind the wheel, followed by texting and navigation.

A new study reveals that teen drivers in the US are spending more than one-fifth of their driving time distracted by their phones, with many glances lasting long enough to significantly raise the risk of a crash. Published Thursday in the journal Traffic Injury Prevention, the research found that teens reported looking at their phones during 21.1% of each driving trip, on average. More than a quarter of those distractions lasted two seconds or longer, an amount of time widely recognized as dangerous at highway speeds.
Most distractions tied to entertainment, not emergencies
The top reason teens said they reached for their phones behind the wheel was for entertainment, cited by 65% of respondents. Texting (40%) and navigation (30%) were also common. Researchers emphasized that these distractions weren’t typically urgent, but rather habitual or social.
Teens know the risks
The study includes survey responses from 1,126 teen drivers across all four US regions, along with in-depth interviews with a smaller group of high schoolers. Most participants recognized that distracted driving is unsafe and believed their parents and peers disapproved of the behavior.
But many teens also assumed that their friends were doing it anyway, pointing to a disconnect between personal values and perceived social norms.
Teens think they can resist distractions
Interestingly, most teens expressed confidence in their ability to resist distractions. That belief, researchers suggest, could make it harder to change behavior unless future safety campaigns specifically target these attitudes.
The study’s lead author, Dr. Rebecca Robbins of Boston’s Brigham and Women’s Hospital, said interventions should aim to shift social norms while also emphasizing practical steps, such as enabling “Do Not Disturb” mode and physically separating drivers from their devices.
“Distracted driving is a serious public health threat and particularly concerning among young drivers,” Robbins said. “Driving distracted doesn’t just put the driver at risk of injury or death, it puts everyone else on the road in danger of an accident.”
What this means for parents and educators
The researchers say their findings can help guide educators and parents in developing more persuasive messaging about the dangers of distracted driving. One of the recommendations is that adults need to counter teens’ beliefs that phone use while driving is productive or harmless.
While the study’s qualitative component was limited by a small and non-urban sample, the authors believe the 38-question survey they developed can be used more broadly to assess beliefs, behaviors and the effectiveness of future safety efforts.
Nintendo Switch 2 Joy-Con Issues? It Might Just Be Your HDMI Cable
Make sure to use the HDMI cable included with the new gaming console.

As the Switch 2 continues to sell in the millions for Nintendo, it shouldn’t be a surprise that some issues would crop up with the console. It appears, however, that one problem Switch 2 owners are facing is simply a matter of using the wrong cable.
Reddit users have posted about their Joy-Cons disconnecting while they play on a docked Switch 2, an issue spotted earlier by IGN. Luckily, it appears the problem can be resolved by using the HDMI cable included with the Switch 2 rather than an older, slower one, including the cable that came with the original Nintendo Switch.
Nintendo laid out the solution on its support page for when the Joy-Con 2 starts disconnecting from the console:
- Confirm that you’re using an “Ultra High Speed” HDMI cable to connect the dock to the TV. If the cable isn’t Ultra High Speed, your console won’t perform as expected when docked.
- If you’re using a different cable than the one that came with the console, it should have “Ultra High Speed” printed on it.
- The HDMI cable that came with the original Nintendo Switch is not “Ultra High Speed” and should not be used with the Nintendo Switch 2 dock.
Nintendo didn’t immediately respond to a request for comment about the source of this issue.
Since the Switch 2 launch, many gamers have come to realize that Nintendo’s new console is very picky about what cables are connected to it. This goes for the HDMI cable as well as the power cable.
While the new and old Switch share the same name, they don’t share the same components. The Switch 2 is a huge upgrade in graphics power over the 2017 console, which means it needs an appropriate power supply. Failing to provide the Switch 2 with sufficient power is likely to cause issues, especially if the system has to work hard to run a game.