AI Slop for Christmas: Why McDonald’s and Coca-Cola’s AI Holiday Ads Missed the Mark
Commentary: Two billion-dollar companies using AI for holiday ads isn’t giving me that holly jolly feeling.

I am completely exhausted by huge corporations like McDonald’s and Coca-Cola choosing to rely so heavily on AI for their holiday ads. McDonald’s made $25.9 billion in revenue in 2024, and Coca-Cola made $47.1 billion. Do these companies expect us to be OK with AI slop garbage when they could’ve spent a tiny fraction of that to hire a real animator or videographer?
In case you haven’t been inundated with these AI commercials, I’ll back up a bit. Both McDonald’s and Coca-Cola have launched holiday-themed commercials that are undeniably made with AI — each bragged about its use of AI, which they have probably come to regret. They’re very different, showing the full range of what’s possible with AI in advertising. But the backlash against both proves we don’t have the appetite for AI slop.
McDonald’s commercial features a series of holiday-themed mishaps, set to a parody of the song It’s the Most Wonderful Time of the Year, about how it’s actually the most terrible time of the year. The commercial is only 30 seconds long and intended only for the Netherlands, but it has already garnered so much hate online that the company removed the video from its pages. The marketing agency behind the spot, The Sweetshop Film, still has the video up on its website.
The McDonald’s ad is very clearly AI, with short clips stitched together with a bunch of hard jump cuts. The text is barely legible, fine details are off and it just has that AI look I’ve come to quickly recognize as an AI reporter. In a now-deleted social media post, the marketing agency’s CEO talked about how it used various AI tools to create it. By contrast, the Coca-Cola commercial is a little more put-together. A Coca-Cola truck drives through a wintry landscape and into a snowy town, and forest animals awaken to follow the truck and its soda bottle contents to a lit Christmas tree in a town square. But even this video has clearly AI-generated elements.
I was disappointed but not surprised when I saw these ads and the resulting backlash. There has been a surge in creative generative AI tools, especially in the past year, with numerous AI tools built specifically for marketers. They promise to help create content, automate workflows and analyze data. A huge proportion (94%) of marketers have a dedicated AI budget, and three-quarters of them expect that budget to grow, according to Canva’s 2025 Marketing and AI report. That’s partly why we’ve seen a massive increase in AI-generated content in our social media feeds. It’s no wonder Merriam-Webster selected “slop” as its word of the year.
McDonald’s and Coca-Cola’s feel-good, festive commercials manage to hit upon every single controversial issue in AI, which is why they’re inspiring such strong reactions from viewers. AI content is becoming — has already become — normalized. We can’t escape chatbots online and AI slop in our feeds. McDonald’s and Coca-Cola’s use of AI is yet another sign that companies are plowing ahead with AI without truly considering how we’ll react. Like advertisements, AI is inescapable.
If AI in advertising is here to stay, it’s worth breaking down how it’s used and where we, as media consumers, don’t want to see it used.
Spotting the AI in Coca-Cola’s ad
McDonald’s now-removed ad was clearly AI, with its plastic-y people and jerky motions. Its format, a series of short clips stitched together with hard jump cuts, is another telltale sign since most AI video generators can only generate clips up to 10 or so seconds long. Coca-Cola’s ad was a little different, but the AI use was just as obvious.
The Holidays Are Coming ad is a remake of Coca-Cola’s popular 1995 ad. In a behind-the-scenes video, Coca-Cola breaks down how it was created. It’s obvious where AI was used to create the animals. But I’m not sure I believe the company went “pixel by pixel” to create its fuzzy friends.
Coca-Cola’s AI animals don’t look realistic; they look like AI. Their fur has some detail, but those finer elements aren’t as defined as they could be. They also aren’t consistent across the animal’s body. You can see the fur gets less detailed further back on the animal. That kind of detailed work is something AI video generators struggle with, but it’s something a (human) animator likely would’ve caught and corrected.
The animals make overexaggerated surprised faces when the truck drives past them, their mouths forming perfect circles. That’s another sign of AI. You can see in the behind-the-scenes video that someone clicks through different AI variations of a sea lion’s nose; generating variations to choose from is a common feature of AI programs. There’s also a glimpse of a feature that looks an awful lot like Photoshop’s generative fill. Google’s Veo video generator was definitely used at least once.
The company has been all-in on AI for a while, starting with a 2023 partnership with OpenAI. Even Coca-Cola’s advertising agency, Publicis Groupe, bragged about snatching Coca-Cola’s business with an AI-first strategy. It seems clear that the company won’t be swayed by its customers’ aversion to AI. (Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
All I want for Christmas is AI labels
There is exactly one thing Coca-Cola got right, and that’s the AI disclosure at the beginning of the video. It’s one thing to use AI in your content creation; it’s entirely another to lie about it. Labels are one of the best tools we have to help everyone who encounters a piece of content decipher whether it’s real or AI. Many social media apps let you simply toggle a setting before you post to label your content as AI-generated.
It’s so easy to be clear, yet so many brands and creators don’t disclose their AI use because they’re afraid of getting hate for it. If you don’t want to get hate for using AI, don’t use it! But letting people sit and debate about whether you did or didn’t is a waste of everyone’s time. The fact that AI-generated content is becoming indistinguishable from real photos and videos is exactly why we need to be clear when it’s used.
It’s our collective responsibility as a society to be transparent about how we’re using AI. Social media platforms try to flag AI-generated content, but those systems aren’t perfect. We should appreciate that Coca-Cola didn’t lie to us about this AI-generated content. It’s a very, very low bar, but many others don’t pass it. (I’m looking at you, Mariah Carey and Sephora. Did you use AI? Just tell us.)
AI in advertising
In June, Vogue readers were incensed when the US magazine ran a Guess ad featuring an AI-generated model. Models at the time spoke out about how AI was making it harder to get work on campaigns. Eagle-eyed fans caught J.Crew using “AI photography” a month later. Toys R Us made headlines last year when it ran a weird ad with an AI giraffe, though it did share that it was made with an early version of OpenAI’s Sora.
Something that really stung about Guess’s and J.Crew’s use of AI was how obvious it was that AI stood in for real models and photographers. While Coca-Cola’s and Toys R Us’s use of AI was equally clear, the AI animals didn’t hit quite the same. As the Toys R Us president put it, “We weren’t going to hire a giraffe.” Points for honesty?
Even so, it’s more than likely that real humans lost out on jobs in the creation of these AI ads. Both commercials could’ve been created, and probably improved, if they had used animators, designers and illustrators. Job loss due to AI worries Americans, and people working in creative industries are certainly at risk. It’s not because AI image and video generators are ready to wholly replace workers. It’s because, for businesses, AI’s allure of cutting-edge efficiency offers executives an easy rationale. It’s exactly what just happened at Amazon as it laid off thousands of workers.
It’s easy to look at Coca-Cola’s and McDonald’s AI holiday ads and brush them off as another tone-deaf corporate blunder, especially when there are so many other things to worry about. But in our strange new AI reality, it’s important to highlight the quiet moments that normalize this consequential, controversial technology just as much as the breakthrough moments.
So this holiday season, I think I’ll drink a Pepsi-owned Poppi cranberry fizz soda instead of a Coke Zero.
Adobe Firefly’s New AI Editing Tools Are a Step Toward More Precise AI Video
In an exclusive interview, Adobe shares how the company is building Firefly to be your forever partner for AI creation.
Anyone who’s used an AI image or video generator knows that these tools are often the opposite of precise. If you’re using an AI generator with a specific idea in mind, you’ll likely need to do a lot of work to bring that vision to life.
Adobe is convinced it can make its AI hub, Firefly, a place where AI creation is customizable and precise. That’s the aim behind the new AI video editing tools the company released on Tuesday.
Over the course of 2025, Adobe has quietly emerged as one of the best places to use generative AI tools. Firefly subscriptions start at $10 a month, making it an affordable program that integrates top models from Google, OpenAI, Runway, Luma and several other leading AI companies. Adobe is expanding that roster with Topaz Labs’ Astra (available in Firefly Boards) and Flux 2.1 from Black Forest Labs, available in Firefly and Photoshop desktop.
The partnerships are helping to make Firefly an all-in-one hub for creators to leverage AI, said Steve Newcomb, vice president of product for Firefly, in an exclusive interview. Just as Photoshop is the “career partner” of photographers, Firefly aims to become a partner for AI video and image creators.
“If you’re a photographer, [Photoshop] has everything that you could ever want. It has all the tooling, all the plugins, in one spot with one subscription. You don’t have to subscribe to 25 different photo things,” Newcomb said. “So for us, Firefly, our philosophy is, how do we be that home?”
One way is through partnerships with AI companies, similar to Photoshop plug-ins. Precise editing tools are another, he said.
That’s why Adobe is trying to make it easier to edit AI-generated content. Hallucinations are common in AI-generated images and videos, such as disappearing and reappearing objects, weird blurs and other inaccuracies. For professional creators who use Adobe, the inability to edit out hallucinations makes AI almost unusable for final projects.
In my own testing, I’ve often found that editing tools are basic, at best. At worst, they’re entirely absent, particularly for newer AI video technologies. Firefly’s new prompt-based editing for AI videos, announced on Tuesday, is a way to get that hands-on control.
If you’ve edited images in Firefly via prompting, the video setup will feel familiar. Even if you haven’t, prompt-based editing is essentially a fancy term for asking AI to modify things as you would when talking with a chatbot. Google’s Nano Banana Pro in Gemini is one example of an AI tool that lets you edit through prompts.
Firefly’s video prompt editing has the added bonus of allowing you to switch between models for edits: You can generate with Firefly and edit with Runway’s Aleph, for example.
As with any AI chatbot or tool, prompt-based editing isn’t always accurate. But it’s a nice option to have without leaving Firefly for Premiere Pro.
The plan is to go beyond just prompt-based editing, Newcomb said. More AI-based precision editing tools will be important for Firefly, allowing you to make even more minute changes. What will make them possible is something called layer-based editing, a behind-the-scenes technology that enables easier, detailed changes in AI-generated images and videos.
Adobe plans to implement layer-based editing in the future, and it will likely form the foundation for those tools. The goal is to make it easier to keep working in Firefly “until the last mile” of editing, Newcomb said.
“We can run the gamut of the precision continuum all the way to the end, and just think of prompting as being one of many tools. But it is absolutely not the only tool,” said Newcomb.
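If you’re curious what layer-based editing means in practice, here’s a toy Python sketch of the general principle: when a scene is stored as separate RGBA layers instead of flat pixels, you can change one element and cheaply re-blend, rather than regenerating the whole frame. This is a generic illustration of standard alpha compositing, not Adobe’s implementation, and the composite helper is my own invention.

```python
# Toy sketch of layer-based editing: edit one layer, re-composite cheaply.
# This is NOT Adobe's implementation, just the standard "over" blend idea.
import numpy as np

def composite(layers):
    """Blend a stack of RGBA layers, bottom to top, with the 'over' operator."""
    out = np.zeros_like(layers[0], dtype=float)
    for layer in layers:
        alpha = layer[..., 3:4]  # per-pixel opacity of this layer
        out[..., :3] = layer[..., :3] * alpha + out[..., :3] * (1.0 - alpha)
        out[..., 3:4] = alpha + out[..., 3:4] * (1.0 - alpha)
    return out

# A solid red background and a half-transparent green "subject" layer.
h, w = 64, 64
background = np.zeros((h, w, 4)); background[..., 0] = 1.0; background[..., 3] = 1.0
subject = np.zeros((h, w, 4)); subject[..., 1] = 1.0; subject[..., 3] = 0.5

before = composite([background, subject])

# Recoloring only the subject layer (green -> blue) never touches the background;
# a quick re-composite stands in for regenerating the entire scene.
subject[..., 1], subject[..., 2] = 0.0, 1.0
after = composite([background, subject])

print(before[0, 0], after[0, 0])  # only the subject's contribution changes
```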
For now, there is another piece of video editing news that could help you build more precise AI videos.
AI video editing without Premiere Pro expertise
Adobe is also bringing its full AI video editor into beta on Tuesday, the next step toward making editable and, therefore, usable AI video.
Debuted at the company’s annual Max conference in October, the video editor is now launching in public beta. It sits between basic video editors and the feature-stuffed Premiere Pro. It’ll be great for AI enthusiasts who want more editing firepower than OpenAI’s or Google’s built-in tools offer, without needing expertise in Premiere Pro.
The video editor is meant to help you put all the pieces of your project together in one place. It has a multitrack timeline for you to compile all your clips and audio tracks. That’s especially important because, while you can create your own AI speech and soundtracks, Firefly AI videos don’t natively generate with sound. (You can use Veo 3 or Sora 2 in Firefly to generate those initial clips with audio, though.) You can also export in a variety of aspect ratios.
“Think of the video editor as being one of our cornerstone releases that is helping us move toward being one place, one home, where you can have one subscription and get to every model you ever needed to get the job done,” Newcomb said.
I’m Happy the 2026 Moto G Power Is $300 but Bummed It Lacks Wireless Charging
Motorola switches up some of the features on its lower-cost phone while adding an improved selfie camera and a larger battery.
Motorola has given the $300 Moto G Power several notable upgrades over the 2025 model. But the company did so at the expense of one of the prior phone’s best features: wireless charging.
I’m bummed about this news. I liked the 2025 Moto G Power because I could recharge it with a cable or wirelessly, and I was delighted to find such flexibility on such an affordable phone. But perhaps one reason Motorola removed the feature is that it upgraded the new Moto G Power’s battery to a 5,200-mAh capacity, 200 mAh more than the 2025 model’s. Maybe Motorola is betting you won’t need to recharge the phone as frequently. The Moto G (2026) and Moto G Play (2026) also got larger batteries.
Despite lacking wireless charging, the new Moto G Power supports a respectable 30W wired charging speed.
The new Moto G Power runs on the MediaTek Dimensity 6300 chip with 8GB of RAM, the same processor and memory as the 2025 edition. The 2026 Moto G Power has RAM Boost to help it run more apps and tasks. We’ll have to test it to see whether there’s a noticeable difference between the new Moto G Power’s performance and that of its cheaper 2026 siblings, the Moto G Play and Moto G. The phone runs Android 16 and has Google’s Circle to Search along with the Gemini AI assistant.
Many of the 2026 Moto G Power’s other features carry over from the 2025 edition, which is largely a good thing. It has IP68 and IP69 ratings for dust and water resistance, so it should handle a spilled drink, or even a brief dunk, with no problem. It has a 6.8-inch, 1,080-pixel-resolution display and a pair of lenses on the back: a 50-megapixel wide-angle camera and an 8-megapixel ultrawide. The Moto G Power gets a new 32-megapixel front-facing camera, which should improve photos and video calls over the 2025 model’s 16-megapixel selfie camera.
The Moto G Power comes in white (Pantone pure cashmere) and dark blue (Pantone evening blue) and will go on sale Jan. 8 in the US, with initial availability at Motorola’s website, Best Buy and Amazon. It will also be available at wireless carriers over the coming months.
I Uploaded a Photo of My Face to Get an AI-Generated Biological Age Estimate. It Shocked Me
Can AI help with a personalized health screening? I uploaded a selfie to find out.
Somewhere on TikTok, I discovered that you can upload a selfie to ChatGPT and ask what nonsurgical treatments you could consider for antiaging. It gives you a full breakdown, like an AI cosmetic surgeon.
Pretty cool, especially given the cost of a cosmetic doctor.
But I’d tested out ChatGPT already for beauty advice and FaceApp to show me how I’ll age. I was looking for advice from AI that went deeper with insights based on my skin and what’s going on underneath its surface. It is the body’s largest organ, after all.
That’s when I discovered Noom’s new AI Face Scan feature, which promises longevity stats from a simple selfie in seconds. I had to try it, even though I was scared about what it would reveal. Years of partying hard and traveling the world likely accelerated my aging process.
Worth it, though.
Noom, a health and longevity platform, launched Face Scan and Future Me in October 2025; both are free to use via the app. Face Scan is powered by NuraLogix, while Future Me uses Haut.ai.
Let’s see my biological age according to AI.
Huberman-style health insights, using AI
To access the AI features, I downloaded the Noom app and created a login. Noom asked a few questions like my age, height, weight and health goals. Once I was set up, I navigated to the Health tab, then selected Health Insights.
I was presented with three options: Face Scan, Future Me and Body Composition Scan.
I was more interested in the health screening report to find out what AI predicted for my biological age, metabolic and heart health indicators and vital signs, as well as what Noom would recommend to improve them. Biological age tests are usually conducted through blood tests, and even then, they aren’t 100% accurate or indicative of overall health.
Selfie time, but do I dare do it without makeup? It’ll probably be more accurate.
Noom opened with its privacy policy, which you have to consent to, then asked a few more questions, such as my birthday and whether I smoke, take any medications or have diabetes. This information is used for the biological age calculator. I scanned the privacy policy and couldn’t see any red flags.
Then it gave me some tips on how to take the best selfie. Basically, an intense close-up.
While it was loading, it gave more context about how it worked. Noom uses remote photoplethysmography (rPPG) to detect tiny changes in color and light absorption beneath the skin with the aim of determining blood volume and flow, heart rate, breathing and stress levels.
Photoplethysmography is the technology used in wearables, but studies are split on the validity of rPPG.
One study published in 2023 concluded that it’s uncertain to what extent rPPG can estimate blood pressure in real-world settings, due to physiologic, environmental and technical limitations. Another study, from 2021, stated that “image processing based approaches for rPPG have been shown to perform better than contact-based sensors for pulse rate determination.”
Noom addresses this in its fine print in the app, stating, “Health insights in this report are for informational purposes only and do not constitute medical advice.”
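If you want a feel for what rPPG is actually doing under the hood, here’s a minimal Python sketch of the core idea: average a face region’s green channel frame by frame, then find the dominant frequency in the plausible heart rate band. This is a simplified illustration of the general technique, not Noom’s or NuraLogix’s pipeline, and the synthetic signal below stands in for real video.

```python
# Minimal rPPG-style pulse estimate: not Noom's pipeline, just the signal idea.
import numpy as np

def estimate_bpm(green_means, fps):
    """Estimate pulse from per-frame mean green-channel values of a face region."""
    signal = green_means - green_means.mean()          # drop the DC (skin tone) offset
    spectrum = np.abs(np.fft.rfft(signal))             # magnitude of frequency components
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # frequency bins in Hz
    band = (freqs >= 0.7) & (freqs <= 4.0)             # plausible pulses: 42 to 240 bpm
    peak = freqs[band][np.argmax(spectrum[band])]      # strongest periodic component
    return peak * 60.0                                 # Hz to beats per minute

# Synthetic 10-second "clip" at 30 fps: a faint 1.2 Hz (72 bpm) pulse plus sensor noise.
rng = np.random.default_rng(0)
t = np.arange(300) / 30.0
clip = 0.05 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0.0, 0.01, 300)
print(f"Estimated pulse: {estimate_bpm(clip, fps=30.0):.0f} bpm")  # ~72
```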
What I saw next was what I feared: a biological age of 44 when I’m 37.
Granted, I took this photo with no makeup, a few days after an IVF procedure. Growing up in the Australian sun likely didn’t help, either. According to Noom, it reads “stress patterns from tiny color changes in the skin.”
Here’s what the report said:
Based on this, the AI app said I should focus on improving my cardiac workload and heart rate variability.
Next up, my metabolic health, which it said was optimal:
Looks like I need to work on the high triglycerides.
Next, I was hoping for a full report with lifestyle suggestions, but it directed me straight to a page selling GLP-1 drugs, like Ozempic, to “lower my biological age.” Ouch.
This was a bit disappointing because it felt like the endgame was to get me to buy Noom’s products, rather than provide substantive advice.
So instead, I took all this information across to good ol’ ChatGPT for an action plan that I can review and reach out to my doctor about.
Here was my prompt: “I used Noom’s Face Scan feature to learn my biological age and health markers. Can you review the results and provide an action plan on how I can improve my health? I need to improve my cardiac workload and heart rate variability. It said I’m at risk of high triglycerides. Explain what all of this means and what I can do about it to reduce my biological age.”
Reviewing my results
I liked how I could feed all of the information from Noom into ChatGPT for further context. For example, ChatGPT told me the results don’t necessarily mean I’m “unhealthy,” but rather that I have physiological stress markers on my face, possibly due to inflammation and stress. It even said, “recent medical treatments can temporarily worsen bloat/inflammation.”
Thanks, ChatGPT.
Here’s where it got tactical with an action plan to reduce my biological age, improve heart function and lower triglycerides:
It also gave me a 30-day health optimization plan: 20 to 40 minutes of cardio, five minutes of HRV breathing, magnesium at night, a 10-minute walk after heavier meals, 30 to 40 grams of protein with every meal, 2 liters of water, morning sunlight and bedtime between 10:30 and 11 p.m., all daily. Several times a week, it suggested yoga or Pilates, strength training, sauna sessions and long outdoor walks, plus a Mediterranean-style diet high in omega-3s and fiber and low in carbs and alcohol.
According to ChatGPT, following these basic tenets would improve my biological age within four to six weeks.
It’s important to note that neither ChatGPT’s plan nor Noom’s report is the same as an accurate medical diagnosis or treatment plan from a qualified clinician. Always consult your doctor when you have health concerns or are considering significant lifestyle changes, such as a new diet or supplements. Seeing a doctor also keeps your medical information private.
The verdict
While I didn’t love Noom on its own, I did find it useful to feed its insights into ChatGPT. I’ve ended the year with a big goal for 2026: to get serious about strength training. That echoes health data I’ve explored with AI before.
Now I have a doable action plan to inform my new year’s goal setting.
(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)