Technologies

AI Is Taking Over Social Media, but Only 44% of People Are Confident They Can Spot It, CNET Finds

Half of social media users said they want better labels on AI-generated and edited posts.

AI slop has infected every social media platform, from soulless images to bizarre videos and superficially literate text. The vast majority of US adults who use social media (94%) believe they encounter content that was created or altered by AI, but only 44% of US adults say they’re confident they can tell real photos and videos from AI-generated ones, according to an exclusive CNET survey. That’s a big problem.

There are a lot of different ways people are fighting back against AI content. Some solutions focus on better labels for AI-created content, since it’s harder than ever to trust our eyes. Of the 2,443 respondents who use social media, half (51%) believe we need better AI labels online. Others (21%) believe there should be a total ban on AI-generated content on social media. Only a small group (11%) of respondents say they find AI content useful, informative or entertaining.

AI isn’t going anywhere, and it’s fundamentally reshaping the internet and our relationship with it. Our survey shows that we still have a long way to go to reckon with it.

Key findings

  • Most US adults who use social media (94%) believe they encounter AI content there, yet far fewer (44%) are confident they can distinguish real images and videos from AI-generated ones.
  • Many US adults (72%) said they take action to determine if an image or video is real, but some, particularly Boomers (36%) and Gen Xers (29%), don’t do anything.
  • Half of US adults (51%) believe AI-generated and edited content needs better labeling. 
  • One in five (21%) believe AI content should be prohibited on social media, with no exceptions.

US adults don’t feel they can spot AI media

Seeing is no longer believing in the age of AI. Tools like OpenAI’s Sora video generator and Google’s Nano Banana image model can create hyperrealistic media, with chatbots smoothly assembling swaths of text that sound like a real person wrote them. 

So it’s understandable that a quarter (25%) of US adults say they aren’t confident in their ability to distinguish real images and videos from AI-generated ones. Older generations, including Boomers (40%) and Gen X (28%), are the least confident. If folks don’t have a ton of knowledge or exposure to AI, they’re likely to feel unsure about their ability to accurately spot AI.

People take action to verify content in different ways

AI’s ability to mimic real life makes it even more important to verify what we’re seeing online. Nearly three in four US adults (72%) said they take some form of action to determine whether an image or video is real when it piques their suspicions, with Gen Z being the most likely (84%) of the age groups to do so. The most obvious — and popular — method is closely inspecting the images and videos for visual cues or artifacts. Over half of US adults (60%) do this. 

But AI innovation is a double-edged sword; models have improved rapidly, eliminating the telltale errors we used to rely on to spot AI-generated content. The em dash was never a reliable sign of AI, but extra fingers in images and continuity errors in videos were once prominent red flags. Newer AI models usually don’t make those pedestrian mistakes. So we all have to work a little bit harder to determine what’s real and what’s fake.

As visual indicators of AI disappear, other forms of verifying content are increasingly important. The next two most common methods are checking for labels or disclosures (30%) and searching for the content elsewhere online (25%), such as on news sites or through reverse image searches. Only 5% of respondents reported using a deepfake detection tool or website.

But 25% of US adults don’t do anything to determine if the content they’re seeing online is real. That lack of action is highest among Boomers (36%) and those in Gen X (29%). This is worrisome — we’ve already seen that AI is an effective tool for abuse and fraud. Understanding the origins of a post or piece of content is an important first step to navigating the internet, where anything could be falsified.

Half of US adults want better AI labels

Many people are working on solutions to deal with the onslaught of AI slop, and labeling is a major area of opportunity. Most labeling today relies on social media users to disclose that a post was made with the help of AI. Platforms can also detect and label AI content automatically, but reliable detection is difficult, which leads to haphazard results. That’s likely why 51% of US adults believe we need better labeling on AI content, including deepfakes. Support was strongest among Millennials and Gen Z, at 56% and 55%, respectively.

Other solutions aim to control the flood of AI content shared on social media. All of the major platforms allow AI-generated content, as long as it doesn’t violate their general content guidelines — nothing illegal or abusive, for example. But some platforms have introduced tools to limit the amount of AI-generated content you see in your feeds; Pinterest rolled out its filters last year, while TikTok is still testing some of its own. The idea is to give every person the ability to permit or exclude AI-generated content from their feeds.

But 21% of respondents believe that AI content should be prohibited on social media altogether, no exceptions allowed. That number is highest among Gen Z at 25%. When asked if they believed AI content should be allowed but strictly regulated, 36% said yes. Those low percentages may be explained by the fact that only 11% find AI content provides meaningful value — that it’s entertaining, informative or useful — and that 28% say it provides little to no value.

How to limit AI content and spot potential deepfakes

Your best defense against being fooled by AI is to be eagle-eyed and trust your gut. If something is too weird, too shiny or too good to be true, it probably is. But there are other steps you can take, like using a deepfake detection tool. There are many options; I recommend starting with the Content Authenticity Initiative’s tool, since it works with several different file types.

You can also check out the account that shared the post for red flags. Many times, AI slop is shared by mass slop producers, and you’ll easily be able to see that in their feeds. They’ll be full of weird videos that don’t seem to have any continuity or similarities between them. You can also check to see if anyone you know is following them or if that account isn’t following anyone else (that’s a red flag). Spam posts or scammy links are also indications that the account isn’t legit.

If you want to limit the AI content you see in your social feeds, check out our guides for turning off or muting Meta AI in Instagram and Facebook and filtering out AI posts on Pinterest. If you do encounter slop, you can mark the post as something you’re not interested in, which should indicate to the algorithm that you don’t want to see more like it. Outside of social media, you can disable Apple Intelligence, the AI in Pixel and Galaxy phones and Gemini in Google Search, Gmail and Docs.

Even if you do all this and still get occasionally fooled by AI, don’t feel too bad about it. There’s only so much we can do as individuals to fight the gushing tide of AI slop. We’re all likely to get it wrong sometimes. Until we have a universal system to effectively detect AI, we have to rely on the tools we have and our ability to educate each other on what we can do now.

Methodology

CNET commissioned YouGov Plc to conduct the survey. All figures, unless otherwise stated, are from YouGov Plc. The total sample size was 2,530 adults, of which 2,443 use social media. Fieldwork was undertaken Feb. 3-5, 2026. The survey was carried out online. The figures have been weighted and are representative of all US adults (aged 18 plus).

Technologies

Google’s New AI Features Are Trying to Make Data Entry a Thing of the Past

More Gemini AI features will come to Google Docs, Sheets and Slides.

The latest batch of Google updates to its workspace tools highlights AI’s promise to automate mundanity in the workplace. Google Docs, Slides, Sheets and Drive all have new AI-powered features, the company announced Tuesday. The one thing all these updates have in common? Gemini is using your files, emails and chats to give you relevant information, not random answers gleaned from the web.

These updates come as AI is playing a bigger role in our work lives, for better or worse. Agentic tools like Claude Cowork and coding assistants like Anthropic’s Claude Code and OpenAI’s Codex are more capable than chatbots and able to handle tasks independently. AI tools are also becoming more customized, with Google’s personalized intelligence rolling out across its platforms to help refine AI outputs to things that are relevant and useful for you. Google continues that trend with this new batch of Workspace updates.

New Gemini AI features in Google Workspace apps will cite their sources after each query. For example, if you ask Gemini in Google Docs to fill out an itinerary template, it will pull the information from your email, chats and files. The “sources” tab in the Gemini side panel will show you where it found the information it used, like your flight confirmation email and chats discussing dinner plans. Seeing where Gemini pulled its answers from is also how you’ll double-check Gemini’s work.

The most impressive new features are in Sheets, where AI can fill in the holes in your spreadsheets. You can describe what you want the AI to do with a simple prompt and avoid writing an exact formula. You can click on an empty cell, select the pop-up that says “Drag to fill with Gemini,” then highlight the cells you want Gemini to fill in. That deploys an AI agent to search the web to fill each cell with the necessary information.

For example, if you have a spreadsheet of the contact info for local companies, you can have Gemini search the web to fill in the location, CEO and other publicly available information for each company. The tool aims to dramatically reduce the time needed for manual data entry. Gemini can also summarize, categorize and create charts with prompts alone.

You can also chat with Gemini in Sheets and have it scour your raw data to make custom reports and charts. No need for pivot tables if they confound you as much as they baffle me. One of the biggest uses of AI at work is helping create presentations.

In Google Slides, you can now tell Gemini in natural language what you want to appear on a slide, and it will create it, matching the style of your existing slides. You can also ask Gemini to edit your slides if you don’t want to waste time painstakingly moving design elements around the slide. The AI should fill the slides with relevant information based on your instructions and the work files it has access to, so you shouldn’t need to replace a bunch of filler text.

If you use Docs, Sheets and Slides through your company’s Workspace account, you won’t be able to turn off AI features individually; the managing company controls AI access for users. Personal users can tweak their settings to limit Gemini. The new features are rolling out in beta now, in English only, to Google AI Ultra and Pro subscribers in the US, as well as some Google Workspace customers who are part of the Gemini Alpha testing program.

For more, check out the new cowork feature in Copilot and how to use Perplexity AI for deep research.

Technologies

Nintendo Switches Lanes, Sues US Over Tariffs

Mario wants his money back.

Tariffs implemented by President Donald Trump were struck down by the Supreme Court last month. Companies that were subjected to those fees, such as FedEx and Dollar General, have since sued the federal government, and Nintendo wants a piece of the action. 

Nintendo filed a lawsuit against the federal government in the US Court of International Trade on Friday, as first spotted by Aftermath. The complaint seeks refunds of tariffs Nintendo paid, plus interest, and asks the court to declare the tariffs unlawful and stop the government from collecting them going forward. 

«Since February 1, 2025, President Trump has executed the unlawful Executive Orders, imposing tariffs on imports from a vast swath of countries,» Nintendo said in the complaint. 

When reached for comment, Nintendo of America confirmed the lawsuit. 

«We can confirm that we filed a request. We have nothing else to share on this topic,» Nintendo of America said in an emailed statement on Friday, March 6. 

It’s unclear how much Nintendo paid in tariffs, and it did not state an amount in the lawsuit. While the Switch 2 launched at $450 last year and has stayed at that price, Nintendo did increase the price of the original Switch and of accessories for both consoles. Microsoft and Sony also increased the prices of their hardware and accessories last year due to tariffs.

The White House didn’t immediately respond to a request for comment. 

On Feb. 20, the Supreme Court ruled by a vote of 6 to 3 that the sweeping tariffs Trump instituted last year exceeded his executive powers. The same day, Trump announced a new 10% tariff on imported goods that would last 150 days, starting Feb. 24.

The decision on what to do with the collected tariffs, a reported $166 billion, has been left to the US Court of International Trade. Judge Richard Eaton told US Customs and Border Protection on Wednesday, March 4, to refund the more than 330,000 importers that were forced to pay tariffs. On Friday, CBP said it couldn’t easily issue refunds because its system requires duties to be recalculated and refunds processed entry by entry, a process that would involve tens of millions of transactions. The agency said it’s updating its systems and could start providing refunds by late April.

Technologies

Sony WF-1000XM6 vs. Samsung Galaxy Buds 4 Pro Earbuds: A Photo Finish

Copyright © Verum World Media