Technologies

AI Is Eating the Internet, but Many Are Hopeful Human-Made Content Will Win Out

Publishers, including CNET’s owner, are taking a wide range of approaches to try to make it through AI’s changes.

With AI encroaching on all corners of the internet, from bogus articles to Instagram Reels, there’s concern that human-made content is under threat, and as a result, so are the film, music and publishing industries.

There are AI actresses, AI-generated music filling up Spotify and AI answers at the top of Google Search, above the 10 blue links. 

But consumers of news and media remain uncomfortable with the idea of fully AI-generated content. A recent Reuters Institute survey of people in six countries, including the US, found that only 12% of people are comfortable with fully AI-generated news, compared to 62% who prefer their news entirely human-produced. 

That desire for human-made content has some publishing executives optimistic, including Vivek Shah, CEO of CNET owner Ziff Davis. He said as much in a recent episode of the podcast Channels with Peter Kafka.

"The narrative is that the declines in search traffic somehow are existential, and I just don't see it that way," said Shah. 

"I still think we prefer words and sounds and videos from humans," he added. "Do I think that the robots will eat into some of that? I do."

Internet search and content analysts see the same preferences among consumers. 

"I also agree that as Google continues to roll out new AI search features like AI Overviews and AI Mode, users will continue to seek authentic content from real humans," said Lily Ray, vice president of SEO strategy and research at Amsive, a marketing agency, "and when the AI answer isn't sufficient to meet those needs, they will continue to search for content that provides that sense of real human connection."

As AI is rapidly shifting how people find information online, publishers are moving quickly to strike deals. News Corp, Axel Springer and Future PLC have signed content licensing deals with OpenAI, for example. Other companies are taking on AI companies directly. 

AI models are trained on vast swaths of information found online, which includes published journalistic content. Recently, Penske Media, which owns Variety and Rolling Stone, sued Google over AI Overviews, the feature that places AI-generated answers at the top of search results. Penske alleges that Google is abusing its monopoly power in online search and that AI Overviews steals Penske content, eliminating the need for readers to click through to articles. 

Ziff Davis, like the New York Times, has sued ChatGPT creator OpenAI for scraping journalistic content to train AI models rather than signing a licensing deal. Shah told Kafka that OpenAI rebuffed Ziff Davis' attempts to negotiate one. 

OpenAI didn’t immediately respond to a request for comment. Ziff Davis said Shah was unavailable for comment.

The strong response from publishers comes as Wall Street rewards Google, chipmaker Nvidia and OpenAI partner Microsoft with record valuations even as the publishing industry contracts. Traffic has dropped sharply across the internet in 2025, and this year the industry has seen layoffs at CNN, Vox Media, HuffPost, the LA Times and NBC.




Another way publishers are fighting back is by trying to block AI crawlers from scraping their content for free. Along with blocks in robots.txt, a file on a website that spells out permissions for online crawlers, Ziff Davis has signed on to the RSL standard, a more robust layer of technology that can block AI bots from sucking up content. The hope is that if enough publishers sign on, they can present a united front and bargain with Big Tech from a stronger position. 
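As an illustration, a publisher's robots.txt can single out AI training crawlers by their published user-agent tokens while leaving ordinary search crawlers alone. The tokens below (GPTBot, ClaudeBot, Google-Extended) are ones these companies document for their crawlers; note that robots.txt is purely advisory, which is why publishers are also pursuing standards like RSL:

```
# Sketch of robots.txt directives blocking common AI training crawlers.
# Compliance is voluntary on the crawler's part.

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# All other crawlers (including regular search indexing) remain allowed.
User-agent: *
Allow: /
```

Because each directive targets a specific user-agent string, a bot that ignores robots.txt or identifies itself differently slips through, which is the gap the RSL effort aims to close.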

Despite the growing popularity of AI, Shah feels that ultimately people prefer "words and sounds and videos from humans." He also notes that brands are increasingly trying to get their products to fill up AI search results, which isn't good for objective purchasing decisions.

"If you start to look into citations in LLM chatbots, you're going to see that sources have gone from journalism sources to marketing sources," said Shah. "And so, someone's got to measure this because I am amazed at how many citations are not publisher.com but a brand.com."


Today’s NYT Mini Crossword Answers for Tuesday, Oct. 14

Here are the answers for The New York Times Mini Crossword for Oct. 14.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Today's Mini Crossword has an odd vertical shape, with an extra Across clue and only four Down clues. The clues are not terribly difficult, but one or two could be tricky. Read on if you need the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Smokes, informally
Answer: CIGS

5A clue: "Don't have ___, man!" (Bart Simpson catchphrase)
Answer: ACOW

6A clue: What the vehicle in "lane one" of this crossword is winning?
Answer: RACE

7A clue: Pitt of Hollywood
Answer: BRAD

8A clue: "Yeah, whatever"
Answer: SURE

9A clue: Rd. crossers
Answer: STS

Mini down clues and answers

1D clue: Things to "load" before a marathon
Answer: CARBS

2D clue: Mythical figure who inspired the idiom "fly too close to the sun"
Answer: ICARUS

3D clue: Zoomer around a small track
Answer: GOCART

4D clue: Neighbors of Norwegians
Answer: SWEDES



Watch SpaceX’s Starship Flight Test 11



New California Law Wants Companion Chatbots to Tell Kids to Take Breaks

Gov. Gavin Newsom signed the new requirements on AI companions into law on Monday.

AI companion chatbots will have to remind users in California that they’re not human under a new law signed Monday by Gov. Gavin Newsom.

The law, SB 243, also requires companion chatbot companies to maintain protocols for identifying and addressing cases in which users express suicidal ideation or self-harm. For users under 18, chatbots will have to provide a notification at least every three hours that reminds users to take a break and that the bot is not human.

It’s one of several bills Newsom has signed in recent weeks dealing with social media, artificial intelligence and other consumer technology issues. Another bill signed Monday, AB 56, requires warning labels on social media platforms, similar to those required for tobacco products. Last week, Newsom signed measures requiring internet browsers to make it easy for people to tell websites they don’t want them to sell their data and banning loud advertisements on streaming platforms. 

AI companion chatbots have drawn particular scrutiny from lawmakers and regulators in recent months. The Federal Trade Commission launched an investigation into several companies in response to complaints by consumer groups and parents that the bots were harming children’s mental health. OpenAI introduced new parental controls and other guardrails in its popular ChatGPT platform after the company was sued by parents who allege ChatGPT contributed to their teen son’s suicide. 

"We've seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won't stand by while companies continue without necessary limits and accountability," Newsom said in a statement.




One AI companion developer, Replika, told CNET that it already has protocols to detect self-harm as required by the new law, and that it is working with regulators and others to comply with requirements and protect consumers. 

"As one of the pioneers in AI companionship, we recognize our profound responsibility to lead on safety," Replika's Minju Song said in an emailed statement. Song said Replika uses content-filtering systems, community guidelines and safety systems that refer users to crisis resources when needed.

Read more: Using AI as a Therapist? Why Professionals Say You Should Think Again

A Character.ai spokesperson said the company "welcomes working with regulators and lawmakers as they develop regulations and legislation for this emerging space, and will comply with laws, including SB 243." OpenAI spokesperson Jamie Radice called the bill a "meaningful move forward" for AI safety. "By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country," Radice said in an email.

One bill Newsom has yet to sign, AB 1064, would go further by prohibiting developers from making companion chatbots available to children unless the AI companion is "not foreseeably capable of" encouraging harmful activities or engaging in sexually explicit interactions, among other things. 
