
Technologies

Trump’s AI Action Plan Is Here: 5 Key Takeaways

The president wants to cut regulations on AI companies and data centers. Critics say the proposal carries big risks.

The Trump administration on Wednesday laid out the steps it plans to take to ensure "global AI dominance" for the US, with an AI Action Plan that calls for cutting regulations to speed up the development of artificial intelligence tools and the infrastructure to power them.

Critics said the plan is a handout to tech and fossil fuel companies, slashing rules that could protect consumers, prevent pollution and fight climate change.

Though the plan itself isn’t binding (it includes dozens of policy recommendations), Trump did sign three executive orders to put some of these steps into action. The changes and proposals follow how the Trump administration has approached AI and technology over the past six months — giving tech companies a largely free hand; focusing on beating China; and prioritizing the construction of data centers, factories and fossil fuel power plants over environmental regulations.

It’s seizing on the moment created by the arrival of ChatGPT less than three years ago and the ensuing wave of generative AI efforts by Google, Meta and others.

"My administration will use every tool at our disposal to ensure that the United States can build and maintain the largest and most powerful and advanced AI infrastructure anywhere on the planet," Trump said during remarks Wednesday evening at a summit presented by the Hill and Valley Forum and the All-In Podcast. He signed the three executive orders at the event.

The administration and tech industry groups touted the plan as a framework for US success in a race against China. "President Trump's AI Action Plan presents a blueprint to usher in a new era of US AI dominance," Jason Oxman, president and CEO of the tech industry trade group ITI, said in a statement.

Consumer groups said the plan focuses on deregulation and would hurt consumers by reducing the rules that could protect them. 

"Whether it's promoting the use of federal land for dirty data centers, giving the FTC orders to question past cases, or attempting to revive some version of the soundly defeated AI moratorium by tying federal funds to not having 'onerous regulation' according to the FCC, this is an unwelcome distraction at a critical time for government to get consumer protection right with increasing AI use and abuse," Ben Winters, director of AI and privacy at the Consumer Federation of America, said in a statement.

Here’s a look at the proposals in the plan. 

Slashing regulations for AI infrastructure

The plan says AI growth will require infrastructure, including chip factories, data centers and more energy generation. And it blames environmental regulations for getting in the way. In response, it proposes exemptions for AI-related construction from certain environmental regulations, including those aimed at protecting clean water and air. It also suggests making federal lands available for data center construction and related power plants.

To provide energy for all those data centers, the plan calls for steps to prevent the "premature decommissioning of critical power generation resources." This likely refers to keeping coal-fired power plants and other mostly fossil-fuel-driven infrastructure online for longer. In his remarks, Trump specifically touted his support for coal and nuclear power plants.

The administration also called for prioritizing the connection of new "reliable, dispatchable power sources" to the grid, specifically naming nuclear fission and fusion and advanced geothermal generation. Earlier this month, the president signed a bill that would end many tax credits and incentives for renewable energy — wind and solar — years earlier than planned. Wind and solar make up the bulk of the new energy generation being added to the US grid right now.

"This US AI Action Plan doesn't just open the door for Big Tech and Big Oil to team up, it unhinges and removes any and all doors — it opens the floodgates, continuing to kneecap our communities' rights to protect ourselves," KD Chavez, executive director of the Climate Justice Alliance, said in a statement. "With tech and oil's track records on human rights and their role in the climate crisis, and what they are already doing now to force AI dominance, we need more corporate and environmental oversight, not less."

Fewer rules around AI technology

Congress ended up not including a moratorium on state AI rules in the recently passed tax and spending bill, but the action plan shows the executive branch is continuing its efforts to cut regulations around AI. "AI is far too important to smother in bureaucracy at this early stage, whether at the state or Federal level," the plan says.

The plan recommends that several federal agencies review whether existing or proposed rules would interfere with the development and deployment of AI. Federal agencies would consider whether a state's regulatory climate is favorable for AI when deciding whether to award funding. Federal Trade Commission investigations and orders would be reviewed to ensure they don't "advance theories of liability that unduly burden AI innovation."

Those rule changes could undermine efforts to protect consumers from problems caused by AI, critics said. "Companies — including AI companies — have a legal obligation to protect their products from being used for harm," Justin Brookman, director of tech policy at Consumer Reports, said in a statement. "When a company makes design choices that increase the risk their product will be used for harm, or when the risks are particularly serious, companies should bear legal responsibility."

Ideology and large language models

The plan proposes steps to ensure AI "protects free speech and American values," continuing the Trump administration's efforts to roll back federal policies around what it refers to as "diversity, equity and inclusion," as well as references to misinformation and climate change. It calls for eliminating references to those items in the National Institute of Standards and Technology's AI Risk Management Framework. Federal agencies would only be allowed to contract with AI developers who "ensure that their systems are objective and free from top-down ideological bias."

The Trump administration has recently announced contracts of up to $200 million each to developers Anthropic, Google, OpenAI and xAI. Grok, the model from Elon Musk's xAI, has recently come under fire for spouting antisemitism and hate speech.

Dealing with workforce challenges

The plan acknowledges that AI will "transform how work gets done across all industries and occupations, demanding a serious workforce response to help workers navigate that transition" and recommends actions by federal agencies, including the Department of Labor, intended to mitigate the harms of AI-driven job displacement. The plan calls for the Bureau of Labor Statistics, Census Bureau and Bureau of Economic Analysis to monitor how AI affects the labor market using data already collected. An AI Workforce Research Hub under the Department of Labor would lead monitoring and issue policy recommendations.

Most of the actual plans to help workers displaced by AI involve retraining those workers for other jobs or helping states do the same.

Other jobs-related recommendations are aimed at boosting the kinds of jobs needed for all those data centers and chip manufacturing plants — like electricians and HVAC technicians. 

These plans and others to encourage AI literacy and AI use in education drew praise from the Software & Information Industry Association, a tech industry trade group. "These are key components for building trust and ensuring all communities can participate in and benefit from AI's potential," Paul Lekas, SIIA's senior vice president of global public policy, said in a statement.

More AI in government

The plan envisions more use of AI by the federal government. A talent exchange program would allow employees with experience or talent in AI to be detailed to other agencies in need. The General Services Administration would create a toolbox of AI models to help agencies see which models are available and how they're being used elsewhere in government.

Every government agency would also be required to ensure employees who could use AI in their jobs have access to and training for AI tools.

Many recommendations focus specifically on the Department of Defense, including creating a virtual proving ground for AI and autonomous systems. AI companies have already been signing contracts with the DOD to develop AI tools for the military.


Today’s NYT Mini Crossword Answers for Tuesday, Oct. 21

Here are the answers for The New York Times Mini Crossword for Oct. 21.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Today’s Mini Crossword features a lot of one certain letter. Need help? Read on. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Bone that can be "dropped"
Answer: JAW

4A clue: Late scientist Goodall
Answer: JANE

5A clue: Make critical assumptions about
Answer: JUDGE

6A clue: Best by a little
Answer: ONEUP

7A clue: Mercury, Jupiter, Saturn, etc.
Answer: GODS

Mini down clues and answers

1D clue: Just kind of over it
Answer: JADED

2D clue: Beef cattle breed
Answer: ANGUS

3D clue: Shed tears
Answer: WEEP

4D clue: 2007 comedy-drama starring Elliot Page and Michael Cera
Answer: JUNO

5D clue: Refresh, as one’s memory
Answer: JOG



Wikipedia Says It’s Losing Traffic Due to AI Summaries, Social Media Videos

The popular online encyclopedia saw an 8% drop in pageviews over the last few months.

Wikipedia has seen a decline in users this year due to artificial intelligence summaries in search engine results and the growing popularity of social media, according to a blog post Friday from Marshall Miller of the Wikimedia Foundation, the organization that oversees the free online encyclopedia.

In the post, Miller describes an 8% drop in human pageviews over the last few months compared with the numbers Wikipedia saw in the same months in 2024.

"We believe that these declines reflect the impact of generative AI and social media on how people seek information, especially with search engines providing answers directly to searchers, often based on Wikipedia content," Miller wrote.

Blame the bots 

Search engines like Bing and Google use bots called web crawlers to gather much of the information that appears in the AI-generated summaries at the top of search results.

Websites do their best to restrict how these bots handle their data, but web crawlers have become pretty skilled at going undetected. 

"Many bots that scrape websites like ours are continually getting more sophisticated and trying to appear human," Miller wrote.

After reclassifying Wikipedia traffic data from earlier this year, Miller says the site "found that much of the unusually high traffic for the period of May and June was coming from bots built to evade detection."

The Wikipedia blog post also noted that younger generations are turning to social-video platforms for their information rather than the open web and such sites as Wikipedia.

When people search with AI, they’re less likely to click through

There is now a growing body of research on the impact of generative AI on the internet, especially concerning online publishers with business models that rely on users visiting their webpages.

(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

In July, Pew Research examined browsing data from 900 US adults and found that the AI-generated summaries at the top of Google’s search results affected web traffic. When the summary appeared in a search, users were less likely to click on links compared to when the search results didn’t include the summaries.

Google search is especially important, because Google.com is the world’s most visited website — it’s how most of us find what we’re looking for on the internet. 

"LLMs, AI chatbots, search engines and social platforms that use Wikipedia content must encourage more visitors to Wikipedia, so that the free knowledge that so many people and platforms depend on can continue to flow sustainably," Miller wrote. "With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work."

Last year, CNET published an extensive report on how changes in Google’s search algorithm decimated web traffic for online publishers. 



OpenAI Says It’s Working With Actors to Crack Down on Celebrity Deepfakes in Sora

Bryan Cranston alerted SAG-AFTRA, the actors union, when he saw AI-generated videos of himself made with the AI video app.

OpenAI said Monday it would do more to stop users of its AI video generation app Sora from creating clips with the likenesses of actors and other celebrities after actor Bryan Cranston and the union representing film and TV actors raised concerns that deepfake videos were being made without the performers’ consent.

Actor Bryan Cranston, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and several talent agencies said they struck a deal with the ChatGPT maker over the use of celebrities’ likenesses in Sora. The joint statement highlights the intense conflict between AI companies and rights holders like celebrities’ estates, movie studios and talent agencies — and how generative AI tech continues to erode reality for all of us.

Sora, a new sister app to ChatGPT, lets users create and share AI-generated videos. It launched to much fanfare three weeks ago, with AI enthusiasts searching for invite codes. But Sora is unique among AI video generators and social media apps; it lets you use other people’s recorded likenesses to place them in nearly any AI video. It has been, at best, weird and funny, and at worst, a never-ending scroll of deepfakes that are nearly indistinguishable from reality.

Cranston noticed his likeness was being used by Sora users when the app launched, and the Breaking Bad actor alerted his union. The new agreement with the actors' union and talent agencies reiterates that celebrities will have to opt in to having their likenesses available to be placed into AI-generated video. OpenAI said in the statement that it has "strengthened the guardrails around replication of voice and likeness" and "expressed regret for these unintentional generations."

OpenAI does have guardrails in place to prevent the creation of videos of well-known people: It rejected my prompt asking for a video of Taylor Swift on stage, for example. But these guardrails aren't perfect, as we saw last week with a growing trend of people creating videos featuring Rev. Martin Luther King Jr. They ranged from weird deepfakes of the civil rights leader rapping and wrestling in the WWE to overtly racist content.

The flood of "disrespectful depictions," as OpenAI called them in a statement on Friday, is part of why the company paused the ability to create videos featuring King.

Bernice A. King, his daughter, last week publicly asked people to stop sending her AI-generated videos of her father. She was echoing comedian Robin Williams' daughter, Zelda, who called these sorts of AI videos "gross."

OpenAI said it "believes public figures and their families should ultimately have control over how their likeness is used" and that "authorized representatives" of public figures and their estates can request that their likeness not be included in Sora. In this case, King's estate is the entity responsible for choosing how his likeness is used.

This isn’t the first time OpenAI has leaned on others to make those calls. Before Sora’s launch, the company reportedly told a number of Hollywood-adjacent talent agencies that they would have to opt out of having their intellectual property included in Sora. But that initial approach didn’t square with decades of copyright law — usually, companies need to license protected content before using it — and OpenAI reversed its stance a few days later. It’s one example of how AI companies and creators are clashing over copyright, including through high-profile lawsuits.

(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)  



Copyright © Verum World Media