Technologies
What a Proposed Moratorium on State AI Rules Could Mean for You
Congressional Republicans have proposed a 10-year pause on the enforcement of state regulations around artificial intelligence.

States couldn’t enforce regulations on artificial intelligence technology for a decade under a plan being considered in the US House of Representatives. The legislation, in an amendment to the federal government’s budget bill, says no state or political subdivision “may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems or automated decision systems” for 10 years. The proposal would still need the approval of both chambers of Congress and President Donald Trump before it could become law. The House is expected to vote on the full budget package this week.
AI developers and some lawmakers have said federal action is necessary to keep states from creating a patchwork of different rules and regulations across the US that could slow the technology’s growth. The rapid growth of generative AI since ChatGPT exploded onto the scene in late 2022 has led companies to embed the technology in as many products as possible. The economic implications are significant, as the US and China race to see which country’s tech will predominate, but generative AI poses privacy, transparency and other risks for consumers that lawmakers have sought to temper.
“We need, as an industry and as a country, one clear federal standard, whatever it may be,” Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers during an April hearing. “But we need one, we need clarity as to one federal standard and have preemption to prevent this outcome where you have 50 different standards.”
Efforts to limit the ability of states to regulate artificial intelligence could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life. “There have been a lot of discussions at the state level, and I would think that it’s important for us to approach this problem at multiple levels,” said Anjana Susarla, a professor at Michigan State University who studies AI. “We could approach it at the national level. We can approach it at the state level too. I think we need both.”
Several states have already started regulating AI
The proposed language would bar states from enforcing any regulation, including those already on the books. The exceptions are rules and laws that make things easier for AI development and those that apply the same standards to non-AI models and systems that do similar things. These kinds of regulations are already starting to pop up. The biggest focus is not in the US, but in Europe, where the European Union has already implemented standards for AI. But states are starting to get in on the action.
Colorado passed a set of consumer protections last year, set to go into effect in 2026. California adopted more than a dozen AI-related laws last year. Other states have laws and regulations that often deal with specific issues such as deepfakes or require AI developers to publish information about their training data. At the local level, some regulations also address potential employment discrimination if AI systems are used in hiring.
“States are all over the map when it comes to what they want to regulate in AI,” said Arsen Kourinian, partner at the law firm Mayer Brown. So far in 2025, state lawmakers have introduced at least 550 proposals around AI, according to the National Conference of State Legislatures. In the House committee hearing last month, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. “We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead,” he said.
While some states have laws on the books, not all of them have gone into effect or seen any enforcement. That limits the potential short-term impact of a moratorium, said Cobun Zweifel-Keegan, managing director in Washington for the International Association of Privacy Professionals. “There isn’t really any enforcement yet.”
A moratorium would likely deter state legislators and policymakers from developing and proposing new regulations, Zweifel-Keegan said. “The federal government would become the primary and potentially sole regulator around AI systems,” he said.
What a moratorium on state AI regulation means
AI developers have asked for any guardrails placed on their work to be consistent and streamlined. During a Senate Commerce Committee hearing last week, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory system “would be disastrous” for the industry. Altman suggested instead that the industry develop its own standards.
Asked by Sen. Brian Schatz, a Democrat from Hawaii, if industry self-regulation is enough at the moment, Altman said he thought some guardrails would be good but, “It’s easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences.” (Disclosure: Ziff Davis, parent company of CNET, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Concerns from companies — both the developers that create AI systems and the “deployers” who use them in interactions with consumers — often stem from fears that states will mandate significant work such as impact assessments or transparency notices before a product is released, Kourinian said. Consumer advocates have said more regulations are needed, and that hampering states’ ability to act could hurt the privacy and safety of users.
“AI is being used widely to make decisions about people’s lives without transparency, accountability or recourse — it’s also facilitating chilling fraud, impersonation and surveillance,” Ben Winters, director of AI and privacy at the Consumer Federation of America, said in a statement. “A 10-year pause would lead to more discrimination, more deception and less control — simply put, it’s siding with tech companies over the people they impact.”
A moratorium on specific state rules and laws could result in more consumer protection issues being dealt with in court or by state attorneys general, Kourinian said. Existing laws around unfair and deceptive practices that are not specific to AI would still apply. “Time will tell how judges will interpret those issues,” he said.
Susarla said the pervasiveness of AI across industries means states might be able to regulate issues like privacy and transparency more broadly, without focusing on the technology. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits. “It has to be some kind of balance between ‘we don’t want to stop innovation,’ but on the other hand, we also need to recognize that there can be real consequences,” she said.
Much policy around the governance of AI systems does happen because of those so-called technology-agnostic rules and laws, Zweifel-Keegan said. “It’s worth also remembering that there are a lot of existing laws and there is a potential to make new laws that don’t trigger the moratorium but do apply to AI systems as long as they apply to other systems,” he said.
Moratorium draws opposition ahead of House vote
House Democrats have said the proposed pause on regulations would hinder states’ ability to protect consumers. Rep. Jan Schakowsky called the move “reckless” in a committee hearing on AI regulation Wednesday. “Our job right now is to protect consumers,” the Illinois Democrat said.
Republicans, meanwhile, contended that state regulations could be too much of a burden on innovation in artificial intelligence. Rep. John Joyce, a Pennsylvania Republican, said in the same hearing that Congress should create a national regulatory framework rather than leaving it to the states. “We need a federal approach that ensures consumers are protected when AI tools are misused, and in a way that allows innovators to thrive.”
At the state level, a letter signed by 40 state attorneys general — of both parties — called for Congress to reject the moratorium and instead create that broader regulatory system. “This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI,” they wrote.
Siri’s New Features May Include Adding Voice Controls to Apps
A feature Apple showed off last year is reportedly being tested with popular third-party apps.

Apple is testing new features for its Siri assistant with popular apps — including Uber, Facebook and YouTube — that would make it possible to use third-party app features with voice commands, according to a report from Bloomberg.
The testing is being done with the goal of releasing a revamped Siri in the spring of 2026 that uses Apple’s App Intents to expand what Siri can do outside of Apple’s own OS and first-party apps. For instance, people might be able to post Instagram comments or make purchases using only their voice, something Siri can’t yet do with most apps that Apple didn’t develop itself, according to the report.
A representative for Apple did not immediately respond to a request for comment.
Apple showed off a demo of this type of functionality last year, but the overhaul might not arrive until 2027. According to Bloomberg’s Mark Gurman, there are some internal doubts among Apple engineers as to whether the functionality will work well enough, especially in apps where mistakes could be costly or harmful, such as health or banking apps.
If the company gets it right, Gurman writes, it would be a major feature that could give Apple “a new, voice-first interface… it could potentially be a hit that many users didn’t see coming.”
Creeping competition
Even if Apple succeeds in revamping Siri with new features that customers find to be a big improvement, the company will be doing so under pressure from competitors on the artificial intelligence front.
“Apple should be worried, and it appears they are,” says Vikas Sharma, senior director of patent services at Quandary Peak Research. “ChatGPT, Gemini, Copilot and Alexa are all ahead of Siri in the AI race.”
Sharma expressed doubts that a spring 2026 release would include everything users might expect from a major Siri revamp. “At this point, there’s no update on any exciting upcoming capabilities, so the release may end up being incremental rather than revolutionary,” Sharma says.
But if Apple can work its magic and make good on some of the features that it gave a glimpse of last year, the effects could be profound.
“Imagine booking rides, flights, cars and hotels seamlessly through third-party apps; ordering from Amazon; sharing files through Slack/email; or finding emails with specific attachments, all through voice commands,” says Sharma. “With these capabilities, Siri could become a true AI assistant.”
Today’s NYT Mini Crossword Answers for Tuesday, Aug. 12
Here are the answers for The New York Times Mini Crossword for Aug. 12.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.
Dog lovers, today’s Mini Crossword is barking your name. You’ll likely have fun with this one. Need answers? Read on. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.
The Mini Crossword is just one of many games in the Times’ games collection. If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.
Read more: Tips and Tricks for Solving The New York Times Mini Crossword
Let’s get to those Mini Crossword clues and answers.
Mini across clues and answers
1A clue: *Workplace for scientists
Answer: LAB
4A clue: *Grub
Answer: CHOW
6A clue: Maliciously revealed one’s private identity, informally
Answer: DOXED
8A clue: Spanish “but”
Answer: PERO
9A clue: Gasoline type: Abbr.
Answer: REG
Mini down clues and answers
1D clue: TV screen option, for short
Answer: LCD
2D clue: __, a skip and a jump
Answer: A HOP
3D clue: *Someone who’s always taking jabs at you?
Answer: BOXER
4D clue: Used to be
Answer: WERE
5D clue: ___ days (time of summer suggested by the answers to the starred clues)
Answer: DOG
ChatGPT’s New GPT-5 Model Is Supposed to Be Faster and Smarter. Not Everyone Is Satisfied
The new flagship engine behind OpenAI’s generative AI tool comes with a ton of changes.

ChatGPT’s long-awaited new engine is here, and GPT-5 promises faster speeds and more time spent thinking. But the new generative AI model has turned off some users with a tone shift away from its casual, conversational style.
GPT-5 has been in the works for months. It’s a big step for OpenAI, more than two years after the release of GPT-4, with the company touting the model as a giant leap for large language models. “I tried going back to GPT-4 and it was quite miserable,” said OpenAI CEO Sam Altman. “This is significantly better in obvious ways and subtle ways.”
Like its predecessor, GPT-5 powers the chatbots, agents and search tools in ChatGPT and other apps that use OpenAI’s technology. Yet this version is supposed to be smarter, more accurate and faster.
Demonstrations showed GPT-5 quickly creating custom applications with no coding required, and developers said they’ve worked on ways to make sure it provides safer answers to potentially treacherous questions. (Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
One model for everybody (kinda)
The new model is available now to all users, including those who use ChatGPT’s free tier. Unlike some of OpenAI’s incremental releases, GPT-5 is rolling out to everyone, not only to the companies paying for big enterprise plans.
There are, naturally, some differences between how it looks based on your pricing plan. Here’s a breakdown:
- Free users: You’ll get access to GPT-5 up to a usage cap, after which you’ll be switched to a lighter GPT-5-mini model.
- Plus users: Similar to free users, but with higher usage limits.
- Pro users: Unlimited access to GPT-5 and access to a more powerful GPT-5 Pro model.
- Enterprise/EDU/Team users: GPT-5 will be the default model.
GPT-5 itself is really a couple of different models. There’s a fast but fairly straightforward LLM and a more robust reasoning model for handling more complex questions. A routing program identifies which model can best handle the prompt.
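OpenAI hasn’t said how that routing decision is made. As a rough illustration of the pattern only — the function name, keyword list and length threshold below are all invented — such a router can be sketched as a simple classifier over the incoming prompt:

```python
# Toy sketch of prompt routing: GPT-5 reportedly pairs a fast model with a
# heavier reasoning model and picks between them per prompt. The heuristic
# here (keyword hints plus prompt length) is invented for illustration; the
# real router is not public.

COMPLEX_HINTS = ("prove", "step by step", "debug", "analyze", "compare")

def route(prompt: str) -> str:
    """Return which model tier a prompt should be sent to."""
    text = prompt.lower()
    long_prompt = len(text.split()) > 50          # long prompts get more compute
    looks_hard = any(hint in text for hint in COMPLEX_HINTS)
    return "reasoning" if long_prompt or looks_hard else "fast"

assert route("What's the capital of France?") == "fast"
assert route("Prove that the square root of 2 is irrational.") == "reasoning"
```

In a production system the choice would more likely come from a learned classifier than keyword matching, but the interface is the same: one entry point fanning out to differently sized models.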
OpenAI originally replaced all its previous models with GPT-5, but users quickly rebelled. GPT-5, many said, was more stodgy and had less personality, sounding more corporate. After hearing that backlash on Reddit, Altman and OpenAI said they’d make older models like GPT-4o available again, at least for now.
Altman said in a post on X that some people have become attached to specific models, and that this attachment may be contributing to potentially harmful uses of the chatbot, like relying on it for therapy.
“If people are getting good advice, leveling up toward their own goals, and their life satisfaction is increasing over years, we will be proud of making something genuinely helpful, even if they use and rely on ChatGPT a lot,” Altman wrote. “If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer term well-being (however they define it), that’s bad. It’s also bad, for example, if a user wants to use ChatGPT less and feels like they cannot.”
Even faster coding skills
OpenAI particularly highlighted the skill and speed with which the new GPT-5 model can write code, something that isn’t just useful for programmers. Because the model can quickly write a working program, it can create the right tool to solve whatever problem you present to it.
Yann Dubois, a post-training lead at OpenAI, showed off the model’s coding ability by asking it to create an app for learning French. Within minutes, it had coded a web application complete with sound and working game functions. Dubois actually asked it to create two different apps, running the same prompt through the model twice.
The speed at which GPT-5 writes code allows you to try multiple times and pick the result you like best — or provide feedback to make changes until you get it right.
“The beauty is that you can iterate super quickly with GPT-5 to make the changes that you want,” Dubois said. “GPT-5 really opens a whole new world of vibe coding.”
Read more: Never Use ChatGPT for These 11 Things
New safety features
After announcing some steps to improve how its tools handle sensitive mental health issues, OpenAI said GPT-5 has some tweaks to make things safer. The new model has improved training to avoid deceptive or inaccurate information, which will also improve the user experience, said Alex Beutel, safety research lead.
It’ll also respond differently if you ask a prompt that could be dangerous. Previous models would refuse to answer a potentially harmful question, but GPT-5 will instead try to provide the best safe answer, Beutel said. This can help when a question is innocent (like a science student asking a chemistry question) but sounds more sinister (like someone trying to make a weapon).
“The model tries to give as helpful an answer as possible but within the constraints of feeling safe,” Beutel said.
Customized voices and colors
If you prefer to chat with your bots vocally rather than typing, expect improvements in voice capabilities. The Advanced Voice mode will now be available to all users, whether free or paid, and usage limits will be higher.
You can also change the color of your chats, with some options exclusive to paid users. Other customization options include the ability to tweak personalities. You’ll be able to set ChatGPT to be thoughtful and supportive, sarcastic or more. The options — Cynic, Robot, Listener and Nerd — are opt-in, and you can change them anytime.
Connect to your mail and calendar
ChatGPT will now be able to connect with your Google Calendar and Gmail accounts, meaning you can ask the chatbot about your schedule and have it suggest plans. You won’t have to enable this — and you may not want to, depending on how you feel about sharing your private info — but once you do, ChatGPT can automatically pull info from your mail or calendar without asking permission each time.
These connectors will start for Pro users soon, with other tiers gaining access thereafter.
The path to AGI?
Altman told reporters the model is a “significant step along the path to AGI,” or artificial general intelligence, a term that often refers to models that are as smart and capable as a human. But Altman also said it’s definitely not there yet. One big reason is that it’s still not learning continuously while it’s deployed.
OpenAI’s stated goal is to try to develop AGI (although Altman said he’s not a big fan of the term), and it’s got competition. Meta CEO Mark Zuckerberg has been recruiting top AI scientists with the goal of creating “superintelligence.”
Whether large language models are the way there, nobody knows right now. Three-quarters of AI experts surveyed earlier this year said they had doubts LLMs would scale up to create something of that level of intelligence.