Technologies
AI as Lawyer: It’s Starting as a Stunt, but There’s a Real Need
People already have a hard enough time getting help from lawyers. Advocates say AI could change that.

Next month, AI will enter the courtroom, and the US legal system may never be the same.
An artificial intelligence chatbot, technology programmed to respond to questions and hold a conversation, is expected to advise two individuals fighting speeding tickets in courtrooms in undisclosed cities. The two will wear wireless earbuds, which will relay what the judge says to the chatbot being run by DoNotPay, a company that typically helps people fight traffic tickets through the mail. The earbuds will then play the chatbot’s suggested responses to the judge’s questions, which the individuals can then choose to repeat in court.
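The relay described above, courtroom audio in, suggested reply out, follows a familiar speech-to-model loop. As a rough illustration only (this is not DoNotPay's actual system; every function and the canned responses here are stand-ins), such a pipeline might be sketched like this:

```python
# Illustrative sketch of a courtroom "AI copilot" loop: transcribe what the
# judge says, ask a language model for a suggested reply, and relay it back.
# NOT DoNotPay's real system; every component below is a simplified stand-in.

def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for a real speech-to-text service."""
    return audio_chunk.decode("utf-8")  # pretend the audio is already text

def suggest_reply(question: str) -> str:
    """Stand-in for a call to a language model tuned on traffic law."""
    canned = {
        "Do you contest the citation?": "Yes, Your Honor, I contest the citation.",
    }
    return canned.get(question, "May I have a moment to review, Your Honor?")

def copilot_loop(audio_chunks) -> list[str]:
    """Relay each courtroom question through the model, collecting replies."""
    return [suggest_reply(transcribe(chunk)) for chunk in audio_chunks]

replies = copilot_loop([b"Do you contest the citation?"])
print(replies[0])  # -> "Yes, Your Honor, I contest the citation."
```

In a live setting, the transcription and model steps would be network calls with real latency, which is one practical reason the scheme relies on the defendant repeating the suggested line rather than the AI speaking directly.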
It’s a stunt. But it also has the potential to change how people interact with the law, and to bring many more changes over time. DoNotPay CEO Josh Browder says expensive legal fees have historically kept people from hiring traditional lawyers to fight for them in traffic court, which typically involves fines that can reach into the hundreds of dollars.
So, his team wondered whether an AI chatbot, trained to understand and argue the law, could intervene.
“Most people can’t afford legal representation,” Browder said in an interview. Using the AI in a real court situation “will be a proof of concept for courts to allow technology in the courtroom.”
Regardless of whether Browder is successful — he says he will be — his company’s actions mark the first of what are likely to be many more efforts to bring AI further into our daily lives.
Modern life is already filled with the technology. Some people wake up to a song chosen by AI-powered alarms. Their news feed is often curated by a computer program, too, one that’s taught to pick items they’ll find most interesting or that they’ll be most likely to comment on and share via social media. AI chooses what photos to show us on our phones, it asks us if it should add a meeting to our calendars based on emails we receive, and it reminds us to text a birthday greeting to our loved ones.
But advocates say AI’s ability to sort information, spot patterns and quickly pull up data means that in a short time, it could become a “copilot” for our daily lives. Already, coders on Microsoft-owned GitHub are using AI to help them create apps and solve technical problems. Social media managers are relying on AI to help determine the best time to post a new item. Even we here at CNET are experimenting with whether AI can help write explainer-type stories about the ever-changing world of finance.
So, it can seem like only a matter of time before AI finds its way into research-heavy industries like the law as well. And considering that 80% of low-income Americans don’t have access to legal help, while 40% to 60% of the middle class still struggle to get such assistance, there’s clearly demand. AI could help meet that need, but lawyers shouldn’t feel like new technology is going to take business away from them, says Andrew Perlman, dean of the law school at Suffolk University. It’s simply a matter of scale.
“There is no way that the legal profession is going to be able to deliver all of the legal services that people need,” Perlman said.
Turning to AI
DoNotPay began its latest AI experiment back in 2021, when businesses were given early access to GPT-3, the OpenAI language model that later served as the basis for ChatGPT, which went viral for its ability to answer questions, write essays and even create new computer programs. In December, Browder pitched his idea via a tweet: have someone wear an Apple AirPod into traffic court so that the AI could hear what’s happening through the microphone and feed responses through the earbud.
Aside from people jeering him for the stunt, Browder knew he’d have other challenges. Many states and districts limit legal advisors to those who are licensed to practice law, a clear hurdle that UC Irvine School of Law professor Emily Taylor Poppe said may cause trouble for DoNotPay’s AI.
“Because the AI would be providing information in real time, and because it would involve applying relevant law to specific facts, it is hard to see how it could avoid being seen as the provision of legal advice,” Poppe said. Essentially, the AI would be legally considered a lawyer acting without a law license.
AI tools raise privacy concerns too. The computer program technically needs to record audio to interpret what it hears, a move that’s not allowed in many courts. Lawyers are also expected to follow ethics rules that forbid them from sharing confidential information about clients. Can a chatbot, designed to share information, follow the same protocols?
Perlman says many of these concerns can be answered if these tools are created with care. If successful, he argues, these technologies could also help with the mountains of paperwork lawyers encounter on a daily basis.
Ultimately, he argues, chatbots may turn out to be as helpful as Google and other research tools are today, saving lawyers from having to physically wade through law libraries to find information stored on bookshelves.
“Lawyers trying to deliver legal services without technology are going to be inadequate and insufficient to meeting the public’s legal needs,” Perlman said. Ultimately, he believes, AI can do more good than harm.
The two cases DoNotPay is participating in will likely shape much of that conversation. Browder declined to say where the proceedings will take place, citing safety concerns.
Neither DoNotPay nor the defendants plan to inform the judges or anyone in court that an AI is being used or that audio is being recorded, a fact that raises ethics concerns. This in itself resulted in pushback on Twitter when Browder asked for traffic ticket volunteers in December. But Browder says the courts that DoNotPay chose are likely to be more lenient if they find out.
The future of law
After these traffic ticket fights, DoNotPay plans to create a video presentation designed to advocate in favor of the technology, ultimately with the goal of changing law and policy to allow AI in courtrooms.
States and legal organizations, meanwhile, are already debating these questions. In 2020, a California task force dedicated to exploring ways to expand access to legal services recommended allowing select unlicensed practitioners to represent clients, among other reforms. The American Bar Association told judges using AI tools to be mindful of biases instilled in the tools themselves. UNESCO, the international organization dedicated to preserving culture, has a free online course covering the basics of what AI can offer legal systems.
For his part, Browder says AI chatbots will become so popular in the next couple of years that the courts will have no choice but to allow them anyway. Perhaps AI tools will have a seat at the table, rather than having to whisper in our ears.
“Six months ago, you couldn’t even imagine that an AI could respond in these detailed ways,” Browder said. “No one has imagined, in any law, what this could be like in real life.”
Wikipedia Says It’s Losing Traffic Due to AI Summaries, Social Media Videos
The popular online encyclopedia saw an 8% drop in pageviews over the last few months.

Wikipedia has seen a decline in users this year due to artificial intelligence summaries in search engine results and the growing popularity of social media, according to a blog post Friday from Marshall Miller of the Wikimedia Foundation, the organization that oversees the free online encyclopedia.
Don’t miss any of our unbiased tech content and lab-based reviews. Add CNET as a preferred Google source.
In the post, Miller describes an 8% drop in human pageviews over the last few months compared with the numbers Wikipedia saw in the same months in 2024.
“We believe that these declines reflect the impact of generative AI and social media on how people seek information, especially with search engines providing answers directly to searchers, often based on Wikipedia content,” Miller wrote.
Blame the bots
Search engines like Bing and Google rely on bots called web crawlers to gather much of the information that appears in the AI-generated summaries at the top of their search results.
Websites do their best to restrict how these bots handle their data, but web crawlers have become pretty skilled at going undetected.
“Many bots that scrape websites like ours are continually getting more sophisticated and trying to appear human,” Miller wrote.
After reclassifying Wikipedia traffic data from earlier this year, Miller says the site “found that much of the unusually high traffic for the period of May and June was coming from bots built to evade detection.”
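Traffic reclassification of this kind generally comes down to heuristics applied over request logs. A toy illustration (not Wikimedia's actual classifier, which uses many more signals than the two shown here):

```python
# Toy heuristic for separating likely bot traffic from human pageviews.
# NOT Wikimedia's real method; real classifiers weigh many more signals.

def looks_like_bot(record: dict) -> bool:
    """Flag a log record as probable bot traffic using two simple signals."""
    ua = record.get("user_agent", "").lower()
    # Well-behaved crawlers declare themselves in the User-Agent string.
    if any(tag in ua for tag in ("bot", "crawler", "spider")):
        return True
    # Evasive bots often betray themselves with inhuman request rates.
    return record.get("requests_per_minute", 0) > 120

log = [
    {"user_agent": "ExampleBot/1.0", "requests_per_minute": 5},
    {"user_agent": "Mozilla/5.0", "requests_per_minute": 300},
    {"user_agent": "Mozilla/5.0", "requests_per_minute": 2},
]
human_views = sum(1 for r in log if not looks_like_bot(r))
print(human_views)  # -> 1
```

The difficulty Miller describes is precisely that evasive bots mimic the human-looking records in such a log, which is why reclassification can retroactively shrink reported human pageviews.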
The Wikipedia blog post also noted that younger generations are turning to social-video platforms for their information rather than the open web and such sites as Wikipedia.
When people search with AI, they’re less likely to click through
A growing body of research examines the impact of generative AI on the internet, especially its effect on online publishers whose business models rely on users visiting their webpages.
(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
In July, Pew Research examined browsing data from 900 US adults and found that the AI-generated summaries at the top of Google’s search results affected web traffic. When the summary appeared in a search, users were less likely to click on links compared to when the search results didn’t include the summaries.
Google search is especially important, because Google.com is the world’s most visited website — it’s how most of us find what we’re looking for on the internet.
“LLMs, AI chatbots, search engines and social platforms that use Wikipedia content must encourage more visitors to Wikipedia, so that the free knowledge that so many people and platforms depend on can continue to flow sustainably,” Miller wrote. “With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”
Last year, CNET published an extensive report on how changes in Google’s search algorithm decimated web traffic for online publishers.
OpenAI Says It’s Working With Actors to Crack Down on Celebrity Deepfakes in Sora
Bryan Cranston alerted SAG-AFTRA, the actors union, when he saw AI-generated videos of himself made with the AI video app.

OpenAI said Monday it would do more to stop users of its AI video generation app Sora from creating clips with the likenesses of actors and other celebrities after actor Bryan Cranston and the union representing film and TV actors raised concerns that deepfake videos were being made without the performers’ consent.
Cranston, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and several talent agencies said they struck a deal with the ChatGPT maker over the use of celebrities’ likenesses in Sora. The joint statement highlights the intense conflict between AI companies and rights holders like celebrities’ estates, movie studios and talent agencies — and how generative AI tech continues to erode reality for all of us.
Sora, a new sister app to ChatGPT, lets users create and share AI-generated videos. It launched to much fanfare three weeks ago, with AI enthusiasts searching for invite codes. But Sora is unique among AI video generators and social media apps; it lets you use other people’s recorded likenesses to place them in nearly any AI video. It has been, at best, weird and funny, and at worst, a never-ending scroll of deepfakes that are nearly indistinguishable from reality.
Cranston noticed his likeness was being used by Sora users when the app launched, and the Breaking Bad actor alerted his union. The new agreement with the actors’ union and talent agencies reiterates that celebrities will have to opt in to having their likenesses available to be placed into AI-generated video. OpenAI said in the statement that it has “strengthened the guardrails around replication of voice and likeness” and “expressed regret for these unintentional generations.”
OpenAI does have guardrails in place to prevent the creation of videos of well-known people: It rejected my prompt asking for a video of Taylor Swift on stage, for example. But these guardrails aren’t perfect, as we saw last week with a growing trend of people creating videos featuring Rev. Martin Luther King Jr. They ranged from weird deepfakes of the civil rights leader rapping and wrestling in the WWE to overtly racist content.
The flood of “disrespectful depictions,” as OpenAI called them in a statement on Friday, is part of why the company paused the ability to create videos featuring King.
Statement from OpenAI and King Estate, Inc.
The Estate of Martin Luther King, Jr., Inc. (King, Inc.) and OpenAI have worked together to address how Dr. Martin Luther King Jr.’s likeness is represented in Sora generations. Some users generated disrespectful depictions of Dr.…— OpenAI Newsroom (@OpenAINewsroom) October 17, 2025
Bernice A. King, his daughter, last week publicly asked people to stop sending her AI-generated videos of her father. She was echoing comedian Robin Williams’ daughter, Zelda, who called these sorts of AI videos “gross.”
I concur concerning my father.
Please stop. #RobinWilliams #MLK #AI https://t.co/SImVIP30iN— Be A King (@BerniceKing) October 7, 2025
OpenAI said it “believes public figures and their families should ultimately have control over how their likeness is used” and that “authorized representatives” of public figures and their estates can request that their likeness not be included in Sora. In this case, King’s estate is the entity responsible for choosing how his likeness is used.
This isn’t the first time OpenAI has leaned on others to make those calls. Before Sora’s launch, the company reportedly told a number of Hollywood-adjacent talent agencies that they would have to opt out of having their intellectual property included in Sora. But that initial approach didn’t square with decades of copyright law — usually, companies need to license protected content before using it — and OpenAI reversed its stance a few days later. It’s one example of how AI companies and creators are clashing over copyright, including through high-profile lawsuits.
(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Today’s NYT Connections Hints, Answers and Help for Oct. 21, #863
Here are some hints and the answers for the NYT Connections puzzle for Oct. 21, #863.

Looking for the most recent Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections: Sports Edition and Strands puzzles.
Today’s NYT Connections puzzle has a diverse mix of topics. Remember when you see a word like “does” that it could have multiple meanings. Read on for clues and today’s Connections answers.
The Times now has a Connections Bot, like the one for Wordle. Go there after you play to receive a numeric score and to have the program analyze your answers. Players who are registered with the Times Games section can now nerd out by following their progress, including the number of puzzles completed, win rate, number of times they nabbed a perfect score and their win streak.
Read more: Hints, Tips and Strategies to Help You Win at NYT Connections Every Time
Hints for today’s Connections groups
Here are four hints for the groupings in today’s Connections puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.
Yellow group hint: Deal me in.
Green group hint: I can get that.
Blue group hint: Hoops.
Purple group hint: The clicker.
Answers for today’s Connections groups
Yellow group: Playing cards.
Green group: Takes on.
Blue group: N.B.A. teams.
Purple group: Things you can control with remotes.
Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words
What are today’s Connections answers?
The yellow words in today’s Connections
The theme is playing cards. The four answers are aces, jacks, kings and queens.
The green words in today’s Connections
The theme is takes on. The four answers are addresses, does, handles and tackles.
The blue words in today’s Connections
The theme is N.B.A. teams. The four answers are Bucks, Bulls, Hornets and Spurs.
The purple words in today’s Connections
The theme is things you can control with remotes. The four answers are drones, garage doors, televisions and Wiis.