Technologies

AI as Lawyer: It’s Starting as a Stunt, but There’s a Real Need

People already have a hard enough time getting help from lawyers. Advocates say AI could change that.

Next month, AI will enter the courtroom, and the US legal system may never be the same.

An artificial intelligence chatbot, technology programmed to respond to questions and hold a conversation, is expected to advise two individuals fighting speeding tickets in courtrooms in undisclosed cities. The two will wear wireless earbuds that relay what the judge says to the chatbot, which is run by DoNotPay, a company that typically helps people fight traffic tickets through the mail. The earbuds will then play the chatbot's suggested responses to the judge's questions, which the individuals can choose to repeat in court.

It’s a stunt. But it also has the potential to change how people interact with the law, and to bring many more changes over time. DoNotPay CEO Josh Browder says expensive legal fees have historically kept people from hiring traditional lawyers to fight for them in traffic court, which typically involves fines that can reach into the hundreds of dollars.

So, his team wondered whether an AI chatbot, trained to understand and argue the law, could intervene.

"Most people can't afford legal representation," Browder said in an interview. Using the AI in a real court situation "will be a proof of concept for courts to allow technology in the courtroom."

Regardless of whether Browder is successful — he says he will be — his company’s actions mark the first of what are likely to be many more efforts to bring AI further into our daily lives.

Modern life is already filled with the technology. Some people wake up to a song chosen by AI-powered alarms. Their news feed is often curated by a computer program, too, one that’s taught to pick items they’ll find most interesting or that they’ll be most likely to comment on and share via social media. AI chooses what photos to show us on our phones, it asks us if it should add a meeting to our calendars based on emails we receive, and it reminds us to text a birthday greeting to our loved ones.

But advocates say AI's ability to sort information, spot patterns and quickly pull up data means that in a short time, it could become a "copilot" for our daily lives. Already, coders on Microsoft-owned GitHub are using AI to help them create apps and solve technical problems. Social media managers are relying on AI to help determine the best time to post a new item. Even we here at CNET are experimenting with whether AI can help write explainer-type stories about the ever-changing world of finance.

So, it can seem like only a matter of time before AI finds its way into research-heavy industries like the law as well. And considering that 80% of low-income Americans don’t have access to legal help, while 40% to 60% of the middle class still struggle to get such assistance, there’s clearly demand. AI could help meet that need, but lawyers shouldn’t feel like new technology is going to take business away from them, says Andrew Perlman, dean of the law school at Suffolk University. It’s simply a matter of scale.

«There is no way that the legal profession is going to be able to deliver all of the legal services that people need,» Perlman said.

Turning to AI

DoNotPay began its latest AI experiment back in 2021 when businesses were given early access to GPT-3, the same AI tool used by the startup OpenAI to create ChatGPT, which went viral for its ability to answer questions, write essays and even create new computer programs. In December, Browder pitched his idea via a tweet: have someone wear an Apple AirPod into traffic court so that the AI could hear what’s happening through the microphone and feed responses through the earbud.

Aside from people jeering him for the stunt, Browder knew he’d have other challenges. Many states and districts limit legal advisors to those who are licensed to practice law, a clear hurdle that UC Irvine School of Law professor Emily Taylor Poppe said may cause trouble for DoNotPay’s AI.

"Because the AI would be providing information in real time, and because it would involve applying relevant law to specific facts, it is hard to see how it could avoid being seen as the provision of legal advice," Poppe said. Essentially, the AI would be legally considered a lawyer acting without a law license.

AI tools raise privacy concerns too. The computer program technically needs to record audio to interpret what it hears, a move that’s not allowed in many courts. Lawyers are also expected to follow ethics rules that forbid them from sharing confidential information about clients. Can a chatbot, designed to share information, follow the same protocols?

Perlman says many of these concerns can be answered if these tools are created with care. If successful, he argues, these technologies could also help with the mountains of paperwork lawyers encounter on a daily basis.

Ultimately, he argues, chatbots may turn out to be as helpful as Google and other research tools are today, saving lawyers from having to physically wade through law libraries to find information stored on bookshelves.

"Lawyers trying to deliver legal services without technology are going to be inadequate and insufficient to meeting the public's legal needs," Perlman said. Ultimately, he believes, AI can do more good than harm.

The two cases DoNotPay is participating in will likely shape much of that conversation. Browder declined to say where the proceedings will take place, citing safety concerns.

Neither DoNotPay nor the defendants plan to inform the judges or anyone in court that an AI is being used or that audio is being recorded, a fact that raises ethics concerns of its own. The plan drew pushback on Twitter when Browder asked for traffic ticket volunteers in December, but Browder says the courts DoNotPay chose are likely to be more lenient if they find out.

The future of law

After these traffic ticket fights, DoNotPay plans to create a video presentation designed to advocate in favor of the technology, ultimately with the goal of changing law and policy to allow AI in courtrooms.

States and legal organizations, meanwhile, are already debating these questions. In 2020, a California task force dedicated to exploring ways to expand access to legal services recommended allowing select unlicensed practitioners to represent clients, among other reforms. The American Bar Association told judges using AI tools to be mindful of biases instilled in the tools themselves. UNESCO, the international organization dedicated to preserving culture, has a free online course covering the basics of what AI can offer legal systems.

For his part, Browder says AI chatbots will become so popular in the next couple of years that the courts will have no choice but to allow them anyway. Perhaps AI tools will have a seat at the table, rather than having to whisper in our ears.

"Six months ago, you couldn't even imagine that an AI could respond in these detailed ways," Browder said. "No one has imagined, in any law, what this could be like in real life."

Today’s NYT Mini Crossword Answers for Saturday, March 14

Here are the answers for The New York Times Mini Crossword for March 14.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Need some help with today’s Mini Crossword? It’s the extra-long Saturday version, and a few of the clues are tricky. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Book parts: Abbr.
Answer: PGS

4A clue: Silicon Valley company that operates a fleet of robotaxis
Answer: WAYMO

6A clue: To a much greater degree
Answer: WAYMORE

8A clue: Contents of a scuba diver’s tank
Answer: AIR

9A clue: South Korean automaker
Answer: KIA

10A clue: Stop on a train route
Answer: STATION

12A clue: Actress Merman of "Anything Goes"
Answer: ETHEL

13A clue: Find another purpose for
Answer: REUSE

Mini down clues and answers

1D clue: Employee’s hourly calculation
Answer: PAYRATE

2D clue: Workout spot
Answer: GYM

3D clue: "Great" mountains of Tennessee, familiarly
Answer: SMOKIES

4D clue: One giving you the dish?
Answer: WAITER

5D clue: Baltimore M.L.B. player
Answer: ORIOLE

6D clue: Used to be
Answer: WAS

7D clue: Suffix with Caesar or Euclid
Answer: EAN

11D clue: Night that NBC once aired "30 Rock" and "The Office": Abbr.
Answer: THU

AI Toys Can Pose Safety Risks for Children, New Study Urges Caution

When one child told the toy, "I love you," it responded, "As a friendly reminder, please ensure interactions adhere to the guidelines provided."

A new study from the University of Cambridge found that AI-enabled toys for young children can misinterpret emotional cues and are ineffective at supporting critical developmental play. The conclusions could be concerning for parents.

In one report examining how AI affects children in their early years, a chatbot-enabled toy struggled to recognize social cues during playtime. Researchers found that the toy did not effectively identify children’s emotions, raising alarm about how kids might interact with it. 

The report recommends regulating AI toys for kids and requiring clear labeling of their capabilities and privacy policies. It also advises parents to keep these devices in shared spaces where kids can be monitored while playing.

The research behind the study had a limited number of participants, but was done in multiple parts: an online survey of 39 participants with kids in their earlier years, a focus group with nine participants who work with young children and an in-person workshop with 19 leaders and representatives from charities that work with early-years kids. That was followed by monitored playtime with 14 children and 11 parents or guardians with Gabbo, a chatbot-enabled toy from Curio Interactive.

Some findings indicated that the AI toy supported learning, particularly in language and communication skills. But the toy also misunderstood kids and sometimes responded inappropriately to emotional requests. 

For instance, when one child told the toy, "I love you," it responded, "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed," according to the research.

Jenny Gibson, a professor of neurodiversity and developmental psychology at the Faculty of Education at Cambridge, who worked on the study, said that while parents may be excited about the educational benefits of new technology aimed at children, there are plenty of concerns.

Gibson posed overarching questions about the reason behind the tech. 

"What would motivate [tech investors] to do the right thing by children … to put children ahead of profits?" she said.

Gibson told CNET that while researchers are exploring the potential benefits of AI-based toys, risks remain. 

"I would advise parents to take that seriously at this stage," she said.

What’s next for AI toys

As more playthings are enabled with internet connectivity and AI features, these devices could become a major safety risk for children, especially if they replace real human connections or if interactions are not closely monitored. 

Meanwhile, younger people are increasingly adopting chatbots such as ChatGPT, despite red flags. Multiple lawsuits against AI companies allege that AI companions or assistants can impact young people’s psychological safety, including some chatbots that have encouraged self-harm or negative self-image. 

AI companies such as OpenAI and Google have responded by adding guardrails and restrictions for AI chatbots. 

(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Gibson said she was surprised by the enthusiasm some parents showed for AI toys. She was also alarmed by the lack of research on AI’s effects on young children, noting that companies making such products should work directly with children, parents, and child development experts. 

«What’s missing in the process is that expertise of what is good for children in these kinds of interactions,» she said.

Curio Interactive, the company behind the Gabbo toy, was aware of the research as it was happening but was not directly involved, Gibson said. The toy was chosen because it’s directly marketed to young kids, and the company had an understandable privacy policy. Gibson said the company seemed supportive of the project.

A representative for Curio did not immediately respond to a request for comment.

Two Lost ‘Doctor Who’ Episodes Found Intact in Waterlogged Collection

The 1960s episodes featuring the first Doctor William Hartnell will air in the UK in April.

Whovians, rejoice. The BBC is about to unlock a piece of Doctor Who history that even the TARDIS might have forgotten. Two lost episodes of Doctor Who, the iconic sci-fi series, will broadcast in April, the showrunner for the current season confirmed.

The two 1965 episodes, The Nightmare Begins and Devil’s Planet, were donated to the charitable trust Film Is Fabulous by the estate of an anonymous collector.

"The collector did recognize what he had, but how he acquired them has been lost to time," Professor Justin Smith of De Montfort University in Leicester, who led the recovery effort, told the broadcaster.

The researchers said that while most of the donor’s private collection was destroyed by water damage, the Doctor Who episodes were intact.

Doctor Who showrunner Russell T Davies celebrated the news on Instagram and said the episodes would air in the UK in April, though no US air date has been announced yet.

"Lost for 61 years! Best of all, these will be made available for FREE on the BBC iPlayer in April," Davies wrote.

He expressed gratitude to Film Is Fabulous for finding the lost episodes and encouraged people to donate to the registered charity. "Maybe they'll find more! As the Doctor says… 'Daleks!'"

The episodes feature the first incarnation of the Doctor, played by William Hartnell, and a typical Dalek plot to take over Earth and the galaxy. 

In the 1960s and 1970s, the BBC had a policy of destroying film or reusing videotapes, leading to dozens of episodes of Doctor Who and other popular UK shows like Dad’s Army and Top of the Pops going missing.

Old Doctor Who episodes do surface occasionally, and in 2016, the newly discovered soundtrack for one storyline was turned into an animated series called The Power of the Daleks.

Meanwhile, Disney ended its working relationship with the BBC last year, and star Ncuti Gatwa left the show. However, the UK broadcaster says that Doctor Who will continue, and Russell T Davies is working on a new Christmas special.
