

ChatGPT Is Getting a Big Upgrade. Here’s What’s New With GPT-5

The new large language model is rolling out to all ChatGPT users.

Expect your ChatGPT experience to get faster and smarter today. 

OpenAI updated its flagship line of large language models Thursday, unveiling the GPT-5 generative AI model after months of anticipation. While the company has released plenty of model updates in recent months, including new open-weights models just this week, it’s been more than two years since the debut of GPT-4. With a new generation worthy of a new number, how big a change should you expect?

“I tried going back to GPT-4 and it was quite miserable,” OpenAI CEO Sam Altman told reporters. “This is significantly better in obvious ways and subtle ways.”

Like its predecessor, GPT-5 powers the chatbots, agents and search tools you’re used to using in ChatGPT or through other apps that use OpenAI’s technology. But the company said this version is much smarter, more accurate and faster. Demonstrations showed it quickly creating custom applications with no coding required, and developers said they’ve worked on ways to make sure it provides safer answers to potentially treacherous questions. (Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

The new model should be available for everyone on Thursday, including those who use ChatGPT’s free tier. Here’s what to expect.

One model for everybody (kinda)

Unlike some of OpenAI’s incremental releases, GPT-5 will be rolled out for all users, from those using it for free through ChatGPT to those who work at companies that pay for big enterprise plans. There are, naturally, some differences between how it looks based on whether and how you pay for it. Here’s a breakdown:

  • Free users: You’ll get access to GPT-5 up to a usage cap, after which you’ll be switched to the lighter GPT-5 mini model.
  • Plus users: Similar to free users, but with higher usage limits. 
  • Pro users: Unlimited access to GPT-5 and access to a more powerful GPT-5 Pro model.
  • Enterprise/EDU/Team users: GPT-5 will be the default model, although it may be next week before it’s rolled out for everyone.

GPT-5 itself is really a couple of different models. There’s a fast but fairly straightforward LLM and a more robust reasoning model for handling more complex questions. A routing program identifies which model can best handle the prompt.
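
OpenAI hasn’t said exactly how that router works, but the basic idea is easy to picture. Below is a minimal, hypothetical Python sketch of routing a prompt to either a lighter or a heavier model based on a crude complexity check; the heuristic is purely illustrative, not OpenAI’s actual routing logic, and it assumes the models are exposed through the API as gpt-5 and gpt-5-mini.

# Hypothetical router sketch: pick a lighter or heavier model per prompt.
# The heuristic and model IDs are illustrative assumptions, not OpenAI's router.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REASONING_HINTS = ("prove", "step by step", "debug", "plan", "compare")

def pick_model(prompt: str) -> str:
    # Crude stand-in for a learned router: long or analytical prompts go to
    # the heavier model, everything else goes to the faster one.
    looks_complex = len(prompt) > 400 or any(h in prompt.lower() for h in REASONING_HINTS)
    return "gpt-5" if looks_complex else "gpt-5-mini"

def answer(prompt: str) -> str:
    response = client.chat.completions.create(
        model=pick_model(prompt),
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer("Explain, step by step, how to plan a two-week budget."))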

Even faster coding skills

OpenAI particularly highlighted the skill and speed with which the new model can write code. This isn’t just a feature for programmers: the model’s ability to write a program means it can create the right tool to solve whatever problem you present to it.

Yann Dubois, a post-training lead at OpenAI, showed off the model’s coding ability by asking it to create an app for learning French. Within minutes, it had coded a web application complete with sound and working game functions. Dubois actually asked it to create two different apps, running the same prompt through the model twice. The speed at which GPT-5 writes code allows you to try multiple times and pick the result you like best — or provide feedback to make changes until you get it right.

“The beauty is that you can iterate super quickly with GPT-5 to make the changes that you want,” Dubois said. “GPT-5 really opens a whole new world of vibe coding.”
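
For readers who want to try that loop outside of ChatGPT, the workflow Dubois described can be approximated with a short script. The following Python sketch is a rough illustration of generate-then-refine using OpenAI’s chat API; the prompts are made up, and it assumes GPT-5 is available under the API ID gpt-5.

# Rough sketch of the generate-review-refine loop described above.
# Assumes the OpenAI Python SDK and that GPT-5 is available as "gpt-5".
from openai import OpenAI

client = OpenAI()

messages = [{"role": "user",
             "content": "Write a single-file web app for practicing French vocabulary."}]

first = client.chat.completions.create(model="gpt-5", messages=messages)
draft = first.choices[0].message.content
print(draft)  # inspect the first attempt

# Feed human feedback back into the same conversation and regenerate.
messages.append({"role": "assistant", "content": draft})
messages.append({"role": "user",
                 "content": "Add audio playback for each word and keep score across rounds."})

second = client.chat.completions.create(model="gpt-5", messages=messages)
print(second.choices[0].message.content)  # revised version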

Read more: Never Use ChatGPT for These 11 Things

New safety features

After announcing some steps this week to improve how its tools handle sensitive mental health issues, OpenAI said GPT-5 has some tweaks of its own to make things safer. 

The new model has improved training to avoid deceptive or inaccurate information, which will also improve the user experience, said Alex Beutel, safety research lead. 

It’ll also respond differently when a prompt could be dangerous. Previous models would refuse to answer a potentially harmful question, but GPT-5 will instead try to provide the best safe answer, Beutel said. This can help when a question is innocent (like a science student asking a chemistry question) but sounds more sinister (like someone trying to make a weapon). “The model tries to give as helpful of an answer as possible but within the constraints of feeling safe,” Beutel said.

But is this really the way to AGI?

Altman told reporters the model is a “significant step along the path to AGI,” or artificial general intelligence, a term that often refers to models that are as smart and capable as a human. But he also said it’s definitely not there yet. One big reason is that it’s still not learning continuously while it’s deployed.

OpenAI’s stated goal is to try to develop AGI (although Altman said he’s not a big fan of the term), and it’s got competition. Meta CEO Mark Zuckerberg has been recruiting top AI scientists with the goal of creating “superintelligence.”

Nobody knows yet whether large language models are the way to get there. Three-quarters of AI experts surveyed earlier this year said they doubted LLMs would scale up to that level of intelligence.


Wikipedia Says It’s Losing Traffic Due to AI Summaries, Social Media Videos

The popular online encyclopedia saw an 8% drop in pageviews over the last few months.

Wikipedia has seen a decline in users this year due to artificial intelligence summaries in search engine results and the growing popularity of social media, according to a blog post Friday from Marshall Miller of the Wikimedia Foundation, the organization that oversees the free online encyclopedia.




In the post, Miller describes an 8% drop in human pageviews over the last few months compared with the numbers Wikipedia saw in the same months in 2024.

“We believe that these declines reflect the impact of generative AI and social media on how people seek information, especially with search engines providing answers directly to searchers, often based on Wikipedia content,” Miller wrote.

Blame the bots 

The AI-generated summaries that pop up on search engines like Bing and Google rely on bots called web crawlers to gather much of the information users read at the top of the search results.

Websites do their best to restrict how these bots handle their data, but web crawlers have become pretty skilled at going undetected. 
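
Sites usually publish those restrictions in a robots.txt file, which well-behaved crawlers are expected to honor. As a small illustration (not Wikipedia’s actual enforcement, and using example crawler names), this Python sketch checks whether a few crawler user agents are permitted to fetch a page under a site’s published rules.

# Check a site's published robots.txt policy for a few crawler user agents.
# This only reads the stated policy; it says nothing about bots that ignore it.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://en.wikipedia.org/robots.txt")
rp.read()  # downloads and parses the live robots.txt

page = "https://en.wikipedia.org/wiki/Large_language_model"
for agent in ("GPTBot", "CCBot", "Googlebot"):  # example crawler user agents
    print(agent, "allowed:", rp.can_fetch(agent, page))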

“Many bots that scrape websites like ours are continually getting more sophisticated and trying to appear human,” Miller wrote.

After reclassifying Wikipedia traffic data from earlier this year, Miller says the site “found that much of the unusually high traffic for the period of May and June was coming from bots built to evade detection.”

The Wikipedia blog post also noted that younger generations are turning to social-video platforms for their information rather than the open web and such sites as Wikipedia.

When people search with AI, they’re less likely to click through

There is now a growing body of research on the impact of generative AI on the internet, especially concerning online publishers whose business models rely on users visiting their webpages.

(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

In July, Pew Research examined browsing data from 900 US adults and found that the AI-generated summaries at the top of Google’s search results affected web traffic. When the summary appeared in a search, users were less likely to click on links compared to when the search results didn’t include the summaries.

Google search is especially important, because Google.com is the world’s most visited website — it’s how most of us find what we’re looking for on the internet. 

“LLMs, AI chatbots, search engines and social platforms that use Wikipedia content must encourage more visitors to Wikipedia, so that the free knowledge that so many people and platforms depend on can continue to flow sustainably,” Miller wrote. “With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”

Last year, CNET published an extensive report on how changes in Google’s search algorithm decimated web traffic for online publishers. 



OpenAI Says It’s Working With Actors to Crack Down on Celebrity Deepfakes in Sora

Bryan Cranston alerted SAG-AFTRA, the actors union, when he saw AI-generated videos of himself made with the AI video app.

OpenAI said Monday it would do more to stop users of its AI video generation app Sora from creating clips with the likenesses of actors and other celebrities after actor Bryan Cranston and the union representing film and TV actors raised concerns that deepfake videos were being made without the performers’ consent.

Cranston, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and several talent agencies said they struck a deal with the ChatGPT maker over the use of celebrities’ likenesses in Sora. The joint statement highlights the intense conflict between AI companies and rights holders like celebrities’ estates, movie studios and talent agencies, and how generative AI tech continues to erode reality for all of us.

Sora, a new sister app to ChatGPT, lets users create and share AI-generated videos. It launched to much fanfare three weeks ago, with AI enthusiasts searching for invite codes. But Sora is unique among AI video generators and social media apps: it lets you drop other people’s recorded likenesses into nearly any AI video. It has been, at best, weird and funny, and at worst, a never-ending scroll of deepfakes that are nearly indistinguishable from reality.

Cranston noticed his likeness was being used by Sora users when the app launched, and the Breaking Bad actor alerted his union. The new agreement with the actors’ union and talent agencies reiterates that celebrities must opt in before their likenesses can be placed into AI-generated videos. OpenAI said in the statement that it has “strengthened the guardrails around replication of voice and likeness” and “expressed regret for these unintentional generations.”

OpenAI does have guardrails in place to prevent the creation of videos of well-known people: It rejected my prompt asking for a video of Taylor Swift on stage, for example. But these guardrails aren’t perfect, as we saw last week with a growing trend of people creating videos featuring the Rev. Martin Luther King Jr. They ranged from weird deepfakes of the civil rights leader rapping and wrestling in the WWE to overtly racist content.




The flood of “disrespectful depictions,” as OpenAI called them in a statement on Friday, is part of why the company paused the ability to create videos featuring King.

Bernice A. King, his daughter, last week publicly asked people to stop sending her AI-generated videos of her father. She was echoing comedian Robin Williams’ daughter, Zelda, who called these sorts of AI videos “gross.”

OpenAI said it “believes public figures and their families should ultimately have control over how their likeness is used” and that “authorized representatives” of public figures and their estates can request that their likeness not be included in Sora. In this case, King’s estate is the entity responsible for choosing how his likeness is used.

This isn’t the first time OpenAI has leaned on others to make those calls. Before Sora’s launch, the company reportedly told a number of Hollywood-adjacent talent agencies that they would have to opt out of having their intellectual property included in Sora. But that initial approach didn’t square with decades of copyright law — usually, companies need to license protected content before using it — and OpenAI reversed its stance a few days later. It’s one example of how AI companies and creators are clashing over copyright, including through high-profile lawsuits.

(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)  



Today’s NYT Connections Hints, Answers and Help for Oct. 21, #863

Here are some hints and the answers for the NYT Connections puzzle for Oct. 21, #863.

Looking for the most recent Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections: Sports Edition and Strands puzzles.


Today’s NYT Connections puzzle has a diverse mix of topics. Remember when you see a word like “does” that it could have multiple meanings. Read on for clues and today’s Connections answers.

The Times now has a Connections Bot, like the one for Wordle. Go there after you play to receive a numeric score and to have the program analyze your answers. Players who are registered with the Times Games section can now nerd out by following their progress, including the number of puzzles completed, win rate, number of times they nabbed a perfect score and their win streak.

Read more: Hints, Tips and Strategies to Help You Win at NYT Connections Every Time

Hints for today’s Connections groups

Here are four hints for the groupings in today’s Connections puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.

Yellow group hint: Deal me in.

Green group hint: I can get that.

Blue group hint: Hoops.

Purple group hint: The clicker.

Answers for today’s Connections groups

Yellow group: Playing cards.

Green group: Takes on.

Blue group: N.B.A. teams.

Purple group: Things you can control with remotes.

Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words

What are today’s Connections answers?

The yellow words in today’s Connections

The theme is playing cards. The four answers are aces, jacks, kings and queens.

The green words in today’s Connections

The theme is takes on. The four answers are addresses, does, handles and tackles.

The blue words in today’s Connections

The theme is N.B.A. teams. The four answers are Bucks, Bulls, Hornets and Spurs.

The purple words in today’s Connections

The theme is things you can control with remotes. The four answers are drones, garage doors, televisions and Wiis.

