Technologies
AI Gets Smarter, Safer, More Visual With GPT-4 Update, OpenAI Says
If you subscribe to ChatGPT Plus, you can try it out now.

The hottest foundation technology in AI got a big upgrade Tuesday with OpenAI’s release of GPT-4, now available in the premium version of the ChatGPT chatbot.
GPT-4 can generate much longer strings of text and respond when people feed it images, and it’s designed to do a better job avoiding artificial intelligence pitfalls visible in the earlier GPT-3.5, OpenAI said Tuesday. For example, when taking bar exams that attorneys must pass to practice law, GPT-4 ranks in the top 10% of scores compared with the bottom 10% for GPT-3.5, the AI research company said.
GPT stands for Generative Pretrained Transformer, a reference to the fact that it can generate text on its own — now up to 25,000 words with GPT-4 — and that it uses an AI technology called transformers that Google pioneered. It’s a type of AI called a large language model, or LLM, that’s trained on vast swaths of data harvested from the internet, learning mathematically to spot patterns and reproduce styles. Human overseers rate results to steer GPT in the right direction, and GPT-4 has more of this feedback.
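To make that generative, pattern-reproducing behavior concrete, here’s a minimal sketch using the openly available GPT-2 model from Hugging Face’s transformers library. GPT-4 itself isn’t downloadable, so the model choice, prompt and settings here are illustrative assumptions, not OpenAI’s setup.

```python
# Minimal sketch: a transformer language model continues a prompt one
# statistically likely token at a time (GPT-2 stands in for GPT-4 here).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator("Large language models are", max_new_tokens=30)
print(result[0]["generated_text"])
```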
OpenAI has made GPT available to developers for years, but ChatGPT, which debuted in November, offered an easy interface ordinary folks can use. That yielded an explosion of interest, experimentation and worry about the downsides of the technology. It can do everything from generating programming code and answering exam questions to writing poetry and supplying basic facts. It’s remarkable if not always reliable.
ChatGPT is free, but it can falter when demand is high. In January, OpenAI began offering ChatGPT Plus for $20 per month with assured availability and, now, the GPT-4 foundation. Developers can sign up on a waiting list to get their own access to GPT-4.
GPT-4 advancements
“In a casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle. The difference comes out when the complexity of the task reaches a sufficient threshold,” OpenAI said. “GPT-4 is more reliable, creative and able to handle much more nuanced instructions than GPT-3.5.”
Another major advance in GPT-4 is the ability to accept input data that includes text and photos. OpenAI’s example is asking the chatbot to explain a joke showing a bulky decades-old computer cable plugged into a modern iPhone’s tiny Lightning port. This feature also helps GPT take tests that aren’t just textual, but it isn’t yet available in ChatGPT Plus.
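Image input wasn’t switched on in ChatGPT Plus at launch, but for a sense of how a mixed text-and-image prompt is shaped, here’s a hedged sketch using the message format OpenAI’s chat API later adopted for vision-capable models. The model name and image URL are assumptions for illustration only.

```python
# Hedged sketch of a multimodal request: text plus an image URL in a
# single user message. Model name and URL are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # a vision-capable model; not what shipped at GPT-4's launch
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Explain the joke in this photo."},
            {"type": "image_url", "image_url": {"url": "https://example.com/cable-joke.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```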
Another major advance is better performance at avoiding AI problems like hallucinations — incorrectly fabricated responses, often offered with just as much seeming authority as answers the AI gets right. GPT-4 is also better at thwarting attempts to get it to say the wrong thing: “GPT-4 scores 40% higher than our latest GPT-3.5 on our internal adversarial factuality evaluations,” OpenAI said.
GPT-4 also adds new “steerability” options. Users of large language models today often must engage in elaborate “prompt engineering,” learning how to embed specific cues in their prompts to get the right sort of responses. GPT-4 adds a system command option that lets users set a specific tone or style, for example programming code or a Socratic tutor: “You are a tutor that always responds in the Socratic style. You never give the student the answer, but always try to ask just the right question to help them learn to think for themselves.”
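For developers, that system command corresponds to the system role in OpenAI’s chat API. Here’s a short sketch of the idea: the Socratic-tutor instruction is quoted from OpenAI’s example above, while the client setup and the student’s question are our own assumptions.

```python
# Sketch of GPT-4 "steerability": a system message fixes the persona,
# and user messages are then answered in that style.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a tutor that always responds in the Socratic style. "
                "You never give the student the answer, but always try to ask "
                "just the right question to help them learn to think for themselves."
            ),
        },
        {"role": "user", "content": "How do I solve 3x + 5 = 20?"},
    ],
)
print(response.choices[0].message.content)  # expect a question back, not the answer
```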
“Stochastic parrots” and other problems
OpenAI acknowledges significant shortcomings that persist with GPT-4, though it also touts progress avoiding them.
“It can sometimes make simple reasoning errors … or be overly gullible in accepting obvious false statements from a user. And sometimes it can fail at hard problems the same way humans do, such as introducing security vulnerabilities into code it produces,” OpenAI said. In addition, “GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake.”
Large language models can deliver impressive results, seeming to understand huge amounts of subject matter and to converse in human-sounding if somewhat stilted language. Fundamentally, though, LLM AIs don’t really know anything. They’re just able to string words together in statistically very refined ways.
This statistical but fundamentally somewhat hollow approach to knowledge led researchers, including University of Washington linguist Emily Bender and former Google AI researcher Timnit Gebru, to warn of the “dangers of stochastic parrots” that come with large language models. Language model AIs tend to encode biases, stereotypes and negative sentiment present in training data, and researchers and other people using these models tend “to mistake … performance gains for actual natural language understanding.”
OpenAI Chief Executive Sam Altman acknowledges problems, but he’s pleased overall with the progress shown with GPT-4. “It is more creative than previous models, it hallucinates significantly less, and it is less biased. It can pass a bar exam and score a 5 on several AP exams,” Altman tweeted Tuesday.
One worry about AI is that students will use it to cheat, for example when answering essay questions. It’s a real risk, though some educators actively embrace LLMs as a tool, like search engines and Wikipedia. Plagiarism detection companies are adapting to AI by training their own detection models. One such company, Crossplag, said Wednesday that after testing about 50 documents that GPT-4 generated, “our accuracy rate was above 98.5%.”
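Crossplag hasn’t published its detector, but the general approach of training a detection model boils down to text classification. As a rough illustration, here’s OpenAI’s public GPT-2 output detector run through Hugging Face; treat the model and its labels as stand-ins for whatever Crossplag actually uses.

```python
# Rough sketch of AI-text detection framed as binary text classification,
# using OpenAI's public GPT-2 output detector as a stand-in model.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

print(detector("The mitochondria is the powerhouse of the cell."))
# e.g. [{'label': 'Real', 'score': 0.97}] (labels are 'Real' vs 'Fake')
```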
OpenAI, Microsoft and Nvidia partnership
OpenAI got a big boost when Microsoft said in February it’s using GPT technology in its Bing search engine, including a chat feature similar to ChatGPT. On Tuesday, Microsoft said it’s using GPT-4 for the Bing work. Together, OpenAI and Microsoft pose a major search threat to Google, but Google has its own large language model technology too, including a chatbot called Bard that Google is testing privately.
Also on Tuesday, Google announced it’ll begin limited testing of its own AI technology to boost writing Gmail emails and Google Docs word processing documents. “With your collaborative AI partner you can continue to refine and edit, getting more suggestions as needed,” Google said.
That phrasing mirrors Microsoft’s “co-pilot” positioning of AI technology. Calling it an aid to human-led work is a common stance, given the problems of the technology and the necessity for careful human oversight.
Microsoft uses GPT technology both to evaluate the searches people type into Bing and, in some cases, to offer more elaborate, conversational responses. The results can be much more informative than those of earlier search engines, but the more conversational interface that can be invoked as an option has had problems that make it look unhinged.
To train GPT, OpenAI used Microsoft’s Azure cloud computing service, including thousands of Nvidia’s A100 graphics processing units, or GPUs, yoked together. Azure now can use Nvidia’s new H100 processors, which include specific circuitry to accelerate AI transformer calculations.
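“Yoked together” here means many GPUs training a single model in parallel, each running its own process and synchronizing gradients with the others. The toy sketch below shows that idea with PyTorch’s DistributedDataParallel; everything in it, from the stand-in model to the hyperparameters, is an assumption for illustration, not OpenAI’s actual training code.

```python
# Toy sketch of multi-GPU training: one process per GPU, with gradients
# averaged across all processes on every backward pass. Launch with:
#   torchrun --nproc_per_node=<num_gpus> train_sketch.py
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")                # NCCL backend for Nvidia GPUs
    rank = dist.get_rank()
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(512, 512).cuda(rank)   # stand-in for a transformer
    model = DDP(model, device_ids=[rank])          # syncs gradients across GPUs

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    batch = torch.randn(8, 512, device=rank)       # stand-in training batch
    loss = model(batch).square().mean()            # stand-in loss
    loss.backward()                                # gradient all-reduce happens here
    optimizer.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```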
AI chatbots everywhere
Another large language model developer, Anthropic, also unveiled an AI chatbot called Claude on Tuesday. The company, which counts Google as an investor, opened a waiting list for Claude.
“Claude is capable of a wide variety of conversational and text processing tasks while maintaining a high degree of reliability and predictability,” Anthropic said in a blog post. “Claude can help with use cases including summarization, search, creative and collaborative writing, Q&A, coding and more.”
It’s one of a growing crowd. Chinese search and tech giant Baidu is working on a chatbot called Ernie Bot. Meta, parent of Facebook and Instagram, consolidated its AI operations into a bigger team and plans to build more generative AI into its products. Even Snapchat is getting in on the game with a GPT-based chatbot called My AI.
Expect more refinements in the future.
“We have had the initial training of GPT-4 done for quite a while, but it’s taken us a long time and a lot of work to feel ready to release it,” Altman tweeted. “We hope you enjoy it and we really appreciate feedback on its shortcomings.”
Editors’ note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.
Technologies
Google I/O 2025: How to Watch and What to Expect
With Android 16 out of the way, Google I/O will certainly be all about AI.

Google I/O 2025 takes place on May 20 and 21, with Google’s big keynote happening on day 1. We expect Big G to talk about its myriad innovations across its ever-expanding portfolio of products — almost certainly with a huge focus on AI every step of the way. If we collectively cross our fingers, promise to be good and eat all our vegetables, then we may even be treated to a sneak peek at upcoming hardware.
Read more: Android 16: Everything Google Announced at the Android Show
Google also hosted a totally separate event that focused solely on Android. The Android Show: I/O Edition saw the wrappers come off Android 16, with insights into the new Material 3 Expressive interface, updates to security and a focus on Gemini and how it’ll work on a variety of other devices.
By breaking out Android news into its own virtual event, Google frees itself to spend more time during the I/O keynote talking about Gemini, DeepMind, Android XR and Project Astra. It’s going to be a jam-packed event, so here’s how you can watch I/O 2025 as it happens and what you can look forward to.
Google I/O: Where to watch
Google I/O proper kicks off with a keynote on May 20 at 10 a.m. PDT (1 p.m. EDT, 6 p.m. BST). It’ll almost certainly be available to stream on Google’s own YouTube channel, although a holding video isn’t live yet. There’s no livestream link on the I/O website yet either, though you can use the handy links there to add the event to your calendar of choice. Expect livestream links to appear closer to the day.
What to expect from Google I/O 2025
Little chat about Android 16: As Google gave Android 16 its own outing already, it’s likely that it won’t be mentioned all that much during I/O. In fact, at last year’s event, Android was barely mentioned, while the term “AI” was used well over a hundred times.
Android XR: Google didn’t talk much about Android XR during the Android Show, focusing instead on the purely phone-based updates to the platform. We expected to hear more about the company’s latest foray into mixed-reality headsets in partnership with Samsung and its Project Moohan headset, so it’s possible that this is being saved for I/O proper.
Gemini: With Android being spun out into its own separate event, Google is evidently clearing the way for I/O to focus on everything else the company does. AI will continue to dominate the conversation at I/O, just as it did last year (though hopefully Google can make it more understandable) with updates to many of its AI platforms expected to be announced.
Gemini is expected to receive a variety of update announcements, including more information on its latest 2.5 Pro update, which boasts various improvements to its reasoning abilities, particularly its helpfulness for coding applications. Expect lots of mentions of Google’s other AI-based products, too, including DeepMind, LearnLM and Project Astra. Let’s just hope Google has figured out how to make this information make any kind of sense.
Beyond AI, Google may talk about updates to its other products, including Gmail, Chrome and the Play Store, although whether those updates are big enough for the keynote, rather than the developer-focused sessions that follow I/O’s opening, remains to be seen.
Technologies
You Can Now Buy Nike’s $900 Workout Shoes With Compression and Heating
The Nike Hyperboots, designed to help you warm up and recover from workouts, launched Saturday.

Those workout shoes with compression and heating that Nike and Hyperice showed off at CES 2025 earlier this year weren’t just a concept. The Hyperboot is now available to buy online in North America, so it’s within reach, as long as you’re willing to spend $899.
The high-tops, which Nike and Hyperice call a wearable much like your smartwatch, help your feet warm up before a workout, and then recover after it. The shoes do this with heating and air-compression massage technology, taking the idea of heating pads and compression socks and making them mobile.
“You can definitely feel the heat in here,” former CNET senior mobile writer Lisa Eadicicco said when she had the chance to try these workout shoes on in January. She walked across a demo room in Las Vegas wearing the fancy footwear to test out the compression and heating features.
The boots massage and compress your ankles and feet, and in CNET’s test, we could especially feel the heat around the ankles. Buttons on the shoes let you adjust compression and the amount of heat with multiple settings for each.
“The Hyperboot contains a system of dual-air bladders that deliver sequential compression patterns and are bonded to thermally efficient heating elements that evenly distribute heat throughout the shoe’s entire upper,” Nike said.
The battery lasts 1 to 1.5 hours on max heat and compression settings, or 8 hours if you’re only using the massage setting. It takes 5 to 6 hours to charge via a USB-C cable. The boots come in five sizes: S, M, L, XL and XXL.
Technologies
You’re Wasting $200 on Subscriptions You Forgot About, CNET Survey Finds. How to Put an End to ‘Subscription Creep’