AI Gets Smarter, Safer, More Visual With GPT-4 Release, OpenAI Says
ChatGPT Plus subscribers can try it out now.

The hottest AI technology foundation, OpenAI’s GPT, got a big upgrade Tuesday that’s now available in the premium version of the ChatGPT chatbot.
The new GPT-4 can generate much longer strings of text and respond when people feed it images, and it’s designed to do a better job avoiding artificial intelligence pitfalls visible in the earlier GPT-3.5, OpenAI said Tuesday. For example, when taking bar exams that attorneys must pass to practice law, GPT-4 ranks in the top 10% of scores compared to the bottom 10% for GPT-3.5, the AI research company said.
GPT stands for Generative Pretrained Transformer, a reference to the fact that it can generate text on its own and that it uses an AI technology called transformers that Google pioneered. It’s a type of AI called a large language model, or LLM, that’s trained on vast swaths of data harvested from the internet, learning mathematically to spot patterns and reproduce styles.
OpenAI has made its GPT technology available to developers for years, but ChatGPT, which debuted in November, offered an easy interface that yielded an explosion of interest, experimentation and worry about the downsides of the technology. ChatGPT is free, but it can falter when demand is high. In January, OpenAI began offering ChatGPT Plus for $20 per month with assured availability and, now, the GPT-4 foundation.
GPT-4 advancements
“In a casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle. The difference comes out when the complexity of the task reaches a sufficient threshold,” OpenAI said. “GPT-4 is more reliable, creative and able to handle much more nuanced instructions than GPT-3.5.”
Another major advance in GPT-4 is the ability to accept input data that includes text and photos. OpenAI’s example is asking the chatbot to explain a joke showing a bulky decades-old computer cable plugged into a modern iPhone’s tiny Lightning port.
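As a rough illustration of what multimodal input looks like in practice, the sketch below sends a text question along with an image URL through OpenAI’s Python library. The model name and image URL are placeholders, and image input reached the API only gradually after GPT-4’s announcement, so treat the details as assumptions rather than the exact interface OpenAI described at launch.

```python
# Rough sketch (assumed interface): sending text plus an image to a GPT-4-class
# model through OpenAI's Python library. The model name and image URL are
# placeholders for illustration; requires the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model; GPT-4 image input rolled out later
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Explain why this image is funny."},
                {"type": "image_url", "image_url": {"url": "https://example.com/joke.jpg"}},
            ],
        },
    ],
)

print(response.choices[0].message.content)
```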
Another advance is better performance at avoiding AI problems like hallucinations — incorrectly fabricated responses, often offered with just as much seeming authority as answers the AI gets right. GPT-4 is also better at thwarting attempts to get it to say the wrong thing: “GPT-4 scores 40% higher than our latest GPT-3.5 on our internal adversarial factuality evaluations,” OpenAI said.
GPT-4 also adds new “steerability” options. Users of large language models today often must engage in elaborate “prompt engineering,” learning how to embed specific cues in their prompts to get the right sort of responses. GPT-4 adds a system command option that lets users set a specific tone or style, for example programming code or a Socratic tutor: “You are a tutor that always responds in the Socratic style. You never give the student the answer, but always try to ask just the right question to help them learn to think for themselves.”
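Here is a similar minimal sketch showing how that Socratic-tutor instruction could be supplied as a system message through OpenAI’s Python library; the client interface and model name are assumptions that depend on library version and account access, not OpenAI’s own example code.

```python
# Minimal sketch (assumed interface): steering GPT-4's tone with a system message
# via OpenAI's Python library. Requires the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",  # model name assumed; availability depends on account access
    messages=[
        {
            "role": "system",
            "content": (
                "You are a tutor that always responds in the Socratic style. "
                "You never give the student the answer, but always try to ask "
                "just the right question to help them learn to think for themselves."
            ),
        },
        {"role": "user", "content": "How do I solve 3x + 7 = 22?"},
    ],
)

print(response.choices[0].message.content)
```

In this pattern, the system message carries the steering instruction, while the ordinary user message carries the actual question.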
“Stochastic parrots” and other problems
OpenAI acknowledges significant shortcomings that persist with GPT-4, though it also touts progress avoiding them.
“It can sometimes make simple reasoning errors … or be overly gullible in accepting obvious false statements from a user. And sometimes it can fail at hard problems the same way humans do, such as introducing security vulnerabilities into code it produces,” OpenAI said. In addition, “GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake.”
Large language models can deliver impressive results, seeming to understand huge amounts of subject matter and to converse in human-sounding if somewhat stilted language. Fundamentally, though, LLM AIs don’t really know anything. They’re just able to string words together in statistically very refined ways.
This statistical but fundamentally somewhat hollow approach to knowledge led researchers, including linguist Emily Bender and former Google AI researcher Timnit Gebru, to warn of the “dangers of stochastic parrots” that come with large language models. Language model AIs tend to encode biases, stereotypes and negative sentiment present in training data, and researchers and other people using these models tend “to mistake … performance gains for actual natural language understanding.”
OpenAI, Microsoft and Nvidia partnership
OpenAI got a big boost when Microsoft said in February it’s using GPT technology in its Bing search engine, including a chat feature similar to ChatGPT. On Tuesday, Microsoft said it’s using GPT-4 for the Bing work. Together, OpenAI and Microsoft pose a major search threat to Google, but Google has its own large language model technology too, including a chatbot called Bard that Google is testing privately.
Microsoft uses GPT technology both to evaluate the searches people type into Bing and, in some cases, to offer more elaborate, conversational responses. The results can be much more informative than those of earlier search engines, but the optional, more conversational interface has had problems that made it look unhinged.
To train GPT, OpenAI used Microsoft’s Azure cloud computing service, including thousands of Nvidia’s A100 graphics processing units, or GPUs, yoked together. Azure now can use Nvidia’s new H100 processors, which include specific circuitry to accelerate AI transformer calculations.
Scary Survey Results: Teen Drivers Are Often Looking at Their Phones
Troubling new research found that entertainment is the most common reason teens use their phones behind the wheel, followed by texting and navigation.

A new study reveals that teen drivers in the US are spending more than one-fifth of their driving time distracted by their phones, with many glances lasting long enough to significantly raise the risk of a crash. Published in the journal Traffic Injury Prevention and released on Thursday, the research found that, on average, teens reported looking at their phones during 21.1% of each driving trip. More than a quarter of those distractions lasted two seconds or longer, an amount of time widely recognized as dangerous at highway speeds.
Most distractions tied to entertainment, not emergencies
The top reason teens said they reached for their phones behind the wheel was for entertainment, cited by 65% of respondents. Texting (40%) and navigation (30%) were also common. Researchers emphasized that these distractions weren’t typically urgent, but rather habitual or social.
Teens know the risks
The study includes survey responses from 1,126 teen drivers across all four US regions, along with in-depth interviews with a smaller group of high schoolers. Most participants recognized that distracted driving is unsafe and believed their parents and peers disapproved of the behavior.
But many teens also assumed that their friends were doing it anyway, pointing to a disconnect between personal values and perceived social norms.
Teens think they can resist distractions
Interestingly, most teens expressed confidence in their ability to resist distractions. That belief, researchers suggest, could make it harder to change behavior unless future safety campaigns specifically target these attitudes.
The study’s lead author, Dr. Rebecca Robbins of Boston’s Brigham and Women’s Hospital, said interventions should aim to shift social norms while also emphasizing practical steps, such as enabling “Do Not Disturb” mode and physically separating drivers from their devices.
“Distracted driving is a serious public health threat and particularly concerning among young drivers,” Robbins said. “Driving distracted doesn’t just put the driver at risk of injury or death, it puts everyone else on the road in danger of an accident.”
What this means for parents and educators
The researchers say their findings can help guide educators and parents in developing more persuasive messaging about the dangers of distracted driving. One of the recommendations is that adults need to counter teens’ beliefs that phone use while driving is productive or harmless.
While the study’s qualitative component was limited by a small and non-urban sample, the authors believe the 38-question survey they developed can be used more broadly to assess beliefs, behaviors and the effectiveness of future safety efforts.
Nintendo Switch 2 Joy-Con Issues? It Might Just Be Your HDMI Cable
Make sure to use the Switch 2 cable included with the new gaming console.

As the Switch 2 continues to sell in the millions for Nintendo, it shouldn’t be a surprise that there’d be some issues with the console. It appears, however, that one problem Switch 2 owners are facing is actually just a matter of using the wrong cable.
Reddit users have posted about their Joy-Cons disconnecting while they’re playing on their Switch 2 with the console docked, an issue spotted earlier by IGN. Luckily, the issue appears to be resolvable by using the HDMI cable included with the Switch 2 rather than an older, slower one — including the cable that came with the original Nintendo Switch.
Nintendo laid out the solution on its support page for when the Joy-Con 2 starts disconnecting from the console:
- Confirm that you’re using an “Ultra High Speed” HDMI cable to connect the dock to the TV. If it’s not Ultra High Speed, your console won’t perform as expected when docked.
- If you’re using a cable other than the one that came with the console, it should have “Ultra High Speed” printed on it.
- The HDMI cable that came with the original Nintendo Switch is not “Ultra High Speed” and should not be used with the Nintendo Switch 2 dock.
Nintendo didn’t immediately respond to a request for comment about the source of this issue.
Since the Switch 2 launch, many gamers have come to realize that Nintendo’s new console is very picky about what cables are connected to it. This goes for the HDMI cable as well as the power cable.
While the new and old Switch share the same name, they don’t share the same components. The Switch 2 is a huge upgrade in graphics power over the 2017 console, which means it needs the appropriate power supply. Not giving the Switch 2 sufficient power can cause issues, especially if the system has to work hard to run a demanding game.