Technologies
Why Everyone’s Obsessed With ChatGPT, a Mindblowing AI Chatbot
This artificial intelligence bot is an impressive writer, but you should still be careful how much you trust its answers.

There’s a new AI bot in town: ChatGPT. And you’d better take notice.
The tool, from a power player in artificial intelligence, lets you type questions using natural language that the chatbot answers in conversational, if somewhat stilted, language. The bot remembers the thread of your dialog, using previous questions and answers to inform its next responses.
It’s a big deal. The tool seems pretty knowledgeable if not omniscient — it can be creative and its answers can sound downright authoritative. A few days after its launch, more than a million people are trying out ChatGPT.
But its creator, the for-profit research lab called OpenAI, warns that ChatGPT "may occasionally generate incorrect or misleading information," so be careful. Here's a look at why ChatGPT is important and what's going on with it.
What is ChatGPT?
ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You can ask it countless questions and often will get an answer that’s useful.
For example, you can ask it encyclopedia questions like, "Explain Newton's laws of motion." You can tell it, "Write me a poem," and when it does, say, "Now make it more exciting." You can ask it to write a computer program that'll show you all the different ways you can arrange the letters of a word.
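That letter-arrangement request is a classic permutations problem. A minimal sketch of the kind of program ChatGPT might produce (this is my own illustration, not the bot's actual output) looks like this:

```python
from itertools import permutations

def letter_arrangements(word):
    """Return every distinct ordering of the letters in a word."""
    # A set removes duplicates that arise when a letter repeats.
    return sorted({"".join(p) for p in permutations(word)})

print(letter_arrangements("cat"))  # ['act', 'atc', 'cat', 'cta', 'tac', 'tca']
```

A word with repeated letters, like "noon," yields fewer distinct arrangements than the raw factorial count, which is exactly the sort of edge case you'd want to check in the bot's answer.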
Here’s the catch: ChatGPT doesn’t exactly know anything. It’s an AI trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful dialog. The answers you get may sound plausible and even authoritative, but they might well be entirely wrong, as OpenAI warns.
Chatbots have been of interest for years to companies looking for ways to help customers get what they need and to AI researchers trying to tackle the Turing Test. That’s the famous "Imitation Game" that computer scientist Alan Turing proposed in 1950 as a way to gauge intelligence: Can a human judge conversing with a human and with a computer tell which is which?
What kinds of questions can you ask?
You can ask anything, though you might not get an answer. OpenAI suggests a few categories, like explaining physics, asking for birthday party ideas and getting programming help.
I asked it to write a poem, and it did, though I don’t think any literature experts would be impressed. I then asked it to make it more exciting, and lo, ChatGPT pumped it up with words like battlefield, adrenaline, thunder and adventure.
One wacky example shows how ChatGPT is willing to just go for it in domains where people would fear to tread: a command to write "a folk song about writing a rust program and fighting with lifetime errors."
ChatGPT’s expertise is broad, and its ability to follow a conversation is notable. When I asked it for words that rhymed with "purple," it offered a few suggestions, then when I followed up "How about with pink?" it didn’t miss a beat. (Also, there are a lot more good rhymes for "pink.")
When I asked, "Is it easier to get a date by being sensitive or being tough?" ChatGPT responded, in part, "Some people may find a sensitive person more attractive and appealing, while others may be drawn to a tough and assertive individual. In general, being genuine and authentic in your interactions with others is likely to be more effective in getting a date than trying to fit a certain mold or persona."
You don’t have to look far to find accounts of the bot blowing people’s minds. Twitter is awash with users displaying the AI’s prowess at generating art prompts and writing code. Some have even proclaimed "Google is dead," along with the college essay. We’ll talk more about that below.
Who built ChatGPT?
ChatGPT is the brainchild of OpenAI, an artificial intelligence research company. Its mission is to develop a "safe and beneficial" artificial general intelligence system or to help others do so.
It’s made splashes before, first with GPT-3, which can generate text that can sound like a human wrote it, and then DALL-E, which creates what’s now called "generative art" based on text prompts you type in.
GPT-3, and the GPT-3.5 update on which ChatGPT is based, are examples of AI technology called large language models. They’re trained to create text based on what they’ve seen, and they can be trained automatically — typically with huge quantities of computer power over a period of weeks. For example, the training process can find a random paragraph of text, delete a few words, ask the AI to fill in the blanks, compare the result to the original and then reward the AI system for coming as close as possible. Repeating this process over and over can lead to a sophisticated ability to generate text.
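To make that fill-in-the-blank idea concrete, here's a drastically simplified toy illustration. Real large language models use neural networks and enormous datasets; this sketch instead just counts which word most often appears between a pair of neighbors in a tiny made-up corpus, then uses those counts to guess a deleted word.

```python
from collections import Counter

# A tiny toy corpus standing in for "vast swaths of text."
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat ate the fish . the dog ate the bone ."
).split()

# Count how often each word appears between a given left and right neighbor.
context_counts = {}
for left, word, right in zip(corpus, corpus[1:], corpus[2:]):
    context_counts.setdefault((left, right), Counter())[word] += 1

def fill_blank(left, right):
    """Guess a deleted word from its neighbors, as in masked training."""
    counts = context_counts.get((left, right))
    return counts.most_common(1)[0][0] if counts else None

print(fill_blank("dog", "on"))  # the word most often seen between "dog" and "on"
```

The training loop described above would compare such a guess to the word that was actually deleted and nudge the model toward better answers; at the scale of a real model, that simple objective produces surprisingly fluent text.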
Is ChatGPT free?
Yes, for now at least. OpenAI CEO Sam Altman warned on Sunday, "We will have to monetize it somehow at some point; the compute costs are eye-watering." OpenAI charges for DALL-E art once you exceed a basic free level of usage.
What are the limits of ChatGPT?
As OpenAI emphasizes, ChatGPT can give you wrong answers. Sometimes, helpfully, it’ll specifically warn you of its own shortcomings. For example, when I asked it who wrote the phrase "the squirming facts exceed the squamous mind," ChatGPT replied, "I’m sorry, but I am not able to browse the internet or access any external information beyond what I was trained on." (The phrase is from Wallace Stevens’ 1942 poem "Connoisseur of Chaos.")
ChatGPT was willing to take a stab at the meaning of that expression: "a situation in which the facts or information at hand are difficult to process or understand." It sandwiched that interpretation between cautions that it’s hard to judge without more context and that it’s just one possible interpretation.
ChatGPT’s answers can look authoritative but be wrong.
The software developer site Stack Overflow banned ChatGPT answers to programming questions. Administrators cautioned, "because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers."
You can see for yourself how artful a BS artist ChatGPT can be by asking the same question multiple times. When I asked twice whether Moore’s Law, which tracks the computer chip industry’s progress increasing the number of data-processing transistors, is running out of steam, I got two different answers. One pointed optimistically to continued progress, while the other pointed more grimly to the slowdown and the belief "that Moore’s Law may be reaching its limits."
Both ideas are common in the computer industry itself, so this ambiguous stance perhaps reflects what human experts believe.
With other questions that don’t have clear answers, ChatGPT often won’t be pinned down.
The fact that it offers an answer at all, though, is a notable development in computing. Computers are famously literal, refusing to work unless you follow exact syntax and interface requirements. Large language models are revealing a more human-friendly style of interaction, not to mention an ability to generate answers that are somewhere between copying and creativity.
What’s off limits?
ChatGPT is designed to weed out "inappropriate" requests, a behavior in line with OpenAI’s mission "to ensure that artificial general intelligence benefits all of humanity."
If you ask ChatGPT itself what’s off limits, it’ll tell you: any questions "that are discriminatory, offensive, or inappropriate. This includes questions that are racist, sexist, homophobic, transphobic, or otherwise discriminatory or hateful." Asking it to engage in illegal activities is also a no-no.
Is this better than Google search?
Asking a computer a question and getting an answer is useful, and often ChatGPT delivers the goods.
Google often supplies you with its suggested answers to questions and with links to websites that it thinks will be relevant. Often ChatGPT’s answers far surpass what Google will suggest, so it’s easy to imagine ChatGPT as a rival.
But you should think twice before trusting ChatGPT. As with Google itself and other sources of information like Wikipedia, it’s best practice to verify information from original sources before relying on it.
Vetting the veracity of ChatGPT answers takes some work because it just gives you some raw text with no links or citations. But it can be useful and in some cases thought provoking. You may not see something directly like ChatGPT in Google search results, but Google has built large language models of its own and uses AI extensively already in search.
So ChatGPT is doubtless showing the way toward our tech future.
What’s New in Anthropic’s Claude 4 Gen AI Models?
Anthropic said it’s using extra safety precautions with its heavy-duty Claude 4 Opus model.

The latest versions of Anthropic’s Claude generative AI models made their debut Thursday, including a heavier-duty model built specifically for coding and complex tasks.
Anthropic launched the new Claude 4 Opus and Claude 4 Sonnet models during its Code with Claude developer conference, and executives said the new tools mark a significant step forward in terms of reasoning and deep thinking skills.
The company launched the prior model, Claude 3.7 Sonnet, in February. Since then, competing AI developers have also upped their game. OpenAI released GPT-4.1 in April, with an emphasis on an expanded context window, along with the new o3 reasoning model family. Google followed in early May with an updated version of Gemini 2.5 Pro that it said is better at coding.
Claude 4 Opus is a larger, more resource-intensive model built to handle particularly difficult challenges. Anthropic CEO Dario Amodei said test users have seen it quickly handle tasks that might have taken a person several hours to complete.
"In many ways, as we’re often finding with large models, the benchmarks don’t fully do justice to it," he said during the keynote event.
Claude 4 Sonnet is a leaner model, with improvements built on Anthropic’s Claude 3.7 Sonnet model. The 3.7 model often had problems with overeagerness and sometimes did more than the person asked it to do, Amodei said. While it’s a less resource-intensive model, it still performs well, he said.
"It actually does just as well as Opus on some of the coding benchmarks, but I think it’s leaner and more narrowly focused," Amodei said.
Anthropic said the models have a new capability, still being beta tested, in which they can use tools like web searches while engaged in extended reasoning. The models can alternate between reasoning and using tools to get better responses to complex queries.
The models offer near-instant response modes and extended thinking modes.
All of the paid plans offer Opus and Sonnet models, while the free plan just has the Sonnet model.
The new models show Anthropic’s focus on building strong coding models, said Arun Chandrasekaran, a distinguished vice president and analyst at Gartner. "Anthropic’s Claude models have established strong leadership in the software engineering domain and the latest Claude 4 release extends that leadership."
Anthropic triggers safety protocols with new Claude models
In launching the Claude Opus 4 model, Anthropic said it was taking increased safety precautions to reduce the risk of Claude being misused. In a blog post, the company said it hasn’t determined whether the model actually requires the protections of its ASL-3 standard but is applying them as a precaution.
The safety precautions are specifically designed to prevent Claude from helping with developing chemical, biological, radiological or nuclear weapons. Anthropic said it limited attacks known as universal jailbreaks that let attackers get around existing protocols. "We will continue to evaluate Claude Opus 4’s CBRN capabilities," Anthropic’s blog post said. "If we conclude that Claude Opus 4 has not surpassed the relevant Capability Threshold, then we may remove or adjust the ASL-3 protections."
Chandrasekaran said the implementation of safety standards is worth noting. «This includes enhanced cybersecurity measures and prompt classifiers to mitigate risks associated with powerful AI systems,» he said. The new models show the company’s focus on balancing new technology with safety, he said.
Today’s Wordle Hints, Answer and Help for May 24, #1435
Here are hints and the answer for today’s Wordle No. 1,435 for May 24.

Looking for the most recent Wordle answer? Click here for today’s Wordle hints, as well as our daily answers and hints for The New York Times Mini Crossword, Connections, Connections: Sports Edition and Strands puzzles.
Today’s Wordle puzzle put a certain song about footwear in my head. If you like to guess vowels first, this is the word for you. If you need a new starter word, check out our list of which letters show up the most in English words. If you need hints and the answer, read on.
Today’s Wordle hints
Before we show you today’s Wordle answer, we’ll give you some hints. If you don’t want a spoiler, look away now.
Wordle hint No. 1: Repeats
Today’s Wordle answer has one repeated letter.
Wordle hint No. 2: Vowels
There are two vowels in today’s Wordle answer, but one is the repeated letter, so you will see that one twice.
Wordle hint No. 3: First letter
Today’s Wordle answer begins with the letter S.
Wordle hint No. 4: Elvis
Today’s Wordle answer appears in the title of a famous Elvis Presley song.
Wordle hint No. 5: Meaning
Today’s Wordle answer can refer to leather with a napped surface.
TODAY’S WORDLE ANSWER
Today’s Wordle answer is SUEDE.
Yesterday’s Wordle answer
Yesterday’s Wordle answer, May 23, No. 1,434, was SHUCK.
Recent Wordle answers
May 19, No. 1430: PITCH
May 20, No. 1431: BORNE
May 21, No. 1432: ALARM
May 22, No. 1433: FOLIO
Today’s NYT Connections: Sports Edition Hints and Answers for May 24, #243
Hints and answers for the NYT Connections: Sports Edition puzzle, No. 243, for May 24.

Looking for the most recent regular Connections answers? Click here for today’s Connections hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle and Strands puzzles.
Connections: Sports Edition might be tough today. Read on for hints and the answers.
Connections: Sports Edition came out of beta on Super Bowl Sunday, Feb. 9. That’s a sign that the game has earned enough loyal players that The Athletic, the subscription-based sports journalism site owned by the Times, will continue to publish it. It doesn’t show up in the NYT Games app but now appears in The Athletic’s own app. Or you can continue to play it free online.
Read more: NYT Connections: Sports Edition Puzzle Comes Out of Beta
Hints for today’s Connections: Sports Edition groups
Here are four hints for the groupings in today’s Connections: Sports Edition puzzle, ranked from the easiest yellow group to the tough (and sometimes bizarre) purple group.
Yellow group hint: Think Memphis or Nashville.
Green group hint: Keeping track of the stats.
Blue group hint: Won the big game.
Purple group hint: Football is life!
Answers for today’s Connections: Sports Edition groups
Yellow group: Tennessee pro teams.
Green group: Baseball stat abbreviations.
Blue group: Last four teams to win a Super Bowl.
Purple group: Soccer "cups."
Read more: Wordle Cheat Sheet: Here Are the Most Popular Letters Used in English Words
What are today’s Connections: Sports Edition answers?
The yellow words in today’s Connections
The theme is Tennessee pro teams. The four answers are Grizzlies, Nashville SC, Predators and Titans.
The green words in today’s Connections
The theme is baseball stat abbreviations. The four answers are HR, PA, SO and WHIP.
The blue words in today’s Connections
The theme is last four teams to win a Super Bowl. The four answers are Buccaneers, Chiefs, Eagles and Rams.
The purple words in today’s Connections
The theme is soccer "cups." The four answers are Carabao, FA, MLS and World.