Technologies

Why We’re All Obsessed with the Mind-Blowing ChatGPT AI Chatbot

This artificial intelligence bot can answer questions, write essays, summarize documents and program computers. But deep down, it doesn’t know what’s true.

There’s a new AI bot in town: ChatGPT. Even if you aren’t into artificial intelligence, pay attention, because this one is a big deal.

The tool, from a power player in artificial intelligence called OpenAI, lets you type natural-language prompts. ChatGPT then offers conversational, if somewhat stilted, responses. The bot remembers the thread of your dialogue, using previous questions and answers to inform its next responses. It derives its answers from huge volumes of information on the internet.

ChatGPT is a big deal. The tool seems pretty knowledgeable in areas where there’s good training data for it to learn from. It’s not omniscient or smart enough to replace all humans yet, but it can be creative, and its answers can sound downright authoritative. A few days after its launch, more than a million people were trying out ChatGPT.

But be careful, OpenAI warns. ChatGPT has all kinds of potential pitfalls, some easy to spot and some more subtle.

“It’s a mistake to be relying on it for anything important right now,” OpenAI Chief Executive Sam Altman tweeted. “We have lots of work to do on robustness and truthfulness.” Here’s a look at why ChatGPT is important and what’s going on with it.

And it’s becoming big business. In January, Microsoft pledged to invest billions of dollars into OpenAI. A modified version of the technology behind ChatGPT is now powering Microsoft’s new Bing challenge to Google search and, eventually, it’ll power the company’s effort to build new AI co-pilot smarts into every part of your digital life.

Bing uses OpenAI technology to process search queries, compile results from different sources, summarize documents, generate travel itineraries, answer questions and generally just chat with humans. That’s a potential revolution for search engines, but it’s been plagued with problems like factual errors and unhinged conversations.

What is ChatGPT?

ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You can ask it countless questions and often will get an answer that’s useful.

For example, you can ask it encyclopedia questions like, “Explain Newton’s laws of motion.” You can tell it, “Write me a poem,” and when it does, say, “Now make it more exciting.” You can ask it to write a computer program that’ll show you all the different ways you can arrange the letters of a word.
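As a sketch of what that last request might produce (an illustrative program of our own, not ChatGPT’s actual output), here’s a short Python routine that lists every arrangement of a word’s letters:

```python
from itertools import permutations

# The kind of program the article mentions: show all the different
# ways the letters of a word can be arranged.
def letter_arrangements(word):
    """Return the distinct orderings of word's letters, sorted alphabetically."""
    # A set removes duplicates when the word has repeated letters.
    return sorted({"".join(p) for p in permutations(word)})

print(letter_arrangements("cat"))
# ['act', 'atc', 'cat', 'cta', 'tac', 'tca']
```

Note the set comprehension: for a word with repeated letters, like "aab", it collapses duplicate orderings down to the three distinct ones.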

Here’s the catch: ChatGPT doesn’t exactly know anything. It’s an AI that’s trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful, better dialog. The answers you get may sound plausible and even authoritative, but they might well be entirely wrong, as OpenAI warns.

Chatbots have been of interest for years to companies looking for ways to help customers get what they need and to AI researchers trying to tackle the Turing Test. That’s the famous “Imitation Game” that computer scientist Alan Turing proposed in 1950 as a way to gauge intelligence: Can a human conversing with a human and with a computer tell which is which?

But chatbots have a lot of baggage, as companies have tried with limited success to use them instead of humans to handle customer service work. A study of 1,700 Americans, sponsored by a company called Ujet, whose technology handles customer contacts, found that 72% of people considered chatbots a waste of time.

ChatGPT has rapidly become a widely used tool on the internet. UBS analyst Lloyd Walmsley estimated in February that ChatGPT had reached 100 million monthly users the previous month, accomplishing in two months what took TikTok about nine months and Instagram two and a half years. The New York Times, citing internal sources, said 30 million people use ChatGPT daily.

What kinds of questions can you ask?

You can ask anything, though you might not get an answer. OpenAI suggests a few categories, like explaining physics, asking for birthday party ideas and getting programming help.

I asked it to write a poem, and it did, though I don’t think any literature experts would be impressed. I then asked it to make it more exciting, and lo, ChatGPT pumped it up with words like battlefield, adrenaline, thunder and adventure.

One wacky example shows how ChatGPT is willing to just go for it in domains where people would fear to tread: a command to write “a folk song about writing a rust program and fighting with lifetime errors.”

ChatGPT’s expertise is broad, and its ability to follow a conversation is notable. When I asked it for words that rhymed with “purple,” it offered a few suggestions, then when I followed up “How about with pink?” it didn’t miss a beat. (Also, there are a lot more good rhymes for “pink.”)

When I asked, “Is it easier to get a date by being sensitive or being tough?” GPT responded, in part, “Some people may find a sensitive person more attractive and appealing, while others may be drawn to a tough and assertive individual. In general, being genuine and authentic in your interactions with others is likely to be more effective in getting a date than trying to fit a certain mold or persona.”

You don’t have to look far to find accounts of the bot blowing people’s minds. Twitter is awash with users displaying the AI’s prowess at generating art prompts and writing code. Some have even proclaimed “Google is dead,” along with the college essay. We’ll talk more about that below.

CNET writer David Lumb has put together a list of some useful ways ChatGPT can help, but more keep cropping up. One doctor says he’s used it to persuade a health insurance company to pay for a patient’s procedure.

Who built ChatGPT and how does it work?

ChatGPT is the brainchild of OpenAI, an artificial intelligence research company. Its mission is to develop a “safe and beneficial” artificial general intelligence system or to help others do so. OpenAI has 375 employees, Altman tweeted in January. “OpenAI has managed to pull together the most talent-dense researchers and engineers in the field of AI,” he also said in a January talk.

It’s made splashes before, first with GPT-3, which can generate text that can sound like a human wrote it, and then with DALL-E, which creates what’s now called “generative art” based on text prompts you type in.

GPT-3, and the GPT 3.5 update on which ChatGPT is based, are examples of AI technology called large language models. They’re trained to create text based on what they’ve seen, and they can be trained automatically — typically with huge quantities of computer power over a period of weeks. For example, the training process can find a random paragraph of text, delete a few words, ask the AI to fill in the blanks, compare the result to the original and then reward the AI system for coming as close as possible. Repeating over and over can lead to a sophisticated ability to generate text.
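To make that fill-in-the-blank idea concrete, here is a deliberately tiny sketch in Python. Real large language models use neural networks trained on vast corpora; this toy substitutes a simple word-count “model” so the delete-predict-compare loop described above is visible. The corpus and scoring are invented for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration (not OpenAI's actual pipeline): learn word statistics
# from text, then score the "model" on fill-in-the-blank predictions.
corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat ate the fish the dog ate the bone").split()

# "Training": count which word most often follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(prev_word):
    """Fill the blank after prev_word with the most common continuation."""
    candidates = follows.get(prev_word)
    return candidates.most_common(1)[0][0] if candidates else None

def fill_in_the_blank_score(words):
    """Mask each word, predict it from its predecessor, and score the matches."""
    correct = sum(predict(words[i - 1]) == words[i] for i in range(1, len(words)))
    return correct / (len(words) - 1)

print(f"fill-in-the-blank accuracy: {fill_in_the_blank_score(corpus):.2f}")
```

The reward step in real training adjusts millions of neural-network parameters rather than a score, but the loop is the same shape: hide a word, guess it, compare with the original, repeat.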

It’s not totally automated. Humans evaluate ChatGPT’s initial results in a process called fine-tuning. Human reviewers apply guidelines that OpenAI’s models then generalize from. In addition, OpenAI used a Kenyan firm that paid people up to $3.74 per hour to review thousands of snippets of text for problems like violence, sexual abuse and hate speech, Time reported, and that data was built into a new AI component designed to screen such materials from ChatGPT answers and OpenAI training data.

ChatGPT doesn’t actually know anything the way you do. It’s just able to take a prompt, find relevant information in its oceans of training data, and convert that into plausible-sounding paragraphs of text. “We are a long way away from the self-awareness we want,” said computer scientist and internet pioneer Vint Cerf of the large language model technology ChatGPT and its competitors use.

Is ChatGPT free?

Yes, for the moment at least, but in January OpenAI added a paid version that responds faster and keeps working even during peak usage times when others get messages saying, “ChatGPT is at capacity right now.”

You can sign up on a waiting list if you’re interested. OpenAI’s Altman warned that ChatGPT’s “compute costs are eye-watering,” estimating them at a few cents per response. OpenAI charges for DALL-E art once you exceed a basic free level of usage.

But OpenAI seems to have found some customers, likely for its GPT tools. It’s told potential investors that it expects $200 million in revenue in 2023 and $1 billion in 2024, according to Reuters.

What are the limits of ChatGPT?

As OpenAI emphasizes, ChatGPT can give you wrong answers and can give “a misleading impression of greatness,” Altman said. Sometimes, helpfully, it’ll specifically warn you of its own shortcomings. For example, when I asked it who wrote the phrase “the squirming facts exceed the squamous mind,” ChatGPT replied, “I’m sorry, but I am not able to browse the internet or access any external information beyond what I was trained on.” (The phrase is from Wallace Stevens’ 1942 poem Connoisseur of Chaos.)

ChatGPT was willing to take a stab at the meaning of that expression once I typed it in directly, though: “a situation in which the facts or information at hand are difficult to process or understand.” It sandwiched that interpretation between cautions that it’s hard to judge without more context and that it’s just one possible interpretation.

ChatGPT’s answers can look authoritative but be wrong.

“If you ask it a very well structured question, with the intent that it gives you the right answer, you’ll probably get the right answer,” said Mike Krause, data science director at a different AI company, Beyond Limits. “It’ll be well articulated and sound like it came from some professor at Harvard. But if you throw it a curveball, you’ll get nonsense.”

The journal Science banned ChatGPT text in January. “An AI program cannot be an author. A violation of these policies will constitute scientific misconduct no different from altered images or plagiarism of existing works,” Editor in Chief H. Holden Thorp said.

The software developer site StackOverflow banned ChatGPT answers to programming questions. Administrators cautioned, “because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.”

You can see for yourself how artful a BS artist ChatGPT can be by asking the same question multiple times. I asked twice whether Moore’s Law, which tracks the computer chip industry’s progress in increasing the number of data-processing transistors, is running out of steam, and I got two different answers. One pointed optimistically to continued progress, while the other pointed more grimly to the slowdown and the belief “that Moore’s Law may be reaching its limits.”

Both ideas are common in the computer industry itself, so this ambiguous stance perhaps reflects what human experts believe.

With other questions that don’t have clear answers, ChatGPT often won’t be pinned down.

The fact that it offers an answer at all, though, is a notable development in computing. Computers are famously literal, refusing to work unless you follow exact syntax and interface requirements. Large language models are revealing a more human-friendly style of interaction, not to mention an ability to generate answers that are somewhere between copying and creativity.

Will ChatGPT help students cheat better?

Yes, but as with many other technology developments, it’s not a simple black and white situation. Decades ago, students could copy encyclopedia entries and use calculators, and more recently, they’ve been able to use search engines and Wikipedia. ChatGPT offers new abilities for everything from helping with research to doing your homework for you outright. Many ChatGPT answers already sound like student essays, though often with a tone that’s stuffier and more pedantic than a writer might prefer.

Google programmer Kenneth Goodman tried ChatGPT on a number of exams. It scored 70% on the United States Medical Licensing Examination, 70% on a bar exam for lawyers, nine out of 15 correct on another legal test, the Multistate Professional Responsibility Examination, 78% on the multiple-choice section of New York state’s high school chemistry exam, and ranked in the 40th percentile on the Law School Admission Test.

High school teacher Daniel Herman concluded ChatGPT already writes better than most students today. He’s torn between admiring ChatGPT’s potential usefulness and fearing its harm to human learning: “Is this moment more like the invention of the calculator, saving me from the tedium of long division, or more like the invention of the player piano, robbing us of what can be communicated only through human emotion?”

Dustin York, an associate professor of communication at Maryville University, hopes educators will learn to use ChatGPT as a tool and realize it can help students think critically.

“Educators thought that Google, Wikipedia, and the internet itself would ruin education, but they did not,” York said. “What worries me most are educators who may actively try to discourage the acknowledgment of AI like ChatGPT. It’s a tool, not a villain.”

Can teachers spot ChatGPT use?

Not with 100% certainty, but there’s technology to spot AI help. The companies that sell tools to high schools and universities to detect plagiarism are now expanding to detecting AI, too.

One, Coalition Technologies, offers an AI content detector on its website. Another, Copyleaks, released a free Chrome extension designed to spot ChatGPT-generated text with a technology that’s 99% accurate, CEO Alon Yamin said. But it’s a “never-ending cat and mouse game” to try to catch new techniques to thwart the detectors, he said.

Copyleaks performed an early test of student assignments uploaded to its system by schools. “Around 10% of student assignments submitted to our system include at least some level of AI-created content,” Yamin said.

OpenAI launched its own detector for AI-written text in February. But one plagiarism-detection company, CrossPlag, said it spotted only two of 10 AI-generated passages in its test. “While detection tools will be essential, they are not infallible,” the company said.

Researchers at Pennsylvania State University studied the plagiarism issue using OpenAI’s earlier GPT-2 language model. It’s not as sophisticated as GPT-3.5, but its training data is available for closer scrutiny. The researchers found GPT-2 plagiarized information not just word-for-word at times, but also paraphrased passages and lifted ideas without citing its sources. “The language models committed all three types of plagiarism, and that the larger the dataset and parameters used to train the model, the more often plagiarism occurred,” the university said.

Can ChatGPT write software?

Yes, but with caveats. ChatGPT can retrace steps humans have taken, and it can generate actual programming code. “This is blowing my mind,” said one programmer in February, showing on Imgur the sequence of prompts he used to write software for a car repair center. “This would’ve been an hour of work at least, and it took me less than 10 minutes.”

You just have to make sure it’s not bungling programming concepts or using software that doesn’t work. The StackOverflow ban on ChatGPT-generated software is there for a reason.

But there’s enough software on the web that ChatGPT really can work. One developer, Cobalt Robotics Chief Technology Officer Erik Schluntz, tweeted that ChatGPT provides useful enough advice that, over three days, he hadn’t opened StackOverflow once to look for advice.

Another, Gabe Ragland of AI art site Lexica, used ChatGPT to write website code built with the React tool.

ChatGPT can parse regular expressions (regex), a powerful but complex system for spotting particular patterns, such as dates in a bunch of text or the name of a server in a website address. “It’s like having a programming tutor on hand 24/7,” tweeted programmer James Blackwell about ChatGPT’s ability to explain regex.
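Date matching is a typical example of the kind of regex ChatGPT can explain. Here’s a minimal Python sketch of our own (the patterns, the sample text and the server name are illustrative):

```python
import re

# Find dates in a block of text, one of the regex tasks mentioned above.
text = "Logs rotated on 2023-01-15, next rotation 15/01/2024 at server01."

iso_date = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")    # YYYY-MM-DD
slash_date = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")  # DD/MM/YYYY

print(iso_date.findall(text))    # ['2023-01-15']
print(slash_date.findall(text))  # ['15/01/2024']
```

The `\b` word boundaries keep the patterns from matching digits embedded inside longer strings, which is exactly the sort of subtlety a regex tutor ends up explaining.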

Here’s one impressive example of its technical chops: ChatGPT can emulate a Linux computer, delivering correct responses to command-line input.

What’s off limits?

ChatGPT is designed to weed out “inappropriate” requests, a behavior in line with OpenAI’s mission “to ensure that artificial general intelligence benefits all of humanity.”

If you ask ChatGPT itself what’s off limits, it’ll tell you: any questions “that are discriminatory, offensive, or inappropriate. This includes questions that are racist, sexist, homophobic, transphobic, or otherwise discriminatory or hateful.” Asking it to engage in illegal activities is also a no-no.

Is this better than Google search?

Asking a computer a question and getting an answer is useful, and often ChatGPT delivers the goods.

Google often supplies you with its suggested answers to questions and with links to websites that it thinks will be relevant. Often ChatGPT’s answers far surpass what Google will suggest, so it’s easy to imagine ChatGPT as a rival.

But you should think twice before trusting ChatGPT. As when using Google and other sources of information like Wikipedia, it’s best practice to verify information from original sources before relying on it.

Vetting the veracity of ChatGPT answers takes some work because it just gives you raw text with no links or citations. But it can be useful and in some cases thought-provoking. You may not see something directly like ChatGPT in Google search results, but Google has built large language models of its own and uses AI extensively already in search.

That said, Google is keen to tout its deep AI expertise. ChatGPT triggered a “code red” emergency within Google, according to The New York Times, and drew Google co-founders Larry Page and Sergey Brin back into active work. Microsoft, meanwhile, has built ChatGPT technology into its rival search engine, Bing. Clearly ChatGPT and other tools like it have a role to play when we’re looking for information.

So ChatGPT, while imperfect, is doubtless showing the way toward our tech future.

Editors’ note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.


Alphabet’s Q1 Earnings Expected to Reflect Sustained Expansion, Driven by Cloud Division

Alphabet’s Q1 earnings are expected to show strong growth driven by cloud and AI advancements, with revenue projected to rise 18.7% year-over-year. The company’s stock has surged 118% over the past year, supported by Gemini AI integration and expanding cloud infrastructure investments.

Alphabet is scheduled to release its first-quarter financial results after market close on Wednesday.

Below are the key metrics Wall Street anticipates, based on analyst estimates from LSEG:

— Earnings per share: $2.63

— Revenue: $107.2 billion

Investors are also tracking several additional figures in the upcoming report:

— Google Cloud: Estimated at $18.05 billion, per StreetAccount

— YouTube advertising: Estimated at $9.99 billion, per StreetAccount

— Traffic acquisition costs: Estimated at $15.3 billion, per StreetAccount

Alphabet’s shares have been the leading performer among major tech stocks over the past year, climbing 118% as of Tuesday’s close. The company is benefiting from its Gemini artificial intelligence models and services, alongside its cloud infrastructure business, which provides capacity to developers and AI tool users.

Analysts forecast an 18.7% increase in revenue from $90.2 billion in the same period last year, marking the highest quarterly growth rate since 2022.

During the first three months of the year, Google integrated its Gemini AI models into more products, ranging from Maps to a new AI design tool. Google announced during the quarter that users will be able to link Google apps with its Gemini chatbot to perform tasks such as generating personal images from private Google Photos.

Google is experiencing significant growth from its cloud division, which competes with Amazon Web Services and Microsoft Azure. Revenue is projected to surge 47% from $12.26 billion in the same quarter a year ago.

Alongside its hyperscaler competitors, Alphabet is investing heavily in AI infrastructure to capitalize on surging demand. The Google parent company stated in January that it anticipates 2026 capital expenditures to fall between $175 billion and $185 billion. The upper end of this forecast would be more than double its 2025 capex spending.

Wednesday’s report will be the first update from the company since the U.S.-Iran conflict began in February, causing oil prices to spike. Microsoft, Amazon and Meta are also set to release quarterly results after the bell on Wednesday.

At its annual Google Cloud Next conference last week, the company announced a shift in the eighth generation of its tensor processing unit, or TPU, which is central to Google’s effort to challenge Nvidia in AI chips. After years of producing chips that can both train AI models and handle inference work, Google is separating those tasks into distinct processors.

Alphabet’s investments may also be a focus for investors. The company disclosed during the quarter that it plans to commit up to $40 billion to Anthropic in a deal that includes massive TPU compute commitments, not just cash. Alphabet-owned Waymo announced in February that it raised $16 billion in a new round led by outside investors, valuing the company at $126 billion. Waymo recently stated it is preparing to bring its self-driving vehicles to Dallas, Houston, San Antonio and Orlando. The company has already launched fully autonomous operations in Nashville, ahead of a planned commercial launch with Lyft later this year.

The company also reduced some equity stakes. Google sold partial holdings in fiber optic broadband business GFiber, becoming a minority owner of a new venture. Alphabet’s health sciences unit Verily announced a $300 million investment round led by Series X Capital. As part of that deal, Alphabet gave up its controlling stake and is now a minority investor.


Amazon to Release First-Quarter Financials Following Market Close

Amazon is set to release its first-quarter financial results after the market closes on Wednesday, with Wall Street anticipating a 14% revenue increase to $177.3 billion.

Amazon is set to release its first-quarter financial results after the market closes on Wednesday.

Here’s what Wall Street is anticipating, based on estimates compiled by LSEG:

— Earnings per share: $1.64

— Revenue: $177.3 billion

Wall Street is also tracking other key revenue figures:

— Amazon Web Services: $36.92 billion expected, according to StreetAccount

— Advertising: $16.87 billion expected, according to StreetAccount

Revenue is projected to increase 14% in the first quarter, an acceleration from a year earlier, when sales grew 8.6% to $155.7 billion, and roughly in line with last quarter’s 13.6% growth.

Investors will be closely watching Amazon’s cloud business, where revenue is expected to jump roughly 26% from a year ago. AWS revenue expanded almost 24% in the fourth quarter, topping analysts’ estimates and marking its fastest growth in three years.

Amazon and other big tech companies have been trying to justify their hefty artificial intelligence spending, which could approach $700 billion in 2026. Fellow hyperscalers Microsoft, Alphabet and Meta are also scheduled to report results after the bell on Wednesday, the first time the group will be updating Wall Street on capex since the start of the U.S.-Iran war in February.

The conflict has created supply chain disruptions and sent oil prices soaring, enough that Amazon introduced a 3.5% fuel surcharge for some of its third-party sellers.

Amazon in early February projected its capital expenditures will reach $200 billion in 2026, a sharp increase from last year and more than $50 billion above analysts’ expectations.

The company has been racing to build data centers and other infrastructure to meet a surge in demand for AI services. Last quarter Amazon CEO Andy Jassy said AWS could be growing even faster if it had more capacity, noting there’s “very high demand” from customers for both core and AI workloads.

Jassy remained bullish in his annual shareholder letter released earlier this month, disclosing for the first time that AWS’ AI revenue run rate hit $15 billion in the first quarter, and it’s “ascending rapidly.”

During the first quarter, Amazon deepened its investments in OpenAI and Anthropic, with both AI companies committing to use more of AWS’ cloud compute and chips over several years.

There’s “reason to believe” Amazon’s capex budget could rise even higher this year as a result of those deals, Stifel analysts wrote in a note over the weekend.

“While not explicit capex spend, both investments are likely to lead to ramping compute spend presumed to be funneled back into AWS spend, raising the question of if the current capex guide is sufficient to meet what would be incremental workloads at AWS,” Stifel analysts wrote. The firm has a buy rating on Amazon’s shares.

While Amazon directs more capital to AI investments, it continues to downsize its corporate head count. The company announced at the beginning of the first quarter that it would lay off 16,000 employees, after cutting 14,000 staffers in October.

Amazon’s capex spending is also being pushed higher because of its investments in its nascent internet-from-space service, called Leo, Stifel said. The company is aiming to begin commercial service in mid-2026.

Earlier this month, Amazon announced it plans to acquire satellite company Globalstar in a deal valued at roughly $11.57 billion, its second-largest acquisition ever, behind its 2017 purchase of Whole Foods for $13.7 billion.

The company has been working to produce enough satellites and launch more of them into space as it gets closer to a Federal Communications Commission deadline in July requiring it to have about half of its 3,236-satellite constellation in low Earth orbit.

Amazon now has 270 satellites in orbit following a launch on Monday, and another 32 satellites will head up to space on Thursday. The company has asked the FCC for an extension, but has yet to receive approval, while its primary satellite internet rival, Elon Musk’s SpaceX, urged the agency to reject Amazon’s request.

WATCH: Amazon needs to spend more to keep AWS as premier AI play


Microsoft’s Earnings Report Lands After Stock’s Worst Quarterly Performance Since 2008

Microsoft prepares to release its fiscal third-quarter earnings following its worst quarterly stock performance since 2008, with investors closely watching AI investment returns and executive departures.

Microsoft is scheduled to release its fiscal third-quarter financial results following the close of regular trading on Wednesday.

Here is a summary of the key metrics analysts are tracking, according to LSEG:

— Adjusted earnings per share: $4.06

— Total revenue: $81.39 billion

Microsoft’s shares have experienced their poorest quarterly performance since 2008, largely driven by widespread market apprehension that artificial intelligence could disrupt the software industry, alongside specific concerns about whether the company’s substantial AI investments will yield the anticipated returns.

Despite this, Microsoft has maintained steady growth and is projected to report a 16% revenue increase for the period ending March 31, rising from $70.1 billion in the same quarter last year.

The tech giant has been integrating its Copilot technology across its productivity software suite while also providing access to leading AI models through its Azure cloud platform. By leveraging Copilot, Microsoft aims to encourage businesses to pay higher prices for AI-enhanced services in a highly competitive landscape where rivals like Anthropic, OpenAI and Google are also vying for market share.

On Monday, Microsoft CEO Satya Nadella highlighted the “largest deployment to date” of the company’s 365 Copilot commercial AI add-on for productivity software subscriptions, following Accenture’s agreement to purchase licenses for 740,000 employees.

“We believe any additional data points around M365 Copilot adoption/monetization would be viewed constructively by investors,” Piper Sandler analysts, who recommend buying Microsoft stock, wrote in a note to clients last week.

Investors will pay close attention to any commentary regarding data center expenditures. Alongside its hyperscaler peers, Microsoft is heavily investing in AI chips and infrastructure to meet the surging demand for compute power, enabling companies to develop and use AI models and services. Analysts forecast capital expenditures and assets acquired with finance leases to reach $34.9 billion, representing a 63% increase from the previous year.

Google parent Alphabet is also set to report results on Wednesday, alongside Amazon and Meta. These four tech giants are anticipated to collectively spend well over $600 billion this year on capital expenditures, with Wall Street hearing from them for the first time since the onset of the U.S.-Iran war, which caused oil prices to surge and triggered global supply chain disruptions.

Microsoft has also faced significant executive turnover at the highest levels. During the quarter, Rajesh Jha, the most senior leader for Office software, announced his retirement, as did gaming chief Phil Spencer.

Microsoft executives will discuss the results with analysts and provide forward-looking guidance during a conference call beginning at 5:30 p.m. ET.

WATCH: OpenAI amends deal with Microsoft: Here’s what you need to know


Copyright © Verum World Media