
Google Unveils Its ChatGPT Rival

Meet Bard.

Google on Monday released Bard, its own AI chatbot similar to ChatGPT.

“Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence, and creativity of our large language models,” Google Chief Executive Sundar Pichai tweeted Monday. “It draws on information from the web to provide fresh, high-quality responses.”

Powering Bard is Google’s Language Model for Dialogue Applications (LaMDA). The company says the new AI draws on information from the web to craft creative responses to prompts and provide detailed answers to questions.

Bard will be made available to select testers on Monday and will roll out to the public in the coming weeks.

Google didn’t immediately respond to a request for comment.

Bard uses a lightweight version of LaMDA, according to a blog post by Pichai. This model requires less computing power, allowing Google to scale it to more people and gather additional feedback. Pichai stressed that feedback will be critical in meeting Google’s “high bar for quality, safety and groundedness in real-world information.”

Don’t expect Google rival Microsoft to stand still. CEO Satya Nadella is announcing «progress on a few exciting projects» at a press event at the company’s headquarters on Tuesday, according to an invitation. Microsoft plans to integrate ChatGPT into its technology, and this event could be where details are announced.

ChatGPT uses artificial intelligence technology called a large language model, trained on vast swaths of data from the internet. That type of model relies on an AI mechanism called a transformer, which Google pioneered. ChatGPT’s success at everything from writing software to passing exams to offering advice, in the style of the King James Bible, on removing a sandwich from a VCR has propelled it into the tech spotlight, even though its results can be misleading or wrong.

AI technology already is all around us, helping in everything from flagging credit card fraud to translating our speech into text messages. The ChatGPT technology has elevated expectations, though, so it’s clear the technology will become more important in our lives one way or another as we rely on digital assistants and online tools.

Google AI subsidiary DeepMind also is involved. Chief Executive Demis Hassabis told Time that his company is considering a 2023 private beta test of an AI chatbot called Sparrow.

Google detailed transformers in 2017, and the design has since become a fixture of some of the biggest AI systems out there. Nvidia’s new H100 processor, the top dog in AI acceleration, at least in terms of public speed tests, now includes circuitry specifically built to accelerate transformers.

The resulting large language model (LLM) revolution in AI powers language-focused systems like ChatGPT, Google’s LaMDA and newer PaLM, and others from companies including AI21 Labs, Adept AI Labs and Cohere. But LLMs are used for other tasks, too, including stacking boxes and processing genetic data to hunt for new drugs. Notably, they’re good at generating text, which is why they can be used to answer questions.

Google, which endured bad publicity over the departure of AI researcher Timnit Gebru in 2020, has a program focusing on responsible AI and machine learning, or ML, technology. “Building ML models and products in a responsible and ethical manner is both our core focus and core commitment,” Google Research Vice President Marian Croak said in a January post.

Google is keen to tout its deep AI expertise. ChatGPT triggered a “code red” emergency within Google, according to The New York Times, and drew Google co-founders Larry Page and Sergey Brin back into active work.

Editors’ note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.


Today’s NYT Mini Crossword Answers for Tuesday, Oct. 14

Here are the answers for The New York Times Mini Crossword for Oct. 14.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Today’s Mini Crossword has an odd vertical shape, with an extra Across clue and only four Down clues. The clues are not terribly difficult, but one or two could be tricky. Read on if you need the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Smokes, informally
Answer: CIGS

5A clue: “Don’t have ___, man!” (Bart Simpson catchphrase)
Answer: ACOW

6A clue: What the vehicle in “lane one” of this crossword is winning?
Answer: RACE

7A clue: Pitt of Hollywood
Answer: BRAD

8A clue: «Yeah, whatever»
Answer: SURE

9A clue: Rd. crossers
Answer: STS

Mini down clues and answers

1D clue: Things to «load» before a marathon
Answer: CARBS

2D clue: Mythical figure who inspired the idiom «fly too close to the sun»
Answer: ICARUS

3D clue: Zoomer around a small track
Answer: GOCART

4D clue: Neighbors of Norwegians
Answer: SWEDES


Watch SpaceX’s Starship Flight Test 11


New California Law Wants Companion Chatbots to Tell Kids to Take Breaks

Gov. Gavin Newsom signed the new requirements on AI companions into law on Monday.

AI companion chatbots will have to remind users in California that they’re not human under a new law signed Monday by Gov. Gavin Newsom.

The law, SB 243, also requires companion chatbot companies to maintain protocols for identifying and addressing cases in which users express suicidal ideation or self-harm. For users under 18, chatbots will have to provide a notification at least every three hours that reminds users to take a break and that the bot is not human.

It’s one of several bills Newsom has signed in recent weeks dealing with social media, artificial intelligence and other consumer technology issues. Another bill signed Monday, AB 56, requires warning labels on social media platforms, similar to those required for tobacco products. Last week, Newsom signed measures requiring internet browsers to make it easy for people to tell websites they don’t want them to sell their data and banning loud advertisements on streaming platforms. 

AI companion chatbots have drawn particular scrutiny from lawmakers and regulators in recent months. The Federal Trade Commission launched an investigation into several companies in response to complaints by consumer groups and parents that the bots were harming children’s mental health. OpenAI introduced new parental controls and other guardrails in its popular ChatGPT platform after the company was sued by parents who allege ChatGPT contributed to their teen son’s suicide. 

“We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability,” Newsom said in a statement.




One AI companion developer, Replika, told CNET that it already has protocols to detect self-harm as required by the new law, and that it is working with regulators and others to comply with requirements and protect consumers. 

“As one of the pioneers in AI companionship, we recognize our profound responsibility to lead on safety,” Replika’s Minju Song said in an emailed statement. Song said Replika uses content-filtering systems, community guidelines and safety systems that refer users to crisis resources when needed.

Read more: Using AI as a Therapist? Why Professionals Say You Should Think Again

A Character.ai spokesperson said the company “welcomes working with regulators and lawmakers as they develop regulations and legislation for this emerging space, and will comply with laws, including SB 243.” OpenAI spokesperson Jamie Radice called the bill a “meaningful move forward” for AI safety. “By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country,” Radice said in an email.

One bill Newsom has yet to sign, AB 1064, would go further by prohibiting developers from making companion chatbots available to children unless the AI companion is «not foreseeably capable of» encouraging harmful activities or engaging in sexually explicit interactions, among other things. 
