Technologies
Honor’s Magic V5 Boasts On-Device Live AI Call Translation for Guaranteed Privacy
In an exclusive interview with CNET, Honor’s President of Product Fei Fang reveals how the V5’s AI model will allow for more speed, accuracy and privacy.

«¡Hola! ¿Hablas inglés?» ("Hello! Do you speak English?") I asked the woman who answered the phone in the Barcelona restaurant.
I was calling in a futile attempt to make a reservation for the CNET team dinner during Mobile World Congress this year. Unfortunately, I don’t know Spanish (I learned French and German at school). And as it turned out, she didn’t speak English either.
«No!» she said, and brusquely hung up.
What I needed in that moment was the kind of AI call translation feature that’s becoming increasingly prevalent on phones — including those made by Samsung and Google, and, starting next week, Honor.
When Honor unveils its Magic V5 foldable at a launch event on Aug. 28 in London, it will come with what the company is calling «the industry’s first on-device large speech model,» which will allow live AI call translation to take place on device, with no cloud processing.
Currently the phone supports six languages — English, Chinese, French, German, Italian and Spanish. For the aforementioned reasons, I can’t test all of these, but I’ve already had a play around with the feature and can confirm it did a very effective job of translating my garbled messages into French. I only wish I’d had it available to me in Spain when I needed it.
The model Honor has deployed was designed by the company in collaboration with Shanghai Jiao Tong University and is based on the open-source Whisper model, Fei Fang, Honor’s president of product, said in an interview. It’s been optimized for streaming speech recognition, automatic language detection and translation inference acceleration (that’s speed and efficiency, to you and me).
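Honor hasn’t published its model, but because it’s built on the open-source Whisper model, a minimal sketch of the underlying detect-and-translate step might look like the following, using the openai-whisper Python package. The checkpoint size and audio file name are illustrative assumptions, not details Honor has confirmed, and Honor’s on-device model is a heavily optimized derivative rather than this off-the-shelf version.

import whisper  # pip install openai-whisper

# Load a small open-source Whisper checkpoint for illustration only.
model = whisper.load_model("base")

# Whisper detects the spoken language automatically; task="translate"
# asks it to output an English translation of the speech.
# "caller_audio.wav" is a hypothetical file name.
result = model.transcribe("caller_audio.wav", task="translate")

print("Detected language:", result["language"])
print("English translation:", result["text"])

Honor’s version layers streaming recognition and additional language pairs on top of this kind of pipeline, according to Fang, so the sketch above shows only the basic capability, not the shipping feature.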
According to Fang, Honor’s user experience studies have shown that as long as translation occurs within 1.5 seconds, it doesn’t «induce waiting anxiety» in anyone attempting to use AI call translation. As such, Honor has kept latency within that threshold, so you won’t get anxious waiting for the translation to kick in.
«We also work together with industry language experts to consistently and comprehensively evaluate the accuracy of our output,» she added. «The assessment is primarily based on five metrics: accuracy, logical coherence, readability, grammatical correctness and conciseness.»
In addition to Honor’s AI model, live translation is powered by Qualcomm’s Snapdragon 8 Elite chip. The 8 Elite’s NPU allows multimodal generative AI applications to run directly on the device. Honor’s algorithms work with the NPU to keep power consumption as low as possible while maintaining the required translation accuracy, said Christopher Patrick, SVP of mobile handsets at Qualcomm.
There are a number of benefits to having the AI model embedded on the Magic V5, but perhaps the most compelling is the privacy it guarantees. Everything is processed locally, so your calls remain completely confidential. And because the model lives on the device, there’s no need to download separate voice packages, which keeps storage requirements down.
Another benefit of running the model on the phone itself is «offline usability,» said Patrick. «All conversation information is stored directly on-device and users can access it anytime, anywhere, without network restrictions.»
The work Honor has done on AI call translation is set to be recognized at the upcoming Interspeech conference on speech science and tech. But Honor is already thinking about how this technology can enable other new features for the people who buy its phones.
«Beyond the essential user scenario of call translation, Honor’s on-device large speech model will also be deployed in scenarios such as face-to-face translation [and] AI subtitles,» said Fang. The process of developing the speech model has allowed Honor’s AI team to gain extensive experience of model optimization, which it will use to develop other AI applications, she added.
«Looking ahead, we will continue to expand capabilities in areas such as emotion recognition and health monitoring, further empowering voice interactions with your on-device AI assistant,» she said.
Technologies
How Much Energy Do Your AI Prompts Consume? Google Just Shared Its Gemini Numbers
Current measurements of AI’s impact aren’t telling the full story. Google has offered a new method it hopes to standardize.

The use of AI tools is exploding worldwide, but the companies that make these tools often don’t disclose their environmental impact in detail.
Google has just released a technical paper detailing measurements for energy, emissions and water use of its Gemini AI prompts. The impact of a single prompt is, it says, minuscule. According to its methodology for measuring AI’s impact, a single prompt’s energy consumption is about the equivalent of watching TV for less than 9 seconds.
That doesn’t sound like much for a single serving, except when you consider the variety of chatbots in use, with billions of prompts easily sent every day.
On the more positive side, the technology behind these prompts has become more efficient. Over the past 12 months, the energy used by a single Gemini text prompt has been reduced by 33x, and the total carbon footprint has been reduced by 44x, Google says. According to the tech giant, that’s substantial progress, and it’s momentum that will need to be maintained going forward.
Google did not immediately respond to CNET’s request for further comment.
Google’s calculation method considers much more
The typical calculation for the energy cost of an AI prompt ends at the active machine it’s been run on, which shows a much smaller per-prompt footprint. But Google’s method for measuring the impact of a prompt purportedly spans a much wider range of factors that paint a clearer picture, including full-system dynamic power, idle machines, data center overhead, water consumption and more.
For comparison, counting only active TPU and GPU consumption, a single Gemini prompt is estimated to use 0.10 watt-hours of energy, consume 0.12 milliliters of water and emit 0.02 grams of carbon dioxide equivalent. Those are promising numbers, but Google’s wider methodology tells a different story. With more factors considered, a Gemini text prompt uses 0.24 Wh of energy, consumes 0.26 mL of water and emits 0.03 gCO2e, around double across the board.
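To put the gap in perspective, here is a quick back-of-the-envelope calculation in Python using the figures above. The 1 billion prompts-per-day volume is a hypothetical round number chosen purely for illustration, not a figure Google has published.

# Google's published per-prompt estimates for a Gemini text prompt.
narrow = {"energy_wh": 0.10, "water_ml": 0.12, "co2_g": 0.02}  # active TPU/GPU only
full = {"energy_wh": 0.24, "water_ml": 0.26, "co2_g": 0.03}    # full methodology

# Hypothetical daily volume, for scale only.
prompts_per_day = 1_000_000_000

for metric in full:
    ratio = full[metric] / narrow[metric]
    daily = full[metric] * prompts_per_day
    print(f"{metric}: {ratio:.1f}x higher under the full methodology, "
          f"roughly {daily:,.0f} per day at 1 billion prompts")

Run at that assumed volume, the full methodology works out to hundreds of megawatt-hours of energy and hundreds of thousands of liters of water per day, which is why the measurement approach matters even when each individual prompt looks tiny.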
Will new efficiencies keep up with AI use?
Through a multilayered series of efficiencies, Google is continually working on ways to make AI’s impact less burdensome, from more efficient model architectures and data centers to custom hardware.
With smarter models, use cases and tools emerging daily, those efficiencies will be critical as we immerse ourselves deeper in this AI reality.
For more, you should stop using ChatGPT for these things.
Technologies
Vivo Launches Mixed-Reality Headset, an Apple Vision Pro Competitor
Vivo Vision has many of the same design elements as Apple’s VR/AR headset, but is only available in China, for now.

Look-alikes of Apple products often pop up in China, and mixed-reality headsets have now joined the party. Chinese smartphone maker Vivo has introduced the Vivo Vision, a headset mixing both AR and VR, and it bears many similarities to the Apple Vision Pro.
The company announced the Vivo Vision Discovery Edition at its 30th anniversary celebration in Dongguan, China, saying it’s «the first MR product developed by a smartphone manufacturer in China, positioning Vivo as the first Chinese company to operate within both the smartphone and MR product sectors.»
The Vivo Vision, currently only an in-store experience in mainland China, has a curved glass visor, an aluminum external battery pack and downward-pointing cameras like the Vision Pro. But it also has some differences: a 180-degree panoramic field of view and a much lighter weight of 398 grams (versus the Vision Pro’s 650 grams).
CNET asked Vivo if it plans to sell the Vivo Vision to non-China markets, but the company did not immediately respond.
The Vivo Vision runs on OriginOS Vision, Vivo’s mixed-reality operating system. It supports 3D video recording, spatial photos and audio, and a 120-foot cinematic screen experience.
The starting cost in China will be $1,395 (converted to US dollars), compared to the Vision Pro at $3,500.
Even if the Vivo Vision came to the consumer market in the US, it might not matter much to Apple’s bottom line. The Vision Pro hasn’t been a big seller, likely because of the price tag. Still, the headset market is expected to grow quickly over the next several years, and Apple is already working on new versions of the Vision Pro, including one that’s more affordable than the original.
Jon Rettinger, a tech influencer with more than 1.65 million YouTube subscribers, says he’s not overly enthusiastic about VR/AR just yet. «It’s heavy, invasive and without a must-have use case,» Rettinger told CNET. «If the technology can go from goggles to glasses, I think we’ll see a significant rise. But if the current form factors stay, it will always be niche.»
The YouTuber loves that the technology exists, but still doesn’t use it. «The honeymoon wore off. Aside from some gaming and content viewing, it’s still cumbersome, and I tend to go back to my laptop,» he said.
Technologies
Today’s NYT Strands Hints, Answers and Help for Aug. 22 #537
Here are hints and answers for the NYT Strands puzzle for Aug. 22, No. 537.

Looking for the most recent Strands answer? Click here for our daily Strands hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections and Connections: Sports Edition puzzles.
Today’s NYT Strands puzzle has a fun theme, especially if you have ever read Agatha Christie books or played a few rounds of the board game Clue. If you need hints and answers, read on.
I go into depth about the rules for Strands in this story.
If you’re looking for today’s Wordle, Connections and Mini Crossword answers, you can visit CNET’s NYT puzzle hints page.
Read more: NYT Connections Turns 1: These Are the 5 Toughest Puzzles So Far
Hint for today’s Strands puzzle
Today’s Strands theme is: Whodunit?
If that doesn’t help you, here’s a clue: Solve the crime
Clue words to unlock in-game hints
Your goal is to find hidden words that fit the puzzle’s theme. If you’re stuck, find any words you can. Every time you find three words of four letters or more, Strands will reveal one of the theme words. These are the words I used to get those hints, but any words of four or more letters that you find will work:
- REST, POEM, SOUR, SOURS, DIAL, HOLE, VOLE, ROLE, ROLES, VOLES, HOLES, DEEM, GAIT, SAME
Answers for today’s Strands puzzle
These are the answers that tie into the theme. The goal of the puzzle is to find them all, including the spangram, a theme word that reaches from one side of the puzzle to the other. When you have all of them (I originally thought there were always eight but learned that the number can vary), every letter on the board will be used. Here are the nonspangram answers:
- HEIR, LOVER, RIVAL, SPOUSE, STRANGER, DETECTIVE
Today’s Strands spangram
Today’s Strands spangram is ITSAMYSTERY, with all the answers being characters common to mystery novels. To find it, look for the I that’s the farthest left letter on the top row, and wind down.