Technologies
Nvidia GeForce RTX 4070 Ti, 40-Series Mobile GPUs and Everything Else It Announced at CES
Your next laptop may have these components. And you’ll probably want them if you create stuff or play games.

Nvidia delivered the first of the notable CES livestreamed announcements Tuesday — a day ahead of the primary marathon day of launches — with expected news about its GeForce 40-series mobile GPUs and the long-rumored RTX 4070 Ti desktop GPU. One notable surprise was the new GeForce Now Ultimate tier, which AT&T has already staked out for a six-months-free promotion. The company also gave some updates on its commercial tools for robotics, collaborative design and cars.
RTX 40-series mobile graphics
Nvidia launched a complete line of mobile GPUs, from the RTX 4050 (for barely-there cheap discrete graphics) to the RTX 4060 and 4070 (for mainstream or thin-and-light gaming and graphics laptops) up through the top-end RTX 4080 and 4090.
Thanks to the Ada Lovelace architecture, the new mobile chips are a lot more power efficient, which means a new generation of Nvidia’s Max-Q power-management technology: It incorporates ultra-low-voltage DLSS 3, “tri-speed memory control” to drop to lower-power memory states on the fly and more. My experience with the 4080 and 4090 showed quite an improvement in DLSS over the last gen. And finally gaining traction is Advanced Optimus, Nvidia’s design for dynamically switching which GPU drives the built-in display, which lets you use G-Sync on that display and fall back to the integrated graphics for lower power use without a system reboot. (Every time the phrase “MUX switch” is used, my soul dies a little more.)
Nvidia highlighted nongaming 14-inch laptops, such as the Lenovo Yoga Pro 14 and Asus ZenBook Pro 14 with RTX 4070, 4060 or 4050 mobile chips, shipping in late February starting at $999. Gaming laptops like the Alienware x16 with an RTX 4080 or RTX 4090 ship in early February, starting at $2,000.
Desktop GeForce RTX 4070 Ti
Nvidia first announced the 12GB card as a low-end RTX 4080, but people pointed out that its specs really didn’t match those expected of an xx80-class GPU, causing Nvidia to “unlaunch” the card. It’s subsequently been reborn as the RTX 4070 Ti, which starts shipping Jan. 5 at prices from $800.
What seems particularly interesting is that despite Nvidia’s generic renderings of the card, there doesn’t seem to be an Nvidia-branded Founders Edition version, which there usually is for this level of GPU. That means there’s no guarantee there will be an actual card available at that entry-level price; we could always count on an Nvidia Founders Edition to be the one model that hewed to the announced price, even if it tended to go out of stock and stay that way.
Stay tuned for my review!
GeForce Now Ultimate
Nvidia has also upgraded the back-end servers for its cloud-gaming service from RTX 3080-class to RTX 4080-class GPUs, which means the service’s top-tier membership gets an upgrade as well. By going with “Ultimate,” Nvidia doesn’t have to rebrand every time it upgrades the hardware, as it did with the previous “RTX 3080” membership.
For the same $20 per month, you get the same perks but the better performance afforded by the new cards. That can translate to effectively 240 frames per second, up from 120fps (the details are unclear). Current RTX 3080 subscribers will automatically transition to the new plan when it becomes available. As usual, it will roll out incrementally across different regions.
You may also get GeForce Now as part of your car’s entertainment system if it uses Nvidia Drive technology. Now all you need is a way to create routes based on the quality of your cell signal to prevent interruptions.
Creator tools
Two notable software tools that run on RTX GPUs join the family. Nvidia Broadcast will get a beta Eye Contact effect — faking eye contact for videoconferences and presentations is the New Big Thing that I don’t like (Windows has it as well). I’ve never seen an implementation that’s not disturbing, and I think at least one of the presenters in the stream was using it because of the unblinking thousand-yard stare that didn’t so much look at you as through you. Maybe that’s just me, though.
The other potentially big feature is RTX Video Super Resolution, designed to improve video streaming on Chrome and Edge. It uses AI upscaling and artifact reduction to improve the look of 1080p video on higher-resolution screens. That will run on RTX 30- and 40-series GPUs.
And Nvidia’s Canvas generative-AI sketch tool, which can work on any RTX GPU, will go into beta this quarter.
Nvidia also provided some updates on its robotics and automotive development technologies. They include new features in its Isaac Sim environment, such as the ability to model multiple humans and arrays of robots (for AI training) and more. CES isn’t a big show for these back-end technologies — that’s more the purview of Nvidia’s designer- and developer-focused GTC and GDC conferences — so most of the news was about partnerships and updates on capabilities entering early access. If that’s what floats your boat, you can get all the details on Nvidia’s site rather than have me de-weed them for you.
Technologies
Google Making AI-Powered Glasses With Warby Parker, Gentle Monster
Google revealed its first two partnerships with eyeglass brands, with more to come.

The tech world has rarely been called stylish. But at Google’s annual I/O developers conference on Tuesday, the company took one step into the fashion world — kind of. Google revealed that the first eyeglass brands to carry Android XR AI-powered glasses will be Warby Parker and Gentle Monster, with more brand partners to be announced in the future. Android XR is Google’s upcoming platform for VR, AR and AI on glasses and headsets.
Yes, there was a Superman joke: The company quipped that unlike Clark Kent, who hid his superpowers behind nerdy glasses, the Android XR glasses will give you superpowers. That remains to be seen, although NBA star Giannis Antetokounmpo did show up at Google I/O wearing the XR glasses.
Warby Parker, founded in 2010, was originally an online eyeglass retailer that gained fame for its home try-on program, where customers could order five frames sent to their home to try on and then return. It also allowed customers to upload photos to see how they would look wearing different frames.
South Korean eyeglass brand Gentle Monster, founded in 2011, is known for its luxury eyeglasses and sunglasses. The company’s celebrity customers include Beyoncé, Rihanna, Kendrick Lamar and Billie Eilish.
Technologies
Google I/O Announcements: The Latest AI Upgrades Coming to Gemini, XR and More
From its new Project Aura XR glasses to Chrome’s wants-to-be-more-helpful AI mode, Gemini Live and new Flow generative video tool, Google puts AI everywhere.

As you’d expect, this year’s Google I/O developers conference focused almost exclusively on AI — where the company’s Gemini AI platform stands, where it’s going and how much it’s going to cost you now for its new AI Ultra subscription plan (spoiler: $250 per month). Meanwhile, a new Flow app expands the company’s video-generation toolset, and its Android XR glasses make their debut.
Plus, all AI usage and performance numbers are up! (Given that a new 42.5-exaflop Ironwood Tensor processing unit is coming to Google Cloud later this year, they’ll continue to rise.)
Google’s Project Aura, a developer kit for Android XR that includes new AR glasses from Xreal, is the next step in the company’s roadmap toward glasses-based, AI-driven extended reality. CNET’s Scott Stein goes in-depth on that future in an exclusive interview with Shahram Izadi, Google’s VP and GM for Android XR. And the headset-based Project Moohan, developed in conjunction with Samsung, is now available, and Google’s working with Samsung to extend beyond headsets.
For a play-by-play of the event, you can read the archive of our live blog.
Google already held a separate event for Android, where it launched Android 16, debuting its new Material 3 Expressive interface, updates to security and an update on Gemini integration and features.
A lot of the whizzy new AI features are only available via one of its subscription levels. AI Pro is just a rebranding of Google’s $20-per-month Gemini Advanced plan (adding some new features), but Google AI Ultra is a pricier new option — $250 per month, with half off the first three months for the moment — that provides access to the latest, spiffiest and least usage-limited of all its tools and models, as well as a prototype for managing AI agents and the 30 terabytes of storage you’re going to need to store it all. They’re both available today.
Google also wants to make your automated responses sound smarter with Personalized Smart Replies, which makes generated answers sound more like you and plows through information on your device to pull in relevant details. It’ll be in Gmail this summer for subscribers. Eventually, it’ll be everywhere.
The event also covered lots of better models, better coding tools and other developer-friendly details you’d expect from a developer conference. The announcements included conversational Gemini Live, formerly part of Project Astra, Google’s interactive, agentic, voice-driven, kitchen-sink AI app. (As Managing Editor Patrick Holland says, “Astra is a rehearsal of features that, when they’re ready for the spotlight, get added to Gemini Live.”) And for researchers, NotebookLM incorporates Gemini Live to improve its… everything.
It’s available now in the US.
Chrome AI Mode
People (that is, those over 18) who pony up for the subscriptions, plus users on the Chrome Beta, Dev and Canary tracks, will be able to try out the company’s expanded Gemini integration with Chrome — summary, research and agentic chat based on the contents of your screen, somewhat like Gemini Live does for phones (which, by the way, is available for free on Android and iOS as of today). But the Chrome version is more suited to the type of things you do at a computer rather than a phone. (Microsoft already does this with Copilot in its own Edge browser.)
Eventually, Google plans for Gemini in Chrome to be capable of synthesizing using multiple tabs and voice navigation.
The company is also expanding how you can interact with AI Overviews in Google Search as part of AI Mode, including more agentic shopping help. AI Mode appears as a new tab in Search, or on the search bar, and it’s available now. It includes deeper searches and Personal Context, which uses all the information Google knows about you (and that’s a lot) to make suggestions and customize replies.
The company detailed its new AI Mode for shopping, which has an improved conversational shopping experience, a checkout that monitors for the best pricing, and an updated “try on” interface that lets you upload a photo of yourself rather than modeling it on a generic body.
Google plans to launch it soon, though the updated “try on” feature is now available in the US via Search Labs.
Google Beam
Formerly known as Project Starline, Google Beam is the updated version of the company’s 3D videoconferencing, now with AI. It uses a six-camera array to capture you from all angles; AI then stitches those views together, tracks your head to follow your movements and streams the result at up to 60 frames per second.
The platform uses a light field display that doesn’t require wearing any special equipment, but that technology also tends to be sensitive to off-angle viewing. HP, Google’s hardware partner for Beam, is an old hand in the large-scale scanning biz, including 3D scanning, so the pairing isn’t a big surprise.
Flow and other generative creative tools
Google Flow is a new tool that builds on Imagen 4 and Veo 3 to perform tasks like creating AI video clips and stitching them into longer sequences, or extending them, with a single prompt while keeping them consistent from scene to scene. It also provides editing tools like camera controls. It’s available as part of Gemini AI Ultra.
Imagen 4 image generation is more detailed, with improved tonality and better text and typography. And it’s faster. Meanwhile, Veo 3, also available today, has a better understanding of physics and native audio generation — sound effects, background sounds and dialogue.
Of course, all this is available under the AI Pro plan. Google’s SynthID gen AI detection tool is also available today.