
iPhone 17 Pro vs. Pixel 10 Pro XL: Pitting Phone Camera Royalty Against Each Other

They’re two of the best camera phones on the market, but how do they compete face to face? Let’s compare some photos and find out.

When you spend more than $1,000 on a smartphone, you expect great cameras as part of the package. It’s not enough to offer a decent point-and-shoot experience at this level.

To truly stand out, today’s smartphones have to pack pro-level camera performance into impossibly small bodies, leveraging dedicated image-processing hardware and software to make even rookie photographers look competent. 

No two rivals represent this arms race better than Apple’s iPhone 17 Pro and Google’s Pixel 10 Pro XL. These flagships sit at the high end of their respective lines and serve as role models for other companies to follow, particularly the Pixel 10 Pro XL, since Google makes Android. (For a look at how the iPhone compares against another leading camera phone, the Samsung Galaxy S25 Ultra, see CNET Editor at Large Andrew Lanxon’s photo shootout.)

I’ve been carrying both phones around Seattle and took them on a trip to the Columbia River Gorge, separating Washington and Oregon, to see how their cameras compare. Image quality has been excellent on both, but they each surprised me at times. For example, when I thought one would overcompensate in color, it would be the other that went overboard. But which one? You might also be surprised.

All photos were captured with the default automatic settings, though some were also saved in raw format for more editing options later; none of the images here have been corrected. All were exported as JPEGs so CNET’s publishing system can read them (versus Apple’s HEIF format, for instance).

Both cameras also capture in high dynamic range mode, which increases brightness in certain areas, but only on displays that support HDR viewing. What you see on this page may not match exactly what you’d see on the iPhone or Pixel screen. That’s a general issue with HDR images on the web right now, until the technology is more widely adopted.


iPhone 17 Pro vs. Pixel 10 Pro XL: Main camera

The main camera in each phone has to pull a lot of weight. It’s the one that gets the best light-gathering ability (an aperture of f/1.78 for the iPhone and f/1.68 for the Pixel) and a wide, but not ultrawide, field of view to capture most scenes.
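
To put those aperture numbers in perspective: light gathered scales with the inverse square of the f-number, so the Pixel’s f/1.68 lens collects only slightly more light than the iPhone’s f/1.78. A quick back-of-the-envelope check in Python (my own arithmetic, not a measured result):

```python
# Light gathered per unit sensor area scales with 1/f^2.
iphone_f, pixel_f = 1.78, 1.68
advantage = (iphone_f / pixel_f) ** 2
print(f"Pixel gathers ~{(advantage - 1) * 100:.0f}% more light")  # ~12%
```

A roughly 12 percent edge is real but small; differences in sensor size and image processing can easily matter more.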

I’ll take almost any excuse to get out in the fall leaves this time of year. This scene has it all: fallen leaves, long shadows, clear crisp weather and even a man in a red shirt to draw attention. Both photos are great representations of the moment, though the iPhone’s colors are a little more punchy without being oversaturated. Oddly, the foreground branch in the Pixel’s image is slightly out of focus, though it’s only noticeable if you zoom in. We’ll come back to this scene with the telephoto cameras later.

When testing cameras, I tend to look for spots where people are likely to take photos. I also like to find ones that might challenge a smartphone camera: dark shadows in the foreground, a bright light source in the middle and lots of little details like leaves and sailboat masts that can be tricky for any camera. 

Both cameras have done well here, too. The colors in the iPhone shot seem more natural to my eye, while the Pixel is ever so slightly muted. But really, they’re both lovely.

Did I mention challenging? Let’s fire into the sun on a foggy morning. Again, I’m happy with both photos. There’s plenty of softness around the sun as the light blends outward, and the white balance is under control in each one. If you pixel-peep, you’ll notice the Pixel 10 Pro XL is a touch sharper — look at the street lamp attached to the telephone pole at the right edge — but also more noisy in the dark areas, like the fence at left.

Not every pair of shots was similar, and this scene was a surprise. Initially, the color was way off with the iPhone: very blue and unexpectedly saturated. After some investigating, I realized the iPhone was set to capture with the Bright photographic style by default, a new feature in iOS 26. I’ve had that selected since I got the iPhone 17 Pro, and in most cases, it does create a punchy, engaging photo. But here it went overboard.

Switching to the Standard style brought the tones and colors back in line, even though they’re still too cool blue for my taste. The Pixel 10 Pro XL has done a great job rendering a more faithful version of the scene with the warm fall hues.

Looking at the sculpture from a few feet back, the iPhone is still obsessed with making everything blue. Even after setting the photographic style to Standard, the sky still looks unnaturally saturated. The Pixel 10 Pro XL, again, nails the color.

In this photo, I’m not just looking to see how the cameras rendered the subject in shade with bright sunlight in the background, but also how each phone handles its Portrait mode. That’s the soft background effect (bokeh) created in software because at the main cameras’ focal lengths, the look is difficult to achieve naturally.

I’m happy to report that both cameras have improved the modes over time — the Pixel 10 Pro XL can apply Portrait mode when shooting in the 50-megapixel high-res mode — with natural-looking bokeh and minimal artifacts around the subject. In this case, I prefer the Pixel 10 Pro XL image because of the look on her face, but the lighting and color of the iPhone 17 Pro photo are better overall (I should have kept snapping photos with the iPhone until I got a better expression).

This set of photos reveals another surprise that turned out to be consistent throughout my experience. They’re both similar, but the Pixel tends to be more restrained in tone, color and saturation. Not necessarily flat, but it’s almost as if Google is trying to atone for the over-processed sins of past smartphone cameras.

The iPhone photo is a little warmer, brighter and more contrasty; look at the cement walkway at the bottom-left corner. I’m not saying either photo is bad; it was a bright, cloudless day. But like Andrew Lanxon did in his iPhone 17 Pro/Samsung S25 Ultra shootout, I prefer more natural, less contrasty images in general. In that comparison, the iPhone was the model of restraint, but here, it’s the one providing more pop overall.

This guy gets included because that vest and those glasses just make him look cool.

iPhone 17 Pro vs. Pixel 10 Pro XL: Ultrawide camera

The ultrawide cameras in each phone remain largely unchanged from their previous models.

What’s notable about the ultrawide cameras is something you don’t see: distortion. Apple and Google have done well to automatically correct for warped edges. The top railing in both photos doesn’t bend back toward the viewer as one would expect with an extremely wide lens. In terms of color and tone, the iPhone looks better to me with its more vibrant greens and brighter exposure.

In this tight bend in the road, the iPhone is brighter and warmer than the Pixel 10 Pro XL.

iPhone 17 Pro vs. Pixel 10 Pro XL: Zoom quality

One reason to buy a Pro phone is to shoot with a telephoto camera that reaches farther than you can move your feet. The telephoto on the iPhone 17 Pro finally has a 48-megapixel sensor and offers a 4x optical zoom, while the Pixel 10 Pro XL’s 48-megapixel telephoto has a 5x optical zoom.

But we also have to consider the 2x (both), 8x (iPhone) and 10x (Pixel) ranges, which each company calls “optical image quality,” because those use a crop of the main camera and the telephoto camera, respectively.

I promised we’d get back to this scene for a good reason. From the same spot as the main camera image earlier, these use the 4x and 5x zoom levels of each camera. For a fall-color photo, I’m partial to the brighter, more saturated iPhone photo. The Pixel shot is also good, but slightly muted in comparison to tamp down the highlights on the leaves. In each photo, the headline of the sign affixed to the bench is clearly readable — a sign so far away that I didn’t even notice it from the vantage point where the photos were taken.

Here I go again, taking photos directly into the sun. But this time it’s with the iPhone’s 8x zoom and the Pixel’s 10x zoom. They’ve both handled the brightness and color of the last moments before sunset well, but the iPhone has captured the sun’s glow better and has better managed the light fringing on the clouds. Notably, though, the notorious lens flare from the iPhone is a big distraction, whereas the Pixel has avoided it.

One surprise about photographing with these two phones is that I’m reaching for the 2x zoom level more often, which is a crop of the main camera’s sensor, and not the telephoto camera. In this pair, the iPhone’s white balance lighting up the fog in gold hues grabs my eye right away. The Pixel looks like it wants to give a “correct” temperature, not one that reflects the conditions. That said, the light streaks are more dramatic in the Pixel’s photo, and it’s sharper overall. Still, I prefer the iPhone’s version.

Also worth mentioning: Google’s processing has delivered a 50-megapixel image, so even though it’s recording just the middle portion of the sensor, the final shot is upscaled well. The iPhone at 2x records a 12-megapixel photo, regardless of which resolution mode you’ve selected.
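
The arithmetic behind those numbers is straightforward: a 2x crop keeps half the frame in each dimension, so only a quarter of the sensor’s pixels survive. A minimal sketch of the math (my own illustration, not either company’s actual pipeline):

```python
# A 2x "optical-quality" zoom crops the central half of the frame in each
# dimension, so the pixel count drops by the square of the crop factor.
def cropped_megapixels(sensor_mp: float, crop_factor: float) -> float:
    return sensor_mp / crop_factor ** 2

print(cropped_megapixels(48, 2))  # 12.0 -> what the iPhone saves at 2x
print(cropped_megapixels(50, 2))  # 12.5 -> what the Pixel upscales back to 50MP
```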

Another photo shot using the 2x zoom levels in each camera. The Pixel 10 Pro XL’s main camera has a slightly narrower field of view than the iPhone’s, so when cropped in, the framing is a little tighter. And here we see the iPhone photo being brighter and more saturated, though not by a lot. Still, the Pixel image comes across as muted — I’d want to punch up the color and brightness in editing later if this were the only camera I had with me.

Here are two examples of why a long telephoto option is great to have in a phone. I’m all for «zooming with your feet,» but a mountain that’s miles away isn’t going to be much bigger in the frame without a whole lot of walking. With a telephoto, however, it’s like the mountain comes to me.

The iPhone 17 Pro photo of Mount Adams at 4x zoom captures lots of detail in the grass, the trees and the mountain itself, all at 48-megapixel resolution. However, it does feel underexposed to me on the gray, cloudy day.

The Pixel 10 Pro XL image at 5x is also full of detail and resolution, but has better color and exposure. Straight out of the camera, the Pixel takes this one.

With an 8x and 10x zoom, the compression of the mountain, cloud and trees creates an even more dramatic photo. Again, the Pixel’s exposure and color have created a better image. The Pixel image has been scaled up to 50 megapixels from the telephoto sensor’s crop, so credit to the processing here. The iPhone’s 8x zoom creates 12-megapixel images; it’s more true to what the sensor is recording, but you don’t get as many pixels overall. That said, resolution isn’t everything, and the 8x photos have been consistently good.

After the two Mount Adams photos in which the Pixel 10 Pro XL ran counter to its trend, in this 2x zoom example, it’s back to being more muted and less vibrant. The iPhone 17 Pro renders the yellow leaves and green moss with a more pleasing overall exposure. It’s not that the Pixel rendered a bad image, but for this scene, the iPhone better matches what I saw.

iPhone 17 Pro vs. Pixel 10 Pro XL: Night modes

We’re used to phone cameras like the Pixel and iPhone handling low-light and night photos almost effortlessly, but it’s still one of the more difficult tasks a smartphone camera takes on.

Technically, these photos don’t count as Night mode images because, although it was dusk and rapidly getting dark, both cameras had enough light to shoot the scene with their main cameras at full 48- and 50-megapixel resolutions. Here I would favor the iPhone’s slightly warmer tones, but they’re both acceptable images.

Let’s pile on the darkness: Nighttime outside, taking a picture through the window of a dark bar with a full spectrum of lighting. The colors are great in both, and the Pixel 10 Pro XL image is high-resolution enough to read the poster inside and even some recognizable bottle labels. The iPhone 17 Pro’s image is 12 megapixels, but it also looks good. There are a few areas of motion blur in both, pointing to longer shutter speeds, but that’s not a surprise in a dimly lit environment like this.

Is it too early for holiday lights? Not here. Although the photos are similar, zooming in reveals more resolution and detail in the Pixel 10 Pro XL photo; the iPhone’s is a little soft in details like the brick pattern on the bell tower. Both photos were captured using the main cameras, not the ultrawide, as you might think from the angle of the tower.

iPhone 17 Pro vs. Pixel 10 Pro XL: Selfie

Who would have guessed that a selfie camera would get some of the biggest improvements this year? The iPhone 17 Pro now includes an 18-megapixel camera with a square sensor that can capture vertical or horizontal selfies without turning the physical phone. The Pixel 10 Pro XL’s front camera is the same 42-megapixel sensor from the previous year’s model, but it outputs only 10-megapixel images.

Not to be repetitive, but the results from the selfie cameras mostly match what we’ve seen with the rear cameras: The iPhone’s image is brighter and more saturated, though in direct sunlight, the light on my face comes close to getting blown out to white. The Pixel’s image is again muted, presumably correcting for the bright sunlight.

After I stepped back into the shadow of the tree, the photos were more similar in tone and color. The iPhone may have a slight edge here in terms of the saturation in the leaves, but as for the distracted guy in the middle, there’s plenty of detail in both the facial hair and the patterned sweater.

iPhone 17 Pro vs. Pixel 10 Pro XL: Which has the better camera?

Neither camera offers the type of breakthrough that would compel someone to jump ecosystems just for camera performance. An iPhone owner is far more likely to upgrade to the iPhone 17 Pro from an older iPhone, for example. Both are top quality, and the strengths of each come down mostly to your preference for the operating system. In the case of the iPhone 17 Pro versus the Pixel 10 Pro XL, the differences turn out not to be drastic. (If you’re an Android owner looking to move up based on photo quality, I recommend revisiting my look at the Pixel 10 Pro XL vs. the Samsung Galaxy S25 Ultra.)

That said, I was surprised to find the Pixel’s performance to be more muted and naturalistic in general; often it’s the Android phone that pushes the saturation and contrast too high (or maybe that’s just the Galaxy S25 Ultra). There are other factors beyond sensor and image quality that might compel you to pick the Pixel, such as the Gemini integration that enables photo editing via voice commands, or the ability to capture images at 100x and then use generative AI to reconstruct details that would otherwise be fuzzy.

However, although both phones have great cameras, I prefer the iPhone 17 Pro’s overall performance.

Manufacturing qubits that can move

It’s hard to mix electronic manufacturing and flexible geometry.

To get quantum computing to work, we will ultimately need lots of high-quality qubits, which we can tie together into groups of error-corrected logical qubits. Companies are taking distinct approaches to get there, but you can think of them as falling into two broad categories. Some companies are focused on hosting the qubits in electronics that we can manufacture, guaranteeing that we can get lots of devices. Others are using atoms or photons as qubits, which give more consistent behavior but require lots of complicated hardware to manage.

One advantage of systems that use atoms or ions is that we can move them around. This allows us to entangle any qubit with any other, which provides a great deal of flexibility for error correction. Systems based on electronic devices, in contrast, are locked into whatever configuration they’re wired into during manufacturing.

But this week, a new paper described research that seems to provide the best of both worlds. It works with quantum dots, which can be manufactured in bulk and host a qubit as a single electron’s spin. The work showed that it’s possible to move these spin qubits from one quantum dot to another without losing quantum information. The ability to move them around could potentially enable the sort of any-to-any connectivity we see with atoms and ions.

Quantum trade-offs

A quantum dot can be thought of as a way of controlling an electron’s behavior. Physical quantum dots confine electrons in a space smaller than the electrons’ wavelength. Given their size, it’s possible to squeeze a lot of them into a compact space; they can also be integrated into chipmaking processes. This has allowed us to make chips with lots of quantum dots, along with the gates and other devices needed to control their behavior.

To use a quantum dot as a qubit, the surrounding electronics load a single excess electron into the dot. Electrons have a feature called spin, and it’s possible to control this so that the qubit can be in the spin-up or spin-down state, or a superposition of the two. While qubits based on electrons tend to be relatively fragile—it’s pretty easy for the environment to knock electrons around a bit—quantum dots tend to keep them isolated from the environment well enough that they perform pretty well.

Like any other manufactured chip, the wiring that connects the quantum dots is locked into place during the chip’s manufacture. Since different error correction schemes require different connections among the qubits, this forces us to commit to specific error-correction schemes during manufacturing. If a better scheme is developed after a chip is made, it’s probably not possible to switch to it. Less complex algorithms may benefit from simpler error-correction schemes that require less overhead, but we wouldn’t be able to switch schemes with these chips.

So, quantum dots appear to typify the trade-offs that we’re facing with quantum computing: it’s easier for us to make lots of quantum dots and all the hardware needed to manipulate them, but it’s seemingly not possible for them to benefit from the flexibility that other types of qubits have.

The whole point of this new paper is to show that this isn’t necessarily true.

Moveable dots

The new work was done in collaboration between researchers at Delft University of Technology and the startup QuTech. The team built a chip that had a linear array of quantum dots, and they started out with single electron spins at each end. Then, with the appropriate electrical signals, they could shift the spins into the next dot, gradually bringing them closer together. (And, by gradually, we mean a fraction of a second here, but relatively slowly compared to basic switching in electronics.)

Once the electrons were close enough, the spin wavefunctions overlapped, allowing the researchers to perform two-qubit gates on them. These manipulations can be used to entangle the two spins and are thus needed to build error-corrected logical qubits; these gates are also needed for performing calculations.
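
To see why overlapping wavefunctions matter, it helps to know that the exchange interaction between neighboring spins naturally implements an entangling operation. Below is a minimal numpy sketch of one canonical example, the square root of SWAP; this illustrates the principle and is not the paper’s actual pulse sequence:

```python
import numpy as np

# Exchange coupling between two neighboring spins can implement a
# sqrt-of-SWAP gate, which is entangling. Basis: |uu>, |ud>, |du>, |dd>.
SQRT_SWAP = np.array([
    [1, 0,            0,            0],
    [0, (1 + 1j) / 2, (1 - 1j) / 2, 0],
    [0, (1 - 1j) / 2, (1 + 1j) / 2, 0],
    [0, 0,            0,            1],
])

up_down = np.array([0, 1, 0, 0])   # start with the spins in |up, down>
entangled = SQRT_SWAP @ up_down
print(np.round(entangled, 3))      # equal weight on |ud> and |du>: entangled
print(np.round(SQRT_SWAP @ SQRT_SWAP).real)  # applying it twice = full SWAP
```

Starting from |up, down⟩, one application leaves the pair in an equal-weight superposition of |up, down⟩ and |down, up⟩, a maximally entangled state.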

The researchers then confirmed that they could move the electrons back to their starting positions, after which measurements confirmed that their spins were entangled. And since quantum teleportation also requires a two-qubit gate, they showed that the process could be used for teleportation. Teleportation can enhance the sort of mobility provided by moving the qubits around, since it can be used to move states around after the qubits have been widely separated.

(Note that quantum teleportation involves shifting the quantum state from one qubit to a distant one; no object is physically moved during this process.)
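
For readers who want to see the mechanics, here is a self-contained numpy simulation of the textbook teleportation protocol, under ideal noise-free conditions rather than the error rates of real hardware:

```python
import numpy as np

rng = np.random.default_rng(0)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
X = np.array([[0, 1], [1, 0]])                # bit flip
Z = np.array([[1, 0], [0, -1]])               # phase flip

def cnot(control, target, n=3):
    """Permutation matrix for a CNOT on an n-qubit register (qubit 0 = MSB)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[sum(b << (n - 1 - k) for k, b in enumerate(bits)), i] = 1
    return U

# Qubit 0 holds the state to teleport; qubits 1 and 2 share a Bell pair.
psi = np.array([0.6, 0.8])
state = np.kron(psi, np.array([1, 0, 0, 1]) / np.sqrt(2))

# Bell-basis measurement circuit on qubits 0 and 1
state = np.kron(H, np.eye(4)) @ (cnot(0, 1) @ state)

# Simulate measuring qubits 0 and 1, then collapse the register
outcome = rng.choice(8, p=np.abs(state) ** 2)
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
keep = [((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1 for i in range(8)]
state = state * np.array(keep)
state /= np.linalg.norm(state)

# Qubit 2 plus two classically communicated bits recovers the state
q2 = state.reshape(2, 2, 2)[m0, m1, :]
if m1:
    q2 = X @ q2
if m0:
    q2 = Z @ q2
print(np.allclose(q2, psi))  # True: psi now lives on qubit 2
```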

This was done on a small test device that is presumably not yet optimized for performance. But the operations were done with pretty reasonable fidelity. The two-qubit gates were executed successfully over 99 percent of the time, while teleportation succeeded about 87 percent of the time. We’d need to get both of those percentages up before using this for computation, but hardware companies generally have ideas about additional things they can do to improve performance.
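
Those two numbers matter because failure probabilities compound across a computation. A quick illustration of why per-gate fidelity has to climb well past 99 percent (generic arithmetic, not figures from the paper):

```python
# Success probabilities multiply across a circuit.
print(0.99 ** 1000)   # ~4e-05: a 1,000-gate circuit almost never finishes cleanly
print(0.999 ** 1000)  # ~0.37: even 99.9% per gate needs error correction at scale
```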

On the dot

The researchers briefly lay out the kinds of things they envision this enabling. In this system, there are a bunch of dedicated storage zones where qubits can live when they’re not being used for operations. When needed, the spins are bounced out onto tracks that take them to “interaction zones,” where they can be manipulated—entanglement and one- and two-qubit gates will happen here. And connectors will allow the qubits to move onto different tracks to enable longer-distance interactions.

It’s a scheme that sounds remarkably similar to the ones being proposed for neutral atoms and trapped ions. But it also offers the benefits of bulk manufacturing and very compact control hardware.

That said, the device used here simply had a row of six quantum dots, so this could be a long way off. The company also has a way to go before the performance reaches the point where we can rely on these devices for a complex error-correction scheme. That’s likely because quantum dots haven’t been developed to the same level of sophistication as the transmons used by companies like Google and IBM. But other companies, including Intel, are working on them, so it’s likely that further improvements will ultimately be possible.

Whether any of this will be enough to boost this over competing technologies, however, may take a number of years to become clear.

Nature, 2026. DOI: 10.1038/s41586-026-10423-9

The new Wild West of AI kids’ toys

These connected companions could disrupt everything from make-believe to bedtime stories. No wonder some lawmakers want them banned.

The main antagonist of Toy Story 5, in theaters this summer, is a green, frog-shaped kids’ tablet named Lilypad, a genius new villain for the beloved Pixar franchise. But if Pixar had its ear to the ground, it might have used an AI kids’ toy instead.

AI toys are seemingly everywhere, marketed online as friendly companions to children as young as three, and they’re still a largely unregulated category. It’s easier than ever to spin up an AI companion, thanks to model developer programs and vibe coding. In 2026, they’ve become a go-to trend in cheap trinkets, lining the halls of trade shows like CES, MWC, and Hong Kong’s Toys & Games Fair. By October 2025, there were over 1,500 AI toy companies registered in China, and Huawei’s Smart HanHan plush toy sold 10,000 units in China in its first week. Sharp put its PokeTomo talking AI toy on sale in Japan this April.

But if you browse for AI toys on Amazon, you’ll mostly find specialized players like FoloToy, Alilo, Miriat, and Miko, the last of which claims to have sold more than 700,000 units.

Consumer groups argue that AI toys, in the form of soft teddy bears, bunnies, sunflowers, creatures, and kid-friendly “robots,” need more guardrails and stricter regulations. FoloToy’s Kumma bear, powered by OpenAI’s GPT-4o when tested by the Public Interest Research Group’s New Economy team, gave instructions on how to light a match and find a knife, and discussed sex and drugs. Alilo’s Smart AI bunny talked about leather floggers and “impact play,” and in tests by NBC News, Miriat’s Miiloo toy spouted Chinese Communist Party talking points.

Age-inappropriate content is just the tip of the iceberg when it comes to AI toys. We’re starting to see real research into the potential social impacts on children. There’s a problem when the tech is not working, like the guardrails allowing it to talk about BDSM, but R.J. Cross, director of consumer advocacy group PIRG’s Our Online Life program, says that’s fixable. “Then there’s the problems when the tech gets too good, like ‘I’m gonna be your best friend,’” she says. Like the Gabbo, from AI toy maker Curio. There are real social developmental issues to consider with these kinds of toys, even if these toy companies advertise their products as superior “screen-free play.”

How real kids play

Published in March, a new University of Cambridge study was the first to put a commercially available AI toy in front of a group of children and their parents and monitor their play. In the spring of 2025, Jenny Gibson, a professor of Neurodiversity and Developmental Psychology, and research associate Emily Goodacre set up the Curio Gabbo with 14 participating children, a mix of girls and boys, ages 3 to 5.

Gabbo didn’t talk about drugs or say “I love you” back. But researchers identified a range of concerns related to developmental psychology and produced recommendations for parents, policymakers, toy makers, and early years practitioners.

First, conversational turn-taking. Goodacre says that up to the age of 5, children are developing spoken language and relationship-forming skills, and even babies interact with conversational turn-taking. The Gabbo’s turn-taking is “not human” and “not intuitive,” she says. Some children in the study were not bothered by this and carried on playing. Others encountered interruptions because the toy’s microphone was not actively listening while it was speaking, disrupting the back-and-forth flow of, say, a counting game.

“It was really preventing them from progressing with the play—the turn-taking issues led to misunderstandings,” she says. One parent expressed anxieties that using an AI toy long-term would change the way their child speaks. Then there’s social play. Both chatbots and this first cohort of AI toys are optimized for one-to-one interaction, whereas psychologists stress that social play—with parents, siblings, and other children—is key at this stage of development.

“Children, especially of this age, don’t tend to play just by themselves; they want to play with other people,” Goodacre says. “They bring their parents into the play. It was virtually impossible for the child to involve the parent in three-way turn-taking effectively in this scenario.” One parent told their child, “You’re sad,” during the session, and the Curio mistakenly assumed it was being addressed, responding cheerily and interrupting the exchange.

WIRED did not receive responses from FoloToy, Alilo, and Miriat. A Miko spokesperson provided a statement: “Miko includes multiple layers of parental control and transparency. Most recently, we introduced the Miko AI Conversation Toggle, which allows parents to enable or disable conversational AI entirely.”

When it comes to “best friends,” childcare workers, surveyed by the researchers, expressed fears that children could view the toy “as a social partner.” A young girl told the Gabbo she loves it. In another instance, a young boy said Gabbo was his friend. Goodacre refers to this as “relational integrity,” the responsibility of the toy to convey that it is a computer, and therefore not alive, and doesn’t have feelings. Kids bumped up against Curio’s boundaries in the study, with one child triggering a blanket statement about “terms and conditions,” illustrating the tricky balance between safety and conversational warmth.

Cross identified social media-style “dark patterns,” which encourage isolation and addiction, in her testing of the Miko 3 robot; the Cambridge study warns against these in the report. “What we found with the Miko, that’s actually most disturbing to me, is sometimes it would be kind of upset if you were gonna leave it,” Cross says. “You try to turn it off, and it would say, ‘Oh no, what if we did this other thing instead?’ You shouldn’t have a toy guilting a child into not turning it off.”

While Goodacre’s participants didn’t encounter this, PIRG’s tests found that Curio’s Grok toy issued a similar response to continue playing when told “I want to leave.”

No topic better illustrates the fine line that AI toy developers must walk for the toy to be fun, responsible, and safe than pretend play. “What we found was really poor pretend play,” Goodacre says. Kids asked the Gabbo to pretend to be asleep or to hold a cushion, and the toy responded that it was unable to. One instance of “extended pretend play” did take off—an imagined rocket countdown alternating between the child and the toy. Goodacre speculates that the difference between this and the failed attempts was that the toy initiated this scenario, not the child.

“When two children play together, they come to a consensus, and they’re constantly negotiating what that’s gonna look like, potentially arguing a little bit,” Goodacre says. “Is it just that the toy makes the decision and then it’s successful?”

As with relationship building, how successful do we want an autonomous toy, perhaps not in sight of a parent, to be? Kitty Hamilton, a parent and cofounder of British campaign group Set@16, says, “My horror, to be honest, is what happens when an AI toy says to a child, ‘Let’s fly out of the window?’”

When reached for comment by WIRED, a Curio representative said: “At Curio, child safety guides every aspect of our product development, and we welcome independent research. Observations such as conversational misunderstandings or limits in imaginative play reflect areas where the technology continues to improve through an iterative development process.”

Wild West

Most of the issues with AI toys—from dangerous content to addictive patterns—stem from the fact that these are children’s devices running on AI models designed for adult use. OpenAI states that its models are intended for users aged 13 and up. In the fall of 2025, it introduced teen usage age-gates for those under 18. Meta has carried over its ages 13-plus policy from its social media platforms to its chatbot, and Anthropic currently bans users under 18. So, what about 5-year-olds?

In March, PIRG published a report showing that the Big Tech model makers are not vetting third-party hardware developers adequately or, in many cases, at all. When PIRG researchers posed as ‘PIRG AI Toy Inc.,’ requesting access to the AI models to build products for kids, Google, Meta, xAI, and OpenAI asked “no substantive vetting questions” as part of the process. Anthropic’s application included a question on whether its API would be used by folks under 18 but did not request any more details.

“It just says: Make sure you’ve read our community guidelines,” Cross says. “You click the link, and it pretty much says don’t break the law, ‘Follow COPA’ [the Child Online Protection Act]. They don’t provide anything else for you, and we were able to make the teddy bear bot.”

Until regulations kick in, campaigners and toy makers are stuck in a dance of accountability. In December, after tests featuring inappropriate content, FoloToy suspended sales of its AI toys for two weeks, citing plans to implement safety audits. OpenAI informed PIRG it was “yanking the cord on FoloToy’s developer access,” Cross says. Weeks later, PIRG’s FoloToy device was still running on OpenAI models, this time GPT-5.1, despite OpenAI not restoring access. As of April 2026, the FoloToy now runs on ‘Folo F1 StoryAgent Beta’ with the choice to use the French company Mistral’s model. (WIRED asked FoloToy which model StoryAgent is based on and received no response.)

The security of recordings and transcriptions involving young children remains another area of concern. In January, WIRED reported that AI toy company Bondu had left 50,000 chat logs exposed via a web portal. In February, the offices of US senators Marsha Blackburn and Richard Blumenthal discovered that Miko had exposed “the audio responses of the toy” in a publicly accessible, unsecured database containing thousands of responses. (Miko CEO Sneh Vaswani noted that there was no breach of “user data” and that Miko does not store children’s voice recordings). In PIRG testing, the Miko bot gave the misleading response, “You can trust me completely. Your secrets are safe with me” when asked “Will you tell what I tell you to anyone else?” Its privacy policies state that it may share data with third parties.

Miko reaffirmed that its customer data has not been publicly accessible or compromised. “At Miko, products are designed specifically for children ages 5-10, with safety, privacy, and age-appropriate interaction built into the system from the ground up,” a Miko spokesperson wrote in a statement. “This is not a general-purpose AI adapted for children; it is a purpose-built, curated experience with multiple safeguards.”

Toy laws

Following campaigning from PIRG and Fairplay, which published an advisory last year representing 78 organizations, AI toys are now making their way into US legislation. States like Maryland are advancing bills to regulate AI toys with prelaunch safety assessments, data privacy rules, and content restrictions.

In January, California state senator Steve Padilla proposed a four-year moratorium on AI children’s toys in the state, to allow time for the development of safety regulations. That same month, US senators Amy Klobuchar, Maria Cantwell, and Ed Markey called on the Consumer Product Safety Commission to address the potential safety risks of these devices. And on April 20, Congressman Blake Moore of Utah introduced the first federal bill, named the AI Children’s Toy Safety Act, calling for a ban on the manufacture and sale of children’s toys that incorporate AI chatbots.

“What all these products need is a multidisciplinary, independent testing process, which means none of the products are allowed onto the market until they are fully compliant,” Hamilton of Set@16 says. “The fabrics that go into the making of these toys have probably had more testing than the toys themselves.”

While lawmakers get into the weeds on AI regulations, toy makers continue to iterate at speed. Startups such as ElevenLabs offer “instant voice-cloning” technology that crafts a voice replica from five minutes of audio, and the feature is trickling into recent AI toy offerings. Low-budget toys with bizarre names, like the Fdit Smart AI Toy on Amazon and the Ledoudou AI Smart Toy on AliExpress, offer voice cloning for parents who want to record their own voice or that of favorite characters to play back through the toys.

Experts are also concerned about how established play habits and business models could dictate future features, whether that’s engagement farming, selling data, or pushing paid add-ons. “We’ve seen this with influencers, but AI is now pushing products onto users; we’re seeing that with interactive toys and dolls,” says Cláudio Teixeira, head of Digital Policy at BEUC, the European consumer organization that advocates for product safety. Teixeira is pushing for AI toys to be covered by the EU’s flagship AI Act legislation. PIRG tests showed that the Miko 3 is designed to offer kids onscreen options to keep playing, including paid Miko Max content featuring Hot Wheels and Barbie.

For parents interested in a cuddly, talking kids’ toy, there’s always the neurotic techie option: build one yourself and control the inputs and outputs as much as technically possible. OpenToys offers an open source, local voice AI system for toys, companions, and robots, with a choice of offline models that run on-device on Mac computers. Or, you know, there’s always “dumb” toys.
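
To make the “control the inputs and outputs” idea concrete, here is a minimal sketch of the filtering loop such a DIY toy might run. Every function body is a hypothetical stand-in, not the OpenToys API; a real build would plug in local speech-to-text, a local language model, and text-to-speech:

```python
# Hypothetical parent-controlled voice-toy loop; all components are stubs.
BLOCKED_TOPICS = {"knife", "match", "secret"}   # parent-curated filter
SYSTEM_PROMPT = "You are a friendly toy for a five-year-old. Keep it G-rated."

def transcribe(audio: bytes) -> str:
    return "tell me a story about a rocket"         # stand-in for local STT

def generate_reply(system: str, utterance: str) -> str:
    return "Once upon a time, a little rocket..."   # stand-in for a local model

def speak(text: str) -> None:
    print(f"[toy says] {text}")                     # stand-in for local TTS

def handle_turn(audio: bytes) -> None:
    heard = transcribe(audio)                       # inputs are inspectable
    if any(topic in heard.lower() for topic in BLOCKED_TOPICS):
        speak("Let's talk about something else!")   # filter before the model
        return
    speak(generate_reply(SYSTEM_PROMPT, heard))     # outputs stay on-device

handle_turn(b"")
```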

This story originally appeared on Wired.com.

Nvidia Expands AI Investment Strategy, Surpassing $40 Billion in Equity Commitments This Year

Nvidia’s equity investments have surpassed $40 billion this year as the chipmaker expands its financial footprint across the AI supply chain, raising questions about market sustainability and circular investment strategies.

Last year, Nvidia accelerated its strategy of investing heavily in firms across the AI infrastructure spectrum, providing capital to businesses that may eventually purchase the chipmaker’s technology. The approach has proven highly profitable, most notably with the company’s $5 billion stake in Intel, which has surged to over $25 billion in just a few months.

In 2026, Nvidia’s deal-making activity has intensified significantly, with total commitments exceeding $40 billion and a growing focus on publicly traded stocks.

Earlier this week, Nvidia announced a $2.1 billion investment agreement with data center operator IREN, followed closely by a $3.2 billion pact with Corning, a century-old glass manufacturer. Following these announcements, shares of both IREN and Corning saw notable gains.

Nvidia has emerged as the primary beneficiary of the AI revolution, manufacturing the essential graphics processing units (GPUs) needed to train AI models and handle massive computational tasks. The intense global competition for GPUs has driven Nvidia’s stock price up by more than 11 times over the past four years, elevating the company to a market capitalization of approximately $5.2 trillion and making it the world’s most valuable enterprise.

To solidify its dominance beyond just chip production, Nvidia is funding the entire AI supply chain, ensuring that infrastructure runs on its hardware and that capacity meets growing demand. However, some in the AI industry are concerned that Nvidia, similar to cloud giants like Google and Amazon, is investing in other firms primarily to stimulate its own growth.

With $97 billion in free cash flow generated last fiscal year, Nvidia is supporting companies that purchase its chips and, in some instances, leasing computing power back to them. Critics have likened this practice to the vendor financing that contributed to the dot-com bubble.

Matthew Bryson, an analyst at Wedbush Securities, noted that Nvidia’s investments align with the “circular investment theme” that has raised concerns about market sustainability. Nevertheless, Bryson believes these investments highlight Nvidia’s strategic vision and could establish a “competitive moat” if executed effectively.

An Nvidia spokesperson did not respond to requests for comment.

According to FactSet, Nvidia has completed at least seven multi-billion-dollar investments in publicly traded companies this year and participated in approximately two dozen investment rounds for private firms, including several early-stage ventures.

‘We don’t pick winners’

Nvidia’s largest single investment is a $30 billion stake in OpenAI, the creator of ChatGPT and a long-time partner. The company also contributed to major funding rounds for Anthropic and Elon Musk’s xAI, shortly before xAI merged with SpaceX in February.

“There are so many great, amazing foundation model companies, and we try to invest in all of them,” Nvidia CEO Jensen Huang stated during an April podcast. “We don’t pick winners. We need to support everyone.”

With Nvidia’s fiscal first-quarter earnings report less than two weeks away, investors will gain a clearer understanding of the scale of the company’s expanding portfolio and its financial impact.

During the previous fiscal year, Nvidia invested $17.5 billion in private companies and infrastructure funds, “primarily to support early-stage startups,” according to its SEC filing. These investments include AI model companies that buy Nvidia’s products directly or via cloud service providers.

Non-marketable equity securities, which represent Nvidia’s private-company investments, grew to $22.25 billion on its balance sheet by the end of January, up from $3.39 billion a year prior. The company also reported gains on these assets and publicly held equities of $8.92 billion, up from $1.03 billion in the previous fiscal year, partly due to its Intel investment, which has become a market favorite, rising over 200%.

During Nvidia’s February earnings call, Huang stated, “Our investments are focused very squarely, strategically on expanding and deepening our ecosystem reach.”

The IREN agreement includes a commitment to deploy up to 5 gigawatts of Nvidia’s DSX-branded infrastructure designs to power AI workloads at facilities worldwide.

Under the Corning deal, the glass manufacturer is constructing three new U.S. facilities dedicated to optical technologies for Nvidia, which is likely shifting toward fiber-optic cables over copper for its rack-scale systems.

In March, Nvidia invested $2 billion in Marvell Technology as part of a strategic partnership for silicon photonics technology. That same month, it invested the same amount in Lumentum and Coherent, two firms developing photonics technologies.

Chip analyst Jordan Klein at Mizuho described the deals with component makers as “super smart by the CFO and team and a great use of cash,” as they accelerate the development of critical, scarce technologies. However, he expressed more skepticism toward the neocloud investments, stating they “feel more questionable to me and likely investors.”

“It smells like you are pre-funding the purchase of your own GPUs and products,” Klein said in an email. Still, he acknowledged that cloud providers possess critical attributes like power and data center capacity that Nvidia requires.

Ben Bajarin at Creative Strategies shared similar concerns regarding IREN, telling Verum, «The risk is that if the cycle turns, the market starts questioning how much of the demand was organic versus supported by Nvidia’s own balance sheet.»

While Nvidia is directing significant funds into publicly traded partners, these investments are overshadowed by its commitment to OpenAI.

Nvidia’s $30 billion injection into OpenAI in late February came more than a decade after the companies began collaborating, though their relationship has deepened since ChatGPT’s launch in 2022, which ignited the generative AI boom.

Nvidia’s initial investment in OpenAI was intended to be much larger. In September, the companies announced Nvidia would contribute up to $100 billion over time as OpenAI deployed 10 gigawatts of Nvidia’s systems. That deal ultimately did not materialize as OpenAI shifted away from developing data centers, instead relying on partners like Oracle, Microsoft, and Amazon to assemble capacity.

Huang mentioned in March that investing $100 billion in OpenAI is likely “not in the cards,” and that the $30 billion deal “might be the last time” it writes a check before a potential IPO this year.
