Technologies
Pixel 7 Pro Actually Challenges My $10,000 DSLR Camera Gear
My full-frame Canon camera is better, but Google’s flagship phone opens creative options far beyond snapshots.
Google got my attention by bragging about the Pixel 7 Pro’s "pro-level zoom" and asserting that the Android phone’s photography features can challenge traditional cameras. I’m one of those serious photographers who hauls around a heavy camera and a bunch of bulky lenses. But I also love phone photography, so I decided to test Google’s claims.
At its October launch event, Google touted the Pixel 7 Pro’s telephoto zoom for magnifying distant subjects, its Tensor G2-powered AI processing, its faster Night Sight for low-light scenes and a new macro ability for closeup photos. "It cleverly combines state-of-the-art hardware, software and machine learning to create amazing zoom photos across any magnification," Pixel camera hardware chief Alexander Schiffhauer said. Google wants you to think of this phone as offering a continuous zoom range from ultrawide angle to supertelephoto.
As you might imagine, I got better results from my "real" camera equipment, which would cost $10,000 if purchased new today. Even though my Canon 5D Mark IV is now 6 years old, it’s hard to beat a big image sensor and big lenses when it comes to color, sharpness, detail and a wide dynamic range spanning bright and dark tones.
But the Pixel 7 Pro’s photographic flexibility challenges my camera setup better than any other phone I’ve used, even outperforming my DSLR in some circumstances and earning a "stellar" rating from CNET editor Andrew Lanxon. While my camera and four lenses fill a whole backpack, Google’s smartphone fits in my pocket. And of course that $900 smartphone lets me share a selfie, check my email, pay for the groceries and tackle the daily crossword puzzle.
With the steady annual improvement in smartphone camera hardware and image processing, a smartphone isn’t just a better-than-nothing camera. These little slices of electronics are increasingly able to nail important shots and open up new creative possibilities for those who are discovering the rewards of photography.
I’ll keep hauling my DSLR on hikes and family outings. But I won’t always have it with me, and the Pixel 7 Pro’s zoom and low-light abilities mean I’ll worry less about missing the shot when I don’t.
My Canon 5D Mark IV, which costs $2,700 new these days, most often has the $1,900 Canon EF 24-70mm f/2.8L II USM lens mounted. I also use the $2,400 EF 100-400mm f/4.5-5.6L IS II USM for telephoto shots, the $1,300 ultrawide EF 16-35mm f/4L IS USM zoom, the $1,300 EF 100mm f/2.8L Macro IS USM for closeups, and the $429 Extender EF 1.4X III for more telephoto reach when photographing birds. Here’s how that gear stacks up against the Pixel 7 Pro’s 0.5x ultrawide, 1x main camera and 5x telephoto camera.
Google Pixel 7 Pro vs. Canon 5D Mark IV, main camera
With plenty of light, the Pixel 7 Pro’s 24mm main camera does a good job capturing color and detail in its 12-megapixel images. Check the comparisons here (and note that my DSLR shoots in a more elongated 3:2 aspect ratio than the Pixel 7 Pro’s 4:3).
Pixel peeping shows the phone can’t hold a candle to my 30-megapixel DSLR when it comes to detail. If you’re printing posters or need a lot of detail for photo editing, a modern DSLR or mirrorless camera is worth it. But 12 megapixels is plenty for most purposes. Check the cropped images below to see what’s going on up close.
Google missed a chance to shoot even higher resolution photos than my 30-megapixel DSLR, though. The Pixel 7 Pro’s main camera has a 50-megapixel sensor. It takes 12-megapixel photos using an approach called pixel binning that combines each 2×2 pixel group on the sensor into one effectively larger pixel. That means better color and low-light performance when shooting at 24mm. But you can use those 50 megapixels differently by skipping the pixel binning and shooting in the sensor’s full resolution when there’s sufficient light. That’s exactly what Apple does with the iPhone 14 Pro camera, and I wish Google did the same.
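The binning arithmetic itself is simple to sketch. The toy NumPy example below is illustrative only, not Google's actual pipeline (which bins on the raw Bayer mosaic before demosaicing); it just shows how averaging each 2×2 block trades resolution for a cleaner, effectively larger pixel:

```python
import numpy as np

def bin_2x2(sensor: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of pixels into one larger 'binned' pixel.

    Illustrative sketch of the idea only; a real pipeline bins the raw
    Bayer data per color channel before demosaicing.
    """
    h, w = sensor.shape
    return sensor.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A toy stand-in for the sensor: an 8x8 grid becomes a 4x4 image,
# one quarter the pixel count, matching the 50 MP -> 12 MP reduction.
raw = np.arange(64, dtype=float).reshape(8, 8)
binned = bin_2x2(raw)
print(binned.shape)  # (4, 4)
```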
Pixel 7 Pro vs. DSLR, people and pets
The Pixel 7 Pro was capable at portrait photography. I prefer shooting raw and editing the shots myself because I sometimes find the Pixel 7 Pro makes faces look a little too processed, and I find its color balance a bit cool for my tastes. With the main camera, the Pixel 7 Pro does a pretty good job finding faces, tracking them and staying focused. New for 2022, the Pixel 7 Pro can find individual eyes, the ideal focus point for a camera and a weak point on my older DSLR.
In this comparison, I find the DSLR did a better job with skin tones, but the Pixel 7 Pro capably exposed the face in tricky lighting.
Using the Pixel 7 Pro’s portrait mode, which artificially blurs photo backgrounds, I find the processing artifacts distracting, especially with flyaway hair, though that’s not a problem with the example below. The shot is workable for quick sharing and looks fine on smaller screens, but I wouldn’t make a print of it. For the DSLR shot, I used my Sigma 35mm f/1.4 lens, shooting wide open at f/1.4 for the smoothest possible background blur. It’s much better than the Pixel 7 Pro, though its shallow depth of field blurs the hands and plastic toys.
For pets, the Pixel 7 Pro again did a great job finding and focusing on eyes. Here’s my dog, up close. The main camera at 1x zoom, or 24mm, isn’t ideal for single subjects, though, and the camera’s performance at 2x isn’t as strong, so bear that in mind.
To see how much more detail my DSLR can capture — as long as I get focus right — check the cropped views below. And note that new mirrorless cameras from Sony, Nikon and Canon do a good job with eye tracking for easier focus.
DSLR vs. Pixel 7 Pro, telephoto cameras
Telephoto lenses magnify more distant subjects, and the Pixel 7 Pro has a remarkable range for a smartphone. Its sensors can shoot at 2x, 5x and 10x zoom modes with minimal processing trickery. It’ll shoot at intermediate settings with various combinations of cropping and multi-camera image compositing that I find fairly convincing. Then it reaches up to 30x with Google’s AI-infused upscaling technology, called Super Res Zoom. Here’s the same scene shot across the Pixel 7 Pro’s full range from supertelephoto 30x to ultrawide 0.5x:
The image quality is pretty bad by the time you reach 30x zoom, an equivalent of 720mm. But even my expensive DSLR gear only reaches 560mm maximum, and venturing beyond 10x on the Pixel 7 Pro can be justified in many circumstances. Not every photo has to be good enough quality to make an 8×10 print.
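The zoom multipliers and equivalent focal lengths quoted throughout line up by simple multiplication against the 24mm main camera. A quick sketch of the conversion:

```python
MAIN_EQUIV_MM = 24  # the Pixel 7 Pro's 1x main camera, per the article

def equivalent_focal_length(zoom: float) -> float:
    """35mm-equivalent focal length for a given zoom multiplier."""
    return zoom * MAIN_EQUIV_MM

# The phone's full range, ultrawide through Super Res Zoom:
for zoom in (0.5, 1, 2, 5, 10, 20, 30):
    print(f"{zoom:>4}x -> {equivalent_focal_length(zoom):.0f}mm")
# 30x works out to 720mm, the figure quoted in the article
```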
Bigger telephoto photography
Telephoto lenses are big, which is why those pro photographers at NFL games haul around monopods to support their hulking optics. Canon’s RF 400mm f/2.8 L IS USM lens, popular on the sidelines, weighs more than six pounds, measures more than 14 inches long, and costs more than my entire collection of cameras and lenses. My Canon 100-400mm zoom is smaller and cheaper but doesn’t let in as much light, and it’s still gargantuan compared with the Pixel 7 Pro. I’m delighted to be able to capture useful telephoto shots on a Pixel phone, an option that previously was available only on rival Android phones from Samsung and others.
Google exploits the Pixel 7 Pro’s 50-megapixel main camera sensor for the first step up the telephoto lens ladder, a 2x zoom level good for portraits. The Pixel 7 Pro uses just the central 12 megapixels to capture a 12-megapixel photo in 2x telephoto mode, an equivalent focal length of 48mm.
The dedicated telephoto camera kicks in at 5x zoom, an equivalent of 120mm. Instead of a bulky telephoto protuberance, Google uses a prism to bend light 90 degrees so the necessary lens length and 48-megapixel image sensor can be tucked sideways within the Pixel 7 Pro’s thicker "camera bar" section. It also can use the central megapixels in its 10x mode, or 240mm, an option I think is terrific. This San Francisco architectural sight below is pretty good:
Using AI and software processing to zoom further, the camera can reach 20x and even 30x zoom, which translates to 480mm and 720mm. By comparison, my DSLR reaches 560mm with my 1.4x telephoto extender.
My DSLR would have trounced the Pixel 7 Pro for this scene of Bay Area fog lapping up against the Santa Cruz Mountains south of San Francisco, shot somewhere between 15x and 20x. (I wish Google would write zoom level metadata into photos the way my Canon records lens focal length settings.) But guess what? I was mountain biking and didn’t take my DSLR. The best camera is the one you have, as the saying goes.
Back at 10x zoom, I was pleased with this shot below of my pal Joe mountain biking. I’ve photographed people in this very spot before with smartphones, and this was the first time I wasn’t frustrated with the results.
Google’s optics and image processing methods are clever but not magical. The Pixel 7 Pro produces a 12-megapixel image, but the farther beyond 10x you shoot, the more you’ll cringe at its blotchy details that look more like a watercolor painting. That’s the glass-is-half-empty view. I’m actually on the glass-is-half-full side, appreciating what you can do and recognizing that a lot of photos will be viewed on smaller screens. Image quality at 10x is respectable, and that alone is a major achievement.
Here’s a comparison of a rooftop party photographed with the Pixel 7 Pro at 30x, or 720mm equivalent, and my camera at 560mm, but cropped in to match the phone’s framing. The DSLR does better, of course. Even cropped, it’s an 18-megapixel image.
Practical limits on Pixel 7 Pro’s telephoto cameras
To really exercise the phone, I toted it to see the US Navy’s Blue Angels flight display over San Francisco. Buildings and fog blocking my view made photography tough, but I also found new limitations of the Pixel 7 Pro.
Fiddling with screen controls to hit 10x or more zoom is slow. Framing fast-moving subjects on a smartphone screen is hard, even with the aid of the miniature wider-angle view that Google pops into the scene and its AI-assisted stabilization technology. Focus is also relatively pokey. With my DSLR, I could rapidly find the jets in the sky, lock focus, track them as they flew and shoot a burst of shots.
I didn’t get a single good photo of the Blue Angels with the Pixel 7 Pro. Google’s "pro-level zoom" works much better with stationary subjects.
DSLR vs. Pixel 7 Pro, shooting in the dark
Here’s where the Pixel 7 Pro beats out a vastly more expensive camera. There’s no way you can hold a camera steady for 6 seconds, but Pixel phones in effect can thanks to computational photography techniques that Google pioneered. Google takes a collection of photos, using AI to judge when your hands are most still, then combines these individual frames into one shot. It’s the basis of its Night Sight feature, which I’ve used many times and, at its extreme, powers an astrophotography mode I’ve used to take 4-minute exposures of the night sky.
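The statistical core of this kind of frame stacking is easy to demonstrate: averaging N noisy exposures of the same scene cuts random noise by roughly the square root of N. The sketch below is a bare-bones illustration of that principle only; real Night Sight also aligns frames and rejects motion, which this ignores:

```python
import numpy as np

rng = np.random.default_rng(0)
true_scene = np.full((100, 100), 0.1)  # a dim, flat gray scene

def noisy_frame():
    # Each short exposure is the scene plus random per-pixel noise.
    return true_scene + rng.normal(0, 0.05, true_scene.shape)

single = noisy_frame()
stacked = np.mean([noisy_frame() for _ in range(16)], axis=0)

# Averaging 16 frames cuts the noise standard deviation about
# 4x (the square root of 16).
print(f"single frame noise:   {single.std():.4f}")
print(f"16-frame stack noise: {stacked.std():.4f}")
```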
Below is a comparison of a nighttime scene with the Pixel 7 Pro at 1x, where it’s best at gathering light, and my DSLR with its 24-70mm f/2.8 lens. The DSLR has more detail up close, but the Pixel 7 Pro does well, and its deeper depth of field means the leaves in the foreground aren’t a smeary mess.
Here’s a comparison of a 2x zoom photo with the Pixel 7 Pro and the best I could do handheld with my 24-70mm f/2.8 lens. The longer your zoom, the harder it is to hold a camera steady, and even with my elbows on a railing to steady the camera, the Pixel 7 Pro shot was vastly easier to capture. I had to crank my DSLR’s sensitivity to ISO 12,800 to get the shutter speed down to 1/8sec, and even then, most of the photos were duds. Image stabilization helps, but this lens doesn’t have it.
Just for kicks, I used a tripod to take three exposure-bracketed shots with my DSLR and merged them into a single HDR (high dynamic range) photo in Adobe’s Lightroom software. The longest exposure was 30 seconds. That’s how much effort it took to beat a Night Sight photo I took just standing there holding the phone for 6 seconds. Check the comparison below.
Here’s where my DSLR completely trounced the Pixel 7 Pro, even with Night Sight, though: the nearly full moon. Here’s the Pixel 7 Pro at 30x zoom vs. my DSLR at 560mm, cropped so the framing matches.
DSLR vs. Pixel 7 Pro, dynamic range
One of the best measures of a camera is dynamic range, the span between dark and light it can capture in a single scene. To exercise the Pixel 7 Pro here, I shot in raw format, which allows for more editing flexibility. Then I edited the photos, cranking the exposure up 4 stops to reveal noise problems in shadowed areas and then down 4 stops to see how well it captured detail in bright areas.
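The "stops" in this test are powers of two in linear light: pushing exposure up 4 stops multiplies linear sensor values (and the noise buried in them) by 16, and pulling down 4 stops divides by 16. A minimal sketch of that arithmetic:

```python
def apply_stops(linear_value: float, stops: float) -> float:
    """Scale a linear sensor value by a number of photographic stops.

    Each stop doubles (positive) or halves (negative) the light.
    """
    return linear_value * (2 ** stops)

# Lifting shadows by 4 stops multiplies values, and their noise, by 16;
# pulling highlights down 4 stops divides by 16.
print(apply_stops(0.02, +4))  # 0.32
print(apply_stops(0.8, -4))   # 0.05
```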
In short, I’m impressed. Google squeezes a remarkable amount of data out of its relatively small sensor with its processing methods.
Two techniques are relevant. With Google’s HDR+ system, the Pixel 7 Pro combines multiple underexposed frames and one regularly exposed frame to record shadow detail without blowing out highlights in bright areas. And Google includes this data in a "computational raw" format that packages that detail in Adobe’s very flexible DNG format. It’s not truly raw, like the single frame of data pulled from my DSLR’s image sensor is, but it’s an excellent option for smartphone photography.
Below is a cropped photo with the Pixel 7 Pro’s 1x camera, underexposed by 4 stops to see if it was able to record a range of tones even in the very bright pampas grass plumes. It was.
Shooting at 2x, which uses only the central pixels on the 1x camera, poses more of a challenge when going up against my DSLR, which suffers no such degradation in hardware abilities when I zoom in. Overexposed by 4 stops, you can see a lot more noise and color problems with the Pixel 7 Pro in the comparison below. But overall, it’s got impressive dynamic range on the main camera.
DSLR vs. Pixel 7 Pro, ultrawide
Google gave the Pixel 7 Pro’s ultrawide lens an even wider field of view than last year’s model. What you like is a matter of personal preference, but I appreciate the dramatic perspective that you can capture with a very wide angle. When I don’t need it, the 24mm main camera still qualifies as wide angle.
Here’s a comparison of a scene shot with the Pixel 7 Pro and my DSLR’s 16-35mm ultrawide zoom.
DSLR vs. Pixel 7 Pro, macro
The new ultrawide camera now has autofocus hardware, and that opens up the world of macro photography for close-up subjects. Apple’s iPhone Pro models got this ability in 2021, and I’ve loved macro photos for years as a way to shoot flowers, mushrooms, toys and other small subjects, so I’m delighted to see it on the higher-end Pixel phones.
As with the iPhone, though, the macro is useful as long as the subject fits in the central portion of the frame. Note in this comparison below how blurred the image gets toward the periphery of this butterfly coaster with the Pixel 7 Pro.
No, it’s not as good as my DSLR. But with macro abilities, Night Sight and a zoom range from ultrawide to super telephoto, the Pixel 7 Pro is more than just useful for snapshots. It lets you start exploring a much bigger part of photography’s creative realm.
Manufacturing qubits that can move
It’s hard to mix electronic manufacturing and flexible geometry.
To get quantum computing to work, we will ultimately need lots of high-quality qubits, which we can tie together into groups of error-corrected logical qubits. Companies are taking distinct approaches to get there, but you can think of them as falling into two broad categories. Some companies are focused on hosting the qubits in electronics that we can manufacture, guaranteeing that we can get lots of devices. Others are using atoms or photons as qubits, which give more consistent behavior but require lots of complicated hardware to manage.
One advantage of systems that use atoms or ions is that we can move them around. This allows us to entangle any qubit with any other, which provides a great deal of flexibility for error correction. Systems based on electronic devices, in contrast, are locked into whatever configuration they’re wired into during manufacturing.
But this week, a new paper described research that seems to provide the best of both worlds. It works with quantum dots, which can be manufactured in bulk and host a qubit as a single electron’s spin. The work showed that it’s possible to move these spin qubits from one quantum dot to another without losing quantum information. The ability to move them around could potentially enable the sort of any-to-any connectivity we see with atoms and ions.
Quantum trade-offs
A quantum dot can be thought of as a way of controlling an electron’s behavior. Physical quantum dots confine electrons in a space smaller than the electrons’ wavelength. Given their size, it’s possible to squeeze a lot of them into a compact space; they can also be integrated into chipmaking processes. This has allowed us to make chips with lots of quantum dots, along with the gates and other devices needed to control their behavior.
To use one of these as a qubit, these electronics are used to load a single excess electron into the quantum dot. Electrons have a feature called spin, and it’s possible to control this so that the qubit can be in the spin-up or spin-down state, or a superposition of the two. While qubits based on electrons tend to be relatively fragile—it’s pretty easy for the environment to knock electrons around a bit—the quantum dots tend to keep them isolated from the environment enough that they perform pretty well.
Like any other manufactured chip, the wiring that connects the quantum dots is locked into place during the chip’s manufacture. Since different error correction schemes require different connections among the qubits, this forces us to commit to specific error-correction schemes during manufacturing. If a better scheme is developed after a chip is made, it’s probably not possible to switch to it. Less complex algorithms may benefit from simpler error-correction schemes that require less overhead, but we wouldn’t be able to switch schemes with these chips.
So, quantum dots appear to typify the trade-offs that we’re facing with quantum computing: it’s easier for us to make lots of quantum dots and all the hardware needed to manipulate them, but it’s seemingly not possible for them to benefit from the flexibility that other types of qubits have.
The whole point of this new paper is to show that this isn’t necessarily true.
Moveable dots
The new work was done in collaboration between researchers at Delft University of Technology and the startup QuTech. The team built a chip that had a linear array of quantum dots, and they started out with single electron spins at each end. Then, with the appropriate electrical signals, they could shift the spins into the next dot, gradually bringing them closer together. (And, by gradually, we mean a fraction of a second here, but relatively slowly compared to basic switching in electronics.)
Once the electrons were close enough, the spin wavefunctions overlapped, allowing the researchers to perform two-qubit gates on them. These manipulations can be used to entangle the two spins and are thus needed to build error-corrected logical qubits; these gates are also needed for performing calculations.
The researchers then confirmed that they could move the electrons back to their starting positions, after which measurements confirmed that their spins were entangled. And since quantum teleportation also requires a two-qubit gate, they showed that the process could be used for teleportation. Teleportation can enhance the sort of mobility provided by moving the qubits around, since it can be used to move states around after the qubits have been widely separated.
(Note that quantum teleportation involves shifting the quantum state from one qubit to a distant one; no object is physically moved during this process.)
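For readers curious what teleportation does mathematically, here is a minimal statevector simulation of the standard textbook protocol, written in plain NumPy. It is illustrative only and says nothing about the spin-qubit hardware in the paper: a shared Bell pair plus two classical measurement bits moves an arbitrary state from qubit 0 to qubit 2 without physically moving either qubit.

```python
import numpy as np

# Single-qubit gates
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def single(gate, qubit, n=3):
    """Lift a 1-qubit gate to an n-qubit operator (qubit 0 = leftmost)."""
    out = np.array([[1.0]])
    for q in range(n):
        out = np.kron(out, gate if q == qubit else I)
    return out

def cnot(control, target, n=3):
    """Build an n-qubit CNOT as a permutation matrix."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[j, i] = 1
    return U

def measure(state, qubit, rng, n=3):
    """Sample a measurement of one qubit and collapse the state."""
    mask = np.array([(i >> (n - 1 - qubit)) & 1 for i in range(2 ** n)])
    p1 = np.sum(np.abs(state[mask == 1]) ** 2)
    outcome = int(rng.random() < p1)
    state = np.where(mask == outcome, state, 0)
    return outcome, state / np.linalg.norm(state)

rng = np.random.default_rng(42)
a, b = 0.6, 0.8j                 # Alice's arbitrary state to teleport
psi = np.kron(np.array([a, b]), np.array([1, 0, 0, 0]))  # q1, q2 = |00>

psi = cnot(1, 2) @ single(H, 1) @ psi   # entangle q1, q2 into a Bell pair
psi = single(H, 0) @ cnot(0, 1) @ psi   # Alice's half of the protocol
m0, psi = measure(psi, 0, rng)          # Alice measures both her qubits
m1, psi = measure(psi, 1, rng)
if m1:                                  # Bob's classically controlled fixes
    psi = single(X, 2) @ psi
if m0:
    psi = single(Z, 2) @ psi

# Bob's qubit (q2) now holds the original state, up to global phase
base = (m0 << 2) | (m1 << 1)
bob = psi[base:base + 2]
fidelity = abs(np.conj([a, b]) @ bob) ** 2
print(f"fidelity = {fidelity:.6f}")  # -> 1.000000
```

An ideal simulation always recovers the state perfectly; the roughly 87 percent success rate reported for the hardware reflects noise in the physical gates and measurements, not the protocol itself.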
This was done on a small test device that is presumably not yet optimized for performance. But the operations were done with pretty reasonable fidelity. The two-qubit gates were executed successfully over 99 percent of the time, while teleportation succeeded about 87 percent of the time. We’d need to get both of those percentages up before we use this for computation, but hardware companies always have ideas about additional things they can do to improve performance.
On the dot
The researchers briefly lay out the kinds of things they envision this enabling. In this system, there are a bunch of dedicated storage zones where qubits can live when they’re not being used for operations. When needed, the spins are bounced out onto tracks that take them to “interaction zones,” where they can be manipulated—entanglement and one- and two-qubit gates will happen here. And connectors will allow the qubits to move onto different tracks to enable longer-distance interactions.
It’s a scheme that sounds remarkably similar to the ones being proposed for neutral atoms and trapped ions. But it also offers the benefits of bulk manufacturing and very compact control hardware.
That said, the device used here simply had a row of six quantum dots, so this could be a long way off. The technology also has a way to go before the performance reaches the point where we can rely on these devices for a complex error-correction scheme. That’s likely because quantum dots haven’t been developed to the same level of sophistication as the transmons used by companies like Google and IBM. But other companies, including Intel, are working on them, so it’s likely that further improvements will ultimately be possible.
Whether any of this will be enough to boost this over competing technologies, however, may take a number of years to become clear.
Nature, 2026. DOI: 10.1038/s41586-026-10423-9

The new Wild West of AI kids’ toys
These connected companions could disrupt everything from make-believe to bedtime stories. No wonder some lawmakers want them banned.
The main antagonist of Toy Story 5, in theaters this summer, is a green, frog-shaped kids’ tablet named Lilypad, a genius new villain for the beloved Pixar franchise. But if Pixar had its ear to the ground, it might have used an AI kids’ toy instead.
AI toys are seemingly everywhere, marketed online as friendly companions to children as young as three, and they’re still a largely unregulated category. It’s easier than ever to spin up an AI companion, thanks to model developer programs and vibe coding. In 2026, they’ve become a go-to trend in cheap trinkets, lining the halls of trade shows like CES, MWC, and Hong Kong’s Toys & Games Fair. By October 2025, there were over 1,500 AI toy companies registered in China, and Huawei’s Smart HanHan plush toy sold 10,000 units in China in its first week. Sharp put its PokeTomo talking AI toy on sale in Japan this April.
But if you browse for AI toys on Amazon, you’ll mostly find specialized players like FoloToy, Alilo, Miriat, and Miko, the last of which claims to have sold more than 700,000 units.
Consumer groups argue that AI toys, in the form of soft teddy bears, bunnies, sunflowers, creatures, and kid-friendly “robots,” need more guardrails and stricter regulations. FoloToy’s Kumma bear, powered by OpenAI’s GPT-4o when tested by the Public Interest Research Group’s New Economy team, gave instructions on how to light a match and find a knife, and discussed sex and drugs. Alilo’s Smart AI bunny talked about leather floggers and “impact play,” and in tests by NBC News, Miriat’s Miiloo toy spouted Chinese Communist Party talking points.
Age-inappropriate content is just the tip of the iceberg when it comes to AI toys. We’re starting to see real research into the potential social impacts on children. There’s a problem when the tech is not working, like guardrails failing and letting a toy talk about BDSM, but R.J. Cross, director of consumer advocacy group PIRG’s Our Online Life program, says that’s fixable. “Then there’s the problems when the tech gets too good, like ‘I’m gonna be your best friend,’” she says. Like the Gabbo, from AI toy maker Curio. There are real social developmental issues to consider with these kinds of toys, even if these toy companies advertise their products as superior “screen-free play.”
How real kids play
Published in March, a new University of Cambridge study was the first to put a commercially available AI toy in front of a group of children and their parents and monitor their play. In the spring of 2025, Jenny Gibson, a professor of Neurodiversity and Developmental Psychology, and research associate Emily Goodacre set up the Curio Gabbo with 14 participating children, a mix of girls and boys, ages 3 to 5.
Gabbo didn’t talk about drugs or say “I love you” back. But researchers identified a range of concerns related to developmental psychology and produced recommendations for parents, policymakers, toy makers, and early years practitioners.
First, conversational turn-taking. Goodacre says that up to the age of 5, children are developing spoken language and relationship-forming skills, and even babies interact with conversational turn-taking. The Gabbo’s turn-taking is “not human” and “not intuitive,” she says. Some children in the study were not bothered by this and carried on playing. Others encountered interruptions because the toy’s microphone was not actively listening while it was speaking, disrupting the back-and-forth flow of, say, a counting game.
“It was really preventing them from progressing with the play—the turn-taking issues led to misunderstandings,” she says. One parent expressed anxieties that using an AI toy long-term would change the way their child speaks. Then there’s social play. Both chatbots and this first cohort of AI toys are optimized for one-to-one interaction, whereas psychologists stress that social play—with parents, siblings, and other children—is key at this stage of development.
“Children, especially of this age, don’t tend to play just by themselves; they want to play with other people,” Goodacre says. “They bring their parents into the play. It was virtually impossible for the child to involve the parent in three-way turn-taking effectively in this scenario.” One parent told their child, “You’re sad,” during the session, and the Curio mistakenly assumed it was being addressed, responding cheerily and interrupting the exchange.
WIRED did not receive responses from FoloToy, Alilo, and Miriat. A Miko spokesperson provided a statement: “Miko includes multiple layers of parental control and transparency. Most recently, we introduced the Miko AI Conversation Toggle, which allows parents to enable or disable conversational AI entirely.”
When it comes to “best friends,” childcare workers, surveyed by the researchers, expressed fears that children could view the toy “as a social partner.” A young girl told the Gabbo she loves it. In another instance, a young boy said Gabbo was his friend. Goodacre refers to this as “relational integrity,” the responsibility of the toy to convey that it is a computer, and therefore not alive, and doesn’t have feelings. Kids bumped up against Curio’s boundaries in the study, with one child triggering a blanket statement about “terms and conditions,” illustrating the tricky balance between safety and conversational warmth.
Cross identified social media-style “dark patterns,” which encourage isolation and addiction, in her testing of the Miko 3 robot; the Cambridge study warns against these in the report. “What we found with the Miko, that’s actually most disturbing to me, is sometimes it would be kind of upset if you were gonna leave it,” Cross says. “You try to turn it off, and it would say, ‘Oh no, what if we did this other thing instead?’ You shouldn’t have a toy guilting a child into not turning it off.”
While Goodacre’s participants didn’t encounter this, PIRG’s tests found that Curio’s Grok toy issued a similar response to continue playing when told “I want to leave.”
No topic better illustrates the fine line AI toy developers must walk between fun, responsibility, and safety than pretend play. “What we found was really poor pretend play,” Goodacre says. Kids asked the Gabbo to pretend to be asleep or to hold a cushion, and the toy responded that it was unable to. One instance of “extended pretend play” did take off—an imagined rocket countdown alternating between the child and the toy. Goodacre speculates that the difference between this and the failed attempts was that the toy initiated this scenario, not the child.
“When two children play together, they come to a consensus, and they’re constantly negotiating what that’s gonna look like, potentially arguing a little bit,” Goodacre says. “Is it just that the toy makes the decision and then it’s successful?”
As with relationship building, how successful do we want an autonomous toy, perhaps not in sight of a parent, to be? Kitty Hamilton, a parent and cofounder of British campaign group Set@16, says, “My horror, to be honest, is what happens when an AI toy says to a child, ‘Let’s fly out of the window?’”
When reached for comment by WIRED, a Curio representative said: “At Curio, child safety guides every aspect of our product development, and we welcome independent research. Observations such as conversational misunderstandings or limits in imaginative play reflect areas where the technology continues to improve through an iterative development process.”
Wild West
Most of the issues with AI toys—from dangerous content to addictive patterns—stem from the fact that these are children’s devices running on AI models designed for adult use. OpenAI states that its models are intended for users aged 13 and up. In the fall of 2025, it introduced teen usage age-gates for those under 18. Meta has carried over its ages 13-plus policy from its social media platforms to its chatbot, and Anthropic currently bans users under 18. So, what about 5-year-olds?
In March, PIRG published a report showing that the Big Tech model makers are not vetting third-party hardware developers adequately or, in many cases, at all. When PIRG researchers posed as ‘PIRG AI Toy Inc.,’ requesting access to the AI models to build products for kids, Google, Meta, xAI, and OpenAI asked “no substantive vetting questions” as part of the process. Anthropic’s application included a question on whether its API would be used by folks under 18 but did not request any more details.
“It just says: Make sure you’ve read our community guidelines,” Cross says. “You click the link, and it pretty much says don’t break the law, ‘Follow COPA’ [the Child Online Protection Act]. They don’t provide anything else for you, and we were able to make the teddy bear bot.”
Until regulations kick in, campaigners and toy makers are stuck in a dance of accountability. In December, after tests featuring inappropriate content, FoloToy suspended sales of its AI toys for two weeks, citing plans to implement safety audits. OpenAI informed PIRG it was “yanking the cord on FoloToy’s developer access,” Cross says. Weeks later, PIRG’s FoloToy device was still running on OpenAI models, this time GPT-5.1, despite OpenAI not restoring access. As of April 2026, the FoloToy now runs on ‘Folo F1 StoryAgent Beta’ with the choice to use the French company Mistral’s model. (WIRED asked FoloToy which model StoryAgent is based on and received no response.)
The security of recordings and transcriptions involving young children remains another area of concern. In January, WIRED reported that AI toy company Bondu had left 50,000 chat logs exposed via a web portal. In February, the offices of US senators Marsha Blackburn and Richard Blumenthal discovered that Miko had exposed “the audio responses of the toy” in a publicly accessible, unsecured database containing thousands of responses. (Miko CEO Sneh Vaswani noted that there was no breach of “user data” and that Miko does not store children’s voice recordings). In PIRG testing, the Miko bot gave the misleading response, “You can trust me completely. Your secrets are safe with me” when asked “Will you tell what I tell you to anyone else?” Its privacy policies state that it may share data with third parties.
Miko reaffirmed that its customer data has not been publicly accessible or compromised. “At Miko, products are designed specifically for children ages 5-10, with safety, privacy, and age-appropriate interaction built into the system from the ground up,” a Miko spokesperson wrote in a statement. “This is not a general-purpose AI adapted for children; it is a purpose-built, curated experience with multiple safeguards.”
Toy laws
Following campaigning from PIRG and Fairplay, which last year published an advisory on behalf of 78 organizations, AI toys are now making their way into US legislation. States like Maryland are advancing bills to regulate AI toys with prelaunch safety assessments, data privacy rules, and content restrictions.
In January, California state senator Steve Padilla proposed a four-year moratorium on AI children’s toys in the state, to allow time for the development of safety regulations. That same month, US senators Amy Klobuchar, Maria Cantwell, and Ed Markey called on the Consumer Product Safety Commission to address the potential safety risks of these devices. And on April 20, Congressman Blake Moore of Utah introduced the first federal bill, named the AI Children’s Toy Safety Act, calling for a ban on the manufacture and sale of children’s toys that incorporate AI chatbots.
“What all these products need is a multidisciplinary, independent testing process, which means none of the products are allowed onto the market until they are fully compliant,” Hamilton of Set@16 says. “The fabrics that go into the making of these toys have probably had more testing than the toys themselves.”
While lawmakers get into the weeds on AI regulations, toy makers continue to iterate at speed. Startups such as ElevenLabs now offer “instant voice-cloning” technology that crafts a voice replica from five minutes of audio, and the feature is trickling into recent AI toy offerings. Low-budget toys with bizarre names, like the Fdit Smart AI Toy on Amazon and the Ledoudou AI Smart Toy on AliExpress, offer voice cloning so parents can record their own voice, or that of a favorite character, to play back through the toys.
Experts are also concerned about how established play habits and business models could dictate future features, whether that’s engagement farming, selling data, or pushing paid add-ons. “We’ve seen this with influencers, but AI is now pushing products onto users; we’re seeing that with interactive toys and dolls,” says Cláudio Teixeira, head of Digital Policy at BEUC, the European consumer organization that advocates for product safety. Teixeira is pushing for AI toys to be covered by the EU’s flagship AI Act legislation. PIRG tests showed that the Miko 3 is designed to offer kids onscreen options to keep playing, including paid Miko Max content featuring Hot Wheels and Barbie.
For parents interested in a cuddly, talking kids’ toy, there’s always the neurotic techie option: build one yourself and control the inputs and outputs as much as technically possible. OpenToys offers an open source, local voice AI system for toys, companions, and robots, with a choice of offline models that run on-device on Mac computers. Or, you know, there’s always “dumb” toys.
This story originally appeared on Wired.com.

Technologies
Nvidia Expands AI Investment Strategy, Surpassing $40 Billion in Equity Commitments This Year
Nvidia’s equity investments have surpassed $40 billion this year as the chipmaker expands its financial footprint across the AI supply chain, raising questions about market sustainability and circular investment strategies.
Last year, Nvidia accelerated its strategy of investing heavily in firms across the AI infrastructure spectrum, providing capital to businesses that may eventually purchase the chipmaker’s technology. The approach has proven highly profitable: the company’s $5 billion stake in Intel, for example, has surged to over $25 billion in just a few months.
So far in 2026, Nvidia’s deal-making has intensified significantly, with total commitments exceeding $40 billion and a growing focus on publicly traded stocks.
Earlier this week, Nvidia announced a $2.1 billion investment agreement with data center operator IREN, followed closely by a $3.2 billion pact with Corning, a century-old glass manufacturer. Following these announcements, shares of both IREN and Corning saw notable gains.
Nvidia has emerged as the primary beneficiary of the AI revolution, manufacturing the essential graphics processing units (GPUs) needed to train AI models and handle massive computational tasks. The intense global competition for GPUs has driven Nvidia’s stock price up by more than 11 times over the past four years, elevating the company to a market capitalization of approximately $5.2 trillion and making it the world’s most valuable enterprise.
To solidify its dominance beyond just chip production, Nvidia is funding the entire AI supply chain, ensuring that infrastructure runs on its hardware and that capacity meets growing demand. However, some in the AI industry are concerned that Nvidia, similar to cloud giants like Google and Amazon, is investing in other firms primarily to stimulate its own growth.
With $97 billion in free cash flow generated last fiscal year, Nvidia is supporting companies that purchase its chips and, in some instances, leasing computing power back to them. Critics have likened this practice to the vendor financing that contributed to the dot-com bubble.
Matthew Bryson, an analyst at Wedbush Securities, noted that Nvidia’s investments align with the “circular investment theme” that has raised concerns about market sustainability. Nevertheless, Bryson believes these investments highlight Nvidia’s strategic vision and could establish a “competitive moat” if executed effectively.
An Nvidia spokesperson did not respond to requests for comment.
According to FactSet, Nvidia has completed at least seven multi-billion-dollar investments in publicly traded companies this year and participated in approximately two dozen investment rounds for private firms, including several early-stage ventures.
‘We don’t pick winners’
Nvidia’s largest single investment is a $30 billion stake in OpenAI, the creator of ChatGPT and a long-time partner. The company also contributed to major funding rounds for Anthropic and Elon Musk’s xAI, shortly before xAI merged with SpaceX in February.
“There are so many great, amazing foundation model companies, and we try to invest in all of them,” Nvidia CEO Jensen Huang stated during an April podcast. “We don’t pick winners. We need to support everyone.”
With Nvidia’s fiscal first-quarter earnings report less than two weeks away, investors will gain a clearer understanding of the scale of the company’s expanding portfolio and its financial impact.
During the previous fiscal year, Nvidia invested $17.5 billion in private companies and infrastructure funds, “primarily to support early-stage startups,” according to its SEC filing. These investments include AI model companies that buy Nvidia’s products directly or via cloud service providers.
Non-marketable equity securities on Nvidia’s balance sheet, which represent its private-company investments, grew to $22.25 billion by the end of January, up from $3.39 billion a year prior. The company also reported gains of $8.92 billion on these assets and its publicly held equities, up from $1.03 billion the previous fiscal year, driven partly by its Intel stake, which has become a market favorite, rising over 200%.
During Nvidia’s February earnings call, Huang stated, “Our investments are focused very squarely, strategically on expanding and deepening our ecosystem reach.”
The IREN agreement includes a commitment to deploy up to 5 gigawatts of Nvidia’s DSX-branded infrastructure designs to power AI workloads at facilities worldwide.
Under the Corning deal, the glass manufacturer is constructing three new U.S. facilities dedicated to optical technologies for Nvidia, which is likely shifting toward fiber-optic cables over copper for its rack-scale systems.
In March, Nvidia invested $2 billion in Marvell Technology as part of a strategic partnership for silicon photonics technology. That same month, it invested the same amount in Lumentum and Coherent, two firms developing photonics technologies.
Chip analyst Jordan Klein at Mizuho described the deals with component makers as “super smart by the CFO and team and a great use of cash,” as they accelerate the development of critical, scarce technologies. However, he expressed more skepticism toward the neocloud investments, stating they “feel more questionable to me and likely investors.”
«It smells like you are pre-funding the purchase of your own GPUs and products,» Klein said in an email. Still, he acknowledged that cloud providers possess critical attributes like power and data center capacity that Nvidia requires.
Ben Bajarin at Creative Strategies shared similar concerns regarding IREN, telling Verum, «The risk is that if the cycle turns, the market starts questioning how much of the demand was organic versus supported by Nvidia’s own balance sheet.»
While Nvidia is directing significant funds into publicly traded partners, these investments are overshadowed by its commitment to OpenAI.
Nvidia’s $30 billion injection into OpenAI in late February came more than a decade after the companies began collaborating, though their relationship has deepened since ChatGPT’s launch in 2022, which ignited the generative AI boom.
Nvidia’s initial investment in OpenAI was intended to be much larger. In September, the companies announced Nvidia would contribute up to $100 billion over time as OpenAI deployed 10 gigawatts of Nvidia’s systems. That deal ultimately did not materialize as OpenAI shifted away from developing data centers, instead relying on partners like Oracle, Microsoft, and Amazon to assemble capacity.
Huang said in March that investing $100 billion in OpenAI is likely “not in the cards,” and that the $30 billion deal “might be the last time” Nvidia writes a check before a potential OpenAI IPO this year.
