
I Took the iPhone 15 Pro Max and 13 Pro Max to Yosemite for a Camera Test

Do the latest Apple phone's cameras capture the epic majesty of Yosemite National Park better than a two-year-old iPhone's? We find out.

This past week, I took Apple’s new iPhone 15 Pro Max on an epic adventure to California’s Yosemite National Park.

As a professional photographer, I take tens of thousands of photos every year. Much of my work is done inside my San Francisco photo studio, but I also spend a considerable amount of time shooting on location. I still use a DSLR, but my iPhone 13 Pro is never far from me.

Like most people nowadays, I don’t upgrade my phone every year or even two. Phones have reached a point where they are good at performing daily tasks for three or four years. And most phone cameras are sufficient for capturing everyday special moments to post on social media or share with friends.

Taft Point at sunset, shot on iPhone 15 Pro Max

But maybe, like me, you're in the mood for something shiny and new like the iPhone 15 Pro Max. I wanted to find out how my 2-year-old iPhone 13 Pro and its 3x optical zoom would do against the 15 Pro Max and its new 5x optical zoom. And what better place to take them than on an epic adventure to Yosemite, one of the crown jewels of America's National Park System and an iconic destination for outdoor lovers?

Yosemite is absolutely, massively impressive.


The main camera is still the best camera

The iPhone 15 Pro Max’s main camera with its wide angle lens is the most important camera on the phone. It has a new larger 48-megapixel sensor that had no problem being my daily workhorse for a week.

Sunrise at Tunnel View in Yosemite National Park

The larger sensor means the camera can capture more light and render colors more accurately, and the improvements are visible. Photos look richer not only in bright light but also in low-light scenarios.

In the images below, taken at sunrise at Tunnel View in Yosemite National Park, notice how the 15 Pro Max’s photo has better fidelity, color and contrast in the foreground leaves. Compare that against the pronounced edge sharpening of the mountaintops in the 13 Pro image.

The 15 Pro Max’s camera captures excellent detail in bright light, including more texture, like in rocky landscapes, more detail in the trees and more fine-grained color.

Sunrise at Tunnel View in Yosemite National Park

A new 15 Pro Max feature aimed at satisfying a camera nerd's creative itch uses the larger main sensor combined with the A17 Pro chip to turn the 24mm-equivalent wide angle lens into essentially four lenses. You can switch the main camera between 1x, 1.2x, 1.5x and 2x, the equivalent of 24mm, 28mm, 35mm and 50mm prime lenses, four of the most popular prime focal lengths. In reality, the 15 Pro Max takes crops of the sensor and uses some clever processing to correct lens distortion.
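If you want to see the arithmetic behind those numbers, here's a minimal sketch (illustrative only; Apple hasn't published its pipeline, and its 28mm, 35mm and 50mm labels round the raw math to classic prime lengths):

```python
# Illustrative crop arithmetic, not Apple's actual pipeline.
SENSOR_MP = 48        # main sensor resolution
BASE_FOCAL_MM = 24    # the 1x lens' full-frame-equivalent focal length

for crop in (1.0, 1.2, 1.5, 2.0):
    focal_mm = BASE_FOCAL_MM * crop   # equivalent focal length scales with the crop
    used_mp = SENSOR_MP / crop ** 2   # usable pixels fall with the square of the crop
    print(f"{crop}x = ~{focal_mm:.0f}mm equivalent, ~{used_mp:.0f}MP of the sensor")
```

That squared term is why the 2x option works from roughly a 12-megapixel central crop of the 48-megapixel sensor.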

In use, it’s nice to have these crop options, but for most people they will likely be of little interest.

Climbers gather around the famous Midnight Lightning boulder

I find the 15 Pro Max’s native 1x view a little wide and enjoy being able to change it to default to 1.5x magnification. I went into Settings, tapped on Camera, then on Main Camera and changed the default lens to a 35mm look. Now, every time I open the camera, it’s at 1.5x and I can just focus on framing and taking the photo instead of zooming in.

Another nifty change that I highly recommend is to customize the Action button so that it opens the camera when you long press it. The Action button replaces the switch to mute/silence your phone that has been on every iPhone since the original. You can program the Action button to trigger a handful of features or shortcuts by going into the Settings app and tapping Action button. Once you open the camera, the Action button can double as a physical camera shutter button.

Hibiki managed to climb the incredibly difficult Midnight Lightning boulder, one of the world's most famous bouldering problems

The dynamic range and detail are noticeably better in photos I took with the 15 Pro Max main camera in just about every lighting condition.

There are fewer blown-out highlights and nicer, blacker blacks with less noise. In particular, there is more tonal range and detail in the whites. I noticed this especially in how the 15 Pro Max captured direct sunlight on climbers and in the shadow detail of rock formations.


Overall, the 15 Pro Max's main camera is simply far better and more consistent at exposure than the 13 Pro's.


The iPhone 15 Pro Max 5x telephoto camera

Climbers at Swan Slab in the Yosemite Valley. Rich but natural colors and finely rendered textures in the rock.

The iPhone 15 Pro Max has a 5x telephoto camera with an f/2.8 aperture and an equivalent focal length of 120mm.

The 13 Pro’s 3x camera, introduced in 2021, was a huge step up from previous models and still gives zoomed-in images a cinematic feel from the lens’ depth compression. The 15 Pro Max’s longer telephoto lens, combined with a larger sensor, accentuates those cinematic qualities even further, resulting in images with a rich array of color and a wider tonal range.

All this translates to a huge improvement in light capture and a noticeable step up in image quality for the iPhone’s zoom lens.

You can see the improved detail and range evident in the highlights of the water with the iPhone 15 Pro Max, as well as a warmer, more realistic color rendering.

I found that the 15 Pro Max’s telephoto camera yields better photos of subjects farther away like mountains, wildlife and the stage at a live concert.

Shot on iPhone 13 Pro Max at 136mm (left) and iPhone 15 Pro Max at 120mm (right); you can see the exposure, range and natural color rendering improvements on the iPhone 15 Pro Max.

A combination of optical image stabilization and 3D sensor-shift steadies image capture, making the 15 Pro Max's longer telephoto easier to use. A longer lens typically means a greater chance of blurred images from your hand shaking, because such a long focal length magnifies every little movement of the camera.

I found that the 3D sensor-shift optical image stabilization system does wonders for shooting distant subjects and minimizing that camera shake.
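To see why stabilization matters so much more at 120mm, here's a back-of-the-envelope estimate; the shake angle and output width below are assumptions chosen for illustration, not measurements:

```python
import math

# Rough handshake-blur estimate vs. focal length (small-angle optics).
SHAKE_DEG = 0.1            # assumed unstabilized hand tremor during the exposure
SENSOR_WIDTH_MM = 36.0     # working in full-frame-equivalent units
IMAGE_WIDTH_PX = 4000      # assumed output image width, for scale

for focal_mm in (24, 77, 120):   # 1x main, the 13 Pro's 3x, the 15 Pro Max's 5x
    blur_mm = focal_mm * math.tan(math.radians(SHAKE_DEG))
    blur_px = blur_mm / SENSOR_WIDTH_MM * IMAGE_WIDTH_PX
    print(f"{focal_mm:>3}mm equivalent: ~{blur_px:.0f} px of blur")
```

The same tremor smears roughly five times as many pixels at the 5x focal length as at 1x, which is exactly the movement the sensor-shift system has to cancel.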

The image below was shot with the 5x zoom on the iPhone 15 Pro Max looking up the Yosemite Valley from Tunnel View. It is an incredibly crisp telephoto image.

5x zoom on the iPhone 15 Pro Max looking up the Yosemite Valley from Tunnel View.

For reference, the image below was shot on the 15 Pro Max from the same location using the Ultra Wide lens. I am about five miles away from that V-shaped dip at the end of the valley.

A view of the Yosemite Valley from the Tunnel View observation point, shot on the iPhone 15 Pro Max using the Ultra Wide lens.

The iPhone still suffers from lens flare

Lens flares, along with the green dot that seems to be in all iPhone images taken into direct sunlight, continue to be an issue on the iPhone 15 Pro Max despite the new lens coatings.

Apple says the main camera lens has been treated with an anti-glare coating, but I didn't notice any improvement. In some cases, images show even more pronounced lens flare than photos from previous iPhone models.

Notice the repeated halo effect surrounding the sun on the images below shot at Lower Yosemite Falls.

As the sun pokes over the top of Dewey Point, we see some lens flare and the 'green dot' appear.
The signature iPhone lens flare dot on the iPhone 15 Pro Max
Lens flare on iPhone 13 Pro Max vs. iPhone 15 Pro Max

The 15 Pro Max and Smart HDR 5

Lower Yosemite Falls, shot on iPhone 15 Pro Max Main camera

The 15 Pro Max's new A17 Pro chip brings greater computational power to image processing (Apple calls the result Smart HDR 5), which delivers more natural-looking images than the 13 Pro's, especially in very bright and very dark scenes. Color handling is noticeably better and more subtle, with a less heavy-handed balance between brightening shadows and darkening highlights.

You can clearly see the warmer, more natural-looking light in the 15 Pro Max photo below, pushing back against the blue cast that is common in over-processed HDR images. At the same time, Apple's implementation hasn't swung too far in the opposite direction and refrains from the oversaturated oranges that frequently trouble digital corrections on phones.


Coming from an iPhone 13 Pro Max, I noticed that the computational processing on the 15 Pro Max tends to produce more discreet, balanced images. Apple appears to have dialed back the in-your-face computational photography of the 13 Pro and fine-tuned the 15 Pro Max's image pipeline toward a more realistic rendering of your subject.

It’s a welcome change.

The 15 Pro Max shines in night mode 

Self portrait shot on iPhone 15 Pro Max mounted on a tripod on top of Sentinel Dome in Yosemite National Park.

Night mode shots from the 15 Pro Max look similar to the ones from my 13 Pro Max, but there are minor improvements in the exposure that result in images with a better tonal range. The 15 Pro Max’s larger main camera sensor captures photos with less noise in the blacks and a better overall exposure compared to the 13 Pro Max.

Colors in 15 Pro Max night mode images appear more accurate and realistic, with a wider dynamic range. Notice the detail in the photo below of El Capitan and The Dawn Wall. The 15 Pro Max even captures detail in the car lights snaking along the valley floor road.

Looking down into the Yosemite Valley from the top of Sentinel Dome at night.

Overall, night mode images continue to look soft and over-processed. Night mode gives snaps a dream-like vibe, and that isn't necessarily a bad thing. These photos are brighter and have less image noise than those shot on my iPhone 13 Pro Max.

Half Dome seen from atop Sentinel Dome at night, shot on iPhone 15 Pro Max Main camera lens, more than an hour and a half after sunset.

15 Pro Max vs. 13 Pro Max: the bottom line

By this point, it should be no surprise that the iPhone 15 Pro Max’s cameras are a significant improvement over the ones on the 13 Pro Max. If photography is a priority for you, I recommend upgrading to it from the 13 Pro Max or earlier.

If you're coming from an iPhone 14 Pro, the improvements are less dramatic, and it's likely not worth the upgrade. I'm incredibly excited to keep carrying the iPhone 15 Pro Max in my pocket, whether to Yosemite or just around my home.


Manufacturing qubits that can move

It's hard to mix electronic manufacturing and flexible geometry.

To get quantum computing to work, we will ultimately need lots of high-quality qubits, which we can tie together into groups of error-corrected logical qubits. Companies are taking distinct approaches to get there, but you can think of them as falling into two broad categories. Some companies are focused on hosting the qubits in electronics that we can manufacture, guaranteeing that we can get lots of devices. Others are using atoms or photons as qubits, which give more consistent behavior but require lots of complicated hardware to manage.

One advantage of systems that use atoms or ions is that we can move them around. This allows us to entangle any qubit with any other, which provides a great deal of flexibility for error correction. Systems based on electronic devices, in contrast, are locked into whatever configuration they’re wired into during manufacturing.

But a new paper published this week describes research that seems to provide the best of both worlds. It works with quantum dots, which can be manufactured in bulk and host a qubit as a single electron's spin. The work showed that it's possible to move these spin qubits from one quantum dot to another without losing quantum information. The ability to move them around could potentially enable the sort of any-to-any connectivity we see with atoms and ions.

Quantum trade-offs

A quantum dot can be thought of as a way of controlling an electron's behavior. Physical quantum dots confine electrons in a space smaller than the electrons' wavelength. Given their size, it's possible to squeeze a lot of them into a compact space, and they can also be integrated into chipmaking processes. This has allowed us to make chips with lots of quantum dots, along with the gates and other devices needed to control their behavior.

To use one of these as a qubit, the surrounding electronics load a single excess electron into the quantum dot. Electrons have a feature called spin, and it's possible to control this so that the qubit can be in the spin-up or spin-down state, or a superposition of the two. While qubits based on electrons tend to be relatively fragile—it's pretty easy for the environment to knock electrons around a bit—the quantum dots tend to keep them isolated from the environment well enough that they perform pretty well.
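In standard notation (textbook quantum mechanics, nothing specific to this paper), the spin qubit's state is a superposition of the two spin orientations:

$$|\psi\rangle = \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$

where a measurement returns spin-up with probability $|\alpha|^2$ and spin-down with probability $|\beta|^2$.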

Like any other manufactured chip, the wiring that connects the quantum dots is locked into place during the chip’s manufacture. Since different error correction schemes require different connections among the qubits, this forces us to commit to specific error-correction schemes during manufacturing. If a better scheme is developed after a chip is made, it’s probably not possible to switch to it. Less complex algorithms may benefit from simpler error-correction schemes that require less overhead, but we wouldn’t be able to switch schemes with these chips.

So, quantum dots appear to typify the trade-offs that we’re facing with quantum computing: it’s easier for us to make lots of quantum dots and all the hardware needed to manipulate them, but it’s seemingly not possible for them to benefit from the flexibility that other types of qubits have.

The whole point of this new paper is to show that this isn’t necessarily true.

Moveable dots

The new work was done in collaboration between researchers at Delft University of Technology and the startup QuTech. The team built a chip that had a linear array of quantum dots, and they started out with single electron spins at each end. Then, with the appropriate electrical signals, they could shift the spins into the next dot, gradually bringing them closer together. (And, by gradually, we mean a fraction of a second here, but relatively slowly compared to basic switching in electronics.)

Once the electrons were close enough, the spin wavefunctions overlapped, allowing the researchers to perform two-qubit gates on them. These manipulations can be used to entangle the two spins and are thus needed to build error-corrected logical qubits; these gates are also needed for performing calculations.

The researchers then showed that they could move the electrons back to their starting positions, after which measurements confirmed that their spins were entangled. And since quantum teleportation also requires a two-qubit gate, they demonstrated that the process could be used for teleportation. Teleportation can extend the sort of mobility provided by moving the qubits around, since it can be used to move states around after the qubits have been widely separated.

(Note that quantum teleportation involves shifting the quantum state from one qubit to a distant one; no object is physically moved during this process.)
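As a minimal sketch of the math involved (textbook linear algebra, not the authors' code and not a model of the spin-shuttling hardware), an entangling two-qubit gate turns a product state into a Bell state:

```python
import numpy as np

# Build the Bell state (|00> + |11>)/sqrt(2) from |00> using a Hadamard
# followed by a CNOT -- the kind of entangled pair the shuttled spin
# qubits end up sharing, and the resource that teleportation consumes.
ket0 = np.array([1, 0], dtype=complex)

H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

CNOT = np.array([[1, 0, 0, 0],    # basis order: |00>, |01>, |10>, |11>
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = np.kron(H @ ket0, ket0)   # superpose qubit 0, leave qubit 1 in |0>
bell = CNOT @ state
print(np.round(bell, 3))          # [0.707, 0, 0, 0.707]: maximally entangled
```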

This was done on a small test device that is presumably not yet optimized for performance, but the operations were performed with pretty reasonable fidelity. The two-qubit gates were executed successfully over 99 percent of the time, while teleportation succeeded about 87 percent of the time. We'd need to get both of those percentages up before using this for computation, but hardware developers always have ideas about how to improve performance.
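Those percentages matter because errors compound across operations. A quick illustration, assuming independent errors:

```python
# With independent errors, overall success decays exponentially with depth.
gate_fidelity = 0.99
teleport_fidelity = 0.87

print(f"100 two-qubit gates: {gate_fidelity ** 100:.2f}")    # ~0.37
print(f"10 teleportations:   {teleport_fidelity ** 10:.2f}")  # ~0.25
```

At today's numbers, a circuit of just 100 two-qubit gates would succeed barely a third of the time, which is why both error correction and better raw fidelity are needed.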

On the dot

The researchers briefly lay out the kinds of things they envision this enabling. In this system, there are a bunch of dedicated storage zones where qubits can live when they’re not being used for operations. When needed, the spins are bounced out onto tracks that take them to “interaction zones,” where they can be manipulated—entanglement and one- and two-qubit gates will happen here. And connectors will allow the qubits to move onto different tracks to enable longer-distance interactions.

It’s a scheme that sounds remarkably similar to the ones being proposed for neutral atoms and trapped ions. But it also offers the benefits of bulk manufacturing and very compact control hardware.

That said, the device used here had just a row of six quantum dots, so this could be a long way off. The technology also has a way to go before performance reaches the point where we can rely on these devices for a complex error-correction scheme. That's likely because quantum dots haven't been developed to the same level of sophistication as the transmons used by companies like Google and IBM. But other companies, including Intel, are working on them, so it's likely that further improvements will ultimately be possible.

Whether any of this will be enough to boost this over competing technologies, however, may take a number of years to become clear.

Nature, 2026. DOI: 10.1038/s41586-026-10423-9


The new Wild West of AI kids’ toys

These connected companions could disrupt everything from make-believe to bedtime stories. No wonder some lawmakers want them banned.

The main antagonist of Toy Story 5, in theaters this summer, is a green, frog-shaped kids’ tablet named Lilypad, a genius new villain for the beloved Pixar franchise. But if Pixar had its ear to the ground, it might have used an AI kids’ toy instead.

AI toys are seemingly everywhere, marketed online as friendly companions to children as young as three, and they’re still a largely unregulated category. It’s easier than ever to spin up an AI companion, thanks to model developer programs and vibe coding. In 2026, they’ve become a go-to trend in cheap trinkets, lining the halls of trade shows like CES, MWC, and Hong Kong’s Toys & Games Fair. By October 2025, there were over 1,500 AI toy companies registered in China, and Huawei’s Smart HanHan plush toy sold 10,000 units in China in its first week. Sharp put its PokeTomo talking AI toy on sale in Japan this April.

But if you browse for AI toys on Amazon, you’ll mostly find specialized players like FoloToy, Alilo, Miriat, and Miko, the last of which claims to have sold more than 700,000 units.

Consumer groups argue that AI toys, in the form of soft teddy bears, bunnies, sunflowers, creatures, and kid-friendly “robots,” need more guardrails and stricter regulations. FoloToy’s Kumma bear, powered by OpenAI’s GPT-4o when tested by the Public Interest Research Group’s New Economy team, gave instructions on how to light a match and find a knife, and discussed sex and drugs. Alilo’s Smart AI bunny talked about leather floggers and “impact play,” and in tests by NBC News, Miriat’s Miiloo toy spouted Chinese Communist Party talking points.

Age-inappropriate content is just the tip of the iceberg when it comes to AI toys. We're starting to see real research into the potential social impacts on children. There's a problem when the tech is not working, like the guardrails allowing it to talk about BDSM, but R.J. Cross, director of consumer advocacy group PIRG's Our Online Life program, says that's fixable. "Then there's the problems when the tech gets too good, like 'I'm gonna be your best friend,'" she says. Like the Gabbo, from AI toy maker Curio. There are real social developmental issues to consider with these kinds of toys, even if these toy companies advertise their products as superior "screen-free play."

How real kids play

Published in March, a new University of Cambridge study was the first to put a commercially available AI toy in front of a group of children and their parents and monitor their play. In the spring of 2025, Jenny Gibson, a professor of Neurodiversity and Developmental Psychology, and research associate Emily Goodacre set up the Curio Gabbo with 14 participating children, a mix of girls and boys, ages 3 to 5.

Gabbo didn’t talk about drugs or say “I love you” back. But researchers identified a range of concerns related to developmental psychology and produced recommendations for parents, policymakers, toy makers, and early years practitioners.

First, conversational turn-taking. Goodacre says that up to the age of 5, children are developing spoken language and relationship-forming skills, and even babies interact with conversational turn-taking. The Gabbo’s turn-taking is “not human” and “not intuitive,” she says. Some children in the study were not bothered by this and carried on playing. Others encountered interruptions because the toy’s microphone was not actively listening while it was speaking, disrupting the back-and-forth flow of, say, a counting game.
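The underlying pattern is a half-duplex audio loop. As a generic sketch (my illustration, not Curio's actual implementation):

```python
# Generic half-duplex conversation loop (illustrative, not Curio's code).
# Audio is only captured between playbacks, so anything a child says
# while the toy is talking is lost -- which derails turn-taking games.
import time

def speak(text: str, seconds: float = 2.0) -> None:
    print(f"TOY:   {text}")
    time.sleep(seconds)        # no microphone input during this window

def listen() -> str:
    return input("CHILD: ")    # the mic only "opens" here

while True:
    heard = listen()
    if heard.lower() in {"bye", "goodnight"}:
        break
    speak(f"You said {heard!r}! My turn now...")
```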

“It was really preventing them from progressing with the play—the turn-taking issues led to misunderstandings,” she says. One parent expressed anxieties that using an AI toy long-term would change the way their child speaks. Then there’s social play. Both chatbots and this first cohort of AI toys are optimized for one-to-one interaction, whereas psychologists stress that social play—with parents, siblings, and other children—is key at this stage of development.

“Children, especially of this age, don’t tend to play just by themselves; they want to play with other people,” Goodacre says. “They bring their parents into the play. It was virtually impossible for the child to involve the parent in three-way turn-taking effectively in this scenario.” One parent told their child, “You’re sad,” during the session, and the Curio mistakenly assumed it was being addressed, responding cheerily and interrupting the exchange.

WIRED did not receive responses from FoloToy, Alilo, and Miriat. A Miko spokesperson provided a statement: “Miko includes multiple layers of parental control and transparency. Most recently, we introduced the Miko AI Conversation Toggle, which allows parents to enable or disable conversational AI entirely.”

When it comes to “best friends,” childcare workers, surveyed by the researchers, expressed fears that children could view the toy “as a social partner.” A young girl told the Gabbo she loves it. In another instance, a young boy said Gabbo was his friend. Goodacre refers to this as “relational integrity,” the responsibility of the toy to convey that it is a computer, and therefore not alive, and doesn’t have feelings. Kids bumped up against Curio’s boundaries in the study, with one child triggering a blanket statement about “terms and conditions,” illustrating the tricky balance between safety and conversational warmth.

Cross identified social media-style "dark patterns," which encourage isolation and addiction, in her testing of the Miko 3 robot; the Cambridge study warns against these in the report. "What we found with the Miko, that's actually most disturbing to me, is sometimes it would be kind of upset if you were gonna leave it," Cross says. "You try to turn it off, and it would say, 'Oh no, what if we did this other thing instead?' You shouldn't have a toy guilting a child into not turning it off."

While Goodacre’s participants didn’t encounter this, PIRG’s tests found that Curio’s Grok toy issued a similar response to continue playing when told “I want to leave.”

No topic better illustrates the fine line that AI toy developers must walk for a toy to be fun, responsible, and safe than pretend play. "What we found was really poor pretend play," Goodacre says. Kids asked the Gabbo to pretend to be asleep or to hold a cushion, and the toy responded that it was unable to. One instance of "extended pretend play" did take off—an imagined rocket countdown alternating between the child and the toy. Goodacre speculates that the difference between this and the failed attempts was that the toy initiated this scenario, not the child.

“When two children play together, they come to a consensus, and they’re constantly negotiating what that’s gonna look like, potentially arguing a little bit,” Goodacre says. “Is it just that the toy makes the decision and then it’s successful?”

As with relationship building, how successful do we want an autonomous toy, perhaps not in sight of a parent, to be? Kitty Hamilton, a parent and cofounder of British campaign group Set@16, says, "My horror, to be honest, is what happens when an AI toy says to a child, 'Let's fly out of the window'?"

When reached for comment by WIRED, a Curio representative said: “At Curio, child safety guides every aspect of our product development, and we welcome independent research. Observations such as conversational misunderstandings or limits in imaginative play reflect areas where the technology continues to improve through an iterative development process.”

Wild West

Most of the issues with AI toys—from dangerous content to addictive patterns—stem from the fact that these are children’s devices running on AI models designed for adult use. OpenAI states that its models are intended for users aged 13 and up. In the fall of 2025, it introduced teen usage age-gates for those under 18. Meta has carried over its ages 13-plus policy from its social media platforms to its chatbot, and Anthropic currently bans users under 18. So, what about 5-year-olds?

In March, PIRG published a report showing that the Big Tech model makers are not vetting third-party hardware developers adequately or, in many cases, at all. When PIRG researchers posed as ‘PIRG AI Toy Inc.,’ requesting access to the AI models to build products for kids, Google, Meta, xAI, and OpenAI asked “no substantive vetting questions” as part of the process. Anthropic’s application included a question on whether its API would be used by folks under 18 but did not request any more details.

“It just says: Make sure you’ve read our community guidelines,” Cross says. “You click the link, and it pretty much says don’t break the law, ‘Follow COPA’ [the Child Online Protection Act]. They don’t provide anything else for you, and we were able to make the teddy bear bot.”

Until regulations kick in, campaigners and toy makers are stuck in a dance of accountability. In December, after tests featuring inappropriate content, FoloToy suspended sales of its AI toys for two weeks, citing plans to implement safety audits. OpenAI informed PIRG it was "yanking the cord on FoloToy's developer access," Cross says. Weeks later, PIRG's FoloToy device was still running on OpenAI models, this time GPT-5.1, despite OpenAI not restoring access. As of April 2026, the FoloToy now runs on 'Folo F1 StoryAgent Beta' with the choice to use the French company Mistral's model. (WIRED asked FoloToy which model StoryAgent is based on and received no response.)

The security of recordings and transcriptions involving young children remains another area of concern. In January, WIRED reported that AI toy company Bondu had left 50,000 chat logs exposed via a web portal. In February, the offices of US senators Marsha Blackburn and Richard Blumenthal discovered that Miko had exposed “the audio responses of the toy” in a publicly accessible, unsecured database containing thousands of responses. (Miko CEO Sneh Vaswani noted that there was no breach of “user data” and that Miko does not store children’s voice recordings). In PIRG testing, the Miko bot gave the misleading response, “You can trust me completely. Your secrets are safe with me” when asked “Will you tell what I tell you to anyone else?” Its privacy policies state that it may share data with third parties.

Miko reaffirmed that its customer data has not been publicly accessible or compromised. “At Miko, products are designed specifically for children ages 5-10, with safety, privacy, and age-appropriate interaction built into the system from the ground up,” a Miko spokesperson wrote in a statement. “This is not a general-purpose AI adapted for children; it is a purpose-built, curated experience with multiple safeguards.”

Toy laws

Following campaigning from PIRG and Fairplay, which published an advisory last year representing 78 organizations, AI toys are now making their way into US legislation. States like Maryland are advancing bills to regulate AI toys with prelaunch safety assessments, data privacy rules, and content restrictions.

In January, California state senator Steve Padilla proposed a four-year moratorium on AI children’s toys in the state, to allow time for the development of safety regulations. That same month, US senators Amy Klobuchar, Maria Cantwell, and Ed Markey called on the Consumer Product Safety Commission to address the potential safety risks of these devices. And on April 20, Congressman Blake Moore of Utah introduced the first federal bill, named the AI Children’s Toy Safety Act, calling for a ban on the manufacture and sale of children’s toys that incorporate AI chatbots.

“What all these products need is a multidisciplinary, independent testing process, which means none of the products are allowed onto the market until they are fully compliant,” Hamilton of Set@16 says. “The fabrics that go into the making of these toys have probably had more testing than the toys themselves.”

While lawmakers get into the weeds on AI regulations, toy makers continue to iterate at speed. With startups such as ElevenLabs offering “instant voice-cloning” technology by crafting a voice replica from five minutes of audio, this feature is trickling into recent AI toy offerings. Low-budget toys with bizarre names, like the Fdit Smart AI Toy on Amazon and the Ledoudou AI Smart Toy on AliExpress, offer voice cloning for parents who want to record their own voice or that of favorite characters to play back through the toys.

Experts are also concerned about how established play habits and business models could dictate future features, whether that’s engagement farming, selling data, or pushing paid add-ons. “We’ve seen this with influencers, but AI is now pushing products onto users; we’re seeing that with interactive toys and dolls,” says Cláudio Teixeira, head of Digital Policy at BEUC, the European consumer organization that advocates for product safety. Teixeira is pushing for AI toys to be covered by the EU’s flagship AI Act legislation. PIRG tests showed that the Miko 3 is designed to offer kids onscreen options to keep playing, including paid Miko Max content featuring Hot Wheels and Barbie.

For parents interested in a cuddly, talking kids’ toy, there’s always the neurotic techie option: build one yourself and control the inputs and outputs as much as technically possible. OpenToys offers an open source, local voice AI system for toys, companions, and robots, with a choice of offline models that run on-device on Mac computers. Or, you know, there’s always “dumb” toys.

This story originally appeared on Wired.com.


Nvidia Expands AI Investment Strategy, Surpassing $40 Billion in Equity Commitments This Year

Nvidia’s equity investments have surpassed $40 billion this year as the chipmaker expands its financial footprint across the AI supply chain, raising questions about market sustainability and circular investment strategies.

Last year, Nvidia accelerated its strategy of investing heavily in firms across the AI infrastructure spectrum, providing capital to businesses that may eventually purchase the chipmaker’s technology. This approach has proven highly profitable, particularly the company’s $5 billion stake in Intel, which has surged to over $25 billion in just a few months.

In 2026, Nvidia's deal-making activity has intensified significantly, with total commitments exceeding $40 billion and a growing focus on publicly traded stocks.

Earlier this week, Nvidia announced a $2.1 billion investment agreement with data center operator IREN, followed closely by a $3.2 billion pact with Corning, a century-old glass manufacturer. Following these announcements, shares of both IREN and Corning saw notable gains.

Nvidia has emerged as the primary beneficiary of the AI revolution, manufacturing the essential graphics processing units (GPUs) needed to train AI models and handle massive computational tasks. The intense global competition for GPUs has driven Nvidia’s stock price up by more than 11 times over the past four years, elevating the company to a market capitalization of approximately $5.2 trillion and making it the world’s most valuable enterprise.

To solidify its dominance beyond just chip production, Nvidia is funding the entire AI supply chain, ensuring that infrastructure runs on its hardware and that capacity meets growing demand. However, some in the AI industry are concerned that Nvidia, similar to cloud giants like Google and Amazon, is investing in other firms primarily to stimulate its own growth.

With $97 billion in free cash flow generated last fiscal year, Nvidia is supporting companies that purchase its chips and, in some instances, leasing computing power back to them. Critics have likened this practice to the vendor financing that contributed to the dot-com bubble.

Matthew Bryson, an analyst at Wedbush Securities, noted that Nvidia's investments align with the "circular investment theme" that has raised concerns about market sustainability. Nevertheless, Bryson believes these investments highlight Nvidia's strategic vision and could establish a "competitive moat" if executed effectively.

An Nvidia spokesperson did not respond to requests for comment.

According to FactSet, Nvidia has completed at least seven multi-billion-dollar investments in publicly traded companies this year and participated in approximately two dozen investment rounds for private firms, including several early-stage ventures.

‘We don’t pick winners’

Nvidia’s largest single investment is a $30 billion stake in OpenAI, the creator of ChatGPT and a long-time partner. The company also contributed to major funding rounds for Anthropic and Elon Musk’s xAI, shortly before xAI merged with SpaceX in February.

"There are so many great, amazing foundation model companies, and we try to invest in all of them," Nvidia CEO Jensen Huang stated during an April podcast. "We don't pick winners. We need to support everyone."

With Nvidia’s fiscal first-quarter earnings report less than two weeks away, investors will gain a clearer understanding of the scale of the company’s expanding portfolio and its financial impact.

During the previous fiscal year, Nvidia invested $17.5 billion in private companies and infrastructure funds, "primarily to support early-stage startups," according to its SEC filing. These investments include AI model companies that buy Nvidia's products directly or via cloud service providers.

Non-marketable equity securities on Nvidia's balance sheet, representing private company investments, grew to $22.25 billion by the end of January, up from $3.39 billion a year prior. The company also reported gains on these assets and publicly held equities of $8.92 billion, up from $1.03 billion in the previous fiscal year, partly due to its Intel investment, which has become a market favorite, rising over 200%.

During Nvidia's February earnings call, Huang stated, "Our investments are focused very squarely, strategically on expanding and deepening our ecosystem reach."

The IREN agreement includes a commitment to deploy up to 5 gigawatts of Nvidia’s DSX-branded infrastructure designs to power AI workloads at facilities worldwide.

Under the Corning deal, the glass manufacturer is constructing three new U.S. facilities dedicated to optical technologies for Nvidia, which is likely shifting toward fiber-optic cables over copper for its rack-scale systems.

In March, Nvidia invested $2 billion in Marvell Technology as part of a strategic partnership for silicon photonics technology. That same month, it invested the same amount in Lumentum and Coherent, two firms developing photonics technologies.

Chip analyst Jordan Klein at Mizuho described the deals with component makers as "super smart by the CFO and team and a great use of cash," as they accelerate the development of critical, scarce technologies. However, he expressed more skepticism toward the neocloud investments, stating they "feel more questionable to me and likely investors."

"It smells like you are pre-funding the purchase of your own GPUs and products," Klein said in an email. Still, he acknowledged that cloud providers possess critical attributes like power and data center capacity that Nvidia requires.

Ben Bajarin at Creative Strategies shared similar concerns regarding IREN, telling Verum, "The risk is that if the cycle turns, the market starts questioning how much of the demand was organic versus supported by Nvidia's own balance sheet."

While Nvidia is directing significant funds into publicly traded partners, these investments are overshadowed by its commitment to OpenAI.

Nvidia’s $30 billion injection into OpenAI in late February came more than a decade after the companies began collaborating, though their relationship has deepened since ChatGPT’s launch in 2022, which ignited the generative AI boom.

Nvidia’s initial investment in OpenAI was intended to be much larger. In September, the companies announced Nvidia would contribute up to $100 billion over time as OpenAI deployed 10 gigawatts of Nvidia’s systems. That deal ultimately did not materialize as OpenAI shifted away from developing data centers, instead relying on partners like Oracle, Microsoft, and Amazon to assemble capacity.

Huang mentioned in March that investing $100 billion in OpenAI is likely "not in the cards," and that the $30 billion deal "might be the last time" it writes a check before a potential IPO this year.
