
Technologies

Starlink Will Be Moving Thousands of Its Satellites for ‘Space Safety’ Reasons

Earth’s orbit is congested with satellites, making collisions and disruptions more and more likely.

Starlink said it will reduce the altitude of thousands of its internet-beaming satellites following a mishap with one and a near collision with others, a vivid reminder of how crowded — and dangerous — Earth’s orbit has become.

In a New Year’s Day post on X, Starlink engineering vice president Michael Nicholls said the company would begin “a significant reconfiguration of its satellite constellation” and lower the orbit of approximately 4,400 satellites, or nearly half its total of more than 9,000, from their current altitude of about 342 miles to about 298 miles.




“Lowering the satellites results in condensing Starlink orbits, and will increase space safety in several ways,” Nicholls said, including by “reducing the aggregate likelihood of collision.”

A representative for Starlink did not immediately respond to CNET’s request for comment.

Satellite internet has become an increasingly attractive alternative to terrestrial options for broadband access such as cable, fiber and DSL, especially in rural areas. It is most closely identified with Starlink, a subsidiary of Elon Musk’s SpaceX, but other providers include Hughesnet and Viasat. Those satellites are typically in low Earth orbit, in contrast with those like GPS satellites that are thousands of miles from the ground.

In early December, a Starlink satellite came within roughly 200 meters (656 feet) of a Chinese satellite, Nicholls posted on X on Dec. 12. He said the Chinese satellite was one of nine deployed days earlier and blamed a “lack of coordination between satellite operators,” citing negligence by the operators at the Jiuquan Satellite Launch Center in northwestern China prior to the deployment of those nine satellites. “This needs to change,” he said in the post.

There are nearly 12,000 active satellites in orbit and thousands more that have stopped working. That number is expected to rise rapidly as SpaceX continues sending up Starlink satellites and as rival internet constellations get built out by projects such as Amazon Leo (formerly Project Kuiper) and China’s “Thousand Sails.”

Starlink’s announcement this week comes two weeks after one of its satellites “experienced an anomaly” and began “tumbling” toward Earth from its altitude of 260 miles. The company said the object will disintegrate when it reenters Earth’s atmosphere and does not pose a danger to the International Space Station, which also flies in low Earth orbit.


In his X post this week, Nicholls also pointed to “solar minimum” as another reason to reduce the orbital altitude of its satellites. Solar minimum is the period when there is the least solar activity — such as sunspots and solar flares — during the sun’s 11-year cycle. During this phase, satellites can last longer in space because there is less atmospheric density and thus less drag on the vehicle. But that also means more congestion for a longer span of time.

Nicholls said that the satellites’ “ballistic decay time” — that is, how long it would take an unpowered satellite to fall out of orbit under atmospheric drag — will decrease from 4-plus years to just a few months.
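As a rough illustration of why a 44-mile drop shortens decay times so dramatically — this is a back-of-envelope sketch, not Starlink’s own math, and the scale height below is an assumed round number for these altitudes:

```python
import math

# Back-of-envelope estimate of how much denser the atmosphere is at the
# new Starlink altitude vs. the old one, assuming a simple exponential
# atmosphere. The scale height H is an assumption for the 480-550 km
# regime; real thermospheric density varies strongly with solar activity.
MILES_TO_KM = 1.609344

old_alt_km = 342 * MILES_TO_KM   # ~550 km
new_alt_km = 298 * MILES_TO_KM   # ~480 km
scale_height_km = 60.0           # assumed thermospheric scale height

# Density (and hence drag on a dead satellite) grows roughly
# exponentially as altitude drops, so decay accelerates accordingly.
density_ratio = math.exp((old_alt_km - new_alt_km) / scale_height_km)
print(f"old altitude: {old_alt_km:.0f} km, new altitude: {new_alt_km:.0f} km")
print(f"approximate drag increase: {density_ratio:.1f}x")
```

With these assumed numbers, drag increases by a factor of a few, which points in the same direction as the much shorter decay times Nicholls describes; the true figure depends heavily on where the sun is in its cycle.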


Instagram Chief Says AI Images Are Evolving Fast and He’s Worried About Us Keeping Up

We need a whole new approach to “credibility signals” so we know who to trust, says Adam Mosseri.

In a 2025 year-end post, Instagram chief Adam Mosseri addressed the massive shifts AI is causing in photography, stressing that authenticity will be harder and harder to come by — and offering thoughts on how creators, camera makers and Instagram itself will need to adapt.

“The key risk Instagram faces is that, as the world changes more quickly, the platform fails to keep up. Looking forward to 2026, one major shift: authenticity is becoming infinitely reproducible,” Mosseri wrote in the post, which took the form of 20 text slides — no images at all. (He also posted a somewhat expanded version on Threads.)




Mosseri said that AI is making it impossible to distinguish real photos from AI-generated images and that as more “savvy creators are leaning into unproduced, unflattering images,” AI itself will follow with images that lean into that “raw aesthetic” as well. That will force us, he said, to change how we approach images from the jump.

“At that point we’ll need to shift our focus to who says something instead of what is being said,” Mosseri said. But it will take us “years to adapt” and to get away from assuming that what we see is real. “This will be uncomfortable — we’re genetically predisposed to believing our eyes.”

On the technical side, Mosseri predicted that makers of camera equipment will begin offering ways to cryptographically sign photos to establish a chain of ownership, proving that images aren’t AI generated.
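The idea Mosseri describes can be sketched in a few lines. This is a toy illustration, not any camera maker’s actual scheme: real provenance systems (such as the C2PA standard) use asymmetric keys so anyone can verify a signature, whereas the HMAC and the hypothetical camera key below are stdlib stand-ins that only show the tamper-evidence concept.

```python
import hashlib
import hmac

# Hypothetical secret a camera might hold in hardware; real systems
# would use an asymmetric private key instead of a shared secret.
CAMERA_KEY = b"secret-key-burned-into-camera-hw"

def sign_at_capture(image_bytes: bytes) -> str:
    """Camera computes a signature over the image the moment it is taken."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_later(image_bytes: bytes, signature: str) -> bool:
    """A platform re-derives the signature; any pixel change breaks it."""
    expected = hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

photo = b"\x89PNG...raw sensor data..."
sig = sign_at_capture(photo)
print(verify_later(photo, sig))              # True: untouched original
print(verify_later(photo + b"edit", sig))    # False: image was altered
```

The point is the asymmetry Mosseri is after: a valid signature proves the pixels came out of a real sensor unmodified, while AI-generated or edited images simply have no valid signature to show.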

He also warned that those camera makers are going in the wrong direction by offering ways to help amateur photographers create polished images. “They’re competing to make everyone look like a pro photographer from 2015,” Mosseri said. “Flattering imagery is cheap to produce and boring to consume. People want content that feels real.”

Instagram and the need to “surface credibility signals”

Instagram is owned by Meta, which also owns Facebook and WhatsApp, and like those platforms, Instagram added AI features in 2025. It also surprised some users who saw AI versions of themselves popping up in ads. Like other platforms, Instagram has struggled with a flood of AI-generated content, including slop that crowds out content from humans.

Just look at the powerful AI image and video generators that emerged in 2025, from Google’s Nano Banana to OpenAI’s Sora.

In his posts, Mosseri said he hopes that the struggle to figure out what’s fake and what’s real will be addressed by labeling “real media” and rewarding originality in how that content is ranked.

Mosseri concluded by listing steps that Instagram will have to take, driven by a need to “surface credibility signals about who’s posting so people can decide who to trust.”

  • Build tools, both traditional and AI-driven, to help creators compete with fully AI-created content.
  • Label AI-generated content clearly.
  • Work with manufacturers to “verify authenticity at capture — fingerprinting real media, not just chasing fake.”
  • Improve ranking for originality.

“Instagram is going to have to evolve in a number of ways,” he said, “and fast.”



AI Became a Bogeyman to Gamers in 2025, but Developers Are Mixed on Its Potential

The spread of generative AI has become background radiation, pitting players against studios and leaving its role uncertain.

As the games industry has been riddled with layoffs and studio closures in recent years, another shadow emerged in 2025: generative AI, which made its way into the game development pipeline. 

Last March, I attended the Game Developers Conference in San Francisco, California, dashing between the wings of the Moscone Center to hear how the games industry was incorporating generative AI. The technology could be applied to generate code, text or images, yet there seemed to be no consensus on what it should be used for. From panels of cautiously optimistic executives to roundtables of freelance developers concerned with securing steady employment, the conference was flooded with a range of views on AI, despite the limited evidence of its use in game development.

By the end of 2025, the issue had spiked, grabbing the attention of gamers everywhere, as developers opened up about the ways they’ve used generative AI to make games — which, as far as we know, has still been minimal. On social media, numerous unfounded accusations have been made against games for using AI-generated art and text. The technology has become a bogeyman for gamers.

When actual proof of AI in a game is revealed, the consequences can be serious. After it came to light that AI-made placeholder assets were included in the launch of JRPG Clair Obscur: Expedition 33 (even though they were swiftly patched out), the Indie Game Awards rescinded two awards for the much-lauded game. And when Swen Vincke, founder and game director of Larian Studios (Baldur’s Gate 3), announced that generative AI was being used to create concept art and placeholder text for its next game, it sparked backlash, according to the video game news and reviews site IGN. 

What’s changed? Awareness, certainly. Throughout the year, AI has been like background radiation, bumming out gamers in other aspects of their lives, spreading through software, exacerbating climate issues, increasing misinformation with falsified images and spiking PC RAM prices. It makes sense that gamers would be suspicious of the use of generative AI in the games they play, especially given that the technology is often trained on datasets of art and writing without the consent of creators.

Lack of transparency is also sparking concern. Companies aren’t disclosing the amount, if any, of generative AI used. It’s common practice for studios to stay quiet during game development, sometimes releasing snippets of behind-the-scenes footage on social media or YouTube to build hype. But opacity only intensifies the furor among fans if news about the use of generative AI then becomes public. Besides, there isn’t an agreed-upon standard on where to use generative AI, how much is appropriate and whether game-makers are obliged to disclose when they’ve used it.

How gen AI’s promises pitted players against studios 

GDC, an annual conference that has been running since 1988, has long been a hub for discussions and sessions on AI. In the past, you’d mostly hear about topics such as computer-controlled character behavior and the use of machine learning. Some of that remains, but much of AI’s presence at GDC has moved on to generative AI. 

Despite the skepticism surrounding the technology, I’ve seen ideas for what it could offer players in the future. GDC 2024 was brimming with possibilities for generative AI in gaming, and GDC 2025 took it to the next level, demonstrating prototype technology to attendees. From the moment the doors opened at the Moscone Center, it was all about promoting the current and near-future applications of generative AI in both game production and tools for players.

Xbox executives Fatima Kardar and Sonali Yadav, corporate vice president of gaming AI at Microsoft and partner group product manager, respectively, gave an overview of their plans to use Microsoft’s Copilot, an AI-powered assistant, to support Xbox gamers during play. It felt much like a pitch for other smart assistants. They proposed ways it could guide new players or provide customized advice to more experienced players, offering the example of suggesting hero choices and post-death tips in Overwatch. (This Copilot on Xbox functionality launched in beta back in September.) 

They also emphasized their responsibility to players when deploying the assistant. “We want to make sure that, as AI shows up in their experiences, those experiences add value and make the gaming more powerful an experience, yet keep games at the front and center of it,” Kardar said. “It needs to make sure gamers are having more fun.”

Accessory-maker Razer also showcased its own AI-powered in-game assistant at GDC. The abundance of gaming guides online, including those on YouTube, suggests that gamers would be receptive to such guidance, even if they might initially resist it. At this point, however, there haven’t been enough titles that incorporate in-game assistance to gauge player reaction. 

Instead, the wider gaming community’s exposure to generative AI in games has been discovering, after release, that the technology was used but not divulged. For example, 11 Bit Studios, which developed the sci-fi base-builder The Alters, apologized in June for not disclosing its use of AI in development (players discovered AI-generated text prompts in the released version of the game). 

Embark, the studio behind extraction shooter Arc Raiders, pushed back against accusations that it used generative AI, telling PCGamesN that machine learning handled movement for the game’s multilegged robots. On the game’s Steam page, the studio says AI was used in development, but doesn’t specify the nature of the AI used, unlike the disclosure for its previous game, The Finals, which used text-to-speech tools to generate audio. 

In each instance, fans reacted sourly, with bitter condemnation that studios had deliberately misled them. Some developers owned up, like 11 Bit Studios apologizing for using generative AI to hastily translate text for international versions of the game in time for its launch (saying the plan was to swap in professional translations later). Other instances seem to have been oversights, as with Sandfall Interactive admitting that the AI-generated textures in Clair Obscur: Expedition 33 were accidentally left in but then removed days after its release. 

While it’s unclear how broad this sentiment is among gamers, the loudest critics consider AI-generated game elements tantamount to poisoning their experience. Aftermath journalist Luke Plunkett appropriately titled his commentary: “I’m Getting Real Tired of Not Being Able to Trust That a Video Game Doesn’t Have AI Crap in It.”

Nowhere has that new norm of AI hostility been more evident than in the immediate aftermath of The Game Awards in December, when Larian, beloved creator of Baldur’s Gate 3, released a trailer for its next RPG, Divinity 3. The reveal was well received until studio head Vincke discussed his company’s use of AI in a follow-up interview with Bloomberg. Fan backlash prompted him to release a statement to IGN clarifying that no AI-generated content would be included in the final game, which is still years away from release. In a separate post on X, Vincke explained that Larian is using generative AI to explore visual ideas and compositions before the in-house artists create the actual concept art.

What generative AI promises game developers

Within the industry itself, developers see AI as a mixed bag.

Microsoft’s talk with Xbox executives Kardar and Yadav explored other ways AI could be built into Microsoft’s developer tools (like DirectX, Visual Studio, Azure AI Services and more) to help developers create games, whether by speeding up workflows or helping log bugs faster, as well as by offering AI chat-based support. 

Razer also showcased another generative AI tool, designed for game development: a quality assurance assistant that automates aspects of bug tracking and filing. When a tester plays a build of a new game and stops the session because they noticed something awry, Razer’s tool can create an automatic report that logs when and where certain bugs were encountered. Razer says this automation can reduce QA time by 50%, though it stressed that the tech was intended to be an efficiency multiplier, not a job replacer.

The corporations also envision using generative AI to address issues, such as easing internal processes, automating mundane tasks, and parsing player and industry data for actionable insights. It’s an idea that was echoed in several talks throughout GDC, including one featuring developers from studios such as Raven Software, Sledgehammer Games, Treyarch and Activision Shanghai. The developers listed technical ways in which large language models helped them use multimodal searches to identify the right item among hundreds of thousands of assets in digital libraries, or spot and eliminate redundant tickets in task-tracking software like Jira.

Another panel of executives from several companies, including Xbox, Roblox, 2K, enterprise AI platform maker Databricks and game engine creator Unity, explored the downsides of prompting generative AI to produce code. 2K chief technical officer Nibedita Baral recounted a developer who seemingly reduced a three-day task down to minutes, though it then took three days to correct the issues in the AI-generated output. Optimizing models is challenging, especially in ensuring that the output is ethical.

«That’s on us to reduce the bias, to have diversity. A machine cannot do it, a tool cannot do it. Humans have to invest in that to figure out the balance,» Baral said. 

AI’s threat to labor and art in the games industry

While GDC opened with optimistic corporate pitches and rather pedestrian uses for generative AI in game production, concerns about the human cost bubbled up through the rest of the week.

Anyone currently seeking employment is aware of the significant impact that generative AI has had on the job market. These days, AI services filter out many applicants before they even reach a human’s desk. With applicants using AI to build resumes that can survive automated filtering, the entire process is obscured. At a roundtable discussing how AI is impacting hiring new employees, games industry recruiters described using LLMs for an additional phone screening of applicants to cut down on time. Yet that also presents another AI barrier to prospective hires — one that can’t filter for culture fit the way humans can. 

A few hundred feet away, contractors were hashing out survival strategies to weather one of the worst employment periods the industry has seen. Many developers employed by studios voiced concerns about how AI might replace their work, but it was low on the list of priorities for freelancers. They were more bedeviled by the ordinary evils that plague vulnerable workers, such as getting stiffed on client payments or being pressured into performing free labor through endless revisions. 

In a conversation with Dr. Jakin Vela, executive director of the International Game Developers Association, we explored the challenges facing the games industry during what could be considered one of its cyclical troughs. Yet it appears that this post-expansion course correction has been particularly grueling. Even more than the rise of generative AI, what weighs on developers is profound economic uncertainty and geopolitical strain, alongside studios cutting jobs and the decline in efforts to hire inclusively.

IGDA’s membership has varying perspectives on the new technology. “Some people are excited for the possibility to incorporate generative AI in their workflows to support their processes, but we have others in our community, especially among artists, localization professionals, QA testers and writers who are rightfully terrified that generative AI will be used by studio leadership and executives to replace them to save costs,” Vela said.

One thing Vela conceded, and which was echoed during the conference, was that generative AI is here to stay. The question is how to ethically incorporate it and identify whether language models used by AI tools were trained on stolen data. Another question is how to use AI to augment developer workflows rather than replace them.

Former EA software engineer David “Rez” Graham hosted a panel on the ethics of using AI in game development. It came with a stern warning: that the increased use of gen AI in production also threatens the death of art. Since any output from the technology is derivative, not creative, normalizing its use in an artistic, experiential medium risks “losing the soul of the industry in the worst, extreme case.”

Graham noted that many artists and designers feel like nobody is listening to their concerns or taking them seriously. Generative AI represents a split in priorities between creatives (artists, designers, developers) and managers. While one could argue that AI tools with ethically sourced data have a place in empowering workers, Graham’s concern is that AI adoption will soon be mandated by individuals with solely financial motives who lack an understanding of artistic workflows.

«I think we’re sitting right now at a crossroads where we get to decide: Are we going to have the bad, dystopian ending, or are we going to have an ending where we can use these tools to uplift?» Graham said.

During GDC, games industry veterans fed up with layoffs and turmoil launched their own union, United Videogame Workers. The union aimed to unify developers across companies, with the ultimate goal of achieving a large enough membership to drive industry-wide change. The workers’ demands have included broad employment protections to resist rampant layoffs — over 25,000 employees lost their jobs over the last two years. And now, there are also concerns about AI technologies threatening those who remain employed. 

Into 2026, the beat continues: AI is here to stay

For a tech reporter like me, the rest of the year in gaming wasn’t that different. I got early looks at upcoming titles at Summer Game Fest and various previews. My colleagues and I tallied up the best games of the year and attended The Game Awards to cap off 2025.

But that background radiation was always there. Multiple news stories emerged alleging that games were being made with generative AI. Fans have become increasingly wary, and studios started to respond by posting public assurances that their games weren’t made with AI. After the Indie Game Awards revoked its award to Clair Obscur: Expedition 33 and granted it to the runner-up, Blue Prince, the gaming website The Escapist put out an alarmist article claiming the latter may have used AI. 

The article, which has since been corrected, prompted its publisher Raw Fury to post on Bluesky that AI was not used in Blue Prince’s creation. The kerfuffle represents the tenuous state of gaming and suspicion by fans about how much digital automation went into making their favorite entertainment. 

That isn’t to say that gamers should expect generative AI to play a role in every game going forward, especially since the technology is still in its early stages. I chatted with The Witness and Braid creator Jonathan Blow about his upcoming game, Order of the Sinking Star, which was revealed at The Game Awards. He recounted predictions that people wouldn’t even be programming anymore by the end of 2025 — which, he told me, is patently false.

«You could certainly get something on the screen a lot faster with AI than you could before, but you still have the task of evolving that into something that people actually want to play, and past a certain point, AI can’t take you there yet,» Blow said. «The thing it leaves you with is a total mess that programmers wouldn’t really want.» 

Though he acknowledged others’ concerns that AI shouldn’t be used in gaming, Blow said he believed that if and when generative AI improves, it’ll help people expand their creativity. He also said he doesn’t expect it to threaten jobs. 

As 2026 begins, gamers have a lot to look forward to, with blockbuster games like Grand Theft Auto 6, Resident Evil: Requiem, Tomb Raider: Legacy of Atlantis, 007: First Light, Control Resonant and more titles. But they’ll enter the year with a sense of uncertainty, no longer able to trust that their games are completely made by humans.



Pebble’s Bringing Its Round Watch Back, This Time With Revamps

The Pebble Round is back with a decade-later sequel featuring two-week battery life and hook-in possibilities for onboard AI agents.

Pebble brought its smartwatch lineup back last year, and new models are already coming fast. The $199 Pebble Round 2, coming in May, is a long-overdue sequel to the original round Pebble watch I loved a decade ago. With a round watch face, swappable bands and a touchscreen, 2016 me would have been freaking out about this. It’s available to preorder now on Pebble’s website.

In 2026, it’s still an intriguing proposition. Pebble’s comeback as an indie gadget company caught my eye this time around not just because of its retro appeal, but because these watches deliver extreme battery life. The Round 2 doesn’t have the monthlong battery of the other Pebbles, but a promised two weeks on a charge still far outstrips Apple’s and Google’s watches. The round display is an always-on color e-paper screen, higher-res than the older Round watch’s (1.3 inches, 260x260 pixels). The steel watch is 8mm thick and has a look that still catches my eye next to Pixel watches.

Classic Pebble apps should work on the Round 2, but it’s the promise of AI hook-ins that could open up some new ideas. The Round 2 has two microphones for voice replies to messages, and, like the other Pebbles, it could connect with AI agents — not a native feature of the watch, but something that could be hacked together. It also works with Pebble’s one-button smart ring, the Index 01, which is also coming soon.

I’ll be seeing all the Pebble watches and the ring at CES soon, so I’ll have more impressions then. 

