Technologies
Facebook’s AI research could spur smarter AR glasses and robots
Rummaging through drawers to find your keys could become a thing of the past.

Facebook envisions a future in which you’ll learn to play the drums or whip up a new recipe while wearing augmented reality glasses or other devices powered by artificial intelligence. To make that future a reality, the social network needs its AI systems to see through your eyes.
“This is the world where we’d have wearable devices that could benefit you and me in our daily life through providing information at the right moment or helping us fetch memories,” said Kristen Grauman, a lead research scientist at Facebook. The technology could eventually be used to analyze our activities, she said, to help us find misplaced items, like our keys.
That future is still a ways off, as evidenced by Facebook’s Ray-Ban branded smart glasses, which debuted in September without AR effects. Part of the challenge is training AI systems to better understand photos and videos people capture from their perspective so that the AI can help people remember important information.
Facebook said it teamed up with 13 universities and labs that recruited 750 people to capture more than 2,200 hours of first-person video over two years. The participants, who lived in the UK, Italy, India, Japan, Saudi Arabia, Singapore, the US, Rwanda and Colombia, shot videos of themselves engaging in everyday activities such as playing sports, shopping, gazing at their pets or gardening. They used a variety of wearable devices, including GoPro cameras, Vuzix Blade smart glasses and ZShades video recording sunglasses.
Starting next month, Facebook researchers will be able to request access to this trove of data, which the social network said is the world’s largest collection of first-person unscripted videos. The new project, called Ego4D, provides a glimpse into how a tech company could improve technologies like AR, virtual reality and robotics so they play a bigger role in our daily lives.
The company’s work comes during a tumultuous period for Facebook. The social network has faced scrutiny from lawmakers, advocacy groups and the public after The Wall Street Journal published a series of stories about how the company’s internal research showed it knew about the platform’s harms even as it downplayed them publicly. Frances Haugen, a former Facebook product manager turned whistleblower, testified before Congress last week about the contents of thousands of pages of confidential documents she took before leaving the company in May. She’s scheduled to testify in the UK and meet with Facebook’s semi-independent oversight board in the near future.
Even before Haugen’s revelations, Facebook’s smart glasses sparked concerns from critics who worry the device could be used to secretly record people. During its research into first-person video, the social network said it addressed privacy concerns. Camera wearers could view and delete their videos, and the company blurred the faces of bystanders and license plates that were captured.
Fueling more AI research
As part of the new project, Facebook said, it created five benchmark challenges for researchers. The benchmarks include episodic memory, so you know what happened when; forecasting, so computers know what you’re likely to do next; and hand and object manipulation, to understand what a person is doing in a video. The last two benchmarks are understanding who said what, and when, in a video, and who the partners are in the interaction.
“This sets up a bar just to get it started,” Grauman said. “This usually is quite powerful because now you’ll have a systematic way to evaluate data.”
Helping AI understand first-person video can be challenging because computers typically learn from images that are shot from the third-person perspective of a spectator. Challenges such as motion blur and footage from different angles come into play when you record yourself kicking a soccer ball or riding a roller coaster.
Facebook said it’s looking at expanding the project to other countries. The company said diversifying the video footage is important because if AR glasses are helping a person cook curry or do laundry, the AI assistant needs to understand that those activities can look different in various regions of the world.
Facebook said the video dataset includes a diverse range of activities shot in 73 locations across nine countries. The participants included people of different ages, genders and professions.
The COVID-19 pandemic also created limitations for the research. For example, more footage in the data set is of stay-at-home activities such as cooking or crafting rather than public events.
Some of the universities that partnered with Facebook include the University of Bristol in the UK, Georgia Tech in the US, the University of Tokyo in Japan and Universidad de los Andes in Colombia.
Aurora Borealis Alert: 21 States Could Marvel at the Dazzling Northern Lights Tonight
A strong G3 magnetic storm is pushing the aurora farther south than it has reached since June 2025.
Remember that dazzling night in May 2024, when the aurora borealis lit up states that almost never see its colorful glow? Some of us have been chasing that natural marvel ever since.
Now, the sun is at its solar maximum, and many might get their chance to see the northern lights again. Late Thursday night and early Friday morning, a moderately powerful magnetic storm will impact the Earth’s magnetic field, making the aurora visible in 21 states.
According to NOAA, the aurora will be visible in Idaho, Maine, Michigan, Minnesota, Montana, North Dakota, South Dakota, Washington and Wisconsin. Those with a high enough vantage point facing north should also be able to see it in Illinois, Indiana, Iowa, Nebraska, New Hampshire, New York, Ohio, Oregon, Pennsylvania, Wyoming and Vermont. Alaskans and Canadians will have the best view.
This is only a prediction. The aurora could be stronger or weaker depending on how things go. If you’re just south of any of these states, it may be worth seeing if the aurora makes it down to you.
This storm is a continuation of one that hit the US on Wednesday night, for which NOAA initially predicted a G2 magnetic storm but ultimately classified it as a stronger G3 storm.
Both storms come from a pair of coronal mass ejections, eruptions of solar material and magnetic field, that the sun launched toward the Earth on Nov. 4 alongside X-class solar flares. X-class is the highest flare designation, so these eruptions were substantial.
Tips for viewing the northern lights
The methods for viewing an aurora are straightforward.
You’ll want to get as far away from the city and suburbs as you can to minimize light pollution. After that, you’ll want to get as high up as possible and then face north.
The northern states in the US will have the best view, but those farther south in the prediction zone may still see something if they’re high enough up and it’s dark enough outside.
Avoiding light pollution may be tough because the moon is almost full. It may drown out the aurora along the southern reaches of NOAA’s prediction area.
If you do decide to head out, you also have a pretty good chance of spotting a shooting star, since four meteor showers are active right now: the Orionids, Leonids, Northern Taurids and Southern Taurids. Three of them are scheduled to peak in November.
This New AI Feature for Cars Promises to Keep You From Missing Your Exit
Polestar is integrating Google Maps’ live lane guidance into the head-up display.
Android Users Downloaded OpenAI’s Sora AI App Nearly Half a Million Times in One Day
How much AI slop does 470,000 Sora app installs equate to?
It’s been only two days since OpenAI released the Android version of its Sora app, but the AI social media app’s popularity seems to know no bounds. A new report from Appfigures found that the Android app was downloaded 470,000 times on its first day of availability. That’s four times as many downloads as Sora’s iOS app saw at its September launch, according to TechCrunch, which first reported the news.
Keep in mind that the iOS app was downloaded over a million times in under five days. It was also restricted to North America and required an invite code. Since Sora has dropped its invite code requirement and opened up the app to more countries, it makes sense that Android downloads would be higher than the iOS ones. But it’s still an eye-popping statistic, even for an app that has quickly become one of the most powerful and controversial AI developments so far.
The Android app is just one of many updates OpenAI has dropped in recent weeks. In a new post, OpenAI’s head of Sora, Bill Peebles, outlined what’s coming soon for the AI-video app, including new creation tools, improved social features and much-anticipated Android support. OpenAI also said it would work with unions such as SAG-AFTRA, as well as celebrities and public figures, to help curb the creation of potentially inappropriate or illegal videos, including deepfakes.
You can download Sora now on the Google Play Store and start scrolling right away. Here’s everything that’s inside the Sora app. For more, check out our guide for how to spot AI-generated videos.
Cameos and editing tools
Sora recently gained new creation tools in the form of character cameos, which are now expanding beyond people. Cameo is Sora’s primary feature that allows you to use other people’s likenesses to create nearly any kind of AI video. Soon, you’ll be able to cameo your dog, guinea pig, favorite stuffed toy or generated characters from existing Sora videos. Several Halloween-themed characters have been added recently.
The app’s generation interface will also highlight trending cameos in real time, likely building on popular existing social media features, such as TikTok’s For You page or Instagram’s Explore page.
OpenAI is also introducing basic video editing tools, beginning with the ability to stitch clips together directly within the app. Peebles says more advanced editing features are on the way, hinting at a broader creative suite that aims to move Sora beyond short, one-off generations to an app that can be used by professional creators.
On the social side, the team is experimenting with new ways to use Sora with friends and communities, rather than just a global feed. That could mean channels for your university, workplace, hobbies or sports teams, bringing a more localized feel to what has so far been a mostly chaotic public stream of AI videos.
These changes follow the first major Sora update earlier this month, which introduced longer video limits and a storyboarding feature. The company announced that free Sora users can make videos up to 15 seconds long on the iPhone app and the web (previously the only way Android users could access Sora). Pro users also receive an additional 10 seconds when creating on the web, for a total of 25 seconds. The announcement came one day after Google upgraded its popular AI video model, Veo 3, to handle longer video generations.
New payment options for videos
As OpenAI added new features and opened up its app to anyone (no invite code needed), it also introduced payment plans. Previously, free users could generate up to 30 videos per day, while Pro users had a limit of 100 videos per day. Now, if anyone hits their generation limit, they can pay $4 for an additional 10 video generations.
Since your Sora account is linked to your ChatGPT account, if you pay for ChatGPT Pro, you’re a paying Sora user. For more information, see all the payment plans.
Storyboarding
Storyboarding, available only to Pro users on the web, lets creators plan out their videos shot by shot before generating them. Storyboarding has long been part of the professional filmmaking process and occasionally appears in professional creative software; Google’s AI filmmaking program Flow, for example, supports storyboarding. But it’s an interesting and somewhat unexpected addition to Sora.
Sora has only been around a short time, but the vibe on the app is focused on shorter, funny videos, echoing OpenAI’s claim that the app is designed to help people connect with their friends. Professional-grade videos that are longer and better planned aren’t very common, but these upcoming updates will likely change that.
This could be a sign that OpenAI is attempting to attract the professional creators it has previously alienated. Professional creators would need storyboarding, video editing, longer run times, and higher resolutions, and OpenAI seems to be addressing these needs quickly.
(Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)