

Facebook’s AI research could spur smarter AR glasses and robots

Rummaging through drawers to find your keys could become a thing of the past.

Facebook envisions a future in which you’ll learn to play the drums or whip up a new recipe while wearing augmented reality glasses or other devices powered by artificial intelligence. To make that future a reality, the social network needs its AI systems to see through your eyes.

“This is the world where we’d have wearable devices that could benefit you and me in our daily life through providing information at the right moment or helping us fetch memories,” said Kristen Grauman, a lead research scientist at Facebook. The technology could eventually be used to analyze our activities, she said, to help us find misplaced items, like our keys.

That future is still a ways off, as evidenced by Facebook’s Ray-Ban branded smart glasses, which debuted in September without AR effects. Part of the challenge is training AI systems to better understand photos and videos people capture from their perspective so that the AI can help people remember important information.

Facebook said it teamed up with 13 universities and labs that recruited 750 people to capture more than 2,200 hours of first-person video over two years. The participants, who lived in the UK, Italy, India, Japan, Saudi Arabia, Singapore, the US, Rwanda and Colombia, shot videos of themselves engaging in everyday activities such as playing sports, shopping, gazing at their pets or gardening. They used a variety of wearable devices, including GoPro cameras, Vuzix Blade smart glasses and ZShades video recording sunglasses.

Starting next month, Facebook researchers will be able to request access to this trove of data, which the social network said is the world’s largest collection of first-person unscripted videos. The new project, called Ego4D, provides a glimpse into how a tech company could improve technologies like AR, virtual reality and robotics so they play a bigger role in our daily lives.

The company’s work comes during a tumultuous period for Facebook. The social network has faced scrutiny from lawmakers, advocacy groups and the public after The Wall Street Journal published a series of stories about how the company’s internal research showed it knew about the platform’s harms even as it downplayed them publicly. Frances Haugen, a former Facebook product manager turned whistleblower, testified before Congress last week about the contents of thousands of pages of confidential documents she took before leaving the company in May. She’s scheduled to testify in the UK and meet with Facebook’s semi-independent oversight board in the near future.

Even before Haugen’s revelations, Facebook’s smart glasses sparked concerns from critics who worry the device could be used to secretly record people. During its research into first-person video, the social network said it addressed privacy concerns. Camera wearers could view and delete their videos, and the company blurred the faces of bystanders and license plates that were captured.

Fueling more AI research

As part of the new project, Facebook said, it created five benchmark challenges for researchers. The benchmarks include episodic memory, so you know what happened when; forecasting, so computers know what you’re likely to do next; and hand and object manipulation, to understand what a person is doing in a video. The last two benchmarks cover understanding who said what, and when, in a video, and recognizing whom the camera wearer is interacting with.

“This sets up a bar just to get it started,” Grauman said. “This usually is quite powerful because now you’ll have a systematic way to evaluate data.”
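To make that concrete, here is a minimal, purely illustrative sketch of what an episodic-memory style query and its scoring could look like. It is not Facebook’s published benchmark code or the actual Ego4D data format; the class, field and function names (EpisodicMemoryQuery, temporal_iou, video_id and so on) are assumptions for illustration. The idea is that a system is asked a question about a first-person video and is scored on how tightly it localizes the moment that answers it.

```python
# Illustrative sketch only; not Facebook's Ego4D API or data schema.
from dataclasses import dataclass


@dataclass
class EpisodicMemoryQuery:
    video_id: str       # identifier of the egocentric video clip (hypothetical field)
    question: str       # natural-language query from the camera wearer
    gt_start_s: float   # ground-truth start of the answering moment, in seconds
    gt_end_s: float     # ground-truth end of the answering moment, in seconds


def temporal_iou(pred_start: float, pred_end: float,
                 gt_start: float, gt_end: float) -> float:
    """Overlap between a predicted time window and the ground truth (0 = miss, 1 = exact)."""
    intersection = max(0.0, min(pred_end, gt_end) - max(pred_start, gt_start))
    union = max(pred_end, gt_end) - min(pred_start, gt_start)
    return intersection / union if union > 0 else 0.0


if __name__ == "__main__":
    query = EpisodicMemoryQuery("kitchen_0412", "Where did I last put my keys?", 83.0, 88.5)
    # A model's predicted window is scored by how tightly it matches the labeled moment.
    print(round(temporal_iou(82.0, 87.0, query.gt_start_s, query.gt_end_s), 2))  # ~0.62
```

Temporal overlap of this kind is one common way video localization tasks are evaluated; the specifics of how Facebook scores each of its five benchmarks may differ.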

Helping AI understand first-person video can be challenging because computers typically learn from images that are shot from the third-person perspective of a spectator. Challenges such as motion blur and footage from different angles come into play when you record yourself kicking a soccer ball or riding a roller coaster.

Facebook said it’s looking at expanding the project to other countries. The company said diversifying the video footage is important because if AR glasses are helping a person cook curry or do laundry, the AI assistant needs to understand that those activities can look different in various regions of the world.

Facebook said the video dataset includes a diverse range of activities shot in 73 locations across nine countries. The participants included people of different ages, genders and professions.

The COVID-19 pandemic also created limitations for the research. For example, more footage in the data set is of stay-at-home activities such as cooking or crafting rather than public events.

Some of the universities that partnered with Facebook include the University of Bristol in the UK, Georgia Tech in the US, the University of Tokyo in Japan and Universidad de los Andes in Colombia.


Today’s NYT Mini Crossword Answers for Thursday, May 22

Here are the answers for The New York Times Mini Crossword for May 22.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Today’s NYT Mini Crossword is fairly easy, especially if you know recent trends in baby names. It also helps to have a slight knowledge of The Hobbit, or at a minimum, be good at riddles. Need some help with today’s Mini Crossword? Read on. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

The Mini Crossword is just one of many games in the Times’ games collection. If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Part of a fleet
Answer: SHIP

5A clue: Answer to Gollum’s riddle in “The Hobbit,” which starts “This thing all things devours — birds, beasts, trees, flowers”
Answer: TIME

6A clue: “See ya!”
Answer: LATER

7A clue: Second-most popular girl’s name of the 2020s, after Olivia
Answer: EMMA

8A clue: Not keeping secrets
Answer: OPEN

Mini down clues and answers

1D clue: Sticker on an envelope
Answer: STAMP

2D clue: “I’d like another card,” in blackjack
Answer: HITME

3D clue: “Uhh, that is to say …”
Answer: IMEAN

4D clue: The “p” of m.p.h.
Answer: PER

6D clue: Name taken by the new pope
Answer: LEO

How to play more Mini Crosswords

The New York Times Games section offers a large number of online games, but only some of them are free for all to play. You can play the current day’s Mini Crossword for free, but you’ll need a subscription to the Times Games section to play older puzzles from the archives.



What a Proposed Moratorium on State AI Rules Could Mean for You

Congressional Republicans have proposed a 10-year pause on the enforcement of state regulations around artificial intelligence.

States couldn’t enforce regulations on artificial intelligence technology for a decade under a plan being considered in the US House of Representatives. The legislation, in an amendment to the federal government’s budget bill, says no state or political subdivision “may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems or automated decision systems” for 10 years. The proposal would still need the approval of both chambers of Congress and President Donald Trump before it can become law. The House is expected to vote on the full budget package this week.

AI developers and some lawmakers have said federal action is necessary to keep states from creating a patchwork of different rules and regulations across the US that could slow the technology’s growth. The rapid growth in generative AI since ChatGPT exploded on the scene in late 2022 has led companies to fit the technology in as many spaces as possible. The economic implications are significant, as the US and China race to see which country’s tech will predominate, but generative AI poses privacy, transparency and other risks for consumers that lawmakers have sought to temper.

“We need, as an industry and as a country, one clear federal standard, whatever it may be,” Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers during an April hearing. “But we need one, we need clarity as to one federal standard and have preemption to prevent this outcome where you have 50 different standards.”

Efforts to limit the ability of states to regulate artificial intelligence could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life. “There have been a lot of discussions at the state level, and I would think that it’s important for us to approach this problem at multiple levels,” said Anjana Susarla, a professor at Michigan State University who studies AI. “We could approach it at the national level. We can approach it at the state level too. I think we need both.”

Several states have already started regulating AI

The proposed language would bar states from enforcing any regulation, including those already on the books. The exceptions are rules and laws that make things easier for AI development and those that apply the same standards to non-AI models and systems that do similar things. These kinds of regulations are already starting to pop up. The biggest focus is not in the US, but in Europe, where the European Union has already implemented standards for AI. But states are starting to get in on the action.

Colorado passed a set of consumer protections last year, set to go into effect in 2026. California adopted more than a dozen AI-related laws last year. Other states have laws and regulations that often deal with specific issues such as deepfakes or require AI developers to publish information about their training data. At the local level, some regulations also address potential employment discrimination if AI systems are used in hiring.

“States are all over the map when it comes to what they want to regulate in AI,” said Arsen Kourinian, partner at the law firm Mayer Brown. So far in 2025, state lawmakers have introduced at least 550 proposals around AI, according to the National Conference of State Legislatures. In the House committee hearing last month, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. “We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead,” he said.

While some states have laws on the books, not all of them have gone into effect or seen any enforcement. That limits the potential short-term impact of a moratorium, said Cobun Zweifel-Keegan, managing director in Washington for the International Association of Privacy Professionals. “There isn’t really any enforcement yet.”

A moratorium would likely deter state legislators and policymakers from developing and proposing new regulations, Zweifel-Keegan said. “The federal government would become the primary and potentially sole regulator around AI systems,” he said.

What a moratorium on state AI regulation means

AI developers have asked for any guardrails placed on their work to be consistent and streamlined. During a Senate Commerce Committee hearing last week, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory system “would be disastrous” for the industry. Altman suggested instead that the industry develop its own standards.

Asked by Sen. Brian Schatz, a Democrat from Hawaii, if industry self-regulation is enough at the moment, Altman said he thought some guardrails would be good but, “It’s easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences.” (Disclosure: Ziff Davis, parent company of CNET, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Concerns from companies — both the developers that create AI systems and the “deployers” who use them in interactions with consumers — often stem from fears that states will mandate significant work such as impact assessments or transparency notices before a product is released, Kourinian said. Consumer advocates have said more regulations are needed, and hampering the ability of states could hurt the privacy and safety of users.

“AI is being used widely to make decisions about people’s lives without transparency, accountability or recourse — it’s also facilitating chilling fraud, impersonation and surveillance,” Ben Winters, director of AI and privacy at the Consumer Federation of America, said in a statement. “A 10-year pause would lead to more discrimination, more deception and less control — simply put, it’s siding with tech companies over the people they impact.”

A moratorium on specific state rules and laws could result in more consumer protection issues being dealt with in court or by state attorneys general, Kourinian said. Existing laws around unfair and deceptive practices that are not specific to AI would still apply. “Time will tell how judges will interpret those issues,” he said.

Susarla said the pervasiveness of AI across industries means states might be able to regulate issues like privacy and transparency more broadly, without focusing on the technology. But a moratorium on AI regulation could lead to such policies being tied up in lawsuits. “It has to be some kind of balance between ‘we don’t want to stop innovation,’ but on the other hand, we also need to recognize that there can be real consequences,” she said.

Much policy around the governance of AI systems does happen because of those so-called technology-agnostic rules and laws, Zweifel-Keegan said. “It’s worth also remembering that there are a lot of existing laws and there is a potential to make new laws that don’t trigger the moratorium but do apply to AI systems as long as they apply to other systems,” he said.

Moratorium draws opposition ahead of House vote

House Democrats have said the proposed pause on regulations would hinder states’ ability to protect consumers. Rep. Jan Schakowsky called the move “reckless” in a committee hearing on AI regulation Wednesday. “Our job right now is to protect consumers,” the Illinois Democrat said.

Republicans, meanwhile, contended that state regulations could be too much of a burden on innovation in artificial intelligence. Rep. John Joyce, a Pennsylvania Republican, said in the same hearing that Congress should create a national regulatory framework rather than leaving it to the states. “We need a federal approach that ensures consumers are protected when AI tools are misused, and in a way that allows innovators to thrive.”

At the state level, a letter signed by 40 state attorneys general — of both parties — called for Congress to reject the moratorium and instead create that broader regulatory system. “This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI,” they wrote.

