Facebook butts heads with Instagram researchers studying photo site’s algorithm

AlgorithmWatch says it shut down its Instagram project because Facebook threatened legal action. Facebook says it didn’t threaten the group and that AlgorithmWatch was breaking its rules.

AlgorithmWatch, a German research and advocacy group, shut down its Instagram monitoring project after what it says was a “thinly veiled threat” from Facebook. But the social network says it made no such threat and that the group’s project ran afoul of Facebook policies around data collection.

The advocacy group says it’s “committed to evaluating and shedding light on … algorithmic decision-making processes that have social relevance.” Its Instagram project found that the platform prioritizes posts featuring people who are “scantily clad” and that politicians’ posts reached more people when they showed the politician’s face rather than text.

In a blog post Friday, the researchers said they shut down the Instagram project on July 13, after a May meeting with Facebook, which owns Instagram. At that meeting, they said, Facebook told AlgorithmWatch it had violated Facebook’s terms of service, which prohibit the automated collection of data. According to the group, Facebook said it would “mov[e] to more formal engagement” if the issue wasn’t resolved, which the researchers took as a threat of legal action.

Facebook says it didn’t threaten any legal action against AlgorithmWatch and wanted to work with the organization to find a way to continue the research.

“We had concerns with their practices,” a Facebook spokesperson said in an email Friday, “which is why we contacted them multiple times so they could come into compliance with our terms and continue their research, as we routinely do with other research groups when we identify similar concerns.”

As part of the Instagram project, AlgorithmWatch developed a browser add-on that scraped volunteers’ Instagram feeds to study how the social network “prioritizes pictures and videos in a user’s timeline.” The researchers contend that the add-on’s users volunteered their feed data to the project and that about 1,500 volunteers had installed the add-on since its launch in March 2020.
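
The article doesn’t detail how the add-on worked internally, but data-donation extensions of this kind typically run a content script that reads the rendered feed and sends anonymized metadata, such as post order and media type, to the project’s server. The sketch below is a hypothetical TypeScript illustration; the selectors, field names and example.org endpoint are invented, not AlgorithmWatch’s actual code.

```typescript
// content-script.ts: hypothetical sketch of a feed "data donation" add-on.
// The selectors, field names and endpoint are invented for illustration;
// this is not AlgorithmWatch's actual code.

interface FeedItem {
  position: number;       // rank of the post as rendered in the timeline
  postId: string | null;  // identifier scraped from the post's permalink
  mediaType: "image" | "video" | "unknown";
}

function collectFeedItems(): FeedItem[] {
  // Instagram renders feed posts as <article> elements; real markup changes often.
  return Array.from(document.querySelectorAll("article")).map((post, i) => ({
    position: i,
    postId:
      post.querySelector<HTMLAnchorElement>("a[href*='/p/']")
        ?.href.split("/p/")[1]
        ?.replace(/\/.*$/, "") ?? null,
    mediaType: post.querySelector("video")
      ? "video"
      : post.querySelector("img")
      ? "image"
      : "unknown",
  }));
}

async function donate(): Promise<void> {
  const snapshot = { takenAt: new Date().toISOString(), items: collectFeedItems() };
  // Volunteers opted in, and only post order and metadata are sent here,
  // not the volunteer's personal data.
  await fetch("https://example.org/donate", { // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(snapshot),
  });
}

donate().catch(console.error);
```

Automated collection like this, even of data the volunteers themselves can see, is exactly what Facebook’s terms of service prohibit, which is the crux of the dispute.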

Earlier this month, Facebook disabled a similar research project at New York University, saying it violated the social network’s terms around data gathering. The NYU Ad Observatory used a browser add-on to collect data about which political ads were shown in a user’s Facebook feed.

News of the AlgorithmWatch project’s shutdown comes amid intense scrutiny of social networks, the misinformation that spreads on them and their effects on individuals and society.

For its part, Facebook has had to be careful about how it handles user data, particularly after 2018’s Cambridge Analytica scandal, in which an outside firm harvested information from 50 million Facebook accounts without users’ permission. That scandal led to Facebook CEO Mark Zuckerberg being called before Congress to testify about the social network’s data privacy policies. It also played a part in Facebook agreeing, in 2019, to pay a $5 billion fine to the US Federal Trade Commission over privacy violations. Under that settlement, Facebook must certify that it’s taking steps to protect user privacy.

The Facebook spokesperson said Friday that the company makes it a point to cooperate with researchers. “We collaborate with hundreds of research groups to enable the study of important topics, including by providing data sets and access to APIs, and recently published information explaining how our systems work and why you see what you see on our platform.”

AlgorithmWatch, on the other hand, accused Facebook of “weaponizing” its terms of service. “Given that Facebook’s terms of service can be updated at their discretion (with 30 days’ notice), the company could forbid any ongoing analysis that aims at increasing transparency, simply by changing its terms,” the group said in its blog post.

Today’s NYT Mini Crossword Answers for Tuesday, Oct. 14

Here are the answers for The New York Times Mini Crossword for Oct. 14.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Today’s Mini Crossword has an odd vertical shape, with an extra Across clue and only four Down clues. The clues aren’t terribly difficult, but one or two could be tricky. Read on if you need the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Smokes, informally
Answer: CIGS

5A clue: “Don’t have ___, man!” (Bart Simpson catchphrase)
Answer: ACOW

6A clue: What the vehicle in “lane one” of this crossword is winning?
Answer: RACE

7A clue: Pitt of Hollywood
Answer: BRAD

8A clue: “Yeah, whatever”
Answer: SURE

9A clue: Rd. crossers
Answer: STS

Mini down clues and answers

1D clue: Things to «load» before a marathon
Answer: CARBS

2D clue: Mythical figure who inspired the idiom “fly too close to the sun”
Answer: ICARUS

3D clue: Zoomer around a small track
Answer: GOCART

4D clue: Neighbors of Norwegians
Answer: SWEDES


New California Law Wants Companion Chatbots to Tell Kids to Take Breaks

Gov. Gavin Newsom signed the new requirements on AI companions into law on Monday.

AI companion chatbots will have to remind users in California that they’re not human under a new law signed Monday by Gov. Gavin Newsom.

The law, SB 243, also requires companion chatbot companies to maintain protocols for identifying and addressing cases in which users express suicidal ideation or self-harm. For users under 18, chatbots will have to provide a notification at least every three hours that reminds them to take a break and tells them the bot isn’t human.
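
The statute specifies the outcome, a recurring notice for minors, not an implementation. As a minimal hypothetical sketch of how a service might track the three-hour cadence (the Session shape and sendMessage helper below are invented for illustration):

```typescript
// Hypothetical sketch of SB 243's break-reminder requirement for minors.
// The statute sets the outcome (a notice at least every three hours),
// not this implementation; Session and sendMessage are invented for illustration.

const THREE_HOURS_MS = 3 * 60 * 60 * 1000;

interface Session {
  isMinor: boolean;       // assumed to come from the platform's age signal
  lastReminderAt: number; // epoch milliseconds of the last notice shown
}

// Placeholder for however the chat UI surfaces a system notice.
function sendMessage(text: string): void {
  console.log(`[system notice] ${text}`);
}

// Check on every message exchange (or on a timer) whether a notice is due.
function maybeRemind(session: Session, now: number = Date.now()): void {
  if (!session.isMinor) return;
  if (now - session.lastReminderAt >= THREE_HOURS_MS) {
    sendMessage(
      "Reminder: you're chatting with an AI, not a person. Consider taking a break."
    );
    session.lastReminderAt = now;
  }
}

// Example: a minor whose last notice was four hours ago gets one now.
const session: Session = {
  isMinor: true,
  lastReminderAt: Date.now() - 4 * 60 * 60 * 1000,
};
maybeRemind(session);
```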

It’s one of several bills Newsom has signed in recent weeks dealing with social media, artificial intelligence and other consumer technology issues. Another bill signed Monday, AB 56, requires warning labels on social media platforms, similar to those required for tobacco products. Last week, Newsom signed measures requiring internet browsers to make it easy for people to tell websites they don’t want them to sell their data and banning loud advertisements on streaming platforms. 

AI companion chatbots have drawn particular scrutiny from lawmakers and regulators in recent months. The Federal Trade Commission launched an investigation into several companies in response to complaints by consumer groups and parents that the bots were harming children’s mental health. OpenAI introduced new parental controls and other guardrails in its popular ChatGPT platform after the company was sued by parents who allege ChatGPT contributed to their teen son’s suicide. 

“We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability,” Newsom said in a statement.


One AI companion developer, Replika, told CNET that it already has protocols to detect self-harm as required by the new law, and that it is working with regulators and others to comply with requirements and protect consumers. 

“As one of the pioneers in AI companionship, we recognize our profound responsibility to lead on safety,” Replika’s Minju Song said in an emailed statement. Song said Replika uses content-filtering systems, community guidelines and safety systems that refer users to crisis resources when needed.

Read more: Using AI as a Therapist? Why Professionals Say You Should Think Again

A Character.ai spokesperson said the company “welcomes working with regulators and lawmakers as they develop regulations and legislation for this emerging space, and will comply with laws, including SB 243.” OpenAI spokesperson Jamie Radice called the bill a “meaningful move forward” for AI safety. “By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country,” Radice said in an email.

One bill Newsom has yet to sign, AB 1064, would go further by prohibiting developers from making companion chatbots available to children unless the AI companion is “not foreseeably capable of” encouraging harmful activities or engaging in sexually explicit interactions, among other things.
