
Facebook, Twitter and YouTube face content challenges as Afghanistan falls

From fact-checking to labels, social networks are being put to the test yet again.

A CNN reporter stands in front of a photo of a helicopter flying over the US embassy in Kabul, Afghanistan, a city that has fallen into chaos. Underneath the image, a caption states: “Violent but mostly peaceful transfer of power.”

The image, supposedly a screengrab of the network, circulated widely on Facebook, Twitter and other social media, prompting questions about its authenticity. How could the transfer be considered peaceful, some wondered. Was the language meant to be satire?

Turns out the image was fake.

Reuters and PolitiFact both fact-checked the image and concluded that it, like so many photos before it, had been digitally altered. The doctored image borrowed a screenshot of CNN correspondent Omar Jimenez from a 2020 broadcast of protests in Kenosha, Wisconsin, over a police shooting. At the time, some conservatives criticized CNN for running the caption “Fiery but mostly peaceful protests after police shooting.”

Altered images and video, such as a doctored version of a Nancy Pelosi speech that made the House Speaker appear drunk, have plagued Facebook and Twitter for years. Now the problem is resurfacing as news pours out of Afghanistan, which quickly fell into turmoil as the US wound down a 20-year war. Just as before, social media outlets are resorting to labels and warnings to caution users about faked content.

On Sunday, Taliban fighters took over Kabul, the capital, and President Ashraf Ghani fled the country. Violence erupted at the city’s international airport, and videos spread through social media showing people clinging to a US military aircraft as it took off and others falling from another plane in midair. The Associated Press, citing senior US military officials, reported that at least seven people died at the airport.

The upheaval in Afghanistan poses a set of familiar challenges to social networks, which monitor their platforms for offensive content, including graphic imagery. Some Facebook videos of people falling from planes carried warnings telling users that the content didn’t violate the company’s rules but might be violent or graphic. Similar videos appeared on Twitter and TikTok without a label.

On Facebook and its photo-sharing service Instagram, the doctored CNN image was labeled as altered. “Independent fact-checkers say this information could mislead people,” the label said. The fake CNN caption was also used as the title of a YouTube video containing different footage; YouTube didn’t label the video and said it didn’t violate its rules. The altered image spread across Twitter as well, which didn’t add a label.

Instagram boss: ‘The risk will evolve’

Adam Mosseri, who runs Instagram, told Bloomberg Television that the photo-sharing service bans posts promoting the Taliban, a group covered by its dangerous-organizations policy because of US government sanctions.

“We are relying on that policy to proactively take down anything that we can that might be dangerous or that is related to the Taliban in general,” Mosseri said. “Now this situation is evolving rapidly, and with it I’m sure the risk will evolve as well. We are going to have to modify what we do and how we do it to respond to those changing risks as they happen.”

A Facebook spokesman said the company has a dedicated team, “including Afghan nationals and native Dari and Pashto speakers,” to assess the situation in real time.

“Our teams continue to monitor the situation on the ground in Afghanistan, in consultation with our partners, and will take action on any content that violates these policies,” the spokesman said in a statement. Facebook’s online rules prohibit glorifying violence or celebrating the suffering of others, though they allow some gory content behind a warning screen.

Facebook also noted that it bars the Taliban from its services because they’re “sanctioned as a terrorist organization under US law.” The social media giant, which owns the messaging app WhatsApp, also blocked a WhatsApp number the Taliban had set up as a hotline for civilians to report violence, looting and other problems, according to The Financial Times.

From April to June, the social network took action on 7 million pieces of terrorism-related content, according to Facebook’s Community Standards Enforcement Report released on Wednesday. Facebook didn’t say how much of that content was Taliban-related. The New York Times reported on Wednesday that it found more than 100 new accounts and pages on Facebook and Twitter that claimed to belong to the Taliban or expressed support for the group.

On YouTube, some news outlets added their own warnings at the beginning of videos cautioning users that the imagery was graphic, but not all did. YouTube added age restrictions and a label to a video of people falling from a plane that was posted by the Hindustan Times, a major Indian newspaper. The label warned that the “video may be inappropriate for some users.”

YouTube’s rules don’t allow violent, graphic or shocking content, though they make exceptions for content that is educational, documentary, artistic or scientific. The company said it also surfaces videos from authoritative sources during breaking news events.

In a statement, a YouTube spokesperson said the video-sharing platform would “terminate” accounts it believes are owned and operated by the Taliban because of sanctions and trade compliance laws.

Twitter pointed to its policies against violent organizations and hateful conduct. The company received criticism from some conservatives for allowing Taliban spokesman Zabihullah Mujahid to use its platform. Some activists accused the Taliban of “trying to fish for legitimacy” and pushing out information that conflicts with news reports. The company didn’t immediately answer questions about whether the account violated its rules.

The company has been testing a forum called Birdwatch that lets users flag tweets and write notes with more context. Some of the notes included content about Afghanistan.

In one tweet, which users rated both “not misleading” and “potentially misleading,” Sen. Marco Rubio, a Florida Republican, wrote that US President Joe Biden “apparently” had “no plans” to speak about Afghanistan. Both notes said Rubio tweeted before Biden announced he would be speaking about the topic later on Monday.

Users rated other tweets as misleading as well, noting that a video shared by some high-profile conservatives, including US Sen. Ted Cruz, a Texas Republican, “attempt[s] to frame CNN as proponents of the Taliban and their take-over of Afghanistan.” The video shows CNN correspondent Clarissa Ward reporting that Taliban fighters are “just chanting death to America, but they seem friendly at the same time. It’s utterly bizarre.”

TikTok didn’t immediately respond to questions about how it’s moderating content about Afghanistan.

Richard Nieva contributed to this report.


Today’s NYT Mini Crossword Answers for Tuesday, Oct. 14

Here are the answers for The New York Times Mini Crossword for Oct. 14.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Today’s Mini Crossword has an odd vertical shape, with an extra Across clue and only four Down clues. The clues are not terribly difficult, but one or two could be tricky. Read on if you need the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Smokes, informally
Answer: CIGS

5A clue: “Don’t have ___, man!” (Bart Simpson catchphrase)
Answer: ACOW

6A clue: What the vehicle in “lane one” of this crossword is winning?
Answer: RACE

7A clue: Pitt of Hollywood
Answer: BRAD

8A clue: “Yeah, whatever”
Answer: SURE

9A clue: Rd. crossers
Answer: STS

Mini down clues and answers

1D clue: Things to «load» before a marathon
Answer: CARBS

2D clue: Mythical figure who inspired the idiom “fly too close to the sun”
Answer: ICARUS

3D clue: Zoomer around a small track
Answer: GOCART

4D clue: Neighbors of Norwegians
Answer: SWEDES


Watch SpaceX’s Starship Flight Test 11


New California Law Wants Companion Chatbots to Tell Kids to Take Breaks

Gov. Gavin Newsom signed the new requirements on AI companions into law on Monday.

AI companion chatbots will have to remind users in California that they’re not human under a new law signed Monday by Gov. Gavin Newsom.

The law, SB 243, also requires companion chatbot companies to maintain protocols for identifying and addressing cases in which users express suicidal ideation or self-harm. For users under 18, chatbots will have to provide a notification at least every three hours that reminds users to take a break and that the bot is not human.

It’s one of several bills Newsom has signed in recent weeks dealing with social media, artificial intelligence and other consumer technology issues. Another bill signed Monday, AB 56, requires warning labels on social media platforms, similar to those required for tobacco products. Last week, Newsom signed measures requiring internet browsers to make it easy for people to tell websites they don’t want them to sell their data and banning loud advertisements on streaming platforms. 

AI companion chatbots have drawn particular scrutiny from lawmakers and regulators in recent months. The Federal Trade Commission launched an investigation into several companies in response to complaints by consumer groups and parents that the bots were harming children’s mental health. OpenAI introduced new parental controls and other guardrails in its popular ChatGPT platform after the company was sued by parents who allege ChatGPT contributed to their teen son’s suicide. 

“We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability,” Newsom said in a statement.


One AI companion developer, Replika, told CNET that it already has protocols to detect self-harm as required by the new law, and that it is working with regulators and others to comply with requirements and protect consumers. 

“As one of the pioneers in AI companionship, we recognize our profound responsibility to lead on safety,” Replika’s Minju Song said in an emailed statement. Song said Replika uses content-filtering systems, community guidelines and safety systems that refer users to crisis resources when needed.

Read more: Using AI as a Therapist? Why Professionals Say You Should Think Again

A Character.ai spokesperson said the company “welcomes working with regulators and lawmakers as they develop regulations and legislation for this emerging space, and will comply with laws, including SB 243.” OpenAI spokesperson Jamie Radice called the bill a “meaningful move forward” for AI safety. “By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country,” Radice said in an email.

One bill Newsom has yet to sign, AB 1064, would go further by prohibiting developers from making companion chatbots available to children unless the AI companion is “not foreseeably capable of” encouraging harmful activities or engaging in sexually explicit interactions, among other things.
