
Meta struggles to curb hate speech before US vote: researchers

TimesLIVE

Warning: Article contains offensive language

Meta — the owner of Facebook and Instagram — is struggling to fully contain and address hate speech ahead of the US election, according to research shared exclusively with the Thomson Reuters Foundation.

Nonprofit Global Witness tested how Facebook was dealing with hate speech ahead of the presidential vote by analysing 200,000 comments on the pages of 67 US Senate candidates between Sept. 6 and Oct. 6.

When Global Witness researchers used Facebook's reporting tool to flag 14 comments that they considered particularly egregious violations of the hate speech rules in Meta's “community standards”, the company took days to react.

The comments flagged by the researchers referred to Muslims as a “plague,” Jews as “inbred and parasitic,” and called one political candidate a “lezbo pig.”

Meta removed some but not all of the 14 comments from Facebook after Global Witness emailed the company directly, the researchers said.

“There was a real failure to promptly review these posts,” said Ellen Judson, a researcher with Global Witness who oversaw the test.

The findings add to long-standing criticism of Meta from researchers, watchdog groups and lawmakers for failing to foster a healthy information ecosystem during elections across the globe.

As recently as April, the European Commission opened an investigation to assess whether Meta may have breached EU online content rules ahead of the European Parliament elections.

Judson said Facebook's handling of the comments flagged by Global Witness points to a breakdown in how the platform deals with hate speech.

In an email, a spokesperson for Meta said the Global Witness work was “based on a tiny sample of comments and we removed those that violate our policies”.

“This is not reflective of the work our teams — including the 40,000 people working on safety and security — are doing to keep our platform safe ahead of the election,” the spokesperson said.

Facebook's community standards say content that “attacks individuals based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities, or diseases is considered a violation”.

While it is not clear how many users were exposed to the hate speech, Judson said the impact could be large.

“Online abuse can have negative psychological impact and can make people reconsider being in politics. For outside observers, seeing that kind of discourse can perhaps give them the impression that this isn't a place for me,” she said.

“A small amount of abuse can still do a lot of harm.”

LACK OF INVESTMENT?

The failure is part of a broader lack of investment in election preparedness ahead of the upcoming US vote, said Theodora Skeadas, a former public policy official at Twitter — now X.

“They have laid off staff and decreased resources towards monitoring political content,” said Skeadas, now CEO of Tech Policy Consulting, which addresses issues including AI governance and information integrity.

Over the past several years, Facebook has reduced its headcount across multiple teams.

Facebook and Instagram are the second and third most popular social media platforms in the US, according to the US-based Pew Research Center, with 68% of US adults reporting that they use Facebook, and 47% saying they use Instagram.

Over a third of users rely on the platforms to get information about current events, the Pew Research Center found.

According to Meta, the prevalence of content violating its hate speech policies is very low on its platforms — about 0.02% of views on Facebook and between 0.02% and 0.03% on Instagram, meaning that for every 10,000 content views, about two to three contain hate speech.

The Meta spokesperson told the Thomson Reuters Foundation that in the second quarter of 2024, Facebook took action against 7.2 million pieces of content for violating hate speech policies and 7.8 million pieces of content for violating its bullying and harassment policies.

But Jeff Allen, a former data scientist at Meta, who is the co-founder of the nonprofit Integrity Institute, said that automated systems used to flag hate speech often miss a lot. They can fail to grasp the context of a comment, or be fooled by slang or oblique language, he said.

Allen also said platforms like Facebook are wary of being too heavy-handed about removing posts, as doing so can reduce the amount of time people spend online.

“If you are more aggressive about taking down content, you see engagement go down — there are trade-offs,” he said.

In a February blog post outlining its strategy for elections, Meta's president of global affairs Nick Clegg wrote: “No tech company does more or invests more to protect elections online than Meta — not just during election periods but at all times.”

Clegg said the company invested more than $20 billion in the effort leading into the 2024 US presidential election, highlighting Meta's commitment to making political advertising transparent and to strengthening teams hunting down hate groups on the platform.

TRANSPARENCY

Despite Facebook's pledge to help safeguard elections, a number of recent reports point to instances where false advertising, election misinformation, and hate speech have been permitted.

In October, Global Witness carried out a test of major social media platforms' advertising systems that found that some paid ads with election misinformation were still being accepted and posted on Facebook, even though the platform had improved its review process.

Forbes reported in October that Facebook was running over a million dollars of ads falsely claiming that the US election could be postponed or rigged, while the Bureau of Investigative Journalism published a report in November saying e-commerce companies were selling merchandise via Facebook that contained similar falsehoods. In both cases, Meta said it was reviewing the matter, according to the reports.

Researchers like Allen say that Meta could be much more transparent about how it tackles hate speech, by releasing data on how many users are exposed, explaining how often posts are submitted to human reviewers, and disclosing more about how their automated systems work.

Meta phased out “CrowdTangle”, a tool widely used by outside researchers to track viral misinformation on the platform, in August. The move fuelled complaints from groups and experts who used it, but Facebook said it had introduced new tools that gave a fuller picture of activities on its platform.

“We need metrics on the scale of harms,” Allen said.

Global Witness said that Facebook did not engage with it at all on the findings of its research — leaving it in the dark about how hate speech was being handled in the days before the US election.

Without more transparency, it is impossible to know how seriously the platforms are taking abuse at this critical time, leaving it to outside researchers to flag violations of the company's own rules, said Judson, with Global Witness.

“For them, it's always a 'catch-up' situation, it's not proactive,” she said.

Thomson Reuters Foundation