It seems everyone has a bone to pick with Facebook CEO Mark Zuckerberg these days. So, when a delegation of prominent civil rights leaders met with Zuckerberg at his house in Palo Alto, California, earlier this week to discuss how racist and anti-Muslim posts on Facebook put people in danger, they brought with them plenty of proof. Farhana Khera, a civil rights lawyer and director of Muslim Advocates, showed Zuckerberg a number of viral Facebook posts that incite violence against Muslims on and offline.
One bloody meme shows the corpse of a Middle Eastern soldier and declares that the “only good Muslim is a fucking dead one.” In another, a member of a local school board in New Jersey named Dan Leonard reposted a news story about Rep. Rashida Tlaib and wrote that his “life would be complete if she/they died.” Leonard calls Rep. Ilhan Omar a “terrorist” in another post. Leonard reportedly refused to step down from the school board amid a firestorm of controversy but is not seeking reelection. Tlaib and Omar are among the most high-profile women of color in Congress and have faced racist attacks by President Donald Trump, and death threats from some of his followers.
“From our perspective, we have seen this issue play out not just at the highest level of the president, but it’s just as prevalent at the local level,” Khera told Truthout in an interview after meeting with Zuckerberg. “There is a problem throughout with these elected officials.”
Facebook is facing criticism from all corners for allowing hate speech and political disinformation to flourish on its social media site, where 43 percent of consumers get at least some of their news and where Russian trolls famously sought to sow discord and sway the 2016 elections. A recent study by the online activist network Avaaz found that the top 100 political “fake” news stories debunked by fact checkers were posted 2.3 million times on Facebook and attracted nearly 160 million views – enough disinformation to reach each of the 153 million registered voters in the 2019 elections. Hate speech is prohibited on Facebook, but Khera and other civil rights leaders say it continues to spread, inciting violence at home and across the world.
“There are a lot of bad actors who are looking to abuse your platform not just to denigrate people and demonize people, but also to stoke attacks of violence and even the mass murder of people based on their religions and other protected statuses,” Khera said she told Zuckerberg.
But what if those bad actors are also politicians who shape public discourse? In an era when white nationalist violence is rising and the president of the United States spews racist demagoguery along with thousands of lies, the debate over “free speech” on Facebook presents a highly partisan and financial conundrum for a company that raises revenue by tracking user data and selling targeted ads. After all, Trump is spending millions of dollars on Facebook ads in his bid for reelection and has already run at least one ad that is demonstrably false.
Echoing alt-right myths spread by conspiracy theorists, Trump and right-wing commentators have accused social media outlets of bias against conservatives and pressured Zuckerberg to meet with them in person. Political disinformation appears in all sorts of ideological flavors, but Trump as well as the right-wing “news” outlets that back him are notorious for spreading falsehoods and trafficking in conspiracy theories that attract attention on the far right. Of the debunked “news” stories analyzed by Avaaz, 69 percent were biased toward the right. Both independent analysts and Facebook employees have concluded that the algorithms, data tracking and targeted advertising used by Facebook and platforms such as YouTube drive certain users toward more politically extreme content to the benefit of white supremacists and demagogues.
Most recently, Facebook has come under fire from more liberal critics for allowing candidates and elected officials to run political ads containing lies and misinformation and for exempting politicians from its third-party fact-checking program. Facebook generally considers a politician’s speech to be “newsworthy” and therefore exempt from its Community Standards. In an email, Facebook spokesman Ruchika Budhraja said the “newsworthy” exemption has also been used to allow images of war and famine that might otherwise violate the rules, because they serve to inform the public.
“Hate speech won’t be allowed to remain on the platform; rather, we conduct a balancing test to evaluate the public interest value and the risk of harm associated with the offending speech,” Budhraja said in an email to Truthout.
In other words, hate speech is not allowed on Facebook, but a political figure’s hate speech would be if Facebook decides the “public interest value” of allowing it to stand outweighs the risk of inciting violence that could harm people in real life. Statements made by lower-level officials may have less public interest value, but Budhraja said the same “balancing test” between newsworthiness and risk of harm applies. Budhraja added that Facebook takes “country-specific” factors into account, such as whether an election or a war is underway, and whether the country has a free press.
“In evaluating the risk of harm, we will consider the severity of the harm,” Budhraja said. “Hate speech and other content that has the potential to incite violence poses a safety risk that we will take into account.”
Adriel Hampton, a digital media strategist who gained nationwide attention by running for governor of California and pledging to run false political ads in protest of Facebook’s policy, said Zuckerberg is “bending over backward” for Trump and the right wing. Facebook has said any false ads from Hampton’s campaign would be taken down, a move Hampton considers a blatant double-standard.
“I want Zuckerberg to stop giving Trump and the right-wing exemptions to break the rules on Facebook,” Hampton said in an interview with Truthout.
Trump and conservative activists, Hampton said, are trying to pull the world’s largest social media platform to the right. Hampton said left-wing pages that violated Facebook’s rules have been taken down, but right-wing pages that violate the same rules are still operating. Facebook is reportedly placing the extremist, alt-right news site Breitbart under the “high-quality” section of its new flagship news service, despite the publication’s record of publishing conspiracy theories and false information.
Facebook argues it’s not the company’s job to decide whether politicians are telling the truth. Zuckerberg has attempted to frame the debate around “free speech” and transparency, arguing the public should hear what politicians have to say and judge for themselves. But what if politicians say things that could incite violence against Muslims, people of color and other groups? In a recent speech, Zuckerberg agreed there should be limits on speech that could put people in danger, although he questioned how “dangerous speech” should be defined. It should be noted that the Constitution’s First Amendment protects speech from censorship by the government, not by corporations like Facebook.
Civil rights advocates say the definition of dangerous speech is clear — it is speech that can increase the risk that members of its audience will condone or participate in violence against another group — and deadly white nationalist attacks on synagogues and mosques show just how dangerous it can be. In one post Khera shared with Zuckerberg, a Texas county commissioner wrote “HOW TO WINK AT A MUSLIM” above an image of a man in a cowboy hat squinting down the double-barrel of a shotgun. Trump’s public racism goes back decades, and his anti-immigrant rhetoric has been condemned for helping to spark white nationalist violence, such as the massacre of Latina and Latino shoppers at a Wal-Mart in El Paso.
While hate speech is prohibited on Facebook and can be removed, Khera said a politician could still turn a racist post into a political ad, pay Facebook to place it with target audiences and share it widely under the company’s current policies. The same goes for misinformation. While Facebook has pledged to “demote” previously debunked content posted by politicians and display “related information” from fact-checkers, politicians can still use the platform to spread lies and conspiracy theories.
“Historically, in the United States and around the world, demagogues in positions of power pose the greatest danger to the voting rights and physical security of people of color,” wrote Kristen Clarke, director of the Lawyers’ Committee for Civil Rights Under Law, in an open letter to Zuckerberg. “These harms will be further exacerbated by a presumptive ‘newsworthiness’ exception — without disclaimer, demotion, or quarantine — for rules violations by politicians and by your decision not to do any fact checking of ‘opinion’ content.”
Khera told Zuckerberg that, for Muslims in the U.S., the problem is bigger than online hate speech. For years, politicians have used the Muslim community as a “political weapon” to stoke outrage among their supporters and polarize the electorate. However, Khera’s organization is nonpartisan and does not have any skin in the electoral game. Muslim Advocates wants to keep Muslim communities safe from harassment and violence, both online and in the physical world.
“From my perspective, this isn’t just about Breitbart and Daily Caller,” Khera said. “There is a global problem here.”
Khera told Zuckerberg that Muslim communities make up only 1 percent of the U.S. population, but one quarter of the world’s population. Facebook is a global brand, and the fact that Muslims continue to be “harassed and terrorized” on the platform should have him seriously concerned. It’s not just a problem in the United States. China has used Facebook and other social media outlets to spread disinformation about its concentration camps for Uyghur Muslims. In Burma (also known as Myanmar), lies spread on Facebook by anti-Muslim politicians contributed to genocide and mob violence targeting the Rohingya Muslim minority.
“Even the United Nations released a report finding that Facebook was complicit in the genocide of the Rohingya in Myanmar,” Khera said.
Khera said there have been some “modest improvements” at Facebook but not the “substantial change” that is needed to remove dangerous and dehumanizing content for good. Indeed, Facebook has made a number of changes to how news is presented on its platform and has a new policy that “bans posts from people who intend to bring weapons anywhere to intimidate or harass others, or who encourage people to do the same,” according to Budhraja. Facebook is also undergoing an internal “civil rights audit,” although Clarke told Zuckerberg that his recent statements to lawmakers and “disregard” for the audit “demonstrate that you still do not grasp — or do not care about — the gravity of the harm you are causing.”
Khera said Facebook uses artificial intelligence to remove 99 percent of child pornography and online propaganda spread by ISIS from its platform. Khera wants to know why Facebook doesn’t use the same technology to find and remove hate speech targeting Muslims, people of color and other groups. Facebook apparently wants its content referees to have a lot more wiggle room in deciding what’s in the “public interest.” If Leonard’s post suggesting Rep. Tlaib should die had been immediately removed, he may not have faced public backlash. At the same time, Tlaib recently broke down in tears while describing death threats aimed at herself and Rep. Omar, the nation’s first two Muslim congresswomen.
What Zuckerberg told Khera and civil rights leaders at his home in Palo Alto remains off the record. However, it appears that if Facebook were to use its artificial intelligence to root out dangerous hate speech, or require that political ads be free of outright lies, certain pundits and politicians — including the president himself — may find themselves censored. After coming under fire from the right, that may be one controversy Zuckerberg is trying to avoid.