Enough. Facebook Needs to Step Up and End the Hate.

If you’ve followed any of our content over the years, you know we don’t like to jump on the hot-take bandwagon. We prefer to listen more than we speak. We see ourselves as geeks, not gurus. We like to see the spotlight on our customers, not on ourselves. But the time comes when you need to stand up and be counted for what you believe.  

We’ve struggled over the past few months with whether and how to weigh in meaningfully on what is happening in America today. As Dave Chappelle put it so poignantly in his 8:46 monologue, the movement against racism in America has its own voice, and has no need for what we think.

But we are a social media company, one that funnels media dollars to social networks for advertising, and industry issues that bear on racism in America do matter to us and to our customers. To the limited extent we have any platform to raise our voice among our peers, these are the topics on which we will end our silence, and today that starts with Facebook.

Many of you may already be aware of yesterday’s meeting between Facebook executives Mark Zuckerberg and Sheryl Sandberg and the leaders of the #StopHateForProfit campaign. The campaign has organized a boycott of advertising spending on Facebook among more than a thousand advertisers, along with a series of recommendations for Facebook to address the rampant hate being spewed on the social network. During the meeting, Facebook did not address any of the issues that led to the boycott, nor have they offered anything more than lip service toward policies and practices to root out hate speech on their platform.

For now, I’ll set aside the #StopHateForProfit recommendations themselves, though I think they’re worth reading and discussing. Instead, I want to focus on the content Facebook continues to allow on their platform today, and why it needs to stop. In our view, Facebook is willfully playing a dangerous game that mines profit from discord. That’s what we want to shine a light on and call out for the poison that it is.

Yesterday, a link to a YouTube video was shared on Facebook, as shown below. The post features a picture of the car that was used to run down a group of BLM protesters in Seattle, killing one of them. The image shows the mascot of the alt-right white nationalist movement laughing at the protester’s violent death and blaming the victim for it. The links associated with the image led to websites where more hateful memes are spread and merchandise is sold to support them.

Facebook thinks this content is a-okay. It is not.

YouTube pulled down the video linked in this post and suspended the user’s account. Facebook did nothing. Despite the post being flagged and reported as hate speech, Facebook responded to the complaints by saying the post may be offensive to some people but does not violate their community standards. Rather than blocking the post, they recommended that anyone offended by it block the offending account from their own feed. Sorry you found this offensive. Hands washed.

This is the text of the “Community Standards” on “Hate Speech” that Facebook cites when denying that such posts break the rules:

We do not allow hate speech on Facebook because it creates an environment of intimidation and exclusion and in some cases may promote real-world violence.

We define hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We protect against attacks on the basis of age when age is paired with another protected characteristic, and also provide certain protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation…

Sometimes people share content containing someone else’s hate speech for the purpose of raising awareness or educating others. In some cases, words or terms that might otherwise violate our standards are used self-referentially or in an empowering way. People sometimes express contempt in the context of a romantic break-up. Other times, they use gender-exclusive language to control membership in a health or positive support group, such as a breastfeeding group for women only. In all of these cases, we allow the content but expect people to clearly indicate their intent, which helps us better understand why they shared it. Where the intention is unclear, we may remove the content.

In addition, we believe that people are more responsible when they share this kind of commentary using their authentic identity.

There’s a lot to unpack here. First, the post in question meets none of Facebook’s criteria for allowing what might otherwise be considered hate speech (raising awareness, self-reference, or membership exclusivity), and it does not indicate its intent as Facebook “expects.” Second, the post also fails to meet Facebook’s own narrow definition of hate speech, creating a giant and profitable loophole for Facebook to sustain.

It seems that Facebook is pinning their tolerance of such clearly hateful content on the fact that it is not a “direct attack” on a specific person, such as a threat or slander, but rather a “general attack” on an unspecified target: you have to know the recent news and recognize the alt-right mascot to see this post for the hate speech it is. So Facebook thinks this content is a-okay. It is not. The fact that they profit off this kind of discord, because controversy drives clicks and clicks drive page views for paid ads, makes it despicable.

In our view at SocialRep, it is worth standing up against this kind of hate speech by avoiding all paid advertising on the platform until Facebook starts using their algorithmic genius to root this content off their platform for good. No, this is not a First Amendment issue; the First Amendment restrains only the government from limiting speech. Facebook, as a business, can choose to ban all hateful propaganda, in addition to the narrowly defined “direct attack” hate speech they currently disallow, with the same sophistication and nuance that created this loophole in the first place. And we, as private companies using paid and organic media to reach customers, can choose not to spend ad dollars on any platform that supports such hate.

Irrespective of the additional recommendations and calls for accountability, until Facebook redefines its policies to disallow hateful propaganda as well as direct attacks, no business should spend money on their platform.

#StopHateForProfit