How to scrub hate off Facebook, Twitter and the internet

Originally published July 9, 2017


By Ian Sherr

Brittan Heller doesn’t know quite what caused it.

Maybe she turned a man down for a date too quickly, bruising his pride. Maybe she just bothered him in some way.

Whatever it was, Heller inadvertently unleashed waves of attacks from a fellow Yale law student when she did whatever she did a decade ago.

Back then, Facebook didn't have the reach it has today. So Heller's tormentor raised an online mob on AutoAdmit.com, a message board for law students and lawyers. Soon, posts appeared accusing her of using drugs and of trading sexual favors for admission to the elite school.

That sucked her into a larger maelstrom raging on the message board. Other female students at Yale were being accused of sleeping with professors to get better grades. Behind pseudonyms, some posters said they hoped the women would be raped.

Often, this is where the story ends. The women, harassed and degraded, close their accounts or drop out of school, anything to put distance between themselves and the anonymous hatred.

Heller, now a lawyer for the Anti-Defamation League, and her peers chose to fight, suing AutoAdmit to reveal the names of their harassers. They eventually settled. The terms of the settlement are confidential, Heller says, but the experience set her on the path toward a career fighting hate speech.

“My work would be a success if no one ever needed me,” Heller says. But so far, it’s the opposite. “We’re in a growth industry.”

Hate is everywhere these days. It’s hurled at people of different skin colors, religions and sexual orientations. It isn’t limited by political view; it’s not hard to find hateful words and acts on the left and the right. And it takes place everywhere: airports, shopping malls and, of course, on the internet.

Hate groups have taken up residence online. The hateful meet up with like-minded gangs on sites like Reddit, Voat and 4Chan, terrorizing people they don’t like or agree with. Because much of the internet is public, the medium magnifies the hateful messages as it distributes them.

The ADL, a civil rights group, found that about 1,600 online accounts were responsible for 68 percent of the roughly 19,000 anti-Semitic tweets targeting Jewish journalists between August 2015 and July 2016. During the same period, 2.6 million anti-Jewish tweets may have been viewed as many as 10 billion times, the ADL says.

It would be bad enough if digital hate stayed locked up online. But it doesn't. It feeds real-world violence. In May, a University of Maryland student who reportedly belonged to a Facebook page where white supremacists shared memes was arrested in the stabbing death of a black Army lieutenant. A few days later, a man who had reportedly posted Nazi imagery and white nationalist ideology to his Facebook page went on a stabbing spree in Portland, Oregon, after threatening two women, one of whom was wearing a Muslim headdress. Two Good Samaritans were killed. The man who opened fire on a Republican representatives' baseball practice was reportedly a member of Facebook groups with names such as "The Road to Hell Is Paved with Republicans" and "Terminate the Republican Party."

And that doesn't count the garden-variety taunts people get because of how they look, or the bomb threats or vandalized cemeteries.

The legal response has varied from place to place. In the US, where freedom of speech includes the expression of hate, activists are pushing lawmakers to draw a line at harassment, and treat it the same whether it’s in real life or over the internet.

In other countries, like Germany, where hate speech that includes inciting or threatening violence is already outlawed, the government is working with social networks like Facebook and Twitter to ensure enforcement. Last month, Germany passed a law that could fine social media companies more than $50 million if they fail to remove or block criminally offensive comments within 24 hours.

So far, tech has proved ineffective at curbing online hate speech, and that’s not just because of the internet’s reach and anonymity. Take today’s tools that automatically flag derogatory words or phrases. Humans get around them through simple code words and symbols, like a digital secret handshake. So instead of the slur “kike” for Jew, they write “skype.” The smear “spics” for Hispanics becomes “yahoos,” “skittles” stands for Muslims (a reference to Donald Trump Jr.’s infamous comparison of the candy to Syrian refugees) and “google” stands for the N-word.
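The weakness described above is easy to demonstrate. Here is a minimal sketch of an exact-match blocklist filter, the kind of simple flagging tool the passage refers to; the blocklist contents and messages are hypothetical placeholders, not any platform's actual implementation.

```python
# A naive keyword filter: flag a message if any word is on the blocklist.
# "slur1" and "slur2" are placeholder stand-ins for real derogatory terms.
BLOCKLIST = {"slur1", "slur2"}

def is_flagged(message: str) -> bool:
    """Return True if any word in the message exactly matches the blocklist."""
    words = message.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

# A listed term is caught...
print(is_flagged("that slur1 again"))      # True
# ...but an agreed-upon code word ("skype", "google") slips through,
# because the filter matches strings, not intent or context.
print(is_flagged("those skype people"))    # False
```

Because the filter has no model of meaning, any community that agrees on substitute words defeats it instantly, which is why harassers' "digital secret handshakes" work so well against this class of tool.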

Now tech companies, activists and educators are devising new approaches and tools that, for instance, hide toxic comments, identify who we are and verify the content we see, or make us stop and think before we post. They’re also experimenting with virtual reality, potentially putting us in the shoes of a victim.

Their goal: to encourage civility, empathy and understanding.

“It’s not impossible,” says Caroline Sinders, a Wikimedia product analyst and online harassment researcher. “It’s fixable.”

What form that fix will take is anyone’s guess. This problem, after all, has existed since before the internet was even a thing. And right now most efforts to curb online hate are in their early stages. Some may show promise, but none appears to be the answer.

“It’s going to be a combination of different approaches,” says Randi Lee Harper, a coder who founded the Online Abuse Prevention Initiative after being targeted by online hate mobs.

Read the rest of this story at