
Thursday, November 1, 2018

Online hate is a deadly threat. When will tech companies finally take it seriously?

Patrisse Cullors
When pipe bombs were discovered across the country and Cesar Sayoc was arrested in Florida after two harrowing days, I was not surprised that the suspect fit the mold of my worst online harassers: a white supremacist Donald Trump supporter targeting people he considered critics of his leader's agenda.
Then, as if that weren't enough, we learned of horrifying killings at a Kroger in Kentucky and a synagogue in Pennsylvania. As people began investigating the backgrounds of the three suspects in these hate-fueled crimes, many in the media and elsewhere finally realized something organizers have known for years: online harassment is a predictor of this kind of racialized violence.
For so many black and other marginalized users, tech platforms have long been hostile and toxic environments, but enough is enough. Tech companies must do much more to protect underrepresented communities and combat online hate, and they must start now. It shouldn't take tragedies to get tech companies to address hate speech and online harassment proactively.
That's why racial justice, civil rights, anti-hate and open internet organizations have come together to call on tech companies to reduce hate on their platforms. This new project, "Change the Terms: Reducing Hate in Tech," gives tech companies clear steps to help marginalized voices engage more freely on their platforms without fear of hateful harassment or erroneous censorship by biased algorithms and improperly trained moderators.
It won't be easy to reduce hate and extremism online, but there are steps every tech company can take to address this issue more effectively.
First, tech companies need new training materials for content moderators, created by anti-hate experts with relevant social, cultural and historical understanding of power dynamics. Users should also have the right to appeal any content removal or account suspension.
This change is necessary because tech companies' current approach to moderation is failing. Democratic strategist Rochelle Ritchie recognized Sayoc right away and tweeted that she'd reported harassment from him to Twitter. His account was allowed to continue, unmoderated. Meanwhile, racial justice activists have repeatedly had their posts removed and accounts suspended for publicly sharing the harassment they've received online and in the real world.
Tech companies also need to make their content moderation training materials publicly available so anti-hate advocates and the public can make sure the training accurately reflects what we need to feel safe on these platforms. Recent leaks of moderation materials have shown these documents to be woefully inadequate; one leaked Facebook manual featured passages lifted straight from Wikipedia. When asked for comment by one media outlet, Facebook directed it to the "community standards" document the company had released to the public. A $500 billion company such as Facebook should have higher standards than a college freshman rushing to finish a term paper.
Tech companies also need to be fully transparent about what content is being removed and what isn't. With greater transparency around content moderation, platforms can help ensure that marginalized voices are protected, while those who seek to harm, harass and intimidate are quickly removed.
Finally, and most importantly, tech companies need to act now, before more people get hurt. When it became clear that the synagogue shooting suspect used the alt-right social network Gab to spew anti-Semitism unchecked, PayPal and Stripe finally moved to cut off that platform. But people had been sounding the alarm about Gab and its racist content for years; PayPal and Stripe should have taken action long ago.
These actions are desperately needed and long overdue. As we reel from the violence of the past weeks, white supremacists and other hate groups continue to use online platforms to organize, fund, normalize and recruit supporters for racism, sexism, xenophobia, religious bigotry, homophobia and transphobia. Their largely unchecked behavior online chills speech from marginalized voices, curbs democratic participation and leads to real-world violence of the kind we've all now seen.
It is possible to have an open and inclusive internet while simultaneously limiting hate, violence and harassment. By enacting stronger policies and robust transparency measures, tech companies will not only reduce hate online more effectively, they will also begin to restore public trust in their ability to handle these problems.
I know firsthand what it meant to be called a terrorist after I spoke out against police violence. It meant that I became a target. Tech companies have the means and the responsibility to change this; they just need to follow the blueprint laid out in "Change the Terms: Reducing Hate in Tech."
