Monday, October 29, 2018

Big Tech made the social media mess. It has to fix it

As if on cue, reports circulated of social media abuse moments after the world learned the names of alleged mail bomber Cesar Sayoc and Pittsburgh synagogue shooting suspect Robert Bowers. Sayoc threatened Democratic politicians on Twitter and posted right-wing conspiracy theories on Facebook. Bowers spouted his anti-Semitism on Gab, a smaller, far less restricted social network that has become a favorite for hate groups.
Social media has given people a platform to spew hate speech and radical beliefs to other disaffected people, amplifying what are otherwise fringe opinions. A few have turned that hate into violence.
Silicon Valley has built the biggest, most powerful companies in the world. Now tech leaders have a responsibility to rein in the creations that have grown too unwieldy for them to control.
These are businesses, and their leaders will have to figure out how to do that without destroying their companies. So far, balancing Big Tech's responsibilities to society and its duties to investors has proven difficult.
"Every hour and every dollar spent policing content is a dollar and hour spent not investing in growing the company," said PW Singer, senior fellow at liberal think tank New America and author of "LikeWar: the Weaponization of Social Media." "But this has become a political football that has hurt social media companies' branding and their reputations."

A difficult balance

Facebook (FB) announced in July it would invest billions of dollars a year to improve safety and security on the platform. The company said the effort would come at the expense of user growth and profitability. The company's stock lost about a third of its value because investors were concerned about Facebook's growth stalling.
Twitter (TWTR) has purged millions of fake accounts as part of a company effort to improve the health of the platform. CEO Jack Dorsey said in July that his plan was the "right thing to do" for the company and "for society as a whole." Investors disagree. Twitter's stock is down 25% over the past three months — and that includes a 13% rebound in October.
Facebook did not respond to requests for comment. A Twitter spokesperson declined to comment.
Each of the companies has set rules for its platform and put protocols in place to stamp out hate speech and threats of violence. Those efforts have worked with varying degrees of success.
"The platforms just aren't ready for this," said Jennifer Grygiel, professor at Syracuse University's Newhouse School of Public Communications. "Their content moderation is completely ineffective. Their AI and machine learning isn't sophisticated enough."
Twitter has largely rid the social network of ISIS recruiters. Facebook and YouTube have labeled fake news stories as such. Google has also made significant changes to YouTube's advertising policies after ads from 300 companies and organizations ran on channels promoting hate groups. Yet the work isn't close to done.
Critics say Twitter's moderation isn't working. People routinely post screenshots of death threats, hate speech and other abuse alongside Twitter's response to their complaints: "We have reviewed your report carefully and found there was no violation of Twitter rules."
"We do not look at content with regards to political ideology. We look at behavior," Twitter CEO Jack Dorsey told CNN last month. "I haven't done enough at articulating my personal objectives with this company. That's on me."
Facebook is better at weeding out threats, but it continues to struggle with governments using its platform to sow disruption abroad — particularly in the United States. Facebook's WhatsApp also has a major problem in its biggest market, India, where viral fake rumors on the mobile messaging service have been blamed for more than a dozen lynchings in the past year. Under pressure from the Indian government, WhatsApp has added several new safety features, but the company has also pushed back against some of the government's demands.
WhatsApp's problems are global. For example, Brazil temporarily banned WhatsApp in 2016 because it had become a haven for drug traffickers to avoid detection.
Google (GOOGL) has gotten its advertising problems under control, but its platform is still filled with disturbing videos.
"YouTube has strict policies that prohibit content which incites others to violence or promotes hatred," a YouTube spokesperson said. "We quickly remove videos violating our policies when flagged by our users."

Can it be fixed?

A full-fledged effort to rid the platforms of hate speech and violence would take billions more dollars, stymie growth even further and potentially send their stock prices even lower. It's unclear whether the CEOs have the stomach to do what it takes to right their ships.
"People are one of your most expensive budget line items and they're trying to do this as cheaply as possible," said Grygiel. "They're not sitting around the board room saying let's decrease our profit margins by acting today."
Although social networks have invested in tools to combat dangerous speech, to a large degree they continue to act as if they're online town halls where the free flow of ideas and expression will build up society (and create big profits for advertisers and investors). But if the shouting, trolling and violence keep up, people are going to leave those halls for calmer ones, Singer argues. After years of huge user growth, Facebook and Twitter are leveling off: Twitter lost 9 million monthly users last quarter. Facebook lost 3 million in the United States and Canada.
The reality is no one has a right to a Facebook or Twitter account. YouTube does not need to let everyone post their videos. Screening, vetting, limiting or otherwise putting up the digital equivalent of a velvet rope would change social media — but perhaps for the better.
The stakes were far smaller when Facebook was a collection of a few dozen colleges or even a few million people posting about their days. When Twitter was in its infancy, people had no idea what to make of 140-character posts, so they called it "microblogging." YouTube began as the place where cats played the piano and people taught the history of dance.
The public cared about their connections with friends and family. Investors cared about advertising revenue. And the companies' CEOs cared about massive, sustained growth.
What seemed to cross no one's mind a decade ago was the potential downside of building platforms with billions of users. So many posts can't possibly be monitored in real time. The networks gave the dregs of society the ability to reach like-minded people and poison the minds of others.
Without that foresight, Silicon Valley allowed its creations to break down. They're not beyond repair, but fixing them will be costly and difficult. Whether Big Tech is up to the task will determine the fate of its platforms.

from CNN.com - RSS Channel https://ift.tt/2zctfHA
