
Saturday, November 3, 2018

Big Tech and the midterms: The scary thing is what we still don't know

The original "fake news" problem seems quaint by comparison.
Think back two years to Election Day 2016. Like many other media reporters, I was focused on the problem of "fake news" -- truly fake stuff -- made-up stories that were designed to deceive people. Profiteers and propagandists were going gangbusters on Facebook and Twitter. The most notorious example? A story claiming that Pope Francis had endorsed Donald Trump.
After the election, the tech giants belatedly admitted that they had a real problem on their hands. They started to make changes. But then President-elect Trump claimed that "fake news" was invented by Democrats as an excuse for losing the election. (It wasn't.) And he used his bully pulpit to redefine the term "fake news" to mean stories he didn't like.
Almost any other president would have seen the dangers of viral misinformation and tried to be part of the solution... instead of worsening the problem... but I digress.
Two years later, what's changed? Are midterm voters being fooled by made-up stories? I've been talking with experts and scouring social media websites for answers. My impression is that the specific "fake news" problem is less pronounced this election season. But the threats have morphed and multiplied.
Here's what I mean: I'm not seeing simplistic "Candidate X said Y" lies showing up in the newsfeed. Facebook has staffers and machines and fact-checking partnerships in place to reduce that pollution. Twitter has been taking action too.
But the online environment is still polluted. The sites are struggling to keep up with all the state-sponsored attacks, bot networks, hate speech posts, hyperpartisan memes and other problems. Russian trolls are still hard at work. The US government recently indicted a Russian woman for "attempting to sow political and social discord ahead of the 2018 midterm election."
So the cliché is true: this is like whack-a-mole, but the grand prize is a healthier democracy.

Twitter stops voter suppression effort

On Friday evening, Twitter revealed that it had shut down a voter suppression effort that involved thousands of accounts, mostly posing as Democrats, that were discouraging people from voting. 
That's not "fake news" as it was defined in 2016. But it's pernicious nonetheless.
"Misinformation is a problem, but it's not the 100% fabricated stories that we saw in 2016," Claire Wardle, the head of First Draft, told me.
Poynter's International Fact-Checking Network director, Alexios Mantzarlis, agreed, but said "I think the jury is still out." With that caveat in mind, he said "the full-on viral 'fake news' of yore is playing a somewhat secondary role compared to (A) misinformation pushed by and for openly political purposes and (B) bizarro conspiracy theories emerging from message boards and getting amplified wittingly or unwittingly by folks on Facebook/Twitter."
Many of those bizarro theories have been about the Kavanaugh hearings and, more recently, the migrant "caravan." The NYT's Kevin Roose caught another viral "caravan" hoax on Friday. It was not taken down until he publicly pointed it out. Anti-immigration hoaxes aren't about the midterms, exactly, but could affect the results since Trump is relying on immigration fears to fuel GOP turnout.
These "stand-alone visual posts on Facebook, Twitter or Instagram" disproportionately target female candidates and candidates of color, Wardle said. "Taking advantage of the deep partisan divisions, much of the content is designed to reinforce positions and denigrate the other side, using dog-whistles, logical fallacies and false equivalency."
But "currently there is little the platforms can do with his type of content," she said. "It can not be fact-checked in a formal sense and some would argue that this type of content is 'politics as normal'. What we don't know is how to measure the drip, drip, drip of these divisive hyper-partisan memes on society."

Some of the hoaxes are just harder to see

CNN Business's own Donie O'Sullivan pointed out to me that "a lot of the conversation has moved into closed Facebook groups." There are huge MAGA groups and huge Resistance groups -- "total ideological echo chambers," he said.
Tow Center researcher Jonathan Albright seconded this point. Facebook is "flooded with fake messages, astroturfing armies, Q garbage and anti-migrant posts," he told me. But it's "all buried in groups and subpages."
Albright spent all day Friday going down these rabbit holes. He said he thinks it's ten times worse than 2016. He observed a "major effort underway to push messages out FROM groups into rest of platform. Groups are polluted with fake accounts and duplicate profiles."
Some smears start there, and then get mainstreamed by media personalities and politicians.
Charlie Warzel, who covers the "information wars" for BuzzFeed, said "what we are seeing more than anything else is the way that so much 'fake news' has rewired our discourse and reprogrammed (especially right wing) media."
Case in point: scary stories about the "caravan" have been a staple of Fox News shows.
"Hyperpartisan spin which was relegated to MAGA FB pages (which published a lot of fake news in '16 cycle) has really made its way into mainstream discourse," Warzel said, citing the bogus "false flag" theories about the recent wave of mail bombs.

Misinformation is coming from the top

Of course, any conversation about this online pollution has to reckon with the president's role as the biggest polluter of them all. His Twitter account recently shared a video that was both racist and factually inaccurate.
Obama administration official turned CNN commentator Sam Vinograd argues that Trump sows discord in much the same way that Russian hackers try to do.
Trump "knowingly spreads content that is helpful to Russia because he thinks it is also helpful to him personally and politically," she said. Silicon Valley's efforts to combat "combat fake news and bots and trolls are irrelevant" when similarly divisive content is coming from the president's account, she asserted.
The Washington Post's Philip Bump made a similar point on "Reliable Sources" recently: "You can't crack down on people for retweeting the president of the United States even though it's fake news."

What are we missing now?

These problems don't stop at the US border. Misinformation and disinformation are spreading in a variety of ways all around the world. In the past few weeks, there have been headlines about WhatsApp hoaxes affecting the presidential election in Brazil; Saudi Arabian Twitter trolls spreading lies about Jamal Khashoggi; and Iranian Facebook pages stoking divisions in the US and the UK.
Every week, it seems, investigators or reporters or social media managers discover another scheme or propaganda effort.
And O'Sullivan made a very important point to me: "What will we learn, about 4 or 8 or 12 months from now, that the Russians or Iranians were doing before the 2018 midterms?"
Here's what he asked: "Will there be something that the intel community, the platforms, and the media all missed? Are disinformation operatives in Tehran and Beijing and Moscow laughing at us right now as we get all worked up about some ad disclaimers on Facebook? Have they innovated quicker than we can keep up?"

