Beware the Never-Ending Disinformation Emergency
“If you put up this whole interview,” Donald Trump said during a podcast livestream on Wednesday afternoon, “let’s see what happens when Instagram and Facebook and Twitter and all of them take it down.”

Trump named the wrong platforms; the podcast, Full Send, a mildly Rogan-esque bro-fest, was streaming on YouTube. But otherwise, his prediction made sense, because during the interview he reiterated his claim that he, not Joe Biden, was the rightful winner of the 2020 election. “The election fraud was massive,” he said, during one of several riffs on the theme. “I call it ‘the crime of the century.’ We’re doing a book on it.”

YouTube has a strict policy against claims that the 2020 election was stolen. Yet the video stayed up for more than 24 hours, drawing more than 5 million views. YouTube took it down Thursday evening, a few hours after WIRED inquired about it. It’s the latest example of how platforms can struggle to enforce strict misinformation policies—and raises the question of whether this kind of content ban makes sense in the first place.

Consider what happened to the Hill.

Last week, YouTube suspended the Hill, a political publication in Washington, DC, for seven days after its YouTube channel aired clips of Trump claiming election fraud. One came from his recent speech at the Conservative Political Action Conference. The second was a snippet from a Trump interview on Fox News, which was broadcast on the Hill’s daily commentary show, Rising.

The latter clip wasn’t even primarily about the election. In it, Trump gives his less-than-statesmanlike analysis of the Russian invasion of Ukraine, which the Rising hosts proceeded to mock. But right at the end of the clip, Trump says, “And it all happened because of a rigged election.”

This was enough to trigger YouTube’s election integrity policy, which prohibits “false claims that widespread fraud, errors, or glitches changed the outcome” of past presidential elections. Under the policy, you can only include those claims if you explicitly debunk or condemn them. That’s where the Hill went wrong. “Upon review, we determined that the content removed from this channel contained footage claiming the 2020 US presidential election was rigged (which violates our election integrity policy) without sufficient context,” said YouTube spokesperson Ivy Choi in an email. One “strike” gets you a warning, two gets you a weeklong suspension, and a third gets you kicked off the platform.

With all the attention paid to online misinformation, it’s easy to forget that until 2020 the big platforms generally refused to remove content purely because it was false. It was Covid-19, and then the election, that got them past their squeamishness about weighing in on factual disputes. Two years into the pandemic and more than a year after January 6, however, it’s worth asking: What’s the endgame for policies adopted during an emergency?

It’s important to remember that platforms have very good reasons for not wanting to be the “arbiters of truth,” in Mark Zuckerberg’s famous words. As Trump seems to understand, it feeds people’s sense that there are ideas that powerful entities are afraid of discussing. “If we talk about the election fraud, they will not cover it,” Trump said on the podcast, referring to the “corrupt” media. He challenged the hosts to stand up to the censorious social media overlords. “Let’s see what happens when they threaten you,” he said. “It’s a test.” And, of course, platforms will inevitably restrict perfectly legitimate content while letting bad stuff slip past, because no one can do perfect enforcement. In addition to the podcast interview, Trump’s full CPAC speech—showing a clip of which helped get the Hill suspended—was still available, from CPAC’s YouTube channel, 11 days after it first went up. YouTube also took that video down only after WIRED inquired.

In the Hill’s case, YouTube’s election integrity policy seems to rest on particularly questionable assumptions. Notice that when I quoted Trump’s comments from the podcast, I didn’t add that his claims were false. Were you therefore at risk of believing them, if you didn’t already? The unstated premise of a policy like YouTube’s is that, in the year 2022, there are a meaningful number of people out there who would have been.
