The manipulated clip, slowed to make Pelosi sound as if she were slurring her words, racked up millions of views on Facebook the following day. It was posted to YouTube, and on Thursday night was given a boost on Twitter when Rudy Giuliani, President Trump’s personal lawyer and former mayor of New York, shared a link with his 318,000 followers.
By Friday, the three social media giants were forced to respond to this viral instance of political fakery. How they dealt with the issue, three years after being blindsided by a wave of fake news and disinformation in the 2016 election cycle, may serve as a harbinger of what’s to come in 2020.
And for those who had hoped that new technology, stricter standards and the full attention of these powerful Silicon Valley companies might stem the tide of lies, the case of the Pelosi video does not bode well.
Facebook, where the clip found its largest audience, refused to take it down. A spokesperson for the company said that the video does not violate Facebook’s Community Standards, adding in a statement that “we don’t have a policy that stipulates that the information you post on Facebook must be true.”
Instead, Facebook ran the video through its official fake news process, codified after the company admitted it had a problem in late 2016. It submitted the clip to a third-party fact-checking company, which rated it “false.” Following that judgment, Facebook drastically reduced how often the video automatically appears in users’ news feeds and appended an info box below it linking to articles that identify the clip as a fake.
“We work hard to find the right balance between encouraging free expression and promoting a safe and authentic community,” the spokesperson said. “We believe that reducing the distribution of inauthentic content strikes that balance. But just because something is allowed to be on Facebook doesn’t mean it should get distribution. In other words, we allow people to post it as a form of expression, but we’re not going to show it at the top of News Feed.”
YouTube deleted all copies of the video from its site after a Washington Post report on the clip brought it to the company’s attention. YouTube said in a statement that the clip violated its policies, adding that it did not “surface prominently” on the site or in search results.
The Google-owned video platform said last year that it was tweaking its algorithms to promote more authoritative news sources. The company also introduced panels, similar to Facebook’s info boxes, that appear below videos dealing with common conspiracy theories or produced by state-run media outlets to give viewers more context, though a BuzzFeed investigation in January found that they were used inconsistently.
Twitter declined to comment on the Pelosi clip in particular and has not taken formal action.
Giuliani deleted his original tweet linking to the clip, then posted a follow-up that appeared to apologize and included an image of the Atlanta Hawks bench celebrating. Half an hour later, he defended sharing the video.
“Nancy Pelosi wants an apology for a caricature exaggerating her already halting speech pattern,” Giuliani tweeted. “First she should withdraw her charge which hurts our entire nation when she says the President needs an ‘intervention.’ People who live in a glass house shouldn’t throw stones.”
Trump himself tweeted “PELOSI STAMMERS THROUGH NEWS CONFERENCE” with a different video from Fox Business that stitched together moments in which Pelosi stumbled over her words during a press conference.
Kat Lo, a researcher at UC Irvine who studies content moderation, said companies like Facebook and YouTube have improved since 2016 — but not enough.
“The changes are incremental. It’s not like they’ve solved anything, but they’ve made progress,” she said.
She believes YouTube’s approach in this case was more effective than Facebook’s strategy of including a disclaimer and reducing the video’s presence in the News Feed.
"The best way to counter disinformation is to deplatform it. To make it not visible and not shareable,” Lo said, adding that research indicates fact-checking measures aren't always effective.
It’s well documented that Twitter has been unable to adequately manage hateful content on its platform, Lo said, and while the company has been hiring teams to combat the problem, it has not shown it can successfully intervene when content that fuels radicalization and disinformation circulates.
Facebook and others are still not well equipped to counter complex, targeted manipulation campaigns, Lo said, citing Russian efforts to spread disinformation ahead of the 2016 election and the way members of Myanmar’s military used Facebook as a tool to instigate genocide, spreading propaganda that vilified the country’s mostly Muslim Rohingya minority group.
UC Berkeley computer science professor and digital forensics expert Hany Farid, who studies methods for detecting “deep fakes” — more advanced false videos that use sophisticated software to create realistic clips fabricated from whole cloth — noted that social media’s broad reach aids the spread of disinformation, no matter the format.
“The threat of manipulated video of any form remains significant because of the declining level of discourse, particularly on social media, the public’s seeming inability or lack of interest in distinguishing between real and fake news, and our willingness — in fact eagerness — to believe the worst in people that we disagree with,” Farid said in an email.
While it’s always a challenge to correct the record once false information is widely distributed, the task becomes even more difficult when prominent figures use their position of “extraordinary power” to amplify it, Farid added.
Pelosi, for her part, has stayed quiet. In a statement to the Washington Post, her deputy chief of staff Drew Hammill said, “we’re not going to comment on this sexist trash.”
©2019 the Los Angeles Times. Distributed by Tribune Content Agency, LLC.