Since last week’s election of Donald Trump to the White House — which left many looking for someone to blame — Google, Facebook and, to a lesser extent, Twitter have been besieged by critics accusing the companies of promoting fake news that some media experts believe may have impacted Americans’ decisions at the polls.
By Monday, it seemed, Google had decided to do something about that.
The search-engine juggernaut announced that it would no longer allow websites that peddle falsehoods as fact to use its online advertising services.
Hours later, news broke that Facebook would do the same. The problem: That’s only half true. Facebook, which keeps a list of the kinds of websites barred from its advertising service, had already been cutting off fake news sites from its ad tools for months.
And yet, fake news stories were shared rapid-fire on social media for months leading up to last week’s vote.
Fake news sites, which run the gamut of political leanings, usually fall into one of two categories: organizations that seek to spread misinformation to manipulate people and sow mistrust, and companies or individuals that use sensational — and false — stories to attract readers and make money through advertising.
Facebook and Google “are essentially polluting an information ecosystem with trash, with garbage, and it’s clear that that garbage has an impact,” said Gabriel Kahn, a professor at the University of Southern California’s Annenberg School for Communication and Journalism. “When you create a marketplace that puts two unequal things on the same level, you’re democratizing unequal information. You wouldn’t put a carton of milk next to a carton of paste and tell people they’re the same thing because they both look white.”
The companies have largely declined to acknowledge the role they play in the creation and dissemination of information, critics said, though calls for them to do so have grown louder. Barring fake news sites from using their advertising tools to turn a profit may hamper the seemingly unending barrage of hoaxes and hooey online, but, experts said, it’s not likely to change the fact that the Internet is a minefield of misinformation.
That’s because online-advertising giants like Google and Facebook are a large reason why such sites exist — they drive traffic right to them, Kahn said.
Google’s new directive targets its AdSense tool, which allows companies to buy advertisements on millions of independent websites. (Often, readers find those same websites through Google searches.) Banning fake news sites means they will no longer profit from those Google-sold ads.
The new policy does not affect false news reports that appear in Google search results. A few days after the election, a search for “final election count” displayed a result from a fake news site that reported Trump won the popular vote. (He did not. Though votes are still being counted, Democratic candidate Hillary Clinton has maintained a lead.)
“Moving forward, we will restrict ad serving on pages that misrepresent, misstate or conceal information about the publisher, the publisher’s content or the primary purpose of the Web property,” Google said in a statement.
Facebook, which has been barring false news sites from using its Audience Network for months, said the only change this week was an update to its written policy Monday, adding “fake news” alongside gambling and adult content to the categories of websites banned from its ad tools.
“We do not integrate or display ads in apps or sites containing content that is illegal, misleading or deceptive, which includes fake news,” Facebook said in a statement. “Our team will continue to closely vet all prospective publishers and monitor existing ones to ensure compliance.”
Neither Google nor Facebook would specify how it would determine which websites count as fake.
Part of the challenge is that it’s not easy to distinguish between real and false information.
When email servers detect spam, they scan for certain keywords to flag scams and unwanted ads. That’s harder to do with false articles, because detecting them requires a contextual understanding of what’s written.
“What’s fake in the story is the message it’s conveying,” said Aram Galstyan of USC’s Information Sciences Institute. “Writing the story using the same set of words — in one case you are telling the truth, but in one case a complete lie.”
Detecting fake news stories through keywords could result in legitimate articles covering a hoax or lie also getting flagged. Machines are also not as good at understanding sarcasm, Galstyan said.
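A small, hypothetical sketch in Python illustrates the point. The keyword list and headlines below are invented for illustration and do not reflect any company’s actual filter; the false positive is the same kind Galstyan describes, since a hoax headline and a story debunking that hoax share most of their words.

```python
# A minimal, hypothetical sketch of keyword-based flagging in the spirit of
# spam filters; the keyword list and headlines are invented for illustration.

SUSPECT_KEYWORDS = {"rigged", "secretly", "shocking"}

def keyword_flag(headline: str) -> bool:
    """Flag a headline if it contains any suspect keyword."""
    words = {w.strip(".,:!?\"").lower() for w in headline.split()}
    return bool(words & SUSPECT_KEYWORDS)

fake = "SHOCKING: Election was secretly rigged, insiders say"
debunk = "No, the election was not secretly rigged: debunking a shocking viral claim"

print(keyword_flag(fake))    # True  -- the hoax is flagged
print(keyword_flag(debunk))  # True  -- but so is the story debunking it
```

The filter cannot tell the two apart because, as Galstyan notes, what is fake lives in the message, not in the words themselves.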
But using machines to detect fake news articles has an advantage: Unlike humans, “the computer never gets tired,” said John Nay, a Ph.D. student at Vanderbilt University and co-founder of PredictGov, which uses machine learning to predict and understand legal outcomes.
Galstyan believes Google may use a method that would combine the work of machines and humans to detect fake stories. The machines could flag sites that are suspected of hosting misinformation, and then humans could look at those sites and make a determination, he said.
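That division of labor can be sketched in a few lines. The scoring model, threshold and domains below are assumptions made for illustration, not a description of Google’s or Facebook’s actual systems: a machine assigns each site a suspicion score, and only the sites above a cutoff are queued for a human decision.

```python
# A minimal sketch of the machine-plus-human triage Galstyan describes,
# assuming a hypothetical model that scores each site; not Google's pipeline.

from dataclasses import dataclass
from typing import List

@dataclass
class Site:
    url: str
    suspicion: float       # hypothetical model output between 0 and 1
    verdict: str = "unreviewed"

def triage(sites: List[Site], threshold: float = 0.8) -> List[Site]:
    """Machines do the cheap first pass: only high-scoring sites go to humans."""
    return [s for s in sites if s.suspicion >= threshold]

catalog = [
    Site("example-news.com", 0.12),
    Site("totally-real-headlines.example", 0.93),  # invented domain
]

for site in triage(catalog):
    site.verdict = "queued for human review"   # a reviewer makes the final call
    print(site.url, "->", site.verdict)
```

The appeal of the hybrid approach is that the tireless machines narrow the haystack while humans, who handle context and sarcasm better, make the judgment calls.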
Melissa Zimdars, an assistant professor of communication and media at Merrimack College, published a list of fake news websites she had compiled, which included parody publications like the Onion. Initially, Zimdars included about 150 websites. A day later, she said, she had received submissions of hundreds more.
As these sites are found out, though, their operators may evolve, figuring out how to “reverse-engineer the algorithm that is used to flag the fake news” and then test new articles against it, Nay said. “Google likely has much better engineers than the fake news operators and will likely be slightly a step ahead in this game.”
This isn’t the first time Google has taken a stand against bad content. In 2011, Google updated its search algorithm in a change that affected about 12 percent of queries, dropping the rankings of “low-quality sites”: those that scraped information from other websites and did little to answer what people were actually searching for.
The change hit content farms that employed writers to churn out large volumes of content designed to rank high in search results.
“Google’s update basically was intended to demote those sites in the search rankings,” said Jan Dawson, chief analyst at research firm Jackdaw.
Fake news websites at the center of this month’s controversy use tactics similar to the content farms of the past, experts said. The goal is to drive heavy traffic to their sites by posting stories on Facebook and getting people to share them widely. As Google changes its algorithm yet again, experts say, operators will probably try to find a way around it.
“It’s an ongoing game of cat and mouse,” Dawson said. “The two keep adjusting themselves in response to each other.”
©2016 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.