But in 2017, the word “fake” has taken on an entirely new meaning. When combined with terms like “global propaganda,” “disinformation efforts,” “semi-fake,” or “sensationalized and exaggerated stories,” traditional “spin” campaigns seem old-fashioned.
Lately, while the political left focuses on how Russia used fake news to potentially influence the election, conservatives complain that the mainstream media can’t be trusted. President Trump even called CNN fake news.
CNN responded by reporting on President Trump’s dangerous war on the media.
And moving beyond traditional media outlets, more and more Americans are getting their news updates from Facebook and other social media sites. Last year, Pew Research Center reported that almost half of U.S. adults get their news from Facebook.
But the “fake” situation for end users goes far beyond where Americans go for news updates. Fake apps abound as well.
Back in November 2016, The New York Times reported:
“Hundreds of fake retail and product apps have popped up in Apple’s App Store in recent weeks — just in time to deceive holiday shoppers.
The counterfeiters have masqueraded as retail chains like Dollar Tree and Foot Locker, big department stores like Dillard’s and Nordstrom, online product bazaars like Zappos.com and Polyvore, and luxury-goods makers like Jimmy Choo, Christian Dior and Salvatore Ferragamo. …
Entering credit card information opens a customer to potential financial fraud. Some fake apps contain malware that can steal personal information or even lock the phone until the user pays a ransom. And some fakes encourage users to log in using their Facebook credentials, potentially exposing sensitive personal information.”
And government apps and websites are not immune. In 2015, the FBI warned consumers of fake government websites with alerts such as: "By the time the victim realizes it is a scam, they may have had extra charges billed to their credit/debit card, had a third-party designee added to their [Employer Identification Number] card and never received the service(s) or documents requested. Additionally, all of their PII data has been compromised by the criminals running the websites and can be used for any number of illicit purposes."
Even in India, the UK Daily Mail reported in January 2017 that fraudsters are cashing in on the Digital India drive with fake government apps that steal money and personal information. “Online fraudsters are creating fake mobile recharge and government websites carrying Modi's name and picture and then circulating the malicious links on social media platforms.”
To address this global epidemic in "fake," there are planned commission reports, government inquiries around the world and even threats of legislation spelling out what actions social media companies must take.
Is ‘Fake’ Really a Tech Problem?
So how did we get into this mess?
According to Mike Elgan at Computerworld, the tech industry created this "fake" problem and the tech industry needs to fix it. While Elgan admits that we’ve always had fake news, he claims the Internet has made the situation much worse:
“Then the web arrived, followed by the social web. Now, instead of three reputable news sources, you hear facts and ideas from thousands of sources of varying reliability. These appear before your eyeballs by invisible means — by the compatibility of the content with the secret algorithms used to determine what spreads widely — and what doesn't.
A broad range of people with political, commercial or anti-social interests have been evolving their techniques for gaming the social algorithms for ever-accelerating the spread of fake news. danah boyd calls it ‘hacking the attention economy. …’”
Mike’s solution: Silicon Valley to the rescue, with new algorithms, tools, databases of bad websites and known bad actors, and much more to come.
And Seth Freeman at TheHill.com has a similar view regarding tech’s answer to these problems. “We need a tool — which I would like to challenge software engineers to develop — which can evaluate the content of written material for its truthfulness. Not judge it. Certainly not censor or redact any of it.
We need a filter, a program which functions merely to advise readers, who choose to employ the tool, as to the degree of confidence which the program has in the authenticity or correctness of written statements found in a digital environment. …”
My question: Who decides what goes into the databases? Can’t these innovations become political tools to inhibit free speech?
Meanwhile, Newsweek has a much more skeptical view of resolving the issue with tech in their report on Why Facebook Can’t Fix Fake News. “Despite recent statements by Facebook CEO Mark Zuckerberg about his efforts to rein in fake news, he won’t be able to do that easily. Zuckerberg hit on the reason when he said it would be problematic to set up Facebook editors or algorithms as 'arbiters of truth.' Because — what’s truth?”
Solution, Please?
The first step for all government pros is to understand the vast scope of the trust problems we face in the "fake" area. Remember that some customers will get their news from The Washington Post while others rely on The Washington Times. Some will prefer www.Drudgereport.com while others trust www.Huffingtonpost.com, and still others www.Facebook.com. Know your audiences and where they go to get their news. Also, one person's "spin" may be someone else's lie.
College students in Chicago think they have a tool that can help end users detect outright fake news. There are other similar tools that can help. Governments need to be checking to see where their messages are showing up online.
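For teams that want to automate part of that monitoring, here is a minimal sketch. It assumes the third-party feedparser library and Google News's public RSS search endpoint (the URL pattern is an assumption and may change over time), and the query phrase is a hypothetical line from an agency press release:

```python
import feedparser  # third-party: pip install feedparser
from urllib.parse import quote_plus

# Hypothetical phrase from an agency press release you want to track online.
QUERY = '"Department of Example announces online renewal service"'

# Assumed RSS search endpoint; the URL pattern may change over time.
feed_url = "https://news.google.com/rss/search?q=" + quote_plus(QUERY)

feed = feedparser.parse(feed_url)
for entry in feed.entries[:10]:
    # Review where the message is being repeated, and by whom.
    print(entry.title, "->", entry.link)
```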
This example from China offers some lessons on steps that can be taken to address fake news; however, many people believe that some of these techniques can be described as censorship.
On fake apps, CNET offers this article on how to spot a fake iOS or Android app (a rough scripted version of these checks follows the list). Here are three of their tips:
- Check to see who published the app. Be careful, though: scammers will use similar names, as was the case with Overstock.com (real) and Overstock Inc (fake).
- Check the reviews in Apple's App Store and Google's Play store. A real app will likely have thousands of (hopefully positive) reviews, while a fake one will likely have zero.
- Look at the publish date. A fake app will have a recent publish date, while a real one will have an "updated on" date. For example, that fake Overstock app was only published on October 26 of this year.
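Several of these checks can be scripted once you pull the listing details from an app store page. The sketch below is illustrative only; the metadata fields and the allowlist of known publishers are hypothetical, not an official store schema:

```python
from datetime import date, timedelta

# Hypothetical listing metadata pulled from an app store page;
# the field names are illustrative, not an official store schema.
listing = {
    "title": "Overstock Inc",                              # name shown in the store
    "developer": "Overstock Inc",                          # publisher shown in the store
    "review_count": 0,
    "first_published": date.today() - timedelta(days=10),  # brand-new listing
    "last_updated": None,                                  # real apps usually show an "updated on" date
}

# Your own allowlist of publishers you know to be legitimate.
OFFICIAL_DEVELOPERS = {"Overstock.com"}

def fake_app_warnings(listing, recent_days=60):
    """Return human-readable warnings based on the three checks above."""
    warnings = []
    if listing["developer"] not in OFFICIAL_DEVELOPERS:
        warnings.append("Publisher is not on the known-developer list.")
    if listing["review_count"] == 0:
        warnings.append("Listing has no reviews.")
    age_days = (date.today() - listing["first_published"]).days
    if listing["last_updated"] is None and age_days < recent_days:
        warnings.append("Brand-new listing with no update history.")
    return warnings

for warning in fake_app_warnings(listing):
    print("WARNING:", warning)
```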
There is no doubt that tools from companies like Facebook and Google can also help flag fake websites, apps and news.
For government enterprise teams, the first step is to know that there will be fraudsters who try to not only hack your apps, but also imitate them or redirect your customers to a different website. Having a trusted seal that cannot be easily duplicated can help, and there are many security companies that offer website brand protection services. One example can be found at NetNames, which includes app protection services as well.
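Alongside a commercial brand protection service, an in-house team can do some basic lookalike-domain monitoring on its own. Here is a minimal sketch, assuming a hypothetical agency domain of example.gov; it generates a few simple typo variations and uses standard-library DNS lookups to see whether anyone has registered them:

```python
import socket

OFFICIAL_DOMAIN = "example.gov"  # hypothetical agency domain

def lookalike_candidates(domain):
    """Generate a few simple typosquat-style variations of a domain name."""
    name, _, tld = domain.partition(".")
    candidates = set()
    # Character omissions: xample.gov, eample.gov, ...
    for i in range(len(name)):
        candidates.add(name[:i] + name[i + 1:] + "." + tld)
    # Adjacent character swaps: xeample.gov, eaxmple.gov, ...
    for i in range(len(name) - 1):
        swapped = name[:i] + name[i + 1] + name[i] + name[i + 2:]
        candidates.add(swapped + "." + tld)
    # Common commercial TLD swaps: example.com, example.net, ...
    for other_tld in ("com", "net", "org"):
        candidates.add(name + "." + other_tld)
    candidates.discard(domain)
    return sorted(candidates)

registered = []
for candidate in lookalike_candidates(OFFICIAL_DOMAIN):
    try:
        socket.gethostbyname(candidate)   # resolves, so someone controls it
        registered.append(candidate)
    except socket.gaierror:
        pass                              # no DNS record, likely unregistered

print("Lookalike domains that currently resolve:", registered)
```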
Also, remember that these communication issues are complicated by the growing number of phishing scams using bad links and even phone calls or non-technical types of fraud directed at our employees and residents.
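One small, concrete defense against bad links is to verify that any URL you publish, or that an employee or resident reports as suspicious, actually points at a domain your organization controls. A minimal sketch, with a hypothetical allowlist of agency domains:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the agency actually controls.
OFFICIAL_DOMAINS = {"example.gov", "services.example.gov"}

def link_is_official(url):
    """True only if the link's host is one of our domains (or a subdomain)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

for url in [
    "https://services.example.gov/renew-license",
    "http://example.gov.renewals-portal.com/login",  # classic phishing pattern
]:
    print(url, "->", "official" if link_is_official(url) else "SUSPECT")
```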
Final Thoughts
For more than a decade, security professionals have been discussing end-to-end trust, identity-proofing and more to build online confidentiality, integrity and availability.
And yet, the simple questions are still the same as 20 years ago. Trust goes back to being confident of who is speaking. Are they honest? Can the message (and messenger) be trusted? Is the technology reliable? Can the data sources and outputs be verified? Is my personal data safe?
Sadly, the "fake" situation has become much worse online lately. I mix these different themes of news, apps and websites in one blog because they are so interrelated for government public information officers (PIOs) trying to get their messages out — regardless of political party.
Local, state and federal government technology pros must engage in this online trust area as they work with constituents on a wide range of communication and tech issues. As more and more information, services and help are placed online, the integrity of government service will be questioned. Are you ready? Do you have FAQs? How about an incident management plan?
This piece by Heather Dockray is also worth reading on this topic, with highlights on the flaws with “fake news detectors,” a need for more conversations and fewer “echo chambers,” and much more thoughtful analysis of the scope of the challenges ahead. For example: “If ‘critical readers’ have any hope of stopping the fake news explosion, they'll need to do more than rely on external fact-checking tools or New York Times hyperlinks. They'll need to keep people in their Facebook feeds who they disagree with, and try to have radically empathetic, compassionate conversations with them off of the Internet.”
The "fake" problem, like so many other cyber-related issues, is not going away. Nevertheless, government technology and security leaders can take steps to build trust and guard government service reputations, as well as fight cybercrime.
Start now with educating the right people on the "fake" problem.