This well-written and thought-provoking opinion piece begins with the reality that cyberthreats are exploding globally and data breaches have led mainstream businesses to spend over $93 billion in 2017 on stopping cybercrime.
Furthermore, cyberattacks against Internet of Things (IoT) devices are skyrocketing even faster, causing Congress to get involved. Gartner anticipates that a third of hacker attacks will target "shadow IT" and IoT by 2020.
In our scary new normal online, I certainly agree with Dibrov that: “Executives who are preparing to handle future cybersecurity challenges with the same mindset and tools that they’ve been using all along are setting themselves up for continued failure.”
No doubt, old methods of defending enterprises from cyberattacks are failing and new security solutions are certainly needed. So what is the author’s solution?
Answer: Take people out of the security equation.
Dibrov writes: “It can’t be denied, however, that in the age of increased social-engineering attacks and unmanaged device usage, reliance on a human-based strategy is questionable at best…
It only took one click on a link that led to the download of malware strains like WannaCry and Petya to set off cascading, global cybersecurity events. This alone should be taken as absolute proof that humans will always represent the soft underbelly of corporate defenses. …”
The article goes on to explain that the “Amazon Echo is susceptible to airborne attacks,” and “Users may have productivity goals in mind, but there is simply no way you can rely on employees to use them within acceptable security guidelines. IoT training and awareness programs certainly will not do anything to help, so what’s the answer?
It is time to relieve your people (employees, partners, customers, etc.) of the cybersecurity burden.”
My Response: Wrong answer. While I certainly agree that humans are often the weakest link in online security and that we must do better at equipping staff, relieving your people of the cybersecurity burden heads in the wrong direction. People use the technology, and their actions, along with the processes they follow, will always be essential components of any effective security strategy for the myriad new Internet of Things devices.
The conventional wisdom remains true: solutions must involve people, process and technology. As I have written in the past, most experts say the largest percentage of our security challenges involve user actions (or interactions). Nevertheless, I am willing to concede that the percentage breakdown assigned to each category is open to debate and may differ across products, services, companies and/or IoT devices.
But before I explain in more detail why I part ways with the respected CEO and co-founder of Armis, I want to say that I certainly agree that we need much better security built into IoT devices. I think IoT security is at the cutting edge of cyberissues, and I share Dibrov’s skepticism that, in all three categories, we can keep doing the same things and expect different results.
Without hesitation, almost everyone except criminal hackers would love to see IoT devices ship “secure by default” or “secure by design,” with a hack-proof seal of approval on every box.
There is no doubt that much more needs to be done with the security built into all technology, and it would be great if we could drastically reduce IoT security flaws and the number of potential mistakes that end users can make.
However, pitting effective security awareness training and/or a positive security culture against better technology is a serious mistake and ultimately leads down a path to dismal failure. History has taught us that lasting security answers must include “all of the above,” with people, process and technology working together well.
A Short History Lesson Regarding Cybersecurity
As I ponder these concepts and especially promises of more IoT security built-in up front, I can’t help but think back more than a decade to the Bill Gates promise of better security. Here’s a very brief history reminder from the days of Microsoft’s Trustworthy Computing.
On Jan. 23, 2003, Bill Gates wrote these well-known words on "secure by default":
“Secure by Default: In the past, a product feature was typically enabled by default if there was any possibility that a customer might want to use it. Today, we are closely examining when to pre-configure products as "locked down," meaning that the most secure options are the default settings. For example, in the forthcoming Windows Server 2003, services such as Content Indexing Service, Messenger and NetDDE will be turned off by default. In Office XP, macros are turned off by default. VBScript is turned off by default in Office XP SP1. And Internet Explorer frame display is disabled in the "restricted sites" zone, which reduces the opportunity for the frames mechanism in HTML email to be used as an attack vector. …”
While I applauded these goals more than a decade ago, along with other important steps taken by Microsoft to improve security, the sad truth is that well over a decade of monthly "Patch Tuesdays" has come and gone, and 2017 saw more hacked systems than ever before. The promise of “secure by default” is far from reality across the technology industry: software, hardware and even cloud-hosted services.
Beyond Microsoft, other companies have the same issues with technology bugs and security holes that hackers eventually find. Even when technology products ship with all security settings enabled, which is not the case with many IoT devices, end users often turn off security features, fail to download critical security updates, or ignore recommended practices such as changing default passwords.
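To make that last point concrete, here is a minimal sketch, not drawn from Dibrov’s article or any specific product, of how trivially an auditor (or an attacker) can check whether a device on a network still accepts factory-default credentials. The device address, the login path and the credential list are hypothetical placeholders, and the simplified HTTP basic-auth probe stands in for the telnet scanning that real botnets such as Mirai used against IoT devices in 2016.

# Illustrative sketch only: the device URL and the credential list below
# are hypothetical placeholders, not taken from any real product.
import base64
import urllib.error
import urllib.request

# A few examples of the kind of factory-default logins that too many
# IoT devices still accept because nobody ever changed them.
DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "12345"),
]

def accepts_default_login(device_url, timeout=5):
    """Return True if the device answers HTTP 200 to any default credential."""
    for username, password in DEFAULT_CREDENTIALS:
        token = base64.b64encode(f"{username}:{password}".encode()).decode()
        request = urllib.request.Request(
            device_url, headers={"Authorization": f"Basic {token}"}
        )
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                if response.status == 200:
                    return True
        except OSError:
            # Covers HTTPError, URLError and timeouts: the login was
            # rejected or the device did not answer, so try the next pair.
            continue
    return False

if __name__ == "__main__":
    # Hypothetical address for a camera or similar device on a home network.
    if accepts_default_login("http://192.168.1.50/login"):
        print("Device still accepts a factory-default password; change it now.")
    else:
        print("No factory-default credential was accepted.")

The point is not the script itself; it is that a single unchanged password hands over everything the device’s built-in security was supposed to protect, which is exactly why people and process cannot be engineered out of the picture.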
Yevgeny Dibrov is not the first one to suggest that technology can be made secure regardless of people’s actions, and he won’t be the last. However, I am somewhat surprised that this viewpoint remains popular as we head into 2018.
Why? Beyond software development flaws, we have witnessed decades of insider threats caused by people like Edward Snowden and others who were able to use processes and weaknesses in people to overcome sophisticated data protections.
There is simply no way that IoT manufacturers will spend the kind of dollars on security that the National Security Agency (NSA) spends on technology to protect national secrets. And yet even those technology defenses were defeated by the social engineering weaknesses Snowden exploited, such as colleagues giving away their passwords.
External hackers use those same techniques today, as demonstrated at security conferences like RSA.
Recent cyberattacks against bitcoin exchanges are another example of how attacks go after weaknesses in people and process, despite solid technology that is supposedly "hack-proof." Just last week a South Korean bitcoin exchange declared bankruptcy after its second attack in less than a year. This happened even as commentators maintain that the bitcoin currency itself cannot be hacked. Perhaps true, but your bitcoin wallet can still be raided. Similar problems will continue to occur with IoT devices in the future.
Fun Movie and TV Examples to Help Understand the Role of People in Security
I want to recognize that Dibrov says: “It may be prudent, and required, for you to continue with awareness programs, but you will have to rely more on intelligent technologies and automation if you hope to have any chance at success. …”
I certainly agree.
Nevertheless, the reality is that the main point of his article comes in its final sentence: “It’s time to remove people from the discussion and move towards a more intelligent, secure future.”
Really? Take people out of the security discussion?
Side note: I immediately posted this article to my LinkedIn and Twitter feeds and received a flood of comments similar to what I am writing in this rebuttal. Some of those same comments from colleagues appear at the bottom of the article at HBR.org.
To keep this simple, I’d like to offer a fun illustration of why people cannot be removed from the central security discussion. In the (fictional) Mission: Impossible film series, the most sophisticated technical security controls are consistently overcome by exploiting weaknesses in people and processes.
Ethan Hunt (played by Tom Cruise) and a wide assortment of male and female spies in the fictional U.S. Impossible Mission Force (IMF) face an untold number of highly improbable and dangerous tasks that are action-packed, over-the-top and fun to watch. One common theme throughout these five movies (with number six coming in 2018) is how people can still defeat the most sophisticated technology safeguards put in place.
Unfortunately, hackers overcoming state-of-the-art technology defenses is not just the stuff of movies or TV shows like Mr. Robot. We have seen an untold number of ways that IoT devices can be hacked by tricking people into doing things or into ignoring recommended security best practices. Sadly, hacking IoT devices is often easier than Tom Cruise pulling off one of his movie stunts.
Everyone certainly agrees with the goal to build more-secure IoT devices. Humans certainly make mistakes, and we should aim to automate as much security as possible. Just as we safely fly planes on autopilot, shouldn’t we strive to build human-proof smart devices that are secure out of the box?
Of course. And ... I am all for more-secure IoT devices that remove the potential for most end-user errors or security mistakes.
Nevertheless, training and working with people and processes to protect data will never be an optional extra for secure enterprises, homes or individuals.
A False Choice
The HBR article by Yevgeny Dibrov appears to offer an attractive answer because it promises IoT security solutions without the very hard work of changing enterprise security culture. It offers false hope: eliminate “reliance on a human-based strategy” and get better security from a perfect technology-driven, bolt-on solution for all IoT devices. Managers imagine saving significant money by reducing the time required to train staff and to understand and implement appropriate (and secure) business processes alongside innovative technology.
This invented conflict is similar to another security paradox from a few years back that asked: Are data breaches inevitable? Most people now say "yes" without hesitation, but Invincea CEO Anup Ghosh told Washington news site DC Inno that breach prevention is possible, proclaiming that “breach inevitability” is just marketing.
As I wrote at that time, we need a third answer that adopts all the wisdom contained in the NIST Cybersecurity Framework regarding cyberincident and data breach prevention as well as incident response.
The same holistic approach is required for IoT security. Let’s not sacrifice one security best practice in exchange for another, as if we need to pick technology protections over enabling people with better awareness training and engaging in cyberexercises. The NIST guidance encourages an assessment of all cyber-risks with prioritization based upon your specific situation. It recommends that solutions contain end-user training, technical training for developers and system administrators, cybersecurity exercises, management briefings, repeatable technology upgrade processes and much more. Don’t skip over important sections of the NIST Cyber Framework.
Final Thoughts
Better cybersecurity protection for IoT requires improvements in people, process and technology. So let’s not pit people issues against technology protections in a fight for dollars — nor pretend that a perfect black box is coming that will enable IoT nirvana, while removing people and process from the security equation.
Bottom line: The Edward Snowden story can teach many important security lessons. But no security message is more central than this: People, and their actions, will always matter in cybersecurity.
So can we remove people from the IoT security discussion? Mission Impossible!