2023 could certainly be described as the year AI became mainstream. In October 2022, best-selling author Bernard Marr wrote about the democratization of AI: “AI will only achieve its full potential if it’s available to everyone and every company and organization is able to benefit. Thankfully in 2023, this will be easier than ever. An ever-growing number of apps put AI functionality at the fingers of anyone, regardless of their level of technical skill.” In the business world, companies quickly introduced products and tools with some AI components attached. According to a 2023 IBM survey, “75 percent of CEOs believe that competitive advantage will depend on who has the most advanced generative AI. However, executives are also weighing potential risks or barriers of the technology such as bias, ethics and security. More than half of CEOs surveyed are concerned about data security and 48 percent worry about bias or data accuracy.”
IoT AND AI
According to FinancesOnline, “The number of connected IoT devices in 2020 is estimated to be 8.74 billion. The figures are expected to increase by about 200 percent in 2030 and have an estimated value of more than $1 trillion.” As the number of devices keeps growing, so does the need to collect, store and analyze data. As the IT publication InfoWorld pointed out in 2022, “With AI, IoT networks and devices can learn from past decisions, predict future activity, and continuously improve performance and decision-making capabilities. AI allows the devices to ‘think for themselves,’ interpreting data and making real-time decisions without the delays and congestion that occur from data transfers.” With the proliferation of IoT, AI will continue to grow as it manages an ever-increasing population of devices.
IoT AND CYBERSECURITY
Smart homes, cities and connected cars will put new pressures on corporate and government institutions to ensure data safety. In December 2022, the market research company Insider Intelligence forecasted that by 2026 there would be 4.3 billion IoT mobile connections worldwide and more than 64 billion IoT devices installed. With this phenomenal growth of IoT, there is a natural demand for robust, autonomous cybersecurity tools, because much of what IoT devices do happens without human intervention. As Microsoft’s website points out, “there is real risk in what are really network-connected, general-purpose computers that can be hijacked by attackers, resulting in problems beyond IoT security. Even the most mundane device can become dangerous when compromised over the Internet — from spying with video baby monitors to interrupted services on life-saving health care equipment. Once attackers have control, they can steal data, disrupt delivery of services, or commit any other cyber crime they’d do with a computer.” So therein lies the challenge of managing an ever-increasing number of devices while keeping them fully functional, safe and secure.
AI TO THE RESCUE?
At the same time, AI can also produce “false positives” and misclassifications when it encounters new or unknown threats it has yet to recognize. Elisa Silverman wrote in June for the workflow automation company Zapier, “If AI models can be tricked into misclassifying dangerous input as safe, an app developed with this AI could execute malware and even bypass security controls to give the malware elevated privileges. AI models that lack human oversight can be vulnerable to data poisoning.”
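The data-poisoning risk Silverman describes can be illustrated with a deliberately simplified sketch. The classifier, scores and thresholds below are all invented for illustration; real detection models are far more complex, but the mechanism is the same: an attacker who can slip mislabeled training examples into the pipeline shifts what the model learns until malicious input is scored as safe.

```python
# Toy data-poisoning sketch (all values hypothetical): a naive detector
# learns a score threshold halfway between the average "safe" score and
# the average "malicious" score in its training data.

def learn_threshold(samples):
    """Learn a cutoff from (score, label) training pairs."""
    safe = [s for s, label in samples if label == "safe"]
    bad = [s for s, label in samples if label == "malicious"]
    return (sum(safe) / len(safe) + sum(bad) / len(bad)) / 2

clean = [(0.1, "safe"), (0.2, "safe"), (0.8, "malicious"), (0.9, "malicious")]
threshold = learn_threshold(clean)            # 0.5 on clean data

# Attacker injects high-scoring inputs mislabeled as "safe",
# dragging the learned threshold upward.
poisoned = clean + [(0.95, "safe")] * 4
threshold_poisoned = learn_threshold(poisoned)

suspicious = 0.7
print(suspicious > threshold)           # True  -> flagged by the clean model
print(suspicious > threshold_poisoned)  # False -> slips past the poisoned model
```

The same suspicious input is caught by the model trained on clean data but waved through by the poisoned one, which is why Silverman's point about human oversight of training data matters.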
AI IN THE WRONG HANDS
While AI can provide many tools to combat cyber breaches, it has also become a useful tool for cyber criminals. Joseph Menn wrote in The Washington Post in May that experts, executives and government officials are worried about attackers using artificial intelligence to “write software that can break into corporate networks in novel ways, change appearance and functionality to beat detection, and smuggle data back out through processes that appear normal.”
AI being used to “outsmart” established cybersecurity protection strategies and systems is referred to as adversarial AI. These attacks can be described as AI-based or AI-facilitated cyber attacks, and are also known as adversarial learning — the case of “machine versus machine as malicious AI algorithms are used to subvert (machine learning)-powered security solutions,” according to CTO Nadav Maman of the cybersecurity company Deep Instinct. These scenarios might seem like a scene from the movie “The Terminator,” in which AI machines attempt to take over the human world. How can we prudently utilize AI tools for cybersecurity while maintaining appropriate human control?
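A minimal sketch can make the adversarial-learning idea concrete. The linear model, weights and step size below are assumptions invented for illustration, not a real detector: the point is only that when an attacker can probe a model, tiny targeted nudges to the input (in the spirit of gradient-based evasion attacks) can flip its decision while barely changing the input itself.

```python
# Toy adversarial-evasion sketch (weights and features are hypothetical):
# a linear model scores an input; anything scoring above zero is flagged
# as malicious. An attacker nudges each feature slightly against the sign
# of its weight, lowering the score until the sample evades detection.

WEIGHTS = [0.9, -0.4, 0.7]  # hypothetical detector weights
BIAS = -0.5

def score(x):
    return sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS

def is_malicious(x):
    return score(x) > 0.0

sample = [0.8, 0.1, 0.6]   # this input is flagged as malicious

# Small, targeted perturbation: move each feature a step of size eps
# in whichever direction reduces the model's score.
eps = 0.35
evasive = [xi - eps * (1 if w > 0 else -1) for w, xi in zip(WEIGHTS, sample)]

print(is_malicious(sample))   # True  -> caught
print(is_malicious(evasive))  # False -> evades the detector
```

This is the “machine versus machine” dynamic Maman describes: one algorithm systematically searching for the blind spots of another, which is why purely automated defenses need human review.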
PRUDENT USE OF AI FOR CYBERSECURITY
When considering AI for cybersecurity, it’s important to weigh the organization’s goals and carefully define measurable objectives. AI has limitations, but with staff appropriately educated and trained in both AI and cybersecurity, safeguards can be put into place. As with any cybersecurity process, it’s not a one-and-done proposition; it requires continually going back to monitor, evaluate and audit systems. AI and cybersecurity can coexist. But they will only do so successfully if there is a human component.