Investigating even one suspicious event to determine whether a legitimate threat exists consumes scarce security staff time and resources. States already face skills shortages and struggle to find qualified candidates for open IT security positions. With limited time and staff, security teams have no option but to develop more efficient techniques for identifying critical indicators in the deluge of events they continuously collect.
Improving Defensive Capabilities
Artificial intelligence and machine learning hold promise as effective techniques for sifting through the large volumes of security events logged by SIEM technology. These tools can augment existing security staff and safeguard the enterprise by dramatically increasing the chances that real threats will be detected more quickly.

One version of AI builds on top of SIEM technology, mapping events to machine learning models trained to classify alerts. Machine learning can execute an algorithm built from past samples to classify a given event as benign or as contributing to a threat sequence. Future versions of this technology could automatically correct the condition or simply direct security staff's attention to that particular event.
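As a rough illustration of that classification step, the Python sketch below trains a standard supervised model on a handful of hand-labeled event feature vectors, then labels a new event. The features, values and labels are invented for illustration, not drawn from any real SIEM.

```python
# Minimal sketch: training a classifier on labeled SIEM events.
# Feature names and data are illustrative, not from a real SIEM.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per event: [failed_logins, bytes_out_mb,
# distinct_ports_touched, off_hours_flag]
X_train = np.array([
    [0, 1.2, 2, 0],     # benign
    [1, 0.4, 1, 0],     # benign
    [14, 35.0, 40, 1],  # part of a threat sequence
    [9, 12.5, 25, 1],   # part of a threat sequence
])
y_train = np.array([0, 0, 1, 1])  # 0 = benign, 1 = threat-related

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

new_event = np.array([[11, 20.0, 30, 1]])
label = model.predict(new_event)[0]
print("threat-related" if label else "benign")
```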
AI strengthens cybersecurity defensive capabilities by:
Scanning large volumes of events from multiple sources.
The rapid evolution of machine learning gives government security teams new tools for filtering massive amounts of data to find the events that deviate from the rest of the set.
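One common way to surface such outliers without labeled examples is an unsupervised anomaly detector. The sketch below applies scikit-learn's IsolationForest to synthetic two-feature event data; the feature choices and contamination rate are assumptions made for the example.

```python
# Sketch: unsupervised outlier detection over event feature vectors.
# All data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# 10,000 "normal" events clustered around typical values...
normal = rng.normal(loc=[5.0, 2.0], scale=[1.0, 0.5], size=(10_000, 2))
# ...plus a handful of events far outside that pattern.
odd = np.array([[40.0, 25.0], [55.0, 30.0]])
events = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.001, random_state=0)
flags = detector.fit_predict(events)  # -1 marks outliers
print(f"{(flags == -1).sum()} events flagged for review out of {len(events)}")
```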
Identifying variations from typical network traffic patterns.
Like a highway system, where traffic follows similar patterns from one day to the next and at particular times of day, every state computer network has day-to-day and historical traffic patterns. As users enter the system, they usually move in similar ways to search for or provide information, transact business or contact an agency representative, for example.
Cybercriminals who infiltrate the system tend to appear, at least initially, like normal users. Finding them can be like searching for a needle in a haystack. Using traditional techniques, it may take weeks to piece together a sequence of events and identify a hacker. However, AI can conduct the search and reduce time to discovery.
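A simple statistical stand-in for the baselines such a system learns: compute per-hour traffic norms from history and flag hours that deviate sharply. The counts below are simulated, and the three-standard-deviation threshold is an illustrative assumption.

```python
# Sketch: flag deviations from a per-hour traffic baseline.
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical 30 days of hourly request counts (rows=days, cols=hours 0-23),
# quiet overnight, busy during business hours.
history = rng.poisson(lam=[50] * 7 + [400] * 10 + [150] * 7, size=(30, 24))

baseline_mean = history.mean(axis=0)
baseline_std = history.std(axis=0)

# Today's counts, with an unusual 3 a.m. spike injected.
today = baseline_mean.copy()
today[3] = 900

for hour, count in enumerate(today):
    z = (count - baseline_mean[hour]) / max(baseline_std[hour], 1.0)
    if abs(z) > 3:
        print(f"hour {hour:02d}: {count:.0f} requests, z-score {z:.1f} -> alert")
```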
Grouping related security events and notifying security personnel about potential threats.
As AI identifies credible threats and generates alerts, it also can group related events to reduce the number of individual alerts security experts must investigate. Far from becoming obsolete, security operations teams will become more effective as AI-driven threat detection frees them to focus on the events most likely to indicate cyberattacks.
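Grouping can be as simple as correlating alerts that share an attribute inside a time window, though production systems use far richer models. The sketch below groups invented alerts by source address and 10-minute bucket.

```python
# Sketch: collapsing related alerts into incidents by grouping on source
# address within a time window. A stand-in for richer ML-based correlation.
from collections import defaultdict

# Hypothetical alerts: (timestamp_seconds, source_ip, description)
alerts = [
    (100, "10.0.0.5", "port scan"),
    (160, "10.0.0.5", "failed admin login"),
    (220, "10.0.0.5", "outbound transfer spike"),
    (300, "10.0.0.9", "failed login"),
]

WINDOW = 600  # group alerts from the same source within 10 minutes
incidents = defaultdict(list)
for ts, src, desc in alerts:
    incidents[(src, ts // WINDOW)].append(desc)

for (src, _), related in incidents.items():
    print(f"{src}: one incident covering {len(related)} alerts: {related}")
```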
Watching IoT network entry points.
Statista estimates that 75.44 billion devices worldwide will be connected to the Internet by 2025. Often, these devices have little to no security monitoring, making them attractive entry points for bad actors. In the massive Target breach of 2013, hackers stole login credentials from the company’s heating, ventilation and air conditioning provider, and then used the credentials to enter Target’s Internet-connected HVAC system and jump to the company’s payment systems.
Connected devices constantly communicate with other devices or with network hosts, so a hacker entering a government agency’s network could look perfectly normal. If an IoT device begins “talking” to a network host with which it has never communicated previously, it could indicate a breach, but checking whether a given contact has happened before is hard to do manually at scale. AI can serve as a sentry for IoT gateways, generating alerts when connected devices open new or anomalous connections.
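At its simplest, that sentry is a memory of which device-to-host pairs have been seen before. The sketch below keeps such a set and raises an alert on any first-time contact; the device and host names are hypothetical.

```python
# Sketch: remember which hosts each IoT device has talked to and alert on
# first-time contacts. Device and host names are hypothetical.
known_contacts: set[tuple[str, str]] = set()

def observe(device: str, host: str) -> None:
    """Record a device-to-host contact, alerting if it is new."""
    pair = (device, host)
    if pair not in known_contacts:
        print(f"ALERT: {device} contacted {host} for the first time")
        known_contacts.add(pair)

# Baseline traffic establishes the normal pattern.
observe("hvac-sensor-12", "building-controller")
observe("hvac-sensor-12", "building-controller")  # no alert: already known
# A contact with a payment host would be flagged immediately.
observe("hvac-sensor-12", "payments-db")
```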
Understanding the Potential Challenges
AI is still a relatively new technology. Like every new technological tool, it comes with some risks. For example, AI can classify attacks based on threat level, but it may not be accurate enough for high confidence in the classification. Initial phases of an attack may look different each time an attacker probes for weaknesses, and AI may not be able to accurately detect and classify these probes in every case.

Cybersecurity is full of these gray-area challenges. Government AI systems that incorporate deep learning principles and state-of-the-art machine learning algorithms will improve over time. As more data is aggregated, AI will learn new behaviors and modify its conclusions accordingly.
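One practical response to that uncertainty is to act autonomously only on high-confidence classifications and route the gray area to human analysts. The sketch below shows the idea; the probability thresholds are illustrative assumptions, not recommendations.

```python
# Sketch: using a model's predicted probability to decide when to act
# automatically and when to route an alert to a human analyst.
# Thresholds are illustrative only.
def triage(threat_probability: float) -> str:
    if threat_probability >= 0.95:
        return "auto-block"    # high confidence: act autonomously
    if threat_probability >= 0.50:
        return "human review"  # gray area: queue for an analyst
    return "log only"          # likely benign

for p in (0.99, 0.72, 0.10):
    print(f"p={p:.2f} -> {triage(p)}")
```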
Another concern is the risk of negative feedback loops corrupting the data models used for classification and autonomous decisions. Research into data poisoning attacks against machine learning is still a developing cybersecurity topic. A successful poisoning attack might, in some cases, cause a model to flag legitimate constituent activity as suspicious and inadvertently block citizens from access. States should closely monitor the decisions AI makes autonomously and limit how long those decisions stay in effect. Blocking an attack for 30 minutes to an hour before allowing normal activity to resume disrupts the attack while capping the damage a wrong decision can do.
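A minimal sketch of such a time-limited control, assuming the 30-minute window discussed above: each autonomous block records an expiry time and lapses on its own.

```python
# Sketch: autonomous blocks that expire automatically, limiting how long a
# wrong (or poisoned) decision stays in effect.
import time

BLOCK_SECONDS = 30 * 60  # 30 minutes, per the window discussed above
blocked: dict[str, float] = {}  # source -> expiry timestamp

def block(source: str) -> None:
    blocked[source] = time.time() + BLOCK_SECONDS

def is_blocked(source: str) -> bool:
    expiry = blocked.get(source)
    if expiry is None:
        return False
    if time.time() >= expiry:  # block has lapsed: restore normal access
        del blocked[source]
        return False
    return True

block("203.0.113.7")
print(is_blocked("203.0.113.7"))  # True while the 30-minute window lasts
```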
Even as AI becomes increasingly adept at addressing complex security challenges, cybercriminals will look for ways to work around or confuse its machine learning models. The perfect AI does not exist, but it already is a very effective and powerful security tool. As it matures, government security teams may find AI has leveled the cybersecurity playing field in the constant defense against cyberattacks. Government security teams should educate themselves about its capabilities and integrate the AI software that will make their efforts more efficient and effective.