How Will Generative AI Change Cybersecurity Teams?

A global study finds almost half of government cybersecurity professionals expect generative AI to end the need for certain cyber skills or cyber roles.

A new report has found that about half of cybersecurity “practitioners and decision-makers” anticipate generative artificial intelligence (GenAI) will remove the need for cyber professionals to have certain technical skills or to fill certain roles, according to a survey by cybersecurity membership organization ISC2.

Forty-nine percent of cyber professionals working in the government sector said they believe generative AI will lead to certain cybersecurity skills becoming obsolete, and 48 percent said they think generative AI could replace certain cybersecurity roles, according to additional data ISC2 shared with Government Technology. A slightly higher portion of the overall respondent group — which spans many industries — echoed those views.

But it’s still unclear if generative AI really will replace some skills, as well as which skills those would be. That ambiguity may be why many hiring managers across industries are prioritizing recruits with soft skills that will remain relevant regardless of what the emerging technology’s future looks like. Per the report, 59 percent of hiring managers don’t know what skills are needed to be successful in an “AI-driven world.” As such, many said they’re currently prioritizing finding candidates skilled at problem-solving, teamwork and communication over those with technical skills like cloud computing security and risk assessment.

And even as AI’s role in organizations grows, only 23 percent of government hiring managers worldwide (and 24 percent of hiring managers across industries) said they were actively looking for recruits with skills in AI and machine learning. That may indicate these hiring managers are focused on immediate needs rather than ones that could take a few years to bear fruit.

In contrast, non-hiring managers — that is, professionals who don’t influence the final decision on whether to bring on a candidate — were more likely to value AI and machine learning skills. More than a third believe those skills are important for cyber professionals seeking to get hired or promoted, a view espoused by 40 percent of non-hiring managers in government and 37 percent of non-hiring managers across industries.

Some cyber teams are actively using generative AI, including for purposes like making it easier to access information or to speed up “tedious” tasks. Here, governments in the global survey showed a sharp departure from other industries: Just 26 percent of government respondents said generative AI was built into their cybersecurity teams’ tools, compared to 45 percent of respondents across industries. U.S. states, however, may be more in line with the wider trend: The latest Deloitte-NASCIO Cybersecurity Study found 41 percent of state CISOs using GenAI to support their security work and 43 percent looking to do so within the next 12 months.

Even as cyber teams see ways generative AI can help their own work, many worry that its use by other departments will introduce more data privacy and security risks. More than two-thirds of government respondents told ISC2 their organizations needed more regulations around how to safely use the technology, a slightly higher share than among overall respondents.

Including cyber teams in shaping a state’s generative AI strategies can help, and the recent Deloitte-NASCIO study found many state CISOs are doing so: 88 percent were involved in developing their state’s GenAI strategy, and 96 percent in developing their state’s GenAI security policy. That outstrips the global average ISC2 found, where only 60 percent of respondents across industries and countries said their cyber teams had a part in creating regulations and guidelines for GenAI.

Jule Pattison-Gordon is a senior staff writer for Governing and former senior staff writer for Government Technology, where she specialized in cybersecurity. Jule also previously wrote for PYMNTS and The Bay State Banner and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.