Kansas CISO: How AI Could Help Patch Software Faster

CISO John Godfrey sees potential for AI to help cybersecurity teams know when it’s safe to push patches fast. At the same time, he’s keeping an eye on AI-powered threats like deepfakes.

Kansas CISO John Godfrey (Government Technology/David Kidd)
Artificial intelligence (AI) may be buzzy now, but cybersecurity teams have long had their eyes on the technology — and they continue finding new ways it can power their work as its capabilities evolve.

Kansas CISO John Godfrey said machine learning traditionally has been helpful with tasks like log management and malicious email scanning and blocking. But recently, uses for the technology “are starting to ratchet up a notch,” he added.

One emerging idea is to use AI for “patch management at scale.” Updates need to happen regularly, and ideally, cyber teams would deploy software patches to all machines the moment the patches become available. But this isn’t always possible: Teams must also ensure patching doesn’t disrupt operations, and it’s customary to wait a few weeks before applying a patch. AI might help balance speed against stability by assessing how likely a given patch is to deploy smoothly.

“What data can we capture on the success rate of patches, and the extent to which those are disruptive to the systems, and how can that inform our patching strategy going forward?” Godfrey said. “If we know that 90 percent of the time these patches are not disruptive, maybe those go into a faster release cycle.”
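To make the idea concrete, here is a minimal sketch of what that kind of data-driven release-cycle decision might look like. Everything in it (the `PatchRecord` shape, the function name, the 20-sample minimum) is an illustrative assumption rather than a description of Kansas' actual tooling; only the 90 percent threshold comes from Godfrey's example.

```python
# Illustrative sketch only: route patch categories to a fast or standard
# release cycle based on their historical disruption record. The data model,
# names and minimum-sample rule are hypothetical assumptions.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class PatchRecord:
    category: str     # e.g., a vendor or product family
    disruptive: bool  # did deploying the patch cause an operational issue?


def release_cycles(history: list[PatchRecord],
                   threshold: float = 0.90,  # Godfrey's 90 percent example
                   min_samples: int = 20) -> dict[str, str]:
    """Assign each patch category a release cycle from its track record."""
    outcomes: dict[str, list[bool]] = defaultdict(list)
    for record in history:
        outcomes[record.category].append(not record.disruptive)

    cycles = {}
    for category, results in outcomes.items():
        success_rate = sum(results) / len(results)
        # Fast-track only categories with enough data and a strong record;
        # everything else keeps the customary soak period.
        if len(results) >= min_samples and success_rate >= threshold:
            cycles[category] = "fast"
        else:
            cycles[category] = "standard"
    return cycles
```

In a fuller system, a machine learning model could replace the simple frequency count, weighing features like patch type, vendor history and target system, but the routing logic would look much the same.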

These ideas are circulating at a time when Kansas is preparing to release version 2.0 of its initial generative AI policy. The updated version will include more guardrails on how AI can be used by the state and will see more information collected on how business units are using the technology, he said.

As the state vets potential vendors’ security practices, it’s now starting to include questions about whether and how the companies use AI in their products and what will happen to government-owned data. Refining that approach will be an ongoing process: The state wants restrictions that reduce risk without overly impeding “future business needs and capabilities.”

Godfrey is also paying attention to how AI can make disinformation attacks easier and more sophisticated, including by rapidly creating deepfakes.

As for how to stop deepfakes: “I don’t think you can,” Godfrey said. Instead, the focus needs to be on response and resilience.

Jule Pattison-Gordon is a senior staff writer for Governing and former senior staff writer for Government Technology, where she specialized in cybersecurity. Jule also previously wrote for PYMNTS and The Bay State Banner and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.