Others call it "new shadow IT for generative AI (GenAI)."
Many more use traditional terms like "governance for GenAI" or "GenAI policies and compliance."
Cyber pros use related terms with broader implications, like "securing GenAI," "GenAI cybersecurity" or "GenAI security policies."
And a growing number of public- and private-sector groups prefer to call it "guardrails for GenAI."
But regardless of which terminology you use, conversations on this topic have emerged around the world. Everyone is trying to get their arms around “getting to yes” in our rapidly evolving world of GenAI apps, many of which are currently freely available via your favorite Internet browser or as an app on your smartphone.
Meanwhile, my favorite prediction for 2024 is "Bring Your Own AI (BYOAI) will dominate enterprises."
As I shared on this Digital Decode podcast regarding the top cybersecurity predictions for 2024, GenAI is dominating conversations. But CISOs are struggling to gain visibility into which AI tools end users are actually using in their enterprises.
That GenAI conversation was front and center in a recent Kiteworks webinar.
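To make that visibility challenge concrete, here is a minimal sketch (in Python) of one possible starting point: mining web proxy logs for traffic to known GenAI domains. The log format and the domain watchlist below are illustrative assumptions, not a complete or authoritative list.

```python
# Minimal sketch: estimate GenAI app usage from web proxy logs.
# Assumptions (hypothetical): logs are CSV with user,timestamp,domain columns;
# the domain watchlist below is illustrative, not exhaustive.
import csv
from collections import Counter

GENAI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def genai_usage(log_path: str) -> Counter:
    """Count proxy requests to known GenAI domains, keyed by user."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects user,timestamp,domain headers
            if row["domain"].lower() in GENAI_DOMAINS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    for user, count in genai_usage("proxy_log.csv").most_common(10):
        print(f"{user}: {count} GenAI requests")
```

Real deployments would draw on DNS, secure web gateway or CASB telemetry rather than a flat CSV file, but the basic counting idea is the same.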
WHAT ARE THE SECURITY ISSUES WITH FREE GenAI APPS?
Last fall, Forbes published this Dell-contributed article: What Is Shadow AI And What Can IT Do About It? Here's an excerpt:
“Shadow AI is a term describing unsanctioned or ad hoc generative AI use within an organization that’s outside IT governance. Research shows about 49 percent of people have used generative AI, with over one-third using it daily, according to Salesforce. Inside the workplace, this can mean employees accessing generative AI tools like ChatGPT to perform tasks such as drafting copy, creating images, or even writing code. For IT, this can result in a governance nightmare that requires deciding what AI usage to permit or restrict to support the workforce while at the same time keeping the business safe.
“If that wasn’t enough for IT, generative AI use is actually accelerating. According to that same Salesforce survey, 52 percent of respondents reported that their usage of generative AI is increasing compared to when they first started. This means the threat of shadow AI is here for IT — and it’s growing.”
Back in the early summer of 2023, I was one of the first to write about the new challenges that were emerging for cybersecurity teams around the world in this viral CSO Magazine article: Has generative AI quietly ushered in a new era of shadow IT on steroids? Here’s an excerpt:
“What I’m concerned about is not the variety, productivity gains or other numerous benefits of GenAI tools. Rather, it’s whether these new tools now serve as a type of Trojan horse for enterprises. Are end users taking matters into their own hands by using these apps and ignoring policies and procedures on the acceptable use of non-approved apps in the process? I believe the answer for many organizations is yes. …
“But what concerns me the most is the astonishing growth in generative AI apps — along with how fast these apps are being adopted for a myriad of reasons. Indeed, if the internet can best be described as an accelerator for both good and evil — which I believe is true — generative AI is supercharging that acceleration in both directions.
“Put simply, it’s hard to compete with free. Most organizations move slowly in acquiring new technology, and this budgeting and deployment process can take months or years. End users, who are likely already violating policies by using these free generative AI tools, are generally loath to band together and insist that the enterprise CTO (or other executives) buy new products that could end up costing millions of dollars for enterprise usage over time. That ROI may come over the next few years, but meanwhile, they experiment with free versions because everyone is doing it.”
After I wrote that article, CIO Magazine came out and proclaimed: Shadow AI will be much worse than Shadow IT.
“Shadow AI has the potential to eclipse Shadow IT. How and why is that you ask? With Shadow IT, your developers were really the only points of failure in the equation; with generative AI every user has the potential to be one. This means you must count on everyone—from admins to executives—to make the right decision each and every time they use GenAI. This requires you to put a high degree of trust in user behavior, but it also forces your users to self-govern in a way that might hamstring their own speed and agility if they’re constantly second-guessing their own actions. There is an easier way, but we’ll get to that later.”
(As an aside, I actually think shadow AI is a subset of shadow IT, so I am not sure that this statement makes logical sense. When I worked for the state of Michigan, end users were sometimes using their own cloud technology rather than the enterprise solution. Nevertheless, I do agree that shadow AI has dramatically accelerated the shadow IT problem in new ways.)
WE'VE BEEN HERE BEFORE: A QUICK HISTORY LESSON
Yes, we’ve seen similar issues before. As I have written on many occasions, we must learn from history.
In an article entitled "Shadow AI poses new generation of threats to enterprise IT," the authors identify a series of risks that must be addressed with Shadow AI. These include:
- Functional risks
- Operational risks
- Legal risks
- Resource risks
They recommend starting with leadership:
“Firstly, leadership needs to know how much is being spent on AI — sanctioned and otherwise.
“Secondly, groups previously working outside the ambit of institutional risk controls must be brought into the fold. Their projects have to comply with the enterprise's risk management requirements, if not its technical choices.”
Third, the article's authors encourage data classification, creating a set of AI policies, and educating and training employees.
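On the data classification point, here is a minimal sketch of the idea: screen text for sensitive markings or patterns before it is submitted to a GenAI tool. The patterns and labels below are illustrative assumptions; a real program would use the organization's own classification scheme and a proper data loss prevention (DLP) engine.

```python
# Minimal sketch: flag classified or sensitive text before it reaches a GenAI app.
# The patterns and labels are illustrative; a real deployment would use the
# organization's own data classification scheme and a proper DLP engine.
import re

SENSITIVE_PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "Internal marking": re.compile(r"\b(CONFIDENTIAL|RESTRICTED)\b"),
}

def classify(text: str) -> list[str]:
    """Return the labels of any sensitive patterns found in the text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize this CONFIDENTIAL memo for employee 123-45-6789."
findings = classify(prompt)
if findings:
    print("Blocked before submission to GenAI app:", ", ".join(findings))
```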
In the CSO Magazine article, I outline pragmatic steps for security teams to take, such as:
“Some readers may be thinking, we already dealt with this shadow IT issue years ago — this is a classic cloud access security broker (CASB) problem. To a large extent, they’d be correct. Companies such as Netskope and Zscaler, which are known for offering CASB solutions in their product suites, offer toolsets to manage enterprise policies for managing generative AI apps.
“No doubt, other solutions from top CASB vendors are available that can help manage generative AI apps, and this article provides more potential CASB options. Still, these CASB solutions must be deployed and configured properly to help with governance.
“To be clear, CASB toolsets still do not solve all of your generative AI app issues. Organizations still need to address other questions related to licensing, application sprawl, security and privacy policies, procedures, and more. There are also training considerations, product evaluations and business workflow management to consider. Put simply, who is researching the various options and optimizing which generative AI approaches make the most sense for your public- or private-sector organization or particular business area?”
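For illustration, here is a minimal sketch of the kind of allow/coach/block decision a CASB policy encodes for GenAI apps. The app list, the categories and the default-deny choice are assumptions for the example, not any vendor's actual catalog or API.

```python
# Minimal sketch of the kind of policy a CASB enforces for GenAI apps:
# allow sanctioned apps, coach users on tolerated ones, block the rest.
# Domain names and categories here are hypothetical examples.
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    COACH = "coach"   # warn the user, log the event, then permit
    BLOCK = "block"

GENAI_POLICY = {
    "enterprise-copilot.example.com": Action.ALLOW,  # sanctioned, licensed
    "chatgpt.com": Action.COACH,                     # tolerated with a warning
}

def decide(domain: str) -> Action:
    """Default-deny: anything not explicitly listed is blocked."""
    return GENAI_POLICY.get(domain.lower(), Action.BLOCK)

for site in ("chatgpt.com", "random-genai-tool.example.net"):
    print(site, "->", decide(site).value)
```

One common middle ground is the "coach" action, which warns and logs before permitting, nudging users toward sanctioned tools instead of simply blocking everything.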
FINAL THOUGHTS
I like this Government Technology article, which outlines how AI has ushered in the dawn of a new era and how AI was included in the 2024 State of the State addresses (with each address graded from 1 to 5 stars based on how much technology is mentioned).
What is clear is that this issue continues to grow and is not going away. There is a need for action by federal, state and local cybersecurity teams to assist in the oversight and management of ongoing guardrails for GenAI use.
This July 2023 guidance from the UK National Cyber Security Centre can help with shadow IT (and I think shadow AI falls under this umbrella; see the cloud services section).
One more: a VentureBeat article, "Why IT leaders should seize on shadow AI to modernize governance," offers several helpful solutions as well.