One of those recent research studies is called the 2024 AI Readiness Report: Greatest Risks and the Ways to Avoid Them.
Here are some of the highlights from this Presidio press release:
The comprehensive study, which surveyed more than 1,000 CIOs, CTOs and IT decision-makers across industries, reveals major concerns about AI implementation, critical risks and readiness gaps. Notably, 80 percent of companies have already adopted generative AI (GenAI), but 50 percent admit to launching GenAI initiatives before they were thoroughly prepared to do so.
“While there’s undeniable momentum behind AI adoption, our findings show that some organizations are not fully prepared for the complexities involved,” said Rob Kim, chief technology officer at Presidio. “CIOs are eager to unlock AI’s potential but face critical challenges, particularly around data quality, security, and operational integration. Moving too quickly without the right infrastructure and skill sets in place can lead to costly setbacks.”
AI investment is a top priority: 96 percent of IT leaders say AI offers a competitive advantage, and 71 percent identify it as their company’s biggest area of investment. Looking exclusively at senior leadership, a striking 90 percent of CIOs express concerns about integrating AI into their operations. Data-related barriers and cybersecurity top their list of AI adoption hurdles, especially in the financial services and health-care sectors.
Additional key findings include:
- Data-related barriers: Over 86 percent of respondents report significant data challenges, from gaining meaningful insights to ensuring real-time access. Among those who have already adopted GenAI, 84 percent experienced issues with their data sources.
- High stakes in security: Cybersecurity is a key focus for CIOs, with 69 percent leveraging AI to bolster their security programs. However, many IT leaders worry about the security risks associated with AI use. The report brings attention to widespread concerns over data exposure, regulatory compliance and the risk of employees independently adopting AI tools. Thirty-seven percent cite data privacy and security as their primary concern with AI adoption.
- Public sector lags behind in readiness and adoption: The report underscores stark differences in GenAI readiness and adoption rates across industries. Eighty-five percent of IT leaders in the private sector have adopted GenAI, while 38 percent of government respondents have yet to do so. The financial services industry demonstrates a higher level of readiness than other sectors.
CONCERNING AI DETAILS, PLEASE
Digging deeper into the findings on implementation challenges, this chart breaks down the top concerns with adopting AI by percentage.
WHY DOES AI ADOPTION FAIL?
According to the research, the top five reasons AI adoption fails are:
1) Rushing to adopt without thinking strategically
2) Lack of quality data
3) Poor management
4) Lack of proper cloud infrastructure support
5) Unrealistic expectations from leadership
A total of 13 percent of local and state government respondents reported that moving too fast led to failure. The overall percentages for each sector are shown in this graph:
“The rise of 'Bring Your Own AI' (BYOAI) and the independent use of GenAI by employees poses additional risks to organizations across sectors. When organizations hesitate to invest in companywide AI solutions due to cost concerns or uncertainty around immediate ROI, employees may adopt AI tools independently, outside of official guardrails. This creates vulnerabilities that expose businesses to cyber attacks and regulatory issues.”
The report includes a section offering solutions to the question: How do you protect your company from the risks of BYOAI and independent GenAI use?
• Just over two-thirds (67 percent) of IT decision-makers said their organization’s internal AI guardrails and governance are restrictive, with a fifth (20 percent) saying very restrictive and nearly half (47 percent) who said moderately restrictive, e.g., only restrictive for specific use cases/departments.
• Three in 10 (30 percent) said their organization’s internal AI guardrails and governance are not restrictive, with over a fifth (22 percent) saying not very restrictive and 1 in 14 (7 percent) who said not at all restrictive.
• Over 7 in 10 (73 percent) IT leaders in the health-care sector said their organization’s internal AI guardrails and governance are restrictive, while just under two-thirds of those in the financial services and local/state government sectors (both 65 percent) said the same.
FINAL THOUGHTS
Overall, the report makes it pretty clear that most state and local governments are behind the private sector in overcoming these AI challenges. This trend shows up in multiple areas the report addresses. It seems to me that this also opens up opportunities for the SLED (state, local and education) communities to learn from their private-sector counterparts.
ZDNet covered this same AI project research here with their own analysis, along with other similar reports on AI implementation challenges. Their main focus was on the importance of data to implement effective AI projects.
A few other recent (2024) research reports and studies that cover AI adoption suggestions include:
• Rand: The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed
• Deloitte: Despite Increased Investment and Early Enthusiasm, Data and Risk Remain Key Challenges to Scaling Generative AI, Reveals New Deloitte Survey
• Accenture: Accelerating Reinvention to Support Growth With AI-Powered Operations
• PwC: Global Artificial Intelligence Study: Exploiting the AI Revolution — What’s the real value of AI for your business and how can you capitalize?