
Nonprofit Aims to Help Govt Consider Risks of Generative AI

CivAI is creating a toolkit that will help state and local government leaders address the risks as they start using the rapidly evolving technology for more use cases.

A new nonprofit is building resources for state and local governments that want to better understand and assess the risks of generative AI.

One of CivAI’s offerings, for example, aims to show policymakers the threats government could face from outside attackers equipped with generative AI, complete with demos illuminating how easily malicious actors can make deepfakes or conduct spear phishing. Another looks at risks stemming from government’s own use of the tech. The idea is to spread awareness so decision-makers understand what the technology makes possible.

“A big part of not falling for this stuff is just knowing what’s possible,” said co-founder Lucas Hansen.

And this comes at a time when many state and local governments are actively considering how to engage with generative AI. Pennsylvania recently announced it will pilot ChatGPT Enterprise, while New York City adopted an AI-powered chatbot in October. California is in the process of assessing generative AI and issuing guidelines. New York and Maryland, meanwhile, are among the states working on guardrails for state use of all AI, generative included.

DEEPFAKE DEMOS


CivAI wants to show policymakers how easily publicly available tools can be used to create spear phishing emails, voice clones and deepfaked images. To do so, it uses the tools on participants’ own likenesses. While many have seen headlines about deepfakes of celebrities, the risks hit harder when people hear their own voice or see their own face.

“All of this built using purely open source and commercially available tools,” said co-founder Siddharth Hiregowdara, of the demonstrations. “We haven't trained any AI models or done any novel AI anything here. It's all stringing together publicly available stuff.”

Entering a person’s LinkedIn profile URL into one tool produces a phishing email personalized for that individual. Should the target respond, the AI-powered tool generates replies, combining details from the LinkedIn profile with other information from a data broker.

The style of phishing used in the demo is subtler than classic examples. Messages avoid overtly asking a target to download a file or click a link. Instead, Hansen explained, the message piques the target’s interest in the sender enough that they want to find out more. If the attack goes according to plan, the target clicks a hyperlink in the sender’s email signature and lands on a fake LinkedIn login page that asks for a username and password, stealing those credentials.

CivAI’s demos also showcase a tool that creates a voice clone from just a few minutes of uploaded audio or video. That spoofed voice reads whatever message the attacker types. In one demo, the voice clone even peppered in uhs and ums.

The final part of the demo shows how a single photo of a person can be used for face swaps. Examples include placing the person in surveillance footage or depicting them in a hospital bed.

Generative AI has been making existing forms of fraud easier and cheaper to conduct, encouraging fraudsters to pursue targets who would not previously have been profitable enough. Creating a voicemail message with a spoofed voice likely now costs just “a couple of cents,” Hansen said.

“Previously, there were people who were shielded by it not being worth it — like, the expected payoff of running the scam just not being high enough,” Hansen said. “But now they're no longer safe because it's so much cheaper to run the scam in the first place.”

RISK ASSESSMENT TOOLKIT


CivAI’s forthcoming toolkit is designed to help state and local governments ensure that plans to use generative AI account for known risks.

The toolkit helps government teams apply the advice in the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework to generative AI use cases. NIST’s framework addresses AI systems in general rather than specific sectors or use cases, and the agency has yet to release a planned generative AI-specific risk management profile.

The CivAI GenAI Toolkit provides a tailored questionnaire to help governments apply the framework’s guidance to their own projects. For example, an agency official may look to follow NIST’s advice to map the known limits of an AI system. Working through the CivAI questionnaire helps ensure they’ve accounted for hallucinations, prompt injection attacks and other ways generative AI can go wrong.

“We can make literal worksheets for state government that they can just work through methodically to ensure their bases are covered when they’re using GenAI,” Hiregowdara said.

A CivAI white paper demonstrates how a DMV could use the toolkit to vet a potential AI-powered chatbot: a sample section applies the NIST AI RMF’s Map 2.2 guidance to generative AI, asking how a prompt injection attack might disrupt the chatbot, what that attack could look like and its worst-case impacts, along with similar questions about hallucinations, with a hypothetical DMV official’s answers filled in.
Photo credit: CivAI
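To make the worksheet idea concrete, here is a minimal sketch of how such a questionnaire could be modeled in code. This is not CivAI’s implementation, which has not been published; the names (ToolkitQuestion, MAP_2_2_QUESTIONS) and question wording are hypothetical, loosely modeled on the white paper’s DMV chatbot example.

```python
# A minimal, hypothetical sketch of how a toolkit worksheet could be modeled
# in code. CivAI has not published an implementation; ToolkitQuestion,
# MAP_2_2_QUESTIONS and the example wording below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ToolkitQuestion:
    """One questionnaire item tying a generative AI risk to an RMF subcategory."""
    rmf_subcategory: str  # e.g., "Map 2.2" from the NIST AI Risk Management Framework
    risk: str             # the generative-AI-specific failure mode being probed
    question: str         # what the agency official works through
    answer: str = ""      # filled in by the agency team


# Items modeled loosely on the white paper's DMV chatbot example.
MAP_2_2_QUESTIONS = [
    ToolkitQuestion(
        rmf_subcategory="Map 2.2",
        risk="prompt injection",
        question=("How could someone craft input that makes the chatbot ignore "
                  "its instructions, and what is the worst-case impact?"),
    ),
    ToolkitQuestion(
        rmf_subcategory="Map 2.2",
        risk="hallucination",
        question=("Where might the chatbot confidently state wrong information, "
                  "and what is the worst-case impact for residents?"),
    ),
]


def unanswered(questions: list[ToolkitQuestion]) -> list[ToolkitQuestion]:
    """Return the items a team still needs to work through methodically."""
    return [q for q in questions if not q.answer.strip()]


if __name__ == "__main__":
    for q in unanswered(MAP_2_2_QUESTIONS):
        print(f"[{q.rmf_subcategory}] {q.risk}: {q.question}")
```

An agency team could, in principle, extend such a list with its own use cases and print the unanswered items as the kind of literal worksheet Hiregowdara describes.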
This project is early-stage: After developing the toolkit internally, the nonprofit is now soliciting government feedback. In particular, it seeks input on additional use cases and risks for government.

People interested in this work can email contact@civai.org or submit requests for demos via a web form. The nonprofit also hopes to demo at conferences and is considering creating educational videos for internal cybersecurity training programs.
Jule Pattison-Gordon is a senior staff writer for Governing and a former senior staff writer for Government Technology, where she specialized in cybersecurity. Jule also previously wrote for PYMNTS and The Bay State Banner and holds a B.A. in creative writing from Carnegie Mellon. She’s based outside Boston.