Misinformation around voting has been an issue for years, and AI has heightened the risk. This puts governments in a unique position as they try to combat deception.
The CDT report released Monday, Generating Confusion: Stress-Testing AI Chatbots on Voting with a Disability, examines the potential harms of election-related misinformation on the disability community. To create it, CDT tested five chatbots: Mixtral 8x7B v0.1, Gemini 1.5 Pro, ChatGPT-4, Claude 3 Opus and Llama 2 70B. Findings suggest AI-generated responses may deter voter participation.
Every chatbot tested hallucinated at least once, which the report defined as producing incorrect information constructed with “no verifiable basis in fact.” These hallucinations ranged from citing laws that do not exist to recommending nonexistent disability rights organizations as resources.
The CDT has been working with the federal government to ensure that, as AI advances, governments consider the potential impact on people with disabilities.
“Algorithmic outputs are created as a result of inputs, and those inputs come from data sets,” Ariana Aboulafia, CDT’s policy counsel for disability rights in technology policy, said recently, noting that inaccurate or nonrepresentative data sets can lead to algorithmic bias.
Barriers in the voting process already exist for people with disabilities, the CDT report said. This is due to a multitude of factors, from transportation logistics and voter ID requirements to a complex legal landscape and lack of compliance with voting laws.
These factors — paired with the fact that chatbots are relatively easy to use and are sometimes highlighted as a resource for the disability community — might lead people with disabilities to rely on them for voting information, the report said. As such, the risk that chatbot misinformation will negatively impact this population’s ability to vote “is significant.”
The report recommended that people avoid using AI chatbots as a primary source for election information, and suggested using them to find more reliable resources instead. It also recommended that users fact-check chatbot-provided information against trusted sources before sharing or relying on it.
The document also provided recommendations for developers. Namely, it said developers should direct users to authoritative sources of election information and prohibit uses of their tools that could interfere with elections. It also suggested developers test their models by posing common election questions, and disclose how recently training data was updated.
If steps are taken to mitigate risk, the report said, chatbots could actually help protect the right to vote for all eligible voters, with and without disabilities.