AI Chatbots May Disrupt Voting for Disability Community

A new report from the Center for Democracy and Technology examines ways in which AI-powered chatbots may undermine voter confidence among people with disabilities this election season.

Image: a checkmark in a ballot box, floating over other squares in blue and purple space. (Shutterstock/Blackboard)
Artificial intelligence (AI)-powered chatbots can provide misinformation that may discourage people with disabilities from voting, according to a new report from the Center for Democracy and Technology (CDT).

Misinformation around voting has been an issue for years, and AI has heightened the risk. This puts governments in a unique position as they try to combat deception.

The CDT report released Monday, Generating Confusion: Stress-Testing AI Chatbots on Voting with a Disability, examines the potential harms of election-related misinformation on the disability community. To create it, CDT tested five chatbots: Mixtral 8x7B v0.1, Gemini 1.5 Pro, ChatGPT-4, Claude 3 Opus and Llama 2 70b. Findings suggest AI-generated responses may deter voter participation.
For example, one-quarter of the chatbot responses analyzed could either “dissuade, impede or prevent” the user from voting. And more than one-third of all the answers provided included false information, according to the report, ranging from inaccurate registration deadlines to falsely stating that curbside voting would be an option.

Every chatbot tested hallucinated at least once, which the report defined as incorrect information that the model constructed with “no verifiable basis in fact.” These hallucinations ranged from providing information about laws that did not exist, to recommending disability rights organizations that do not exist as resources.

The CDT has been working with the federal government to ensure that, as AI advances, governments consider the potential impact for people with disabilities.

“Algorithmic outputs are created as a result of inputs, and those inputs come from data sets,” CDT Policy Counsel for Disability Rights in Technology Policy Ariana Aboulafia said recently, noting inaccurate or nonrepresentative data sets can lead to algorithmic bias.

Barriers in the voting process already exist for people with disabilities, the CDT report said. This is due to a multitude of factors, from transportation logistics and voter ID requirements to a complex legal landscape and lack of compliance with voting laws.

These factors — paired with the fact that chatbots are relatively easy to use and are sometimes highlighted as a resource for the disability community — might lead people with disabilities to rely on them for voting information, the report said. As such, the risk of negatively impacting the ability to vote for this population “is significant.”

The report recommended that people avoid using AI chatbots as a primary source for election information, and suggested using them to find more reliable resources instead. It also recommended users fact-check chatbot-provided information using trusted sources, before sharing or relying on it.

The document also provided recommendations for developers. Namely, it said developers should direct users to authoritative sources of election information and prohibit uses of their models that could interfere with elections. It also suggested developers test their models by posing common election questions, and disclose how recently training data was updated.

If steps are taken to mitigate risk, the report said, chatbots could actually help in the effort to protect the right to vote for all eligible voters, with and without disabilities.