Federal Agency Outlines Recommendations for Accessible AI

The U.S. Access Board highlighted its initial findings on the risks and benefits of artificial intelligence for people with disabilities, offering recommendations to promote responsible use.

The U.S. Access Board shared its initial findings on the risks and benefits of artificial intelligence for people with disabilities in a Tuesday webinar.

President Joe Biden’s October 2023 AI executive order (EO) — the future of which remains uncertain — set the stage for this work by emphasizing the need for accessibility best practices around AI. The EO directed the Access Board, a small independent federal agency, to do several things: conduct community engagement, issue technical assistance and recommendations on AI risks and benefits, and provide people with disabilities access to information and communication technology and transportation services.

The Access Board signed a memorandum of understanding (MOU) in May with two external organizations, the American Association of People with Disabilities and the Center for Democracy and Technology, to complete the tasks outlined in the EO.

Since the MOU’s signing, several key actions have been completed. First, the Access Board launched a webpage hosting relevant information and recordings from its AI series.

The AI series, “Developing Artificial Intelligence (AI) Equity, Access & Inclusion for All Series,” is the key way the Access Board is approaching this work with its partners, according to Amy Nieves, public affairs specialist in the Board's Office of the Executive Director. Thus far, the series has held five targeted sessions, the most recent being the Tuesday webinar. The first, in July, was an informational session. A series of public hearings followed: two focused on the disability community and one on federal agencies and AI practitioners. The next session was a request for public comments, which concluded Oct. 31.

More than 3,800 people have visited the AI webpage since its launch, 33 public comments have been received on regulations.gov, and hundreds of attendees have tuned in to the informational sessions and hearings. Hundreds more have viewed them online after the fact.

“As this work is still in the early stages, there’s still a lot more to come when it comes to artificial intelligence and the work the Access Board is going to be engaged in,” said Sachin Pavithran, Access Board executive director.

The community engagement process revealed many benefits AI has for people with disabilities, Nieves said. These include assistance with everyday tasks, improvements to assistive technologies such as speech recognition, support for navigating indoor and outdoor spaces, enhanced communication capabilities, and smart home solutions that can enable independent living.

Federal agencies and businesses provided insight on potential AI benefits, including increased efficiency on tasks that would otherwise need manual review, data analysis tools to identify patterns, and the ability to generate content based on learned patterns.

For example, the General Services Administration is exploring how AI tools can support accessibility initiatives, such as converting high volumes of PDF documents to accessible formats, providing AI-enhanced captioning and transcription services, and converting jargon to plain language on federal websites.

However, the community engagement process predominantly revealed risks AI poses for the disability community.

According to Nieves, a key challenge the process highlighted is the lack of inclusive data sets, which can lead AI systems to produce discriminatory outcomes for the disability community. As such, the training data for an AI system should accurately reflect the disability experience — which is not monolithic.

Recommendations were provided in each of seven risk areas: employment, education, benefits determination, information and communication, health care, transportation, and the legal system.

In employment, there are risks in the use of hiring tools and surveillance tools that are not calibrated for people with disabilities. The Access Board recommends employers evaluate automated tools in the hiring process and on the job to identify potentially discriminatory impacts.

In education, AI could violate students’ civil rights under the Americans with Disabilities Act and other legislation. It’s recommended that public entities and educators in school systems assess whether AI-powered tools can disproportionately affect students with disabilities. Pre-deployment audits of AI technology should be conducted prior to adoption, as well as ongoing monitoring to identify discriminatory impacts. Developers of educational technologies should consider disability-related bias.

Regarding benefits access, people with disabilities may be unfairly denied benefits as a result of algorithmic systems. It is recommended that state and local governments that administer public benefits use federal guidance to mitigate AI risks. Annual inventories of AI use cases should be published, and beneficiaries should be notified when AI is used in the benefits determination process. Information about the redress process should also be provided.

For communication technology, developers should focus on privacy protection in their tools and the protection of sensitive or disability-related data. These technologies should be developed to be compatible with assistive technologies. It’s recommended that public entities involve AI-trained accessibility coordinators in procuring AI technologies.

In health care, algorithmic tools can determine levels of care. It is recommended that hospital systems conduct pre-deployment audits focused on the algorithmic impact on people with disabilities, and that patients be fully informed of the privacy implications of at-home monitoring technologies.

In transportation, some algorithmic tools rely on biometric data. Those who develop and deploy such tools should evaluate their impact on people with disabilities. Alternative and equitable processes should be established for individuals who are unable to provide biometric input. Post-deployment audits should also be done.

Finally, in the legal system, AI tools can amplify existing systemic disability biases and could potentially determine an outcome before a trial happens. It’s recommended that state and local police implement risk-reduction measures based on the Office of Management and Budget’s Memorandum M-24-10.

More broadly, it is recommended to include people with disabilities in developing and implementing AI, take action to minimize bias, disclose AI when it is used, and continually monitor and assess for negative impacts.

The MOU partnership will continue until May 2027, and its goals will be adjusted as needed while AI evolves. More education and training on AI will be provided and more community engagement opportunities are expected.
Julia Edinger is a staff writer for Government Technology. She has a bachelor's degree in English from the University of Toledo and has since worked in publishing and media. She's currently located in Southern California.