
ED Issues Guidance to Avoid Discriminatory Use of AI

The U.S. Department of Education’s Office for Civil Rights shared a series of illustrative scenarios last week to help schools understand what constitutes artificial intelligence-based discrimination.

The U.S. Department of Education’s Office for Civil Rights (OCR) last week released a 16-page document to help schools avoid discriminatory uses of artificial intelligence, featuring detailed examples of how a school’s improper use of AI, or its mishandling of AI-related issues, could violate students’ civil rights.

Title VI of the Civil Rights Act of 1964 prohibits discrimination on the basis of race, color or national origin. The new guidance from OCR provides six hypothetical scenarios to illustrate how school AI use could violate this law, including an AI cheating detector that inaccurately flags English learners and AI-enabled facial recognition technology that consistently misidentifies students of color as known criminals.

In the latter hypothetical, the school takes no action despite complaints to the principal. Students continue to be called out of class and questioned, losing learning time and suffering harm to their reputations as peers begin calling them criminals.

“[T]he students may have experienced a hostile environment due to the multiple false flags, being pulled out of class and questioned, and the allegations regarding peer harassment,” the OCR document states.

Another six hypothetical scenarios illustrate potential violations of Title IX of the Education Amendments of 1972, which prohibits discrimination on the basis of sex. These include harassment on school grounds stemming from AI-generated sexual deepfakes and an AI program that screens college engineering applications and, having been trained on historical admissions data, is biased toward men.

In the sexual deepfakes scenario, the guidance explains that if school officials do nothing but report the issue to police, OCR would have reason to investigate.

“[T]he students may have experienced prohibited harassment about which the school knew and failed to appropriately respond,” the document states.

Section 504 of the Rehabilitation Act of 1973 and Title II of the Americans with Disabilities Act of 1990 prohibit discrimination based on disability. OCR offers six examples of how AI use could result in this form of discrimination, from a generative AI tool that composes Section 504 plans for students with disabilities without human oversight, producing cookie-cutter plans that do not meet each student’s needs, to an AI-enabled noise monitor and display, meant to keep classrooms quiet, that continually flags a student who is hard of hearing for speaking too loudly.

In most of these hypothetical examples, school authorities fail to act on complaints from parents and students, leaving the school open to investigation should anyone decide to report the issue to OCR. The agency is responsible for enforcing federal civil rights laws in public schools and colleges, and any AI use that violates these laws could be grounds for an investigation, according to Catherine Lhamon, assistant secretary of education for civil rights.

“Federal civil rights laws protect students in educational settings with and without AI,” she said in a public statement. “School communities must take care that they do not discriminate when applying AI tools. OCR will remain vigilant in enforcement regarding AI usage as we will with respect to any other aspect of students’ educational experience.”

Last year, OCR received 19,201 complaints regarding school-based civil rights violations, the highest in the agency’s history, according to its most recent annual report. If an OCR investigation determines a school failed to comply with a civil rights law, enforcement options range from voluntary compliance procedures to the revocation of federal funds.