Click "play" on the video below and close your eyes for a few minutes.
Attendees of the 32nd CSUN Assistive Technology Conference in March went through this same exercise in a session taught by Terrill Thompson, a technology accessibility specialist at the University of Washington who's worked in the accessibility field for 20 years. And they found out what you probably just discovered — the video didn't tell them much.
"We watched that video with our eyes closed and then talked about it, and obviously no one got anything out of it other than that there was good music," he recalled.
In the video version that includes human audio description, people who are blind find out that students and faculty at the University of Washington won a Nobel prize, broke a research record and earned a national golf championship in 2016, among other things.
As online learning increasingly relies on visuals to tell a story, that content is often inaccessible to people who are blind. With little or no audio narration, visually impaired users can't tell what's happening on screen, and even audio-rich lectures and documentary-style videos don't describe everything that's shown. Such videos need human narration or text-based descriptions read aloud by screen-reading software so that blind users can understand who's talking and what's going on. Similarly, people who are deaf need captions and transcripts.
About 11 percent of U.S. undergraduates have some sort of disability, according to government statistics, and federal law requires higher education institutions to provide equal access to learning resources for people with disabilities. Similarly, an international technical standard called the Web Content Accessibility Guidelines 2.0 (WCAG 2.0) provides recommendations that the U.S. Education Department's Office for Civil Rights expects higher ed to follow. But universities are struggling to make progress on accessibility challenges.
"The hardest and most confusing WCAG 2.0 standard is the one that requires audio description of text and graphics in videos, and universities are just now figuring out how to address that issue," Thompson said.
The University of California, Berkeley received a warning letter from the Office for Civil Rights last year about its online audio and video resources. The office reviewed 26 massive open online courses, 30 lectures on YouTube and 27 courses on iTunes and found that deaf or hard-of-hearing individuals could not access major portions of the public-facing content. To fix it, the office recommended developing systems and processes that would help the university meet WCAG 2.0 standards.
The university responded with a letter that suggested Berkeley might take down the older content rather than make it accessible to people with disabilities. In a March letter to the campus, Vice Chancellor for Undergraduate Education Cathy Koshland said UC Berkeley plans to focus on making new content accessible rather than retrofitting three- to 10-year-old content. This month, the university started moving more than 20,000 legacy online resources to a video channel that requires university user authentication.
"For Berkeley to say something like that is distressing to all of us since we consider Berkeley a major institution that would support accessibility and disabilities," said Kimberly Elmore, board coordinator for Disability Rights, Education, Activism and Mentoring (DREAM) at the National Center for College Students with Disabilities. "That was disturbing."
Making video content accessible takes time and money — and in the case of human audio descriptions, it can be difficult to mix original audio tracks with accessible narration to match the timing of the images on screen, said Sean Keegan, president of the Access Technology Higher Education Network.
Creating a standard educational video requires special filming equipment, a videographer, an editor, a speaker and someone to mix the audio. Together, those basic production components make the process expensive even before the added cost of accessibility work. Pricing for human-narrated descriptions ranges from $12 to $75 per video minute, according to Thompson. Text-based descriptions may run $1 to $3 a minute depending on the quality, said Keegan.
"If it's built in as part of the process, I don't see cost as being a major determining factor," he said. "But if it's something that has to be done after the fact, then yes, it can be expensive."
After California Community Colleges received a warning letter in the late '90s for its lack of accessibility, it received help from the state. The state legislature made accessibility a priority and set up a support system backed by state funding, including up to a million dollars each year for a video caption grant program. It also created two centers for training faculty on accessibility and for creating electronic files that could be printed for Braille readers.
"If we truly believe that our community college mission is to serve the top 100 percent of students who come to us, then we must work systemically to ensure that the learning materials are accessible to everyone," said James Glapa-Grossklag, the captioning grant administrator.
Keegan suggested two resources that can help colleges make learning materials accessible, and they both receive some funding from the U.S. Department of Education. The National Association of the Deaf administers the Described and Captioned Media Program, which provides a borrowing library of no-charge media that's described and captioned, along with a learning center and a gateway to Internet resources. The National Center for Accessible Media includes online assessments, no-charge training resources and tools to create accessible multimedia and websites.
At the University of Washington, two groups meet regularly to share accessibility information, evaluate each other's websites and figure out code that can help. The university has concentrated its efforts on educating staff members and working with vendors to make tools accessible.
"The campus as a whole is starting to appreciate that accessibility is cool and fun," Thompson said, "and it's important that we not build things that our students or employees can't use."
Thompson built an open-source media player, Able Player, as a personal project that he hoped others would model. The project began out of his desire to share music he created on his personal website in an accessible way, but it now benefits the university as well: the Disabilities, Opportunities, Internetworking and Technology (DO-IT) group uses Able Player to share university videos, and the player also supports audio descriptions, captions, sign language and chapters.
His favorite feature is the track tag in the HTML5 markup language, which lets media creators synchronize text with video. The University of Wisconsin-Extension is considering adding the feature to its open-source Storybook Plus player, which a team developed after Able Player launched. Rather than overwhelming viewers with all the text at once, the track tag times it so that the audio and the written words play together.
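The pattern Thompson describes can be sketched in a few lines of standard HTML5. This is a minimal illustration, not code from Able Player or Storybook Plus; the file names and labels are placeholders.

```html
<video controls>
  <source src="campus-tour.mp4" type="video/mp4">
  <!-- Captions for viewers who are deaf or hard of hearing -->
  <track kind="captions" src="captions.vtt" srclang="en" label="English captions">
  <!-- Text-based audio descriptions that an accessible player or screen
       reader can voice aloud at the right moments -->
  <track kind="descriptions" src="descriptions.vtt" srclang="en" label="Audio descriptions">
  <!-- Chapter markers for navigating the video -->
  <track kind="chapters" src="chapters.vtt" srclang="en" label="Chapters">
</video>
```

Each track points to a WebVTT file whose timestamped cues carry the timing, so a description such as "Students celebrate on the 18th green" is announced only during the seconds it applies to, rather than all at once.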
The University of Wisconsin-Extension finished a major update to the Storybook Plus player last year with internal grant funding of just over $3,700. The update brought it up to current accessibility standards and improved compatibility with most browsers, websites and mobile devices. A team of instructional designers and media specialists worked on the update and stressed the importance of having control over changes made to the player rather than relying on a commercial solution.
Instructors can pull together audio files, images, titles and metadata that make up a presentation and upload it to a Web server for the media player to access. Then the design and media team refines it as needed.
"We want to make sure that all users who take our degree programs or courses are having the same experience — and that it is a positive experience," said Laurie Berry, a member of the project team on the design side.
That's why testing the player with users is so important. The development team tried it with people who are blind and watched how they interacted with the player. In the process, they discovered some redundant information that screen reading software didn't need to read and a key piece of information that caused them to skip an entire slide.
When the testers finished a self-assessment quiz in the presentation, they clicked to go to the next slide. That slide had actually refreshed with faculty feedback on their results, but because it had the same words "self assessment" at the top, it sounded to screen reader users like a duplicate slide, and they skipped past it. Based on this observation, the team used ARIA accessibility attributes in the code to update the language when the page refreshed, so students would hear the feedback without skipping the page.
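One common way to implement a fix like this is an ARIA live region, which tells screen readers to announce content that changes after the page loads. The sketch below is illustrative, assuming hypothetical element IDs and text, and is not Storybook Plus's actual markup.

```html
<h2 id="slide-title">Self Assessment</h2>
<!-- aria-live="polite" asks screen readers to announce updates to this
     region once the user is idle, instead of staying silent -->
<div id="feedback" aria-live="polite"></div>

<script>
  // After the quiz is scored, change the heading and populate the live
  // region so the refreshed slide no longer sounds identical to the quiz.
  function showFeedback(feedbackText) {
    document.getElementById('slide-title').textContent = 'Self Assessment Results';
    document.getElementById('feedback').textContent = feedbackText;
  }
</script>
```

Because the heading text changes and the live region announces the new content, a screen reader user hears that the slide is different rather than assuming it repeated.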
"We went in with assumptions about what we thought it meant to be accessible or how things should be laid out and structured, and we got so much good feedback from those people," said Bryan Bortz, a member of the project team on the media side.
Accessibility has come a long way from the days in the late '90s when Keegan was a grad student at Ball State University in Indiana. He was working on his master's degree in exercise science, and his professor, who was blind, directed the athletic training program. In the early days of the Internet, the two worked on harnessing its power so the professor could communicate with students and faculty.
After he graduated, Keegan started working on accessibility in higher education and has seen major changes over the last 17 years. Assistive technology and underlying Web technologies have improved. Meanwhile, Web developers are paying more attention to accessibility, and universities are making it a priority on campus.
"When institutions are incorporating accessibility into their culture, that's where we're going to see the real significant impacts over time," Keegan said.