That bias can enter into these often life-altering decisions is nothing new. But today, with artificial intelligence assisting everyone from college admission directors to parole boards, a group of researchers at Morgan State University says the potential for racial, gender and other discrimination is amplified by orders of magnitude.
“You automate the bias, you multiply and expand the bias,” said Gabriella Waters, a director at a Morgan State center seeking to prevent just that. “If you’re doing something wrong, it’s going to do it in a big way.”
Waters directs research and operations for the Baltimore university’s Center for Equitable Artificial Intelligence and Machine Learning Systems, or CEAMLS for short. Pronounced “seamless,” it indeed brings together specialists from disciplines ranging from engineering to philosophy, with the goal of harnessing the power of artificial intelligence while ensuring it doesn’t introduce or spread bias.
AI is a catchall phrase for systems that can process large amounts of data quickly and, mimicking human cognitive functions such as detecting patterns, predict outcomes and recommend decisions.
But therein lie both its benefits and pitfalls: as data points are introduced, so, too, can bias creep in. Facial recognition systems were found more likely to misidentify Black and Asian people, for example, and Amazon dumped a recruiting program that favored male over female applicants.
Bias also cropped up in an algorithm used to assess the relative sickness of patients, and thus the level of treatment they should receive, because it was based on the amount of previous spending on health care — meaning Black people, who are more likely to have lower incomes and less access to care to begin with, were erroneously scored as healthier than they actually were.
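The mechanism behind that health care example can be shown in a few lines. Below is a minimal sketch in Python, with invented groups and numbers (none drawn from the actual study), of how a score built on past spending inherits an access gap and mislabels the underserved group as healthier:

```python
# Proxy-label bias, illustrated with made-up numbers: both groups are
# equally sick, but group_b has less access to care, so it spends less.
true_illness = {"group_a": 0.60, "group_b": 0.60}    # same underlying need
access_to_care = {"group_a": 1.00, "group_b": 0.55}  # unequal access

# Past spending reflects illness *and* access, not illness alone.
spending = {g: true_illness[g] * access_to_care[g] for g in true_illness}

# A "risk score" keyed to spending inherits the access gap.
for group in sorted(spending):
    print(f"{group}: true illness = {true_illness[group]:.2f}, "
          f"spending-based score = {spending[group]:.2f}")
# group_a: true illness = 0.60, spending-based score = 0.60
# group_b: true illness = 0.60, spending-based score = 0.33  <- looks "healthier"
```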
Don’t blame the machines, though. They can only do what they do with what they’re given.
“It’s human beings that are the source of the data sets being correlated,” Waters said. “Not all of this is intentional. It’s just human nature.”
Data can “obscure the actual truths,” she said. You might find that ice cream sales are high in areas where a lot of shark attacks occur, Waters said, but that, of course, doesn’t mean one causes the other.
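That shark-and-ice-cream trap is easy to reproduce. In the Python sketch below, simulated daily temperature (the hidden confounder, with all numbers invented) drives both ice cream sales and shark encounters, so the two correlate strongly even though neither causes the other:

```python
import random

random.seed(0)
temps = [random.uniform(10, 35) for _ in range(200)]     # daily temperature
ice_cream = [3 * t + random.gauss(0, 5) for t in temps]  # sales rise with heat
sharks = [0.2 * t + random.gauss(0, 1) for t in temps]   # more swimmers in heat

def corr(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5

# Strong positive correlation, with no causal link in the simulation.
print(f"corr(ice cream, shark encounters) = {corr(ice_cream, sharks):.2f}")
```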
The center at Morgan was created in July 2022 to find ways to address problems that already underlie existing AI systems, and create new technologies that avoid introducing bias.
As a historically Black university that has been boosting its research capacity in recent years, Morgan State is poised to put its own “stamp” on the AI field, said Kofi Nyarko, who is the CEAMLS director and a professor of electrical and computer engineering.
“Morgan has a unique position here,” Nyarko said. “Yes, we have the experts in machine learning that we can pull from the sciences.
“But also we have a mandate. We have a mission that seeks to not only advance the science, but make sure that we advance our community such that they are involved in that process and that advancement.”
Morgan State’s AI research has been fueled by an influx of public and private funding — by its calculations, nearly $18.5 million over the past 3½ years. Many of the grants come from federal agencies, including the Office of Naval Research, which gave the university $9 million, the National Science Foundation and the National Institutes of Health.
Throughout the state, efforts are underway to catch up with the burgeoning field of AI, tapping into its potential while working to guard against any unintended consequences.
The General Assembly and Democratic Gov. Wes Moore’s administration have both been delving into AI, seeking to understand how it can be used to improve state government services and ensure that its applications meet values such as equity, security and privacy.
That was part of the agenda of a Nov. 29 meeting of the General Assembly’s Joint Committee on Cybersecurity, Information Technology, and Biotechnology, where some of Moore’s newly appointed technology officials briefed state senators and delegates on the use of the rapidly advancing technology in state government.
“It’s all moving very fast,” said Nishant Shah, who in August was named Moore’s senior advisor for responsible AI. “We don’t know what we don’t know.”
Shah said he’ll be working to develop a set of AI principles and values that will serve as a “North Star” for procuring AI systems and monitoring them for any possible harm. State tech staff are also doing an inventory of AI already in use — “very little,” according to a survey that drew limited response this summer — and hoping to increase the knowledge and skills of personnel across the government, he said.
At Morgan, Nyarko said he is heartened by the attention being paid, both in the state and federally, to getting AI right. The White House, for example, issued an executive order in October on the safe and responsible use of the technology.
“There is a lot of momentum now, which is fantastic,” Nyarko said. “Are we there yet? No. Just as the technology evolves, the approach will have to evolve with it, but I think the conversations are happening, which is great.”
Nyarko, who leads Morgan’s Data Engineering and Predictive Analytics (DEPA) Research Lab, is working on ways to monitor the performance of cloud-based systems and whether their outputs shift depending on variables such as a person’s race or ethnicity. He’s also working on how to objectively measure the “very nebulous” concept of fairness: could there be a consensus within the industry, for example, on benchmarks that everyone would use to test their system’s performance?
“Think about going to the grocery store and picking up a package with a nutrition label on it,” Nyarko said. “It’s really clear when you pick it up you know what you’re getting.
“What would that look like for the AI model? … Pick up a product and flip it over, so to speak, metaphorically see what its strengths are, what its weaknesses are, in what areas what groups are impacted one way or the other.”
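No such label exists yet, but a hedged sketch can suggest what a machine-readable version might report. The Python below, with hypothetical predictions and group names (nothing here comes from the center’s work), computes one widely discussed fairness benchmark, per-group selection rates and the gap between them:

```python
# A hypothetical "nutrition label" for a model: per-group selection
# rates and the demographic-parity gap. All data are invented.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]           # model decisions
groups      = ["a", "a", "a", "b", "b", "b", "a", "b", "a", "b"]

def selection_rate(preds, grps, group):
    """Fraction of a group's members the model selects (predicts 1)."""
    members = [p for p, g in zip(preds, grps) if g == group]
    return sum(members) / len(members)

rates = {g: selection_rate(predictions, groups, g) for g in set(groups)}
gap = max(rates.values()) - min(rates.values())

print("=== model label (sketch) ===")
for g in sorted(rates):
    print(f"selection rate, group {g}: {rates[g]:.2f}")
print(f"demographic-parity gap: {gap:.2f}")  # 0.00 would mean equal rates
```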
The center’s staff and students, ranging from undergrads to postdocs, are working on multiple projects: A child’s toy car is parked in one room, awaiting further work to make it self-driving. There are autonomous wheelchairs being tested at Baltimore/Washington International Thurgood Marshall Airport, where one day, it is hoped, they could be ordered like an Uber.
Waters, who directs the Cognitive and Neurodiversity AI Lab at Morgan, is working on applications to help in diagnosing autism and assist those with autism in developing skills. With much autism research based on a small pool, usually boys and particularly white boys, she is working on using AI to observe and track children of other racial and ethnic groups in their family settings, seeking to tease out cultural differences that may mask symptoms of autism.
She is also working on using augmented reality glasses and AI to develop individualized programs for those with autism. The glasses would put an overlay on the real environment, prompting and rewarding the wearer to be more vocal, for example, or using a cartoon character to point to a location they should go to, such as a bathroom.
While the center works on projects that could find their way to the marketplace, it maintains its focus on providing, as its mission statement puts it, “thought leadership in the application of fair and unbiased technology.”
One only has to look at previous technologies that took unexpected turns from their original intent, said J. Phillip Honenberger, who joined the center from Morgan’s philosophy and religious studies department. He specializes in the intersection of philosophy and science, and sees the center’s work as an opportunity to get ahead of whatever unforeseen implications AI may have for our lives.
“Any socially disruptive technology almost never gets sufficient deliberation and reflection,” Honenberger said. “They hit the market and start to affect people’s lives before people really have a chance to think about what’s happening.
“Look at the way social media affected the political space,” Honenberger said. No one thought, he said, “We’re going to build this thing to connect people with their friends and family, and it’s going to change the outcome of elections, it’s going to lead to polarization … and disinformation and all the other negative effects.”
Technology tends to have a “reflection and deliberation deficit,” Honenberger said.
But, he said, that doesn’t mean innovation should be stifled because it might lead to unintended consequences.
“The solution is to build ethical capacity, build reflective and deliberative capacity,” he said, “and that’s what we’re in the business of doing.”
©2024 Baltimore Sun. Distributed by Tribune Content Agency, LLC.