The university recently secured $625,000 in grant funding from the National Science Foundation, according to a press release from Rep. Jimmy Panetta (D-Carmel Valley). The money will be used in an effort to refine artificial intelligence software that determines someone's eligibility for a loan, job interview or bail.
The technology is called machine learning: software that analyzes data to find patterns and draw conclusions from them.
"My research is about how machine learning — data science — interacts with people," principal investigator Yang Liu said. "There's some unique challenges to processing human data. Human data can be imperfect. People can do all of these manipulations. I'm interested in how machine learning, how data science handles data coming from people."
Machine learning has had problems in the past. Models have shown racial and gender biases, Liu said.
Those models learned to reinforce existing discrimination, he said. Trained on past decisions, a model picked up correlations between race or gender and loan approvals or bail outcomes, and began basing decisions on those traits rather than on factors genuinely relevant to the applications.
While the model wasn't making those mistakes all the time, it was still making them at a higher rate for marginalized groups of people. Technology experts realized that needed correcting.
"We want the model to be serving all people, or people from different groups equally," Liu said. "Models make mistakes. What they found was the machine learning model was making substantially more mistakes for the Black community as compared to the other racial groups."
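The disparity Liu describes can be made concrete by measuring a model's error rate separately for each group. The sketch below uses entirely invented records (the group labels, predictions and outcomes are fabricated for illustration, not from the research):

```python
# Hypothetical sketch: checking whether a model's mistakes fall
# disproportionately on one group. All data here is invented.
records = [
    # (group, model_prediction, true_outcome)
    ("A", "deny", "repaid"),      # mistake: creditworthy applicant denied
    ("A", "approve", "repaid"),
    ("A", "deny", "repaid"),      # mistake
    ("B", "approve", "repaid"),
    ("B", "deny", "default"),
    ("B", "approve", "repaid"),
]

def error_rate(group):
    """Fraction of decisions for `group` that contradict the true outcome."""
    rows = [r for r in records if r[0] == group]
    wrong = sum(1 for _, pred, truth in rows
                if (pred == "approve") != (truth == "repaid"))
    return wrong / len(rows)

print(round(error_rate("A"), 2))  # 0.67 — mistakes concentrated in group A
print(round(error_rate("B"), 2))  # 0.0
```

An equal-accuracy model would show roughly the same number for every group; a gap like the one above is the kind of signal researchers look for.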
Originally, the idea was to remove those data points, such as race and gender, from the equation, but the software continued to find the correlation, Liu said. It was still able to infer an applicant's gender or race from other factors, such as hobbies, where they went to school and their name.
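This leakage through stand-in features is easy to demonstrate. In the sketch below, the protected attribute is dropped from the inputs, yet another field (an invented ZIP-code value; all data is fabricated for illustration) still predicts it almost perfectly:

```python
# Hypothetical sketch of proxy leakage: "group" is never given to the
# predictor, but a majority-group lookup per ZIP code recovers it anyway.
from collections import Counter, defaultdict

applicants = [
    {"zip": "95060", "group": "X"},
    {"zip": "95060", "group": "X"},
    {"zip": "95060", "group": "Y"},
    {"zip": "93901", "group": "Y"},
    {"zip": "93901", "group": "Y"},
    {"zip": "93901", "group": "Y"},
]

# Stand-in for what a model would learn from the proxy feature alone.
by_zip = defaultdict(list)
for a in applicants:
    by_zip[a["zip"]].append(a["group"])
guess = {z: Counter(gs).most_common(1)[0][0] for z, gs in by_zip.items()}

correct = sum(1 for a in applicants if guess[a["zip"]] == a["group"])
print(correct, "of", len(applicants), "recovered without using 'group'")
```

Because the proxy carries nearly the same information as the deleted attribute, simply deleting the attribute does not make the model blind to it.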
Researchers found a way a few years ago to keep those proxy correlations from driving the model's decisions. Its decisions are now grounded more firmly in factors such as income and job status. However, that was only half the problem.
The software can learn quickly, but its reasoning still has flaws, even after it has reached a decision on someone's application. The model can recognize that someone's salary is too low for them to qualify for a loan and give them advice on how to have a higher chance of approval on their next application.
"Machine learning can offer an explanation. It can tell you why you got denied," Liu said. "Sometimes these explanations are not actionable. People cannot do anything. The model is having difficulty."
The software will make improvement suggestions that are unrealistic or even impossible. For example, it can recognize that a married couple does not have a high enough salary and recommend they get divorced in order to have a higher chance of loan approval, according to Liu.
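One way to frame the fix is to restrict the model's suggestion search to features a person can actually change. The sketch below is not Liu's method; it is a toy linear scoring model with invented weights and threshold, showing how an unconstrained search lands on the "get divorced" suggestion while an actionability constraint yields usable advice:

```python
# Hypothetical sketch of the actionability problem: searching all
# features suggests changing marital status; restricting the search
# to actionable features gives advice the applicant can follow.
WEIGHTS = {"income": 0.5, "savings": 0.3, "married": -0.4}  # invented
THRESHOLD = 0.7  # invented approval cutoff

def score(app):
    return sum(WEIGHTS[k] * app[k] for k in WEIGHTS)

def suggest(app, features):
    """Return the first single-feature change (among `features`) that flips approval."""
    for f in features:
        trial = dict(app)
        trial[f] += 1 if WEIGHTS[f] > 0 else -1  # move in the helpful direction
        if score(trial) >= THRESHOLD:
            return f, trial[f] - app[f]
    return None

applicant = {"income": 1, "savings": 1, "married": 1}  # score 0.4: denied
print(suggest(applicant, ["married", "income"]))   # naive search: "get divorced"
print(suggest(applicant, ["income", "savings"]))   # actionable features only
```

The same denial gets two very different explanations depending on which features the search is allowed to touch — which is why the research focuses on recommendations people can realistically act on.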
"This type of thing can happen with the model," he said. "We don't want this to hurt people's opportunities from taking actions. We want to give people the best and most cost-efficient way to take actions. We want to give the recommendation for people to follow to gradually improve their well-being."
Liu called this the consequence of machine learning, and it is what he is aiming to fix.
"We really want to understand the consequence," he said. "We are hoping to mitigate the consequence and offer a good outcome to people instead of making it harder."
To fix the software's shortcomings, Liu plans to have it interact with human subjects.
The interaction will work like a role-playing game, Liu said. Participants will be given specific characters and told to apply for a loan. Researchers will then apply the machine learning model to determine whether their character qualifies for the loan.
From there, the model will make suggestions to improve their future application. Participants will then be asked how reasonable or attainable these suggestions are.
"We are trying to simulate a real-world interaction with the banks and the agents. We are going to observe how people respond and how they are going to take actions accordingly," Liu said. "Based on these observations, we are trying to learn or trying to discover the patterns of how people respond to the interactions."
Researchers will use their observations and the patterns they discovered to create a new model that will pair with machine learning. The two models will work together to help the software better respond to varying issues.
Rather than suggest one outrageous solution, it will learn to suggest a series of attainable steps people can take to improve their chances of approval down the road.
Liu believes his team's research, and the changes it makes to the software, will help give people the opportunity to improve their lives rather than leave them in the same position they were in before.
"We don't want the model to be too brutal," he said. "We want the model to be able to induce improvements from people."
(c)2021 the Santa Cruz Sentinel (Scotts Valley, Calif.) Distributed by Tribune Content Agency, LLC.