What Could AI Do for Scientific Research?

From generating research questions to analyzing data to running simulations, AI could affect every aspect of the scientific process, but experts say accuracy and sustainability should be part of the conversation.

Early Earth looked very different from the planet we know today. It was a hot, high-pressure environment with no ozone layer and no oxygen in the atmosphere. Still, the organisms that existed then evolved into the life we recognize now.

Researchers at Johns Hopkins University wanted to understand how, but their method for finding out — measuring how certain proteins respond to high-pressure conditions — could have taken decades, according to a public statement from chemist Stephen Fried.

Luckily, the team had artificial intelligence on their side.

Google DeepMind’s AlphaFold tool mapped more than 2,500 proteins, identified which parts were pressure-sensitive and provided key insights into how those proteins may have behaved millions of years ago.

“This work gives us a better idea of how you might design a new protein to withstand stress and new clues into what types of proteins would be more likely to exist in high-pressure environments like those at the bottom of the ocean or on a different planet,” Fried said in a public statement.

Siddhartha Rao, CEO and co-founder of Positron Networks, which makes technology for scientific research, pointed to this study as a prime example of why AI is useful for his clients. AI can cut the time, labor and materials that experiments consume, potentially accelerating the rate of scientific discovery and lessening the influence of funding bodies on what universities study.

“During drug discovery, oftentimes, a pharmaceutical scientist will be trying to find a specific sequence of another protein or a molecule that'll fit, like a key going into a hole, the right spot on a protein,” Rao said in an interview with Government Technology. “Historically, the only way they were able to do that is by literally constructing the chemical involved or the protein involved and repeatedly running it in a physical environment that mimics the environment that they're trying to target, such as the human body, and ultimately trying to find that molecule that would work for them.”

AI COULD HELP EVERY STAGE OF THE SCIENTIFIC PROCESS


Rao pointed out that AI tools can take in large amounts of information on a topic and identify where further research would be helpful, potentially leading to new studies. They can use existing information to generate a hypothesis, draw on records of similar experiments and lab protocols to write an experimental procedure, create a simulation of that procedure and estimate which parts of an experiment are most worth running in a physical environment. They can analyze data, aggregate findings and write conclusions. They can even be useful after an experiment is finished, attempting to reproduce its results in a simulated environment, and they can evaluate papers in the peer-review process.

However, AI can only be helpful if it can understand and generate complex ideas with high confidence. According to Rao, the concept of non-linearity is key to this.

In computer science, non-linearity refers to operations in which the output does not have a simple, directly proportional relationship to the input. It’s why we can ask questions of a chatbot and get answers beyond “yes” or “no,” and it's what makes AI appealing for scientific research, Rao said. Non-linear “thinking” is already baked into AI tools, and this makes them good at predicting real-world behaviors like weather, fluid dynamics and, in the case of Johns Hopkins’ research, protein folding.
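
To make the idea concrete, the short Python sketch below contrasts a purely linear mapping with a small non-linear one built from a tanh activation, the kind of operation stacked inside modern neural networks. The weights and inputs are arbitrary toy values chosen for illustration, not anything drawn from the research described in this story.

```python
import numpy as np

# A purely linear map: the output is directly proportional to the input,
# so scaling the input scales the output by exactly the same factor.
def linear_map(x):
    w = np.array([0.5, -1.0])
    return x @ w

# A tiny non-linear map: one hidden layer with a tanh activation breaks
# that simple proportionality, which is what lets neural networks model
# behaviors like weather, fluid dynamics or protein folding.
def nonlinear_map(x):
    w1 = np.array([[1.0, -2.0],
                   [0.5,  1.5]])
    b1 = np.array([0.1, -0.3])
    w2 = np.array([1.0, -0.5])
    return np.tanh(x @ w1 + b1) @ w2

x = np.array([1.0, 2.0])

print(np.isclose(linear_map(3 * x), 3 * linear_map(x)))        # True: stays proportional
print(np.isclose(nonlinear_map(3 * x), 3 * nonlinear_map(x)))  # False: no simple proportion
```

Tripling the input triples the linear output exactly, while the non-linear output changes in a way no single ratio can capture; that flexibility is what makes these models worth pointing at messy real-world systems.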

PURPOSE-BUILT RESEARCH TOOLS


While AI is well positioned to assist with scientific research, Rao stressed that existing AI tools are often not built with scientists in mind.

As an example of what purpose-built tools can look like, NASA scientists recently partnered with IBM Research to develop an AI model for supporting climate and weather research. The model can be used to predict severe weather, create localized forecasts and enhance regional climate simulations. It can also improve the accuracy of the existing physics-based models used for the same tasks.

Rao wants to expand access to AI tools so that scientists with fewer resources than those at NASA can still benefit from the technology. His company, Positron Networks, created a tool called Robbie that simplifies AI and machine learning tasks used in research, so that researchers can conduct complex experiments without needing IT or software development skills.

“When you have a good idea, do you really want to go talk to a whole bunch of people, or do you want to try it out? You want to try it out. You want to experiment. And that's the entire concept behind the scientific method,” he said. “Whenever you put scientists in a position where they have to find or collaborate or, effectively, pay for help to get their experiment to run on this infrastructure, you’re creating a blocker for science.”

In addition to private companies like Positron and IBM investing in AI for scientific research, new publicly funded AI research centers are popping up nationwide. The National Science Foundation has funded 27 AI institutes across the U.S. through the National Artificial Intelligence Research Institutes program.

Matt Lease, a computer science professor at the University of Texas, is the co-director of the NSF-Simons AI Institute for Cosmic Origins (CosmicAI). While CosmicAI involves many astronomers and cosmologists, Lease's role is to adapt AI tools to the needs of the institute's research. One of his projects is an AI copilot specifically designed for astronomy.

“What could this AI copilot do that would help an astronomer?” he said. “Well, a starting point for thinking about this is, ‘What do people think these kinds of AI copilots can do that help us with our regular work?’”

A copilot trained on astronomy databases and scientific papers that can clearly articulate the source of its claims could be useful for simple tasks like brainstorming or generating research proposals, he said.
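
As a rough illustration of how such a copilot might point back to its sources, the sketch below implements the simplest possible retrieval step: scoring a question against a tiny corpus of invented placeholder abstracts with TF-IDF and returning the closest matches. A real system would index actual astronomy papers and databases and layer a language model on top; the titles and snippets here are assumptions for demonstration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for the astronomy abstracts a copilot might be
# grounded on; a real system would index actual papers and survey databases.
corpus = {
    "Paper A (placeholder)": "Molecular clouds collapse under gravity to form protostars.",
    "Paper B (placeholder)": "Spectral surveys trace star formation rates in nearby galaxies.",
    "Paper C (placeholder)": "Simulations of galaxy mergers predict bursts of star formation.",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus.values())

def retrieve_sources(question, top_k=2):
    """Return the documents most similar to the question, with scores,
    so any generated answer can point back to the material it came from."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix).ravel()
    ranked = sorted(zip(corpus.keys(), scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

for title, score in retrieve_sources("What drives star formation in galaxies?"):
    print(f"{title}: similarity {score:.2f}")
```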

HUMAN OVERSIGHT IMPERATIVE


As AI grows more prevalent in scientific research, Lease is thinking about ethics and practicality. He’s a founding member of UT’s Good Systems initiative, which aims to create AI technology that meets humanity’s needs and values.

He wants AI tools to be trustworthy, but not trusted blindly.

“One idea is that when people review AI outputs, they'll just be able to tell if there are any mistakes and fix them,” he said. “And sometimes that's true, but sometimes it's pretty hard for you to tell when there are mistakes. So, if there are any existing random mistakes or consistent, systematic bias in the AI output, there's a risk of those being perpetuated, those errors persisting, bias persisting, if people reviewing the AI outcomes don't catch them.”

It's a case for keeping humans in the loop, a common refrain in conversations about AI, and for routine audits. Thoughtful curation of an AI's training materials can improve its reliability; it can also help ensure those materials do not infringe on intellectual property rights, jeopardize data privacy or contribute unnecessarily to carbon emissions.

“People have often used the simplest hammer: ‘Oh, just give me more data and I'll make my AI model better.’ But not all data is equal, and often some data has a lot of bad stuff,” Lease said. “So, we do want to be thoughtful about curating our data, not just because we're trying to reduce the amount of compute and environmental impact, but because actually, you'll get a better model behavior and performance if you can curate your data to be more of the good stuff and less of the bad stuff.”
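
As a loose illustration of the kind of curation Lease describes, the sketch below deduplicates a handful of invented text snippets and drops obvious junk before they would ever reach a training set. The examples, thresholds and heuristics are placeholders; production pipelines rely on far more sophisticated quality, bias and licensing filters.

```python
# Minimal illustration of data curation: deduplicate and drop low-quality
# snippets before training, rather than simply adding more raw data.
# The examples and heuristics here are placeholders, not a real pipeline.

raw_examples = [
    "Proteins fold into structures determined by their amino acid sequence.",
    "Proteins fold into structures determined by their amino acid sequence.",  # duplicate
    "click here to subscribe!!!",                                              # boilerplate
    "ok",                                                                      # too short
    "High pressure can shift the equilibrium between folded and unfolded states.",
]

def keep(text, min_words=5):
    words = text.split()
    if len(words) < min_words:          # drop fragments
        return False
    if "click here" in text.lower():    # drop obvious boilerplate
        return False
    return True

seen = set()
curated = []
for text in raw_examples:
    normalized = text.strip().lower()
    if normalized in seen or not keep(text):
        continue
    seen.add(normalized)
    curated.append(text)

print(curated)  # a smaller, cleaner training set
```
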
Abby Sourwine is a staff writer for the Center for Digital Education. She has a bachelor's degree in journalism from the University of Oregon and worked in local news before joining the e.Republic team. She is currently located in San Diego, California.