Artificial intelligence and actual robots might seem like an obvious way to offload some of this menial work, but, for a long time, the complexity of even simple scientific experiments put this beyond reach. Yet now, a quiet revolution is under way in the form of the autonomous lab.
At the heart of so many scientific disciplines that concern themselves with finding new substances, from better batteries to powerful drugs, is a problem of scale. Our ability to predict possible new structures has grown enormously, as with Google DeepMind's protein-structure-predicting AI, AlphaFold, but our capacity to test whether these structures exist and are correct hasn't caught up.
This problem is surprisingly pervasive. In November last year, Google DeepMind released another model, called GNoME, that predicted potentially millions of new crystal structures that could be used for things like better solar panels or computer chips. But until they are made in the real world, which could take a team of people months in the lab, they must remain predictions.
Self-driving labs, which tend to consist of some assemblage of robotic arms, testing equipment and an AI overseer, can perform experiments thousands to millions of times faster than a person — and they don't need to sleep. They can measure properties, assess results and tweak future rounds of experimentation depending on what they find. "We can do more science in less time," says Alán Aspuru-Guzik at the University of Toronto, the head of a self-driving lab collaboration called the Acceleration Consortium.
Google DeepMind shared its AI crystal predictions with a group at Lawrence Berkeley National Laboratory in California that is developing a self-driving lab, called A-Lab, capable of autonomously synthesizing and testing new crystal structures. The A-Lab team managed to create 41 of the predicted crystals in real life, though the results have received pushback from some scientists, who say that the group hasn't correctly identified the materials.
Researchers are devising self-driving labs to do science in a surprisingly wide range of disciplines: in August, for example, a group at Boston University discovered the most energy-absorbing structure ever measured, which could find use in bike helmets and car crumple zones, after searching through possible designs using its Bayesian experimental autonomous researcher, or BEAR for short. Boasting a 3D printer, a robotic arm, scales and a hydraulic press, as well as an AI coordinator, BEAR can run 50 experiments a day without human supervision.
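The Bayesian search that BEAR performs can be pictured as a loop: propose the design a surrogate model currently finds most promising, run the physical experiment, fold the result back into the model and repeat. The sketch below is a minimal illustration of that idea, not BEAR's actual software: the experiment is a hypothetical noisy function standing in for a mechanical test, and the surrogate is a crude distance-weighted average with an exploration bonus in place of a full Gaussian process.

```python
import math
import random

def run_experiment(x):
    # Hypothetical stand-in for a physical test (e.g. crushing a printed
    # structure and measuring absorbed energy): a noisy function whose
    # peak, at x = 0.7, the loop should find.
    return math.exp(-(x - 0.7) ** 2 / 0.05) + random.gauss(0, 0.01)

def ucb_score(x, observations, kappa=2.0):
    # Crude surrogate: a distance-weighted mean of past results, plus an
    # uncertainty bonus that grows with distance to the nearest observation
    # (an upper-confidence-bound rule, so unexplored designs stay tempting).
    if not observations:
        return float("inf")
    weighted = [(math.exp(-((x - xo) ** 2) / 0.01), yo) for xo, yo in observations]
    total = sum(w for w, _ in weighted)
    mean = sum(w * y for w, y in weighted) / total if total > 1e-12 else 0.0
    nearest = min(abs(x - xo) for xo, _ in observations)
    return mean + kappa * nearest

random.seed(0)
candidates = [i / 100 for i in range(101)]   # the design space, as a grid
observations = []                            # (design, measured result) pairs

for _ in range(30):                          # 30 automated "experiments"
    x = max(candidates, key=lambda c: ucb_score(c, observations))
    observations.append((x, run_experiment(x)))

best_x, best_y = max(observations, key=lambda o: o[1])
print(best_x, best_y)
```

After 30 rounds the loop has concentrated its experiments near the true optimum: exploration dominates early, when the uncertainty bonus is large everywhere, and exploitation takes over as the design space fills in. That trade-off, made automatically, is what lets a self-driving lab spend its 50 daily experiments efficiently.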
In November, researchers at the University of Science and Technology of China gave real Martian meteorites to their robotic chemist and asked it to autonomously find oxygen-producing catalysts, using only the minerals it could extract from the meteorites. They saved their robot some of the drudgery by using an AI system to predict which combinations of elements might make the best catalyst, narrowing a list of millions of testable materials down to just 200. It worked, finding a catalyst comparable to the best available on Earth a decade ago.
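The funnel the Chinese team used, in which a cheap predictive model scores every candidate so that only the most promising go on to expensive testing, is a common pattern in these labs. A minimal sketch, with an entirely made-up composition space and a deterministic stand-in for the trained activity-prediction model:

```python
import heapq

def surrogate_score(candidate):
    # Stand-in for a trained model that predicts catalytic activity from
    # composition; in real work this would be learned from simulations or
    # prior experiments. Here it is just a deterministic pseudo-score.
    return (hash(candidate) % 1000) / 1000.0

# Hypothetical composition space: three-element mixes drawn from minerals
# that might be extracted from a meteorite.
elements = ["Fe", "Mg", "Ca", "Al", "Si", "Mn"]
candidates = [(a, b, c) for a in elements for b in elements for c in elements]

# Rank every candidate with the cheap surrogate and keep only the top 200
# for physical testing -- the "millions down to 200" funnel in miniature.
shortlist = heapq.nlargest(200, candidates, key=surrogate_score)
print(len(shortlist))
```

The same two-stage structure scales from this toy's few hundred candidates to the millions in the real study: scoring is fast, so the expensive robot time is spent only on the shortlist.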
All of these examples are impressive displays of engineering and how far AI has come, but none of the results has been world-changing yet. We still haven't seen a better battery or solar panel deployed on a commercial scale that was made possible because of a discovery by a robot gripper, rather than a human hand. Many argue it is just a matter of time until we do, as these methods become more commonplace, but thinking that AI will solve all our problems is an easy trap to fall into.
For all the promise that self-driving labs hold, there are many questions too. Humans are still needed to interpret results, and this can be open to controversy, as in the case of the A-Lab project. The machines must also be told by a human where to look and what project to focus on, and they currently lack the general exploratory powers of a human researcher. And though the labs are developing rapidly, they are struggling to keep up with the massive increase in AI predictions.
As Andy Cooper at the University of Liverpool, UK, told me last year: "The world's capability to predict and calculate things, and use machine learning to extrapolate further, is evolving faster than the robotic capability to look for the materials."
Alex Wilkins is a news reporter focused on space, physics and breakthroughs in biology and robotics.
©2024 New Scientist Ltd.