UPTON, NY-- A team of scientists from the U.S. Department of Energy's Brookhaven National Laboratory and Lawrence Berkeley National Laboratory designed, created, and successfully tested a new algorithm to make smarter scientific measurement decisions. The algorithm, a form of artificial intelligence (AI), can make autonomous decisions to define and perform the next step of an experiment. The team described the capabilities and flexibility of their new measurement tool in a paper published on August 14, 2019, in Scientific Reports.
From Galileo and Newton to the recent discovery of gravitational waves, scientific experiments aimed at understanding the world around us have driven technological advancement for hundreds of years. Improving the way researchers do their experiments can have tremendous impact on how quickly those experiments yield applicable results for new technologies.
Over the past few decades, researchers have sped up their experiments through automation and an ever-growing assortment of fast measurement tools. However, some of the most interesting and important scientific challenges--such as creating improved battery materials for energy storage or new quantum materials for new types of computers--still require very demanding and time-consuming experiments.
By creating a new decision-making algorithm as part of a fully automated experimental setup, the interdisciplinary team from two of Brookhaven's DOE Office of Science user facilities--the Center for Functional Nanomaterials (CFN) and the National Synchrotron Light Source II (NSLS-II)--and Berkeley Lab's Center for Advanced Mathematics for Energy Research Applications (CAMERA) offers a way to study these challenges more efficiently.
The challenge of complexity
The goal of many experiments is to gain knowledge about the material that is studied, and scientists have a well-tested way to do this: They take a sample of the material and measure how it reacts to changes in its environment.
A standard approach for scientists at user facilities like NSLS-II and CFN is to manually scan through the measurements from a given experiment to determine where to measure next. But access to these facilities' high-end materials-characterization tools is limited, so measurement time is precious. A research team might have only a few days to measure their materials, so they need to make the most of each measurement.
"The key to achieving a minimum number of measurements and maximum quality of the resulting model is to go where uncertainties are large," said Marcus Noack, a postdoctoral scholar at CAMERA and lead author of the study. "Performing measurements there will most effectively reduce the overall model uncertainty."
As Kevin Yager, a co-author and CFN scientist, pointed out, "The final goal is not only to take data faster but also to improve the quality of the data we collect. I think of it as experimentalists switching from micromanaging their experiment to managing at a higher level. Instead of having to decide where to measure next on the sample, the scientists can instead think about the big picture, which is ultimately what we as scientists are trying to do."
"This new approach is an applied example of artificial intelligence," said co-author Masafumi Fukuto, a scientist at NSLS-II. "The decision-making algorithm is replacing the intuition of the human experimenter and can scan through the data and make smart decisions about how the experiment should proceed."
More information for less?
In practice, before starting an experiment, the scientists define a set of goals they want to get out of the measurement. With these goals set, the algorithm looks at the previously measured data while the experiment is ongoing to determine the next measurement. In its search for the best next measurement, the algorithm creates a surrogate model of the data--an educated guess as to how the material will behave in the next possible steps--and calculates the uncertainty, basically how confident it is in its guess, for each possible next step. Based on this, it then selects the most uncertain option to measure next. The trick is that by picking the most uncertain step, the algorithm maximizes the amount of knowledge it gains from that measurement. The algorithm not only maximizes the information gained during the measurement but also determines when to end the experiment by recognizing the moment when additional measurements would no longer yield new knowledge.
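As a concrete illustration, here is a minimal sketch of that decision step in Python, assuming a Gaussian-process surrogate. The team's actual software is not shown here; scikit-learn's GaussianProcessRegressor stands in for the surrogate, and the stopping threshold is a hypothetical parameter chosen for the example:

```python
# Minimal sketch of uncertainty-driven measurement selection.
# Assumes a Gaussian-process surrogate via scikit-learn; the published
# work uses its own implementation, so treat this as illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_measurement(measured_xy, measured_values, candidates, stop_threshold=0.01):
    """Fit a surrogate to the data so far and pick the most uncertain candidate.

    Returns (next_point, done): done is True once the largest remaining
    uncertainty falls below stop_threshold, i.e. when further measurements
    would add little knowledge.
    """
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2))
    gp.fit(measured_xy, measured_values)               # surrogate model of the data
    _, std = gp.predict(candidates, return_std=True)   # uncertainty of each guess
    if std.max() < stop_threshold:
        return None, True                              # experiment can end
    return candidates[np.argmax(std)], False           # most uncertain point next

# Hypothetical usage: three points already measured on a 1x1 sample area.
measured_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
measured_values = np.array([0.2, 0.8, 0.5])
grid = np.array([[x, y] for x in np.linspace(0, 1, 11) for y in np.linspace(0, 1, 11)])
point, done = next_measurement(measured_xy, measured_values, grid)
print("measure next at:", point, "| stop experiment:", done)
```

Because the surrogate's uncertainty collapses at points that have already been measured, the same rule naturally avoids redundant measurements.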
"The basic idea is, given a bunch of experiments, how can you automatically pick the next best one?" said James Sethian, director of CAMERA and a co-author of the study. "Marcus has built a world which builds an approximate surrogate model on the basis of your previous experiments and suggests the best or most appropriate experiment to try next."
How we got here
To make autonomous experiments a reality, the team had to tackle three important pieces: automated data collection, real-time data analysis, and, of course, the decision-making algorithm.
"This is an exciting part of this collaboration," said Fukuto. "We all provided an essential piece for it: The CAMERA team worked on the decision-making algorithm, Kevin from CFN developed the real-time data analysis, and we at NSLS-II provided the automation for the measurements."
The team first implemented their decision-making algorithm at the Complex Materials Scattering (CMS) beamline at NSLS-II, which the CFN and NSLS-II operate in partnership. This instrument offers ultrabright x-rays to study the nanostructure of various materials. As the lead beamline scientist of this instrument, Fukuto had already designed the beamline with automation in mind. The beamline offers a sample-exchanging robot, automatic sample movement in various directions, and many other helpful tools to ensure fast measurements. Together with Yager's real-time data analysis, the beamline was--by design--the perfect fit for the first "smart" experiment.
The first "smart" experiment
The first fully autonomous experiment the team performed was to map the perimeter of a droplet where nanoparticles segregate, using a technique called small-angle x-ray scattering at the CMS beamline. During small-angle x-ray scattering, the scientists shine bright x-rays at the sample and, depending on the atomic- to nanoscale structure of the sample, the x-rays scatter in different directions. The scientists then use a large detector to capture the scattered x-rays and calculate the properties of the sample at the illuminated spot. In this first experiment, the scientists compared the standard approach of measuring the sample with measurements taken when the new decision-making algorithm was calling the shots. The algorithm identified the area of the droplet and focused its measurements on the droplet's edges and interior rather than on the background.
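As background, the relation between the scattering angle recorded on the detector and the structural length scale being probed is standard x-ray physics rather than anything specific to this study: x-rays of wavelength λ scattered through an angle 2θ correspond to a momentum transfer q, which probes features of size d:

```latex
q = \frac{4\pi}{\lambda}\,\sin\theta, \qquad d \approx \frac{2\pi}{q}
```

Small angles therefore mean small q and large d, which is why small-angle scattering is sensitive to nanoscale structures such as the segregated nanoparticles in the droplet.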
"After our own initial success, we wanted to apply the algorithm more, so we reached out to a few users and proposed to test our new algorithm on their scientific problems," said Yager. "They said yes, and since then we have measured various samples. One of the most interesting ones was a study on a sample that was fabricated to contain a spectrum of different material types. So instead of making and measuring an enormous number of samples and maybe missing an interesting combination, the user made one single sample that included all possible combinations. Our algorithm was then able to explore this enormous diversity of combinations efficiently," he said.
What's next?
After the first successful experiments, the scientists plan to further improve the algorithm and thereby its value to the scientific community. One of their ideas is to make the algorithm "physics-aware"--taking advantage of anything already known about the material under study--so the method can be even more effective. Another development in progress is to use the algorithm during the synthesis and processing of new materials, for example to understand and optimize processes relevant to advanced manufacturing as these materials are incorporated into real-world devices. The team is also thinking about the larger picture and wants to transfer the autonomous method to other experimental setups.
"I think users view the beamlines of NSLS-II or microscopes of CFN just as powerful characterization tools. We are trying to change these capabilities into a powerful material discovery facility," Fukuto said.