Our homes and offices are only as solid as the ground beneath them. When that solid ground turns to liquid -- as sometimes happens during earthquakes -- it can topple buildings and bridges. This phenomenon is known as liquefaction, and it was a major feature of the 2011 earthquake in Christchurch, New Zealand, a magnitude 6.3 quake that killed 185 people and destroyed thousands of homes.
An upside of the Christchurch quake was that it was one of the best-documented earthquakes in history. Because New Zealand is seismically active, the city was instrumented with numerous sensors for monitoring earthquakes. Post-event reconnaissance provided a wealth of additional data on how the soil responded across the city.
"It's an enormous amount of data for our field," said post-doctoral researcher, Maria Giovanna Durante, a Marie Sklodowska Curie Fellow previously of The University of Texas at Austin (UT Austin). "We said, 'If we have thousands of data points, maybe we can find a trend.'"
Durante works with Prof. Ellen Rathje, Janet S. Cockrell Centennial Chair in Engineering at UT Austin and the principal investigator for the National Science Foundation-funded DesignSafe cyberinfrastructure, which supports research across the natural hazards community. Rathje's personal research on liquefaction led her to study the Christchurch event. She had been thinking about ways to incorporate machine learning into her research and this case seemed like a great place to start.
"For some time, I had been impressed with how machine learning was being incorporated into other fields, but it seemed we never had enough data in geotechnical engineering to utilize these methods," Rathje said. "However, when I saw the liquefaction data coming out of New Zealand, I knew we had a unique opportunity to finally apply AI techniques to our field."
The two researchers developed a machine learning model that predicted the amount of lateral movement -- known as lateral spreading -- that occurred when the Christchurch earthquake caused soil to lose its strength and shift relative to its surroundings.
The results were published online in Earthquake Spectra in April 2021.
"It's one of the first machine learning studies in our area of geotechnical engineering," Durante said.
The researchers first used a Random Forest approach with a binary classification to forecast whether lateral spreading movements occurred at a specific location. They then applied a multiclass classification approach to predict the amount of displacement, from none to more than 1 meter.
"We needed to put physics into our model and be able to recognize, understand, and visualize what the model does," Durante said. "For that reason, it was important to select specific input features that go with the phenomenon we study. We're not using the model as a black box-- we're trying to integrate our scientific knowledge as much as possible."
Durante and Rathje trained the model using data related to the peak ground shaking experienced (a trigger for liquefaction), the depth of the water table, the topographic slope, and other factors. In total, more than 7,000 data points from a small area of the city were used for training -- a vast improvement over previous geotechnical machine learning studies, which had used only 200 data points.
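To make the two-stage workflow concrete, here is a minimal sketch using scikit-learn. It is not the authors' published code: the file name, the column names (pga, water_table_depth, slope, displacement_m), and the displacement bins are hypothetical stand-ins for the study's actual inputs.

```python
# Minimal sketch of a two-stage Random Forest workflow, assuming
# scikit-learn and hypothetical column names -- not the published code.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("christchurch_sites.csv")           # hypothetical file
X = df[["pga", "water_table_depth", "slope"]]        # physics-informed features
y_binary = (df["displacement_m"] > 0).astype(int)    # stage 1: did spreading occur?

# Stage 2 target: bin measured displacement into ordered classes
# (none, <0.3 m, 0.3-1 m, >1 m); the bin edges are illustrative.
y_multi = np.digitize(df["displacement_m"], bins=[0.001, 0.3, 1.0])

X_tr, X_te, yb_tr, yb_te, ym_tr, ym_te = train_test_split(
    X, y_binary, y_multi, test_size=0.2, random_state=0)

stage1 = RandomForestClassifier(n_estimators=200, random_state=0)
stage1.fit(X_tr, yb_tr)
print("occurrence accuracy:", accuracy_score(yb_te, stage1.predict(X_te)))

stage2 = RandomForestClassifier(n_estimators=200, random_state=0)
stage2.fit(X_tr, ym_tr)
print("displacement-class accuracy:", accuracy_score(ym_te, stage2.predict(X_te)))
```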
They then tested the model citywide, applying it to 2.5 million sites around the earthquake's epicenter to estimate displacement. The model predicted whether liquefaction occurred with 80% accuracy; it was 70% accurate at determining the amount of displacement.
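Applying a fitted model to millions of sites is largely a memory-management problem. As an illustration only (the grid file and chunk size below are invented), a chunked pass avoids loading the full citywide grid at once:

```python
# Illustrative chunked inference over a large site grid; `stage1` and
# `stage2` are the fitted classifiers from the sketch above.
import pandas as pd

for i, chunk in enumerate(pd.read_csv("citywide_grid.csv", chunksize=500_000)):
    feats = chunk[["pga", "water_table_depth", "slope"]]
    out = chunk.assign(
        liquefied=stage1.predict(feats),      # occurrence: yes/no
        disp_class=stage2.predict(feats))     # displacement bin
    out.to_csv(f"predictions_{i:03d}.csv", index=False)
```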
The researchers used the Frontera supercomputer at the Texas Advanced Computing Center (TACC), one of the world's fastest, to train and test the model. TACC is a key partner on the DesignSafe project, providing computing resources, software, and storage to the natural hazards engineering community.
Access to Frontera provided Durante and Rathje with machine learning capabilities on a scale previously unavailable to the field. Deriving the final machine learning model required testing 2,400 possible models.
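The article does not detail the search space, but a hedged sketch of a scikit-learn grid search shows how candidate-model counts in the thousands arise; the hyperparameter ranges below are invented for illustration:

```python
# Hypothetical hyperparameter search: 4 * 5 * 3 * 4 = 240 combinations,
# and with 10-fold cross-validation that is 2,400 model fits -- the kind
# of embarrassingly parallel workload a system like Frontera absorbs well.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [100, 200, 400, 800],
    "max_depth": [4, 8, 16, 32, None],
    "max_features": ["sqrt", "log2", None],
    "min_samples_leaf": [1, 2, 5, 10],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=10, n_jobs=-1)   # n_jobs=-1: all cores
search.fit(X_tr, yb_tr)
print(search.best_params_)
```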
"It would have taken years to do this research anywhere else," Durante said. "If you want to run a parametric study, or do a comprehensive analysis, you need to have computational power."
She hopes their machine learning liquefaction models will one day direct first responders to the most urgent needs in the aftermath of an earthquake. "Emergency crews need guidance on which areas, and which structures, may be most at risk of collapse so they can focus their attention there," she said.
Sharing, Reproducibility, and Access
For Rathje, Durante, and a growing number of natural hazard engineers, a journal publication is not the only result of a research project. They also publish all of their data, models, and methods to the DesignSafe portal, a hub for research related to the impact of hurricanes, earthquakes, tsunamis, and other natural hazards on the built and natural environment.
"We did everything on the project in the DesignSafe portal," Durante said. "All the maps were made using QGIS, a mapping tool available on DesignSafe, using my computer as a way to connect to the cyberinfrastructure."
For their machine learning liquefaction model, they created a Jupyter notebook -- an interactive, web-based document that includes the dataset, code, and analyses. The notebook allows other scholars to reproduce the team's findings interactively, and test the machine learning model with their own data.
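One simple ingredient of that reproducibility, sketched here with placeholder file names rather than the published notebook's contents, is persisting the fitted model so other researchers can reload it against their own data:

```python
# Hypothetical reproducibility cell: persist and reload a fitted model.
import joblib
import pandas as pd

joblib.dump(stage2, "displacement_class_rf.joblib")   # publish with the notebook

# In another researcher's session: reload and test on their own sites.
model = joblib.load("displacement_class_rf.joblib")
my_sites = pd.read_csv("my_region_sites.csv")         # reader-supplied data
my_sites["disp_class"] = model.predict(
    my_sites[["pga", "water_table_depth", "slope"]])
```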
"It was important to us to make the materials available and make it reproducible," Durante said. "We want the whole community to move forward with these methods."
This new paradigm of data-sharing and collaboration is central to DesignSafe and helps the field progress more quickly, according to Joy Pauschke, program director in NSF's Directorate for Engineering.
"Researchers are beginning to use AI methods with natural hazards research data, with exciting results," Pauschke said. "Adding machine learning tools to DesignSafe's data and other resources will lead to new insights and help speed advances that can improve disaster resilience."
Advances in machine learning require rich datasets, precisely like the data from the Christchurch earthquake. "All of the information about the Christchurch event was available on a website," Durante said. "That's not so common in our community, and without that, this study would not have been possible."
Advances also require high-performance computing systems to test out new approaches and apply them to new fields.
The researchers continue to refine the machine learning model for liquefaction. Further research, they say, is needed to develop machine learning models that are generalizable to other earthquake events and geologic settings.
Durante, who returned to her native Italy this year, says one thing she hopes to bring back from the U.S. is the way research can directly inform public policy.
She cited a recent project with Scott Brandenberg and Jonathan Stewart (University of California, Los Angeles) that developed a new methodology to determine whether a retaining wall will collapse during an earthquake. Less than three years after the research began, their methodology was incorporated into the recommended U.S. seismic provisions for new buildings and other structures.
"I want my work to have an impact on everyday life," Durante said. "In the U.S., there is more of a direct connection between research and real life, and that's something that I would like to bring back home."