Scientists at the California Institute of Technology reported last week that, after analyzing a decade of seismic data spanning 2008 to 2017, they identified 1.81 million earthquakes in California that no one had noticed.
Nearly 2 Million Earthquakes Hiding Within Background Noise
Using ten years' worth of Southern California seismic data, scientists at the California Institute of Technology (Caltech) developed an algorithm that isolated almost 2 million negative- to low-magnitude earthquakes that struck the state between 2008 and 2017, earthquakes that, until now, no one had even noticed.
These newly discovered earthquakes represent a ten-fold increase in the number of earthquakes on record in California over the years examined and reveal a much more seismically active region than previously believed, in a state already famous for its earthquakes.
"It's not that we didn't know these small earthquakes were occurring. The problem is that they can be very difficult to spot amid all of the noise," said Zachary Ross, the study's lead author and a postdoctoral scholar in geophysics who is set to join the Caltech faculty in June.
Ross was joined by Egill Hauksson, a research professor of geophysics at Caltech; Daniel Trugman, of Los Alamos National Laboratory; and Peter Shearer, of the University of California at San Diego's Scripps Institution of Oceanography.
Isolating the Signals from the Static
The noise in the seismological record that these researchers were hoping to filter out could be something as simple as nearby construction work or a passing truck on an adjacent road.
Negative- to low-magnitude earthquakes, between magnitude -2.0 and 1.7, would blend right in with these readings, so the challenge has always been to isolate a signal buried inside a lot of static.
In order to do this, the researchers began with the premise that the signal itself is the same for any earthquake; it just scales larger or smaller according to its magnitude.
If so, then we would know exactly what to look for, since we've seen it many times before in higher-magnitude earthquakes: we would only have to scale the signal down to the appropriate magnitude and check whether a given reading in the seismological record matches that scaled-down signal.
Looking at the earthquake record for a particular area, they developed a generalized shape of what an earthquake should look like in a given area using the readings from large, easily identifiable earthquakes as a guide.
Then, as they combed through the historical data for a given location, whenever they spotted something that resembled the earthquake template, they checked the records from other seismometers nearby to see whether those instruments had picked up a matching signal around the same time.
If they could correlate different earthquake readings to a single event, they could verify that it was, in fact, an earthquake.
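The matching step described above can be sketched with a normalized cross-correlation, which scores how well each sliding window of a continuous record resembles the template regardless of the event's amplitude. This is a minimal illustration of the general template-matching idea, not the study's actual pipeline; the synthetic waveform, noise level, event amplitude, and detection threshold below are all invented for the demonstration.

```python
import numpy as np

def normalized_cross_correlation(data, template):
    """Slide the template across the record and return the Pearson
    correlation coefficient (-1..1) at each offset."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    scores = np.empty(len(data) - n + 1)
    for i in range(len(scores)):
        window = data[i:i + n]
        std = window.std()
        scores[i] = np.sum(t * (window - window.mean()) / std) if std > 0 else 0.0
    return scores

def detect(data, template, threshold):
    """Offsets where the record resembles the template. Because the score
    is amplitude-normalized, a tiny event matches as well as a large one."""
    return np.flatnonzero(normalized_cross_correlation(data, template) > threshold)

# Synthetic demonstration (all values are illustrative assumptions):
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 4 * np.pi, 100)) * np.exp(-np.linspace(0, 5, 100))
record = rng.normal(0.0, 0.05, 2000)       # background "noise"
record[700:800] += 0.3 * template          # small hidden event at 30% amplitude
hits = detect(record, template, threshold=0.6)
```

In the study's workflow, candidate matches like these would then be cross-checked against nearby stations before being confirmed as real earthquakes.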
This technique, known as template matching, isn't new, but it is usually reserved for much smaller data sets because the computational cost grows steeply as the data set grows.
Ten years' worth of second-by-second seismological readings from an entire network of data-collecting instruments around one of the largest, most seismically active states in the country certainly sounds like a lot of data to process.
It's so labor-intensive, in fact, that it took 200 GPUs working for weeks at a time to go through all the data, scan, detect, rescan several more times, and catalog the findings.
Had they used the time to mine Bitcoin instead, they'd probably have tens of thousands of dollars to show for it, but thankfully, the researchers have dedicated their lives to science, and the effort turned out to be more valuable for the people of California in the end than cryptocurrencies will likely ever be.
By combing through each area datapoint by datapoint, the researchers were able to isolate almost 2 million earthquakes that the instruments had recorded but that no one had noticed.
"Seismicity along one fault affects faults and quakes around it," said Hauksson, "and this newly fleshed-out picture of seismicity in Southern California will give us new insights into how that works."
They were even able to identify tremors that preceded more major earthquakes, suggesting that it might one day be possible to detect these tremors in close to real time, giving people in the affected region advance warning of larger, deadlier earthquakes that may be about to occur.
"The advance Zach Ross and colleagues have made fundamentally changes the way we detect earthquakes within a dense seismic network like the one Caltech operates with the [US Geological Survey]," said Michael Gurnis, Director of the Seismological Laboratory and John E. and Hazel S. Smits Professor of Geophysics at Caltech.
"Zach has opened a new window allowing us to see millions of previously unseen earthquakes and this changes our ability to characterize what happens before and after large earthquakes."
The study was published last week in the journal Science.