Abstract

We compare the NOAA Significant Earthquake Historical database with typical climatic indices and the length of the day (LOD). The Pacific Decadal Oscillation (PDO) record is mainly adopted because most of the analyzed earthquakes occurred at the land boundaries of the Pacific Plate. The NOAA catalog contains information on destructive earthquakes. Using advanced spectral and magnitude squared coherence methodologies, we found that the magnitude \(M\ge 7\) earthquake annual frequency and the PDO record share common frequencies at about 9-, 20-, and 50- to 60-year periods, which are typically found in climate records and among the solar and lunar harmonics. The two records are negatively correlated at the 20- and 50- to 60-year timescales and positively correlated at the 9-year and shorter timescales. We use a simple harmonic model to forecast the \(M\ge 7\) significant earthquake annual frequency for the next decades. The next 15 years should be characterized by relatively high \(M\ge 7\) earthquake activity (on average 10–12 occurrences per year), with possible maxima in 2020 and 2030 and a minimum in the 2040s. On the 60-year scale, the LOD is found to be highly correlated with the earthquake record (\(r=0.51\) for 1900–1994, and \(r=0.95\) for 1910–1970). However, the LOD variations appear to be too small to be the primary earthquake trigger. Our results suggest that large earthquakes are triggered by crust deformations induced by, or linked to, climatic and oceanic oscillations driven by astronomical forcings, which also regulate the LOD.

Acknowledgments

The authors thank the two anonymous referees and Giuliano F. Panza for useful suggestions.

Appendix 1: The threshold of the magnitude of completeness

In the main paper, we have studied the NOAA Significant Earthquake Historical \(M\ge 7\) earthquake annual frequency record under the assumption that since 1900 this record is statistically stationary, as shown by the top panel in Fig. 1. Assessing the magnitude of completeness of earthquake catalogs is necessary for any seismic analysis and for avoiding mistakes due to pattern artifacts caused by data incompleteness. However, there is no unique methodology for handling this issue, and any statistical strategy may be unsatisfactory in some respect.

The earthquake completeness issue is usually addressed by assuming the validity of the Gutenberg–Richter relation, which predicts that earthquakes in any given region and time period are distributed as a function of their magnitude (Gutenberg and Richter 1954) according to the equation

$$\begin{aligned} \log _{10}(N)=a-bM \qquad (5) \end{aligned}$$

where on average \(b\approx 1\); however, \(b\) can vary between 0.5 and 1.5 depending on the specific tectonic environment of the region (Bhattacharya et al. 2009). Gutenberg–Richter analysis of several earthquake catalogs has shown that the relation holds for local catalogs and \(M\le 7\) (cf. Mignan and Woessner 2012), while for \(M\ge 7\) the distributions may be affected by statistical fluctuations and sample deficiency.
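As an illustration, the Gutenberg–Richter \(b\)-value can be estimated by a least-squares fit of \(\log_{10}(N)\) against magnitude for cumulative counts. The sketch below uses a synthetic catalog drawn with a true \(b=1\) (not the NOAA data), so the recovered exponent should be close to 1:

```python
import numpy as np

def fit_gutenberg_richter(magnitudes, m_min=7.0, bin_width=0.5):
    """Least-squares fit of log10(N) = a - b*M, where N is the
    cumulative count of events with magnitude >= M."""
    mags = np.asarray(magnitudes)
    bins = np.arange(m_min, mags.max() + bin_width, bin_width)
    counts = np.array([(mags >= m).sum() for m in bins])
    valid = counts > 0  # drop empty tail bins before taking the log
    slope, intercept = np.polyfit(bins[valid], np.log10(counts[valid]), 1)
    return intercept, -slope  # a, b

# Synthetic catalog via inverse-CDF sampling: P(M >= m) = 10^(7 - m),
# i.e. a true Gutenberg-Richter exponent b = 1
rng = np.random.default_rng(0)
mags = 7.0 - np.log10(rng.random(1000))
a, b = fit_gutenberg_richter(mags)
print(f"a = {a:.2f}, b = {b:.2f}")  # b should come out close to 1
```

Maximum-likelihood estimators (e.g., the Aki–Utsu formula) are usually preferred over least squares in seismological practice; the regression above is only meant to mirror the graphical fits of Fig. 10.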

Figure 10 shows the magnitude frequency curve of the NOAA Significant Earthquake Historical catalog. It is evident that for \(M<7\) the catalog is severely incomplete because the trend does not fit the Gutenberg–Richter relation. There are also about 300 earthquakes with no reported magnitude. However, a question arises whether the threshold of the magnitude of completeness should be chosen to be \(M=7\), as chosen in the paper, or \(M=7.5\), as Fig. 10 suggests given that there are more events in the \(7.5\le M<8\) bin than in the \(7\le M<7.5\) bin.

The NOAA Significant Earthquake Historical database uses the \(M\ge 7.5\) threshold as one of its sufficient although not necessary collection criteria (http://www.ngdc.noaa.gov/hazard/earthqk.shtml). Thus, the catalog is very likely complete for \(M\ge 7.5\). However, in statistical analysis, it is also important to avoid under-sampling and to reduce non-stationarities due to stochastic fluctuations by using as many data as possible. Discarding the \(7\le M<7.5\) data would mean using 665 instead of 1,087 events, which could significantly reduce the statistical reliability of the record.

In Fig. 10, the tail of the distribution is fit with two alternative Gutenberg–Richter functions, Eq. 5. The blue fit curve yields the Gutenberg–Richter coefficient \(b\approx 1\), that is, the expected value considering that the catalog collects worldwide events. The blue fit curve suggests that \(M=7\) could be the optimal value for the magnitude of completeness threshold because it is consistent with the Gutenberg–Richter trend observed for the \(M\ge 8.5\) earthquakes and with the local earthquake catalogs that are characterized by a Gutenberg–Richter average exponent \(b\approx 1\) for \(M<7\) (cf. Bhattacharya et al. 2009; Gutenberg and Richter 1954; Mignan and Woessner 2012). Moreover, as shown in Fig. 1, the \(M\ge 7\) earthquake annual frequency sequence is quite stationary since 1900.

On the contrary, the green curve depicted in Fig. 10 yields a Gutenberg–Richter exponent \(b\approx 1.67\). While the green curve fits the data points for \(7.5\le M<9\) well, \(b\approx 1.67\) does not seem compatible with the Gutenberg–Richter law, which predicts an average exponent \(b\approx 1\). Thus, the frequency peak observed in Fig. 10 in the \(7.5\le M<8\) bin appears to be either a non-stationary statistical fluctuation of the data or an end-of-distribution tail pattern where the Gutenberg–Richter law is no longer valid.

The non-stationarity of the \(M\ge 7.5\) earthquake annual frequency sequence is confirmed in Fig. 11: from 1900 to 1920, there are \(9.3\pm 0.7\) events per year, while from 1920 to 2014, there are \(5\pm 0.2\) events per year. However, although disrupted by this non-stationarity, a quasi 50- to 60-year oscillation is still visible (maxima around 1910, 1965, and 2020, and minima around 1940 and 1980–1990). Thus, the coherence between the earthquake catalog and the climatic indices at the 50- to 60-year scale appears to be confirmed also by the restricted \(M\ge 7.5\) earthquake annual frequency sequence. For the above reasons, in the paper we have assumed that the \(M\ge 7\) earthquake annual frequency sequence is the most significant one for our analysis.
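The sub-period rate comparison above reduces to counting events per year within each interval. A minimal sketch, using an illustrative synthetic event list rather than the actual catalog:

```python
import numpy as np

def annual_rate(event_years, start, end):
    """Mean number of events per year in the half-open interval [start, end)."""
    years = np.asarray(event_years)
    n = ((years >= start) & (years < end)).sum()
    return n / (end - start)

# Illustrative event list: 9 events/yr before 1920, 5 events/yr afterwards,
# mimicking the rate change described in the text
events = [y for y in range(1900, 1920) for _ in range(9)] + \
         [y for y in range(1920, 2014) for _ in range(5)]
print(annual_rate(events, 1900, 1920))  # 9.0
print(annual_rate(events, 1920, 2014))  # 5.0
```

A markedly different rate between sub-periods, relative to the Poisson counting uncertainty, is the signature of non-stationarity used in the text.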

As explained in the Introduction, the Centennial Earthquake catalog contains more events than the NOAA Significant Earthquake Historical database. The latter was preferred here because the former may also include a number of aftershocks that need to be filtered out for our analysis and, in any case, the latter consists of events that occurred in inhabited regions worldwide, which are certain, while the data from uninhabited regions may be more fragmented. Our comparison between the two records suggests that they are roughly comparable for the \(M\ge 7.5\) earthquakes: the Centennial Earthquake catalog lists 422 events against the 665 events of the NOAA Significant Earthquake Historical database. For the \(7\le M<7.5\) earthquakes, the Centennial Earthquake catalog contains almost 3 times as many events as the NOAA catalog: 1,235 versus 422. The \(M\ge 7\) Centennial Earthquake annual frequency sequence exhibits a non-stationarity similar to that seen in Fig. 11, with an average of 17 events per year for the period 1900–1950 and about 14 events per year for 1950–2007. Thus, it is possible that many of the reported \(7\le M<7.5\) earthquakes are aftershocks of the \(M\ge 7.5\) earthquakes that were not reported in the NOAA catalog because they did not cause significant damage, while the NOAA catalog may report more independent events. In any case, the \(M\ge 7\) Centennial Earthquake annual frequency sequence presents harmonic patterns that are disrupted relative to the NOAA \(M\ge 7\) earthquake annual frequency record shown in Fig. 1. Understanding this difference requires a dedicated study.

Appendix 2: Removing aftershocks at short time and area scales

Seismicity declustering is the process of separating an earthquake catalog into foreshocks, mainshocks, and aftershocks. Declustering is widely used in seismology, in particular for seismic hazard assessment and in earthquake prediction models, and several advanced algorithms have been proposed (Knopoff 1964; Stiphout et al. 2012). Here, we adopt a simpler strategy to verify that the NOAA Significant Earthquake Historical earthquake catalog herein studied is sufficiently declustered and that the spectral results are sufficiently robust under data declustering corrections.

Figure 12A shows four \(M\ge 7\) earthquake sequences since 1900 cleaned of plausible aftershocks. Of a total of 1,087 \(M\ge 7\) events from 1900 to 2013, we found: 39 events that occurred close to other events within 6 months and \(0.5^{\circ }\) in coordinates (about a 56 km radius); 47 events within 1 year and \(0.5^{\circ }\); 77 events within 6 months and \(1^{\circ }\) (about a 111 km radius); and 94 events within 1 year and \(1^{\circ }\). Their MEM spectral analysis, depicted in Fig. 12B, confirms the power spectrum depicted in Fig. 4. In particular, note the large 50- to 60-year and 20-year spectral peaks.
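The window criterion above can be sketched as follows. This is a simplified illustration, not the exact procedure used for Fig. 12: it flags any event falling within a fixed time window and a fixed coordinate offset of an earlier event, approximating spatial proximity by separate latitude and longitude thresholds rather than great-circle distance:

```python
import numpy as np

def flag_aftershocks(times, lats, lons, dt_years=0.5, ddeg=0.5):
    """Flag events occurring within dt_years and within ddeg degrees
    (in both latitude and longitude) of any earlier event."""
    n = len(times)
    flagged = np.zeros(n, dtype=bool)
    for i in range(n):
        for j in range(i):  # compare only against earlier events
            if (0 <= times[i] - times[j] <= dt_years
                    and abs(lats[i] - lats[j]) <= ddeg
                    and abs(lons[i] - lons[j]) <= ddeg):
                flagged[i] = True
                break
    return flagged

# Toy catalog: the second event is close to the first in time and space,
# so it is flagged as a plausible aftershock; the third is isolated
t = [1900.1, 1900.3, 1905.0]
lat = [35.0, 35.2, -10.0]
lon = [140.0, 140.1, 120.0]
print(flag_aftershocks(t, lat, lon))  # [False  True False]
```

Repeating the analysis with the four (time, distance) window combinations quoted in the text, and removing the flagged events each time, gives the four declustered sequences whose spectra are compared in Fig. 12B.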
