Earthquake prediction is one of the most difficult problems in physical science and, owing to its societal implications, one of the most controversial. The study of earthquake predictability has been impeded by the lack of an adequate experimental infrastructure---the capability to conduct scientific prediction experiments under rigorous, controlled conditions and evaluate them using accepted criteria specified in advance. To remedy this deficiency, the Southern California Earthquake Center (SCEC) is working with its international partners, which include the European Union (through the Swiss Seismological Service) and New Zealand (through GNS Science), to develop a virtual, distributed laboratory with a cyberinfrastructure adequate to support a global program of research on earthquake predictability. This Collaboratory for the Study of Earthquake Predictability (CSEP) will extend the testing activities of SCEC's Working Group on Regional Earthquake Likelihood Models, from which we will present first results. CSEP will support rigorous procedures for registering prediction experiments on regional and global scales, community-endorsed standards for assessing probability-based and alarm-based predictions, access to authorized data sets and monitoring products from designated natural laboratories, and software to allow researchers to participate in prediction experiments. CSEP will encourage research on earthquake predictability by supporting an environment for scientific prediction experiments that allows the predictive skill of proposed algorithms to be rigorously compared with standardized reference methods and data sets. It will thereby reduce the controversies surrounding earthquake prediction, and it will allow the results of prediction experiments to be communicated to the scientific community, governmental agencies, and the general public in an appropriate research context.

On August 3, 1973, a small earthquake (magnitude 2.5) occurred near Blue Mountain Lake in the Adirondack region of northern New York State. This seemingly unimportant event was of great significance, however, because it was predicted. Seismologists at the Lamont-Doherty Geological Observatory of Columbia University accurately foretold the time, place, and magnitude of the event. Their prediction was based on certain pre-earthquake processes that are best explained by a hypothesis known as "dilatancy," a concept that has injected new life and direction into the science of earthquake prediction. Although much more research must be accomplished before we can expect to predict potentially damaging earthquakes with any degree of consistency, results such as this indicate that we are on a promising road.

Earthquake prediction may be possible by examining active sunspots before they direct energy toward Earth. Earth is a restless planet, and that restlessness occasionally turns deadly. Of all natural hazards, earthquakes are the most feared. For centuries, scientists working in seismically active regions have noted premonitory signals. Changes in the thermosphere, ionosphere, atmosphere, and hydrosphere are noted before changes in the geosphere. Historical records tell of changes in the water level in wells, of strange weather, of ground-hugging fog, and of unusual behaviour of animals (attributed to changes in the Earth's magnetic field) that seem to sense the approach of a major earthquake. With the advent of modern science and technology, the understanding of these pre-earthquake signals has become strong enough to develop a methodology of earthquake prediction. A correlation of Earth-directed coronal mass ejections (CMEs) from active sunspots has been developed as a precursor of earthquakes. Occasional changes in the local magnetic field and in planetary indices (Kp values) occur in the lower atmosphere, accompanied by the formation of haze and a reduction of moisture in the air. Large patches, often tens to hundreds of thousands of square kilometres in size, are seen in night-time infrared satellite images where the land surface temperature seems to fluctuate rapidly. Perturbations in the ionosphere at 90-120 km altitude have been observed before the occurrence of earthquakes. These changes affect the transmission of radio waves, and radio blackouts have been observed due to CMEs. Another heliophysical parameter, electron flux (Eflux), has also been monitored before earthquakes. More than a hundred case studies show that the atmospheric temperature increases and then suddenly drops before the occurrence of an earthquake. These changes are being monitored using the Solar and Heliospheric Observatory (SOHO).

The clustering of earthquakes in time and space is widely accepted; however, the existence of correlations in earthquake magnitudes is more questionable. In standard models of seismic activity, it is usually assumed that magnitudes are independent and therefore in principle unpredictable. Our work seeks to test this assumption by analysing magnitude correlations between earthquakes and their aftershocks. To separate mainshocks from aftershocks, we perform stochastic declustering based on the widely used Epidemic Type Aftershock Sequence (ETAS) model, which allows us to compare the average magnitudes of aftershock sequences to those of their mainshocks. The results on earthquake magnitude correlations were compared with acoustic emissions (AE) from laboratory analog experiments, since fracturing generates both AE at the laboratory scale and earthquakes at the crustal scale. Constant-stress and constant-strain-rate experiments were performed on Darley Dale sandstone under confining pressure to simulate depth of burial. Microcracking activity inside the rock volume was analyzed with the AE technique as a proxy for earthquakes. Applying the ETAS model to experimental data allowed us to validate our results and provide, for the first time, a holistic view of the correlation of earthquake magnitudes. Additionally, we examine the relationship between the conditional intensity estimates of the ETAS model and the earthquake magnitudes; a positive relation would suggest the existence of magnitude correlations. The aim of this study is to detect any trends of dependency between the magnitudes of aftershocks and the earthquakes that trigger them.
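A minimal sketch of the ETAS conditional intensity that underlies the declustering step may clarify the method; the parameter values (mu, K, alpha, c, p, m0) below are illustrative defaults, not the fitted values from this study:

```python
from math import exp

def etas_intensity(t, history, mu=0.1, K=0.05, alpha=1.0, c=0.01, p=1.1, m0=2.0):
    """ETAS conditional intensity lambda(t | H_t): background rate mu plus
    Omori-Utsu aftershock contributions from every earlier event (t_i, m_i),
    each scaled by the exponential productivity term exp(alpha * (m_i - m0))."""
    rate = mu
    for t_i, m_i in history:
        if t_i < t:
            rate += K * exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate
```

In stochastic declustering, each event j is then assigned to a parent i with probability proportional to that parent's contribution to lambda(t_j), or to the background with probability mu / lambda(t_j); this probabilistic assignment is how mainshocks and their aftershock sequences are separated before magnitudes are compared.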

The earthquake prediction pendulum has swung from optimism in the 1970s to rather extreme pessimism in the 1990s. Earlier work revealed evidence of possible earthquake precursors: physical changes in the planet that signal that a large earthquake is on the way. Some respected earthquake scientists argued that earthquakes are fundamentally unpredictable. The fate of the Parkfield prediction experiment appeared to support their arguments: a moderate earthquake had been predicted along a specified segment of the central San Andreas fault within five years of 1988, but had failed to materialize on schedule. At some point, however, the pendulum began to swing back. Reputable scientists began using the "P-word" not only in polite company, but also at meetings and even in print. If the optimism regarding earthquake prediction can be attributed to any single cause, it might be scientists' burgeoning understanding of the earthquake cycle.

The purpose of this paper is to improve the level of catastrophe insurance. First, earthquake predictions were carried out using mathematical analysis methods. Second, foreign catastrophe insurance policies and models were compared. Third, suggestions on catastrophe insurance for China were discussed. Further study should pay more attention to earthquake prediction by introducing big data.

some understanding of their sources and the physical properties of the crust, which also vary from place to place and time to time. Anomalies are not necessarily due to stress or earthquake preparation, and separating the extraneous ones is a problem as daunting as understanding earthquake behavior itself. Fourth, the associations presented between anomalies and earthquakes are generally based on selected data. Validating a proposed association requires complete data on the earthquake record and the geophysical measurements over a large area and time, followed by prospective testing which allows no adjustment of parameters, criteria, etc. The Collaboratory for the Study of Earthquake Predictability (CSEP) is dedicated to providing such prospective testing. Any serious proposal for prediction research should deal with the problems above, and anticipate the huge investment in time required to test hypotheses.

We showed that positive and negative electron density anomalies emerge above faults immediately before they rupture, 40/20/10 minutes before Mw 9/8/7 earthquakes (Heki, 2011 GRL; Heki and Enomoto, 2013 JGR; He and Heki, 2017 JGR). These signals are stronger for earthquakes with larger Mw and under higher background vertical TEC (total electron content) (Heki and Enomoto, 2015 JGR). The epicenter and the positive and negative anomalies align along the local geomagnetic field (He and Heki, 2016 GRL), suggesting that electric fields within the ionosphere are responsible for producing the anomalies (Kuo et al., 2014 JGR; Kelley et al., 2017 JGR). Here we consider the next Nankai Trough earthquake, which may occur within a few tens of years in Southwest Japan, and discuss whether we could recognize its preseismic signatures in TEC by real-time observations with GNSS. During high geomagnetic activity, large-scale traveling ionospheric disturbances (LSTID) often propagate from the auroral ovals toward mid-latitude regions and leave signatures similar to preseismic anomalies. This is a main obstacle to using preseismic TEC changes for practical short-term earthquake prediction. In this presentation, we show that the same anomalies appeared 40 minutes before the mainshock above northern Australia, the geomagnetically conjugate point of the 2011 Tohoku-oki earthquake epicenter. This not only demonstrates that electric fields play a role in producing the preseismic TEC anomalies, but also offers a possibility to discriminate preseismic anomalies from those caused by LSTID. By monitoring TEC in the conjugate areas of the two hemispheres, we can recognize anomalies with simultaneous onset as those caused by within-ionosphere electric fields (e.g., preseismic anomalies, night-time MSTID) and anomalies without simultaneous onset as gravity-wave-origin disturbances (e.g., LSTID, daytime MSTID).

An objective of the U.S. Earthquake Hazards Reduction Act of 1977 is to introduce, into all regions of the country that are subject to large and moderate earthquakes, systems for predicting earthquakes and assessing earthquake risk. In 1985, the USGS developed for the Secretary of the Interior a program for implementation of a prototype operational earthquake prediction system in southern California.

Henry Spall talked recently with Denis Mileti of the Department of Sociology, Colorado State University, Fort Collins, Colo. Dr. Mileti is a sociologist involved with research programs that study the socioeconomic impact of earthquake prediction.

One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment in forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan and to conduct verifiable prospective tests of their model performance, and began the first earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories: 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted and are currently under the CSEP official suite of tests for evaluating forecast performance. The experiments have been completed for 92 rounds of the 1-day class, 6 rounds of the 3-month class, and 3 rounds of the 1-year class. For the 1-day testing class, all models passed all of CSEP's evaluation tests in more than 90% of rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models showed good performance in magnitude forecasting. On the other hand, the observed spatial distribution is rarely consistent with most models when many earthquakes occur at a single spot. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region, and the testing center is improving the evaluation system for the 1-day class so that forecasting and testing results are finished within one day. The special issue of 1st part titled Earthquake Forecast

The current status of geochemical and groundwater observations for earthquake prediction in Japan is described. The development of the observations is discussed in relation to the progress of the earthquake prediction program in Japan. Three major findings obtained from our recent studies are outlined. (i) Long-term radon observation data over 18 years at the SKE (Suikoen) well indicate that the anomalous radon change before the 1978 Izu-Oshima-kinkai earthquake can with high probability be attributed to precursory changes. (ii) It is proposed that certain sensitive wells exist which have the potential to detect precursory changes. (iii) The appearance and nonappearance of coseismic radon drops at the KSM (Kashima) well reflect changes in the regional stress state of an observation area. In addition, some preliminary results of chemical changes of groundwater prior to the 1995 Kobe (Hyogo-ken nanbu) earthquake are presented. PMID: 11607665

The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe, with 442 models under evaluation. The California testing center, started by SCEC on Sept 1, 2007, currently hosts 30-minute, 1-day, 3-month, 1-year, and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. Our tests are now based on the hypocentral locations and magnitudes of cataloged earthquakes, but we plan to test focal mechanisms, seismic hazard models, ground motion forecasts, and finite rupture forecasts as well. We have increased computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model, introduced Bayesian ensemble models, and implemented support for non-Poissonian simulation-based forecast models. We are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. CSEP supports the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. We found that earthquakes as small as magnitude 2.5 provide important information on subsequent earthquakes larger than magnitude 5. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence showed that some physics-based and hybrid models outperform catalog-based (e.g., ETAS) models. This experiment also demonstrates the ability of the CSEP infrastructure to support retrospective forecast testing. Current CSEP development activities include adoption of the Comprehensive Earthquake Catalog (ComCat) as an authorized data source, retrospective testing of simulation-based forecasts, and support for additive ensemble methods. We describe the open-source CSEP software that is available to researchers as

According to the status of China's need to improve its earthquake disaster prevention capability, this paper puts forward an implementation plan for a GIS-based earthquake disaster prediction system for Langfang city. Building on a GIS spatial database, coordinate transformation technology, GIS spatial analysis, and PHP development, a seismic damage factor algorithm is used to predict damage to the city under earthquake disasters of different intensities. The system uses a B/S (browser/server) architecture and provides damage-degree and spatial-distribution mapping, two-dimensional visualization, comprehensive query and analysis, and efficient decision-support functions to identify seismically weak areas of the city and enable rapid warning. The system has transformed the city's earthquake disaster reduction work from static planning to dynamic management and improved the city's earthquake and disaster prevention capability.

A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
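Significance levels of this kind can be sketched with a simple binomial tail computation. The per-event hit probability p below is an assumed back-of-envelope value; the paper's actual null model depends on the space-time coverage of the M8 alarms:

```python
from math import comb

def random_hit_tail(n_events, n_hits, p):
    """P(X >= n_hits) for X ~ Binomial(n_events, p): the probability that
    randomly assigned predictions, each hitting a target earthquake with
    probability p, score at least n_hits successes out of n_events."""
    return sum(comb(n_events, k) * p ** k * (1 - p) ** (n_events - k)
               for k in range(n_hits, n_events + 1))
```

With an assumed p of about 0.45, for example, random_hit_tail(10, 8, 0.45) is roughly 0.027, in the neighborhood of the quoted 2.87%; the same function applies to the forward-test count of five of nine under whatever alarm coverage defines that null.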

Maria Liukis, SCEC, USC; Maximilian Werner, University of Bristol; Danijel Schorlemmer, GFZ Potsdam; John Yu, SCEC, USC; Philip Maechling, SCEC, USC; Jeremy Zechar, Swiss Seismological Service, ETH; Thomas H. Jordan, SCEC, USC, and the CSEP Working Group. The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe, with 435 models under evaluation. The California testing center, operated by SCEC, has been operational since Sept 1, 2007, and currently hosts 30-minute, 1-day, 3-month, 1-year, and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. We have reduced testing latency, implemented prototype evaluation of M8 forecasts, and are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence has been completed, and the results indicate that some physics-based and hybrid models outperform purely statistical (e.g., ETAS) models. The experiment also demonstrates the power of the CSEP cyberinfrastructure for retrospective testing. Our current development includes evaluation strategies that increase computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss applications of CSEP infrastructure to geodetic transient detection and how CSEP procedures are being

The only thing we know for sure about earthquakes is that one will happen again very soon. Earthquakes pose a vital yet puzzling set of research questions that have confounded scientists for decades, but new ways of looking at seismic information and innovative laboratory experiments are offering tantalizing clues to what triggers earthquakes, and when.

The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, called CSEP-Japan. This testing center provides open access to researchers contributing earthquake forecast models for Japan. A total of 91 earthquake forecast models were submitted to the prospective experiment starting from 1 November 2009. The models are separated into 4 testing classes (1 day, 3 months, 1 year, and 3 years) and 3 testing regions: an area covering Japan including offshore regions, the Japanese mainland, and the Kanto district. We evaluate the performance of the models with the official suite of tests defined by CSEP. The experiments for the 1-day, 3-month, 1-year, and 3-year forecasting classes have been implemented for 92 rounds, 4 rounds, 1 round, and 0 rounds (now in progress), respectively. The results of the 3-month class gave us new knowledge concerning statistical forecasting models. All models showed good performance in magnitude forecasting. On the other hand, the observed spatial distribution is rarely consistent with most models in cases where many earthquakes occur at the same spot. Throughout the experiment, it has become clear that some of CSEP's evaluation tests, such as the L-test, show strong correlation with the N-test. We are now extending our (cyber)infrastructure to support the forecast experiment as follows. (1) Japanese seismicity has changed since the 2011 Tohoku earthquake. The 3rd call for forecasting models was announced in order to promote model improvement for forecasting earthquakes after this event, and we provide the Japanese seismicity catalog maintained by JMA for modelers to study how seismicity

The Annual Consultation Meeting on Earthquake Tendency in China is held by the China Earthquake Administration (CEA) in order to provide one-year earthquake predictions over most of China. In these predictions, regions of concern are denoted together with the corresponding magnitude range of the largest earthquake expected during the next year. Evaluating the performance of these earthquake predictions is rather difficult, especially for regions of no concern, because the predictions are made on arbitrary regions with flexible magnitude ranges. In the present study, the gambling score is used to evaluate the performance of these earthquake predictions. Based on a reference model, this scoring method rewards successful predictions and penalizes failures according to the risk (probability of failure) that the predictors have taken. Using the Poisson model, which is spatially inhomogeneous and temporally stationary, with the Gutenberg-Richter law for earthquake magnitudes as the reference model, we evaluate the CEA predictions based on 1) a partial score that evaluates whether the alarmed regions are issued based on information that differs from the reference model (knowledge of the average seismicity level), and 2) a complete score that evaluates whether the overall performance of the prediction is better than the reference model. The predictions made by the Annual Consultation Meetings on Earthquake Tendency from 1990 to 2003 are found to include significant precursory information, but the overall performance is close to that of the reference model.
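A minimal sketch of the gambling-score bookkeeping described above, assuming each prediction is scored against a reference-model success probability p (the study itself derives p from the inhomogeneous Poisson/Gutenberg-Richter reference model):

```python
def gambling_score(outcomes, ref_probs):
    """Gambling score: a successful prediction of an event whose
    reference-model probability is p earns (1 - p) / p reputation points
    (a big reward for a risky bet), while a failed prediction loses 1 point."""
    return sum((1.0 - p) / p if hit else -1.0
               for hit, p in zip(outcomes, ref_probs))
```

A total score significantly above zero indicates that the predictor holds information beyond the reference model; a score near zero, as reported here for the overall CEA performance, means the predictions do little better than average seismicity.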

Earthquake prediction is inherently statistical. Although some people continue to think of earthquake prediction as the specification of the time, place, and magnitude of a future earthquake, it has been clear for at least a decade that this is an unrealistic and unreasonable definition. The reality is that earthquake prediction starts from long-term forecasts of place and magnitude, with very approximate time constraints, and progresses, at least in principle, to a gradual narrowing of the time window as data and understanding permit. Primitive long-term forecasts are clearly possible at this time on a few well-characterized fault systems. Tightly focused monitoring experiments aimed at short-term prediction are already underway in Parkfield, California, and in the Tokai region in Japan; only time will tell how much progress will be possible.

The problems in predicting earthquakes have been attacked by phenomenological methods from prehistoric times to the present. The associations of presumed precursors with large earthquakes often have been remarked upon. The difficulty in identifying whether such correlations are due to chance coincidence or are real precursors is that one usually notes the associations only in the relatively short time intervals before the large events. Only rarely, if ever, is notice taken of whether the presumed precursor is also to be found in the rather long intervals that follow large earthquakes, or in fact is absent in these post-earthquake intervals. If there are enough examples, the presumed correlation fails as a precursor in the former case, while in the latter case the precursor would be verified. Unfortunately, the observer is usually not concerned with the 'uninteresting' intervals that have no large earthquakes.

Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
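Test (i), comparing the observed number of earthquakes with the number predicted, is commonly implemented as a Poisson number test; a minimal sketch (the Poisson assumption is added here, following standard practice for rate forecasts):

```python
from math import exp, factorial

def poisson_n_test(n_obs, predicted_rate):
    """Number test for a rate forecast: returns the two Poisson tail
    probabilities delta1 = P(N >= n_obs) and delta2 = P(N <= n_obs)
    given the forecast's expected count. The forecast is inconsistent
    with the observation when either tail probability is very small."""
    pmf = lambda k: predicted_rate ** k * exp(-predicted_rate) / factorial(k)
    delta1 = 1.0 - sum(pmf(k) for k in range(n_obs))   # P(N >= n_obs)
    delta2 = sum(pmf(k) for k in range(n_obs + 1))     # P(N <= n_obs)
    return delta1, delta2
```

Tests (ii) and (iii) extend the same idea from event counts to the full likelihood of the observed catalog, scored under the forecast alone and against a null hypothesis, respectively.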

Japan’s National Project for Earthquake Prediction has been conducted since 1965 without success. An earthquake prediction should be a short-term prediction based on observable physical phenomena or precursors. The main reason for the lack of success is the failure to capture precursors. Most of the financial resources and manpower of the National Project have been devoted to strengthening seismograph networks, which are generally not effective for detecting precursors, since many precursors are non-seismic. Precursor research has never been supported appropriately, because the project has always been run by a group of seismologists who, in the present author’s view, are mainly interested in securing funds for seismology — on the pretense of prediction. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this decision was further fortified by the 2011 M9 Tohoku mega-quake. On top of the National Project, there are other government projects, not formally but vaguely related to earthquake prediction, that consume many orders of magnitude more funds. They, too, are uninterested in short-term prediction. Financially, they are giants and the National Project is a dwarf. Thus, in Japan now, there is practically no support for short-term prediction research. Recently, however, substantial progress has been made in real short-term prediction by scientists of diverse disciplines. Some promising signs are also arising from cooperation with the private sector. PMID: 24213204

Since the beginning of modern seismology, seismologists have contemplated predicting earthquakes. The usefulness of earthquake predictions in reducing human and economic losses, and the value of long-range earthquake prediction for planning, are obvious. Less clear are the long-range economic and social impacts of earthquake prediction on a specific area. The general consensus among scientists and government officials, however, is that the quest for earthquake prediction is a worthwhile goal and should be pursued with a sense of urgency.

I was misquoted by C. Lomnitz's [1998] Forum letter (Eos, August 4, 1998, p. 373), which said: “I wonder whether Sasha Gusev [1998] actually believes that branding earthquake prediction a ‘proven nonscience’ [Geller, 1997a] is a paradigm for others to copy.” Readers are invited to verify for themselves that neither “proven nonscience” nor any similar phrase was used by Geller [1997a].

... updates on past topics of discussion, including work with social and behavioral scientists on improving... probabilities; USGS collaborative work with the Collaboratory for Study of Earthquake Predictability (CSEP...

Since 1985, a focused earthquake prediction experiment has been in progress along the San Andreas fault near the town of Parkfield in central California. Parkfield has experienced six moderate earthquakes since 1857 at average intervals of 22 years, the most recent a magnitude 6 event in 1966. The probability of another moderate earthquake soon appears high, but studies assigning it a 95% chance of occurring before 1993 now appear to have been oversimplified. The identification of a Parkfield fault "segment" was initially based on geometric features in the surface trace of the San Andreas fault, but more recent microearthquake studies have demonstrated that those features do not extend to seismogenic depths. On the other hand, geodetic measurements are consistent with the existence of a "locked" patch on the fault beneath Parkfield that has presently accumulated a slip deficit equal to the slip in the 1966 earthquake. A magnitude 4.7 earthquake in October 1992 brought the Parkfield experiment to its highest level of alert, with a 72-hour public warning that there was a 37% chance of a magnitude 6 event. However, this warning proved to be a false alarm. Most data collected at Parkfield indicate that strain is accumulating at a constant rate on this part of the San Andreas fault, but some interesting departures from this behavior have been recorded. Here we outline the scientific arguments bearing on when the next Parkfield earthquake is likely to occur and summarize geophysical observations to date.

The evaluation of the success level of an earthquake prediction method should not be based on approaches that apply generalized, strict statistical laws while ignoring the specific nature of the earthquake phenomenon. Fault rupture processes cannot be compared to gambling processes. The outcome of the present note is that even an ideal earthquake prediction method appears to be a matter of "chancy" association between precursors and earthquakes if we apply the same procedure proposed by Mulargia and Gasperini [1992] for evaluating VAN earthquake predictions. Each individual VAN prediction has to be evaluated separately, always taking into account the specific circumstances and information available. The success level of epicenter prediction should depend on the earthquake magnitude, and magnitude and time predictions may depend on earthquake clustering and the tectonic regime, respectively.

Earthquakes often occur on faults that juxtapose different rocks. The result is rupture behavior that differs from that of an earthquake occurring on a fault in a homogeneous material. Previous 2D numerical simulations have studied simple cases of earthquake rupture propagation where there is a material contrast across a fault and have come to two different conclusions: 1) earthquake rupture propagation direction can be predicted from the material contrast, and 2) earthquake rupture propagation direction cannot be predicted from the material contrast. In this paper we provide observational evidence from 70 years of earthquakes at Parkfield, CA, and new 3D numerical simulations. Both the observations and the numerical simulations demonstrate that earthquake rupture propagation direction is unlikely to be predictable on the basis of a material contrast. Copyright 2005 by the American Geophysical Union.

This study examined the prevalence and the psychosocial predictors of probable PTSD among Chinese adolescents in Kunming (approximately 444 miles from the epicenter), China, who were indirectly exposed to the Sichuan Earthquake in 2008. Using a longitudinal study design, primary and secondary school students (N = 3577) in Kunming completed questionnaires at baseline (June 2008) and 6 months afterward (December 2008) in classroom settings. Participants' exposure to earthquake-related imagery and content, perceptions and emotional reactions related to the earthquake, and posttraumatic stress symptoms were measured. Univariate and forward stepwise multivariable logistic regression models were fit to identify significant predictors of probable PTSD at the 6-month follow-up. The prevalence of probable PTSD (Children's Revised Impact of Event Scale score ≥30) among the participants was 16.9% at baseline and 11.1% at the 6-month follow-up. In the multivariable analysis, those who were frequently exposed to distressful imagery, had experienced at least two types of negative life events, perceived that teachers were distressed due to the earthquake, believed that the earthquake resulted from damage to the ecosystem, or felt apprehensive and emotionally disturbed due to the earthquake reported a higher risk of probable PTSD at the 6-month follow-up (all ps < .05). Exposure to distressful media images, emotional responses, and disaster-related perceptions at baseline were found to be predictive of probable PTSD several months after indirect exposure to the event. Parents, teachers, and the mass media should be aware of the negative impacts of disaster-related media exposure on adolescents' psychological health.

Short-term earthquake (EQ) prediction is defined as prospective prediction on a time scale of about one week, and it is considered one of the most important and urgent topics for human beings. If short-term prediction were realized, casualties would be drastically reduced. Unlike conventional seismic measurement, we have proposed the use of electromagnetic phenomena as precursors to EQs, and an extensive amount of progress has been achieved in the field of seismo-electromagnetics during the last two decades. This paper reviews short-term EQ prediction, including the myth that EQ prediction by seismometers is impossible, the reason why we are interested in electromagnetics, the history of seismo-electromagnetics, ionospheric perturbation as the most promising candidate for EQ prediction, the future of EQ predictology from the two standpoints of a practical science and a pure science, and finally a brief summary.

A microearthquake network is a group of highly sensitive seismographic stations designed primarily to record local earthquakes of magnitude less than 3. Depending on the application, a microearthquake network may consist of several stations or as many as a few hundred. They are usually classified as either permanent or temporary. In a permanent network, the seismic signal from each station is telemetered to a central recording site to cut down on operating costs and to allow more efficient and up-to-date processing of the data. However, telemetering can restrict the choice of station sites because of the line-of-sight requirement for radio transmission or the need for telephone lines. Temporary networks are designed to be extremely portable and completely self-contained so that they can be deployed very quickly. They are most valuable for recording aftershocks of a major earthquake or for studies in remote areas.

A test of the algorithm M8 is described. The test is constructed to meet four rules, which we propose to be applicable to the test of any method for earthquake prediction: 1. An earthquake prediction technique should be presented as a well-documented, logical algorithm that can be used by investigators without restrictions. 2. The algorithm should be coded in a common programming language and implementable on widely available computer systems. 3. A test of the earthquake prediction technique should involve future predictions with a black-box version of the algorithm in which potentially adjustable parameters are fixed in advance. The source of the input data must be defined and ambiguities in these data must be resolved automatically by the algorithm. 4. At least one reasonable null hypothesis should be stated in advance of testing the earthquake prediction method, and it should be stated how this null hypothesis will be used to estimate the statistical significance of the earthquake predictions. The M8 algorithm has successfully predicted several destructive earthquakes, in the sense that the earthquakes occurred inside regions with linear dimensions from 384 to 854 km that the algorithm had identified as being in times of increased probability for strong earthquakes. In addition, M8 has successfully "post predicted" high percentages of strong earthquakes in regions to which it has been applied in retroactive studies. The statistical significance of previous predictions has not been established, however, and post-prediction studies in general are notoriously subject to success-enhancement through hindsight. Nor has it been determined how much more precise an M8 prediction might be than forecasts and probability-of-occurrence estimates made by other techniques. We view our test of M8 both as a means to better determine the effectiveness of M8 and as an experimental structure within which to make observations that might lead to improvements in the algorithm.

Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
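The kind of test described above can be sketched in a few lines of Python. This is a toy harness with invented function names, and the "clustered" model is a deliberately crude stand-in for a realistic clustered-seismicity simulator (such as ETAS), not the models used in the study:

```python
import random

def count_hits(quake_times, alarms):
    """Number of earthquakes falling inside any (start, end) alarm window."""
    return sum(any(s <= t < e for s, e in alarms) for t in quake_times)

def poisson_catalog(rng, rate, t_max):
    """Homogeneous Poisson process: exponential inter-event times."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t >= t_max:
            return times
        times.append(t)

def clustered_catalog(rng, rate, t_max, cluster_size=3, spread=1.0):
    """Toy clustered model: each Poisson 'mainshock' spawns nearby events,
    keeping roughly the same mean rate as the Poisson model."""
    parents = poisson_catalog(rng, rate / cluster_size, t_max)
    times = []
    for p in parents:
        times.append(p)
        for _ in range(cluster_size - 1):
            c = p + rng.expovariate(1.0 / spread)
            if c < t_max:
                times.append(c)
    return sorted(times)

def significance(rng, observed_hits, alarms, simulator, n_sim=2000):
    """Fraction of simulated catalogs that are 'predicted' at least as well
    as the real one; a small value means the observed success is unlikely
    to arise by chance under the chosen seismicity model."""
    at_least = sum(
        count_hits(simulator(rng), alarms) >= observed_hits for _ in range(n_sim)
    )
    return at_least / n_sim
```

Because clustered catalogs produce more variable event counts inside a fixed alarm window, the same observed success tends to look less surprising under the clustered null model, which is the abstract's point about insufficient clustering yielding overly optimistic significance levels.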

The occurrence of the 2004 Parkfield earthquake provided a unique "teachable moment" for students in our science course for teacher education majors. The course uses seismology as a medium for teaching a wide variety of science topics appropriate for future teachers. The 2004 Parkfield earthquake occurred just 15 minutes after our students completed a lab on earthquake processes and earthquake prediction. That lab included a discussion of the Parkfield Earthquake Prediction Experiment as a motivation for the exercises they were working on that day. Furthermore, this earthquake was recorded on an AS1 seismograph right in their lab, just minutes after the students left. About an hour after we recorded the earthquake, the students were able to see their own seismogram of the event in the lecture part of the course, which provided an excellent teachable moment for a lecture/discussion on how the occurrence of the 2004 Parkfield earthquake might affect seismologists' ideas about earthquake prediction. The specific lab exercise that the students were working on just before we recorded this earthquake was a "sliding block" experiment that simulates earthquakes in the classroom. The experimental apparatus includes a flat board on top of which are blocks of wood attached to a bungee cord and a string wrapped around a hand crank. Plate motion is modeled by slowly turning the crank, and earthquakes are modeled as events in which the block slips ("blockquakes"). We scaled the earthquake data and the blockquake data (using how much the string moved as a proxy for time) so that we could compare blockquakes and earthquakes. This provided an opportunity to use interevent-time histograms to teach about earthquake processes, probability, and earthquake prediction, and to compare earthquake sequences with blockquake sequences. We were able to show the students, using data obtained directly from their own lab, how global earthquake data fit a Poisson exponential distribution better
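The interevent-time comparison described above can be sketched as follows. This is a minimal illustration, not the course's actual lab code; the quasi-periodic sequence plays the role of the blockquake data, and the goodness-of-fit statistic is a crude Kolmogorov–Smirnov-style distance:

```python
import math
import random

def interevent_times(event_times):
    """Waiting times between consecutive events."""
    ts = sorted(event_times)
    return [b - a for a, b in zip(ts, ts[1:])]

def ks_vs_exponential(intervals):
    """Maximum distance between the empirical CDF of the intervals and the
    exponential CDF with the same mean. Small values suggest Poisson-like
    (memoryless) occurrence; large values indicate a departure from it."""
    mean = sum(intervals) / len(intervals)
    xs = sorted(intervals)
    n = len(xs)
    return max(abs((i + 1) / n - (1.0 - math.exp(-x / mean)))
               for i, x in enumerate(xs))

# A Poisson-like catalog (exponential waiting times) versus a periodic
# "blockquake" sequence with the same mean interval:
rng = random.Random(42)
poisson_like, t = [], 0.0
for _ in range(200):
    t += rng.expovariate(1.0)
    poisson_like.append(t)
periodic = [float(i) for i in range(200)]
```

Plotted as histograms, the exponential intervals pile up near zero while the periodic intervals cluster around the mean; the statistic above quantifies that contrast in one number.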

The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for the sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, combining 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models, as of August 2013, had been submitted and are currently under the official CSEP suite of tests for evaluating the performance of forecasts. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity changed dramatically after the 2011 event, the performance of the models was strongly affected. In addition, because of problems with the completeness magnitude of the authorized catalogue, most models did not pass the CSEP consistency tests. We will also discuss retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.

A magnitude 4.7 earthquake occurred near Parkfield, California, on October 20, 1992, at 05:28 UTC (October 19 at 10:28 p.m. local or Pacific Daylight Time). This moderate shock, interpreted as the potential foreshock of a damaging earthquake on the San Andreas fault, triggered long-standing federal, state and local government plans to issue a public warning of an imminent magnitude 6 earthquake near Parkfield. Although the predicted earthquake did not take place, sophisticated suites of instruments deployed as part of the Parkfield Earthquake Prediction Experiment recorded valuable data associated with an unusual series of events. This article describes the geological aspects of these events, which occurred near Parkfield in October 1992. The accompanying article, an edited version of a press conference by Richard Andrews, the Director of the California Office of Emergency Services (OES), describes the governmental response to the prediction.

There are two distinct motivations for earthquake prediction. The mechanistic approach aims to understand the processes leading to a large earthquake. The empirical approach is governed by the immediate need to protect lives and property. With our current lack of knowledge about the earthquake process, future progress cannot be made without gathering a large body of measurements. These are required not only for the empirical prediction of earthquakes, but also for the testing and development of hypotheses that further our understanding of the processes at work. The earthquake prediction program is basically a program of scientific inquiry, but one which is motivated by social, political, economic, and scientific reasons. It is a pursuit that cannot rely on empirical observations alone, nor can it be carried out solely on a blackboard or in a laboratory. Experiments must be carried out in the real Earth.

Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach to ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach thus provides a new means of predicting long-period strong ground motion.

The digital revolution, which started only about 15 years ago, has already surpassed a global information storage capacity of more than 5000 exabytes (in optimally compressed bytes) per year. Open data in a Big Data world provide unprecedented opportunities for enhancing studies of the Earth system. However, they also open wide avenues for deceptive associations in inter- and transdisciplinary data and for misleading predictions based on so-called "precursors". Earthquake prediction is not an easy task, and it implies a delicate application of statistics. So far, none of the proposed short-term precursory signals has shown sufficient evidence to be used as a reliable precursor of catastrophic earthquakes. Regretfully, in many cases of seismic hazard assessment (SHA), from term-less to time-dependent (probabilistic PSHA or deterministic DSHA), and of short-term earthquake forecasting (StEF), claims of a high potential of the method are based on a flawed application of statistics and are therefore hardly suitable for communication to decision makers. Self-testing must be done before claiming prediction of hazardous areas and/or times. The necessity and possibility of applying simple tools of earthquake prediction strategies are evident; in particular, the error diagram, introduced by G. M. Molchan in the early 1990s, and the Seismic Roulette null hypothesis as a metric of the alerted space. The set of errors, i.e., the rates of failure and of the alerted space-time volume, can easily be compared to random guessing, and this comparison permits evaluating the effectiveness of the SHA method and determining the optimal choice of parameters with regard to a given cost-benefit function. This and other information obtained in such simple testing may supply us with realistic estimates of the confidence and accuracy of SHA predictions and, if these are reliable but not necessarily perfect, with related recommendations on the level of risk for decision making in regard to engineering design, insurance
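The error diagram mentioned above reduces to simple bookkeeping: each prediction strategy maps to a point (alerted fraction, miss rate), and random guessing lies on the diagonal connecting (0, 1) and (1, 0). A minimal sketch (function names are ours, not Molchan's notation):

```python
def molchan_point(total_time, alarm_time, n_quakes, n_missed):
    """One point on the error diagram: (alerted fraction tau, miss rate nu).
    Random guessing lies on the diagonal nu = 1 - tau."""
    tau = alarm_time / total_time
    nu = n_missed / n_quakes
    return tau, nu

def skill_above_chance(tau, nu):
    """How far the point sits below the random-guessing diagonal;
    positive values indicate performance better than chance."""
    return (1.0 - tau) - nu
```

The two trivial strategies anchor the diagonal: never alarming gives (0, 1), always alarming gives (1, 0), and both score zero skill; a genuine method must land measurably below the line, which is exactly the comparison to random guessing the abstract calls for.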

Although precursory signs of an earthquake can occur before the event, it is difficult to observe such signs with precision, especially at the Earth's surface, where artificial noise and other factors complicate signal detection. One possible solution to this problem is to install monitoring instruments in the deep bedrock where earthquakes are likely to begin. When evaluating earthquake occurrence, it is necessary to elucidate the processes of stress accumulation in a medium and its release as a fault (crack) is generated, and to do so, the stress must be observed continuously. However, continuous observations of stress have not yet been implemented in earthquake monitoring programs. Strain is a secondary physical quantity whose variation depends on the elastic coefficient of the medium, and it can yield potentially valuable information as well. This article describes the development of a borehole stress meter that is capable of recording both stress and strain continuously at a depth of about 1 km. Specifically, this paper introduces the design principles of the stress meter as well as its actual structure. It also describes a newly developed calibration procedure and the results obtained to date for stress and strain studies of deep boreholes at three locations in Japan. As examples of the observations, records of the stress seismic waveforms generated by the 2011 Tohoku earthquake (M 9.0) are presented. The results demonstrate that the stress meter data have sufficient precision and reliability.

The Collaboratory for the Study of Earthquake Predictability (CSEP) was developed to rigorously test earthquake forecasts retrospectively and prospectively through reproducible, completely transparent experiments within a controlled environment (Zechar et al., 2010). During 2006-2011, thirteen five-year time-invariant prospective earthquake mainshock forecasts developed by the Regional Earthquake Likelihood Models (RELM) working group were evaluated through the CSEP testing center (Schorlemmer and Gerstenberger, 2007). The number, spatial, and magnitude components of the forecasts were compared to the respective observed seismicity components using a set of consistency tests (Schorlemmer et al., 2007; Zechar et al., 2010). In the initial experiment, all but three forecast models passed every test at the 95% significance level, with all forecasts displaying log-likelihoods (L-test) and magnitude distributions (M-test) consistent with the observed seismicity. In the ten-year RELM experiment update, we reevaluate these earthquake forecasts over an eight-year period from 2008-2016, to determine the consistency of previous likelihood testing results over longer time intervals. Additionally, we test the Uniform California Earthquake Rupture Forecast (UCERF2), developed by the U.S. Geological Survey (USGS), and the earthquake rate model developed by the California Geological Survey (CGS) and the USGS for the National Seismic Hazard Mapping Program (NSHMP) against the RELM forecasts. Both the UCERF2 and NSHMP forecasts pass all consistency tests, though the Helmstetter et al. (2007) and Shen et al. (2007) models exhibit greater information gain per earthquake according to the T- and W-tests (Rhoades et al., 2011). Though all but three RELM forecasts pass the spatial likelihood test (S-test), multiple forecasts fail the M-test due to overprediction of the number of earthquakes during the target period. Though there is no significant difference between the UCERF2 and NSHMP
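The number test (N-test) behind these consistency checks can be sketched under the standard Poisson assumption. This is a simplified rendering of the two-quantile formulation in Zechar et al. (2010); the function names are ours:

```python
import math

def poisson_cdf(n, lam):
    """P(X <= n) for a Poisson(lam) variable, summed directly."""
    return sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(n + 1))

def n_test(n_observed, lam_forecast):
    """Two quantile scores from the CSEP number test:
    delta1 = P(X >= n_obs) -- small when the forecast underpredicts;
    delta2 = P(X <= n_obs) -- small when the forecast overpredicts.
    A forecast is typically rejected if either falls below ~0.025."""
    delta1 = 1.0 - (poisson_cdf(n_observed - 1, lam_forecast)
                    if n_observed > 0 else 0.0)
    delta2 = poisson_cdf(n_observed, lam_forecast)
    return delta1, delta2
```

For example, a forecast expecting 5 events while 0 occur yields delta2 = e^-5 ≈ 0.007, failing the overprediction side of the test, which is the failure mode the RELM update attributes to several M-test results.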

We treat the Earth's crust and mantle as large-scale discrete matter, based on the principles of granular physics and existing experimental observations. The main outcomes are as follows: a granular model of the structure and movement of the Earth's crust and mantle is established; the formation mechanism of the tectonic forces that cause earthquakes, and a model of propagation for precursory information, are proposed; the properties of seismic precursory information and its relevance to earthquake occurrence are illustrated, and the principle of ways to detect effective seismic precursors is elaborated. The mechanism of deep-focus earthquakes is also explained by the jamming-unjamming transition of granular flow. Some earthquake phenomena that were previously difficult to understand are explained, and the predictability of earthquakes is discussed. Owing to the discrete nature of the crust and mantle, continuum theory no longer applies during the quasi-static seismological process. In this paper, based on the principles of granular physics, we study the causes of earthquakes, earthquake precursors and predictions, and a new understanding, different from the traditional seismological viewpoint, is obtained.

After 30 years' research, we have found that great earthquakes are triggered by the tide-generation force of the moon. It is not the tide-generation force in the classical viewpoint, but a non-classical one, which we call TGFR (Tide-Generation Forces' Resonance). TGFR strongly depends on the tide-generation force at the times of strange astronomical points (SAP). The SAP mostly occur when the moon and another celestial body are aligned with the earth along a straight line (with the same apparent right ascension or a 180° difference); the other SAP are the turning points of the moon's motion relative to the earth. Moreover, TGFR has four different types of effective areas. Our study indicates that a majority of earthquakes are triggered by the rare superimposition of TGFR effective areas. In China, the great earthquakes in the plain area of Hebei Province, Taiwan, Yunnan Province and Sichuan Province are triggered by decompression TGFR; other earthquakes, in Gansu Province, Ningxia Province and northwest of Beijing, are triggered by compression TGFR. The great earthquakes in Japan, California and southeastern Europe are also triggered by compression TGFR, and in other parts of the world, such as the Philippines, Central American countries and West Asia, great earthquakes are triggered by decompression TGFR. We have carried out experimental imminent prediction combining the TGFR method with other earthquake-impending signals such as those suggested by Professor Li Junzhi. The success ratio is about 40% (from our forecast reports to the China Seismological Administration). Thus we could say that great earthquakes can be predicted (including imminent earthquake prediction). Key words: imminent prediction; triggering factor; TGFR (Tide-Generation Forces' Resonance); TGFR compression; TGFR compression zone; TGFR decompression; TGFR decompression zone

Presents an analysis of the causes of earthquakes. Topics discussed include (1) geological and seismological factors that determine the effect of a particular earthquake on a given structure; (2) description of some large earthquakes such as the San Francisco quake; and (3) prediction of earthquakes. (HM)

The ANSA web news item titled "'No L'Aquila quake risk' experts probed in Italy," published in June 2010, shocked the Japanese seismological community. For the 6 months preceding the L'Aquila earthquake, which occurred on 6th April 2009, seismicity in the region had been active. After the activity increased further and reached magnitude 4 on 30th March, the government convened the Major Risks Committee, which is part of the Civil Protection Department and is tasked with forecasting possible risks by collating and analyzing data from a variety of sources and making preventative recommendations. According to the ANSA news, the committee did not point to the risk of a damaging earthquake at the press conference held after its meeting. Six days later, however, a magnitude 6.3 earthquake struck L'Aquila and killed 308 people. On 3rd June of the following year, prosecutors opened an investigation after complaints from victims that far more people would have fled their homes that night had there been no reassurances from the Major Risks Committee the previous week. The lessons from this issue are of significant importance. Science communication is now in currency, and more efforts are made to reach out to the public and policy makers. But when we deal with disaster sciences, communication contains a much bigger proportion of risk communication. A similar incident happened with the outbreak of BSE back in the late 1980s. Many of the measures taken according to the Southwood Committee are laudable, but for one: science back then could not show whether or not the disease was contagious to humans, and it is written in the committee minutes that "it is unlikely to infect humans". If read thoroughly, the report does refer to the risk, but since the risk had not been stressed, the government started a campaign saying that "UK beef is safe". In the presentation, we review the L'Aquila affair, referring to our interviews with some of the committee members and the Civil Protection Department, and also introduce

Natural hazards like earthquakes can result in enormous property damage and human casualties in mountainous areas. Italy has always been exposed to numerous earthquakes, mostly concentrated in its central and southern regions. Last year, two seismic events occurred near Norcia (central Italy), leading to substantial loss of life and extensive damage to property, infrastructure and cultural heritage. This research utilizes remote sensing products and GIS software to provide a database of information. We used both SAR images from Sentinel-1A and optical imagery from Landsat 8 to examine differences in topography with the aid of a multi-temporal monitoring technique. This technique is well suited to observing any surface deformation. The database is a cluster of information regarding the consequences of the earthquakes in groups such as property and infrastructure damage, regional rifts, cultivation loss, landslides and surface deformations, among others, all mapped in GIS software. Relevant organizations can use these data to calculate the financial impact of this type of earthquake. In the future, we can enrich the database with more regions and enhance the variety of its applications. For instance, we could predict the future impacts of any type of earthquake in several areas, and design a preliminary model of emergency for immediate evacuation and quick recovery response. It is important to know how the surface moves, particularly in geographical regions like Italy, Cyprus and Greece, where earthquakes are so frequent. We are not able to predict earthquakes, but using data from this research, we may assess the damage that could be caused in the future.

A regional approach to the problem of assessing earthquake predictions inevitably faces a deficit of data. We point out some basic limits of assessment methods reported in the literature, considering the practical case of the performance of the CN pattern recognition method in the prediction of large Italian earthquakes. Along with classical hypothesis testing, a new game approach, the so-called parimutuel gambling (PG) method, is examined. The PG, originally proposed for the evaluation of probabilistic earthquake forecasts, has recently been adapted to the case of 'alarm-based' CN prediction. The PG approach is a non-standard method; therefore it deserves careful examination and theoretical analysis. We show that the alarm-based PG version leads to an almost complete loss of information about predicted earthquakes (even for a large sample). As a result, any conclusions based on the alarm-based PG approach are not to be trusted. We also show that the original probabilistic PG approach does not necessarily identify the genuine forecast correctly among competing seismicity rate models, even when applied to extensive data.

... DEPARTMENT OF THE INTERIOR U.S. Geological Survey National Earthquake Prediction Evaluation... 96-472, the National Earthquake Prediction Evaluation Council (NEPEC) will hold a 1 1/2-day meeting.... Geological Survey on proposed earthquake predictions, on the completeness and scientific validity of the...

... Earthquake Prediction Evaluation Council (NEPEC) AGENCY: U.S. Geological Survey, Interior. ACTION: Notice of meeting. SUMMARY: Pursuant to Public Law 96-472, the National Earthquake Prediction Evaluation Council... proposed earthquake predictions, on the completeness and scientific validity of the available data related...

The signals of the Earth's natural pulse electromagnetic field (ENPEMF) are a combination of the abnormal crustal magnetic field pulses affected by earthquakes, the induced field of the Earth's endogenous magnetic field, the induced magnetic field of the exogenous variation magnetic field, geomagnetic pulsation disturbances, and other energy coupling processes between the sun and the earth. As an instantaneous disturbance of the variation field of natural geomagnetism, ENPEMF can be used to predict earthquakes. This theory was introduced by A. A. Vorobyov, who expressed the hypothesis that pulses can arise not only in the atmosphere but also within the Earth's crust, due to processes of tectonic-to-electric energy conversion (Vorobyov, 1970; Vorobyov, 1979). The global field time scale of ENPEMF signals has specific stability. Although the wave curves may not overlap completely in different regions, the smoothed diurnal ENPEMF patterns always exhibit the same trend per month. This feature is a good reference for observing abnormalities of the Earth's natural magnetic field in a specific region. The frequencies of ENPEMF signals generally lie in the kilohertz range, and frequencies within the 5-25 kHz range can be applied to monitor earthquakes. In Wuhan, the best observation frequency is 14.5 kHz. Two special devices are placed along the S-N and W-E directions. A dramatic variation between the pulse waveforms obtained from the instruments and the normal reference envelope diagram should indicate a high possibility of an earthquake. The proposed ENPEMF-based earthquake detection method can improve the geodynamic monitoring effect and can enrich earthquake prediction methods. We suggest that further research address the exact source composition of ENPEMF signals, the distinction between noise and useful signals, and the effect of the Earth's gravity tide and solid tidal waves. This method may also provide a promising application in

The Large-Scale Earthquake Countermeasures Act was enacted in Japan in December 1978. This act aims at mitigating earthquake hazards by designating an area as an area under intensified measures against earthquake disaster, such designation being based on long-term earthquake prediction information, and by issuing an earthquake warning statement based on imminent prediction information, when possible. In an emergency case as defined by the law, the prime minister is empowered to take various actions that cannot be taken at ordinary times. For instance, he may ask the Self-Defense Force to come into the earthquake-threatened area before the earthquake occurs. A Prediction Council has been formed in order to evaluate premonitory effects that might be observed over the Tokai area, which was designated an area under intensified measures against earthquake disaster in June 1979. An extremely dense observation network has been constructed over the area.

Social support and self-efficacy are regarded as coping resources that may facilitate readjustment after traumatic events. The 2009 Cinchona earthquake in Costa Rica serves as an example for such an event to study resources to prevent subsequent severity of posttraumatic stress symptoms. At Time 1 (1-6 months after the earthquake in 2009), N=200 survivors were interviewed, assessing resource loss, received family support, and posttraumatic stress response. At Time 2 in 2012, severity of posttraumatic stress symptoms and general self-efficacy beliefs were assessed. Regression analyses estimated the severity of posttraumatic stress symptoms accounted for by all variables. Moderator and mediator models were examined to understand the interplay of received family support and self-efficacy with posttraumatic stress symptoms. Baseline posttraumatic stress symptoms and resource loss (T1) accounted for significant but small amounts of the variance in the severity of posttraumatic stress symptoms (T2). The main effects of self-efficacy (T2) and social support (T1) were negligible, but social support buffered resource loss, indicating that only less supported survivors were affected by resource loss. Self-efficacy at T2 moderated the support-stress relationship, indicating that low levels of self-efficacy could be compensated by higher levels of family support. Receiving family support at T1 enabled survivors to feel self-efficacious, underlining the enabling hypothesis. Receiving social support from relatives shortly after an earthquake was found to be an important coping resource, as it alleviated the association between resource loss and the severity of posttraumatic stress response, compensated for deficits of self-efficacy, and enabled self-efficacy, which was in turn associated with more adaptive adjustment 3 years after the earthquake.

Prediction of Earthquakes by Lunar Cycles. Author: Guillermo Rodriguez Rodriguez. Affiliation: Geophysicist and Astrophysicist, retired. I have presented this idea at many meetings of the EGS, UGS, and IUGG (1995), since 1980 and 1982-83, and at AGU 2002 (Washington) and 2003 (Nice). I have three approximations in time. First, earthquakes happen on the same day of the year every 18 or 19 years (the Saros cycle), sometimes in the same place and sometimes very far away; at other times of the year the cycle can be 14 years, 26 years, or 32 years, or multiples of 18.61 years, especially 55, 93, 150, 224, 300, etc. This gives the day of the year. Second, over the cycle of one lunation (days after the date of the new moon), the great earthquakes happen at different intervals of days in successive lunations (approximately one month), as can be seen in the enclosed graphic. This gives the day of the month. Third, for each day, I have found that approximately every 28 days the same hour and minute, the same longitude, and the same latitude repeat in all earthquakes, including the small ones. This is very important because we could then propose only the precaution of waiting in the streets or squares. Sometimes the cycles can be longer or shorter. This is my particular scientific method. As a consequence of the first and second principles, we can look at the correlation between years separated by cycles of the first type, for example 1984 and 2002 or 2003 and consecutive years including 2007. For 30 years I have examined the dates. I sense the pattern subconsciously, but I cannot yet express it in scientific formalism.

For the 6 months preceding the L'Aquila earthquake of 6 April 2009, seismicity in the region had been active. After activity increased further and reached a magnitude 4 earthquake on 30 March, the government convened the Major Risks Committee, which is part of the Civil Protection Department and is tasked with forecasting possible risks by collating and analyzing data from a variety of sources and making preventative recommendations. At the press conference immediately after the committee met, officials reported that "The scientific community tells us there is no danger, because there is an ongoing discharge of energy. The situation looks favorable." Six days later, a magnitude 6.3 earthquake struck L'Aquila and killed 308 people. On 3 June of the following year, prosecutors opened an investigation after complaints from victims that far more people would have fled their homes that night had there been no reassurances from the Major Risks Committee the previous week. The issue became widely known in the seismological community especially after an email titled "Letter of Support for Italian Earthquake Scientists," from seismologists at the National Geophysics and Volcanology Institute (INGV), was sent worldwide. It says that the L'Aquila prosecutor's office indicted the members of the Major Risks Committee for manslaughter and that the charges are for failing to provide a short-term alarm to the population before the earthquake struck. It is true that there is no generalized method to predict earthquakes, but failing to issue a short-term alarm is not the reason for the investigation of the scientists. The chief prosecutor stated that "the committee could have provided the people with better advice" and that "it wasn't the case that they did not receive any warnings, because there had been tremors." The email also requested sign-on support for an open letter to the president of Italy from Earth science colleagues all over the world, and it collected more than 5,000 signatures.

We study repeated earthquake slip of a 2 m long laboratory granite fault surface with approximately homogeneous frictional properties. In this apparatus, earthquakes follow a period of controlled, constant-rate shear stress increase, analogous to tectonic loading. Slip initiates and accumulates within a limited area of the fault surface while the surrounding fault remains locked. Dynamic rupture propagation and slip of the entire fault surface are induced when slip in the nucleating zone becomes sufficiently large. We report on the event-to-event reproducibility of loading time (recurrence interval), failure stress, stress drop, and precursory activity. We tentatively interpret these variations as indications of the intrinsic variability of small earthquake occurrence and source physics in this controlled setting. We use the results to produce measures of earthquake predictability based on the probability density of repeating occurrence and the reproducibility of near-field precursory strain. At 4 MPa normal stress and a loading rate of 0.0001 MPa/s, the loading time is ~25 min, with a coefficient of variation of around 10%. Static stress drop has a similar variability, which results almost entirely from variability of the final (rather than initial) stress. Thus, the initial stress has low variability and event times are slip-predictable. The variability of loading time to failure is comparable to the lowest variability of recurrence time of small repeating earthquakes at Parkfield (Nadeau et al., 1998), and our result may be a good estimate of the intrinsic variability of recurrence. Distributions of loading time can be adequately represented by a log-normal or Weibull distribution, but long-term prediction of the next event time based on probabilistic representation of previous occurrence is not dramatically better than for field-observed small- or large-magnitude earthquake datasets. The gradually accelerating precursory aseismic slip observed in the region of
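The two summary statistics the abstract leans on, the coefficient of variation of recurrence (loading) times and a log-normal fit to their distribution, are straightforward to compute. The sketch below is illustrative only; the sample times are invented, not the experiment's data, and the log-normal is fit by the method of moments on log times.

```python
import math
from statistics import mean, pstdev

def coefficient_of_variation(times):
    """Standard deviation of recurrence times divided by their mean."""
    return pstdev(times) / mean(times)

def lognormal_params(times):
    """Method-of-moments log-normal fit: mean and std of log(t)."""
    logs = [math.log(t) for t in times]
    return mean(logs), pstdev(logs)

# Hypothetical loading times (minutes), scattered around ~25 min to
# mimic the variability reported for the 2 m laboratory fault.
times = [25.1, 27.3, 22.8, 24.6, 26.0, 23.9, 25.7, 24.2]
cv = coefficient_of_variation(times)
mu, sigma = lognormal_params(times)
print(round(cv, 3), round(math.exp(mu), 1))  # CoV and median recurrence
```

With a fitted (mu, sigma), the probability of the next event falling in a given window follows from the log-normal CDF, which is the kind of long-term occurrence forecast the abstract evaluates.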

The behavior of individual stick-slip events observed in three different laboratory experimental configurations is better explained by a "memoryless" earthquake model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. We make similar findings in the companion manuscript for the behavior of natural repeating earthquakes. Taken together, these results allow us to conclude that the predictions of a characteristic earthquake model that assumes either fixed slip or fixed recurrence interval should be preferred to the predictions of the time- and slip-predictable models for all earthquakes. Given that the fixed slip and recurrence models are the preferred models for all of the experiments we examine, we infer that in an event-to-event sense the elastic rebound model underlying the time- and slip-predictable models does not explain earthquake behavior. This does not indicate that the elastic rebound model should be rejected in a long-term sense, but it should be rejected for short-term predictions. The time- and slip-predictable models likely offer worse predictions of earthquake behavior because they rely on assumptions that are too simple to explain the behavior of earthquakes. Specifically, the time-predictable model assumes a constant failure threshold, and the slip-predictable model assumes that there is a constant minimum stress. There is experimental and field evidence that these assumptions are not valid for all earthquakes.

This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecasts and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model, or the ETAS model, and when the reference model is the Poisson model.
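The "fair rule" can be made concrete for binary alarms. Under the usual fair-odds convention (an assumption here, sketched from the abstract rather than taken from the paper's equations): if the reference model assigns probability p to the target event in a bin and the forecaster bets r points on an alarm, a success pays r(1-p)/p and a failure forfeits r, so a forecaster no better than the reference model has zero expected score.

```python
def gambling_score(bets, outcomes, p_ref, r=1.0):
    """Total reputation gained or lost over a set of binary forecast bins.

    bets:     booleans, True where the forecaster issues an alarm.
    outcomes: booleans, True where a target event actually occurred.
    p_ref:    reference-model ("house") probability of an event per bin.
    r:        reputation points wagered per alarm.

    Fair odds: a successful alarm pays r * (1 - p) / p; a failed alarm
    forfeits the r points bet.  Rare events (small p) pay more, which is
    how the score compensates for the risk the forecaster takes.
    """
    score = 0.0
    for bet, hit, p in zip(bets, outcomes, p_ref):
        if not bet:
            continue  # no bet placed in this bin, no points at stake
        score += r * (1 - p) / p if hit else -r
    return score

# A forecaster alarms twice against a 10% reference rate:
# one success (+9 points) and one failure (-1 point).
s = gambling_score([True, True, False], [True, False, False], [0.1, 0.1, 0.1])
print(s)  # → 8.0
```

The expected payoff of a single alarm under the reference model is p * r(1-p)/p - (1-p) * r = 0, which is the sense in which the house's odds are fair.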

M ≥ 7 earthquakes have shown an obvious commensurability and orderliness in Xinjiang, China, and its adjacent region since 1800. The main orderly values are 30 a × k (k = 1, 2, 3), 11-12 a, 41-43 a, 18-19 a, and 5-6 a. Under the guidance of the information forecasting theory of Wen-Bo Weng, and based on previous research results, we combine ordered network structure analysis with complex network technology, focus on the prediction summary of M ≥ 7 earthquakes using the ordered network structure, and add new information to further optimize the network, hence constructing the 2D and 3D ordered network structures of M ≥ 7 earthquakes. The network structure fully reveals the regularity of M ≥ 7 seismic activity in the study region during the past 210 years. On this basis, the Karakorum M7.1 earthquake in 1996, the M7.9 earthquake on the frontier of Russia, Mongolia, and China in 2003, and the two Yutian M7.3 earthquakes in 2008 and 2014 were predicted successfully. At the same time, a new prediction opinion is presented: the next two M ≥ 7 earthquakes will probably occur around 2019-2020 and 2025-2026 in this region. The results show that large earthquakes occurring in a defined region can be predicted. The method of ordered network structure analysis produces satisfactory results for the mid- and long-term prediction of M ≥ 7 earthquakes.

A statistical procedure, derived from a theoretical model of fracture growth, is used to identify a foreshock sequence while it is in progress. As a predictor, the procedure reduces the average uncertainty in the rate of occurrence for a future strong earthquake by a factor of more than 1000 when compared with the Poisson rate of occurrence. About one-third of all main shocks with local magnitude greater than or equal to 4.0 in central California can be predicted in this way, starting from a 7-year database that has a lower magnitude cutoff of 1.5. The time scale of such predictions is on the order of a few hours to a few days for foreshocks in the magnitude range from 2.0 to 5.0.
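The "factor of more than 1000" is a probability gain: the chance of the target event inside the alarm window divided by the Poisson chance of at least one such event in a window of the same length. A minimal sketch of that ratio follows; the rates and alarm probability below are invented for illustration and are not the paper's values.

```python
import math

def probability_gain(p_alarm, rate, window):
    """Ratio of the event probability inside an alarm window to the
    Poisson probability of at least one event in the same window.

    p_alarm: probability a target event occurs during the alarm.
    rate:    background Poisson rate (events per unit time).
    window:  alarm duration in the same time units.
    """
    p_poisson = 1.0 - math.exp(-rate * window)
    return p_alarm / p_poisson

# Hypothetical numbers: a background rate of one target shock per year,
# a one-day alarm window, and a 30% chance the alarm is followed by a
# main shock, give a probability gain on the order of 100.
g = probability_gain(0.3, 1.0 / 365.0, 1.0)
print(round(g))  # → 110
```

A gain above 1000, as quoted in the abstract, requires either a much rarer background event or a considerably higher success rate per alarm than this toy example.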

As of mid-2014, the main organizations of the earthquake (EQ hereafter) prediction program, including the Seismological Society of Japan (SSJ) and the MEXT Headquarters for EQ Research Promotion, hold the official position that they neither can nor want to make any short-term prediction. This is an extraordinary stance for responsible authorities when the nation, after the devastating 2011 M9 Tohoku EQ, most urgently needs whatever information may exist on forthcoming EQs. Japan's national project for EQ prediction started in 1965, but it has had no success. The main reason is the failure to capture precursors. After the 1995 Kobe disaster, the project decided to give up short-term prediction, and this stance has been further fortified by the 2011 M9 Tohoku mega-quake. This paper tries to explain how this situation came about and suggests that it may in fact be a legitimate one that should have come a long time ago. Actually, substantial positive changes are taking place now. Some promising signs are arising even from cooperation between researchers and the private sector, and there is a move to establish an "EQ Prediction Society of Japan". From now on, maintaining high scientific standards in EQ prediction will be of crucial importance.

The FDL method makes use of Fibonacci, Dual, and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge; the others are the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity, and the reported FDL method belongs to this category. The basis of the FDL method is the creation of FDL future dates based on the onset dates of significant earthquakes, the assumption being that each earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past earthquakes and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods, it has been shown that significant earthquakes (<6.5R) can be predicted with a very good accuracy window (+-1 day). In this contribution we present an improved modification of the FDL method, the MFDL method, which performs better than the FDL. We use the FDL numbers to develop possible earthquake dates, but with the important difference that the starting seed date is a trigger planetary aspect prior to the earthquake. Typical planetary aspects are Moon conjunct Sun, Moon opposite Sun, and Moon conjunct or opposite the North or South Nodes. In order to test the improvement of the method we used all +8R earthquakes recorded since 1900 (86 earthquakes from USGS data). We developed the FDL numbers for each of those seeds and examined the earthquake hit rates (for a window of 3, i.e., +-1 day of the target date) and for <6.5R. The successes are counted for each of the 86 earthquake seeds, and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is the planetary trigger date prior to the earthquake. We observe no improvement only when a planetary trigger coincided with

Last October 21st marked the 140th anniversary of the M6.8 1868 Hayward earthquake, the last damaging earthquake on the southern Hayward Fault. This anniversary was used to help publicize the seismic hazards associated with the fault because: (1) the past five such earthquakes on the Hayward Fault occurred about 140 years apart on average, and (2) the Hayward-Rodgers Creek Fault system is the most likely fault in the Bay Area (with a 31 percent probability) to produce a M6.7 or greater earthquake in the next 30 years. To promote earthquake awareness and preparedness, over 140 public and private agencies and companies and many individuals joined the public-private nonprofit 1868 Hayward Earthquake Alliance (1868alliance.org). The Alliance sponsored many activities, including a public commemoration at Mission San Jose in Fremont, which survived the 1868 earthquake. This event was followed by an earthquake drill at Bay Area schools involving more than 70,000 students. The anniversary prompted the Silver Sentinel, an earthquake response exercise based on the scenario of an earthquake on the Hayward Fault, conducted by Bay Area county Offices of Emergency Services; 60 other public and private agencies also participated in this exercise. The California Seismic Safety Commission and KPIX (CBS affiliate) produced professional videos designed for school classrooms promoting Drop, Cover, and Hold On. Starting in October 2007, the Alliance and the U.S. Geological Survey held a sequence of press conferences to announce the release of new research on the Hayward Fault as well as new loss estimates for a Hayward Fault earthquake. These included: (1) a ShakeMap for the 1868 Hayward earthquake, (2) a report by the U.S. Bureau of Labor Statistics forecasting the number of employees, employers, and wages predicted to be within the areas most strongly shaken by a Hayward Fault earthquake, (3) new estimates of the losses associated with a Hayward Fault earthquake, (4) new ground motion

Seismically induced landslides are a major environmental effect of earthquakes and may contribute significantly to related losses. Moreover, in paleoseismology, landslide event sizes are an important proxy for estimating the intensity and magnitude of past earthquakes, allowing us to improve seismic hazard assessment over longer terms. Not only earthquake intensity but also factors such as fault characteristics, topography, climatic conditions, and the geological environment have a major impact on the intensity and spatial distribution of earthquake-induced landslides. We present here a review of factors contributing to earthquake-triggered slope failures based on an "event-by-event" classification approach. The objective of this analysis is to enable the short-term prediction of earthquake-triggered landslide event sizes, in terms of the number of landslides and the size of the affected area, right after an earthquake occurs. Five main factors, 'Intensity', 'Fault', 'Topographic energy', 'Climatic conditions', and 'Surface geology', were used to establish a relationship to the number and spatial extent of landslides triggered by an earthquake. The relative weight of these factors was extracted from published data for numerous past earthquakes; topographic inputs were checked in Google Earth and through geographic information systems. Based on well-documented recent earthquakes (e.g., Haiti 2010, Wenchuan 2008) and on older events for which reliable extensive information was available (e.g., Northridge 1994, Loma Prieta 1989, Guatemala 1976, Peru 1970), the combination and relative weights of the factors were calibrated. The calibrated factor combination was then applied to more than 20 earthquake events for which landslide distribution characteristics could be cross-checked. One of our main findings is that the 'Fault' factor, which is based on characteristics of the fault, the surface rupture, and its location with respect to mountain areas, has the most important

The number of successes and the space-time alarm rate are commonly used to characterize the strength of an earthquake prediction method and the significance of prediction results. It has recently been suggested to use a new characteristic to evaluate the forecaster's skill, the gambling score (GS), which incorporates the difficulty of guessing each target event by using different weights for different alarms. We expand the parametrization of the GS and use the M8 prediction algorithm to illustrate the difficulties of the new approach in the analysis of prediction significance. We show that the level of significance strongly depends on (1) the choice of alarm weights, (2) the partitioning of the entire alarm volume into component parts, and (3) the accuracy of the spatial rate measure of target events. These tools are at the disposal of the researcher and can affect the significance estimate. Formally, all reasonable GSs discussed here corroborate that the M8 method is non-trivial in the prediction of 8.0 ≤ M < 8.5 events, because the point estimates of the significance are in the range 0.5-5 per cent. However, the conservative estimate of 3.7 per cent based on the number of successes seems preferable owing to two circumstances: (1) it is based on relative values of the spatial rate and hence is more stable, and (2) the statistic of successes enables us to construct analytically an upper estimate of the significance taking into account the uncertainty of the spatial rate measure.

We examine the initial subevent (ISE) of the M 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the 'preslip' and 'cascade' models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model, ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events, and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch in which the patch dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short-period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements of both models and thus do not discriminate between them. However, further tests show the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and, more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.

Regardless of the future potential of earthquake prediction, it is presently impractical to rely on it to mitigate earthquake disasters. The practical approach is to strengthen the resilience of our built environment to earthquakes based on hazard assessment. But this was not common understanding in China when the M 7.9 Wenchuan earthquake struck Sichuan Province on 12 May 2008, claiming over 80,000 lives. In China, earthquake prediction is a government-sanctioned and law-regulated measure of disaster prevention. A sudden boom of the earthquake prediction program in 1966-1976 coincided with a succession of nine M > 7 damaging earthquakes in a densely populated region of the country and with the political chaos of the Cultural Revolution. It climaxed with the prediction of the 1975 Haicheng earthquake, which was due mainly to an unusually pronounced foreshock sequence and the extraordinary readiness of some local officials to issue an imminent warning and evacuation order. The Haicheng prediction was a success in practice and yielded useful lessons, but the experience cannot be applied to most other earthquakes and cultural environments. Since the disastrous Tangshan earthquake in 1976, which killed over 240,000 people, there have been two opposite trends in China: decreasing confidence in prediction and increasing emphasis on regulating construction design for earthquake resilience. In 1976, most of the seismic intensity XI areas of Tangshan were literally razed to the ground, but in 2008, many buildings in the intensity XI areas of Wenchuan did not collapse. Prediction did not save lives in either of these events; the difference was made by construction standards. For regular buildings, there was no seismic design in Tangshan to resist any earthquake shaking in 1976, but limited seismic design was required for the Wenchuan area in 2008. Although the construction standards were later recognized to be too low, those buildings that met the standards suffered much less

The 28 September 2004 M 6.0 Parkfield earthquake, a long-anticipated event on the San Andreas fault, is the world's best recorded earthquake to date, with state-of-the-art data obtained from geologic, geodetic, seismic, magnetic, and electrical field networks. This has allowed the preearthquake and postearthquake states of the San Andreas fault in this region to be analyzed in detail. Analyses of these data provide views into the San Andreas fault that show a complex geologic history, fault geometry, rheology, and response of the nearby region to the earthquake-induced ground movement. Although aspects of San Andreas fault zone behavior in the Parkfield region can be modeled simply over geological time frames, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake indicate that predicting the fine details of future earthquakes is still a challenge. Instead of a deterministic approach, forecasting future damaging behavior, such as that caused by strong ground motions, will likely continue to require probabilistic methods. However, the Parkfield Earthquake Prediction Experiment and the 2004 Parkfield earthquake have provided ample data to understand most of what did occur in 2004, culminating in significant scientific advances.

The behavior of individual events in repeating earthquake sequences in California, Taiwan, and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

Earthquake prediction research must meet certain standards before it can be suitably evaluated for potential application in decision making. For methods that result in a binary (on or off) alarm condition, requirements include (1) a quantitative description of observables that trigger an alarm, (2) a quantitative description, including ranges of time, location, and magnitude, of the predicted earthquakes, (3) documented evidence of all previous alarms, (4) a complete list of predicted earthquakes, (5) a complete list of unpredicted earthquakes. The VAN technique [Varotsos and Lazaridou, 1991; Varotsos et al., 1996] has not yet been stated as a testable hypothesis. It fails criteria (1) and (2) so it is not ready to be evaluated properly. Although telegrams were transmitted in advance of claimed successes, these telegrams did not fully specify the predicted events, and all of the published statistical evaluations involve many subjective ex post facto decisions. Lacking a statistically demonstrated relationship to earthquakes, a candidate prediction technique should satisfy several plausibility criteria, including: (1) a reasonable relationship between the location of the candidate precursor and that of the predicted earthquake, (2) some demonstration that the candidate precursory observations are related to stress, strain, or other quantities related to earthquakes, and (3) the existence of co-seismic as well as pre-seismic variations of the candidate precursor. The VAN technique meets none of these criteria.

The Earth is a hierarchy of volumes of different sizes. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and it produces earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system, in the sense of extrapolating a trajectory into the future, is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities for prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims and suggests rules and recipes for adequate classification, comparison, and optimization of earthquake predictions. The approach has already led to the design of a reproducible intermediate-term, middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at the cost of additional failures-to-predict. Despite the limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

Earthquake prediction research has searched for both informational phenomena, those that provide information about earthquake hazards useful to the public, and causal phenomena, those causally related to the physical processes governing failure on a fault, which improve our understanding of those processes. Neither informational nor causal phenomena are a subset of the other. I propose a classification of potential earthquake predictors into informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be obtained by assuming a random distribution. Achieving higher, more accurate probabilities than a random distribution requires much more information about the precursor than just that it is causally related to the earthquake.
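The quantitative content of "more accurate than a random distribution" is the probability gain of the precursor over the unconditional rate. A minimal sketch, with hypothetical counts, might look like:

```python
def probability_gain(hits, n_precursor_windows, n_events, n_windows):
    """Ratio of the conditional probability of an earthquake given the
    precursor to the unconditional (random-guess) probability."""
    p_cond = hits / n_precursor_windows   # P(quake | precursor observed)
    p_uncond = n_events / n_windows       # P(quake) in a random window
    return p_cond / p_uncond
```

A phenomenon qualifies as a predictor in this classification only if the gain exceeds 1; a causal relationship alone does not guarantee that.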

The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs in time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
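The Omori aftershock decay law mentioned above has a standard closed form. The following sketch uses illustrative values of the productivity K, time offset c, and decay exponent p (none taken from the paper):

```python
import math

def omori_rate(t, K=100.0, c=0.1, p=1.0):
    """Modified Omori law: aftershock rate at time t (days) after the
    mainshock, n(t) = K / (c + t)**p."""
    return K / (c + t) ** p

def expected_count(t1, t2, K=100.0, c=0.1):
    """Expected number of aftershocks in [t1, t2] for the p = 1 case,
    obtained by integrating the rate in closed form."""
    return K * math.log((c + t2) / (c + t1))
```

In the rate-and-state seismicity formulation, K, c, and p acquire physical interpretations in terms of the stress step and the nucleation parameters, rather than being purely empirical fitting constants.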

Prediction of seismic events is one of the most challenging tasks for the scientific community. The conventional way to predict earthquakes is to monitor crustal movements, though this method has not yet yielded satisfactory results. Furthermore, this method fails to give any short-term prediction. It has recently been noticed that prior to a seismic event a large amount of energy is released, which may create disturbances in the lower part of the D-layer/E-layer of the ionosphere. This ionospheric disturbance may be used as a precursor of earthquakes. Since VLF radio waves propagate inside the wave-guide formed by the lower ionosphere and the Earth's surface, this signal may be used to identify ionospheric disturbances due to seismic activity. We have analyzed VLF signals to find out the correlations, if any, between VLF signal anomalies and seismic activity. We carried out both case-by-case studies and a statistical analysis using a full year of data. With both methods we found that the night-time amplitude of VLF signals fluctuated anomalously three days before the seismic events. We also found that the terminator time of the VLF signals shifted anomalously towards night a few days before major seismic events. We calculate the D-layer preparation time and D-layer disappearance time from the VLF signals, and we observed that both become anomalously high 1-2 days before seismic events. We also found strong evidence indicating that it may be possible to predict the locations of earthquake epicenters in the future by analyzing VLF signals along multiple propagation paths.
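Screening a signal for "anomalous" fluctuations relative to its usual level is often implemented as a trailing-mean sigma test. The following is a minimal sketch under that assumption; the window length and threshold k are illustrative choices, not the authors' actual procedure:

```python
def flag_anomalies(series, window=30, k=2.0):
    """Flag samples deviating more than k standard deviations from the
    mean of the preceding `window` samples (a naive anomaly detector)."""
    flags = []
    for i, x in enumerate(series):
        past = series[max(0, i - window):i]
        if len(past) < 5:          # not enough history yet
            flags.append(False)
            continue
        mu = sum(past) / len(past)
        sd = (sum((v - mu) ** 2 for v in past) / len(past)) ** 0.5
        flags.append(sd > 0 and abs(x - mu) > k * sd)
    return flags
```

Applied to nightly VLF amplitude values, such a detector would flag the kind of pre-event fluctuation described, though any operational definition of "anomalous" must of course be fixed before the test, per the evaluation standards discussed earlier in this collection.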

A method to predict earthquakes in two of the seismogenic areas of the Iberian Peninsula, based on Artificial Neural Networks (ANNs), is presented in this paper. ANNs have been widely used in many fields, but only very few and very recent studies have applied them to earthquake prediction. Two kinds of predictions are provided in this study: a) the probability of an earthquake of magnitude equal to or larger than a preset threshold occurring within the next 7 days; b) the probability of an earthquake within a limited magnitude interval occurring during the next 7 days. First, the physical fundamentals related to earthquake occurrence are explained. Second, the mathematical model underlying ANNs is explained and the chosen configuration is justified. Then, the ANNs were trained in both areas: the Alborán Sea and the Western Azores-Gibraltar fault. Later, the ANNs were tested in both areas for a period of time immediately subsequent to the training period. Statistical tests are provided showing meaningful results. Finally, the ANNs were compared to other well-known classifiers, showing quantitatively and qualitatively better results. The authors expect that the results obtained will encourage researchers to conduct further research on this topic. Highlights: development of a system capable of predicting earthquakes for the next seven days; application of ANNs is particularly reliable for earthquake prediction; use of geophysical information modeling the soil behavior as the ANN's input data; successful analysis of one region with large seismic activity.
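For context, the kind of 7-day exceedance probability the ANNs output can be compared against a simple Poisson/Gutenberg-Richter baseline. The sketch below is such a reference baseline only, not the ANN model of the paper; the regional rate and b-value are assumed inputs:

```python
import math

def prob_exceedance_7d(rate_per_year, m_threshold, m_min=3.0, b=1.0):
    """P(at least one M >= m_threshold event in the next 7 days), under a
    Poisson occurrence model with Gutenberg-Richter magnitudes above m_min."""
    frac = 10 ** (-b * (m_threshold - m_min))    # fraction of events >= threshold
    lam = rate_per_year * frac * (7.0 / 365.25)  # expected count in 7 days
    return 1.0 - math.exp(-lam)
```

A learned classifier is only interesting to the extent that it beats such a time-independent baseline on held-out data, which is what the statistical tests in the paper are meant to establish.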

The San Andreas fault is viewed, according to the concepts of seafloor spreading and plate tectonics, as a transform fault that separates the Pacific and North American plates and along which relative movements of 2 to 6 cm/year have been taking place. The resulting strain can be released by creep, by earthquakes of moderate size, or (as near San Francisco and Los Angeles) by great earthquakes. Microearthquakes, as mapped by a dense seismograph network in central California, generally coincide with zones of the San Andreas fault system that are creeping. Microearthquakes are few and scattered in zones where elastic energy is being stored. Changes in the rate of strain, as recorded by tiltmeter arrays, have been observed before several earthquakes of about magnitude 4. Changes in fluid pressure may control timing of seismic activity and make it possible to control natural earthquakes by controlling variations in fluid pressure in fault zones. An experiment in earthquake control is underway at the Rangely oil field in Colorado, where the rates of fluid injection and withdrawal in experimental wells are being controlled.

For 20 years the South Iceland Seismic Zone (SISZ) was a test site for multinational earthquake prediction research, partly bridging the gap between laboratory test samples and the huge transform zones of the Earth. The approach was to explore the physics of the processes leading up to large earthquakes. The book Advances in Earthquake Prediction, Research and Risk Mitigation, by R. Stefansson (2011), published by Springer/PRAXIS, and an article in the August issue of the BSSA by Stefansson, M. Bonafede and G. Gudmundsson (2011) contain a good overview of the findings and more references, as well as examples of partially successful long- and short-term warnings based on such an approach. Significant findings are: Earthquakes that occurred hundreds of years ago left scars in the crust, expressed in volumes of heterogeneity that demonstrate the size of their faults. Rheology and stress heterogeneity within these volumes are significantly variable in time and space. Crustal processes in and near such faults may be observed through microearthquake information decades before the sudden onset of a new large earthquake. High-pressure fluids of mantle origin may, in response to strain, especially near plate boundaries, migrate upward into the brittle/elastic crust and play a significant role in modifying crustal conditions over the long and short term. Preparatory processes of various earthquakes cannot be expected to be the same. We learn about an impending earthquake by observing long-term preparatory processes at the fault, finding a constitutive relationship that governs the processes, and then extrapolating that relationship into nearby space and the near future. This is a deterministic approach to earthquake prediction research. Such extrapolations contain many uncertainties. However, the long-term pattern of observations of the pre-earthquake fault process will help us to put probability constraints on our extrapolations and our warnings. The approach described is different from the usual

Casualty prediction in a building during earthquakes supports economic loss estimation in the performance-based earthquake engineering methodology. Although post-earthquake observations reveal that evacuation affects the number of occupant casualties during earthquakes, few current studies consider occupant movements in the building in casualty prediction procedures. To bridge this knowledge gap, a numerical simulation method using a refined cellular automata model is presented, which can describe various occupant dynamic behaviors and building dimensions. The simulation of the occupant evacuation is verified against a recorded evacuation process from a school classroom in the real-life 2013 Ya'an earthquake in China. The occupant casualties in the building under earthquakes are evaluated by coupling the building collapse process simulation by the finite element method, the occupant evacuation simulation, and the casualty occurrence criteria, with time and space synchronization. A case study of casualty prediction in a building during an earthquake is provided to demonstrate the effect of occupant movements on casualty prediction.

The Serata Stressmeter has been developed to measure and monitor earthquake shear stress build-up along shallow active faults. The development work of the past 25 years has established the Stressmeter as an automatic stress measurement system for studying the timing of forthcoming major earthquakes, in support of current earthquake prediction studies based on statistical analysis of seismological observations. In early 1982, a series of major man-made earthquakes (magnitude 4.5-5.0) suddenly occurred in an area over a deep underground potash mine in Saskatchewan, Canada. By measuring the underground stress condition of the mine, the direct cause of the earthquakes was disclosed. The cause was successfully eliminated by controlling the stress condition of the mine. The Japanese government was interested in this development, and the Stressmeter was introduced into the Japanese government research program for earthquake stress studies. In Japan the Stressmeter was first utilized for direct measurement of the intrinsic lateral tectonic stress gradient G. The measurement, conducted at the Mt. Fuji Underground Research Center of the Japanese government, disclosed constant natural gradients of maximum and minimum lateral stresses in excellent agreement with the theoretical value, i.e., G = 0.25. All the conventional methods of overcoring, hydrofracturing and deformation, which were introduced to compete with the Serata method, failed, demonstrating the fundamental difficulties of the conventional methods. The intrinsic lateral stress gradient determined by the Stressmeter for the Japanese government was found to be the same as all the other measurements made by the Stressmeter in Japan. The stress measurement results obtained by the major international stress measurement work in the Hot Dry Rock Projects conducted in the USA, England and Germany are in good agreement with the Stressmeter results obtained in Japan. Based on this broad agreement, a solid geomechanical

In 2003, the Reverse Tracing of Precursors (RTP) algorithm attracted the attention of seismologists and international news agencies when researchers claimed two successful predictions of large earthquakes. These researchers had begun applying RTP to seismicity in Japan, California, the eastern Mediterranean and Italy; they have since applied it to seismicity in the northern Pacific, Oregon and Nevada. RTP is a pattern recognition algorithm that uses earthquake catalogue data to declare alarms, and these alarms indicate that RTP expects a moderate to large earthquake in the following months. The spatial extent of alarms is highly variable and each alarm typically lasts 9 months, although the algorithm may extend alarms in time and space. We examined the record of alarms and outcomes since the prospective application of RTP began, and in this paper we report on the performance of RTP to date. To analyse these predictions, we used a recently developed approach based on a gambling score, and we used a simple reference model to estimate the prior probability of target earthquakes for each alarm. Formally, we believe that RTP investigators did not rigorously specify the first two "successful" predictions in advance of the relevant earthquakes; because this issue is contentious, we consider analyses with and without these alarms. When we include the contentious alarms, the RTP predictions demonstrate statistically significant skill. Under a stricter interpretation, the predictions are marginally unsuccessful.
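The gambling-score idea can be sketched compactly: each alarm stakes one point, and a successful alarm pays off in inverse proportion to how likely the reference model already considered the event. This is a simplified rendering of the approach, not the exact scoring rule of the paper:

```python
def gambling_score(hits, prior_probs):
    """Gambling score for a set of alarms: each alarm stakes one point;
    a success pays (1 - p0) / p0 points, where p0 is the reference-model
    probability of a target earthquake inside that alarm."""
    score = 0.0
    for hit, p0 in zip(hits, prior_probs):
        score += (1.0 - p0) / p0 if hit else -1.0
    return score
```

The fair-game property is the point of the design: under the reference model, the expected payoff of every bet is zero, so a significantly positive total score indicates skill beyond the reference model.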

Looking back on years of earthquake prediction practice in the Yunnan area, it is widely considered that fixed-point earthquake precursory anomalies mainly reflect field information. Increases in the amplitude and number of precursory anomalies can help determine the occurrence time of earthquakes; however, it is difficult to establish a spatial relationship between earthquakes and precursory anomalies, so precursory anomalies can hardly be used to predict the locations of earthquakes. Past practice has shown that seismic activity is superior to precursory anomalies for predicting earthquake locations, since increased seismicity was observed before 80% of M≥6.0 earthquakes in the Yunnan area. Mobile geomagnetic anomalies, meanwhile, have turned out to be helpful in predicting earthquake locations in recent years; for instance, the occurrence time and area forecast from the 1-year-scale geomagnetic anomalies before the M6.5 Ludian earthquake in 2014 were shorter and smaller than those derived from the seismicity enhancement region. Based on past work, the author believes that the medium- to short-term earthquake forecast level, as well as the objective understanding of seismogenic mechanisms, could be substantially improved by densely deployed observation arrays and by capturing the dynamic process of physical property changes in the enhancement regions of medium to small earthquakes.

Many empirical relationships for earthquake ground motion duration were developed for interplate regions, whereas only a very limited number of empirical relationships exist for intraplate regions. Moreover, the existing relationships were developed mostly from scaled records of interplate earthquakes used to represent intraplate earthquakes. To the author's knowledge, none of the existing relationships for intraplate regions were developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships of earthquake ground motion duration (i.e., significant and bracketed) with earthquake magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites) using data compiled from intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled earthquake ground motion dataset consists of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. Non-linear mixed-effects (NLME) and logistic regression techniques (to account for zero duration) were used to fit predictive models to the duration data. The bracketed duration was found to decrease with increasing hypocentral distance and to increase with increasing earthquake magnitude. The significant duration was found to increase with increasing magnitude and hypocentral distance. Both significant and bracketed durations were predicted to be higher at rock sites than at soil sites. The predictive relationships developed herein are compared with the existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for significant duration predicts lower durations up to a certain distance and thereafter predicts higher durations compared to the
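The functional form typical of such duration relationships can be sketched as follows. The coefficients here are purely illustrative placeholders chosen to reproduce the qualitative trends reported for significant duration (increasing with magnitude and distance, rock above soil); they are not the fitted NLME values from the study:

```python
import math

def significant_duration(mag, r_hyp_km, soil):
    """Duration (s) from a hypothetical form ln D = c0 + c1*M + c2*ln R + c3*S,
    with S = 1 for soil sites and 0 for rock sites."""
    c0, c1, c2, c3 = -2.0, 0.7, 0.3, -0.1   # assumed, not fitted, values
    return math.exp(c0 + c1 * mag + c2 * math.log(r_hyp_km) + c3 * soil)
```

Fitting such a form with mixed effects lets the between-event variability be separated from the record-to-record variability, which is the reason NLME regression is used rather than ordinary least squares.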

We summarize a multi-year research effort on wide-ranging observations of pre-earthquake processes. Based on space and ground data, we present some new results relevant to the existence of pre-earthquake signals. Over the past 15-20 years there has been a major revival of interest in pre-earthquake studies in Japan, Russia, China, the EU, Taiwan and elsewhere. Recent large-magnitude earthquakes in Asia and Europe have shown the importance of these various studies in the search for earthquake precursors, whether for forecasting or prediction. Some new results were obtained from modeling of the atmosphere-ionosphere connection and from analyses of seismic records (foreshocks/aftershocks) and of geochemical, electromagnetic, and thermodynamic processes related to stress changes in the lithosphere, along with their statistical and physical validation. This cross-disciplinary approach could make an impact on our further understanding of the physics of earthquakes and of the phenomena that precede their energy release. We also present the potential impact of these interdisciplinary studies on earthquake predictability. A detailed summary of our approach and that of several international researchers will be part of this session and will subsequently be published in a new AGU/Wiley volume. This book is part of the Geophysical Monograph series and is intended to show the variety of parameters (seismic, atmospheric, geochemical and historical) involved in this important field of research, and to bring this knowledge and awareness to a broader geosciences community.

The 13 November 2016 M7.8 earthquake 54 km NNE of Amberley, New Zealand, and the 25 December 2016 M7.6 earthquake 42 km SW of Puerto Quellon, Chile, happened outside the area of the on-going real-time global testing of the intermediate-term middle-range earthquake prediction algorithm M8, accepted in 1992 for the M7.5+ range. Naturally, over the past two decades the level of registration of earthquakes worldwide has grown significantly and is by now sufficient for diagnosis of times of increased probability (TIPs) by the M8 algorithm over the entire territory of New Zealand and Southern Chile as far south as below 40°S. The mid-2016 update of the M8 predictions determines TIPs in the additional circles of investigation (CIs) where the two earthquakes happened. Thus, after 50 semiannual updates in the real-time prediction mode, we (1) confirm the statistically established high confidence of the M8-MSc predictions and (2) conclude that the territory of the Global Test of the algorithms M8 and MSc could be expanded in an apparently necessary revision of the 1992 settings.

An M9 super-giant earthquake with a huge tsunami devastated East Japan on 11 March 2011, causing more than 20,000 casualties and serious damage to the Fukushima nuclear plant. This earthquake was predicted neither in the short term nor in the long term. Seismologists were shocked because it was not even considered possible at the East Japan subduction zone. However, it was not the only unpredicted earthquake. In fact, throughout several decades of the National Earthquake Prediction Project, not even a single earthquake was predicted. In reality, practically no effective research has been conducted on the most important problem, short-term prediction. This happened because the Japanese national project was devoted to the construction of elaborate seismic networks, which was not the best way to pursue short-term prediction. After the Kobe disaster, in order to parry the mounting criticism of their history of no success, they defiantly changed their policy to "stop aiming at short-term prediction because it is impossible and concentrate resources on fundamental research", which meant obtaining "more funding for no-prediction research". The public was not, and still is not, informed about this change. Obviously, earthquake prediction will be possible only when reliable precursory phenomena are caught, and we have insisted that this would most likely be done through non-seismic means such as geochemical/hydrological and electromagnetic monitoring. Admittedly, the lack of convincing precursors for the M9 super-giant earthquake has an adverse effect on our case, although its epicenter was far offshore, out of the range of the operating monitoring systems. In this presentation, we show a new possibility of finding remarkable precursory signals, ironically, in ordinary seismological catalogs. In the framework of the new time domain termed natural time, an order parameter of seismicity, κ1, has been introduced. This is the variance of natural time χ weighted by the normalised energy release at χ. In the case that Seismic Electric Signals

A devastating earthquake had been predicted for May 11, 2011 in Rome. This prediction was never released officially by anyone, but it grew on the Internet and was amplified by the media. It was erroneously ascribed to Raffaele Bendandi, an Italian self-taught natural scientist who studied planetary motions. Indeed, around May 11, 2011, a planetary alignment was really expected, and this contributed to lending credibility to the earthquake prediction among the public. During the previous months, INGV was overwhelmed with requests from Roman inhabitants and tourists for information about this supposed prediction. Given the considerable media impact of this expected earthquake, INGV decided to organize an Open Day at its headquarters in Rome for people who wanted to learn more about Italian seismicity and the earthquake as a natural phenomenon. The Open Day was preceded by a press conference two days before, in which we talked about this prediction, presented the Open Day, and had a scientific discussion with journalists about the earthquake prediction and, more generally, about the real problem of seismic risk in Italy. About 40 journalists from newspapers, local and national TV, press agencies and web news attended the press conference, and hundreds of articles appeared in the following days, advertising the 11 May Open Day. The INGV opened to the public all day long (9am - 9pm) with the following program: i) meetings with INGV researchers to discuss scientific issues; ii) visits to the seismic monitoring room, open 24h/7 all year; iii) guided tours through interactive exhibitions on earthquakes and the Earth's deep structure; iv) lectures on general topics, from the social impact of rumors to seismic risk reduction; v) 13 new videos on the channel YouTube.com/INGVterremoti to explain the earthquake process and give updates on various aspects of seismic monitoring in Italy; vi) distribution of books and brochures. Surprisingly, more than 3000 visitors came to visit INGV

This paper explores several data mining and time series analysis methods for predicting the magnitude of the largest seismic event in the next year based on the previously recorded seismic events in the same region. The methods are evaluated on a catalog of 9,042 earthquake events, which took place between 01/01/1983 and 31/12/2010 in the area of Israel and its neighboring countries. The data was obtained from the Geophysical Institute of Israel. Each earthquake record in the catalog is associated with one of 33 seismic regions. The data was cleaned by removing foreshocks and aftershocks. In our study, we have focused on the ten most active regions, which account for more than 80% of the total number of earthquakes in the area. The goal is to predict whether the maximum earthquake magnitude in the following year will exceed the median of maximum yearly magnitudes in the same region. Since the analyzed catalog includes only 28 years of complete data, the last five annual records of each region (referring to the years 2006–2010) are kept for testing while the previous annual records are used for training. The predictive features are based on the Gutenberg-Richter ratio as well as on some new seismic indicators based on the moving averages of the number of earthquakes in each area. The new predictive features prove to be much more useful than the indicators traditionally used in the earthquake prediction literature. The most accurate result (AUC = 0.698) is reached by the Multi-Objective Info-Fuzzy Network (M-IFN) algorithm, which takes into account the association between two target variables: the number of earthquakes and the maximum earthquake magnitude during the same year.
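The AUC reported above has a simple rank-based definition: the probability that a randomly chosen positive year receives a higher score than a randomly chosen negative year. It can be computed directly, without any library:

```python
def auc(labels, scores):
    """Rank-based AUC: fraction of positive/negative pairs in which the
    positive example outscores the negative one (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to random guessing, so 0.698 represents a modest but real ranking skill on the held-out 2006-2010 years.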

In this paper, we describe a possible method for predicting earthquakes, based on the simultaneous recording of the intensity of fluxes of neutrons and charged particles by detectors commonly used in nuclear physics. These low-energy particles originate from radioactive nuclear processes in the Earth's crust. Variations in the particle flux intensity can be a precursor of an earthquake. A description is given of an electronic installation that records the fluxes of charged particles in the radial direction, which are a possible response to the accumulated tectonic stresses in the Earth's crust. The results obtained showed an increase in the intensity of the fluxes starting 10 or more hours before the occurrence of an earthquake. The previous version of the installation was able to indicate the possibility of an earthquake (Maksudov et al. in Instrum Exp Tech 58:130-131, 2015), but did not give information about the direction of the epicenter location. For this reason, the installation was modified by adding eight directional detectors. With the upgraded setup, we have received both predictive signals and signals determining the direction of the location of the forthcoming earthquake, starting 2-3 days before its origin.

Banner headlines in an Italian newspaper read on May 11, 2011: "Absence boom in offices: the urban legend in Rome becomes psychosis". This was the effect of a prediction of a large-magnitude earthquake in Rome for May 11, 2011. This prediction was never officially released, but it grew on the Internet and was amplified by the media. It was erroneously ascribed to Raffaele Bendandi, an Italian self-taught natural scientist who studied planetary motions and related them to earthquakes. Indeed, around May 11, 2011, there was a planetary alignment, and this increased the credibility of the prediction. Given the echo of this earthquake prediction, INGV decided to organize on May 11 (the same day the earthquake was predicted to happen) an Open Day at its headquarters in Rome to inform the public about Italian seismicity and earthquake physics. The Open Day was preceded by a press conference two days before, attended by about 40 journalists from newspapers, local and national TV stations, press agencies and web news magazines. Hundreds of articles appeared in the following two days, advertising the May 11 Open Day. On May 11 the INGV headquarters was peacefully invaded by over 3,000 visitors from 9am to 9pm: families, students, civil protection groups and many journalists. The program included conferences on a wide variety of subjects (from the social impact of rumors to seismic risk reduction) and distribution of books and brochures, in addition to several activities: meetings with INGV researchers to discuss scientific issues, visits to the seismic monitoring room (staffed 24/7 all year), and guided tours through interactive exhibitions on earthquakes and the Earth's deep structure. During the same day, thirteen new videos were also posted on our youtube/INGVterremoti channel to explain the earthquake process and hazard, and to provide periodic real-time updates on seismicity in Italy. On May 11 no large earthquake happened in Italy. The initiative, organized in a few weeks, received a very large response.

Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc, at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8, and MSc correctly identified the locations of four of them. The space-time volume of the alarms is 36% and 18%, respectively, when estimated with a normalized product measure of the empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40%, and five were predicted by M8-MSc in 13%, of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reverse faults. The predictions are fully reproducible; the algorithms M8 and MSc were published in complete formal definitions before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8. Phys. Earth Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction. J. Geophys. Res. 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier
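The quoted significance levels can be sanity-checked with a simple binomial null hypothesis: if target epicenters fell inside the alarms at random, each hit would occur with probability equal to the alarm's space-time fraction. This is a simplified sketch, not the authors' actual significance estimate based on the normalized measure described above.

```python
from math import comb

def alarm_p_value(hits, total, tau):
    """One-sided binomial p-value: probability of >= hits successes out of
    `total` target earthquakes if epicenters fell inside the alarm volume
    at random with probability tau."""
    return sum(comb(total, k) * tau ** k * (1 - tau) ** (total - k)
               for k in range(hits, total + 1))

# Figures quoted in the abstract: 5 of 5 M >= 8.0 events inside M8 alarms
# occupying 36% of the space-time volume, and 4 of 5 inside MSc alarms at 18%.
print(round(100 * (1 - alarm_p_value(5, 5, 0.36)), 1))  # M8 confidence, %
print(round(100 * (1 - alarm_p_value(4, 5, 0.18)), 1))  # MSc confidence, %
```

Under this simple null, both confidence levels exceed 99%, consistent with the abstract.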

We report on the development of a new online set of tools, for use within Google Maps, for earthquake research. We demonstrate this server-based online platform (developed with PHP, JavaScript, MySQL) and its new tools using a database system with earthquake data. The platform allows us to carry out statistical and deterministic analysis of earthquake data within Google Maps and to plot various seismicity graphs. The toolbox has been extended to draw on the map line segments, multiple straight lines horizontally and vertically, as well as multiple circles, including geodesic lines. The application is demonstrated using localized seismic data from the geographic region of Greece as well as other global earthquake data. The application also offers regional segmentation (NxN), which allows the study of earthquake clustering and earthquake cluster shift within the segments in space. The platform offers many filters, such as plotting selected magnitude ranges or time periods. The plotting facility allows statistically based plots such as cumulative earthquake magnitude plots and earthquake magnitude histograms, calculation of the 'b' value, etc. What is novel for the platform is the additional deterministic tools. Using the newly developed horizontal and vertical line and circle tools, we have studied the spatial distribution trends of many earthquakes, and we show here for the first time a link between Fibonacci numbers and the spatiotemporal location of some earthquakes. The new tools are valuable for examining and visualizing trends in earthquake research, as the platform allows calculation of statistics as well as deterministic precursors. We plan to show many new results based on our newly developed platform.
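Among the statistical tools mentioned above is calculation of the Gutenberg-Richter 'b' value. A common way to compute it (not necessarily what this particular platform implements) is the Aki (1965) maximum-likelihood estimator; the catalog magnitudes below are invented for illustration.

```python
import math

def b_value(mags, mc, dm=0.1):
    """Aki (1965) maximum-likelihood Gutenberg-Richter b-value for events with
    magnitude >= completeness magnitude mc, with a standard correction for
    magnitudes reported in dm-wide bins."""
    above = [m for m in mags if m >= mc]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - (mc - dm / 2))

# Hypothetical catalog magnitudes, complete above mc = 3.0.
mags = [3.0, 3.1, 3.0, 3.4, 3.2, 3.8, 3.0, 4.2, 3.1, 3.3]
print(round(b_value(mags, mc=3.0), 2))
```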

... DEPARTMENT OF THE INTERIOR Geological Survey National Earthquake Prediction Evaluation Council...: Pursuant to Public Law 96-472, the National Earthquake Prediction Evaluation Council (NEPEC) will hold a 2... proposed earthquake predictions, on the completeness and scientific validity of the available data related...

There are many published reports of anomalous changes in the ionosphere prior to large earthquakes. However, whether or not these ionospheric changes are reliable precursors that could be useful for earthquake prediction is controversial within the scientific community. To test a possible statistical relationship between the ionosphere and earthquakes, we compare changes in the total electron content (TEC) of the ionosphere with occurrences of M≥6.0 earthquakes globally for a multiyear period. We use TEC data from a global ionosphere map (GIM) and an earthquake list declustered for aftershocks. For each earthquake, we look for anomalous changes in TEC within ±30 days of the earthquake time and within 2.5° latitude and 5.0° longitude of the earthquake location (the spatial resolution of GIM). Our preliminary analysis, using global TEC and earthquake data for 2002-2010, has not found any statistically significant changes in TEC prior to earthquakes. Thus, we have found no evidence that would suggest that TEC changes are useful for earthquake prediction. Our results are discussed in the context of prior statistical and case studies. Namely, our results agree with Dautermann et al. (2007), who found no relationship between TEC changes and earthquakes in the San Andreas fault region, whereas they disagree with Le et al. (2011), who found an increased rate of TEC anomalies within a few days before global earthquakes of M≥6.0.

This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm and of the predictions from the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare the probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
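The fair rule can be illustrated in a few lines. The payoff below is one plausible reading of the scheme described above (odds chosen so that the expected gain under the reference model is exactly zero); the precise formula used in the paper may differ.

```python
def gambling_gain(bet, p_ref, success):
    """Reputation change for one bet. The reference model ('the house')
    assigns probability p_ref to the predicted event: a successful bet of
    `bet` points pays bet * (1 - p_ref) / p_ref, a failed bet loses `bet`,
    so the expected gain under the reference model is exactly zero."""
    return bet * (1 - p_ref) / p_ref if success else -bet

# Betting 1 point on an event the reference model deems unlikely (p_ref = 0.25):
print(gambling_gain(1.0, 0.25, True))   # success pays 3.0 points
print(gambling_gain(1.0, 0.25, False))  # failure costs the 1.0 point staked
# Fairness check under the reference model: 0.25 * 3.0 + 0.75 * (-1.0) == 0.
```

The riskier the bet (the smaller p_ref), the larger the reward for success, which is exactly the risk compensation the abstract describes.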

Most severe disasters cause large population movements. These movements make it difficult for relief organizations to efficiently reach people in need. Understanding and predicting the locations of affected people during disasters is key to effective humanitarian relief operations and to long-term societal reconstruction. We collaborated with the largest mobile phone operator in Haiti (Digicel) and analyzed the movements of 1.9 million mobile phone users during the period from 42 days before to 341 days after the devastating Haiti earthquake of January 12, 2010. Nineteen days after the earthquake, population movements had caused the population of the capital Port-au-Prince to decrease by an estimated 23%. Both the travel distances and the size of people's movement trajectories grew after the earthquake. These findings, in combination with the disorder present after the disaster, suggest that people's movements should have become less predictable. Instead, the predictability of people's trajectories remained high and even increased slightly during the three-month period after the earthquake. Moreover, the destinations of people who left the capital during the first three weeks after the earthquake were highly correlated with their mobility patterns during normal times, and specifically with the locations in which people had significant social bonds. For the people who left Port-au-Prince, the duration of their stay outside the city, as well as the time of their return, followed skewed, fat-tailed distributions. The findings suggest that population movements during disasters may be significantly more predictable than previously thought. PMID:22711804

In this paper we discuss the physical meaning of the magnitude-time model parameters for earthquake prediction. The gestation process for strong earthquakes in all eleven seismic zones in China can be described by the magnitude-time prediction model using computations of the model parameters. The average model parameter values for China are: b = 0.383, c = 0.154, d = 0.035, B = 0.844, C = -0.209, and D = 0.188. The robustness of the model parameters is estimated from the variation in the minimum magnitude of the transformed data, the spatial extent, and the temporal period. Analysis of the spatial and temporal suitability of the model indicates that the computation unit size should be at least 4° × 4° for seismic zones in North China and at least 3° × 3° in Southwest and Northwest China, and that the time period should be as long as possible.

Tilt measurements give us a means of monitoring vertical displacements, or local uplift, of the crust. The simplest type of tiltmeter is a stationary pendulum (fig. 1). As the Earth's surface distorts locally, the pendulum housing is tilted while, of course, the pendulum continues to hang vertically (that is, in the direction of the gravity vector). The tilt angle is the angle through which the pendulum housing is tilted. The pendulum is the inertial reference (the force of gravity remains unchanged at the site), and the tilting instrument housing represents the moving reference frame. We note in passing that the tiltmeter could also be used to measure the force of gravity by using the pendulum in the same way as Henry Kater did in his celebrated measurement of g in 1817.

Researchers have reported ionospheric electron distribution abnormalities, such as electron density enhancements and/or depletions, that they claimed were related to forthcoming earthquakes. In this study, the Tohoku earthquake is examined using ionosonde data to establish whether any otherwise unexplained ionospheric anomalies were detected in the days and hours prior to the event. As the choices for the ionospheric baseline are generally different between previous works, three separate baselines for the peak plasma frequency of the F2 layer, foF2, are employed here; the running 30-day median (commonly used in other works), the International Reference Ionosphere (IRI) model and the Thermosphere Ionosphere Electrodynamic General Circulation Model (TIE-GCM). It is demonstrated that the classification of an ionospheric perturbation is heavily reliant on the baseline used, with the 30-day median, the IRI and the TIE-GCM generally underestimating, approximately describing and overestimating the measured foF2, respectively, in the 1-month period leading up to the earthquake. A detailed analysis of the ionospheric variability in the 3 days before the earthquake is then undertaken, where a simultaneous increase in foF2 and the Es layer peak plasma frequency, foEs, relative to the 30-day median was observed within 1 h before the earthquake. A statistical search for similar simultaneous foF2 and foEs increases in 6 years of data revealed that this feature has been observed on many other occasions without related seismic activity. Therefore, it is concluded that one cannot confidently use this type of ionospheric perturbation to predict an impending earthquake. It is suggested that in order to achieve significant progress in our understanding of seismo-ionospheric coupling, better account must be taken of other known sources of ionospheric variability in addition to solar and geomagnetic activity, such as the thermospheric coupling.

One of a series of general interest publications on science topics, the booklet provides those interested in earthquakes with an introduction to the subject. Following a section presenting an historical look at the world's major earthquakes, the booklet discusses earthquake-prone geographic areas, the nature and workings of earthquakes, earthquake…

Accelerating rates of geophysical signals are observed before a range of material failure phenomena. They provide insights into the physical processes controlling failure and the basis for failure forecasts. However, examples of accelerating seismicity before landslides are rare, and their behavior and forecasting potential are largely unknown. Here I use a Bayesian methodology to apply a novel gamma point process model to investigate a sequence of quasiperiodic repeating earthquakes preceding a large landslide at Nuugaatsiaq, Greenland, in June 2017. The evolution of the earthquake rate is best explained by an inverse power-law increase with time toward failure, as predicted by material failure theory. However, the commonly accepted power-law exponent value of 1.0 is inconsistent with the data. Instead, the mean posterior value of 0.71 indicates a particularly rapid acceleration toward failure and suggests that only relatively short warning times may be possible for similar landslides in the future.
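The inverse power-law rate model can be illustrated with synthetic data. This sketch is not the paper's Bayesian gamma point process: it constructs event times whose rate follows k*(t_f - t)**(-p) and recovers the exponent by least squares in log-log space, with all parameter values invented.

```python
import math

def synthetic_times(t_f, p, k, n):
    """Event times whose instantaneous rate is k * (t_f - t)**(-p):
    solve the integrated rate Lambda(t) = i for i = 1..n."""
    c = k / (1 - p)  # valid for p < 1
    times = []
    for i in range(1, n + 1):
        rem = t_f ** (1 - p) - i / c
        times.append(t_f - rem ** (1 / (1 - p)))
    return times

def fit_exponent(times, t_f):
    """Least-squares slope of log(rate) versus log(t_f - t), with the rate
    approximated by inverse inter-event times at interval midpoints;
    returns the power-law exponent estimate p."""
    xs, ys = [], []
    for a, b in zip(times, times[1:]):
        xs.append(math.log(t_f - 0.5 * (a + b)))
        ys.append(math.log(1.0 / (b - a)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # rate ~ (t_f - t)^(-p)

times = synthetic_times(t_f=100.0, p=0.71, k=5.0, n=60)
print(round(fit_exponent(times, t_f=100.0), 2))  # close to the true p = 0.71
```

A smaller exponent than 1.0, as found in the paper, means the rate diverges more steeply only very near failure, hence the short warning times.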

The Load/Unload Response Ratio (LURR) method is a proposed technique to predict earthquakes that was first put forward by Yin in 1984 (Yin, 1987). LURR is based on the idea that when a region is near failure, there is an increase in the rate of seismic activity during the loading phase of the tidal cycle relative to the rate during unloading. Typically the numerator of the LURR ratio is the number, or the sum of some measure of the size (e.g. Benioff strain), of small earthquakes that occur during loading of the tidal cycle, whereas the denominator is the same quantity calculated during unloading. The LURR method suggests this ratio should increase in the months to a year preceding a large earthquake. Regions near failure have tectonic stresses nearly high enough for a large earthquake to occur, so it seems plausible that smaller earthquakes in the region would be triggered when the tidal stresses add to the tectonic ones. However, until recently even the most careful studies suggested that the effect of tidal stresses on earthquake occurrence is very small and difficult to detect. New studies have shown that there is a tidal triggering effect on shallow thrust faults in areas with strong tides from ocean loading (Tanaka et al., 2002; Cochran et al., 2004). We have been conducting an independent test of the LURR method, since there would be important scientific and social implications if the LURR method were proven to be a robust method of earthquake prediction. Smith and Sammis (2003) also undertook a similar study. Following both the parameters of Yin et al. (2000) and the somewhat different ones of Smith and Sammis (2003), we have repeated calculations of LURR for the Northridge and Loma Prieta earthquakes in California. Though we have followed both sets of parameters closely, we have been unable to reproduce either set of results. A general agreement was made at the recent ACES Workshop in China between research
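The ratio itself is straightforward to compute. The sketch below uses Benioff strain (~10**(0.75*M)) as the size measure, a sinusoidal stand-in for the tidal stress rate, and an invented micro-catalog; real LURR studies compute the actual tidal stress at each hypocenter.

```python
import math

def lurr(events, stress_rate):
    """Load/Unload Response Ratio: the sum of a size measure (here Benioff
    strain, ~10**(0.75*M)) over events occurring while the tidal stress is
    increasing (loading), divided by the same sum during unloading."""
    load = sum(10 ** (0.75 * m) for t, m in events if stress_rate(t) > 0)
    unload = sum(10 ** (0.75 * m) for t, m in events if stress_rate(t) <= 0)
    return load / unload if unload > 0 else float("inf")

# Stand-in semidiurnal tide (12.42 h period): loading where the derivative > 0.
rate = lambda t: math.cos(2 * math.pi * t / 12.42)

# (time in hours, magnitude) pairs -- an invented catalog for illustration.
events = [(0.5, 2.1), (2.0, 2.4), (4.0, 1.9), (7.0, 2.0), (9.5, 2.2), (11.0, 2.3)]
print(round(lurr(events, rate), 2))  # > 1 here: more strain released while loading
```

A sustained rise of this ratio above 1 in the months before a large event is the anomaly the LURR method looks for.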

Slip events generated in a laboratory fault model, consisting of a circular chain of eight spring-connected blocks of approximately equal weight elastically driven to slide on a frictional surface, are studied. It is found that most of the input strain energy is released by relatively few large events, which are approximately time-predictable. A large event tends to roughen the stress distribution along the fault, whereas the subsequent smaller events tend to smooth the stress distribution and prepare a condition of simultaneous criticality for the occurrence of the next large event. The frequency-size distribution resembles the Gutenberg-Richter relation for earthquakes, except for a falloff for the largest events due to the finite energy-storage capacity of the fault system. Slip distributions in different events are commonly dissimilar. Stress drop, slip velocity, and rupture velocity all tend to increase with event size. Rupture-initiation locations are usually not close to the maximum-slip locations. © 1994 Birkhäuser Verlag.

Arias Intensity (Arias, MIT Press, Cambridge, MA, pp 438-483, 1970) is an important measure of the strength of a ground motion, as it is able to simultaneously reflect multiple characteristics of the motion in question. Recently, the effectiveness of Arias Intensity as a predictor of the likelihood of damage to short-period structures has been demonstrated, reinforcing the utility of Arias Intensity for use in both structural and geotechnical applications. In light of this utility, Arias Intensity has begun to be considered as a ground-motion measure suitable for use in probabilistic seismic hazard analysis (PSHA) and earthquake loss estimation. It is therefore timely to develop predictive equations for this ground-motion measure. In this study, a suite of four predictive equations, each using a different functional form, is derived for the prediction of Arias Intensity from crustal earthquakes in New Zealand. The provision of a suite of models allows epistemic uncertainty to be considered within a PSHA framework. Coefficients are presented for four different horizontal-component definitions for each of the four models. The ground-motion dataset from which the equations are derived includes records from New Zealand crustal earthquakes as well as near-field records from worldwide crustal earthquakes. The predictive equations may be used to estimate Arias Intensity for moment magnitudes between 5.1 and 7.5 and for distances (both r_jb and r_rup) up to 300 km.

After a major earthquake, the assignment of scarce mental health emergency personnel to different geographic areas is crucial to the effective management of the crisis. The scarce information that is available in the aftermath of a disaster may be valuable in helping predict where the populations most in need are located. The objectives of this study were to derive algorithms to predict posttraumatic stress (PTS) symptom prevalence and local distribution after an earthquake, and to test whether there are algorithms that require few input data and are still reasonably predictive. A rich database of PTS symptoms, collected after Chile's 2010 earthquake and tsunami, was used. Several model specifications for the mean and centiles of the distribution of PTS symptoms, together with posttraumatic stress disorder (PTSD) prevalence, were estimated via linear and quantile regressions. The models varied in the set of covariates included. Adjusted R2 for the most liberal specifications (in terms of the number of covariates included) ranged from 0.62 to 0.74, depending on the outcome. When including only peak ground acceleration (PGA), poverty rate, and household damage in linear and quadratic form, predictive capacity was still good (adjusted R2 from 0.59 to 0.67). Information about local poverty, household damage, and PGA can thus be used as an aid to predict PTS symptom prevalence and local distribution after an earthquake, helping to improve the assignment of mental health personnel to the affected localities. Dussaillant F, Apablaza M. Predicting posttraumatic stress symptom prevalence and local distribution after an earthquake with scarce data. Prehosp Disaster Med. 2017;32(4):357-367.

During 11 sequences of earthquakes that in retrospect can be classed as foreshocks, the accelerating rate at which seismic moment is released follows, at least in part, a simple equation. This equation (1) is dΣ/dt = C/(t_f - t)^n, where Σ(t) is the cumulative sum until time t of the square roots of the seismic moments of individual foreshocks computed from reported magnitudes; C and n are constants; and t_f is a limiting time at which the rate of seismic moment accumulation becomes infinite. The possible time of a major foreshock or main shock, t_f, is found by the best fit of equation (1), or its integral, to step-like plots of Σ versus time, using successive estimates of t_f in linearized regressions until the maximum coefficient of determination, r^2, is obtained. Analyzed examples include sequences preceding earthquakes at Cremasta, Greece, 2/5/66; Haicheng, China, 2/4/75; Oaxaca, Mexico, 11/29/78; Petatlan, Mexico, 3/14/79; and Central Chile, 3/3/85. In 29 estimates of main-shock time made as the sequences developed, the errors in 20 were less than one-half, and in 9 less than one-tenth, of the time remaining between the time of the last data used and the main shock. Some precursory sequences, or parts of them, yield no solution. Two sequences appear to include in their first parts the aftershocks of a previous event; plots using the integral of equation (1) show that the sequences are easily separable into aftershock and foreshock segments. Synthetic seismic sequences of shocks at equal time intervals were constructed to follow equation (1), using four values of n. In each series the resulting distribution of magnitudes closely follows the linear Gutenberg-Richter relation log N = a - bM, and the product of n times b for each series is the same constant. In various forms and for decades, equation (1) has been used successfully to predict failure times of stressed metals and ceramics, landslides in soil and rock slopes, and volcanic
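The fitting procedure described above, trying successive values of t_f and keeping the one that maximizes r^2, can be sketched as a grid search. The synthetic event times are built to follow equation (1) with invented numerical values, and the rate is estimated from inter-event times, a cruder device than the step-plot fits in the paper.

```python
import math

def best_tf(times, sqrt_moments, tf_grid):
    """For each trial failure time t_f, linearly regress log(rate of
    cumulative sqrt-moment release) on log(t_f - t) and return the
    (t_f, r^2) pair with the highest coefficient of determination."""
    mids, rates = [], []
    for (t0, t1), s in zip(zip(times, times[1:]), sqrt_moments[1:]):
        mids.append(0.5 * (t0 + t1))
        rates.append(s / (t1 - t0))  # release rate over the interval
    best = (None, -1.0)
    for tf in tf_grid:
        if tf <= times[-1]:
            continue  # t_f must lie after the last observed event
        xs = [math.log(tf - m) for m in mids]
        ys = [math.log(r) for r in rates]
        k = len(xs)
        mx, my = sum(xs) / k, sum(ys) / k
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        syy = sum((y - my) ** 2 for y in ys)
        r2 = sxy * sxy / (sxx * syy)
        if r2 > best[1]:
            best = (tf, r2)
    return best

# Synthetic foreshock times following equation (1) with t_f = 50, n = 0.5,
# and unit square-root moment assigned to every event.
times = [50.0 - (math.sqrt(50.0) - i / 3.0) ** 2 for i in range(1, 21)]
tf, r2 = best_tf(times, [1.0] * len(times), [46.0 + 0.5 * j for j in range(20)])
print(round(tf, 1), round(r2, 3))
```

On this noise-free synthetic sequence the search recovers the true limiting time from the grid; with real catalogs the fits degrade in the ways the abstract describes.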

Heterogeneous attenuation structure is important for ground motion prediction, including earthquake early warning, that is, real-time ground motion prediction. Hoshiba and Ogiso (2015, AGU Fall Meeting) showed that heterogeneous attenuation and scattering structure leads to earlier and more accurate ground motion prediction in the numerical shake prediction scheme proposed by Hoshiba and Aoki (2015, BSSA). Hoshiba and Ogiso (2015) used an assumed heterogeneous structure; here we discuss its effect in the case of the 2016 Kumamoto Earthquake, using heterogeneous structure estimated from actual observation data. We conducted Multiple Lapse Time Window Analysis (Hoshiba, 1993, JGR) on the seismic stations located in western Japan to estimate the heterogeneous attenuation and scattering structure. The characteristics are similar to those in the previous work of Carcole and Sato (2010, GJI), e.g. strong intrinsic and scattering attenuation around the volcanoes in the central part of Kyushu, and relatively weak heterogeneities elsewhere. A real-time ground motion prediction simulation for the 2016 Kumamoto Earthquake was conducted using the numerical shake prediction scheme with 474 strong ground motion stations. Comparing snapshots of the predicted and observed wavefields showed a tendency toward underprediction around the volcanic area in spite of the heterogeneous structure. These facts indicate the necessity of improving the heterogeneous structure used in the numerical shake prediction scheme. In this study, we used the waveforms of Hi-net, K-NET, and KiK-net stations operated by NIED for estimating the structure and conducting the ground motion prediction simulation. Part of this study was supported by the Earthquake Research Institute, the University of Tokyo cooperative research program and JSPS KAKENHI Grant Number 25282114.

We present a new automatic earthquake discrimination procedure to determine in near-real time the tectonic regime and seismotectonic domain of an earthquake, its most likely source type, and the corresponding ground-motion prediction equation (GMPE) class to be used in the U.S. Geological Survey (USGS) Global ShakeMap system. This method makes use of the Flinn–Engdahl regionalization scheme, seismotectonic information (plate boundaries, global geology, seismicity catalogs, and regional and local studies), and the source parameters available from the USGS National Earthquake Information Center in the minutes following an earthquake to give the best estimation of the setting and mechanism of the event. Depending on the tectonic setting, additional criteria based on hypocentral depth, style of faulting, and regional seismicity may be applied. For subduction zones, these criteria include the use of focal mechanism information and detailed interface models to discriminate among outer-rise, upper-plate, interface, and intraslab seismicity. The scheme is validated against a large database of recent historical earthquakes. Though developed to assess GMPE selection in Global ShakeMap operations, we anticipate a variety of uses for this strategy, from real-time processing systems to any analysis involving tectonic classification of sources from seismic catalogs.

Rock avalanches are extremely rapid, massive, flow-like movements of fragmented rock. The travel path of a rock avalanche may in some cases be confined by channels; these are referred to as channelized rock avalanches. Channelized rock avalanches are potentially dangerous because their travel distance is difficult to predict. In this study, we constructed a dataset with detailed characteristic parameters of 38 channelized rock avalanches triggered by the 2008 Wenchuan earthquake, using visual interpretation of remote sensing imagery, field investigation and literature review. Based on this dataset, we assessed the influence of different factors on the runout distance and developed prediction models for channelized rock avalanches using the multivariate regression method. The results suggested that the movement of channelized rock avalanches is dominated by landslide volume, total relief and channel gradient. The performance of both models was then tested with an independent validation dataset of eight rock avalanches induced by the 2008 Wenchuan earthquake, the Ms 7.0 Lushan earthquake and heavy rainfall in 2013, showing acceptably good prediction results. Therefore, the travel-distance prediction models for channelized rock avalanches constructed in this study are applicable and reliable for predicting the runout of similar rock avalanches in other regions.

The quality of earthquake prediction is usually characterized by a two-dimensional diagram of n versus τ, where n is the rate of failures-to-predict and τ is a characteristic of the space-time alarm. Unlike the time-prediction case, the quantity τ is not defined uniquely. We start from the case in which τ is a vector with components related to the local alarm times and find a simple structure of the space-time diagram in terms of local time diagrams. This key result is used to analyze the usual 2-d error sets (n, τ_w), in which τ_w is a weighted mean of the τ components and w is the weight vector. We suggest a simple algorithm to find the (n, τ_w) representation of all random guess strategies, the set D, and prove that there exists a unique choice of w for which D degenerates to the diagonal n + τ_w = 1. We also find a confidence zone for D on the (n, τ_w) plane when the local target rates are known only roughly. These facts are important for the correct interpretation of (n, τ_w) diagrams when discussing the prediction capability of the data or of prediction methods.

... DEPARTMENT OF THE INTERIOR Geological Survey [GX13GG009950000] Scientific Earthquake Studies... Law 106-503, the Scientific Earthquake Studies Advisory Committee (SESAC) will hold its next meeting... the Scientific Earthquake Studies Advisory Committee are open to the public. DATES: October 29, 2012...

The Geysers geothermal field is well known for being susceptible to dynamic triggering of earthquakes by large distant earthquakes, owing to the introduction of fluids for energy production. Yet it is unknown whether dynamic triggering of earthquakes is 'predictable' or whether it could pose a hazard for energy production. In this paper, our goal is to investigate the characteristics of triggering and the physical conditions that promote it, to determine whether or not triggering is in any way foreseeable. We find that, at present, triggering at The Geysers is not easily 'predictable' in terms of when and where, based on observable physical conditions. However, triggered earthquake magnitude correlates positively with peak imparted dynamic stress, and larger dynamic stresses tend to trigger sequences similar to mainshock-aftershock sequences. Thus, we may be able to 'predict' what size of earthquakes to expect at The Geysers following a large distant earthquake.

Nevertheless, basic earthquake-related information has always been of consuming interest to the public and the media in this part of California (fig. 2). So it is not surprising that earthquake prediction continues to be a significant research program at the laboratory. Several of the current spectrum of projects related to prediction are discussed below.

For many years an important aim of seismological studies has been forecasting the occurrence of large earthquakes. Despite some well-established statistical behavior of earthquake sequences, expressed e.g. by the Omori law for aftershock sequences and the Gutenberg-Richter distribution of event magnitudes, purely statistical approaches to short-term earthquake prediction have in general not been successful. It seems that a better understanding of the processes leading to critical stress build-up prior to larger events is necessary to identify useful precursory activity, if it exists, and statistical analyses are an important tool in this context. There has been considerable debate on the usefulness or otherwise of foreshock studies for short-term earthquake prediction. We investigate generic patterns of foreshock activity using aggregated data, studying not only strong but also moderate-magnitude events. Aggregating empirical local seismicity time series prior to larger events observed in and around Greece reveals a statistically significant increase in the rate of seismicity over the 20 days prior to M>3.5 earthquakes. This increase cannot be explained by spatio-temporal clustering models such as ETAS, implying genuine changes in the mechanical situation just prior to larger events and thus the possible existence of useful precursory information. Because of spatio-temporal clustering, which lets aftershocks of earlier events be counted among the foreshocks, even if such generic behavior exists it does not necessarily follow that foreshocks can provide useful precursory information for individual larger events. Using synthetic catalogs produced with different clustering models and different presumed system sensitivities, we are now investigating to what extent the apparently established generic foreshock rate acceleration may or may not imply that foreshocks have potential in the context of routine forecasting of larger events. Preliminary results suggest that this is the case, but
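The stacking procedure this abstract describes can be sketched as a superposed-epoch analysis: daily event counts in a fixed window before each mainshock are summed across mainshocks, and the stacked rate is inspected for a pre-event acceleration. The sketch below uses synthetic Poisson counts with an assumed linear rate increase; the window length, rates, and size of the acceleration are invented for illustration, not the values from the Greek catalog.

```python
import math, random

random.seed(3)

def poisson(lam):
    """Draw a Poisson count (Knuth's multiplication algorithm, stdlib-only)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

N_MAIN, WINDOW = 200, 20          # number of mainshocks, days before each
stack = [0] * WINDOW
for _ in range(N_MAIN):
    for day in range(WINDOW):     # day 0 = 20 days out, day 19 = last day
        rate = 1.0 + 0.5 * day / WINDOW   # assumed linear rate acceleration
        stack[day] += poisson(rate)

# Stacked (superposed-epoch) daily rate; an upward trend toward day 19
# is the aggregated foreshock signal the abstract tests for.
daily_mean = [c / N_MAIN for c in stack]
```

Whether such a stacked acceleration implies per-event forecasting skill is exactly the question the abstract leaves open: individual pre-event windows remain dominated by Poisson scatter even when the aggregate trend is clear.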

In this study, we perform a case study on imagery from the Haiti earthquake that evaluates a novel object-based approach for characterizing earthquake-induced surface effects of liquefaction against a traditional pixel-based change-detection technique. Our technique, which combines object-oriented change detection with discriminant/categorical functions, shows the power of distinguishing earthquake-induced surface effects from changes in buildings using the object properties of concavity, convexity, orthogonality and rectangularity. Our results suggest that object-based analysis holds promise for automatically extracting earthquake-induced damage from high-resolution aerial/satellite imagery.

Estimation of ground-motion amplitudes that may be produced by future earthquakes constitutes the foundation of seismic hazard assessment and earthquake-resistant structural design. This is typically done by using a prediction equation that quantifies amplitudes as a function of key seismological variables such as magnitude, distance and site condition. In this study, we develop a hybrid empirical prediction equation for earthquakes in western Alberta, where evaluation of seismic hazard associated with induced seismicity is of particular interest. We use peak ground motions and response spectra from recorded seismic events to model the regional source and attenuation attributes. The available empirical data are limited in the magnitude range of engineering interest (M>4). Therefore, we combine empirical data with a simulation-based model in order to obtain seismologically informed predictions for moderate-to-large magnitude events. The methodology is two-fold. First, we investigate the shape of geometrical spreading in Alberta. We supplement the seismic data with ground motions obtained from mining/quarry blasts, in order to gain insights into the regional attenuation over a wide distance range. A comparison of ground-motion amplitudes for earthquakes and mining/quarry blasts shows that both event types decay at similar rates with distance and demonstrates a significant Moho-bounce effect. In the second stage, we calibrate the source and attenuation parameters of a simulation-based prediction equation to match the available amplitude data from seismic events. We model the geometrical spreading using a trilinear function with attenuation rates obtained from the first stage, and calculate coefficients of anelastic attenuation and site amplification via regression analysis. This provides a hybrid ground-motion prediction equation that is calibrated for observed motions in western Alberta and is applicable to moderate-to-large magnitude events.
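A GMPE of the kind described combines a magnitude-scaling term, a trilinear geometrical-spreading function (whose middle segment flattens the decay, mimicking the Moho-bounce effect), anelastic attenuation, and a site term. The sketch below shows the functional form only; every coefficient and hinge distance is invented for illustration and is not a calibrated western Alberta value.

```python
import math

# Hypothetical coefficients for illustration only.
C0, C1 = -2.0, 1.1        # source scaling: intercept, magnitude term
GAMMA = -0.0035           # anelastic attenuation coefficient (1/km)
SITE = 0.3                # site amplification term

# Trilinear geometrical spreading: steep decay, a flat "Moho-bounce"
# segment, then renewed decay. Hinge distances are assumed.
R1, R2 = 70.0, 140.0      # hinge distances (km)
B1, B2, B3 = -1.3, 0.1, -0.5

def geom_spreading(r):
    """Piecewise-linear (in log10 distance) geometrical spreading."""
    if r <= R1:
        return B1 * math.log10(r)
    if r <= R2:
        return B1 * math.log10(R1) + B2 * math.log10(r / R1)
    return (B1 * math.log10(R1) + B2 * math.log10(R2 / R1)
            + B3 * math.log10(r / R2))

def log10_pga(mag, r):
    """log10(PGA) from a simple hybrid-style GMPE."""
    return C0 + C1 * mag + geom_spreading(r) + GAMMA * r + SITE

for r in (10, 80, 200):
    print(f"M5.0 at {r:3d} km: log10(PGA) = {log10_pga(5.0, r):.2f}")
```

In the study's two-stage workflow, the hinge distances and slopes would come from the combined earthquake/blast attenuation analysis, and the remaining coefficients from regression against the recorded amplitudes.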

Earthquake magnitude prediction is a challenging problem that has been widely studied during the last decades. Statistical, geophysical and machine learning approaches can be found in the literature, with no particularly satisfactory results so far. In recent years, powerful computational techniques for analyzing big data have emerged, making the analysis of massive datasets possible. These new methods make use of physical resources such as cloud-based architectures. California is known as one of the regions with the highest seismic activity in the world, and abundant data are available. In this work, the use of several regression algorithms combined with ensemble learning is explored in a big data context (a 1 GB catalog is used), in order to predict earthquake magnitudes within the next seven days. The Apache Spark framework, the H2O library in the R language and Amazon cloud infrastructure were used, with very promising results.
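The core idea of combining base regressors by ensemble averaging can be shown in a few lines. The catalog, features, and learners below are toy stand-ins (the study itself uses Spark and H2O on a 1 GB catalog); this only illustrates why averaging heterogeneous learners can improve on a weak baseline.

```python
import random

random.seed(1)

# Toy stand-in for a catalog-derived feature: x = largest magnitude
# observed in the past week, y = largest magnitude in the next week.
xs = [random.uniform(2.0, 5.0) for _ in range(200)]
ys = [0.5 * x + 1.0 + random.gauss(0, 0.2) for x in xs]

def fit_mean(xs, ys):
    """Baseline learner: always predict the training mean."""
    mu = sum(ys) / len(ys)
    return lambda x: mu

def fit_linear(xs, ys):
    """Simple one-variable least-squares regression learner."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return lambda x: my + slope * (x - mx)

learners = [fit_mean(xs, ys), fit_linear(xs, ys)]

def ensemble(x):
    """Ensemble prediction: average the base learners' outputs."""
    return sum(f(x) for f in learners) / len(learners)
```

In a production setting the base learners would be the several regression algorithms the abstract mentions, trained distributedly, with the ensemble combining their per-region magnitude forecasts.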

Constructing source models of huge subduction earthquakes is an important issue for strong ground motion prediction. Iwata and Asano (2012, AGU) summarized the scaling relationships of the large-slip areas of heterogeneous slip models and total SMGA sizes with respect to seismic moment for subduction earthquakes, and found a systematic change in the ratio of SMGA to large-slip area with seismic moment. They concluded that this tendency would be caused by the difference in the period ranges of the source modeling analyses. In this paper, we develop a methodology for constructing source models of huge subduction earthquakes for strong ground motion prediction. Following the concept of the characterized source model for inland crustal earthquakes (Irikura and Miyake, 2001; 2011) and intra-slab earthquakes (Iwata and Asano, 2011), we introduce a prototype source model for huge subduction earthquakes and validate it by strong ground motion modeling.

The spatially extensive damage from the 2010-2011 Christchurch, New Zealand earthquakes is a reminder of the need for liquefaction hazard maps for anticipating damage from future earthquakes. Liquefaction hazard mapping has traditionally relied on detailed geologic mapping and expensive site studies. These traditional techniques are difficult to apply globally for rapid response or loss estimation. We have developed a logistic regression model to predict the probability of liquefaction occurrence in coastal sedimentary areas as a function of simple and globally available geospatial features (e.g., derived from digital elevation models) and standard earthquake-specific intensity data (e.g., peak ground acceleration). Some of the geospatial explanatory variables that we consider are taken from the hydrology community, which has a long tradition of using remotely sensed data as proxies for subsurface parameters. As a result of using high-resolution, remotely sensed, and spatially continuous data as a proxy for important subsurface parameters such as soil density and soil saturation, and by using a probabilistic modeling framework, our liquefaction model inherently includes the natural spatial variability of liquefaction occurrence and provides an estimate of the spatial extent of liquefaction for a given earthquake. To provide a quantitative check on how the predicted probabilities relate to spatial extent of liquefaction, we report the frequency of observed liquefaction features within a range of predicted probabilities. The percentage of liquefaction is the areal extent of observed liquefaction within a given probability contour. The regional model and the results show that there is a strong relationship between the predicted probability and the observed percentage of liquefaction. Visual inspection of the probability contours for each event also indicates that the pattern of liquefaction is well represented by the model.
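A logistic-regression liquefaction model of this kind maps a shaking measure and geospatial proxies to a probability through the logistic function. The coefficients and the particular feature set below are hypothetical, chosen only to illustrate the model's form, not the paper's fitted values.

```python
import math

# Illustrative-only coefficients. Features follow the abstract's recipe:
# an earthquake intensity measure (PGA) plus globally available geospatial
# proxies for soil saturation/density (here: a topographic wetness index
# and distance to water, both assumed for this sketch).
B0, B_PGA, B_CTI, B_DIST = -6.0, 8.0, 0.25, -0.5

def p_liquefaction(pga_g, cti, dist_water_km):
    """Logistic model: probability of surface liquefaction occurrence."""
    z = B0 + B_PGA * pga_g + B_CTI * cti + B_DIST * dist_water_km
    return 1.0 / (1.0 + math.exp(-z))

# Stronger shaking near water on a wet, low-lying site -> higher probability.
print(round(p_liquefaction(0.4, 8.0, 1.0), 3))
```

Evaluating such a model on a grid of geospatial features directly yields the probability contours the abstract compares against mapped liquefaction extent.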

We introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole behaves quite randomly in aggregate, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increasing seismicity precursory to large m > 7 earthquakes. To quantify this, we develop an alert-based forecasting metric similar to those presented in Keilis-Borok (2002) and Molchan (1997), and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation- versus quiescence-type earthquake triggering. We show that VQ exhibits both behaviors separately on independent fault sections; some fault sections exhibit activation-type triggering, while others are better characterized by quiescence-type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA, and northern Baja California, Mexico.

Although optimism prevailed in the 1970s, the present consensus on earthquake prediction appears to be quite pessimistic. However, short-term prediction based on geoelectric potential monitoring has stood the test of time in Greece for more than a decade [Varotsos and Kulhanek, 1993; Lighthill, 1996]. The method used is called the VAN method. The geoelectric potential changes constantly owing to causes such as magnetotelluric effects, lightning, rainfall, leakage from man-made sources, and electrochemical instabilities of electrodes. All of this noise must be eliminated before preseismic signals, if they exist at all, can be identified. The VAN group apparently accomplished this task for the first time. They installed multiple short (100-200 m) dipoles with different lengths in both north-south and east-west directions, and long (1-10 km) dipoles in appropriate orientations, at their stations (one of their mega-stations, Ioannina, for example, now has 137 dipoles in operation) and found that practically all of the noise could be eliminated by applying a set of criteria to the data.

Examines the types of damage experienced by California State University at Northridge during the 1994 earthquake and discusses the lessons learned in handling this emergency. The problem of loose asbestos is addressed. (GR)

This study investigates whether real-time strong ground motion data from seismic stations could have been used to provide an accurate estimate of the magnitude of the 2011 Tohoku-Oki earthquake in Japan. Ultimately, such an estimate could be used as input data for a tsunami forecast and would lead to more robust earthquake and tsunami early warning. We collected the strong motion accelerograms recorded by borehole and free-field (surface) Kiban Kyoshin network stations that registered this mega-thrust earthquake in order to perform an off-line test to estimate the magnitude based on ground motion prediction equations (GMPEs). GMPEs for peak ground acceleration and peak ground velocity (PGV) from a previous study by Eshaghi et al. (Bulletin of the Seismological Society of America, 2013), derived using events with moment magnitude (M) ≥ 5.0 from 1998-2010, were used to estimate the magnitude of this event. We developed new GMPEs using a more complete database (1998-2011), which added only 1 year but approximately twice as much data to the initial catalog (including important large events), to improve the determination of attenuation parameters and magnitude scaling. These new GMPEs were used to estimate the magnitude of the Tohoku-Oki event. The estimates obtained were compared with real-time magnitude estimates provided by the existing earthquake early warning system in Japan. Unlike the current operational magnitude estimation methods, our method did not saturate and can provide robust estimates of moment magnitude within ~100 s after earthquake onset for both catalogs. It was found that correcting for average shear-wave velocity in the uppermost 30 m (VS30) improved the accuracy of magnitude estimates from surface recordings, particularly magnitude estimates from PGV (Mpgv). The new GMPEs also were used to estimate the magnitude of all earthquakes in the new catalog with at least 20 records. Results show that the magnitude estimate from PGV values using
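Magnitude estimation from a GMPE amounts to inverting the prediction equation for M, given an observed amplitude and a known source-to-station distance. The toy GMPE below, with invented coefficients, shows the round trip; in practice the estimate would be averaged over many stations as their data arrive, which is what allows the method to keep growing (rather than saturating) for a mega-thrust event.

```python
import math

# Toy GMPE of the form log10(PGV) = a*M + b*log10(R) + c, with assumed
# coefficients; the study's regressed values are not reproduced here.
A, B, C = 0.8, -1.5, -2.0

def log10_pgv(mag, r_km):
    """Forward prediction: log10 peak ground velocity."""
    return A * mag + B * math.log10(r_km) + C

def estimate_magnitude(obs_log10_pgv, r_km):
    """Invert the GMPE for M given an observed PGV and known distance."""
    return (obs_log10_pgv - B * math.log10(r_km) - C) / A

# Round trip for a station 100 km from the source: the inversion
# recovers M ≈ 9.0 up to floating point.
obs = log10_pgv(9.0, 100.0)
m_est = estimate_magnitude(obs, 100.0)
```

A VS30 site correction, as in the abstract, would simply add a site term to the forward equation and subtract it before inverting.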

The continental United States has been rocked by two particularly damaging earthquakes in the last 4.5 years, Loma Prieta in northern California in 1989 and Northridge in southern California in 1994. Combined losses from these two earthquakes approached $30 billion. Approximately half these losses were reimbursed by the federal government. Because large earthquakes typically overwhelm state resources and place unplanned burdens on the federal government, it is important to learn from these earthquakes how to reduce future losses. My purpose here is to explore a potential implication of the Northridge and Loma Prieta earthquakes for hazard-mitigation strategies: earth scientists should increase their efforts to map hazardous areas within urban regions.

In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, these methods assume that the wave propagates on the 2-D surface of the earth. In fact, since seismic waves propagate in the 3-D sphere of the earth, 2-D modeling of wave propagation results in inaccurate wave estimation. In this paper, we propose a 3-D numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D model can estimate real-time ground motion precisely, and that overprediction is alleviated when the 3-D model is used.

Time series of data on variations in the electric activity (EA) of four weakly electric fish of the species Gnathonemus leopoldianus and in the moving activity (MA) of two catfishes Hoplosternum thoracatum and two groups of Columbian cockroaches Blaberus craniifer were analyzed. The observations were carried out in the Garm region of Tajikistan within the framework of experiments aimed at searching for earthquake precursors. An automatic recording system continuously recorded EA and MA over a period of several years. Hourly mean EA and MA values were processed. Approximately 100 different parameters were calculated on the basis of the six initial EA and MA time series, characterizing different variations in the EA and MA structure: the amplitude of the signal and fluctuations of activity, parameters of diurnal rhythms, correlated changes in the activity of various biological indicators, and others. A detailed analysis of the statistical structure of the total array of parametric time series obtained in the experiment showed that the behavior of all the animals exhibits strong temporal variability. All calculated parameters are unstable and subject to frequent changes. A comparison of the data obtained with seismicity allows us to make the following conclusions: (1) The structure of variations in the studied parameters is represented by flicker noise or an even more complex process with permanent changes in its characteristics. Significant statistics are required to prove a cause-and-effect relationship between the specific features of such time series and seismicity. (2) The calculation of the reconstruction statistics in the EA and MA series structure demonstrated an increase in their frequency in the last hours to few days before an earthquake if the hypocentral distance is comparable to the source size. Sufficiently dramatic anomalies in the behavior of the catfishes and cockroaches (changes in the amplitude of activity variation, distortions of diurnal rhythms, increase in the

Little is known about the change in, and risk factors for, depression among adolescent survivors after an earthquake. This study aimed to explore the change in depression, and to identify its predictive factors, among adolescent survivors of the 2008 Wenchuan earthquake in China. Depression among high school students at 6, 12 and 18 months after the Wenchuan earthquake was investigated. The Beck Depression Inventory (BDI) was used to assess the severity of depression. Subjects included 548 student survivors at an affected high school. The rates of depression among the adolescent survivors at 6, 12 and 18 months after the earthquake were 27.3%, 42.9% and 33.3%, respectively, for males, and 42.9%, 61.9% and 53.4%, respectively, for females. Depression symptoms, trauma-related self-injury, suicidal ideation and PTSD symptoms at the 6-month follow-up were significant predictive factors for depression at 18 months after the earthquake. This study highlights the need to consider disaster-related psychological sequelae and risk factors for depression symptoms in the planning and implementation of mental health services. Long-term mental and psychological support for victims of natural disasters is imperative.

A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and to the built environment in order to improve communities' preparedness and response capabilities and to mitigate future consequences. An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model's algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher proportions of at-risk populations were found to be more vulnerable in this regard. The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to an occurrence of an earthquake could lead to a possible decrease in the expected number of casualties.

The science of earthquake prediction is interesting and worthy of support. In many respects the ultimate payoff of earthquake prediction or earthquake forecasting is how the information can be used to enhance public safety and public preparedness. This is a particularly important issue here in California where we have such a high level of seismic risk historically, and currently, as a consequence of activity in 1989 in the San Francisco Bay Area, in Humboldt County in April of this year (1992), and in southern California in the Landers-Big Bear area in late June of this year (1992). We are currently very concerned about the possibility of a major earthquake, one or more, happening close to one of our metropolitan areas. Within that context, the Parkfield experiment becomes very important.

We present four examples of short-term and imminent prediction of earthquakes in China last year: the Nima earthquake (Ms5.2), the Minxian earthquake (Ms6.6), the Nantou earthquake (Ms6.7) and the Dujiangyan earthquake (Ms4.1). Imminent prediction of the Nima earthquake (Ms5.2): Based on a comprehensive analysis of the prediction of Victor Sibgatulin using natural electromagnetic pulse anomalies, the prediction of Song Song and Song Kefu using observation of a precursory halo, and his own observation of the locations of degasification of the earth in Naqu, Tibet, the first author (Zeng Zuoxun) predicted an earthquake of around Ms 6 within 10 days in the area of the degasification point (31.5N, 89.0E) at 0:54 on May 8th, 2013. He supplied another degasification point (31N, 86E) for the epicenter prediction at 8:34 of the same day. At 18:54:30 on May 15th, 2013, an earthquake of Ms5.2 occurred in Nima County, Naqu, China. Imminent prediction of the Minxian earthquake (Ms6.6): At 7:45 on July 22nd, 2013, an earthquake of magnitude Ms6.6 occurred at the border between Minxian and Zhangxian of Dingxi City (34.5N, 104.2E), Gansu Province. We review the imminent prediction process and its basis for this earthquake using the fingerprint method. Nine- or fifteen-channel anomalous component-time curves can be output from the SW monitor for earthquake precursors. These components include geomagnetism, geoelectricity, crustal stresses, resonance, and crustal inclination. When we compress the time axis, the output curves become distinct geometric images. The precursor images differ for earthquakes in different regions; alike or similar images correspond to earthquakes in a certain region. According to seven years of observation of the precursor images and their corresponding earthquakes, we usually obtain the fingerprint 6 days before the corresponding earthquakes. Magnitude prediction requires comparison between the amplitudes of the fingerprints from the same

Abstract: Background: Public education is one of the most important elements of earthquake preparedness. The present study identifies methods and appropriate strategies for public awareness and education on preparedness for earthquakes based on people's opinions in the city of Tehran. Methods: This was a cross-sectional study and a door-to-door survey of residents from 22 municipal districts in Tehran, the capital city of Iran. It involved a total of 1,211 individuals aged 15 and above. People were asked about different methods of public information and education, as well as the type of information needed for earthquake preparedness. Results: "Enforcing the building contractors' compliance with the construction codes and regulations" was ranked as the first priority by 33.4% of the respondents. Over 70% of the participants (71.7%) regarded TV as the most appropriate means of media communication to prepare people for an earthquake. This was followed by "radio", which was selected by 51.6% of respondents. Slightly over 95% of the respondents believed that there would soon be an earthquake in the country, and 80% reported that they obtained this information from "the general public". Seventy percent of the study population felt that news of an earthquake should be communicated through the media. However, over half (58%) of the participants believed that governmental officials and agencies are best qualified to disseminate information about the risk of an imminent earthquake. Just over half (50.8%) of the respondents argued that the authorities do not usually provide enough information to people about earthquakes and the probability of their occurrence. Besides seismologists, respondents thought astronauts (32%), fortunetellers (32.3%), religious figures (34%), meteorologists (23%), and paleontologists (2%) can correctly predict the occurrence of an earthquake. Furthermore, 88.6% listed aid centers, mosques, newspapers and TV as the most important sources of information during the aftermath of an earthquake.

This study presents two examples of modelling complex seismic sources in Italy, done in the framework of regional probabilistic seismic hazard assessment (PSHA). The first case study is for an area centred around Collalto Stoccaggio, a natural gas storage facility in Northern Italy, located within a system of potentially seismogenic thrust faults in the Venetian Plain. The storage exploits a depleted natural gas reservoir located within an actively growing anticline, which is likely driven by the Montello Fault, the underlying blind thrust. This fault has been well identified by microseismic activity (M<2) detected by a local seismometric network installed in 2012 (http://rete-collalto.crs.inogs.it/). At this time, no correlation can be identified between the gas storage activity and local seismicity, so we proceed with a PSHA that considers only natural seismicity, where the rates of earthquakes are assumed to be time-independent. The source model consists of faults and distributed seismicity to consider earthquakes that cannot be associated with specific structures. All potentially active faults within 50 km of the site are considered, and are modelled as 3D listric surfaces, consistent with the proposed geometry of the Montello Fault. Slip rates are constrained using available geological, geophysical and seismological information. We explore the sensitivity of the hazard results to various parameters affected by epistemic uncertainty, such as ground-motion prediction equations with different rupture-to-site distance metrics, fault geometry, and maximum magnitude. The second case is an innovative study, where we perform aftershock probabilistic seismic hazard assessment (APSHA) in Central Italy, following the Amatrice M6.1 earthquake of August 24th, 2016 (298 casualties) and the subsequent earthquakes of Oct 26th and 30th (M6.1 and M6.6 respectively, no deaths). The aftershock hazard is modelled using a fault source with complex geometry, based on literature data

This report addresses the ability of transportation facilities in California to survive four postulated earthquakes that are based on historical events. The survival of highways, railroads, ports, airports, and pipelines is investigated following ind...

The Kumamoto earthquake of April 16th, 2016 in Kumamoto Prefecture, Kyushu Island, Japan, with an intense seismic magnitude of M7.3 (maximum acceleration = 1316 gal in the Aso volcanic region), caused countless landslides and debris flows that led to serious damage and casualties in the area, especially in the Aso volcanic mountain range. Hence, field investigation and numerical slope stability analysis were conducted to examine the characteristics, or the prediction factors, of the landslides induced by this earthquake. For the numerical analysis, the Finite Element Method (FEM) and CSSDP (Critical Slip Surface analysis by Dynamic Programming theory, based on the limit equilibrium method) were applied to landslide slopes for which seismic accelerations were observed. These numerical methods automatically detect the landslide slip surface that has the minimum Fs (factor of safety). The results and information obtained through this investigation and analysis were integrated to predict landslide-susceptible slopes in volcanic areas subject to earthquakes and to the rainfall in their aftermath, considering geologic-geomorphologic features, geotechnical characteristics of the landslides, and vegetation effects on slope stability. Based on the FEM and CSSDP results, the landslides that occurred in this earthquake on mildly graded slopes on ridges have a slope safety factor of approximately Fs = 2.20 (without rainfall or earthquake; Fs >= 1.0 corresponds to a stable slope without landsliding) and Fs = 1.78-2.10 (with the most severe rainfall in the past), whereas they have approximately Fs = 0.40 under the seismic forces of this earthquake (818 gal horizontal and -320 gal vertical, as observed during the earthquake). This indicates that only in the case of earthquakes are landslides in volcanic sediment apt to occur on mildly graded slopes as well as on ridges with convex cross sections. Consequently, the following results are obtained. 1) At volcanic
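The way a seismic force drives Fs from well above 1.0 to failure can be illustrated with a pseudo-static infinite-slope calculation, a much simpler stand-in for the FEM/CSSDP analyses in the abstract. The soil parameters below are assumed for illustration; with them, a mild slope that is comfortably stable statically fails once a horizontal seismic coefficient comparable to the recorded 818 gal (kh ≈ 0.8) is applied.

```python
import math

def fs_pseudostatic(c, phi_deg, gamma, depth, beta_deg, kh):
    """Infinite-slope factor of safety with a horizontal seismic
    coefficient kh (pseudo-static approximation).

    c: cohesion (kPa), phi_deg: friction angle, gamma: unit weight
    (kN/m^3), depth: slip depth (m), beta_deg: slope angle,
    kh: horizontal acceleration / g.
    """
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    w = gamma * depth                      # weight per unit slope area
    # Driving and normal stresses, including the inertial force kh*W.
    driving = w * math.sin(beta) + kh * w * math.cos(beta)
    normal = w * math.cos(beta) - kh * w * math.sin(beta)
    return (c + normal * math.tan(phi)) / driving

# Assumed mild volcanic-ash slope: stable statically, unstable seismically.
fs_static = fs_pseudostatic(10, 35, 14, 2, 15, 0.0)
fs_quake = fs_pseudostatic(10, 35, 14, 2, 15, 0.8)
```

The inertial term grows the driving stress and shrinks the normal stress simultaneously, which is why mild slopes that survive even severe rainfall (as in the abstract's Fs = 1.78-2.10 case) can still fail under strong shaking.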

For a few decades, a growing number of studies have shown the usefulness of seismogeochemical data interpreted as geochemical precursory signals of impending earthquakes, and radon is identified as one of the most reliable geochemical precursors. Radon is recognized as a short-term precursor and is monitored in many countries. This study aims at developing an effective earthquake forecasting system by inspecting long-term radon time series data. The data are obtained from a network of radon monitoring stations established along different faults of Taiwan. Continuous time series radon data for earthquake studies have been recorded, and some significant variations associated with strong earthquakes have been observed. The data are also examined to evaluate earthquake precursory signals against environmental factors. An automated real-time database operating system has been developed recently to improve data processing for earthquake precursory studies. In addition, the study aims at the appraisal and filtering of these environmental parameters, in order to create a real-time database that supports our earthquake precursory study. In recent years, an automatically operating real-time database has been developed using R, an open source programming language, to carry out statistical computation on the data. To integrate the data with our working procedures, we use the popular open source web application stack AMP (Apache, MySQL, and PHP) to create a website that effectively presents, and helps us manage, the real-time database.

We revise the slip process of the July 17, 2006 Java earthquake by jointly inverting teleseismic body waves, long-period surface waves, and the broadband records at Christmas Island (XMIS), which is 220 km from the hypocenter and so far the closest observation of a tsunami earthquake. Compared with previous studies, our approach considers the amplitude variations of surface waves with source depth as well as the contribution of the ScS phase, which usually has amplitudes comparable to those of the direct S phase for such low-angle thrust earthquakes. The fault dip angles are also refined using the Love waves observed along the fault strike direction. Our results indicate that the 2006 event initiated at a depth of around 12 km and ruptured unilaterally southeast for 150 sec at a speed of 1.0 km/sec. The revised fault dip is only about 6 degrees, smaller than the Harvard CMT value (10.5 degrees) but consistent with that of the 1994 Java earthquake. The smaller fault dip results in a larger moment magnitude (Mw = 7.9) for a PREM earth, though this is dependent on the velocity structure used. After verification with 3D SEM forward simulation, we compare the inverted result with the revised slip models of the 1994 Java and 1992 Nicaragua earthquakes derived using the same wavelet-based finite fault inversion methodology.

Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models, sampled in proportion to how consistent each model is with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems; until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm, CATMIP (Cascading Adaptive Tempered Metropolis In Parallel). This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved. The rupture almost certainly propagated at sub-shear velocities. The full posterior distribution can be used not only to calculate source parameters but also
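CATMIP's cascading, resampling, and parallelization are beyond a short sketch, but its building block, Metropolis sampling of a tempered posterior p(θ|d)^β, is compact. The 1-D Gaussian target below is illustrative only, not the Tocopilla source model:

```python
import math
import random

def tempered_metropolis(log_post, beta, theta0, n_steps, step=1.0, rng=None):
    """Metropolis sampling of the tempered density exp(beta * log_post(theta)).
    beta = 0 corresponds to an early, nearly flat stage; beta = 1 to the
    full posterior."""
    rng = rng or random.Random(0)
    theta, lp = theta0, log_post(theta0)
    samples = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)       # random-walk proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < beta * (lp_prop - lp):
            theta, lp = prop, lp_prop             # accept
        samples.append(theta)
    return samples

# Toy 1-D posterior: Gaussian centred at 3 with unit variance.
log_post = lambda t: -0.5 * (t - 3.0) ** 2
chain = tempered_metropolis(log_post, beta=1.0, theta0=0.0, n_steps=20000)
```

In CATMIP the exponent β is raised adaptively from 0 to 1 over a sequence of stages, with the chain population resampled and the proposal adapted between stages; that transitional schedule is what makes hundreds of dimensions tractable.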

Background A comprehensive technique for earthquake-related casualty estimation remains an unmet challenge. This study aims to integrate risk factors related to characteristics of the exposed population and to the built environment in order to improve communities’ preparedness and response capabilities and to mitigate future consequences. Methods An innovative model was formulated based on a widely used loss estimation model (HAZUS) by integrating four human-related risk factors (age, gender, physical disability and socioeconomic status) that were identified through a systematic review and meta-analysis of epidemiological data. The common effect measures of these factors were calculated and entered into the existing model’s algorithm using logistic regression equations. Sensitivity analysis was performed by conducting a casualty estimation simulation in a high-vulnerability risk area in Israel. Results The integrated model outcomes indicated an increase in the total number of casualties compared with the prediction of the traditional model; with regard to specific injury levels, an increase was demonstrated in the number of expected fatalities and in the severely and moderately injured, and a decrease was noted in the lightly injured. Urban areas with higher rates of populations at risk were found to be more vulnerable in this regard. Conclusion The proposed model offers a novel approach that allows quantification of the combined impact of human-related and structural factors on the results of earthquake casualty modelling. Investing efforts in reducing human vulnerability and increasing resilience prior to an occurrence of an earthquake could lead to a possible decrease in the expected number of casualties. PMID:26959647

Over the last 20 years, natural disasters have killed nearly 3 million people and disrupted the lives of over 800 million others. In a recent two-year period there were more than 50 serious natural disasters, including landslides in Italy, France, and Colombia; a typhoon in Korea; wildfires in China and the United States; a windstorm in England; grasshopper plagues in the Horn of Africa and the Sahel; tornadoes in Canada; devastating earthquakes in Soviet Armenia and Tajikistan; infestations in Africa; landslides in Brazil; and tornadoes in the United States.

Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as a part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, using macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers, using past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for client market portfolio align with the
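The quoted rebuilding cost factors map a distribution over EMS-1998 damage grades to a mean damage ratio and hence a monetary loss. A sketch using those factors; the damage-grade probabilities below are invented for illustration:

```python
# EMS-1998 damage grades 1..5 with the rebuilding cost factors from the model.
COST_FACTORS = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

def expected_loss(replacement_value, grade_probs):
    """Expected loss = replacement value x mean damage ratio, where the
    mean damage ratio is sum over grades of P(grade) * cost factor.
    grade_probs maps damage grade (1..5) -> probability."""
    mdr = sum(p * COST_FACTORS[g] for g, p in grade_probs.items())
    return replacement_value * mdr

# Hypothetical scenario output for one building class; probabilities of
# reaching each damage grade (the remainder is undamaged).
probs = {1: 0.30, 2: 0.20, 3: 0.10, 4: 0.04, 5: 0.01}
loss = expected_loss(1_000_000, probs)  # → 145000.0
```

Aggregating this over a portfolio of exposures and a catalog of simulated events is what produces the loss return periods mentioned at the end of the abstract.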

The Himalaya is one of the most seismically active regions of the world. The occurrence of several large magnitude earthquakes, viz. the 1905 Kangra earthquake (Mw 7.8), 1934 Bihar-Nepal earthquake (Mw 8.2), 1950 Assam earthquake (Mw 8.4), 2005 Kashmir earthquake (Mw 7.6), and 2015 Gorkha earthquake (Mw 7.8), is testimony to ongoing tectonic activity. In the last few decades, tremendous efforts have been made along the Himalayan arc to understand the patterns of earthquake occurrence, size, extent, and return periods. Some of the large magnitude earthquakes produced surface rupture, while some remained blind. Furthermore, due to the incompleteness of the earthquake catalogue, very few events can be correlated with medieval earthquakes. Based on the existing paleoseismic data, it is difficult to precisely determine the extent of surface rupture of these earthquakes, as well as of those events which occurred during historic times. In this paper, we have compiled the paleoseismological data and recalibrated the radiocarbon ages from the trenches excavated by previous workers along the entire Himalaya, and compared the present earthquake scenario with the past. Our studies suggest that there were multiple earthquake events with overlapping surface ruptures in small patches, with an average rupture length of 300 km limiting Mw to 7.8-8.0 for the Himalayan arc, rather than two or three giant earthquakes rupturing the whole front. The large magnitude Himalayan earthquakes, such as 1905 Kangra, 1934 Bihar-Nepal, and 1950 Assam, occurred within a time frame of 45 years. If such events were dated with an uncertainty of ±50 years, there is a high possibility that they would be considered the remnants of one giant earthquake rupturing the entire Himalayan arc, leading to an overestimation of the seismic hazard scenario in the Himalaya.

We quantify the correlation between earthquakes and use it to extract causally connected earthquake pairs. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski [M. Baiesi and M. Paczuski, Phys. Rev. E 69, 066106 (2004)]. A network of earthquakes is then constructed from the time-ordered catalog, with links between the more correlated ones. A list of recurrences for each earthquake is identified, employing correlation thresholds to demarcate the most meaningful ones in each cluster. Data pertaining to three different seismic regions (viz., California, Japan, and the Himalayas) are comparatively analyzed using such a network model. The distributions of recurrence lengths and recurrence times are two of the key features analyzed to draw conclusions about the universal aspects of such a network model. We find that the unimodal feature of the recurrence length distribution, which helps to associate typical rupture lengths with different magnitude earthquakes, is robust across the different seismic regions. The out-degree of the networks shows a hub structure rooted on the large magnitude earthquakes. The in-degree distribution is seen to be dependent on the density of events in the neighborhood. Power laws, with two regimes having different exponents, are obtained for the recurrence time distribution. The first regime confirms the Omori law for aftershocks, while the second regime, with a faster falloff for the larger recurrence times, establishes that pure spatial recurrences also follow a power-law distribution. The crossover to the second power-law regime can be taken to signal the end of the aftershock regime in an objective fashion.
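A minimal sketch of the original Baiesi-Paczuski metric (the paper uses a variation of it): each pair i → j of events, with i earlier, is scored by n_ij = c · t_ij · r_ij^df · 10^(−b·m_i), the expected number of events in the intervening space-time domain, and a small n_ij marks a strongly correlated pair. The mini-catalog and parameter values below are hypothetical:

```python
def bp_metric(t_ij, r_ij, m_i, d_f=1.6, b=1.0, c=1.0):
    """Baiesi-Paczuski correlation metric: expected number of events of
    magnitude >= m_i in the space-time domain separating events i and j.
    Small values indicate a strongly correlated (likely causal) pair."""
    return c * t_ij * r_ij ** d_f * 10.0 ** (-b * m_i)

def nearest_parent(events, j):
    """For event j, return the preceding event minimizing the metric.
    events: time-ordered list of (time_days, x_km, y_km, magnitude)."""
    tj, xj, yj, _ = events[j]
    best, best_n = None, float("inf")
    for i in range(j):
        ti, xi, yi, mi = events[i]
        r = ((xj - xi) ** 2 + (yj - yi) ** 2) ** 0.5
        n = bp_metric(tj - ti, r, mi)
        if n < best_n:
            best, best_n = i, n
    return best, best_n

# Hypothetical mini-catalog: a mainshock, an unrelated distant event,
# then a nearby aftershock; the aftershock should link to the mainshock.
catalog = [(0.0, 0.0, 0.0, 6.0), (1.0, 200.0, 0.0, 3.0), (2.0, 5.0, 0.0, 3.5)]
parent, n = nearest_parent(catalog, 2)
```

Thresholding n_ij over all pairs, rather than keeping only the single best parent, is what yields the recurrence lists and the network whose degree distributions the abstract analyzes.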

Predicted conditional probabilities of surface manifestations of liquefaction during a repeat of the 1906 San Francisco (M7.8) earthquake range from 0.54 to 0.79 in the area underlain by the sandy artificial fills along the eastern shore of San Francisco Bay near Oakland, California. Despite widespread liquefaction in 1906 of sandy fills in San Francisco, most of the East Bay fills were emplaced after 1906 without soil improvement to increase their liquefaction resistance. They have yet to be shaken strongly. Probabilities are based on the liquefaction potential index computed from 82 CPT soundings, using median (50th percentile) estimates of PGA based on a ground-motion prediction equation. Shaking estimates consider both distance from the San Andreas Fault and local site conditions. The high probabilities indicate extensive and damaging liquefaction will occur in East Bay fills during the next M ∼ 7.8 earthquake on the northern San Andreas Fault. © 2006, Earthquake Engineering Research Institute.

Results on the long-term predictability of strong earthquakes are discussed. It is shown that the dates of earthquakes with M > 5.5 can be determined several months in advance of the event. The magnitude and region of an approaching earthquake can be specified within the time frame of a month before the event. Determination of the number of M6+ earthquakes expected to occur during the analyzed year is performed using a special sequence diagram of seismic activity for the century time frame. Date analysis can be performed 15-20 years in advance. The data are verified by a monthly sequence diagram of seismic activity. The number of strong earthquakes expected to occur in the analyzed month is determined by several methods with different prediction horizons. Determination of days of potential earthquakes with M5.5+ is performed using astronomical data. Earthquakes occur on days of oppositions of Solar System planets (arranged in a single line). Moreover, the strongest earthquakes occur when the vector "Sun-Solar System barycenter" lies in the ecliptic plane. Details of this astronomical multivariate indicator still require further research, but its practical significance has been confirmed in practice. Another empirical indicator of an approaching M6+ earthquake is a synchronous variation of meteorological parameters: an abrupt decrease in minimum daily temperature, an increase in relative humidity, and an abrupt change in atmospheric pressure (RAMES method). The time difference between the predicted and actual date is no more than one day. This indicator is registered 104 days before the earthquake, so it has been named Harmonic 104 or H-104. This fact looks paradoxical, but the works of A. Sytinskiy and V. Bokov on the correlation of global atmospheric circulation and seismic events give a physical basis for this empirical fact. Also, 104 days is a quarter of a Chandler period, so this fact gives insight into the correlation between the anomalies of Earth orientation

In tectonically active areas, earthquakes are an important trigger of landslides, with significant impact on hillslope and river evolution. However, detailed prediction of landslide locations and properties for a given earthquake remains difficult. In contrast, we propose landscape-scale, analytical prediction of bulk coseismic landsliding, that is, total landslide area and volume (Marc et al., 2016a) as well as the regional area within which most landslides must distribute (Marc et al., 2017). The prediction is based on a limited number of seismological (seismic moment, source depth) and geomorphological (landscape steepness, threshold acceleration) parameters, and therefore could be implemented in landscape evolution models aiming at engaging with erosion dynamics at the scale of the seismic cycle. To assess the model we have compiled and normalized estimates of total landslide volume, total landslide area and regional area affected by landslides for 40, 17 and 83 earthquakes, respectively. We have found that low landscape steepness systematically leads to overprediction of the total area and volume of landslides. When this effect is accounted for, the model is able to predict within a factor of 2 the landslide areas and associated volumes for about 70% of the cases in our databases. The prediction of the regional area affected does not require a calibration for the landscape steepness and gives a prediction within a factor of 2 for 60% of the database. For 7 out of 10 comprehensive inventories we show that our prediction compares well with the smallest region around the fault containing 95% of the total landslide area. This is a significant improvement on a previously published empirical expression based only on earthquake moment. Some of the outliers seem related to exceptional rock mass strength in the epicentral area, or to shaking duration and other seismic source complexities ignored by the model. Applications include prediction of the mass balance of earthquakes and

This thematic collection contains eight papers mostly presented at the 2016 AOGS meeting in Beijing. Four papers describe historical earthquake studies in Europe, Japan, and China; one paper uses modern instrumental data to examine the effect of giant earthquakes on the seismicity rate; and three papers describe paleoseismological studies using tsunami deposits in Japan, marine terraces in the Philippines, and active faults in the Himalayas. Hammerl (Geosci Lett 4:7, 2017) introduced historical seismological studies in Austria, starting from methodology that is state of the art in most European countries, followed by a case study of an earthquake of July 17, 1670 in Tyrol. Albini and Rovida (Geosci Lett 3:30, 2016) examined 114 historical records for the earthquake of April 6, 1667 on the east coast of the Adriatic Sea, compiled 37 Macroseismic Data Points, and estimated the epicenter and size of the earthquake. Matsu'ura (Geosci Lett 4:3, 2017) summarized historical earthquake studies in Japan, which have resulted in about 8700 Intensity Data Points, assigned epicenters for 214 earthquakes between AD 599 and 1872, and estimated focal depths and magnitudes for 134 events. Wang et al. (Geosci Lett 4:4, 2017) introduced historical seismology in China, where historical earthquake archives include about 15,000 sources and parametric catalogs include about 1000 historical earthquakes between 2300 BC and AD 1911. Ishibe et al. (Geosci Lett 4:5, 2017) tested the Coulomb stress triggering hypothesis for three giant (M 9) earthquakes that occurred in recent years, and found that at least the 2004 Sumatra-Andaman and 2011 Tohoku earthquakes caused seismicity rate changes. Ishimura (2017) re-estimated the ages of 11 tsunami deposits of the last 4000 years along the Sanriku coast of northern Japan and found the average recurrence interval of those tsunamis to be 350-390 years. Ramos et al. (2017) studied 1000-year-old marine terraces on the west coast of Luzon Island, Philippines

We report on a unique set of infrasound observations from a single earthquake, the 2011 January 3 Circleville earthquake (Mw 4.7, depth of 8 km), which was recorded by nine infrasound arrays in Utah. Based on an analysis of the signal arrival times and backazimuths at each array, we find that the infrasound arrivals at six arrays can be associated to the same source and that the source location is consistent with the earthquake epicentre. Results of propagation modelling indicate that the lack of associated arrivals at the remaining three arrays is due to path effects. Based on these findings we form the working hypothesis that the infrasound is generated by body waves causing the epicentral region to pump the atmosphere, akin to a baffled piston. To test this hypothesis, we have developed a numerical seismoacoustic model to simulate the generation of epicentral infrasound from earthquakes. We model the generation of seismic waves using a 3-D finite difference algorithm that accounts for the earthquake moment tensor, source time function, depth and local geology. The resultant acceleration-time histories on a 2-D grid at the surface then provide the initial conditions for modelling the near-field infrasonic pressure wave using the Rayleigh integral. Finally, we propagate the near-field source pressure through the Ground-to-Space atmospheric model using a time-domain Parabolic Equation technique. By comparing the resultant predictions with the six epicentral infrasound observations from the 2011 January 3, Circleville earthquake, we show that the observations agree well with our predictions. The predicted and observed amplitudes are within a factor of 2 (on average, the synthetic amplitudes are a factor of 1.6 larger than the observed amplitudes). In addition, arrivals are predicted at all six arrays where signals are observed, and importantly not predicted at the remaining three arrays. Durations are typically predicted to within a factor of 2, and in some cases

Natural disaster reduction focuses on the urgent need for prevention activities to reduce loss of life, damage to property, infrastructure and environment, and the social and economic disruption caused by natural hazards. One of the most important factors in reducing the potential damage of earthquakes is trained manpower. Understanding the causes of earthquakes and other natural phenomena (landslides, avalanches, floods, volcanoes, etc.) is one of the preconditions for conscious behavior. The aim of this study is to investigate how earthquakes and other natural phenomena are perceived by students, the possible consequences of this perception, and its effect on reducing earthquake damage. One crucial question is whether our education system is fear-based or curiosity-based. The damage caused by earthquakes has led them to be treated as a subject of fear; indeed, as a result of that damage, earthquakes are perceived as frightening phenomena. In the first stage of the project, the levels of learning (or perception) of earthquakes and other natural disasters among primary school students are investigated with a survey. The aim of this survey is to determine whether students take a fear-based or a curiosity-based approach to earthquakes and other natural events. In the second stage of the project, the data obtained by the survey are evaluated statistically. A questionnaire on earthquakes and natural disasters is administered to primary school students (approximately 700 pupils in total) to measure curiosity and/or fear levels. The questionnaire consists of 17 questions related to natural disasters. The questions include: "What is the Earthquake ?", "What is power behind earthquake?", "What is the mental response during the earthquake ?", "Did we take lesson from earthquake's results ?", "Are you afraid of earthquake

A ground-motion prediction equation (GMPE) for computing medians and standard deviations of peak ground acceleration and 5-percent damped pseudo spectral acceleration response ordinates of maximum horizontal component of randomly oriented ground motions was developed by Graizer and Kalkan (2007, 2009) to be used for seismic hazard analyses and engineering applications. This GMPE was derived from the greatly expanded Next Generation of Attenuation (NGA)-West1 database. In this study, Graizer and Kalkan’s GMPE is revised to include (1) an anelastic attenuation term as a function of quality factor (Q0) in order to capture regional differences in large-distance attenuation and (2) a new frequency-dependent sedimentary-basin scaling term as a function of depth to the 1.5-km/s shear-wave velocity isosurface to improve ground-motion predictions for sites on deep sedimentary basins. The new model (GK15), developed to be simple, is applicable to the western United States and other regions with shallow continental crust in active tectonic environments and may be used for earthquakes with moment magnitudes 5.0–8.0, distances 0–250 km, average shear-wave velocities 200–1,300 m/s, and spectral periods 0.01–5 s. Directivity effects are not explicitly modeled but are included through the variability of the data. Our aleatory variability model captures inter-event variability, which decreases with magnitude and increases with distance. The mixed-effects residuals analysis shows that the GK15 reveals no trend with respect to the independent parameters. The GK15 is a significant improvement over Graizer and Kalkan (2007, 2009), and provides a demonstrable, reliable description of ground-motion amplitudes recorded from shallow crustal earthquakes in active tectonic regions over a wide range of magnitudes, distances, and site conditions.

This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular scheme of forecasts and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. A fair scoring scheme should reward success in a way that is compatible with the risk taken. Suppose that we have a reference model, usually the Poisson model for usual cases or the Omori-Utsu formula for the case of forecasting aftershocks, which gives the probability p0 that at least 1 event occurs in a given space-time-magnitude window. The forecaster, similar to a gambler, starts with a certain number of reputation points and bets 1 reputation point on ``Yes'' or ``No'' according to his forecast, or bets nothing if he performs an NA-prediction. If the forecaster bets 1 reputation point on ``Yes'' and loses, the number of his reputation points is reduced by 1; if his forecast is successful, he is rewarded (1-p0)/p0 reputation points. The quantity (1-p0)/p0 is the return (reward/bet) ratio for bets on ``Yes''. In this way, if the reference model is correct, the expected return that he gains from this bet is 0. This rule also applies to probability forecasts. Suppose that p is the occurrence probability of an earthquake given by the forecaster. We can regard the forecaster as splitting 1 reputation point by betting p on ``Yes'' and 1-p on ``No''. In this way, the forecaster's expected pay-off based on the reference model is still 0. From the viewpoints of both the reference model and the forecaster, the rule for reward and punishment is fair. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when
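The payoff rules above translate directly into code. A minimal sketch (the function names are mine, not the paper's); the zero-expected-payoff property under the reference model can be verified numerically:

```python
def binary_payoff(bet_yes, event_occurred, p0):
    """Payoff of a 1-point deterministic bet against reference probability p0.
    Betting 'Yes' returns (1 - p0) / p0 on success and -1 on failure;
    betting 'No' is the mirror image, with return ratio p0 / (1 - p0)."""
    if bet_yes:
        return (1.0 - p0) / p0 if event_occurred else -1.0
    return p0 / (1.0 - p0) if not event_occurred else -1.0

def probabilistic_payoff(p, event_occurred, p0):
    """A probability forecast p splits the point: p on 'Yes', 1-p on 'No'."""
    return (p * binary_payoff(True, event_occurred, p0)
            + (1.0 - p) * binary_payoff(False, event_occurred, p0))

# Under the reference model (event occurs with probability p0), the expected
# payoff of any forecast probability p is zero -- the fairness property.
p0, p = 0.1, 0.7
expected = (p0 * probabilistic_payoff(p, True, p0)
            + (1 - p0) * probabilistic_payoff(p, False, p0))
```

A forecaster only accumulates reputation points, on average, to the extent that his forecast probabilities beat the reference model's, which is exactly the risk-compensation the abstract describes.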

A 904-element, single-component, 10-Hz geophone array deployed within 15 km of Mount St. Helens (MSH) in 2014 recorded continuously for two weeks. Automated reverse-time imaging (RTI) was used to generate a catalog of 212 earthquakes. Among these, two distinct types of upper crustal (<8 km) earthquakes were classified. Volcano-tectonic (VT) and long-period (LP) earthquakes were identified using analysis of array spectrograms, envelope functions, and velocity waveforms. To remove analyst subjectivity, quantitative classification criteria were developed based on the ratio of power in high and low frequency bands and coda duration. Prior to the 2014 experiment, upper crustal LP earthquakes had only been reported at MSH during volcanic activity. Subarray beamforming was used to distinguish between LP earthquakes and surface-generated LP signals, such as rockfall. This method confirmed 16 LP signals with horizontal velocities exceeding upper crustal P-wave velocities, which requires a subsurface hypocenter. LP and VT locations overlap in a cluster slightly east of the summit crater from 0-5 km below sea level. LP displacement spectra are similar to simple theoretical predictions for shear failure, except that they have lower corner frequencies than VT earthquakes of similar magnitude. The results indicate a distinct non-resonant source for LP earthquakes, which are located in the same source volume as some VT earthquakes (within hypocenter uncertainty of 1 km or less). To further investigate MSH microseismicity mechanisms, a 142-element, three-component (3-C), 5-Hz geophone array will record continuously for one month at MSH in Fall 2017, providing a unique dataset for a volcano earthquake source study. This array will help determine if LP occurrence in 2014 was transient or if it is still ongoing. Unlike the 2014 array, approximately 50 geophones will be deployed in the MSH summit crater directly over the majority of seismicity. RTI will be used to detect and locate earthquakes by
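The quantitative VT/LP criterion described, a ratio of spectral power in high versus low frequency bands, can be sketched with a discrete Fourier transform. The band edges and synthetic signals below are illustrative assumptions, not the study's actual thresholds, and the coda-duration criterion is omitted:

```python
import numpy as np

def band_power_ratio(waveform, fs, low_band=(1.0, 5.0), high_band=(5.0, 20.0)):
    """Ratio of spectral power in a high band to a low band (Hz).
    Large values suggest a VT (volcano-tectonic) event; small values an LP
    (long-period) event."""
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / fs)
    power = np.abs(np.fft.rfft(waveform)) ** 2
    lo = power[(freqs >= low_band[0]) & (freqs < low_band[1])].sum()
    hi = power[(freqs >= high_band[0]) & (freqs < high_band[1])].sum()
    return hi / lo

# Synthetic 10-s records at 100 Hz: an LP-like signal near 2 Hz and a
# VT-like signal near 12 Hz, each with a little broadband noise.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
noise = 0.01 * rng.standard_normal(t.size)
lp_like = np.sin(2 * np.pi * 2.0 * t) + noise
vt_like = np.sin(2 * np.pi * 12.0 * t) + noise
```

A single threshold on this ratio (plus a coda-duration test) is the kind of analyst-independent rule the abstract refers to.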

This paper presents results from shaking table tests of a one-tenth-scale reinforced concrete (RC) building model. The test model is based on a prototype building that was seriously damaged during the 1985 Mexico earthquake. The input ground excitation used during the tests was taken from records obtained near the site of the prototype building during the 1985 and 1995 Mexico earthquakes. The tests showed that the damage pattern of the test model agreed well with that of the prototype building. Analytical prediction of the earthquake response was conducted for the prototype building using a sophisticated 3-D frame model. The input motion used for the dynamic analysis was the shaking table test measurements with similarity transformation. The comparison of the analytical results and the shaking table test results indicates that the response of the RC building to minor and moderate earthquakes can be predicted well. However, there is a difference between the prediction and the actual response to the major earthquake.

We establish a positive correlation between the local spatio-temporal fluctuations of the earthquake magnitude distribution and the occurrence of regional earthquakes. To accomplish this goal, we develop a sequential Bayesian statistical estimation framework for the b-value (slope of the Gutenberg-Richter exponential approximation to the observed magnitude distribution) and for the ratio a(t) between the earthquake intensities in two non-overlapping magnitude intervals. The time-dependent dynamics of these parameters is analyzed using Markov Chain Models (MCM). The main advantage of this approach over traditional window-based estimation is its "soft" parameterization, which allows one to obtain stable results with realistically small samples. We furthermore discuss a statistical methodology for establishing lagged correlations between continuous and point processes. The developed methods are applied to the observed seismicity of California, Nevada, and Japan on different temporal and spatial scales. We report an oscillatory dynamics of the estimated parameters, and find that the detected oscillations are positively correlated with the occurrence of large regional earthquakes, as well as with small events with magnitudes as low as 2.5. The reported results have important implications for the further development of earthquake prediction and seismic hazard assessment methods.
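The slope parameter being tracked, the Gutenberg-Richter b-value, is classically estimated from a magnitude sample by Aki's maximum-likelihood formula, b = log10(e) / (mean(m) − m_c); this window-based estimator is the baseline that the abstract's "soft" sequential Bayesian parameterization improves on. A sketch on synthetic magnitudes:

```python
import math
import random

def b_value_mle(magnitudes, m_c):
    """Aki (1965) maximum-likelihood b-value estimate for magnitudes
    at or above the completeness magnitude m_c."""
    sample = [m for m in magnitudes if m >= m_c]
    mean_m = sum(sample) / len(sample)
    return math.log10(math.e) / (mean_m - m_c)

# Synthetic catalog following Gutenberg-Richter with true b = 1.0 above m_c = 2.5
# (magnitude excesses above m_c are exponential with rate b * ln(10)).
rng = random.Random(42)
true_b, m_c = 1.0, 2.5
mags = [m_c + rng.expovariate(true_b * math.log(10)) for _ in range(5000)]
b_hat = b_value_mle(mags, m_c)  # close to 1.0
```

With small windows this estimator becomes noisy, which is the motivation for the Bayesian treatment: instead of a point estimate per window, one carries a full posterior for b forward in time.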

It is quite an important issue for strong ground motion prediction to construct source models of huge subduction earthquakes. Irikura and Miyake (2001, 2011) proposed the characterized source model for strong ground motion prediction, which consists of plural strong ground motion generation area (SMGA; Miyake et al., 2003) patches on the source fault. We obtained SMGA source models for many events using the empirical Green's function method and found that the SMGA size has an empirical scaling relationship with seismic moment. Therefore, the SMGA size can be assumed from that empirical relation, given the seismic moment of an anticipated earthquake. Concerning the positioning of the SMGAs, information on fault segments is useful for inland crustal earthquakes. For the 1995 Kobe earthquake, three SMGA patches are obtained, and the Nojima, Suma, and Suwayama segments each have one SMGA in the SMGA modeling (e.g., Kamae and Irikura, 1998). For the 2011 Tohoku earthquake, Asano and Iwata (2012) estimated the SMGA source model and obtained four SMGA patches on the source fault. The total SMGA area follows the extension of the empirical scaling relationship between seismic moment and SMGA area for subduction plate-boundary earthquakes, which shows the applicability of the empirical scaling relationship for the SMGA. The positions of two SMGAs are in the Miyagi-Oki segment, and the other two SMGAs are in the Fukushima-Oki and Ibaraki-Oki segments, respectively. Asano and Iwata (2012) also pointed out that all SMGAs correspond to the historical source areas of the 1930s. Those SMGAs do not overlap the huge slip area in the shallower part of the source fault that was estimated from teleseismic data, long-period strong motion data, and/or geodetic data for the 2011 mainshock. This fact shows that the huge slip area does not contribute to strong ground motion generation (10-0.1 s). The information on fault segments in the subduction zone, or

East Pacific Rise transform faults are characterized by high slip rates (more than ten centimetres a year), predominantly aseismic slip and maximum earthquake magnitudes of about 6.5. Using recordings from a hydroacoustic array deployed by the National Oceanic and Atmospheric Administration, we show here that East Pacific Rise transform faults also have a low number of aftershocks and high foreshock rates compared to continental strike-slip faults. The high ratio of foreshocks to aftershocks implies that such transform-fault seismicity cannot be explained by seismic triggering models in which there is no fundamental distinction between foreshocks, mainshocks and aftershocks. The foreshock sequences on East Pacific Rise transform faults can be used to predict (retrospectively) earthquakes of magnitude 5.4 or greater, in narrow spatial and temporal windows and with a high probability gain. The predictability of such transform earthquakes is consistent with a model in which slow slip transients trigger earthquakes, enrich their low-frequency radiation and accommodate much of the aseismic plate motion.
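The "high probability gain" reported for these retrospective foreshock-based predictions has a standard operational definition: the fraction of target earthquakes falling inside alarm windows divided by the fraction of the total monitored time-space occupied by alarms. A minimal sketch with invented names and numbers:

```python
def probability_gain(n_hits, n_targets, alarm_time, total_time):
    """Probability gain G of an alarm-based prediction strategy.

    G = (fraction of target events occurring during alarms)
        / (fraction of the monitoring period covered by alarms).
    G = 1 is no better than random alarms; G >> 1 indicates skill.
    """
    hit_rate = n_hits / n_targets
    alarm_fraction = alarm_time / total_time
    return hit_rate / alarm_fraction

# e.g. 8 of 10 target events caught while alarms cover 2% of the
# monitoring period corresponds to a 40-fold probability gain.
```

Spatial windows extend the same idea: the denominator becomes the alarmed fraction of the full time-space volume rather than of time alone.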

A comparison of observed ground motion parameters from the recent Gorkha, Nepal earthquake of 25 April 2015 (Mw 7.8) with ground motion parameters predicted using existing attenuation relations for the Himalayan region will be presented. The earthquake took about 8000 lives and destroyed thousands of poor-quality buildings, and it was felt by millions of people living in Nepal, China, India, Bangladesh, and Bhutan. Knowledge of ground motion parameters is very important in developing seismic codes for earthquake-prone regions like the Himalaya, for better design of buildings. The ground motion parameters recorded during the mainshock and aftershocks are compared with attenuation relations for the Himalayan region; the predicted ground motion parameters show good correlation with the observations. The results will be of great use to civil engineers in updating existing building codes in the Himalayan and surrounding regions, and also for the evaluation of seismic hazards. The results clearly show that only the attenuation relation developed for the Himalayan region should be used; attenuation relations based on other regions fail to provide good estimates of the observed ground motion parameters.

The M ~ 8.3-8.4 earthquake of 25 November 1941 was one of the largest submarine strike-slip earthquakes ever recorded in the Northeast (NE) Atlantic basin. This event occurred along the Eurasia-Nubia plate boundary between the Azores and the Strait of Gibraltar. After the earthquake, tide stations in the NE Atlantic recorded a small tsunami with maximum amplitudes of 40 cm peak to trough in the Azores and Madeira islands. In this study, we present a re-evaluation of the earthquake epicentre location using seismological data not included in previous studies. We invert the tsunami travel times to obtain a preliminary tsunami source location using the backward ray tracing (BRT) technique. We invert the tsunami waveforms to infer the initial sea surface displacement using empirical Green's functions, without prior assumptions about the geometry of the source. The results of the BRT simulation locate the tsunami source quite close to the new epicentre. This suggests that the co-seismic deformation of the earthquake induced the tsunami. The waveform inversion of tsunami data favours the conclusion that the earthquake ruptured an approximately 160 km segment of the plate boundary, in the eastern section of the Gloria Fault between -20.249° and -18.630° E. The results presented here contribute to the evaluation of tsunami hazard in the Northeast Atlantic basin.

Since the continent-continent collision about 55 Ma ago, the Himalaya has accommodated 2000 km of convergence along its arc. Strain energy is accumulated at a convergence rate of 37-44 mm/yr and released at times as earthquakes. The Garhwal Himalaya is located on the western side of a seismic gap where a great earthquake has been overdue for at least 200 years. This seismic gap (the Central Seismic Gap: CSG), with a 52% probability of a future great earthquake, lies between the rupture zones of two significant/great earthquakes, viz. the 1905 Kangra earthquake of M 7.8 and the 1934 Bihar-Nepal earthquake of M 8.0; the most recent event, the 2015 Gorkha earthquake of M 7.8, is on the eastern side of this seismic gap (CSG). The Garhwal Himalaya is one of the ideal locations in the Himalaya where all the major Himalayan structures and the Himalayan Seismicity Belt (HSB) can be described and studied. In the present study, we present a spatio-temporal analysis of relocated local micro- to moderate earthquakes recorded by a seismicity monitoring network that has been operational since 2007. The earthquake locations are relocated using the HypoDD (double-difference hypocenter method for earthquake relocation) program. The dataset from July 2007 to September 2015 has been used in this study to estimate spatio-temporal relationships, moment tensor (MT) solutions for earthquakes of M>3.0, stress tensors, and their interactions. We have also used composite focal mechanism solutions for small earthquakes. The majority of the MT solutions show a thrust-type mechanism and are located near the mid-crustal ramp (MCR) structure of the detachment surface at 8-15 km depth beneath the outer Lesser Himalaya and Higher Himalaya regions. The prevailing stress has been identified as compressional towards NNE-SSW, which is the direction of relative plate motion between the Indian and Eurasian continental plates. The low friction coefficient estimated along with the stress inversions

Scientific Earthquake Studies Advisory Committee meeting notice. AGENCY: U.S. Geological Survey. ACTION: Notice of meeting. SUMMARY: Pursuant to Public Law 106-503, the Scientific Earthquake Studies Advisory Committee (SESAC) will hold its next meeting. Meetings of the committee are open to the public; seating may be limited due to room capacity. DATES: The...

In this manuscript, we introduce a framework for developing earthquake forecasts using Virtual Quake (VQ), the generalized successor to the perhaps better known Virtual California (VC) earthquake simulator. We discuss the basic merits and mechanics of the simulator, and we present several statistics of interest for earthquake forecasting. We also show that, though the system as a whole (in aggregate) behaves quite randomly, (simulated) earthquake sequences limited to specific fault sections exhibit measurable predictability in the form of increased seismicity precursory to large m > 7 earthquakes. To quantify this, we develop an alert-based forecasting metric and show that it exhibits significant information gain compared to random forecasts. We also discuss the long-standing question of activation versus quiescent type earthquake triggering. We show that VQ exhibits both behaviours separately for independent fault sections; some fault sections exhibit activation type triggering, while others are better characterized by quiescent type triggering. We discuss these aspects of VQ specifically with respect to faults in the Salton Basin and near the El Mayor-Cucapah region in southern California, USA and northern Baja California, Mexico.

The ground shaking assessment allows quantifying the hazards associated with the occurrence of earthquakes. Chile and western Canada are two areas that have experienced, and are susceptible to, imminent large crustal, in-slab and megathrust earthquakes that can affect the population significantly. In this context, we compare the current GMPEs used in the 2015 National Building Code of Canada and the most recent GMPEs calculated for Chile with observed accelerations generated by four recent Chilean megathrust earthquakes (MW ≥ 7.7) that have occurred during the past decade, which is essential to quantify how well current models predict observations of major events. We collected the 3-component waveform data of more than 90 stations from the Centro Sismologico Nacional and the Universidad de Chile, and processed them by removing the trend and applying a band-pass filter. Then, for each station, we obtained the Peak Ground Acceleration (PGA), and by using a damped response spectrum we calculated the Pseudo Spectral Acceleration (PSA). Finally, we compared those observations with the most recent Chilean and Canadian GMPEs. Given the lack of geotechnical information for most of the Chilean stations, we also used a new method to obtain VS30 by inverting H/V ratios using a trans-dimensional Bayesian inversion, which allows us to improve the correction of observations according to soil conditions. As expected, our results show a good fit between observations and the Chilean GMPEs, but we observe that although the shape of the Canadian GMPEs is coherent with the distribution of observations, in general they underpredict the observations for PGA and PSA at shorter periods for most of the considered earthquakes. An example of this can be seen in the attached figure for the case of the 2014 Iquique earthquake. These results have important implications for the hazards associated with large earthquakes, especially for western Canada, where the probability of a
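A comparison like the one described reduces, at its core, to computing residuals between observed ground motions and GMPE predictions in natural-log space, where a positive mean residual means the model underpredicts. The sketch below uses a generic functional form with invented coefficients; the actual Chilean and Canadian GMPEs have published coefficient tables plus site, event-type and magnitude-saturation terms not reproduced here:

```python
import math

def gmpe_ln_pga(mag, r_km, c0=-3.5, c1=1.0, c2=-1.5, c3=10.0):
    """ln(PGA) = c0 + c1*M + c2*ln(R + c3): a generic GMPE shape.

    The coefficients are placeholders for illustration only.
    """
    return c0 + c1 * mag + c2 * math.log(r_km + c3)

def mean_ln_residual(observed_pga, mags, dists):
    """Mean of ln(observed) - ln(predicted); > 0 implies underprediction."""
    res = [math.log(obs) - gmpe_ln_pga(m, r)
           for obs, m, r in zip(observed_pga, mags, dists)]
    return sum(res) / len(res)
```

Binning these residuals by period, distance, or site class is the usual next step when diagnosing where a model such as the Canadian GMPEs falls short.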

The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
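For the single-scenario, single-GMPM base case that the paper generalizes, the conditional spectrum follows a standard closed form: the conditional mean of ln Sa at each period Ti is the GMPM mean plus rho(Ti, T*) times epsilon(T*) times the GMPM standard deviation, and the conditional standard deviation is sigma_i * sqrt(1 - rho_i^2). A sketch with illustrative argument names:

```python
import math

def conditional_spectrum(mu, sigma, rho, eps_star):
    """Single-scenario, single-GMPM conditional spectrum.

    mu, sigma: GMPM mean and standard deviation of ln Sa at each period Ti
    rho:       correlation of ln Sa between Ti and the conditioning period T*
    eps_star:  epsilon at T*, i.e. (ln Sa_target(T*) - mu(T*)) / sigma(T*)
    Returns (conditional means, conditional standard deviations) of ln Sa.
    """
    c_mu = [m + r * eps_star * s for m, s, r in zip(mu, sigma, rho)]
    c_sig = [s * math.sqrt(1.0 - r * r) for s, r in zip(sigma, rho)]
    return c_mu, c_sig
```

The exact calculation described in the paper replaces this single (mu, sigma) pair by a deaggregation-weighted combination over multiple causal earthquakes and GMPMs.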

OBJECTIVE: The features of earthquake-related head injuries may differ from those of injuries sustained in daily life because of differences in circumstances. We aim to compare the features of head traumas caused by the Sichuan earthquake with those of other common head traumas using multidetector computed tomography. METHODS: In total, 221 patients with earthquake-related head traumas (the earthquake group) and 221 patients with other common head traumas (the non-earthquake group) were enrolled in our study, and their computed tomographic findings were compared. We focused on the differences between fractures and intracranial injuries and on the relationships between extracranial and intracranial injuries. RESULTS: More earthquake-related cases had only extracranial soft tissue injuries (50.7% vs. 26.2%, RR = 1.9), and fewer cases had intracranial injuries (17.2% vs. 50.7%, RR = 0.3) compared with the non-earthquake group. For patients with fractures and intracranial injuries, there were fewer cases with craniocerebral injuries in the earthquake group (60.6% vs. 77.9%, RR = 0.8), and the earthquake-injured patients had fewer fractures and intracranial injuries overall (1.5±0.9 vs. 2.5±1.8; 1.3±0.5 vs. 2.1±1.1). Compared with the non-earthquake group, the incidences of soft tissue injuries and of cranial fractures combined with intracranial injuries in the earthquake group were significantly lower (9.8% vs. 43.7%, RR = 0.2; 35.1% vs. 82.2%, RR = 0.4). CONCLUSION: As depicted with computed tomography, the severity of earthquake-related head traumas in survivors was milder, and isolated extracranial injuries were more common in earthquake-related head traumas than in non-earthquake-related injuries, which may have been the result of different injury causes, mechanisms and settings. PMID:22012045

This study considers uncertainties in material strengths and in modeling, which have important effects on structural resistance, on the basis of reliability theory. After analyzing the failure mechanism of an RC bridge, structural limit-state functions and reliability measures were derived, and the safety level of the piers of a reinforced concrete continuous girder bridge with stochastic structural parameters under earthquake loading was analyzed. Using the response surface method to calculate the failure probabilities of bridge piers under a high-level earthquake, their seismic reliability for different damage states within the design reference period was calculated applying two-stage design, which describes to some extent the seismic safety level of existing bridges.
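The failure probabilities behind such a reliability analysis can be illustrated by plain Monte Carlo estimation of P[g(X) < 0] for a limit-state function g (resistance minus demand); the response surface method used in the study accelerates this by first replacing the expensive structural model with a fitted polynomial. The function names and the normal resistance/demand example below are invented for illustration:

```python
import random

def mc_failure_probability(g, sample, n=100_000, seed=1):
    """Monte Carlo estimate of the failure probability P[g(X) < 0].

    g:      limit-state function (failure when g < 0)
    sample: callable drawing one random input vector given an RNG
    """
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if g(sample(rng)) < 0.0)
    return failures / n

# Hypothetical pier check: resistance R ~ N(3, 1) versus demand S ~ N(1, 1);
# the analytic failure probability is Phi(-2/sqrt(2)) ~ 0.079, a useful
# cross-check on the sampled estimate.
```

In a two-stage design setting, the same estimator would simply be run once per damage state with the corresponding limit-state function.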

In Switzerland, nearly all historical Mw ~ 6 earthquakes have induced damaging landslides, rockslides and snow avalanches that, in some cases, also resulted in damage to infrastructure and loss of lives. We describe the customisation to Swiss conditions of a globally calibrated statistical approach originally developed to rapidly assess earthquake-induced landslide likelihoods worldwide. The probability of occurrence of such earthquake-induced effects is modelled through a set of geospatial susceptibility proxies and peak ground acceleration. The predictive model is tuned to capture the observations from past events and optimised for near-real-time estimates based on USGS-style ShakeMaps routinely produced by the Swiss Seismological Service. Our emphasis is on the use of high-resolution geospatial datasets along with additional local information on ground failure susceptibility. Even if calibrated on historic events with moderate magnitudes, the methodology presented in this paper yields sensible results also for low-magnitude recent events. The model is integrated in the Swiss ShakeMap framework. This study has a high practical relevance to many Swiss ShakeMap stakeholders, especially those managing lifeline systems, and to other global users interested in conducting a similar customisation for their region of interest.

To improve global weather prediction and space weather monitoring, six microsatellites termed the Formosa Satellite 3 - Constellation Observing System for Meteorology, Ionosphere, and Climate (FORMOSAT-3/COSMIC) were launched into a circular low-Earth orbit (LEO) from Vandenberg Air Force Base, California, at 0140 UTC on 15 April 2006. Each microsatellite of the joint Taiwan-US satellite constellation mission carries a GPS occultation experiment (GOX) payload to perform atmospheric and ionospheric radio occultation, a tiny ionospheric photometer (TIP) to observe the nighttime ionospheric airglow OI 135.6 nm emission, and a tri-band beacon (TBB) to tomographically estimate fine structures of ionospheric electron density on the satellite-to-receiver plane. While GOX observes about 2500 vertical electron density profiles daily up to the satellite altitude, TIP provides accurate horizontal gradients of nighttime electron density. In this study, anomalies in the ionospheric electron density structure and dynamics concurrently observed by FORMOSAT-3/COSMIC and co-located ground-based GPS receivers before recent large earthquakes are presented and discussed.

The 2015 Illapel, Chile earthquake was recorded by a wide range of seismic, geodetic and oceanographic instruments. The USGS-assigned magnitude 8.3 earthquake produced a tsunami that was recorded trans-oceanically at both tide gauges and deep-water tsunami pressure sensors. The event also generated surface deformation along the Chilean coast that was recovered through ascending and descending paths of the Sentinel-1A satellite. Additionally, seismic waves were recorded across various global seismic networks. While the determination of the rupture source through seismic and geodetic means is now commonplace, and has been studied extensively in this fashion for the Illapel event, the use of tsunami datasets in the inversion process, rather than purely as a forward validation of models, is less common. In this study, we evaluate the use of both near- and far-field tsunami pressure gauges in the source inversion process, examining their contribution to seismic and geodetic joint inversions, as well as the contribution of dispersive and elastic loading parameters to the numerical tsunami propagation. We determine that the inclusion of near-field tsunami pressure gauges assists in resolving the degree of slip in the near-trench environment, where purely geodetic inversions lose most resolvability. The inclusion of a far-field dataset has the potential to add further confidence to tsunami inversions, albeit at a high computational cost. When applied to the Illapel earthquake, this added near-trench resolvability leads to a better estimation of tsunami arrival times at near-field gauges and contributes understanding to the wide variation in tsunamigenic slip present along the highly active Peru-Chile trench.

Volcanoes are extremely effective transmitters of matter, energy and information from the deep Earth towards its surface. Their capacities as information carriers are, however, far from fully exploited. Volcanic conduits can be viewed in general as rod-like or sheet-like vertical features of relatively homogeneous composition and structure crosscutting geological structures of far greater complexity and compositional heterogeneity. Information-carrying signals such as earthquake precursor signals originating deep below the Earth's surface are transmitted with much less loss of information through homogeneous vertically extended structures than through the horizontally segmented heterogeneous lithosphere or crust. Volcanic conduits can thus be viewed as upside-down "antennas" or waveguides which can serve as privileged pathways for any possible earthquake precursor signal. In particular, conduits of monogenetic volcanoes are promising transmitters of deep Earth information to be received and decoded at surface monitoring stations, because of the expected more homogeneous nature of their rock fill compared to polygenetic volcanoes. Among monogenetic volcanoes, those with dominantly effusive activity appear to be the best candidates for privileged earthquake monitoring sites. In more detail, effusive monogenetic volcanic conduits filled with rocks of primitive parental magma composition, indicating direct ascent from sub-lithospheric magma-generating areas, are the most suitable. Further selection criteria may include the age of the volcanism considered and the presence of mantle xenoliths in surface volcanic products, indicating a direct and straightforward link between the deep lithospheric mantle and the surface through the conduit. Innovative earthquake prediction research strategies can be based and developed on these grounds by considering conduits of selected extinct monogenetic volcanoes and deep trans-crustal fractures as privileged emplacement sites of seismic monitoring stations

We present an analytical, seismologically consistent expression for the surface area of the region within which most landslides triggered by an earthquake are located (landslide distribution area). This expression is based on scaling laws relating seismic moment, source depth, and focal mechanism with ground shaking and fault rupture length and assumes a globally constant threshold of acceleration for onset of systematic mass wasting. The seismological assumptions are identical to those recently used to propose a seismologically consistent expression for the total volume and area of landslides triggered by an earthquake. To test the accuracy of the model we gathered geophysical information and estimates of the landslide distribution area for 83 earthquakes. To reduce uncertainties and inconsistencies in the estimation of the landslide distribution area, we propose an objective definition based on the shortest distance from the seismic wave emission line containing 95 % of the total landslide area. Without any empirical calibration the model explains 56 % of the variance in our dataset, and predicts 35 to 49 out of 83 cases within a factor of 2, depending on how we account for uncertainties on the seismic source depth. For most cases with comprehensive landslide inventories we show that our prediction compares well with the smallest region around the fault containing 95 % of the total landslide area. Aspects ignored by the model that could explain the residuals include local variations of the threshold of acceleration and processes modulating the surface ground shaking, such as the distribution of seismic energy release on the fault plane, the dynamic stress drop, and rupture directivity. Nevertheless, its simplicity and first-order accuracy suggest that the model can yield plausible and useful estimates of the landslide distribution area in near-real time, with earthquake parameters issued by standard detection routines.

We are conducting validation studies on the temporal-spatial pattern of pre-earthquake signatures in the atmosphere and ionosphere associated with M>7 earthquakes in 2015. Our approach is based on the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) physical concept integrated with multi-sensor networking analysis (MSNA) of several non-correlated observations that can potentially yield predictive information. In this study we present two types of results: (1) prospective testing of MSNA-LAIC for M7+ earthquakes in 2015, and (2) retrospective analysis of temporal-spatial variations in the atmosphere and ionosphere several days before the two M7.8 and M7.3 Nepal earthquakes and the M8.3 Chile earthquake. During the prospective test, 18 earthquakes of M>7 occurred worldwide, of which 15 were alerted in advance with lead times between 2 and 30 days and with different levels of accuracy. The retrospective analysis included different physical parameters from space: outgoing long-wavelength radiation (OLR, obtained from NPOES and NASA/AQUA) at the top of the atmosphere, atmospheric potential (ACP, obtained from NASA assimilation models), and electron density variations in the ionosphere via GPS Total Electron Content (GPS/TEC). Concerning the M7.8 Nepal earthquake of April 24, a rapid increase of OLR reached its maximum on April 21-22. GPS/TEC data indicate maximum values during the April 22-24 period. A strong negative TEC anomaly was detected in the crest of the EIA (Equatorial Ionospheric Anomaly) on April 21st and a strong positive one on April 24th, 2015. For the May 12 M7.3 aftershock, similar pre-earthquake patterns in OLR and GPS/TEC were observed. Concerning the M8.3 Chile earthquake of Sept 16, the strongest OLR transient feature was observed on Sept 12. GPS/TEC analysis data confirm abnormal values on Sept 14. Also on the same day, degradation of the EIA and disappearance of its crests, as is characteristic of pre-dawn and early morning hours (11 LT), was observed. On Sept 16 co-seismic ionospheric signatures consistent with defined circular

Accuracy and resolution are complementary properties necessary to interpret the results of earthquake location and tomography studies. Accuracy is how close an answer is to the "real world", and resolution is how small a node spacing or earthquake error ellipse one can achieve. We have modified SimulPS (Thurber, 1986) in several ways to provide a tool for evaluating the accuracy and resolution of potential micro-earthquake networks. First, we generate synthetic travel times from synthetic three-dimensional geologic models and earthquake locations. We use these to calculate the errors in earthquake location and velocity inversion results when we perturb the models and attempt to invert back to them. We can create as many stations as desired and a synthetic velocity model with any desired node spacing. We apply this study to SimulPS and TomoDD inversion studies. "Real" travel times are perturbed with noise, and hypocenters are perturbed to replicate a starting location away from the "true" location, before inversion is performed by each program. We establish travel times with the pseudo-bending ray tracer and use the same ray tracer in the inversion codes; this, of course, limits our ability to test the accuracy of the ray tracer itself. We developed relationships for the accuracy and resolution expected as a function of the number of earthquakes and recording stations for typical tomographic inversion studies. Velocity grid spacing started at 1 km and was then decreased to 500 m, 100 m, 50 m and finally 10 m to see whether resolution with decent accuracy at that scale was possible. We considered accuracy to be good when we could invert a velocity model perturbed by 50% back to within 5% of the original model, and resolution to be the size of the grid spacing. We found that 100 m resolution could be obtained by using 120 stations with 500 events, but this is our current limit. The limiting factors are the size of the computers needed for the large arrays in the inversion and a
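The synthetic-test idea (generate travel times from a known model and source, then check how well an inversion recovers them) can be shown at toy scale with a grid-search epicentre location in a uniform half-space. All station coordinates and the velocity value are invented; SimulPS and TomoDD of course solve the full coupled 3-D location/velocity problem:

```python
import math

V = 6.0  # assumed uniform P-wave velocity, km/s

def travel_time(src, sta):
    """Straight-ray travel time between two (x, y) points, in seconds."""
    return math.dist(src, sta) / V

def locate(stations, obs_times, grid):
    """Grid point minimising travel-time misfit; the unknown origin time
    is removed by demeaning the residuals, as in standard location codes."""
    def misfit(pt):
        res = [t - travel_time(pt, s) for t, s in zip(obs_times, stations)]
        mean = sum(res) / len(res)
        return sum((r - mean) ** 2 for r in res)
    return min(grid, key=misfit)
```

Perturbing obs_times with noise and re-running locate gives a direct, if simplistic, measure of location accuracy as a function of network geometry, which is the spirit of the accuracy tests described above.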

The paper aims at giving a few methodological suggestions for deterministic earthquake prediction studies based on combined ground-based and space observations of earthquake precursors. What has been lacking up to now is the demonstration of a causal relationship, with explained physical processes, by looking for correlations between data gathered simultaneously and continuously by space observations and ground-based measurements. Coordinated space and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board LEO satellites. To this purpose, a new result reported in the paper is an original and specific space mission project (ESPERIA) and two instruments of its payload. The ESPERIA space project has been designed for the Italian Space Agency, and three ESPERIA instruments (the ARINA and LAZIO particle detectors, and the EGLE search-coil magnetometer) have been built and tested in space. The EGLE experiment started on April 15, 2005 on board the ISS, within the ENEIDE mission. The launch of ARINA occurred on June 15, 2006, on board the RESURS DK-1 Russian LEO satellite. As an introduction to and justification of these experiments, the paper clarifies some basic concepts and critical methodological aspects concerning deterministic and statistical approaches and their use in earthquake prediction. We also take the liberty of giving the scientific community a few critical hints based on our personal experience in the field and propose a joint study devoted to earthquake prediction and warning.

This project's aim is to design a telemetric system able to collect data from a digitizer, transform it into an appropriate form, and transfer it to a central system where on-line data elaboration takes place. On-line mathematical elaboration (fractal analysis) of pre-seismic electromagnetic signals and instant display may lead to reliable earthquake prediction methodologies. Ad-hoc connections and heterogeneous topologies form the core network, while wired and wireless means cooperate for accurate and on-time transmission. The data are considered very sensitive, so transmission needs to be instant. All stations are situated in rural places in order to prevent electromagnetic interference; this imposes continuous monitoring and the provision of backup data links. The central stations collect the data of every station and allocate them properly in a predefined database. Special software is designed to elaborate the incoming data mathematically and export it graphically. The development included digitizer design, workstation software design, transmission protocol study and simulation in OPNET, database programming, mathematical data elaboration, and software development for graphical representation. The whole package was tested under laboratory conditions and then in real conditions. The project would be of great interest to the scientific community should this platform eventually be implemented and installed across the Greek countryside at large scale. The platform is designed in such a way that data mining techniques and mathematical elaboration are possible and any extension can be accommodated. The main specialization of this project is that these mechanisms and mathematical transformations can be applied to live data, enabling rapid exploitation of the real meaning of the measured and stored data. The primary intention of this study is to help and alleviate the analysis process

No previous studies have systematically assessed the psychological functioning of medical students following a major disaster. We aimed to describe the psychological functioning of medical students following the earthquakes in Canterbury, New Zealand, and to identify predictors of adverse psychological functioning. Seven months following the most severe earthquake, medical students completed the Depression, Anxiety and Stress Scale (DASS), the Post-Traumatic Stress Disorder Checklist, the Eysenck Personality Questionnaire, the Connor-Davidson Resilience Scale, the Work and Adjustment Scale, and Likert scales assessing psychological functioning at worst and currently. A substantial minority of medical students reported moderate to extreme difficulties on the DASS subscales 7 months following the most severe earthquake (Depression = 12%; Anxiety = 9%; Stress = 10%). Multiple linear modelling produced a model that predicted 27% of the variance in total DASS scores. Variables contributing significantly to the model were: year of medical course, presence of mental health problems prior to the earthquakes, not being New Zealand European, and higher retrospectively rated neuroticism prior to the earthquakes. Around 10% of medical students experienced moderate to extreme psychological difficulties 7 months following the most severe earthquake on 22 February 2011. Specific groups at high risk of ongoing psychological symptomatology could be identified.

We carried out multi-sensor observations in our investigation of phenomena preceding major earthquakes. Our approach is based on a systematic analysis of several physical and environmental parameters that we found to be associated with the earthquake process: thermal infrared radiation, temperature and concentration of electrons in the ionosphere, radon/ion activities, and air temperature/humidity in the atmosphere. We used satellite and ground observations and interpreted them with the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model, one of the possible paradigms we study and support. We made two independent continuous hind-cast investigations in Taiwan and Japan for a total of 102 earthquakes (M>6) occurring from 2004 to 2011. We analyzed: (1) ionospheric electromagnetic radiation, plasma and energetic electron measurements from DEMETER; (2) emitted long-wavelength radiation (OLR) from NOAA/AVHRR and NASA/EOS; (3) radon/ion variations (in situ data); and (4) GPS Total Electron Content (TEC) measurements collected from space and ground-based observations. This joint analysis of ground and satellite data has shown that one to six (or more) days prior to the largest earthquakes there were anomalies in all of the analyzed physical observations. For the March 11, 2011 Tohoku earthquake, our analysis shows again the same relationship between several independent observations characterizing lithosphere/atmosphere coupling. On March 7th we found a rapid increase of emitted infrared radiation observed from satellite data, and subsequently an anomaly developed near the epicenter. The GPS/TEC data indicated an increase and variation in electron density reaching a maximum value on March 8. Beginning on this day we confirmed an abnormal TEC variation over the epicenter in the lower ionosphere. These findings revealed the existence of atmospheric and ionospheric phenomena occurring prior to the 2011 Tohoku earthquake, which indicated new evidence of a distinct

Provides a survey and a review of earthquake activity and global tectonics from the advancement of the theory of continental drift to the present. Topics include: an identification of the major seismic regions of the earth, seismic measurement techniques, seismic design criteria for buildings, and the prediction of earthquakes.

Recent earthquake disasters have highlighted the importance of multi- and trans-disciplinary studies of earthquake risk. A major component of earthquake disaster risk analysis is hazards research, which should cover not only a traditional assessment of ground shaking, but also studies of geodetic, paleoseismic, geomagnetic, hydrological, deep drilling and other geophysical and geological observations, together with comprehensive modeling of earthquakes and forecasting of extreme events. Extreme earthquakes (large-magnitude and rare events) are manifestations of the complex behavior of the lithosphere, structured as a hierarchical system of blocks of different sizes. Understanding of the physics and dynamics of extreme events comes from observations, measurements and modeling. A quantitative approach to simulating earthquakes in models of fault dynamics will be presented. The models reproduce basic features of the observed seismicity (e.g., the frequency-magnitude relationship, clustering of earthquakes, occurrence of extreme seismic events). They provide a link between geodynamic processes and seismicity; allow the study of extreme events, of the influence of fault-network properties on seismic patterns and seismic cycles; and assist, in a broader sense, in earthquake forecast modeling. Some aspects of the predictability of large earthquakes (how well can large earthquakes be predicted today?) will also be discussed, along with possibilities for the mitigation of earthquake disasters (e.g., 'inverse' forensic investigations of earthquake disasters).

Earthquake forecasting and prediction has been one of the key struggles of modern geosciences for the last few decades. A large number of approaches for various time periods have been developed for different locations around the world. A categorization and review of more than 20 new and old methods was undertaken to develop a state-of-the-art catalogue of forecasting algorithms and methodologies. The different methods have been categorized into time-independent, time-dependent and hybrid methods, the last group comprising methods that use additional data beyond historical earthquake statistics. Such a categorization is necessary to distinguish pure statistical approaches, for which historical earthquake data are the only direct data source, from algorithms that incorporate further information, e.g., spatial data on fault distributions, or that incorporate physical models such as static triggering to indicate future earthquakes. Furthermore, the location of application has been taken into account to identify methods that can be applied, e.g., in active tectonic regions like California or in less active continental regions. In general, most of the methods cover well-known high-seismicity regions like Italy, Japan or California. Many more elements have been reviewed, including the application of established theories and methods, e.g., for the determination of the completeness magnitude, or whether the modified Omori law was used. Target temporal scales are identified, as well as the publication history. All these different aspects have been reviewed and catalogued to provide an easy-to-use tool for the development of earthquake forecasting algorithms and an overview of the state of the art.

Ground water can facilitate earthquake development and respond physically and chemically to tectonism. Thus, an understanding of ground water circulation in seismically active regions is important for earthquake prediction. To investigate the roles of ground water in the development and prediction of earthquakes, geological and hydrogeological monitoring was conducted in a seismogenic area in the Yanhuai Basin, China. This study used isotopic and hydrogeochemical methods to characterize ground water samples from six hot springs and two cold springs. The hydrochemical data and associated geological and geophysical data were used to identify possible relations between ground water circulation and seismically active structural features. The data for δ18O, δD, tritium, and 14C indicate ground water from hot springs is of meteoric origin with subsurface residence times of 50 to 30,320 years. The reservoir temperatures and circulation depths of the hot ground water are 57°C to 160°C and 1600 to 5000 m, respectively, as estimated by quartz and chalcedony geothermometers and the geothermal gradient. Various possible origins of noble gases dissolved in the ground water were also evaluated, indicating mantle and deep crust sources consistent with tectonically active segments. A hard intercalated stratum, where small to moderate earthquakes frequently originate, is present between a deep (10 to 20 km), high-electrical-conductivity layer and the zone of active ground water circulation. The ground water anomalies are closely related to the structural peculiarity of each monitoring point. These results could have implications for ground water and seismic studies in other seismogenic areas.
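The quartz geothermometer invoked above is commonly written in the Fournier (1977) no-steam-loss form, T(°C) = 1309/(5.19 - log10 C) - 273.15, with C the dissolved silica concentration in mg/kg. A minimal sketch; the 100 mg/kg concentration below is illustrative, not a value from this study:

```python
import math

def quartz_geothermometer_c(silica_mg_per_kg):
    # Fournier (1977) quartz geothermometer (no steam loss), roughly valid
    # for 0-250 degrees C; input is dissolved SiO2 in mg/kg.
    return 1309.0 / (5.19 - math.log10(silica_mg_per_kg)) - 273.15

# Illustrative sample with 100 mg/kg dissolved silica gives a reservoir
# temperature inside the 57-160 degrees C range reported in the abstract.
print(round(quartz_geothermometer_c(100.0), 1))
```

The chalcedony geothermometer used alongside it has the same logarithmic form with different coefficients.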

The purpose of this study was to assess the effect of earthquake instruction on students' earthquake content knowledge and preparedness for earthquakes. This study combined innovative direct instruction on earthquake science content and concepts with an inquiry-based group activity on earthquake safety, followed by an earthquake simulation and preparedness video, to help middle school students understand and prepare for the regional seismic threat. A convenience sample of 384 sixth and seventh grade students at two small middle schools in southern Illinois was used in this study. Qualitative information was gathered using open-ended survey questions, classroom observations, and semi-structured interviews. Quantitative data were collected using a 21 item content questionnaire administered to test students' General Earthquake Knowledge, Local Earthquake Knowledge, and Earthquake Preparedness Knowledge before and after instruction. A pre-test and post-test survey Likert scale with 21 items was used to collect students' perceptions and attitudes. Qualitative data analysis included quantification of student responses to the open-ended questions and thematic analysis of observation notes and interview transcripts. Quantitative datasets were analyzed using descriptive and inferential statistical methods, including t tests to evaluate the differences in mean scores between paired groups before and after interventions and one-way analysis of variance (ANOVA) to test for differences between mean scores of the comparison groups. Significant mean differences between groups were further examined using a Dunnett's C post hoc statistical analysis. Integration and interpretation of the qualitative and quantitative results of the study revealed a significant increase in general, local and preparedness earthquake knowledge among middle school students after the interventions. The findings specifically indicated that these students felt most aware and prepared for an earthquake after an

The observed ground motions from five large aftershocks of the 1999 Chi-Chi, Taiwan, earthquake are compared with predictions from four equations based primarily on data from California. The four equations for active tectonic regions are those developed by Abrahamson and Silva (1997), Boore et al. (1997), Campbell (1997, 2001), and Sadigh et al. (1997). Comparisons are made for horizontal-component peak ground accelerations and 5%-damped pseudoacceleration response spectra at periods between 0.02 sec and 5 sec. The observed motions are in reasonable agreement with the predictions, particularly for distances from 10 to 30 km. This is in marked contrast to the motions from the Chi-Chi mainshock, which are much lower than the predicted motions for periods less than about 1 sec. The results indicate that the low motions in the mainshock are not due to unusual, localized absorption of seismic energy, because waves from the mainshock and the aftershocks generally traverse the same section of the crust and are recorded at the same stations. The aftershock motions at distances of 30-60 km are somewhat lower than the predictions (but not nearly by as small a factor as those for the mainshock), suggesting that the ground motion attenuates more rapidly in this region of Taiwan than it does in the areas we compare with it. We provide equations for the regional attenuation of response spectra, which show increasing decay of motion with distance for decreasing oscillator periods. This observational study also demonstrates that ground motions have large earthquake-location-dependent variability for a specific site. This variability reduces the accuracy with which an earthquake-specific prediction of site response can be predicted. Online Material: PGAs and PSAs from the 1999 Chi-Chi earthquake and five aftershocks.

The intensity data for the California earthquake of April 18, 1906, are strongly dependent on distance from the zone of surface faulting and the geological character of the ground. Considering only those sites (approximately one square city block in size) for which there is good evidence for the degree of ascribed intensity, the empirical relation derived between 1906 intensities and distance perpendicular to the fault for 917 sites underlain by rocks of the Franciscan Formation is: Intensity = 2.69 - 1.90 log (Distance) (km). For sites on other geologic units intensity increments, derived with respect to this empirical relation, correlate strongly with the Average Horizontal Spectral Amplifications (AHSA) determined from 99 three-component recordings of ground motion generated by nuclear explosions in Nevada. The resulting empirical relation is: Intensity Increment = 0.27 + 2.70 log (AHSA), and average intensity increments for the various geologic units are -0.29 for granite, 0.19 for Franciscan Formation, 0.64 for the Great Valley Sequence, 0.82 for Santa Clara Formation, 1.34 for alluvium, 2.43 for bay mud. The maximum intensity map predicted from these empirical relations delineates areas in the San Francisco Bay region of potentially high intensity from future earthquakes on either the San Andreas fault or the Hayward fault.
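The two empirical relations quoted above can be combined in a short sketch. The coefficients and the alluvium increment (+1.34) are taken from the abstract; the 10 km distance and the choice of site are illustrative:

```python
import math

def intensity_1906(distance_km):
    # Empirical 1906 intensity vs. fault-perpendicular distance for sites
    # on the Franciscan Formation: I = 2.69 - 1.90 log10(D).
    return 2.69 - 1.90 * math.log10(distance_km)

def intensity_increment(ahsa):
    # Increment relative to the Franciscan baseline from the Average
    # Horizontal Spectral Amplification: dI = 0.27 + 2.70 log10(AHSA).
    return 0.27 + 2.70 * math.log10(ahsa)

# Illustrative prediction 10 km from the fault on alluvium, using the
# tabulated average increment for alluvium rather than a measured AHSA.
predicted = intensity_1906(10.0) + 1.34
print(round(predicted, 2))
```

Note that site-specific predictions would use the AHSA relation directly where spectral amplification measurements exist, falling back on the tabulated unit averages otherwise.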

A large earthquake is estimated to occur off Miyagi prefecture, northeast Japan, within 20 years at a probability of about 80%. In order to predict this earthquake, we have observed groundwater temperature in a borehole at Sendai city, 100 km west of the asperity. This borehole penetrates the fault zone of a NE-trending active reverse fault, the Nagamachi-Rifu fault zone, at 820 m depth. Our concept for the groundwater observation is that fault zones are natural amplifiers of crustal strain, and hence at 820 m depth we set a very precise quartz temperature sensor with a resolution of 0.0002 deg. C. We confirmed that our observation system works normally through both pumping tests and the systematic temperature changes at different depths. Since the observation started on June 20, 2004, we have found mysterious intermittent temperature fluctuations of two types: one with a period of 5-10 days and an amplitude of ca. 0.1 deg. C, and the other with a period of 11-21 days and an amplitude of ca. 0.2 deg. C. Based on an examination using the product of the Grashof and Prandtl numbers, natural convection of water can occur in the borehole. However, since these temperature fluctuations are observed only at depths around 820 m, it is likely that they represent hydrological features specific to the Nagamachi-Rifu fault zone. It is noteworthy that small temperature changes correlated with the earth tide are superposed on the long-term, large-amplitude fluctuations. The amplitude on the days of the full moon and new moon is ca. 0.001 deg. C. The bottoms of these temperature fluctuations always lag about 6 hours behind the peaks of the earth tide. This is interpreted as water in the borehole being drawn into the fault zone, on which tensional normal stress acts on the days of the full moon and new moon. The amplitude of the crustal strain due to the earth tide was measured at ca. 2×10^-8 strain near our observation site. High frequency temperature noise of

In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. This phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP), an international infrastructure aimed at quantitatively evaluating earthquake prediction and forecast models through purely prospective and reproducible experiments. In the OEF system, the two most popular short-term models were used: the Epidemic-Type Aftershock Sequence (ETAS) model and the Short-Term Earthquake Probabilities (STEP) model. Here, we report the results of OEF's 24-hour earthquake forecasting during the main phases of the 2016-2017 sequence in the Central Apennines (Italy).

We test a means to predict strong ground motion using the Mw=7.4 and Mw=7.2 1999 Izmit and Duzce, Turkey earthquakes. We generate 100 rupture scenarios for each earthquake, constrained by prior knowledge, and use these to synthesize strong ground motion and make the prediction. Ground motion is synthesized with the representation relation using impulsive point source Green's functions and synthetic source models. We synthesize the earthquakes from DC to 25 Hz. We demonstrate how to incorporate this approach into standard probabilistic seismic hazard analyses (PSHA). The synthesis of earthquakes is based upon analysis of over 3,000 aftershocks recorded by several seismic networks. The analysis provides source parameters of the aftershocks; records available for use as empirical Green's functions; and a three-dimensional velocity structure from tomographic inversion. The velocity model is linked to a finite difference wave propagation code (E3D, Larsen 1998) to generate synthetic Green's functions (DC < f < 0.5 Hz). We performed the simultaneous inversion for hypocenter locations and three-dimensional P-wave velocity structure of the Marmara region using SIMULPS14 along with 2,500 events. We also obtained source moment and corner frequency and individual station attenuation parameter estimates for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model out of small- to moderate-size earthquake (M<4.0) recordings to obtain empirical Green's functions for the higher frequency range of ground motion (0.5 < f < 25.0 Hz). Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.

The quality of space-time earthquake prediction is usually characterized by a 2-D error diagram (n, τ), where n is the fraction of failures-to-predict and τ is the local rate of alarm averaged in space. The most reasonable averaging measure for analysis of a prediction strategy is the normalized rate of target events λ(dg) in a subarea dg. In that case the quantity H = 1 - (n + τ) determines the prediction capability of the strategy. The uncertainty of λ(dg) causes difficulties in estimating H and the statistical significance, α, of prediction results. We investigate this problem theoretically and show how the uncertainty of the measure can be taken into account in two situations, viz., the estimation of α and the construction of a confidence zone for the (n, τ)-parameters of random strategies. We use our approach to analyse the results from prediction of M >= 8.0 events by the M8 method for the period 1985-2009 (the M8.0+ test). The model of λ(dg) based on the events Mw >= 5.5, 1977-2004, and the magnitude range of target events 8.0 <= M < 8.5 are considered as basic to this M8 analysis. We find the point and upper estimates of α and show that they are still unstable because the number of target events in the experiment is small. However, our results argue in favour of the non-triviality of the M8 prediction algorithm.
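The prediction-capability measure H = 1 - (n + τ) can be illustrated with a toy calculation; the n and τ values below are invented for illustration, not results from the M8.0+ test:

```python
def prediction_skill(n, tau):
    """Error-diagram skill H = 1 - (n + tau), where n is the fraction of
    failures-to-predict and tau is the alarm rate averaged in space with
    the target-event measure lambda(dg) as weight."""
    return 1.0 - (n + tau)

# A random (unskilled) strategy lies on the diagonal n + tau = 1, so H = 0:
h_random = prediction_skill(0.7, 0.3)

# Illustrative skilled strategy: misses 20% of target events while keeping
# the weighted alarm rate at 30%, giving H = 0.5.
h_skilled = prediction_skill(0.2, 0.3)
print(h_random, h_skilled)
```

The statistical-significance question treated in the abstract is exactly whether an observed H could plausibly have come from a point inside the confidence zone of such random strategies.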

Residuals between ground‐motion data and ground‐motion prediction equations (GMPEs) can be decomposed into terms representing earthquake source, path, and site effects. These terms can be cast in terms of repeatable (epistemic) residuals and the random (aleatory) components. Identifying the repeatable residuals leads to a GMPE with reduced uncertainty for a specific source, site, or path location, which in turn can yield a lower hazard level at small probabilities of exceedance. We illustrate a schematic framework for this residual partitioning with a dataset from the ANZA network, which straddles the central San Jacinto fault in southern California. The dataset consists of more than 3200 1.15≤M≤3 earthquakes and their peak ground accelerations (PGAs), recorded at close distances (R≤20 km). We construct a small‐magnitude GMPE for these PGA data, incorporating VS30 site conditions and geometrical spreading. Identification and removal of the repeatable source, path, and site terms yield an overall reduction in the standard deviation from 0.97 (in ln units) to 0.44, for a nonergodic assumption, that is, for a single‐source location, single site, and single path. We give examples of relationships between independent seismological observables and the repeatable terms. We find a correlation between location‐based source terms and stress drops in the San Jacinto fault zone region; an explanation of the site term as a function of kappa, the near‐site attenuation parameter; and a suggestion that the path component can be related directly to elastic structure. These correlations allow the repeatable source location, site, and path terms to be determined a priori using independent geophysical relationships. Those terms could be incorporated into location‐specific GMPEs for more accurate and precise ground‐motion prediction.
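The partition of residuals into repeatable source and site terms can be sketched by successive averaging over repeated events and stations. The dataset below is a toy with invented values; the actual study would use thousands of records, include path terms, and typically fit the terms with mixed-effects regression rather than plain means:

```python
from collections import defaultdict
from statistics import mean, pstdev

# Toy ln-residuals (data minus GMPE prediction) keyed by (event, station).
residuals = {
    ("eq1", "stA"): 0.9, ("eq1", "stB"): 0.5,
    ("eq2", "stA"): 0.6, ("eq2", "stB"): 0.2,
}

# Repeatable source terms: average residual per event.
by_event = defaultdict(list)
for (eq, st), r in residuals.items():
    by_event[eq].append(r)
event_term = {eq: mean(v) for eq, v in by_event.items()}

# Remove source terms, then estimate repeatable site terms from the rest.
after_event = {k: r - event_term[k[0]] for k, r in residuals.items()}
by_site = defaultdict(list)
for (eq, st), r in after_event.items():
    by_site[st].append(r)
site_term = {st: mean(v) for st, v in by_site.items()}

# What remains after removing the repeatable terms is the aleatory part;
# its standard deviation is smaller than that of the raw residuals.
remaining = [r - site_term[k[1]] for k, r in after_event.items()]
print(pstdev(list(residuals.values())), pstdev(remaining))
```

In this toy the residuals are fully explained by the event and site terms, mimicking the reduction from 0.97 to 0.44 ln units reported for the nonergodic case.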

Earthquakes are commonly seen as unpredictable. Even when scientists believe an earthquake is likely, it is still hard to interpret the observed indications, as well as their theoretical and practical implications. There is some controversy surrounding the concept of using animals as precursors of earthquakes. Nonetheless, several institutes at the University of Natural Resources and Life Sciences and the Vienna University of Technology (both Vienna, Austria), Syiah Kuala University (Banda Aceh, Indonesia), and Terramath Indonesia (Buleleng, Indonesia) cooperate in a long-term project, funded by Red Bull Media House (Salzburg, Austria), which aims to take a decisive step from anecdotal to scientific evidence of these interdependencies and to show their possible use in forecasting seismic hazard on a short-term basis. Though no conclusive research has yet been published, an idea in this study is that even if animals do not respond to specific geophysical precursors with enough notice to enable earthquake forecasting on that basis, they may at least enhance, in conjunction with other indications, the degree of certainty we can attach to a prediction of an impending earthquake. In Indonesia, indeed, before the great earthquakes of 2004 and 2005, ominous geophysical as well as biological phenomena occurred (but were recognized as precursors only in retrospect). Numerous comparable stories can be told from other times and regions. Nearly 2000 perceptible earthquakes (> M3.5) occur each year in Indonesia. In 2007, the government launched a program, focused on West Sumatra, for investigating earthquake precursors. Therefore, Indonesia is an excellent target area for a study of possible interconnections between geophysical and biological earthquake precursors. Geophysical and atmospheric measurements and behavioral observation of several animal species (elephant, domestic cattle, water buffalo, chicken, rat, catfish) are conducted in three areas

Ancient earthquakes can leave their mark in the mythical practices and literary accounts of ancient peoples, the stratigraphy of their site histories, and the structural integrity of their constructions. The ancient Greek/Roman city of Cnidus in southwestern Turkey records all three. A spectacular exposed fault plane cliff bordering the northern edge of the city appears to have been an important revered site, bearing votive niches carved into the near-vertical slip plane and associated with a Sanctuary of Demeter that implies a connection to the underworld. Stratigraphic evidence for earthquake faulting can be found in the form of a destruction horizon of contorted soil, relics and human remains exposed in the original excavations of the Sanctuary of Demeter by Sir Charles Newton (1857-58) and in a destruction horizon of burnt soil and bone uncovered by the ongoing excavation of a colonnaded street. Structural damage to constructions is widespread across the site, with warped and offset walls in the Sanctuary of Demeter, collapsed buildings in several places, and a parallel arrangement of fallen columns in the colonnaded street. The most remarkable structural evidence for fault activity, however, is the rupture of the ancient city's famous Round Temple of Aphrodite, whose podium reveals a history of damage and which is unambiguously displaced across a bedrock fault. While these phenomena are equivocal when viewed in isolation, collectively they imply at least two damaging earthquakes at the site, one (possibly both) of which ruptured along the fault on which the city is found. The Cnidus case study highlights how reliable identification of archaeoseismic damage relies on compiling an assemblage of indicators rather than the discovery of a diagnostic "smoking gun".

It is generally accepted that earthquakes (EQs) are among the most dangerous natural phenomena, leading to major losses of human life and economic damage. Space observations must be included in the global chain of EQ precursor monitoring, at least as an initial warning to pay greater attention to ground-segment data. It is commonly agreed that only by combining multiple observation sites and sets of monitored parameters can further progress be made in raising the probability of detecting EQ precursors. Two important questions must be answered before planning any experiment to study ionospheric precursors of EQs. First, do variations in the ionosphere that are definitely connected with the EQ preparation process exist? Second, if they do, can these signals be used to reliably identify EQ precursors and serve, if not for prediction, then at least as a warning that an EQ in a given area is approaching? The first successful mission dedicated to this problem was DEMETER (in orbit for more than 6 years, from June 2004 until December 2010). The statistics of this study are impressive: altogether, about 9000 EQs with magnitude larger than M = 5.0 and depth shallower than 40 km occurred all over the world during the analyzed period. The resulting conclusion is that there are real perturbations in the ionosphere connected with seismic activity, but they are rather weak and, at the present stage of data processing, can be revealed only with the help of statistical analysis. To study ionospheric precursors, it is first imperative to clarify the mechanism of energy transfer along the chain “lithosphere-atmosphere-ionosphere”. Many hypotheses about such a mechanism exist, of which the most supported are fair-weather currents (FWC) and atmospheric gravity waves (AGW), both of which have their pros and cons. The following minimal set of physical

Although local and regional instrumental recordings of the devastating 26, January 2001, Bhuj earthquake are sparse, the distribution of macroseismic effects can provide important constraints on the mainshock ground motions. We compiled available news accounts describing damage and other effects and interpreted them to obtain modified Mercalli intensities (MMIs) at >200 locations throughout the Indian subcontinent. These values are then used to map the intensity distribution throughout the subcontinent using a simple mathematical interpolation method. Although preliminary, the maps reveal several interesting features. Within the Kachchh region, the most heavily damaged villages are concentrated toward the western edge of the inferred fault, consistent with western directivity. Significant sediment-induced amplification is also suggested at a number of locations around the Gulf of Kachchh to the south of the epicenter. Away from the Kachchh region, intensities were clearly amplified significantly in areas that are along rivers, within deltas, or on coastal alluvium, such as mudflats and salt pans. In addition, we use fault-rupture parameters inferred from teleseismic data to predict shaking intensity at distances of 0-1000 km. We then convert the predicted hard-rock ground-motion parameters to MMI by using a relationship (derived from Internet-based intensity surveys) that assigns MMI based on the average effects in a region. The predicted MMIs are typically lower by 1-3 units than those estimated from news accounts, although they do predict near-field ground motions of approximately 80%g and potentially damaging ground motions on hard-rock sites to distances of approximately 300 km. For the most part, this discrepancy is consistent with the expected effect of sediment response, but it could also reflect other factors, such as unusually high building vulnerability in the Bhuj region and a tendency for media accounts to focus on the most dramatic damage, rather than

We propose that catastrophic events are “outliers” with statistically different properties than the rest of the population and result from mechanisms involving amplifying critical cascades. We describe a unifying approach for modeling and predicting these catastrophic events or “ruptures,” that is, sudden transitions from a quiescent state to a crisis. Such ruptures involve interactions between structures at many different scales. Applications and the potential for prediction are discussed in relation to the rupture of composite materials, great earthquakes, turbulence, abrupt changes of weather regimes, financial crashes, and human parturition (birth). Future improvements will involve combining ideas and tools from statistical physics and artificial/computational intelligence, to identify and classify possible universal structures that occur at different scales, and to develop application-specific methodologies to use these structures for prediction of the “crises” known to arise in each application of interest. We live on a planet and in a society with intermittent dynamics rather than a state of equilibrium, and so there is a growing and urgent need to sensitize students and citizens to the importance and impacts of ruptures in their multiple forms.

The significant increase in seismicity rate in the central and eastern United States since 2009 has drawn wide attention to the potential seismic hazard. Unfortunately, most moderate earthquakes in this region lack near-fault strong motion records, limiting in-depth studies. The 2016/11/07 M 5.0 Cushing, Oklahoma earthquake and its fore/aftershock sequence, which was monitored by four strong motion stations within 10 km of the mainshock epicenter, is the only exception. According to the Oklahoma Geological Survey, no M>1.5 earthquake occurred before 2013 within 5 km of the mainshock epicenter, but 110 foreshocks, including two M>4 events, had occurred before the mainshock initiation. The close-fault records also revealed that the M>4 foreshocks and the mainshock excited unusually high levels of strong ground motion. For example, the 2015/10/10 Mw 4.3 Cushing earthquake resulted in peak ground acceleration (PGA) and peak ground velocity (PGV) up to 0.6 g and 8.3 cm/s, respectively. Simply correcting for geometric spreading (1/R, where R is hypocentral distance) leads to mean PGA and PGV of 0.2 g and 3.6 cm/s at R=10 km, which are 4-8 times the average values inferred from the NGA-West dataset (Archuleta and Ji, 2016). Here we constrain the slip history of the Cushing mainshock and its M>4 foreshocks using strong motion waveforms and compare them with the results of other moderate Oklahoma earthquakes. Our preliminary analysis of the mainshock leads to a preferred model of heterogeneous dextral slip on a vertical fault plane oriented N60°E, with three major rupture stages. The rupture initiated at a depth of 4.1 km, within the "cloud" of foreshocks. The first subevent has a rupture duration of 0.7 s and accounts for 20% of the total seismic moment (Mw 4.4). After a delay of 0.5 s, a slip patch just outside the foreshock "cloud" and 2-3 km away from the hypocenter broke. From 1.2 s to 1.7 s, 45% of the total seismic moment (Mw 4.7) was quickly released. The rest of the seismic moment (35%, Mw 4
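The 1/R geometric-spreading correction to a 10 km reference distance described above can be written compactly. The observation values in the example are illustrative; a real reduction would also account for anelastic attenuation and site terms:

```python
def correct_to_reference(amplitude, r_km, r_ref_km=10.0):
    # Scale an observed amplitude (PGA or PGV) to a reference hypocentral
    # distance assuming pure 1/R geometric spreading: A_ref = A * (R / R_ref).
    return amplitude * (r_km / r_ref_km)

# Illustrative: a PGA of 0.6 g observed at 5 km hypocentral distance maps
# to 0.3 g at the 10 km reference used in the abstract.
print(correct_to_reference(0.6, 5.0))
```

Averaging such reduced amplitudes over all close-fault records is what yields the mean R=10 km values quoted for comparison with the NGA-West dataset.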

Global earthquake occurrence rate displays an exponential decay down to ~300 km and then peaks around 550 to 600 km before terminating abruptly near 700 km. How fractures initiate, nucleate, and propagate at these depths remains one of the greatest puzzles in earth science, as increasing pressure inhibits fracture propagation. We report nanoseismological analysis on high-resolution acoustic emission (AE) records obtained during ruptures triggered by partial transformation from olivine to spinel in Mg2GeO4, an analog to the dominant mineral (Mg,Fe)2SiO4 olivine in the upper mantle, using state-of-the-art seismological techniques, in the laboratory. AEs’ focal mechanisms, as well as their distribution in both space and time during deformation, are carefully analyzed. Microstructure analysis shows that AEs are produced by the dynamic propagation of shear bands consisting of nanograined spinel. These nanoshear bands have a near constant thickness (~100 nm) but varying lengths and self-organize during deformation. This precursory seismic process leads to ultimate macroscopic failure of the samples. Several source parameters of AE events were extracted from the recorded waveforms, allowing close tracking of event initiation, clustering, and propagation throughout the deformation/transformation process. AEs follow the Gutenberg-Richter statistics with a well-defined b value of 1.5 over three orders of moment magnitudes, suggesting that laboratory failure processes are self-affine. The seismic relation between magnitude and rupture area correctly predicts AE magnitude at millimeter scales. A rupture propagation model based on strain localization theory is proposed. Future numerical analyses may help resolve scaling issues between laboratory AE events and deep-focus earthquakes.
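A Gutenberg-Richter b value such as the 1.5 reported for the AE catalog is typically estimated with the Aki (1965) maximum-likelihood formula, b = log10(e) / (mean(M) - Mc). The catalog below is synthetic (exponentially distributed magnitudes), not the laboratory data:

```python
import math
import random

def b_value_mle(mags, m_c, dm=0.0):
    # Aki (1965) maximum-likelihood b-value for magnitudes above the
    # completeness magnitude m_c; dm is an optional binning correction.
    above = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - (m_c - dm / 2.0))

# Synthetic catalog obeying Gutenberg-Richter with true b = 1.5 above
# an (arbitrary, AE-scale) completeness magnitude of -8.
random.seed(0)
beta = 1.5 * math.log(10.0)  # b in natural-log units
mags = [-8.0 + random.expovariate(beta) for _ in range(50000)]
print(round(b_value_mle(mags, -8.0), 2))
```

With a large enough catalog the estimator recovers the input b value closely; for small catalogs a sample-size correction (e.g. Shi and Bolt, 1982, for the uncertainty) is usually reported alongside.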

We illustrate two essential consequences of the systematic difference between moment magnitude and local magnitude for small earthquakes, illuminating the underlying earthquake physics. Moment magnitude, M ∝ (2/3) log M0, is uniformly valid for all earthquake sizes [Hanks and Kanamori, 1979]. However, the relationship between local magnitude ML and moment is itself magnitude dependent. For moderate events, 3 < M < 7, M and ML are coincident; for earthquakes smaller than M 3, ML ∝ log M0 [Hanks and Boore, 1984]. This is a consequence of the saturation of the apparent corner frequency fc as it becomes greater than the largest observable frequency, fmax; in this regime, stress drop no longer controls ground motion. This implies that ML and M differ by a factor of 1.5 for these small events. While this idea is not new, its implications are important as more small-magnitude data are incorporated into earthquake hazard research. With a large dataset of M < 3 earthquakes recorded on the ANZA network, we demonstrate striking consequences of the difference between M and ML. ML scales as the log of peak ground motion (e.g., PGA or PGV) for these small earthquakes, which yields log PGA ∝ log M0 [Boore, 1986]. We plot nearly 15,000 records of PGA and PGV at close stations, adjusted for site conditions and for geometrical spreading to 10 km. The slope of the log of ground motion is 1.0·ML, or 1.5·M, confirming the relationship and that fc >> fmax. Just as importantly, if this relation is overlooked, prediction of large-magnitude ground motion from small earthquakes will be misguided. We also consider the effect of this magnitude-scale difference on b-value. The oft-cited b-value of 1 should hold for small magnitudes, given M. Use of ML necessitates b = 2/3 for the same data set; use of mixed, or unknown, magnitudes complicates the matter further. This is of particular import when estimating the rate of large earthquakes when one has limited data on their recurrence, as is the case for
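The b-value bookkeeping described above follows directly from the slope difference: for M < 3, dML/dM ≈ 1.5, so a Gutenberg-Richter slope measured on M maps to b/1.5 on ML. A minimal sketch (the function name is illustrative):

```python
# Converting a Gutenberg-Richter b-value between magnitude scales.
# For small events ML ~ log M0 while M ~ (2/3) log M0, so the local-
# magnitude axis is stretched by ~1.5 relative to moment magnitude,
# and log N = a - b*M becomes log N = a' - (b/1.5)*ML.

def b_in_local_magnitude(b_moment, dml_dm=1.5):
    """Equivalent Gutenberg-Richter slope when the catalog of small
    events is labeled with ML instead of M."""
    return b_moment / dml_dm

print(round(b_in_local_magnitude(1.0), 3))  # 0.667
```

This is why, as the abstract notes, a b-value of 1 in M corresponds to b = 2/3 for the same small-magnitude data set expressed in ML.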

Short-term earthquake prediction requires sensitive instruments for measuring the small anomalous changes in stress and strain that precede earthquakes. Instruments installed at or near the surface have proven too noisy for measuring anomalies of the size expected to occur, and it is now recognized that even to have the possibility of a reliable earthquake-prediction system will require instruments installed in drill holes at depths sufficient to reduce the background noise to a level below that of the expected premonitory signals. We are conducting experiments to determine the maximum signal-to-noise improvement that can be obtained in drill holes. In a 592 m well in the Mojave Desert near Hi Vista, California, we measured water-level changes with amplitudes greater than 10 cm, induced by earth tides. By removing the effects of barometric pressure and the stress related to earth tides, we have achieved a sensitivity to volumetric strain rates of 10⁻⁹ to 10⁻¹⁰ per day. Further improvement may be possible, and it appears that a successful earthquake-prediction capability may be achieved with an array of instruments installed in drill holes at depths of about 1 km, assuming that the premonitory strain signals are, in fact, present. © 1985 Birkhäuser Verlag.
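Removing barometric and tidal effects from a water-level series amounts to a least-squares regression on the known reference signals, with the residual carrying the strain information. The sketch below uses entirely synthetic data and invented amplitudes, not the Hi Vista processing.

```python
# Minimal sketch: regress well water level on barometric pressure and a
# tidal reference, subtract the fitted part, and keep the residual.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 30.0, 1 / 24)                  # 30 days, hourly
tide = 5.0 * np.sin(2 * np.pi * t / 0.5175)       # M2 tide, ~12.42 h period
baro = rng.normal(0, 1, t.size).cumsum() * 0.01   # synthetic pressure drift
level = 0.8 * baro + tide + rng.normal(0, 0.05, t.size)  # synthetic level

# Design matrix: barometric pressure, tidal reference, constant offset.
G = np.column_stack([baro, tide, np.ones_like(t)])
coef, *_ = np.linalg.lstsq(G, level, rcond=None)
residual = level - G @ coef                       # candidate strain signal

print(residual.std() < tide.std())                # True: tides removed
```

The quoted 10⁻⁹ to 10⁻¹⁰ per day sensitivity reflects how small the residual can be driven once the dominant tidal and barometric terms are fitted out.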

There are two kinds of purposes in studies of earthquake prediction or forecasting: one is to give a systematic estimation of earthquake risk in a particular region and period in order to advise governments and enterprises on disaster reduction; the other is to search for reliable precursors that can be used to improve earthquake predictions or forecasts. For the first purpose a complete score is necessary, while for the latter a partial score, which can be used to evaluate whether the forecasts or predictions have some advantage over a well-known model, is needed. This study reviews different scoring methods for evaluating the performance of earthquake predictions and forecasts. In particular, the recently developed gambling scoring method shows its capacity to find good points in an earthquake prediction algorithm or model that are absent from a reference model, even if its overall performance is no better than that of the reference model.

There is no clearer example of the increase in hazard due to prolonged and amplified shaking in sedimentary basins than the case of Mexico City in the 1985 Michoacan earthquake. It is critically important to identify which other cities might be susceptible to similar basin amplification effects. Physics-based simulations in 3D crustal structure can be used to model and anticipate those effects, but they rely on our knowledge of the complexity of the medium. We propose a parallel approach to validate ground-motion simulations using the ambient seismic field. We compute the Earth's impulse response by combining the ambient seismic field and coda waves, enforcing causality and symmetry constraints. We correct the surface impulse responses to account for source depth, mechanism, and duration using a 1D approximation of the local surface-wave excitation. We call the new responses virtual earthquakes. We validate the ground motion predicted from the virtual earthquakes against moderate earthquakes in southern California. We then combine temporary seismic stations on the southern San Andreas Fault and extend the point-source approximation of the Virtual Earthquake Approach to model finite kinematic ruptures. We confirm the coupling between source directivity and amplification in downtown Los Angeles seen in simulations.

The July 2015 Mw 7.0 Solomon Islands tsunamigenic earthquake occurred ~40 km north of the February 2013 Mw 8.0 Santa Cruz earthquake. The proximity of the two epicenters provided a unique opportunity for a comparative study of their source mechanisms and tsunami generation. The 2013 earthquake was an interplate event with a thrust focal mechanism at a depth of 30 km, while the 2015 event was a normal-fault earthquake occurring at a shallow depth of 10 km in the overriding Pacific Plate. A combined use of tsunami and teleseismic data from the 2015 event revealed the north-dipping fault plane and a rupture velocity of 3.6 km/s. Stress transfer analysis revealed that the 2015 earthquake occurred in a region with increased Coulomb stress following the 2013 earthquake. Spectral deconvolution, using the 2015 tsunami as an empirical Green's function, indicated source periods of the 2013 Santa Cruz tsunami of 10 and 22 min.

Assessments of seismic hazard, building vulnerability, and human loss are essential for educational institutions, since their buildings are used by many students, lecturers, researchers, and guests. The University of the Philippines Los Baños (UPLB) is located in an earthquake-prone area, where an earthquake could cause structural damage and injury to the UPLB community. We conducted earthquake assessments for different magnitudes and times of day to predict the possible ground shaking, building vulnerability, and estimated number of casualties in the UPLB community. Data preparation in this study includes earthquake scenario modeling using Intensity Prediction Equations (IPEs) for shallow crustal shaking attenuation to produce intensity maps of bedrock and surface. Earthquake models were generated from segment IV and segment X of the Valley Fault System (VFS). The vulnerability of different building types was calculated using fragility curves for Philippine buildings. Population data for each building at various occupancy times, together with damage-ratio and injury-ratio data, were used to compute the number of casualties. The results reveal that earthquake models from segment IV and segment X of the VFS could generate earthquake intensities between 7.6 and 8.1 MMI on the UPLB campus. A 7.7 Mw earthquake on segment IV (scenario I) could damage 32% - 51% of buildings, and a 6.5 Mw earthquake on segment X (scenario II) could cause structural damage to 18% - 39% of UPLB buildings. If the earthquake occurs at 2 PM (daytime), it could injure 10.2% - 18.8% of the UPLB population in scenario I and 7.2% - 15.6% in scenario II. A 5 PM event is predicted to injure 5.1% - 9.4% in scenario I and 3.6% - 7.8% in scenario II. A nighttime event (2 AM) would cause injuries to students and guests who stay in dormitories; the earthquake is predicted to injure 13 - 66 students and guests in scenario I and 9 - 47 people in the
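The casualty bookkeeping described above is, at its core, a product of occupancy, damage ratio, and injury ratio per building. The sketch below uses placeholder numbers, not the UPLB figures or fragility curves.

```python
# Illustrative casualty estimate for one building and one scenario:
# expected injuries = occupants at that hour x structural damage ratio
#                     x injury ratio among the exposed population.

def expected_injuries(occupants, damage_ratio, injury_ratio):
    """Expected number of injured for a single building/time window."""
    return occupants * damage_ratio * injury_ratio

# Hypothetical: 500 occupants at 2 PM, 40% damage, 15% injury ratio.
print(expected_injuries(500, 0.40, 0.15))  # 30.0
```

Summing this quantity over all campus buildings, for each occupancy time (2 PM, 5 PM, 2 AM) and each fault-segment scenario, yields ranges like those reported in the abstract.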

The subduction zone in northern Chile is a well-identified seismic gap that last ruptured in 1877. On 1 April 2014, this region was struck by a large earthquake following a two-week-long series of foreshocks. This study combines a wide range of observations, including geodetic, tsunami, and seismic data, to produce a reliable kinematic slip model of the Mw=8.1 main shock and a static slip model of the Mw=7.7 aftershock. We use a novel Bayesian modeling approach that accounts for uncertainty in the Green's functions, both static and dynamic, while avoiding nonphysical regularization. The results reveal a sharp slip zone, more compact than previously thought, located downdip of the foreshock sequence and updip of high-frequency sources inferred by back-projection analysis. Neither the main shock nor the Mw=7.7 aftershock ruptured to the trench, and most of the seismic gap was left unbroken, leaving the possibility of a future large earthquake in the region.

A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of the individual earthquake parameters differ, hence the metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original parameter space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in the [0, 1] interval, and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: finding hierarchically closest neighbours in time-space and assessing temporal variations of earthquake clustering in a specific 4-D phase space.
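The ED transform can be sketched compactly; the version below uses a plain empirical CDF (rank transform) where the paper uses kernel-estimated distributions, and the catalog values are synthetic stand-ins.

```python
# Sketch of the equivalent-dimension (ED) idea: map each earthquake
# parameter through its cumulative distribution so every coordinate
# lies in [0, 1], then measure Euclidean distances in the new space.

import numpy as np

def to_equivalent_dimension(x):
    """Empirical-CDF (rank) transform of a parameter vector to (0, 1]."""
    ranks = np.argsort(np.argsort(x))      # 0-based rank of each sample
    return (ranks + 1) / len(x)

rng = np.random.default_rng(1)
mags = rng.exponential(1.0, 1000)          # stand-in for magnitudes
times = rng.uniform(0.0, 100.0, 1000)      # stand-in for origin times

ed = np.column_stack([to_equivalent_dimension(mags),
                      to_equivalent_dimension(times)])
# Distance between the first two events in ED space is now Euclidean:
d01 = float(np.linalg.norm(ed[0] - ed[1]))
print(0.0 <= d01 <= np.sqrt(2))            # bounded by the unit square
```

Because each axis is a probability, no parameter (magnitude, time, depth, ...) dominates the distance by virtue of its physical units, which is the point of the construction.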

The M 4.5 southwestern Indiana earthquake of 18 June 2002 triggered 46 blast monitors in Indiana, Illinois, and Kentucky. The resulting free-field particle velocity records, along with similar data from previous earthquakes in the study area, provide a clear standard for judging the reliability of current maps for predicting ground motions greater than 2 Hz in southwestern Indiana and southeastern Illinois. Peak horizontal accelerations and velocities, and 5% damped pseudo-accelerations for the earthquake, generally exceeded ground motions predicted for the top of the bedrock by factors of 2 or more, even after soil amplifications were taken into consideration. It is suggested, but not proven, that the low shear-wave velocity and weathered bedrock in the area are also amplifying the higher-frequency ground motions that have been repeatedly recorded by the blast monitors in the study area. It is also shown that there is a good correlation between the peak ground motions and 5% pseudo-accelerations recorded for the event, and the Modified Mercalli intensities interpreted for the event by the U.S. Geological Survey.

When the time comes that earthquakes can be predicted accurately, what shall we do with the knowledge? This was the theme of a November 1975 conference on earthquake warning and response held in San Francisco, called by Assistant Secretary of the Interior Jack W. Carlson. Invited were officials of state and local governments from Alaska, California, Hawaii, Idaho, Montana, Nevada, Utah, Washington, and Wyoming, and representatives of the news media.

Earthquake prediction is a long-sought goal. Changes in groundwater chemistry before earthquakes in Iceland highlight a potential hydrogeochemical precursor, but such signals must be evaluated in the context of long-term, multiparametric data sets.

With the advance of China's urbanization, ensuring that people survive earthquakes requires scientific, routine emergency evacuation drills. Drawing on cellular automata, a shortest-path algorithm, and collision avoidance, we designed a model of earthquake emergency evacuation drills for school scenes. Based on this model, we built simulation software for earthquake emergency evacuation drills. The software performs the simulation by building a spatial structural model and selecting people's location information according to the actual conditions of the buildings. Based on the simulation results, drills can then be conducted in the same building. RFID technology can be used for drill data collection, reading personal information and sending it to the evacuation simulation software via WiFi. The simulation software then compares the simulated data with the actual evacuation process, including evacuation time, evacuation paths, congestion nodes, and so on. Finally, it provides a comparative analysis report with assessment results and an optimized proposal. We hope the earthquake emergency evacuation drill software and trainer can provide a whole-process disposal concept for earthquake emergency evacuation drills in assembly occupancies. The trainer can make earthquake emergency evacuation more orderly, efficient, reasonable, and scientific, increasing the capacity of cities to cope with hazards.
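The shortest-path component of such a model can be sketched as breadth-first search on a cellular grid. The floor layout below is invented for illustration; '#' marks a wall and 'E' an exit.

```python
# BFS shortest path on a grid floor plan, in the spirit of the
# cellular-automaton evacuation model described above.

from collections import deque

def steps_to_exit(grid, start):
    """Fewest 4-neighbour moves from start to a cell marked 'E';
    returns -1 if no exit is reachable."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        (r, c), d = queue.popleft()
        if grid[r][c] == "E":
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return -1

floor = ["....#",
         ".##.#",
         "...E#"]
print(steps_to_exit(floor, (0, 0)))  # 5
```

A full drill simulator would run one such query per occupant per time step, adding collision-avoidance rules so that agents queue at congested nodes rather than overlap.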

The Yellow Sea (a.k.a. the West Sea in Korea) is an epicontinental and semi-closed sea located between Korea and China. Recent earthquakes in the Yellow Sea, including but not limited to the Seogyuckryulbi-do (1 April 2014, magnitude 5.1), Heuksan-do (21 April 2013, magnitude 4.9), and Baekryung-do (18 May 2013, magnitude 4.9) earthquakes and the earthquake swarm in the Boryung offshore region in 2013, remind us of the seismic hazards affecting East Asia. This series of earthquakes in the Yellow Sea raised numerous questions. Unfortunately, both governments have trouble monitoring seismicity in the Yellow Sea because the earthquakes occur beyond their seismic networks. For example, the epicenters of the magnitude 5.1 earthquake in the Seogyuckryulbi-do region in 2014 reported by the Korea Meteorological Administration and the China Earthquake Administration differed by approximately 20 km. This illustrates the difficulty of seismic monitoring and locating earthquakes in the region, despite the huge effort made by both governments. A joint effort is required not only to overcome the limits posed by political boundaries and geographical location but also to study the seismicity and the underground structures responsible. Although the well-established and developing seismic networks in Korea and China have provided an unprecedented amount and quality of seismic data, the high-quality catalog is limited to the most recent decades, far shorter than a major earthquake cycle. It is also noted that the earthquake catalog from either country is biased toward its own territory and cannot provide a complete picture of seismicity in the Yellow Sea. In order to understand seismic hazard and tectonics in the Yellow Sea, a composite earthquake catalog has been developed. We gathered earthquake information covering the last 5,000 years from various sources. There are good reasons to believe that some listings describe the same earthquake but with different source parameters. We established criteria in order to provide consistent

During the past year, the grant supported research on several aspects of crustal deformation. The relation between earthquake displacements and fault dimensions was studied in an effort to find scaling laws that relate static parameters such as slip and stress drop to the dimensions of the rupture. Several implications of the static relations for the dynamic properties of earthquakes such as rupture velocity and dynamic stress drop were proposed. A theoretical basis for earthquake related phenomena associated with slow rupture growth or propagation, such as delayed multiple events, was developed using the stress intensity factor defined in fracture mechanics and experimental evidence from studies of crack growth by stress corrosion. Finally, extensive studies by Japanese geologists have established the offset across numerous faults in Japan over the last one hundred thousand years. These observations of intraplate faulting are being used to establish the spatial variations of the average strain rate of subregions in southern Japan.

The 5.12 Wenchuan earthquake and the 4.25 Nepal earthquake were of similar magnitude, but the climate and geographic environments were totally different. Our team carried out medical rescue in both disasters, so we compared the traumatic conditions of the wounded in the two earthquakes. The clinical data of the wounded rescued by Chengdu Military General Hospital in the 5.12 Wenchuan earthquake and the 4.25 Nepal earthquake were retrospectively analyzed. A contrast study between the wounded was then conducted in terms of age, sex, injury mechanisms, traumatic conditions, complications, and prognosis. Three days after the 5.12 Wenchuan earthquake, 465 wounded were hospitalized in Chengdu Military General Hospital, including 245 males (52.7%) and 220 females (47.3%) with an average age of (47.6±22.7) years. Our team carried out humanitarian relief in Kathmandu after the 4.25 Nepal earthquake. Three days after that disaster, 71 cases were treated in our field hospital, including 37 males (52.1%) and 34 females (47.9%) with a mean age of (44.8±22.9) years. There was no obvious difference in sex ratio or mean age between the two groups, but the age distribution differed somewhat: there were more wounded over 60 years of age in the 4.25 Nepal earthquake (p<0.01) and more wounded between 21 and 60 years in the 5.12 Wenchuan earthquake (p<0.05). The main cause of injury in both disasters was being struck by falling heavy objects, but the 5.12 Wenchuan earthquake had higher rates of bruise injury and crush injury (p<0.05) while the 4.25 Nepal earthquake had a higher rate of falling injury (p<0.01). Limb fracture was the most common injury type in both disasters. However, compared with the 5.12 Wenchuan earthquake, the 4.25 Nepal earthquake had a much higher incidence of limb fractures (p<0.01), lung infection (p<0.01) and malnutrition (p<0.05), but a lower incidence of thoracic injury (p<0.05) and multiple injury (p<0.05). The other complications and death rate

In the immediate aftermath of a major earthquake, the U.S. Geological Survey (USGS) will be called upon to provide information on the characteristics of the event to emergency responders and the media. One such piece of information is the expected surface displacement due to the earthquake. In conducting probabilistic hazard analyses for the San Francisco Bay Region, the Working Group on California Earthquake Probabilities (WGCEP) identified a series of scenario earthquakes involving the major faults of the region, and these were used in their 2003 report (hereafter referred to as WG03) and the recently released 2008 Uniform California Earthquake Rupture Forecast (UCERF). Here I present a collection of maps depicting the expected surface displacement resulting from those scenario earthquakes. The USGS has conducted frequent Global Positioning System (GPS) surveys throughout northern California for nearly two decades, generating a solid baseline of interseismic measurements. Following an earthquake, temporary GPS deployments at these sites will be important to augment the spatial coverage provided by continuous GPS sites for recording postseismic deformation, as will the acquisition of Interferometric Synthetic Aperture Radar (InSAR) scenes. The information provided in this report allows one to anticipate, for a given event, where the largest displacements are likely to occur. This information is valuable both for assessing the need for further spatial densification of GPS coverage before an event and prioritizing sites to resurvey and InSAR data to acquire in the immediate aftermath of the earthquake. In addition, these maps are envisioned to be a resource for scientists in communicating with emergency responders and members of the press, particularly during the time immediately after a major earthquake before displacements recorded by continuous GPS stations are available.

The current status of ionospheric precursor studies associated with large earthquakes (EQ) is summarized in this report. It is a joint endeavor of the "Ionosphere Precursor Study Task Group," which was formed with the support of the Mitsubishi Foundation in 2014-2015. The group promotes the study of ionosphere precursors (IP) to EQs and aims to prepare for a future EQ dedicated satellite constellation, which is essential to obtain the global morphology of IPs and hence demonstrate whether the ionosphere can be used for short-term EQ predictions. Following a review of the recent IP studies, the problems and specific research areas that emerged from the one-year project are described. Planned or launched satellite missions dedicated (or suitable) for EQ studies are also mentioned.

Because of their advantages of low cost, light weight, and the ability to photograph beneath cloud cover, UAVs have been widely used in seismic geomorphology research in recent years. Earthquake surface rupture is a typical seismotectonic landform that reflects the dynamic and kinematic characteristics of crustal movement. Quick identification of earthquake surface rupture is of great significance for understanding the mechanism of earthquake occurrence and the distribution and scale of disasters. Using an integrated differential UAV platform, a series of images with accurate POS data was acquired around the former urban area (Qushan town) of Beichuan County, an area seriously stricken by the 2008 Wenchuan Ms 8.0 earthquake. Based on multi-view 3D reconstruction, high-resolution DSM and DOM products were obtained from the differential UAV images. Through the shaded-relief and aspect maps derived from the DSM, the earthquake surface rupture was extracted and analyzed. The results show that the surface rupture can still be identified from the UAV images even though considerable time has elapsed since the earthquake; its middle segment is characterized by vertical movement caused by compressional deformation on the fault planes.

Few clues were found in the literature about the independent risk factors for PTSD among earthquake survivors in Sichuan province three years after the 2008 earthquake. Ours was the first case-control study, with matching factors of age and distance from the epicenter, among survivors aged 16 years or older three years after the catastrophe. To identify independent risk factors for PTSD among earthquake survivors, we performed a population-based matched case-control study. The cases were drawn from earthquake areas three years after the Wenchuan earthquake, including 113 cases who met positive criteria for PTSD symptoms according to the PCL-C (PTSD Checklist-Civilian Version) score and 452 controls who did not meet the criteria. Cases and controls were matched individually by birth year (± three years) and the town they lived in when the earthquake occurred. Independent risk factors for PTSD symptoms included two-week disease prevalence (odds ratio [OR], 1.92; 95% confidence interval [CI], 1.18-3.13), witnessing someone being killed in the earthquake (OR, 2.04; 95% CI, 1.17-3.58), having no regular income after the earthquake (OR, 0.52; 95% CI, 0.28-0.98), receiving mental health support only once after the earthquake (OR, 2.43; 95% CI, 1.09-5.42), and lower social support (lower PSSS score) (OR, 0.95; 95% CI, 0.93-0.97). Earthquake experience, suffering from physical illness, lack of stable income, and lower social support were associated with PTSD symptoms.

As one of the causative agents of viral hepatitis, hepatitis E virus (HEV) has gained public health attention globally. HEV epidemics occur in developing countries, associated with faecal contamination of water and poor sanitation. In industrialised nations, HEV infections are associated with travel to countries endemic for HEV, however, autochthonous infections, mainly through zoonotic transmission, are increasingly being reported. HEV can also be transmitted by blood transfusion. Nepal has experienced a number of HEV outbreaks, and recent earthquakes resulted in predictions raising the risk of an HEV outbreak to very high. This study aimed to measure HEV exposure in Nepalese blood donors after large earthquakes. Samples (n = 1,845) were collected from blood donors from Kathmandu, Chitwan, Bhaktapur and Kavre. Demographic details, including age and sex along with possible risk factors associated with HEV exposure were collected via a study-specific questionnaire. Samples were tested for HEV IgM, IgG and antigen. The proportion of donors positive for HEV IgM or IgG was calculated overall, and for each of the variables studied. Chi square and regression analyses were performed to identify factors associated with HEV exposure. Of the donors residing in earthquake affected regions (Kathmandu, Bhaktapur and Kavre), 3.2% (54/1,686; 95% CI 2.7-4.0%) were HEV IgM positive and two donors were positive for HEV antigen. Overall, 41.9% (773/1,845; 95% CI 39.7-44.2%) of donors were HEV IgG positive, with regional variation observed. Higher HEV IgG and IgM prevalence was observed in donors who reported eating pork, likely an indicator of zoonotic transmission. Previous exposure to HEV in Nepalese blood donors is relatively high. Detection of recent markers of HEV infection in healthy donors suggests recent asymptomatic HEV infection and therefore transfusion-transmission in vulnerable patients is a risk in Nepal. Surprisingly, this study did not provide evidence of a large

Several recently published reports have suggested that semi-stationary linear-cloud formations might be causally precursory to earthquakes. We examine the report of Guangmeng and Jie (2013), who claim to have predicted the 2012 M 6.0 earthquake in the Po Valley of northern Italy after seeing a satellite photograph (a digital image) showing a linear-cloud formation over the eastern Apennine Mountains of central Italy. From inspection of 4 years of satellite images we find numerous examples of linear-cloud formations over Italy. A simple test shows no obvious statistical relationship between the occurrence of these cloud formations and earthquakes that occurred in and around Italy. All of the linear-cloud formations we have identified in satellite images, including that which Guangmeng and Jie (2013) claim to have used to predict the 2012 earthquake, appear to be orographic – formed by the interaction of moisture-laden wind flowing over mountains. Guangmeng and Jie (2013) have not clearly stated how linear-cloud formations can be used to predict the size, location, and time of an earthquake, and they have not published an account of all of their predictions (including any unsuccessful predictions). We are skeptical of the validity of the claim by Guangmeng and Jie (2013) that they have managed to predict any earthquakes.
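
A "simple test" of association between linear-cloud days and earthquake days can be framed as a 2×2 contingency analysis. A minimal sketch with a Pearson chi-square statistic; the daily counts below are hypothetical and do not come from the report:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 degree of freedom) for a 2x2
    contingency table: rows = linear cloud seen / not seen that day,
    columns = earthquake within the chosen window / none."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical daily counts over ~4 years of imagery (not the paper's):
# 12 cloud+quake days, 188 cloud-only, 80 quake-only, 1180 neither.
stat = chi2_2x2(12, 188, 80, 1180)
independent = stat < 3.84   # 5% critical value of chi-square with 1 df
```

With counts like these, the statistic stays far below the critical value, i.e. no evidence of association, which is the kind of null result the authors describe.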

Real-time applications such as earthquake early warning (EEW) typically use empirical ground-motion prediction equations (GMPEs) along with event magnitude and source-to-site distances to estimate expected shaking levels. In this simplified approach, effects due to finite-fault geometry, directivity and site and basin response are often generalized, which may lead to a significant under- or overestimation of shaking from large earthquakes (M > 6.5) in some locations. For enhanced site-specific ground-motion predictions considering 3-D wave-propagation effects, we develop support vector regression (SVR) models from the SCEC CyberShake low-frequency (<0.5 Hz) and broad-band (0–10 Hz) data sets. CyberShake encompasses 3-D wave-propagation simulations of >415 000 finite-fault rupture scenarios (6.5 ≤ M ≤ 8.5) for southern California defined in UCERF 2.0. We use CyberShake to demonstrate the application of synthetic waveform data to EEW as a ‘proof of concept’, being aware that these simulations are not yet fully validated and might not appropriately sample the range of rupture uncertainty. Our regression models predict the maximum and the temporal evolution of instrumental intensity (MMI) at 71 selected test sites using only the hypocentre, magnitude and rupture ratio, which characterizes uni- and bilateral rupture propagation. Our regression approach is completely data-driven (where here the CyberShake simulations are considered data) and does not enforce pre-defined functional forms or dependencies among input parameters. The models were established from a subset (∼20 per cent) of CyberShake simulations, but can explain MMI values of all >400 k rupture scenarios with a standard deviation of about 0.4 intensity units. We apply our models to determine threshold magnitudes (and warning times) for various active faults in southern California that earthquakes need to exceed to cause at least ‘moderate’, ‘strong’ or ‘very strong’ shaking

Earthquakes can produce significant tree mortality and consequently affect regional carbon dynamics. Unfortunately, detailed studies quantifying the influence of earthquakes on forest mortality are currently rare. The committed forest biomass carbon loss associated with the 2008 Wenchuan earthquake in China is assessed in this study by a synthetic approach that integrates field investigation, remote sensing analysis, empirical models and Monte Carlo simulation. The newly developed approach significantly improved the forest disturbance evaluation by quantitatively defining the earthquake impact boundary and by using a detailed field survey to validate the mortality models. Based on our approach, a total biomass carbon of 10.9 Tg C was lost in the Wenchuan earthquake, offsetting 0.23% of the living biomass carbon stock in Chinese forests. Tree mortality was highly clustered at the epicenter and declined rapidly with distance from the fault zone. This suggests that earthquakes represent a significant driver of forest carbon dynamics, and that earthquake-induced biomass carbon loss should be included in estimates of forest carbon budgets.

We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are needed, in particular, to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for the Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters; the second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of the parameters of both the Poisson and NBD distributions on the catalogue magnitude threshold and on the temporal subdivision of the catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest the Poisson distribution can be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study the upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values of the skewness and kurtosis increase for smaller magnitude thresholds, and increase even more strongly for small temporal subdivisions of the catalogues. The Poisson distribution for large rate values approaches the Gaussian law, therefore its skewness
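
The moment comparison described above can be sketched directly from the closed-form moments of the two distributions: for Poisson(λ), skewness is λ^(-1/2) and excess kurtosis 1/λ; for the NBD with size r and success probability p, the variance exceeds the mean and the higher moments grow as r shrinks. The parameter values below are illustrative only:

```python
import math

def poisson_moments(lam):
    """(mean, variance, skewness, excess kurtosis) of Poisson(lam)."""
    return lam, lam, 1 / math.sqrt(lam), 1 / lam

def nbd_moments(r, p):
    """(mean, variance, skewness, excess kurtosis) of the negative
    binomial distribution with size r and success probability p
    (counting failures before the r-th success)."""
    mean = r * (1 - p) / p
    var = r * (1 - p) / p ** 2
    skew = (2 - p) / math.sqrt(r * (1 - p))
    exkurt = 6 / r + p ** 2 / (r * (1 - p))
    return mean, var, skew, exkurt

# Same mean rate (10 events per time bin), but the NBD is overdispersed,
# so its variance and higher moments exceed the Poisson values:
pm = poisson_moments(10)
nb = nbd_moments(2, 1 / 6)   # mean = 2 * (5/6) / (1/6) = 10
```

Comparing empirical skewness and kurtosis of binned earthquake counts to these theoretical values is one way to see why the Poisson model is rejected in favour of the NBD for clustered catalogues.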

We describe an Italian database of strong ground motion recordings and databanks delineating conditions at the instrument sites and characteristics of the seismic sources. The strong motion database consists of 247 corrected recordings from 89 earthquakes and 101 recording stations. Uncorrected recordings were drawn from public web sites and processed on a record-by-record basis using a procedure utilized in the Next-Generation Attenuation (NGA) project to remove instrument resonances, minimize noise effects through low- and high-pass filtering, and baseline correction. The number of available uncorrected recordings was reduced by 52% (mostly because of S-triggers) to arrive at the 247 recordings in the database. The site databank includes, for every recording site, the surface geology, a measurement or estimate of average shear wave velocity in the upper 30 m (Vs30), and information on instrument housing. Of the 89 sites, 39 have on-site velocity measurements (17 of which were performed as part of this study using SASW techniques). For the remaining sites, we estimate Vs30 based on measurements on similar geologic conditions where available. Where no local velocity measurements are available, correlations with surface geology are used. Source parameters are drawn from databanks maintained (and recently updated) by Istituto Nazionale di Geofisica e Vulcanologia and include hypocenter location and magnitude for small events (M < ~5.5) and finite source parameters for larger events. © 2009 A.S. Elnashai & N.N. Ambraseys.
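
The Vs30 values in the site databank are, by definition, time-averaged velocities: Vs30 = 30 / Σ(h_i / v_i) over the layers within the top 30 m. A minimal sketch with a hypothetical velocity profile:

```python
def vs30(layers):
    """Time-averaged shear-wave velocity over the top 30 m:
    Vs30 = 30 / sum(h_i / v_i). layers: (thickness_m, vs_m_per_s)
    pairs ordered from the surface; only the top 30 m contribute."""
    depth = travel_time = 0.0
    for h, v in layers:
        use = min(h, 30.0 - depth)
        if use <= 0:
            break
        travel_time += use / v
        depth += use
    return depth / travel_time

# Hypothetical profile: 5 m of soft soil over 10 m of stiff sediment
# over weathered rock.
v = vs30([(5, 180), (10, 360), (40, 760)])
```

Note the average is harmonic (travel-time based), so slow surface layers pull Vs30 down far more than their thickness alone would suggest.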

A new Southern California Earthquake Center study has quantified how local geologic conditions affect the shaking experienced in an earthquake. The important geologic factors at a site are softness of the rock or soil near the surface and thickness of the sediments above hard bedrock. Even when these 'site effects' are taken into account, however, each earthquake exhibits unique 'hotspots' of anomalously strong shaking. Better predictions of strong ground shaking will therefore require additional geologic data and more comprehensive computer simulations of individual earthquakes.

We present the progress achieved by GPS TEC technologies in the study of pre-seismic anomalies that appear in the ionosphere a few days before strong earthquakes. Starting from the first case studies, such as the 17 August 1999 M7.6 Izmit earthquake in Turkey, the technology has developed into global near real-time monitoring of seismo-ionospheric effects, which is now used in multiparameter nowcasting and forecasting of strong earthquakes. Techniques for identifying seismo-ionospheric anomalies were developed in parallel with a physical mechanism explaining how these anomalies are generated. It was established that seismo-ionospheric anomalies have a self-similarity property, depend on local time, and persist for at least 4 hours; the deviation from the undisturbed level can be either positive or negative, depending on the lead time (in days) to the impending earthquake and on the longitude of the anomaly relative to the epicenter. Low-latitude and near-equatorial earthquakes demonstrate a magnetically conjugated effect, while middle- and high-latitude earthquakes show a single anomaly over the earthquake preparation zone. From the morphology of the anomalies, a physical mechanism was derived within the framework of the more complex Lithosphere-Atmosphere-Ionosphere-Magnetosphere Coupling concept. In addition to multifactor analysis of GPS TEC time series, the GIM MAP technology was also applied, clearly showing the locality of the seismo-ionospheric anomalies and the correspondence of their spatial size to the Dobrovolsky estimate of the earthquake preparation zone radius. Ionospheric tomography techniques made it possible to study not only total electron content variations but also the modification of the vertical distribution of electron concentration in the ionosphere before earthquakes. The statistical check of the ionospheric precursors passed the

We provide ground-motion prediction equations for computing medians and standard deviations of average horizontal component intensity measures (IMs) for shallow crustal earthquakes in active tectonic regions. The equations were derived from a global database with M 3.0–7.9 events. We derived equations for the primary M- and distance-dependence of the IMs after fixing the VS30-based nonlinear site term from a parallel NGA-West 2 study. We then evaluated additional effects using mixed effects residuals analysis, which revealed no trends with source depth over the M range of interest, indistinct Class 1 and 2 event IMs, and basin depth effects that increase and decrease long-period IMs for depths larger and smaller, respectively, than means from regional VS30-depth relations. Our aleatory variability model captures decreasing between-event variability with M, as well as within-event variability that increases or decreases with M depending on period, increases with distance, and decreases for soft sites.
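
The aleatory variability model described above combines between-event and within-event components; under the usual independence assumption the total standard deviation is the root-sum-square of the two. A sketch with illustrative values (not the paper's coefficients), showing how a smaller between-event term at large M reduces total sigma:

```python
import math

def total_sigma(tau, phi):
    """Total aleatory standard deviation from independent between-event
    (tau) and within-event (phi) components."""
    return math.sqrt(tau ** 2 + phi ** 2)

# Illustrative values only: tau shrinks with magnitude, pulling the
# total sigma down for larger events.
sigma_small = total_sigma(0.40, 0.55)   # smaller-magnitude event
sigma_large = total_sigma(0.30, 0.55)   # larger-magnitude event
```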

The destructive effects of large-magnitude, thrust subduction superficial (TSS) earthquakes on Mexico City (MC) and Guadalajara (G) have been demonstrated in recent centuries. For example, two TSS earthquakes, of 7/04/1845 and 19/09/1985, with Ms 7+ and 8.1, occurred on the coast of the states of Guerrero and Michoacan; the economic losses for the latter were about 7 billion US dollars. Also, the largest instrumentally observed TSS earthquake in Mexico, Ms 8.2, occurred in the Colima-Jalisco region on 3/06/1932, and on 9/10/1995 a similar Ms 7.4 event occurred in the same region, producing economic losses of hundreds of thousands of US dollars. The frequency of occurrence of large TSS earthquakes in Mexico is poorly known, but it may vary from decades to centuries [1]. Therefore, there is a lack of strong ground motion records for extreme TSS earthquakes in Mexico, which, as mentioned above, recently had an important economic impact on MC and could potentially have one on G. In this work we obtained samples of broadband synthetics [2,3] expected in MC and G for extreme (plausible) magnitude Mw 8.5 TSS scenario earthquakes with epicenters in the so-called Guerrero gap and in the Colima-Jalisco zone, respectively. The economic impacts of the proposed extreme TSS earthquake scenarios were assessed as follows: for MC, by using a risk-acceptability criterion, the probabilities of exceedance of the maximum seismic responses of its building stock under the assumed scenarios, and the economic losses observed in the 19/09/1985 earthquake; and for G, by estimating the expected economic losses based on the seismic vulnerability assessment of its building stock under the extreme seismic scenario considered. ----------------------- [1] Nishenko S.P. and Singh S.K., BSSA 77, 6, 1987. [2] Cabrera E., Chavez M., Madariaga R., Mai M., Frisenda M., Perea N., AGU Fall Meeting, 2005. [3] Chavez M., Olsen K

Through the development of a Harvard Kennedy School case study (intended for use as curriculum in graduate-level and executive education programs), this project examines earthquake preparedness and planning processes in the Los Angeles metropol...

Many problems in investigating the historical seismicity of East Siberia remain unsolved. These problems relate in particular to the quality and reliability of data sources, the completeness of parametric earthquake catalogues, and the precision and transparency of estimates of the main parameters of historical earthquakes. The main purpose of this paper is to describe the current status of studies of historical seismicity in Eastern Siberia and to analyse existing macroseismic and parametric earthquake catalogues. We also attempt to identify the main shortcomings of existing catalogues and to clarify the reasons for their appearance in light of the history of seismic observations in Eastern Siberia. Contentious issues in the earthquake catalogues are considered using the example of three strong historical earthquakes important for assessing seismic hazard in the region. In particular, it was found that, due to a technical error, the parameters of the large M = 7.7 earthquakes of 1742 were transferred from the regional catalogue to the worldwide database with incorrect epicenter coordinates. How stereotypes concerning active tectonics influence the localization of an epicenter is shown using the example of the strong M = 6.4 earthquake of 1814. The effect of insufficient use of primary data sources on the completeness of earthquake catalogues is illustrated by the strong M = 7.0 event of 1859. Analysis of the state of the art of historical earthquake studies in Eastern Siberia allows us to propose the following activities for the near future: (1) compilation of a database including initial descriptions of macroseismic effects with reference to their place and time of occurrence; (2) parameterization of the maximum possible (magnitude-unlimited) number of historical earthquakes on the basis of all available data; (3) compilation of an improved version of the parametric historical earthquake catalogue for East Siberia with

The Satellite Telemetry Earthquake Monitoring Program was started in FY 1968 to evaluate the applicability of satellite relay telemetry in the collection of seismic data from a large number of dense seismograph clusters laid out along the major fault systems of western North America. Prototype clusters utilizing phone-line telemetry were then being installed by the National Center for Earthquake Research (NCER) in 3 regions along the San Andreas fault in central California; and the experience of installing and operating the clusters and in reducing and analyzing the seismic data from them was to provide the raw materials for evaluation in the satellite relay telemetry project.

In 2013, landslides induced by heavy rainfall occurred in the southern suburbs of Kathmandu, the capital of Nepal. These landslide slopes were then hit by the strong Gorkha earthquake in April 2015 and appeared to be destabilized again. To clarify their landslide susceptibility under earthquake loading, the stability of one of these slopes was analyzed with CSSDP (Critical Slip Surface analysis by Dynamic Programming, based on the limit equilibrium method, specifically the Janbu method) against slope failure, for various seismic accelerations observed around Kathmandu during the Gorkha earthquake. CSSDP automatically detects the landslide slip surface with the minimum factor of safety (Fs) using dynamic programming. The geology in this area consists mainly of fragile schist and is prone to landsliding. A field survey was conducted to obtain topographic data, such as cross sections of the ground surface and slip surface, and soil parameters were obtained from geotechnical tests on field samples. The slope shows the following distinctive characteristics in terms of stability: (1) with heavy rainfall, it collapsed, having a factor of safety Fs < 1.0 (0.654 or more); (2) with the seismic acceleration of 0.15G (147 gal) observed around Kathmandu, Fs = 1.34; (3) with a possible local seismic acceleration of 0.35G (343 gal) estimated at Kathmandu, Fs = 0.989; if the landslide were very shallow and the slope covered with cedars, Fs could reach 1.055 owing to root reinforcement of the soil strength; (4) without seismic acceleration and with no rainfall, Fs = 1.75. These results explain the actual landslide occurrence in this area, given the maximum seismic acceleration of about 0.15G estimated in the vicinity of Kathmandu during the Gorkha earthquake, and they indicate the landslide susceptibility of slopes in this area under strong earthquakes. In this situation, it is possible to predict
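
The pattern of Fs dropping below 1.0 as the seismic coefficient grows can be illustrated with a much simpler model than CSSDP: a pseudo-static infinite-slope analysis. This is not the Janbu method used in the study, and the soil parameters below are hypothetical:

```python
import math

def pseudo_static_fs(c, phi_deg, gamma, h, beta_deg, k):
    """Pseudo-static factor of safety of an infinite slope with a planar
    slip surface at vertical depth h. c: cohesion (kPa), phi_deg: friction
    angle (deg), gamma: unit weight (kN/m3), beta_deg: slope angle (deg),
    k: horizontal seismic coefficient (fraction of g)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    w = gamma * h * math.cos(beta)   # column weight per unit slope area
    normal = w * (math.cos(beta) - k * math.sin(beta))
    driving = w * (math.sin(beta) + k * math.cos(beta))
    return (c + normal * math.tan(phi)) / driving

# Hypothetical soil: c = 25 kPa, phi = 30 deg, gamma = 20 kN/m3,
# slip depth 5 m, slope angle 30 deg.
fs_static = pseudo_static_fs(25, 30, 20, 5, 30, 0.0)    # no shaking
fs_seismic = pseudo_static_fs(25, 30, 20, 5, 30, 0.35)  # 0.35 g
```

Even this toy model reproduces the qualitative result reported above: a slope comfortably stable under static conditions can fall below Fs = 1.0 at k = 0.35.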

In the history of earthquake thermal infrared research, it is undeniable that significant thermal infrared anomalies occur before and after strong earthquakes, and these have been interpreted as preseismic precursors in earthquake prediction and forecasting. In this paper, we study the characteristics of thermal radiation observed before and after 8 great earthquakes of magnitude Ms 7.0 and above, using satellite infrared remote sensing data. We used new types of data and a new method to extract the anomaly information. Based on the analyses of the 8 earthquakes, we obtained the following results. (1) There are significant thermal radiation anomalies before and after the earthquakes in all cases. The anomalies evolve in two main stages: first expanding and later narrowing. Such seismic anomalies are readily extracted and identified by the "time-frequency relative power spectrum" method. (2) Each case exhibits distinct characteristic periods and magnitudes of anomalous thermal radiation. (3) Thermal radiation anomalies are closely related to geological structure. (4) Thermal radiation anomalies have distinctive durations, spatial extents, and morphologies. In summary, earthquake thermal infrared anomalies can serve as a useful precursor in earthquake prediction and forecasting.
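
The "time-frequency relative power spectrum" method is not specified in detail here; a crude stand-in is the fraction of windowed DFT power falling in a chosen frequency band, tracked window by window. Everything below (window length, band, signal) is an assumption for illustration:

```python
import cmath
import math

def rel_power(signal, n_win, band):
    """Per-window 'relative power': the fraction of one-sided DFT power
    that falls into `band` (a pair of bin indices, inclusive). A crude,
    assumption-laden stand-in for the time-frequency relative power
    spectrum named in the abstract."""
    lo, hi = band
    out = []
    for start in range(0, len(signal) - n_win + 1, n_win):
        x = signal[start:start + n_win]
        powers = []
        for k in range(n_win // 2 + 1):          # one-sided spectrum
            X = sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_win)
                    for n in range(n_win))
            powers.append(abs(X) ** 2)
        total = sum(powers)
        out.append(sum(powers[lo:hi + 1]) / total if total else 0.0)
    return out

# A pure tone centred on DFT bin 8 of a 64-sample window: nearly all
# one-sided power should fall in the band of bins 6-10.
tone = [math.sin(2 * math.pi * 8 * n / 64) for n in range(128)]
frac = rel_power(tone, 64, (6, 10))
```

In a precursor study, a sustained rise of this per-window fraction in a fixed band would be flagged as an anomaly; the actual method presumably differs in windowing and normalization.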

In the presence of strong lateral heterogeneities, the generation of local surface waves and local resonance can give rise to a complicated pattern in the spatial ground-shaking scenario. For any object of the built environment with dimensions greater than the characteristic length of the ground motion, different parts of its foundations can experience severe non-synchronous seismic input. In order to perform an accurate estimate of site effects, and of differential motion, in realistic geometries, it is necessary to carry out a parametric study that takes into account the complex combination of source and propagation parameters. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models, allows the construction of damage scenarios that are out of reach of stochastic models. Synthetic signals, to be used as seismic input in a subsequent engineering analysis, e.g. for the design of earthquake-resistant structures or for the estimation of differential motion, can be produced at a very low cost/benefit ratio. We illustrate the work done in the framework of a large international cooperation following the guidelines of the UNESCO IUGS IGCP Project 414 "Realistic Modeling of Seismic Input for Megacities and Large Urban Areas" and show the very recent numerical experiments carried out within the EC project "Advanced methods for assessing the seismic vulnerability of existing motorway bridges" (VAB) to assess the importance of non-synchronous seismic excitation of long structures. http://www.ictp.trieste.it/www_users/sand/projects.html

Among natural hazards, earthquakes are the most destructive disasters, causing huge loss of life, heavy infrastructure damage and great financial losses every year all around the world. According to earthquake statistics, more than a million earthquakes occur worldwide each year, equivalent to about two earthquakes per minute. Since 2001, natural disasters have caused more than 780,000 deaths, and approximately 60% of this mortality is due to earthquakes. A great earthquake took place at 38.75 N, 43.36 E in Van Province in eastern Turkey on October 23, 2011; 604 people died and about 4,000 buildings were seriously damaged or collapsed. In recent years, object-oriented classification based on different object features, such as spectral, textural, shape and spatial information, has gained importance and become widespread for the classification of high-resolution satellite images and orthophotos. The motivation of this study is to detect collapsed buildings and debris areas after the earthquake by applying object-oriented classification to very high-resolution satellite images and orthophotos, and to assess how well remote sensing technology performs in determining collapsed buildings. Two different land surfaces were selected as homogeneous and heterogeneous case study areas. In the first step, multi-resolution segmentation was applied, and optimum parameters were selected to obtain the objects in each area after testing different color/shape and compactness/smoothness values. In the next step, two classification approaches, namely "supervised" and "unsupervised", were applied and their classification performances were compared. Object-Based Image Analysis (OBIA) was performed using e-Cognition software.

Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; and (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained by solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state of the art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and how they assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.

The effects of earthquake shaking on the population and infrastructure across the State of Hawaii could be catastrophic, and the high seismic hazard in the region emphasizes the likelihood of such an event. Earthquake early warning (EEW) has the potential to give several seconds of warning before strong shaking starts, and thus reduce loss of life and damage to property. The two approaches to EEW are (1) a network approach (such as ShakeAlert or ElarmS), where the regional seismic network is used to detect the earthquake and distribute the alarm, and (2) a local approach, where a critical facility has a single seismometer (or small array) and a warning system on the premises. The network approach, also referred to here as ShakeAlert or ElarmS, uses the closest stations within a regional seismic network to detect and characterize an earthquake. Most parameters used for a network approach require observations on multiple stations (typically 3 or 4), which slows down the alarm time slightly, but the alarms are generally more reliable than with single-station EEW approaches. The network approach also benefits from having stations closer to the source of any potentially damaging earthquake, so that alarms can be sent ahead to anyone who subscribes to receive the notification. Thus, a fully implemented ShakeAlert system can provide seconds of warning for both critical facilities and general populations ahead of damaging earthquake shaking. The cost to implement and maintain a fully operational ShakeAlert system is high compared to a local approach or single-station solution, but the benefits of a ShakeAlert system would be felt statewide: the warning times for strong shaking are potentially longer for most sources at most locations. The local approach, referred to herein as "single station," uses measurements from a single seismometer to assess whether strong earthquake shaking can be expected. Because of the reliance on a single station, false alarms are more common than
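
The warning time a network approach can deliver is roughly the S-wave travel time to the site minus the time needed to detect the P wave at the nearest station and issue the alert. A minimal sketch under homogeneous-velocity assumptions; all distances, velocities and the processing delay below are illustrative:

```python
def warning_time(site_dist_km, station_dist_km, t_proc_s=4.0,
                 vp_km_s=6.5, vs_km_s=3.5):
    """Approximate EEW warning time at a site: S-wave arrival at the site
    minus (P-wave arrival at the nearest station + processing delay).
    Assumes homogeneous crustal velocities and uses epicentral distances
    as a stand-in for hypocentral ones; all values are illustrative."""
    s_arrival = site_dist_km / vs_km_s
    alert_issued = station_dist_km / vp_km_s + t_proc_s
    return max(0.0, s_arrival - alert_issued)

# A site 100 km from the epicenter, nearest station 20 km from it:
w = warning_time(100, 20)
```

The sketch also shows the blind-zone trade-off the abstract alludes to: for sites close to the source, the alert cannot beat the S wave and the warning time collapses to zero.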

During the last decades, an important effort has been dedicated to developing accurate and computationally efficient numerical methods to predict earthquake ground motion in heterogeneous 3D media. Progress in methods and the increasing capability of computers have made it technically feasible to calculate realistic seismograms for frequencies of interest in seismic design applications. In order to foster the use of numerical simulation in practical prediction, it is important to (1) evaluate the accuracy of current numerical methods when applied to realistic 3D applications where no reference solution exists (verification) and (2) quantify the agreement between recorded and numerically simulated earthquake ground motion (validation). Here we report the results of the Euroseistest verification and validation project, an ongoing international collaboration organized jointly by the Aristotle University of Thessaloniki, Greece, the Cashima research project (supported by the French nuclear agency, CEA, and the Laue-Langevin institute, ILL, Grenoble), and the Joseph Fourier University, Grenoble, France. The project involves more than 10 international teams from Europe, Japan and the USA. The teams employ the Finite Difference Method (FDM), the Finite Element Method (FEM), the Global Pseudospectral Method (GPSM), the Spectral Element Method (SEM) and the Discrete Element Method (DEM). The project makes use of a new detailed 3D model of the Mygdonian basin (about 5 km wide, 15 km long; sediments reach about 400 m depth; surface S-wave velocity is 200 m/s). The prime target is to simulate 8 local earthquakes with magnitudes from 3 to 5. In the verification, numerical predictions for frequencies up to 4 Hz for a series of models with increasing structural and rheological complexity are analyzed and compared using quantitative time-frequency goodness-of-fit criteria. Predictions obtained by one FDM team and the SEM team are close and different from other predictions

Continuous monitoring of the signal amplitudes of worldwide VLF transmitters is a powerful tool for studying the condition of the lower ionosphere. Although lower ionospheric perturbations prior to some major earthquakes have been reported for years, their occurrence, and the coupling mechanism between the ground and the overlying ionosphere before earthquakes, are not yet clear. In this paper, we carried out a statistical analysis based on nighttime-averaged signal amplitude data from the UEC VLF/LF transmitter observation network. Two hundred forty-three earthquakes occurred within the 5th Fresnel zone of transmitter-receiver paths around Japan during the period 2007 to 2012. These earthquakes were divided into three groups based on the Centroid Moment Tensor (CMT) solution: reverse fault, normal fault and strike-slip. An ionospheric anomaly was identified by a large change in the VLF/LF amplitude during nighttime. As a result, we found ionospheric perturbations associated with both ground and sea earthquakes. Remarkably, reverse fault earthquakes have the highest occurrence rate of ionospheric perturbations among the three types, both for sea (41%) and ground events (61%). The occurrence rates for normal fault events are 35% and 56% for sea and ground earthquakes, respectively, and those for strike-slip events are 39% and 20%; in both cases the rates are smaller than for the reverse fault type. This clear difference in the occurrence rate of ionospheric perturbations may indicate that the coupling efficiency between seismic activity and the overlying ionosphere is controlled by the pressure in the Earth's crust. This gives further physical insight into Lithosphere-Atmosphere-Ionosphere (LAI) coupling processes.

The forms and location patterns of soil liquefaction induced by earthquakes in southern Siberia, Mongolia, and northern Kazakhstan from 1950 through 2014 have been investigated using field methods and a database of coseismic effects created as a GIS MapInfo application with a handy input box for large data arrays. Statistical analysis of the data has revealed regional relationships between the magnitude (Ms) of an earthquake and the maximum distance of its environmental effects from the epicenter and from the causative fault (Lunina et al., 2014). The estimated limit distance from the fault for the largest event (Ms = 8.1) is 130 km, about 3.5 times shorter than the limit distance from the epicenter, which is 450 km. Moreover, the greater the distance from the fault, the fewer liquefaction cases occur: 93% of them lie within 40 km of the causative fault. Analysis of liquefaction locations relative to the nearest faults in southern East Siberia shows the distances to be within 8 km, with 69% of all cases within 1 km. As a result, predictive models have been created for the locations of seismic liquefaction, assuming a fault pattern for some parts of the Baikal rift zone. Based on our field and worldwide data, equations have been suggested relating the maximum sizes of liquefaction-induced clastic dikes (maximum width, visible maximum height, and intensity index of clastic dikes) to Ms and to local shaking intensity on the MSK-64 macroseismic intensity scale (Lunina and Gladkov, 2015). The obtained results form a basis for modeling the distribution of this geohazard for prediction purposes and for estimating earthquake parameters from liquefaction-induced clastic dikes. The authors would like to express their gratitude to the Institute of the Earth's Crust, Siberian Branch of the Russian Academy of Sciences, for providing laboratory facilities for this research, and to the Russian Scientific Foundation for financial support (Grant 14-17-00007).

Proposals are developed to update Tables 11.4-1 and 11.4-2 of Minimum Design Loads for Buildings and Other Structures, published as American Society of Civil Engineers Structural Engineering Institute standard 7-10 (ASCE/SEI 7-10). The updates are mean next generation attenuation (NGA) site coefficients inferred directly from the four NGA ground motion prediction equations used to derive the maximum considered earthquake response maps adopted in ASCE/SEI 7-10. Proposals include the recommendation to use straight-line interpolation to infer site coefficients at intermediate values of the average shear-wave velocity to 30-m depth. The NGA coefficients are shown to agree well with the adopted site coefficients at low levels of input motion (0.1 g) and with those observed from the Loma Prieta earthquake. For higher levels of input motion, the majority of the adopted values are within the 95% epistemic-uncertainty limits implied by the NGA estimates, the exceptions being the mid-period site coefficient, Fv, for site class D and the short-period coefficient, Fa, for site class C, both of which are slightly less than the corresponding 95% limit. The NGA database shows that the median value of 913 m/s for site class B is more typical than 760 m/s as a value to characterize firm to hard rock sites as the uniform ground condition for future maximum considered earthquake response ground motion estimates. Future updates of NGA ground motion prediction equations can be incorporated easily into future adjustments of adopted site coefficients using the procedures presented herein.
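The recommended straight-line interpolation of site coefficients at intermediate shear-wave velocities can be sketched as follows; the boundary velocities and coefficient values below are hypothetical placeholders for illustration, not the actual ASCE/SEI 7-10 table entries.

```python
import numpy as np

# Hypothetical site-class boundary values of the short-period
# coefficient Fa versus Vs30 (m/s); placeholders only, NOT the
# actual ASCE/SEI 7-10 table values.
vs30_grid = np.array([180.0, 360.0, 760.0, 1500.0])
fa_grid = np.array([2.5, 1.6, 1.2, 0.8])

def site_coefficient(vs30):
    """Straight-line interpolation of a site coefficient at an
    intermediate Vs30, as the proposal recommends."""
    return float(np.interp(vs30, vs30_grid, fa_grid))

# A site with Vs30 = 560 m/s falls halfway between the 360 and 760 m/s
# rows, so its coefficient is the midpoint of 1.6 and 1.2:
fa = site_coefficient(560.0)  # -> 1.4
```

The same one-liner applies to Fv; only the tabulated grid changes.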

We propose a prototype procedure to construct source models for strong motion prediction during intraslab earthquakes, based on the characterized source model (Irikura and Miyake, 2011). The key is the characterized source model, which is based on empirical scaling relationships for intraslab earthquakes and involves the correspondence between the SMGA (strong motion generation area; Miyake et al., 2003) and the asperity (large slip area). Iwata and Asano (2011) obtained empirical relationships of the rupture area (S) and the total asperity area (Sa) to the seismic moment (Mo), assuming a 2/3-power dependency of S and Sa on Mo: S (km^2) = 6.57x10^-11 x Mo^(2/3) (1) and Sa (km^2) = 1.04x10^-11 x Mo^(2/3) (2), with Mo in N·m. Iwata and Asano (2011) also pointed out that the position and size of the SMGA approximately correspond to the asperity area for several intraslab events. Based on these empirical relationships, we give a procedure for constructing source models of intraslab earthquakes for strong motion prediction: [1] Give the seismic moment, Mo. [2] Obtain the total rupture area and the total asperity area according to the empirical scaling relationships between S, Sa, and Mo given by Iwata and Asano (2011). [3] Assume a square rupture area and square asperities. [4] Assume the source mechanism to be the same as that of small events in the source region. [5] Prepare plural scenarios varying the number of asperities and the rupture starting points. We apply this procedure by simulating strong ground motions for several observed events to confirm the methodology.
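Scaling relationships (1) and (2) translate directly into code; the sketch below simply evaluates them for a given seismic moment (the example moment value is illustrative, not taken from the abstract).

```python
def rupture_and_asperity_area(Mo):
    """Iwata and Asano (2011) empirical scaling for intraslab earthquakes:
    total rupture area S and total asperity area Sa (both km^2) from the
    seismic moment Mo (N*m), eqs. (1) and (2) of the procedure."""
    S = 6.57e-11 * Mo ** (2.0 / 3.0)   # eq. (1): total rupture area
    Sa = 1.04e-11 * Mo ** (2.0 / 3.0)  # eq. (2): total asperity area
    return S, Sa

# Illustrative example: Mo = 1e19 N*m (roughly Mw 6.6)
S, Sa = rupture_and_asperity_area(1.0e19)
# Because both areas scale as Mo**(2/3), the asperity fraction Sa/S is
# a constant 1.04/6.57, i.e. about 16% of the rupture area.
```

Step [3] then takes sqrt(S) and sqrt(Sa) as the side lengths of the square rupture area and asperities.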

Scientists who specialize in the study of Mississippi Valley earthquakes say that the region is overdue for a powerful tremor that will cause major damage and undoubtedly some casualties. The inevitability of a future quake and the lack of preparation by both individuals and communities provided the impetus for this book. It brings together applicable information from many disciplines: history, geology and seismology, engineering, zoology, politics and community planning, economics, environmental science, sociology, and psychology and mental health, to provide a perspective on the myriad impacts of a major earthquake on the Mississippi Valley. The author addresses such basic questions as: What, actually, are earthquakes? How do they occur? Can they be predicted, perhaps even prevented? He also addresses the steps that individuals can take to improve their chances for survival both during and after an earthquake.

The Satellite Telemetry Earthquake Monitoring Program was started to evaluate the applicability of satellite relay telemetry in the collection of seismic data from a large number of dense seismograph clusters laid out along the major fault systems of western North America. Prototype clusters utilizing phone-line telemetry were then being installed by the National Center for Earthquake Research in 3 regions along the San Andreas fault in central California; and the experience of installing and operating the clusters and in reducing and analyzing the seismic data from them was to provide the raw materials for evaluation in the satellite relay telemetry project. The principal advantages of the satellite relay system over commercial telephone or microwave systems were: (1) it could be made less prone to massive failure during a major earthquake; (2) it could be extended readily into undeveloped regions; and (3) it could provide flexible, uniform communications over large sections of major global tectonic zones. Fundamental characteristics of a communications system to cope with the large volume of raw data collected by a short-period seismograph network are discussed.

An earthquake measuring 6.3 on the Richter scale struck the city of Bam in Iran on the 26th of December 2003 at 5:26 A.M. It was devastating, and left over 40,000 dead and around 30,000 injured. The profound tragedy of thousands killed has caused emotional and psychological trauma for the tens of thousands of people who survived. A study was carried out to assess psychological distress among Bam earthquake survivors and the factors associated with severe psychological distress in those who survived the tragedy. This was a population-based study measuring psychological distress among the survivors of the Bam earthquake in Iran. Using a multi-stage stratified sampling method, a random sample of individuals aged 15 years and over living in Bam were interviewed. Psychological distress was measured using the 12-item General Health Questionnaire (GHQ-12). In all, 916 survivors were interviewed. The mean age of the respondents was 32.9 years (SD = 12.4); most were male (53%), married (66%), and had secondary school education (50%). Forty-one percent reported losing 3 to 5 members of their family in the earthquake. In addition, the findings showed that 58% of the respondents suffered from severe psychological distress as measured by the GHQ-12, three times higher than the psychological distress reported among the general population. There were significant differences between sub-groups of the study sample with regard to their psychological distress. The results of the logistic regression analysis also indicated that female gender, lower education, unemployment, and loss of family members were associated with severe psychological distress among earthquake victims. The study findings indicated that the amount of psychological distress among earthquake survivors was high; there is an urgent need to deliver mental health care to disaster victims in local medical settings, and adequate psychological counseling is needed to reduce the negative health impacts of the earthquake on those who survived.

Determination of the rupture propagation of large earthquakes is important and of wide interest to the seismological research community. The conventional inversion method determines the distribution of slip on a grid of subfaults whose orientations are predefined. As a result, different choices of fault geometry and dimensions often result in different solutions. In this study, we try to reconstruct the rupture history of an earthquake using the newly developed Source-Scanning Algorithm (SSA) without imposing any a priori constraints on the fault's orientation and dimension. The SSA identifies the distribution of seismic sources in two steps. First, it calculates the theoretical arrival times from all grid points inside the model space to all seismic stations by assuming an origin time. Then, the absolute amplitudes of the observed waveforms at the predicted arrival times are added to give the "brightness" of each time-space pair, and the brightest spots mark the locations of sources. The propagation of the rupture is depicted by the migration of the brightest spots throughout a prescribed time window. A series of experiments is conducted to test the resolution of the SSA inversion. Contrary to the conventional wisdom that seismometers should be placed as close as possible to the fault trace to give the best resolution in delineating rupture details, we found that the best results are obtained if the seismograms are recorded at a distance of about half the total rupture length away from the fault trace. This is especially true when the rupture duration is longer than ~10 s. A possible explanation is that the geometric spreading effects for waveforms from different segments of the rupture are about the same if the stations are sufficiently far from the fault trace, thus giving a uniform resolution to the entire rupture history.
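The two-step brightness computation at the core of the SSA can be sketched as follows, assuming travel times have already been precomputed for every grid point (array names and shapes are ours, not from the paper):

```python
import numpy as np

def ssa_brightness(waveforms, travel_times, t0, dt):
    """SSA brightness for one trial origin time t0.

    waveforms    : (n_sta, n_samples) recorded traces, sampling interval dt
    travel_times : (n_grid, n_sta) predicted travel times from each grid
                   point to each station
    Returns, for every grid point, the stack of absolute amplitudes read
    at the predicted arrival times t0 + travel_time; the brightest grid
    points mark the source locations at this origin time.
    """
    n_grid, n_sta = travel_times.shape
    idx = np.round((t0 + travel_times) / dt).astype(int)  # sample indices
    idx = np.clip(idx, 0, waveforms.shape[1] - 1)
    sta = np.arange(n_sta)
    # Sum |amplitude| over stations at each grid point's arrival times
    return np.abs(waveforms[sta, idx]).sum(axis=1)
```

Scanning t0 over the prescribed time window and tracking the brightest grid point at each step yields the rupture migration image the abstract describes.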

The earthquake cycle is poorly understood. Earthquakes continue to occur on previously unrecognized faults. Earthquake prediction seems impossible. These remain the facts despite nearly 100 years of intensive study since the earthquake cycle was first conceptualized. Using data acquired from satellites in orbit 800 km above the Earth, a new technique, radar interferometry (InSAR), has the potential to solve these problems. For the first time, detailed maps of the warping of the Earth's surface during the earthquake cycle can be obtained with a spatial resolution of a few tens of metres and a precision of a few millimetres. InSAR does not need equipment on the ground or expensive field campaigns, so it can gather crucial data on earthquakes and the seismic cycle from some of the remotest areas of the planet. In this article, I review some of the remarkable observations of the earthquake cycle already made using radar interferometry and speculate on breakthroughs that are tantalizingly close.

Dramatic examples of catastrophic damage from an earthquake occurred in 1989, when the M 7.1 Loma Prieta earthquake rocked the San Francisco Bay area, and in 1994, when the M 6.6 Northridge earthquake jolted southern California. The surprising amount and distribution of damage to private property and infrastructure emphasize the importance of seismic-hazard research in urbanized areas, where the potential for damage and loss of life is greatest. During April 1995, a group of scientists from the U.S. Geological Survey and the University of Tennessee, using an echo-sounding method described below, is collecting data in San Antonio Park, California, to examine the Monte Vista fault, which runs through this park. The Monte Vista fault in this vicinity shows evidence of movement within the last 10,000 years or so. The data will give them a "picture" of the subsurface rock deformation near this fault. The data will also be used to help locate a trench that will be dug across the fault by scientists from William Lettis & Associates.

Based on observations prior to earthquakes, recent theoretical considerations suggest that some geophysical quantities reveal abnormal changes that anticipate moderate and strong earthquakes within a defined spatial area (the so-called Dobrovolsky area), according to a Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model. One possible pre-earthquake effect could be the appearance of climatological anomalies in the epicentral region weeks to months before major earthquakes. An ESA-funded project, SAFE (Swarm for Earthquake study), was dedicated to investigating LAIC from the ground to satellite altitude. In this work, the two months preceding the Amatrice-Norcia (Central Italy) earthquake sequence, which started on 24 August 2016 with an M6 earthquake and some months later produced two other major shocks, an M5.9 on 26 October and an M6.5 on 30 October, were analyzed in terms of climatological parameters. In particular, starting from a date about two months before the first major shock, we applied a new approach comparing, at the same time of season, the thirty-seven-year time series of three land/atmospheric parameters, i.e. skin temperature (skt), total column water vapour (tcwv) and total column ozone (tco3), collected from the European Centre for Medium-Range Weather Forecasts (ECMWF), with the year in which the earthquake sequence occurred. The originality of the method lies in the way the complete time series is reduced, with the possible effect of global warming properly removed. A confutation/confirmation analysis was undertaken in which these parameters were analyzed in the same months of two seismically "calm" years, when significant seismicity was not present, in order to validate the technique. We also extended the analysis to all available years to construct a confusion matrix comparing the climatological anomalies with the real seismicity. This latter analysis has confirmed the

Scenario-based, loss-estimation studies are useful for gauging potential societal impacts from earthquakes but can be challenging to undertake in areas with multiple scenarios and jurisdictions. We present a geospatial approach using various population data for comparing earthquake scenarios and jurisdictions to help emergency managers prioritize where to focus limited resources on data development and loss-estimation studies. Using 20 earthquake scenarios developed for the State of Washington (USA), we demonstrate how a population-exposure analysis across multiple jurisdictions based on Modified Mercalli Intensity (MMI) classes helps emergency managers understand and communicate where potential loss of life may be concentrated and where impacts may be more related to quality of life. Results indicate that certain well-known scenarios may directly impact the greatest number of people, whereas other, potentially lesser-known, scenarios impact fewer people but consequences could be more severe. The use of economic data to profile each jurisdiction’s workforce in earthquake hazard zones also provides additional insight on at-risk populations. This approach can serve as a first step in understanding societal impacts of earthquakes and helping practitioners to efficiently use their limited risk-reduction resources.

Time discounting refers to the fact that the subjective value of a reward decreases as the delay until its occurrence increases. The present study investigated how time discounting has been affected in survivors of the magnitude-8.0 Wenchuan earthquake that occurred in China in 2008. Nineteen earthquake survivors and 22 controls, all school teachers, participated in the study. Event-related brain potentials (ERPs) for time discounting tasks involving gains and losses were acquired in both the victims and controls. The behavioral data replicated our previous findings that delayed gains were discounted more steeply after a disaster. ERP results revealed that the P200 and P300 amplitudes were increased in earthquake survivors. There was a significant group (earthquake vs. non-earthquake) × task (gain vs. loss) interaction for the N300 amplitude, with a marginally significantly reduced N300 for gain tasks in the experimental group, which may suggest a deficiency in inhibitory control for gains among victims. The results suggest that post-disaster decisions might involve more emotional (System 1) and less rational thinking (System 2) in terms of a dual-process model of decision making. The implications for post-disaster intervention and management are also discussed.

Though it has been the object of both contemporary and modern investigations, the 14 April 1895 Ljubljana event (Mw ~6, according to the European catalogue SHEEC) is still not fully described in its effects. One manifest reason is that the 1895 earthquake was a cross-border event: it affected an area that today pertains to three different countries (Slovenia, Austria, and Italy) and is accounted for in sources today scattered across different archives and libraries. In addition, the 1895 Ljubljana earthquake was a turning point in many respects. Imperial Vienna sent help to rebuild the damaged city and its surroundings, and the architects brought modern ideas about urban planning, public hygiene, and contemporary design. It also marked the beginning of organised seismological observations in Slovenia: macroseismic, right after the earthquake, and instrumental, in 1896. The macroseismic data about this earthquake are plentiful and very well preserved. In this new cross-border study we intend to re-evaluate the already known as well as the newly collected data sources. Specific attention is devoted to the archival documentation on damage and to the far-field data, which were not comprehensively taken into account beforehand. As the earthquake was felt in a large part of central and eastern Europe, a considerable effort is put into collecting and interpreting the coeval sources, written in many different languages.

This article is devoted to the application of a simulation algorithm based on geostatistical methods to compile and update seismotectonic provinces, with Iran chosen as a case study. Traditionally, tectonic maps together with seismological data and information (e.g., earthquake catalogues, earthquake mechanisms, and microseismic data) have been used to update seismotectonic provinces. In many cases, incomplete earthquake catalogues are one of the important challenges in this procedure. To overcome this problem, a geostatistical simulation algorithm, turning bands simulation (TBSIM), was applied to generate synthetic data that augment incomplete earthquake catalogues. The synthetic data were then added to the traditional information to study the homogeneity of seismicity and to classify areas according to tectonic and seismic properties, thereby updating the seismotectonic provinces. In this paper, (i) the different magnitude types in the studied catalogues were homogenized to moment magnitude (Mw), and earthquake declustering was carried out to remove aftershocks and foreshocks; (ii) a time normalization method was introduced to decrease the uncertainty in the temporal domain prior to starting the simulation procedure; (iii) variography was carried out in each subregion to study spatial regressions (e.g., the west-southwestern area showed a spatial regression from 0.4 to 1.4 decimal degrees, with the maximum range identified at an azimuth of 135 ± 10); (iv) the TBSIM algorithm was then applied, producing 68,800 synthetic events according to the spatial regression found in several directions; (v) the simulated events (i.e., magnitudes) were classified based on their intensity in ArcGIS packages, and homogeneous seismic zones were determined. Finally, according to the synthetic data, tectonic features, and actual earthquake catalogues, 17 seismotectonic provinces were introduced in four major classes: very high, high, moderate, and low.

Since 1985, the California Office of Emergency Services (Cal OES) has issued advisory statements to local jurisdictions and the public following seismic activity that scientists on the California Earthquake Prediction Evaluation Council view as indicating an elevated probability of a larger earthquake in the same area during the next several days. These advisory statements are motivated by statistical studies showing that about 5% of moderate earthquakes in California are followed by larger events within a 10-km, five-day space-time window (Jones, 1985; Agnew and Jones, 1991; Reasenberg and Jones, 1994). Cal OES issued four earthquake advisories from 1985 to 1989. In October 1990, the California Earthquake Advisory Plan formalized this practice, and six Cal OES Advisories have been issued since then. This article describes that protocol's scientific basis and evolution.

We applied the Differential Interferometric Synthetic Aperture Radar (DInSAR) technique to investigate and measure surface displacements due to the ML 5.2 earthquake of June 21, 2013, which occurred in the Apuan Alps (NW Italy) at a depth of about 5 km. The Centroid Moment Tensor (CMT) solution from INGV indicates an almost pure normal fault mechanism. Two differential interferograms showing the coseismic displacement were generated using X-band and C-band data respectively. The X-band interferogram was obtained from a Cosmo-SkyMed ascending pair (azimuth -7.9° and incidence angle 40°) with a time interval of one day (June 21 - June 22) and a 139 m spatial baseline, covering an area of about 40x40 km around the epicenter. The topographic phase component was removed using the 90 m SRTM DEM. The C-band interferogram was computed from two RADARSAT-2 Standard-3 (S3) images, characterized by 24-day temporal and 69 m spatial baselines, acquired on June 18 and July 12, 2013 on an ascending orbit (azimuth -10.8°) with an incidence angle of 34° and covering a 100x100 km area around the epicenter. The topographic phase component was removed using the 30 m ASTER DEM. Adaptive filtering, phase unwrapping with the Minimum Cost Flow (MCF) algorithm, and orbital refinement were also applied to both interferograms. We modeled the observed SAR deformation fields using the Okada analytical formulation within a nonlinear inversion scheme, and found them to be consistent with a fault plane dipping towards the NW at an angle of about 45°. In spite of its small magnitude, this earthquake produced a surface subsidence of about 1.5 cm in the Line-Of-Sight (LOS) direction, corresponding to about 3 cm along the vertical axis, which can be observed in both interferograms and appears consistent with the normal fault mechanism.

It has been proposed that the mechanisms of intermediate-depth and deep earthquakes might be different. While previous extensive seismological studies suggested that such potential differences do not significantly affect the scaling relationships of earthquake parameters, there have been only a few investigations of their dynamic characteristics, especially fracture energy. In this work, the 2014 Mw 7.9 Rat Islands intermediate-depth (105 km) earthquake and the 2015 Mw 7.8 Bonin Islands deep (680 km) earthquake are studied from two different perspectives. First, their kinematic rupture models are constrained using teleseismic body waves. Our analysis reveals that the Rat Islands earthquake broke the entire cold core of the subducting slab, defined as the depth of the 650°C isotherm. The inverted stress drop is 4 MPa, comparable to that of intra-plate earthquakes at shallow depths. On the other hand, the kinematic rupture model of the Bonin Islands earthquake, which occurred in a region lacking seismicity for the past forty years according to the GCMT catalog, exhibits an energetic rupture within a 35 km by 30 km slip patch and a high stress drop of 24 MPa. It is of interest to note that although complex rupture patterns are allowed to match the observations, the inverted slip distributions of these two earthquakes are simple enough to be approximated as the summation of a few circular/elliptical slip patches. Thus, we subsequently investigate their dynamic rupture models. We use a simple modelling approach in which we assume that the dynamic rupture propagation obeys a slip-weakening friction law, and we describe the distribution of stress and friction on the fault as a set of elliptical patches. We constrain the three dynamic parameters, namely the yield stress, the background stress prior to the rupture, and the slip-weakening distance, as well as the shape of the elliptical patches, directly from teleseismic body wave observations. The study would help us assess whether the dynamic characteristics of intermediate-depth and deep earthquakes differ.

Seismic array methods, in both the time and frequency domains, have been widely used to study the rupture process and energy radiation of earthquakes. With better spatial resolution, high-resolution frequency-domain methods, such as Multiple Signal Classification (MUSIC) (Schmidt, 1986; Meng et al., 2011) and the recently developed Compressive Sensing (CS) technique (Yao et al., 2011, 2013), are revealing new features of earthquake rupture processes. We have performed various tests on MUSIC, CS, minimum-variance distortionless response (MVDR) beamforming, and conventional beamforming in order to better understand the advantages and features of these methods for studying earthquake rupture processes. We use the Ricker wavelet to synthesize seismograms and use these frequency-domain techniques to relocate synthetic sources, for instance two sources separated in space whose waveforms completely overlap in the time domain. We also test the effects of the sliding-window scheme on the recovery of a series of input sources, in particular some artifacts that are caused by the sliding window. Based on our tests, we find that CS, which builds on the theory of sparse inversion, has higher spatial resolution than the other frequency-domain methods and performs better at lower frequencies. In high-frequency bands, MUSIC, as well as MVDR beamforming, is more stable, especially in the multi-source situation. Meanwhile, CS tends to produce more artifacts when the data have a poor signal-to-noise ratio. Although these techniques can distinctly improve the spatial resolution, they still produce some artifacts as the time window slides. Furthermore, we propose a new method, combining both time-domain and frequency-domain techniques, to suppress these artifacts and obtain more reliable earthquake rupture images. Finally, we apply this new technique to study the 2013 Okhotsk deep mega-earthquake.

Usable and realistic ground motion maps for urban areas are generated either from the assumption of a "reference earthquake" or directly, showing values of macroseismic intensity generated by a damaging real earthquake. In this study, applying a deterministic approach, an earthquake scenario in macroseismic intensity (a "model" earthquake scenario) is generated for the city of Sofia. The deterministic "model" intensity scenario, based on the assumption of a "reference earthquake", is compared with a scenario based on the observed macroseismic effects caused by the damaging 2012 earthquake (Mw 5.6). The difference between observed (Io) and predicted (Ip) intensity values is analyzed.

Earthquakes, including large damaging events, are as central to the geologic evolution of the Island of Hawai`i as its more famous volcanic eruptions and lava flows. Increasing and expanding development of facilities and infrastructure on the island continues to increase the exposure and risk associated with strong ground shaking from future large local earthquakes. Damaging earthquakes over the last fifty years have shaken the most heavily developed areas and critical infrastructure of the island to levels corresponding to at least Modified Mercalli Intensity VII. Hawai`i's most recent damaging earthquakes, the M6.7 Kiholo Bay and M6.0 Mahukona earthquakes, struck within seven minutes of one another off the northwest coast of the island in October 2006. These earthquakes resulted in damage at all thirteen of the telescopes near the summit of Mauna Kea, leading to gaps in telescope operations ranging from days up to four months. With the experiences of 2006 and Hawai`i's history of damaging earthquakes, we have begun a study to explore the feasibility of implementing earthquake early warning systems to provide advance warning to the Thirty Meter Telescope of imminent strong ground shaking from future local earthquakes. One of the major challenges for earthquake early warning in Hawai`i is the variety of earthquake sources, from shallow crustal faults to deeper mantle sources, including the basal decollement separating the volcanic pile from the ancient oceanic crust. Infrastructure on the Island of Hawai`i may be only tens of kilometers from these sources, allowing warning times of only 20 s or less. We assess the capability of the current seismic network to produce alerts for major historic earthquakes, and we will provide recommendations for upgrades to improve performance.

We investigate the peak ground motions from the largest well-recorded earthquakes on creeping strike-slip faults in active-tectonic continental regions. Our goal is to evaluate whether the strong ground motions from earthquakes on creeping faults are smaller than those from earthquakes on locked faults. Smaller ground motions might be expected from earthquakes on creeping faults if the fault sections that strongly radiate energy are surrounded by patches of fault that predominantly absorb energy. For our study we used the ground motion data available in the PEER NGA-West2 database and the ground motion prediction equations developed from the PEER NGA-West2 dataset. We analyzed data for the eleven largest well-recorded creeping-fault earthquakes, which ranged in magnitude from M 5.0 to 6.5. Our finding is that these earthquakes produced peak ground motions that are statistically indistinguishable from the peak ground motions produced by similar-magnitude earthquakes on locked faults. These findings may be implemented in earthquake hazard estimates for moderate-size earthquakes in creeping-fault regions. Further investigation is necessary to determine whether this result also applies to larger earthquakes on creeping faults. Please also see: Harris, R.A., and N.A. Abrahamson (2014), Strong ground motions generated by earthquakes on creeping faults, Geophysical Research Letters, vol. 41, doi:10.1002/2014GL060228.

I use the Relative Source Time Function (RSTF) method to determine the source properties of earthquakes within southeastern Alaska and northwestern Canada in the first part of the project, and of earthquakes along the Denali fault in the second part. I deconvolve a small-event P-arrival signal from a larger event by the following method: select the arrivals with a tapered cosine window, apply a fast Fourier transform to obtain the spectrum, apply the water-level deconvolution technique, and bandpass filter before inverse transforming the result to obtain the RSTF. I compare the source processes of earthquakes within the area to determine stress drop differences and their relation to the tectonic setting of the earthquakes' locations. The results are consistent with previous studies: stress drop is independent of moment, implying self-similarity; stress drop correlates with tectonic regime; stress drop is independent of depth; stress drop depends on focal mechanism, with strike-slip events presenting larger stress drops; and stress drop decreases as a function of time. I also determine seismic wave attenuation in the central western United States using coda waves. I select approximately 40 moderate earthquakes (magnitude between 5.5 and 6.5) located along the California-Baja California, California-Nevada, eastern Idaho, Gulf of California, Hebgen Lake, Montana, Nevada, New Mexico, off-coast northern California, off-coast Oregon, southern California, southern Illinois, Vancouver Island, Washington, and Wyoming regions. These events were recorded by the EarthScope Transportable Array (TA) network from 2005 to 2009. We obtained the data from the Incorporated Research Institutions for Seismology (IRIS). In this study we implement a method, based on the assumption that coda waves are single backscattered waves from randomly distributed heterogeneities, to calculate the coda Q. The frequencies studied lie between 1 and 15 Hz. The scattering attenuation is calculated for frequency bands centered
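The water-level deconvolution step of the RSTF procedure can be sketched as follows; the function name, the water-level fraction, and the crude low-pass stand-in for the bandpass filter are our illustrative choices, not those used in the study.

```python
import numpy as np

def water_level_deconvolve(large, small, water=0.01, dt=0.01, fmax=5.0):
    """Water-level deconvolution of a small-event P waveform (empirical
    Green's function) from a larger event's waveform, yielding an RSTF.
    `water` is the water-level fraction of the peak spectral power of
    the small event."""
    n = len(large)
    L = np.fft.rfft(large)
    S = np.fft.rfft(small, n)
    power = np.abs(S) ** 2
    # Replace small spectral power by a floor ("water level") so the
    # spectral division stays stable where the small event has no energy
    floor = water * power.max()
    rstf_spec = L * np.conj(S) / np.maximum(power, floor)
    # Crude low-pass in place of a proper bandpass filter
    freqs = np.fft.rfftfreq(n, d=dt)
    rstf_spec[freqs > fmax] = 0.0
    return np.fft.irfft(rstf_spec, n)
```

Deconvolving a trace from itself returns a pulse concentrated at zero lag, which is a quick sanity check of the implementation.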

Objective To assess and estimate the personality changes that occurred before and after the 2009 earthquake in L'Aquila and to model the ways the earthquake affected adolescents according to gender and sport practice. The consequences of earthquakes on psychological health are long-lasting for portions of the population, depending on age, gender, social conditions, and individual experiences. Sports activities are considered a factor with which to test the overall impact of the earthquake on individual and social psychological changes in adolescents. Design Before-and-after design. Setting Population-based study conducted in L'Aquila, Italy, before and after the 2009 earthquake. Participants Before the earthquake, a random sample of 179 adolescent subjects who either practised or did not practise sports (71 vs 108, respectively). After the earthquake, 149 of the original 179 subjects were assessed a second time. Primary outcome measure Minnesota Multiphasic Personality Inventory-Adolescents (MMPI-A) questionnaire scores, obtained in a supervised environment. Results An unbalanced split-plot analysis, at a 0.05 significance level, was carried out using a linear mixed model with quake, sex, and sport practice as predictive factors. Although the overall scores indicated no deviant behaviours in the adolescents tested, changes were detected in many individual content scale scores, including depression (A-dep score mean ± SEM: before quake = 47.54 ± 0.73; after quake = 52.67 ± 0.86) and social discomfort (A-sod score mean ± SEM: before quake = 49.91 ± 0.65; after quake = 51.72 ± 0.81). The MMPI-A profiles show different impacts of the earthquake on adolescents according to gender and sport practice. Conclusions The differences detected in MMPI-A scores raise issues about the social policies required to address the psychological changes in adolescents. The current study supports the idea that sport should be considered part of a coping strategy to assist adolescents in dealing with the

On 7 September 1999 at 11:56:50 GMT, an earthquake of Mw = 5.9 occurred near Athens, the capital of Greece. The epicenter was located in the northwest area of Parnitha Mountain, at a distance of 18 km from the city centre. This earthquake was one of the most destructive in Greece in modern times. The intensity of the earthquake reached IX in the northwest territories of the city, and it caused the death of 143 people and serious structural damage to many buildings. On the 13th of September the Seismological Laboratory of Patras University installed a seismic network of 30 stations in order to observe the evolution of the aftershock sequence. This temporary seismic network remained in the area of Attika for 50 days and recorded a significant part of the aftershock sequence. In this paper we use the high-quality recordings of this network to investigate the influence of surface geology on the seismic motion at sites within the epicentral area which suffered the most during this earthquake. We applied the horizontal-to-vertical (H/V) spectral ratio method to both noise and earthquake records, and the obtained results exhibit very good agreement. Finally, we compare the results with the geological conditions of the study area and the damage distribution. Most of the obtained amplification levels were low, with the exception of the site of Ano Liosia, where a significant amount of damage was observed and the results indicate that the earthquake motion was amplified four times. Based on the above we conclude that the damage in the city of Athens was due to source effects rather than site effects.
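The H/V computation applied here can be sketched as below; the Hann taper, quadratic-mean horizontal component, and boxcar smoothing width are my assumptions, not the paper's exact processing choices:

```python
import numpy as np

def hv_spectral_ratio(north, east, vertical, dt, smooth=11):
    """Illustrative H/V spectral ratio for one three-component window.
    Peaks in the returned ratio suggest site resonance frequencies."""
    win = np.hanning(len(vertical))
    amp = lambda x: np.abs(np.fft.rfft((np.asarray(x, float) - np.mean(x)) * win))
    h = np.sqrt((amp(north) ** 2 + amp(east) ** 2) / 2.0)  # mean horizontal spectrum
    v = amp(vertical)
    kern = np.ones(smooth) / smooth                         # moving-average smoothing
    h, v = np.convolve(h, kern, "same"), np.convolve(v, kern, "same")
    freqs = np.fft.rfftfreq(len(vertical), dt)
    return freqs, h / np.maximum(v, 1e-12)
```

In practice the ratio is averaged over many windows (of noise or of earthquake codas) before a site amplification level, like the factor of four reported at Ano Liosia, is read off the curve.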

Scholars usually expect fault pre-sliding or expansion phenomena to be observed near the epicentral area before a strong earthquake, but a growing number of observations show that crustal deformation near the epicentral area is in fact smallest (Zhou, 1997; Niu, 2009, 2012; Bilham, 2005; Amoruso et al., 2010). Fixed-point theory is a branch of mathematics that arises from the theory of topological transformations and has important applications in model analysis. An important precursor was observed by two tiltmeter sets installed at Wenchuan Observatory in the epicentral area: their tilt changes were the smallest compared with the other 8 stations around them in the year before the Wenchuan earthquake. To describe this phenomenon, we propose the minimum annual variation range, used as a topological transformation. The window length is 1 year, and the sliding step is 1 day. The convergence of points with minimum annual change in the 3 years before the Wenchuan earthquake is studied, and the results show that the points with minimum deformation amplitude basically converge to the epicentral region before the earthquake. The possible mechanism of this fixed point of crustal deformation is explored. Concerning the fixed point of crustal deformation, the fluidity of the lithospheric medium and the theory of isostasy are accepted by many scholars (Bott & Dean, 1973; Merer et al., 1988; Molnar et al., 1975, 1978; Tapponnier et al., 1976; Wang et al., 2001). To explain the fixed point of crustal deformation before earthquakes, we study the plate bending model (Bai et al., 2003). According to the plate bending model and real deformation data, we find that the earthquake rupture occurred around the extreme point of plate bending, where the velocities of displacement, tilt, strain, gravity and so on are close to zero, and the fixed points are located around the epicenter. The phenomenon of a fixed point of crustal deformation is different from former understandings about the
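The sliding-window statistic described (1-year window slid one day at a time, then picking the station with the smallest annual variation range) might be sketched like this; the station names and input layout are hypothetical:

```python
import numpy as np

def annual_variation_range(series, win=365):
    """Peak-to-peak amplitude in a 1-year window slid one sample (one day)
    at a time -- the 'minimum annual variation range' statistic."""
    x = np.asarray(series, float)
    return np.array([x[i:i + win].max() - x[i:i + win].min()
                     for i in range(len(x) - win + 1)])

def quietest_station(daily_tilt_by_station, win=365):
    """Station whose smallest annual variation range is the minimum of the
    network; the dict-of-arrays layout is an illustrative assumption."""
    ranges = {name: annual_variation_range(x, win).min()
              for name, x in daily_tilt_by_station.items()}
    return min(ranges, key=ranges.get)
```

Applied to a network, the station returned by `quietest_station` plays the role of the candidate "fixed point" whose convergence toward the future epicenter the paper examines.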

Earthquake ruptures perturb the stress within the surrounding crustal volume, and it is now widely accepted that these stress perturbations strongly correlate with the subsequent seismicity. Here we document five cases of mainshock-aftershock sequences generated by strike-slip faults in different tectonic environments of the world in order to demonstrate that stress changes resulting from large earthquakes decades earlier affect the spatial distribution of the aftershocks of the current mainshocks. The studied mainshock-aftershock sequences are the 15 October 1979 Imperial Valley earthquake (Mw = 6.4) in southern California; the 27 November 1979 Khuli-Boniabad (Mw = 7.1), the 10 May 1997 Qa'enat (Mw = 7.2) and the 31 March 2006 Silakhor (Mw = 6.1) earthquakes in Iran; and the 13 March 1992 Erzincan earthquake (Mw = 6.7) in Turkey. In the literature, we were able to find only these mainshocks that are characterized by dense and strong aftershock activity along and beyond one end of their ruptures, while rare aftershock occurrences with relatively lower magnitudes are reported for the other end. It is shown that the stress changes resulting from earlier mainshock(s) that are close in both time and space may be the reason behind the observed aftershock patterns. The largest aftershocks of the mainshocks studied tend to occur inside the stress-increased lobes that were also stressed by the background earthquakes, and not inside the stress-increased lobes that fall into the stress shadow of the background earthquakes. We suggest that the stress shadows of previous mainshocks may persist in the crust for decades and suppress the aftershock distribution of current mainshocks. Considering active research on the use of Coulomb stress change maps as a practical tool to forecast the spatial distribution of upcoming aftershocks for earthquake risk mitigation purposes in near-real time, it is further suggested
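The quantity behind these stress-lobe arguments is the static Coulomb failure stress change resolved on a receiver fault, conventionally dCFS = d_tau + mu' * d_sigma_n; a minimal sketch, with a commonly assumed effective friction value:

```python
def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
    """Static Coulomb failure stress change on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n, with unclamping (reduced compression)
    counted positive. mu_eff = 0.4 is a commonly assumed effective friction,
    not a value taken from this study."""
    return d_shear_mpa + mu_eff * d_normal_mpa

# A receiver gaining 0.05 MPa of shear load plus 0.10 MPa of unclamping is
# moved 0.09 MPa closer to failure; negative dCFS marks a stress shadow.
```

Positive lobes of dCFS from an old mainshock promote later aftershocks there, while negative lobes (stress shadows) suppress them for as long as tectonic loading takes to erase the deficit, which is the decades-long persistence the abstract invokes.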

The ability to successfully predict the future behavior of a system is a strong indication that the system is well understood. Certainly many details of the earthquake system remain obscure, but several hypotheses related to earthquake occurrence and seismic hazard have been proffered, and predicting earthquake behavior is a worthy goal and demanded by society. Along these lines, one of the primary objectives of the Regional Earthquake Likelihood Models (RELM) working group was to formalize earthquake occurrence hypotheses in the form of prospective earthquake rate forecasts in California. RELM members, working in small research groups, developed more than a dozen 5-year forecasts; they also outlined a performance evaluation method and provided a conceptual description of a Testing Center in which to perform predictability experiments. Subsequently, researchers working within the Collaboratory for the Study of Earthquake Predictability (CSEP) have begun implementing Testing Centers in different locations worldwide, and the RELM predictability experiment, a truly prospective earthquake prediction effort, is underway within the U.S. branch of CSEP. The experiment, designed to compare time-invariant 5-year earthquake rate forecasts, is now approximately halfway to its completion. In this paper, we describe the models under evaluation and present, for the first time, preliminary results of this unique experiment. While these results are preliminary (the forecasts were meant to apply for 5 years), we find interesting results: most of the models are consistent with the observations, and one model forecasts the distribution of earthquakes best. We discuss the observed sample of target earthquakes in the context of historical seismicity within the testing region, highlight potential pitfalls of the current tests, and suggest plans for future revisions to experiments such as this one.
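The RELM-style evaluation scores a gridded rate forecast with a joint Poisson likelihood over space-magnitude bins, and the N-test compares the total forecast rate with the observed event count; a minimal sketch of both, with function names of my own choosing:

```python
import math

def poisson_joint_log_likelihood(rates, counts):
    """Joint Poisson log-likelihood of observed bin counts given forecast
    rates (expected events per bin over the test period); rates must be
    positive. Higher values mean the forecast fits the observation better."""
    return sum(-lam + n * math.log(lam) - math.lgamma(n + 1)
               for lam, n in zip(rates, counts))

def n_test_lower_quantile(total_rate, n_obs):
    """P(N <= n_obs) under a Poisson forecast with expectation total_rate:
    one tail of the CSEP-style N-test. Values near 0 or 1 indicate the
    forecast badly under- or over-predicts the total number of events."""
    return sum(math.exp(-total_rate) * total_rate ** k / math.factorial(k)
               for k in range(n_obs + 1))
```

Competing forecasts are then ranked by likelihood differences over the same target catalog, which is how one model can be singled out as forecasting the distribution of earthquakes best.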

The study aimed to gather data on posttraumatic stress and depression in adolescents following the 2015 Nepal earthquake and to explore the adolescents' coping strategies. In a questionnaire-based, cross-sectional study about 1 year after the earthquake, adolescents in two districts with different degrees of impact were evaluated for disaster experience, coping strategies, and symptoms of posttraumatic stress and depression, measured with the Child Posttraumatic Stress Scale and the Depression Self Rating Scale. In the studied sample (N = 409), the estimated prevalence of posttraumatic stress disorder (PTSD) (43.3%) and depression (38.1%) was considerable. The prevalence of PTSD was significantly higher in the more affected area (49.0% vs 37.9%); however, the prevalence figures were comparable among adolescents who reported a stressor. The prevalence of depression was comparable between the areas. Female gender, joint family, financial problems, displacement, injury or being trapped in the earthquake, damage to livelihood, and fear of death were significantly associated with a probable PTSD diagnosis. Various coping strategies were used: talking to others, praying, helping others, hoping for the best, and certain shared activities were common. Drug abuse was rare. Most of the coping strategies were comparable among the clinical groups. A considerable proportion of adolescents had posttraumatic stress and depression 1 year after the earthquake. There is a need for clinical interventions and for follow-up studies regarding the outcome. Disaster Med Public Health Preparedness. 2018;page 1 of 7.

We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time-dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for such events on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard, including ground-motion exceedance probabilities as well as short-term rupture probabilities, in concert with the long-term forecasts of probabilistic seismic-hazard analysis (PSHA).

In this study, the Turkey earthquake catalog of events within the time period 1900-2015, prepared by Boğaziçi University Kandilli Observatory and Earthquake Research Institute, is analyzed. The catalog consists of earthquakes that occurred in Turkey and the surrounding area (32°-45°N / 23°-48°E). The catalog data were checked in two respects: time-dependent variation and consistency across different regions. Specifically, the data set prior to 1976 was found deficient. In total, 7 regions were evaluated according to their tectonic characteristics and data sets. In this study, the original data for every region were used without any change; b-values, a-values, and magnitudes of completeness (Mc) were calculated. For the calculation of b-values, the focal depth range was selected as h = 0-50 km. One important complication for seismic catalogs is discriminating real (natural) seismic events from artificial (unnatural) seismic events. Therefore, within the original catalog, artificial quarry blasts and mine blasts in particular were separated by declustering and dequarrying methods. The declustering process eliminates induced earthquakes, especially those occurring in geothermal regions, large water basins, and mining regions, from the original catalogs. Using the current moment tensor catalog prepared by Kalafat (2015), a faulting-type map of the region was prepared. As a result, each region was examined for a relation between fault type and b-value. In this study, the hypothesis of a relation between b-values and the previously evaluated and currently ongoing extensional, compressional, and strike-slip faulting regimes in Turkey is tested once more. This study was supported by the Science Fellowship and Grant Programs Department (2014-2219) of TUBITAK (The Scientific and Technological Research Council of Turkey). I also offer my eternal gratitude to Prof. Dr. Nafi Toksöz for his constructive contributions and encouragement of this study.
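The b-value calculation mentioned here is conventionally done with the Aki/Utsu maximum-likelihood estimator; a sketch, assuming magnitudes binned at 0.1 units (the paper's binning is not stated):

```python
import math

def b_value_mle(magnitudes, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for events with M >= Mc, with the
    usual half-bin correction for magnitudes reported in bins of width dm:
    b = log10(e) / (mean(M) - (Mc - dm/2))."""
    m = [x for x in magnitudes if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))
```

The estimator depends only on the mean magnitude above the completeness threshold Mc, which is why getting Mc right per region (as the study does) matters more than the fitting itself: including incomplete low magnitudes biases the mean down and the b-value up.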

Probabilistic earthquake hazard and loss analyses play important roles in many areas of risk management, including earthquake-related public policy and insurance ratemaking. Rigorous loss estimation for portfolios of properties is difficult because there are various types of uncertainties in all aspects of modeling and analysis. The objective of this study is to investigate the sensitivity of earthquake loss estimation to uncertainties in regional seismicity, earthquake source parameters, ground motions, and the spatial correlation of sites, for typical property portfolios in Southern California. Southern California is an attractive region for such a study because it has a large population concentration exposed to significant levels of seismic hazard. During the last decade, there have been several comprehensive studies of most regional faults and seismogenic sources. There have also been detailed studies of regional ground motion attenuation and of regional and local site responses to ground motions. This information has been used by engineering seismologists to conduct regional seismic hazard and risk analyses on a routine basis. However, one of the more difficult tasks in such studies is the proper incorporation of uncertainties in the analysis. On the hazard side, there are uncertainties in the magnitudes, rates, and mechanisms of the seismic sources, in local site conditions, and in ground motion site amplifications. On the vulnerability side, there are considerable uncertainties in estimating the state of damage of buildings under different earthquake ground motions. On the analytical side, there are challenges in capturing the spatial correlation of ground motions and building damage, and in integrating thousands of loss distribution curves with different degrees of correlation. In this paper we propose to address some of these issues by conducting loss analyses of a typical small portfolio in southern California, taking into consideration various source and ground
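One common way to propagate ground-motion uncertainty and site-to-site spatial correlation into portfolio loss, in the spirit of the analysis proposed here, is Monte Carlo simulation with an assumed exponential correlation model; every parameter value below is illustrative, not from the study:

```python
import numpy as np

def simulate_portfolio_losses(values, xy, mean_loss_ratio, sigma_ln=0.6,
                              corr_len_km=10.0, n_sims=5000, seed=0):
    """Toy Monte Carlo of portfolio loss: spatially correlated lognormal
    loss ratios per site, correlation decaying exponentially with site
    separation. Returns one aggregate loss per simulation."""
    rng = np.random.default_rng(seed)
    xy = np.asarray(xy, float)
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    corr = np.exp(-dist / corr_len_km)             # exponential spatial correlation
    chol = np.linalg.cholesky(corr + 1e-10 * np.eye(len(xy)))
    eps = rng.standard_normal((n_sims, len(xy))) @ chol.T   # correlated residuals
    # Lognormal loss ratio with mean mean_loss_ratio, capped at total loss
    ratios = np.minimum(mean_loss_ratio * np.exp(sigma_ln * eps - sigma_ln ** 2 / 2), 1.0)
    return ratios @ np.asarray(values, float)
```

Raising the correlation length fattens the tail of the loss distribution without changing its mean, which is exactly why ignoring spatial correlation understates portfolio risk.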

Near-field observations of high-precision borehole strain and pore pressure show no indication of coherent accelerating strain or pore pressure during the weeks to seconds before the 28 September 2004 M 6.0 Parkfield earthquake. Minor changes in strain rate did occur at a few sites during the last 24 hr before the earthquake, but these changes are neither significant nor of the form expected for strain during slip coalescence initiating fault failure. Seconds before the event, strain is stable at the 10^-11 level. Final prerupture nucleation slip in the hypocentral region is constrained to have a moment less than 2 x 10^12 N m (M 2.2) and a source size less than 30 m. Ground displacement data indicate similar constraints. Localized rupture nucleation and runaway precludes useful prediction of damaging earthquakes. Coseismic dynamic strains of about 10 microstrain peak-to-peak were superimposed on volumetric strain offsets of about 0.5 microstrain to the northwest of the epicenter and about 0.2 microstrain to the southeast of the epicenter, consistent with right-lateral slip. Observed strain and Global Positioning System (GPS) offsets can be simply fit with 20 cm of slip between 4 and 10 km on a 20-km segment of the fault north of Gold Hill (M0 = 7 x 10^17 N m). Variable slip inversion models using GPS data and seismic data indicate similar moments. Observed postseismic strain is 60% to 300% of the coseismic strain, indicating incomplete release of accumulated strain. No measurable change in fault zone compliance preceding or following the earthquake is indicated by stable earth tidal response. No indications of strain change accompany nonvolcanic tremor events reported prior to and following the earthquake.
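The quoted nucleation bound can be checked with the standard moment-magnitude relation; a small sketch:

```python
import math

def moment_magnitude(m0_newton_meters):
    """Moment magnitude from seismic moment (Hanks-Kanamori convention):
    Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)
```

Here `moment_magnitude(2e12)` gives about 2.13, consistent with the quoted M 2.2 bound on nucleation slip (the small difference reflects rounding and the choice of constant), and `moment_magnitude(7e17)` gives about 5.8 for the coseismic slip model.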

Many earth scientists in this country and abroad are focusing their studies on the search for means of predicting impending earthquakes, but, as yet, an accurate prediction of the time and place of such an event cannot be made. From past experience, however, one can assume that earthquakes will continue to harass mankind and that they will occur most frequently in the areas where they have been relatively common in the past. In the United States, earthquakes can be expected to occur most frequently in the western states, particularly in Alaska, California, Washington, Oregon, Nevada, Utah, and Montana. The danger, however, is not confined to any one part of the country; major earthquakes have occurred at widely scattered locations.

We implemented a daily forecast of m > 4 earthquakes for California in a format suitable for testing in community-based earthquake predictability experiments: the Regional Earthquake Likelihood Models (RELM) working group and the Collaboratory for the Study of Earthquake Predictability (CSEP). The forecast is based on near-real-time earthquake reports from the ANSS catalog above magnitude 2 and will be available online. The model used to generate the forecasts is based on the Epidemic-Type Earthquake Sequence (ETES) model, a stochastic model of clustered and triggered seismicity. Our particular implementation is based on the earlier work of Helmstetter et al. (2006, 2007), but we extended the forecast to all of California, used more data to calibrate the model and its parameters, and made some modifications. Our forecasts will compete against the Short-Term Earthquake Probabilities (STEP) forecasts of Gerstenberger et al. (2005) and other models in the next-day testing class of the CSEP experiment in California. We illustrate our forecasts with examples and discuss preliminary results.
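The conditional intensity behind ETES/ETAS-type forecasts is a background rate plus Omori-law aftershock contributions from all past events, scaled exponentially with their magnitudes; a sketch with illustrative, uncalibrated parameter values:

```python
import math

def etas_rate(t, catalog, mu=0.2, K=0.05, alpha=1.0, c=0.01, p=1.2, m_min=2.0):
    """Epidemic-type (ETAS/ETES-style) conditional intensity at time t (days):
    lambda(t) = mu + sum_i K * exp(alpha * (m_i - m_min)) / (t - t_i + c)^p
    over events (t_i, m_i) with t_i < t. All parameter values here are
    illustrative assumptions, not the calibrated ones from the study."""
    rate = mu
    for t_i, m_i in catalog:            # catalog: (origin time in days, magnitude)
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m_min)) / (t - t_i + c) ** p
    return rate
```

A daily forecast integrates this intensity over each grid cell and the next 24 hours; the expected count per cell is what the CSEP next-day tests then compare against the observed events.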

Background An earthquake measuring 6.3 on the Richter scale struck the city of Bam in Iran on the 26th of December 2003 at 5:26 A.M. It was devastating, leaving over 40,000 dead and around 30,000 injured. The profound tragedy of thousands killed has caused emotional and psychological trauma for tens of thousands of people who survived. A study was carried out to assess psychological distress among Bam earthquake survivors and the factors associated with severe distress in those who survived the tragedy. Methods This was a population-based study measuring psychological distress among the survivors of the Bam earthquake in Iran. Using a multi-stage stratified sampling method, a random sample of individuals aged 15 years and over living in Bam were interviewed. Psychological distress was measured using the 12-item General Health Questionnaire (GHQ-12). Results In all, 916 survivors were interviewed. The mean age of the respondents was 32.9 years (SD = 12.4); most were male (53%), married (66%), and had secondary school education (50%). Forty-one percent reported they lost 3 to 5 members of their family in the earthquake. In addition, the findings showed that 58% of the respondents suffered from severe psychological distress as measured by the GHQ-12, three times higher than the psychological distress reported among the general population. There were significant differences between sub-groups of the study sample with regard to their psychological distress. The results of the logistic regression analysis also indicated that female gender, lower education, unemployment, and loss of family members were associated with severe psychological distress among earthquake victims. Conclusion The study findings indicated that the amount of psychological distress among earthquake survivors was high and there is an urgent need to deliver mental health care to disaster victims in local medical settings and to reduce negative health impacts of the earthquake adequate psychological

I perform a retrospective forecast experiment in the most rapidly extending continental rift worldwide, the western Corinth Gulf (wCG, Greece), aiming to predict shallow seismicity (depth < 15 km) with magnitude M ≥ 3.0 for the time period between 1995 and 2013. I compare two short-term earthquake clustering models based on epidemic-type aftershock sequence (ETAS) statistics, four physics-based (CRS) models combining static stress change estimations with the rate-and-state laboratory law, and one hybrid model. For the latter models, I incorporate the stress changes imparted by 31 earthquakes with magnitude M ≥ 4.5 in the extended area of wCG. Special attention is given to the 3-D representation of active faults, which act as potential receiver planes for the estimation of static stress changes. I use reference seismicity between 1990 and 1995, corresponding to the learning phase of the physics-based models, and I evaluate the forecasts for six months following the 1995 M = 6.4 Aigio earthquake using log-likelihood performance metrics. For the ETAS realizations, I use seismic events with magnitude M ≥ 2.5 within daily update intervals to enhance their predictive power. For assessing the role of background seismicity, I implement a stochastic reconstruction (a.k.a. declustering), aiming to answer whether M > 4.5 earthquakes correspond to spontaneous events and to identify, if possible, different triggering characteristics between aftershock sequences and swarm-type seismicity periods. I find that: (1) ETAS models outperform CRS models in most time intervals, achieving a very low rejection ratio of RN = 6 per cent when I test their efficiency to forecast the total number of events inside the study area; (2) the best rejection ratio for CRS models reaches RN = 17 per cent when I use varying target depths and receiver plane geometry; (3) 75 per cent of the 1995 Aigio aftershocks that occurred within the first month can be explained by static stress changes; (4) highly variable
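The physics-based (CRS) models combine static stress steps with the Dieterich rate-and-state seismicity response; a minimal form of that law, with assumed parameter values:

```python
import math

def rate_state_seismicity(t, dcfs_mpa, r0=1.0, a_sigma=0.04, t_a=100.0):
    """Dieterich (1994) seismicity rate at time t (days) after a static
    Coulomb stress step dcfs_mpa (MPa), relative to the reference rate r0:
    R(t) = r0 / (1 + (exp(-dcfs/a_sigma) - 1) * exp(-t/t_a)).
    a_sigma (MPa) and the aftershock relaxation time t_a (days) are assumed
    illustrative values, not the ones calibrated for the wCG."""
    gamma = (math.exp(-dcfs_mpa / a_sigma) - 1.0) * math.exp(-t / t_a)
    return r0 / (1.0 + gamma)
```

A positive stress step boosts the rate by a factor exp(dcfs/a_sigma) at t = 0 and relaxes back to r0 over roughly t_a, while a negative step (stress shadow) suppresses it; the log-likelihood tests in the experiment then score these rate maps against the observed catalog exactly as they score the ETAS rates.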

Since the 1990s, researchers have tried to use offset clusters to study strong earthquake patterns. However, owing to the limited quantity of offset data, the approach was not widely used until recent years, with the rapid development of high-resolution topographic data such as remote sensing imagery and LiDAR. In this study, we use airborne LiDAR data to re-evaluate the cumulative offsets and the co-seismic offset of the 1920 Haiyuan Ms 8.5 earthquake along the western and middle segments of the co-seismic surface rupture zone. Our LiDAR data indicate that the offset observations along both the western and middle segments fall into five groups. The group with the minimum slip amount is associated with the 1920 Haiyuan Ms 8.5 earthquake, which ruptured both the western and middle segments. Our research highlights two new interpretations: firstly, the previously reported maximum displacement of the 1920 earthquake was likely produced by at least two earthquakes; secondly, the Cumulative Offset Probability Density (COPD) peaks of the same offset amount on the western and middle segments do not correspond to each other one to one. The ages of the paleoearthquakes indicate that the offsets were not accumulated during the same period. We suggest that any discussion of the rupture pattern of a fault based on offset data should also consider fault segmentation and paleoseismological data; therefore, when using COPD peaks to study the number of palaeo-events and their rupture patterns, the peaks should be computed and analyzed on fault sub-sections rather than entire fault zones. Our results reveal that the rupture patterns on the western and middle segments of the Haiyuan Fault differ from each other, which provides new data for regional seismic potential analysis.
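A Cumulative Offset Probability Density is conventionally built by stacking one Gaussian per offset measurement, weighted by its uncertainty; a sketch of that construction (the grid and uncertainties are supplied by the user):

```python
import numpy as np

def copd(offsets, uncertainties, x):
    """Cumulative Offset Probability Density: sum of Gaussians, one per
    measured offset (mean = offset, sigma = its uncertainty), evaluated on
    grid x. Peaks mark offset amounts shared by many measurements, i.e.
    candidate slip increments of past earthquakes."""
    x = np.asarray(x, float)
    dens = np.zeros_like(x)
    for m, s in zip(offsets, uncertainties):
        dens += np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    return dens
```

Computing this density per fault sub-section, as the abstract recommends, keeps peaks from different segments (which the study shows do not correspond one to one) from being smeared into a single misleading whole-fault curve.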

This study sought to expand the literature on bereavement and response to natural disasters by reporting the prevalence, severity, and correlates of depressive symptoms among bereaved and nonbereaved parents of the 2008 Wenchuan Earthquake in China. Bereaved (n = 155) and nonbereaved (n = 35) parents from the Xiang’e township in China were interviewed at 18 months (Wave 1) and 24 months (Wave 2) following the earthquake. From Wave 1 to Wave 2, rates of probable depression fell for both bereaved (65.8% to 44.5%) and nonbereaved parents (34.3% to 20.0%). The depression index of both groups also decreased, but only significantly among bereaved parents. Of bereaved parents, those with fewer years of education had more severe symptoms at both waves. Depressive symptom severity of bereaved mothers improved over time, but that of bereaved fathers remained unchanged. Not becoming pregnant again after the earthquake was significantly linked to worse depressive symptoms in both waves, but this was not significant when age was added to the model. Bereaved parents may need more postearthquake supportive services, with fathers, individuals with fewer years of education, and parents who are not able to become pregnant again after the earthquake being particularly vulnerable. PMID:23536328

The 2010 Mw 7.2 El Mayor-Cucapah earthquake occurred southwest of the Pacific-North America plate boundary in northern Baja California. It was preceded by an intensive foreshock sequence and was followed by numerous aftershocks both on and off the mainshock rupture zone, providing a great opportunity to study the physical mechanisms of foreshock and aftershock triggering. In our previously published work (Meng and Peng, GJI, 2014), we focused on the seismicity rate changes around the Salton Sea Geothermal Field (SSGF) and along the San Jacinto Fault (SJF) following the mainshock. Based on a recently developed matched filter technique, we were able to detect up to 20 times more events than listed in the SCSN catalog. We found that the seismicity rates near the SSGF and the SJF both increased significantly immediately following the mainshock. However, the seismicity rate near the SSGF, where static Coulomb stress decreased, dropped below the pre-mainshock level after ~50 days. On the other hand, the seismicity rate near the SJF, where static Coulomb stress increased, remained high until the end of our detection time window. Such a pattern indicates that static and dynamic triggering may coexist but dominate on different time scales. Motivated by this success, we shift our focus to the foreshock and aftershock sequences of the El Mayor-Cucapah event. We utilize available seismic stations immediately north of the US-Mexico border and a few stations within Mexico to conduct a similar detection from ~40 days before to 40 days after the mainshock. We aim to obtain a complete foreshock sequence and investigate its spatio-temporal evolution before the mainshock. Moreover, we plan to study similar patterns for aftershocks and the corresponding triggering mechanisms. Updated results will be presented at the meeting.
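Matched-filter detection of the kind used here slides a normalized waveform template over the continuous data and flags lags where the correlation coefficient exceeds a threshold, commonly some multiple of the median absolute deviation of the correlation trace; a single-channel sketch (real implementations stack correlations over many stations and channels, and the 8x-MAD threshold is an assumed, typical choice):

```python
import numpy as np

def matched_filter_detect(data, template, threshold=8.0):
    """Flag sample lags where the Pearson correlation of `template` with the
    aligned data segment exceeds `threshold` times the MAD of the whole
    correlation trace. Single-channel simplification for illustration."""
    data = np.asarray(data, float)
    tpl = np.asarray(template, float)
    n = len(tpl)
    tpl = (tpl - tpl.mean()) / (tpl.std() * n)     # normalize template once
    cc = np.empty(len(data) - n + 1)
    for i in range(cc.size):
        seg = data[i:i + n]
        sd = seg.std()
        cc[i] = 0.0 if sd == 0 else np.dot(tpl, seg - seg.mean()) / sd
    mad = np.median(np.abs(cc - np.median(cc)))
    return np.flatnonzero(np.abs(cc) >= threshold * mad)
```

Because the statistic is a correlation coefficient rather than raw amplitude, buried events well below the catalog threshold can still trigger detections, which is how such studies recover an order of magnitude more events than the routine catalog.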

This is the first article in a series that evaluates the health concerns of people living in a Salvadoran rural community after major earthquakes. Part I reviews the background, methods, and results of post-earthquake conditions with regard to healthcare, access to healthcare, housing, food, water, and sanitation. Part II reviews the implications of these results and recommendations for improvements within the community. Part III investigates the psychosocial and mental health consequences of the earthquakes and provides suggestions for improved mental health awareness, assessment, and intervention. El Salvador experienced 2 major earthquakes in January and February 2001. This study evaluates the effects of the earthquakes on health practices in the rural town of San Sebastian. The research was conducted using a convenience sample survey of subjects affected by the earthquakes. The sample included 594 people within 100 households. The 32-question survey assessed post-earthquake conditions in the areas of health care and access to care, housing, food and water, and sanitation. Communicable diseases affected a number of family members. After the earthquakes, 38% of households reported new injuries, and 79% reported acute exacerbations of chronic illness. Rural inhabitants were 30% more likely to have an uninhabitable home than were urban inhabitants. Concerns included safe housing, water purification, and waste elimination. The findings indicate a need for greater public health awareness and community action to adapt living conditions after a disaster and prevent the spread of communicable disease.

Earthquakes are renowned as being among the most dangerous and destructive types of natural disasters. Iran, a developing country in Asia, is prone to earthquakes and is ranked as one of the most vulnerable countries in the world in this respect. The medical response in disasters is accompanied by managerial, logistic, technical, and medical challenges, as was also the case in the Bam earthquake in Iran. Our objective was to explore the medical response to the Bam earthquake, with specific emphasis on pre-hospital medical management during the first days. The study, performed in 2008, was an interview-based qualitative study using content analysis. We conducted nineteen interviews with experts and managers responsible for responding to the Bam earthquake, including pre-hospital emergency medical services, the Red Crescent, and Universities of Medical Sciences. Participants were selected using a purposeful sampling method, and sample size was determined by data saturation. The pre-hospital medical service was divided into three categories: triage, emergency medical care, and transportation; within each category, facilitators and obstacles were identified. The obstacles identified were the absence of a structured disaster plan, the absence of standardized medical teams, and a shortage of resources. The army and skilled medical volunteers were identified as facilitators. The most compelling, and at the same time most amenable, obstacle was the lack of a disaster management plan. It was evident that implementing a comprehensive plan would not only save lives but also decrease suffering and enable effective use of the available resources at the pre-hospital and hospital levels.

Slope stability analysis typically focuses on modeling with cohesion and friction angle parameters, but in earthquake-induced landslides, susceptibility correlates more strongly with lithological and stratigraphic parameters. In sedimentary deposits with very low cohesion and a low degree of diagenesis, the risk of landslides increases. The Horcón Formation, which crops out continuously along cliffs in Central Chile between 32.5° and 33°S, is a well-preserved, horizontally stratified Miocene-Pliocene unit composed of marine strata overlying Paleozoic-Mesozoic igneous basement. During the Quaternary, the sequence was tectonically uplifted 80 meters and covered by unconsolidated eolian deposits. Given that seismotectonic and barrier-asperity models suggest the occurrence of a forthcoming megathrust earthquake in a segment that includes this area, the Horcón Formation constitutes a good case study for characterizing the susceptibility of this type of sediment to mass movements triggered by earthquakes. Field mapping and stratigraphic and sedimentological studies, including petrographic analyses to determine lithological composition and the paragenesis of diagenetic events, have been carried out along with limited gravimetric profiling and CPTU drill tests. High-resolution digital elevation modeling has also been applied. This work has led to the recognition of a shallow marine lithofacies association composed of weakly lithified, fossiliferous, and bioturbated medium- to fine-grained litharenite, mudstone, and fine conglomerate. The low grade of diagenesis in the sedimentary deposits was a response to a short period of burial and the subsequent accelerated uplift evidenced along the coast of Chile during the Quaternary. We have generated a predictive model of landslide susceptibility for the Horcón Formation and for the overlying Quaternary eolian deposits, incorporating variables such as composition and diagenesis of lithofacies, slope, structures, weathering, and landcover. The model

We investigate earthquake clustering properties from three geothermal reservoirs to clarify how earthquake patterns respond to hydraulic activities. We process ≈9 years of seismicity from four datasets corresponding to the Geysers (both the entire field and a local subset), Coso, and Salton Sea geothermal fields, California. For each, the completeness magnitude, b-value, and fractal dimension are calculated and used to identify seismicity clusters using the nearest-neighbor approach of Zaliapin and Ben-Zion [2013a, 2013b]. Estimates of the temporal evolution of different clustering properties in relation to hydraulic parameters point to different responses of earthquake dynamics to hydraulic operations in each case study. The clustering at the Geysers at local scale and at the Salton Sea are the most and least affected by hydraulic activities, respectively. The varying response of earthquake clustering to hydraulic activities across datasets may reflect the regional seismo-tectonic complexity as well as the scale of the geothermal activities performed (e.g. number of active wells and superposition of injection and production activities). Two clustering properties respond significantly to hydraulic changes across all datasets: the background rates and the proportion of clusters consisting of a single event. Background rates are larger at the Geysers and Coso during high injection-production periods, while the opposite holds for the Salton Sea. This possibly reflects the different physical mechanisms controlling seismicity at each geothermal field. Additionally, a lower proportion of singles is found during time periods with higher injection-production rates. This may reflect decreasing effective stress in areas subjected to higher pore pressure and larger earthquake triggering by stress transfer.
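The nearest-neighbor cluster identification of Zaliapin and Ben-Zion combines inter-event time, epicentral distance, and parent magnitude into a single proximity eta_ij = dt * r**d_f * 10**(-b*m_i), linking each event to the earlier event that minimizes it. A minimal sketch, assuming epicentral coordinates already projected to km; the O(n²) loop and the small distance floor are simplifications for illustration:

```python
import numpy as np

def nearest_neighbor_links(t, x_km, y_km, mags, b=1.0, d_f=1.6):
    """For each event j, find the parent i (an earlier event) minimizing
    the Zaliapin & Ben-Zion space-time-magnitude proximity
        eta_ij = dt * r**d_f * 10**(-b * m_i).
    Returns (eta, parent); eta is inf and parent is -1 for the first event."""
    n = len(t)
    eta = np.full(n, np.inf)
    parent = np.full(n, -1)
    for j in range(n):
        for i in range(n):
            dt = t[j] - t[i]
            if dt <= 0:          # parents must strictly precede children
                continue
            r = np.hypot(x_km[j] - x_km[i], y_km[j] - y_km[i])
            d = dt * max(r, 1e-3) ** d_f * 10.0 ** (-b * mags[i])
            if d < eta[j]:
                eta[j], parent[j] = d, i
    return eta, parent
```

Clusters are then formed by cutting links whose eta exceeds a threshold estimated from the bimodal distribution of log eta; the b-value and fractal dimension d_f would be the catalog-specific estimates mentioned above.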

(Kütahya-Gediz earthquake of March 28, 1970; Diyarbakir-Lice earthquake of September 6, 1975; Van-Muradiye earthquake of November 24, 1976; Erzurum-Kars earthquake of October 30, 1983; Gölcük earthquake of August 17, 1999; Afyon-Sultanhisar earthquake of February 3, 2002). Furthermore, the Iran earthquake of November 27, 1979 was measured and recorded in drilling wells in Turkey, thousands of kilometers away. Although there have been many studies of earthquake prediction and of earthquake-related groundwater level changes, it is still difficult to say that definitive results have been obtained on this subject. The importance of such research on earthquakes is now well recognized. To obtain definitive results on the relation between earthquakes and water level changes, studies must continue along these lines.

Sichuan is a province in China with an extensive history of earthquakes. Recent earthquakes, including the Lushan earthquake in 2013, have resulted in thousands of people losing their homes and their families. However, there is a research gap on the efficiency of government support policies. Therefore, this study develops a new perspective for studying the health of earthquake survivors, based on the effect of post-earthquake rescue policies on the health-related quality of life (HRQOL) of survivors of the Sichuan earthquake. This study uses data from a survey conducted in five hard-hit counties (Wenchuan, Qingchuan, Mianzhu, Lushan, and Dujiangyan) in Sichuan in 2013. A total of 2,000 questionnaires were distributed, and 1,672 were returned; the response rate was 83.6%. Both the rescue policies scale and the Medical Outcomes Study Short Form 36 (SF-36) scale passed the reliability test. The confirmatory factor analysis model showed that the physical component summary (PCS) directly affected the mental component summary (MCS). The results of the structural equation model regarding the effects of rescue policies on HRQOL showed that the path coefficients of six policies (education, orphans, employment, poverty, legal, and social rescue policies) to the PCS of survivors were all positive and passed the test of significance. Finally, although only the path coefficient of the educational rescue policy to the MCS of survivors was positive and passed the test of significance, the other five policies affected the MCS indirectly through the PCS. The general HRQOL of survivors is not ideal; the survivors showed low satisfaction with the post-earthquake rescue policies. Further, the six post-earthquake rescue policies significantly improved the HRQOL of survivors and directly affected the promotion of the PCS of survivors. Aside from the educational rescue policy, all other policies affected the MCS indirectly through the PCS. This finding indicates relatively large differences in

It has recently been recognized that the ionosphere is very sensitive to seismic effects, and the detection of ionospheric perturbations associated with earthquakes seems very promising for short-term earthquake prediction. We have proposed a possible use of VLF/LF (very low frequency (3-30 kHz) / low frequency (30-300 kHz)) radio sounding of seismo-ionospheric perturbations. A brief history of the use of subionospheric VLF/LF propagation for short-term earthquake prediction is given, followed by a significant finding of ionospheric perturbation for the Kobe earthquake in 1995. After reviewing previous VLF/LF results, we present the latest VLF/LF findings: one is the statistical correlation of ionospheric perturbations with earthquakes, and the other is a case study of the Sumatra earthquake of December 2004, indicating the spatial scale and dynamics of the ionospheric perturbation for this earthquake.

We study O-1 earthquake-like precursor effects by analyzing muon-spin-resonance (μSR) MgO data using maximum entropy (ME). Due to its presence in the Earth's crust, MgO is ideal for studying these features. O-1 formation results from a 2-stage break-up of an O anion pair at high-temperature or high-pressure conditions. As T increases above room temperature, a small percentage of oxygen is predicted to produce an O-1 state. ME analysis of 100-Oe μSR data of a pure 3N-MgO single crystal produces a broad Gaussian at 1.36 MHz and a sharp Lorentzian at 1.4 MHz. The latter could be an effect of extended O-1 states, as positive muons probe near O ions. No sharp 1.4-MHz signal is observed in the μSR data of the insulators Al2O3 and PrBCO6, only the expected 100-Oe Gaussian. We have fitted ME μSR transforms of MgO to obtain an empirical description of the 1.36- and 1.4-MHz peaks. Their T dependences above room temperature appear to be positive-hole effects. Relations to precursor earthquake-like O-valency effects are discussed. Research supported by ISIS-UK, LANL-DOE, SETI-NASA, SJSU & AFC.

The Whittier Narrows earthquake of 1987 and the Northridge earthquake of 1994 highlighted the earthquake hazards associated with buried faults in the Los Angeles region. A more thorough knowledge of the subsurface structure of southern California is needed to reveal these and other buried faults and to aid us in understanding how the earthquake-producing machinery works in this region.

Probabilities of surface manifestations of liquefaction due to a repeat of the 1868 (M6.7-7.0) earthquake on the southern segment of the Hayward Fault were calculated for two areas along the margin of San Francisco Bay, California: greater Oakland and the northern Santa Clara Valley. Liquefaction is predicted to be more common in the greater Oakland area than in the northern Santa Clara Valley owing to the presence of 57 km2 of susceptible sandy artificial fill. Most of the fills were placed into San Francisco Bay during the first half of the 20th century to build military bases, port facilities, and shoreline communities like Alameda and Bay Farm Island. Probabilities of liquefaction in the area underlain by this sandy artificial fill range from 0.2 to ~0.5 for a M7.0 earthquake, and decrease to 0.1 to ~0.4 for a M6.7 earthquake. In the greater Oakland area, liquefaction probabilities generally are less than 0.05 for Holocene alluvial fan deposits, which underlie most of the remaining flat-lying urban area. In the northern Santa Clara Valley for a M7.0 earthquake on the Hayward Fault and an assumed water-table depth of 1.5 m (the historically shallowest water level), liquefaction probabilities range from 0.1 to 0.2 along Coyote and Guadalupe Creeks, but are less than 0.05 elsewhere. For a M6.7 earthquake, probabilities are greater than 0.1 along Coyote Creek but decrease along Guadalupe Creek to less than 0.1. Areas with high probabilities in the Santa Clara Valley are underlain by young Holocene levee deposits along major drainages where liquefaction and lateral spreading occurred during large earthquakes in 1868 and 1906.

We develop a methodology that combines compressive sensing backprojection (CS-BP) and source spectral analysis of teleseismic P waves to provide metrics relevant to the dynamics of large earthquakes. We improve the CS-BP method with an auto-adaptive source grid refinement and a reference source adjustment technique to gain better spatial and temporal resolution of the locations of the radiated bursts. We also use a two-step source spectral analysis based on (i) simple theoretical Green's functions that include depth phases and water reverberations and (ii) empirical P wave Green's functions. Furthermore, we propose a source spectrogram methodology that provides the temporal evolution of dynamic parameters such as radiated energy and falloff rates. Bridging backprojection and spectrogram analysis provides the spatial and temporal evolution of these dynamic source parameters. We apply our technique to the recent 2015 Mw 8.3 megathrust Illapel earthquake (Chile). The results from both techniques are consistent and reveal a depth-varying seismic radiation that is also found in other megathrust earthquakes. The low-frequency content of the seismic radiation is located in the shallow part of the megathrust, propagating unilaterally from the hypocenter toward the trench, while most of the high-frequency content comes from the downdip part of the fault. The interpretation of multiple rupture stages in the radiation is also supported by the temporal variations of radiated energy and falloff rates. Finally, we discuss the possible mechanisms, whether prestress, fault geometry, and/or frictional properties, that could explain our observables. Our methodology is an attempt to bridge kinematic observations with earthquake dynamics.

Our undergraduate research program, SCEC/UseIT, an NSF Research Experience for Undergraduates site, provides software for earthquake researchers and educators, movies for outreach, and ways to strengthen the technical career pipeline. SCEC/UseIT motivates diverse undergraduates towards science and engineering careers through team-based research in the exciting field of earthquake information technology. UseIT provides the cross-training in computer science/information technology (CS/IT) and geoscience needed to make fundamental progress in earthquake system science. Our high and increasing participation of women and minority students is crucial given the nation's precipitous enrollment declines in CS/IT undergraduate degree programs, especially among women. UseIT also casts a "wider, farther" recruitment net that targets scholars interested in creative work but not traditionally attracted to summer science internships. Since 2002, SCEC/UseIT has challenged 79 students in three dozen majors from as many schools with difficult, real-world problems that require collaborative, interdisciplinary solutions. Interns design and engineer open-source software, creating increasingly sophisticated visualization tools (see "SCEC-VDO," session IN11), which are employed by SCEC researchers, in new curricula at the University of Southern California, and by outreach specialists who make animated movies for the public and the media. SCEC-VDO would be a valuable tool for research-oriented professional development programs.

Based on observations prior to earthquakes, recent theoretical considerations suggest that some geophysical quantities reveal abnormal changes that anticipate moderate and strong earthquakes within a defined spatial area (the so-called Dobrovolsky area), according to a lithosphere-atmosphere-ionosphere coupling model. One possible pre-earthquake effect could be the appearance of climatological anomalies in the epicentral region weeks to months before a major earthquake. In this paper, the period of 2 months preceding the Amatrice-Norcia (Central Italy) earthquake sequence, which started on 24 August 2016 with an M6 earthquake and a few months later produced two other major shocks (an M5.9 on 26 October and then an M6.5 on 30 October), was analyzed in terms of skin temperature, total column water vapour, and total column ozone, compared with the past 37-year trend. The novelty of the method lies in the way the complete time series is reduced, in which the possible effect of global warming is properly removed. The simultaneous analysis showed the presence of persistent contemporary anomalies in all of the analysed parameters. To validate the technique, a confutation/confirmation analysis was undertaken in which these parameters were analyzed for the same months of a seismically "calm" year without significant seismicity. We also extended the analysis to all available years to construct a confusion matrix comparing the occurrence of climatological data anomalies with real seismicity. This work confirms the potential of multiple parameters to anticipate the occurrence of large earthquakes in Central Italy, thus reinforcing the idea of considering such behaviour an effective tool for an integrated system of future earthquake prediction.

We present ground motion prediction equations (GMPEs) for computing natural log means and standard deviations of vertical-component intensity measures (IMs) for shallow crustal earthquakes in active tectonic regions. The equations were derived from a global database with M 3.0–7.9 events. The functions are similar to those of our horizontal GMPEs. We derive equations for the primary M- and distance-dependence of peak acceleration, peak velocity, and 5%-damped pseudo-spectral accelerations at oscillator periods between 0.01 and 10 s. We observe pronounced M-dependent geometric spreading and region-dependent anelastic attenuation for high-frequency IMs. We do not observe significant region-dependence in site amplification. Aleatory uncertainty is found to decrease with increasing magnitude; within-event variability is independent of distance. Compared to our horizontal-component GMPEs, attenuation rates are broadly comparable (somewhat slower geometric spreading, faster apparent anelastic attenuation), VS30-scaling is reduced, nonlinear site response is much weaker, within-event variability is comparable, and between-event variability is greater.
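The functional ingredients listed above (magnitude-dependent geometric spreading, apparent anelastic attenuation, VS30 site scaling) can be sketched as a toy GMPE. The coefficients below are arbitrary placeholders chosen only to have physically sensible signs; they are not values from this work:

```python
import numpy as np

def gmpe_ln_mean(M, R_km, vs30,
                 c=(-1.2, 0.9, -0.1, -1.3, 0.08, -0.004, -0.35),
                 h=6.0):
    """Toy GMPE: natural-log mean of an intensity measure as a function
    of magnitude M, rupture distance R_km, and site stiffness vs30 (m/s).
    All coefficients c and the pseudo-depth h are placeholders."""
    c0, c1, c2, c3, c4, c5, c6 = c
    Rh = np.sqrt(R_km**2 + h**2)        # effective distance with depth term
    return (c0 + c1*M + c2*(M - 6.0)**2
            + (c3 + c4*M)*np.log(Rh)    # magnitude-dependent geometric spreading
            + c5*Rh                     # apparent anelastic attenuation
            + c6*np.log(vs30/760.0))    # linear site scaling relative to 760 m/s
```

In a real model the coefficients are period-dependent and fit by mixed-effects regression, separating between-event and within-event variability.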

The dynamic response of the Golden Gate Bridge, located north of San Francisco, CA, has been studied previously using ambient vibration data and finite element models. Since permanent seismic instrumentation was installed in 1993, only small earthquakes originating at distances between ~11 and 122 km have been recorded. Nonetheless, these records prompted this study of the response of the bridge to low-amplitude shaking caused by three earthquakes. Compared to previous ambient vibration studies, the earthquake response data reveal a slightly higher fundamental frequency (shorter period) for vertical vibration of the bridge deck center span (~7.7–8.3 s versus 8.2–10.6 s), and a much higher fundamental frequency (shorter period) for the transverse direction of the deck (~11.24–16.3 s versus ~18.2 s). This study also shows that these two periods are the dominant apparent periods representing interaction between tower, cable, and deck.

Electrolyte imbalances are very common among crushed earthquake victims, but there are not enough data regarding their trend of changes. The present study was designed to evaluate the trend of changes in sodium, calcium, and phosphorus ions among crush syndrome patients. In this retrospective cross-sectional study, using the database of Bam earthquake victims developed by the Iranian Society of Nephrology following the 2003 Bam earthquake, Iran, the 10-day trend of changes in sodium, calcium, and phosphorus ions in crush syndrome patients older than 15 years was evaluated. 118 patients with a mean age of 25.6 ± 6.9 years were studied (57.3% male). On the first day of admission, 52.5% (95% CI: 42.7 - 62.3) of the patients had hyponatremia, which reached 43.9% (95% CI: 28.5 - 59.3) on day 10. 100.0% of patients were hypocalcemic on admission, and the serum calcium level did not change dramatically during the 10 days of hospitalization. The prevalence of hyperphosphatemia on the first day was 90.5% (95% CI: 81.5 - 99.5), and on the 10th day of hospitalization 66.7% (95% CI: 48.5 - 84.8) of the patients were still affected. The results of the present study show a 52.5% prevalence of hyponatremia, 100% hypocalcemia, and 90.5% hyperphosphatemia among crush syndrome patients of the Bam earthquake on the first day of admission. Evaluation of the 10-day trend shows a slowly decreasing pattern of these imbalances: after 10 days, 43.9% remained hyponatremic, 92.3% hypocalcemic, and 66.7% hyperphosphatemic.

Thermal anomalies appearing before major earthquakes have become a highly focused issue. There are various hypotheses about the mechanism of these thermal anomalies, but because sufficient evidence is lacking, the mechanism still requires further research. The gestation and occurrence of a major earthquake are related to the interaction of multiple physical fields, and underground fluid surging up to the surface is a likely cause of the thermal anomaly. This study tries to answer questions such as how geothermal energy transfers to the surface and how the multiple physical fields interact. The 2008 Wenchuan Ms8.0 earthquake is one of the largest events of the last decade in mainland China. Remote sensing studies indicate that distinguishable thermal anomalies occurred several days before the earthquake. The heat anomaly value is more than 3 times the average in normal times and is distributed along the Longmen Shan fault zone. Based on geological and geophysical data, a 2D dynamic model of coupled stress, seepage, and thermal fields (HTM model) is constructed. Using COMSOL multiphysics software, this work tries to reveal the generation process and distribution patterns of thermal anomalies prior to thrust-type major earthquakes. The simulation gives the following results: (1) Before the micro-rupture, with increasing compression, the heat current flows toward the fault in the footwall on the whole, while in the hanging wall of the fault, particularly near the ground surface, the heat flows upward. In the fault zone, heat flows upward along the fracture surface; the heat flux in the fracture zone is slightly larger than in the wall rock, but the values are all very small. (2) After the occurrence of the micro-fracture, the heat flow rapidly converges on the faults. In the fault zones, the heat flow accelerates upward along the fracture surfaces, the heat flux increases suddenly, and the vertical heat flux reaches its maximum. The heat flux in the 3 fracture

Ionospheric disturbances have been detected after, e.g., the Northridge (Calais and Minster, 1995) and Denali (Ducic et al., 2003) earthquakes. Similar signals were observed after the 2003 Tokachi-Oki earthquake, the largest earthquake in Japan since the completion of GEONET, a nationwide array composed of over 1000 CGPS stations. We followed a standard procedure: applying a band-pass filter to the ionospheric combination of the L1 and L2 phase signals and calculating subionospheric points (SIP) assuming a thin ionosphere at a height of 350 km. Owing to the high density of SIP, many interesting features are observed and several important parameters were constrained, e.g. (1) apparent propagation speed, (2) directivity of disturbance signals, and (3) decay during propagation. As for (1), the observed speed of about 1 km/sec is significantly smaller than the Rayleigh wave velocity and significantly faster than travelling ionospheric disturbances (TID), but is consistent with the sound velocity at ionospheric heights. The acoustic wave generated by sudden vertical movement of the Earth's surface first propagates upward. It is then refracted by the height-dependent velocity structure, resulting in a horizontally propagating wave through the ionosphere. The observed TEC variation, with a wavelength of a few hundred km, may reflect electron density oscillation caused by the passage of such an acoustic wave. Regarding (2), there was a clear indication that the wave does not propagate northward. As first suggested by Calais et al. (1998), such blocking is considered to be due to interaction between the geomagnetic field and the movement of the charged particles comprising the ionosphere associated with the acoustic wave propagation. The model predicts that there will be no southward propagation of ionospheric disturbances caused by earthquakes in southern hemisphere mid-latitudes, which needs to be confirmed by future earthquakes. Point (3) enabled the authors to define the
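The band-pass step of the procedure can be sketched as follows. Real processing would filter the geometry-free (L1 − L2) phase combination with a proper digital filter; here a crude difference of moving averages stands in for the band-pass, and the 30 s sampling interval and window lengths are assumptions chosen only for illustration:

```python
import numpy as np

def tec_bandpass(tec, short_win=5, long_win=61):
    """Crude band-pass for a slant-TEC time series sampled every 30 s:
    a short moving average suppresses sample-to-sample noise, and a long
    moving average estimates (and removes) the slow background trend,
    leaving oscillations in between (periods of roughly 2-30 min here)."""
    smooth = np.convolve(tec, np.ones(short_win) / short_win, mode="same")
    trend = np.convolve(tec, np.ones(long_win) / long_win, mode="same")
    return smooth - trend
```

The filtered series would then be mapped onto SIP tracks at the assumed 350 km thin-shell height to follow the disturbance in space and time.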

As the capital of West Sumatera Province and the largest city on the west coast of Sumatera, the City of Padang has been designated as one of the National Activity Centers for Regional Economic Development. The city will be developed as a metropolitan city, relying heavily on multi-sectoral support from business, services, industry, and tourism. However, the city is located in a very high-risk zone for earthquakes and tsunamis. Since the 2004 Indian Ocean tsunami, the city has been struck several times by large earthquakes and tsunami threats, for example the M8.4 event of September 2007 and the M7.6 event of September 2009, which caused major casualties, severe damage, and great economic loss, as well as posing a tsunami threat to the people. Without disaster-risk-reduction-based development planning, the goal of making Padang a metropolitan city and National Activity Center will be difficult to achieve. Knowing the level of risk and the appropriate countermeasures from the perspective of business resilience therefore becomes very important. This paper presents a detailed study of business resiliency for the City of Padang, covering (i) earthquake and tsunami risk assessment from the perspective of business preparedness in the Padang Barat subdistrict of Padang City, (ii) assessment of the resiliency level of Padang City businesses after the 2009 event, and (iii) recommendations for considering business resilience factors as part of the DRR-based CBD development plan of the Padang Barat subdistrict. This study not only identifies physical and non-physical aspects of business characteristics, but also identifies four major components of business resiliency indicators: Swift Recovery Factors (RR), Experience and Knowledge of Disaster (PP), Emergency Response Plan (RT), and Asset Protection (PA). Each major component consists of several indicators, with 19 indicators in total. Further investigation of these indicators shows that the total performance value of business resiliency is

It is possible to use the waveform data not only to derive the source mechanism of an earthquake but also to establish the hypocentral coordinates of the `best point source' (the centroid of the stress glut density) at a given frequency. Thus two classical problems of seismology are combined into a single procedure. Given an estimate of the origin time, epicentral coordinates, and depth, an initial moment tensor is derived using one of the variations of the method described in detail by Gilbert and Dziewonski (1975). This set of parameters represents the starting values for an iterative procedure in which perturbations to the elements of the moment tensor are found simultaneously with changes in the hypocentral parameters. In general, the method is stable, and convergence is rapid. Although the approach is a general one, we present it here in the context of the analysis of long-period body wave data recorded by the instruments of the SRO and ASRO digital network. It appears that the upper magnitude limit of earthquakes that can be processed using this particular approach is between 7.5 and 8.0; the lower limit is, at this time, approximately 5.5, but it could be extended by broadening the passband of the analysis to include energy with periods shorter than 45 s. As there are hundreds of earthquakes each year with magnitudes exceeding 5.5, the seismic source mechanism can now be studied in detail not only for major events but also, for example, for aftershock series. We have investigated the foreshock and several aftershocks of the Sumba earthquake of August 19, 1977; the results show temporal variation of the stress regime in the fault area of the main shock. An area some 150 km to the northwest of the epicenter of the main event became seismically active 49 days later. The sense of the strike-slip mechanism of these events is consistent with the relaxation of the compressive stress in the plate north of the Java trench. Another geophysically interesting result of our
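For a fixed source location, recovering the moment tensor from waveform data reduces to a linear least-squares problem d = G m for the six independent moment-tensor elements; the hypocentral perturbations are then found by re-linearizing and iterating. A minimal sketch of the linear step, using a synthetic G in place of real excitation kernels:

```python
import numpy as np

def invert_moment_tensor(G, d):
    """Least-squares solve d = G m for the 6 independent moment-tensor
    elements, where each row of G holds the excitation kernels (partial
    derivatives of one waveform sample with respect to the moment-tensor
    elements) at the current source location, and d is the data vector."""
    m, residuals, rank, _ = np.linalg.lstsq(G, d, rcond=None)
    return m
```

In the full iterative scheme, the misfit is also differentiated with respect to origin time, epicentral coordinates, and depth, and the moment tensor and hypocentral parameters are updated together until convergence.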

We use 21 strong motion recordings from Nepal and India for the 25 April 2015 moment magnitude (MW) 7.8 Gorkha, Nepal, earthquake together with the extensive macroseismic intensity data set presented by Martin et al. (Seism Res Lett 87:957–962, 2015) to analyse the distribution of ground motions at near-field and regional distances. We show that the data are consistent with the instrumental peak ground acceleration (PGA) versus macroseismic intensity relationship developed by Worden et al. (Bull Seism Soc Am 102:204–221, 2012), and use this relationship to estimate peak ground acceleration from intensities (PGAEMS). For nearest-fault distances (RRUP prediction equation (GMPE). At greater distances (RRUP > 200 km), instrumental PGA values are consistent with this GMPE, while PGAEMS is systematically higher. We suggest the latter reflects a duration effect whereby the effects of weak shaking are enhanced by long-duration and/or long-period ground motions from a large event at regional distances. We use PGAEMS values within 200 km to investigate the variability of high-frequency ground motions using the Atkinson and Boore (Bull Seism Soc Am 93:1703–1729, 2003) GMPE as a baseline. Across the near-field region, PGAEMS is higher by a factor of 2.0–2.5 towards the northern, down-dip edge of the rupture compared to the near-field region nearer the southern, up-dip edge of the rupture. Inferred deamplification in the deepest part of the Kathmandu valley supports the conclusion that former lake-bed sediments experienced a pervasive nonlinear response during the mainshock (Dixit et al. in Seismol Res Lett 86(6):1533–1539, 2015; Rajaure et al. in Tectonophysics, 2016). Ground motions were significantly amplified in the southern Gangetic basin, but were relatively low in the northern basin. The overall distribution of ground motions

The March 11, 2011 Tohoku earthquake and its tsunami killed 18,508 people, including the missing (National Police Agency report as of April 2014), and caused the Level 7 accident at TEPCO's Fukushima Dai-ichi nuclear power station in Japan. The problems revealed can be viewed as due to a combination of risk-management, risk-communication, and geoethics issues. Japan's preparations for earthquakes and tsunamis are based on the magnitude of the anticipated earthquake for each region. The government organization coordinating the estimation of anticipated earthquakes is the "Headquarters for Earthquake Research Promotion" (HERP), which is under the Ministry of Education, Culture, Sports, Science and Technology (MEXT). Japan's disaster mitigation system is depicted schematically as consisting of three layers: seismology, civil engineering, and disaster mitigation planning. This research argues that students in geoscience should study geoethics as part of their education, with the Tohoku earthquake and the Level 7 accident at TEPCO's Fukushima Dai-ichi nuclear power station as a case study. Only when they become practicing professionals will they be faced with real geoethical dilemmas. A crisis such as the 2011 earthquake, tsunami, and Fukushima Dai-ichi nuclear accident will force many geoscientists to suddenly confront previously unanticipated geoethics and risk-communication issues. One hopes that previous training will help them to make appropriate decisions under stress. We name this "decision science".

Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw ≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes, and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
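The likelihood-based consistency evaluation of a gridded rate forecast can be sketched under a Poisson assumption per cell. This is an illustrative reimplementation, not CSEP's testing software: the function names and the simulation-based N-test quantile below are our own assumptions.

```python
import math
import random

def poisson_log_likelihood(forecast, observed):
    """Joint log-likelihood of a gridded forecast, assuming independent
    Poisson counts per cell: sum of n*log(rate) - rate - log(n!)."""
    ll = 0.0
    for rate, n in zip(forecast, observed):
        ll += n * math.log(rate) - rate - math.lgamma(n + 1)
    return ll

def poisson_sample(rng, lam):
    # Knuth's multiplication algorithm; adequate for modest rates.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def n_test(forecast, observed, nsim=10000, seed=0):
    """N-test-style consistency check: estimate the quantile of the
    observed total event count under the forecast's Poisson total."""
    rng = random.Random(seed)
    total_rate = sum(forecast)
    n_obs = sum(observed)
    sims = [poisson_sample(rng, total_rate) for _ in range(nsim)]
    return sum(1 for s in sims if s <= n_obs) / nsim
```

A quantile very close to 0 or 1 would flag the forecast's total rate as inconsistent with the observed count; comparative tests (t-test, Wilcoxon) would instead operate on per-event likelihood differences between two models.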

Earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are well-established probability models for recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for alternative, more sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the aforementioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
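The closed-form hazard function noted above (computable even for non-integer shape, unlike the gamma distribution) can be illustrated directly. This is a minimal sketch; the parameter names and the conditional-probability helper are our own, and the paper's fitted parameter values are not reproduced here.

```python
import math

def gee_cdf(t, mu, sigma, alpha):
    """CDF of the exponentiated (generalized) exponential distribution
    with location mu, scale sigma, shape alpha, defined for t > mu."""
    if t <= mu:
        return 0.0
    return (1.0 - math.exp(-(t - mu) / sigma)) ** alpha

def gee_pdf(t, mu, sigma, alpha):
    """Density f(t) = (alpha/sigma) (1 - e^{-(t-mu)/sigma})^{alpha-1} e^{-(t-mu)/sigma}."""
    if t <= mu:
        return 0.0
    z = math.exp(-(t - mu) / sigma)
    return (alpha / sigma) * (1.0 - z) ** (alpha - 1.0) * z

def gee_hazard(t, mu, sigma, alpha):
    """Hazard h(t) = f(t) / (1 - F(t)); closed form for any real alpha > 0."""
    return gee_pdf(t, mu, sigma, alpha) / (1.0 - gee_cdf(t, mu, sigma, alpha))

def conditional_probability(elapsed, window, mu, sigma, alpha):
    """P(event within `window` years | no event for `elapsed` years)."""
    survival = 1.0 - gee_cdf(elapsed, mu, sigma, alpha)
    return (gee_cdf(elapsed + window, mu, sigma, alpha)
            - gee_cdf(elapsed, mu, sigma, alpha)) / survival
```

With shape alpha = 1 the distribution reduces to the ordinary shifted exponential, whose constant hazard 1/sigma gives a quick sanity check.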

1. Background Major portions of urban areas in Asia are highly exposed and vulnerable to devastating earthquakes. Many studies identify ways to reduce earthquake risk by concentrating on building resilience for particularly vulnerable populations. According to the United Nations' warning, by 2020 many Asian countries, such as Taiwan, will have become 'super-aged societies'. However, local authorities rarely use a resilience approach to frame earthquake disaster risk management and land use strategies. Empirically based research on the resilience of aging populations has also received relatively little attention. Thus, a challenge for decision-makers is how to enhance the resilience of aging populations within the context of risk reduction. This study aims to improve the understanding of the resilience of aging populations and its changes over time in the aftermath of a destructive earthquake at the local level. A novel methodology is proposed to assess the resilience of aging populations, to characterize changes in their spatial distribution patterns, and to examine their determinants. 2. Methods and data An indicator-based assessment framework is constructed with the goal of identifying composite indicators (covering the periods before, during and after a disaster) that could serve as proxies for attributes of the resilience of aging populations. Using the recovery process from the Chi-Chi earthquake, which struck central Taiwan in 1999, as a case study, we applied a method combining a geographical information system (GIS)-based spatial statistics technique and cluster analysis to test the extent to which the resilience of aging populations is spatially autocorrelated throughout central Taiwan, and to explain why clustering of resilient areas occurs in specific locations. Furthermore, to scrutinize the factors affecting resilience, we develop an aging population resilience model (APRM) based on existing resilience theory. Using the APRM, we applied a multivariate

The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8 and six earthquakes larger than Mw 8.5, since 2004, has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843, Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that incorporation of best available catalogs of historical earthquakes will likely lead to a significant underestimation of seismic hazard and/or the maximum possible magnitude in many regions, including parts of the Caribbean.

Phenomena at critical points are vital for identifying the short-impending stage prior to earthquakes. The peak stress is a critical point at which stress converts from predominantly accumulation to predominantly release. We call the duration between peak stress and instability "the meta-instability stage", which refers to the short-impending stage of earthquakes. The meta-instability stage consists of a steadily releasing quasi-static stage and an accelerated releasing quasi-dynamic stage. The turning point between these two stages is the remaining critical point. To identify the two critical points in the field, it is necessary to study the characteristic phenomena of various physical fields in the meta-instability stage in the laboratory, where strain and displacement variations have previously been studied. Considering that stress and relative displacement can be detected through thermal variations and their peculiarities in full-field observations, we employed a cooled thermal infrared imaging system to record thermal variations in the meta-instability stage of stick-slip events generated along a simulated, precut planar strike-slip fault in a granodiorite block on a horizontally bilateral servo-controlled press machine. The experimental results demonstrate the following: (1) a large area of decreasing temperatures in wall rocks and increasing temperatures in sporadic sections of the fault indicates entrance into the meta-instability stage. (2) The rapid expansion of regions of increasing temperature on the fault and the growing amplitude of the temperature increase correspond to the turning point from the quasi-static stage to the quasi-dynamic stage. Our results reveal thermal indicators for the critical points prior to earthquakes that provide clues for identifying the short-impending stage of earthquakes.

Natural disasters pose a great challenge to health systems and individual health facilities. In low-resource settings, disaster preparedness systems are often limited and have not been well described. Two devastating earthquakes hit Nepal within a 17-day period in 2015. This study aims to describe the burden and distribution of emergency cases at a local hospital. This is a prospective observational study of patients presenting to a local hospital for a period of 21 days following the earthquake on April 25, 2015. Demographic and clinical information was prospectively registered for all patients in the systematic emergency registry. Systematic telephone interviews were conducted in a random sample of the patients 90 days after admission to the hospital. A total of 2,003 emergency patients were registered during the period. The average daily number of emergency patients during the first five days was almost five times higher (n = 150) than the pre-incident daily average (n = 35). The majority of injuries were fractures (58%), with 348 (56%) in the lower extremities. A total of 345 surgical procedures were performed, and the hospital treated 111 patients with severe injuries related to the earthquake (compartment syndrome, crush injury, and internal injury). Among those with follow-up interviews, over 90% reported that they had been severely affected by the earthquakes: complete house damage, living in temporary shelter, or loss of a close family member. The hospital experienced a very high caseload during the first days, and the majority of patients needed orthopaedic services. The proportions of severely injured patients and in-hospital deaths were relatively low, probably indicating that the most severely injured did not reach the hospital in time. These experiences underline the need for robust and easily available local health services that can respond to disasters.

Damage and loss of life sustained during an earthquake result from falling structures and flying glass and objects. To address these and other problems, new information technology and systems can improve crisis management and crisis response. The most important factor in managing a crisis is our readiness before disasters, supported by useful data. This study aimed to examine the Earthquake Information Management Systems (EIMS) in India, Afghanistan and Iran, and to describe how destruction can be reduced through EIMS in crisis management. This study was an analytical comparison in which data were collected by questionnaire, observation and checklist. The population was the EIMS in the selected countries. Sources of information were staff in related organizations, scientific documentation and the Internet. For data analysis, the Criteria Rating Technique, the Delphi Technique and descriptive methods were used. Findings showed that the EIMS in India (Disaster Information Management System), Afghanistan (Management Information for Natural Disasters) and Iran are decentralized. The Indian state has organized an expert group to inspect issues related to its disaster reduction strategy. In Iran, there was no useful and efficient EIMS for evaluating earthquake information. According to the outcomes, it is clear that an information system can only influence decisions if it is relevant, reliable and available to decision-makers in a timely fashion. Therefore, it is necessary to reform and design a model. The model contains the responsible organizations and their functions.

The Murcia Region, located in the SE Iberian Peninsula, is one of the areas of highest seismic activity in Spain. A system of active faults runs through the region, where the most recent damaging earthquakes in the country took place: in 1999, 2002, 2005 and 2011. The last of these occurred in Lorca, causing 9 deaths and considerable material losses, including damage to the artistic heritage. The seismic emergency plan of the Murcia Region was developed in 2006, based on the results of the risk project RISMUR I, which among other conclusions identified Lorca as one of the municipalities with the highest risk in the province. After the Lorca earthquake in 2011, a revision of the previous study was carried out through the project RISMUR II, incorporating data from this earthquake as well as updated databases of seismicity, active faults, strong-motion records, cadastre, vulnerability, etc. In addition, the new study includes some methodological innovations: modelling of faults as independent units for hazard assessment, and analytical methods for risk estimation using data from the earthquake to calibrate capacity and fragility curves. In this work the results of RISMUR II are presented and compared with those of RISMUR I. The main conclusion is an increase in the hazard along the central SW-NE fault system (Alhama de Murcia, Totana and Carrascoy), which implies higher expected damage in the towns nearest to these faults: Lorca, Totana, Alcantarilla and Murcia.

This is a descriptive analysis of victims of Turkey's October 23, 2011 and November 21, 2011 Van earthquakes. The goal of this study is to investigate the injury profile of both earthquakes in relation to musculoskeletal trauma. We retrospectively reviewed the medical records of 3,965 patients admitted to seven hospitals. A large share of these injuries were soft tissue injuries, followed by fractures, crush injuries, crush syndromes, nerve injuries, vascular injuries, compartment syndrome and joint dislocations. A total of 73 crush injuries were diagnosed, and 31 of them developed compartment syndrome. Patients with closed undisplaced fractures were treated with casting braces. For closed unstable fractures with good skin and soft-tissue conditions, open reduction and internal fixation was performed. All patients with open fractures had an external fixator applied after adequate debridement. Thirty-one of 40 patients with compartment syndrome were treated by fasciotomy. For twelve of them, amputation was necessary. The most common procedure performed was debridement, followed by open reduction and internal fixation and closed reduction-casting, respectively. The results of this study may provide the basis for the future development of a strategy to optimise rescue attempts and plan the treatment of survivors with musculoskeletal injuries after earthquakes.

Research conducted to determine the location of stations for measuring crustal dynamics and predicting earthquakes is discussed. Procedural aspects, extraregional kinematic tendencies, and regional tectonic deformation mechanisms are described.

Florida invested in preserving the Tequesta Indians' "Stonehenge-like" site along the Miami River. Direct observation, and telecast reports, show that a strong association exists between this area and Native American place names, hurricanes, tornados, a waterspout, and other nearby phenomena. Electromagnetic stimulation of human nervous systems in areas like these, discernable by appropriately sensitive individuals when these types of events occur, could plausibly account for some correct "predictions" of events like earthquakes. Various sensory modalities may be activated there. It may be important to understand other historic aspects associated with cultural artifacts like Miami's Tequesta remains. If it also generates instrumentally detectable signals that correlate with visual, "auditory," or nerve ending "tinglings" like those cited by the psychiatrist Arthur Guirdham in books like his Obsessions, applied physicists could partly vindicate the investment and also provide a net return. Society and comparative religious study may benefit.

People's well-being after loss resulting from an earthquake is a concern in countries prone to natural disasters. Most studies on post-earthquake subjective quality of life (QOL) have focused on the effects of psychological impairment and post-traumatic stress disorder (PTSD) on the psychological dimension of QOL. However, there is a need for studies focusing on QOL in populations not affected by PTSD or psychological impairment. The aim of this study was to estimate QOL changes over an 18-month period in an adult population sample after the L'Aquila 2009 earthquake. The study was designed as a longitudinal survey with four repeated measurements performed at six monthly intervals. The setting was the general population of an urban environment after a disruptive earthquake. Participants included 397 healthy adult subjects. Exclusion criteria were comorbidities such as physical, psychological, psychiatric or neurodegenerative diseases at the beginning of the study. The primary outcome measure was QOL, as assessed by the WHOQOL-BREF instrument. A generalised estimating equation model was run for each WHOQOL-BREF domain. Overall, QOL scores were observed to be significantly higher 18 months after the earthquake in all WHOQOL-BREF domains. The model detected an average increase in the physical QOL scores (from 66.6 ± 5.2 to 69.3 ± 4.7), indicating a better overall physical QOL for men. Psychological domain scores (from 64.9 ± 5.1 to 71.5 ± 6.5) were observed to be worse in men than in women. Levels at the WHOQOL domain for psychological health increased from the second assessment onwards in women, indicating higher resiliency. Men averaged higher scores than women in terms of social relationships and the environmental domain. Regarding the physical, psychological and social domains of QOL, scores in the elderly group (age > 60) were observed to be similar to each other regardless of the significant covariates used. WHOQOL-BREF scores of the psychological domain

The change in seismicity leading to the Wenchuan Earthquake in 2008 (Mw 7.9) has been studied by various authors using statistics and/or pattern recognition (Huang, 2008; Yan et al., 2009; Chen and Wang, 2010; Yi et al., 2011). We show, in particular, that magnitude-dependent seismic quiescence is observed for the Wenchuan earthquake and that it adds to other similar observations. Such studies of seismic quiescence prior to major earthquakes include the 1982 Urakawa-Oki earthquake (M 7.1) (Taylor et al., 1992), the 1994 Hokkaido-Toho-Oki earthquake (Mw = 8.2) (Takanami et al., 1996), and the 2011 Tohoku earthquake (Mw = 9.0) (Katsumata, 2011). Smith and Sacks (2013) proposed magnitude-dependent quiescence based on a physical earthquake model (Rydelek and Sacks, 1995) and demonstrated that the quiescence can be reproduced by the introduction of "asperities" (dilatancy-hardened zones). Actual observations indicate the change occurs in a broader area than the eventual earthquake fault zone. In order to accept this explanation, we need to verify the model, as it predicts somewhat controversial features of earthquakes such as magnitude-dependent stress drop in the lower magnitude range and dynamically appearing asperities with repeating slips in some parts of the rupture zone. We show supportive observations. We will also need to verify that dilatancy diffusion is taking place. So far, we only seem to have indirect evidence, which needs to be more quantitatively substantiated.

We develop an intensity prediction equation (IPE) for the Central and Eastern United States, explore differences between modified Mercalli intensities (MMI) and community internet intensities (CII) and the propensity for reporting, and estimate the moment magnitudes of the 1811-1812 New Madrid, MO, and 1886 Charleston, SC, earthquakes. We constrain the study with North American census data, the National Oceanic and Atmospheric Administration MMI dataset (responses between 1924 and 1985), and the USGS 'Did You Feel It?' CII dataset (responses between June, 2000 and August, 2012). The combined intensity dataset has more than 500,000 felt reports for 517 earthquakes with magnitudes between 2.5 and 7.2. The IPE has the basic form MMI = c1 + c2M + c3exp(λ) + c4λ, where M is moment magnitude and λ is the mean log hypocentral distance. Previous IPEs use a limited dataset of MMI, do not differentiate between MMI and CII data in the CEUS, and do not account for spatial variations in population. These factors can have an impact at all magnitudes, especially the last factor at large magnitudes and small intensities, where the population drops to zero in the Atlantic Ocean and Gulf of Mexico. We assume that the reports of a given intensity have hypocentral distances that are log-normally distributed, the distribution of which is modulated by population and the propensity of individuals to report their experience. We do not account for variations in stress drop, regional variations in Q, or distance-dependent geometrical spreading. We simulate the distribution of reports of a given intensity accounting for population and use a grid-search method to solve for the fraction of the population that reports the intensity, the standard deviation of the log-normal distribution, and the mean log hypocentral distance, which appears in the above equation. We find that lower intensities, both CII and MMI, are less likely to be reported than greater intensities. Further, there are strong spatial
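The basic functional form of the IPE can be evaluated directly. The coefficients below are placeholders for illustration only (the study's fitted values are not given in the text), and the logarithm in λ is assumed to be natural; with c3 < 0 the exp(λ) term acts as an anelastic-attenuation-like term linear in distance and the c4λ term as geometric decay.

```python
import math

# Hypothetical coefficients chosen only to illustrate the functional form;
# they are NOT the study's fitted values.
C1, C2, C3, C4 = 1.0, 1.5, -0.0035, -1.0

def predicted_mmi(mw, hypo_dist_km):
    """Evaluate MMI = c1 + c2*M + c3*exp(lam) + c4*lam, where lam is the
    (natural) log hypocentral distance in km."""
    lam = math.log(hypo_dist_km)
    return C1 + C2 * mw + C3 * math.exp(lam) + C4 * lam
```

Note that exp(lam) recovers the hypocentral distance itself, so the equation mixes a term linear in distance with a term linear in log distance, which is why intensity falls off faster at large distances.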

Intensity scales are determined from the damage caused by an earthquake. However, a newer methodology takes into account not only the amount of damage but also the type of damage, "Earthquake Archaeological Effects" (EAEs), and its orientation (e.g. displaced masonry blocks, conjugated fractures, fallen and oriented columns, impact marks, dipping broken corners, etc.) (Rodriguez-Pascua et al., 2011; Giner-Robles et al., 2012). Its main contribution is that it focuses not only on the amount of damage but also on its orientation, giving information about the ground motion during the earthquake. Therefore, these orientations and instrumental data can be correlated with historical earthquakes. In 2011 an earthquake of magnitude Mw 5.2 took place in Lorca (SE Spain), causing 9 casualties and 460 million Euros in repair costs. The study of the EAEs was carried out throughout the whole city (Giner-Robles et al., 2012). The present study aimed to a) validate the EAE methodology by applying it to a single small site, specifically the cemetery of San Clemente in Lorca, and b) constrain the range of orientation for each EAE. This cemetery was selected because its damage orientation data can be correlated with the available instrumental information, and also because the site has: a) a wide variety of architectural styles (neo-Gothic, neo-Baroque, neo-Arabian), b) Cultural Interest status (BIC), and c) different building materials (brick, limestone, marble). The procedure involved two main phases: a) inventory and identification of damage (EAEs) from pictures, and b) analysis of the damage orientations. The orientation was calculated for each EAE and plotted on maps. The results show a NW-SE damage orientation. This orientation is consistent with that recorded by the accelerometer in Lorca (N160°E) and with that obtained from the analysis of EAEs for the whole town of Lorca (N130°E) (Giner-Robles et al., 2012). Due to the existence of an accelerometer, we know the orientation of the peak ground acceleration

Background A magnitude 9.0 earthquake struck off eastern Japan in March 2011. Many survivors have been living in temporary houses provided by the local government since they lost their houses as a result of the great tsunami (tsunami group) or the expected high-dose radiation resulting from the nuclear accident at the Fukushima Daiichi Nuclear Power Plant (radiation group). The tsunami was more than 9 m high in Soma, Fukushima, which is located 30 km north of the Fukushima Daiichi Nuclear Power Plant and adjacent to the mandatory evacuation area. A health screening program was held for the evacuees in Soma in September 2011. The aim of this study was to compare the metabolic profiles of the evacuees before and after the disaster. We hypothesized that the evacuees would experience deteriorated metabolic status based on previous reports of natural disasters. Methods Data on 200 subjects who attended a health screening program in September or October of 2010 (pre-quake) and 2011 (post-quake) were retrospectively reviewed and included in this study. Pre-quake and post-quake results of physical examinations and laboratory tests were compared in the tsunami and radiation groups. A multivariate regression model was used to determine pre-quake predictive factors for elevation of hemoglobin A1c (HbA1c) in the tsunami group. Results Significantly higher values of body weight, body mass index, waist circumference, and HbA1c and lower high-density lipoprotein cholesterol levels were found at the post-quake screening when compared with the pre-quake levels (p = 0.004, p = 0.03, p = 0.008, p < 0.001, and p = 0.03, respectively). A significantly higher proportion of subjects in the tsunami group with high HbA1c, defined as ≥5.7%, was observed after the quake (34.3%) than before the quake (14.8%) (p < 0.001). Regional factors, periodic clinic visits, and waist circumference before the quake were identified as predictive factors on multivariate analysis for the deterioration

This paper focuses on the statistical distribution of earthquake size, studied using Bayesian inference. The strategy consists in defining a prior distribution based on instrumental seismicity and modeled as a power-law distribution. The observed historical data are then used to update this power law and obtain the posterior distribution. The aim of this paper is to define the earthquake size distribution using all the available seismic databases (i.e., instrumental and historical catalogs) and a robust statistical technique. We apply this methodology to Italian seismicity, dividing the territory into source zones as done for the seismic hazard assessment, taken here as the reference model. The results suggest that each area has its own peculiar trend: while the power law captures the mean behavior of the earthquake size distribution, the posterior emphasizes different slopes in different areas. Our results are in general agreement with those used in the seismic hazard assessment in Italy. However, some areas show a flattening of the curve, meaning a significant departure from power-law behavior and implying that there are local aspects that a power-law distribution cannot capture.
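The Bayesian update described above (a power-law prior from instrumental seismicity, modified by historical observations) can be sketched for the Gutenberg-Richter b-value; the prior width, synthetic catalog, and grid below are illustrative assumptions, not the paper's actual data:

```python
import numpy as np

# Grid-based posterior for the Gutenberg-Richter b-value: magnitudes
# above a completeness threshold m0 follow an exponential law with
# rate beta = b * ln(10), which is the magnitude-domain form of a
# power law in seismic moment.
def b_value_posterior(mags, m0, b_grid, prior):
    beta = b_grid * np.log(10.0)
    n = len(mags)
    loglik = n * np.log(beta) - beta * np.sum(mags - m0)  # exponential model
    post = np.exp(loglik - loglik.max()) * prior
    return post / (post.sum() * (b_grid[1] - b_grid[0]))  # normalize density

rng = np.random.default_rng(0)
# Synthetic "historical" magnitudes drawn with a true b-value of 1.0
mags = 4.0 + rng.exponential(1.0 / np.log(10.0), size=500)
b_grid = np.linspace(0.5, 1.5, 401)
prior = np.exp(-0.5 * ((b_grid - 0.9) / 0.2) ** 2)  # "instrumental" prior
post = b_value_posterior(mags, 4.0, b_grid, prior)
b_map = b_grid[np.argmax(post)]  # posterior mode
```

With enough historical data the likelihood dominates the instrumental prior, which is why the posterior can reveal zone-to-zone differences in slope.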

The study attempts to identify predictors of injuries among persons who were hospitalized following the Armenian earthquake of 7 December 1988. A total of 189 such individuals were identified through neighbourhood polyclinics in the city of Leninakan, and 159 noninjured controls were selected from the same neighbourhoods. A standardized interview questionnaire was used. Cases and controls shared many social and demographic characteristics; however, 98% of persons who were hospitalized with injuries were inside a building at the time of the earthquake, compared with 83% of the controls (odds ratio = 12.20, 95% confidence interval (CI) = 3.62-63.79). The odds ratio of injury for individuals who were in a building with five or more floors, compared with those in lower buildings, was 3.65 (95% CI = 2.12-6.33). Leaving buildings after the first shock of the earthquake was a protective behaviour: the odds ratio for those staying indoors compared with those who ran out was 4.40 (95% CI = 2.24-8.71).
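The odds ratios quoted above come from standard 2x2 case-control arithmetic; a minimal sketch using the Woolf (log-method) 95% confidence interval, with invented cell counts rather than the study's actual table:

```python
import math

# Odds ratio and Woolf 95% CI for a 2x2 exposure table from a
# case-control study. The counts passed below are hypothetical and
# do not reproduce the Armenian study's data.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(70, 89, 32, 127)  # hypothetical counts
```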

Mechanical effects left by a model earthquake on its fault plane, in the post-seismic phase, are investigated employing the `displacement discontinuity method'. Simple crack models, characterized by the release of a constant, unidirectional shear traction are investigated first. Both slip components (parallel and normal to the traction direction) are found to be non-vanishing and to depend on fault depth, dip, aspect ratio and fault plane geometry. The rake of the slip vector is similarly found to depend on depth and dip. The fault plane is found to suffer some small rotation and bending, which may be responsible for the indentation of a transform tectonic margin, particularly if cumulative effects are considered. Very significant normal stress components are left over the shallow portion of the fault surface after an earthquake: these are tensile for thrust faults, compressive for normal faults and are typically comparable in size to the stress drop. These normal stresses can easily be computed for more realistic seismic source models, in which a variable slip is assigned; normal stresses are induced in these cases too, and positive shear stresses may even be induced on the fault plane in regions of high slip gradient. Several observations can be explained from the present model: low-dip thrust faults and high-dip normal faults are found to be facilitated, according to the Coulomb failure criterion, in repetitive earthquake cycles; the shape of dip-slip faults near the surface is predicted to be upward-concave; and the shallower aftershock activity generally found in the hanging block of a thrust event can be explained by `unclamping' mechanisms.

There are many reports on the occurrence of anomalous changes in the ionosphere prior to large earthquakes. However, whether or not these changes are reliable precursors that could be useful for earthquake prediction is controversial within the scientific community. To test a possible statistical relationship between ionospheric disturbances and earthquakes, we compare changes in the total electron content (TEC) of the ionosphere with occurrences of M ≥ 6.0 earthquakes globally for 2000-2014. We use TEC data from the global ionosphere map (GIM) and an earthquake list declustered for aftershocks. For each earthquake, we look for anomalous changes in GIM-TEC within 2.5° latitude and 5.0° longitude of the earthquake location (the spatial resolution of GIM-TEC). Our analysis has not found any statistically significant changes in GIM-TEC prior to earthquakes. Thus, we have found no evidence that would suggest that monitoring changes in GIM-TEC might be useful for predicting earthquakes.
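The kind of statistical comparison described above can be sketched as a superposed-epoch style test: count how often anomalies (robust z-scores beyond a threshold) appear in a fixed window before event times versus before randomly drawn times. The series, window, and threshold below are synthetic illustrations, not GIM-TEC data:

```python
import numpy as np

# Build a synthetic daily TEC proxy and flag "anomalies" with a
# robust (median/MAD) z-score, a common choice for precursor studies.
rng = np.random.default_rng(1)
tec = rng.normal(20.0, 3.0, size=5000)  # daily TEC proxy (TECU)
mad = np.median(np.abs(tec - np.median(tec)))
z = (tec - np.median(tec)) / (1.4826 * mad)
anomaly = np.abs(z) > 2.5  # boolean anomaly flags

def pre_event_rate(anomaly, event_days, window=10):
    """Fraction of events preceded by at least one anomaly."""
    hits = 0
    for t in event_days:
        hits += anomaly[max(0, t - window):t].any()
    return hits / len(event_days)

events = rng.integers(50, 4950, size=100)        # synthetic event days
random_times = rng.integers(50, 4950, size=100)  # reference times
obs = pre_event_rate(anomaly, events)
ref = pre_event_rate(anomaly, random_times)
# With no built-in coupling, obs and ref should be statistically similar,
# which is the null result the study above reports for real data.
```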

Triggered earthquakes can be large, damaging, and lethal as evidenced by the 1999 shocks in Turkey and the 2001 events in El Salvador. In this study, earthquakes with M greater than 7.0 from the Harvard CMT catalog are modeled as dislocations to calculate shear stress changes on subsequent earthquake rupture planes near enough to be affected. About 61% of earthquakes that occurred near the main shocks are associated with calculated shear stress increases, while ~39% are associated with shear stress decreases. If earthquakes associated with calculated shear stress increases are interpreted as triggered, then such events make up at least 8% of the CMT catalog. Globally, triggered earthquakes obey an Omori-law rate decay that lasts ~7 to 11 years after the main shock. Earthquakes associated with calculated shear stress increases occur at higher rates than background up to 240 km away from the main-shock centroid. Earthquakes triggered by smaller quakes (foreshocks) also obey Omori's law, which is one of the few time-predictable patterns evident in the global occurrence of earthquakes. These observations indicate that earthquake probability calculations which include interactions from previous shocks should incorporate a transient Omori-law decay with time. In addition, a very simple model using the observed global rate change with time and spatial distribution of triggered earthquakes can be applied to immediately assess the likelihood of triggered earthquakes following large events, and can be in place until more sophisticated analyses are conducted.
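The Omori-law decay of triggered-event rates can be written down directly; K, c, p, and the background rate below are illustrative values chosen so the rate falls back to background within the ~7 to 11 year window reported above, not fitted parameters from the study:

```python
import numpy as np

# Modified Omori law: rate of triggered events t years after a
# main shock. Parameters here are illustrative assumptions.
def omori_rate(t, K=10.0, c=0.1, p=1.0):
    return K / (t + c) ** p

t = np.linspace(0.01, 15.0, 1500)  # years after the main shock
rate = omori_rate(t)
background = 1.0                   # assumed background rate (events/yr)
# "Duration" of triggering: last time the Omori rate exceeds background
duration = t[rate > background][-1]
```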

Accurate prediction of catastrophic brittle failure in rocks and in the Earth presents a significant challenge on theoretical and practical grounds. The governing equations are not known precisely, but are known to produce highly non-linear behavior similar to those of near-critical dynamical systems, with a large and irreducible stochastic component due to material heterogeneity. In a laboratory setting mechanical, hydraulic and rock physical properties are known to change in systematic ways prior to catastrophic failure, often with significant non-Gaussian fluctuations about the mean signal at a given time, for example in the rate of remotely-sensed acoustic emissions. The effectiveness of such signals in real-time forecasting has never been tested before in a controlled laboratory setting, and previous work has often been qualitative in nature, and subject to retrospective selection bias, though it has often been invoked as a basis in forecasting natural hazard events such as volcanoes and earthquakes. Here we describe a collaborative experiment in real-time data assimilation to explore the limits of predictability of rock failure in a best-case scenario. Data are streamed from a remote rock deformation laboratory to a user-friendly portal, where several proposed physical/stochastic models can be analysed in parallel in real time, using a variety of statistical fitting techniques, including least squares regression, maximum likelihood fitting, Markov-chain Monte-Carlo and Bayesian analysis. The results are posted and regularly updated on the web site prior to catastrophic failure, to ensure a true and verifiable prospective test of forecasting power. Preliminary tests on synthetic data with known non-Gaussian statistics show how forecasting power is likely to evolve in the live experiments. In general the predicted failure time does converge on the real failure time, illustrating the bias associated with the 'benefit of hindsight' in retrospective analyses.
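One simple physical/stochastic model of the sort run in such an experiment is the inverse-rate (materials-failure) law: if the precursor rate accelerates as n(t) = A/(t_f - t), then 1/n decays linearly in time and its zero crossing estimates the failure time t_f. The synthetic data and noise model below are illustrative assumptions:

```python
import numpy as np

# Synthetic accelerating precursor rate with multiplicative noise,
# mimicking e.g. an acoustic-emission rate approaching failure.
rng = np.random.default_rng(2)
t_f = 100.0                                # "true" failure time (unknown)
t = np.arange(10.0, 90.0, 1.0)             # observation times
rate = 50.0 / (t_f - t) * rng.lognormal(0.0, 0.1, size=t.size)

# Inverse-rate method: fit a line to 1/n(t) and extrapolate to zero.
inv = 1.0 / rate
slope, intercept = np.polyfit(t, inv, 1)   # slope should be negative
t_f_hat = -intercept / slope               # forecast failure time
```

In the prospective setting described above, this fit would be repeated as each new data point streams in, and the forecast t_f_hat tracked as it converges (or fails to) before the true failure time.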

After the Wenchuan and Lushan earthquakes, the reconstruction planning experience of Sichuan is an important sample for the safety of agricultural village human settlements and the restoration of the regional ecological environment. This paper combines the experience of the reconstruction of Dujiangyan after the Wenchuan earthquake, namely the concepts of sustainable ecological restoration and of regional ecological restoration, with a post-disaster recovery study of Ya'an Zhougongshan Chengqing Temple and its surrounding environment after the Lushan earthquake, seeking to integrate both into the process of post-disaster ecological restoration. Through a comprehensive assessment of regional-scale issues and of the impact of rural ecological infrastructure, we propose macro-cognitive and multi-level measures for ecological restoration projects, in order to provide effective methods to restore the regional ecological environment and reconstruct sustainable human settlements in the areas affected by the recent Jiuzhaigou earthquake.

Detection of thermal anomalies prior to earthquake events has been widely reported by researchers over the past decade. One of the popular approaches for anomaly detection is the Robust Satellite Techniques (RST) approach. In this paper, we apply this method to a collection of six years of MODIS satellite land surface temperature (LST) images to predict the 21 May 2003 Boumerdes, Algeria, earthquake. The thermal anomaly results were compared with the ambient temperature variations measured at three meteorological stations of the Algerian National Office of Meteorology (ONM) (DELLYS-AFIR, TIZI-OUZOU, and DAR-EL-BEIDA). The results confirm the value of RST as a highly effective approach for monitoring earthquakes.
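The RST-style computation reduces to a per-pixel robust index that compares the current LST scene with its multi-year reference statistics for the same place and calendar period; the small synthetic 6-year stack below stands in for the MODIS LST collection, and the grid size and threshold are illustrative:

```python
import numpy as np

# Synthetic stand-in for a 6-year stack of co-registered LST scenes
# (years x rows x cols), in kelvin.
rng = np.random.default_rng(3)
years, ny, nx = 6, 4, 4
lst = rng.normal(300.0, 2.0, size=(years, ny, nx))

current = lst[-1]                          # scene under test
ref_mean = lst[:-1].mean(axis=0)           # multi-year pixel mean
ref_std = lst[:-1].std(axis=0, ddof=1)     # multi-year pixel std
index = (current - ref_mean) / ref_std     # RST-style normalized index
hot_pixels = np.argwhere(index > 2.5)      # candidate thermal anomalies
```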

The 2004 Chuetsu, Japan, earthquake of Mw 6.6 occurred as a shallow thrust event, and a detailed kinematic source model was obtained by Hikima and Koketsu (2005). Just after the event, a dense temporary seismic network was deployed and the detailed structure was elucidated (A. Kato et al. 2006). The seismic velocities in the hanging wall above the main shock fault are lower than those in the footwall, with the velocity contrast extending to a depth of approximately 10 km (A. Kato et al. 2006). Their results also show high velocity on the asperity. We investigate the effect of this structural heterogeneity on fault rupture. First, we model the structure of the 100 km x 100 km x 40 km source region as simply as possible, and then solve the static elastic equation of motion with gravity using a finite difference method and GeoFEM. Our structure model consists of two layers, whose boundary is a surface dipping from the ground surface to 10 km depth before bending into a horizontal plane. The slope of the boundary corresponds to the earthquake fault, with a bump located on the asperity between depths of 4 km and 10 km. The finite difference grid size is 0.25 km horizontally and 0.4 km vertically; the ratio of the horizontal to vertical grid spacing corresponds to the dip angle of the main shock. We simply assume a rigidity of 30 GPa for the lower sediment part and 40 GPa for the hard rock part. The boundary conditions imposed are: 1) stress free on the ground surface; 2) depth-dependent or uniform normal stress applied on the sides that produces the maximum horizontal stress; 3) lithostatic vertical stress on the bottom. The calculated stress field on the main shock fault has the following features: 1) High shear stress peaks appear around the depth of the hypocenter and the top edge of the asperity, corresponding to the depths of the velocity contrast; these high stress zones are caused by stress concentration in the low-rigidity wedge-shaped sediment. 2) The expected stress drop distribution is

Icebergs calved from marine-terminating glaciers currently account for up to half of the 400 Gt of ice lost annually from the Greenland ice sheet (Enderlin et al., 2014). When large capsizing icebergs (~1 Gt of ice) calve, they produce elastic waves that propagate through the solid earth and are observed as teleseismically detectable glacial earthquakes of surface-wave magnitude (MSW) ~5 (e.g., Ekström et al., 2003; Nettles & Ekström, 2010; Tsai & Ekström, 2007; Veitch & Nettles, 2012). The annual number of these events has increased dramatically over the past two decades. We analyze glacial earthquakes from 2011-2013, which expands the glacial-earthquake catalog by 50%. The number of glacial-earthquake solutions now available allows us to investigate regional patterns across Greenland and link earthquake characteristics to changes in ice dynamics at individual glaciers. During the years of our study Greenland's west coast dominated glacial-earthquake production. Kong Oscar Glacier, Upernavik Isstrøm, and Jakobshavn Isbræ all produced more glacial earthquakes during this time than in preceding years. We link patterns in glacial-earthquake production and cessation to the presence or absence of floating ice tongues at glaciers on both coasts of Greenland. The calving model predicts glacial-earthquake force azimuths oriented perpendicular to the calving front, and comparisons between seismic data and satellite imagery confirm this in most instances. At two glaciers we document force azimuths that have recently changed orientation and confirm that similar changes have occurred in the calving-front geometry. We also document glacial earthquakes at one previously quiescent glacier. Consistent with previous work, we model the glacial-earthquake force-time function as a boxcar with horizontal and vertical force components that vary synchronously. We investigate limitations of this approach and explore improvements that could lead to a more accurate representation of the glacial earthquake source.

Prevention and mitigation of rainfall-induced geological hazards after the Ms 8 Wenchuan earthquake of May 12, 2008 gained added significance for the rebuilding of the earthquake-hit regions of China. After the Wenchuan earthquake there were thousands of slope failures, which were much more susceptible to subsequent heavy rainfall, and many even transformed into potential debris flows. A typical example is the catastrophic disaster that occurred in Zhongxing County, Chengdu City on July 10, 2013, in which a previously unrecognized fractured slope up the mountain was triggered by a downpour and transformed into a debris flow that wiped out the community downstream; about 200 victims were reported in that tragic event. Based on extensive field investigation in the earthquake-hit region, the transformation patterns of rainfall-induced mass re-mobilization were categorized into three major types: erosion of fractured slopes, initiation of loose deposits, and outbreak of landslide (debris flow) dams. Besides being widespread and hidden, the complexity of the process is also demonstrated by transformations of the re-mobilized mass through erosion by both gravity and streams in small watersheds, which had never been reported in many regions before the giant Wenchuan earthquake. As a result, an increasing number of questions for disaster relief and mitigation have been raised, including the early-warning threshold and the measurement of volumes for the design of mitigation measures against rainfall-induced mass re-mobilization in debris flow gullies. This study aims to answer essential questions about the threshold and amount of mass initiation triggered by subsequent rainfall in the post-earthquake period. Experimental tests were carried out to simulate the failure of rainfall-induced mass re-mobilization, respectively, on a natural co-seismic fractured slope in the field and on a debris flow simulation platform inside the laboratory. A natural

Seismograms from 52 aftershocks of the 1971 San Fernando earthquake recorded at 25 stations distributed across the San Fernando Valley are examined to identify empirical Green's functions, and characterize the dependence of their waveforms on moment, focal mechanism, source and recording site spatial variations, recording site geology, and recorded frequency band. Recording distances ranged from 3.0 to 33.0 km, hypocentral separations ranged from 0.22 to 28.4 km, and recording site separations ranged from 0.185 to 24.2 km. The recording site geologies are diorite gneiss, marine and nonmarine sediments, and alluvium of varying thicknesses. Waveforms of events with moment below about 1.5 × 10²¹ dyn cm are independent of the source-time function and are termed empirical Green's functions. Waveforms recorded at a particular station from events located within 1.0 to 3.0 km of each other, depending upon site geology, with very similar focal mechanism solutions are nearly identical for frequencies up to 10 Hz. There is no correlation to waveforms between recording sites at least 1.2 km apart, and waveforms are clearly distinctive for two sites 0.185 km apart. The geologic conditions of the recording site dominate the character of empirical Green's functions. Even for source separations of up to 20.0 km, the empirical Green's functions at a particular site are consistent in frequency content, amplification, and energy distribution. Therefore, it is shown that empirical Green's functions can be used to obtain site response functions. The observations of empirical Green's functions are used as a basis for developing the theory for using empirical Green's functions in deconvolution for source pulses and synthesis of seismograms of larger earthquakes.
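The deconvolution for source pulses mentioned at the end of the abstract is commonly performed by water-level spectral division: the large event's spectrum is divided by the empirical Green's function (EGF) spectrum, with the denominator floored to stabilize bins where the EGF has little energy. A sketch on synthetic signals (the water-level fraction, signal shapes, and lengths are assumptions):

```python
import numpy as np

# Water-level deconvolution of a relative source-time function (RSTF)
# from a large event's record and a small co-located event's record.
def egf_deconvolve(big, small, water=0.01):
    B, S = np.fft.rfft(big), np.fft.rfft(small)
    power = np.abs(S) ** 2
    power = np.maximum(power, water * power.max())  # water level floor
    return np.fft.irfft(B * np.conj(S) / power, n=len(big))

rng = np.random.default_rng(4)
n = 256
egf = rng.normal(size=n) * np.exp(-np.arange(n) / 20.0)  # synthetic EGF
stf = np.zeros(n)
stf[:10] = 1.0                                           # boxcar source pulse
# "Large event" = EGF circularly convolved with the boxcar source pulse
big = np.fft.irfft(np.fft.rfft(egf) * np.fft.rfft(stf), n)
recovered = egf_deconvolve(big, egf)                     # approximate RSTF
```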

Since 2008, there has been a dramatic increase in earthquake activity in the central United States in association with major oil and gas operations. Oklahoma is now considered one of the most seismically active states. Although seismic networks are able to detect activity and map its locus, they are unable to image the distribution of fluids in the fault responsible for triggering seismicity. Electrical geophysical methods are ideally suited to image fluid-bearing faults since the injected waste-waters are highly saline and hence have a high electrical conductivity. To date, no study has imaged the fluids in the faults in Oklahoma and made a direct link to the seismicity. The 2016 M5.8 Pawnee, Oklahoma earthquake provides an unprecedented opportunity for scientists to provide that link. Several injection wells are located within a 20 km radius of the epicenter, and studies have suggested that injection of fluids in high-volume wells can trigger earthquakes as far away as 30 km. During late October to early November 2016, we are collecting magnetotelluric (MT) data with the aim of constraining the distribution of fluids in the fault zone. The MT technique uses naturally occurring electric and magnetic fields measured at Earth's surface to measure conductivity structure. We plan to carry out a series of short two-dimensional (2D) profiles of wideband MT acquisition located through areas where the fault recently ruptured and seismic activity is concentrated, and also across nearby faults that did not rupture. The integration of our results and ongoing seismic studies will lead to a better understanding of the links between fluid injection and seismicity.

Mountain watersheds in western China are prone to flash floods. The Wenchuan earthquake of May 12, 2008 destroyed land surfaces and caused frequent landslides and debris flows, which further exacerbated the flash flood hazards. Two giant torrent and debris flow events occurred due to heavy rainfall after the earthquake, one on August 13, 2010 and the other on August 18, 2010. Flash flood reduction and risk assessment are key issues in post-disaster reconstruction. Hydrological prediction models are important and cost-efficient mitigation tools that are widely applied. In this paper, hydrological observations and simulation using remote sensing data and the WMS model are carried out in a typical flood-hit area, the Longxihe watershed, Dujiangyan City, Sichuan Province, China. The hydrological response of rainfall runoff is discussed. The results show that the WMS HEC-1 model can simulate well the runoff process of small watersheds in mountainous areas. This methodology can be used in other earthquake-affected areas for risk assessment and for predicting the magnitude of flash floods. Key words: rainfall-runoff modeling, remote sensing, earthquake, WMS.

INTRODUCTION The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful but we focus on statistical methods using future earthquake catalog data only. We envision two evaluations: a test of consistency with observed data and a comparison of all pairs of models for relative consistency. Both tests are based on the likelihood method, and both are fully prospective (i.e., the models are not adjusted to fit the test data). To be tested, each model must assign a probability to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified “bins” with location, magnitude, time, and focal mechanism limits. Seismology cannot yet deterministically predict individual earthquakes; however, it should seek the best possible models for forecasting earthquake occurrence. This paper describes the statistical rules of an experiment to examine and test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and provide quantitative measures by which models can be assigned weights in a consensus model or be judged as suitable for particular regions. In this paper we develop a statistical method for testing earthquake likelihood models. A companion paper (Schorlemmer and Gerstenberger 2007, this issue) discusses the actual implementation of these tests in the framework of the RELM initiative. Statistical testing of hypotheses is a common task and a
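The consistency test described here can be sketched as a Poisson likelihood ("L") test: compute the joint log-likelihood of the observed bin counts under the forecast rates, then locate it within the distribution of likelihoods of catalogs simulated from the forecast itself. The rates and "observed" catalog below are synthetic stand-ins, not RELM forecasts:

```python
import numpy as np
from math import lgamma, log

# Joint Poisson log-likelihood of observed counts given forecast
# rates, one rate per space-magnitude bin.
def poisson_loglik(rates, counts):
    return sum(-r + n * log(r) - lgamma(n + 1)
               for r, n in zip(rates, counts))

rng = np.random.default_rng(5)
rates = rng.uniform(0.01, 0.5, size=200)   # forecast: expected events/bin
observed = rng.poisson(rates)              # stand-in "observed" catalog
L_obs = poisson_loglik(rates, observed)

# Simulate catalogs from the forecast itself to build the reference
# distribution of log-likelihoods.
sims = [poisson_loglik(rates, rng.poisson(rates)) for _ in range(500)]
quantile = np.mean([s <= L_obs for s in sims])
# A very low quantile would flag the observation as inconsistent
# with the forecast; here the observation is drawn from the forecast,
# so no inconsistency is expected.
```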

The National Research Institute for Earth Science and Disaster Prevention (NIED) has been conducting "fault zone drilling". Fault zone drilling is especially important for understanding the structure, composition, and physical properties of an active fault. In the Chubu district of central Japan there are large active faults such as the Atotsugawa fault (associated with the 1858 Hietsu earthquake) and the Atera fault (associated with the 1586 Tensho earthquake). After the occurrence of the 1995 Kobe earthquake, the importance of direct measurements in fault zones by drilling has been widely recognized. Here we describe the Atera fault and the Nojima fault; because these two faults are similar in geological setting (both mostly composed of granitic rocks), a comparative study of their drilling investigations is straightforward. The features of the Atera fault, which slipped in the 1586 Tensho earthquake, are as follows. Its total length is about 70 km, and its general trend is N45°W with left-lateral strike slip. The slip rate is estimated as 3-5 m per 1000 years. Seismicity is very low at present, and the lithologies around the fault are basically granitic rocks and rhyolite. Six boreholes have been drilled, with depths from 400 m to 630 m. Four of these boreholes (Hatajiri, Fukuoka, Ueno and Kawaue) are located on a line crossing the Atera fault perpendicularly. In the Kawaue well, mostly fractured and altered granitic rock continues from the surface to the bottom at 630 m. X-ray fluorescence (XRF) analysis was conducted to estimate the amounts of major chemical elements in core samples using the glass bead method. The amounts of H2O+ are about 0.5 to 2.5 weight percent. The fractured zone is also characterized by logging data such as low resistivity, low P-wave velocity, low density, and high neutron porosity. The 1995 Kobe (Hyogo-ken Nanbu) earthquake occurred along the NE-SW-trending Rokko-Awaji fault system, and the Nojima fault appeared on the surface on Awaji Island when this

Insofar as slip in an earthquake is related to the strain accumulated near a fault since a previous earthquake, and this process repeats many times, the earthquake cycle approximates an autonomous oscillator. Its asymmetric slow accumulation of strain and rapid release is quite unlike the harmonic motion of a pendulum and need not be time predictable, but still resembles a class of repeating systems known as integrate-and-fire oscillators, whose behavior has been shown to demonstrate a remarkable ability to synchronize to either external or self-organized forcing. Given sufficient time and even very weak physical coupling, the phases of sets of such oscillators, with similar though not necessarily identical period, approach each other. Topological and time series analyses presented here demonstrate that earthquakes worldwide show evidence of such synchronization. Though numerous studies demonstrate that the composite temporal distribution of major earthquakes in the instrumental record is indistinguishable from random, the additional consideration of event renewal interval serves to identify earthquake groupings suggestive of synchronization that are absent in synthetic catalogs. We envisage the weak forces responsible for clustering originate from lithospheric strain induced by seismicity itself, by finite strains over teleseismic distances, or by other sources of lithospheric loading such as Earth's variable rotation. For example, quasi-periodic maxima in rotational deceleration are accompanied by increased global seismicity at multidecadal intervals.
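A toy version of the pulse-coupled integrate-and-fire picture can be simulated directly: identical oscillators whose phase ramps linearly, with each firing nudging every other phase upward by a small coupling. This is a simplified illustration of the oscillator class invoked above (linear phase ramp, constant pulse), not a model fitted to seismicity:

```python
import numpy as np

rng = np.random.default_rng(6)
n, eps, dt = 10, 0.03, 0.001
phase = rng.uniform(0.0, 1.0, size=n)  # random initial phases

def dispersion(ph):
    # 1 - |mean phase vector|: 0 for perfect sync, near 1 when scattered
    return 1.0 - np.abs(np.exp(2j * np.pi * ph).mean())

initial = dispersion(phase)
for _ in range(50000):                      # ~50 natural periods
    phase += dt                             # slow "strain accumulation"
    fired = phase >= 1.0
    if fired.any():                         # rapid "release"
        phase[~fired] += eps * fired.sum()  # pulse coupling on the rest
        phase[fired] = 0.0                  # firing oscillators reset
        phase = np.minimum(phase, 1.0)      # pushed-over phases fire next step
final = dispersion(phase)
```

With weak coupling of this kind, oscillators that fire close together tend to be absorbed into common clusters over many cycles, which is the synchronization tendency the abstract argues for in global seismicity.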

Background Sichuan is a province in China with an extensive history of earthquakes. Recent earthquakes, including the Lushan earthquake in 2013, have resulted in thousands of people losing their homes and their families. However, there is a research gap on the efficiency of government support policies. Therefore, this study develops a new perspective on the health of earthquake survivors, based on the effect of post-earthquake rescue policies on the health-related quality of life (HRQOL) of survivors of the Sichuan earthquake. Methods This study uses data from a survey conducted in five hard-hit counties (Wenchuan, Qingchuan, Mianzhu, Lushan, and Dujiangyan) in Sichuan in 2013. A total of 2,000 questionnaires were distributed and 1,672 were returned; the response rate was 83.6%. Results Both the rescue-policy scale and the Medical Outcomes Study Short Form 36 (SF-36) scale passed the reliability test. The confirmatory factor analysis model showed that the physical component summary (PCS) directly affected the mental component summary (MCS). The structural equation model of the effects of rescue policies on HRQOL showed that the path coefficients of six policies (education, orphan, employment, poverty, legal, and social rescue policies) to the PCS of survivors were all positive and passed the test of significance. Finally, although only the path coefficient of the educational rescue policy to the MCS of survivors was positive and passed the test of significance, the other five policies affected the MCS indirectly through the PCS. Conclusions The general HRQOL of survivors is not ideal; the survivors showed low satisfaction with the post-earthquake rescue policies. Further, the six post-earthquake rescue policies significantly improved the HRQOL of survivors and directly affected the promotion of the PCS of survivors. Aside from the educational rescue policy, all other policies affected the MCS indirectly through the PCS. This finding

Among the various types of 3D heterogeneity in the Earth, trenches may be the most complex systems, combining rapidly varying bathymetry with usually thick sediment below the water layer. These structural complexities can cause substantial waveform complexity on seismograms, but their impact on earthquake source studies has not yet been well understood. Here we explore these effects through studies of two moderate aftershocks (one near the coast, the other close to the Peru-Chile trench axis) of the 2015 Illapel earthquake sequence. The horizontal locations and depths of these two events are poorly constrained, and the results reported by various agencies display substantial variations. We therefore first relocated the epicenters using P-wave first arrivals and determined the other parameters by waveform fitting. Using a jackknifing approach, we found that the trench event shows large differences between regional and teleseismic solutions, particularly in depth, while the coastal event shows consistent results. The teleseismic P/Pdiff waves of these two events also display distinctly different features. More specifically, the trench event has more complex P/Pdiff waves and stronger coda waves in terms of amplitude and duration (longer than 100 s). The coda waves are coherent across stations at different distances and azimuths, indicating that they most likely originate as scattered waves from 3D heterogeneity near the trench. To quantitatively model these 3D effects, we adopted a hybrid waveform simulation approach that computes the 3D wavefield in the source region with the Spectral Element Method (SEM) and then propagates the wavefield to teleseismic and shadow-zone distances with the Direct Solution Method (DSM). We incorporated the GEBCO bathymetry and the water layer into the SEM simulations and assumed the IASP91 1D model for the DSM computation. Compared with the poor fit of the 1D synthetics to the data, we obtain dramatic improvement in the 3D waveform fits across a

A seismic array using the most contemporary technology has recently been installed in the area of southwest Peloponnese, Greece, an area well known for its high seismic activity. The tectonic regime of the Hellenic arc has been responsible for many lethal earthquakes with considerable damage to the broader area of the East Mediterranean sea. The seismic array is based on nine 32-bit stations with broadband borehole seismometers. The seismogenic region monitored by the array is offshore, where earthquake locations suffer from poor azimuthal coverage and the stations of the national seismic network are very distant. The existing network therefore cannot effectively monitor the microseismicity. The new array has achieved detailed monitoring of small events, considerably lowering the magnitude of completeness. The improved detectability of microearthquakes permits the statistical assessment of earthquake sequences in the area. In parallel, the monitored seismicity is directly related to radon measurements in the soil taken at three stations in the area. Radon is measured indirectly by means of γ-ray spectrometry of its radioactive progenies 214Pb and 214Bi (emitting at 351 keV and 609 keV, respectively). NaI(Tl) detectors have been installed at 1 m depth at sites in the vicinity of faults, providing continuous real-time data. Local meteorological records for atmospheric corrections are also continuously recorded. According to the Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) model, atmospheric thermal anomalies observed before strong events can be attributed to increased radon concentration. This is also supported by statistical analysis of AVHRR/NOAA-18 satellite thermal infrared (TIR) daily records. A combined study of precursory signals is expected to provide a reliable assessment of their usefulness for short-term forecasting.

On 16 December 2013, an earthquake of Ms 5.1 occurred in Badong County, in the Three Gorges Reservoir area, China. We collected all 150 published focal mechanism solutions (FMS) and inverted for the tectonic stress field in the Badong, Three Gorges Dam, and Huangling anticline areas using the software SATSI (Hardebeck and Michael, 2006). The inversion shows that the maximum principal stress axis (σ1) in Badong plunges NNE or SSW. In detail, the σ1 axis is nearly vertical in the center of the Huangling anticline and becomes horizontal to the west. For the deep structure, we studied satellite gravity anomalies of degree 8-638 in this area using the EIGEN-6C2 model provided by ICGEM. Combining a deep seismic sounding profile through the epicenter of the Badong earthquake with petrological data, we reinterpreted the deep structure of the study area. The results show that the deep crust beneath Badong is unstable and that upwelling of deep material drives the continued uplift of the Huangling anticline, consistent with the stress-field result. Both provide energy for earthquake preparation. The FMS indicate that the Gaoqiao Fault is the causative fault of this Ms 5.1 earthquake. Field investigations show that the lithology and fracture characteristics in Badong favor reservoir water infiltration. Before the earthquake, the reservoir water level rose to 175 m, the highest storage level, which increased the load. Based on the above results, we conclude that the Ms 5.1 Badong earthquake was controlled by the deep tectonic environment and the stress field in the shallow crust. Water infiltration and the added load from impoundment of the Three Gorges Reservoir reduced the strength of the Gaoqiao Fault and changed its stress state. These factors jointly promoted abrupt movement of the fault, already in a critical stress state, and triggered the Ms 5.1 Badong earthquake.

The 2010 M7.0 Haiti earthquake was the first major earthquake in southern Haiti in 250 years. As this event could represent the beginning of a new period of active seismicity in the region, and in consideration of how vulnerable the population is to earthquake damage, it is important to understand the nature of this event and how it has influenced seismic hazards in the region. Most significantly, the 2010 earthquake occurred on the secondary Leogâne thrust fault (two fault segments), not the Enriquillo Fault, the major strike-slip fault in the region, despite it being only a few kilometers away. We first use a finite element model to simulate rupture along the Leogâne fault. We varied friction and background stress to investigate the conditions that best explain the observed surface deformation and why the rupture did not jump to the nearby Enriquillo fault. Our model successfully replicated rupture propagation along the two segments of the Leogâne fault, and indicated that a significant stress increase occurred on the top and to the west of the Enriquillo fault. We also investigated the potential ground shaking level in this region if a rupture similar to the Mw 7.0 2010 Haiti earthquake were to occur on the Enriquillo fault. We used a finite element method and assumptions on regional stress to simulate low-frequency dynamic rupture propagation for the segment of the Enriquillo fault closest to the capital. The high-frequency ground motion components were calculated using the specific barrier model, and the hybrid synthetics were obtained by combining the low frequencies (<1 Hz) from the dynamic rupture simulation with the high frequencies (>1 Hz) from the stochastic simulation, using matched filtering at a crossover frequency of 1 Hz. The average horizontal peak ground acceleration, computed at several sites of interest throughout Port-au-Prince (the capital), has a value of 0.35 g. Finally, we investigated the 3D local tomography of this region. We considered 897 high-quality records from the earthquake catalog as recorded by

PART A: The seismic history of the southeastern United States is dominated by the 1886 earthquake near Charleston, S.C. An understanding of the specific source and the uniqueness of the neotectonic setting of this large earthquake is essential in order to properly assess seismic hazards in the southeastern United States. Such knowledge will also contribute to the fundamental understanding of intraplate earthquakes and will aid indirectly in deciphering the evolution of Atlantic-type continental margins. The 15 chapters in this volume report on the first stage of an ongoing multidisciplinary study of the Charleston earthquake of 1886. The Modified Mercalli intensity for the 1886 earthquake was X in the meizoseismal area, an elliptical area 35 by 50 km, the center of which was Middleton Place. Seismic activity is continuing today in the Middleton Place-Summerville area at a higher level than prior to 1886. The present seismicity is originating at depths of 1 to 8 km, mostly in the crystalline basement beneath sedimentary rocks of the Coastal Plain. The crystalline basement beneath the Charleston-Summerville area is not simply a seaward extension of crystalline rocks of the Appalachian orogen that are exposed in the Piedmont to the northwest, but has a distinctive magnetic signature that does not reflect Appalachian orogenic trends. The area underlain by this distinctive geophysical basement, the Charleston block, may represent a broad zone of Triassic and (or) Jurassic crustal extension formed during the early stages of the opening of the Atlantic Ocean. The Charleston block is characterized in part by prominent, roughly circular magnetic and gravity highs that are thought to reflect mafic or ultramafic plutons. A continuously cored borehole put down over the shallowest (about 1.5 km deep) of these magnetic anomalies on the edge of the meizoseismal area bottomed at 792 m in amygdaloidal basalt. Although the K-Ar ages of about 100 m.y. for the basalt are consistent

U.S. Geological Survey instrumental seismic studies in the Parkfield-Cholame area consist of three related parts that were undertaken as pilot studies in a program designed to develop improved tools and concepts for investigating the properties and behavior of the San Andreas fault. These studies include: 1. The long-term monitoring of the seismic background on the San Andreas fault in Cholame Valley by means of a short-period Benioff seismograph station at Gold Hill. 2. The investigation of the geometry of the zone of aftershocks of the June 27 earthquakes by means of a small portable cluster of short-period, primarily vertical-component seismographs. 3. The seismic-refraction calibration of the region enclosing the aftershock source by means of three short reversed refraction profiles and a "calibration shot" near the epicenter of the main June 27 earthquake. This brief report outlines the work that has been completed and presents some preliminary results obtained from analysis of records from Gold Hill and the portable cluster.

Simulated ground motions can be used in structural and earthquake engineering practice as an alternative to, or to augment, real ground motion data sets. Common engineering applications of simulated motions are linear and nonlinear time history analyses of building structures, where full acceleration records are necessary. Before using simulated ground motions in such applications, it is important to assess them in terms of their frequency and amplitude content as well as their match with the corresponding real records. In this study, a framework is outlined for assessing simulated ground motions in terms of their use in structural engineering. Misfit criteria are determined for both ground motion parameters and structural response by comparing the simulated values against the corresponding real values. For this purpose, as a case study, the 12 November 1999 Duzce earthquake is simulated using a stochastic finite-fault methodology. Simulated records are employed for time history analyses of frame models of typical residential buildings. Next, the relationships between ground motion misfits and structural response misfits are studied. Results show that the seismological misfits around the fundamental period of the selected buildings determine the accuracy of the simulated responses in terms of their agreement with the observed responses.
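
As an illustration of the kind of misfit criterion described (the parameter set and any weighting used in the study are not reproduced here), a relative misfit between simulated and observed ground-motion parameters could be computed as:

```python
def relative_misfit(simulated, observed):
    """Relative misfit |sim - obs| / |obs| for one ground-motion parameter."""
    return abs(simulated - observed) / abs(observed)

def misfit_summary(sim_params, obs_params):
    """Per-parameter relative misfits for dicts keyed by parameter name."""
    return {k: relative_misfit(sim_params[k], obs_params[k]) for k in obs_params}

# Hypothetical values: PGA and spectral acceleration at the building's
# fundamental period (both in g); not data from the Duzce case study.
observed = {"PGA": 0.30, "Sa_T1": 0.45}
simulated = {"PGA": 0.24, "Sa_T1": 0.52}
print(misfit_summary(simulated, observed))
```

The study's finding suggests weighting such misfits near the fundamental period of the target building, since those control the accuracy of the simulated structural response.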

The historical record of seismicity in Australia is too short (less than 150 years) to confidently define seismic source zones, particularly the recurrence rates for large, potentially damaging earthquakes, and this leads to uncertainty in hazard assessments. One way to extend this record is to search for evidence of earthquakes in the landscape, including Quaternary fault scarps, tilt blocks and disruptions to drainage patterns. A recent Geoscience Australia compilation of evidence of Quaternary tectonics identified over one hundred examples of potentially recent structures in Australia, indicating that a greater hazard may exist from large earthquakes than is evident from the recorded history alone. Most of these structures have not been studied in detail and have not been dated, so the recurrence rate for damaging events is unknown. One example of recent tectonic activity lies on the Victoria-New South Wales border, where geologically recent uplift has resulted in the formation of the Cadell Fault Scarp, damming Australia's largest river, the Murray River, and diverting its course. The scarp extends along a north-south strike for at least 50 km and reaches a maximum height of about 13 metres. The scarp displaces sands and clays of the Murray Basin sediments, which overlie Palaeozoic bedrock at a depth of 100 to 250 m. There is evidence that the river system has eroded the scarp and displaced the topographic expression away from the location where the fault, or faults, meets the surface. Thus, to locate potential trenching sites that intersect the faults, Geoscience Australia acquired ground-penetrating radar, resistivity and multi-channel high-resolution seismic reflection and refraction data along traverses across the scarp. The seismic data were acquired using an IVI T15000 MiniVib vibrator operating in P-wave mode, and a 24-channel Stratavisor acquisition system. Four 10-second sweeps, with a frequency range of 10-240 Hz, were carried out

Earthquake early warning (EEW) systems aim at providing fast and accurate estimates of event parameters or local ground shaking over wide ranges of source dimensions and epicentral distances. The Swiss Seismological Service (SED) has integrated EEW solutions into the SeisComP3 (SC3) professional earthquake monitoring software. VS(SC3) provides fast magnitude estimates for network-based point sources using conventional triggering and phase association techniques, while FinDer(SC3) matches evolving patterns of ground motion to track ongoing rupture extent and can provide accurate ground motion predictions for finite-fault ruptures. SC3 is widely used, including in Central America and at INETER in Nicaragua. In 2016, SED and INETER started a joint project to assess the feasibility of EEW in Nicaragua and Central America and to set up a prototype EEW system; we have been testing the VS(SC3) and FinDer(SC3) software at INETER since 2016. Excellent relations between the regional seismic networks mean that broadband and strong-motion data are exchanged across Central America in real time, so the network is sufficient to warrant investigating its potential for EEW. We report on the successes and challenges of operating an EEW system where seismicity is high but infrastructure is fragile and the design and operation of a seismic network is challenging (in Nicaragua, on average 50% of all stations do not work effectively for EEW). The current best EEW delays are on the order of 20 s for onshore earthquakes in Nicaragua and 40 s offshore. However, the current network should be able to provide EEW within 10-15 s onshore and 20-25 s offshore, corresponding to potential EEW for intensities of VII or greater. We compare the performance of EEW in Nicaragua with an ideal setting featuring optimized data availability. We evaluate improvement strategies for the Nicaraguan and the Joint Central American Seismic Networks for EEW. And we discuss how to combine real-time EEW
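
Alert delays translate into warning times through simple travel-time arithmetic. The sketch below, using assumed crustal velocities and an assumed processing/telemetry latency (not INETER's actual figures), shows the back-of-the-envelope calculation:

```python
def warning_time(epi_to_station_km, epi_to_target_km,
                 vp=6.5, vs=3.5, latency_s=8.0):
    """Rough EEW warning time in seconds: S-wave arrival at the target minus
    the time an alert can be issued (P-wave arrival at the nearest station
    plus processing/telemetry latency). Velocities (km/s) and latency are
    illustrative assumptions, not measured network parameters."""
    alert_time = epi_to_station_km / vp + latency_s
    s_arrival = epi_to_target_km / vs
    return s_arrival - alert_time

# Hypothetical offshore event: nearest station 80 km from the epicenter,
# target city 180 km away.
print(round(warning_time(80.0, 180.0), 1))
```

The dominant terms are clear from the formula: warning time grows with target distance (via the slow S wave) and shrinks with station sparsity and telemetry latency, which is why offshore events and fragile infrastructure are the hard cases.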

Part one of my dissertation examines the initiation of earthquake rupture. We study the initial subevent (ISE) of the Mw 6.7 1994 Northridge, California earthquake to distinguish between two end-member hypotheses of an organized and predictable earthquake rupture initiation process or, alternatively, a random process. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements for both end-member models, and do not allow us to distinguish between them. However, further tests show the ISE's waveform characteristics are similar to those of typical nearby small earthquakes (i.e., dynamic ruptures). The second part of my dissertation examines aftershocks of the M 7.1 1989 Loma Prieta, California earthquake to determine if theoretical models of static Coulomb stress changes correctly predict the fault plane geometries and slip directions of Loma Prieta aftershocks. Our work shows individual aftershock mechanisms cannot be successfully predicted, because a similar degree of predictability can be obtained using a randomized catalogue. This result is probably a function of combined errors in the models of mainshock slip distribution, background stress field, and aftershock locations. In the final part of my dissertation, we test the idea that earthquake triggering occurs when properties of a fault and/or its loading are modified by Coulomb failure stress changes that may be transient and oscillatory (i.e., dynamic) or permanent (i.e., static). We propose that a triggering threshold failure stress change exists, above which the earthquake nucleation process begins, although failure need not occur instantaneously. We test these ideas using data from the 1992 M 7.4 Landers earthquake and its aftershocks. Stress changes can be categorized as either dynamic (generated during the passage of seismic waves), static (associated with permanent fault offsets

Demand surge is understood to be a socio-economic phenomenon where repair costs for the same damage are higher after large- versus small-scale natural disasters. It has reportedly increased monetary losses by 20 to 50%. In previous work, a model for the increased costs of reconstruction labor and materials was developed for hurricanes in the Southeast United States. The model showed that labor cost increases, rather than the material component, drove the total repair cost increases, and this finding could be extended to earthquakes. A study of past large-scale disasters suggested that there may be additional explanations for demand surge. Two such explanations specific to earthquakes are the exclusion of insurance coverage for earthquake damage and possible concurrent causation of damage from an earthquake followed by fire or tsunami. Additional research into these aspects might provide a better explanation for increased monetary losses after large- vs. small-scale earthquakes.

Objectives: Although attention has been paid to post-traumatic stress disorder (PTSD) among health care professionals after disasters, the impact of traumatic events on their work has not been elucidated. The aim of this study was to examine whether disaster-related distress, resilience, and post-traumatic growth (PTG) affect work engagement among health care professionals who had been deployed to the areas affected by the Great East Japan Earthquake that occurred on March 11, 2011. Methods: We recruited disaster medical assistance team members who were engaged in rescue activities after the earthquake. The short version of the Resilience Scale (RS-14) and Peritraumatic Distress Inventory (PDI) were administered one month after the earthquake, and the short form of Posttraumatic Growth Inventory (SF-PTGI) and Utrecht Work Engagement Scale (UWES) were administered four years after the earthquake. Work engagement is composed of vigor, dedication, and absorption. Regression analyses were used to examine the relationship of UWES with RS-14, PDI, and SF-PTGI. Results: We obtained baseline data of 254 participants in April 2011, and 191 (75.2%) completed the follow-up assessment between December 2014 and March 2015. The results showed that RS-14 predicted vigor, dedication, and absorption; in addition, SF-PTGI was positively related with these three parameters (p<0.01 for all). Conclusions: Resilience at baseline and PTG after rescue activities may increase work engagement among health care professionals after disasters. These findings could be useful for establishing a support system after rescue activities during a large-scale disaster and for managing work-related stress among health care professionals.

A number of observers claim to have seen thermal anomalies prior to earthquakes, but subsequent analysis by others has failed to produce similar findings. What exactly are these anomalies? Might they be useful for earthquake prediction? The purpose of this study is to determine whether thermal anomalies can be found in association with known earthquakes by systematically co-registering weather satellite images at the sub-pixel level and then determining whether statistically significant responses occurred prior to the earthquake event. A new set of automatic co-registration procedures was developed for this task to accommodate all properties particular to weather satellite observations taken at night; it relies on the general condition that the ground cools after sunset. Using these procedures, we can produce a set of temperature-sensitive satellite images for each of five selected earthquakes (Algeria 2003; Bhuj, India 2001; Izmit, Turkey 1999; Kunlun Shan, Tibet 2001; Turkmenistan 2000) and thus more effectively investigate heating trends close to the epicenters a few hours prior to the earthquake events. This study lays the groundwork for further work in earthquake prediction and raises the question of the exact nature of the thermal anomalies.
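
Once images are co-registered, one simple way to test whether a pre-event heating trend is statistically significant is to compare the observed brightness temperature with the pixel's historical night-time distribution. This is a minimal sketch with hypothetical values, not the study's actual procedure:

```python
import statistics

def anomaly_zscore(history, current):
    """z-score of the current brightness temperature against the pixel's
    historical night-time values for comparable dates."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return (current - mu) / sigma

# Hypothetical pixel history (brightness temperature, K) and one
# hypothetical pre-event observation.
history = [281.2, 280.8, 281.5, 280.9, 281.1, 281.4, 280.7, 281.0]
z = anomaly_zscore(history, 283.1)
print(z > 3.0)  # flag as anomalous if it exceeds ~3 standard deviations
```

In practice the hard part is the one the abstract emphasizes: sub-pixel co-registration and removal of the normal post-sunset cooling trend, without which such z-scores are dominated by registration and weather artifacts.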

Koyna, located near the west coast of India, is a classical site of earthquakes triggered by an artificial water reservoir. Triggered earthquakes started soon after the impoundment of the Koyna Dam in 1962, and the activity has continued to the present, including the largest triggered earthquake of M 6.3 in 1967, 22 earthquakes of M ≥ 5, and several thousand smaller earthquakes. The most recent significant earthquake, of ML 3.7, occurred on 24 November 2016. In spite of a network of 23 broadband three-component seismic stations in the near vicinity of the Koyna earthquake zone, earthquake locations had errors of about 1 km. The main reason was the presence of a 1-km-thick, very heterogeneous Deccan Traps cover that introduced noise, so locations could not be improved. To improve location accuracy, a unique network of eight borehole seismic stations surrounding the seismicity was designed. Six of these were installed during 2015 and 2016 at depths ranging from 981 m to 1522 m, well below the Deccan Traps cover. During 2016 a total of 2100 earthquakes were located. Location accuracy has improved significantly, with absolute errors now down to ± 300 m, and all earthquakes of ML ≥ 0.5 are now located, compared with ML ≥ 1.0 earlier. Based on seismicity and logistics, a 2 km × 2 km block has been chosen for the 3-km-deep pilot borehole. The borehole seismic network has further elucidated the correspondence between the rate of reservoir loading/unloading and triggered seismicity.

maximum Modified Mercalli Intensity X; Smith, 1962), the 1811-1812 series of earthquakes near New Madrid, Missouri (maximum intensity XII; Fuller, 1912) ... sediments during the New Madrid earthquakes. Secondly, there are no known major faults with evidence of large-scale movements since the Triassic. In ... 1970, Seismic geology of the eastern United States: Assoc. Eng. Geologists Bull., v. 7, p. 21-43. Fuller, M.L., 1912, The New Madrid earthquake: U.S.

We review changes in groundwater chemistry as precursory signs of earthquakes. In particular, we discuss pH, total dissolved solids (TDS), electrical conductivity, and dissolved gases in relation to their significance for earthquake prediction or forecasting. These parameters are widely believed to vary in response to seismic and pre-seismic activity; however, they also vary in response to non-seismic processes. The inability to reliably distinguish changes caused by seismic or pre-seismic activity from changes caused by non-seismic processes has impeded progress in earthquake science. Short-term earthquake prediction is therefore unlikely to be achieved by pH, TDS, electrical conductivity, and dissolved gas measurements alone. On the other hand, the production of free hydroxyl radicals (•OH) and subsequent reactions, such as the formation of H2O2 and the oxidation of As(III) to As(V) in groundwater, have distinctive precursory characteristics. This study deviates from the prevailing mechanical mantra: it addresses earthquake-related non-seismic mechanisms, focusing on the stress-induced electrification of rocks, the generation of positive hole charge carriers and their long-distance propagation through the rock column, and electrochemical processes at the rock-water interface.

Determination of the seismic performance of existing buildings has become a key topic in structural analysis after recent earthquakes (e.g., the Izmit and Duzce earthquakes in 1999, the Kobe earthquake in 1995, and the Northridge earthquake in 1994). Given the need for precise assessment tools to determine seismic performance levels, most earthquake-prone countries try to include performance-based assessment in their seismic codes. The Turkish Earthquake Code 2007 (TEC'07), which took effect in March 2007, likewise introduced linear and non-linear assessment procedures to be applied prior to building retrofitting. In this paper, a comparative study of code-based seismic assessment of RC buildings with linear static methods of analysis is performed for an existing RC building. The basic principles of the seismic performance evaluation procedures for existing RC buildings according to Eurocode 8 and TEC'07 are outlined and compared. The procedures are then applied to a real case-study building that was exposed to the 1998 Adana-Ceyhan earthquake in Turkey, a seismic action of Ms = 6.3 with a maximum ground acceleration of 0.28 g. It is a six-storey RC residential building with a total height of 14.65 m, composed of orthogonal frames, symmetrical in the y direction, with no significant structural irregularities. The rectangular plan measures 16.40 m × 7.80 m = 127.90 m², with five spans in the x direction and two spans in the y direction. The building was reported to have been moderately damaged during the 1998 earthquake, and the authorities recommended retrofitting by adding shear walls to the system. The computations show that linear analysis using either Eurocode 8 or TEC'07 independently produces similar performance levels of collapse for the critical storey of the structure. The computed base shear value according to Eurocode is much higher

On March 11, 2011, a huge earthquake and tsunami struck the coastal regions of northeastern Japan. Coastal infrastructure collapsed under the high tsunami waves, and marine ecosystems were strongly disturbed by the earthquake and tsunami. TEAMS (Tohoku Ecosystem-Associated Marine Sciences) was launched to monitor the recovery of marine ecosystems. The project runs for ten years: the first five years focus mainly on monitoring the recovery process, after which we should transfer our knowledge to fishermen and citizens for the restoration of fisheries and social systems. But how can we actually transfer our knowledge from science to citizens? This is a new experience for us. Socio-technology constructs a "high-quality risk communication" model of how scientific knowledge and technologies flow from scientific communities to citizens. The steps progress as follows: "observation, measurements and data" → "modeling and synthesis" → "information processing" → "delivery to society" → "taking action in society". These steps trace the transition from inter-disciplinarity to trans-disciplinarity in science and technology. In our presentation, we show a couple of case studies that are moving forward from science to society.

Slip histories for the 2002 M7.9 Denali fault, Alaska, earthquake are derived rapidly from global teleseismic waveform data. In three phases, successively refined models improve the fit to the waveform data and the recovery of rupture details. In the first model (Phase I), analogous to an automated solution, a simple fault plane is fixed based on the preliminary Harvard Centroid Moment Tensor mechanism and the epicenter provided by the Preliminary Determination of Epicenters. This model is then updated (Phase II) by implementing a more realistic fault geometry inferred from Digital Elevation Model topography, and further (Phase III) by using calibrated P-wave and SH-wave arrival times derived from modeling of the nearby 2002 M6.7 Nenana Mountain earthquake. These models are used to predict the peak ground velocity and the shaking intensity field in the fault vicinity. The procedure to estimate local strong motion could be automated and used for global real-time earthquake shaking and damage assessment. © 2004, Earthquake Engineering Research Institute.

The JMA-59-type electromagnetic seismograph was the standard seismograph for routine observations by the Japan Meteorological Agency (JMA) from the 1960s to the 1990s. Features of its seismograms include 1) displacement wave records (electrically integrated from the velocity output of a moving-coil sensor), 2) ink records on paper (analog recording with time marks), 3) continuous drum recording for 12 h, and 4) lengthy operation over several decades. However, the digital revolution in recording systems during the 1990s made these analog features obsolete, and the abundant, bulky paper records were stacked and sometimes disregarded in the library of every observatory. Interestingly, from an educational standpoint, the disadvantages of these old-fashioned systems become highly advantageous for educational or outreach purposes. The updated digital instrument is essentially a 'black box,' revealing nothing of its internal mechanisms and operating too fast for its signal processing to be observed. While the old seismometers and recording systems were disposed of long ago, stacks of analog seismograms continue to languish in observatories' back rooms. In our study, we develop classroom exercises for studying earthquakes at the mid- to high-school level using these analog seismograms. These exercises include 1) reading the features of seismic records, 2) measuring the S-P time, 3) converting to hypocentral distance with Omori's distance formula, 4) locating the epicenter/hypocenter using the S-P times of surrounding stations, and 5) estimating earthquake magnitude using Tsuboi's magnitude formula. For this calculation we developed a 'nomogram'--a graphical paper calculator created using a Python-based freeware tool named 'PyNomo.' We tested many seismograms and established the following rules: 1) shallow earthquakes are appropriate for Tsuboi's magnitude formula; 2) there is no saturation at peak amplitude; 3) seismograms make it easy to
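
Exercises 2, 3, and 5 above combine into a short calculation. The sketch below derives the Omori constant from assumed crustal P and S velocities and uses the standard form of Tsuboi's formula; the seismogram readings are hypothetical:

```python
import math

def sp_distance_km(sp_time_s, vp=6.5, vs=3.5):
    """Omori's formula: hypocentral distance from the S-P time.
    k = Vp*Vs/(Vp - Vs); the velocities are assumed crustal averages."""
    k = vp * vs / (vp - vs)
    return k * sp_time_s

def tsuboi_magnitude(max_amp_um, distance_km):
    """Tsuboi's magnitude formula for shallow earthquakes:
    M = log10(A) + 1.73*log10(D) - 0.83, with A the maximum amplitude
    in micrometres and D the epicentral distance in km."""
    return math.log10(max_amp_um) + 1.73 * math.log10(distance_km) - 0.83

# Hypothetical drum-record reading: S-P time of 10 s, peak amplitude 200 um
d = sp_distance_km(10.0)        # ~76 km
m = tsuboi_magnitude(200.0, d)  # ~4.7
print(round(d, 1), round(m, 1))
```

This is exactly the arithmetic a nomogram encodes graphically, which is why a paper calculator works well for the classroom version of the exercise.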

Water utilities are vulnerable to a wide variety of human-caused and natural disasters. The Water Network Tool for Resilience (WNTR) is a new open-source Python package designed to help water utilities investigate the resilience of water distribution systems to hazards and evaluate resilience-enhancing actions. In this paper, the WNTR modeling framework is presented and a case study is described that uses WNTR to simulate the effects of an earthquake on a water distribution system. The case study illustrates that the severity of damage is not only a function of system integrity and earthquake magnitude, but also of the available resources and repair strategies used to return the system to normal operating conditions. While earthquakes are particularly concerning, since buried water distribution pipelines are highly susceptible to damage, the software framework can be applied to other types of hazards, including power outages and contamination incidents.
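
To give a flavor of the pipeline-fragility arithmetic such a case study rests on (this is a generic sketch, not WNTR's API), the probability of at least one repair on a pipe segment is often modeled as a Poisson process whose rate scales with peak ground velocity:

```python
import math

def pipe_failure_probability(pgv_cm_s, length_km, k=1.0):
    """Probability of at least one repair along a pipe, assuming repairs
    follow a Poisson process with a repair rate proportional to peak
    ground velocity. The 0.0024 repairs/km per cm/s coefficient and the
    material factor k are illustrative values, not calibrated ones."""
    repair_rate = k * 0.0024 * pgv_cm_s   # expected repairs per km
    expected_repairs = repair_rate * length_km
    return 1.0 - math.exp(-expected_repairs)

# Hypothetical main: PGV of 40 cm/s over a 2 km buried segment
p = pipe_failure_probability(40.0, 2.0, k=1.0)
print(round(p, 2))
```

A hydraulic simulator then takes such damage states as input; as the abstract stresses, the modeled outcome also depends on the repair resources and strategies applied afterwards, not on the damage alone.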

With this in mind, I organized a small workshop for approximately 30 people on February 2 and 3, 1978, in Menlo Park, Calif. The purpose of the meeting was to discuss methods of involving volunteers in a meaningful way in earthquake research and in educating the public about earthquake hazards. The emphasis was on earthquake prediction research, but the discussions covered the whole earthquake hazard reduction program. Representatives attended from the earthquake research community, from groups doing socioeconomic research on earthquake matters, and from a wide variety of organizations that might sponsor volunteers.

Earthquake-induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future earthquakes. Models that estimate stress and the resulting seismicity changes could help to illuminate earthquake physics and guide appropriate precautionary response. But do these models have improved forecasting power compared to empirical statistical models? The best answer lies in prospective testing, in which a fully specified model, with no subsequent parameter adjustments, is evaluated against future earthquakes. The Collaboratory for the Study of Earthquake Predictability (CSEP) facilitates such prospective testing of earthquake forecasts, including several short-term forecasts. Formulating Coulomb stress models for formal testing involves several practical problems, mostly shared with other short-term models. First, earthquake probabilities must be calculated after each "perpetrator" earthquake but before the triggered earthquakes, or "victims". The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short-term models daily, and allows daily updates of the models. However, much can happen in a day. An alternative is to test and update models on the occurrence of each earthquake above a certain magnitude. To make such updates rapidly enough, and for them to qualify as prospective, earthquake focal mechanisms, slip distributions, stress patterns, and earthquake probabilities would have to be computed without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered earthquakes are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event. To solve this problem, testing methods need to allow for "censoring" of early aftershock data, and a quantitative model for detection threshold as a function of
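The update-frequency problem can be made concrete with the modified Omori law n(t) = K/(t + c)^p. With the illustrative parameter values below (not fitted to any particular sequence), a large share of the first day's expected aftershocks fall within the first hour, which is exactly what a once-daily update window misses.

```python
def omori_count(t1, t2, K=100.0, c=0.01, p=1.1, steps=100000):
    """Expected aftershock count between t1 and t2 (days), obtained by
    midpoint integration of the modified Omori rate K/(t+c)^p.
    K, c, p are illustrative values, not fitted parameters."""
    dt = (t2 - t1) / steps
    return sum(K / (t1 + (i + 0.5) * dt + c) ** p * dt
               for i in range(steps))

first_hour = omori_count(0.0, 1.0 / 24.0)
first_day = omori_count(0.0, 1.0)
# roughly 40% of the expected day-1 aftershocks occur in hour 1
print(round(first_hour / first_day, 2))
```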

Recent theoretical and experimental studies have explicitly demonstrated the ability of space technologies to identify and monitor the specific variations in near-Earth space plasma, atmosphere and ground surface associated with approaching severe earthquakes (termed earthquake precursors), which appear one to five days before the seismic shock over seismically active areas. Several countries and private companies are preparing (or have already launched) dedicated spacecraft for monitoring earthquake precursors from space and for short-term earthquake prediction. The present paper outlines an optimal algorithm for creating a space-borne system for earthquake-precursor monitoring and short-term earthquake prediction. It takes into account the following considerations: (1) selection of the precursors in terms of priority, taking into account their statistical and physical parameters; (2) configuration of the spacecraft payload; (3) configuration of the satellite constellation (orbit selection, satellite distribution, operation schedule); and (4) proposal of different options (a cheap microsatellite or a comprehensive multisatellite constellation). Since the most promising precursors are ionospheric, special attention will be devoted to radiophysical techniques of ionosphere monitoring. The advantages and disadvantages of such technologies as vertical sounding, in-situ probes, ionosphere tomography, GPS TEC and GPS MET will be considered.

Seismically induced landslides often contribute significantly to earthquake-related losses. Identifying the possible extent of landslide-affected areas can help target emergency measures when an earthquake occurs, or improve the resilience of inhabited areas and critical infrastructure in zones of high seismic hazard. Moreover, landslide event sizes are an important proxy for estimating the intensity and magnitude of past earthquakes in paleoseismic studies, allowing us to improve seismic hazard assessment over longer terms. Not only earthquake intensity, but also factors such as fault characteristics, topography, climatic conditions and the geological environment have a major impact on the intensity and spatial distribution of earthquake-induced landslides. Inspired by classical reviews of earthquake-induced landslides, e.g. by Keefer or Jibson, we present here a review of factors contributing to earthquake-triggered slope failures based on an `event-by-event' classification approach. The objective of this analysis is to enable the short-term prediction of earthquake-triggered landslide event sizes, in terms of the number of landslides and the size of the affected area, immediately after an earthquake occurs. Five main factors, `Intensity', `Fault', `Topographic energy', `Climatic conditions' and `Surface geology', were used to establish a relationship to the number and spatial extent of landslides triggered by an earthquake. Based on well-documented recent earthquakes (e.g. Haiti 2010, Wenchuan 2008) and on older events for which reliable extensive information was available (e.g. Northridge 1994, Loma Prieta 1989, Guatemala 1976, Peru 1970), the combination and relative weights of the factors were calibrated. The calibrated factor combination was then applied to more than 20 earthquake events for which landslide distribution characteristics could be crosschecked. We present cases where our prediction model performs well and discuss particular cases
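A factor combination of this kind reduces to a weighted score over normalized inputs. The sketch below uses the five factor names from the abstract, but the weights and the example event values are hypothetical; the paper's calibrated weights are not given in the excerpt.

```python
# Factor names follow the abstract; weights are illustrative only.
FACTORS = ["intensity", "fault", "topographic_energy", "climate", "geology"]
WEIGHTS = {"intensity": 0.35, "fault": 0.15, "topographic_energy": 0.25,
           "climate": 0.10, "geology": 0.15}  # assumed, sum to 1

def landslide_score(event):
    """Weighted sum of factor values pre-normalized to [0, 1];
    higher scores suggest larger landslide event sizes."""
    return sum(WEIGHTS[f] * event[f] for f in FACTORS)

# Hypothetical high-susceptibility event (steep terrain, strong shaking)
wenchuan_like = {"intensity": 0.9, "fault": 0.8, "topographic_energy": 0.9,
                 "climate": 0.5, "geology": 0.7}
print(round(landslide_score(wenchuan_like), 3))
```

In the calibrated model the score would be mapped to landslide counts and affected-area classes via the documented historical events.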

The low predictability of earthquakes and the high uncertainty associated with their forecasts make earthquakes among the worst natural calamities, capable of causing instant loss of life and property. Here, we discuss studies reporting anomalies observed in satellite-derived Land Surface Temperature (LST) before an earthquake. We compile the conclusions of these studies and evaluate the use of remotely sensed LST anomalies as precursors of earthquakes. The arrival times and amplitudes of the anomalies vary widely, making it difficult to consider them universal markers for issuing earthquake warnings. Given the randomness in the observations of these precursors, we support employing a global-scale monitoring system to detect statistically robust anomalous geophysical signals prior to earthquakes before considering them definite precursors.

The Longmen Shan, located at the junction of the eastern margin of the Tibetan Plateau and the Sichuan Basin, is a typical area for studying the deformation pattern of the Tibetan Plateau. Following the 2008 Mw 7.9 Wenchuan earthquake (WE) rupturing the Longmen Shan Fault (LSF), a great number of observations and studies in geology, geophysics, and geodesy have been carried out for this region, with results published successively in recent years. Using a 2D viscoelastic finite element model and introducing the rate-state friction law on the fault, this thesis models the earthquake recurrence process and the dynamic evolutionary processes over an earthquake cycle of ten thousand years. By analyzing the displacement, velocity, stress, strain energy and strain energy increment fields, this work reaches the following conclusions: (1) The maximum coseismic displacement on the fault occurs at the surface, and the damage on the hanging wall is much more serious than that on the footwall. If the detachment layer is absent, the coseismic displacement is smaller and the relative displacement between the hanging wall and footwall is also smaller. (2) In every stage of the earthquake cycle, the velocities (especially the vertical velocities) on the hanging wall are larger than those on the footwall, and the values and distribution patterns of the velocity fields are similar. In the locking stage prior to the earthquake, however, the crustal velocities and the relative velocities between hanging wall and footwall decrease. For the model without the detachment layer, the crustal velocities in the post-seismic stage are much larger than those in other stages. (3) The maximum principal stress and the maximum shear stress concentrate around the joint of the fault and the detachment layer; therefore the earthquake would nucleate and start there. (4) The strain density distribution patterns in the stages of the earthquake cycle are similar. There are two

Using scaling relations to understand nonlinear geosystems has been an enduring theme of Don Turcotte's research. In particular, his studies of scaling in active fault systems have led to a series of insights about the underlying physics of earthquakes. This presentation will review some recent progress in developing scaling relations for several key aspects of earthquake behavior, including the inner and outer scales of dynamic fault rupture and the energetics of the rupture process. The proximate observations of mining-induced, friction-controlled events obtained from in-mine seismic networks have revealed a lower seismicity cutoff at a seismic moment Mmin near 10^9 N m and a corresponding upper frequency cutoff near 200 Hz, which we interpret in terms of a critical slip distance for frictional drop of about 10^-4 m. Above this cutoff, the apparent stress scales as M^(1/6) up to magnitudes of 4-5, consistent with other near-source studies in this magnitude range (see special session S07, this meeting). Such a relationship suggests a damage model in which apparent fracture energy scales with the stress intensity factor at the crack tip. Under the assumption of constant stress drop, this model implies an increase in rupture velocity with seismic moment, which successfully predicts the observed variation in corner frequency and maximum particle velocity. Global observations of oceanic transform faults (OTFs) allow us to investigate a situation where the outer scale of earthquake size may be controlled by dynamics (as opposed to geologic heterogeneity). The seismicity data imply that the effective area for OTF moment release, AE, depends on the thermal state of the fault but is otherwise independent of the fault's average slip rate; i.e., AE ~ AT, where AT is the area above a reference isotherm. The data are consistent with a frequency-moment slope β = 1/2 below an upper cutoff moment Mmax that increases with AT, and yield the interesting scaling relation Amax ~ AT^(1/2). Taken together, the OTF

The Mw 7.7 Bhuj earthquake occurred in the Kachchh District of the State of Gujarat, India, on 26 January 2001, and was one of the most damaging intraplate earthquakes ever recorded. This earthquake is in many ways similar to the three great New Madrid earthquakes that occurred in the central United States in 1811-1812. An Indo-US team is studying the similarities and differences of these sequences in order to learn lessons for earthquake hazard in intraplate regions. Herein we present some preliminary conclusions from that study. Both the Kutch and New Madrid regions have a rift-type geotectonic setting. In both regions the strain rates are of the order of 10^-9/yr, and the attenuation of seismic waves, as inferred from observations of intensity and liquefaction, is low. These strain rates predict recurrence intervals for Bhuj- or New Madrid-sized earthquakes of several thousand years or more. In contrast, intervals estimated from paleoseismic studies and from other independent data are significantly shorter, probably hundreds of years. Together, these observations may suggest that earthquakes relax high ambient stresses that are locally concentrated by rheologic heterogeneities, rather than loading by plate-tectonic forces. The latter model generally underlies the basic assumption made in earthquake hazard assessment that the long-term average rate of energy released by earthquakes is determined by the tectonic loading rate, which thus implies an inherent average periodicity of earthquake occurrence. Interpreting the observations in terms of the former model may therefore require re-examining the basic assumptions of hazard assessment.
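The recurrence argument is a one-line calculation: the time to accumulate one earthquake's worth of strain at the observed loading rate. The strain-drop value below is an assumed typical figure (a few MPa of stress drop over a crustal rigidity of tens of GPa), not a number from the study.

```python
def recurrence_years(strain_drop, strain_rate_per_yr):
    """Years needed to accumulate one coseismic strain drop
    at a constant tectonic strain rate."""
    return strain_drop / strain_rate_per_yr

# ~3 MPa stress drop / ~30 GPa rigidity -> strain drop ~1e-4 (assumed)
print(recurrence_years(1e-4, 1e-9))  # on the order of 10^5 years
```

With a strain rate of 10^-9/yr, any plausible strain drop gives tens of thousands of years or more, which is the tension with the much shorter paleoseismic intervals noted above.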

In light of newly-acquired geophysical information about earthquake generation in the Tokai area, Central Japan, where occurrence of a great earthquake of magnitude 8 or so has recently been feared, probabilities of earthquake occurrence in the near future are reevaluated. Much of the data used for evaluation here relies on recently-developed paleoseismology, tsunami studies and GPS geodesy. The new Weibull distribution analysis of the recurrence tendency of great earthquakes in the Tokai-Nankai zone indicates that the mean return period of great earthquakes there is estimated as 109 yr with a standard deviation amounting to 33 yr. These values do not differ much from those of previous studies (Rikitake, 1976, 1986; Utsu, 1984). Taking the newly-determined velocities of the motion of the Philippine Sea plate at various portions of the Tokai-Nankai zone into account, the ultimate displacements to rupture at the plate boundary are obtained. A Weibull distribution analysis results in a mean ultimate displacement amounting to 4.70 m with a standard deviation estimated as 0.86 m. A return period amounting to 117 yr is obtained at the Suruga Bay portion by dividing the mean ultimate displacement by the relative plate velocity. With the aid of the fault models as determined from the tsunami studies, the increases in the cumulative seismic slips associated with the great earthquakes are examined at various portions of the zone. It appears that a slip-predictable model can better be applied to the occurrence mode of great earthquakes in the zone than a time-predictable model. The crustal strain accumulating over the Tokai area as estimated from the newly-developed geodetic work including the GPS observations is compared to the ultimate strain presumed by the above two models. The probabilities for a great earthquake to recur in the Tokai district are then estimated with the aid of the Weibull analysis parameters obtained for the four cases discussed above. All the probabilities
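A hedged sketch of the Weibull recurrence calculation: fit the shape and scale to the quoted mean (109 yr) and standard deviation (33 yr), then evaluate a conditional rupture probability. The elapsed time and forecast window below are illustrative choices, not values from the paper.

```python
import math

def weibull_from_mean_sd(mean, sd):
    """Solve for the Weibull shape k by bisection on the coefficient of
    variation (which decreases monotonically with k), then derive scale."""
    target_cv = sd / mean
    lo, hi = 0.5, 20.0
    for _ in range(200):
        k = 0.5 * (lo + hi)
        g1 = math.gamma(1 + 1 / k)
        cv = math.sqrt(math.gamma(1 + 2 / k) / g1 ** 2 - 1)
        if cv > target_cv:
            lo = k
        else:
            hi = k
    return k, mean / math.gamma(1 + 1 / k)

def cond_prob(elapsed, window, k, lam):
    """P(event within `window` yr | no event for `elapsed` yr)."""
    surv = lambda t: math.exp(-((t / lam) ** k))
    return 1 - surv(elapsed + window) / surv(elapsed)

k, lam = weibull_from_mean_sd(109.0, 33.0)
# illustrative: 150 yr already elapsed, 30-yr forecast window
print(round(cond_prob(150.0, 30.0, k, lam), 2))
```

The hazard-rate increase with elapsed time is what distinguishes the Weibull renewal view from a memoryless (Poisson) one.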

The Southern California Earthquake Center (SCEC) has been developing groundbreaking computer modeling capabilities for studying earthquakes. These visualizations were initially shared within the scientific community but have recently gained visibility via television news coverage in Southern California. These types of visualizations are becoming pervasive in the teaching and learning of concepts related to earth science. Computers have opened up a whole new world for scientists working with large data sets, and students can benefit from the same opportunities (Libarkin & Brick, 2002). Earthquakes are ideal candidates for visualization products: they cannot be predicted, are completed in a matter of seconds, occur deep in the earth, and the time between events can be on a geologic time scale. For example, the southern part of the San Andreas fault has not seen a major earthquake since about 1690, setting the stage for an earthquake as large as magnitude 7.7 -- the "big one." Since no one has experienced such an earthquake, visualizations can help people understand the scale of such an event. Accordingly, SCEC has developed a revolutionary simulation of this earthquake, with breathtaking visualizations that are now being distributed. According to Gordin and Pea (1995), visualization should theoretically make science accessible, provide means for authentic inquiry, and lay the groundwork to understand and critique scientific issues. This presentation will discuss how the new SCEC visualizations and other earthquake imagery achieve these results, how they fit within the context of major themes and study areas in science communication, and how the efficacy of these tools can be improved.

Distinguishing seismic ruptures produced during an earthquake from the many other fractures in borehole core is very important for understanding rupture processes and seismic efficiency. Even for a great earthquake like the 1995 Mw 7.2 Kobe earthquake, however, such evidence has been limited to grain-size analysis and the color of fault gouge. In the past two decades, increasing geological evidence has emerged that seismic faults and shear zones within the middle to upper crust play a crucial role in controlling the architecture of crustal fluid migration. Rock-fluid interactions along seismogenic faults give us a chance to identify the seismic ruptures from the same event. Recently, a new project, "Drilling into Fault Damage Zone," is being conducted by Kyoto University on the Nojima Fault, 20 years after the 1995 Kobe earthquake, for an integrated multidisciplinary study on the assessment of the activity of active faults involving active tectonics, geochemistry and geochronology of active fault zones. In this work, we report on the signatures of slip planes inside the Nojima Fault associated with individual earthquakes on the basis of trace element and isotope analyses. Trace element concentrations and 87Sr/86Sr ratios of fault gouge and host rocks were determined by inductively coupled plasma mass spectrometry (ICP-MS) and thermal ionization mass spectrometry (TIMS). Samples were collected from two trenches and an outcrop of the Nojima Fault. We interpret these geochemical results in terms of fluid-rock interactions recorded during frictional slip in earthquakes. The trace-element enrichment pattern of the slip plane can be explained by fluid-rock interactions at high temperature. This can also help us locate the main coseismic slip plane inside the thick fault gouge zone.

During a study conducted to find the effect of Earth tides on the occurrence of earthquakes in small areas [typically 1000 km × 1000 km] of high-seismicity regions, it was noticed that the Sun's position in terms of universal time [GMT] shows links to the sum of EMD [longitude of the earthquake location minus the longitude of the Moon's footprint on Earth] and SEM [Sun-Earth-Moon angle]. This paper provides the details of this relationship after studying earthquake data for over forty high-seismicity regions of the world. It was found that over 98% of the earthquakes for these different regions, examined for the period 1973-2008, show a direct relationship between the Sun's position [GMT] and [EMD+SEM]. As the time changes from 00-24 hours, the factor [EMD+SEM] changes through 360 degrees, and plotting these two variables for earthquakes from different small regions reveals a simple 45-degree straight-line relationship between them. This relationship was tested for all earthquakes and earthquake sequences of magnitude 2.0 and above. This study, the authors conclude, proves how the Sun and the Moon govern all earthquakes. Fig. 12 [A+B]: the left-hand figure provides a 24-hour plot for forty consecutive days including the main event (00:58:23 on 26.12.2004, Lat. +3.30°, Long. +95.98°, Mb 9.0, EQ count 376); the right-hand figure plots (EMD+SEM) vs. GMT timings for the same data. All 376 events, including the main event, faithfully follow the straight-line curve.

Pseudotachylites are melts produced by frictional heating during seismic slip. Understanding their origin and their influence on slip behavior is critical to understanding the physics of earthquakes. To provide insight into this topic, we conducted a case study in the proto-mylonitic to mylonitic Asbestos Mountain granitoid in the eastern Peninsular Ranges batholith (California), which records both ductile (mylonites) and brittle deformation features (pseudotachylites and ultracataclasites). U-Pb chronology and Zr thermometry of titanite porphyroblasts in the mylonites indicate that mylonitization of the plutons occurred at near solidus conditions (∼ 750 °C) over a 10 Ma interval from 89 to 78 Ma. Mylonitization resulted in recrystallization of quartz, plagioclase and biotite, with the biotite concentrated into biotite-rich foliation planes. Subsequent brittle deformation is superimposed on the ductile fabrics. Micro-XRF elemental mapping and in situ LA-ICP-MS analyses on these brittle deformation products show that the pseudotachylites are more mafic (lower Si, but higher Fe) and K-rich than the host mylonite, while the ultracataclasites are intermediate between the host and the pseudotachylites. Inverse mass balance calculations show that both brittle deformation products are depleted in quartz but enriched in biotite, with the pseudotachylites showing the most significant enrichment in biotite, indicating preferential involvement of biotite during brittle deformation. We suggest that biotite-rich layers generated during ductile deformation may have been the preferred locus of subsequent brittle deformation, presumably because such layers represent zones of weakness. Frictional heating associated with slip along such planes results in melting, which causes a decrease in viscosity, in turn leading to further strain localization. During the short time span of an earthquake, frictional melting appears to be a disequilibrium process, in which the minerals are

Rapid urban growth is a process which can be observed in cities worldwide. Managing these growing urban areas has become a major challenge for both governing bodies and citizens. Situated not only in a highly earthquake- and landslide-prone area, but comprising also the cultural and political capital of Nepal, the fast-expanding Kathmandu Valley in the Himalayan region is of particular interest. Vulnerability assessment has been an important tool for spatial planning in this already densely populated area. The magnitude 8.4 Bihar earthquake of 1934 cost 8,600 Nepalis their lives, destroyed 20% of the Kathmandu building stock and heavily damaged another 40%. Since then, Kathmandu has grown into a hub with over a million inhabitants. Rapid infrastructure and population growth aggravate the vulnerability conditions, particularly in the core area of Metropolitan Kathmandu. We propose an integrative framework for vulnerability and risk in the Kathmandu Valley. In order to move towards a more systemic and integrated approach, we focus on interactions between natural hazards, physically engineered systems and society. High-resolution satellite images are used to identify the structural vulnerability of the building stock within the study area. Using object-based image analysis, the spatial dynamics of urban growth are assessed and validated using field data. Complementing this is the analysis of socio-economic attributes gained from databases and field surveys. An indicator-based vulnerability and resilience index will be operationalized using multi-attribute value theory and statistical methods such as principal component analysis. The results allow for a socio-economic comparison of places and their relative potential for harm and loss. The objective of this task is to better understand the interactions between nature and society, engineered systems and built environments through the development of an interdisciplinary framework on systemic seismic risk and vulnerability. Data
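An indicator-based index under multi-attribute value theory boils down to normalizing each indicator and taking a weighted sum. The sketch below is illustrative only: the ward names, indicators, raw values and weights are hypothetical, not data from the study, and all indicators are oriented so that higher values mean higher vulnerability.

```python
WEIGHTS = {"building_age": 0.4, "pop_density": 0.35,
           "poor_road_access": 0.25}  # assumed weights, sum to 1

def normalize(values):
    """Min-max normalize a {unit: raw_value} mapping to [0, 1]."""
    lo, hi = min(values.values()), max(values.values())
    return {k: (v - lo) / (hi - lo) for k, v in values.items()}

def vulnerability_index(indicators):
    """indicators: {indicator_name: {ward: raw_value}} -> {ward: score}."""
    norm = {name: normalize(vals) for name, vals in indicators.items()}
    wards = next(iter(indicators.values())).keys()
    return {w: sum(WEIGHTS[n] * norm[n][w] for n in indicators)
            for w in wards}

data = {"building_age": {"ward_A": 60, "ward_B": 20, "ward_C": 40},
        "pop_density": {"ward_A": 30000, "ward_B": 12000, "ward_C": 18000},
        "poor_road_access": {"ward_A": 0.8, "ward_B": 0.1, "ward_C": 0.5}}
scores = vulnerability_index(data)
print(max(scores, key=scores.get))  # the most vulnerable ward
```

A PCA-based variant would replace the fixed weights with loadings derived from the indicator covariance structure.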

The equivalent of a single degree of freedom (SDOF) nonlinear model, the Q-Model-13, was examined. The study intended to: (1) determine the seismic response of a torsionally coupled building based on multidegree of freedom (MDOF) and SDOF nonlinear models; and (2) develop a simple SDOF nonlinear model to calculate the displacement history of structures with eccentric centers of mass and stiffness. It is shown that planar models are able to yield qualitative estimates of the response of the building. The model is used to estimate the response of a hypothetical six-story frame-wall reinforced concrete building with torsional coupling, using two different earthquake intensities. It is shown that the Q-Model-13 can lead to a satisfactory estimate of the response of the structure in both cases.
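The kind of SDOF nonlinear time-history analysis such models perform can be sketched minimally. This is not the Q-Model-13 itself: it is an elastic-perfectly-plastic oscillator, with illustrative mass, stiffness, damping and yield-force values, integrated by an explicit central-difference scheme under a simple ground-acceleration pulse.

```python
def sdof_ep_response(accel, dt, m=1.0, k=100.0, c=0.5, fy=2.0):
    """Displacement history of an elastic-perfectly-plastic SDOF
    oscillator under ground acceleration `accel` (central difference).
    fy is the yield force of the spring; all values illustrative."""
    u = [0.0, 0.0]        # two starting displacements (at rest)
    up = 0.0              # plastic offset of the spring
    disp = []
    for ag in accel:
        fs = k * (u[-1] - up)
        fs = max(-fy, min(fy, fs))   # perfectly plastic force cap
        up = u[-1] - fs / k          # update plastic offset on yield
        v = (u[-1] - u[-2]) / dt
        unew = 2 * u[-1] - u[-2] + dt * dt / m * (-m * ag - c * v - fs)
        u.append(unew)
        disp.append(unew)
    return disp

dt = 0.001
pulse = [5.0] * 200 + [0.0] * 1800   # 0.2 s acceleration pulse, then free
disp = sdof_ep_response(pulse, dt)
print(round(max(abs(x) for x in disp), 3))
```

The peak displacement exceeds the elastic limit fy/k, which is the signature of the plastic excursion a linear model would miss.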

We focus on Internet rumors and present an empirical analysis and simulation results of their diffusion and convergence during emergencies. In particular, we study one rumor that appeared in the immediate aftermath of the Great East Japan Earthquake on March 11, 2011, which later turned out to be misinformation. By investigating all Japanese tweets sent in the week after the quake, we show that one correction tweet, which originated from a city hall account, diffused enormously. We also demonstrate that a stochastic agent-based model, inspired by the SIR epidemic contagion model, can reproduce the observed rumor dynamics. Our model can estimate the rumor infection rate as well as the number of people who still believe the rumor, which cannot be observed directly. For applications, rumor diffusion sizes can be estimated in various scenarios by combining our model with real data.
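A minimal stochastic SIR-style rumor simulation in the spirit of the model described above; the parameters are illustrative, not the paper's fitted values. Here S counts users unaware of the rumor, I those who believe it, and R those who have seen the correction.

```python
import random

def simulate(n=10000, beta=0.4, gamma=0.2, i0=10, steps=100, seed=1):
    """Stochastic discrete-time SIR: each susceptible adopts the rumor
    with probability beta*I/n per step; each believer sees the
    correction with probability gamma per step."""
    random.seed(seed)
    s, i, r = n - i0, i0, 0
    history = [(s, i, r)]
    for _ in range(steps):
        new_inf = sum(1 for _ in range(s) if random.random() < beta * i / n)
        new_rec = sum(1 for _ in range(i) if random.random() < gamma)
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

hist = simulate()
peak_believers = max(i for _, i, _ in hist)
print(peak_believers, hist[-1][2])
```

Fitting beta and gamma to observed tweet counts is what lets the model infer the unobservable number of remaining believers.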

Ionosonde data and crustal earthquakes with magnitude M ≥ 6.0 observed in Greece during the 2003-2015 period were examined to check whether the relationships obtained earlier between precursory ionospheric anomalies and earthquakes in Japan and central Italy are also valid for Greek earthquakes. The ionospheric anomalies are identified in the observed variations of the sporadic E-layer parameters (h'Es, foEs) and foF2 at the ionospheric station of Athens. The corresponding empirical relationships between the seismo-ionospheric disturbances and the earthquake magnitude and epicentral distance are obtained and found to be similar to those previously published for other case studies. The large lead times found for the occurrence of the ionospheric anomalies may confirm a rather long earthquake preparation period. The possibility of using the relationships obtained for earthquake prediction is finally discussed.

It is uncertain whether more near-field earthquakes are triggered by static or by dynamic stress changes. This ratio matters because static earthquake interactions are increasingly incorporated into probabilistic forecasts. Recent studies were unable to demonstrate all predictions from the static-stress-change hypothesis, particularly seismicity rate reductions. However, current dynamic stress change hypotheses do not explain delayed earthquake triggering and Omori's law. Here I show numerically that if seismic waves can alter some frictional contacts in neighboring fault zones, then dynamic triggering might cause delayed triggering and an Omori-law response. The hypothesis depends on faults following a rate/state friction law, and on seismic waves changing the mean critical slip distance (Dc) at nucleation zones.
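The delayed response invoked here can be sketched with the rate/state aging law, dθ/dt = 1 − Vθ/Dc: the state variable θ relaxes toward its steady state Dc/V over a timescale of order Dc/V, so an abrupt wave-induced drop in Dc produces a gradual frictional evolution rather than an instantaneous jump. All parameter values below are illustrative, not values from the study.

```python
import math

MU0, A, B, V0 = 0.6, 0.010, 0.015, 1e-6  # assumed friction parameters

def friction(v, theta, dc):
    """Rate/state friction: mu = mu0 + a ln(V/V0) + b ln(V0*theta/Dc)."""
    return MU0 + A * math.log(v / V0) + B * math.log(V0 * theta / dc)

def relax_theta(theta, v, dc, t_end, dt):
    """Forward-Euler integration of the aging law at constant slip speed."""
    for _ in range(int(t_end / dt)):
        theta += dt * (1 - v * theta / dc)
    return theta

v = 1e-6                     # m/s, slow creep
dc_old, dc_new = 1e-4, 5e-5  # passing waves halve Dc (hypothetical)
theta = dc_old / v           # start at the old steady state (100 s)
theta = relax_theta(theta, v, dc_new, t_end=200.0, dt=0.01)
# theta has relaxed most of the way toward the new steady state Dc_new/v = 50 s
print(round(theta, 1), round(friction(v, theta, dc_new), 4))
```

The finite relaxation time is the ingredient that lets a transient wave leave behind a slowly decaying seismicity-rate perturbation.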

The possibility of directly associating damage with ground motion parameters is always a great challenge, in particular for civil protection agencies. Indeed, a ground motion parameter, estimated in near real time, that can express the damage occurring after an earthquake is fundamental for arranging first assistance after an event. The aim of this work is to contribute to the estimation of the ground motion parameter that best describes the observed intensity immediately after an event. This can be done by calculating, for each ground motion parameter estimated in near real time, a regression law that correlates the parameter to the observed macroseismic intensity. This estimation is done by collecting high-quality accelerometric data in the near field and filtering them at different frequency steps. The regression laws are calculated using two different techniques: the nonlinear least-squares (NLLS) Marquardt-Levenberg algorithm and the orthogonal distance regression (ODR) methodology. The limits of the first methodology are the need for initial values of the parameters a and b (set to 1.0 in this study), and the constraint that the independent variable must be known with greater accuracy than the dependent variable. The second algorithm is instead based on estimating the errors perpendicular to the line, rather than just vertically: vertical errors account only for the dependent variable (the 'y' direction), whereas perpendicular errors take into account errors in both the dependent and the independent variables. This also makes it possible to invert the relation directly, so the a and b values can be used to express the ground motion parameters as a function of I. For each law, the standard deviation and R2 value are estimated in order to test the quality and reliability of the relation found. The Amatrice earthquake of 24 August 2016 is used as a case study to test the goodness of the calculated regression laws.
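The contrast between the two fitting philosophies can be shown on a straight line y = a + b·x: ordinary least squares minimizes vertical misfits only, while orthogonal (total least squares) regression minimizes perpendicular distances, treating errors in both variables. For a 2D line the orthogonal fit has a closed form via the principal direction of the centered data. The data points below are synthetic, not the study's accelerometric data.

```python
import math

def ols(xs, ys):
    """Ordinary least squares for y = a + b*x (vertical misfits only)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    return my - b * mx, b

def odr_line(xs, ys):
    """Closed-form orthogonal (total least squares) fit: the slope is
    the tangent of the major principal axis of the centered data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    b = math.tan(theta)
    return my - b * mx, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]          # noisy y ~ 1 + 2x
print([round(v, 3) for v in ols(xs, ys)])
print([round(v, 3) for v in odr_line(xs, ys)])
```

The orthogonal fit is symmetric in x and y, which is why the relation can be inverted directly, as noted above.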

Analogs are used to understand complex or poorly understood phenomena for which little data may be available at the actual repository site. Earthquakes are complex phenomena, and they can have a large number of effects on the natural system, as well as on engineered structures. Instrumental data close to the source of large earthquakes are rarely obtained. The rare events for which measurements are available may be used, with modifications, as analogs for potential large earthquakes at sites where no earthquake data are available. In the following, several examples of nuclear reactor and liquefied natural gas facility siting are discussed. A potential use of analog earthquakes is proposed for a high-level nuclear waste (HLW) repository.

This study aimed to determine the relationships between depressive and posttraumatic stress disorder (PTSD) symptoms in a sample of adolescent survivors of the Wenchuan earthquake in China. Two hundred adolescent survivors were reviewed at 12, 18 and 24 months post-earthquake. Depression and PTSD were assessed by two self-report…

The proposed process provides a more consistent model of gradual strain accumulation and non-uniform release through large earthquakes, and can be applied in the evaluation of seismic risk. The cumulative seismic energy released by major earthquakes over the 110-year period from 1897 to 2007 is calculated and plotted for all the zones. The plot gives a characteristic curve for each zone; each curve is irregular, reflecting occasional high activity. The maximum earthquake energy available at a particular time in a given area is given by S. The difference between the theoretical upper limit S and the cumulative energy released up to that time is calculated to find the maximum magnitude of an earthquake that can occur in the future. The energy blocked in the three source regions, available as a supply for potential earthquakes in due course of time, is 1.35×10^17 J, 4.25×10^17 J and 0.12×10^17 J for source zones 1, 2 and 3 respectively. The predicted maximum magnitudes (mmax) obtained by this model for the source zones AYZ, HZ and SPZ are 8.2, 8.6 and 8.4 respectively. This study is also consistent with results previously predicted by other workers.
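The conversion from blocked seismic energy to a maximum supportable magnitude in such models typically relies on the Gutenberg-Richter energy-magnitude relation, log10 E = 1.5·Mw + 4.8 (E in joules). The abstract does not state which relation was used, so this sketch is an assumption:

```python
import math

def mw_to_energy(mw):
    """Radiated energy in joules from moment magnitude
    (Gutenberg-Richter: log10 E = 1.5*Mw + 4.8)."""
    return 10 ** (1.5 * mw + 4.8)

def energy_to_mw(energy_joules):
    """Maximum magnitude supported by a given blocked energy."""
    return (math.log10(energy_joules) - 4.8) / 1.5

# Blocked energy of source zone 2 (HZ) quoted in the abstract:
print(round(energy_to_mw(4.25e17), 1))  # → 8.6, consistent with mmax for HZ
```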

Artificial water reservoir-triggered earthquakes have continued at Koyna in the Deccan Traps province, India, since the impoundment of the Shivaji Sagar reservoir in 1962. Existing models for the genesis of the triggered earthquakes suffer from a lack of near-field observations. To investigate further, scientific deep drilling and the setting up of a fault zone observatory at a depth of 5-7 km are planned in the Koyna area. Prior to undertaking deep drilling, an exploratory phase of investigations has been launched to constrain the subsurface geology, structure and heat flow regime in the area, which provide critical inputs for the design of the deep borehole observatory. Two core boreholes drilled to depths of 1,522 and 1,196 m have penetrated the Deccan Traps and sampled the granitic basement in the region for the first time. Studies on the cores provide new and direct information regarding the thickness of the Deccan Traps, the absence of infra-Trappean sediments and the nature of the underlying basement rocks. Temperatures estimated at a depth of 6 km in the area, based on heat flow and thermal property data sets, do not exceed 150 °C. Low-elevation airborne gravity gradient and magnetic data sets covering 5,012 line km, together with high-quality magnetotelluric data at 100 stations, provide both regional information about the thickness of the Deccan Traps and evidence of localized density heterogeneities and anomalous conductive zones in the vicinity of the hypocentral zone. Acquisition of airborne LiDAR data to obtain a high-resolution topographic model of the region has been completed over an area of 1,064 km2 centred on the Koyna seismic zone. Seismometers have been deployed in the granitic basement inside two boreholes, and deployments in another set of six boreholes are planned, to obtain accurate hypocentral locations and constrain the disposition of fault zones.

The Sanchiao fault is a western boundary fault of the Taipei basin in northern Taiwan, close to the densely populated Taipei metropolitan area. According to the report of the Central Geological Survey, the terrestrial portion of the Sanchiao fault can be divided into north and south segments: the south segment is about 13 km long and the north segment about 21 km. A recent study demonstrated that about 40 km of the fault trace extends into the marine area offshore of northern Taiwan. Combining the marine and terrestrial parts, the total length of the Sanchiao fault could be nearly 70 km. Based on the recipe proposed by Irikura and Miyake (2010), we estimate that the Sanchiao fault has the potential to produce an earthquake with moment magnitude larger than Mw 7.2. The total fault rupture area is about 1323 km2, the asperity covers 22% of the fault plane, and the slips of the asperity and background are 2.8 m and 1.6 m respectively. Using a characteristic source model based on these assumptions, 3D spectral-element simulations indicate that peak ground acceleration (PGA) is significantly stronger along the surface fault rupture. Basin effects play an important role when waves propagate in the Taipei basin, amplifying the seismic waves and prolonging the shaking for a very long time. It is worth noting that when the rupture starts from the southern tip of the fault, i.e. when the hypocenter is located in the basin, the impact of a Sanchiao fault earthquake on the Taipei metropolitan area will be the most serious: the strong shaking can cover the entire Taipei city and even extend across the basin to the easternmost part of northern Taiwan.
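The quoted magnitude can be cross-checked from the rupture geometry: the seismic moment is M0 = μ·A·D̄ and Mw = (2/3)(log10 M0 − 9.1) (the standard Hanks-Kanamori definition). The rigidity μ = 3.3×10^10 Pa below is a typical crustal value assumed here, not stated in the abstract:

```python
import math

MU = 3.3e10               # assumed crustal rigidity (Pa)
AREA = 1323e6             # total rupture area (m^2), i.e. 1323 km^2
ASPERITY_FRACTION = 0.22  # asperity share of the fault plane
SLIP_ASPERITY = 2.8       # m
SLIP_BACKGROUND = 1.6     # m

# Area-weighted average slip over the fault plane
mean_slip = (ASPERITY_FRACTION * SLIP_ASPERITY
             + (1 - ASPERITY_FRACTION) * SLIP_BACKGROUND)

m0 = MU * AREA * mean_slip                 # seismic moment (N*m)
mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)  # Hanks-Kanamori moment magnitude
print(round(mw, 1))  # → 7.2, matching the abstract's estimate
```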

The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to search for the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined CSEP and started the Japanese testing center, called CSEP-Japan. This testing center provides open access to researchers contributing earthquake forecast models for Japan. More than 100 earthquake forecast models have now been submitted to the prospective experiment. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions: an area covering Japan including offshore areas, the Japanese mainland, and the Kanto district. We evaluate the performance of the models with the official suite of tests defined by CSEP. Approximately 300 rounds of experiments have been implemented in total. These results provide new knowledge concerning statistical forecasting models. We have started a study to construct a 3-dimensional earthquake forecasting model for the Kanto district in Japan based on the CSEP experiments, under the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters. Because the seismicity of the area ranges from shallow depths down to 80 km due to the subducting Philippine Sea and Pacific plates, we need to study the effect of the depth distribution. We will develop forecasting models based on the results of 2-D modeling. We defined the 3-D forecasting area in the Kanto region with testing classes of 1 day, 3 months, 1 year and 3 years, and magnitudes from 4.0 to 9.0, as in CSEP-Japan. In the first step of the study, we will install the RI10K model (Nanjo, 2011) and the HIST-ETAS models (Ogata, 2011) to see whether those models perform as well as in the 3-month 2-D CSEP-Japan experiments in the Kanto region before the 2011 Tohoku event (Yokoi et al., in preparation). We use CSEP
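At the core of CSEP's official test suite is a Poisson likelihood score: each space-magnitude bin of a gridded rate forecast is treated as an independent Poisson variable and scored against the observed event counts. A minimal sketch of that core computation (bin construction and the full N/L/R-test machinery omitted; the example rates are hypothetical):

```python
import math

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint log-likelihood of observed bin counts under independent
    Poisson distributions whose means are the forecast rates."""
    return sum(n * math.log(lam) - lam - math.lgamma(n + 1)
               for lam, n in zip(forecast_rates, observed_counts))

# A forecast whose rates match the observed counts scores higher
# than a mismatched one.
obs = [2, 0, 1, 0]
good = poisson_log_likelihood([2.0, 0.1, 1.0, 0.1], obs)
poor = poisson_log_likelihood([0.1, 2.0, 0.1, 2.0], obs)
print(good > poor)  # → True
```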

We present the latest developments in multi-sensor observations of short-term pre-earthquake phenomena preceding major earthquakes. Our challenge question is whether such pre-earthquake atmospheric/ionospheric signals are significant and could be useful for early warning of large earthquakes. To check the predictive potential of atmospheric pre-earthquake signals we have started to validate anomalous ionospheric/atmospheric signals in retrospective and prospective modes. The integrated satellite and terrestrial framework (ISTF) is our validation method; it is based on a joint analysis of several physical and environmental parameters (satellite thermal infrared radiation (STIR), electron concentration in the ionosphere (GPS/TEC), radon/ion activities, air temperature, and seismicity patterns) that have been found to be associated with earthquakes. The scientific rationale for the multidisciplinary analysis is the concept of Lithosphere-Atmosphere-Ionosphere Coupling (LAIC) [Pulinets and Ouzounov, 2011], which explains the synergy of different geospace processes and anomalous variations, usually called short-term pre-earthquake anomalies. Our validation process consists of two steps: (1) a continuous retrospective analysis performed over two regions with high seismicity, Taiwan and Japan, for 2003-2009; (2) prospective testing of STIR anomalies with potential for M5.5+ events. The retrospective tests (100+ major earthquakes, M>5.9, Taiwan and Japan) show anomalous STIR behavior before all of these events, with false negatives close to zero and a false alarm ratio for false positives of less than 25%. The initial prospective testing for STIR shows the systematic appearance of anomalies 1-30 days in advance of M5.5+ events for Taiwan, Kamchatka-Sakhalin (Russia) and Japan. Our initial prospective results suggest that our approach shows a systematic appearance of atmospheric anomalies one to several days prior to the largest earthquakes. That feature could be
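The quoted error rates reduce to simple contingency counts over time bins (alarm on/off versus event yes/no). A minimal sketch of that bookkeeping, with purely illustrative inputs:

```python
def alarm_scores(alarms, events):
    """Hit rate and false alarm ratio for binary alarm and event
    sequences defined over the same time bins."""
    hits = sum(1 for a, e in zip(alarms, events) if a and e)
    misses = sum(1 for a, e in zip(alarms, events) if not a and e)
    false_alarms = sum(1 for a, e in zip(alarms, events) if a and not e)
    hit_rate = hits / (hits + misses)
    false_alarm_ratio = false_alarms / (hits + false_alarms)
    return hit_rate, false_alarm_ratio

# 4 alarm windows, 3 of which contained an event: FAR = 1/4 = 25%
hr, far = alarm_scores([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0])
print(hr, far)  # → 1.0 0.25
```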

The main focus of this presentation is the lessons we have learned from the Great Tohoku earthquake (Japan, 2011) and how this knowledge will affect our future observation and analysis. We present multi-sensor observations and multidisciplinary research in our investigation of phenomena preceding major earthquakes. These observations revealed the existence of atmospheric and ionospheric phenomena occurring prior to the M9.0 Tohoku earthquake of March 11, 2011, which provides new evidence of a distinct coupling between the lithosphere and atmosphere/ionosphere related to the underlying tectonic activity. Similar results have been reported before the catastrophic events in Chile (M8.8, 2010), Italy (M6.3, 2009) and Sumatra (M9.3, 2004). For the Tohoku earthquake, our analysis shows a synergy between several independent observations characterizing the state of lithosphere/atmosphere coupling several days before the onset of the earthquake, namely: (i) foreshock sequence changes (rate, space and time); (ii) outgoing longwave radiation (OLR) measured at the top of the atmosphere; and (iii) anomalous variations of ionospheric parameters revealed by multi-sensor observations. We present a cross-disciplinary analysis of the observed pre-earthquake anomalies and discuss current research in the detection of these signals in Japan. We expect that our analysis will shed light on the underlying physics of pre-earthquake signals associated with some of the largest earthquake events.

Workshop on New Madrid Geodesy and the Challenges of Understanding Intraplate Earthquakes; Norwood, Massachusetts, 4 March 2011 Twenty-six researchers gathered for a workshop sponsored by the U.S. Geological Survey (USGS) and FM Global to discuss geodesy in and around the New Madrid seismic zone (NMSZ) and its relation to earthquake hazards. The group addressed the challenge of reconciling current geodetic measurements, which show low present-day surface strain rates, with paleoseismic evidence of recent, relatively frequent, major earthquakes in the region. The workshop presentations and conclusions will be available in a forthcoming USGS open-file report (http://pubs.usgs.gov).

Seismically induced ground effects characterize moderate to high magnitude seismic events, whereas they are not so common during seismic sequences of low to moderate magnitude. A low to moderate magnitude seismic sequence with an Mw = 5.16 ± 0.07 main event occurred from December 2013 to February 2014 in the Matese ridge area of the southern Apennines mountain chain. In the epicentral area of the Mw = 5.16 main event, which occurred on 29 December 2013 in the southeastern part of the Matese ridge, field surveys combined with information from local people and reports allowed the recognition of several earthquake-induced ground effects. These ground effects include landslides, hydrological variations in local springs, gas flux, and a flame that was observed around the main shock epicentre. A coseismic rupture was identified in the SW fault scarp of a small intermontane basin (Mt. Airola basin). To determine the nature of the coseismic rupture, detailed geological and geomorphological investigations, combined with geoelectrical and soil gas prospecting, were carried out. This multidisciplinary study, besides allowing reconstruction of the surface and subsurface architecture of the Mt. Airola basin and suggesting the occurrence of an active fault at its SW boundary, points to the gravitational nature of the coseismic ground rupture. Based on the typology and spatial distribution of the ground effects, which affected an area of at least 90 km2, an intensity I = VII-VIII is estimated for the Mw = 5.16 earthquake according to the ESI-07 scale.

Data assimilation is a technique that optimizes the parameters used in a numerical model, under the constraint of the model dynamics, to achieve a better fit to observations. The optimized parameters can be used for subsequent prediction with the numerical model, and the predicted physical variables are presumably closer to the observations that will become available in the future, at least compared to those obtained without optimization through data assimilation. In this work, an adjoint data assimilation system is developed to optimize a relatively large number of spatially inhomogeneous frictional parameters during the afterslip period, in which the physical constraints are a quasi-dynamic equation of motion and a laboratory-derived rate- and state-dependent friction law that describe the temporal evolution of slip velocity at subduction zones. The observed variable is the estimated slip velocity on the plate interface. Before applying this method to real data assimilation for the afterslip of the 2003 Tokachi-oki earthquake, a synthetic data assimilation experiment is conducted to examine the feasibility of optimizing the frictional parameters in the afterslip area. It is confirmed that the current system is capable of optimizing the frictional parameters A-B, A and L by adopting the physical constraint based on a numerical model, provided the observations capture the acceleration and decay phases of slip on the plate interface. On the other hand, it is unlikely that the frictional parameters can be constrained in regions where the amplitude of afterslip is less than 1.0 cm d^-1. Next, real data assimilation for the 2003 Tokachi-oki earthquake is conducted to incorporate slip velocity data inferred from time-dependent inversion of Global Navigation Satellite System time series. The optimized values of A-B, A and L are O(10 kPa), O(10^2 kPa) and O(10 mm), respectively. The optimized frictional parameters yield a better fit to the observations and better prediction skill of slip
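The principle of fitting frictional parameters to observed slip velocities can be illustrated with a toy problem: one parameter of a simple decaying-afterslip model recovered by gradient descent on the squared misfit. The adjoint method in the abstract computes this gradient far more efficiently for many parameters; here a finite-difference gradient stands in for it, and the model and all values are purely illustrative:

```python
import math

def toy_afterslip(t, tau, v0=1.0):
    """Toy forward model: Omori-like decaying slip velocity."""
    return v0 / (1.0 + t / tau)

def fit_tau(times, observed, tau0=2.0, lr=0.5, steps=3000, eps=1e-6):
    """Recover the decay parameter tau by gradient descent on the
    squared misfit (finite-difference gradient as an adjoint stand-in)."""
    def misfit(tau):
        return sum((toy_afterslip(t, tau) - v) ** 2
                   for t, v in zip(times, observed))
    tau = tau0
    for _ in range(steps):
        grad = (misfit(tau + eps) - misfit(tau - eps)) / (2 * eps)
        tau -= lr * grad
    return tau

times = [0.5 * i for i in range(21)]               # 0 to 10 days
observed = [toy_afterslip(t, 5.0) for t in times]  # synthetic "truth"
print(round(fit_tau(times, observed), 2))  # → 5.0
```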

Parkfield's regularly occurring M6 mainshocks, about every 25 years, have for over two decades stoked seismologists' hopes of successfully predicting an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 September 2004) did not conform to any of the applied forecast models, questioning once more the predictability of earthquakes in general. Our study investigates the spatial pattern of b-values along the Parkfield segment through the seismic cycle and documents a stably stressed structure. The rate of M6 earthquakes forecast from Parkfield's microseismicity b-values corresponds well to the observed rate. We interpret the observed b-value stability in terms of the evolution of the stress field in that area: the M6 Parkfield earthquakes do not fully unload the stress on the fault, explaining why time-recurrent models fail. We present the 1989 M6.9 Loma Prieta earthquake as a counterexample: it released a significant portion of the stress along its fault segment and yielded a substantial change in b-values.
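b-values in such studies are conventionally obtained with the Aki/Utsu maximum-likelihood estimator; a minimal sketch of the standard textbook form (the abstract does not detail its own estimation procedure, so this is an assumption):

```python
import math

def b_value(magnitudes, mc, dm=0.0):
    """Aki/Utsu maximum-likelihood b-value for events with M >= mc.
    dm is the magnitude binning width (0 for continuous magnitudes):
    b = log10(e) / (mean(M) - (mc - dm/2))."""
    m = [x for x in magnitudes if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))
```

In the ideal Gutenberg-Richter limit, a catalogue with b = 1.0 and completeness magnitude mc has mean magnitude mc + log10(e) ≈ mc + 0.434, and the estimator recovers b exactly.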

The SAFE (Swarm for Earthquake study) project (funded by the European Space Agency in the framework "STSE Swarm+Innovation", 2014-2016) aimed at applying the new approach of geosystemics to the analysis of Swarm satellite (ESA) electromagnetic data to investigate the preparatory phase of earthquakes. We present in this talk the case study of the most recent seismic sequence in Italy: first an M6 earthquake on 24 August 2016 and then an M6.5 earthquake on 30 October 2016 struck almost the same region of Central Italy, causing about 300 deaths in total (mostly on 24 August), with a revival of significant seismicity in January 2017. Analysing both geophysical and climatological satellite and ground data preceding the major earthquakes of the sequence, we present results that confirm a complex solid earth-atmosphere coupling in the preparation phase of the whole sequence.

The U.S. Geological Survey (USGS) is investigating how the social networking site Twitter, a popular service for sending and receiving short, public, text messages, can augment its earthquake response products and the delivery of hazard information. The goal is to gather near real-time, earthquake-related messages (tweets) and provide geo-located earthquake detections and rough maps of the corresponding felt areas. Twitter and other social Internet technologies are providing the general public with anecdotal earthquake hazard information before scientific information has been published from authoritative sources. People local to an event often publish information within seconds via these technologies. In contrast, depending on the location of the earthquake, scientific alerts take between 2 and 20 minutes. Examining the tweets following the March 30, 2009, M4.3 Morgan Hill earthquake shows it is possible (in some cases) to rapidly detect and map the felt area of an earthquake using Twitter responses. Within a minute of the earthquake, the frequency of “earthquake” tweets rose above the background level of less than 1 per hour to about 150 per minute. Using the tweets submitted in the first minute, a rough map of the felt area can be obtained by plotting the tweet locations. Mapping the tweets from the first six minutes shows observations extending from Monterey to Sacramento, similar to the perceived shaking region mapped by the USGS “Did You Feel It” system. The tweets submitted after the earthquake also provided (very) short first-impression narratives from people who experienced the shaking. Accurately assessing the potential and robustness of a Twitter-based system is difficult because only tweets spanning the previous seven days can be searched, making a historical study impossible. We have, however, been archiving tweets for several months, and it is clear that significant limitations do exist. The main drawback is the lack of quantitative information
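The detection logic described, with tweet frequency rising far above a low background rate within the first minute, can be sketched as a sliding-window rate threshold. The window length and threshold below are illustrative, not the USGS system's actual parameters:

```python
from collections import deque

def make_detector(window_s=60.0, threshold=20):
    """Return a function that ingests keyword-tweet timestamps (seconds)
    and reports True once the count in the trailing window exceeds
    the threshold."""
    times = deque()
    def observe(t):
        times.append(t)
        # Drop timestamps that have fallen out of the trailing window
        while times and times[0] < t - window_s:
            times.popleft()
        return len(times) > threshold
    return observe

detect = make_detector()
background = [detect(100.0 * i) for i in range(10)]      # ~1 tweet/100 s
burst = [detect(1000.0 + 0.4 * i) for i in range(150)]   # ~150 tweets/min
print(any(background), any(burst))  # → False True
```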

The modification of a WWSSN Sprengnether vertical seismometer has resulted in significantly improved performance at low frequencies. Instead of being used as a velocity detector as originally designed, the Faraday subsystem is made to function as an actuator that provides a type of force feedback. An array form of the author's symmetric differential capacitive (SDC) sensor is added to the instrument to detect ground motions. The feedback circuit is not conventional; rather, it is used to eliminate long-term drift by placing between the sensor and the actuator an operational-amplifier integrator having a time constant of several thousand seconds. The signal-to-noise ratio at low frequencies is increased, since the modified instrument does not suffer from the 20 dB/decade falloff in sensitivity that characterizes conventional force-feedback seismometers. A Hanning-windowed FFT algorithm is employed in the analysis of recorded earthquakes, including the very large Indonesia earthquake (M 7.9) of 25 July 2004. The improved low-frequency response allows the study of the free oscillations of the Earth that accompany large earthquakes. Data will be provided showing oscillations with spectral components in the vicinity of 1 mHz, which have frequently been observed with this instrument both before and after an earthquake. Additionally, microseisms and other interesting data will be shown from records collected by the instrument as Hurricane Charley moved across Florida and up the eastern seaboard.
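The benefit of Hanning (Hann) windowing before the FFT, namely suppression of the spectral leakage of an off-bin tone, can be demonstrated with a plain DFT. A small sketch (a direct O(n^2) transform for clarity rather than an FFT; all values illustrative):

```python
import cmath, math

def hann(n):
    """Hanning window coefficients."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * k / (n - 1)) for k in range(n)]

def dft_magnitudes(x):
    """Magnitudes of the first half of the DFT (direct computation)."""
    n = len(x)
    return [abs(sum(x[k] * cmath.exp(-2j * math.pi * j * k / n)
                    for k in range(n))) for j in range(n // 2)]

n = 128
# Tone at 10.5 cycles per record: falls between DFT bins, so it leaks
tone = [math.sin(2 * math.pi * 10.5 * k / n) for k in range(n)]
plain = dft_magnitudes(tone)
windowed = dft_magnitudes([w * s for w, s in zip(hann(n), tone)])

# Leakage far from the peak is much lower after windowing
far_plain = sum(plain[30:60])
far_windowed = sum(windowed[30:60])
print(far_windowed < far_plain)  # → True
```

The rectangular window's sidelobes decay slowly (about 6 dB/octave), while the Hann window's decay much faster, which is why the windowed spectrum concentrates the tone's energy near its true frequency.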

Chile is one of the most seismically and volcanically active regions in South America owing to the constant subduction of the Nazca plate beneath the South American plate in the extreme north of Chile. Four events, namely the Ovalle earthquake of June 18, 2003, M=6.3, with epicenter located at (-30:49:33, -71:18:53); the Calama earthquake of July 19, 2001, M=5.2, (-30:29:38, -68:33:18); the Pica earthquake of April 10, 2003, M=5.1, (-21:03:20, -68:47:10); and the La Ligua earthquake of May 6, 2001, M=5.1, (-32:35:31, -71:07:58), were analysed using 15 m resolution satellite images provided by the ASTER/VNIR instrument. The Lineament Extraction and Stripes Statistic Analysis (LESSA) software package was used to examine changes in the lineament features caused by seismic activity. The lack of vegetation facilitates the study of the changes in topography common to all events and makes it possible to evaluate the seismic risk in this region for the future.