Wednesday, July 11, 2007

Modeling Acts of God, Part 2 - Earthquakes

[Image: parkfield.jpg]

This is #2 in a series. (Click here for #1, on modeling hurricanes.)

Earthquake modeling seems to generate a wide range of emotions, vitriol, successes, and failures - a range so wide as to need a logarithmic scale, à la Richter.
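Since the Richter scale will come up again below, a quick refresher may help. The scale is logarithmic in measured ground-motion amplitude, and radiated energy grows even faster - roughly a factor of 10^1.5 per magnitude unit. A minimal sketch of these two standard relations:

```python
import math

def amplitude_ratio(m1, m2):
    """Ground-motion amplitude ratio between two Richter magnitudes.
    Each whole-number step is a tenfold increase in measured amplitude."""
    return 10 ** (m1 - m2)

def energy_ratio(m1, m2):
    """Approximate radiated-energy ratio: energy scales as ~10^(1.5 * M)."""
    return 10 ** (1.5 * (m1 - m2))

# A magnitude 7 quake vs. a magnitude 5:
print(amplitude_ratio(7, 5))  # 100x the amplitude
print(energy_ratio(7, 5))     # ~1000x the energy
```

So a two-unit jump on the scale is not "twice as big" - it is a hundred times the shaking and about a thousand times the energy, which is why a logarithmic scale is the only practical way to compare quakes.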

The Parkfield Earthquake Experiment, now running for over 22 years, has been the development test bed and experimental "lab" for US Geological Survey/State of California efforts to develop physical models of earthquakes that will lead to viable predictions. The USGS site contains a wealth of information on the experiment, and good background on the history of earthquake prediction, which is still highly hit-or-miss. An interesting excerpt from the site neatly illuminates the need for prediction based on an understanding of physical causes, as opposed to one based on statistical correlation only:

Early scientific efforts toward earthquake prediction in the U.S. were directed primarily toward the measurement of physical parameters in areas where earthquakes occur, including seismicity, crustal structure, heat flow, geomagnetism, electrical potential and conductivity, and gas chemistry. Central to these efforts was the concept that a precursor might be observed in one or more of these measurements. However, the connection between a commonly accepted precursor and the earthquake was often speculative and uncertain. A coherent physical model was lacking.

A model on which a scientific prediction could be based began to be developed in the late 1970's and early 1980's, and is described in three seminal papers. In 1978, Allan Lindh of the USGS proposed a multi-year, integrated observation program at Parkfield, combining seismic, geodetic, creep, strain, tilt, and magnetic measurements with theoretical models of fault mechanics.

This site will be an essential resource for the next version of the chaos course. There are two other sites that will serve as complementary resources...

The first is a debate in Nature titled "Is the reliable prediction of individual earthquakes a realistic scientific goal?" Moderated by Ian Main, the site is a fascinating give-and-take on the topic among a number of scientists. For the debate, earthquake prediction is considered in light of one of four scenarios:

  1. Time-independent hazard, in which past occurrences of earthquakes are associated with specific land areas
  2. Time-dependent hazard, in which prediction is based on simple correlative models, including clustering in space and time
  3. Earthquake forecasting, in which forecasts are made based on a precursory signal - say, some unexpected plate movement or a low-level foreshock
  4. Deterministic prediction, in which earthquakes are assumed to be inherently predictable.
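To make scenario 1 concrete: the standard time-independent approach treats earthquakes on a fault as a memoryless (Poisson) process, so the probability of at least one event in a window depends only on the window length and the fault's mean recurrence interval - not on how long it has been since the last quake. A minimal sketch, using a hypothetical fault with a 150-year mean recurrence interval:

```python
import math

def poisson_hazard(t_years, mean_recurrence_years):
    """Probability of at least one event in t_years, assuming earthquakes
    arrive as a Poisson process (the memoryless, time-independent model)."""
    rate = 1.0 / mean_recurrence_years
    return 1.0 - math.exp(-rate * t_years)

# Hypothetical fault with a ~150-year mean recurrence interval:
# chance of at least one event in the next 30 years is about 18%.
print(round(poisson_hazard(30, 150), 3))
```

The model's indifference to elapsed time is exactly what the later scenarios try to improve on - by adding clustering (scenario 2) or precursors (scenario 3) to sharpen the estimate.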

Main provides a provocative call to arms at the beginning of the debate:

Time-independent hazard has now been standard practice for three decades, although new information from geological and satellite data is increasingly being used as a constraint. In contrast, few seismologists would argue that deterministic prediction as defined above is a reasonable goal in the medium term, if not for ever. In the USA, the emphasis has long been shifted to a better fundamental understanding of the earthquake process, and on an improved calculation of the seismic hazard, apart from an unsuccessful attempt to monitor precursors to an earthquake near Parkfield, California, which failed to materialize on time. In Japan, particularly in the aftermath of the Kobe earthquake in 1995, there is a growing realization that successful earthquake prediction might not be realistic. In China, thirty false alarms have brought power lines and business operations to a standstill in the past three years, leading to recent government plans to clamp down on unofficial 'predictions'.

So, if we cannot predict individual earthquakes reliably and accurately with current knowledge, how far should we go in investigating the degree of predictability that might exist?

One person who believes that quakes can be predicted, but that currently accepted models are hopelessly wrong, is geologist Jim Berkland, who claims that his Seismic Window Theory - based on tidal forces associated with Sun/Moon alignment and "abnormal animal behavior" - is a much better predictor. Berkland's web site - Syzygy Job - is fascinating reading because it not only presents up-to-date earthquake news, but also contains Berkland's description of his ostracism by mainstream earthquake scientists. In his words:

Despite my successes in earthquake prediction (using tides and abnormal animal behavior), I found it almost impossible to publish on the subject in scientific journals...

Mainstream scientists generally try to debunk various aspects of my earthquake predictions or to ridicule me personally, with epithets such as crackpot or clown. My response is to question their own records in earthquake prediction, and to point out that the main action of a stream is not near the center, but closer to the edge. Near the fringes, with eddies and cross-currents, erosion and deposition are more effective, sometimes leading to changes in the course of the stream...

The experts of High Science state that earthquake prediction is currently a scientific impossibility. I maintain that the topic is too important to leave to the experts and I continue to do the impossible with a better than 75% battering average, which is more than 300% greater than chance.

"Battering average"? A chance typo, or revealing slip of the tongue, in a field where slippage of tectonic plates is a sign that it is too late to predict - the quake is already here.


Reader Comments (2)

Professor DiDio,

Very interesting post. I had no idea the topic of earthquake prediction could generate such seismicity. It seems to me, however, that a more important application of physics to earthquakes is in the design of better seismic retrofitting, so that predictions are less and less needed. Unless earthquakes become more intense over time (a possibility?), I would think that developing better safety precautions would be enough to prevent the massive injuries we've seen in the past. Am I too trusting of modern technology?

Also, observing how little people care about living in "earthquake zones" or "tornado alleys" or "hurricane belts", I wonder if populations will ever respond to the predictions, even if they are greatly improved. I think of all the people who refuse to leave their homes in the midst of massive forest fires. And they know exactly where the fire is and when it's coming!

That said, I would appreciate the knowledge of when and where an earthquake is going to hit, especially since I am on my way home to San Francisco.

Bryan

July 23, 2007 | Unregistered CommenterBryan Nelson

Bryan - The design of earthquake-resistant structures is more of an engineering problem than a physics one. The basic physics is easy: divert the energy that would normally go towards demolishing a house to something more benign: bending, oscillation with damping, etc.
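The "oscillation with damping" part can be sketched with the textbook underdamped oscillator: the peak motion decays like x0 * e^(-zeta * omega * t), so raising the damping ratio zeta bleeds the quake's energy out of the structure far faster. A minimal sketch with illustrative values (a 1 Hz structure; the two damping ratios are hypothetical, standing in for a bare frame vs. a retrofitted one):

```python
import math

def envelope(t, x0, omega, zeta):
    """Amplitude envelope of an underdamped oscillator: x0 * e^(-zeta*omega*t).
    zeta is the damping ratio; larger zeta dissipates energy faster."""
    return x0 * math.exp(-zeta * omega * t)

omega = 2 * math.pi  # natural frequency of a ~1 Hz structure, rad/s
for zeta in (0.02, 0.20):  # illustrative: lightly damped vs. heavily damped
    print(zeta, round(envelope(10.0, 1.0, omega, zeta), 6))
```

Ten seconds after the same initial displacement, the lightly damped frame is still swaying at roughly a quarter of its initial amplitude, while the heavily damped one has essentially stopped - the same energy-diversion idea behind real dampers and base isolation.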

I don't recall seeing anything about earthquake intensities changing in time. If anything, there might now be a lower average intensity due to the fact that there are probably more lower-intensity quakes now being recorded because of advances in detection technology.
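That detection effect falls right out of the Gutenberg-Richter law, which says the number of quakes at or above magnitude M goes as 10^(a - bM) with b near 1 - so small quakes vastly outnumber large ones. A minimal sketch, drawing synthetic magnitudes from that distribution above two different detection thresholds (the catalog sizes and thresholds are illustrative, not real data):

```python
import math
import random

def sample_magnitudes(n, mc, b=1.0, seed=0):
    """Draw n synthetic magnitudes above a completeness threshold mc.
    Gutenberg-Richter (N >= M goes as 10^(a - b*M)) implies magnitudes above
    mc are exponentially distributed with rate b * ln(10)."""
    rng = random.Random(seed)
    beta = b * math.log(10)
    return [mc + rng.expovariate(beta) for _ in range(n)]

# Lowering the detection threshold floods the catalog with small quakes,
# pulling the *average* recorded magnitude down (the theoretical mean sits
# about 1/ln(10) ~ 0.43 units above the threshold) with no physical change.
for mc in (4.0, 2.0):
    mags = sample_magnitudes(50_000, mc)
    print(mc, round(sum(mags) / len(mags), 2))
```

So an apparent drop in average recorded intensity over the decades can be entirely an instrumentation story: better seismometers lower the threshold, and the exponential tail of small quakes does the rest.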

And I agree that there will always be some who choose to live in a dangerous spot, regardless of predictions. However, if there were just one highly accurate prediction of some disaster, I bet that more would heed the modeler's call.

rad

July 23, 2007 | Registered CommenterR.A. DiDio
