People familiar with my Analog fact articles may be surprised to learn that my Ph.D. has nothing to do with anything I tend to write about in these pages. Specifically, my degree is in economics and my dissertation title was The Role of the Forest Service in the Development of Ski Resorts. In it, I was studying the process by which most of America’s ski areas, especially in the West, are created. Other than the base facilities, they generally lie on public land (usually national forests), operating under what are called “special use permits.”
When someone applies for a permit for a new ski area (or an expansion of an existing one) the U.S. Forest Service evaluates the proposal and produces an environmental impact statement. To a large degree, these studies involve classic environmental topics, such as the effects on rivers, air quality, wildlife, and adjacent wild lands. But they also look at why the development is needed (relevant for a long-term commitment of federal land) and assess its economic impacts.
As I read these in the late 1970s, it became clear that the people writing them weren’t economists and often didn’t understand the field’s basics. On occasion they counted as benefits things that economists would see as costs. Furthermore, substantial amounts of time and effort were spent examining even the most harebrained proposals (and real estate development is a field rife with impractical dreamers).
Eventually, I published a 71-page law review article proposing not only a more economically sound way of doing this, but advising a triage approach to the entire problem. That way the Forest Service could quickly weed out the ridiculous, give easy approval to the obviously good, and save the serious study for things where both the environmental and economic stakes were high enough to warrant it.
My work got enough attention that the agency decided to give it a try. The result, however, was a bit of a disaster. Instead of saving time and effort (and reaching a more rational decision), they got sued by the developer, which insisted it had a right to have things done the way they’d always been done before. The court agreed. It didn’t matter that the new approach might represent a better mousetrap. The old mousetrap was the way things were to be done. I’m a bit out of date in the field, but to the best of my knowledge, nothing has changed since.1
The same, sadly, can happen in science. This column started out being about economics, but it’s also the story of two earthquakes. One we all know about: the March 11, 2011 giant that made the number 311 resonate in Japan almost like 911 does here. It weighed in at magnitude 9.0, killed 20,000 people, did nearly four times as much damage as Hurricane Katrina, and filled the media with fears of nuclear meltdown.
The other was on Friday, April 13, in California’s South Bay area. Despite the unlucky date, pretty much nobody heard about it—no surprise because at magnitude 3.5, it wasn’t much of an earthquake. The only thing that made it notable at all was that it happened near the epicenter of 1989’s magnitude 6.9 Loma Prieta earthquake, which knocked down freeway spans, collapsed buildings, and left 63 people dead throughout the Bay Area.
What the two earthquakes had in common was that both triggered early warning systems.
Years ago, I wrote in detail about such systems,2 but the basic idea is simple: put a few seismometers between you and likely fault zones, link these instruments to a fast computer for a quick calculation of epicenter and magnitude, and get warnings out to people downrange before the shock waves hit. On land, you don’t get a lot of warning, but it’s enough to slam on the brakes on fast-moving trains, lock down gurney wheels in hospitals, close entrance ramps onto bridges, etc. For undersea earthquakes, you actually have several tens of minutes before the slower-moving tsunami arrives—a lot of time for people to be running for the hills.
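To put rough numbers on how little warning that is, here is a back-of-the-envelope sketch. The wave speeds, station distance, and alert latency are typical values I’ve assumed for illustration, not parameters of any real warning system; the whole trick is simply that the damaging S-waves travel more slowly than the P-waves that trip the nearest seismometers.

# Back-of-the-envelope earthquake warning time, illustrative only.
# Wave speeds, station distance, and latency are assumed typical values,
# not parameters of any real warning system.

V_P = 6.5   # km/s, fast P-waves that trip the nearest seismometers
V_S = 3.5   # km/s, slower, more damaging S-waves

def warning_time(city_km, station_km=20.0, latency_s=3.0):
    """Seconds of warning a city gets before strong shaking arrives.

    city_km    -- distance from the epicenter to the city
    station_km -- distance from the epicenter to the nearest seismometer
    latency_s  -- assumed time to compute and broadcast the alert
    """
    detection = station_km / V_P   # P-wave reaches the nearest station
    shaking = city_km / V_S        # S-wave reaches the city
    return max(0.0, shaking - detection - latency_s)

# A city 100 km from the epicenter gets on the order of twenty seconds,
# roughly in line with the warning Allen describes below.
print(round(warning_time(100.0), 1))   # ~22.5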
It works. “At my desktop in Berkeley, I got twenty-five seconds’ warning [of the April 13 tremor],” Richard Allen, director of the seismology laboratory at the University of California, Berkeley, announced a few days later at a meeting of the Seismological Society of America. “It was a good test of how quickly the system could provide warning.”
That sounds good, but wait a second. Test? Japan, Taiwan, and Mexico have had such systems for years. And the Bay Area is still undergoing testing?
The reason, of course, is funding. Japan, Taiwan, and Mexico have histories of big, recent earthquakes. That provides a good deal of incentive to get the technology off the shelf and into the field, promptly. But there’s another aspect of this that’s even more important for the present discussion. Even as this technology is still being tested, there’s a better one that’s been around since at least 2002.
All of this takes us back to Japan on March 11.
Japan’s earthquake-warning system has long been online, and on one level, it worked extremely well. Within eight seconds of the first jolt at the seismometers closest to the earthquake, Allen says, warnings were out. There was just one problem. The warning system calculated the magnitude at 7.1. Within two minutes, Allen says, the estimate increased to magnitude 8.1, but it took another twenty minutes for the Japanese computers to determine that the earthquake was actually a magnitude 9.0. By then, the giant tsunami was already well on its way, striking only ten minutes later.
Again, this is a mix of good news and bad news. The Japanese knew there’d been a big earthquake and that a tsunami was likely. Of the 200,000 people in harm’s way, 180,000 escaped. By any conventional measure, such an evacuation was a massive success, points out Seth Stein, a geophysicist at Northwestern University, Evanston, Illinois.
But, that said, the Japanese blew it. They had a better mousetrap at their disposal . . . and never used it.
The issue has to do with rapid estimation of earthquake magnitudes—important because a magnitude 9 is thirty times more powerful than a magnitude 8 and therefore produces a bigger tsunami.
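That “thirty times” isn’t arbitrary. On the moment-magnitude scale, the energy released grows by a factor of about 10^1.5, roughly thirty-two, for each full magnitude step, so the arithmetic is a one-liner (assuming nothing beyond that standard relation):

# Energy ratio between two moment magnitudes: E2/E1 = 10 ** (1.5 * (M2 - M1))
ratio = 10 ** (1.5 * (9.0 - 8.0))
print(f"A magnitude 9 releases about {ratio:.0f} times the energy of a magnitude 8.")
# prints: about 32 times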
This is important because all you have to do to survive a tsunami is get high enough, or far enough inland, to avoid it. In fact, as Patrick Corcoran, a hazards-outreach specialist with Oregon Sea Grant Extension in Astoria, Oregon, later told me, in Japan there was a very stark demarcation line between areas that were devastated by the wave and those that weren’t. “It wasn’t that far [away] for most people,” he says. “You just had to know where the line was and get to it. . . . If you got above it, you were safe. If you didn’t, you weren’t.”
“A lot of people who didn’t evacuate, or didn’t evacuate far enough, didn’t think it [the tsunami] would be that big,” adds Stein. “It’s important to get a better take, early on, about how big an earthquake is.”
The problem was that the Japanese relied on seismometers to trigger their warning system, and, for complex reasons, seismometers are slow to recognize the difference between big earthquakes and huge ones. “Seismometers do a beautiful job of discriminating among magnitude 2, 3, 4, 6 earthquakes,” says Tim Melbourne, a geodesist at Central Washington University, Ellensburg, Washington. “But they run into trouble where you have to distinguish a magnitude 8 from a magnitude 9.”
Melbourne believes the solution is to supplement seismometer readings with real-time data from GPS receivers located as close to the fault as possible. That’s not because GPS receivers are better than seismometers at detecting shaking (they aren’t) but because they can quickly detect the types of permanent changes in position that occur when large areas shift in response to earthquakes. “If the ground lurches [in one direction] by many meters it’s unambiguous evidence for a very large earthquake,” says Susan Hough, a seismologist for the U.S. Geological Survey’s Pasadena, California, office.
(It should be noted, by the way, that these GPS stations don’t have to be located immediately adjacent to the fault. On March 11, the entire island of Honshu shifted about 2.4 meters eastward, and parts of the coast moved by as much as five meters.)
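Why does a lurch measured in meters point so directly at a monster quake? Because the seismic moment, which maps straight onto magnitude, is simply the rigidity of the rock times the area of fault that ruptured times the average slip. Here is an illustrative calculation in that spirit; the rupture dimensions and slip are round numbers of my own choosing, broadly in the range later reported for the Tohoku rupture, not anyone’s actual real-time estimate.

import math

# Seismic moment M0 = rigidity * rupture area * average slip (SI units),
# then moment magnitude Mw = (2/3) * (log10(M0) - 9.1).
# The fault dimensions and slip below are illustrative round numbers,
# broadly in the range reported for the 2011 Tohoku rupture.

RIGIDITY = 3.0e10      # Pa, typical for crustal rock
length = 400e3         # m, rupture length (assumed)
width = 150e3          # m, rupture width (assumed)
avg_slip = 20.0        # m, average slip on the fault (assumed)

moment = RIGIDITY * length * width * avg_slip          # N*m
magnitude = (2.0 / 3.0) * (math.log10(moment) - 9.1)

print(f"Mw ~ {magnitude:.1f}")   # roughly 9.0: a lurch of meters means a monster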
Using real-time GPS data available to the Japanese during the earthquake, Melbourne has found that GPS-linked computers could have recognized the earthquake’s true magnitude within two minutes of its onset—ten times faster than the Japanese seismometers. That’s an extra eighteen minutes in which to tell people that they might be hit by a truly enormous wave. Would it have made a difference? Who knows? But it’s hard not to imagine that some additional lives would have been saved.
There are, of course, potential glitches with the new approach. Hough, for example, notes that GPS signals can get spikes when GPS satellites rise above the horizon. Other problems could come from solar storms or other random factors interfering with satellite signals. “Occasionally you can get an excursion [in the data sent from the GPS receiver] that may look like the Earth is moving but is actually an artifact of the processing,” says Yehuda Bock, a geodesist at Scripps Institution of Oceanography, La Jolla, California. In other words, there’s a risk of false alarms simply due to errant GPS signals.
For these reasons, Bock thinks the best solution is to piggyback low-grade seismometers on GPS stations. That way, if the GPS receiver shows a big lurch but the seismometer shows nothing, it’s a false alarm. (Currently he’s part of a group designing just such a system for Southern California.)
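In code terms, the piggyback idea boils down to refusing to sound the alarm unless the two instruments agree. The sketch below is my own illustration of that logic, with placeholder thresholds; it is not the design Bock’s group is actually building.

# Illustrative GPS/seismometer cross-check in the spirit of the piggyback
# approach Bock describes. The thresholds are arbitrary placeholders, not
# the values any real system uses.

def classify(gps_offset_m, peak_acceleration_g):
    """Decide what a sudden jump in a GPS position most likely means.

    gps_offset_m        -- apparent permanent ground displacement from GPS
    peak_acceleration_g -- shaking recorded by the collocated seismometer
    """
    if gps_offset_m > 0.5 and peak_acceleration_g > 0.05:
        return "large earthquake: issue warning"
    if gps_offset_m > 0.5:
        return "likely GPS artifact (satellite or processing glitch): hold the alarm"
    return "no significant event"

print(classify(2.4, 0.30))    # big lurch plus strong shaking: warn
print(classify(2.4, 0.001))   # big lurch but a quiet seismometer: false alarm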
Melbourne, however, thinks enough planning is enough. “At some point,” he says, “you have to say, okay, it seems to be working.” In April, therefore, Melbourne’s team announced that it has gone online with a warning system linking hundreds of GPS receivers spanning a range from Northern California to British Columbia. “The Cascadia Subduction Zone [offshore from these areas] is our main focus,” he says. “That is potentially a magnitude 9 fault.”
It’s not yet a true warning system, however, because Melbourne’s team does not have authority to issue official warnings. Rather, all they can do is dial up those who do and tell them that something big appears to be going on. “This is how things go,” he says, “when projects of this complexity and scale grow organically from the ground up among many agencies and universities doing small projects, rather than as top-down, Apollo-mission type projects with huge budgets where everything is planned in advance.”
With lives potentially on the line, why has it taken so long for the technology to progress? GPS has been around for decades, and at least as far back as 2002, scientists have been pointing out that it could be used precisely in this manner. But it’s only now that something is finally happening.
Partly, of course, it’s Moore’s Law, the computer-world empiricism that processing speeds, etc., double every 18 months. In 2002, scientists were estimating that it would take several seconds to produce earthquake warnings, once the first data reached their computers. Now, Melbourne says, “the data get to our lab in under a tenth of a second and we can process that into position estimates good to a couple of centimeters within a half-second.”
But this was never a field in which we needed centimeter accuracy and fraction-of-a-second speed. As we noted earlier, the ground motions we’re looking at here are on the order of meters and the traditional seismometer-based approach took twenty minutes to reach the right answer. Nor is it a matter of installing expensive equipment. Japan had long ago installed an array of GPS receivers, and the ones in the U.S. have been continuously online for years, supplying high-precision data to land surveyors needing hyper-precise benchmarks. It’s simply a matter of making use of the data we’re already getting. “Japan has the gold standard for everything [related to earthquakes],” Melbourne says. “The fact they hadn’t incorporated [GPS] . . . I thought the Japanese would have incorporated that a decade ago.”
Basically, it’s another example of the difficulty in trying to change the way things have always been done, especially when it involves people in one field trying to convince people in another field that there’s a better way. In my case, I was trying to convince foresters and lawyers that there were better ways to do economics. Here, researchers in the relatively young field of geodesy (the study of the precise locations of points on the Earth’s surface) are invading a field that used to be the sole domain of someone else.
“A hundred years ago, somebody invented seismometers,” Melbourne sighs. “The rest is history. That’s how earthquakes have traditionally been measured.”
“The seismology community has been slow to really appreciate how useful GPS is,” adds Bock. Though, he notes, “that’s changing.”
It’s changing rapidly enough, in fact, that this particular issue is quickly being resolved. But how many other better mousetraps are out there that have never seen the light of day because they’re simply not the way things have always been done?
Copyright © 2012 Richard A. Lovett