TOKYO — In planning their defense against a killer tsunami, the people running Japan’s now-hobbled nuclear power plant dismissed important scientific evidence and all but disregarded 3,000 years of geological history, an Associated Press investigation shows.
The misplaced confidence displayed by Tokyo Electric Power Co. was prompted by a series of overly optimistic assumptions that concluded the Earth couldn’t possibly release the level of fury it did two weeks ago, pushing the six-reactor Fukushima Dai-ichi complex to the brink of multiple meltdowns.
Instead of the reactors staying dry, as contemplated under the power company’s worst-case scenario, the plant was overrun by a torrent of water much higher and stronger than the utility argued could occur, according to an AP analysis of records, documents and statements from researchers, the utility and Japan’s national nuclear safety agency.
And while TEPCO and government officials have said no one could have anticipated such a massive tsunami, there is ample evidence that such waves have struck the northeast coast of Japan before – and that it could happen again along the culprit fault line, which runs roughly north to south, offshore, about 220 miles (350 kilometers) east of the plant.
TEPCO officials say they had a good system for projecting tsunamis. They declined to provide more detailed explanations, saying they were focused on the ongoing nuclear crisis.
What is clear: TEPCO officials discounted important readings from a network of GPS units that showed that the two tectonic plates that create the fault were strongly “coupled,” or stuck together, thus storing up extra stress along a line hundreds of miles long. The greater the distance and stickiness of such coupling, experts say, the higher the stress buildup – pressure that can be violently released in an earthquake.
That evidence, published in scientific journals starting a decade ago, showed the telltale characteristics of a fault capable of producing the truly overwhelming quake – and therefore tsunami – that it did.
On top of that, TEPCO modeled the worst-case tsunami using its own computer program instead of an internationally accepted prediction method.
It matters how the Japanese calculate risk. In short, they rely heavily on what has happened to figure out what might happen, even if the probability is extremely low. If the view of what has happened isn’t accurate, the risk assessment can be faulty.
That approach led to TEPCO’s disregard of much of Japan’s tsunami history.
In postulating the maximum-sized earthquake and tsunami that the Fukushima Dai-ichi complex might face, TEPCO’s engineers decided not to factor in quakes earlier than 1896. That meant the experts excluded a major quake that occurred more than 1,000 years ago – a tremor followed by a powerful tsunami that hit many of the same locations as the recent disaster.
A TEPCO reassessment presented only four months ago concluded that tsunami-driven water would push no higher than 18 feet (5.7 meters) once it hit the shore at the Fukushima Dai-ichi complex. The reactors sit atop a small bluff, between 14 and 23 feet (4.3 and 6.3 meters) above TEPCO’s projected high-water mark, according to a presentation at a November seismic safety conference in Japan by TEPCO civil engineer Makoto Takao.
“We assessed and confirmed the safety of the nuclear plants,” Takao asserted.
However, the wall of water that thundered ashore two weeks ago reached about 27 feet (8.2 meters) above TEPCO’s prediction. The flooding disabled backup power generators, located in basements or on first floors, imperiling the nuclear reactors and their nearby spent fuel pools.
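Converting the story’s figures to meters makes the margin of error concrete. This quick arithmetic sketch uses only the numbers reported above; the variable names are illustrative, not from TEPCO’s analysis:

```python
# Figures reported in the story, in meters.
predicted_runup = 5.7         # TEPCO's projected maximum tsunami height at the shore
bluff_above_mark = 6.3        # the reactors' highest margin above the projected high-water mark
excess_over_prediction = 8.2  # how far the actual wave exceeded TEPCO's prediction

actual_runup = predicted_runup + excess_over_prediction
overtopping = excess_over_prediction - bluff_above_mark

print(f"actual run-up: about {actual_runup:.1f} m")        # about 13.9 m
print(f"cleared the highest bluff by about {overtopping:.1f} m")  # about 1.9 m
```

Even at the generous end of the bluff’s elevation range, the water had roughly two meters to spare – consistent with the flooded basements and first floors described above.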
The story leading up to the Tsunami of 2011 goes back many, many years – several millennia, in fact.
The Jogan tsunami of 869 displayed striking similarities to the events in and around the Fukushima Dai-ichi reactors. The importance of that disaster, experts told the AP, is that the most accurate planning for worst-case scenarios is to study the largest events over the longest period of time. In other words, use the most data possible.
The evidence shows that plant operators should have known of the dangers – or, if they did know, disregarded them.
As early as 2001, a group of scientists published a paper documenting the Jogan tsunami. They estimated waves of nearly 26 feet (8 meters) at Soma, about 25 miles (40 kilometers) north of the plant. North of there, they concluded that a surge from the sea swept sand more than 2 1/2 miles (4 kilometers) inland across the Sendai plain. The latest tsunami pushed water at least 1 1/2 miles (2 kilometers) inland.
The scientists also found two additional layers of sand and concluded that two additional “gigantic tsunamis” had hit the region during the past 3,000 years, both presumably comparable to Jogan. Carbon dating couldn’t pinpoint exactly when the other two hit, but the study’s authors put the range of those layers of sand at between 140 B.C. and A.D. 150, and between 910 B.C. and 670 B.C.
In a 2007 paper published in the peer-reviewed journal Pure and Applied Geophysics, two TEPCO employees and three outside researchers explained their approach to assessing the tsunami threat to Japan’s nuclear reactors, all 54 of which sit near the sea or ocean.
To ensure the safety of Japan’s coastal power plants, they recommended that facilities be designed to withstand the highest tsunami “at the site among all historical and possible future tsunamis that can be estimated,” based on local seismic characteristics.
But the authors went on to write that tsunami records before 1896 could be less reliable because of “misreading, misrecording and the low technology available for the measurement itself.” The TEPCO employees and their colleagues concluded, “Records that appear unreliable should be excluded.”
Two years later, in 2009, another set of researchers concluded that the Jogan tsunami had reached 1 mile (1.5 kilometers) inland at Namie, about 6 miles (10 kilometers) north of the Fukushima Dai-ichi plant.
The warning from the 2001 report about the 3,000-year history would prove to be most telling: “The recurrence interval for a large-scale tsunami is 800 to 1,100 years. More than 1,100 years have passed since the Jogan tsunami, and, given the recurrence interval, the possibility of a large tsunami striking the Sendai plain is high.”
The fault involved in the Fukushima Dai-ichi tsunami is part of what is known as a subduction zone. In subduction zones, one tectonic plate dives under another. When the fault ruptures, the sea floor snaps upward, pushing up the water above it and potentially creating a tsunami. Subduction zones are common around Japan and throughout the Pacific Ocean region.
TEPCO’s latest calculations were started after a magnitude-8.8 subduction zone earthquake off the coast of Chile in February 2010.
In such zones over the past 50 years, earthquakes of magnitude 9.0 or greater have occurred in Alaska, Chile and Indonesia. All produced large tsunamis.
When two plates are locked across a large area of a subduction zone, the potential for a giant earthquake increases. And those are the exact characteristics of where the most recent quake occurred.
TEPCO “absolutely should have known better,” said Dr. Costas Synolakis, a leading American expert on tsunami modeling and an engineering professor at the University of Southern California. “Common sense,” he said, should have produced a larger predicted maximum water level at the plant.
TEPCO’s tsunami modelers did not judge that, in a worst-case scenario, the strong subduction and coupling conditions present off the coast of Fukushima Dai-ichi could produce the 9.0-magnitude earthquake that occurred. Instead, they set the presumed maximum at magnitude 8.6 – meaning the March 11 quake released roughly four times as much energy as the maximum they had planned for.
Shogo Fukuda, a TEPCO spokesman, said that 8.6 was the maximum magnitude entered into the TEPCO internal computer modeling for Fukushima Dai-ichi.
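The “four times as powerful” comparison follows from the moment-magnitude scale, on which radiated seismic energy grows by a factor of 10^1.5 (about 31.6) per whole magnitude unit. A minimal sketch of the arithmetic; the function name is illustrative:

```python
def energy_ratio(m_big: float, m_small: float) -> float:
    """Ratio of seismic energy released by quakes of two moment magnitudes.

    Radiated energy scales as 10**(1.5 * M), so only the
    magnitude difference matters.
    """
    return 10 ** (1.5 * (m_big - m_small))

# The actual March 11 quake (9.0) vs. TEPCO's presumed maximum (8.6):
print(round(energy_ratio(9.0, 8.6), 1))  # 4.0 -- the "four times" in the story
```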
Another TEPCO spokesman, Motoyasu Tamaki, used a new buzzword, “sotegai,” or “outside our imagination,” to describe what actually occurred.
U.S. tsunami experts said that one reason the estimates for Fukushima Dai-ichi were so low was the way Japan calculates risk. Because of the island nation’s long history of killer waves, Japanese experts often will look at what has happened – then project forward what is likely to happen again.
Under longstanding U.S. standards that are gaining popularity around the world, risk assessments typically construct a worst-case scenario based on what could happen, then design a facility such as a nuclear power plant to withstand that combination of conditions – factoring in just about everything short of an extremely unlikely cataclysm, such as a large meteor hitting the ocean and creating a massive wave that kills hundreds of thousands.
In the early 1990s, Harry Yeh, now a tsunami expert and engineering professor at Oregon State University, was helping assess potential threats to the Diablo Canyon nuclear power plant on the central California coast in the United States. During that exercise, he said, researchers considered a worst-case scenario involving a significantly larger earthquake than had ever been recorded there.
Then a tsunami was added. In the Diablo Canyon model, the quake hit during a monster storm that was already pushing higher waves onto the shore than had ever been measured at the site.
In contrast, when TEPCO calculated its high-water mark at 18 feet (5.7 meters), the anticipated maximum earthquake was in the same range as others recorded off the coast of Fukushima Dai-ichi – and the only assumption about the water level was that the tsunami arrived at high tide.
Which, as is abundantly clear now, could not have been more wrong.
This AP story was written by Yuri Kageyama and Justin Pritchard.
Pritchard reported from Los Angeles. AP writers Mari Yamaguchi in Tokyo and Alicia Chang in Los Angeles and AP researcher Barbara Sambrinski in New York contributed to this report.