Monthly Archives: August 2013

Catastrophe modeling of floods is not just a problem of stochastic rainfall, run-off, and channel flows. It also requires anticipating human actions, for flood is as much a man-made peril as it is a natural peril.

Passive human interventions, such as permanent flood defenses, are less of a challenge to model than active interventions. Will the portable flood defenses be installed in time?

Perhaps they have already been borrowed by some upstream community, as happened for one town in the U.K. along the River Severn in the summer 2007 floods.

In “active flood management,” land and properties upstream get deliberately sacrificed to protect a downstream concentration of value. In modeling one can assume the decisions are rational and involve carefully calculated trade-offs. The same cannot always be said for human actions.

In 1927 the grandees of New Orleans, concerned that the city was about to be inundated by the Mississippi river, blew up the levees 13 miles downstream of the city with 39 tons of dynamite (with the idea of speeding up the flow of water through the city). The action proved completely unnecessary.

The Great Mississippi Flood of 1927 in Natchez, Mississippi, showing a submerged train with boats brought in for rescue (Courtesy of NOAA’s National Weather Service Collection from the family of Captain Jack Sammons, Coast and Geodetic Survey)

One of the biggest challenges in flood modeling is how to factor in the role of dams.

What is the water level in the dam likely to be when the flood wave arrives?

How are the operators of the dam likely to have behaved ahead of the flood?

In many low-latitude countries the problem for operators is that they often face two irreconcilable objectives.

Objective 1: The dam operator has to hold onto as much water as possible through the rainy period so that water remains available to all agricultural, industrial and domestic users through the dry season. The reservoir should be completely full the day the rain stops.

Objective 2: The dam operator is expected to hold back a large proportion of a flood wave, releasing the water after the wave has passed. To be effective the operator needs to have as little water as possible in the reservoir before the flood arrives.

Simply because dry years happen more often than extreme floods, most operators work to the first objective.

Thailand suffered a drought in 2010, and dam operators were accused of not holding enough water in reserve. They therefore topped up their reservoirs at the start of 2011 and had little spare capacity to manage the flood waves of the ensuing summer and autumn.

Earlier the same year, much the same situation had unfolded in Brisbane, Australia. After catastrophic floods in 1974, a main branch of the Brisbane River had been dammed to create Lake Wivenhoe. Over the years the dam was increasingly used for water retention, so when the intense rains came in early January 2011 the operators soon ran out of storage capacity. In March 2011 the Insurance Council of Australia claimed that the “release from Wivenhoe Dam raised water levels in the Brisbane River by up to 10 meters” – and that the January event could be classed as a “dam release flood.”

“Dammed if you do and dammed if you don’t.”

Being a dam operator can be a very stressful job! Ideally, operators need optimization software to assist with these decisions – incorporating long-range rainfall forecasts, the cost of downstream flooding, and the expected price of water in a period of low rainfall – to determine their optimum release strategy.
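The trade-off between the two objectives can be made concrete with a toy expected-cost calculation. Everything below – probabilities, volumes, and cost rates – is an illustrative assumption, not data from any real reservoir; it is a sketch of the kind of optimization such software would perform, not an operational model.

```python
# Toy sketch of the dam operator's trade-off. All numbers are
# illustrative assumptions, not real reservoir data.

FLOOD_PROB = 0.1     # assumed chance of an extreme flood season
DROUGHT_PROB = 0.3   # assumed chance of a dry year (more frequent, as noted)
CAPACITY = 100.0     # reservoir capacity, arbitrary volume units

def expected_cost(stored, flood_volume=60.0, water_value=1.0, flood_damage_rate=5.0):
    """Expected cost of entering the wet season with `stored` units held back.

    - In a drought year, every unit NOT stored is water the operator
      cannot supply (cost = water_value per unit of shortfall).
    - In a flood year, any part of the flood wave exceeding the remaining
      free capacity passes downstream unattenuated (cost = flood_damage_rate
      per unit of spill).
    """
    drought_cost = DROUGHT_PROB * water_value * (CAPACITY - stored)
    spill = max(0.0, flood_volume - (CAPACITY - stored))
    flood_cost = FLOOD_PROB * flood_damage_rate * spill
    return drought_cost + flood_cost

# Sweep candidate storage levels and pick the one with the lowest expected cost.
levels = [10 * i for i in range(11)]
best = min(levels, key=expected_cost)
print(best, expected_cost(best))  # with these toy numbers: 40 units stored
```

With these made-up inputs the optimum leaves some free capacity for flood attenuation, but because droughts are assumed three times more likely than extreme floods, it is far from empty – echoing why real operators drift toward Objective 1.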

For now, when developing a river flood catastrophe loss model it is safest to assume that the dam will not be functioning at the optimum for flood wave reduction. In episodes of prolonged heavy rainfall the reservoir will cease to have any capacity for water retention – so that the flooding downstream will be as if the dam did not exist.

As the industry and modeling organizations continue to learn from the impacts of Hurricane Sandy in 2012, it’s important to recognize that this event, although historic and record-breaking on many counts, was not unprecedented.

The historical record shows dozens of other tropical cyclones that have impacted the Northeast U.S., some of them more intense than Hurricane Sandy from both a wind and a surge perspective.

Noteworthy examples include the 1938 New England Hurricane, 1954 Hurricane Carol, and the 1893 New York Hurricane, which will be marking its 120th anniversary on August 24. This storm is notable for being one of only two hurricanes to make a direct landfall in New York City, the other being the 1821 Long Island Hurricane.

First identified as a tropical storm on August 15, 1893 in the central Atlantic Ocean, the storm gradually intensified over the next seven days as it tracked northwestward toward the U.S. By August 22 it had reached its peak intensity of 115 mph (185 km/h), making it a major hurricane (Category 3). At this point it began to recurve to the north, bringing it in line with coastal New Jersey and New York. Two days later, after interaction with parts of New Jersey had weakened it somewhat, the storm made landfall on western Long Island with peak winds around 85 mph (140 km/h).

The hurricane impacted much of the coastal and interior Northeast with tropical-storm-force winds, and much of New York City with hurricane-force winds. On the surge side, the storm brought a 30-foot (9.1 m) storm surge that completely flooded southern Brooklyn and Queens, NY, along with many other low-lying areas.

Given the severity of its surge, the storm is best known for destroying most of Hog Island, a 1-mile (1.6 km) long island that existed south of the modern-day Long Island coast.

According to version 13.0 of the RMS U.S. Hurricane Model, if the 1893 New York Hurricane were to occur today, the modeled insured losses from a wind-only perspective would be $6.4 billion, and $6.9 billion from a wind and surge perspective. Although not as damaging as Hurricane Sandy, this storm would be a top-10 historical event in the Mid-Atlantic and Northeast regions.

Compared to Hurricane Sandy, the 1893 New York Hurricane was estimated to have been smaller in overall size and intensity at landfall, but significantly larger in terms of surge height and extent. Model-generated hazard and damage footprints for the 1893 New York Hurricane are narrower in width and comparable in terms of peak wind gust.

Further, the impacted areas are confined to coastal New England because the storm followed the traditional clockwise recurving track. By contrast, Hurricane Sandy took a counterclockwise turn toward the coast just before landfall, prior to recurving toward the north and east, which produced a hazard footprint covering many Mid-Atlantic states.

Nevertheless, an event such as the 1893 New York Hurricane demonstrates that, from a hazard perspective, Hurricane Sandy was not a once-in-a-lifetime storm.

Similar events have occurred and will continue to occur, especially given the current period of heightened hurricane activity in the Atlantic and the high surge risk in the Northeast U.S. It is therefore imperative that the industry raises awareness of these risks and monitors them accordingly.

As I have observed many times, every catastrophe is a “perfect storm,” and the one factor common to all catastrophes is that each is unique. Best practice means looking beyond the models and maintaining a strong sense of “plausible impossibilities.”

We must also make sure we do not forget lessons learned in the past – for example, the importance of complete and accurate data, and of understanding the policy terms, such as whether the sum insured or the replacement cost is paid out. In the case of New Zealand, replacement must be to the latest building codes.

One key question today has been whether the Christchurch earthquake could occur under a big Australian city. An earthquake of the same magnitude as the Lyttelton earthquake is certainly possible, but the soil types are quite different.

As described in Robert Muir-Wood’s previous blog on ultra-liquefaction, one of the key characteristics of ultra-liquefiable soils is that they are glacially deposited – something that, fortunately, Australian cities, and even other New Zealand cities such as Wellington, do not have to the same extent as Christchurch. However, other surprises may occur, such as landslides in Wellington.

The earthquakes of 2011 are clearly an opportunity to learn and improve our models, but we all need to embrace the fact that there will continue to be sources of surprise – ‘unknown unknowns’ are called that for a reason.

Science and knowledge are always evolving. Best practice today will change tomorrow, just as in sports as diverse as rugby union and the America’s Cup, where the technology, training practices, and even clothing of today were unimaginable 10 years ago. Our best understanding today will certainly change in the future.

However, this does not make models irrelevant. My favorite quote is a play on General Eisenhower’s statement that “in preparing for battle, I have always found that plans are useless but planning is indispensable.” I would say, “all models are wrong, but modeling is indispensable.”

Modeling allows users to develop understanding of the models’ strengths and weaknesses, validate with whatever information is available, assess the methodologies and assumptions used, and decide what they are more comfortable with. In addition, users should consider stress tests and scenarios to further increase their intuition and knowledge of the risk potential.

Whilst the paper was written in Europe, its principles are applicable globally, in all regions and for all perils. As Australasia’s risks grow, together with regulatory interest in catastrophe modeling, the paper will continue to provide guidance and advice to all those using catastrophe models to understand and manage their risk.

The Terrorism Risk Insurance Act (TRIA), which provides assistance to pay claims in the USA in the event of a terrorist attack, expires at the end of 2014.

The debate over whether the Act should be extended a third time is likely to be acrimonious, given partisan divides over financial legislation. If renewal fails, the banking, construction, and insurance sectors will be affected in a significant and troubling manner.

RMS: TRIA Program Highlights, August 2013

TRIA was first passed in 2002 and has since been extended and amended twice. Each extension of the Act has tightened coverage by raising deductibles, increasing minimum losses, and reducing the pro-rata government share of losses (currently 85% of a $100 billion layer).
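The layered structure just described can be sketched in a few lines. The 85% share and the $100 billion layer come from the paragraph above; the deductible in the example is a purely hypothetical figure (the Act actually ties each insurer’s deductible to its premiums), so treat this as an illustration of the mechanics, not the statute.

```python
# Simplified sketch of TRIA-style loss sharing. The 85% federal share and
# the $100 billion layer are from the program description; the deductible
# used in the example below is hypothetical, since the real deductible is
# a percentage of each insurer's prior-year premiums.

FED_SHARE = 0.85
PROGRAM_CAP = 100e9  # the $100 billion layer

def split_loss(total_loss, insurer_deductible):
    """Return (insurer_paid, government_paid) for a covered terrorism loss."""
    covered = min(total_loss, PROGRAM_CAP)           # losses above the cap fall outside the layer
    excess = max(0.0, covered - insurer_deductible)  # only losses above the deductible are shared
    government = FED_SHARE * excess
    insurer = covered - government
    return insurer, government

# Example: a $20 billion event against a hypothetical $5 billion deductible.
insurer, gov = split_loss(20e9, 5e9)
print(insurer / 1e9, gov / 1e9)  # insurer pays $7.25B, government $12.75B
```

The point of the sketch is that each round of tightening – a higher deductible or a lower pro-rata share – shifts more of any given loss back onto the insurer.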

Sponsoring members from both parties have introduced the 2014 extension three times in Congress this year – a hopeful sign of bipartisan support. But the potential for strong opposition should not be underestimated.

Opponents of a TRIA renewal will be quick to label the legislation a “subsidy.” They will correctly point out that TRIA’s federal guarantee has been provided—and insurer money collected—for over ten years without incident. And opponents would be remiss not to mention the U.S. insurance industry surplus, which has grown to almost $600 billion as of this writing—compared to only $290 billion at the time of the September 11, 2001 attacks. From this, they will argue that the insurance industry is sufficiently capitalized to absorb the losses from a catastrophic terrorist incident without government assistance.

This argument against a further TRIA extension is likely to find receptive ears among an electorate (and many freshman lawmakers) galvanized by recent federal involvement in the financial and automotive sectors. To counter this narrative successfully, the case for renewal must be reframed around the public benefits of terrorism insurance, of which there are many.

Additionally, when the bill approached its first expiration in 2005, many property insurers inserted sunset clauses into their contracts, enabling them to alter or revoke terrorism cover in the event of a TRIA non-renewal.

The demand for financial protection against terrorism is as undeniable as the insurance industry’s reluctance to provide it.

The impact of a TRIA non-renewal would be felt the most by cities perceived to be appealing terrorist targets. The RMS® Probabilistic Terrorism Model classifies the most terrorist-prone cities as New York, Washington DC, Chicago, San Francisco, and Los Angeles.

Without TRIA, these cities can expect, at the very least, a shortage of terrorism insurance capacity and corresponding rate increases. At worst, construction and lending activity will be compromised, and the economic consequences (lost jobs, stalled projects, missed opportunities) will surely follow.

TRIA must be viewed in the context of the government’s broader role in the insurance industry. In addition to terrorism insurance, the federal government provides billions of dollars annually in subsidized coverage for lines of business including flood, crop, mortgage, pension, and health – sometimes as a direct primary insurer, other times as a reinsurer.

Occasionally, as in the case of TRIA’s recoupment provision, the federal government’s role is similar to that of a bank, whereby losses are indemnified and then recovered, with interest, through future policy surcharges.
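On the “bank” analogy: recovering a fronted payment with interest through equal annual surcharges is just loan amortization. The sketch below uses the standard level-payment annuity formula; the fronted amount, interest rate, and repayment horizon are all hypothetical illustration values, not TRIA’s actual recoupment terms.

```python
# Toy illustration of recoupment as loan amortization: the government
# fronts a payment, then recovers it with interest via a level annual
# policy surcharge. All inputs below are hypothetical, not TRIA's terms.

def level_surcharge(fronted, annual_rate, years):
    """Level annual payment that repays `fronted` with compound interest
    (the standard annuity/amortization formula)."""
    r = annual_rate
    return fronted * r / (1 - (1 + r) ** -years)

# Example: recouping a hypothetical $12.75B federal outlay over 10 years at 3%.
surcharge = level_surcharge(12.75e9, 0.03, 10)
print(round(surcharge / 1e9, 2))  # ~1.49 ($ billions per year)
```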

Is terrorism fundamentally different from other perils in regard to how the federal government should approach it?

What is the public benefit of terrorism coverage, and can it be quantified?

Can the demand for coverage ever be met by private means alone?

These are critical questions, and they must be addressed directly and publicly by stakeholders in order to justify TRIA’s renewal.