Catastrophe modeling: the third wave of disruptive technology

Catastrophe models, conceived in the 1970s and created at the end of the 1980s, have proved to be a “disruptive technology” in reshaping the catastrophe insurance and reinsurance sectors. The first wave of disruption saw the arrival of fresh capital to found eight new “technical” Bermudian catastrophe reinsurers. The “Class of 1993” included Centre Cat Ltd., Global Capital Re, IPC Re, LaSalle Re, Mid-Ocean Re, Partner Re, Renaissance Re and Tempest Re. Using catastrophe models, these companies were able to set up shop and price hurricane and earthquake contracts without decades of their own claims history. While only two of these companies survive as independent reinsurers, the legacy of the 1993 disruption is Bermuda’s sustained dominance in global reinsurance.

A second wave of disruption, starting in the mid-1990s, saw the introduction of catastrophe bonds: a slow trickle at first, but now a steady flow of new structures, as investors who knew nothing about catastrophic loss came to trust modeled risk estimates to establish bond interest rates and default probabilities. Catastrophe bonds have subsequently undergone their own “Cambrian explosion” into a diverse set of insurance-linked securities (ILS) structures, including those in which the funds go back to supplement reinsurers’ capital. Again, this disruption in accessing novel sources of pension and investment fund capital would have been impossible without catastrophe loss models.
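
The link between modeled risk and bond pricing can be made concrete with a toy calculation. The sketch below is a minimal illustration under assumed numbers (a 2 percent attachment probability, a 1 percent exhaustion probability and a spread multiple of three); it is not a description of any real transaction or of how issuers actually set coupons.

```python
# Hypothetical sketch of how modeled loss estimates can anchor a catastrophe
# bond price. The attachment/exhaustion probabilities and the spread multiple
# are illustrative assumptions, not figures from any real deal.

def expected_loss(prob_attach: float, prob_exhaust: float) -> float:
    """Crude expected loss to the bond layer: average of the probability of
    any loss and the probability of a total loss of principal."""
    return (prob_attach + prob_exhaust) / 2.0

prob_attach = 0.02    # modeled annual probability the layer is hit
prob_exhaust = 0.01   # modeled annual probability the layer is exhausted
el = expected_loss(prob_attach, prob_exhaust)

# Investors often quote the coupon spread as a multiple of modeled expected loss.
multiple = 3.0
print(f"Modeled expected loss: {el:.2%}; indicative spread: {multiple * el:.2%}")
```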

The Third Wave of Disruptive Technology

Since 2000, catastrophe modeling has delivered a third wave of disruptive technology: flood risk cost data at building resolution. Flood hazard is different from other common perils such as high winds and damaging ground shaking. In most circumstances the extent and depth of a flood can be determined precisely, unlike chaotic gusts of wind or the wide range of frequencies and durations of strong earthquake shaking. The height of the flood water is typically consistent among nearby locations; it is the elevation of the buildings that varies.
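
A minimal sketch of that point, with invented elevations: the water surface is roughly the same for a group of nearby properties, so the flood depth in each building is simply the difference between that level and the building’s own threshold elevation.

```python
# Illustrative only: nearby properties share one modeled flood water level,
# so depth at each building depends on its own threshold elevation.
# All elevations below are invented for the example.

water_surface_m = 12.4  # modeled flood water level, meters above datum

thresholds_m = {
    "Cottage by the river": 11.6,
    "Neighbor on raised ground": 12.7,
    "House on the hill": 18.0,
}

for name, threshold in thresholds_m.items():
    depth = max(0.0, water_surface_m - threshold)
    print(f"{name}: flood depth {depth:.1f} m")
```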

The revolution in high-resolution flood risk modeling is a consequence of the availability of big data sets on daily rainfall and river flows, allied with the vast outputs of climate models and digital terrain data. Developing these high-resolution flood loss models has required massive computing power: to model rainfall runoff into river flow, and river flow into flooding, generating synthetic catalogs of flood events, resolved at building-specific granularity, across thousands of simulation years. Even where there is intrinsic uncertainty, as with inferred threshold elevations, it is now possible, in principle at least, to calculate flood risk costs.
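
As a rough illustration of what such a simulation produces (not the methodology of any particular vendor), the sketch below estimates a single building’s average annual loss from a synthetic catalog, using an invented flood frequency and a toy depth-damage curve.

```python
# Illustrative sketch: average annual loss (AAL) for one building from a
# synthetic flood catalog. Frequencies, depths, the damage curve and the
# building value are all assumptions made for the example.

import random

random.seed(1)
SIM_YEARS = 10_000
BUILDING_VALUE = 300_000

def depth_damage_ratio(depth_m: float) -> float:
    """Toy depth-damage curve: damage rises with depth, capped at 60%."""
    return min(0.6, 0.25 * depth_m) if depth_m > 0 else 0.0

total_loss = 0.0
for _ in range(SIM_YEARS):
    if random.random() < 0.05:                 # ~1-in-20-year flood at this site
        depth = random.uniform(0.0, 2.0)       # simulated depth at the building (m)
        total_loss += depth_damage_ratio(depth) * BUILDING_VALUE

print(f"Estimated average annual loss: {total_loss / SIM_YEARS:,.0f}")
```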

This ability to measure flood risk cost has opened a Pandora’s box, revealing information that a wide range of agencies, officials and the general public do not really know how to handle.

Simple Schemes for Complex Problems

In many developed countries, flood insurance systems were developed through the 1960s and 1970s, at a time when it was not possible to calculate individual property and portfolio-wide flood risk costs. Hence these schemes adopted a range of simplifying strategies: some applied implicit, but unquantified, cross-subsidies from those outside the flood zone, while others simply flat-rated the risk. Politics often underlay the flood pricing model. In the U.S., the National Flood Insurance Program (NFIP) gave large and perpetual discounts to pre-existing properties in flood zones. The total annual revenues collected by the NFIP proved just enough for the scheme to break even in ordinary years, but failed to accumulate the reserves needed to pay for tail losses. As a result of running an unsustainable business model, the NFIP has had to borrow more than twenty billion dollars from the federal government. An attempt to move the system to technical pricing was backed by Congress in 2012 but proved so unpopular it had to be rescinded.

In the U.K., a 1960s “Gentleman’s Agreement” meant that insurers sustained homeowners’ flood insurance as long as the government invested in building more flood defenses. For the insurers, the differential between properties with high and low flood risk was suppressed, with those on the hills subsidizing those in the floodplains.

As with the early days of catastrophe modeling, building-resolution flood risk quantification has taken a decade to become refined and established. The disruption that follows from this capability is still working its way through the institutional architecture of pre-existing and new flood insurance schemes.

Using the new risk pricing models, we can calculate the flood risk costs that would be saved by establishing a new flood defense, and measure the loss should the flood wall be overtopped or fail. If we know by how much extreme rainfall intensity or sea level rise is expected to change in a warmer world, we can also calculate the impact on property-level flood risk costs.
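
To make those two calculations concrete, here is a hedged sketch using an invented property-level loss exceedance curve: it integrates the curve to get an average annual loss, then repeats the integration with a 1-in-50-year defense in place and with flood frequencies scaled up by an assumed 30 percent.

```python
# Illustrative only: the exceedance curve, the defense standard and the 30%
# frequency uplift are assumptions, not outputs of any real model.

# (return period in years, property loss at that return period)
curve = [(5, 0), (10, 5_000), (50, 30_000), (100, 60_000), (500, 120_000)]

def aal(points):
    """Trapezoidal integration of loss against annual exceedance probability."""
    pts = sorted(((1.0 / rp, loss) for rp, loss in points), reverse=True)
    return sum((p1 - p2) * (l1 + l2) / 2.0
               for (p1, l1), (p2, l2) in zip(pts, pts[1:]))

baseline = aal(curve)

# A defense built to the 1-in-50-year standard removes losses up to that level.
defended = aal([(rp, 0 if rp <= 50 else loss) for rp, loss in curve])

# Crude climate adjustment: every flood level becomes 30% more frequent.
warmer = aal([(rp / 1.3, loss) for rp, loss in curve])

print(f"Baseline AAL:              {baseline:10,.0f}")
print(f"With 1-in-50 defense:      {defended:10,.0f}  (saving {baseline - defended:,.0f})")
print(f"With 30% higher frequency: {warmer:10,.0f}")
```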

Challenging Convention

The new building-resolution flood risk cost information has challenged how people and governments think about flood risk and the role of insurance. The preconception is that the cost of household insurance should not vary significantly, but should be as consistent as fuel prices or mortgage rates. The reality is that flood risk costs can vary a hundredfold.

The disruption brought by building-resolution flood risk cost quantification has already transformed the U.K. residential flood insurance sector. Armed with the new ability to calculate flood risk costs, insurers wanted to apply full flood risk rating. For those at highest risk, this could reasonably mean paying thousands of pounds each year. Knowing that such prices would be politically unacceptable, insurers preferred to drop coverage for those at highest risk. In turn, these homeowners complained to the government in such numbers as to require a policy change.

Rather than let the calculated risk costs prevail, the Flood Re system now caps the costs and gets everyone at lower risk to subsidize those at high risk (not so different from the previous system). However, for those who are currently subsidized, insurance prices are intended to trend toward the technical risk costs over a 25-year period, conveniently beyond the career of a politician in government. For properties constructed before January 1, 2009, the Flood Re formula provides no incentive to drive flood risk mitigation. Therefore, full risk rating in 2040 may be no more palatable than it was in 2015.
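
A simple way to picture the intended transition is a linear “glide path” from today’s capped premium to the technical risk cost. The figures and the linear shape in the sketch below are assumptions for illustration; they are not the actual Flood Re pricing formula.

```python
# Illustrative glide path: a capped premium trending to the technical risk
# cost over 25 years. All figures are invented for the example.

capped_premium = 600        # what the household pays today (GBP per year)
technical_premium = 3_000   # modeled flood risk cost (GBP per year)
years = 25

for year in range(0, years + 1, 5):
    premium = capped_premium + (technical_premium - capped_premium) * year / years
    print(f"Year {year:2d}: {premium:5,.0f} GBP")
```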

Will flat-rated schemes, like the French Cat Nat scheme, survive now that it is clear how risk models can be used to calculate the wide differentials in the underlying cost of the risk? Are such schemes established in the name of “solidarity,” or of ignorance?

FEMA’s iconic demarcation of the “100-year flood” (one percent annual probability) line, which determines whether a U.S. homeowner is obligated to purchase an NFIP flood policy when taking out a mortgage, used to pass without challenge. In many places the line is now seen as unreliable, reflecting perhaps the highest level reached in historical floods rather than a full analysis of potential storm surge or river flow profiles and their probabilities.
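
The “100-year” label also invites misreading. Here is a short worked example of what a one percent annual probability implies over the life of a typical 30-year mortgage, assuming (as a simplification) that each year is independent and the probability stays constant.

```python
# Probability of at least one "100-year" flood over a 30-year mortgage,
# assuming a constant 1% annual probability and independence between years.

annual_prob = 0.01
mortgage_years = 30

p_at_least_one = 1 - (1 - annual_prob) ** mortgage_years
print(f"Chance of at least one such flood in {mortgage_years} years: {p_at_least_one:.0%}")
# roughly 26%
```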

Going Granular on Flood Risk

RMS high-definition (HD) flood models are already available for Europe and have recently expanded to include Ireland and Italy, providing coverage for 15 European countries. The RMS U.S. Inland Flood HD Model will follow later this year. These models deliver granular flood hazard and risk information, and they also capture two distinctive components of the peril.

First, the largest floods can persist for weeks, often shifting their geography. A flood wave can take days to travel down a large river, and once the ground is saturated it takes much less rain to renew the flooding. The model therefore needs to track flooding day by day, to see how alternative definitions of duration will affect the “event” cost.
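
A minimal sketch of why the event definition matters, using an invented run of daily losses and a simplified analogue of an “hours clause”: every loss within a fixed window of the day that opened an event is assigned to that event.

```python
# Illustrative only: daily losses and window lengths are invented. The point
# is that the same flood episode yields different "event" costs depending on
# the duration definition applied.

def event_losses(daily_losses, window_days):
    """Assign each day's loss to an event; a loss falling outside the window
    opened by the current event starts a new one."""
    events, start = [], None
    for day, loss in enumerate(daily_losses):
        if loss <= 0:
            continue
        if start is None or day - start >= window_days:
            events.append(0.0)    # open a new event
            start = day
        events[-1] += loss
    return events

# A long flood episode: two flood waves separated by a dry week (losses in $m).
daily = [2, 4, 6, 5, 3, 1, 0, 0, 0, 0, 0, 0, 0, 3, 5, 7, 6, 4, 2, 1]

for window in (7, 14, 21):
    print(f"{window}-day window: event losses = {event_losses(daily, window)}")
```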

Second, the deterministic nature of floods means that they can be held at bay with a flood defense. (There is no prospect of building a wall to keep out hurricane winds or earthquake shaking.) These defenses need to be included in how we model the extent of flooding. Yet a defense can be overtopped or fail, so we also need to explore what impact that would have on the extent of flooding and the losses.
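
A hedged sketch of that logic, with invented probabilities and losses: given a flood reaching the defense, the defense either holds, is overtopped, or fails outright, and the expected loss weights each outcome accordingly.

```python
# Illustrative only: the probabilities and losses below are assumptions, not
# engineering estimates for any real defense.

scenarios = {
    "defense holds":      {"prob": 0.90, "loss": 0},
    "defense overtopped": {"prob": 0.08, "loss": 40_000},    # partial flooding
    "defense fails":      {"prob": 0.02, "loss": 120_000},   # sudden, deeper flooding
}

expected_loss = sum(s["prob"] * s["loss"] for s in scenarios.values())
print(f"Expected loss, given a flood reaching the defense: {expected_loss:,.0f}")
```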

Part of the disruptive challenge of the new flood models is that they reveal a precipitous landscape of flood risk costs, full of peaks and steep gradients. This flood risk landscape is far from intuitive: the risk cost can vary by a factor of ten even among neighboring properties with only a few feet of difference in elevation. The new generation of flood models will also be needed to help educate homeowners as to why flood risk costs can be so granular and, in certain situations, so high.
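
That gradient can be illustrated with an invented water-level exceedance curve and two neighboring thresholds about one meter (a few feet) apart; the numbers below are assumptions chosen only to show the shape of the effect, not outputs of any real model.

```python
# Illustrative only: flood levels, probabilities, the damage curve and the
# property value are invented for the example.

# (annual exceedance probability, flood water level in meters above datum)
water_levels = [(0.10, 10.0), (0.02, 10.6), (0.01, 11.0), (0.002, 11.8)]

def aal(threshold_m, value=300_000):
    """Approximate AAL by integrating damage over the water-level curve."""
    pts = water_levels + [(0.0, water_levels[-1][1])]
    total = 0.0
    for (p1, lvl1), (p2, lvl2) in zip(pts, pts[1:]):
        depth = max(0.0, (lvl1 + lvl2) / 2.0 - threshold_m)
        total += (p1 - p2) * min(0.6, 0.25 * depth) * value
    return total

low = aal(threshold_m=10.2)    # property on the floodplain
high = aal(threshold_m=11.2)   # neighbor roughly a meter higher
print(f"Floodplain AAL: {low:,.0f}; elevated neighbor: {high:,.0f}")
```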

But that does not need to be the end of the conversation. While the owner of Ferryman’s Cottage on River Road may be unable to change the hazard, by understanding what mitigating actions he or she could take, such as waterproofing the building or elevating the furniture when a flood is forecast, there may be many ways to lower the risk cost. Homeowners need to know the truth about their flood risk, or the steps toward building resilience and mitigation will simply stall.
