Tag Archives: geocoding

Flood warning: Working to provide better coverage

Flood is one of the most devastating catastrophic perils: a single event can cause tens of billions of dollars in losses. It is also one of the least insured perils, even though flooding affects people in every part of the United States. Advanced risk models now assess flood risk at a granular, local level. This technological development presents insurers with the opportunity to offer affordable, risk-based coverage within a private insurance market. Milliman colleagues Nancy Watkins, Matt Chamberlain, Andrei Stoica, and Garrett Bradford offer perspective in this video.

To learn more about Milliman’s flood expertise, click here.

Reading list: Florida’s private flood insurance market

Advances in catastrophe models and new state insurance regulations have opened the door for an affordable, risk-based private insurance market in Florida. This reading list highlights articles on various issues and implications related to that market. The articles feature Milliman consultants Nancy Watkins and Matt Chamberlain, whose knowledge and experience are helping insurers understand and price flood risk more precisely.

Forbes: “The private flood insurance market is stirring after more than 50 years of dormancy”
The reemergence of private flood insurance has piqued the interest of carriers seeking to enter the market. Some catastrophe (CAT) modeling companies are creating flood models to help insurers price policies. Here’s an excerpt:

Nancy Watkins, a principal consulting actuary for Milliman, likened the current level of interest from insurers to enter the private flood insurance market to popcorn.

“We are at that stage where you can hear the space between pops. You can hear one kernel at a time,” she said. “What I think is going to happen is, in one to two years, there’s going to be a lot more going on.”

Bradenton Herald: “Important for homeowners to compare flood insurance options”
Florida homeowners must consider the issues related to the National Flood Insurance Program (NFIP) and private flood policies. Private insurers can use predictive modeling technology to determine a home’s distinct flood risk.

Tampa Bay Times: “Remember the flood insurance scare of 2013? It’s creeping back into Tampa Bay and Florida”
Real estate and insurance experts comment on the possible effects that high flood insurance rates may have on homeowners. Insurers express interest in the granular modeling of flood-prone territories.

Tampa Bay Business Journal: “Why some Tampa Bay property insurers are offering flood coverage and others are not” (subscription required)
Insurers need to weigh the risks and rewards associated with underwriting flood insurance. A few carriers have already decided to participate in Florida’s private flood insurance market.

Geographic information systems can help insurers price flood risk

Insurers have been cautious about reentering the homeowners flood insurance market because of the high risks associated with flooding. In his Best’s Review article “High water mark,” Milliman’s Matt Chamberlain discusses the reasons behind the industry’s trepidation. He also provides perspective on how geographic information systems (GIS) can help insurers develop granular rating plans. Here is an excerpt:

There are several reasons why flood has been considered an uninsurable risk. First, flood is a localized peril; a distance of a few hundred feet, or less, can make a large difference in risk. This produces an information asymmetry, because the insured has a clear understanding of the local topography, while the insurer does not. The insured knows how far the house is from water, and whether it is on the top of a hill or if it is in a depression.

Insurers, on the other hand, typically use large rating territories for homeowners insurance, in some cases larger than a county. If these territories were to be used for flood insurance, it would create the potential for adverse selection. Insureds that were at highest risk of a flood would be most likely to want the coverage, and if insurance companies do not have a means of distinguishing higher-risk from lower-risk policies, anti-selection would result….

Geographic Information Systems, when coupled with the new flood catastrophe models to provide a very granular rating plan, may help insurance companies overcome these risks. Territories can be based on “hydrological units,” or watersheds, so that areas that water is not likely to flow between are not grouped together. Within a territory, appropriate rating factors are distance-to-coast (relating to storm surge risk), distance-to-river/stream (relating to river flood risk), and elevation (because all else being equal, there is lower flood risk at higher elevations).

Using all of these rating factors produces a rating plan that is able to distinguish different levels of risk even among points that are near each other. This produces true risk-based pricing that is likely to be sustainable in the long run. The top map at right shows this approach and compares it to the traditional method of rating flood insurance used by the NFIP, shown at bottom.
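To make the excerpt concrete, here is a minimal sketch of how the three rating inputs might be computed with open-source GIS tools. The geometries and coordinates are hypothetical placeholders; a real rating plan would use projected GIS data and calibrate the factors to catastrophe model output:

```python
# Sketch of location-level flood rating inputs using shapely.
# The coastline and river geometries are hypothetical placeholders,
# assumed to be in a projected coordinate system measured in meters.
from shapely.geometry import LineString, Point

coastline = LineString([(0, 0), (10_000, 0)])
river = LineString([(2_000, 5_000), (8_000, 6_000)])

def rating_inputs(x, y, elevation_m):
    """Return the three rating inputs for a single location."""
    home = Point(x, y)
    return {
        "distance_to_coast_m": home.distance(coastline),  # storm surge risk
        "distance_to_river_m": home.distance(river),      # river flood risk
        "elevation_m": elevation_m,                       # higher = lower risk
    }

print(rating_inputs(3_000, 4_500, elevation_m=12.0))
```

In practice, each input would be banded into rating factors within a hydrological-unit territory, as the excerpt describes.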

The video below presents an example of how GIS can improve pricing strategy.

Hurricane Sandy reading list

Reuters is just out with a preliminary analysis from EQECAT indicating there is ample capacity in the insurance industry to cover losses associated with Hurricane Sandy. The analysis puts losses at $5 billion to $10 billion, “a preliminary forecast that could change after the storm makes landfall.”

In the same article, Morgan Stanley provides this observation:

“With $500 billion-plus of capital … we expect the (property and casualty) industry is once again well prepared to pay all Frankenstorm insured losses,” Morgan Stanley analyst Gregory Locraft said in a report on Monday, using the nickname for the Sandy-nor’easter combo.

Rather than predicting what’s about to happen, we’ll point you to some reading on events in the rearview mirror. You can learn a lot about insurance just by looking at how the industry has evolved following natural catastrophes.

  • David Sanders frames the challenge of insuring natural catastrophes in his 2006 paper, “The Price of Civilization.” Sanders builds on something Voltaire said after witnessing the wreckage of the 1755 Lisbon earthquake: “Is this the price mankind must pay for civilization?” Sanders tries to answer this question by looking at how we pay the price that natural catastrophes exact and examining who bears the brunt of that expense. Here’s a helpful excerpt, followed by a quick illustrative calculation:

To assess how dangerous an insurance risk is, it is often convenient to apply the Pareto parameter. This rule—commonly known as the 80/20 rule—states that 20% of the claims in a particular portfolio are responsible for more than 80% of the total portfolio claim amount. With the Pareto parameter as a baseline, we can assess a portfolio’s vulnerability. If a single event can spell financial ruin, there may be a problem.

Hurricane data in the Caribbean indicates that insurers can make profit for a number of years, and then find themselves hit by a “one-in-1,000-year” hurricane, which swallows up 95% of the sum insured in one go. For example, when Hugo hit the U.S. Virgin Islands, the total cost of the loss for residential property insurers was equal to 1,000 years’ worth of premiums.

The regulators of the insurance industry generally target a one-in-100-year to one-in-200-year insolvency level. They do not cater to the one-in-1,000-year event. Typical solvency levels for major developed insurance markets that cover catastrophes are on the order of three to six times the cost of a once-in-a-century event. However, Katrina-type losses are not one-in-100-year events. Recent history indicates that they are more like one-in-five-year events, which means every five years the insurance industry can expect a $50 billion loss [most recently Katrina].

There is a finite amount of reinsurance capacity, with billions available worldwide. A company might find a reinsurer willing to insure a $100 million loss, but can it find one willing to cover single-year losses that exceed that threshold? It is difficult to adequately spread around risk and fill a reinsurance dance card when aggregate losses reach ten digits, which is why “securitizing” insurance risks in the capital markets has become an attractive option. Cat bonds were born in the early 1990s following Hurricane Andrew, a seminal event because it revealed the limits of reinsurance. Since then, companies, insurers, and reinsurers have used cat bonds to provide another layer of insurance, often protecting against an insurer’s unlikely third or fourth hundred-million-dollar loss, the one that finally exhausts insurance or reinsurance capacity.
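To make the 80/20 rule concrete, here is a minimal sketch that checks the concentration Sanders describes against a small portfolio of claim amounts. The figures are invented purely for illustration:

```python
# Illustrative check of the Pareto (80/20) rule from the excerpt:
# what share of total losses comes from the largest 20% of claims?
# The claim amounts below are invented.
claims = [1_200, 800, 950, 45_000, 2_300, 600, 110_000, 1_500, 3_200, 700]

claims_sorted = sorted(claims, reverse=True)
top_count = max(1, len(claims_sorted) // 5)  # largest 20% of claims
top_share = sum(claims_sorted[:top_count]) / sum(claims_sorted)

print(f"Top 20% of claims account for {top_share:.0%} of total losses")
```

A share far above 80%, as here, flags a portfolio whose results a single event could dominate, which is exactly the vulnerability the hurricane example illustrates.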

  • Companies looking to triage the flood of claims that will come their way in the next two to five days might look to text mining as a way of making sense of Sandy’s claims. Phil Borba’s article on text mining shows how insurers can analyze claim notes to better screen and triage their claims (a toy illustration appears near the end of this post).
  • Matt Chamberlain’s recent article examines how something called geocoding can lead to more precise pricing for homeowners policies in hurricane-prone parts of Florida.

Although it may seem like defining the “coastline” is clear-cut, it is actually quite ambiguous when considering a property’s exposure to a hurricane. Does the coastline follow bays, such as Tampa Bay? Does it follow barrier islands? Does it follow rivers and, if so, how far? After a company decides that it should organize its territories based on distance to the coast, that company’s first instinct may be to use an existing coastline. However, such a coastline may not be suitable for the purpose. Off-the-shelf coastlines may follow many small-scale features that do not, in fact, affect hurricane risk.

Maybe we’ll have to publish a follow-up article analyzing the potential for geocoding in Lower Manhattan.
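Returning to the text-mining item above, here is a deliberately simple sketch of keyword-based triage of claim notes. It is a stand-in for the much richer techniques Borba’s article covers; the notes and keyword weights are invented:

```python
# Toy keyword-scoring triage of claim notes. Real text mining would
# be far richer; the notes and weights here are invented.
SEVERITY_KEYWORDS = {"flood": 3, "collapse": 3, "water": 2, "roof": 2, "mold": 1}

def triage_score(note):
    """Score a claim note by summing the weights of severity keywords it contains."""
    words = note.lower().split()
    return sum(weight for kw, weight in SEVERITY_KEYWORDS.items() if kw in words)

notes = [
    "Basement flood and standing water throughout first floor",
    "Minor shingle damage to roof",
    "Tree down in yard, no structural damage",
]
# Handle the highest-scoring claims first.
for note in sorted(notes, key=triage_score, reverse=True):
    print(triage_score(note), note)
```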

Wherever you are, stay safe and dry. And if you are lucky enough to maintain electricity and Internet, feel free to post related reading below.

Geocoding coastlines and rate-making

Determining a “coastline” is not as straightforward as one might think. The following excerpt from an Insight article by Matt Chamberlain discusses the challenge of defining a coastline:

Although it may seem like defining the “coastline” is clear-cut, it is actually quite ambiguous when considering a property’s exposure to a hurricane. Does the coastline follow bays, such as Tampa Bay? Does it follow barrier islands? Does it follow rivers and, if so, how far?

After a company decides that it should organize its territories based on distance to the coast, that company’s first instinct may be to use an existing coastline. However, such a coastline may not be suitable for the purpose. Off-the-shelf coastlines, such as the one in the map in Figure 4, may follow many small-scale features that do not, in fact, affect hurricane risk. The coastline in Figure 4 even follows inland features, such as Lake Okeechobee. A considerable amount of preprocessing work is required to create a coastline that matches the expectation of risk. It is even possible that different coastlines could be used for different purposes.

Different hurricane model vendors may have designed their models using different coastlines. If a company wants to calibrate its rating structure to a particular hurricane model, it should use a coastline that matches its preferred model vendor’s interpretation. If the company wants to understand the relationship between risk and storm surge, it makes sense to use a coastline that captures the more finely detailed features that are relevant to storm surge risk. If the company is concerned about wind risk, it makes sense to use a coarser coastline that more closely corresponds to the hurricane peril.
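As one hypothetical illustration of the preprocessing the excerpt mentions, a coarser wind-oriented coastline might be derived from a detailed one by smoothing away small-scale features. The geometry and the simplification tolerance below are assumptions a modeler would tune, not values from the article:

```python
# Hypothetical coastline preprocessing with shapely: smooth away
# small-scale features (tight bays, inlets) that matter for storm
# surge but not for wind. The coastline here is a made-up placeholder.
from shapely.geometry import LineString

detailed_coastline = LineString(
    [(0, 0), (1_000, 200), (1_200, 900), (1_400, 250), (3_000, 300), (5_000, 0)]
)

# Douglas-Peucker simplification: drop vertices that deviate less
# than the tolerance (in map units, e.g., meters) from the line.
wind_coastline = detailed_coastline.simplify(tolerance=500, preserve_topology=False)

print(len(detailed_coastline.coords), "vertices before")
print(len(wind_coastline.coords), "vertices after")
```

A surge-oriented coastline would instead keep those fine features; the point is simply that different perils can justify different derived coastlines.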

Once a coastline is defined, an insurance company can begin geocoding policy locations and assigning them to rating territories. Chamberlain outlines the process of geocoding in this excerpt:

In order to rate a policy, it must be “geocoded.” This requires the location’s address to be entered into a “geocoder,” which returns the location’s latitude and longitude. A geographic information system (GIS) program can use that latitude and longitude to determine which territory it is in. This provides the ability to determine the risk at a location much more precisely. Instead of rating the location based on the average risk in a territory, which in turn is based on counties or ZIP codes, this method allows the company to estimate the risk for that specific location. In practice, a company may still choose to create territories that group together similar risks, but the territories can be made as small as necessary, ensuring that each one is homogeneous.
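Here is a minimal sketch of that workflow with common open-source tools. The geocoding service, the sample address, and the single rectangular “territory” are assumptions for illustration; a production system would use a licensed geocoder and real territory polygons:

```python
# Sketch of the rating workflow in the excerpt:
# address -> latitude/longitude -> point-in-territory lookup.
# The territory polygon is a made-up placeholder.
from geopy.geocoders import Nominatim
from shapely.geometry import Point, Polygon

territories = {
    "coastal_high": Polygon([(-82.9, 27.5), (-82.3, 27.5),
                             (-82.3, 28.1), (-82.9, 28.1)]),
}

def territory_for(address):
    """Geocode an address and return the name of the territory containing it."""
    geolocator = Nominatim(user_agent="rating-sketch")  # free OSM geocoder
    location = geolocator.geocode(address)
    if location is None:
        return None
    point = Point(location.longitude, location.latitude)
    for name, shape in territories.items():
        if shape.contains(point):
            return name
    return None

print(territory_for("400 N Ashley Dr, Tampa, FL"))
```

In practice the territory polygons would be the small, homogeneous units the excerpt describes, such as watershed-based territories with distance and elevation factors layered on top.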

To read Matt Chamberlain’s article on geocoding hurricane risk in Florida, click here.