Enhanced processes of mining unstructured data

April 22nd, 2014

Innovative analytical tools and high-performance computing are giving insurers the means to analyze huge volumes of unstructured data. In this Risk.net article (subscription required), Milliman’s Neil Cantle discusses how these advances offer carriers a more sophisticated approach to analyzing inherent risks and developing best business practices.

Here is an excerpt:

Many of the new generation of tools for unstructured data were initially developed to enable search engines such as Yahoo and Google to tackle the vast resources of the web. Key among these is the Hadoop framework for the management and processing of large-scale disparate datasets on clusters of commodity hardware. Hadoop has a number of modules for such things as distributing data across groups of processors, filtering, sorting and summarizing information, and automatically handling the inevitable hardware failures that arise in large computing grids. All of the technologies mentioned are open source, which means they are free and readily available, and they are also supported by many proprietary commercial extensions and equivalents.

The breakthrough with new data sources and tools is the ability to query things for which the data has not been organized in advance. This can reveal new patterns, trends and correlations that can be helpful in managing risk and spotting opportunities, says Neil Cantle, principal and consulting actuary at Milliman, based in London.

… “[The new data capabilities] enable insurers to look more broadly and deeply into the world in which the policyholder lives without necessarily being specific about the person, and allow them to start making inferences about an individual and their behavior,” says Cantle.
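
For readers wondering what that “filtering, sorting and summarizing” looks like in practice, below is a minimal, framework-free sketch of the map/shuffle/reduce pattern that Hadoop automates across a cluster. The claim-note records and the word-count task are hypothetical, purely to make the pattern concrete.

```python
# Minimal, framework-free sketch of the map/shuffle/reduce pattern that
# Hadoop's MapReduce module automates across a cluster. Here everything
# runs in one process; the records and field names are hypothetical.
from collections import defaultdict

claim_notes = [
    ("claim-001", "water damage in basement after storm"),
    ("claim-002", "vehicle collision, minor whiplash reported"),
    ("claim-003", "storm damage to roof and water in basement"),
]

# Map: emit (key, value) pairs from each raw record.
def map_phase(records):
    for claim_id, text in records:
        for word in text.split():
            yield word, 1

# Shuffle: group intermediate values by key (Hadoop sorts and routes
# these between nodes; here a dictionary stands in for that step).
def shuffle_phase(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

# Reduce: summarize each key's values, e.g. a simple term count that a
# later analysis could use to spot recurring loss themes.
def reduce_phase(grouped):
    return {key: sum(values) for key, values in grouped.items()}

term_counts = reduce_phase(shuffle_phase(map_phase(claim_notes)))
print(sorted(term_counts.items(), key=lambda kv: -kv[1])[:5])
```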

The article also focuses on the emergence of data scientists who are entrusted with mining new data sources. Milliman’s Peggy Brinkmann expounds on data science and the techniques data scientists use to extract value from large amounts of information in her paper “Why big data is a big deal.”

Leveraging quality control sampling for your business

April 9th, 2014

The Federal Home Loan Mortgage Corporation (FHLMC, known as Freddie Mac) and the Federal National Mortgage Association (FNMA, known as Fannie Mae) are government-sponsored enterprises (GSEs) that, beginning in September 2012, issued guidance concerning changes to their respective representation and warranty frameworks. The changes, effective for loans acquired by the GSEs on or after January 1, 2013, require lenders to report defects on various samples of loans delivered to the GSEs. Lenders should leverage these reports to monitor and mitigate their own risk of future repurchases.

In their article “Leveraging quality control sampling for your business,” Milliman’s Eric Wunder and Jonathan Glowacki offer perspective on the three types of samples required to monitor defect rates: random sampling, discretionary sampling, and targeted sampling.
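
As a rough illustration of the random-sampling piece only, the sketch below draws a random sample from a hypothetical pool of delivered loans and estimates the defect rate with a simple confidence interval. The loan records, sample size, and defect check are made-up placeholders, not anything prescribed by the GSEs or by the article.

```python
# Illustrative sketch of the random-sampling piece only: draw a random
# sample from a delivery cycle's loans and estimate the defect rate
# with a normal-approximation confidence interval. Loan records, sample
# size, and the is_defective() check are all hypothetical placeholders.
import math
import random

random.seed(42)

delivered_loans = [{"loan_id": i} for i in range(10_000)]  # one delivery cycle

def is_defective(loan):
    """Placeholder for the lender's QC re-underwriting review."""
    return random.random() < 0.03  # assume a 3% underlying defect rate

sample_size = 400
sample = random.sample(delivered_loans, sample_size)

defects = sum(is_defective(loan) for loan in sample)
p_hat = defects / sample_size
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)

print(f"observed defect rate: {p_hat:.1%} "
      f"(95% CI roughly {p_hat - half_width:.1%} to {p_hat + half_width:.1%})")
```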

Risk management is about the details

April 4th, 2014

Risk management helps companies understand emerging uncertainties that may impact their bottom lines. In this Raconteur article, Milliman’s Neil Cantle explains how companies can use information arising from risk management processes to adapt amid changing business and industry dynamics. Here’s an excerpt:

A risk management function that can work with the business to help develop “insights” into emerging uncertainties plays a vital role in surfacing things that should be discussed. Providing a structured way of talking about these uncertainties and helping the business to reach consensus about the appropriate adjustments to the path enables a business to be considerably more resilient and adaptive.

It is clear that “oversight” is an important part of business infrastructure, to ensure that more stable processes with clear outcomes are carried out as intended. But this need to also facilitate “insight” offers a perspective into a new dimension of risk management which has perhaps been less common as companies focus more on regulatory compliance than the strategic benefits that a risk function can bring.

Tort overhaul: Patient compensation system legislation raises more questions than answers

April 3rd, 2014

In recent legislative proposals in Florida and Georgia, lawmakers have sought to establish a patient compensation system (PCS) as an alternative to litigation for compensating patients with injuries that could have been avoided under alternative healthcare (referred to as “medical injuries” within the legislation).

Proponents say offering a PCS as an alternative to litigation could lead to faster outcomes with claims. Advocates claim that faster claim resolutions and less attorney involvement would ultimately reduce overall costs, while providing access to compensation for more patients. They also argue that this system would benefit claimants with minor injuries, who are frequently excluded under the current system, because their claims generally do not result in the kind of large monetary awards that make taking a medical professional liability (MPL) case cost-effective for plaintiff attorneys.

Can PCSs really deliver the cost savings, fairness, greater access, and efficiencies that their proponents claim? This article by Christine Fleming, Eric Wunder, and Susan Forray offers some perspective.

This article was originally published in Inside Medical Liability, First Quarter 2014.

ERM survey: 2014 Risk Minds Conference

March 28th, 2014

Milliman’s survey from the Third Annual Risk Minds Conference offers some perspective from risk management professionals in the insurance industry. Here are the final results.

[Survey results: six charts, RiskMinds_Final-01 through RiskMinds_Final-06]

Pilot Projects and risk-focused examination experiences offer lessons for the ORSA

March 26th, 2014

The National Association of Insurance Commissioners (NAIC) has conducted two Own Risk and Solvency Assessment (ORSA) Feedback Pilot Projects. The aim was to provide high-level advice that insurers can utilize in their approach to the ORSA.

Milliman consultants Aaron Koch, François Dauphin, and William Hines discuss some of the NAIC’s findings and provide items that insurers should consider when completing their ORSA process in the paper “One year to go: An ORSA checkup.” Here is an excerpt:

ORSA in practice: The feedback Pilot Projects
The two Pilot Projects provided a laboratory for fine-tuning the industry’s approach to the ORSA. Regulators’ public feedback identified the following shortcomings (among others) related to the submitted “sample” Summary Reports:

• A tendency for insurers to simply attest to the existence of risk limits instead of describing them
• A lack of explanation of the methodologies underlying insurers’ internal capital models
• A need for some insurers, especially life insurers, to provide additional stress testing on liquidity rather than a single focus on capital
• A need for some insurers to more clearly identify internal “risk owners” and key risks

It is worth noting that the NAIC ORSA Working Group explicitly refrained from leveraging the above observations into further prescriptive requirements in the ORSA Guidance Manual. Instead, they characterized them as items that insurers “may choose to consider” when completing Summary Reports.

This approach of “comment but don’t codify” preserves the ORSA’s flexibility and should be healthy for the long-term prospects of the ORSA. Nevertheless, it may cause some short-term frustration for insurers trying to grasp what the ORSA might mean for them. So how can they best meet regulators’ expectations, particularly when there is still some lack of definition regarding what the full extent of those expectations might be?

The authors also list several risk-focused examination issues insurers should consider when completing the ORSA:

• Assess both the frequency and severity of risks. Some insurers present risks along a single continuum (low, medium, high) or simply provide a listing of “important” risks. Assessing all risks along two dimensions allows for added insight into solvency evaluation (for example, low-frequency/high-severity risks are likely to be a bigger threat to solvency than high-frequency/low-severity risks). It also helps identify optimal risk mitigation strategies for a given risk.
• Consider the entire horizon. Certain insurance liabilities have long durations, which increases exposure to financial risk. This is especially true in the life insurance industry. What seems like a reasonable risk strategy in the short term may lead to suboptimal outcomes in the long term, and the opposite may also be true.
• Quantify risk for comparability. At a high level, this can be as simple as estimating the potential impact of a risk as a percentage of surplus or reserves (a small sketch follows this list). Admittedly, not every risk is easily expressible in such terms. Additionally, there is sometimes a temptation to overstate precision and understate uncertainty once numbers are assigned to qualitative risks. Nevertheless, placing risks in numerical terms gives outside observers a picture of potential materiality. Quantification also helps management prioritize mitigation efforts across types of risk that otherwise might be difficult to compare (for example, operational risk and reserve risk).
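
As a rough illustration of the frequency/severity and quantification points, the sketch below scores a handful of hypothetical risks on two dimensions and expresses each estimated impact as a percentage of surplus. The risk names, scores, and surplus figure are invented for illustration and are not drawn from the Milliman paper.

```python
# Minimal sketch of scoring risks on two dimensions and expressing the
# estimated impact as a percentage of surplus. The risk names, scores,
# and surplus figure are hypothetical, purely for illustration.
surplus = 500_000_000  # hypothetical statutory surplus

# (risk name, frequency score 1-5, severity score 1-5, estimated impact in $)
risk_register = [
    ("Reserve deterioration",   2, 4, 60_000_000),
    ("Catastrophe - hurricane", 1, 5, 120_000_000),
    ("Data breach",             3, 3, 25_000_000),
    ("Claims leakage",          4, 2, 10_000_000),
]

def bucket(score):
    return "low" if score <= 2 else "medium" if score <= 3 else "high"

for name, freq, sev, impact in risk_register:
    pct_of_surplus = impact / surplus
    print(f"{name:24s} frequency={bucket(freq):6s} severity={bucket(sev):6s} "
          f"impact ~{pct_of_surplus:.0%} of surplus")
```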

The right retention

March 18th, 2014

The most important decision a corporation makes with regard to any insurable risk it faces is determining the risk financing structure, including the trade-off between retained and transferred risk. Deciding the optimal amount of retained risk is often more art than science. Why don’t companies always put themselves in the optimal position from a risk retention standpoint? Because risk, the uncertainty of what losses will be, is perceived as difficult to quantify. Rules of thumb and anecdotal evidence often win out in decision making.

The traditional insurance metric has been, and remains, “cost of risk” (or sometimes just “expense”). This metric doesn’t tell the whole story about risk. By incorporating the element of risk into retention analysis (calculating a distribution of loss outcomes, considering the firmness of the insurance market, and taking a financial view of risk), the retention decision can be framed as a capital resource decision that incorporates the cost of capital embedded in an insurance purchase.

Milliman’s Stephen DiCenso and John Yonkunas, of JPY Services, use a sample insurance portfolio to quantitatively compare a cost of capital approach with a traditional cost of risk approach in this Captive Review article.
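
To make the distinction concrete, here is a stylized Monte Carlo sketch, not the authors’ model, that compares candidate retentions on a total-cost basis: expected retained losses, plus a loaded premium for the transferred layer, plus a charge on the capital needed to support the retained risk (the traditional cost-of-risk view stops after the first two terms). The loss distribution, premium loading, capital standard, and cost-of-capital rate are all hypothetical assumptions.

```python
# Stylized sketch (not the authors' model) of comparing candidate
# retentions on a total-cost-of-capital basis. The loss distribution,
# premium loading, capital standard, and cost-of-capital rate are all
# hypothetical assumptions chosen only to make the mechanics concrete.
import numpy as np

rng = np.random.default_rng(1)
n_sims = 100_000

# Hypothetical annual loss distribution for the insurable exposure.
losses = rng.lognormal(mean=13.0, sigma=1.2, size=n_sims)

premium_loading = 0.30   # insurer's load over expected ceded losses
capital_percentile = 99  # retained-loss VaR used as the capital standard
cost_of_capital = 0.08   # annual charge on capital held against retained risk

def total_cost(retention):
    retained = np.minimum(losses, retention)
    ceded = losses - retained
    expected_retained = retained.mean()
    premium = (1 + premium_loading) * ceded.mean()
    capital = np.percentile(retained, capital_percentile) - expected_retained
    # The "cost of risk" view stops at the first two terms; the capital
    # charge is what turns this into a capital resource decision.
    return expected_retained + premium + cost_of_capital * capital

for retention in [100_000, 500_000, 1_000_000, 5_000_000]:
    print(f"retention {retention:>9,}: total cost ~{total_cost(retention):,.0f}")
```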

A pragmatic approach to modeling real-world interest rates

March 14th, 2014

Even without the advent of Solvency II and the appeal of internal models for measuring capital more accurately, it’s likely that the events following the global financial crisis (GFC) would have sharpened European insurance companies’ risk modeling capabilities.

In Asia, insurance companies are also investing significant resources in developing their own economic capital models. Boards of directors have been charged with measuring risk and planning capital requirements through frameworks such as the Own Risk and Solvency Assessment (ORSA) in Singapore and the Internal Capital Adequacy Assessment Process (ICAAP) in Malaysia.

Much has already been written about building complex Monte Carlo engines to calculate risk measures. This report by Milliman’s Clement Bonnet and Nigel Knowles addresses a question about the front end of the risk measurement process: How do we project our yield curve?
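
To give a flavor of what projecting a yield curve can involve, here is a deliberately simple one-factor sketch using Vasicek dynamics, chosen for brevity rather than because it reflects the approach in the report: it simulates real-world short-rate paths and then reads off the model-implied zero-coupon curve in each scenario. All parameters are hypothetical, and for simplicity the same parameters are used for projection and pricing.

```python
# Generic one-factor illustration (not the approach in the report): project
# the short rate under Vasicek dynamics, then read off the model-implied
# zero-coupon yield curve in each scenario at a one-year horizon using the
# closed-form Vasicek bond price. All parameters are hypothetical.
import numpy as np

a, b, sigma = 0.15, 0.04, 0.01   # mean reversion speed, long-run level, volatility
r0, horizon, steps, n_paths = 0.02, 1.0, 252, 10_000

rng = np.random.default_rng(7)
dt = horizon / steps
rates = np.full(n_paths, r0)
for _ in range(steps):           # Euler projection of dr = a(b - r)dt + sigma dW
    rates += a * (b - rates) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

def vasicek_yield(r, tau):
    """Zero-coupon yield for maturity tau given short rate r (closed form)."""
    B = (1 - np.exp(-a * tau)) / a
    A = np.exp((B - tau) * (a**2 * b - sigma**2 / 2) / a**2 - sigma**2 * B**2 / (4 * a))
    price = A * np.exp(-B * r)
    return -np.log(price) / tau

for tau in [1, 5, 10, 20]:
    curve = vasicek_yield(rates, tau)   # distribution of yields across scenarios
    print(f"{tau:>2}y yield: median {np.median(curve):.2%}, "
          f"5th-95th pct {np.percentile(curve, 5):.2%} to {np.percentile(curve, 95):.2%}")
```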

Fracking exposures could create large liability claims

March 13th, 2014

There has not yet been any environmental accident related to hydraulic fracturing (fracking). Still, the potential for enormous liability claims exists. In this Business Insurance article (subscription required), Milliman’s Jason Kurtz offers perspective on the financial effects that ecological contamination caused by fracking may have on a company:

Jason B. Kurtz, a consulting actuary at Milliman Inc. in New York, said past water contamination cases in other industries, such as those that involve gasoline additive MTBE, demonstrate how costly such events can be.

“If these types of things do manifest themselves, some of the companies involved (in fracking) may not be big enough to fully absorb the financial hit,” he said. “Regulators should be aware of that.”

Regulators overseeing fracking might want to look to insurer solvency requirements as a guide to requirements that could be imposed on companies involved in hydraulic fracturing that would ensure they have sufficient resources, either through their own funds or insurance, to cover groundwater contamination claims, Mr. Kurtz said.

Kurtz also details some insurance-related uncertainties involved with fracking in his co-authored paper “Fracking: Considerations for risk management and financing.” Here is an excerpt:

A current lack of insurance capacity may be due to a lack of historical demand. Fracking-related energy production will be around for a long time, with many thousands of wells expected to be drilled in the next several decades, but as this technology is just starting to become more widespread, there may be a temporary lack of capacity as insurers increase their familiarity with the unique aspects of fracking exposures in a particular location.

…If insurers are too concerned about high pollution liability for fracking exposures, perhaps it’s worth evaluating the particular situation to see whether the pollution risk could somehow be further reduced or, at the extreme, avoided. A lack of insurance availability for certain energy companies in a region may be a signal that the likelihood of major pollution losses is too high, either because best practices are not being followed or because of the complexities associated with the use of fracking in that region.

Actuaries and reserve adequacy: Are P&C actuaries impacting the reserving cycle?

March 7th, 2014

For more than 30 years, reserve adequacy for the property and casualty (P&C) insurance industry has been highly cyclical, alternating between periods of adverse and favorable reserve development. No one knows for certain what factor or factors cause these swings. It’s commonly thought that internal industry influences—such as claims department practices, changes in pricing, or management decisions—are potential sources. Although we expect these elements do play a role, there’s no evidence to suggest they are the primary reason for the reserving cycle.

What few have considered, on the other hand, is the possibility that common methods used by actuaries to determine appropriate reserves may themselves be an important contributing factor to movements in the reserving cycle. In their article “Actuaries and reserve adequacy,” Milliman’s Susan Forray and Zachary Ballweg assess the potentially cyclical behavior of various actuarial reserving methods.
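
For readers unfamiliar with the methods in question, the chain-ladder (loss development) technique is one of the most common. The sketch below shows its basic mechanics on a made-up cumulative loss triangle; it is purely illustrative and is not taken from the authors’ analysis.

```python
# Generic sketch of the chain-ladder calculation, one widely used reserving
# method of the kind the article examines. The cumulative paid-loss triangle
# below is made up purely for illustration.
import numpy as np

# Rows: accident years; columns: development ages (NaN = not yet observed).
triangle = np.array([
    [1000.0, 1500.0, 1700.0, 1750.0],
    [1100.0, 1680.0, 1890.0, np.nan],
    [1200.0, 1850.0, np.nan, np.nan],
    [1300.0, np.nan, np.nan, np.nan],
])

n = triangle.shape[1]
factors = []
for j in range(n - 1):
    valid = ~np.isnan(triangle[:, j + 1])            # rows observed at both ages
    factors.append(triangle[valid, j + 1].sum() / triangle[valid, j].sum())

projected = triangle.copy()
for j in range(n - 1):
    missing = np.isnan(projected[:, j + 1])
    projected[missing, j + 1] = projected[missing, j] * factors[j]

ultimate = projected[:, -1]
latest = np.array([row[~np.isnan(row)][-1] for row in triangle])
print("age-to-age factors:", np.round(factors, 3))
print("indicated reserve by accident year:", np.round(ultimate - latest, 1))
```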

This article originally appeared in the March/April 2014 issue of Contingencies.