Tag Archives: predictive modeling

Milliman’s gradient A.I. platform brings first A.I. predictive analytics solution to professional employer organization (PEO) market

Milliman has announced that gradient A.I., a Milliman predictive analytics platform, now offers a professional employer organization (PEO)-specific solution for managing workers’ compensation risk. gradient A.I. is an advanced analytics and A.I. platform that uncovers hidden patterns in big data to deliver a daily decision support system (DSS) for insurers, self-insurers, and PEOs. It’s the first solution of its kind to be applied to PEO underwriting and claims management.

“Obtaining workers’ compensation insurance capacity has been historically difficult because of the lack of credible data to understand a PEO’s expected loss outcomes. Additionally, there were no formal pricing tools specific to the PEO community for use with any level of credibility—until gradient A.I. Pricing within a loss-sensitive environment can now be done with the science of Milliman combined with the instinct and intuition of the PEO,” says Paul Hughes, CEO of Libertate/RiskMD, an insurance agency/data analytics firm that specializes in providing coverage and consulting services to PEOs. “Within a policy term we can understand things like claims frequency and profitability, and we can get very good real-time month-to-month directional insight, in terms of here’s what you should have expected, here’s what happened, and as a result did we win or lose?”

gradient A.I., a transformational InsurTech solution, aggregates client data from multiple sources, deposits it into a data warehouse, and normalizes data that would otherwise sit in disparate silos. “The uniqueness for PEOs and their service providers—and the power of gradient A.I.—emerges from the application of machine-learning capabilities on the PEOs’ data normalization,” says Stan Smith, a predictive analytics consultant and Milliman’s gradient A.I. practice leader. “With the gradient A.I. data warehouse, companies can reduce time, costs, and resources.”
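The pipeline itself is proprietary, but the aggregate-and-normalize pattern described above is a familiar one. As a rough sketch in Python (the schema, column mappings, and field names below are invented for illustration, not gradient A.I.'s actual data model):

```python
import pandas as pd

# Hypothetical illustration only: the shared schema and column names are
# invented, not gradient A.I.'s actual data model.
SCHEMA = ["claim_id", "loss_date", "paid", "incurred"]

def normalize(source_df: pd.DataFrame, column_map: dict) -> pd.DataFrame:
    """Rename one source's columns onto the shared schema and coerce types."""
    df = source_df.rename(columns=column_map)[SCHEMA]
    df["loss_date"] = pd.to_datetime(df["loss_date"])
    df[["paid", "incurred"]] = df[["paid", "incurred"]].astype(float)
    return df

# Aggregating several client feeds into one warehouse table:
# warehouse = pd.concat([normalize(df, m) for df, m in feeds], ignore_index=True)
```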

For more on how gradient A.I. and Libertate brought predictive analytics solutions to PEOs, click here.

How can predictive analytics enhance group life and disability insurance?

The group life and disability insurance sector has been slower to adopt predictive analytics than other lines of insurance. One reason for the lag is that insurers often have limited information about the people they insure. Even so, there are many ways to incorporate predictive modeling to improve results. Milliman consultant Jennifer Fleck provides some perspective in her article “Group insurance ‘Project Insight’.”

Milliman debuts proprietary predictive modeling platform for advanced analytics and enhanced data management

Milliman will debut its proprietary predictive modeling platform at the Insider Tech Conference held in New York City on December 6. Milliman’s recently created analytics software, Solys, uses advanced computer languages, models, and machine learning so that consultants can serve their clients with increased speed, reach, and cost-efficiency.

An internal tool built to benefit Milliman’s current and future clients, Solys simplifies processes, improves data management, and performs advanced predictive analytics using the latest software environments and programming languages. The technology increases efficiency and expands consultant capabilities in the growing InsurTech field. Milliman consultants will discuss the tool and the firm’s InsurTech work at a panel discussion at the conference.

As insurers face disruption from the “Internet of Things,” the sharing economy, and autonomous vehicles, it’s vital that their consultants provide the best answers in the fastest and most cost-efficient manner possible. Milliman’s advanced predictive modeling tool enables consultants to address their clients’ InsurTech questions and remain leaders in this rapidly changing industry.

To read Milliman’s InsurTech research, click here. Also, to subscribe to Milliman’s InsurTech updates, contact us here.

Using predictive modelling in assumption setting

Milliman is carrying out a series of policyholder behaviour experience studies using predictive analytics. This blog post discusses the most recent US-based study looking at Guaranteed Lifetime Withdrawal Benefit (GLWB) utilisation, which, along with lapse, is a key driver of variable annuity (VA) business value.

The study was based on a data set containing around 2 million unique VA policies issued between 2003 and 2015 by seven large variable annuity writers based in the US. These policies represent roughly $220 billion of account value (based on initial purchase amounts) and cover a range of GLWB product designs as well as demographic attributes. This provides a rich data set with which to study policyholder behaviour.

A predictive model can be constructed with common variables such as age, tax-qualified status and single/joint status to allow easy implementation. The models constructed for our study use drivers that are readily available in a typical in-force data file, making them suitable for implementation in existing actuarial projection platforms. Adding explanatory variables or interactions to the assumption formula is a natural step in predictive modelling because many variables can be captured in a single model without double-counting their individual effects. This framework allows iterative improvements to predictions and better differentiation of policyholder behaviour at a seriatim level.
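To make the idea concrete, here is a minimal sketch of such a model as a logistic GLM fitted on simulated data, using the drivers named above. The variable names, relationships, and coefficients are all hypothetical; the study's actual model specification is not given here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sketch only: the data below are simulated, and the study's
# actual model form and coefficients are not public.
rng = np.random.default_rng(0)
n = 5000
policies = pd.DataFrame({
    "attained_age":  rng.integers(55, 85, n),
    "tax_qualified": rng.integers(0, 2, n),
    "joint_status":  rng.integers(0, 2, n),
})

# Simulate whether a policy began GLWB withdrawals, loosely tied to age
# and tax-qualified status (an invented relationship, for illustration).
xb = -6.0 + 0.08 * policies["attained_age"] + 0.5 * policies["tax_qualified"]
policies["began_withdrawals"] = (rng.random(n) < 1 / (1 + np.exp(-xb))).astype(int)

# A logistic GLM captures several drivers in one model, so adding a new
# explanatory variable or interaction doesn't double-count the others.
model = smf.logit(
    "began_withdrawals ~ attained_age + tax_qualified + joint_status",
    data=policies,
).fit()
print(model.summary())
```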

The 2016 Milliman VALUES™ GLWB Utilisation study examined both when policyholders chose to begin taking lifetime withdrawals and how efficiently they continued to take them thereafter. We were able to confirm and, more importantly, quantify many intuitive assumptions about these behaviours and what drives them, and we discovered new insights as well. For example, fewer than half of all policyholders currently taking GLWB withdrawals utilise their GLWB benefit with 100% efficiency (i.e., taking precisely the maximum allowed withdrawal amount). This is notable because we believe many companies price on the assumption of 100% efficiency.
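For concreteness, the efficiency measure can be expressed as the ratio of the actual withdrawal to the maximum allowed amount; a trivial sketch, with an assumed definition and illustrative figures:

```python
# Assumed definition, per the description above: withdrawal efficiency is
# the ratio of the actual withdrawal to the maximum allowed amount.
def withdrawal_efficiency(actual: float, max_allowed: float) -> float:
    """Returns 1.0 when the policyholder takes exactly the maximum allowed."""
    return actual / max_allowed if max_allowed else 0.0

# e.g., taking $4,500 of a $5,000 maximum is 90% efficient:
assert withdrawal_efficiency(4_500, 5_000) == 0.9
```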


Validating machine-learning models

While machine-learning techniques can improve business processes, predict future outcomes, and save money, they also increase modeling risk because of their complex and opaque features. In this article, Milliman’s Jonathan Glowacki and Martin Reichhoff discuss how model validation techniques can mitigate the potential pitfalls of machine-learning algorithms.

Here is an excerpt:

An independent model validation carried out by knowledgeable professionals can mitigate the risks associated with new modeling techniques. In spite of the novelty of machine-learning techniques, there are several methods to safeguard against overfitting and other modeling flaws. The most important requirement for model validation is that the team performing it understands the algorithm. If the validators do not understand the theory and assumptions behind the model, they are unlikely to perform an effective validation. After demonstrating an understanding of the model theory, the following procedures are helpful in performing the validation.

Outcomes analysis refers to comparing modeled results to actual data. For advanced modeling techniques, outcomes analysis is a simple yet useful approach to understanding model interactions and pitfalls. One way to understand model results is to plot the range of an independent variable against both the actual and predicted outcomes, along with the number of observations. This allows the user to visualize the univariate relationship within the model and see whether the model is overfitting to sparse data. To evaluate possible interactions, cross plots can also be created looking at results in two dimensions rather than one. Visualizing beyond two dimensions becomes difficult, but looking at simple interactions does provide an initial, useful understanding of how the model behaves with its independent variables….
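A minimal sketch of that univariate outcomes plot, assuming a pandas DataFrame with placeholder actual and predicted columns (the excerpt doesn't prescribe any particular tooling):

```python
import matplotlib.pyplot as plt
import pandas as pd

# Illustrative sketch of the univariate plot described above; column names
# are placeholders, and the tooling is an arbitrary choice.
def univariate_plot(df: pd.DataFrame, var: str, actual: str,
                    predicted: str, bins: int = 20) -> None:
    """Plot mean actual vs. predicted outcomes by bins of one variable,
    with observation counts, to spot overfitting to sparse data."""
    grouped = df.groupby(pd.cut(df[var], bins=bins), observed=True)
    summary = grouped.agg(actual_mean=(actual, "mean"),
                          predicted_mean=(predicted, "mean"),
                          n_obs=(var, "size"))
    mids = [interval.mid for interval in summary.index]

    fig, ax = plt.subplots()
    count_ax = ax.twinx()
    count_ax.bar(mids, summary["n_obs"], alpha=0.2,
                 width=(mids[1] - mids[0]) * 0.9, label="observations")
    ax.plot(mids, summary["actual_mean"], marker="o", label="actual")
    ax.plot(mids, summary["predicted_mean"], marker="x", label="predicted")
    ax.set_xlabel(var)
    ax.set_ylabel("mean outcome")
    count_ax.set_ylabel("observations")
    ax.legend(loc="upper left")
    plt.show()
```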

…Cross-validation is a common strategy to help ensure that a model isn’t overfitting the sample data it’s being developed with. Cross-validation has long been used to help ensure the integrity of other statistical methods, and with the rising popularity of machine-learning techniques, it has become even more important. In cross-validation, a model is fitted using only a portion of the sample data. The model is then applied to the other portion of the data to test performance. Ideally, a model will perform equally well on both portions of the data. If it doesn’t, the model has likely been overfit.
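As a minimal illustration of the idea on synthetic data (the model and library are arbitrary choices, not ones the article specifies):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Minimal k-fold cross-validation sketch on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000)

model = GradientBoostingRegressor(random_state=0)
# Fit on k-1 folds, score on the held-out fold, rotate through all folds.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("held-out R^2 per fold:", np.round(scores, 3))
# A large gap between training and held-out performance signals overfitting.
print("mean held-out R^2:", round(scores.mean(), 3))
```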

Reading list: Florida’s private flood insurance market

Advances in catastrophe models and new state insurance regulations have opened the door for an affordable, risk-based private flood insurance market in Florida. This reading list highlights articles focusing on various issues and implications related to that market. The articles feature Milliman consultants Nancy Watkins and Matt Chamberlain, whose knowledge and experience are helping insurers understand and price flood risk more precisely.

Forbes: “The private flood insurance market is stirring after more than 50 years of dormancy”
The reemergence of private flood insurance has piqued the interest of carriers seeking to enter the market. Some catastrophe (CAT) modeling companies are creating flood models to help insurers price policies. Here’s an excerpt:

Nancy Watkins, a principal consulting actuary for Milliman, likened the current level of interest from insurers to enter the private flood insurance market to popcorn.

“We are at that stage where you can hear the space between pops. You can hear one kernel at a time,” she said. “What I think is going to happen is, in one to two years, there’s going to be a lot more going on.”

Bradenton Herald: “Important for homeowners to compare flood insurance options”
Florida homeowners must consider the issues related to the National Flood Insurance Program (NFIP) and private flood policies. Private insurers can use predictive modeling technology to determine a home’s distinct flood risk.

Tampa Bay Times: “Remember the flood insurance scare of 2013? It’s creeping back into Tampa Bay and Florida”
Real estate and insurance experts comment on the possible effects that high flood insurance rates may have on homeowners. Insurers express interest in the granular modeling of flood-prone territories.

Tampa Bay Business Journal: “Why some Tampa Bay property insurers are offering flood coverage and others are not” (subscription required)
Insurers need to weigh the risks and rewards associated with underwriting flood insurance. A few carriers have already decided to participate in Florida’s private flood insurance market.