Tag Archives: machine learning

How can advancements in predictive analytics help identify reinsurance workers’ comp claims early?

Developments over the past few years in predictive analytics are providing opportunities to improve the early identification of claims with a higher likelihood of piercing workers’ compensation reinsurance layers. Over the past decade or so, the field of claim analytics has moved from performing forensic work on closed claims to analytics that can identify, at 60 days from the date of injury (or sooner), claims with a high likelihood of exceeding a retention level.

While an excess loss is obvious for some catastrophic claims, the buildup to the attachment point is less obvious for many excess loss claims due to the subtleties of compounding factors. A significant challenge with early identification analytics for claims that have not reached an excess loss attachment point is that the administration of the claim is often handled by several specialists without any single participant noticing the aggregation of costly factors.

A recent development in predictive analytics is the use of machine learning software that extends the principles of conventional multivariate analyses. In contrast to conventional analyses, these advanced methods are not limited to linear relationships between predictors and the outcome. Another development is the extraction of text information from claim adjusters’ notes, nurse care manager reports, and medical reports.
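To make the distinction concrete, here is a minimal sketch, in Python with scikit-learn, of the two developments described above: a nonlinear learner fitted alongside a conventional linear model, with simple text features extracted from claim adjusters’ notes. The file and column names (claims_at_60_days.csv, notes, paid_to_date, claimant_age, exceeds_retention) are hypothetical illustrations, not details from the article.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Hypothetical extract of open claims evaluated 60 days after the injury
claims = pd.read_csv("claims_at_60_days.csv")

features = ColumnTransformer([
    # turn free-text adjuster notes into numeric term-frequency features
    ("notes", TfidfVectorizer(max_features=500), "notes"),
    # pass structured claim fields through unchanged
    ("numeric", "passthrough", ["paid_to_date", "claimant_age"]),
])

X_train, X_test, y_train, y_test = train_test_split(
    claims, claims["exceeds_retention"], test_size=0.25, random_state=0)

# Conventional linear baseline vs. a nonlinear gradient-boosted model
for name, model in [("linear", LogisticRegression(max_iter=1000)),
                    ("boosted trees", GradientBoostingClassifier())]:
    pipeline = Pipeline([("features", features), ("model", model)])
    pipeline.fit(X_train, y_train)
    print(name, "holdout accuracy:", pipeline.score(X_test, y_test))
```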

Machine learning software and text mining algorithms are necessary tools for the early identification of claims most likely to become excess loss claims. To learn more about how analytics has improved the early identification of claims, read this article by Lori Julga and Phil Borba.

Parallel cloud computing enhances actuarial analyses

Parsing a large computational process into smaller independent tasks that run in parallel can help actuaries benefit from the time-saving efficiencies of cloud computing. Many machine learning workloads, such as model training and tuning, split naturally into such independent tasks. In this article, Milliman’s Joe Long and Dan McCurley discuss how they cut a three-month machine learning project down to four days using open source tools and the Microsoft Azure cloud.
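As a rough illustration of the idea, here is a minimal sketch in Python using joblib, which parallelizes independent tasks across local cores the same way a cloud cluster parallelizes them across machines. This is not the Azure workflow from the article; the fit_one_model helper and the candidate models are hypothetical.

```python
from joblib import Parallel, delayed
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=5000, n_features=20, random_state=0)

def fit_one_model(n_trees):
    """Independent task: fit one candidate model and report its in-sample fit."""
    model = RandomForestRegressor(n_estimators=n_trees, random_state=0)
    model.fit(X, y)
    return n_trees, model.score(X, y)

# Each candidate runs on its own core; on a cloud cluster the same
# decomposition scales out across many machines.
results = Parallel(n_jobs=-1)(
    delayed(fit_one_model)(n) for n in [50, 100, 200, 400])
print(results)
```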

Milliman debuts proprietary predictive modeling platform for advanced analytics and enhanced data management

Milliman will debut its proprietary predictive modeling platform at the Insider Tech Conference held in New York City on December 6. Milliman’s recently created analytics software, Solys, uses advanced computer languages, models, and machine learning so that consultants can serve their clients with increased speed, reach, and cost-efficiency.

An internal tool that can be used to benefit Milliman’s current and future clients, Solys simplifies processes, improves data management, and performs advanced predictive analytics using the latest software environments and programming languages. The leading technology increases efficiencies and consultant capabilities in the growing InsurTech field. Milliman consultants will be discussing the tool and the firm’s work in InsurTech at a panel discussion at the Insider Tech event in New York on December 6.

As insurers face disruption around the “Internet of Things,” the shared economy, and autonomous vehicles, it’s vital that their consultants provide the best answers in the fastest and most cost-efficient manner possible. Milliman’s advanced predictive modeling tool enables consultants to address their clients’ InsurTech questions and remain leaders in this rapidly changing industry.

To read Milliman’s InsurTech research, click here. Also, to subscribe to Milliman’s InsurTech updates, contact us here.

Riding the data: How a transportation company used data science to improve decision-making processes

How can a company leverage customer data and turn it into actionable information? This was the challenge one transportation provider faced when the modeling system it had implemented to predict revenue and passenger traffic began underperforming. In this article, Milliman consultant Antoine Ly discusses how the firm created a machine-learning model that helps the company analyze various aspects of its ridership, leading to more informed financial decisions.

Here is an excerpt:

Working from a mock-up drafted by the client, the [Milliman] team reproduced the dashboard to the client’s specifications, but it is now supported by newly developed software as well as the client’s data warehouse. The dashboard allows the client’s management team to query different aspects of passenger usage to gain insight into traffic flows and revenue. Colour-coded symbols, when clicked on, give managers a concise picture of a train’s revenue and traffic. Managers can also query the system based on selected features for both past usage and anticipated ridership, and are now able to make more informed decisions about pricing, the need for discounts, or adjustments to marketing campaigns.

Because the model can adapt to new situations, deviations from the average error are confined to a much narrower range. This gives managers more confidence in the model’s predictive value and increases their ability to manage revenue.

Emerging risk analytics: Application of advanced analytics to the understanding of emerging risk

This report by Milliman’s Neil Cantle uses advanced machine learning algorithms, such as deep neural networks, to analyse social media conversations about Brexit. The purpose of the study was to examine whether useful information about a key political and economic topic could be extracted from social media in what is effectively real time.

Validating machine-learning models

While machine-learning techniques can improve business processes, predict future outcomes, and save money, they also increase modeling risk because of their complex and opaque features. In this article, Milliman’s Jonathan Glowacki and Martin Reichhoff discuss how model validation techniques can mitigate the potential pitfalls of machine-learning algorithms.

Here is an excerpt:

An independent model validation carried out by knowledgeable professionals can mitigate the risks associated with new modeling techniques. In spite of the novelty of machine-learning techniques, there are several methods to safeguard against overfitting and other modeling flaws. The most important requirement for model validation is for the team performing it to understand the algorithm. If the validators do not understand the theory and assumptions behind the model, they are unlikely to perform an effective validation of the modeling process. After demonstrating an understanding of the model theory, the following procedures are helpful in performing the validation.

Outcomes analysis refers to comparing modeled results to actual data. For advanced modeling techniques, outcomes analysis becomes a very simple yet useful approach to understanding model interactions and pitfalls. One way to understand model results is to plot the range of the independent variable against both the actual and predicted outcome, along with the number of observations. This allows the user to visualize the univariate relationship within the model and see whether the model is overfitting to sparse data. To evaluate possible interactions, cross plots can also be created to look at results in two dimensions rather than one. Going beyond two dimensions becomes difficult to evaluate, but looking at simple interactions does provide an initial, useful understanding of how the model behaves with independent variables….
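As a rough illustration of that univariate plot, the following Python sketch (using pandas and matplotlib on synthetic data, not data from the article) charts the actual and predicted outcome against one independent variable, with observation counts on a secondary axis:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Synthetic stand-ins for one independent variable, the actual outcome,
# and the model's prediction for the same records
rng = np.random.default_rng(0)
age = rng.integers(18, 70, size=2000)
actual = 100 + 3.0 * age + rng.normal(0, 40, size=2000)
predicted = 100 + 3.0 * age  # in practice: model.predict(X)

by_age = (pd.DataFrame({"age": age, "actual": actual, "predicted": predicted})
          .groupby("age")
          .agg(actual=("actual", "mean"),
               predicted=("predicted", "mean"),
               n_obs=("actual", "size")))

fig, ax1 = plt.subplots()
ax1.plot(by_age.index, by_age["actual"], label="actual")
ax1.plot(by_age.index, by_age["predicted"], label="predicted")
ax1.set_xlabel("claimant age (independent variable)")
ax1.set_ylabel("mean outcome")
ax1.legend(loc="upper left")

# Observation counts on a secondary axis show where the data is sparse,
# which is where an overfit model is most likely to chase noise
ax2 = ax1.twinx()
ax2.bar(by_age.index, by_age["n_obs"], alpha=0.2)
ax2.set_ylabel("number of observations")
plt.show()
```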

…Cross-validation is a common strategy to help ensure that a model isn’t overfitting the sample data it’s being developed with. Cross-validation has long been used to safeguard the integrity of other statistical methods, and with the rising popularity of machine-learning techniques, it has become even more important. In cross-validation, a model is fitted using only a portion of the sample data. The model is then applied to the other portion of the data to test performance. Ideally, a model will perform equally well on both portions of the data. If it doesn’t, it’s likely that the model has been overfit.
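A minimal sketch of this strategy in Python with scikit-learn, using a synthetic dataset and an illustrative model rather than anything from the article, looks like this:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Five-fold cross-validation: each fold fits on 80% of the sample and
# scores on the held-out 20%
cv = cross_validate(GradientBoostingClassifier(), X, y,
                    cv=5, return_train_score=True)

print("mean train score:", np.mean(cv["train_score"]))
print("mean test score: ", np.mean(cv["test_score"]))
# A large gap between the two scores is the warning sign described in
# the excerpt: the model performs well only on the data used to fit it
```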