Milliman has announced that gradient A.I., a Milliman predictive analytics platform, now offers a professional employer organization (PEO)-specific solution for managing workers’ compensation risk. gradient A.I. is an advanced analytics and A.I. platform that uncovers hidden patterns in big data to deliver a daily decision support system (DSS) for insurers, self-insurers, and PEOs. It’s the first solution of its kind to be applied to PEO underwriting and claims management.
“Obtaining workers’ compensation insurance capacity has been historically difficult because of the lack of credible data to understand a PEO’s expected loss outcomes. Additionally, there were no formal pricing tools specific to the PEO community for use with any level of credibility—until gradient A.I. Pricing within a loss-sensitive environment can now be done with the science of Milliman combined with the instinct and intuition of the PEO,” says Paul Hughes, CEO of Libertate/RiskMD, an insurance agency/data analytics firm that specializes in providing coverage and consulting services to PEOs. “Within a policy term we can understand things like claims frequency and profitability, and we can get very good real-time month-to-month directional insight, in terms of here’s what you should have expected, here’s what happened, and as a result did we win or lose?”
gradient A.I., a transformational insurtech solution, aggregates client data from multiple sources, deposits it into a data warehouse, and normalizes the data in comprehensive data silos. “The uniqueness for PEOs and their service providers—and the power of gradient A.I.—emerges from the application of machine-learning capabilities on the PEOs’ data normalization,” says Stan Smith, a predictive analytics consultant and Milliman’s gradient A.I. practice leader. “With the gradient A.I. data warehouse, companies can reduce time, costs, and resources.”
For more on how gradient A.I. and Libertate brought predictive analytics solutions to PEOs, click here.
The group life and disability insurance sector has been slower to adopt predictive analytics than other lines of insurance. One reason for the sector’s lag is that insurers often have limited information about whom they are insuring. However, there are still many ways to incorporate predictive modeling technology to improve results. Milliman consultant Jennifer Fleck provides some perspective in her article “Group insurance ‘Project Insight’.”
Milliman has announced the launch of gradient A.I. (formerly MillimanMAX), an advanced analytics platform that uncovers hidden patterns in big data in order to improve workers’ compensation claims management. The gradient A.I. platform is a transformative InsurTech solution built on the latest advanced analytics and artificial intelligence (A.I.) techniques, and it delivers a daily decision support system (DSS) for insurers and self-insurers.
Milliman has been conducting research and development in the most advanced areas of artificial intelligence—also known as “deep learning”—for over five years, and the rebranding of gradient A.I. is a reflection of that enhanced experience. Our goal with gradient A.I. is to deliver the most actionable intelligence to our clients in the form of “decision support”—and we’re pleased to note that so far clients have seen underwriting profit improvements of 3% to 5% and claim cost reductions in the neighborhood of 5% to 10%.
The key differentiator of gradient A.I. is its ability to identify relationships between structured and unstructured data, unlocking powerful and previously unknown information to deliver a competitive advantage to self-insured groups, carriers, and third-party administrators within the property and casualty (P&C) market. Additional product features include a custom data warehouse, easily identifiable and actionable risk drivers, dynamic reporting, and customizable reports and dashboards.
To learn more, click here.
Milliman will debut its proprietary predictive modeling platform at the Insider Tech Conference held in New York City on December 6. Milliman’s recently created analytics software, Solys, uses advanced computer languages, models, and machine learning so that consultants can serve their clients with increased speed, reach, and cost-efficiency.
An internal tool that can be used to benefit Milliman’s current and future clients, Solys simplifies processes, improves data management, and performs advanced predictive analytics using the latest software environments and programming languages. This leading-edge technology increases efficiency and expands consultant capabilities in the growing InsurTech field. Milliman consultants will be discussing the tool and the firm’s work in InsurTech at a panel discussion at the Insider Tech event in New York on December 6.
As insurers face disruption around the “Internet of Things,” the shared economy, and autonomous vehicles, it’s vital that their consultants provide the best answers in the fastest and most cost-efficient manner possible. Milliman’s advanced predictive modeling tool enables consultants to address their clients’ InsurTech questions and remain leaders in this rapidly changing industry.
To read Milliman’s InsurTech research, click here. Also, to subscribe to Milliman’s InsurTech updates, contact us here.
Today actuaries and insurers are able to apply predictive analytics in novel ways because of advanced technologies, larger data sets, and increased computing power. A recent Risk & Insurance article featuring Milliman’s Peggy Brinkman and Phil Borba explores four key areas where advances in predictive analytics are changing the way insurers conduct business: claims, driving safety, property risk, and competitive rating.
While machine-learning techniques can improve business processes, predict future outcomes, and save money, they also increase modeling risk because of their complex and opaque features. In this article, Milliman’s Jonathan Glowacki and Martin Reichhoff discuss how model validation techniques can mitigate the potential pitfalls of machine-learning algorithms.
Here is an excerpt:
An independent model validation carried out by knowledgeable professionals can mitigate the risks associated with new modeling techniques. In spite of the novelty of machine-learning techniques, there are several methods to safeguard against overfitting and other modeling flaws. The most important requirement is that the team performing the model validation understand the algorithm. A validator who does not understand the theory and assumptions behind the model is unlikely to validate it effectively. Once that understanding of the model theory is established, the following procedures are helpful in performing the validation.
Outcomes analysis refers to comparing modeled results to actual data. For advanced modeling techniques, outcomes analysis is a simple yet useful approach to understanding model interactions and pitfalls. One way to understand model results is to plot the range of an independent variable against both the actual and predicted outcomes, along with the number of observations. This allows the user to visualize the univariate relationship within the model and see whether the model is overfitting to sparse data. To evaluate possible interactions, cross plots can also be created, looking at results in two dimensions as opposed to one. Visualizations beyond two dimensions become difficult to evaluate, but looking at simple interactions does provide an initial useful understanding of how the model behaves with respect to its independent variables….
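As a rough sketch of the univariate outcomes analysis described above, the following Python example (using NumPy and synthetic data as stand-ins for an actual model and portfolio) bins one independent variable and compares the mean actual outcome to the mean predicted outcome in each bin, alongside the observation count that flags sparse regions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: one independent variable, actual outcomes,
# and a stand-in for a fitted model's predictions.
x = rng.uniform(0, 10, size=1000)
actual = 2.0 * x + rng.normal(0, 1, size=1000)
predicted = 2.0 * x + 0.1  # hypothetical model output

# Bin the independent variable; compare mean actual vs. mean predicted
# per bin, with observation counts to reveal sparse data.
bins = np.linspace(0, 10, 11)
idx = np.digitize(x, bins) - 1
for b in range(10):
    mask = idx == b
    n = int(mask.sum())
    if n == 0:
        continue
    print(f"bin {b}: n={n}, "
          f"actual={actual[mask].mean():.2f}, "
          f"predicted={predicted[mask].mean():.2f}")
```

In a real validation these binned means would be plotted rather than printed; large gaps between actual and predicted means in thinly populated bins are the overfitting signal the excerpt describes.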
…Cross-validation is a common strategy to help ensure that a model isn’t overfitting the sample data it’s being developed with. Cross-validation has long been used to help ensure the integrity of other statistical methods, and with the rising popularity of machine-learning techniques, it has become even more important. In cross-validation, a model is fitted using only a portion of the sample data. The model is then applied to the other portion of the data to test performance. Ideally, a model will perform equally well on both portions of the data. If it doesn’t, it’s likely that the model has been overfit.