Registration for the 2017 Data Science Game is officially open. The Data Science Game is a two-phase competition showcasing teams of data science students from universities around the world. An online qualifier will take place on April 15 with the final stage happening in September.
Milliman’s Pixel is a web-based, competitive analytics platform that helps insurers use objective and comprehensive information to grow their business.
In this video, Milliman actuaries Nancy Watkins, Peggy Brinkman, and Cody Webb discuss how Pixel helps insurers compare their premiums with those of competitors, identify market sectors where they might be experiencing adverse selection, and access competitive information needed to make sound pricing decisions.
In July, teams of data science students from more than 50 universities around the globe competed in the qualification phase of the 2016 Data Science Game. Over 140 teams of four students were asked to develop an algorithm that could recognize the orientation of a roof from a satellite photograph, drawing on more than 10,000 photographs of roofs categorized through crowdsourcing.
Twenty-two teams have qualified for the final phase. The top three ranking teams were Jonquille (University Pierre and Marie Curie), PolytechNique (Ecole Polytechnique), and The Nerd Herd (University of Amsterdam). The final is being held in Paris on September 10 and 11, where the teams will compete in a big data analysis challenge.
For more information on the Data Science Game, click here.
Milliman is a sponsor of the 2016 Data Science Game.
Milliman is a sponsor of the 2016 Data Science Game, a two-phase competition showcasing teams of data science students from universities around the world. After an online eliminatory challenge, the best 20 teams will be invited to a two-day competition in Paris.
Last year, teams competed to solve a machine learning challenge created by Google. Students from Moscow State University won the competition. Who will win this year?
Teams can register at www.datasciencegame.com. The deadline to register is May 31. The online challenge will take place in June while the two-day competition is scheduled for September.
In his article “Analysing competitor tariffs with machine learning,” Milliman consultant Bernhard Konig provides a sample analysis demonstrating how machine learning can help insurers better understand their competitors’ tariffs and premium rates. The excerpt below explains some advantages of the machine learning technique.
Machine learning techniques provide a flexible tool set to derive accurate estimates of competitor premiums without any knowledge about the underlying tariff structure. The machine learning approach we developed as part of our research is faster and much less expensive than exhaustive web scraping or mystery shopping. It [enables] insurance executives to make better informed decisions about not only tariff changes, but also marketing campaigns and commercial discounts for certain customer segments. The impact of a tariff change on profitability and business volume can certainly be much better assessed in the presence of competitor premiums. In an ideal scenario, a company has an estimate of the competitor premiums at the point of sale. This allows adjusting one’s own quote to increase either the probability of conversion (by lowering the quote) or the profitability (by increasing the quote).
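The approach described above can be illustrated with a small, hypothetical sketch: given a sample of competitor quotes collected for known risk profiles, a flexible model such as gradient boosting can learn the premium as a function of rating factors without any knowledge of the underlying tariff structure. The features, the synthetic "competitor tariff," and the effect sizes below are invented for illustration and do not represent any insurer's actual rates.

```python
# Hypothetical sketch: estimating competitor premiums with gradient boosting.
# All data here is synthetic; the "competitor tariff" is an invented formula
# standing in for the unknown structure we are trying to approximate.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
age = rng.integers(18, 80, n)      # driver age
power = rng.integers(40, 200, n)   # engine power (kW)
urban = rng.integers(0, 2, n)      # 1 = urban postcode

# Unknown competitor tariff: nonlinear age effect plus an
# interaction between engine power and urban area, with quote noise.
premium = (300
           + 4000 / age
           + 1.5 * power
           + 80 * urban
           + 0.8 * power * urban
           + rng.normal(0, 20, n))

X = np.column_stack([age, power, urban])
X_train, X_test, y_train, y_test = train_test_split(
    X, premium, test_size=0.2, random_state=0)

# Fit on the collected quotes; no tariff structure is assumed.
model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)
print(f"holdout R^2: {model.score(X_test, y_test):.3f}")
```

In practice the training quotes would come from a limited sample of comparison-site or mystery-shopping data; the point of the sketch is that the model recovers nonlinearities and interactions without them being specified in advance.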
Telematics can help auto insurers implement usage-based insurance (UBI) by obtaining valuable data related to individuals’ driving behaviors. However, producing actionable information through generalized linear model (GLM) methods has been difficult for auto insurers. Machine learning techniques provide insurers with a better way of analyzing big data.
For auto insurers, machine learning holds the promise of enabling carriers to explore hundreds, if not thousands, of factors involved in calculating the potential risk of individual customers. Moving beyond GLM and introducing machine learning techniques with telematics data may enable insurers to leverage key competitive advantages.
UBI pricing tends to outgrow traditional GLM methods because the complex interactions among the factors in play require machine learning to uncover them within a reasonable, cost-effective timeframe. Premium differences often cannot be fitted with standard GLM distributions; correlations between telematics and non-telematics effects tend to blur the results of a single GLM; and splitting the analysis across separate frequency and severity models only further obscures differences within telematics policies.
GLM techniques will thus always show how a business differs in terms of its dependencies on a limited set of specific factors, such as driver age, annual mileage, or other high-level variables. But when the whole book is modeled together in an attempt to understand the risk across all policies, the result tends to be a mixed bag of confounded effects, and the insurer still has not been able to look very deep into its business.