Category Archives: Risk

Social media influencers bear reputational risk that insurance may cover

Influencer marketing is a lucrative business. Top social media influencers can earn upwards of $25,000 per post in partnership with a brand or company. Still, social media influencers must think about reputational risks that can have a measurable effect on their revenue.

In this article, Milliman’s Madeline Johnson discusses why individuals who rely on their name for income may need some type of reputation risk or business interruption insurance. She also explains the factors insurance companies should consider if they design an individual reputation risk insurance product.

Here is an excerpt from the article:

Starting with the premise that our “good name” translates to our own individual “brand,” protecting one’s individual reputation correlates to protecting one’s personal brand – and the corresponding income stream and overall marketability contained therein. Just as Bruce Springsteen insured his voice or Heidi Klum her legs, for many professionals and celebrities their income is often dependent on the individual reputation they have created. As social media usage increases, the potential for a negatively received public comment does too. A negatively received post has potential implications not only for the social media star but also potentially for the partner company or brand. These companies hire influencers and pay them to endorse their products or services on various social media venues. Reputation risk insurance could provide a financial safety net by providing coverage if a significant negative media event occurred that quantifiably affected an influencer’s future revenue stream….

… In exploring a structure for a reputation risk insurance product for individuals, an insurance company would need to consider the ramifications of insuring an influencer’s potentially poor choice in posting. In most insurance policies, the insurer is offering protection from an outside risk exposure, not an intentional communication on social media. From an insurer’s perspective, issues to consider include defining the specific social media coverage event, excluding instances where protocols were not followed, and, most importantly, the ability to quantify the premium and loss coverage accurately. The insurer would need a methodology to estimate the predicted occurrence of the negative social media event in order to determine the risk of loss to the insurer. We would expect the actuarial value of the covered losses to be a key component of the policy. Insurance companies would need to structure the policy using a set of assumptions about how much has been damaged or lost and for how long. Comparing influencers’ past income streams against the changes observed after various posts and videos, to form a predictive view, may be helpful in understanding risk exposure. A prudent approach to determining insurance terms and pricing is to perform an actuarial study evaluating the frequency and severity data from similar past events. This can be accomplished by evaluating relationships between social media influencers that have partnerships with certain brands or products, the costs of the ultimate drop in followers and sales, and any existing mitigation activities.
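The frequency-severity approach described in the excerpt can be sketched in a few lines. The event probability, severity, and expense load below are hypothetical placeholders for illustration, not figures from the article:

```python
# Minimal frequency-severity pricing sketch (illustrative numbers only).
def expected_annual_loss(event_prob, severity_mean):
    """Pure premium: probability of a negative media event x average revenue damage."""
    return event_prob * severity_mean

def indicated_premium(event_prob, severity_mean, expense_load=0.25):
    """Load the pure premium for expenses and profit (assumed 25% load)."""
    pure_premium = expected_annual_loss(event_prob, severity_mean)
    return pure_premium * (1 + expense_load)

# A hypothetical influencer with a 2% annual chance of a damaging post and an
# average revenue hit of $500,000 if one occurs:
premium = indicated_premium(0.02, 500_000)
print(round(premium))  # 12500
```

In practice the event probability and severity would themselves come from the frequency and severity study the excerpt describes, segmented by brand partnerships and mitigation activities.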

Recovery and Resolution Plans: More to it than meets the eye

Have you ever wondered what options would be available to your company should it get into financial difficulty? Does your company have a ‘plan B’ and how practical and realistic is it? These are questions (re)insurance companies may soon need to answer. Recovery and Resolution Plans (RRPs) have already been introduced in the banking industry. In this blog I outline a few insights the insurance industry can learn from the recovery and resolution planning process which the banking industry has already commenced. (Re)insurance companies may find this useful particularly in light of the European Insurance and Occupational Pensions Authority (EIOPA) opinion issued last month recommending a harmonised recovery and resolution framework for all insurers across the EU.

Based on the feedback from the banking industry, it would appear that there is more to recovery and resolution planning than meets the eye. In the banking industry, recovery plans, for example, are intended to be living documents which demonstrate that the recovery strategies presented can be implemented in reality—and that is not an easy task.

The following diagram illustrates the embeddedness of recovery plans within banks as well as some of the key considerations which I will expand upon in this blog.

Recovery plans can span hundreds of pages, as the practicalities of recovery strategies are explored in great detail in order to have a plan of action that is realistic, achievable, and capable of being executed straight away. Regulators expect a short timeframe for implementation of a recovery plan, with the recovery strategies presented typically required to be fully executable within a 12-month period. In addition, the recovery strategies are expected to take account of the particular scenarios the company may find itself in. For example, the recovery strategies may vary depending on whether an idiosyncratic or a systemic risk has materialised, given that the options available to a company when it alone is in financial difficulty may well differ from those available when many companies are in the same boat.


Milliman and Zendrive create driving risk score with 30 billion miles of smartphone data

As more drivers use smartphones to talk, text, and perform other functions while driving, concern over distracted driving and its contribution to climbing collision rates has increased. Using data collected by Zendrive, Milliman recently studied the impact of distracted driving and other driving behaviors on collision frequency. Consultant Sheri Scott provides some perspective in this article.

Milliman consultant speaking at Mortgage Bankers Association forum

Milliman consultant Madeline Johnson, CMB, will speak at the 2017 MBA Risk Management, QA and Fraud Prevention Forum this September in Miami, Florida. She is scheduled to speak at the session entitled “QC for Purchase Markets” on Monday, September 25.

The three-day forum will be held from September 24 to 26. For more information on the talk and forum, click here.

Obstacle course racing presents insurers with unique hurdles

Obstacle course racing (OCR) events like those featured on American Ninja Warrior have grown in popularity. As the extreme factor of OCR increases, so does the risk for event organizers. These competitions lack the reliable historical data, consistency of events, and general safety measures seen in traditional footraces, making it difficult for insurers to price OCR exposures.

A new article by Michael Henk entitled “Obstacles for insurers of obstacle course racing” explores OCR’s unique risks. It also provides perspective for insurers to consider when pricing premiums in this emerging market.

Here is an excerpt from the article:

Imagine that there is a local half marathon looking for liability insurance to cover its event. An insurance company can use data from past races (either in the same location or spread across a broad geography) to predict expected losses. Because half marathons have been around and been insured for decades, there is enough data for a credible analysis. Because OCR was almost nonexistent until 2010, insurance companies do not have that same degree of industry data. As with any emerging market (such as cyber liability, drone insurance, and self-driving cars), insurers do not know what to expect, and therefore, insurance premiums are priced higher to make up for the unknowns.

Another obstacle in the way of establishing a credible database is that all obstacle course races are not the same. When you decide to run a marathon, you know what to expect: run 26.2 miles. Road races might vary by elements such as terrain, local weather, and elevation changes, but overall, similar risks can be expected across all events. If you run a marathon in Chicago, it is similar to running a marathon in Miami. Likewise, insurers also know what to expect with these traditional races. They can use past data and rely on well-established safety standards to determine the proper level of risk and premiums.

Obstacle courses do not have the same consistency. Running a Tough Mudder race in Minnesota is entirely different from a Spartan race in Florida. The lack of standardization makes it difficult to price insurance policies. For example, if one race has a wall that is 20 feet high and another event has one that is five feet high, both organizers pay the same premium even though the risk of injury from falling is greater with the 20-foot wall. These higher premiums can potentially cause race organizers to pay more for insurance than necessary. The risks associated with one obstacle course can be completely different from the risks of another, but insurance companies still price them roughly the same because there is not enough historical data to allow for differentiation in the policies.

If the industry developed a consistent and credible database of obstacles, insurers would be able to accurately price each race based on the risk of individual obstacles. In fact, with a database like that, races could even be tailored to fit a specific target “riskiness,” selecting obstacles that result in an organizer-preferred premium amount. The current way of one-size-fits-all is not an efficient use of funds for race organizers.
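The obstacle-level pricing this paragraph envisions can be sketched as summing per-obstacle expected losses. The injury rates and claim costs below are hypothetical placeholders, not industry data:

```python
# Hypothetical obstacle-level rating sketch: the pure premium is built from
# per-obstacle frequency/severity assumptions instead of a one-size-fits-all rate.
OBSTACLE_TABLE = {
    # obstacle name: (injury rate per participant, average claim cost)
    "20ft_wall": (0.004, 30_000),
    "5ft_wall":  (0.001, 10_000),
    "mud_crawl": (0.002, 5_000),
}

def race_pure_premium(obstacles, participants):
    """Sum expected injury costs over the obstacles in a given course."""
    total = 0.0
    for name in obstacles:
        rate, cost = OBSTACLE_TABLE[name]
        total += rate * cost * participants
    return total

# A 1,000-runner race with a 20-foot wall vs. one with a 5-foot wall:
print(round(race_pure_premium(["20ft_wall", "mud_crawl"], 1000)))  # 130000
print(round(race_pure_premium(["5ft_wall", "mud_crawl"], 1000)))   # 20000
```

With a credible database behind the table, an organizer could also work backwards, choosing obstacles that land the course at a preferred premium level.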

The article was coauthored by Jenna Hildebrandt, an actuarial science student at the University of Wisconsin – Madison.

Validating machine-learning models

While machine-learning techniques can improve business processes, predict future outcomes, and save money, they also increase modeling risk because of their complex and opaque features. In this article, Milliman’s Jonathan Glowacki and Martin Reichhoff discuss how model validation techniques can mitigate the potential pitfalls of machine-learning algorithms.

Here is an excerpt:

An independent model validation carried out by knowledgeable professionals can mitigate the risks associated with new modeling techniques. In spite of the novelty of machine-learning techniques, there are several methods to safeguard against overfitting and other modeling flaws. The most important requirement for model validation is that the team performing it understands the algorithm. A validator who does not understand the theory and assumptions behind the model is unlikely to perform an effective validation. Once an understanding of the model theory has been demonstrated, the following procedures are helpful in performing the validation.

Outcomes analysis refers to comparing modeled results to actual data. For advanced modeling techniques, outcomes analysis becomes a very simple yet useful approach to understanding model interactions and pitfalls. One way to understand model results is to simply plot the range of the independent variable against both the actual and predicted outcome along with the number of observations. This allows the user to visualize the univariate relationship within the model and understand if the model is overfitting to sparse data. To evaluate possible interactions, cross plots can also be created looking at results in two dimensions as opposed to a single dimension. Dimensionality beyond two dimensions becomes difficult to evaluate, but looking at simple interactions does provide an initial useful understanding of how the model behaves with independent variables….
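The univariate view described above can be sketched by binning an independent variable and comparing average actual versus predicted outcomes, along with the observation count per bin (sparse bins are where overfitting tends to hide). The data below are made-up illustrations:

```python
# Sketch of a univariate outcomes analysis: for each bin of the independent
# variable, report the observation count and the average actual vs. predicted
# outcome. Illustrative data only.
from collections import defaultdict

def outcomes_by_bin(x, actual, predicted, bin_width):
    bins = defaultdict(lambda: [0, 0.0, 0.0])  # count, sum actual, sum predicted
    for xi, a, p in zip(x, actual, predicted):
        b = int(xi // bin_width)
        bins[b][0] += 1
        bins[b][1] += a
        bins[b][2] += p
    return {b * bin_width: (n, sa / n, sp / n)
            for b, (n, sa, sp) in sorted(bins.items())}

x         = [5, 7, 12, 14, 31]
actual    = [0, 1, 1, 0, 1]
predicted = [0.4, 0.5, 0.6, 0.5, 0.9]
for lo, (n, avg_a, avg_p) in outcomes_by_bin(x, actual, predicted, 10).items():
    print(f"bin {lo}-{lo + 10}: n={n}, actual={avg_a:.2f}, predicted={avg_p:.2f}")
```

Plotting these three series (actual, predicted, count) against the variable gives the visual check the excerpt describes; a bin with few observations but a large actual-versus-predicted gap suggests the model is fitting to sparse data.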

…Cross-validation is a common strategy to help ensure that a model isn’t overfitting the sample data it’s being developed with. Cross-validation has long been used to safeguard the integrity of other statistical methods, and with the rising popularity of machine-learning techniques, it has become even more important. In cross-validation, a model is fitted using only a portion of the sample data. The model is then applied to the remaining portion to test performance. Ideally, a model will perform equally well on both portions of the data. If it doesn’t, the model has likely been overfit.
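The fit-on-one-portion, score-on-the-other mechanics can be sketched as k-fold cross-validation. The "model" here is deliberately trivial (the training mean) so the mechanics stay visible; a real validation would plug in the model under review:

```python
# Minimal k-fold cross-validation sketch. Each fold is held out in turn,
# the model is "fit" on the remaining data, and mean-squared error is
# measured on the held-out fold. Illustrative only.
def k_fold_mse(data, k=5):
    folds = [data[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test = folds[i]
        train = [y for j, fold in enumerate(folds) if j != i for y in fold]
        prediction = sum(train) / len(train)  # trivial model: the training mean
        mse = sum((y - prediction) ** 2 for y in test) / len(test)
        scores.append(mse)
    return scores

scores = k_fold_mse([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0], k=5)
# Comparable scores across folds suggest the model generalizes; one fold
# scoring much worse than the rest is the overfitting warning sign.
print([round(s, 2) for s in scores])
```

Libraries such as scikit-learn provide the same mechanics off the shelf, but the structure above is what a validator should confirm was actually applied.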