Category Archives: Risk

Recovery and Resolution Plans: More to it than meets the eye

Have you ever wondered what options would be available to your company should it get into financial difficulty? Does your company have a ‘plan B’, and how practical and realistic is it? These are questions (re)insurance companies may soon need to answer. Recovery and Resolution Plans (RRPs) have already been introduced in the banking industry. In this blog I outline a few lessons the insurance industry can draw from the recovery and resolution planning process that the banking industry has already begun. (Re)insurance companies may find this useful, particularly in light of the opinion issued last month by the European Insurance and Occupational Pensions Authority (EIOPA) recommending a harmonised recovery and resolution framework for all insurers across the EU.

Based on the feedback from the banking industry, it would appear that there is more to recovery and resolution planning than meets the eye. In the banking industry, recovery plans, for example, are intended to be living documents which demonstrate that the recovery strategies presented can be implemented in reality—and that is not an easy task.

The following diagram illustrates how recovery plans are embedded within banks, as well as some of the key considerations that I will expand upon in this blog.

Recovery plans can span hundreds of pages, as the practicalities of recovery strategies are explored in great detail in order to produce a plan of action that is realistic, achievable, and capable of being executed straight away. Regulators expect a short timeframe for implementation of a recovery plan, with the recovery strategies presented typically required to be fully executable within a 12-month period. In addition, the recovery strategies are expected to take account of the particular scenarios the company may find itself in. For example, the strategies may vary depending on whether an idiosyncratic or a systemic risk has materialised, given that the options available to a company in financial difficulty on its own may well differ from those available when many companies are in the same boat.


Milliman and Zendrive create driving risk score with 30 billion miles of smartphone data

As more drivers use smartphones to talk, text, and perform other functions while driving, concern over distracted driving and its contribution to climbing collision rates has increased. Using data collected by Zendrive, Milliman recently studied the impact of distracted driving and other driving behaviors on collision frequency. Consultant Sheri Scott provides some perspective in this article.

Milliman consultant speaking at Mortgage Bankers Association forum

Milliman consultant Madeline Johnson, CMB, will speak at the 2017 MBA Risk Management, QA and Fraud Prevention Forum this September in Miami, Florida. She is scheduled to speak at the session entitled “QC for Purchase Markets” on Monday, September 25.

The three-day forum will be held from September 24 to 26. More information on the talk and the forum is available on the MBA website.

Obstacle course racing presents insurers with unique hurdles

Obstacle course racing (OCR), as seen in events like American Ninja Warrior, has grown in popularity. As the extreme factor of OCR increases, so does the risk for event organizers. These competitions lack the reliable historical data, consistency of events, and general safety measures seen in traditional footraces, making it difficult for insurers to price OCR’s exposures.

A new article by Michael Henk entitled “Obstacles for insurers of obstacle course racing” explores OCR’s unique risks. It also provides perspective for insurers to consider when pricing premiums in this emerging market.

Here is an excerpt from the article:

Imagine that there is a local half marathon looking for liability insurance to cover its event. An insurance company can use data from past races (either in the same location or spread across a broad geography) to predict expected losses. Because half marathons have been around and been insured for decades, there is enough data for a credible analysis. Because OCR was almost nonexistent until 2010, insurance companies do not have that same degree of industry data. As with any emerging market (such as cyber liability, drone insurance, and self-driving cars), insurers do not know what to expect, and therefore, insurance premiums are priced higher to make up for the unknowns.

Another obstacle in the way of establishing a credible database is that not all obstacle course races are the same. When you decide to run a marathon, you know what to expect: run 26.2 miles. Road races might vary by elements such as terrain, local weather, and elevation changes, but overall, similar risks can be expected across all events. If you run a marathon in Chicago, it is similar to running a marathon in Miami. Likewise, insurers know what to expect with these traditional races. They can use past data and rely on well-established safety standards to determine the proper level of risk and premiums.

Obstacle courses do not have the same consistency. Running a Tough Mudder race in Minnesota is entirely different from running a Spartan race in Florida. The lack of standardization makes it difficult to price insurance policies. For example, if one race has a wall that is 20 feet high and another has one that is five feet high, both events pay the same premium even though the risk of injury from falling is greater with the 20-foot wall. These higher premiums can cause race organizers to pay more for insurance than necessary. The risks associated with one obstacle course can be completely different from the risks of another, but insurance companies will still price them much the same, as there is not enough historical data to allow for differentiation in the policies.

If the industry developed a consistent and credible database of obstacles, insurers would be able to price each race accurately based on the risk of its individual obstacles. In fact, with such a database, races could even be tailored to fit a specific target “riskiness,” selecting obstacles that result in an organizer-preferred premium amount. The current one-size-fits-all approach is not an efficient use of funds for race organizers.

The article was coauthored by Jenna Hildebrandt, an actuarial science student at the University of Wisconsin – Madison.
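
To make the obstacle-level pricing the excerpt describes concrete, here is a hypothetical sketch of how it might work once such a database existed. The obstacle types, claim frequencies, severities, and expense load below are invented for illustration; they are not drawn from the article or from any industry data.

```python
# Hypothetical sketch: pricing an obstacle course race from obstacle-level
# data. All figures below are illustrative assumptions, not industry data.

# Expected annual loss per participant for each obstacle type:
# (claim frequency per participant) x (average claim severity).
OBSTACLE_RISK = {
    "wall_5ft":   {"frequency": 0.0002, "severity": 4_000},
    "wall_20ft":  {"frequency": 0.0010, "severity": 15_000},
    "mud_crawl":  {"frequency": 0.0001, "severity": 1_500},
    "rope_climb": {"frequency": 0.0005, "severity": 8_000},
}

def expected_loss_per_participant(obstacles):
    """Sum the expected loss contribution of each obstacle on the course."""
    return sum(
        OBSTACLE_RISK[o]["frequency"] * OBSTACLE_RISK[o]["severity"]
        for o in obstacles
    )

def race_premium(obstacles, participants, expense_loading=1.35):
    """Pure premium for the event, grossed up by an illustrative expense load."""
    return expected_loss_per_participant(obstacles) * participants * expense_loading

# A course with a 20-foot wall is priced above one with a 5-foot wall,
# instead of both paying the same one-size-fits-all premium.
print(race_premium(["wall_20ft", "mud_crawl", "rope_climb"], participants=2_000))
print(race_premium(["wall_5ft", "mud_crawl", "rope_climb"], participants=2_000))
```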

Validating machine-learning models

While machine-learning techniques can improve business processes, predict future outcomes, and save money, they also increase modeling risk because of their complex and opaque features. In this article, Milliman’s Jonathan Glowacki and Martin Reichhoff discuss how model validation techniques can mitigate the potential pitfalls of machine-learning algorithms.

Here is an excerpt:

An independent model validation carried out by knowledgeable professionals can mitigate the risks associated with new modeling techniques. In spite of the novelty of machine-learning techniques, there are several methods to safeguard against overfitting and other modeling flaws. The most important requirement is for the team performing the model validation to understand the algorithm. If the validators do not understand the theory and assumptions behind the model, they are unlikely to perform an effective validation of the process. After demonstrating an understanding of the model theory, the following procedures are helpful in performing the validation.

Outcomes analysis refers to comparing modeled results to actual data. For advanced modeling techniques, outcomes analysis is a simple yet useful approach to understanding model interactions and pitfalls. One way to understand model results is to plot the independent variable against both the actual and predicted outcomes, along with the number of observations. This allows the user to visualize the univariate relationship within the model and to see whether the model is overfitting to sparse data. To evaluate possible interactions, cross plots can also be created, looking at results in two dimensions rather than one. Dimensionality beyond two dimensions becomes difficult to evaluate, but looking at simple interactions does provide a useful initial understanding of how the model behaves with the independent variables….
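
As an illustration of the univariate outcomes analysis described above, the sketch below bins one independent variable and plots actual against predicted outcomes, with the observation count per bin on a secondary axis. The data set and the column names (“ltv”, “actual”, “predicted”) are assumptions made for the example, not part of the article.

```python
# A minimal sketch of a univariate outcomes-analysis plot, assuming a
# validation data set with actual and predicted outcome columns.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

def outcomes_plot(df, feature, actual="actual", predicted="predicted", bins=20):
    """Plot actual vs. predicted outcome rates across one independent
    variable, with the observation count per bin on a secondary axis."""
    grouped = df.groupby(pd.cut(df[feature], bins=bins), observed=True)
    summary = grouped.agg(
        actual_rate=(actual, "mean"),
        predicted_rate=(predicted, "mean"),
        n_obs=(actual, "size"),
    )
    x = range(len(summary))

    fig, ax1 = plt.subplots()
    ax1.plot(x, summary["actual_rate"], marker="o", label="Actual")
    ax1.plot(x, summary["predicted_rate"], marker="s", label="Predicted")
    ax1.set_xlabel(feature)
    ax1.set_ylabel("Outcome rate")
    ax1.legend(loc="upper left")

    # Bins with few observations flag regions where divergence between
    # actual and predicted may reflect overfitting to sparse data.
    ax2 = ax1.twinx()
    ax2.bar(x, summary["n_obs"], alpha=0.2)
    ax2.set_ylabel("Observations")
    plt.show()

# Illustrative synthetic data: "ltv" is a hypothetical model input.
rng = np.random.default_rng(0)
df = pd.DataFrame({"ltv": rng.uniform(0.3, 1.1, 5_000)})
df["actual"] = (rng.uniform(size=len(df)) < df["ltv"] * 0.05).astype(int)
df["predicted"] = df["ltv"] * 0.05
outcomes_plot(df, "ltv")
```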

…Cross-validation is a common strategy to help ensure that a model isn’t overfitting the sample data it’s being developed with. Cross-validation has long been used to help ensure the integrity of other statistical methods, and with the rising popularity of machine-learning techniques, it has become even more important. In cross-validation, a model is fitted using only a portion of the sample data. The model is then applied to the other portion of the data to test performance. Ideally, a model will perform equally well on both portions of the data. If it doesn’t, it’s likely that the model has been overfit.
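
A minimal version of the cross-validation procedure the authors describe, using scikit-learn. The gradient boosting model and the synthetic data are illustrative stand-ins for whatever algorithm and sample data are actually under validation.

```python
# A minimal k-fold cross-validation sketch with scikit-learn.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold, cross_val_score

# Stand-in sample data (1,000 observations, 10 features).
X, y = make_regression(n_samples=1_000, n_features=10, noise=10.0, random_state=0)

model = GradientBoostingRegressor(random_state=0)

# Fit on four folds, score on the held-out fifth, rotating five times.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")

# Out-of-sample scores well below in-sample performance suggest overfitting.
print(f"mean R^2: {scores.mean():.3f} (+/- {scores.std():.3f})")
```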

Liquidity risk: A wolf in sheep’s clothing?

Liquidity risk is one of those risks we often don’t pause to think much about, but it can wreak havoc on a business if not kept constantly in check. It is also a risk that has become heightened in recent times because of a combination of regulatory and macroeconomic developments. Companies can grow complacent about liquidity risk, especially if they have tended to generate cash consistently through ongoing operating performance. However, certain activities, such as mergers and acquisitions (M&A), a new product launch, or a regulatory development, can give rise to new exposures. It’s worth reminding ourselves of some of the key drivers of exposure to liquidity risk, and what we can do to manage and mitigate it.

In Europe, the ability to recognize negative best estimate liabilities on the solvency balance sheet (effectively capitalizing estimated future profits on books of in-force business and treating those profits as immediately available to absorb losses) requires companies to be extra vigilant. In reality, such assets may be far from liquid, unless they can be repackaged through value-in-force (VIF) monetization or used to secure reinsurance financing of some sort. The same may be said of deferred tax assets, except that these may be even less liquid, unless they can be sold to other entities within a group structure.

Other aspects of the liability side of the balance sheet can also pose liquidity challenges. Take, for example, a company with a range of unit-linked funds operating on a t+1 basis (i.e., settlement occurs one day after the transaction date) and a further range of funds operating on a t+2 basis. Policyholder fund switches out of the t+2 funds and into the t+1 funds can leave the company needing to provide liquidity to settle the purchase of the t+1 assets before payment is received from the sale of the t+2 assets. Depending on the volume of transactions, which could be significant, firms may struggle to provide such financing on an ongoing basis. More severe examples of firms struggling to cope with fund switch activity have included the suspension of redemptions from property funds, albeit driven more by the underlying illiquidity of the assets than by the nature of the pricing basis, though ultimately leading to similar problems. Funds that permit a mix of individual and corporate investors may be particularly susceptible, as corporate investors can potentially move vast sums of money very quickly, before redemptions are suspended, leaving individual investors to face the resulting lack of liquidity.
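
A toy calculation can make the settlement mismatch concrete. The sketch below assumes hypothetical daily switch volumes out of t+2 funds into t+1 funds and tracks the firm’s running cash position: the purchase of the t+1 assets settles a day before the sale proceeds of the t+2 assets arrive, and the negative running balance is the liquidity the firm must bridge.

```python
# Illustrative sketch of the t+1 / t+2 settlement mismatch. The switch
# volumes are hypothetical; the point is that each switch creates a
# one-day funding gap between the two settlement legs.

# Daily policyholder switches out of t+2 funds into t+1 funds (EUR),
# keyed by trade date.
switches = {0: 5_000_000, 1: 8_000_000, 2: 3_000_000}

cash = 0.0
for day in range(5):
    outflow = switches.get(day - 1, 0.0)  # pay for t+1 assets bought yesterday
    inflow = switches.get(day - 2, 0.0)   # receive proceeds of the t+2 sale
    cash += inflow - outflow
    print(f"day {day}: net cash position {cash:+,.0f}")

# The running position dips negative on the days between the two
# settlement legs: that shortfall is the financing the firm must provide.
```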
