Milliman has announced the results of a first-of-its-kind study assessing the feasibility of a private flood insurance market in several key states across the United States. The study, conducted in collaboration with risk modeling firm KatRisk, modeled private flood insurance risk and potential premiums for all single-family homes in Florida, Louisiana, and Texas, which combined account for 56% of National Flood Insurance Program (NFIP) policies in force nationwide. The study includes all single-family homes in those states, not only those currently insured through the NFIP, and the modeled NFIP premiums do not include the effects of grandfathering. The estimated private insurance premiums were developed using reasonable assumptions selected by Milliman.
Key findings include:
• For all single-family homes, 77% in Florida, 69% in Louisiana, and 92% in Texas could see cheaper premiums with private insurance than with the NFIP.
• In Florida, 44% of homes modeled could see premiums less than one-fifth that of the NFIP, while the same holds true for 42% of homes in Louisiana and 70% of homes in Texas.
• Conversely, private insurance would cost at least double the NFIP premium for 14% of single-family homes in Florida, 21% in Louisiana, and 5% in Texas.
• Across Special Flood Hazard Areas (SFHAs)—the high-risk zones in which flood insurance is mandatory—private insurance could offer cheaper premiums than the NFIP for 49% of single-family homes in Florida, 65% in Louisiana, and 77% in Texas.
The catastrophic rainstorms in Louisiana in 2016 are one example of the devastating financial effect flooding can have on communities outside mandatory purchase areas. “A thriving private insurance market would provide wider and in many cases less expensive options that could protect more U.S. consumers, expand the awareness of the need for flood insurance, and spread the risk beyond the NFIP,” the report says.
Many insurance companies have typically used the Black model as a benchmark pricing model to derive the implied volatility quote, often referred to as Black volatility. Euro interest rate movements over the past six to nine months, however, have unveiled a major drawback of the Black volatility quote, one that can significantly affect current best-practice approaches in insurance companies’ risk and valuation models. Milliman consultants provide perspective in this research report.
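The drawback can be made concrete with a small sketch (illustrative only, not taken from the report): the Black formula takes the logarithm of the forward rate, so a lognormal Black volatility cannot be quoted once euro forwards turn negative, whereas a normal (Bachelier) volatility remains well defined. All function names and parameter values below are assumptions for illustration.

```python
import math

def _N(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_call(F, K, sigma, T):
    """Black (lognormal) forward call price; undefined for non-positive rates."""
    if F <= 0 or K <= 0:
        raise ValueError("Black model requires positive forward and strike")
    d1 = (math.log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return F * _N(d1) - K * _N(d2)

def bachelier_call(F, K, sigma_n, T):
    """Bachelier (normal) forward call price; valid for any sign of F and K."""
    s = sigma_n * math.sqrt(T)
    d = (F - K) / s
    phi = math.exp(-0.5 * d ** 2) / math.sqrt(2.0 * math.pi)
    return (F - K) * _N(d) + s * phi
```

For a negative euro forward, `black_call` raises while `bachelier_call` still returns a finite premium, which is one reason normal volatility quotes gained traction in that environment.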
Adaero is a stochastic modeling software platform that walks users through the financial model building process. It helps organizations develop financial models quickly and efficiently.
The platform automatically builds financial models as users answer questions about their model needs and provide risk assessment and financial data. The software platform assembles risk register models, risk-adjusted capital expenditure models, and integrated financial statement models.
Adaero allows users to iterate multiple risk scenarios and stress test financials with different sets of assumptions. Adaero incorporates risk inventories by industry and has a model risk management process embedded into each file. Because Adaero is Excel-based, the file format, icons, and menus are simple and intuitive for the user.
Even without the advent of Solvency II and the appeal of internal models that measure capital more accurately, it’s likely that the events following the global financial crisis (GFC) would have sharpened European insurance companies’ risk modeling capabilities.
In Asia, insurance companies are also investing significant resources in developing their own economic capital models. Boards of directors have been charged with the measurement of risk and the need to plan their capital requirements through such things as an Own Risk and Solvency Assessment (ORSA) and an Internal Capital Adequacy Assessment Process (ICAAP) in Singapore and Malaysia, respectively.
Much has already been written about building complex Monte Carlo engines to calculate risk measures. This report by Milliman’s Clement Bonnet and Nigel Knowles addresses a question about the front end of the risk measurement process: How do we project our yield curve?
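As one illustration of that front-end question (a sketch under assumed dynamics, not the approach the report describes), a one-factor Vasicek short-rate model gives both a closed-form projected yield curve and simulated short-rate paths to feed a Monte Carlo engine. All parameter values below are invented.

```python
import math
import random

# Vasicek dynamics: dr = a*(b - r) dt + sigma dW, with invented parameters.

def vasicek_yield(r0, a, b, sigma, T):
    """Continuously compounded zero-coupon yield for maturity T under Vasicek."""
    B = (1.0 - math.exp(-a * T)) / a
    log_A = (b - sigma ** 2 / (2.0 * a ** 2)) * (B - T) - sigma ** 2 * B ** 2 / (4.0 * a)
    # P(0, T) = exp(log_A - B * r0); yield = -ln P / T
    return -(log_A - B * r0) / T

def project_short_rate(r0, a, b, sigma, dt, n_steps, rng):
    """Euler simulation of one short-rate path (for a Monte Carlo engine)."""
    r = r0
    path = [r]
    for _ in range(n_steps):
        r += a * (b - r) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(r)
    return path

# Projected curve at four illustrative maturities (years).
curve = [vasicek_yield(0.01, 0.1, 0.03, 0.01, T) for T in (1, 5, 10, 30)]
```

With the short rate (0.01) starting below the long-run mean (0.03), the projected curve slopes upward, as expected from the mean-reverting drift.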
Techniques for assessing operational risk have come a long way in the past 10 years. Today, many companies are going beyond the regulatory minimum to implement sophisticated models that contribute to better understanding and management of operational risk across the business.
One question that tends to push the limits of existing models, however, is identifying emerging operational risk before it produces a loss. Given that risk events are typically not entirely new but rather simply new combinations of known risks, an approach that enables us to analyze which risk drivers exhibit evolutionary change can identify which ones are most likely to create emergent risks. By borrowing a technique from biology—phylogenetics, the study of evolutionary relationships—we can understand how certain characteristics of risk drivers evolve over time to generate new risks. The success of such an approach is heavily dependent on the degree to which operational risk loss data is available, coherent, compatible, and comprehensive. A well-structured loss data collection (LDC) framework can be a key asset in attempting to understand and manage emergent risks.
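A minimal sketch of the idea follows; all driver names and trait sets are invented, and the simple single-linkage agglomerative clustering below stands in for proper phylogenetic inference. Each risk driver is described by a set of characteristics, distances reflect how few traits two drivers share, and repeated merging produces a tree in which an emerging risk sits on the same branch as the known risks it recombines.

```python
def jaccard_distance(a, b):
    """1 minus the share of characteristics two risk drivers have in common."""
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

def single_linkage_tree(drivers):
    """Agglomerative clustering of {name: trait set}; returns a nested tuple."""
    clusters = [(name, [traits]) for name, traits in drivers.items()]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(jaccard_distance(a, b)
                        for a in clusters[i][1] for b in clusters[j][1])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        merged = ((clusters[i][0], clusters[j][0]),
                  clusters[i][1] + clusters[j][1])
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters[0][0]

# Invented example: a new AI-enabled fraud branches off known email fraud.
drivers = {
    "phishing":       {"external", "fraud", "email"},
    "deepfake_fraud": {"external", "fraud", "email", "ai"},
    "server_outage":  {"internal", "it", "hardware"},
}
tree = single_linkage_tree(drivers)
```

In the toy output, the two fraud drivers pair off first, suggesting where a new combination of known traits is most likely to emerge; in practice the trait sets would come from a well-structured LDC framework.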
Broadening the definition of operational risk
In the financial industry, where operational risk has been a significant target of regulators for more than a decade, operational risk is typically defined as “the risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events.” However, this definition doesn’t consider all the productive inputs of an operation, and, more critically, does not account for the interaction between internal and external factors.
A broader, more useful definition is “the risk of loss resulting from inadequate or failed productive inputs used in an operational activity.” Operational risk includes a very broad range of occurrences, from fraud to human error to information technology failures. Different production factors can be more or less important among various industries and companies, and relationships among them—particularly where labor is concerned—are changing rapidly. To be effective as tools for managing operational risk day-to-day, models need to account for the specific risk characteristics of a given company as well as how those characteristics can change over time.
Examples of productive inputs relevant for operational risk
• The physical space used to carry out the production process, which may be owned, rented, or otherwise utilized.
• Naturally occurring goods such as water, air, minerals, flora, and fauna.
• Physical work performed by people.
• The value that employees provide through the application of personal skills that are not owned by an organization.
• The supportive infrastructure, brand, patents, philosophies, processes, and databases that enable human capital to function.
• The stock of trust, mutual understanding, shared values, and socially held knowledge, commonly transmitted throughout an organization as part of its culture.
• The stock of intermediate goods and services used in the production process, such as parts, machines, and buildings.
• The stock of public goods and services used but not owned by the organization, such as roads and the Internet.
Every organization tries to reduce operational risk as a basic part of day-to-day operations, whether that means enforcing safety procedures or installing antivirus software. Yet fewer take the next steps: holistically assessing operational risk, quantifying the severity, likelihood, and frequency of different risks, and understanding the interdependencies among risk drivers. Companies may see operational risk modeling as an unnecessary cost, or they may not have considered it at all. Yet the right approach to modeling operational risk can support a wide range of best practices within an organization, including:
• Risk assessment: Measuring an organization’s exposure to the full range of operational risks to support awareness and action.
• Economic capital calculation: Setting capital reserves that enable organizations to survive adverse operational events without tying up excessive capital.
• Business continuity and resilience planning: Discovering where material risks lie and changing systems, processes, and procedures to minimize the damage to operations caused by an adverse event.
• Risk appetite and risk limit setting: Creating a coherent policy concerning the amount of operational risk an organization is willing to accept, and monitoring it to ensure the threshold is not breached.
• Stress testing: Modeling how an organization performs in an adverse situation to aid in planning and capital reserving.
• Reverse stress testing: Modeling backward from a catastrophic event to understand which risks are most material to an organization’s solvency.
• Dynamic operational risk management: Monitoring, measuring, and responding to changing characteristics of operational risk due to shifts in the operating environment, risk management policies, or company structure.
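The economic capital calculation above is commonly implemented as a frequency-severity Monte Carlo. The sketch below is illustrative only: the distribution choices (Poisson frequency, lognormal severity) and every parameter value are assumptions, not any specific Milliman model. It simulates many annual loss totals and reads capital off the tail of that distribution.

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's method for a Poisson sample (adequate for small lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_annual_loss(lam, mu, sigma, rng):
    """One simulated year: Poisson event count, lognormal loss per event."""
    n_events = poisson_draw(lam, rng)
    return sum(rng.lognormvariate(mu, sigma) for _ in range(n_events))

def operational_var(lam, mu, sigma, n_sims=20000, q=0.995, seed=1):
    """Quantile (value-at-risk) of the simulated annual-loss distribution."""
    rng = random.Random(seed)
    losses = sorted(simulate_annual_loss(lam, mu, sigma, rng)
                    for _ in range(n_sims))
    return losses[min(int(q * n_sims), n_sims - 1)]

# Invented parameters: 3 events/year on average, median loss e^10 per event.
capital = operational_var(lam=3.0, mu=10.0, sigma=1.5)
```

The same simulated loss distribution also serves stress testing (shock the parameters) and reverse stress testing (examine which simulated scenarios breach a solvency threshold).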
At the more basic level, having a detailed understanding of operational risk simply supports efforts to manage and reduce it—a worthy goal for almost any organization. Modeling enables an organization to consciously set an appropriate balance between operational resilience and profitability.
To achieve these goals, it is important to choose a methodology whose results are accessible and actionable for the decision makers on the front lines of operational risk. Even financial organizations that once chose models primarily to meet regulatory requirements are beginning to move toward models that help them actively understand and reduce operational risk. The tangible business benefits are simply too great to ignore.