Speed, flexibility and rapid scalability – all are becoming essential features of the computing systems required to support insurers’ modelling activity. Three recent case studies illustrate the art of the possible when it comes to achieving all three goals.

The challenge of providing actuaries with flexible, highly scalable computational grid solutions has come sharply into focus as insurers respond to demand for more detailed analysis within tighter timeframes.

Increasingly, traditional on-premises data centres struggle to provide the flexibility required. Grids built on High-Performance Computing (HPC) capabilities can run large models with enormous numbers of scenarios in parallel, but not only do they typically require an insurer’s IT function to build such environments, they can also quickly be overrun by the workload thrust upon them.

For example, tighter timeframes, often forced by regulatory requirements such as Solvency II and IFRS, mean runs need greater parallel capability: more servers and more compute cores, which in turn means more investment. Naturally, companies have only a finite budget for continued grid growth and are very unlikely to increase the frequency of hardware refreshes just to gain access to higher-performing servers.

Case study 1: Customised cloud

Actuaries can control and run their own grid solutions without the support or investment from their IT team

Faced with this challenge, a Willis Towers Watson actuarial outsource team turned to the vGrid solution. Using Microsoft Azure’s in-cloud scheduling capability, integrated into our RiskAgility FM software, it is now possible to push an entire run from workstation to cloud without using any on-premises HPC environment. Actuaries can control and run their own grid solutions without support or investment from their IT team.

Within a typical quarter, the outsourced client team runs multiple test models as part of a development cycle, followed by a production process of around 30 stochastic model runs used to calculate the capital requirement under Solvency II. In addition, five supplementary runs are needed to complete the necessary reports.

Simon Skinner, Actuarial Outsource Director of Willis Towers Watson, explains the set-up. “Our initial project specification for vGrid was the ability to use up to 480 cores at any time, split into two pools of 320 and 160 cores to allow us to scale accordingly for production and test runs. The model is designed for heavy distributed processing, because over 340,000 tasks are run to determine the appropriate stress to use for some of the Solvency II non-market stress runs. The environment was available in a matter of days.”
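
As a rough illustration of the pool-based set-up Skinner describes, the Python sketch below fans a large batch of tasks over a fixed-size worker pool. It is a toy model under stated assumptions: the actual vGrid scheduling inside RiskAgility FM and Azure is not public, and the placeholder task, local pool sizes and chunk size are ours.

```python
# Illustrative only: a toy scheduler that fans a large batch of tasks over
# a fixed-size worker pool, mimicking the 320/160-core split described
# above. The real vGrid scheduling is not shown here; the task function
# and chunk size are assumptions for illustration.
from concurrent.futures import ProcessPoolExecutor

PRODUCTION_CORES = 320   # pool reserved for production runs
TEST_CORES = 160         # pool reserved for development/test runs
N_TASKS = 340_000        # stress-calibration tasks mentioned above

def run_task(task_id: int) -> float:
    """Placeholder for one stress-calibration task."""
    return float(task_id)  # a real task would evaluate a model segment

def run_on_pool(task_ids, cores: int) -> list:
    """Fan tasks out over a pool of `cores` workers and gather results."""
    with ProcessPoolExecutor(max_workers=cores) as pool:
        return list(pool.map(run_task, task_ids, chunksize=1_000))

if __name__ == "__main__":
    # Production work goes to the large pool; test runs use the small one,
    # so a burst of test activity cannot starve a production run.
    results = run_on_pool(range(N_TASKS), PRODUCTION_CORES)
    print(f"completed {len(results):,} tasks")
```

Keeping production and test pools separate is the point of the 320/160 split: each workload gets guaranteed capacity rather than contending for a shared grid.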

Although production runs were already approximately 150% faster on vGrid than on the more traditional HPC environment, the team wanted to run projections faster still, and so requested that the total number of available cores be increased to 960. This accelerated the runs by a further 50%.
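
A rough way to interpret that scaling, assuming the further 50% gain came purely from doubling the cores, is Amdahl’s law; this framing is ours, not the article’s.

```python
# A rough sanity check using Amdahl's law (our framing, not the article's):
# if doubling the cores from 480 to 960 made runs ~50% faster, what
# fraction of the workload must have been parallel?
def relative_time(parallel_fraction: float, cores: int) -> float:
    """Run time of a job, relative to one core, under Amdahl's law."""
    serial = 1.0 - parallel_fraction
    return serial + parallel_fraction / cores

# Solving T(480) / T(960) = 1.5 for the parallel fraction p gives
# p = 960/961, i.e. ~99.9% of the work parallelises.
p = 960 / 961
speedup = relative_time(p, 480) / relative_time(p, 960)
print(f"parallel fraction ~ {p:.4%}; speedup 480 -> 960 cores = {speedup:.2f}x")
```

A parallel fraction of around 99.9% would be consistent with Skinner’s description of a model designed for heavy distributed processing.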

The client team also benefits from:

  • Transparency of what is happening in vGrid, through access to a portal that monitors and evaluates runs in real time.
  • The ability to scale up and down on demand, with operational cost savings that far exceed what was previously possible with purchased assets depreciating at low utilisation rates: ideal for an outsource operation.
  • Dedicated pools that are available immediately and enable more parallel runs, resulting in greater operational efficiency for the team; gone is the shared, overstretched HPC grid.
  • Easy monitoring of cost, which allows for accurate budgeting.
  • User-managed data transfer, with the ability to purge data from the environment once a model run completes, assisting the proper governance and safekeeping of company information.

Case study 2: Acquisition support

All calculations were completed within two and a half days […] a dramatic example of vGrid’s speed, flexibility and scalability

Mergers and acquisitions can often produce short-term peaks in modelling demand that require companies to ramp up capacity and resources very quickly.

That was certainly the case when Willis Towers Watson was approached one Friday for some urgent help with valuing a block of business for a potential acquisition.

  • The valuation required 45 projection tasks to be completed over the weekend, where each projection:
    • Required 1,000 iterations across different economic scenarios
    • Involved between 5,000 and 48,000 model points
    • Involved a 30-year projection, with calculations performed monthly
  • Assuming an average of 26,500 model points, this works out to about 429 billion calculation steps (45 × 1,000 × 26,500 × 360; reproduced in the short sketch below), with each step itself involving many calculations.
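
That back-of-envelope estimate is easy to reproduce; the sketch below simply restates the numbers from the list:

```python
# Restating the back-of-envelope workload estimate from the list above.
projections = 45         # projection tasks for the valuation
iterations = 1_000       # economic-scenario iterations per projection
model_points = 26_500    # assumed average of the 5,000-48,000 range
time_steps = 30 * 12     # 30-year projection at monthly steps = 360

steps = projections * iterations * model_points * time_steps
print(f"{steps:,} calculation steps")  # 429,300,000,000 -> ~429 billion
```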

The team used 600 Microsoft HPC compute nodes within the Willis Towers Watson data centre, combined seamlessly with 504 Microsoft Azure compute nodes that were provisioned within 15 minutes. The combined 1,104 compute nodes allowed the calculations to be split and run in parallel so they would finish in much less time.
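
Purely for a sense of scale (the article gives no per-node timings), dividing the estimated steps evenly across the combined grid looks like this:

```python
# Scale only: how the estimated steps divide across the combined grid.
total_steps = 45 * 1_000 * 26_500 * 360   # ~429.3 billion, from above
nodes = 600 + 504                         # on-premises HPC + Azure = 1,104

print(f"~{total_steps / nodes / 1e6:.0f} million steps per node")  # ~389
```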

All calculations were completed within two and a half days so the client had the precise valuations they needed on Monday and could proceed with negotiations – a dramatic example of vGrid’s speed, flexibility and scalability.

Case study 3: Pushing the limits

In under two hours, and using over 100,000 processing cores, [to insure the world’s population] we came up with a figure of approximately $190 trillion, or roughly 2.5 times world GDP

These two examples demonstrate what companies are already achieving. But in the autumn of 2015, Willis Towers Watson decided to push the cloud computing and software technology to the limit. How? Well, how about calculating the cost of insuring the entire world’s population?

The calculation, run on the Microsoft Azure cloud platform, involved an analysis of the insurance cost of providing each of the world’s 7.3 billion people with a $100,000 whole-of-life insurance policy. In under two hours, and using over 100,000 processing cores, we came up with a figure of approximately $190 trillion, or roughly 2.5 times world GDP.
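
The headline figures are straightforward to sanity-check. Taking world GDP in 2015 as roughly $75 trillion (our assumption; the article does not state the figure it used):

```python
# Sanity-checking the headline figures. World GDP of ~$75 trillion in
# 2015 is our assumption; the article does not state the figure it used.
population = 7.3e9        # insured lives
total_cost = 190e12       # ~$190 trillion aggregate cost
world_gdp_2015 = 75e12    # assumed

print(f"~${total_cost / population:,.0f} per $100,000 policy")  # ~$26,027
print(f"{total_cost / world_gdp_2015:.1f}x world GDP")          # ~2.5x
```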

By our estimates, the calculation would have taken 19 years on a standalone computer with a single core. But the entire exercise – including a one-off set-up and configuration of the customised grid and model, in addition to running the model several times – took less than 24 hours and used data centres around the world, including Japan, India, Europe, Brazil and Australia – a powerful example of speed, flexibility and scalability.
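
The two estimates are also mutually consistent: 19 single-core years is roughly 166,000 core-hours of work, which 100,000 cores could in principle clear in under two hours at near-perfect scaling.

```python
# Consistency check: 19 single-core years versus under two hours on
# 100,000+ cores, assuming near-perfect scaling.
single_core_hours = 19 * 365.25 * 24   # ~166,500 core-hours of work
cores = 100_000

print(f"~{single_core_hours / cores:.1f} hours on {cores:,} cores")  # ~1.7
```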

Jonathan Silverman, Insurance Industry Solutions Director at Microsoft Corp., said: “What question could be bigger for a life insurance company than figuring out the cost of insuring the entire world's population? Answering this complex question, albeit a theoretical one, in less than 24 hours and using just a single programming interface on Microsoft Azure to do it, regardless of the number of cores involved, shows how easy it has become to achieve a level of scalability that was previously only possible through complex coding and intense management input.”

The possible redefined

With insurers’ modelling needs only likely to increase due to requirements such as Solvency II quarterly reporting templates, the key information document for Packaged Retail and Insurance-based Investment Products (PRIIPs) and IFRS 4 Phase 2, computing infrastructure will need to keep pace.

Already, the traditional boundaries of speed, flexibility and scalability are being redefined. Cloud computing makes it possible.