This article is the first of two that document a project between Willis Towers Watson and Hannover Re. The project consisted of two parts, both related to the use of SQL databases combined with an Excel front-end. The first stage, documented in this article, covered process optimisation. The second covered assumption management of MoSes/RiskAgility FM models, and will be described in a future issue.


In the current market, the insurance sector is dominated by buzzwords such as Solvency II, ORSA, quarterly reporting and fast close, all of which increase the significance of time. Insurance companies – the actuarial departments in particular – are facing enormous pressure from reporting requirements and standards, internal as well as external. The well-known German saying ‘After the match is before the match’ can easily be transferred to the insurance sector: ‘After the reporting is before the reporting.’ In other words, the time lag between two reporting periods has shrunk rapidly in recent years.

Furthermore, the frequency of inquiries from in-house division managers or the managing board has risen. These so-called ‘ad-hoc analyses’ need to be carried out in addition to regular work.

In summary, more complex questions have to be answered in greater detail and at a higher frequency.

Focusing on projection model tasks, and assuming unlimited resources in the actuarial departments, increased reporting requirements can only be fulfilled by changing the following two parameters:

  1. Reducing the runtime of the projection models
  2. Optimising the linked processes

In the case of the first parameter, both quantitative (technical) and qualitative (code and model optimisation) solutions exist. We provide technical (vGrid, HPC, Azure) as well as qualitative solutions, which benefit from our extensive knowledge of actuarial modelling.

The second parameter is a more bespoke concern and depends significantly on the company itself. A large number of processes may require optimisation, and consideration will need to be paid to the time taken by each individual process, the interaction between them, and the extent to which optimisation is possible at all.

Case study

In this project, the process of creating modelpoints and their related assumptions was identified as a critical step that needed to be optimised. The motivation for the project was simple: in order to answer an inquiry using a projection tool, a short runtime is beneficial only if the tool can be made ready for use quickly and efficiently. Consequently, the less time-consuming the preparations are, the more in-depth the analysis of the results can be – time is quality.

The creation of modelpoints for reinsurance business mainly depends on the underwriters’ estimations due to a lack of readily available information. Consequently, a manual process was required to involve the underwriting departments.

Status quo

  • The company-wide input assumptions for the model points were Excel-based. Numerous versions of this Excel workbook were sent via email or stored on a file share.
  • Manual work steps that were difficult to reproduce.
  • The entire process was too time-consuming.


Targets

  • Implement an automated process, freeing up time for the required qualitative analysis.
  • Achieve higher reproducibility and security.
  • Embed approval processes within the new automated process.

In the first step of the new process, HR’s modelling team creates a quarterly report of inforce data for parts of the business, a change from the previous semi-annual cycle.

Within this step, the overall data is filtered and prepared before it is passed to the second work step. Next, the global departments enter relevant information, such as margins and lapse assumptions. At this stage, information relating to existing treaties can be reused, reducing the workload. Finally, modelpoints are created based on aggregated figures using predefined modelpoint templates (Figure 1).

As the whole process was completed entirely in Excel, it resulted in significant manual work for the modelling team at HR. The team needed to set up links to the previous quarter’s data, copy thousands of data sets and carry out extensive manual quality control. Furthermore, the whole process needed to be completed for the best estimate and repeated for every calculation step included in the analysis of change.

The cornerstone of this project was the analysis of the above process and the subsequent discussions on how it could be optimised. The project team decided to implement a process based on a SQL database in order to achieve the targets. A database can fulfil all requirements relating to approval, security and reproducibility. More importantly, it was possible to move the manual process into an automated, user-defined one.

This was possible as a result of:

  • Uniquely identifiable data
  • Aggregation rules that may change over time, but are unambiguous for a specific reporting period
  • Calculation routines that can be fitted into an algorithm using a decision tree.
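To illustrate how such period-specific aggregation rules can drive an automated modelpoint build, the sketch below groups uniquely identifiable treaty records into modelpoints. The field names, grouping keys and the volume-weighted lapse calculation are hypothetical stand-ins, not the rules used in the project:

```python
from collections import defaultdict

# Hypothetical aggregation rules: the grouping keys may change between
# reporting periods, but are unambiguous for any specific period.
AGGREGATION_KEYS = {
    "2016Q4": ("line_of_business", "currency"),
    "2017Q1": ("line_of_business", "currency", "treaty_type"),
}

def create_modelpoints(treaties, period):
    """Aggregate uniquely identifiable treaty records into modelpoints."""
    keys = AGGREGATION_KEYS[period]
    groups = defaultdict(list)
    for treaty in treaties:
        groups[tuple(treaty[k] for k in keys)].append(treaty)
    modelpoints = []
    for group_key, members in groups.items():
        volume = sum(t["volume"] for t in members)
        # Volume-weighted average lapse assumption for the modelpoint.
        lapse = sum(t["volume"] * t["lapse_rate"] for t in members) / volume
        modelpoints.append(
            {"key": group_key, "volume": volume, "lapse_rate": lapse}
        )
    return modelpoints

treaties = [
    {"id": "T1", "line_of_business": "Life", "currency": "EUR",
     "treaty_type": "QS", "volume": 100.0, "lapse_rate": 0.05},
    {"id": "T2", "line_of_business": "Life", "currency": "EUR",
     "treaty_type": "QS", "volume": 300.0, "lapse_rate": 0.09},
]
mps = create_modelpoints(treaties, "2016Q4")
```

Because the rules are held as data rather than hard-coded, a new reporting period only requires a new configuration entry, not a change to the routine itself.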

The process was implemented in the database, but retained the capability to control and manipulate the data within Excel, minimising the process changes and additional training requirements that would result from introducing new software.

Another challenge was the trade-off between automation and the flexibility offered by manual adjustments. Of course, everyone dreams of an ideal process that is almost fully automated, allows for the input of expert judgement when required, and provides instant results at the click of a button. In practice, however, the user still needs to be involved in the process in order to apply actuarial knowledge.

The process described above was eventually implemented in a SQL database that interacts with an Excel front end implemented as an add-in (Figure 1). Thus, the user can work within the familiar Excel interface, where the database connection does not interrupt their work steps. When the user clicks a ‘save’ button, for example, a stored procedure of the database is called that stores the visible data in a database table.
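The save mechanism can be sketched as follows. This is a minimal illustration of the pattern only: SQLite (which has no stored procedures) stands in for the project’s SQL database, a parameterised insert plays the role of the stored procedure, and the table and field names are hypothetical:

```python
import sqlite3

# Stand-in for the project's SQL database. In the real add-in, the 'save'
# button invokes a stored procedure on the server; here a parameterised
# INSERT inside a transaction plays that role.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE treaty_data (treaty_id TEXT, field TEXT, value REAL)"
)

def on_save_clicked(visible_rows):
    """Handler for the ribbon's 'save' button: push the sheet's visible
    data into a database table in a single transaction."""
    with conn:  # commits on success, rolls back on error
        conn.executemany(
            "INSERT INTO treaty_data (treaty_id, field, value) "
            "VALUES (?, ?, ?)",
            visible_rows,
        )

on_save_clicked([("T1", "margin", 0.02), ("T1", "lapse_rate", 0.05)])
saved = conn.execute("SELECT COUNT(*) FROM treaty_data").fetchone()[0]
```

Wrapping the whole save in one transaction means a partially failed save leaves the database unchanged, which supports the reproducibility target.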

Figure 1. The Excel add-in adds a new ribbon tab


The process itself was divided into several substeps, allowing the user to interact and adjust parameters in Excel – that is, at user interaction points (Figure 2).

Figure 2. Optimised process


Project set-up

As mentioned previously, time is a scarce resource in the insurance sector. The design of an efficient project was therefore a critical challenge, in order to ensure as smooth a process change as possible and minimise the burden on the HR team. To achieve this, the project was organised in ‘sprints’.

New targets and timelines were agreed and (re)prioritised between us and HR at the beginning of each sprint. At the conclusion of each sprint, a handover to HR would occur that allowed for a testing phase of the new components. The last sprint included a user acceptance test phase in which the whole product was tested and integrated into the day-to-day work processes (Figure 3).

In this project HR benefitted from our full range of consulting experience:

  • Analysis of the process and conceptual design of the database and the related Excel add-in by senior consultants
  • Database set-up and implementation of basic routines by IT specialists of Technical Consultant Services (TCS) from Reigate, UK
  • Further development and implementation of HR’s feedback by consultants of Risk Consultant Services (RCS) from Cologne, Germany.

Figure 3. Project set-up



The first major step in implementing the new process was the MCEV restatement projection run to assess the impact of the model changes. This was a time-consuming and challenging task requiring a significant amount of training, data analysis and transformation. This step was carried out by the modelling team and was integrated within the user acceptance test phase. The HR modelling team used real data to test the tool and gave feedback on a regular basis. The system was then modified by Willis Towers Watson based on this feedback, resulting in a tool that was fully accepted and integrated into the day-to-day work.

The first step was carried out locally and by employees with an actuarial background; the second step consisted of “going live”. At this stage, the tool was integrated into the communication process between the HR modelling team and the global underwriting departments. The underwriters were able to update the data of their specific treaties using Excel, which was then passed directly to the database. Any individual underwriter was only able to access and edit information relating to treaties assigned to their own department. Once the data was stored in the database, it was available for further analysis carried out by the modelling team and could be used to create modelpoints and assumption tables in an automated process.

The database tool includes a number of enhanced data quality features:

  • The data is automatically checked once passed to the database, where validation rules can be easily parameterised for each data field. For example, the underwriter will receive a warning message if a percentage value larger than 100% is entered. These checks help to reduce errors in later steps, such as a failed projection run due to a lapse rate of 150%.
  • The modelling team can define required and optional data fields. Once this has been set up, a list of incomplete treaties per underwriting centre can be generated. This reduces unnecessary work downstream since any relevant missing data will be flagged immediately.
  • Once the data is complete and approved, the modelling team is able to close a data set. At this point, it is not possible to change anything in the data – unless the data set is re-opened for material reasons. The closing and re-opening dates of data sets are stored in a log file for audit purposes.
  • Furthermore, the database facilitates the analysis of results since it allows direct comparison between data sets, for example, comparison of the data sets at the previous and new valuation date on an aggregated or even on a treaty basis.

The automated tool moves the whole process onto a new level. The focus of the work is now on the data itself and the change in data – instead of on data collection from the different departments. However, additional improvements are always possible, and some ideas (along with technical solutions) on how to enhance the new process already exist. The main issue is the uncertainty of linking results and assumptions. HR is now in a position to track the derivation of assumptions and modelpoints from the raw data to the final data.

However, the export from the database to the MoSes assumption folders is still controlled manually. Therefore, a solution has been developed in which the export is controlled by MoSes (or RiskAgility FM) itself. This ensures that, before running a projection, all required data is exported from the database into the model folder, guaranteeing that the run is carried out with the appropriate data. This idea is discussed in greater detail in a follow-up article describing the second part of the project, which relates to assumption management. The assumption management is implemented in the same Excel add-in as the modelpoint process and moves the whole management of MoSes/RiskAgility FM assumptions from Excel workbooks to a SQL database.
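The pre-run export idea can be sketched as a hook that refuses to run unless the data set has been closed, then writes the required tables into the model folder. Everything here is a hypothetical stand-in: the in-memory tables replace the database, CSV files replace the assumption-folder format, and the function and table names are illustrative only:

```python
import csv
import tempfile
from pathlib import Path

# In-memory stand-ins for database tables and for the set of data sets
# that the modelling team has closed (approved).
DATABASE = {
    "lapse_assumptions": [("Life", 0.05), ("Health", 0.03)],
}
CLOSED_DATA_SETS = {"2016Q4"}

def export_before_run(data_set, model_folder, required_tables):
    """Pre-run hook: export all required tables from the database into
    the model folder, but only from a closed (approved) data set."""
    if data_set not in CLOSED_DATA_SETS:
        raise RuntimeError(f"data set {data_set} is not closed/approved")
    for table in required_tables:
        path = Path(model_folder) / f"{table}.csv"
        with open(path, "w", newline="") as f:
            csv.writer(f).writerows(DATABASE[table])
    return [f"{table}.csv" for table in required_tables]

model_folder = tempfile.mkdtemp()
exported = export_before_run("2016Q4", model_folder, ["lapse_assumptions"])
```

The key design point is that the model, not the user, triggers the export: a projection can then never start against stale or unapproved assumption files.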