Reinsurance Pricing, the Data Science Way

An introduction to excess of loss reinsurance pricing and a comparison between deterministic, stochastic, and kernel density estimate approaches to it.


Bradley Stephen Shaw · Published in Towards Data Science · 8 min read · Jun 20, 2021


“Insurance for insurance companies”, reinsurance is an interesting and challenging area to work in. A wide variety of contract types and limited data history make pricing reinsurance treaties difficult in most instances.

Over time, different approaches to pricing these complex insurance instruments have emerged.

I’ll briefly cover and compare two approaches I’ve seen used to price excess of loss reinsurance contracts (deterministic solutions and stochastic simulation) and then dip into a different approach (kernel density estimation).

Excess of loss reinsurance is a type of non-proportional reinsurance and is designed to limit or cap an insurer’s “per risk” or “per event” loss.

These contracts are quoted as providing indemnity of a maximum amount over and above an excess amount. They are usually available as adjacent “layers” and are quoted as “layer size in excess of (xs) attachment point”.

So as an example, an excess of loss treaty £1m xs £2m provides indemnity of up to £1m for the portion of a loss in excess of £2m:

  • if a loss of £2.5m occurred, the contract would provide £0.5m cover
  • if a loss of £4m occurred, the contract would provide £1m cover
  • if a loss of £0.5m occurred, the contract would provide no cover
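As a minimal illustration (my own sketch, not code from the article), the layer payout can be written as a clipped difference:

```python
import numpy as np

def layer_loss(ground_up, attachment, limit):
    """Loss to a 'limit xs attachment' excess of loss layer."""
    return np.clip(ground_up - attachment, 0.0, limit)

# the £1m xs £2m examples from the bullets above (working in £m)
for loss in (2.5, 4.0, 0.5):
    print(f"ground up loss £{loss}m -> layer pays £{layer_loss(loss, 2.0, 1.0)}m")
```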

Let’s look at some theory behind calculating the price for a reinsurance layer, which will conveniently bleed into the deterministic approach to arriving at a layer’s technical price.

We’re interested in calculating the expected loss cost to a layer of non-proportional reinsurance. For now, let’s say the insurance provides indemnity of £b in excess of £a — i.e. that the reinsurer covers the part of a claim which falls into the interval [£a, £(a+b)].

If the random variable X with probability density function f represents the “ground up” insurance loss, we can define the loss to the reinsurance contract Y as:

Y = 0        if X ≤ a
Y = X - a    if a < X ≤ a + b
Y = b        if X > a + b

Then the expected loss to the layer can be written as:

E[Y] = ∫ₐ^(a+b) (x - a) f(x) dx + b · (1 - F(a + b))

where F is the cumulative distribution function of X.

E[Y] represents the technical price of the reinsurance layer — i.e. the premium to be charged to break even. The reinsurer would normally add charges for the cost of capital, administration expenses, and a profit loading to the technical price to arrive at the office premium or street price but we will ignore those calculations for now.

Solving for E[Y], probably numerically, is the deterministic approach to pricing excess of loss cover, as it always yields the same result. While this is in theory a clean and pleasing approach, it has a couple of downsides:

  • it relies on an accurate assumption about the form of the underlying ground up claims distribution X
  • it requires the user to accurately derive this distribution’s parameters

There is also the implicit assumption that the user will be able to perform the complicated integration.

Python users have access to functionality which makes numerical integration fairly straightforward; practitioners who only have access to spreadsheets might find that stochastic simulation brightens their day.

The stochastic approach to the problem is quite simple and can be summarised briefly:

  1. Randomly draw a sample from the assumed (and appropriately) parameterised ground up claims distribution X
  2. Apply the reinsurance layer structure to determine the layer loss and store the result.
  3. Repeat many, many times — say 100,000 times.
  4. Take the average across the simulated layer loss costs as the technical price for the layer — that is, E[Y] = average simulation outcome.

Although simple, the process is quite powerful:

  • The simulation results in a distribution of outcomes rather than a single point estimate, and with enough simulation the results converge to the closed form solution.
  • As such, it’s quite straightforward to calculate statistics like mean, variance, kurtosis, and skewness. It can be a little trickier to do this through the deterministic approach.
  • The distribution of outcomes allows us to quite easily derive confidence intervals and percentiles — useful for understanding and communicating results. Again, this is likely to be a little trickier when going down the deterministic route.

Unfortunately this approach also requires the user to correctly specify and parameterise the ground up claims distribution; although there is no requirement for intricate integration, the user needs to be able to draw samples from a distribution (many times).

Clearly, a non-parametric approach is desirable — if it can match the performance of the deterministic and stochastic approaches!

Kernel density estimation (KDE) is a non-parametric way to estimate the probability density function of a random variable and is commonly used where inferences about the population are made based on a data sample.

The KDE approach is very similar to the stochastic simulation approach, except we use a KDE to estimate the ground up claims distribution and draw samples from this.

It sounds simple but it is quite powerful, as it removes errors introduced by the user when trying to specify the distributional (and parametric) form of X.

Let’s specify a workflow that will enable us to compare the different approaches:

  1. Generate historic ground up claims data — i.e. realisations of X. In this instance we’ll use samples from a lognormal distribution.
  2. Set excess of loss reinsurance contract details.
  3. Parameterise our observed log-normal distribution; parameters will be derived from data generated in (1).
  4. Calculate the deterministic layer price.
  5. Calculate the simulated layer price.
  6. Calculate the (KDE) simulated layer price.
  7. Compare results and talk shop.

Before we go on — a word about the log-normal distribution…

The log-normal distribution is a continuous probability distribution of a random variable whose logarithm is Normally distributed.

So if X is log-normally distributed then ln(X) is Normally distributed, i.e. ln(X) ∼ N(μ, σ²).

The distribution is supported on strictly positive values and is long-tailed, making it ideal for modelling the low frequency, high severity events that may fall into our reinsurance layer.

In this example, we’ll assume the underlying ground up claims distribution X is log-normally distributed.

Working in £m, let’s draw 500 observations from a lognormal distribution with mean 15.

https://gist.github.com/brad-stephen-shaw/e73dd7e915a47fe002b8101a096420cb
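The gist above holds the article’s own data-generation code. As a rough stand-in, here is a minimal sketch of that step; the article only states a mean of £15m, so the sigma of 0.5 and the seed below are assumptions of mine:

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed

# Target a log-normal with mean 15 (£m). sigma = 0.5 is an assumed value;
# the article does not state the exact parameters it used.
sigma = 0.5
mu = np.log(15) - 0.5 * sigma ** 2  # ensures exp(mu + sigma^2 / 2) = 15

ground_up_claims = rng.lognormal(mean=mu, sigma=sigma, size=500)
print(ground_up_claims.mean())  # roughly 15
```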

[Chart 1: histogram of the 500 sampled ground up claims]

The chart above shows a histogram of the 500 observations sampled randomly from the “true” ground up claims distribution. As expected, all observations are positive and the distribution is skewed to the right.

Let’s look to price a £5m xs £20m contract — that is a = 20 and b = 5. Simples.

Using the generated data, we parameterise an observed log-normal distribution by the method of moments.
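A hedged sketch of the method-of-moments fit, reusing the ground_up_claims array from the data-generation sketch above (for a log-normal, σ² = ln(1 + v/m²) and μ = ln(m) - σ²/2, where m and v are the sample mean and variance):

```python
import numpy as np

# method-of-moments fit of a log-normal to the observed claims
# (ground_up_claims comes from the data-generation sketch above)
m = ground_up_claims.mean()
v = ground_up_claims.var()

sigma2_hat = np.log(1 + v / m ** 2)    # sigma^2 = ln(1 + v / m^2)
sigma_hat = np.sqrt(sigma2_hat)
mu_hat = np.log(m) - 0.5 * sigma2_hat  # mu = ln(m) - sigma^2 / 2

print(f"mu_hat = {mu_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```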

[Chart 2: true ground up claims distribution (blue) vs the log-normal parameterised from the observations (orange)]

Chart 2 shows the difference — and loss of information — between the true underlying ground up claims distribution (blue) and the distribution parameterised from observations (orange). We’ll discuss this in a bit more detail later.

Let’s move on to building:

  1. A Python function to numerically integrate a given function between two bounds.
  2. Python functions describing the parameterised ground up loss distribution and the expected loss for ground up claims falling within the reinsurance layer.
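A minimal sketch of both pieces, assuming scipy is available and reusing mu_hat and sigma_hat from the method-of-moments sketch above:

```python
import numpy as np
from scipy import integrate
from scipy.stats import lognorm

a, b = 20.0, 5.0  # £5m xs £20m, working in £m

# fitted ground up distribution (mu_hat and sigma_hat from the method-of-moments sketch)
fitted = lognorm(s=sigma_hat, scale=np.exp(mu_hat))

# E[Y] = integral over [a, a + b] of (x - a) f(x) dx + b * P(X > a + b)
integral, _ = integrate.quad(lambda x: (x - a) * fitted.pdf(x), a, a + b)
expected_layer_loss = integral + b * fitted.sf(a + b)

print(f"Deterministic layer cost: £{expected_layer_loss * 1e6:,.0f}")
```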

Deterministic layer cost: £598,900.

With help from numpy, it's straightforward to perform stochastic simulation and arrive at a technical price for the layer:
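A hedged sketch of that simulation, reusing the layer terms a and b and the fitted parameters from the sketches above:

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed

# draw 100,000 ground up claims from the fitted distribution
# (mu_hat, sigma_hat, a and b come from the sketches above)
n_sims = 100_000
simulated_claims = rng.lognormal(mean=mu_hat, sigma=sigma_hat, size=n_sims)

# apply the layer terms to each simulated claim and average
simulated_layer_losses = np.clip(simulated_claims - a, 0.0, b)
print(f"Stochastic layer cost: £{simulated_layer_losses.mean() * 1e6:,.0f}")
```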

Stochastic layer cost: £599,921.

The reader will notice that the resulting layer prices are very similar; with enough repetitions, the stochastic simulation converges to the closed-form solution.

This time we’ll rely on sklearn to perform the KDE, but drawing samples from the KDE object and calculating the layer cost is again very straightforward.
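A rough sketch of this step, reusing the observed ground_up_claims and the layer terms from earlier; the Gaussian kernel and the bandwidth of 1.0 are assumptions of mine rather than the article’s choices:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# fit a Gaussian KDE to the observed ground up claims
# (kernel and bandwidth are assumed values, not the article's choices)
kde = KernelDensity(kernel="gaussian", bandwidth=1.0)
kde.fit(ground_up_claims.reshape(-1, 1))

# draw samples from the KDE and apply the layer terms (a and b as before)
kde_samples = kde.sample(n_samples=100_000, random_state=0).ravel()
kde_layer_losses = np.clip(kde_samples - a, 0.0, b)

print(f"KDE simulation layer cost: £{kde_layer_losses.mean() * 1e6:,.0f}")
```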

KDE simulation layer cost: £586,920.

The KDE layer cost is not miles away from the deterministic and vanilla stochastic simulation — interesting!

Before we compare results, let’s calculate the “true” layer cost.
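This is the same integral as in the deterministic sketch, just evaluated with the parameters used to generate the data (the assumed mu and sigma from the data-generation sketch) instead of the fitted ones:

```python
import numpy as np
from scipy import integrate
from scipy.stats import lognorm

# same calculation as the deterministic sketch, but using the generating
# parameters (mu, sigma from the data-generation sketch) rather than the fitted ones
true_dist = lognorm(s=sigma, scale=np.exp(mu))

integral, _ = integrate.quad(lambda x: (x - a) * true_dist.pdf(x), a, a + b)
true_layer_cost = integral + b * true_dist.sf(a + b)

print(f"True layer cost: £{true_layer_cost * 1e6:,.0f}")
```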

True layer cost: £630,300.

Let’s summarise and discuss…

Approach                  Layer cost
Deterministic             £598,900
Stochastic simulation     £599,921
KDE simulation            £586,920
True (closed form)        £630,300

I notice a few things:

  1. All approaches under-priced the layer, although the KDE approach seems to be the worst culprit.
  2. The KDE result is not significantly worse than either the deterministic or the vanilla simulation.
  3. Both the deterministic and vanilla stochastic approach require the user to specify the underlying ground up claims distribution. This can be thought of as providing additional “information” to the process, and so we should expect better performance.

Since the KDE is non-parametric and essentially goes in “blind”, I think its performance is admirable and the slight loss in accuracy an acceptable price to pay for the increased freedom provided by the approach.

Talking shop

  • In reality, the observed data will come from the insurance companies themselves. Data volumes available depend on the territory, size, claims history, and lines of business written by the insurance company. More likely than not, the data will be scarce and having 500 observations to use will be a rare luxury.
  • Before parameterising distributions or fitting a KDE, the historic data should be cleaned appropriately: adjusted for trends and inflation, with one-off events removed and allowances made for discontinued underwriting practices. As ever, it’s also good practice to include in the analysis claims whose size prior to adjustment is roughly half the attachment point, since trending and inflation can push them into the layer.
  • The KDE functionality in sklearn exposes a bandwidth parameter which controls how closely the estimate follows the data: increasing the bandwidth smooths the estimate and helps it generalise, while decreasing it makes the estimate hug the observed data more closely (a bandwidth-selection sketch follows this list).
  • Pooling “similar” historic data together prior to fitting the KDE and then applying the reinsurance contract details could offer a different perspective on the layer price. As an example:
      • Large ground up UK motor insurance claims arising from multiple insurers could be pooled together (after appropriate adjustment) and the reinsurance layer price calculated using a KDE.
      • The KDE approach could be used on UK motor insurer X’s historic claims to arrive at a layer price.
      • Insurer X’s price can be compared to the “market” price and relevant adjustments made when calculating the street price.
      • The KDE approach may be preferable in this instance, as it may be difficult to parameterise a “combined” distribution.
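As a rough illustration of bandwidth selection (my own sketch, not taken from the article), a common approach is to cross-validate sklearn’s KernelDensity over a grid of candidate bandwidths:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

# cross-validate the KDE log-likelihood over a grid of candidate bandwidths
# (the grid values are arbitrary; ground_up_claims is the observed sample)
grid = GridSearchCV(
    KernelDensity(kernel="gaussian"),
    {"bandwidth": np.linspace(0.1, 5.0, 50)},
    cv=5,
)
grid.fit(ground_up_claims.reshape(-1, 1))

print("Best bandwidth:", grid.best_params_["bandwidth"])
best_kde = grid.best_estimator_  # refit on all the data with the best bandwidth
```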

Disclaimer: all statistics and workings contained in this notebook are purely hypothetical and are in no way related to actual events.

