The hazard rate function, an introduction

The goal of this post is to introduce the concept of the hazard rate function by modifying one of the postulates of the approximate Poisson process. The rate of change in the modified process is the hazard rate function. When a “change” in the modified Poisson process means the termination of a system (be it manufactured or biological), the notion of the hazard rate function leads to the concept of survival models. We then discuss several important examples of survival probability models that are defined by the hazard rate function. These examples include the Weibull distribution, the Gompertz distribution and the model based on Makeham’s law.

We consider an experiment in which the occurrences of a certain type of events are counted during a given time interval or on a given physical object. Suppose that we count the occurrences of events on the interval (0,t). We call the occurrence of the type of events in question a change. We assume the following three conditions:

  1. The numbers of changes occurring in nonoverlapping intervals are independent.
  2. The probability of two or more changes taking place in a sufficiently small interval is essentially zero.
  3. The probability of exactly one change in the short interval (t,t+\delta) is approximately \lambda(t) \delta where \delta is sufficiently small and \lambda(t) is a nonnegative function of t.

For the lack of a better name, throughout this post, we call the above process the counting process (*). The approximate Poisson process is defined by conditions 1 and 2 and the condition that the \lambda(t) in condition 3 is a constant function. Thus the process we describe here is a more general process than the Poisson process.

Though the counting process indicated here can model the number of changes occurring in a physical object or a physical interval, we focus on the time aspect by considering the counting process as a model for the number of changes occurring in a time interval, where a change means the “termination” or “failure” of a system under consideration. In many applications (e.g. in actuarial science and reliability engineering), the interest is on the time until termination or failure. Thus, the distribution for the time until failure is called a survival model. The rate of change function \lambda(t) indicated in condition 3 is called the hazard rate function. It is also called the failure rate function in reliability engineering. In actuarial science, the hazard rate function is known as the force of mortality.

Two random variables naturally arise from the counting process (*). One is the discrete variable N_t, defined as the number of changes in the time interval (0,t). The other is the continuous random variable T, defined as the time until the occurrence of the first (or next) change.

Claim 1. Let \displaystyle \Lambda(t)=\int_{0}^{t} \lambda(y) dy. Then e^{-\Lambda(t)} is the probability that there is no change in the interval (0,t). That is, \displaystyle P[N_t=0]=e^{-\Lambda(t)}.

We are interested in finding the probability of zero changes in the interval (0,y+\delta). By condition 1, the numbers of changes in the nonoverlapping intervals (0,y) and (y,y+\delta) are independent. Thus we have:

\displaystyle P[N_{y+\delta}=0] \approx P[N_y=0] \times [1-\lambda(y) \delta] \ \ \ \ \ \ \ \ (a)

Note that by condition 3, the probability of exactly one change in the small interval (y,y+\delta) is \lambda(y) \delta. Thus [1-\lambda(y) \delta] is the probability of no change in the interval (y,y+\delta). Continuing with equation (a), we have the following derivation:

\displaystyle \frac{P[N_{y+\delta}=0] - P[N_y=0]}{\delta} \approx -\lambda(y) P[N_y=0]

\displaystyle \frac{d}{dy} P[N_y=0]=-\lambda(y) P[N_y=0]

\displaystyle \frac{\frac{d}{dy} P[N_y=0]}{P[N_y=0]}=-\lambda(y)

\displaystyle \int_0^{t} \frac{\frac{d}{dy} P[N_y=0]}{P[N_y=0]} dy=-\int_0^{t} \lambda(y)dy

Integrating the left hand side and using the boundary condition of P[N_0=0]=1, we have:

\displaystyle ln P[N_t=0]=-\int_0^{t} \lambda(y)dy

\displaystyle P[N_t=0]=e^{-\int_0^{t} \lambda(y)dy}

Claim 2
As discussed above, let T be the length of the interval that is required to observe the first change in the counting process (*). Then the following are the distribution function, survival function and pdf of T:

  • \displaystyle F_T(t)=\displaystyle 1-e^{-\int_0^t \lambda(y) dy}
  • \displaystyle S_T(t)=\displaystyle e^{-\int_0^t \lambda(y) dy}
  • \displaystyle f_T(t)=\displaystyle \lambda(t) e^{-\int_0^t \lambda(y) dy}

In Claim 1, we derive the probability P[N_y=0] for the discrete variable N_y derived from the counting process (*). We now consider the continuous random variable T. Note that P[T > t] is the probability that the first change occurs after time t. This means there is no change within the interval (0,t). Thus S_T(t)=P[T > t]=P[N_t=0]=e^{-\int_0^t \lambda(y) dy}. The distribution function and density function can be derived accordingly.

Claim 3
The hazard rate function \lambda(t) is equivalent to each of the following:

  • \displaystyle \lambda(t)=\frac{f_T(t)}{1-F_T(t)}
  • \displaystyle \lambda(t)=\frac{-S_T^{'}(t)}{S_T(t)}

Remark
Based on condition 3 in the counting process (*), \lambda(t) is the rate of change in the counting process. Note that \lambda(t) \delta is the probability of a change (e.g. a failure or a termination) in a small time interval of length \delta. Thus the hazard rate function can be interpreted as the failure rate at time t given that the life in question has survived to time t. Claim 3 shows that the hazard rate function is the ratio of the density function and the survival function of the time until failure variable T. Thus the hazard rate function \lambda(t) is the conditional density of failure at time t. It is the rate of failure at the next instant given that the life or system being studied has survived up to time t.

It is interesting to note that the function \Lambda(t)=\int_0^t \lambda(y) dy defined in claim 1 is called the cumulative hazard rate function. Thus the cumulative hazard rate function is an alternative way of representing the hazard rate function (see the discussion on Weibull distribution below).
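To make the relationships in Claims 1 through 3 concrete, here is a minimal numerical sketch (not part of the original discussion). It assumes an illustrative hazard rate \lambda(t)=0.5t and checks that S_T(t)=e^{-\Lambda(t)} and that f_T(t)/S_T(t) recovers \lambda(t).

```python
# Minimal numerical check of Claims 1-3, assuming an illustrative hazard rate.
import numpy as np
from scipy.integrate import quad

hazard = lambda t: 0.5 * t                     # assumed example hazard rate lambda(t)
Lambda = lambda t: quad(hazard, 0.0, t)[0]     # cumulative hazard by numerical integration

def survival(t):
    # Claims 1 and 2: S_T(t) = exp(-Lambda(t))
    return np.exp(-Lambda(t))

def density(t, h=1e-5):
    # f_T(t) = -d/dt S_T(t), approximated with a central difference
    return (survival(t - h) - survival(t + h)) / (2.0 * h)

for t in [0.5, 1.0, 2.0]:
    # Claim 3: f_T(t) / S_T(t) should recover the hazard rate lambda(t)
    print(t, hazard(t), density(t) / survival(t))
```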

Examples of Survival Models

Exponential Distribution
In many applications, especially those for biological organisms and mechanical systems that wear out over time, the hazard rate \lambda(t) is an increasing function of t. In other words, the older the life in question (the larger the t), the higher the chance of failure at the next instant. For humans, the probability of an 85-year-old dying in the next year is clearly higher than that of a 20-year-old. In a Poisson process, the rate of change \lambda(t)=\lambda indicated in condition 3 is a constant. As a result, the time T until the first change derived in claim 2 has an exponential distribution with parameter \lambda. In terms of mortality study or reliability study of machines that wear out over time, this is not a realistic model. However, if the mortality or failure is caused by random external events, this could be an appropriate model.

Weibull Distribution
This distribution is an excellent model choice for describing the life of manufactured objects. It is defined by the following cumulative hazard rate function:

\displaystyle \Lambda(t)=\biggl(\frac{t}{\beta}\biggr)^{\alpha} where \alpha > 0 and \beta>0

As a result, the hazard rate function, the density function and the survival function for the lifetime distribution are:

\displaystyle \lambda(t)=\frac{\alpha}{\beta} \biggl(\frac{t}{\beta}\biggr)^{\alpha-1}

\displaystyle f_T(t)=\frac{\alpha}{\beta} \biggl(\frac{t}{\beta}\biggr)^{\alpha-1} \displaystyle e^{\displaystyle -\biggl[\frac{t}{\beta}\biggr]^{\alpha}}

\displaystyle S_T(t)=\displaystyle e^{\displaystyle -\biggl[\frac{t}{\beta}\biggr]^{\alpha}}

The parameter \alpha is the shape parameter and \beta is the scale parameter. When \alpha=1, the hazard rate becomes a constant and the Weibull distribution becomes an exponential distribution.

When the parameter \alpha<1, the failure rate decreases over time. One interpretation is that most of the defective items fail early on in the life cycle. Once they are removed from the population, the failure rate decreases over time.

When the parameter \alpha>1, the failure rate increases with time. This is a good candidate for a model to describe the lifetime of machines or systems that wear out over time.
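As a quick sanity check of the Weibull formulas above, the following sketch assumes illustrative parameters \alpha=2 and \beta=3 and compares the hand-computed hazard, survival and density functions with scipy.stats.weibull_min (shape c=\alpha, scale=\beta).

```python
# Check the Weibull hazard/survival/density formulas against scipy (illustrative parameters).
import numpy as np
from scipy.stats import weibull_min

alpha, beta = 2.0, 3.0                  # shape and scale, assumed for illustration
t = np.array([0.5, 1.0, 2.5])

hazard = (alpha / beta) * (t / beta) ** (alpha - 1)
survival = np.exp(-(t / beta) ** alpha)
density = hazard * survival

dist = weibull_min(c=alpha, scale=beta)
print(np.allclose(density, dist.pdf(t)))               # True
print(np.allclose(survival, dist.sf(t)))               # True
print(np.allclose(hazard, dist.pdf(t) / dist.sf(t)))   # True
```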

The Gompertz Distribution
The Gompertz law states that the force of mortality or failure rate increases exponentially over time. It describes human mortality quite accurately. The following is the hazard rate function:

\displaystyle \lambda(t)=\alpha e^{\beta t} where \alpha>0 and \beta>0.

The following are the cumulative hazard rate function as well as the survival function, distribution function and the pdf of the lifetime distribution T.

\displaystyle \Lambda(t)=\int_0^t \alpha e^{\beta y} dy=\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}

\displaystyle S_T(t)=\displaystyle e^{\displaystyle -\bigl[\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}\bigr]}

\displaystyle F_T(t)=\displaystyle 1-e^{\displaystyle -\bigl[\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}\bigr]}

\displaystyle f_T(t)=\displaystyle \alpha e^{\beta t} \thinspace e^{\displaystyle -\bigl[\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}\bigr]}
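The following sketch, with assumed illustrative values \alpha=0.001 and \beta=0.08, verifies the Gompertz survival function by comparing the closed form e^{-\Lambda(t)} with a numerical integration of the hazard rate.

```python
# Check the Gompertz survival function against numerical integration of the hazard.
import numpy as np
from scipy.integrate import quad

alpha, beta = 0.001, 0.08               # assumed illustrative Gompertz parameters
hazard = lambda t: alpha * np.exp(beta * t)

for t in [10.0, 40.0, 70.0]:
    Lambda_numeric = quad(hazard, 0.0, t)[0]
    S_closed = np.exp(-(alpha / beta) * (np.exp(beta * t) - 1.0))
    # the two survival probabilities should agree
    print(t, np.exp(-Lambda_numeric), S_closed)
```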

Makeham’s Law
Makeham’s law states that the force of mortality is the Gompertz failure rate plus an age-independent component that accounts for external causes of mortality. The following is the hazard rate function:

\displaystyle \lambda(t)=\alpha e^{\beta t}+\mu where \alpha>0, \beta>0 and \mu>0.

The following are the cumulative hazard rate function as well as the survival function, distribution function and the pdf of the lifetime distribution T.

\displaystyle \Lambda(t)=\int_0^t (\alpha e^{\beta y}+\mu) dy=\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}+\mu t

\displaystyle S_T(t)=\displaystyle e^{\displaystyle -\bigl[\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}+\mu t\bigr]}

\displaystyle F_T(t)=\displaystyle 1-e^{\displaystyle -\bigl[\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}+\mu t\bigr]}

\displaystyle f_T(t)=\biggl( \alpha e^{\beta t}+\mu \biggr) \thinspace e^{\displaystyle -\bigl[\frac{\alpha}{\beta} e^{\beta t}-\frac{\alpha}{\beta}+\mu t\bigr]}

Introduction to Buhlmann credibility

In this post, we continue our discussion in credibility theory. Suppose that for a particular insured (either an individual entity or a group of insureds), we have observed data X_1,X_2, \cdots, X_n (the numbers of claims or loss amounts). We are interested in setting a rate to cover the claim experience X_{n+1} from the next period. In two previous posts (Examples of Bayesian prediction in insurance, Examples of Bayesian prediction in insurance-continued), we discussed this estimation problem from a Bayesian perspective and presented two examples. In this post, we discuss the Buhlmann credibility model and work the same two examples using the Buhlmann method.

First, let’s further describe the setting of the problem. For a particular insured, the experience data corresponding to various exposure periods are assumed to be independent. Statistically speaking, conditional on a risk parameter \Theta, the claim numbers or loss amounts X_1, \cdots, X_n,X_{n+1} are independent and identically distributed. Furthermore, the distribution of the risk characteristics in the population of insureds and potential insureds is represented by \pi_{\Theta}(\theta). The experience (either claim numbers or loss amounts) of a particular insured with risk parameter \Theta=\theta is modeled by the conditional distribution f_{X \lvert \Theta}(x \lvert \theta) given \Theta=\theta.

The Buhlmann Credibility Estimator
Given the observations X_1, \cdots, X_n in the prior exposure periods, the Buhlmann credibility estimate C of the claim experience X_{n+1} is

\displaystyle C=Z \overline{X}+(1-Z)\mu

where Z is the credibility factor assigned to the observed experience data and \mu is the unconditional mean E[X] (the mean taken over all members of the risk parameter \Theta). The credibility factor Z is of the form \displaystyle Z=\frac{n}{n+K} where n is a measure of the exposure size (it is the number of observation periods in our examples) and \displaystyle K=\frac{E[Var[X \lvert \Theta]]}{Var[E[X \lvert \Theta]]}. The parameter K will be further explained below.

The Buhlmann credibility estimator C is a linear function of the past data. Note that it is of the form:

\displaystyle C=Z \overline{X}+(1-Z)\mu=w_0+\sum \limits_{i=1}^{n} w_i X_i

where w_0=(1-Z)\mu and \displaystyle w_i=\frac{Z}{n} for i=1, \cdots, n.

Not only is the Buhlmann credibility estimator a linear estimator, it is the best linear estimator to the Bayesian predictive mean E[X_{n+1} \lvert X_1, \cdots, X_n] and the hypothetical mean E[X_{n+1} \lvert \Theta] in terms of minimizing squared error loss. In other words, the coefficients w_i are obtained in such a way that the following expectations (loss functions) are minimized where the expectations are taken over all observations and/or \Theta (see [1]):

\displaystyle L_1=E\biggl( \biggl[E[X_{n+1} \lvert \Theta]-w_0-\sum \limits_{i=1}^{n} w_i X_i \biggr]^2 \biggr)

\displaystyle L_2=E\biggl( \biggl[E[X_{n+1} \lvert X_1, \cdots, X_n]-w_0-\sum \limits_{i=1}^{n} w_i X_i \biggr]^2 \biggr)

The Buhlmann Method
As discussed above, the Buhlmann credibility factor Z=\frac{n}{n+K} is chosen such that C=Z \overline{X}+(1-Z) \mu is the best linear approximation to the Bayesian estimate of the next period’s claim experience. Now we focus on the calculation of the parameter K.

Conditional on the risk parameter \Theta, E[X \lvert \Theta] is called the hypothetical mean and Var[X \lvert \Theta] is called the process variance. Then \mu=E[X]=E[E[X \lvert \Theta]] is the expected value of hypothetical means (the unconditional mean). The total variance of this random process is:

\displaystyle Var[X]=E[Var[X \lvert \Theta]]+Var[E[X \lvert \Theta]]

The first part of the total variance E[Var[X \lvert \Theta]] is called the expected value of process variance (EPV) and the second part Var[E[X \lvert \Theta]] is called the variance of the hypothetical means (VHM). The parameter K in the Buhlmann method is simply the ratio K=\frac{EPV}{VHM}.

We can get an intuitive feel of this formula by considering the variability of the hypothetical means E[X \lvert \Theta] across many values of the risk parameter \Theta. If the entire population of insureds (and potential insureds) is fairly homogeneous with respect to the risk parameter \Theta, then VHM=Var[E[X \lvert \Theta]] does not vary a great deal and is relatively small in relation to EPV=E[Var[X \lvert \Theta]]. As a result, K is large and Z is closer to 0. This agrees with the notion that in a homogeneous population, the unconditional mean (the overall mean) is of more value as a predictor of the next period’s claim experience. On the other hand, if the population of insureds is heterogeneous with respect to the risk parameter \Theta, then the overall mean is of less value as a predictor of future experience and we should rely more on the experience of the particular insured. Again, the Buhlmann formula agrees with this notion. If VHM=Var[E[X \lvert \Theta]] is large relative to EPV=E[Var[X \lvert \Theta]], then K is small and Z is closer to 1.

Another attractive feature of the Buhlmann formula is that as more experience data accumulate (as n \rightarrow \infty), the credibility factor Z approaches 1 (the experience data become more and more credible).
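The Buhlmann calculation is mechanical once \mu, EPV and VHM are known. The following small helper function (a sketch, not from the original post; the name buhlmann_estimate is ours) packages the steps K=EPV/VHM, Z=n/(n+K) and C=Z \overline{X}+(1-Z)\mu.

```python
def buhlmann_estimate(x_bar, n, mu, epv, vhm):
    """Return (K, Z, C) for the Buhlmann credibility estimate C = Z*x_bar + (1-Z)*mu."""
    k = epv / vhm                  # K = EPV / VHM
    z = n / (n + k)                # credibility factor
    c = z * x_bar + (1 - z) * mu   # credibility-weighted estimate
    return k, z, c

# Example 1 below: buhlmann_estimate(1.5, 2, 0.67, 0.4655, 0.1156) -> (~4.027, ~0.3318, ~0.9454)
```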

Example 1
In this random experiment, there are a big bowl (called B) and two boxes (Box 1 and Box 2). Bowl B consists of a large quantity of balls, 80% of which are white and 20% of which are red. In Box 1, 60% of the balls are labeled 0, 30% are labeled 1 and 10% are labeled 2. In Box 2, 15% of the balls are labeled 0, 35% are labeled 1 and 50% are labeled 2. In the experiment, a ball is selected at random from bowl B. The color of the selected ball from bowl B determines which box to use (if the ball is white, then use Box 1, if red, use Box 2). Then balls are drawn at random from the selected box (Box i) repeatedly with replacement and the values of the series of selected balls are recorded. The value of first selected ball is X_1, the value of the second selected ball is X_2 and so on.

Suppose that your friend performs this random experiment (you do not know whether he uses Box 1 or Box 2) and that his first selected ball is a 1 (X_1=1) and his second selected ball is a 2 (X_2=2). What is the predicted value X_3 of the third selected ball?

This example was solved in (Examples of Bayesian prediction in insurance) using the Bayesian approach. We now work this example using the Buhlmann approach.

The following restates the prior distribution of \Theta and the conditional distribution of X \lvert \Theta. We denote “white ball from bowl B” by \Theta=1 and “red ball from bowl B” by \Theta=2.

\pi_{\Theta}(1)=0.8
\pi_{\Theta}(2)=0.2

\displaystyle f_{X \lvert \Theta}(0 \lvert \Theta=1)=0.60
\displaystyle f_{X \lvert \Theta}(1 \lvert \Theta=1)=0.30
\displaystyle f_{X \lvert \Theta}(2 \lvert \Theta=1)=0.10

\displaystyle f_{X \lvert \Theta}(0 \lvert \Theta=2)=0.15
\displaystyle f_{X \lvert \Theta}(1 \lvert \Theta=2)=0.35
\displaystyle f_{X \lvert \Theta}(2 \lvert \Theta=2)=0.50

The following computes the conditional means (hypothetical means) and conditional variances (process variances) and the other parameters of the Buhlmann method.

Hypothetical Means
\displaystyle E[X \lvert \Theta=1]=0.60(0)+0.30(1)+0.10(2)=0.50
\displaystyle E[X \lvert \Theta=2]=0.15(0)+0.35(1)+0.50(2)=1.35

\displaystyle E[X^2 \lvert \Theta=1]=0.60(0)+0.30(1)+0.10(4)=0.70
\displaystyle E[X^2 \lvert \Theta=2]=0.15(0)+0.35(1)+0.50(4)=2.35

Process Variances
\displaystyle Var[X \lvert \Theta=1]=0.70-0.50^2=0.45
\displaystyle Var[X \lvert \Theta=2]=2.35-1.35^2=0.5275

Expected Value of the Hypothetical Means
\displaystyle \mu=E[X]=E[E[X \lvert \Theta]]=0.80(0.50)+0.20(1.35)=0.67

Expected Value of the Process Variance
\displaystyle EPV=E[Var[X \lvert \Theta]]=0.8(0.45)+0.20(0.5275)=0.4655

Variance of the Hypothetical Means
\displaystyle VHM=Var[E[X \lvert \Theta]]=0.80(0.50)^2+0.20(1.35)^2-0.67^2=0.1156

Buhlmann Credibility Factor
\displaystyle K=\frac{EPV}{VHM}=\frac{0.4655}{0.1156}=\frac{4655}{1156}

\displaystyle Z=\frac{2}{2+\frac{4655}{1156}}=\frac{2312}{6967}=0.33185

Buhlmann Credibility Estimate
\displaystyle C=\frac{2312}{6967} \frac{3}{2}+\frac{4655}{6967} (0.67)=\frac{6586.85}{6967}=0.9454356

Note that the Bayesian estimate obtained in Examples of Bayesian prediction in insurance is 1.004237288. Under the Buhlmann model, the past claim experience of the insured in this example is assigned 33% weight in projecting the claim frequency in the next period.
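The Example 1 figures can be reproduced with a few lines of code. The following sketch recomputes the hypothetical means, process variances, EPV, VHM, Z and C from the distributions stated above.

```python
# Reproducing the Example 1 Buhlmann calculation (values taken from the post).
p_theta = {1: 0.8, 2: 0.2}                        # prior on Theta
f = {1: {0: 0.60, 1: 0.30, 2: 0.10},              # Box 1
     2: {0: 0.15, 1: 0.35, 2: 0.50}}              # Box 2

hm = {t: sum(x * p for x, p in f[t].items()) for t in f}        # hypothetical means
m2 = {t: sum(x * x * p for x, p in f[t].items()) for t in f}    # second moments
pv = {t: m2[t] - hm[t] ** 2 for t in f}                         # process variances

mu  = sum(p_theta[t] * hm[t] for t in f)                        # 0.67
epv = sum(p_theta[t] * pv[t] for t in f)                        # 0.4655
vhm = sum(p_theta[t] * hm[t] ** 2 for t in f) - mu ** 2         # 0.1156

n, x_bar = 2, 1.5                                 # two observations: 1 and 2
k = epv / vhm
z = n / (n + k)
c = z * x_bar + (1 - z) * mu
print(mu, epv, vhm, z, c)                         # ..., 0.33185..., 0.94543...
```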

Example 2
The number of claims X generated by an insured in a portfolio of independent insurance policies has a Poisson distribution with parameter \Theta. In the portfolio of policies, the parameter \Theta varies according to a gamma distribution with parameters \alpha and \beta. We have the following conditional distributions of X and distribution of the risk parameter \Theta.

\displaystyle f_{X \lvert \Theta}(x \lvert \theta)=\frac{\theta^x e^{-\theta}}{x!} where x=0,1,2, \cdots

\displaystyle \pi_{\Theta}(\theta)=\frac{\beta^{\alpha}}{\Gamma(\alpha)} \theta^{\alpha-1} e^{-\beta \theta} where \Gamma(\cdot) is the gamma function.

Suppose that a particular insured in this portfolio has generated 0 and 3 claims in the first 2 policy periods. What is the Buhlmann estimate of the number of claims for this insured in period 3?

Since the conditional distribution of X is Poisson, we have E[X \lvert \Theta]=\Theta and Var[X \lvert \Theta]=\Theta. As a result, the EPV, VHM and K are:

\displaystyle EPV=E[\Theta]=\frac{\alpha}{\beta}

\displaystyle VHM=Var[\Theta]=\frac{\alpha}{\beta^2}

\displaystyle K=\frac{EPV}{VHM}=\beta

As a result, the credibility factor for a 2-period experience period is Z=\frac{2}{2+\beta} and the Buhlmann estimate of the claim frequency in the next period is:

\displaystyle C=\frac{2}{2+\beta} \thinspace \biggl(\frac{3}{2}\biggr)+\frac{\beta}{2+\beta} \thinspace \biggl(\frac{\alpha}{\beta}\biggr)

To generalize the above results, suppose that we have observed X_1=x_1, \cdots, X_n=x_n for this insured in the prior periods. Then the Buhlmann estimate for the claim frequency in the next period is:

\displaystyle C=\frac{n}{n+\beta} \thinspace \biggl(\frac{\sum \limits_{i=1}^{n}x_i}{n}\biggr)+\frac{\beta}{n+\beta} \thinspace \biggl(\frac{\alpha}{\beta}\biggr)

In this example, the Buhlmann estimate is exactly the same as the Bayesian estimate (Examples of Bayesian prediction in insurance-continued).
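For a numerical illustration (the values \alpha=2 and \beta=1 are assumed, not part of the original example), the following sketch shows the Buhlmann estimate and the Bayesian estimate agreeing for the insured with claims 0 and 3.

```python
# Poisson-gamma: Buhlmann estimate equals the Bayesian estimate (illustrative parameters).
alpha, beta = 2.0, 1.0           # assumed gamma parameters for illustration
claims = [0, 3]
n, x_bar = len(claims), sum(claims) / len(claims)

epv = alpha / beta               # E[Theta]
vhm = alpha / beta ** 2          # Var[Theta]
k = epv / vhm                    # = beta
z = n / (n + k)
c = z * x_bar + (1 - z) * (alpha / beta)

bayes = (alpha + sum(claims)) / (beta + n)
print(c, bayes)                  # both equal 5/3 for these parameter values
```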

Reference

  1. Klugman S. A., Panjer H. H., Willmot G. E., Loss Models, From Data To Decisions, Second Edition, 2004, John Wiley & Sons, Inc.

Examples of Bayesian prediction in insurance-continued

This post is a continuation of the previous post Examples of Bayesian prediction in insurance. We present another example as an illustration of the methodology of Bayesian estimation. The example in this post, along with the example in the previous post, serves to motivate the concepts of Bayesian credibility and Buhlmann credibility theory. So these two posts are part of an introduction to credibility theory.

Suppose X_1, \cdots, X_n,X_{n+1} are independent and identically distributed conditional on \Theta=\theta. We denote the density function of the common distribution of X_j by f_{X \lvert \Theta}(x \lvert \theta). We denote the prior distribution of the risk parameter \Theta by \pi_{\Theta}(\theta). The following shows the steps of the Bayesian estimate of the next observation X_{n+1} given X_1, \cdots, X_n.

The Marginal Distribution
\displaystyle f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)=\int \limits_{\theta} \biggl[\prod \limits_{i=1}^{n} f_{X \lvert \Theta}(x_i \lvert \theta)\biggr] \pi_{\Theta}(\theta) d \theta

The Posterior Distribution
\displaystyle \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n)

\displaystyle =\ \ \ \ \ \ \ \ \ \ \frac{1}{f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)} \biggl[\prod \limits_{i=1}^{n} f_{X \lvert \Theta}(x_i \lvert \theta)\biggr] \pi_{\Theta}(\theta)

The Predictive Distribution
\displaystyle f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n)

\displaystyle =\ \ \ \ \ \ \ \ \ \ \int \limits_{\theta} \biggl(f_{X \lvert \Theta}(x \lvert \theta)\biggr) \thinspace \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n) d \theta

The Bayesian Predictive Mean of the Next Period
\displaystyle E[X_{n+1} \lvert X_1=x_1, \cdots, X_n=x_n]

\displaystyle =\ \ \ \ \ \ \ \ \ \ \int \limits_{x} x \thinspace f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n) dx

\displaystyle E[X_{n+1} \lvert X_1=x_1, \cdots, X_n=x_n]

\displaystyle =\ \ \ \ \ \ \ \ \ \ \int \limits_{\theta} E[X \lvert \theta] \thinspace \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n) d \theta

Example 2
The number of claims X generated by an insured in a portfolio of independent insurance policies has a Poisson distribution with parameter \Theta. In the portfolio of policies, the parameter \Theta varies according to a gamma distribution with parameters \alpha and \beta. We have the following conditional distribution of X and prior distribution of \Theta.

\displaystyle f_{X \lvert \Theta}(x \lvert \theta)=\frac{\theta^x e^{-\theta}}{x!} where x=0,1,2, \cdots

\displaystyle \pi_{\Theta}(\theta)=\frac{\beta^{\alpha}}{\Gamma(\alpha)} \theta^{\alpha-1} e^{-\beta \theta} where \Gamma(\cdot) is the gamma function.

Suppose that a particular insured in this portfolio has generated 0 and 3 claims in the first 2 policy periods. What is the Bayesian estimate of the number of claims for this insured in period 3?

Note that the conditional mean E[X \lvert \Theta]=\Theta. Thus the unconditional mean E[X]=E[\Theta]=\frac{\alpha}{\beta}.

Comment
Note that the unconditional distribution of X is a negative binomial distribution. In a previous post (Compound negative binomial distribution), it was shown that if N \sim Poisson(\Lambda) and \Lambda \sim Gamma(\alpha,\beta), then the unconditional distribution of N has the following probability function. We make use of this result in the Bayesian estimation problem in this post.

\displaystyle P[N=n]=\frac{\Gamma(\alpha+n)}{\Gamma(\alpha) \thinspace n!} \biggl[\frac{\beta}{\beta+1}\biggr]^{\alpha} \biggl[\frac{1}{\beta+1}\biggr]^n

The Marginal Distribution
\displaystyle f_{X_1,X_2}(0,3)=\int_{0}^{\infty} e^{-\theta} \frac{\theta^3 e^{-\theta}}{3!} \frac{\beta^{\alpha}}{\Gamma(\alpha)} \theta^{\alpha-1} e^{-\beta \theta} d \theta

\displaystyle =\int_{0}^{\infty} \frac{\beta^{\alpha}}{3! \Gamma(\alpha)} \theta^{\alpha+3-1} e^{-(\beta+2) \theta} d \theta=\frac{\Gamma(\alpha+3)}{6 \Gamma(\alpha)} \frac{\beta^{\alpha}}{(\beta+2)^{\alpha+3}}

The Posterior Distribution
\displaystyle \pi_{\Theta \lvert X_1,X_2}(\theta \lvert 0,3)=\frac{1}{f_{X_1,X_2}(0,3)} e^{-\theta} \frac{\theta^3 e^{-\theta}}{3!} \frac{\beta^{\alpha}}{\Gamma(\alpha)} \theta^{\alpha-1} e^{-\beta \theta}

\displaystyle =K \thinspace \theta^{\alpha+3-1} e^{-(\beta+2) \theta}

In the above expression K is a constant making \pi_{\Theta \lvert X_1,X_2}(\theta \lvert 0,3) a density function. Note that it has the form of a gamma distribution. Thus the posterior distribution must be:

\displaystyle \pi_{\Theta \lvert X_1,X_2}(\theta \lvert 0,3)=\frac{(\beta+2)^{\alpha+3}}{\Gamma(\alpha+3)} \thinspace \theta^{\alpha+3-1} e^{-(\beta+2) \theta}

Thus the posterior distribution of \Theta is a gamma distribution with parameters \alpha+3 and \beta+2.

The Predictive Distribution
Note that the predictive distribution is simply the mixture of Poisson(\Theta) with Gamma(\alpha+3,\beta+2) as mixing weights. By the comment above, the predictive distribution is a negative binomial distribution with the following probability function:

\displaystyle f_{X_3 \lvert X_1,X_2}(x \lvert 0,3)=\frac{\Gamma(\alpha+3+x)}{\Gamma(\alpha+3) \thinspace x!} \biggl[\frac{\beta+2}{\beta+3}\biggr]^{\alpha+3} \biggl[\frac{1}{\beta+3}\biggr]^{x}

The Bayesian Predictive Mean
\displaystyle E[X_3 \lvert 0,3]=\frac{\alpha+3}{\beta+2}=\frac{2}{\beta+2} \biggl(\frac{3}{2}\biggr)+\frac{\beta}{\beta+2} \biggl(\frac{\alpha}{\beta}\biggr) \ \ \ \ \ \ \ \ \ \ (1)

Note that E[X \lvert \Theta]=\Theta. Thus the Bayesian predictive mean in this example is simply the mean of the posterior distribution of \Theta, which is E[\Theta \vert 0,3]=\frac{\alpha+3}{\beta+2}.

Comment
Generalizing the example, suppose that in the first n periods, the claim counts for the insured are X_1=x_1, \cdots, X_n=x_n. Then the posterior distribution of the parameter \Theta is a gamma distribution.

\biggl[\Theta \lvert X_1=x_1, \cdots, X_n=x_n\biggr] \sim Gamma(\alpha+\sum_{i=1}^{n} x_i,\beta+n)

Then the predictive distribution of X_{n+1} given the observations has a negative binomial distribution. More importantly, the Bayesian predictive mean is:

\displaystyle E[X_{n+1} \lvert X_1=x_1, \cdots, X_n=x_n]=\frac{\alpha+\sum_{i=1}^{n} x_i}{\beta+n}

\displaystyle =\frac{n}{\beta+n} \biggl(\frac{\sum \limits_{i=1}^{n} x_i}{n}\biggr)+\frac{\beta}{\beta+n} \biggl(\frac{\alpha}{\beta}\biggr)\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)

It is interesting that the Bayesian predictive mean of the (n+1)^{th} period is a weighted average of the mean of the observed data (\overline{X}) and the unconditional mean E[X]=\frac{\alpha}{\beta}. Consequently, the above Bayesian estimate is a credibility estimate. The weight given to the observed data Z=\frac{n}{\beta+n} is called the credibility factor. The estimate and the factor are called Bayesian credibility estimate and Bayesian credibility factor, respectively.

In general, the credibility estimate is an estimator of the following form:

\displaystyle E=Z \thinspace \overline{X}+ (1-Z) \thinspace \mu_0

where \overline{X} is the mean of the observed data and \mu_0 is the mean based on other information. In our example here, \mu_0 is the unconditional mean. In practice, \mu_0 could be the mean based on the entire book of business, or a mean based on a different block of similar insurance policies. Another interpretation is that \overline{X} is the mean of the recent experience data and \mu_0 is the mean of prior periods.

One more comment about the credibility factor Z=\frac{n}{\beta+n} derived in this example. As n \rightarrow \infty, Z \rightarrow 1. This makes intuitive sense since this gives more weight to \overline{X} as more and more data are accumulated.
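Formula (2) can be spot-checked numerically. The sketch below assumes \alpha=2, \beta=1 and an illustrative set of observed claim counts, computes the posterior mean of \Theta by integrating against the unnormalized posterior, and compares it with (\alpha+\sum x_i)/(\beta+n).

```python
# Numerical spot check of formula (2) with assumed parameters and observations.
import numpy as np
from scipy.integrate import quad
from scipy.stats import poisson, gamma

alpha, beta = 2.0, 1.0
x_obs = [0, 3, 1]                        # assumed observed claim counts
n = len(x_obs)

prior = gamma(a=alpha, scale=1.0 / beta).pdf
likelihood = lambda th: np.prod(poisson.pmf(x_obs, th))

norm = quad(lambda th: likelihood(th) * prior(th), 0, np.inf)[0]
post_mean = quad(lambda th: th * likelihood(th) * prior(th), 0, np.inf)[0] / norm

print(post_mean, (alpha + sum(x_obs)) / (beta + n))   # both ~1.5
```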

Examples of Bayesian prediction in insurance

We present two examples to illustrate the notion of Bayesian predictive distributions. The general insurance problem we aim to illustrate is that of using past claim experience data from an individual insured or a group of insureds to predict the future claim experience. Suppose we have X_1,X_2, \cdots, X_n with each X_i being the number of claims or an aggregate amount of claims in a prior period of observation. Given such results, what will be the number of claims during the next period, or what will be the aggregate claim amount in the next period? These two examples will motivate the notion of credibility, both Bayesian credibility theory and Buhlmann credibility theory. We present Example 1 in this post. Example 2 is presented in the next post (Examples of Bayesian prediction in insurance-continued).

Example 1
In this random experiment, there are a big bowl (called B) and two boxes (Box 1 and Box 2). Bowl B consists of a large quantity of balls, 80% of which are white and 20% of which are red. In Box 1, 60% of the balls are labeled 0, 30% are labeled 1 and 10% are labeled 2. In Box 2, 15% of the balls are labeled 0, 35% are labeled 1 and 50% are labeled 2. In the experiment, a ball is selected at random from bowl B. The color of the selected ball from bowl B determines which box to use (if the ball is white, then use Box 1, if red, use Box 2). Then balls are drawn at random from the selected box (Box i) repeatedly with replacement and the values of the series of selected balls are recorded. The value of first selected ball is X_1, the value of the second selected ball is X_2 and so on.

Suppose that your friend performs this random experiment (you do not know whether he uses Box 1 or Box 2) and that his first ball is a 1 (X_1=1) and his second ball is a 2 (X_2=2). What is the predicted value X_3 of the third selected ball?

Though it is straightforward to apply the Bayes’ theorem to this problem (the solution can be seen easily using a tree diagram) to obtain a numerical answer, we use this example to draw out the principle of Bayesian prediction. So it may appear that we are making a simple problem overly complicated. We are merely using this example to motivate the method of Bayesian estimation.

For convenience, we denote “draw a white ball from bowl B” by \theta=1 and “draw a red ball from bowl B” by \theta=2. Box 1 and Box 2 are conditional distributions. The Bowl B is a distribution for the parameter \theta. The distribution given in Bowl B is a probability distribution over the space of all parameter values (called a prior distribution). The prior distribution of \theta and the conditional distributions of X given \theta are restated as follows:

\pi_{\theta}(1)=0.8
\pi_{\theta}(2)=0.2

\displaystyle f_{X \lvert \Theta}(0 \lvert \theta=1)=0.60
\displaystyle f_{X \lvert \Theta}(1 \lvert \theta=1)=0.30
\displaystyle f_{X \lvert \Theta}(2 \lvert \theta=1)=0.10

\displaystyle f_{X \lvert \Theta}(0 \lvert \theta=2)=0.15
\displaystyle f_{X \lvert \Theta}(1 \lvert \theta=2)=0.35
\displaystyle f_{X \lvert \Theta}(2 \lvert \theta=2)=0.50

The following shows the conditional means E[X \lvert \theta] and the unconditional mean E[X].

\displaystyle E[X \lvert \theta=1]=0.6(0)+0.3(1)+0.1(2)=0.50
\displaystyle E[X \lvert \theta=2]=0.15(0)+0.35(1)+0.5(2)=1.35
\displaystyle E[X]=0.8(0.50)+0.2(1.35)=0.67

If you know which particular box your friend is using (\theta=1 or \theta=2), then the estimate of the next ball should be E[X \lvert \theta]. But the value of \theta is unknown to you. Another alternative for a predicted value is the unconditional mean E[X]=0.67. While the estimate E[X]=0.67 is easy to calculate, this estimate does not take the observed data (X_1=1 and X_2=2) into account and it certainly does not take the parameter \theta into account. A third alternative is to incorporate the observed data into the estimate of the next ball. We now continue with the calculation of the Bayesian estimate.

Unconditional Distribution
\displaystyle f_X(0)=0.6(0.8)+0.15(0.2)=0.51
\displaystyle f_X(1)=0.3(0.8)+0.35(0.2)=0.31
\displaystyle f_X(2)=0.1(0.8)+0.50(0.2)=0.18

Marginal Probability
\displaystyle f_{X_1,X_2}(1,2)=0.1(0.3)(0.8)+0.5(0.35)(0.2)=0.059

Posterior Distribution of \theta
\displaystyle \pi_{\Theta \lvert X_1,X_2}(1 \lvert 1,2)=\frac{0.1(0.3)(0.8)}{0.059}=\frac{24}{59}

\displaystyle \pi_{\Theta \lvert X_1,X_2}(2 \lvert 1,2)=\frac{0.5(0.35)(0.2)}{0.059}=\frac{35}{59}

Predictive Distribution of X
\displaystyle f_{X_3 \lvert X_1,X_2}(0 \lvert 1,2)=0.6 \frac{24}{59} + 0.15 \frac{35}{59}=\frac{19.65}{59}

\displaystyle f_{X_3 \lvert X_1,X_2}(1 \lvert 1,2)=0.3 \frac{24}{59} + 0.35 \frac{35}{59}=\frac{19.45}{59}

\displaystyle f_{X_3 \lvert X_1,X_2}(2 \lvert 1,2)=0.1 \frac{24}{59} + 0.50 \frac{35}{59}=\frac{19.90}{59}

Here is another formulation of the predictive distribution of X_3. See the general methodology section below.
\displaystyle f_{X_3 \lvert X_1,X_2}(0 \lvert 1,2)=\frac{0.6(0.1)(0.3)(0.8)+0.15(0.5)(0.35)(0.2)}{0.059}=\frac{19.65}{59}

\displaystyle f_{X_3 \lvert X_1,X_2}(1 \lvert 1,2)=\frac{0.3(0.1)(0.3)(0.8)+0.35(0.5)(0.35)(0.2)}{0.059}=\frac{19.45}{59}

\displaystyle f_{X_3 \lvert X_1,X_2}(2 \lvert 1,2)=\frac{0.1(0.1)(0.3)(0.8)+0.5(0.5)(0.35)(0.2)}{0.059}=\frac{19.90}{59}

The posterior distribution \pi_{\theta}(\cdot \lvert 1,2) is the conditional probability distribution of the parameter \theta given the observed data X_1=1 and X_2=2. This is a result of applying the Bayes’ theorem. The predictive distribution f_{X_3 \lvert X_1,X_2}(\cdot \lvert 1,2) is the conditional probability distribution of a new observation given the past observed data of X_1=1 and X_2=2. Since both of these distributions incorporate the past observations, the Bayesian estimate of the next observation is the mean of the predictive distribution.

\displaystyle E[X_3 \lvert X_1=1,X_2=2]

\displaystyle =0 \thinspace f_{X_3 \lvert X_1,X_2}(0 \lvert 1,2)+1 \thinspace f_{X_3 \lvert X_1,X_2}(1 \lvert 1,2)+2 \thinspace f_{X_3 \lvert X_1,X_2}(2 \lvert 1,2)

\displaystyle =0 \frac{19.65}{59}+1 \frac{19.45}{59}+ 2 \frac{19.90}{59}

\displaystyle =\frac{59.25}{59}=1.0042372

\displaystyle E[X_3 \lvert X_1=1,X_2=2]

\displaystyle =E[X \lvert \theta=1] \medspace \pi_{\Theta \lvert X_1,X_2}(1 \lvert 1,2)+E[X \lvert \theta=2] \medspace \pi_{\Theta \lvert X_1,X_2}(2 \lvert 1,2)

\displaystyle =0.5 \frac{24}{59}+1.35 \frac{35}{59}=\frac{59.25}{59}

Note that we compute the Bayesian estimate E[X_3 \vert X_1,X_2] in two ways, one using the predictive distribution and the other using the posterior distribution of the parameter \theta. The Bayesian estimate is the mean of the hypothetical means E[X \lvert \theta] with expectation taken over the entire posterior distribution \pi_{\theta}(\cdot \lvert 1,2).
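The posterior, the predictive distribution and the predictive mean above can be reproduced with a short script (a sketch of the same arithmetic, not an addition to the method):

```python
# Reproducing the Bayesian calculation for Example 1 (numbers from the post).
prior = {1: 0.8, 2: 0.2}
f = {1: {0: 0.60, 1: 0.30, 2: 0.10},
     2: {0: 0.15, 1: 0.35, 2: 0.50}}
obs = [1, 2]                                         # X1 = 1, X2 = 2

# joint probability of the observations with each theta, then the posterior
joint = {t: prior[t] * f[t][obs[0]] * f[t][obs[1]] for t in prior}
marginal = sum(joint.values())                       # 0.059
posterior = {t: joint[t] / marginal for t in prior}  # 24/59 and 35/59

predictive = {x: sum(posterior[t] * f[t][x] for t in prior) for x in (0, 1, 2)}
bayes_mean = sum(x * p for x, p in predictive.items())
print(posterior, predictive, bayes_mean)             # bayes_mean = 59.25/59 = 1.00423...
```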

Discussion of General Methodology
We now use Example 1 to draw out general methodology. We first describe the discrete case and have the continuous case as a generalization.

Suppose we have a family of conditional density functions f_{X \lvert \Theta}(x \lvert \theta). In Example 1, the bowl B is the distribution of the parameter \theta. Box 1 and Box 2 are the conditional distributions with density f_{X \lvert \Theta}(x \lvert \theta). In an insurance application, the \theta is a risk parameter and the conditional distribution f_{X \lvert \Theta}(x \lvert \theta) is the claim experience in a given fixed period (conditional on \Theta=\theta).

Suppose that X_1,X_2, \cdots, X_n,X_{n+1} (conditional on \Theta=\theta) are independent and identically distributed where the common density function is f_{X \lvert \Theta}(x \lvert \theta). In our Example 1, once a box is selected (e.g. Box 1), then the repeated drawing of the balls are independent and identically distributed. In an insurance application, the X_k are the claim experience from an insured (or a group of insureds) where the insured belongs to the risk class with parameter \theta.

We are interested in the conditional distribution of X_{n+1} given \Theta=\theta to predict X_{n+1}. In our example, X_{n+1} is the value of the ball in the (n+1)^{st} draw. In an insurance application, X_{n+1} may be the claim experience of an insured (or a group of insureds) in the next policy period. We can use the unconditional mean E[X]=E[E(X \lvert \Theta)] (the mean of the hypothetical means). This approach does not take the risk parameter of the insured into the equation. On the other hand, if we know the value of \theta, then we can use f_{X \lvert \Theta}(x \lvert \theta). But the risk parameter is usually unknown. The natural alternative is to condition on the observed experience in the n prior periods X_1, \cdots, X_n rather than conditioning on the risk parameter \theta. Thus we derive the predictive distribution of X_{n+1} given the observation X_1, \cdots, X_n. Given the observed experience data X_1=x_1,X_2=x_2, \cdots, X_n=x_n, the following is the derivation of the Bayesian predictive distribution. Note that the prior distribution of the parameter \theta is \pi_{\Theta}(\theta).

The Unconditional Distribution
\displaystyle f_X(x)=\sum \limits_{\theta} f_{X \lvert \Theta}(x \lvert \theta) \ \pi_{\Theta}(\theta)

The Marginal Distribution
\displaystyle f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)=\sum \limits_{\theta} \biggl[\prod \limits_{i=1}^{n} f_{X_i \lvert \Theta}(x_i \lvert \theta)\biggr] \pi_{\Theta}(\theta)

The Posterior Distribution
\displaystyle \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n)

\displaystyle = \ \ \ \ \ \ \ \ \ \ \frac{1}{f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)} \biggl[\prod \limits_{i=1}^{n} f_{X_i \lvert \Theta}(x_i \lvert \theta)\biggr] \pi_{\Theta}(\theta)

The Predictive Distribution
\displaystyle f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n)

\displaystyle =\ \ \ \ \ \ \ \ \ \ \sum \limits_{\theta} f_{X \lvert \Theta}(x \lvert \theta) \thinspace \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n)

Another formulation is:
\displaystyle f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n)

\displaystyle =\ \ \ \ \ \ \ \ \ \ \frac{1}{f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)} \sum \limits_{\theta} f_{X_{n+1} \lvert \Theta}(x \lvert \theta) \biggl[ \prod \limits_{j=1}^{n}f_{X_j \lvert \Theta}(x_j \lvert \theta)\biggr] \thinspace \pi_{\Theta}(\theta)

The Bayesian Predictive Mean of the Next Period
\displaystyle E[X_{n+1} \lvert X_1=x_1, \cdots, X_n=x_n]

\displaystyle =\ \ \ \ \ \ \ \ \ \ \sum \limits_{x} x \thinspace f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n)

\displaystyle E[X_{n+1} \lvert X_1=x_1, \cdots, X_n=x_n]

\displaystyle =\ \ \ \ \ \ \ \ \ \ \sum \limits_{\theta} E[X \lvert \theta] \thinspace \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n)

We state the same results for the case that the claim experience X is continuous.

The Unconditional Distribution
\displaystyle f_{X}(x) = \int_{\theta} f_{X \lvert \Theta} (x \lvert \theta) \ \pi_{\Theta}(\theta) \ d \theta

The Marginal Distribution
\displaystyle f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)=\int \limits_{\theta} \biggl[\prod \limits_{i=1}^{n} f_{X \lvert \Theta}(x_i \lvert \theta)\biggr] \pi_{\Theta}(\theta) d \theta

The Posterior Distribution
\displaystyle \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n)

\displaystyle =\ \ \ \ \ \ \ \ \ \ \frac{1}{f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)} \biggl[\prod \limits_{i=1}^{n} f_{X \lvert \Theta}(x_i \lvert \theta)\biggr] \pi_{\Theta}(\theta)

The Predictive Distribution
\displaystyle f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n)

\displaystyle =\ \ \ \ \ \ \ \ \ \ \int \limits_{\theta} f_{X \lvert \Theta}(x \lvert \theta) \thinspace \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n) \ d \theta

Another formulation is:
\displaystyle f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n)

\displaystyle =\ \ \ \ \ \ \ \ \ \ \frac{1}{f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)} \int \limits_{\theta} f_{X_{n+1} \lvert \Theta}(x \lvert \theta) \biggl[ \prod \limits_{j=1}^{n}f_{X_j \lvert \Theta}(x_j \lvert \theta)\biggr] \thinspace \pi_{\Theta}(\theta) \ d \theta

The Bayesian Predictive Mean of the Next Period
\displaystyle E[X_{n+1} \lvert X_1=x_1, \cdots, X_n=x_n]

\displaystyle =\ \ \ \ \ \ \ \ \ \ \int \limits_{x} x \thinspace f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n) dx

\displaystyle E[X_{n+1} \lvert X_1=x_1, \cdots, X_n=x_n]

\displaystyle =\ \ \ \ \ \ \ \ \ \ \int \limits_{\theta} E[X \lvert \theta] \thinspace \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n) d \theta

See the next post (Examples of Bayesian prediction in insurance-continued) for Example 2.

Compound mixed Poisson distribution

Let the random sum Y=X_1+X_2+ \cdots +X_N be the aggregate claims generated in a fixed period by an independent group of insureds. When the number of claims N follows a Poisson distribution, the sum Y is said to have a compound Poisson distribution. When the number of claims N has a mixed Poisson distribution, the sum Y is said to have a compound mixed Poisson distribution. A mixed Poisson distribution is a Poisson random variable N such that the Poisson parameter \Lambda is uncertain. In other words, N is a mixture of a family of Poisson distributions N(\Lambda) and the random variable \Lambda specifies the mixing weights. In this post, we present several basic properties of compound mixed Poisson distributions. In a previous post (Compound negative binomial distribution), we showed that the compound negative binomial distribution is an example of a compound mixed Poisson distribution (with gamma mixing weights).

In terms of notation, we have:

  • Y=X_1+X_2+ \cdots +X_N,
  • N \sim Poisson(\Lambda),
  • \Lambda \sim some unspecified distribution.

The following presents basic properties of the compound mixed Poisson Y in terms of the mixing weights \Lambda and the claim amount random variable X.

Mean and Variance

\displaystyle E[Y]=E[\Lambda] E[X]

\displaystyle Var[Y]=E[\Lambda] E[X^2]+Var[\Lambda] E[X]^2

Moment Generating Function

\displaystyle M_Y(t)=M_{\Lambda}[M_X(t)-1]

Cumulant Generating Function

\displaystyle \Psi_Y(t)=ln M_{\Lambda}[M_X(t)-1]=\Psi_{\Lambda}[M_X(t)-1]

Measure of Skewness
\displaystyle E[(Y-\mu_Y)^3]=\Psi_Y^{(3)}(0)

\displaystyle =\Psi_{\Lambda}^{(3)}(0) E[X]^3 + 3 \Psi_{\Lambda}^{(2)}(0) E[X] E[X^2]+\Psi_{\Lambda}^{(1)}(0) E[X^3]

\displaystyle =\gamma_{\Lambda} Var[\Lambda]^{\frac{3}{2}} E[X]^3 + 3 Var[\Lambda] E[X] E[X^2]+E[\Lambda] E[X^3]

Measure of skewness: \displaystyle \gamma_Y=\frac{E[(Y-\mu_Y)^3]}{(Var[Y])^{\frac{3}{2}}}
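A Monte Carlo check of the mean and variance formulas is straightforward. The sketch below assumes gamma mixing weights and exponential claim amounts with illustrative parameters; the simulated mean and variance of Y should be close to E[\Lambda]E[X] and E[\Lambda]E[X^2]+Var[\Lambda]E[X]^2.

```python
# Monte Carlo check of the compound mixed Poisson mean/variance formulas
# (gamma mixing weights and exponential claims are assumed for illustration).
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 3.0, 2.0           # Lambda ~ Gamma(alpha, rate beta)
claim_mean = 10.0                # X ~ exponential with mean 10

sims = 100_000
lam = rng.gamma(alpha, 1.0 / beta, size=sims)           # mixing weights
n = rng.poisson(lam)                                    # mixed Poisson claim counts
y = np.array([rng.exponential(claim_mean, k).sum() for k in n])

EL, VL = alpha / beta, alpha / beta ** 2                # E[Lambda], Var[Lambda]
EX, EX2 = claim_mean, 2 * claim_mean ** 2               # E[X], E[X^2]
print(y.mean(), EL * EX)                                # ~ E[Lambda] E[X] = 15
print(y.var(), EL * EX2 + VL * EX ** 2)                 # ~ 375
```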

Previous Posts on Compound Distributions

An introduction to compound distributions
Some examples of compound distributions
Compound Poisson distribution
Compound Poisson distribution-discrete example
Compound negative binomial distribution

Compound Poisson distribution-discrete example

We present a discrete example of a compound Poisson distribution. A random variable Y has a compound distribution if Y=X_1+ \cdots +X_N where the number of terms N is a discrete random variable whose support is the set of all nonnegative integers (or some appropriate subset) and the random variables X_i are identically distributed (let X be the common distribution). We further assume that the random variables X_i are independent and each X_i is independent of N. When N follows the Poisson distribution, Y is said to have a compound Poisson distribution. When the common distribution for the X_i is continuous, Y is a mixed distribution if P[N=0] is nonzero. When the common distribution for the X_i is discrete, Y is a discrete distribution. In this post we present an example of a compound Poisson distribution where the common distribution X is discrete. The compound distribution has a natural insurance interpretation (see the following links).

Compound Poisson distribution
Some examples of compound distributions
An introduction to compound distributions

General Discussion
In general, the distribution function of a compound Poisson random variable Y is the weighted average of all the n^{th} convolutions of the common distribution function of the individual claim amount X. The following shows the form of such a distribution function:

\displaystyle F_Y(y)=\sum \limits_{n=0}^{\infty} F^{*n}(y) P[N=n]

where \displaystyle F is the common distribution of the X_n and F^{*n} is the n^{th} convolution of F.

If the distribution of the individual claim X is discrete, we can obtain the probability mass function of Y by convolutions as follows:

\displaystyle f_Y(y)=P[Y=y]=\sum \limits_{n=0}^{\infty} p^{*n}(y) P[N=n]

where \displaystyle p^{*1}(y)=P[X=y]
and \displaystyle p^{*n}(y)=P[X_1+X_2+ \cdots +X_n=y]
and \displaystyle p^{*0}(y)=\left\{\begin{matrix}0&\thinspace y \ne 0\\{1}&\thinspace y=0\end{matrix}\right.

Example
Suppose the number of claims generated by a portfolio of insurance policies over a fixed time period has a Poisson distribution with parameter \lambda. Individual claim amounts will be 1 or 2 with probabilities 0.6 and 0.4, respectively. For the compound Poisson aggregate claims Y=X_1+ \cdots +X_N, find P[Y=k] for k=0,1,2,3,4.

The probability mass function of N is: \displaystyle f_N(n)=\frac{\lambda^n e^{-\lambda}}{n!} where n=0,1,2, \cdots. The individual claim amount X is a two-valued discrete random variable. For convenience, we let p=0.4 (i.e. we consider X=2 a success) and write X=1+B where B has a Bernoulli distribution with parameter p. Then the sum X_1+ \cdots + X_n equals n plus a Binomial(n,p) random variable. Consequently, the n^{th} convolution is simply \displaystyle p^{*n}(y)=P[Binomial(n,p)=y-n]. The following shows p^{*n} for n=1,2,3,4.

\displaystyle p^{*1}(1)=0.6, \thinspace p^{*1}(2)=0.4

\displaystyle p^{*2}(2)=\binom{2}{0} (0.4)^0 (0.6)^2=0.36
\displaystyle p^{*2}(3)=\binom{2}{1} (0.4)^1 (0.6)^1=0.48
\displaystyle p^{*2}(4)=\binom{2}{2} (0.4)^2 (0.6)^0=0.16

\displaystyle p^{*3}(3)=\binom{3}{0} (0.4)^0 (0.6)^3=0.216
\displaystyle p^{*3}(4)=\binom{3}{1} (0.4)^1 (0.6)^2=0.432
\displaystyle p^{*3}(5)=\binom{3}{2} (0.4)^2 (0.6)^1=0.288
\displaystyle p^{*3}(6)=\binom{3}{3} (0.4)^3 (0.6)^0=0.064

\displaystyle p^{*4}(4)=\binom{4}{0} (0.4)^0 (0.6)^4=0.1296
\displaystyle p^{*4}(5)=\binom{4}{1} (0.4)^1 (0.6)^3=0.3456
\displaystyle p^{*4}(6)=\binom{4}{2} (0.4)^2 (0.6)^2=0.3456
\displaystyle p^{*4}(7)=\binom{4}{3} (0.4)^3 (0.6)^1=0.1536
\displaystyle p^{*4}(8)=\binom{4}{4} (0.4)^4 (0.6)^0=0.0256

Since we are interested in finding P[Y=y] for y=0,1,2,3,4, we only need to consider N=0,1,2,3,4. The following matrix shows the relevant values of p^{*n}. The rows are for y=0,1,2,3,4. The columns are p^{*0}, p^{*1}, p^{*2}, p^{*3}, p^{*4}.

\displaystyle \begin{pmatrix} 1&0&0&0&0 \\{0}&0.6&0&0&0 \\{0}&0.4&0.36&0&0 \\{0}&0&0.48&0.216&0 \\{0}&0&0.16&0.432&0.1296\end{pmatrix}

To obtain the probability mass function of Y, we simply multiply each row by P[N=n] where n=0,1,2,3,4.

\displaystyle P[Y=0]=e^{-\lambda}
\displaystyle P[Y=1]=0.6 \lambda e^{-\lambda}
\displaystyle P[Y=2]=0.4 \lambda e^{-\lambda}+0.36 \frac{\lambda^2 e^{-\lambda}}{2}
\displaystyle P[Y=3]=0.48 \frac{\lambda^2 e^{-\lambda}}{2}+0.216 \frac{\lambda^3 e^{-\lambda}}{6}
\displaystyle P[Y=4]=0.16 \frac{\lambda^2 e^{-\lambda}}{2}+0.432 \frac{\lambda^3 e^{-\lambda}}{6}+0.1296 \frac{\lambda^4 e^{-\lambda}}{24}
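The five probabilities above can be checked directly by building the compound Poisson probability function from convolutions, as in the general formula f_Y(y)=\sum_n p^{*n}(y) P[N=n]. The sketch below assumes an illustrative value \lambda=1.2 and truncates the Poisson sum at a large n.

```python
# Check P[Y=0..4] by convolving the claim-amount distribution (lambda = 1.2 assumed).
import numpy as np
from math import exp, factorial

lam = 1.2
claim = np.array([0.0, 0.6, 0.4])        # P[X=0], P[X=1], P[X=2]

max_y, max_n = 8, 30
f_Y = np.zeros(max_y + 1)
conv = np.array([1.0])                   # p^{*0}: point mass at 0
for n in range(max_n + 1):
    pois = exp(-lam) * lam ** n / factorial(n)
    f_Y[: min(len(conv), max_y + 1)] += pois * conv[: max_y + 1]
    conv = np.convolve(conv, claim)      # p^{*(n+1)}

formulas = [exp(-lam),
            0.6 * lam * exp(-lam),
            0.4 * lam * exp(-lam) + 0.36 * lam ** 2 * exp(-lam) / 2,
            0.48 * lam ** 2 * exp(-lam) / 2 + 0.216 * lam ** 3 * exp(-lam) / 6,
            0.16 * lam ** 2 * exp(-lam) / 2 + 0.432 * lam ** 3 * exp(-lam) / 6
            + 0.1296 * lam ** 4 * exp(-lam) / 24]
print(np.allclose(f_Y[:5], formulas))    # True
```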

Compound Poisson distribution

The compound distribution is a model for describing the aggregate claims arising in a group of independent insureds. Let N be the number of claims generated by a portfolio of insurance policies in a fixed time period. Suppose X_1 is the amount of the first claim, X_2 is the amount of the second claim and so on. Then Y=X_1+X_2+ \cdots + X_N represents the total aggregate claims generated by this portfolio of policies in the given fixed time period. In order to make this model more tractable, we make the following assumptions:

  • X_1,X_2, \cdots are independent and identically distributed.
  • Each X_i is independent of the number of claims N.

The number of claims N is associated with the claim frequency in the given portfolio of policies. The common distribution of X_1,X_2, \cdots is denoted by X. Note that X models the amount of a random claim generated in this portfolio of insurance policies. See these two posts for an introduction to compound distributions (An introduction to compound distributions, Some examples of compound distributions).

When the claim frequency N follows a Poisson distribution with a constant parameter \lambda, the aggregate claims Y is said to have a compound Poisson distribution. After a general discussion of the compound Poisson distribution, we discuss the property that an independent sum of compound Poisson distributions is also a compound Poisson distribution. We also present an example to illustrate basic calculations.

Compound Poisson – General Properties

Distribution Function
\displaystyle F_Y(y)=\sum \limits_{n=0}^{\infty} F^{*n}(y) \frac{\lambda^n e^{-\lambda}}{n!}

where \lambda=E[N], F is the common distribution function of X_i and F^{*n} is the n-fold convolution of F.

Mean and Variance
\displaystyle E[Y]=E[N] E[X]= \lambda E[X]

\displaystyle Var[Y]=\lambda E[X^2]

Moment Generating Function and Cumulant Generating Function
\displaystyle M_Y(t)=e^{\lambda (M_X(t)-1)}

\displaystyle \Psi_Y(t)=ln M_Y(t)=\lambda (M_X(t)-1)

Note that the moment generating function of the Poisson N is M_N(t)=e^{\lambda (e^t - 1)}. For a compound distribution Y in general, M_Y(t)=M_N[ln M_X(t)].

Skewness
\displaystyle E[(Y-\mu_Y)^3]=\Psi_Y^{(3)}(0)=\lambda E[X^3]

\displaystyle \gamma_Y=\frac{E[(Y-\mu_Y)^3]}{Var[Y]^{\frac{3}{2}}}=\frac{1}{\sqrt{\lambda}} \frac{E[X^3]}{E[X^2]^{\frac{3}{2}}}

Independent Sum of Compound Poisson Distributions
First, we state the results. Suppose that Y_1,Y_2, \cdots, Y_k are independent random variables such that each Y_i has a compound Poisson distribution with \lambda_i being the Poisson parameter for the number of claim variable and F_i being the distribution function for the individual claim amount. Then Y=Y_1+Y_2+ \cdots +Y_k has a compound Poisson distribution with:

  • the Poisson parameter: \displaystyle \lambda=\sum \limits_{i=1}^{k} \lambda_i
  • the distribution function: \displaystyle F_Y(y)=\sum \limits_{i=1}^{k} \frac{\lambda_i}{\lambda} \thinspace F_i(y)

The above result has an insurance interpretation. Suppose we have k independent blocks of insurance policies such that the aggregate claims Y_i for the i^{th} block has a compound Poisson distribution. Then Y=Y_1+Y_2+ \cdots +Y_k is the aggregate claims for the combined block during the fixed policy period and also has a compound Poisson distribution with the parameters stated in the above two bullet points.

To get a further intuitive understanding about the parameters of the combined block, consider N_i as the Poisson number of claims in the i^{th} block of insurance policies. It is a well known fact in probability theory (see [1]) that the independent sum of Poisson variables is also a Poisson random variable. Thus the total number of claims in the combined block is N=N_1+N_2+ \cdots +N_k and has a Poisson distribution with parameter \lambda=\lambda_1 + \cdots + \lambda_k.

How do we describe the distribution of an individual claim amount in the combined insurance block? Given a claim from the combined block, since we do not know which of the constituent blocks it is from, this suggests that an individual claim amount is a mixture of the individual claim amount distributions from the k blocks with mixing weights \displaystyle \frac{\lambda_1}{\lambda},\frac{\lambda_2}{\lambda}, \cdots, \frac{\lambda_k}{\lambda}. These mixing weights make intuitive sense. If insurance block i has a higher claim frequency \lambda_i, then it is more likely that a randomly selected claim from the combined block comes from block i. Of course, this discussion is not a proof. But looking at the insurance model is a helpful way of understanding the independent sum of compound Poisson distributions.

To see why the stated result is true, let M_i(t) be the moment generating function of the individual claim amount in the i^{th} block of policies. Then the mgf of the aggregate claims Y_i is \displaystyle M_{Y_i}(t)=e^{\lambda_i (M_i(t)-1)}. Consequently, the mgf of the independent sum Y=Y_1+ \cdots + Y_k is:

\displaystyle M_Y(t)=\prod \limits_{i=1}^{k} e^{\lambda_i (M_i(t)-1)}= e^{\sum \limits_{i=1}^{k} \lambda_i(M_i(t)-1)} \displaystyle = e^{\lambda \biggl[\sum \limits_{i=1}^{k} \frac{\lambda_i}{\lambda} M_i(t) - 1 \biggr]}

The mgf of Y has the form of a compound Poisson distribution where the Poisson parameter is \lambda=\lambda_1 + \cdots + \lambda_k. Note that the component \displaystyle \sum \limits_{i=1}^{k} \frac{\lambda_i}{\lambda}M_i(t) in the exponent is the mgf of the claim amount distribution. Since it is the weighted average of the individual claim amount mgf’s, this indicates that the distribution function of Y is the mixture of the distribution functions F_i.

Example
Suppose that an insurance company acquired two portfolios of insurance policies and combined them into a single block. For each portfolio the aggregate claims variable has a compound Poisson distribution. For one of the portfolios, the Poisson parameter is \lambda_1 and the individual claim amount has an exponential distribution with parameter \delta_1. The corresponding Poisson and exponential parameters for the other portfolio are \lambda_2 and \delta_2, respectively. Discuss the distribution for the aggregate claims Y=Y_1+Y_2 of the combined portfolio.

The aggregate claims Y of the combined portfolio has a compound Poisson distribution with Poisson parameter \lambda=\lambda_1+\lambda_2. The amount of a random claim X in the combined portfolio has the following distribution function and density function:

\displaystyle F_X(x)=\frac{\lambda_1}{\lambda} (1-e^{-\delta_1 x})+\frac{\lambda_2}{\lambda} (1-e^{-\delta_2 x})

\displaystyle f_X(x)=\frac{\lambda_1}{\lambda} (\delta_1 \thinspace e^{-\delta_1 x})+\frac{\lambda_2}{\lambda} (\delta_2 \thinspace e^{-\delta_2 x})

The rest of the discussion mirrors the general discussion earlier in this post.

Distribution Function
As in the general case, \displaystyle F_Y(y)=\sum \limits_{n=0}^{\infty} F^{*n}(y) \frac{\lambda^n e^{-\lambda}}{n!}

where \lambda=\lambda_1 +\lambda_2, F=F_X and F^{*n} is the n-fold convolution of F_X.

Mean and Variance
\displaystyle E[Y]=\frac{\lambda_1}{\delta_1}+\frac{\lambda_2}{\delta_2}

\displaystyle Var[Y]=\frac{2 \lambda_1}{\delta_1^2}+\frac{2 \lambda_2}{\delta_2^2}

Moment Generating Function and Cumulant Generating Function
To obtain the mgf and cgf of the aggregate claims Y, consider \lambda [M_X(t)-1]. Note that M_X(t) is the weighted average of the two exponential mgfs of the two portfolios of insurance policies. Thus we have:

\displaystyle M_X(t)=\frac{\lambda_1}{\lambda} \frac{\delta_1}{\delta_1 - t}+\frac{\lambda_2}{\lambda} \frac{\delta_2}{\delta_2 - t}

\displaystyle \lambda [M_X(t)-1]=\frac{\lambda_1 t}{\delta_1 - t}+\frac{\lambda_2 t}{\delta_2 - t}

\displaystyle M_Y(t)=e^{\lambda (M_X(t)-1)}=e^{\frac{\lambda_1 t}{\delta_1 - t}+\frac{\lambda_2 t}{\delta_2 - t}}

\displaystyle \Psi_Y(t)=\frac{\lambda_1 t}{\delta_1 -t}+\frac{\lambda_2 t}{\delta_2 -t}

Skewness
Note that \displaystyle E[(Y-\mu_Y)^3]=\Psi_Y^{(3)}(0)=\frac{6 \lambda_1}{\delta_1^3}+\frac{6 \lambda_2}{\delta_2^3}

\displaystyle \gamma_Y=\displaystyle \frac{\frac{6 \lambda_1}{\delta_1^3}+\frac{6 \lambda_2}{\delta_2^3}}{(\frac{2 \lambda_1}{\delta_1^2}+\frac{2 \lambda_2}{\delta_2^2})^{\frac{3}{2}}}
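A quick simulation check of the combined portfolio (the parameters \lambda_1=2, \delta_1=0.5, \lambda_2=3, \delta_2=0.25 are assumed for illustration): the simulated mean and variance of Y=Y_1+Y_2 should be close to \lambda_1/\delta_1+\lambda_2/\delta_2 and 2\lambda_1/\delta_1^2+2\lambda_2/\delta_2^2.

```python
# Simulation check of the combined compound Poisson portfolio (illustrative parameters).
import numpy as np

rng = np.random.default_rng(3)
lam1, d1 = 2.0, 0.5
lam2, d2 = 3.0, 0.25
sims = 100_000

def compound_poisson(lam, delta, size):
    n = rng.poisson(lam, size)
    return np.array([rng.exponential(1.0 / delta, k).sum() for k in n])

y = compound_poisson(lam1, d1, sims) + compound_poisson(lam2, d2, sims)
print(y.mean(), lam1 / d1 + lam2 / d2)                       # ~ 16
print(y.var(), 2 * lam1 / d1 ** 2 + 2 * lam2 / d2 ** 2)      # ~ 112
```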

Reference

  1. Hogg R. V. and Tanis E. A., Probability and Statistical Inference, Second Edition, Macmillan Publishing Co., New York, 1983.

Some examples of compound distributions

We present two examples of compound distributions to illustrate the general formulas presented in the previous post (An introduction to compound distributions).

For the examples below, let N be the number of claims generated by either an individual insured or a group of independent insureds. Let X be the individual claim amount. We consider the random sum Y=X_1+ \cdots + X_N. We discuss the following properties of the aggregate claims random variable Y:

  1. The distribution function F_Y
  2. The mean and higher moments: E[Y] and E[Y^n]
  3. The variance: Var[Y]
  4. The moment generating function and cumulant generating function: M_Y(t) and \Psi_Y(t).
  5. Skewness: \gamma_Y.

Example 1
The number of claims for an individual insurance policy in a policy period is modeled by the binomial distribution with parameter n=2 and p. The individual claim, when it occurs, is modeled by the exponential distribution with parameter \lambda (i.e. the mean individual claim amount is \frac{1}{\lambda}).

The distribution function F_Y is the weighted average of a point mass at y=0, the exponential distribution and the Erlang-2 distribution function. For x \ge 0, we have:

\displaystyle F_Y(x)=(1-p)^2+2p(1-p)(1-e^{-\lambda x})+p^2(1-\lambda x e^{-\lambda x}-e^{-\lambda x})

The mean and variance are as follows:

\displaystyle E[Y]=E[N] \thinspace E[X]=\frac{2p}{\lambda}

\displaystyle Var[Y]=E[N] \thinspace Var[X]+Var[N] \thinspace E[X]^2

\displaystyle =\frac{2p}{\lambda^2}+\frac{2p(1-p)}{\lambda^2}=\frac{4p-2p^2}{\lambda^2}

The following calculates the higher moments:

\displaystyle E[Y^n]=(1-p)^2 \cdot 0 + 2p(1-p) \frac{n!}{\lambda^n}+p^2 \frac{(n+1)!}{\lambda^n}

\displaystyle = \frac{2p(1-p)n!+p^2(n+1)!}{\lambda^n}

The moment generating function M_Y(t)=M_N[ln \thinspace M_X(t)]. So we have:

\displaystyle M_Y(t)=\biggl(1-p+p \frac{\lambda}{\lambda -t}\biggr)^2

\displaystyle =(1-p)^2+2p(1-p) \frac{\lambda}{\lambda -t}+p^2 \biggl(\frac{\lambda}{\lambda -t}\biggr)^2

Note that \displaystyle M_N(t)=(1-p+p e^{t})^2 and \displaystyle M_X(t)=\frac{\lambda}{\lambda -t}.

For the cumulant generating function, we have:

\displaystyle \Psi_Y(t)=ln M_Y(t)=2 ln\biggl(1-p+p \frac{\lambda}{\lambda -t}\biggr)

For the measure of skewness, we rely on the cumulant generating function. Taking the third derivative of \Psi_Y(t) and evaluating at t=0, we have:

\displaystyle \Psi_Y^{(3)}(0)=\frac{12p-12p^2+4p^3}{\lambda^3}

\displaystyle \gamma_Y=\frac{\Psi_Y^{(3)}(0)}{Var(Y)^{\frac{3}{2}}}=\frac{12p-12p^2+4p^3}{(4p-2p^2)^{\frac{3}{2}}}
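
The closed-form distribution function above can be checked by simulation. The following Python sketch (numpy assumed; p and \lambda are arbitrary illustrative values) draws the claim count from a binomial distribution with n=2, sums that many exponential claim amounts, and compares the empirical distribution function with F_Y at a few points.

import numpy as np

rng = np.random.default_rng(7)

p, lam = 0.3, 0.5            # arbitrary illustrative values
n_sim = 200_000

N = rng.binomial(2, p, n_sim)                        # number of claims in the period
Y = np.zeros(n_sim)
pos = N > 0
Y[pos] = rng.gamma(shape=N[pos], scale=1.0 / lam)    # sum of N exponential claim amounts

def F_Y(x):
    # Closed-form CDF: point mass at 0, Exp(lam) and Erlang-2 components
    return ((1 - p) ** 2
            + 2 * p * (1 - p) * (1 - np.exp(-lam * x))
            + p ** 2 * (1 - lam * x * np.exp(-lam * x) - np.exp(-lam * x)))

for x in [0.0, 1.0, 3.0, 8.0]:
    print(f"x={x}: empirical={np.mean(Y <= x):.4f}  formula={F_Y(x):.4f}")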

Example 2
In this example, the number of claims N follows a geometric distribution. The individual claim amount X follows an exponential distribution with parameter \lambda.

One of the most interesting facts about this example is the moment generating function. Note that \displaystyle M_N(t)=\frac{p}{1-(1-p)e^t}. The following shows the derivation of M_Y(t):

\displaystyle M_Y(t)=M_N[ln \thinspace M_X(t)]=\frac{p}{1-(1-p) e^{ln M_X(t)}}

\displaystyle =\frac{p}{1-(1-p) \frac{\lambda}{\lambda -t}}=\cdots=p+(1-p) \frac{\lambda p}{\lambda p-t}

The moment generating function is the weighted average of the mgf of a point mass at y=0 (which is identically 1) and the mgf of an exponential distribution with parameter \lambda p. Thus this compound geometric distribution is equivalent to a mixture of a point mass and an exponential distribution. We make use of this fact and derive the following basic properties.

Distribution Function
\displaystyle F_Y(y)=p+(1-p) (1-e^{-\lambda p y})=1-(1-p) e^{-\lambda p y} for y \ge 0

Density Function
\displaystyle f_Y(y)=\left\{\begin{matrix}p&\thinspace y=0\\{(1-p) \lambda p e^{-\lambda p y}}&\thinspace 0 < y\end{matrix}\right.

Mean and Higher Moments
\displaystyle E[Y]=(1-p) \frac{1}{\lambda p}=\frac{1-p}{p} \frac{1}{\lambda}=E[N] E[X]

\displaystyle E[Y^n]=p \cdot 0 + (1-p) \frac{n!}{(\lambda p)^n}=(1-p) \frac{n!}{(\lambda p)^n}

Variance
\displaystyle Var[Y]=\frac{2(1-p)}{\lambda^2 p^2}-\frac{(1-p)^2}{\lambda^2 p^2}=\frac{1-p^2}{\lambda^2 p^2}

Cumulant Generating Function
\displaystyle \Psi_Y(t)=ln \thinspace M_Y(t)=ln\biggl(p+(1-p) \frac{\lambda p}{\lambda p-t}\biggr)

Skewness
\displaystyle E\biggl[\biggl(Y-\mu_Y\biggr)^3\biggr]=\Psi_Y^{(3)}(0)=\frac{2-2p^3}{\lambda^3 p^3}

\displaystyle \gamma_Y=\frac{\Psi_Y^{(3)}(0)}{(Var[Y])^{\frac{3}{2}}}=\frac{2-2p^3}{(1-p^2)^{\frac{3}{2}}}
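
The equivalence between this compound geometric distribution and the mixture of a point mass at the origin and an exponential distribution with parameter \lambda p can be seen numerically. The following Python sketch (numpy assumed; the values of p and \lambda are arbitrary) compares the simulated survival function of Y with (1-p) e^{-\lambda p y}.

import numpy as np

rng = np.random.default_rng(11)

p, lam = 0.4, 2.0            # arbitrary illustrative values
n_sim = 300_000

# numpy's geometric generator counts trials up to the first success (support 1, 2, ...),
# so subtract 1 to get P[N = n] = p (1-p)^n for n = 0, 1, 2, ...
N = rng.geometric(p, n_sim) - 1
Y = np.zeros(n_sim)
pos = N > 0
Y[pos] = rng.gamma(shape=N[pos], scale=1.0 / lam)    # sum of N exponential claim amounts

# Survival function implied by the mixture: P[Y > y] = (1-p) e^{-lam p y}
for y in [0.0, 0.5, 1.0, 2.0]:
    print(f"y={y}: empirical={np.mean(Y > y):.4f}  mixture={(1 - p) * np.exp(-lam * p * y):.4f}")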

More insurance examples of mixed distributions

Four posts have already been devoted to describing three models for “per loss” insurance payout. These are mixed distributions modeling the amount the insurer pays out for each random loss, and they can also be viewed as mixtures. We now turn our attention to the mixed distributions modeling the “per period” payout for an insurance policy. That is, the mixed distributions we describe here model the total amount of losses paid out for each insurance policy in a given policy period. This involves uncertain random losses as well as uncertain claim frequency. In other words, there is a possibility of having no losses, and when there are losses in a policy period, the number of losses can be uncertain (there can be only one loss or multiple losses). The links to the previous posts on mixed distributions are found at the end of this post.

The following is the general setting of the insurance problem we discuss in this post.

  1. The random variable X is the size of the random loss that is covered in an insurance contract. We assume that X is a continuous random variable. Naturally, the support of X is the set of nonnegative numbers (or some appropriate subset).
  2. Let Z be the “per loss” payout paid to the insured by the insurer. The variable Z could reflect coverage modifications such as a deductible and/or a policy cap, or other policy provisions that are applicable in the insurance contract.
  3. Let N be the number of claims in a given policy period. In this post, we assume that N has only two possibilities: N=0 or N=1. In other words, each policy has at most one claim in a period. Let p=P[N=1].
  4. Let Y be the total amount paid to the insured by the insurer during a fixed policy period.

The total claims variable Y is the mixture of Y \lvert N=0 and Y \lvert N=1. The conditional variable Y \lvert N=0 is a point mass representing “no loss”. On the other hand, we assume that [Y \lvert N=1]=Z. Thus Y is a mixture of a point mass at the origin and the “per loss” payout variable Z.

We first have a general discussion of the stated insurance setting. Then we discuss several different cases based on four coverage modifications that can be applied in the insurance contract. In each case, we illustrate with the exponential distribution. The four cases are:

  • Case 1. Z=X. There is no coverage modification. The insurer pays the entire loss amount.
  • Case 2. The insurance contract has a cap and the cap amount is m.
  • Case 3. The insurance contract is an excess-of-loss policy. The deductible amount is d.
  • Case 4. The insurance contract has a deductible d and a policy cap m where d<m.

General Discussion
The total payout Y is the mixture of a point mass at y=0 and the “per loss” payout Z. The following is the distribution F_Y(y):

\displaystyle F_Y(y)=(1-p) \thinspace F_U(y)+p \thinspace F_Z(y) where \displaystyle F_U(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1}&\thinspace 0 \le x\end{matrix}\right.

Since the distribution of Y is a mixture, we have a wealth of information available for us. For example, the following lists the mean, higher moments, variance, the moment generating function and the skewness.

  • \displaystyle E[Y]=p \thinspace E[Z]
  • \displaystyle E[Y^n]=p \thinspace E[Z^n] for all integers n>1
  • \displaystyle Var[Y]=pE[Z^2]-p^2 E[Z]^2
  • \displaystyle M_Y(t)=(1-p)+p \thinspace M_Z(t)
  • \displaystyle \gamma_Y=\frac{p \thinspace E[Z^3]-3p^2 \thinspace E[Z] \thinspace E[Z^2]+2p^3 \thinspace E[Z]^3}{(Var[Y])^{\frac{3}{2}}}

The Derivations:
\displaystyle E[Y]=(1-p) \thinspace 0+p \thinspace E[Z]=p \thinspace E[Z]

\displaystyle E[Y^n]=(1-p) \thinspace 0^n+p \thinspace E[Z^n]=p \thinspace E[Z^n] for all integers n>1

\displaystyle Var[Y]=E[Y^2]-E[Y]^2=pE[Z^2]-p^2 E[Z]^2

\displaystyle M_Y(t)=(1-p) \thinspace e^0+p \thinspace M_Z(t)=(1-p)+p \thinspace M_Z(t)

\displaystyle E[(Y-\mu_Y)^3]=E[Y^3]-3 \mu_Y E[Y^2]+2 \mu_Y^3=p \thinspace E[Z^3]-3p^2 \thinspace E[Z] \thinspace E[Z^2]+2p^3 \thinspace E[Z]^3

\displaystyle \gamma_Y=\frac{p \thinspace E[Z^3]-3p^2 \thinspace E[Z] \thinspace E[Z^2]+2p^3 \thinspace E[Z]^3}{(Var[Y])^{\frac{3}{2}}}

Note that, unlike the raw moments and the mgf, the skewness \gamma_Y is not a weighted average of the conditional skewnesses; since it is standardized by \sigma_Y, it must be computed from the raw moments of Y.

The following is another way to derive Var[Y] using the total variance formula:

\displaystyle Var[Y]=E_N[Var(Y \lvert N)]+Var_N[E(Y \lvert N)]

\displaystyle =(1-p)0+pVar[Z] + E_N[E(Y \lvert N)^2]-E_N[E(Y \lvert N)]^2

\displaystyle =pVar[Z] + (1-p)0^2+pE[Z]^2-p^2 E[Z]^2

\displaystyle =pE[Z^2]-p E[Z]^2 +pE[Z]^2-p^2 E[Z]^2

\displaystyle =pE[Z^2]-p^2 E[Z]^2

The above derivations are based on the idea of mixtures. The two conditional variables are Y \lvert N=0 and \lbrace{Y \lvert N=1}\rbrace=Z. The mixing weights are P[N=0] and P[N=1]. For more basic information on distributions that are mixtures, see this post (Basic properties of mixtures).
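
To make these formulas concrete, the following Python sketch takes Z to be exponential with rate \lambda (an arbitrary choice for illustration; numpy assumed) and compares the simulated E[Y], Var[Y] and \gamma_Y with the expressions above.

import numpy as np

rng = np.random.default_rng(42)

p, lam = 0.25, 1.5           # arbitrary illustrative values
n_sim = 500_000

# Y = 0 with probability 1-p; otherwise Y equals an Exp(lam) payout Z
has_loss = rng.random(n_sim) < p
Y = np.where(has_loss, rng.exponential(scale=1.0 / lam, size=n_sim), 0.0)

EZ, EZ2, EZ3 = 1 / lam, 2 / lam**2, 6 / lam**3       # raw moments of Z
EY = p * EZ
VarY = p * EZ2 - p**2 * EZ**2
third_central = p * EZ3 - 3 * p**2 * EZ * EZ2 + 2 * p**3 * EZ**3
gammaY = third_central / VarY**1.5

print("E[Y]   :", Y.mean(), "vs", EY)
print("Var[Y] :", Y.var(), "vs", VarY)
print("gamma_Y:", ((Y - Y.mean())**3).mean() / Y.std()**3, "vs", gammaY)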

We now discuss the four specific cases based on the variations on the coverage modifications that can be placed on the “per loss” variable Z.

Case 1
This is the case in which the insurance policy has no coverage modification. The insurer pays the entire random loss. Thus Z=X. The following is the payout rule of Y:

\displaystyle Y=\left\{\begin{matrix}0&\thinspace \text{no loss occurs}\\{Z=X}&\thinspace \text{a loss occurs}\end{matrix}\right.

This is a mixed distribution consisting of a point mass at the origin (no loss) and the random loss X. In this case, the “per loss” variable Z=X. Thus Y is a mixture of the following two distributions.

\displaystyle F_U(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1}&\thinspace 0 \le x\end{matrix}\right.

\displaystyle F_Z(x)=\left\{\begin{matrix}0&\thinspace x<0\\{F_X(x)}&\thinspace 0 \le x\end{matrix}\right.

Case 1 – Distribution Function
The following shows F_Y as a mixture, the explicit rule of F_Y and the density of Y.

\displaystyle F_Y(x)=(1-p) \thinspace F_U(x) + p \thinspace F_Z(x).

\displaystyle F_Y(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1-p+p \thinspace F_X(x)}&\thinspace 0 \le x\end{matrix}\right.

\displaystyle f_Y(x)=\left\{\begin{matrix}1-p&\thinspace x=0\\{p \thinspace f_X(x)}&\thinspace 0 < x\end{matrix}\right.

Case 1 – Basic Properties
Using basic properties of mixtures stated in the general case, we obtain the following:

\displaystyle E[Y]=p \thinspace E[X]

\displaystyle E[Y^n]=p \thinspace E[X^n] for all integers n>1

\displaystyle Var[Y]=p \thinspace E[X^2] - p^2 E[X]^2

\displaystyle M_Y(t)=1-p + p \thinspace M_X(t)

\displaystyle \gamma_Y=\frac{p \thinspace E[X^3]-3p^2 \thinspace E[X] \thinspace E[X^2]+2p^3 \thinspace E[X]^3}{(Var[Y])^{\frac{3}{2}}}

Case 1 – Exponential Example
If the unmodified random loss has an exponential distribution, we have the following results:

\displaystyle E[Y]=\frac{p}{\lambda}

\displaystyle E[Y^n]=\frac{p \thinspace n!}{\lambda^n} for all integers n>1

\displaystyle Var[Y]=\frac{2p-p^2}{\lambda^2}

\displaystyle M_Y(t)=1-p+\frac{p \thinspace \lambda}{\lambda - t}

\displaystyle \gamma_Y=\frac{6p-6p^2+2p^3}{(2p-p^2)^{\frac{3}{2}}}

Case 2
This is the case in which the insurance policy has a policy cap. The “per loss” payout amount is capped at the amount m. The following is the payout rule of Y:

\displaystyle Y=\left\{\begin{matrix}0&\thinspace \text{no loss occurs}\\{Z}&\thinspace \text{a loss occurs}\end{matrix}\right.

\displaystyle Z=\left\{\begin{matrix}X&\thinspace X<m\\{m}&\thinspace X \ge m\end{matrix}\right.

\displaystyle Y=\left\{\begin{matrix}0&\thinspace \text{no loss occurs}\\{X}&\thinspace \text{a loss occurs and } X<m\\{m}&\text{a loss occurs and }X \ge m\end{matrix}\right.

Case 2 – Per Loss Variable Z
The following lists out the information we need for Z. For more information about the “per loss” payout for an insurance contract with a policy cap, see the post An insurance example of a mixed distribution – I.

\displaystyle F_Z(x)=\left\{\begin{matrix}0&\thinspace x<0\\{F_X(x)}&\thinspace 0 \le x<m\\{1}&\thinspace x \ge m\end{matrix}\right.

\displaystyle f_Z(x)=\left\{\begin{matrix}f_X(x)&\thinspace 0<x<m\\{1-F_X(m)}&\thinspace x=m\end{matrix}\right.

\displaystyle E[Z]=\int_0^{m} x \thinspace f_X(x) \thinspace dx + m \thinspace [1-F_X(m)]

\displaystyle E[Z^n]=\int_0^{m} x^n \thinspace f_X(x) \thinspace dx + m^n \thinspace [1-F_X(m)] for all integers n > 1

\displaystyle M_Z(t)=\int_0^m e^{tx}f_X(x)dx+e^{tm}[1-F_X(m)]

\displaystyle \gamma_Z=\int_0^m \biggl(\frac{z-\mu_Z}{\sigma_Z}\biggr)^3f_X(z)dz+\biggl(\frac{m-\mu_Z}{\sigma_Z}\biggr)^3 [1-F_X(m)]

Case 2 – Distribution Function
Since Z is a mixture, the distribution of Y is a mixture of a point mass at the origin (no loss) and the mixture Z. As in the general case discussed above, the distribution function F_Y is a weighted average of F_U and F_Z where F_U is the distribution function of the point mass at y=0. The following shows the distribution function and the density function of Y.

\displaystyle F_U(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1}&\thinspace 0 \le x\end{matrix}\right.

\displaystyle F_Y(y)=(1-p) \thinspace F_U(y) + p \thinspace F_Z(y).

\displaystyle F_Y(y)=\left\{\begin{matrix}0&\thinspace y<0\\{1-p+pF_X(y)}&\thinspace 0 \le y<m\\{1}&y \ge m\end{matrix}\right.

\displaystyle f_Y(y)=\left\{\begin{matrix}1-p&y=0\\{pf_X(y)}&\thinspace 0<y<m\\{p(1-F_X(m))}&y=m\end{matrix}\right.

Case 2 – Basic Properties
To obtain the basic properties such as E[Y], E[Y^2] and M_Y(t), just take the weighted average of the point mass (of no loss) and the corresponding quantity for the “per loss” variable Z of this case. The variance and the skewness \gamma_Y then follow from these raw moments, as in the general discussion above.

Case 2 – Exponential Example
If the unmodified loss X has an exponential distribution, we have the following results:

\displaystyle E[Y]=\frac{p}{\lambda}(1-e^{-\lambda m})

\displaystyle E[Y^2]=p\biggl(\frac{2}{\lambda^2}-\frac{2m}{\lambda}e^{-\lambda m}-\frac{2}{\lambda^2}e^{-\lambda m}\biggr)

\displaystyle Var[Y]=p \thinspace E[Z^2] - p^2 E[Z]^2

\displaystyle M_Y(t)=1-p+pM_Z(t) where

\displaystyle M_Z(t)=\int_0^m e^{tx} \lambda e^{-\lambda x}dx+e^{tm} e^{-\lambda m}

\displaystyle =\frac{\lambda}{\lambda -t}-\frac{\lambda}{\lambda -t} e^{-(\lambda-t)m}+e^{-(\lambda-t)m}
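
The closed forms in this exponential example can be confirmed by direct numerical integration. The following Python sketch (scipy assumed to be available; p, \lambda, m and the evaluation point t are arbitrary illustrative values) evaluates E[Z], E[Z^2] and M_Z(t) by quadrature and compares them with the formulas above.

import numpy as np
from scipy.integrate import quad

p, lam, m = 0.3, 0.8, 2.5    # arbitrary illustrative values

f_X = lambda x: lam * np.exp(-lam * x)   # exponential density of the unmodified loss
tail = np.exp(-lam * m)                  # 1 - F_X(m)

# Moments of the capped payout Z = min(X, m) by quadrature
EZ = quad(lambda x: x * f_X(x), 0, m)[0] + m * tail
EZ2 = quad(lambda x: x**2 * f_X(x), 0, m)[0] + m**2 * tail

EY_formula = p / lam * (1 - np.exp(-lam * m))
EY2_formula = p * (2 / lam**2 - 2 * m / lam * np.exp(-lam * m) - 2 / lam**2 * np.exp(-lam * m))

print("E[Y]  :", p * EZ, "vs", EY_formula)
print("E[Y^2]:", p * EZ2, "vs", EY2_formula)

# M_Z(t) by quadrature versus the closed form, at an illustrative t < lam
t = 0.3
MZ_quad = quad(lambda x: np.exp(t * x) * f_X(x), 0, m)[0] + np.exp(t * m) * tail
MZ_formula = lam / (lam - t) * (1 - np.exp(-(lam - t) * m)) + np.exp(-(lam - t) * m)
print("M_Z(t):", MZ_quad, "vs", MZ_formula)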

Case 3
This is the case in which the insurance policy is an excess-of-loss policy. The insurer agrees to pay the insured the amount of the random loss X in excess of a fixed amount d. The following is the payout rule of Y:

\displaystyle Y=\left\{\begin{matrix}0&\thinspace \text{no loss occurs}\\{Z}&\thinspace \text{a loss occurs}\end{matrix}\right.

\displaystyle Z=\left\{\begin{matrix}0&\thinspace X<d\\{X-d}&\thinspace X \ge d\end{matrix}\right.

\displaystyle Y=\left\{\begin{matrix}0&\thinspace \text{no loss occurs}\\{0}&\thinspace \text{a loss occurs and } X<d\\{X-d}&\text{a loss occurs and }X \ge d\end{matrix}\right.

Case 3 – Per Loss Variable Z
The following lists out the information we need for Z. For more information about the “per loss” payout for an insurance contract with a deductible, see the post An insurance example of a mixed distribution – II.

\displaystyle F_Z(y)=\left\{\begin{matrix}0&\thinspace y<0\\{F_X(y+d)}&\thinspace y \ge 0\end{matrix}\right.

\displaystyle f_Z(y)=\left\{\begin{matrix}F_X(d)&\thinspace y=0\\{f_X(y+d)}&\thinspace y > 0\end{matrix}\right.

\displaystyle E[Z]=\int_0^{\infty} y \thinspace f_X(y+d) \thinspace dy

\displaystyle E[Z^n]=\int_0^{\infty} y^n \thinspace f_X(y+d) \thinspace dy for all integers n>1

\displaystyle M_Z(t)=F_X(d) e^{0} + \int_0^{\infty} e^{tz}f_X(z+d) dz

\displaystyle =F_X(d) + e^{-td} \int_d^{\infty} e^{tw} f_X(w) dw

\displaystyle \gamma_Z=F_X(d) \biggl(\frac{0-\mu_Z}{\sigma_Z}\biggr)^3+\int_0^{\infty} \biggl(\frac{z-\mu_Z}{\sigma_Z}\biggr)^3 f_X(z+d) dz

Case 3 – Distribution Function
Since Z is a mixture, the distribution of Y is a mixture of a point mass at the origin (no loss) and the mixture Z. As in the general case discussed above, the distribution function F_Y is a weighted average of F_U and F_Z where F_U is the distribution function of the point mass at y=0. The following shows the distribution function and the density function of Y.

\displaystyle F_U(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1}&\thinspace 0 \le x\end{matrix}\right.

\displaystyle F_Y(y)=(1-p) \thinspace F_U(y) + p \thinspace F_Z(y).

\displaystyle F_Y(y)=\left\{\begin{matrix}0&\thinspace y<0\\{1-p+pF_X(y+d)}&\thinspace 0 \le y\end{matrix}\right.

\displaystyle f_Y(y)=\left\{\begin{matrix}1-p+pF_X(d)&y=0\\{pf_X(y+d)}&\thinspace 0<y\end{matrix}\right.

Note that the point mass of Y is made up of two point masses, one from having no loss and one from having losses less than the deductible.

Case 3 – Basic Properties
The basic properties of Y as a mixture are obtained by applying the general formulas with the specific information about the “per loss” variable Z in this case. That is, the raw moments and the mgf are weighted averages of the point mass (of no loss) and the corresponding quantities of Z, and the variance and skewness follow from the raw moments.

Case 3 – Exponential Example
If the unmodified loss X has an exponential distribution, then we have the following results:

\displaystyle E[Y]=pE[Z]=p \thinspace \frac{e^{-\lambda d}}{\lambda}=p \thinspace e^{-\lambda d} E[X]

\displaystyle E[Y^2]=pE[Z^2]=p \thinspace \frac{2e^{-\lambda d}}{\lambda^2}=p \thinspace e^{-\lambda d}E[X^2]

\displaystyle Var[Y]=p \thinspace \frac{2e^{-\lambda d}}{\lambda^2}-p^2 \thinspace \frac{e^{-2\lambda d}}{\lambda^2}=pe^{-\lambda d}(2-pe^{-\lambda d})Var[X]

\displaystyle M_Y(t)=1-p+p \thinspace M_Z(t) where

\displaystyle M_Z(t)=1-e^{-\lambda d}+e^{-\lambda d} \frac{\lambda}{\lambda -t}= 1-e^{-\lambda d}+e^{-\lambda d} M_X(t)
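
As a sanity check on the exponential results for the excess-of-loss case, the following Python sketch (numpy assumed; p, \lambda, d and the evaluation point t are arbitrary illustrative values) simulates the per-period payout and compares the sample mean and the sample value of E[e^{tY}] with the formulas above.

import numpy as np

rng = np.random.default_rng(3)

p, lam, d = 0.4, 1.2, 0.6    # arbitrary illustrative values
n_sim = 400_000

# Per-period payout: no loss with probability 1-p, otherwise max(X - d, 0)
has_loss = rng.random(n_sim) < p
X = rng.exponential(scale=1.0 / lam, size=n_sim)
Y = np.where(has_loss, np.maximum(X - d, 0.0), 0.0)

print("E[Y]  :", Y.mean(), "vs", p * np.exp(-lam * d) / lam)

# mgf check at an illustrative t < lam
t = 0.5
MZ = 1 - np.exp(-lam * d) + np.exp(-lam * d) * lam / (lam - t)
print("M_Y(t):", np.mean(np.exp(t * Y)), "vs", 1 - p + p * MZ)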

Case 4
This is the case in which the insurance policy has both a policy cap and a deductible. The “per loss” payout amount is capped at the amount m and is positive only when the loss is in excess of the deductible d. The following is the payout rule of Y:

\displaystyle Y=\left\{\begin{matrix}0&\thinspace \text{no loss occurs}\\{Z}&\thinspace \text{a loss occurs}\end{matrix}\right.

\displaystyle Z=\left\{\begin{matrix}0&\thinspace X<d\\{X-d}&\thinspace d \le X < d+m\\{m}&d+m \le X\end{matrix}\right.

\displaystyle Y=\left\{\begin{matrix}0&\thinspace \text{no loss occurs}\\{0}&\text{a loss and }X<d\\{X-d}&\text{a loss and }d \le X < d+m\\{m}&\thinspace \text{a loss and }X \ge d+m\end{matrix}\right.

Case 4 – Per Loss Variable Z
The following lists out the information we need for Z. For more information about the “per loss” payout for an insurance contract with a deductible and a policy cap, see the post An insurance example of a mixed distribution – III.

\displaystyle F_Z(y)=\left\{\begin{matrix}0&\thinspace y<0\\{F_X(y+d)}&\thinspace 0 \le y < m\\{1}&m \le y\end{matrix}\right.

\displaystyle f_Z(y)=\left\{\begin{matrix}F_X(d)&\thinspace y=0\\{f_X(y+d)}&\thinspace 0 < y < m\\{1-F_X(d+m)}&y=m\end{matrix}\right.

\displaystyle E[Z]=\int_0^m y \thinspace f_X(y+d) \thinspace dy + m \thinspace [1-F_X(d+m)]

\displaystyle E[Z^n]=\int_0^m y^n \thinspace f_X(y+d) \thinspace dy + m^n \thinspace [1-F_X(d+m)] for all integers n>1

\displaystyle M_Z(t)=F_X(d) e^0 + \int_0^m e^{tx} f_X(x+d) dx + e^{tm} [1-F_X(d+m)]

\displaystyle =F_X(d) + \int_0^m e^{tx} f_X(x+d) dx + e^{tm} [1-F_X(d+m)]

\displaystyle \gamma_Z=F_X(d) \biggl(\frac{0-\mu_Z}{\sigma_Z}\biggr)^3+\int_0^{m} \biggl(\frac{z-\mu_Z}{\sigma_Z}\biggr)^3 f_X(z+d) \thinspace dz + \biggl(\frac{m-\mu_Z}{\sigma_Z}\biggr)^3 \thinspace [1-F_X(d+m)]

Case 4 – Distribution Function
Since Z is a mixture, the distribution of Y is a mixture of a point mass at the origin (no loss) and the mixture Z. As in the general case discussed above, the distribution function F_Y is a weighted average of F_U and F_Z where F_U is the distribution function of the point mass at y=0. The following shows the distribution function and the density function of Y.

\displaystyle F_U(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1}&\thinspace 0 \le x\end{matrix}\right.

\displaystyle F_Y(y)=(1-p) \thinspace F_U(y) + p \thinspace F_Z(y).

\displaystyle F_Y(y)=\left\{\begin{matrix}0&\thinspace y<0\\{1-p+pF_X(y+d)}&\thinspace 0 \le y<m\\{1}&y \ge m\end{matrix}\right.

\displaystyle f_Y(y)=\left\{\begin{matrix}1-p+pF_X(d)&y=0\\{pf_X(y+d)}&\thinspace 0<y<m\\{p[1-F_X(d+m)]}&y=m\end{matrix}\right.

Note that the point mass of Y is made up of two point masses, one from having no loss and one from having losses less than the deductible.

Case 4 – Basic Properties
The basic properties of Y as a mixture are obtained by applying the general formulas with the specific information about the “per loss” variable Z in this case. That is, the raw moments and the mgf are weighted averages of the point mass (of no loss) and the corresponding quantities of Z, and the variance and skewness follow from the raw moments.

Case 4 – Exponential Example
If the unmodified loss X has an exponential distribution, then we have the following results:

\displaystyle E[Y]=pE[Z]=p e^{-\lambda d} \frac{1}{\lambda} (1-e^{-\lambda m})=p e^{-\lambda d} (1-e^{-\lambda m}) E[X]

Another view of E[Y]:
\displaystyle E[Y]=e^{-\lambda d} E[Y_2] where Y_2 is the Y in Case 2.

Also, it can be shown that:
\displaystyle E[Y^2]=e^{-\lambda d} E[Y_2^2] where Y_2 is the Y in Case 2.
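
These two relationships can be checked numerically. The following Python sketch (scipy assumed; the parameter values are arbitrary) computes E[Y] and E[Y^2] by quadrature for the Case 4 payout and for the corresponding Case 2 payout (same cap m, no deductible) and compares them.

import numpy as np
from scipy.integrate import quad

p, lam, d, m = 0.35, 0.9, 0.5, 2.0    # arbitrary illustrative values

f_X = lambda x: lam * np.exp(-lam * x)
surv = lambda x: np.exp(-lam * x)      # 1 - F_X(x)

def payout_moment(n, deductible):
    # n-th moment of the per-loss payout with the given deductible and cap m
    integral = quad(lambda y: y**n * f_X(y + deductible), 0, m)[0]
    return integral + m**n * surv(deductible + m)

for n in (1, 2):
    case4 = p * payout_moment(n, d)    # E[Y^n] in Case 4
    case2 = p * payout_moment(n, 0.0)  # E[Y_2^n] in Case 2
    print(f"n={n}: Case 4 = {case4:.6f}  exp(-lam d) x Case 2 = {np.exp(-lam * d) * case2:.6f}")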

Here are the links to the previous discussions of mixed distributions:
An insurance example of a mixed distribution – I
An insurance example of a mixed distribution – II
An insurance example of a mixed distribution – III
Mixed distributions as mixtures

Mixed distributions as mixtures

A random variable X is a mixture if its distribution function F_X is a weighted average of a family of conditional distribution functions. The random variable X is a mixed distribution if it has at least one probability mass (i.e. there is at least one point a in the support of X such that P[X=a]>0) and there is some interval (b,c) contained in the support such that P[X=w]=0 for every w \in (b,c). It turns out that a mixed distribution can be expressed as a mixture. Three examples of mixed distributions from insurance applications have been presented in this blog. We demonstrate that these three mixed distributions are mixtures. The links to some previous posts on mixtures can be found at the end of this post.

Example 1
Link: An insurance example of a mixed distribution – I The mixed distribution in this example is the “per loss” payout for an insurance contract that has a policy maximum.

Example 2
Link: An insurance example of a mixed distribution – II The mixed distribution in this example is the “per loss” payout for an insurance policy that has a deductible.

Example 3
Link: An insurance example of a mixed distribution – III The mixed distribution in this example is the “per loss” payout of an insurance contract where there are both a deductible and a policy maximum.

Throughout this post, let X be the unmodified random loss. We assume that X is a continuous random variable with support the nonnegative real numbers.

Discussion of Example 1
Let Y_1 be the “per loss” insurance payout for a policy where the payout is capped at m. The following are the payout rule, the distribution function and the density function:

\displaystyle Y_1=\left\{\begin{matrix}X&\thinspace X<m\\{m}&\thinspace X \ge m\end{matrix}\right.

\displaystyle F_{Y_1}(x)=\left\{\begin{matrix}0&\thinspace x<0\\{F_X(x)}&\thinspace 0 \le x<m\\{1}&\thinspace x \ge m\end{matrix}\right.

\displaystyle f_{Y_1}(x)=\left\{\begin{matrix}f_X(x)&\thinspace 0 < x<m\\{1-F_X(m)}&\thinspace x=m\end{matrix}\right.

We show that F_{Y_1} can be expressed as a weighted average of two distribution functions. One of the distributions is that of the random loss restricted to values between 0 and m; this is a limited loss, which we call U. The second distribution is the point mass at m, which we call V. The following are the distribution functions:

\displaystyle F_U(x)=\left\{\begin{matrix}0&\thinspace x<0\\{\displaystyle \frac{F_X(x)}{F_X(m)}}&\thinspace 0 \le x<m\\{1}&\thinspace x \ge m\end{matrix}\right.

\displaystyle F_V(x)=\left\{\begin{matrix}0&\thinspace x<m\\{1}&\thinspace m \le x\end{matrix}\right.

It follows that F_{Y_1}(x)= p \thinspace F_U(x) + (1-p) \thinspace F_V(x) where p=F_X(m). Note that the distribution of U only describes the loss within (0,m). Thus the distribution function F_U is obtained from F_X by a scaling adjustment.

Discussion of Example 2
Let Y_2 be the “per loss” insurance payout for a policy where there is a deductible d. For each loss, the insurer pays the insured in excess of the deductible d. The following are the payout rule, the distribution function and the density function:

\displaystyle Y_2=\left\{\begin{matrix}0&\thinspace X<d\\{X-d}&\thinspace X \ge d\end{matrix}\right.

\displaystyle F_{Y_2}(y)=\left\{\begin{matrix}0&\thinspace y<0\\{F_X(y+d)}&\thinspace y \ge 0\end{matrix}\right.

\displaystyle f_{Y_2}(y)=\left\{\begin{matrix}F_X(d)&\thinspace y=0\\{f_X(y+d)}&\thinspace y > 0\end{matrix}\right.

We show that F_{Y_2} can be expressed as a weighted average of two distribution functions. One of the distributions is that of the amount of the loss in excess of d, given that the loss exceeds d; call this variable U. The second distribution is the point mass at 0; call this point mass V. The following are the distribution functions:

\displaystyle F_U(x)=\left\{\begin{matrix}0&\thinspace x<0\\{\displaystyle \frac{F_X(x+d)-F_X(d)}{1-F_X(d)}}&\thinspace 0 \le x\end{matrix}\right.

\displaystyle F_V(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1}&\thinspace 0 \le x\end{matrix}\right.

It follows that F_{Y_2}(x)= p \thinspace F_U(x) + (1-p) \thinspace F_V(x) where p=1-F_X(d). The random variable V is a point mass at the origin reflecting the case where the insurer makes no payment; this point mass has weight F_X(d). The random variable U describes the amount of the loss in excess of d for losses greater than d.

Discussion of Example 3
Let Y_3 be the “per loss” insurance payout for a policy where there are both a deductible d and a policy cap m with d<m. For each loss, the insurer pays the insured in excess of the deductible d up to the policy cap m. The following are the payout rule, the distribution function and the density function:

\displaystyle Y_3=\left\{\begin{matrix}0&\thinspace X<d\\{X-d}&\thinspace d \le X < d+m\\{m}&d+m \le X\end{matrix}\right.

\displaystyle F_{Y_3}(y)=\left\{\begin{matrix}0&\thinspace y<0\\{F_X(y+d)}&\thinspace 0 \le y < m\\{1}&m \le y\end{matrix}\right.

\displaystyle f_{Y_3}(y)=\left\{\begin{matrix}F_X(d)&\thinspace y=0\\{f_X(y+d)}&\thinspace 0 < y < m\\{1-F_X(d+m)}&y=m\end{matrix}\right.

The distribution of Y_3 can be expressed as a mixture of three distributions – two point masses (one at the origin and one at m) and one continuous variable describing the payouts between 0 and m (corresponding to losses between d and d+m). Consider the following distribution functions:

\displaystyle F_U(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1}&\thinspace 0 \le x\end{matrix}\right.

\displaystyle F_V(x)=\left\{\begin{matrix}0&\thinspace x<0\\{\displaystyle \frac{F_X(x+d)-F_X(d)}{F_X(d+m)-F_X(d)}}&\thinspace 0 \le x<m\\{1}&m \le x\end{matrix}\right.

\displaystyle F_W(x)=\left\{\begin{matrix}0&\thinspace x<m\\{1}&\thinspace m \le x\end{matrix}\right.

The random variables U and W represent the point masses at 0 and m, respectively. The variable V describes the payout amounts strictly between 0 and m (corresponding to losses between d and d+m). It follows that F_{Y_3} is the weighted average of these three distribution functions.

\displaystyle F_{Y_3}(x)=p_1 \thinspace F_U(x)+p_2 \thinspace F_V(x)+p_3 \thinspace F_W(x)

The weights are p_1=F_X(d), p_2=F_X(d+m)-F_X(d), and p_3=1-F_X(d+m).
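
The following Python sketch (numpy assumed; \lambda, d and m are arbitrary illustrative values) evaluates the weighted average p_1 F_U + p_2 F_V + p_3 F_W on a small grid and confirms that it agrees with F_{Y_3} when the unmodified loss X is exponential.

import numpy as np

lam, d, m = 1.5, 0.4, 1.2    # arbitrary illustrative values
F_X = lambda x: 1 - np.exp(-lam * x)

p1 = F_X(d)                  # weight of the point mass at 0
p2 = F_X(d + m) - F_X(d)     # weight of the continuous piece
p3 = 1 - F_X(d + m)          # weight of the point mass at m

def F_U(y):                  # point mass at the origin
    return np.where(y >= 0, 1.0, 0.0)

def F_V(y):                  # payouts strictly between 0 and m, rescaled
    inner = (F_X(y + d) - F_X(d)) / (F_X(d + m) - F_X(d))
    return np.where(y < 0, 0.0, np.where(y >= m, 1.0, inner))

def F_W(y):                  # point mass at m
    return np.where(y >= m, 1.0, 0.0)

def F_Y3(y):                 # distribution function of the per-loss payout
    return np.where(y < 0, 0.0, np.where(y >= m, 1.0, F_X(y + d)))

ys = np.array([0.0, 0.3, 0.8, 1.19, 1.2, 2.0])
mix = p1 * F_U(ys) + p2 * F_V(ys) + p3 * F_W(ys)
print(np.allclose(mix, F_Y3(ys)))     # expected output: True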

Here are the links to examples of mixed distributions:
Example 1 An insurance example of a mixed distribution – I
Example 2 An insurance example of a mixed distribution – II
Example 3 An insurance example of a mixed distribution – III

Here are the links to some previous posts on mixtures:
Examples of mixtures
Basic properties of mixtures