Compound Poisson distribution

The compound distribution is a model for describing the aggregate claims arising from a group of independent insureds. Let N be the number of claims generated by a portfolio of insurance policies in a fixed time period. Suppose X_1 is the amount of the first claim, X_2 is the amount of the second claim and so on. Then Y=X_1+X_2+ \cdots + X_N represents the total aggregate claims generated by this portfolio of policies in the given fixed time period. In order to make this model more tractable, we make the following assumptions:

  • X_1,X_2, \cdots are independent and identically distributed.
  • Each X_i is independent of the number of claims N.

The number of claims N is associated with the claim frequency in the given portfolio of policies. The common distribution of X_1,X_2, \cdots is denoted by X. Note that X models the amount of a random claim generated in this portfolio of insurance policies. See these two posts for an introduction to compound distributions (An introduction to compound distributions, Some examples of compound distributions).

When the claim frequency N follows a Poisson distribution with a constant parameter \lambda, the aggregate claims Y is said to have a compound Poisson distribution. After a general discussion of the compound Poisson distribution, we discuss the property that an independent sum of compound Poisson distributions is also a compound Poisson distribution. We also present an example to illustrate basic calculations.

Compound Poisson – General Properties

Distribution Function
\displaystyle F_Y(y)=\sum \limits_{n=0}^{\infty} F^{*n}(y) \frac{\lambda^n e^{-\lambda}}{n!}

where \lambda=E[N], F is the common distribution function of X_i and F^{*n} is the n-fold convolution of F.

Mean and Variance
\displaystyle E[Y]=E[N] E[X]= \lambda E[X]

\displaystyle Var[Y]=\lambda E[X^2]

Moment Generating Function and Cumulant Generating Function
\displaystyle M_Y(t)=e^{\lambda (M_X(t)-1)}

\displaystyle \Psi_Y(t)=ln M_Y(t)=\lambda (M_X(t)-1)

Note that the moment generating function of the Poisson N is M_N(t)=e^{\lambda (e^t - 1)}. For a compound distribution Y in general, M_Y(t)=M_N[ln M_X(t)].

Skewness
\displaystyle E[(Y-\mu_Y)^3]=\Psi_Y^{(3)}(0)=\lambda E[X^3]

\displaystyle \gamma_Y=\frac{E[(Y-\mu_Y)^3]}{Var[Y]^{\frac{3}{2}}}=\frac{1}{\sqrt{\lambda}} \frac{E[X^3]}{E[X^2]^{\frac{3}{2}}}
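
As a quick sanity check of these formulas, here is a minimal Monte Carlo sketch. The parameter values are assumptions chosen only for illustration: a Poisson parameter of 3 and exponentially distributed claim amounts with mean 10.

```python
import numpy as np

# Monte Carlo sanity check of the compound Poisson mean, variance and skewness.
# Assumed (illustrative) parameters: lambda = 3 claims per period, exponential
# claim amounts with mean 10.
rng = np.random.default_rng(0)
lam, claim_mean, n_sims = 3.0, 10.0, 200_000

N = rng.poisson(lam, size=n_sims)                                   # claim counts
Y = np.array([rng.exponential(claim_mean, n).sum() for n in N])     # aggregate claims

EX, EX2, EX3 = claim_mean, 2 * claim_mean**2, 6 * claim_mean**3     # exponential moments

print("E[Y]:   %.2f vs %.2f" % (Y.mean(), lam * EX))
print("Var[Y]: %.2f vs %.2f" % (Y.var(), lam * EX2))
skew_sim = ((Y - Y.mean())**3).mean() / Y.var()**1.5
print("skew:   %.3f vs %.3f" % (skew_sim, EX3 / (np.sqrt(lam) * EX2**1.5)))
```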

Independent Sum of Compound Poisson Distributions
First, we state the results. Suppose that Y_1,Y_2, \cdots, Y_k are independent random variables such that each Y_i has a compound Poisson distribution with \lambda_i being the Poisson parameter for the number of claims variable and F_i being the distribution function for the individual claim amount. Then Y=Y_1+Y_2+ \cdots +Y_k has a compound Poisson distribution with:

  • the Poisson parameter: \displaystyle \lambda=\sum \limits_{i=1}^{k} \lambda_i
  • the distribution function: \displaystyle F_Y(y)=\sum \limits_{i=1}^{k} \frac{\lambda_i}{\lambda} \thinspace F_i(y)

The above result has an insurance interpretation. Suppose we have k independent blocks of insurance policies such that the aggregate claims Y_i for the i^{th} block has a compound Poisson distribution. Then Y=Y_1+Y_2+ \cdots +Y_k is the aggregate claims for the combined block during the fixed policy period and also has a compound Poisson distribution with the parameters stated in the above two bullet points.

To get a further intuitive understanding about the parameters of the combined block, consider N_i as the Poisson number of claims in the i^{th} block of insurance policies. It is a well-known fact in probability theory (see [1]) that an independent sum of Poisson random variables is also a Poisson random variable. Thus the total number of claims in the combined block is N=N_1+N_2+ \cdots +N_k and has a Poisson distribution with parameter \lambda=\lambda_1 + \cdots + \lambda_k.

How do we describe the distribution of an individual claim amount in the combined insurance block? Given a claim from the combined block, since we do not know which of the constituent blocks it is from, this suggests that an individual claim amount is a mixture of the individual claim amount distributions from the k blocks with mixing weights \displaystyle \frac{\lambda_1}{\lambda},\frac{\lambda_2}{\lambda}, \cdots, \frac{\lambda_k}{\lambda}. These mixing weights make intuitive sense. If insurance block i has a higher claim frequency \lambda_i, then it is more likely that a randomly selected claim from the combined block comes from block i. Of course, this discussion is not a proof. But looking at the insurance model is a helpful way of understanding the independent sum of compound Poisson distributions.

To see why the stated result is true, let M_i(t) be the moment generating function of the individual claim amount in the i^{th} block of policies. Then the mgf of the aggregate claims Y_i is \displaystyle M_{Y_i}(t)=e^{\lambda_i (M_i(t)-1)}. Consequently, the mgf of the independent sum Y=Y_1+ \cdots + Y_k is:

\displaystyle M_Y(t)=\prod \limits_{i=1}^{k} e^{\lambda_i (M_i(t)-1)}= e^{\sum \limits_{i=1}^{k} \lambda_i(M_i(t)-1)} = e^{\lambda \biggl[\sum \limits_{i=1}^{k} \frac{\lambda_i}{\lambda} M_i(t) - 1 \biggr]}

The mgf of Y has the form of a compound Poisson distribution where the Poisson parameter is \lambda=\lambda_1 + \cdots + \lambda_k. Note that the component \displaystyle \sum \limits_{i=1}^{k} \frac{\lambda_i}{\lambda}M_i(t) in the exponent is the mgf of the claim amount distribution. Since it is the weighted average of the individual claim amount mgf’s, this indicates that the distribution function of Y is the mixture of the distribution functions F_i.
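
The following is a small simulation sketch of this result, assuming (for illustration only) two blocks with Poisson parameters 2 and 5 and exponential claim amounts with means 10 and 4. It compares quantiles of the direct sum of the two blocks with quantiles of a single compound Poisson variable built from the combined Poisson parameter and the mixture claim amount distribution.

```python
import numpy as np

# Simulation sketch: sum of two independent compound Poisson blocks versus a single
# compound Poisson with the combined parameter and the mixture claim distribution.
# All parameter values are assumed for illustration only.
rng = np.random.default_rng(1)
n_sims = 200_000
lam1, lam2, mean1, mean2 = 2.0, 5.0, 10.0, 4.0
lam = lam1 + lam2

def compound_poisson(poisson_lam, sampler):
    counts = rng.poisson(poisson_lam, size=n_sims)
    return np.array([sampler(n).sum() for n in counts])

# Direct sum of the two independent blocks.
Y_sum = compound_poisson(lam1, lambda n: rng.exponential(mean1, n)) + \
        compound_poisson(lam2, lambda n: rng.exponential(mean2, n))

# Single compound Poisson with mixture severity, weights lam1/lam and lam2/lam.
def mixed_claim(n):
    pick = rng.random(n) < lam1 / lam
    return np.where(pick, rng.exponential(mean1, n), rng.exponential(mean2, n))

Y_combined = compound_poisson(lam, mixed_claim)

for q in (0.25, 0.50, 0.75, 0.95):
    print("quantile %.2f: sum %.2f  combined %.2f"
          % (q, np.quantile(Y_sum, q), np.quantile(Y_combined, q)))
```

The two sets of quantiles should agree up to simulation noise, which is the content of the stated result.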

Example
Suppose that an insurance company acquired two portfolios of insurance policies and combined them into a single block. For each portfolio the aggregate claims variable has a compound Poisson distribution. For one of the portfolios, the Poisson parameter is \lambda_1 and the individual claim amount has an exponential distribution with parameter \delta_1. The corresponding Poisson and exponential parameters for the other portfolio are \lambda_2 and \delta_2, respectively. Discuss the distribution for the aggregate claims Y=Y_1+Y_2 of the combined portfolio.

The aggregate claims Y of the combined portfolio has a compound Poisson distribution with Poisson parameter \lambda=\lambda_1+\lambda_2. The amount of a random claim X in the combined portfolio has the following distribution function and density function:

\displaystyle F_X(x)=\frac{\lambda_1}{\lambda} (1-e^{-\delta_1 x})+\frac{\lambda_2}{\lambda} (1-e^{-\delta_2 x})

\displaystyle f_X(x)=\frac{\lambda_1}{\lambda} (\delta_1 \thinspace e^{-\delta_1 x})+\frac{\lambda_2}{\lambda} (\delta_2 \thinspace e^{-\delta_2 x})

The rest of the discussion mirrors the general discussion earlier in this post.

Distribution Function
As in the general case, \displaystyle F_Y(y)=\sum \limits_{n=0}^{\infty} F^{*n}(y) \frac{\lambda^n e^{-\lambda}}{n!}

where \lambda=\lambda_1 +\lambda_2, F=F_X and F^{*n} is the n-fold convolution of F_X.

Mean and Variance
\displaystyle E[Y]=\frac{\lambda_1}{\delta_1}+\frac{\lambda_2}{\delta_2}

\displaystyle Var[Y]=\frac{2 \lambda_1}{\delta_1^2}+\frac{2 \lambda_2}{\delta_2^2}

Moment Generating Function and Cumulant Generating Function
To obtain the mgf and cgf of the aggregate claims Y, consider \lambda [M_X(t)-1]. Note that M_X(t) is the weighted average of the two exponential mgfs of the two portfolios of insurance policies. Thus we have:

\displaystyle M_X(t)=\frac{\lambda_1}{\lambda} \frac{\delta_1}{\delta_1 - t}+\frac{\lambda_2}{\lambda} \frac{\delta_2}{\delta_2 - t}

\displaystyle \lambda [M_X(t)-1]=\frac{\lambda_1 t}{\delta_1 - t}+\frac{\lambda_2 t}{\delta_2 - t}

\displaystyle M_Y(t)=e^{\lambda (M_X(t)-1)}=e^{\frac{\lambda_1 t}{\delta_1 - t}+\frac{\lambda_2 t}{\delta_2 - t}}

\displaystyle \Psi_Y(t)=\frac{\lambda_1 t}{\delta_1 -t}+\frac{\lambda_2 t}{\delta_2 -t}

Skewness
Note that \displaystyle E[(Y-\mu_Y)^3]=\Psi_Y^{(3)}(0)=\frac{6 \lambda_1}{\delta_1^3}+\frac{6 \lambda_2}{\delta_2^3}

\displaystyle \gamma_Y=\displaystyle \frac{\frac{6 \lambda_1}{\delta_1^3}+\frac{6 \lambda_2}{\delta_2^3}}{(\frac{2 \lambda_1}{\delta_1^2}+\frac{2 \lambda_2}{\delta_2^2})^{\frac{3}{2}}}
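
Here is a minimal Monte Carlo check of these combined-portfolio formulas, assuming the illustrative values \lambda_1=2, \lambda_2=3, \delta_1=0.5 and \delta_2=0.1.

```python
import numpy as np

# Monte Carlo check of the combined-portfolio formulas.  Assumed (illustrative)
# parameters: lambda_1 = 2, lambda_2 = 3, delta_1 = 0.5, delta_2 = 0.1.
rng = np.random.default_rng(2)
lam1, lam2, d1, d2, n_sims = 2.0, 3.0, 0.5, 0.1, 200_000
lam = lam1 + lam2

def claim_amounts(n):
    # a random claim comes from portfolio 1 with probability lam1/lam
    pick = rng.random(n) < lam1 / lam
    return np.where(pick, rng.exponential(1/d1, n), rng.exponential(1/d2, n))

counts = rng.poisson(lam, size=n_sims)
Y = np.array([claim_amounts(n).sum() for n in counts])

mean_f = lam1/d1 + lam2/d2
var_f  = 2*lam1/d1**2 + 2*lam2/d2**2
skew_f = (6*lam1/d1**3 + 6*lam2/d2**3) / var_f**1.5

print("E[Y]:   %.2f vs %.2f" % (Y.mean(), mean_f))
print("Var[Y]: %.1f vs %.1f" % (Y.var(), var_f))
print("skew:   %.3f vs %.3f" % (((Y - Y.mean())**3).mean() / Y.var()**1.5, skew_f))
```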

Reference

  1. Hogg R. V. and Tanis E. A., Probability and Statistical Inference, Second Edition, Macmillan Publishing Co., New York, 1983.

Some examples of compound distributions

We present two examples of compound distributions to illustrate the general formulas presented in the previous post (An introduction to compound distributions).

For the examples below, let N be the number of claims generated by either an individual insured or a group of independent insureds. Let X be the individual claim amount. We consider the random sum Y=X_1+ \cdots + X_N. We discuss the following properties of the aggregate claims random variable Y:

  1. The distribution function F_Y
  2. The mean and higher moments: E[Y] and E[Y^n]
  3. The variance: Var[Y]
  4. The moment generating function and cumulant generating function: M_Y(t) and \Psi_Y(t).
  5. Skewness: \gamma_Y.

Example 1
The number of claims for an individual insurance policy in a policy period is modeled by the binomial distribution with parameter n=2 and p. The individual claim, when it occurs, is modeled by the exponential distribution with parameter \lambda (i.e. the mean individual claim amount is \frac{1}{\lambda}).

The distribution function F_Y is the weighted average of a point mass at y=0, the exponential distribution and the Erlang-2 distribution function. For x \ge 0, we have:

\displaystyle F_Y(x)=(1-p)^2+2p(1-p)(1-e^{-\lambda x})+p^2(1-\lambda x e^{-\lambda x}-e^{-\lambda x})

The mean and variance are as follows:

\displaystyle E[Y]=E[N] \thinspace E[X]=\frac{2p}{\lambda}

\displaystyle Var[Y]=E[N] \thinspace Var[X]+Var[N] \thinspace E[X]^2

\displaystyle =\frac{2p}{\lambda^2}+\frac{2p(1-p)}{\lambda^2}=\frac{4p-2p^2}{\lambda^2}

The following calculates the higher moments:

\displaystyle E[Y^n]=(1-p)^2 \cdot 0 + 2p(1-p) \frac{n!}{\lambda^n}+p^2 \frac{(n+1)!}{\lambda^n}

\displaystyle = \frac{2p(1-p)n!+p^2(n+1)!}{\lambda^n}

The moment generating function M_Y(t)=M_N[ln \thinspace M_X(t)]. So we have:

\displaystyle M_Y(t)=\biggl(1-p+p \frac{\lambda}{\lambda -t}\biggr)^2

\displaystyle =(1-p)^2+2p(1-p) \frac{\lambda}{\lambda -t}+p^2 \biggl(\frac{\lambda}{\lambda -t}\biggr)^2

Note that \displaystyle M_N(t)=(1-p+p e^{t})^2 and \displaystyle M_X(t)=\frac{\lambda}{\lambda -t}.

For the cumulant generating function, we have:

\displaystyle \Psi_Y(t)=ln M_Y(t)=2 ln\biggl(1-p+p \frac{\lambda}{\lambda -t}\biggr)

For the measure of skewness, we rely on the cumulant generating function. Finding the third derivative of \Psi_Y(t) and evaluating it at t=0, we have:

\displaystyle \Psi_Y^{(3)}(0)=\frac{12p-12p^2+4p^3}{\lambda^3}

\displaystyle \gamma_Y=\frac{\Psi_Y^{(3)}(0)}{Var(Y)^{\frac{3}{2}}}=\frac{12p-12p^2+4p^3}{(4p-2p^2)^{\frac{3}{2}}}
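
Here is a minimal simulation sketch of Example 1, assuming the illustrative values p=0.3 and \lambda=0.1, that compares the simulated mean, variance and skewness of Y with the formulas above.

```python
import numpy as np

# Monte Carlo check of Example 1.  Assumed (illustrative) parameters: p = 0.3,
# lambda = 0.1 (mean individual claim 10).
rng = np.random.default_rng(3)
p, lam, n_sims = 0.3, 0.1, 200_000

N = rng.binomial(2, p, size=n_sims)
Y = np.array([rng.exponential(1/lam, n).sum() for n in N])

skew_sim = ((Y - Y.mean())**3).mean() / Y.var()**1.5
skew_f   = (12*p - 12*p**2 + 4*p**3) / (4*p - 2*p**2)**1.5

print("E[Y]:   %.2f vs %.2f" % (Y.mean(), 2*p/lam))
print("Var[Y]: %.2f vs %.2f" % (Y.var(), (4*p - 2*p**2)/lam**2))
print("skew:   %.3f vs %.3f" % (skew_sim, skew_f))
```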

Example 2
In this example, the number of claims N follows a geometric distribution. The individual claim amount X follows an exponential distribution with parameter \lambda.

One of the most interesting facts about this example is the moment generating function. Note that \displaystyle M_N(t)=\frac{p}{1-(1-p)e^t}. The following shows the derivation of M_Y(t):

\displaystyle M_Y(t)=M_N[ln \thinspace M_X(t)]=\frac{p}{1-(1-p) e^{ln M_X(t)}}

\displaystyle =\frac{p}{1-(1-p) \frac{\lambda}{\lambda -t}}=\cdots=p+(1-p) \frac{\lambda p}{\lambda p-t}

The moment generating function is the weighted average of the mgf of a point mass at y=0 (which is 1) and the mgf of an exponential distribution with parameter \lambda p. Thus this compound geometric distribution is equivalent to a mixture of a point mass and an exponential distribution. We make use of this fact to derive the following basic properties.

Distribution Function
\displaystyle F_Y(y)=p+(1-p) (1-e^{-\lambda p y})=1-(1-p) e^{-\lambda p y} for y \ge 0

Density Function
\displaystyle f_Y(y)=\left\{\begin{matrix}p&\thinspace y=0\\{(1-p) \lambda p e^{-\lambda p y}}&\thinspace 0 < y\end{matrix}\right.

Mean and Higher Moments
\displaystyle E[Y]=(1-p) \frac{1}{\lambda p}=\frac{1-p}{p} \frac{1}{\lambda}=E[N] E[X]

\displaystyle E[Y^n]=p \cdot 0 + (1-p) \frac{n!}{(\lambda p)^n}=(1-p) \frac{n!}{(\lambda p)^n}

Variance
\displaystyle Var[Y]=\frac{2(1-p)}{\lambda^2 p^2}-\frac{(1-p)^2}{\lambda^2 p^2}=\frac{1-p^2}{\lambda^2 p^2}

Cumulant Generating Function
\displaystyle \Psi_Y(t)=ln \thinspace M_Y(t)=ln\biggl(p+(1-p) \frac{\lambda p}{\lambda p-t}\biggr)

Skewness
\displaystyle E\biggl[\biggl(Y-\mu_Y\biggr)^3\biggr]=\Psi_Y^{(3)}(0)=\frac{2-2p^3}{\lambda^3 p^3}

\displaystyle \gamma_Y=\frac{\Psi_Y^{(3)}(0)}{(Var[Y])^{\frac{3}{2}}}=\frac{2-2p^3}{(1-p^2)^{\frac{3}{2}}}
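
The following simulation sketch illustrates the equivalence, assuming the illustrative values p=0.4 and \lambda=0.2. It compares the compound geometric aggregate claims with the mixture of a point mass at 0 (weight p) and an exponential distribution with parameter \lambda p (weight 1-p).

```python
import numpy as np

# Simulation sketch of the compound geometric example versus the equivalent
# mixture.  Assumed (illustrative) parameters: p = 0.4, lambda = 0.2.
rng = np.random.default_rng(4)
p, lam, n_sims = 0.4, 0.2, 200_000

# numpy's geometric counts trials until the first success, so subtract 1
# to get P[N = n] = p (1-p)^n for n = 0, 1, 2, ...
N = rng.geometric(p, size=n_sims) - 1
Y = np.array([rng.exponential(1/lam, n).sum() for n in N])

# Mixture: point mass at 0 with weight p, exponential(lambda * p) with weight 1-p.
mix = np.where(rng.random(n_sims) < p, 0.0, rng.exponential(1/(lam*p), n_sims))

print("P[Y=0]: %.3f vs %.3f" % ((Y == 0).mean(), p))
print("E[Y]:   %.2f vs %.2f" % (Y.mean(), mix.mean()))
print("Var[Y]: %.2f vs %.2f" % (Y.var(), mix.var()))
print("95th percentile: %.2f vs %.2f" % (np.quantile(Y, 0.95), np.quantile(mix, 0.95)))
```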

More insurance examples of mixed distributions

Four posts have already been devoted to describing three models for “per loss” insurance payout. These are mixed distributions modeling the amount the insurer pays out for each random loss. They can also be viewed as mixtures. We now turn our attention to the mixed distributions modeling the “per period” payout for an insurance policy. That is, the mixed distributions we describe here model the total amount of losses paid out for each insurance policy in a given policy period. This involves the uncertain random losses as well as the uncertain claim frequency. In other words, there is a possibility of having no losses. When there are losses in a policy period, the number of losses can be uncertain (there can be only one loss or multiple losses). The links to the previous posts on mixed distributions are found at the end of this post.

The following is the general setting of the insurance problem we discuss in this post.

  1. The random variable X is the size of the random loss that is covered in an insurance contract. We assume that X is a continuous random variable. Naturally, the support of X is the set of nonnegative numbers (or some appropriate subset).
  2. Let Z be the “per loss” payout paid to the insured by the insurer. The variable Z reflects coverage modifications such as a deductible and/or a policy cap or other policy provisions that are applicable in the insurance contract.
  3. Let N be the number of claims in a given policy period. In this post, we assume that N has only two possibilities: N=0 or N=1. In other words, each policy has at most one claim in a period. Let p=P[N=1].
  4. Let Y be the total amount paid to the insured by the insurer during a fixed policy period.

The total claims variable Y is the mixture of Y \lvert N=0 and Y \lvert N=1. The conditional variable Y \lvert N=0 is a point mass representing “no loss”. On the other hand, we assume that [Y \lvert N=1]=Z. Thus Y is a mixture of a point mass at the origin and the “per loss” payout variable Z.

We first have a general discussion of the stated insurance setting. Then we discuss several different cases based on four coverage modifications that can be applied in the insurance contract. In each case, we illustrate with the exponential distribution. The four cases are:

  • Case 1. Z=X. There is no coverage modification. The insurer pays the entire loss amount.
  • Case 2. The insurance contract has a cap and the cap amount is m.
  • Case 3. The insurance contract is an excess-of-loss policy. The deductible amount is d.
  • Case 4. The insurance contract has a deductible d and a policy cap m where d<m.

General Discussion
The total payout Y is the mixture of a point mass at y=0 and the “per loss” payout Z. The following is the distribution F_Y(y):

\displaystyle F_Y(y)=(1-p) \thinspace F_U(y)+p \thinspace F_Z(y) where \displaystyle F_U(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1}&\thinspace 0 \le x\end{matrix}\right.

Since the distribution of Y is a mixture, we have a wealth of information available for us. For example, the following lists the mean, higher moments, variance, the moment generating function and the skewness.

  • \displaystyle E[Y]=p \thinspace E[Z]
  • \displaystyle E[Y^n]=p \thinspace E[Z^n] for all integers n>1
  • \displaystyle Var[Y]=pE[Z^2]-p^2 E[Z]^2
  • \displaystyle M_Y(t)=(1-p)+p \thinspace M_Z(t)
  • \displaystyle \gamma_Y=\frac{E[(Y-\mu_Y)^3]}{Var[Y]^{\frac{3}{2}}}=\frac{p E[Z^3]-3p^2 E[Z] E[Z^2]+2p^3 E[Z]^3}{\bigl(pE[Z^2]-p^2 E[Z]^2\bigr)^{\frac{3}{2}}}

The Derivations:
\displaystyle E[Y]=(1-p) \thinspace 0+p \thinspace E[Z]=p \thinspace E[Z]

\displaystyle E[Y^n]=(1-p) \thinspace 0^n+p \thinspace E[Z^n]=p \thinspace E[Z^n] for all integers n>1

\displaystyle Var[Y]=E[Y^2]-E[Y]^2=pE[Z^2]-p^2 E[Z]^2

\displaystyle M_Y(t)=(1-p) \thinspace e^0+p \thinspace M_Z(t)=(1-p)+p \thinspace M_Z(t)

\displaystyle E[(Y-\mu_Y)^3]=E[Y^3]-3 \mu_Y E[Y^2]+2 \mu_Y^3=p E[Z^3]-3p^2 E[Z] E[Z^2]+2p^3 E[Z]^3

\displaystyle \gamma_Y=\frac{p E[Z^3]-3p^2 E[Z] E[Z^2]+2p^3 E[Z]^3}{\bigl(pE[Z^2]-p^2 E[Z]^2\bigr)^{\frac{3}{2}}}

Note that, unlike the moments and the mgf, the skewness of a mixture is not simply the weighted average of the component skewnesses; it must be computed from the third central moment as above.

The following is another way to derive Var[Y] using the total variance formula:

\displaystyle Var[Y]=E_N[Var(Y \lvert N)]+Var_N[E(Y \lvert N)]

\displaystyle =(1-p)0+pVar[Z] + E_N[E(Y \lvert N)^2]-E_N[E(Y \lvert N)]^2

\displaystyle =pVar[Z] + (1-p)0^2+pE[Z]^2-p^2 E[Z]^2

\displaystyle =pE[Z^2]-p E[Z]^2 +pE[Z]^2-p^2 E[Z]^2

\displaystyle =pE[Z^2]-p^2 E[Z]^2

The above derivations are based on the idea of mixtures. The two conditional variables are Y \lvert N=0 and \lbrace{Y \lvert N=1}\rbrace=Z. The mixing weights are P[N=0] and P[N=1]. For more basic information on distributions that are mixtures, see this post (Basic properties of mixtures).

We now discuss the four specific cases based on the variations on the coverage modifications that can be placed on the “per loss” variable Z.

Case 1
This is the case that the insurance policy has no coverage modification. The insurer pays the entire random loss. Thus Z=X. The following is the payout rule of Y:

\displaystyle Y=\left\{\begin{matrix}0&\thinspace \text{no loss occurs}\\{Z=X}&\thinspace \text{a loss occurs}\end{matrix}\right.

This is a mixed distribution consisting of a point mass at the origin (no loss) and the random loss X. In this case, the “per loss” variable Z=X. Thus Y is a mixture of the following two distributions.

\displaystyle F_U(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1}&\thinspace 0 \le x\end{matrix}\right.

\displaystyle F_Z(x)=\left\{\begin{matrix}0&\thinspace x<0\\{F_X(x)}&\thinspace 0 \le x\end{matrix}\right.

Case 1 – Distribution Function
The following shows F_Y as a mixture, the explicit rule of F_Y and the density of Y.

\displaystyle F_Y(x)=(1-p) \thinspace F_U(x) + p \thinspace F_Z(x).

\displaystyle F_Y(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1-p+p \thinspace F_X(x)}&\thinspace 0 \le x\end{matrix}\right.

\displaystyle f_Y(x)=\left\{\begin{matrix}1-p&\thinspace x=0\\{p \thinspace f_X(x)}&\thinspace 0 < x\end{matrix}\right.

Case 1 – Basic Properties
Using basic properties of mixtures stated in the general case, we obtain the following:

\displaystyle E[Y]=p \thinspace E[X]

\displaystyle E[Y^n]=p \thinspace E[X^n] for all integers n>1

\displaystyle Var[Y]=p \thinspace E[X^2] - p^2 E[X]^2

\displaystyle M_Y(t)=1-p + p \thinspace M_X(t)

\displaystyle \gamma_Y=\frac{p E[X^3]-3p^2 E[X] E[X^2]+2p^3 E[X]^3}{\bigl(p E[X^2]-p^2 E[X]^2\bigr)^{\frac{3}{2}}}

Case 1 – Exponential Example
If the unmodified random loss has an exponential distribution, we have the following results:

\displaystyle E[Y]=\frac{p}{\lambda}

\displaystyle E[Y^n]=\frac{p \thinspace n!}{\lambda^n} for all integers n>1

\displaystyle Var[Y]=\frac{2p-p^2}{\lambda^2}

\displaystyle M_Y(t)=1-p+\frac{p \thinspace \lambda}{\lambda - t}

\displaystyle \gamma_Y=\frac{6p-6p^2+2p^3}{(2p-p^2)^{\frac{3}{2}}}
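
Here is a minimal Monte Carlo check of the Case 1 exponential results, assuming the illustrative values p=0.6 and \lambda=0.25.

```python
import numpy as np

# Monte Carlo check of the Case 1 exponential results.  Assumed (illustrative)
# parameters: p = 0.6, lambda = 0.25.
rng = np.random.default_rng(5)
p, lam, n_sims = 0.6, 0.25, 500_000

loss_occurs = rng.random(n_sims) < p
Y = np.where(loss_occurs, rng.exponential(1/lam, n_sims), 0.0)

skew_sim = ((Y - Y.mean())**3).mean() / Y.var()**1.5
skew_f   = (6*p - 6*p**2 + 2*p**3) / (2*p - p**2)**1.5

print("E[Y]:   %.3f vs %.3f" % (Y.mean(), p/lam))
print("Var[Y]: %.3f vs %.3f" % (Y.var(), (2*p - p**2)/lam**2))
print("skew:   %.3f vs %.3f" % (skew_sim, skew_f))
```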

Case 2
This is the case that the insurance policy has a policy cap. The “per loss” payout amount is capped at the amount m. The following is the payout rule of Y:

\displaystyle Y=\left\{\begin{matrix}0&\thinspace \text{no loss occurs}\\{Z}&\thinspace \text{a loss occurs}\end{matrix}\right.

\displaystyle Z=\left\{\begin{matrix}X&\thinspace X<m\\{m}&\thinspace X \ge m\end{matrix}\right.

\displaystyle Y=\left\{\begin{matrix}0&\thinspace \text{no loss occurs}\\{X}&\thinspace \text{a loss occurs and } X<m\\{m}&\text{a loss occurs and }X \ge m\end{matrix}\right.

Case 2 – Per Loss Variable Z
The following lists out the information we need for Z. For more information about the “per loss” payout for an insurance contract with a policy cap, see the post An insurance example of a mixed distribution – I.

\displaystyle F_Z(x)=\left\{\begin{matrix}0&\thinspace x<0\\{F_X(x)}&\thinspace 0 \le x<m\\{1}&\thinspace x \ge m\end{matrix}\right.

\displaystyle f_Z(x)=\left\{\begin{matrix}f_X(x)&\thinspace x<m\\{1-F_X(m)}&\thinspace x=m\end{matrix}\right.

\displaystyle E[Z]=\int_0^{m} x \thinspace f_X(x) \thinspace dx + m \thinspace [1-F_X(m)]

\displaystyle E[Z^n]=\int_0^{m} x^n \thinspace f_X(x) \thinspace dx + m^n \thinspace [1-F_X(m)] for all integers n > 1

\displaystyle M_Z(t)=\int_0^m e^{tx}f_X(x)dx+e^{tm}[1-F_X(m)]

\displaystyle \gamma_Z=\int_0^m \biggl(\frac{z-\mu_Z}{\sigma_Z}\biggr)^3f_X(z)dz+\biggl(\frac{m-\mu_Z}{\sigma_Z}\biggr)^3 [1-F_X(m)]

Case 2 – Distribution Function
Since Z is a mixture, the distribution of Y is a mixture of a point mass at the origin (no loss) and the mixture Z. As in the general case discussed above, the distribution function F_Y is a weighted average of F_U and F_Z where F_U is the distribution function of the point mass at y=0. The following shows the distribution function and the density function of Y.

\displaystyle F_U(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1}&\thinspace 0 \le x\end{matrix}\right.

\displaystyle F_Y(y)=(1-p) \thinspace F_U(y) + p \thinspace F_Z(y).

\displaystyle F_Y(y)=\left\{\begin{matrix}0&\thinspace y<0\\{1-p+pF_X(y)}&\thinspace 0 \le y<m\\{1}&y \ge m\end{matrix}\right.

\displaystyle f_Y(y)=\left\{\begin{matrix}1-p&y=0\\{pf_X(y)}&\thinspace 0<y<m\\{p(1-F_X(m))}&y=m\end{matrix}\right.

Case 2 – Basic Properties
To obtain the basic properties such as E[Y], E[Y^2], M_Y(t) and \gamma_Y, just apply the general formulas above with the “per loss” variable Z of this case. In other words, they are obtained by mixing the point mass (of no loss) with the “per loss” variable Z.

Case 2 – Exponential Example
If the unmodified loss X has an exponential distribution, we have the following results:

\displaystyle E[Y]=\frac{p}{\lambda}(1-e^{-\lambda m})

\displaystyle E[Y^2]=p\biggl(\frac{2}{\lambda^2}-\frac{2m}{\lambda}e^{-\lambda m}-\frac{2}{\lambda^2}e^{-\lambda m}\biggr)

\displaystyle Var[Y]=p \thinspace E[Z^2] - p^2 E[Z]^2

\displaystyle M_Y(t)=1-p+pM_Z(t) where

\displaystyle M_Z(t)=\int_0^m e^{tx} \lambda e^{-\lambda x}dx+e^{tm} e^{-\lambda m}

\displaystyle =\frac{\lambda}{\lambda -t}-\frac{\lambda}{\lambda -t} e^{-(\lambda-t)m}+e^{-(\lambda-t)m}
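
Here is a minimal Monte Carlo check of the Case 2 exponential results, assuming the illustrative values p=0.6, \lambda=0.25 and m=5.

```python
import numpy as np

# Monte Carlo check of the Case 2 exponential results.  Assumed (illustrative)
# parameters: p = 0.6, lambda = 0.25, cap m = 5.
rng = np.random.default_rng(6)
p, lam, m, n_sims = 0.6, 0.25, 5.0, 500_000

loss_occurs = rng.random(n_sims) < p
X = rng.exponential(1/lam, n_sims)
Y = np.where(loss_occurs, np.minimum(X, m), 0.0)    # per-period payout with a cap

EY_f  = (p/lam) * (1 - np.exp(-lam*m))
EY2_f = p * (2/lam**2 - (2*m/lam)*np.exp(-lam*m) - (2/lam**2)*np.exp(-lam*m))

print("E[Y]:   %.3f vs %.3f" % (Y.mean(), EY_f))
print("E[Y^2]: %.3f vs %.3f" % ((Y**2).mean(), EY2_f))
print("Var[Y]: %.3f vs %.3f" % (Y.var(), EY2_f - EY_f**2))
```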

Case 3
This is the case that the insurance policy is an excess-of-loss policy. The insurer agrees to pay the insured the amount of the random loss X in excess of a fixed amount d. The following is the payout rule of Y:

\displaystyle Y=\left\{\begin{matrix}0&\thinspace \text{no loss occurs}\\{Z}&\thinspace \text{a loss occurs}\end{matrix}\right.

\displaystyle Z=\left\{\begin{matrix}0&\thinspace X<d\\{X-d}&\thinspace X \ge d\end{matrix}\right.

\displaystyle Y=\left\{\begin{matrix}0&\thinspace \text{no loss occurs}\\{0}&\thinspace \text{a loss occurs and } X<d\\{X-d}&\text{a loss occurs and }X \ge d\end{matrix}\right.

Case 3 – Per Loss Variable Z
The following lists out the information we need for Z. For more information about the “per loss” payout for an insurance contract with a deductible, see the post An insurance example of a mixed distribution – II.

\displaystyle F_Z(y)=\left\{\begin{matrix}0&\thinspace y<0\\{F_X(y+d)}&\thinspace y \ge 0\end{matrix}\right.

\displaystyle f_Z(y)=\left\{\begin{matrix}F_X(d)&\thinspace y=0\\{f_X(y+d)}&\thinspace y > 0\end{matrix}\right.

\displaystyle E[Z]=\int_0^{\infty} y \thinspace f_X(y+d) \thinspace dy

\displaystyle E[Z^n]=\int_0^{\infty} y^n \thinspace f_X(y+d) \thinspace dy for all integers n>1

\displaystyle M_Z(t)=F_X(d) e^{0} + \int_0^{\infty} e^{tz}f_X(z+d) dz

\displaystyle =F_X(d) + e^{-td} \int_d^{\infty} e^{tw} f_X(w) dw

\displaystyle \gamma_Z=F_X(d) \biggl(\frac{0-\mu_Z}{\sigma_Z}\biggr)^3+\int_0^{\infty} \biggl(\frac{z-\mu_Z}{\sigma_Z}\biggr)^3 f_X(z+d) dz

Case 3 – Distribution Function
Since Z is a mixture, the distribution of Y is a mixture of a point mass at the origin (no loss) and the mixture Z. As in the general case discussed above, the distribution function F_Y is a weighted average of F_U and F_Z where F_U is the distribution function of the point mass at y=0. The following shows the distribution function and the density function of Y.

\displaystyle F_U(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1}&\thinspace 0 \le x\end{matrix}\right.

\displaystyle F_Y(y)=(1-p) \thinspace F_U(y) + p \thinspace F_Z(y).

\displaystyle F_Y(y)=\left\{\begin{matrix}0&\thinspace y<0\\{1-p+pF_X(y+d)}&\thinspace 0 \le y\end{matrix}\right.

\displaystyle f_Y(y)=\left\{\begin{matrix}1-p+pF_X(d)&y=0\\{pf_X(y+d)}&\thinspace 0<y\end{matrix}\right.

Note that the point mass of Y is made up of two point masses, one from having no loss and one from having losses less than the deductible.

Case 3 – Basic Properties
The basic properties of Y as a mixture are obtained by applying the general formulas with the specific information about the “per loss” Z in this case. In other words, they are obtained by mixing the point mass (of no loss) with the “per loss” variable Z.

Case 3 – Exponential Example
If the unmodified loss X has an exponential distribution, then we have the following results:

\displaystyle E[Y]=pE[Z]=p \thinspace \frac{e^{-\lambda d}}{\lambda}=p \thinspace e^{-\lambda d} E[X]

\displaystyle E[Y^2]=pE[Z^2]=p \thinspace \frac{2e^{-\lambda d}}{\lambda^2}=p \thinspace e^{-\lambda d}E[X^2]

\displaystyle Var[Y]=p \thinspace \frac{2e^{-\lambda d}}{\lambda^2}-p^2 \thinspace \frac{e^{-2\lambda d}}{\lambda^2}=pe^{-\lambda d}(2-pe^{-\lambda d})Var[X]

\displaystyle M_Y(t)=1-p+p \thinspace M_Z(t) where

\displaystyle M_Z(t)=1-e^{-\lambda d}+e^{-\lambda d} \frac{\lambda}{\lambda -t}=1-e^{-\lambda d}+e^{-\lambda d} \thinspace M_X(t)
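
Here is a minimal Monte Carlo check of the Case 3 exponential results, assuming the illustrative values p=0.6, \lambda=0.25 and d=2.

```python
import numpy as np

# Monte Carlo check of the Case 3 exponential results.  Assumed (illustrative)
# parameters: p = 0.6, lambda = 0.25, deductible d = 2.
rng = np.random.default_rng(7)
p, lam, d, n_sims = 0.6, 0.25, 2.0, 500_000

loss_occurs = rng.random(n_sims) < p
X = rng.exponential(1/lam, n_sims)
Y = np.where(loss_occurs, np.maximum(X - d, 0.0), 0.0)   # pay only the excess over d

EY_f  = p * np.exp(-lam*d) / lam
EY2_f = 2 * p * np.exp(-lam*d) / lam**2

print("E[Y]:   %.3f vs %.3f" % (Y.mean(), EY_f))
print("E[Y^2]: %.3f vs %.3f" % ((Y**2).mean(), EY2_f))
print("Var[Y]: %.3f vs %.3f" % (Y.var(), EY2_f - EY_f**2))
```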

Case 4
This is the case that the insurance policy has both a policy cap and a deductible. The “per loss” payout amount is capped at the amount m and is positive only when the loss is in excess of the deductible d. The following is the payout rule of Y:

\displaystyle Y=\left\{\begin{matrix}0&\thinspace \text{no loss occurs}\\{Z}&\thinspace \text{a loss occurs}\end{matrix}\right.

\displaystyle Z=\left\{\begin{matrix}0&\thinspace X<d\\{X-d}&\thinspace d \le X < d+m\\{m}&d+m \le X\end{matrix}\right.

\displaystyle Y=\left\{\begin{matrix}0&\thinspace \text{no loss occurs}\\{0}&\text{a loss and }X<d\\{X-d}&\text{a loss and }d \le X < d+m\\{m}&\thinspace \text{a loss and }X \ge d+m\end{matrix}\right.

Case 4 – Per Loss Variable Z
The following lists out the information we need for Z. For more information about the “per loss” payout for an insurance contract with a deductible and a policy cap, see the post An insurance example of a mixed distribution – III.

\displaystyle F_Z(y)=\left\{\begin{matrix}0&\thinspace y<0\\{F_X(y+d)}&\thinspace 0 \le y < m\\{1}&m \le y\end{matrix}\right.

\displaystyle f_Z(y)=\left\{\begin{matrix}F_X(d)&\thinspace y=0\\{f_X(y+d)}&\thinspace 0 < y < m\\{1-F_X(d+m)}&y=m\end{matrix}\right.

\displaystyle E[Z]=\int_0^m y \thinspace f_X(y+d) \thinspace dy + m \thinspace [1-F_X(d+m)]

\displaystyle E[Z^n]=\int_0^m y^n \thinspace f_X(y+d) \thinspace dy + m^n \thinspace [1-F_X(d+m)] for all integers n>1

\displaystyle M_Z(t)=F_X(d) e^0 + \int_0^m e^{tx} f_X(x+d) dx + e^{tm} [1-F_X(d+m)]

\displaystyle =F_X(d) + \int_0^m e^{tx} f_X(x+d) dx + e^{tm} [1-F_X(d+m)]

\displaystyle \gamma_Z=F_X(d) \biggl(\frac{0-\mu_Z}{\sigma_Z}\biggr)^3+\int_0^{m} \biggl(\frac{z-\mu_Z}{\sigma_Z}\biggr)^3 f_X(z+d) \thinspace dz + [1-F_X(d+m)] \biggl(\frac{m-\mu_Z}{\sigma_Z}\biggr)^3

Case 4 – Distribution Function
Since Z is a mixture, the distribution of Y is a mixture of a point mass at the origin (no loss) and the mixture Z. As in the general case discussed above, the distribution function F_Y is a weighted average of F_U and F_Z where F_U is the distribution function of the point mass at y=0. The following shows the distribution function and the density function of Y.

\displaystyle F_U(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1}&\thinspace 0 \le x\end{matrix}\right.

\displaystyle F_Y(y)=(1-p) \thinspace F_U(y) + p \thinspace F_Z(y).

\displaystyle F_Y(y)=\left\{\begin{matrix}0&\thinspace y<0\\{1-p+pF_X(y+d)}&\thinspace 0 \le y<m\\{1}&y \ge m\end{matrix}\right.

\displaystyle f_Y(y)=\left\{\begin{matrix}1-p+pF_X(d)&y=0\\{pf_X(y+d)}&\thinspace 0<y<m\\{p[1-F_X(d+m)]}&y=m\end{matrix}\right.

Note that the point mass of Y at the origin is made up of two point masses, one from having no loss and one from having losses less than the deductible. There is also a point mass at y=m, arising from losses of at least d+m.

Case 4 – Basic Properties
The basic properties of Y as a mixture are obtained by applying the general formulas with the specific information about the “per loss” Z in this case. In other words, they are obtained by mixing the point mass (of no loss) with the “per loss” variable Z.

Case 4 – Exponential Example
If the unmodified loss X has an exponential distribution, then we have the following results:

\displaystyle E[Y]=pE[Z]=p e^{-\lambda d} \frac{1}{\lambda} (1-e^{-\lambda m})=p e^{-\lambda d} (1-e^{-\lambda m}) E[X]

Another view of E[Y]:
\displaystyle E[Y]=e^{-\lambda d} E[Y_2] where Y_2 is the Y in Case 2.

Also, it can be shown that:
\displaystyle E[Y^2]=e^{-\lambda d} E[Y_2^2] where Y_2 is the Y in Case 2.
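
Here is a small simulation sketch of these two relations, assuming the illustrative values p=0.6, \lambda=0.25, d=2 and m=5.

```python
import numpy as np

# Simulation sketch of the relations E[Y] = e^{-lambda d} E[Y_2] and
# E[Y^2] = e^{-lambda d} E[Y_2^2].  Assumed (illustrative) parameters:
# p = 0.6, lambda = 0.25, deductible d = 2, cap m = 5.
rng = np.random.default_rng(8)
p, lam, d, m, n_sims = 0.6, 0.25, 2.0, 5.0, 500_000

loss_occurs = rng.random(n_sims) < p
X = rng.exponential(1/lam, n_sims)

Y2 = np.where(loss_occurs, np.minimum(X, m), 0.0)     # Case 2: cap only
Z4 = np.clip(X - d, 0.0, m)                           # per-loss payout, deductible and cap
Y4 = np.where(loss_occurs, Z4, 0.0)                   # Case 4

print("E[Y]:   %.3f vs %.3f" % (Y4.mean(), np.exp(-lam*d) * Y2.mean()))
print("E[Y^2]: %.3f vs %.3f" % ((Y4**2).mean(), np.exp(-lam*d) * (Y2**2).mean()))
```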

Here are the links to the previous discussions of mixed distributions:
An insurance example of a mixed distribution – I
An insurance example of a mixed distribution – II
An insurance example of a mixed distribution – III
Mixed distributions as mixtures

An insurance example of a mixed distribution – III

In the previous two posts, we discuss mixed distributions that are derived from modifying coverage on insurance contracts. Let X be the dollar amount of a random loss covered by an insurance contract. Without any coverage modification, the insurer would be obligated to pay the entire amount of the loss X. With some type of coverage modification, we are interested in the amount Y paid out by the insurer. How do we model Y based on the distribution of X? In one previous post, we discussed the model of the insurance payout Y when the insurance contract has a policy maximum (An insurance example of a mixed distribution – I). In another post, the coverage modification is having a deductible (An insurance example of a mixed distribution – II). In this post, we consider an insurance contract that has a combination of a deductible and a policy maximum. We discuss the model for the insurance payout Y and illustrate the calculation with the exponential distribution.

Note that the model for Y in this post and in the previous two posts is to model the insurance per loss or per claim. In other words, we model the payment made by the insurer for each insured loss. In future posts, we will discuss models that describe the insurance payments per insurance policy during a policy period. Such models will have to take into account that there may be no loss (or claim) during a period or that there may be multiple losses or claims in a policy period.

Modifying insurance coverage (e.g. having a policy maximum and/or a deductible) is akin to censoring and/or truncating the random loss amounts. Each type of censoring creates a probability mass in the distribution of the “per loss” insurance payout. So the presence of a deductible and a policy maximum in the same contract creates two probability masses. Let d be the deductible and let m be the policy maximum where d<m. Specifically, the following is the payout rule:

\displaystyle Y=\left\{\begin{matrix}0&\thinspace X<d\\{X-d}&\thinspace d \le X < d+m\\{m}&d+m \le X\end{matrix}\right.

The two probability masses are at y=0 and y=m. Thus the distribution function F_Y of Y has two jumps:

\displaystyle F_Y(y)=\left\{\begin{matrix}0&\thinspace y<0\\{F_X(y+d)}&\thinspace 0 \le y < m\\{1}&m \le y\end{matrix}\right.

Note that the distribution function F_Y is obtained by shifting the portion of the graph of F_X between the points (d,F_X(d)) and (d+m,F_X(d+m)) leftward by d, so that it starts at the point (0,F_X(d)).

The point mass at y=0 has probability P[X<d] and the point mass at y=m has probability P[X \ge d+m]. Thus the following is the density function of the “per loss” insurance payout Y:

\displaystyle f_Y(y)=\left\{\begin{matrix}F_X(d)&\thinspace y=0\\{f_X(y+d)}&\thinspace 0 < y < m\\{1-F_X(d+m)}&y=m\end{matrix}\right.

Here’s the mean payout and the higher moments of the payout:

\displaystyle E[Y]=\int_0^m y \thinspace f_X(y+d) \thinspace dy + m \thinspace [1-F_X(d+m)]

\displaystyle E[Y^n]=\int_0^m y^n \thinspace f_X(y+d) \thinspace dy + m^n \thinspace [1-F_X(d+m)] for all integers n>1

Example
Suppose the random loss X follows an exponential distribution with parameter \lambda. Let Y be the “per loss” payout for an insurance contract that has a combination of a deductible d and a policy maximum m. Let Z be the payout for an insurance contract with the same policy maximum m but with no deductible. Interestingly, in this exponential example, we can express E[Y] and Var[Y] in terms of E[Z] and E[Z^2] (see An insurance example of a mixed distribution – I).

\displaystyle E[Y]=\int_0^m y \thinspace \lambda e^{-\lambda (y+d)} \thinspace dy + m \thinspace [e^{-\lambda (d+m)}]

\displaystyle =e^{-\lambda d} \thinspace \biggl(\int_0^m y \thinspace \lambda \thinspace e^{-\lambda y} \thinspace dy+m \thinspace e^{-\lambda m}\biggr)=e^{-\lambda d} E[Z]

\displaystyle E[Y^2]=\int_0^m y^2 \thinspace \lambda \thinspace e^{-\lambda (y+d)} \thinspace dy+m^2 \thinspace e^{-\lambda (d+m)}

\displaystyle =e^{-\lambda d} \thinspace \biggl(\int_0^m y^2 \thinspace \lambda \thinspace e^{-\lambda y} \thinspace dy+m^2 \thinspace e^{-\lambda m}\biggr)=e^{-\lambda d} E[Z^2]

\displaystyle Var[Y]=e^{-\lambda d} E[Z^2] - e^{-2 \lambda d} E[Z]^2

Comment about the exponential example
In the insurance contract with a policy maximum but no deductible, the expected insurance payout (per loss) is reduced from \displaystyle \frac{1}{\lambda} to \displaystyle \frac{1-e^{-\lambda m}}{\lambda}. With the addition of a deductible d, the expected payout is reduced from \displaystyle \frac{1}{\lambda} to \displaystyle e^{-\lambda d} \frac{1-e^{-\lambda m}}{\lambda}. The expected insurance payout is reduced by the amount \displaystyle \frac{1-e^{-\lambda d}(1-e^{-\lambda m})}{\lambda}. Then the fraction of the loss eliminated by the deductible and the policy cap is \displaystyle 1-e^{-\lambda d}(1-e^{-\lambda m}). Note that the fraction of the loss eliminated by the policy cap alone is \displaystyle 1-(1-e^{-\lambda m})=e^{-\lambda m}.

Intuitively, it is clear that a higher fraction of the random loss is eliminated if there is a deductible on top of the policy cap. In this example, this is also borne out by the following inequality:

\displaystyle e^{-\lambda m} < 1-e^{-\lambda d}(1-e^{-\lambda m})

One might also expect that adding a deductible on top of the policy maximum reduces the variance of the insurance payout, that is, that Var[Y] is less than Var[Z]. Interestingly, this is not always the case in the exponential example. Starting from the expression for Var[Y] derived above:

\displaystyle Var[Y]=e^{-\lambda d} E[Z^2] - e^{-2 \lambda d} E[Z]^2

\displaystyle =e^{-\lambda d} (Var[Z]+E[Z]^2) - e^{-2 \lambda d} E[Z]^2

\displaystyle =e^{-\lambda d} Var[Z] + (e^{-\lambda d}-e^{-2 \lambda d}) E[Z]^2

Subtracting from Var[Z] gives:

\displaystyle Var[Z]-Var[Y]=(1-e^{-\lambda d}) \biggl(Var[Z]-e^{-\lambda d} E[Z]^2 \biggr)

Thus Var[Y]<Var[Z] if and only if Var[Z]>e^{-\lambda d} \thinspace E[Z]^2. This inequality holds when the policy maximum m is large (as m \rightarrow \infty, Z approaches the unmodified exponential loss, for which Var[Z]=E[Z]^2>e^{-\lambda d} E[Z]^2) or when the deductible d is large. It can fail when both m and d are small. For example, with \lambda=1, m=1 and d=0.1, we have Var[Z] \approx 0.129 while Var[Y] \approx 0.151. The deductible always reduces the expected payout, but the point mass it adds at y=0 can spread out the payout distribution enough to increase the variance.
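
The following numerical sketch illustrates this comparison using the closed-form moments of the capped exponential loss Z=min(X,m); the parameter values are chosen only for illustration.

```python
import numpy as np

# Numerical sketch of the variance comparison, using the closed-form moments of
# the capped exponential Z = min(X, m).  Parameter values are illustrative only.
def var_Y_and_Z(lam, d, m):
    EZ  = (1 - np.exp(-lam*m)) / lam
    EZ2 = 2/lam**2 - (2*m/lam)*np.exp(-lam*m) - (2/lam**2)*np.exp(-lam*m)
    varZ = EZ2 - EZ**2
    varY = np.exp(-lam*d)*EZ2 - np.exp(-2*lam*d)*EZ**2
    return varY, varZ

for lam, d, m in [(1.0, 0.1, 1.0), (1.0, 2.0, 1.0), (1.0, 0.1, 10.0)]:
    varY, varZ = var_Y_and_Z(lam, d, m)
    print("lambda=%.1f d=%.1f m=%.1f  Var[Y]=%.4f  Var[Z]=%.4f" % (lam, d, m, varY, varZ))
```

The first set of parameters gives Var[Y] > Var[Z], while the other two give Var[Y] < Var[Z].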

An insurance example of a mixed distribution – II

This is a continuation of the previous post, presenting another example of a mixed distribution involving an insurance contract. Suppose we have an insurance contract where the insurance payout per loss is subject to a deductible d. That is, if the random loss is less than d, the insurer pays nothing (the loss is assumed by the insured). If the loss exceeds d, the insurer pays X-d. This type of insurance contract is also called an excess-of-loss contract, since the insurer agrees to pay the insured the amount of the random loss X in excess of a fixed amount d. The following is another statement of the insurance payout rule:

\displaystyle Y=\left\{\begin{matrix}0&\thinspace X<d\\{X-d}&\thinspace X \ge d\end{matrix}\right.

We assume that the random loss X only takes on nonnegative numbers and is a continuous random variable. Let F_X be the distribution function of the random loss X. The insurance payout variable Y has a point mass at y=0. The following is the distribution function of Y.

\displaystyle F_Y(y)=\left\{\begin{matrix}0&\thinspace y<0\\{F_X(y+d)}&\thinspace y \ge 0\end{matrix}\right.

The distribution function F_Y has a jump at y=0 and the size of the jump is F_X(d). The point mass at y=0 reflects the probability of having no insurance payout. As a result, the following is the density of Y.

\displaystyle f_Y(y)=\left\{\begin{matrix}F_X(d)&\thinspace y=0\\{f_X(y+d)}&\thinspace y > 0\end{matrix}\right.

The following is the calculation for the mean insurance payout and higher moments:

\displaystyle E[Y]=\int_0^{\infty} y \thinspace f_X(y+d) \thinspace dy

\displaystyle E[Y^n]=\int_0^{\infty} y^n \thinspace f_X(y+d) \thinspace dy for all integers n>1

Example of Calculation
Suppose we have an excess-of-loss policy with deductible d>0. Furthermore, the random loss X has an exponential distribution with parameter \lambda. Then the distribution function of X is F_X(x)=1-e^{-\lambda x}. The following is the distribution function of the insurance payout:

\displaystyle F_Y(y)=\left\{\begin{matrix}0&\thinspace y<0\\{1-e^{-\lambda (y+d)}}&\thinspace y \ge 0\end{matrix}\right.

The size of the jump at y=0 is 1-e^{-\lambda d}. As a result, the density of the insurance payout is:

\displaystyle f_Y(y)=\left\{\begin{matrix}1-e^{-\lambda d}&\thinspace y=0\\{\lambda e^{-\lambda (y+d)}}&\thinspace y > 0\end{matrix}\right.

The expected insurance payout is:

\displaystyle E[Y]=\int_0^{\infty} y \thinspace \lambda e^{-\lambda (y+d)} \thinspace dy=\frac{e^{-\lambda d}}{\lambda}

The following calculation produces the variance of the insurance payout:

\displaystyle E[Y^2]=\int_0^{\infty} y^2 \thinspace \lambda e^{-\lambda (y+d)} \thinspace dy=\frac{2 e^{- \lambda d}}{\lambda^2}

\displaystyle Var[Y]=\frac{2 e^{- \lambda d}}{\lambda^2}-\biggl(\frac{e^{-\lambda d}}{\lambda}\biggr)^2=\frac{1}{\lambda^2} \thinspace e^{-\lambda d} \thinspace (2-e^{-\lambda d})
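
Here is a minimal Monte Carlo check of these formulas, assuming the illustrative values \lambda=0.25 and d=2.

```python
import numpy as np

# Monte Carlo check of the excess-of-loss formulas.  Assumed (illustrative)
# parameters: lambda = 0.25, deductible d = 2.
rng = np.random.default_rng(9)
lam, d, n_sims = 0.25, 2.0, 500_000

X = rng.exponential(1/lam, n_sims)
Y = np.maximum(X - d, 0.0)            # per-loss payout under the deductible

print("E[Y]:   %.3f vs %.3f" % (Y.mean(), np.exp(-lam*d)/lam))
print("Var[Y]: %.3f vs %.3f" % (Y.var(), np.exp(-lam*d)*(2 - np.exp(-lam*d))/lam**2))
```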

Comment
The presence of a deductible reduces the “per loss” expected payout paid by the insurer and also has the effect of variance reduction. In the exponential example, the expected payout is reduced from \displaystyle \frac{1}{\lambda} to \displaystyle \frac{e^{-\lambda d}}{\lambda}. Due to the deductible, the “per loss” payout is reduced by the amount \displaystyle \frac{1-e^{-\lambda d}}{\lambda}. The fraction of the expected loss eliminated by the presence of the deductible is 1-e^{-\lambda d}.

The presence of the deductible is also variance reducing relative to the unmodified loss: Var[Y] < \frac{1}{\lambda^2}=Var[X], since e^{-\lambda d} \thinspace (2-e^{-\lambda d})<1 is equivalent to (1-e^{-\lambda d})^2>0.

Comment
Note that the model for Y in this post and in the previous post is to model the insurance per loss or per claim. In other words, we model the payment made by the insurer for each insured loss. In future posts, we will discuss models that describe the insurance payments per insurance policy during a policy period. Such “per policy” models will have to take into account that there may be no loss (or claim) during a period or that there may be multiple losses or claims in a policy period.

An insurance example of a mixed distribution – I

Mixed distributions arise naturally in many applications. In this post and in the next several posts we discuss several examples of mixed distributions based on insurance concepts. We then illustrate the calculation using the exponential distribution as the model for the random loss.

The support of a random variable X is the subset S of the real numbers on which the probability mass function (if X is discrete) or the probability density function (if X is continuous) is positive. If X is any continuous random variable, P[X=a]=0 for any a \in S. On the other hand, if X is any discrete random variable, P[X=a] > 0 for any a \in S. In other words, a discrete distribution consists of a finite or countably infinite number of probability masses (or point masses) while continuous distributions have no point masses. This is one distinguishing characteristic between continuous random variables and discrete random variables. Mixed distributions (or mixed random variables) are neither continuous nor discrete. This means if X has a mixed distribution, X has at least one probability mass (i.e. P[X=a]>0 for at least one a \in S) and it is also true that there is some interval (a,b) \subset S such that P[X=c]=0 for all c \in (a,b).

Let’s consider one insurance example of a mixed distribution. Let X be the size (in dollar amount) of a random loss for an insurance contract. Suppose that the insurance contract has a policy maximum of M. For the purpose of this discussion, we assume that X is a continuous random variable. We also assume that P[X>M]>0. If the random loss is less than M, the insurer pays the entire random loss amount. If the random loss exceeds M, the insurance payout is capped at M. Let Y be the “per loss” amount payable by the insurer. Then Y is determined by the following rule:

\displaystyle Y=\left\{\begin{matrix}X&\thinspace X<M\\{M}&\thinspace X \ge M\end{matrix}\right.

Since the random loss X has a continuous distribution, F_X, the distribution function of X, is a continuous function. Then the insurance payout Y has a mixed distribution. The distribution of Y has a probability mass (point mass) at x=M and the distribution is continuous on the interval (0,M). The cumulative distribution function F_Y of Y has a jump at x=M. The following is F_Y(x):

\displaystyle F_Y(x)=\left\{\begin{matrix}0&\thinspace x<0\\{F_X(x)}&\thinspace 0 \le x<M\\{1}&\thinspace x \ge M\end{matrix}\right.

Since the right tail \displaystyle \int_M^{\infty} f_X(x) \thinspace dx is positive, F_Y(x) has a jump at x=M. The size of the jump is 1-F_X(M), which is the size of the point mass at x =M. The density function of Y is a hybrid one:

\displaystyle f_Y(x)=\left\{\begin{matrix}f_X(x)&\thinspace x<M\\{1-F_X(M)}&\thinspace x=M\end{matrix}\right.

The following derives the mean and the higher moments of the insurance payout.

\displaystyle E[Y]=\int_0^{M} x \thinspace f_X(x) \thinspace dx + M \thinspace [1-F_X(M)]

\displaystyle E[Y^n]=\int_0^{M} x^n \thinspace f_X(x) \thinspace dx + M^n \thinspace [1-F_X(M)] for all integers n > 1

Consequently, the following computes the variance of the insurance payout Y:

\displaystyle Var[Y]=E[Y^2]-\biggl(E[Y]\biggr)^2

Example of Calculation
Suppose we have an insurance contract with a policy maximum M. Furthermore, the random loss amount in the insurance contract has an exponential distribution with parameter \lambda. Then F_X(x)=1-e^{-\lambda x}. The distribution function of the insurance payout Y is:

\displaystyle F_Y(x)=\left\{\begin{matrix}0&\thinspace x<0\\{1-e^{- \lambda x}}&\thinspace 0 \le x<M\\{1}&\thinspace x \ge M\end{matrix}\right.

In the distribution function, the size of the jump at x=M is e^{- \lambda M}. As a result, the density function of the insurance per loss payout Y is:

\displaystyle f_Y(x)=\left\{\begin{matrix}\lambda e^{- \lambda x}&\thinspace x<M\\{e^{- \lambda M}}&\thinspace x=M\end{matrix}\right.

The expected insurance payout is:

\displaystyle E[Y]=\int_0^{M} x \thinspace \lambda \thinspace e^{-\lambda x} \thinspace dx + M \thinspace [e^{-\lambda M}]= \frac{1}{\lambda}(1-e^{-\lambda M})

The following calculation derives the variance of the insurance payout:

\displaystyle E[Y^2]=\int_0^{M} x^2 \thinspace \lambda \thinspace e^{-\lambda x} \thinspace dx + M^2 \thinspace [e^{-\lambda M}]

\displaystyle = \frac{2}{\lambda^2} - \frac{2M}{\lambda}e^{-\lambda M}-\frac{2}{\lambda^2}e^{-\lambda M}

\displaystyle Var[Y]=\frac{2}{\lambda^2} - \frac{2M}{\lambda}e^{-\lambda M}-\frac{2}{\lambda^2}e^{-\lambda M} - \frac{1}{\lambda^2} \biggl(1-e^{-\lambda M}\biggr)^2

\displaystyle = \frac{1}{\lambda^2}-\frac{2M}{\lambda}e^{-\lambda M}-\frac{1}{\lambda^2}e^{-2 \lambda M}
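
Here is a minimal Monte Carlo check of these formulas, assuming the illustrative values \lambda=0.25 and M=6.

```python
import numpy as np

# Monte Carlo check of the capped-payout formulas.  Assumed (illustrative)
# parameters: lambda = 0.25, policy maximum M = 6.
rng = np.random.default_rng(10)
lam, M, n_sims = 0.25, 6.0, 500_000

X = rng.exponential(1/lam, n_sims)
Y = np.minimum(X, M)                  # per-loss payout with a policy maximum

EY_f   = (1 - np.exp(-lam*M)) / lam
VarY_f = 1/lam**2 - (2*M/lam)*np.exp(-lam*M) - (1/lam**2)*np.exp(-2*lam*M)

print("E[Y]:   %.3f vs %.3f" % (Y.mean(), EY_f))
print("Var[Y]: %.3f vs %.3f" % (Y.var(), VarY_f))
```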

Comment
Not surprisingly, the policy cap reduces risk for the insurer. The cap for the insurance benefit has the effect of reducing the amount paid out by the insurer. In the above calculation involving the exponential distribution, the expected insurance payout is reduced from \displaystyle \frac{1}{\lambda} to \displaystyle \frac{1}{\lambda} (1-e^{-\lambda M}). In other words, due to the policy cap, the expected insurance per loss payout is reduced by the amount \displaystyle \frac{e^{-\lambda M}}{\lambda}. The fraction of the loss eliminated by the policy cap is \displaystyle e^{-\lambda M}.

On the other hand, it is clear that the policy cap is variance reducing. For the exponential example, we show that Var[Y] is less than Var[X], that is, the variance of the “per loss” insurance payout when there is a policy maximum is less than the variance of the unmodified random loss. We have the following claim.

Claim
\displaystyle Var[X]=\frac{1}{\lambda^2}> \frac{1}{\lambda^2}-\frac{2M}{\lambda}e^{-\lambda M}-\frac{1}{\lambda^2}e^{-2 \lambda M}=Var[Y]

Suppose the claim is not true. Then the following derivation produces a contradiction.

\displaystyle \frac{1}{\lambda^2}\le \frac{1}{\lambda^2}-\frac{2M}{\lambda}e^{-\lambda M}-\frac{1}{\lambda^2}e^{-2 \lambda M}

\displaystyle 1 \le 1-2M \lambda e^{-\lambda M}-e^{-2 \lambda M}

\displaystyle 2M \lambda e^{-\lambda M}+e^{-2 \lambda M} \le 0

Note that the last inequality is false as the left hand side of the inequality is positive.

Comment
Note that the model for Y in this post is to model the insurance per loss or per claim. In other words, we model the payment made by the insurer for each insured loss. In future posts, we will discuss models that describe the insurance payments per insurance policy during a policy period. Such “per policy” models will have to take into account that there may be no loss (or claim) during a period or that there may be multiple losses or claims in a policy period.