# Basic properties of mixtures

In this post, we discuss some basic properties of mixtures (see these two previous posts for examples of mixtures – The law of total probability and Examples of mixtures). We also present the example of the negative binomial distribution. We show that the negative binomial distribution is a mixture of Poisson distributions with gamma mixing weights.

A random variable $X$ is a mixture if its distribution is a weighted sum (or integral) of a family of distribution functions $F_{X \lvert Y}$ where $Y$ is the mixing random variable. More specifically, $X$ is a mixture if its distribution function $F_X$ is of one of the following two forms:

$\displaystyle F_X(x)=\sum \limits_{y} F_{X \lvert Y=y}(x) P(Y=y)$

$\displaystyle F_X(x)=\int_{-\infty}^{+\infty} F_{X \lvert Y=y}(x) \thinspace f_Y(y) \thinspace dy$

In the first case, $X$ is a discrete mixture (i.e. the discrete random variable $Y$ provides the weights). In the second case, $X$ is a continuous mixture (i.e. the continuous random variable $Y$ provides the weights). In either case, $Y$ is said to be the mixing variable and its distribution the mixing distribution. In some probability and statistics texts, the notion of mixtures is called compounding.
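As a quick illustration, the discrete form of the definition can be evaluated directly in code. The sketch below uses made-up class weights and normal conditional distributions (these numbers are assumptions, chosen only for illustration) and computes the mixture distribution function as a weighted sum of the conditional distribution functions:

```python
from statistics import NormalDist

# Discrete mixture: Y picks a risk class with the given weights, and
# X | Y = y is normal with class-specific parameters (illustrative numbers).
weights = {0: 0.7, 1: 0.3}                      # P(Y = y)
components = {0: NormalDist(mu=100, sigma=20),  # F_{X|Y=0}
              1: NormalDist(mu=300, sigma=50)}  # F_{X|Y=1}

def mixture_cdf(x):
    """F_X(x) = sum over y of F_{X|Y=y}(x) * P(Y = y)."""
    return sum(components[y].cdf(x) * w for y, w in weights.items())

print(mixture_cdf(150))  # ≈ 0.696
```

Because each conditional CDF is a valid distribution function and the weights sum to one, the resulting `mixture_cdf` is itself a valid distribution function.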

Mixtures arise in many settings. The notion of mixtures is important in insurance applications (e.g. when the risk class of a policyholder is uncertain). The distribution for modeling the random loss for an insured risk (or a group of insured risks) is often a mixture distribution. A discrete mixture arises when the risk classification is discrete. A continuous mixture is important for situations where the random loss distribution has an uncertain risk parameter that follows a continuous distribution. See these two previous posts for examples of mixtures – The law of total probability and Examples of mixtures.

## Unconditional Expectation
Let $X$ be a mixture and $Y$ be the mixing variable. Let $h:\mathbb{R} \rightarrow \mathbb{R}$ be a continuous function. We show the following fact about unconditional expectation. This formula is used below for establishing basic facts of mixtures.

$E[h(X)]=E_Y[\thinspace E(h(X) \lvert Y)] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (*)$

The following is the derivation for the case where both $X$ and $Y$ are continuous (the discrete and mixed cases are analogous, with sums in place of integrals):

$\displaystyle E_Y[\thinspace E(h(X) \lvert Y)]$

$\displaystyle=\int_{-\infty}^{+\infty}E[h(X) \lvert Y=y] \thinspace f_Y(y) \thinspace dy$

$\displaystyle=\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} h(x) \thinspace f_{X \lvert Y}(x \lvert y) \thinspace dx \thinspace f_Y(y) \thinspace dy$

$\displaystyle=\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} h(x) \thinspace f_{X \lvert Y}(x \lvert y) \thinspace f_Y(y) \thinspace dx \thinspace dy$

$\displaystyle=\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} h(x) \thinspace f_{X,Y}(x,y) \thinspace dx \thinspace dy=E[h(X)]$

## Basic Properties of a Mixture
Let $X$ be a mixture with mixing variable $Y$. The unconditional mean, variance and moment generating function of $X$ are:

(1) $E[X]=E_Y[E(X \lvert Y)]$

(2) $Var[X]=E_Y[Var(X \lvert Y)]+Var_Y[E(X \lvert Y)]$

(3) $M_X(t)=E_Y[M_{X \lvert Y}(t)]$

### Discussion of (1)
Statement (1) is the law of total expectation and follows from the unconditional expectation formula (*) above with $h(x)=x$. Suppose $X$ is the random loss amount for an insured whose risk class is uncertain. The formula (1) states that the average loss is the average of averages: find the average loss for each risk class and then take the weighted average of these class averages according to the distribution of the mixing variable (e.g. the distribution of the policyholders by risk class).
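The "average of averages" idea can be made concrete with a small numeric sketch (the class weights and class means below are hypothetical):

```python
# Hypothetical block of policies in two risk classes (numbers are made up).
class_weights = {"standard": 0.8, "substandard": 0.2}     # distribution of Y
class_means = {"standard": 500.0, "substandard": 2000.0}  # E[X | Y = y]

# Formula (1): E[X] = sum over y of E[X | Y = y] * P(Y = y)
overall_mean = sum(class_means[y] * w for y, w in class_weights.items())
print(overall_mean)  # 0.8 * 500 + 0.2 * 2000 = 800
```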

### Discussion of (2)
Statement (2) is the law of total variance. Sticking with the example of insured risks in a block of insurance policies, the total variance of the random loss for an insured comes from two sources: the average of the variation within each risk class plus the variation of the average loss across risk classes. As we will see in the example below and in subsequent posts, the uncertainty in a risk parameter in the distribution of $X$ (through the conditioning on $Y$) has the effect of increasing the variance of the unconditional random loss. The following derivations establish the formula for total variance.

$E_Y[Var(X \lvert Y)]$

$=E_Y \lbrace{E[X^2 \lvert Y]-E[X \lvert Y]^2}\rbrace$

$=E_Y[E(X^2 \lvert Y)]-E_Y[E(X \lvert Y)^2]$

$=E[X^2]-E_Y[E(X \lvert Y)^2]$

On the other hand, $Var_Y[E(X \lvert Y)]$

$=E_Y[E(X \lvert Y)^2]-E_Y[E(X \lvert Y)]^2$

$=E_Y[E(X \lvert Y)^2]-E[X]^2$

Adding the two expressions gives $E_Y[Var(X \lvert Y)]+Var_Y[E(X \lvert Y)]=E[X^2]-E[X]^2=Var[X]$, which establishes (2).
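Continuing the hypothetical two-class insurance example, the sketch below evaluates both terms of (2) for made-up class means and variances and cross-checks the result against $E[X^2]-E[X]^2$:

```python
# Hypothetical two-class block of policies (all numbers are made up).
weights = {"A": 0.8, "B": 0.2}              # P(Y = y)
means = {"A": 500.0, "B": 2000.0}           # E[X | Y = y]
variances = {"A": 100.0**2, "B": 900.0**2}  # Var(X | Y = y)

m = sum(means[y] * w for y, w in weights.items())              # E[X]
ev = sum(variances[y] * w for y, w in weights.items())         # E_Y[Var(X|Y)]
ve = sum((means[y] - m) ** 2 * w for y, w in weights.items())  # Var_Y[E(X|Y)]
total_var = ev + ve

# Cross-check against Var[X] = E[X^2] - E[X]^2, using
# E[X^2 | Y = y] = Var(X | Y = y) + E[X | Y = y]^2.
second_moment = sum((variances[y] + means[y] ** 2) * w
                    for y, w in weights.items())
print(total_var, second_moment - m * m)  # the two numbers agree
```

Note that the between-class term `ve` contributes heavily here: the mixture is much more variable than either risk class alone.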

### Discussion of (3)
This follows from the unconditional expectation formula (*) above with $h(x)=e^{t x}$. Note that $\displaystyle M_X(t)=E[e^{t \thinspace X}]=E_Y[E(e^{t \thinspace X} \lvert Y)]=E_Y[M_{X \lvert Y}(t)]$

## Example
We show that the negative binomial distribution is a mixture of a family of Poisson distributions with gamma mixing weights. We then derive the mean, variance and moment generating function of the negative binomial distribution.

The following is the conditional Poisson probability function:

$\displaystyle P_{N \lvert \Lambda}(n \lvert \lambda)=\frac{\lambda^n \thinspace e^{-\lambda}}{n!}$ where $n=0,1,2,3,\cdots$

The Poisson parameter $\lambda$ is uncertain and follows a gamma distribution with parameters $\alpha$ and $\beta$ with the following density function:

$\displaystyle f_{\Lambda}(\lambda)=\frac{\beta^\alpha}{\Gamma(\alpha)} \thinspace \lambda^{\alpha-1} \thinspace e^{-\beta \lambda}$

The unconditional probability function of $N$ is:

$\displaystyle P_N(n)=\int_0^{\infty} P_{N \lvert \Lambda}(n \lvert \lambda) \thinspace f_\Lambda(\lambda) \thinspace d \lambda$

$\displaystyle =\int_0^{\infty} \frac{\lambda^n \thinspace e^{-\lambda}}{n!} \thinspace \frac{\beta^\alpha}{\Gamma(\alpha)} \thinspace \lambda^{\alpha-1} \thinspace e^{-\beta \lambda} \thinspace d \lambda$

$\displaystyle =\frac{\beta^\alpha}{n! \Gamma(\alpha)} \int_0^{\infty} \lambda^{\alpha+n-1} \thinspace e^{-(\beta+1) \lambda} \thinspace d \lambda$

$\displaystyle =\frac{\beta^\alpha}{n! \Gamma(\alpha)} \thinspace \frac{\Gamma(\alpha+n)}{(\beta+1)^{\alpha+n}} \int_0^{\infty} \frac{(\beta+1)^{\alpha+n}}{\Gamma(\alpha+n)} \lambda^{\alpha+n-1} \thinspace e^{-(\beta+1) \lambda} \thinspace d \lambda$

$\displaystyle =\frac{\beta^\alpha}{n! \Gamma(\alpha)} \thinspace \frac{\Gamma(\alpha+n)}{(\beta+1)^{\alpha+n}}$

$\displaystyle =\frac{\Gamma(\alpha+n)}{\Gamma(\alpha) \Gamma(n+1)} \thinspace \biggl(\frac{\beta}{\beta+1}\biggr)^{\alpha} \thinspace \biggl(1-\frac{\beta}{\beta+1}\biggr)^n$

Note that the above unconditional probability function is that of a negative binomial distribution with parameters $\alpha$ and $p=\frac{\beta}{\beta+1}$. Recall that $\displaystyle E[\Lambda]=\frac{\alpha}{\beta}$ and $\displaystyle Var[\Lambda]=\frac{\alpha}{\beta^2}$. With $p=\frac{\beta}{\beta+1}$, we now compute the unconditional mean $E[N]$, the total variance $Var[N]$ and the moment generating function $M_N(t)$.

$\displaystyle E[N]=E[E(N \lvert \Lambda)]=E[\Lambda]=\frac{\alpha}{\beta}=\frac{\alpha \thinspace (1-p)}{p}$

$\displaystyle Var[N]=E[Var(N \lvert \Lambda)]+Var[E(N \lvert \Lambda)]$

$\displaystyle =E[\Lambda]+Var[\Lambda]=\frac{\alpha}{\beta}+\frac{\alpha}{\beta^2}=\frac{\alpha \thinspace (\beta+1)}{\beta^2}=\frac{\alpha \thinspace (1-p)}{p^2}$

To derive the moment generating function, note that the conditional Poisson mgf is $\displaystyle M_{N \lvert \Lambda=\lambda}(t)=e^{\lambda \thinspace (e^t - 1)}$.

$\displaystyle M_N(t)=\int_0^{\infty} M_{N \lvert \Lambda=\lambda}(t) \thinspace f_{\Lambda}(\lambda) \thinspace d \lambda$

$\displaystyle =\int_0^{\infty} e^{\lambda \thinspace (e^t - 1)} \thinspace \frac{\beta^\alpha}{\Gamma(\alpha)} \thinspace \lambda^{\alpha-1} \thinspace e^{-\beta \lambda} \thinspace d \lambda$

$\displaystyle =\frac{\beta^\alpha}{(\beta+1-e^t)^\alpha} \thinspace \int_0^{\infty} \frac{(\beta+1-e^t)^\alpha}{\Gamma(\alpha)} \thinspace \lambda^{\alpha-1} \thinspace e^{-(\beta+1-e^t) \thinspace \lambda} \thinspace d \lambda$

$\displaystyle =\frac{\beta^\alpha}{(\beta+1-e^t)^\alpha}=\biggl(\frac{\beta}{\beta+1-e^t}\biggr)^\alpha=\biggl(\frac{p}{1-(1-p) \thinspace e^t}\biggr)^\alpha$

The above holds for $e^t < \beta+1$, i.e. $t < \ln(\beta+1)$, so that the gamma integral converges.
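As a final sanity check (again with the illustrative values $\alpha=2$, $\beta=3$), the closed-form mgf can be compared against the series $\sum_n e^{tn} P_N(n)$ evaluated directly from the probability function:

```python
import math

alpha, beta = 2.0, 3.0  # illustrative parameters
p = beta / (beta + 1)

def nb_pmf(n):
    """Negative binomial pmf derived above (via log-gammas)."""
    return math.exp(math.lgamma(alpha + n) - math.lgamma(alpha)
                    - math.lgamma(n + 1)
                    + alpha * math.log(p) + n * math.log(1 - p))

def mgf_closed(t):
    """(p / (1 - (1-p) e^t))^alpha, valid while (1-p) e^t < 1."""
    return (p / (1.0 - (1.0 - p) * math.exp(t))) ** alpha

def mgf_series(t, terms=500):
    """Direct evaluation of E[e^{tN}] = sum over n of e^{tn} P_N(n)."""
    return sum(math.exp(t * n) * nb_pmf(n) for n in range(terms))

t = 0.5  # valid: (1 - p) * e^0.5 ≈ 0.412 < 1
print(mgf_closed(t), mgf_series(t))  # the two values agree
```

Truncating the series at 500 terms is safe here because the terms decay geometrically at rate $(1-p)e^t \approx 0.412$.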

## Comment
In the above example, note that $E[N]<Var[N]$. This stands in contrast with the Poisson distribution ($E[X]=Var[X]$) and with the binomial distribution ($Var[X]<E[X]$). In the above example, there is uncertainty in the risk parameter of the conditional Poisson distribution. The additional uncertainty causes the unconditional variance to increase beyond the unconditional mean.