Introduction to Buhlmann credibility

In this post, we continue our discussion of credibility theory. Suppose that for a particular insured (either an individual entity or a group of insureds), we have observed data X_1,X_2, \cdots, X_n (the numbers of claims or loss amounts). We are interested in setting a rate to cover the claim experience X_{n+1} from the next period. In two previous posts (Examples of Bayesian prediction in insurance, Examples of Bayesian prediction in insurance-continued), we discussed this estimation problem from a Bayesian perspective and presented two examples. In this post, we discuss the Buhlmann credibility model and work the same two examples using the Buhlmann method.

First, let’s further describe the setting of the problem. For a particular insured, the experience data corresponding to various exposure periods are assumed to be independent. Statistically speaking, conditional on a risk parameter \Theta, the claim numbers or loss amounts X_1, \cdots, X_n,X_{n+1} are independent and identically distributed. Furthermore, the distribution of the risk characteristics in the population of insureds and potential insureds is represented by \pi_{\Theta}(\theta). The experience (either claim numbers or loss amounts) of a particular insured with risk parameter \Theta=\theta is modeled by the conditional distribution f_{X \lvert \Theta}(x \lvert \theta) given \Theta=\theta.

The Buhlmann Credibility Estimator
Given the observations X_1, \cdots, X_n in the prior exposure periods, the Buhlmann credibility estimate C of the claim experience X_{n+1} is

\displaystyle C=Z \overline{X}+(1-Z)\mu

where Z is the credibility factor assigned to the observed experience data and \mu is the unconditional mean E[X] (the mean taken over all values of the risk parameter \Theta). The credibility factor Z is of the form \displaystyle Z=\frac{n}{n+K} where n is a measure of the exposure size (it is the number of observation periods in our examples) and \displaystyle K=\frac{E[Var[X \lvert \Theta]]}{Var[E[X \lvert \Theta]]}. The parameter K will be further explained below.

The Buhlmann credibility estimator C is a linear function of the past data. Note that it is of the form:

\displaystyle C=Z \overline{X}+(1-Z)\mu=w_0+\sum \limits_{i=1}^{n} w_i X_i

where w_0=(1-Z)\mu and \displaystyle w_i=\frac{Z}{n} for i=1, \cdots, n.

Not only is the Buhlmann credibility estimator a linear estimator, it is also the best linear approximation to the Bayesian predictive mean E[X_{n+1} \lvert X_1, \cdots, X_n] and to the hypothetical mean E[X_{n+1} \lvert \Theta] in terms of minimizing squared error loss. In other words, the coefficients w_i are obtained in such a way that the following expectations (loss functions) are minimized, where the expectations are taken over all observations and/or \Theta (see [1]):

\displaystyle L_1=E\biggl( \biggl[E[X_{n+1} \lvert \Theta]-w_0-\sum \limits_{i=1}^{n} w_i X_i \biggr]^2 \biggr)

\displaystyle L_2=E\biggl( \biggl[E[X_{n+1} \lvert X_1, \cdots, X_n]-w_0-\sum \limits_{i=1}^{n} w_i X_i \biggr]^2 \biggr)

The Buhlmann Method
As discussed above, the Buhlmann credibility factor Z=\frac{n}{n+K} is chosen such that C=Z \overline{X}+(1-Z) \mu is the best linear approximation to the Bayesian estimate of the next period’s claim experience. Now we focus on the calculation of the parameter K.

Conditional on the risk parameter \Theta, E[X \lvert \Theta] is called the hypothetical mean and Var[X \lvert \Theta] is called the process variance. Then \mu=E[X]=E[E[X \lvert \Theta]] is the expected value of hypothetical means (the unconditional mean). The total variance of this random process is:

\displaystyle Var[X]=E[Var[X \lvert \Theta]]+Var[E[X \lvert \Theta]]

The first part of the total variance E[Var[X \lvert \Theta]] is called the expected value of process variance (EPV) and the second part Var[E[X \lvert \Theta]] is called the variance of the hypothetical means (VHM). The parameter K in the Buhlmann method is simply the ratio K=\frac{EPV}{VHM}.
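
To make the calculation concrete, here is a minimal sketch in Python of the Buhlmann computation, assuming that EPV, VHM, the unconditional mean \mu, the sample mean \overline{X} and the number of periods n are already known. The function name buhlmann_estimate is ours, purely for illustration.

def buhlmann_estimate(epv, vhm, mu, xbar, n):
    k = epv / vhm                    # K = EPV / VHM
    z = n / (n + k)                  # credibility factor Z = n / (n + K)
    return z * xbar + (1 - z) * mu   # C = Z * xbar + (1 - Z) * mu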

We can get an intuitive feel for this formula by considering the variability of the hypothetical means E[X \lvert \Theta] across the values of the risk parameter \Theta. If the entire population of insureds (and potential insureds) is fairly homogeneous with respect to the risk parameter \Theta, then the hypothetical means do not vary a great deal and VHM=Var[E[X \lvert \Theta]] is relatively small in relation to EPV=E[Var[X \lvert \Theta]]. As a result, K is large and Z is closer to 0. This agrees with the notion that in a homogeneous population, the unconditional mean (the overall mean) is of more value as a predictor of the next period’s claim experience. On the other hand, if the population of insureds is heterogeneous with respect to the risk parameter \Theta, then the overall mean is of less value as a predictor of future experience and we should rely more on the experience of the particular insured. Again, the Buhlmann formula agrees with this notion. If VHM=Var[E[X \lvert \Theta]] is large relative to EPV=E[Var[X \lvert \Theta]], then K is small and Z is closer to 1.

Another attractive feature of the Buhlmann formula is that as more experience data accumulate (as n \rightarrow \infty), the credibility factor Z approaches 1 (the experience data become more and more credible).

Example 1
In this random experiment, there are a big bowl (called B) and two boxes (Box 1 and Box 2). Bowl B contains a large quantity of balls, 80% of which are white and 20% of which are red. In Box 1, 60% of the balls are labeled 0, 30% are labeled 1 and 10% are labeled 2. In Box 2, 15% of the balls are labeled 0, 35% are labeled 1 and 50% are labeled 2. In the experiment, a ball is selected at random from bowl B. The color of the selected ball from bowl B determines which box to use (if the ball is white, then use Box 1; if red, use Box 2). Then balls are drawn at random from the selected box (Box i) repeatedly with replacement and the values of the selected balls are recorded. The value of the first selected ball is X_1, the value of the second selected ball is X_2 and so on.

Suppose that your friend performs this random experiment (you do not know whether he uses Box 1 or Box 2) and that his first selected ball is a 1 (X_1=1) and his second selected ball is a 2 (X_2=2). What is the predicted value of the third selected ball X_3?

This example was solved in the previous post (Examples of Bayesian prediction in insurance) using the Bayesian approach. We now work this example using the Buhlmann approach.

The following restates the prior distribution of \Theta and the conditional distribution of X \lvert \Theta. We denote “white ball from bowl B” by \Theta=1 and “red ball from bowl B” by \Theta=2.

\pi_{\Theta}(1)=0.8
\pi_{\Theta}(2)=0.2

\displaystyle f_{X \lvert \Theta}(0 \lvert \Theta=1)=0.60
\displaystyle f_{X \lvert \Theta}(1 \lvert \Theta=1)=0.30
\displaystyle f_{X \lvert \Theta}(2 \lvert \Theta=1)=0.10

\displaystyle f_{X \lvert \Theta}(0 \lvert \Theta=2)=0.15
\displaystyle f_{X \lvert \Theta}(1 \lvert \Theta=2)=0.35
\displaystyle f_{X \lvert \Theta}(2 \lvert \Theta=2)=0.50

The following computes the conditional means (hypothetical means) and conditional variances (process variances) and the other parameters of the Buhlmann method.

Hypothetical Means
\displaystyle E[X \lvert \Theta=1]=0.60(0)+0.30(1)+0.10(2)=0.50
\displaystyle E[X \lvert \Theta=2]=0.15(0)+0.35(1)+0.50(2)=1.35

\displaystyle E[X^2 \lvert \Theta=1]=0.60(0)+0.30(1)+0.10(4)=0.70
\displaystyle E[X^2 \lvert \Theta=2]=0.15(0)+0.35(1)+0.50(4)=2.35

Process Variances
\displaystyle Var[X \lvert \Theta=1]=0.70-0.50^2=0.45
\displaystyle Var[X \lvert \Theta=2]=2.35-1.35^2=0.5275

Expected Value of the Hypothetical Means
\displaystyle \mu=E[X]=E[E[X \lvert \Theta]]=0.80(0.50)+0.20(1.35)=0.67

Expected Value of the Process Variance
\displaystyle EPV=E[Var[X \lvert \Theta]]=0.8(0.45)+0.20(0.5275)=0.4655

Variance of the Hypothetical Means
\displaystyle VHM=Var[E[X \lvert \Theta]]=0.80(0.50)^2+0.20(1.35)^2-0.67^2=0.1156

Buhlmann Credibility Factor
\displaystyle K=\frac{EPV}{VHM}=\frac{0.4655}{0.1156}=\frac{4655}{1156} \approx 4.0268

\displaystyle Z=\frac{2}{2+\frac{4655}{1156}}=\frac{2312}{6967}=0.33185

Buhlmann Credibility Estimate
\displaystyle C=\frac{2312}{6967} \frac{3}{2}+\frac{4655}{6967} (0.67)=\frac{6586.85}{6967}=0.9454356

Note that the Bayesian estimate obtained in Examples of Bayesian prediction in insurance is 1.004237288. Under the Buhlmann model, the past claim experience of the insured in this example is assigned 33% weight in projecting the claim frequency in the next period.
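
As a check on the arithmetic above, the following short Python sketch reproduces the Buhlmann figures for Example 1. The dictionaries simply encode the prior on \Theta and the two conditional distributions.

prior = {1: 0.8, 2: 0.2}
cond = {1: {0: 0.60, 1: 0.30, 2: 0.10},
        2: {0: 0.15, 1: 0.35, 2: 0.50}}

# hypothetical means and process variances for each value of the risk parameter
hyp_mean = {t: sum(x * p for x, p in cond[t].items()) for t in prior}
proc_var = {t: sum(x**2 * p for x, p in cond[t].items()) - hyp_mean[t]**2 for t in prior}

mu  = sum(prior[t] * hyp_mean[t] for t in prior)             # 0.67
epv = sum(prior[t] * proc_var[t] for t in prior)             # 0.4655
vhm = sum(prior[t] * hyp_mean[t]**2 for t in prior) - mu**2  # 0.1156

z = 2 / (2 + epv / vhm)                                      # 0.33185...
c = z * (3 / 2) + (1 - z) * mu                               # 0.9454356...
print(z, c)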

Example 2
The number of claims X generated by an insured in a portfolio of independent insurance policies has a Poisson distribution with parameter \Theta. In the portfolio of policies, the parameter \Theta varies according to a gamma distribution with parameters \alpha and \beta. We have the following conditional distribution of X and distribution of the risk parameter \Theta.

\displaystyle f_{X \lvert \Theta}(x \lvert \theta)=\frac{\theta^x e^{-\theta}}{x!} where x=0,1,2, \cdots

\displaystyle \pi_{\Theta}(\theta)=\frac{\beta^{\alpha}}{\Gamma(\alpha)} \theta^{\alpha-1} e^{-\beta \theta} where \Gamma(\cdot) is the gamma function.

Suppose that a particular insured in this portfolio has generated 0 and 3 claims in the first 2 policy periods. What is the Buhlmann estimate of the number of claims for this insured in period 3?

Since the conditional distribution of X is Poisson, we have E[X \lvert \Theta]=\Theta and Var[X \lvert \Theta]=\Theta. As a result, the EPV, VHM and K are:

\displaystyle EPV=E[\Theta]=\frac{\alpha}{\beta}

\displaystyle VHM=Var[\Theta]=\frac{\alpha}{\beta^2}

\displaystyle K=\frac{EPV}{VHM}=\beta

As a result, the credibility factor based on 2 periods of experience is Z=\frac{2}{2+\beta} and the Buhlmann estimate of the claim frequency in the next period is:

\displaystyle C=\frac{2}{2+\beta} \thinspace \biggl(\frac{3}{2}\biggr)+\frac{\beta}{2+\beta} \thinspace \biggl(\frac{\alpha}{\beta}\biggr)

To generalize the above results, suppose that we have observed X_1=x_1, \cdots, X_n=x_n for this insured in the prior periods. Then the Buhlmann estimate for the claim frequency in the next period is:

\displaystyle C=\frac{n}{n+\beta} \thinspace \biggl(\frac{\sum \limits_{i=1}^{n}x_i}{n}\biggr)+\frac{\beta}{n+\beta} \thinspace \biggl(\frac{\alpha}{\beta}\biggr)

In this example, the Buhlmann estimate is exactly the same as the Bayesian estimate (Examples of Bayesian prediction in insurance-continued).
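
Here is a small Python sketch of this calculation. The values alpha = 2 and beta = 1 are hypothetical, chosen only to illustrate that the Buhlmann estimate agrees with the closed form (\alpha+\sum x_i)/(\beta+n).

def buhlmann_poisson_gamma(alpha, beta, claims):
    n = len(claims)
    z = n / (n + beta)                  # K = EPV / VHM = beta for the Poisson-gamma model
    xbar = sum(claims) / n
    return z * xbar + (1 - z) * (alpha / beta)

alpha, beta = 2.0, 1.0                  # hypothetical parameter values
claims = [0, 3]                         # observed claim counts in the first 2 periods
print(buhlmann_poisson_gamma(alpha, beta, claims))
print((alpha + sum(claims)) / (beta + len(claims)))   # same value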

Reference

  1. Klugman S. A., Panjer H. H., Willmot G. E., Loss Models: From Data to Decisions, Second Edition, John Wiley & Sons, 2004.

Examples of Bayesian prediction in insurance-continued

This post is a continuation of the previous post Examples of Bayesian prediction in insurance. We present another example as an illustration of the methodology of Bayesian estimation. The example in this post, along with the example in the previous post, serves to motivate the concepts of Bayesian credibility and Buhlmann credibility theory. So these two posts are part of an introduction to credibility theory.

Suppose X_1, \cdots, X_n,X_{n+1} are independent and identically distributed conditional on \Theta=\theta. We denote the density function of the common distribution of X_j by f_{X \lvert \Theta}(x \lvert \theta). We denote the prior distribution of the risk parameter \Theta by \pi_{\Theta}(\theta). The following shows the steps of the Bayesian estimate of the next observation X_{n+1} given X_1, \cdots, X_n.

The Marginal Distribution
\displaystyle f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)=\int \limits_{\theta} \biggl[\prod \limits_{i=1}^{n} f_{X \lvert \Theta}(x_i \lvert \theta)\biggr] \pi_{\Theta}(\theta) d \theta

The Posterior Distribution
\displaystyle \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n)

\displaystyle =\ \ \ \ \ \ \ \ \ \ \frac{1}{f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)} \biggl[\prod \limits_{i=1}^{n} f_{X \lvert \Theta}(x_i \lvert \theta)\biggr] \pi_{\Theta}(\theta)

The Predictive Distribution
\displaystyle f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n)

\displaystyle =\ \ \ \ \ \ \ \ \ \ \int \limits_{\theta} \biggl(f_{X \lvert \Theta}(x \lvert \theta)\biggr) \thinspace \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n) d \theta

The Bayesian Predictive Mean of the Next Period
\displaystyle E[X_{n+1} \lvert X_1=x_1, \cdots, X_n=x_n]

\displaystyle =\ \ \ \ \ \ \ \ \ \ \int \limits_{x} x \thinspace f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n) dx

\displaystyle E[X_{n+1} \lvert X_1=x_1, \cdots, X_n=x_n]

\displaystyle =\ \ \ \ \ \ \ \ \ \ \int \limits_{\theta} E[X \lvert \theta] \thinspace \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n) d \theta

Example 2
The number of claims X generated by an insured in a portfolio of independent insurance policies has a Poisson distribution with parameter \Theta. In the portfolio of policies, the parameter \Theta varies according to a gamma distribution with parameters \alpha and \beta. We have the following conditional distribution of X and prior distribution of \Theta.

\displaystyle f_{X \lvert \Theta}(x \lvert \theta)=\frac{\theta^x e^{-\theta}}{x!} where x=0,1,2, \cdots

\displaystyle \pi_{\Theta}(\theta)=\frac{\beta^{\alpha}}{\Gamma(\alpha)} \theta^{\alpha-1} e^{-\beta \theta} where \Gamma(\cdot) is the gamma function.

Suppose that a particular insured in this portfolio has generated 0 and 3 claims in the first 2 policy periods. What is the Bayesian estimate of the number of claims for this insured in period 3?

Note that the conditional mean E[X \lvert \Theta]=\Theta. Thus the unconditional mean E[X]=E[\Theta]=\frac{\alpha}{\beta}.

Comment
Note that the unconditional distribution of X is a negative binomial distribution. In a previous post (Compound negative binomial distribution), it was shown that if N \sim Poisson(\Lambda) and \Lambda \sim Gamma(\alpha,\beta), then the unconditional distribution of N has the following probability function. We make use of this result in the Bayesian estimation problem in this post.

\displaystyle P[N=n]=\frac{\Gamma(\alpha+n)}{\Gamma(\alpha) \thinspace n!} \biggl[\frac{\beta}{\beta+1}\biggr]^{\alpha} \biggl[\frac{1}{\beta+1}\biggr]^n where n=0,1,2, \cdots
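
For convenience, here is a quick sketch of the mixture calculation behind this probability function (restated here; the full derivation is in the referenced post). It uses the gamma integral \int_0^{\infty} \theta^{a-1} e^{-b \theta} \thinspace d \theta=\frac{\Gamma(a)}{b^a}.

\displaystyle P[N=n]=\int_0^{\infty} \frac{\theta^n e^{-\theta}}{n!} \thinspace \frac{\beta^{\alpha}}{\Gamma(\alpha)} \theta^{\alpha-1} e^{-\beta \theta} \thinspace d \theta=\frac{\beta^{\alpha}}{n! \thinspace \Gamma(\alpha)} \thinspace \frac{\Gamma(\alpha+n)}{(\beta+1)^{\alpha+n}}=\frac{\Gamma(\alpha+n)}{\Gamma(\alpha) \thinspace n!} \biggl[\frac{\beta}{\beta+1}\biggr]^{\alpha} \biggl[\frac{1}{\beta+1}\biggr]^n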

The Marginal Distribution
\displaystyle f_{X_1,X_2}(0,3)=\int_{0}^{\infty} e^{-\theta} \frac{\theta^3 e^{-\theta}}{3!} \frac{\beta^{\alpha}}{\Gamma(\alpha)} \theta^{\alpha-1} e^{-\beta \theta} d \theta

\displaystyle =\int_{0}^{\infty} \frac{\beta^{\alpha}}{3! \thinspace \Gamma(\alpha)} \theta^{\alpha+3-1} e^{-(\beta+2) \theta} d \theta=\frac{\Gamma(\alpha+3)}{6 \thinspace \Gamma(\alpha)} \frac{\beta^{\alpha}}{(\beta+2)^{\alpha+3}}

The Posterior Distribution
\displaystyle \pi_{\Theta \lvert X_1,X_2}(\theta \lvert 0,3)=\frac{1}{f_{X_1,X_2}(0,3)} e^{-\theta} \frac{\theta^3 e^{-\theta}}{3!} \frac{\beta^{\alpha}}{\Gamma(\alpha)} \theta^{\alpha-1} e^{-\beta \theta}

\displaystyle =K \thinspace \theta^{\alpha+3-1} e^{-(\beta+2) \theta}

In the above expression K is a constant making \pi_{\Theta \lvert X_1,X_2}(\theta \lvert 0,3) a density function. Note that it has the form of a gamma distribution. Thus the posterior distribution must be:

\displaystyle \pi_{\Theta \lvert X_1,X_2}(\theta \lvert 0,3)=\frac{(\beta+2)^{\alpha+3}}{\Gamma(\alpha+3)} \thinspace \theta^{\alpha+3-1} e^{-(\beta+2) \theta}

Thus the posterior distribution of \Theta is a gamma distribution with parameters \alpha+3 and \beta+2.

The Predictive Distribution
Note that the predictive distribution is simply the mixture of Poisson(\Theta) with Gamma(\alpha+3,\beta+2) as the mixing distribution. By the comment above, the predictive distribution is a negative binomial distribution with the following probability function:

\displaystyle f_{X_3 \lvert X_1,X_2}(x \lvert 0,3)=\frac{\Gamma(\alpha+3+x)}{\Gamma(\alpha+3) \thinspace x!} \biggl[\frac{\beta+2}{\beta+3}\biggr]^{\alpha+3} \biggl[\frac{1}{\beta+3}\biggr]^{x} where x=0,1,2, \cdots

The Bayesian Predictive Mean
\displaystyle E[X_3 \lvert 0,3]=\frac{\alpha+3}{\beta+2}=\frac{2}{\beta+2} \biggl(\frac{3}{2}\biggr)+\frac{\beta}{\beta+2} \biggl(\frac{\alpha}{\beta}\biggr) \ \ \ \ \ \ \ \ \ \ (1)

Note that E[X \lvert \Theta]=\Theta. Thus the Bayesian predictive mean in this example is simply the mean of the posterior distribution of \Theta, which is E[\Theta \vert 0,3]=\frac{\alpha+3}{\beta+2}.
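
As a numerical sanity check, the following Python sketch simulates the model and conditions on the observed claim counts (0, 3). The parameter values alpha = 2 and beta = 1 are hypothetical, chosen only for illustration.

import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 2.0, 1.0                  # hypothetical values for illustration
sims = 1_000_000

# Draw risk parameters from the gamma prior (rate beta, i.e. mean alpha/beta),
# then three periods of Poisson claim counts for each simulated insured.
theta = rng.gamma(shape=alpha, scale=1.0 / beta, size=sims)
x1, x2, x3 = rng.poisson(theta), rng.poisson(theta), rng.poisson(theta)

# Keep only the insureds whose first two periods match the observed data (0, 3).
match = (x1 == 0) & (x2 == 3)
print(x3[match].mean())                 # approximately (alpha + 3) / (beta + 2)
print((alpha + 3) / (beta + 2))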

Comment
Generalizing the example, suppose that in the first n periods, the claim counts for the insured are X_1=x_1, \cdots, X_n=x_n. Then the posterior distribution of the parameter \Theta is a gamma distribution.

\biggl[\Theta \lvert X_1=x_1, \cdots, X_n=x_n\biggr] \sim Gamma(\alpha+\sum_{i=1}^{n} x_i,\beta+n)

Then the predictive distribution of X_{n+1} given the observations is a negative binomial distribution. More importantly, the Bayesian predictive mean is:

\displaystyle E[X_{n+1} \lvert X_1=x_1, \cdots, X_n=x_n]=\frac{\alpha+\sum_{i=1}^{n} x_i}{\beta+n}

\displaystyle =\frac{n}{\beta+n} \biggl(\frac{\sum \limits_{i=1}^{n} x_i}{n}\biggr)+\frac{\beta}{\beta+n} \biggl(\frac{\alpha}{\beta}\biggr)\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)

It is interesting that the Bayesian predictive mean of the (n+1)^{st} period is a weighted average of the mean of the observed data (\overline{X}) and the unconditional mean E[X]=\frac{\alpha}{\beta}. Consequently, the above Bayesian estimate is a credibility estimate. The weight Z=\frac{n}{\beta+n} given to the observed data is called the credibility factor. The estimate and the factor are called the Bayesian credibility estimate and the Bayesian credibility factor, respectively.

In general, the credibility estimate is an estimator of the following form:

\displaystyle E=Z \thinspace \overline{X}+ (1-Z) \thinspace \mu_0

where \overline{X} is the mean of the observed data and \mu_0 is the mean based on other information. In our example here, \mu_0 is the unconditional mean. In practice, \mu_0 could be the mean based on the entire book of business, or a mean based on a different block of similar insurance policies. Another interpretation is that \overline{X} is the mean of the recent experience data and \mu_0 is the mean of prior periods.

One more comment about the credibility factor Z=\frac{n}{\beta+n} derived in this example. As n \rightarrow \infty, Z \rightarrow 1. This makes intuitive sense since this gives more weight to \overline{X} as more and more data are accumulated.

Examples of Bayesian prediction in insurance

We present two examples to illustrate the notion of Bayesian predictive distributions. The general insurance problem we aim to illustrate is that of using past claim experience data from an individual insured or a group of insureds to predict the future claim experience. Suppose we have X_1,X_2, \cdots, X_n with each X_i being the number of claims or an aggregate amount of claims in a prior period of observation. Given such results, what will be the number of claims during the next period, or what will be the aggregate claim amount in the next period? These two examples will motivate the notion of credibility, both Bayesian credibility theory and Buhlmann credibility theory. We present Example 1 in this post. Example 2 is presented in the next post (Examples of Bayesian prediction in insurance-continued).

Example 1
In this random experiment, there are a big bowl (called B) and two boxes (Box 1 and Box 2). Bowl B contains a large quantity of balls, 80% of which are white and 20% of which are red. In Box 1, 60% of the balls are labeled 0, 30% are labeled 1 and 10% are labeled 2. In Box 2, 15% of the balls are labeled 0, 35% are labeled 1 and 50% are labeled 2. In the experiment, a ball is selected at random from bowl B. The color of the selected ball from bowl B determines which box to use (if the ball is white, then use Box 1; if red, use Box 2). Then balls are drawn at random from the selected box (Box i) repeatedly with replacement and the values of the selected balls are recorded. The value of the first selected ball is X_1, the value of the second selected ball is X_2 and so on.

Suppose that your friend performs this random experiment (you do not know whether he uses Box 1 or Box 2) and that his first ball is a 1 (X_1=1) and his second ball is a 2 (X_2=2). What is the predicted value of the third selected ball X_3?

Though it is straightforward to apply Bayes’ theorem to this problem (the solution can be seen easily using a tree diagram) to obtain a numerical answer, we use this example to draw out the principle of Bayesian prediction. So it may appear that we are making a simple problem overly complicated. We are merely using this example to motivate the method of Bayesian estimation.

For convenience, we denote “draw a white ball from bowl B” by \theta=1 and “draw a red ball from bowl B” by \theta=2. Box 1 and Box 2 represent the conditional distributions of X. Bowl B represents the distribution of the parameter \theta, a probability distribution over the space of all parameter values (called a prior distribution). The prior distribution of \theta and the conditional distributions of X given \theta are restated as follows:

\pi_{\theta}(1)=0.8
\pi_{\theta}(2)=0.2

\displaystyle f_{X \lvert \Theta}(0 \lvert \theta=1)=0.60
\displaystyle f_{X \lvert \Theta}(1 \lvert \theta=1)=0.30
\displaystyle f_{X \lvert \Theta}(2 \lvert \theta=1)=0.10

\displaystyle f_{X \lvert \Theta}(0 \lvert \theta=2)=0.15
\displaystyle f_{X \lvert \Theta}(1 \lvert \theta=2)=0.35
\displaystyle f_{X \lvert \Theta}(2 \lvert \theta=2)=0.50

The following shows the conditional means E[X \lvert \theta] and the unconditional mean E[X].

\displaystyle E[X \lvert \theta=1]=0.6(0)+0.3(1)+0.1(2)=0.50
\displaystyle E[X \lvert \theta=2]=0.15(0)+0.35(1)+0.5(2)=1.35
\displaystyle E[X]=0.8(0.50)+0.2(1.35)=0.67

If you know which particular box your friend is using (\theta=1 or \theta=2), then the estimate of the next ball should be E[X \lvert \theta]. But the value of \theta is unknown to you. Another alternative for a predicted value is the unconditional mean E[X]=0.67. While the estimate E[X]=0.67 is easy to calculate, it does not take the observed data (X_1=1 and X_2=2) into account and it certainly does not take the parameter \theta into account. A third alternative is to incorporate the observed data into the estimate of the next ball. We now continue with the calculation of the Bayesian estimate.

Unconditional Distribution
\displaystyle f_X(0)=0.6(0.8)+0.15(0.2)=0.51
\displaystyle f_X(1)=0.3(0.8)+0.35(0.2)=0.31
\displaystyle f_X(2)=0.1(0.8)+0.50(0.2)=0.18

Marginal Probability
\displaystyle f_{X_1,X_2}(1,2)=0.1(0.3)(0.8)+0.5(0.35)(0.2)=0.059

Posterior Distribution of \theta
\displaystyle \pi_{\Theta \lvert X_1,X_2}(1 \lvert 1,2)=\frac{0.1(0.3)(0.8)}{0.059}=\frac{24}{59}

\displaystyle \pi_{\Theta \lvert X_1,X_2}(2 \lvert 1,2)=\frac{0.5(0.35)(0.2)}{0.059}=\frac{35}{59}

Predictive Distribution of X
\displaystyle f_{X_3 \lvert X_1,X_2}(0 \lvert 1,2)=0.6 \frac{24}{59} + 0.15 \frac{35}{59}=\frac{19.65}{59}

\displaystyle f_{X_3 \lvert X_1,X_2}(1 \lvert 1,2)=0.3 \frac{24}{59} + 0.35 \frac{35}{59}=\frac{19.45}{59}

\displaystyle f_{X_3 \lvert X_1,X_2}(2 \lvert 1,2)=0.1 \frac{24}{59} + 0.50 \frac{35}{59}=\frac{19.90}{59}

Here is another formulation of the predictive distribution of X_3. See the general methodology section below.
\displaystyle f_{X_3 \lvert X_1,X_2}(0 \lvert 1,2)=\frac{0.6(0.1)(0.3)(0.8)+0.15(0.5)(0.35)(0.2)}{0.059}=\frac{19.65}{59}

\displaystyle f_{X_3 \lvert X_1,X_2}(1 \lvert 1,2)=\frac{0.3(0.1)(0.3)(0.8)+0.35(0.5)(0.35)(0.2)}{0.059}=\frac{19.45}{59}

\displaystyle f_{X_3 \lvert X_1,X_2}(2 \lvert 1,2)=\frac{0.1(0.1)(0.3)(0.8)+0.5(0.5)(0.35)(0.2)}{0.059}=\frac{19.90}{59}

The posterior distribution \pi_{\Theta \lvert X_1,X_2}(\cdot \lvert 1,2) is the conditional probability distribution of the parameter \theta given the observed data X_1=1 and X_2=2. This is a result of applying Bayes’ theorem. The predictive distribution f_{X_3 \lvert X_1,X_2}(\cdot \lvert 1,2) is the conditional probability distribution of a new observation given the past observed data X_1=1 and X_2=2. Since both of these distributions incorporate the past observations, the Bayesian estimate of the next observation is the mean of the predictive distribution.

\displaystyle E[X_3 \lvert X_1=1,X_2=2]

\displaystyle =0 \thinspace f_{X_3 \lvert X_1,X_2}(0 \lvert 1,2)+1 \thinspace f_{X_3 \lvert X_1,X_2}(1 \lvert 1,2)+2 \thinspace f_{X_3 \lvert X_1,X_2}(2 \lvert 1,2)

\displaystyle =0 \frac{19.65}{59}+1 \frac{19.45}{59}+ 2 \frac{19.90}{59}

\displaystyle =\frac{59.25}{59}=1.0042372

\displaystyle E[X_3 \lvert X_1=1,X_2=2]

\displaystyle =E[X \lvert \theta=1] \medspace \pi_{\Theta \lvert X_1,X_2}(1 \lvert 1,2)+E[X \lvert \theta=2] \medspace \pi_{\Theta \lvert X_1,X_2}(2 \lvert 1,2)

\displaystyle =0.5 \frac{24}{59}+1.35 \frac{35}{59}=\frac{59.25}{59}

Note that we compute the Bayesian estimate E[X_3 \vert X_1,X_2] in two ways, one using the predictive distribution and the other using the posterior distribution of the parameter \theta. The Bayesian estimate is the mean of the hypothetical means E[X \lvert \theta] with the expectation taken over the posterior distribution \pi_{\Theta \lvert X_1,X_2}(\cdot \lvert 1,2).
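
The following short Python sketch mirrors the predictive-distribution route above; it is only a numerical check of the figures already derived.

prior = {1: 0.8, 2: 0.2}
cond = {1: {0: 0.60, 1: 0.30, 2: 0.10},
        2: {0: 0.15, 1: 0.35, 2: 0.50}}

# posterior of theta given X_1 = 1 and X_2 = 2
joint = {t: prior[t] * cond[t][1] * cond[t][2] for t in prior}
marginal = sum(joint.values())                            # 0.059
posterior = {t: w / marginal for t, w in joint.items()}   # 24/59 and 35/59

# predictive pmf of X_3 and the Bayesian estimate (its mean)
predictive = {x: sum(cond[t][x] * posterior[t] for t in prior) for x in (0, 1, 2)}
print(predictive)                                         # 19.65/59, 19.45/59, 19.90/59
print(sum(x * p for x, p in predictive.items()))          # 59.25/59 = 1.00423...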

Discussion of General Methodology
We now use Example 1 to draw out the general methodology. We first describe the discrete case and then state the continuous case as a generalization.

Suppose we have a family of conditional density functions f_{X \lvert \Theta}(x \lvert \theta). In Example 1, bowl B gives the distribution of the parameter \theta, and Box 1 and Box 2 are the conditional distributions with density f_{X \lvert \Theta}(x \lvert \theta). In an insurance application, \theta is a risk parameter and the conditional distribution f_{X \lvert \Theta}(x \lvert \theta) models the claim experience in a given fixed period (conditional on \Theta=\theta).

Suppose that X_1,X_2, \cdots, X_n,X_{n+1} (conditional on \Theta=\theta) are independent and identically distributed where the common density function is f_{X \lvert \Theta}(x \lvert \theta). In Example 1, once a box is selected (e.g. Box 1), the repeated draws of the balls are independent and identically distributed. In an insurance application, the X_k are the claim experience from an insured (or a group of insureds) where the insured belongs to the risk class with parameter \theta.

We are interested in predicting the next observation X_{n+1}. In our example, X_{n+1} is the value of the ball in the (n+1)^{st} draw. In an insurance application, X_{n+1} may be the claim experience of an insured (or a group of insureds) in the next policy period. One option is to use the unconditional mean E[X]=E[E(X \lvert \Theta)] (the mean of the hypothetical means), but this approach does not take the risk parameter of the insured into account. On the other hand, if we knew the value of \theta, then we could use f_{X \lvert \Theta}(x \lvert \theta). But the risk parameter is usually unknown. The natural alternative is to condition on the observed experience in the n prior periods X_1, \cdots, X_n rather than conditioning on the risk parameter \theta. Thus we derive the predictive distribution of X_{n+1} given the observations X_1, \cdots, X_n. Given the observed experience data X_1=x_1,X_2=x_2, \cdots, X_n=x_n, the following is the derivation of the Bayesian predictive distribution. Note that the prior distribution of the parameter \theta is \pi_{\Theta}(\theta).

The Unconditional Distribution
\displaystyle f_X(x)=\sum \limits_{\theta} f_{X \lvert \Theta}(x \lvert \theta) \ \pi_{\Theta}(\theta)

The Marginal Distribution
\displaystyle f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)=\sum \limits_{\theta} \biggl[\prod \limits_{i=1}^{n} f_{X_i \lvert \Theta}(x_i \lvert \theta)\biggr] \pi_{\Theta}(\theta)

The Posterior Distribution
\displaystyle \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n)

\displaystyle = \ \ \ \ \ \ \ \ \ \ \frac{1}{f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)} \biggl[\prod \limits_{i=1}^{n} f_{X_i \lvert \Theta}(x_i \lvert \theta)\biggr] \pi_{\Theta}(\theta)

The Predictive Distribution
\displaystyle f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n)

\displaystyle =\ \ \ \ \ \ \ \ \ \ \sum \limits_{\theta} f_{X \lvert \Theta}(x \lvert \theta) \thinspace \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n)

Another formulation is:
\displaystyle f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n)

\displaystyle =\ \ \ \ \ \ \ \ \ \ \frac{1}{f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)} \sum \limits_{\theta} f_{X_{n+1} \lvert \Theta}(x \lvert \theta) \biggl[ \prod \limits_{j=1}^{n}f_{X_j \lvert \Theta}(x_j \lvert \theta)\biggr] \thinspace \pi_{\Theta}(\theta)

The Bayesian Predictive Mean of the Next Period
\displaystyle E[X_{n+1} \lvert X_1=x_1, \cdots, X_n=x_n]

\displaystyle =\ \ \ \ \ \ \ \ \ \ \sum \limits_{x} x \thinspace f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n)

\displaystyle E[X_{n+1} \lvert X_1=x_1, \cdots, X_n=x_n]

\displaystyle =\ \ \ \ \ \ \ \ \ \ \sum \limits_{\theta} E[X \lvert \theta] \thinspace \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n)
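
The discrete formulas above translate directly into a short, generic Python sketch. The names predictive_mean, prior, cond and obs are ours, purely for illustration; applied to the Example 1 data it reproduces 59.25/59.

from math import prod

def predictive_mean(prior, cond, obs):
    # pi(theta) * product of f(x_i | theta) over the observed data, for each theta
    joint = {t: prior[t] * prod(cond[t][x] for x in obs) for t in prior}
    marginal = sum(joint.values())                           # f(x_1, ..., x_n)
    posterior = {t: w / marginal for t, w in joint.items()}
    hyp_mean = {t: sum(x * p for x, p in cond[t].items()) for t in prior}
    # E[X_{n+1} | data] = sum over theta of E[X | theta] * posterior probability
    return sum(hyp_mean[t] * posterior[t] for t in prior)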

We state the same results for the case that the claim experience X is continuous.

The Unconditional Distribution
\displaystyle f_{X}(x) = \int_{\theta} f_{X \lvert \Theta} (x \lvert \theta) \ \pi_{\Theta}(\theta) \ d \theta

The Marginal Distribution
\displaystyle f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)=\int \limits_{\theta} \biggl[\prod \limits_{i=1}^{n} f_{X \lvert \Theta}(x_i \lvert \theta)\biggr] \pi_{\Theta}(\theta) d \theta

The Posterior Distribution
\displaystyle \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n)

\displaystyle =\ \ \ \ \ \ \ \ \ \ \frac{1}{f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)} \biggl[\prod \limits_{i=1}^{n} f_{X \lvert \Theta}(x_i \lvert \theta)\biggr] \pi_{\Theta}(\theta)

The Predictive Distribution
\displaystyle f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n)

\displaystyle =\ \ \ \ \ \ \ \ \ \ \int \limits_{\theta} f_{X \lvert \Theta}(x \lvert \theta) \thinspace \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n) \ d \theta

Another formulation is:
\displaystyle f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n)

\displaystyle =\ \ \ \ \ \ \ \ \ \ \frac{1}{f_{X_1, \cdots, X_n}(x_1, \cdots, x_n)} \int \limits_{\theta} f_{X_{n+1} \lvert \Theta}(x \lvert \theta) \biggl[ \prod \limits_{j=1}^{n}f_{X_j \lvert \Theta}(x_j \lvert \theta)\biggr] \thinspace \pi_{\Theta}(\theta) \ d \theta

The Bayesian Predictive Mean of the Next Period
\displaystyle E[X_{n+1} \lvert X_1=x_1, \cdots, X_n=x_n]

\displaystyle =\ \ \ \ \ \ \ \ \ \ \int \limits_{x} x \thinspace f_{X_{n+1} \lvert X_1, \cdots, X_n}(x \vert x_1, \cdots, x_n) dx

\displaystyle E[X_{n+1} \lvert X_1=x_1, \cdots, X_n=x_n]

\displaystyle =\ \ \ \ \ \ \ \ \ \ \int \limits_{\theta} E[X \lvert \theta] \thinspace \pi_{\Theta \lvert X_1, \cdots, X_n}(\theta \lvert x_1, \cdots, x_n) d \theta

See the next post (Examples of Bayesian prediction in insurance-continued) for Example 2.