This post is a continuation of the previous post Examples of Bayesian prediction in insurance. We present another example to illustrate the methodology of Bayesian estimation. The example in this post, along with the example in the previous post, serves to motivate the concept of Bayesian credibility and Bühlmann credibility theory. Thus these two posts are part of an introduction to credibility theory.

Suppose $X_1,X_2,\dots,X_n,X_{n+1}$ are independent and identically distributed conditional on $\Theta=\theta$. We denote the density function of the common distribution of $X_i$ by $f(x|\theta)$. We denote the prior distribution of the risk parameter $\Theta$ by $\pi(\theta)$. The following shows the steps of the Bayesian estimate of the next observation $X_{n+1}$ given $X_1=x_1,X_2=x_2,\dots,X_n=x_n$.

**The Marginal Distribution**

$$f(x_1,x_2,\dots,x_n)=\int \left[\prod_{i=1}^n f(x_i|\theta)\right]\,\pi(\theta)\,d\theta$$

**The Posterior Distribution**

$$\pi(\theta|x_1,x_2,\dots,x_n)=\frac{\left[\prod_{i=1}^n f(x_i|\theta)\right]\,\pi(\theta)}{f(x_1,x_2,\dots,x_n)}$$

**The Predictive Distribution**

$$f(x_{n+1}|x_1,x_2,\dots,x_n)=\int f(x_{n+1}|\theta)\,\pi(\theta|x_1,x_2,\dots,x_n)\,d\theta$$

**The Bayesian Predictive Mean of the Next Period**

$$E[X_{n+1}|x_1,x_2,\dots,x_n]=\int E[X_{n+1}|\theta]\,\pi(\theta|x_1,x_2,\dots,x_n)\,d\theta$$
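The four steps above can be sketched numerically. The following is a minimal sketch, assuming a Poisson conditional distribution and a gamma prior (the model of Example 2 below) and a discretized grid for $\theta$; all function names and the grid settings are illustrative choices, not part of the original post.

```python
import math

# A minimal numerical sketch of the four Bayesian steps, using a discretized
# grid for the risk parameter theta. The Poisson likelihood and gamma prior
# are assumptions made here so the sketch is concrete; the four steps
# themselves apply to any conditional model and prior.

def poisson_pf(x, theta):
    # conditional probability function f(x | theta)
    return math.exp(-theta) * theta ** x / math.factorial(x)

def gamma_pdf(theta, alpha, beta):
    # prior density pi(theta)
    return beta ** alpha / math.gamma(alpha) * theta ** (alpha - 1) * math.exp(-beta * theta)

def bayes_steps(data, alpha, beta, grid_max=50.0, n_grid=20000):
    h = grid_max / n_grid
    thetas = [(i + 0.5) * h for i in range(n_grid)]
    prior = [gamma_pdf(t, alpha, beta) for t in thetas]
    likelihood = [math.prod(poisson_pf(x, t) for x in data) for t in thetas]
    # Step 1: marginal distribution of the observed data
    marginal = h * sum(l * p for l, p in zip(likelihood, prior))
    # Step 2: posterior distribution of theta given the data
    posterior = [l * p / marginal for l, p in zip(likelihood, prior)]
    # Steps 3-4: predictive mean E[X_{n+1} | data]; since E[X | theta] = theta
    # here, the predictive mean is the posterior mean of theta
    predictive_mean = h * sum(t * q for t, q in zip(thetas, posterior))
    return marginal, predictive_mean
```

For the data $(0, 3)$ used in Example 2, `bayes_steps` reproduces the closed-form posterior mean $(\alpha+3)/(\beta+2)$ to within the accuracy of the grid.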

**Example 2**

The number of claims $X$ generated by an insured in a portfolio of independent insurance policies has a Poisson distribution with parameter $\theta$. In the portfolio of policies, the parameter $\theta$ varies according to a gamma distribution with parameters $\alpha$ and $\beta$. We have the following conditional distribution of $X$ and prior distribution of $\Theta$.

$$f(x|\theta)=\frac{e^{-\theta}\,\theta^x}{x!} \qquad x=0,1,2,\dots$$

$$\pi(\theta)=\frac{\beta^\alpha}{\Gamma(\alpha)}\,\theta^{\alpha-1}\,e^{-\beta\theta} \qquad \theta>0$$

where $\Gamma(\alpha)=\displaystyle\int_0^\infty t^{\alpha-1}\,e^{-t}\,dt$ is the gamma function.

Suppose that a particular insured in this portfolio has generated 0 and 3 claims in the first 2 policy periods. What is the Bayesian estimate of the number of claims for this insured in period 3?

Note that the conditional mean is $E[X|\Theta=\theta]=\theta$. Thus the unconditional mean is $E[X]=E[E[X|\Theta]]=E[\Theta]=\dfrac{\alpha}{\beta}$.

**Comment**

Note that the unconditional distribution of $X$ is a negative binomial distribution. In a previous post (Compound negative binomial distribution), it was shown that if $X|\Theta=\theta$ has a Poisson distribution with mean $\theta$ and $\Theta$ has a gamma distribution with parameters $\alpha$ and $\beta$, then the unconditional distribution of $X$ has the following probability function:

$$P(X=x)=\frac{\Gamma(\alpha+x)}{\Gamma(\alpha)\,x!}\left(\frac{\beta}{\beta+1}\right)^\alpha\left(\frac{1}{\beta+1}\right)^x \qquad x=0,1,2,\dots$$

We make use of this result in the Bayesian estimation problem in this post.
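The mixture result can be checked numerically. The sketch below (with illustrative parameter values, not from the post) compares the negative binomial probability function against a direct numerical integration of the Poisson probabilities over the gamma prior.

```python
import math

# Numerical check (illustrative, not from the original post) that the
# Poisson-gamma mixture has the stated negative binomial probability function.

def neg_binomial_pf(x, alpha, beta):
    return (math.gamma(alpha + x) / (math.gamma(alpha) * math.factorial(x))
            * (beta / (beta + 1.0)) ** alpha * (1.0 / (beta + 1.0)) ** x)

def mixture_pf(x, alpha, beta, grid_max=60.0, n_grid=30000):
    # integrate the Poisson probability of x against the gamma(alpha, beta) prior
    h = grid_max / n_grid
    total = 0.0
    for i in range(n_grid):
        t = (i + 0.5) * h
        poisson = math.exp(-t) * t ** x / math.factorial(x)
        prior = beta ** alpha / math.gamma(alpha) * t ** (alpha - 1) * math.exp(-beta * t)
        total += poisson * prior * h
    return total
```

Summing $x \cdot P(X=x)$ over the negative binomial probabilities also recovers the unconditional mean $\alpha/\beta$ noted above.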

**The Marginal Distribution**

$$f(0,3)=\int_0^\infty \frac{e^{-\theta}\,\theta^0}{0!}\cdot\frac{e^{-\theta}\,\theta^3}{3!}\cdot\frac{\beta^\alpha}{\Gamma(\alpha)}\,\theta^{\alpha-1}\,e^{-\beta\theta}\,d\theta=\frac{\Gamma(\alpha+3)}{\Gamma(\alpha)\,3!}\cdot\frac{\beta^\alpha}{(\beta+2)^{\alpha+3}}$$

**The Posterior Distribution**

$$\pi(\theta|0,3)=\frac{f(0|\theta)\,f(3|\theta)\,\pi(\theta)}{f(0,3)}=K\cdot\theta^{\alpha+3-1}\,e^{-(\beta+2)\theta}$$

In the above expression, $K$ is a constant making $\pi(\theta|0,3)$ a density function. Note that $\pi(\theta|0,3)$ has the form of a gamma density. Thus the posterior distribution must be:

$$\pi(\theta|0,3)=\frac{(\beta+2)^{\alpha+3}}{\Gamma(\alpha+3)}\,\theta^{\alpha+3-1}\,e^{-(\beta+2)\theta}$$

Thus the posterior distribution of $\Theta$ is a gamma distribution with parameters $\alpha+3$ and $\beta+2$.
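As a sanity check on the posterior, one can verify numerically that the unnormalized posterior $\theta^{\alpha+3-1}e^{-(\beta+2)\theta}$ integrates to the gamma normalizing constant $\Gamma(\alpha+3)/(\beta+2)^{\alpha+3}$. A quick sketch with illustrative parameter values:

```python
import math

# Sanity check (illustrative alpha and beta) that the unnormalized posterior
# theta^(alpha + 3 - 1) * exp(-(beta + 2) * theta) integrates to the gamma
# normalizing constant Gamma(alpha + 3) / (beta + 2)^(alpha + 3).

def posterior_normalizer(alpha, beta, grid_max=40.0, n_grid=20000):
    h = grid_max / n_grid
    total = 0.0
    for i in range(n_grid):
        t = (i + 0.5) * h
        total += t ** (alpha + 2) * math.exp(-(beta + 2.0) * t) * h
    return total

def closed_form_normalizer(alpha, beta):
    return math.gamma(alpha + 3) / (beta + 2.0) ** (alpha + 3)
```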

**The Predictive Distribution**

Note that the predictive distribution is simply the mixture of the Poisson distributions $f(x|\theta)$ with the posterior gamma distribution $\pi(\theta|0,3)$ as mixing weights. By the comment above, the predictive distribution is a negative binomial distribution with the following probability function:

$$f(x|0,3)=\frac{\Gamma(\alpha+3+x)}{\Gamma(\alpha+3)\,x!}\left(\frac{\beta+2}{\beta+3}\right)^{\alpha+3}\left(\frac{1}{\beta+3}\right)^x \qquad x=0,1,2,\dots$$

**The Bayesian Predictive Mean**

Note that $E[X_3|\Theta=\theta]=\theta$. Thus the Bayesian predictive mean in this example is simply the mean of the posterior distribution of $\Theta$, which is

$$E[X_3|0,3]=\frac{\alpha+3}{\beta+2}$$
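The predictive mean can also be recovered directly from the negative binomial predictive probabilities. A short check, using log-gamma to avoid overflow; the parameter values in the test are illustrative:

```python
import math

# Check that the mean of the negative binomial predictive distribution
# equals (alpha + 3) / (beta + 2). Log-gamma is used so that large x
# does not overflow math.gamma.

def predictive_pf(x, alpha, beta):
    a, b = alpha + 3, beta + 2
    log_pf = (math.lgamma(a + x) - math.lgamma(a) - math.lgamma(x + 1)
              + a * math.log(b / (b + 1)) - x * math.log(b + 1))
    return math.exp(log_pf)

def predictive_mean(alpha, beta, x_max=400):
    # truncated sum of x * f(x | 0, 3); the tail beyond x_max is negligible
    return sum(x * predictive_pf(x, alpha, beta) for x in range(x_max))
```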

**Comment**

Generalizing the example, suppose that in the first $n$ periods, the claim counts for the insured are $X_1=x_1,X_2=x_2,\dots,X_n=x_n$. Then the posterior distribution of the parameter $\Theta$ is a gamma distribution with parameters $\alpha+\displaystyle\sum_{i=1}^n x_i$ and $\beta+n$.

Then the predictive distribution of $X_{n+1}$ given the observations has a negative binomial distribution. More importantly, the Bayesian predictive mean is:

$$E[X_{n+1}|x_1,\dots,x_n]=\frac{\alpha+\sum_{i=1}^n x_i}{\beta+n}=\frac{n}{n+\beta}\cdot\bar{x}+\frac{\beta}{n+\beta}\cdot\frac{\alpha}{\beta}$$

It is interesting that the Bayesian predictive mean of the $(n+1)$st period is a weighted average of the mean of the observed data ($\bar{x}$) and the unconditional mean $\frac{\alpha}{\beta}$. Consequently, the above Bayesian estimate is a credibility estimate. The weight $Z=\frac{n}{n+\beta}$ given to the observed data is called the credibility factor. The estimate and the factor $Z$ are called the Bayesian credibility estimate and the Bayesian credibility factor, respectively.
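The weighted-average identity can be confirmed with a few lines of arithmetic; in this sketch the claim counts and parameter values are illustrative choices, not data from the post.

```python
# Arithmetic check that the posterior mean (alpha + sum of claims) / (beta + n)
# equals the credibility weighted average Z * xbar + (1 - Z) * (alpha / beta)
# with Z = n / (n + beta). Inputs are illustrative.

def posterior_mean(data, alpha, beta):
    return (alpha + sum(data)) / (beta + len(data))

def credibility_form(data, alpha, beta):
    n = len(data)
    z = n / (n + beta)
    xbar = sum(data) / n
    return z * xbar + (1.0 - z) * (alpha / beta)
```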

In general, the credibility estimate is an estimator of the following form:

$$C=Z\cdot\bar{X}+(1-Z)\cdot M$$

where $\bar{X}$ is the mean of the observed data and $M$ is the mean based on other information. In our example here, $M$ is the unconditional mean $\frac{\alpha}{\beta}$. In practice, $M$ could be the mean based on the entire book of business, or a mean based on a different block of similar insurance policies. Another interpretation is that $\bar{X}$ is the mean of the recent experience data and $M$ is the mean of prior periods.

One more comment about the credibility factor $Z=\frac{n}{n+\beta}$ derived in this example. As $n \rightarrow \infty$, $Z \rightarrow 1$. This makes intuitive sense since this gives more and more weight to the observed data $\bar{X}$ as more data are accumulated.
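The limiting behavior is easy to see numerically; a tiny illustration (the value $\beta=1$ is an arbitrary choice):

```python
# The credibility factor Z = n / (n + beta) approaches 1 as the number of
# observation periods n grows (beta = 1 here is an arbitrary illustration).

def credibility_factor(n, beta):
    return n / (n + beta)

# With beta = 1, n = 1, 10, 100, 1000 gives
# Z = 0.5, 0.909..., 0.990..., 0.999...
```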