One of the many new additions to Weibull++ 7 is a method of estimating confidence bounds based on Bayes's theorem. This type of confidence bound relies on a different school of thought in statistical analysis, in which prior information is combined with sample data in order to make inferences about model parameters and their functions. This article gives an introduction to Bayesian methods.

Introduction

Bayesian confidence bounds are derived from Bayes's rule, which states that:

\[ f(\theta|Data) = \frac{L(Data|\theta)\,\varphi(\theta)}{\int_{\varsigma} L(Data|\theta)\,\varphi(\theta)\,d\theta} \tag{1} \]

where:

f(θ|Data) is the posterior pdf of θ.

θ is the parameter vector of the chosen distribution (e.g., Weibull, lognormal).

L(·) is the likelihood function.

φ(θ) is the prior pdf of the parameter vector.

ς is the range of θ.

In other words, the prior knowledge is provided in the form of the prior pdf of the parameters, which in turn is combined with the sample data in order to obtain the posterior pdf. Different forms of prior information exist, such as past data or expert opinion; when no such information is available, a non-informative prior is used. It can be seen from Eqn. (1) that we are now dealing with distributions of parameters rather than single-valued parameters. For example, consider a one-parameter distribution with a positive parameter θ1. Given a set of sample data and a prior distribution φ(θ1) for θ1, Eqn. (1) can be written as:

\[ f(\theta_1|Data) = \frac{L(Data|\theta_1)\,\varphi(\theta_1)}{\int_{\varsigma} L(Data|\theta_1)\,\varphi(\theta_1)\,d\theta_1} \tag{2} \]

In other words, we now have the distribution of θ1 and can make statistical inferences about this parameter, such as calculating probabilities. Specifically, the probability that θ1 is less than or equal to a value x, P(θ1 ≤ x), can be obtained by integrating Eqn. (2), or:

\[ P(\theta_1 \le x) = \int_{0}^{x} f(\theta_1|Data)\,d\theta_1 \tag{3} \]

Eqn. (3) essentially calculates a confidence bound on the parameter, where P(θ1 ≤ x) is the confidence level and x is the confidence bound. (Note: In Bayesian statistics, the term confidence bounds is not correct; credible bounds is the correct term. However, since from an application perspective the result has the same interpretation, we will use the term confidence bounds to avoid confusion.) Substituting Eqn. (2) into Eqn. (3) yields:

\[ P(\theta_1 \le x) = \frac{\int_{0}^{x} L(Data|\theta_1)\,\varphi(\theta_1)\,d\theta_1}{\int_{\varsigma} L(Data|\theta_1)\,\varphi(\theta_1)\,d\theta_1} \tag{4} \]

The only question at this point is what we should use as a prior distribution of θ1. For the purpose of calculating confidence bounds, non-informative prior distributions are utilized. Non-informative prior distributions are distributions that have no population basis and play a minimal role in the posterior distribution. The idea behind the use of non-informative prior distributions is to make inferences that are not affected by external information, or that can be made when external information is not available. Since the calculation of confidence bounds should, in the general case, be independent of external information and rely only on the current data, non-informative priors are used. Specifically, the uniform distribution is used as the prior distribution for the different parameters of the selected fitted distribution. For example, if the Weibull distribution is fitted to the data, the prior distributions for beta and eta are assumed to be uniform.
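To illustrate the idea, here is a minimal numerical sketch (not Weibull++'s actual implementation) of evaluating Eqn. (4) for a one-parameter exponential model with a uniform prior. The failure times, the parameter range and the value x are all hypothetical, chosen only for demonstration; because the uniform prior is a constant, it cancels in the ratio.

```python
import numpy as np
from scipy import integrate

# Hypothetical complete failure times (illustrative only)
data = np.array([120.0, 250.0, 380.0, 520.0, 740.0])

# Likelihood of an exponential model with failure rate lam: prod(lam*exp(-lam*t))
def likelihood(lam):
    return np.prod(lam * np.exp(-lam * data))

# Assumed positive range for lam, standing in for the uniform prior's support
LAM_MIN, LAM_MAX = 1e-6, 0.05

# Denominator of Eqn. (4): normalizing integral over the full parameter range
# (the points hint helps quad resolve the sharp likelihood peak)
norm, _ = integrate.quad(likelihood, LAM_MIN, LAM_MAX,
                         points=[0.002, 0.005, 0.01])

# P(lam <= x): posterior probability that the parameter is at most x (Eqn. 4)
def prob_leq(x):
    num, _ = integrate.quad(likelihood, LAM_MIN, x)
    return num / norm

p = prob_leq(0.005)  # posterior probability that lam <= 0.005
```

Since prob_leq is the posterior cdf of the parameter, it increases from 0 to 1 across the parameter range, and evaluating it at a chosen x gives the confidence level associated with that bound.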

Eqn. (4) can be generalized for any
distribution having a vector of parameters θ,
yielding the general equation for calculating Bayesian confidence bounds:

\[ CL = \frac{\int_{\xi} L(Data|\theta)\,\varphi(\theta)\,d\theta}{\int_{\varsigma} L(Data|\theta)\,\varphi(\theta)\,d\theta} \tag{5} \]

where:

CL is the confidence level.

θ is the parameter vector.

L(·) is the likelihood function.

φ(θ) is the prior pdf of the parameter vector.

ς is the range of θ.

ξ is the range over which θ varies: from Ψ(T,R) to θ's maximum value, or from θ's minimum value to Ψ(T,R). Ψ(T,R) is the function such that if T is given, then the bounds are calculated for R, and if R is given, then the bounds are calculated for T.

If T is given, then from Eqn. (5) and Ψ, and for a given CL, the bounds on R are calculated.

If R is given, then from Eqn. (5) and Ψ, and for a given CL, the bounds on T are calculated.
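As a rough sketch of how Eqn. (5) can be solved for a bound, again using a hypothetical one-parameter exponential model with a uniform prior rather than Weibull++'s actual algorithm, one can root-find for the parameter value at which the posterior probability equals the desired confidence level, then translate that parameter bound into a bound on reliability at a given time T:

```python
import numpy as np
from scipy import integrate, optimize

# Hypothetical complete failure times (illustrative only)
data = np.array([120.0, 250.0, 380.0, 520.0, 740.0])
LAM_MIN, LAM_MAX = 1e-6, 0.05  # assumed range for the failure rate lam

def likelihood(lam):
    return np.prod(lam * np.exp(-lam * data))

# Normalizing integral (denominator of Eqn. (5)); the uniform prior cancels
norm, _ = integrate.quad(likelihood, LAM_MIN, LAM_MAX,
                         points=[0.002, 0.005, 0.01])

# Posterior probability that lam is at most x
def prob_leq(x):
    num, _ = integrate.quad(likelihood, LAM_MIN, x)
    return num / norm

# Solve P(lam <= x) = CL for x: the upper bound on the failure rate
CL = 0.90
lam_upper = optimize.brentq(lambda x: prob_leq(x) - CL, LAM_MIN, LAM_MAX)

# Given a time T, the upper bound on lam yields the lower bound on
# reliability, since R(T) = exp(-lam*T) decreases in lam
T = 100.0
R_lower = np.exp(-lam_upper * T)
```

The same root-finding idea applies in the other direction: fixing a required reliability R and solving for the time bound T instead.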

When the data set being analyzed is of small size, the Bayesian bounds
method is usually preferred over the
Fisher Matrix, the
Likelihood Ratio and the
Beta Binomial
methods. The advantage of the Bayesian bounds method lies in the fact that
it makes the fewest assumptions about the distribution of the parameters.
The Fisher Matrix method relies on a normality assumption. The Likelihood Ratio method relies on the assumption that the statistic \( -2\ln\!\left(L(\theta)/L(\hat{\theta})\right) \) follows a Chi-Square distribution. The Beta Binomial method is a non-parametric method, which discourages making predictions outside the range of the data. (Note also that in Weibull++ 7 the Beta Binomial method is only available for the Mixed Weibull distribution.) The Bayesian confidence method is free of all of these assumptions, since the posterior distribution is calculated directly.