An alternative competing risk model to the Weibull distribution for modelling aging in lifetime data analysis

Abstract

A simple competing risk distribution is proposed as a possible alternative to the Weibull distribution in lifetime analysis. This distribution is that of the minimum of an exponential and a Weibull random variable. Our motivation is to account for both accidental and aging failures in lifetime data analysis. First, the main characteristics of this distribution are presented. Then, the estimation of its parameters is considered through maximum likelihood and Bayesian inference. In particular, the existence of a unique consistent root of the likelihood equations is proved. Decision tests for choosing between the exponential, the Weibull and the competing risk distributions are presented. Finally, the alternative model is compared to the Weibull model through numerical experiments on both real and simulated data sets, especially in an industrial context.
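The proposed lifetime is the minimum of an exponential variable (accidental failures, constant hazard) and a Weibull variable (aging failures). As an informal illustration, the following Python sketch simulates this minimum and checks the empirical survival against \(S(t)=e^{-t/\eta_{0}-(t/\eta_{1})^{\beta}}\); the parameter values \(\eta_{0}=10\), \(\eta_{1}=5\), \(\beta=2\) and the function names are illustrative assumptions, not values from the paper.

```python
import math
import random

def sample_competing_risk(eta0, eta1, beta, rng):
    """Lifetime = min(exponential accidental failure, Weibull aging failure)."""
    t_accident = rng.expovariate(1.0 / eta0)   # exponential with mean eta0
    t_aging = rng.weibullvariate(eta1, beta)   # Weibull, scale eta1, shape beta
    return min(t_accident, t_aging)

def survival(t, eta0, eta1, beta):
    """S(t) = P(T > t) = exp(-t/eta0 - (t/eta1)^beta), product of both survivals."""
    return math.exp(-t / eta0 - (t / eta1) ** beta)

rng = random.Random(0)
eta0, eta1, beta = 10.0, 5.0, 2.0   # illustrative values only
n = 50_000
sample = [sample_competing_risk(eta0, eta1, beta, rng) for _ in range(n)]

t = 3.0
empirical = sum(x > t for x in sample) / n
print(round(empirical, 3), round(survival(t, eta0, eta1, beta), 3))
```

With a large sample, the empirical survival at any fixed time agrees closely with the closed-form expression, reflecting that the two risks act independently and their survival functions multiply.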

Notes

Acknowledgements

We warmly thank the Associate Editor and the reviewers for their numerous suggestions and corrections, which greatly helped to improve the presentation of the paper.

Appendix: existence of a consistent root of the likelihood equations

The aim of this section is to prove that the likelihood equations for the
\(\mathcal{B}\) distribution admit a root which is consistent and asymptotically normally distributed. The proof relies on a theorem of Chanda (1954), which is recalled first.

Theorem 1

(Chanda 1954) Let f(x;θ) be a probability density function,
\(\theta=(\theta_{1},\ldots,\theta_{k})\) being a vector parameter belonging to the parameter space Ω, and let x1,...,xn be independent observations of a random variable X with density f(x;θ). The likelihood equations are given by
\(\frac{\partial\ln L}{\partial\theta}=0\), where
\(\ln L=\sum_{i=1}^{n}\ln f(x_{i};\theta).\) Let θ0 denote the true value of θ; it is assumed to be an interior point of Ω. Then, if Conditions 1–3 below hold, the likelihood equations have a unique consistent root θn. Furthermore,
\(\sqrt{n}(\theta_{n}-\theta_{0})\) is asymptotically normally distributed with mean zero and covariance matrix I(θ0)−1, where I(θ0) is the Fisher information matrix.

Condition 1: For almost all x and for all
\(\theta\in\overline{\Omega}\), the derivatives \(\frac{\partial\ln f}{\partial\theta_{r}}\), \(\frac{\partial^{2}\ln f}{\partial\theta_{r}\partial\theta_{s}}\) and
\(\frac{\partial^{3}\ln f}{\partial\theta_{r}\partial\theta_{s}\partial\theta_{t}}\) exist for all r, s, t = 1,...,k.

Condition 2: For almost all x and for all
\(\theta\in\overline{\Omega}\), \(\left|\frac{\partial f}{\partial\theta_{r}}\right| < F_{r}(x)\), \(\left|\frac{\partial^{2}f}{\partial\theta_{r}\partial\theta_{s}}\right| < F_{rs}(x)\) and
\(\left|\frac{\partial^{3} f}{\partial\theta_{r}\partial\theta_{s}\partial\theta_{t}}\right| < H_{rst}(x)\), where Hrst is such that
\(\int_{-\infty}^{+\infty}H_{rst}(x)f(x){\rm d}x\leq M < \infty\), and Fr(x) and Frs(x) are bounded for all r, s, t = 1,...,k.

Condition 3: For all \(\theta\in\Omega\), the Fisher information matrix I(θ), with generic element \(E\left[\frac{\partial\ln f}{\partial\theta_{r}}\frac{\partial\ln f}{\partial\theta_{s}}\right]\), is finite and positive definite.

The three conditions of Chanda's theorem are now checked for the likelihood equations of the
\(\mathcal{B}\) distribution. Note that the proof below covers the possibility of censored observations. In that case, denoting by c a censored time, the contribution to the likelihood is the survival function
\(f(c)=e^{-\frac{1}{\eta_{0}}c-\left(
\frac{c}{\eta_{1}}\right)^{\beta}}\) instead of the density (7). In what follows, x may thus denote either a failure time or a censoring time.
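With censoring handled this way, the log-likelihood mixes density terms for observed failures and survival terms for censored times. A minimal Python sketch follows; since the density (7) is not reproduced here, it assumes the hazard rate is the sum of the exponential and Weibull hazards, \(\frac{1}{\eta_{0}}+\frac{\beta}{\eta_{1}}\left(\frac{x}{\eta_{1}}\right)^{\beta-1}\), which is consistent with the survival function above. The function name and the tiny data set are illustrative.

```python
import math

def log_likelihood(times, censored, eta0, eta1, beta):
    """Log-likelihood of the competing risk model with right-censored data.

    times[i]    : observed failure or censoring time
    censored[i] : True if times[i] is a censoring time
    """
    ll = 0.0
    for x, cens in zip(times, censored):
        # log-survival term, common to both cases: -x/eta0 - (x/eta1)^beta
        ll += -x / eta0 - (x / eta1) ** beta
        if not cens:
            # observed failure: add the log-hazard
            # (sum of the exponential hazard 1/eta0 and the Weibull hazard)
            hazard = 1.0 / eta0 + (beta / eta1) * (x / eta1) ** (beta - 1)
            ll += math.log(hazard)
    return ll

# tiny made-up data set for illustration only
times = [1.2, 3.4, 0.7, 5.0, 2.1]
censored = [False, False, True, True, False]
print(log_likelihood(times, censored, eta0=10.0, eta1=5.0, beta=2.0))
```

A censored observation contributes only its log-survival \(-c/\eta_{0}-(c/\eta_{1})^{\beta}\), while an observed failure additionally contributes the log-hazard at its failure time.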

where
\(P(\frac{1}{\eta_{0}},\frac{1}{\eta_{1}},\beta)\) and
\(Q_{k_1k_2k_3}(\frac{1}{\eta _{0}},\frac{1}{\eta_{1}},\beta)\) are polynomials in
\(\frac{1}{\eta_{0}},\frac{1}{\eta_{1}}\) and β. Consequently, Condition 1 is satisfied.

Consider now any of the partial derivatives g appearing above. It is a continuous function of x and θ, hence bounded for
\(\theta\in\overline{\Omega}\) and x in any closed interval. To check Condition 2, it therefore suffices to study the behavior of g for large values of x. It is easily seen that there exist positive constants A and B such that \(|g|\leq x^{A}e^{-Bx}\) for sufficiently large x and all
\(\theta\in\overline{\Omega}\). Since \(x^{A}e^{-Bx}\) is bounded, Condition 2 is satisfied.

As for Condition 3, I(θ) is a covariance matrix, hence positive semi-definite; it fails to be positive definite only if there exist a, b, c, not all zero, such that
\(a\frac{\partial\ln
f}{\partial\eta_{0}}+b\frac{\partial\ln f}{\partial
\eta_{1}}+c\frac{\partial\ln f}{\partial\beta}=0\). A direct examination of these derivatives shows that they are linearly independent. Thus, the three conditions of Chanda's theorem are verified.
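As an informal numerical companion to the consistency result, the log-likelihood of a large simulated sample should be higher at the true parameters than at markedly perturbed ones. The sketch below illustrates this under assumed parameter values (\(\eta_{0}=10\), \(\eta_{1}=5\), \(\beta=2\), all illustrative); it is a Monte Carlo illustration, not part of the proof.

```python
import math
import random

def sample_lifetime(eta0, eta1, beta, rng):
    # minimum of an exponential (mean eta0) and a Weibull (scale eta1, shape beta)
    return min(rng.expovariate(1.0 / eta0), rng.weibullvariate(eta1, beta))

def log_lik(xs, eta0, eta1, beta):
    # complete-data log-likelihood: sum of log-hazard plus log-survival terms
    ll = 0.0
    for x in xs:
        hazard = 1.0 / eta0 + (beta / eta1) * (x / eta1) ** (beta - 1)
        ll += math.log(hazard) - x / eta0 - (x / eta1) ** beta
    return ll

rng = random.Random(42)
true = (10.0, 5.0, 2.0)                 # illustrative "true" parameters
xs = [sample_lifetime(*true, rng) for _ in range(20_000)]

ll_true = log_lik(xs, *true)
ll_off = log_lik(xs, 5.0, 5.0, 2.0)     # eta0 halved
print(ll_true > ll_off)
```

By the law of large numbers, the average log-likelihood concentrates around its expectation, which is maximized at the true parameter; this is the heuristic behind the existence of a consistent root.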