Example 14.5 Profile-Likelihood-Based Confidence Intervals

This example calculates confidence intervals based on the profile likelihood for the parameters estimated in the previous example. The following introduction to profile-likelihood methods is based on Venzon and Moolgavkar (1988).

Let $\hat{\theta}$ be the maximum likelihood estimate (MLE) of a parameter vector $\theta$ and let $\ell(\theta)$ be the log-likelihood function defined for parameter values $\theta \in \Theta \subset \mathbb{R}^n$.

The profile-likelihood method reduces $\ell(\theta)$ to a function of a single parameter of interest, $\theta_j$, where $\theta = (\theta_1,\ldots,\theta_j,\ldots,\theta_n)'$, by treating the others as nuisance parameters and maximizing over them. The profile likelihood for $\theta_j$ is defined as

$$\tilde{\ell}_j(\theta_j) = \max_{\theta \in \Theta_j(\theta_j)} \ell(\theta)$$

where $\Theta_j(\theta_j)$ is the subset of $\Theta$ on which the $j$th component is fixed at $\theta_j$. Define the complementary parameter set $\omega = (\theta_1,\ldots,\theta_{j-1},\theta_{j+1},\ldots,\theta_n)'$ and $\hat{\omega}(\theta_j)$ as the optimizer of $\tilde{\ell}_j(\theta_j)$ for each value of $\theta_j$. Of course, the maximum of the function $\tilde{\ell}_j(\theta_j)$ is located at $\theta_j = \hat{\theta}_j$. The profile-likelihood-based confidence interval for parameter $\theta_j$ is defined as

$$\left\{ \theta_j : \ell(\hat{\theta}) - \tilde{\ell}_j(\theta_j) \le \tfrac{1}{2} q_1(1-\alpha) \right\}$$

where $q_1(1-\alpha)$ is the $(1-\alpha)$th quantile of the $\chi^2$ distribution with one degree of freedom. The points $\theta_j^l$ and $\theta_j^u$ are the endpoints of the profile-likelihood-based confidence interval for parameter $\theta_j$. The points $\theta_j^l$ and $\theta_j^u$ can be computed as the solutions of a system of $n$ nonlinear equations $f_i(\theta) = 0$ in $n$ parameters, where $\theta = (\theta_j, \omega')'$:

$$\begin{pmatrix} \ell(\theta) - \ell^* \\[4pt] \dfrac{\partial \ell(\theta)}{\partial \omega} \end{pmatrix} = 0$$

where $\ell^*$ is the constant threshold $\ell^* = \ell(\hat{\theta}) - \tfrac{1}{2} q_1(1-\alpha)$. The first of these equations defines the locations $\theta_j^l$ and $\theta_j^u$ where the function $\ell(\theta)$ cuts $\ell^*$, and the remaining $n-1$ equations define the optimality of the $n-1$ parameters in $\omega$. Jointly, the $n$ equations define the locations $\theta_j^l$ and $\theta_j^u$ where the function $\tilde{\ell}_j(\theta_j)$ cuts the constant threshold $\ell^*$, which is given by the roots of $\tilde{\ell}_j(\theta_j) - \ell^*$. Assuming that the two solutions $\theta_j^l$ and $\theta_j^u$ exist (they do not if the quantile $q_1(1-\alpha)$ is too large), this system of $n$ nonlinear equations can be solved by minimizing the sum of squares of the $n$ functions $f_i(\theta_j, \omega)$:

$$F(\theta) = \frac{1}{2} \sum_{i=1}^{n} f_i^2(\theta_j, \omega)$$

For a solution of the system of $n$ nonlinear equations to exist, the minimum value of the convex function $F$ must be zero.
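As a language-independent illustration of the definitions above (this is not the SAS/IML code of this example), the following Python sketch computes a profile-likelihood confidence interval for the mean of a normal sample. The model, data, and function names are hypothetical; the nuisance parameter $\sigma$ is profiled out analytically, the threshold uses the 0.95 quantile of the $\chi^2$ distribution with one degree of freedom (about 3.8415), and the endpoints are found directly as the roots of the profile likelihood minus the threshold, by bisection.

```python
import math

# Hypothetical example: profile-likelihood CI for the mean mu of a
# normal sample, with sigma treated as a nuisance parameter.
DATA = [1.1, 2.3, 0.7, 1.9, 1.4, 2.0]
CHI2_95_1DF = 3.841459  # 0.95 quantile of chi-square with 1 df

def loglik(mu, sigma, data):
    """Normal log-likelihood."""
    n = len(data)
    ss = sum((x - mu) ** 2 for x in data)
    return -n * math.log(sigma) - 0.5 * n * math.log(2 * math.pi) - ss / (2 * sigma ** 2)

def profile_loglik(mu, data):
    """Profile log-likelihood for mu: maximize over the nuisance sigma.
    For the normal model the inner maximum is analytic:
    sigma_hat(mu)^2 = mean((x - mu)^2)."""
    n = len(data)
    s2 = sum((x - mu) ** 2 for x in data) / n
    return loglik(mu, math.sqrt(s2), data)

def profile_ci(data, q=CHI2_95_1DF):
    """Endpoints as the roots of profile_loglik(mu) - lstar, by bisection."""
    n = len(data)
    mu_hat = sum(data) / n
    lstar = profile_loglik(mu_hat, data) - 0.5 * q  # constant threshold l*

    def g(mu):
        return profile_loglik(mu, data) - lstar

    def bisect(lo, hi):
        # g changes sign between lo and hi by construction
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if g(lo) * g(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    # bracket each endpoint by stepping away from the MLE
    step = 1.0
    while g(mu_hat - step) > 0:
        step *= 2.0
    left = bisect(mu_hat - step, mu_hat)
    step = 1.0
    while g(mu_hat + step) > 0:
        step *= 2.0
    right = bisect(mu_hat, mu_hat + step)
    return left, right
```

Root finding by bisection works here because the profile likelihood is unimodal with its maximum at the MLE, so there is exactly one root on each side whenever the two solutions exist.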

The following code defines the module for the system of nonlinear equations to be solved:
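The SAS/IML module itself is not reproduced in this excerpt. As a rough Python sketch of the same idea (the normal-mean model and all names are hypothetical), the module's job is to return the residual vector of the system: the first component is the log-likelihood minus the threshold $\ell^*$, and the second is the derivative of the log-likelihood with respect to the nuisance parameter, here approximated by a central finite difference.

```python
import math

# Hypothetical sketch (not the SAS/IML module): residual vector for the
# profile-likelihood endpoint system, for a normal model with parameter
# of interest mu and nuisance parameter sigma.
def loglik(theta, data):
    """Normal log-likelihood at theta = (mu, sigma)."""
    mu, sigma = theta
    n = len(data)
    ss = sum((x - mu) ** 2 for x in data)
    return -n * math.log(sigma) - 0.5 * n * math.log(2 * math.pi) - ss / (2 * sigma ** 2)

def plc_residuals(theta, data, lstar, h=1e-6):
    """f_1 = l(theta) - l*   (log-likelihood cuts the threshold)
       f_2 = dl/dsigma       (optimality of the nuisance parameter),
    with f_2 approximated by a central finite difference."""
    mu, sigma = theta
    f1 = loglik(theta, data) - lstar
    f2 = (loglik((mu, sigma + h), data) - loglik((mu, sigma - h), data)) / (2 * h)
    return [f1, f2]
```

A convenient sanity check: at the MLE with the threshold set to $\ell^* = \ell(\hat{\theta})$, both residuals vanish, since the log-likelihood equals the threshold there and the nuisance parameter is already optimal.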

The following code implements the Levenberg-Marquardt algorithm with the NLPLM subroutine to solve the system of two nonlinear equations for the left and right endpoints of the interval. The starting point is the optimizer $(\hat{\theta}_j, \hat{\omega})$, as computed in the previous example, moved toward the left or right endpoint of the interval by an initial step (refer to Venzon and Moolgavkar (1988)). This forces the algorithm to approach the specified endpoint.
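The NLPLM call itself is not reproduced in this excerpt. As a hedged Python sketch of the same strategy (the normal-mean model and all names are hypothetical; the nuisance $\sigma$ is reparameterized as $\tau = \log\sigma$ so the iterates stay in the domain), the code below minimizes the sum of squares of the two residuals with a simple Levenberg-Marquardt iteration, started from the MLE shifted toward the desired endpoint by a Wald-type step (one illustrative choice of initial step, not the one in the paper):

```python
import math

DATA = [1.1, 2.3, 0.7, 1.9, 1.4, 2.0]
CHI2_95_1DF = 3.841459  # 0.95 quantile of chi-square with 1 df

def loglik(mu, tau, data):
    """Normal log-likelihood with sigma = exp(tau), so tau is unconstrained."""
    n = len(data)
    sigma = math.exp(tau)
    ss = sum((x - mu) ** 2 for x in data)
    return -n * tau - 0.5 * n * math.log(2 * math.pi) - ss / (2 * sigma ** 2)

def residuals(theta, data, lstar, h=1e-6):
    # f1: log-likelihood cuts the threshold l*; f2: d l / d tau = 0
    mu, tau = theta
    f1 = loglik(mu, tau, data) - lstar
    f2 = (loglik(mu, tau + h, data) - loglik(mu, tau - h, data)) / (2 * h)
    return [f1, f2]

def lm_solve(theta0, data, lstar, iters=200):
    """Minimize F = 0.5*(f1^2 + f2^2) by a basic Levenberg-Marquardt loop:
    solve (J'J + lam*I) d = J'f, then damp lam down on success, up on failure."""
    th = list(theta0)
    lam, h = 1e-3, 1e-6
    f = residuals(th, data, lstar)
    for _ in range(iters):
        # finite-difference Jacobian J[i][j] = d f_i / d theta_j
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            tp = list(th); tp[j] += h
            tm = list(th); tm[j] -= h
            fp = residuals(tp, data, lstar)
            fm = residuals(tm, data, lstar)
            for i in range(2):
                J[i][j] = (fp[i] - fm[i]) / (2 * h)
        # normal equations A d = g with A = J'J + lam*I, g = J'f
        A = [[sum(J[k][i] * J[k][j] for k in range(2)) + (lam if i == j else 0.0)
              for j in range(2)] for i in range(2)]
        g = [sum(J[k][i] * f[k] for k in range(2)) for i in range(2)]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        d = [(g[0] * A[1][1] - g[1] * A[0][1]) / det,
             (A[0][0] * g[1] - A[1][0] * g[0]) / det]
        trial = [th[0] - d[0], th[1] - d[1]]
        ftrial = residuals(trial, data, lstar)
        if ftrial[0] ** 2 + ftrial[1] ** 2 < f[0] ** 2 + f[1] ** 2:
            th, f, lam = trial, ftrial, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                             # reject step, damp harder
        if f[0] ** 2 + f[1] ** 2 < 1e-14:
            break
    return th

n = len(DATA)
mu_hat = sum(DATA) / n
tau_hat = 0.5 * math.log(sum((x - mu_hat) ** 2 for x in DATA) / n)
lstar = loglik(mu_hat, tau_hat, DATA) - 0.5 * CHI2_95_1DF
# start at the MLE moved toward each endpoint by a Wald-type step,
# so the iteration approaches the specified endpoint
se = math.exp(tau_hat) / math.sqrt(n)
left = lm_solve([mu_hat - se, tau_hat], DATA, lstar)[0]
right = lm_solve([mu_hat + se, tau_hat], DATA, lstar)[0]
```

Starting exactly at the MLE would be a poor choice here: the gradient of the log-likelihood vanishes there, so the shifted starting points also keep the linearized system well conditioned, in the spirit of the remark above.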