Summary

In this chapter, we have described a Bayesian approach to efficiency analysis using stochastic frontier models. With cross-sectional data and a log-linear frontier, a simple Gibbs sampler can be used to carry out Bayesian inference. In the case of a nonlinear frontier, more complicated posterior simulation methods are necessary. Bayesian efficiency measurement with panel data is then discussed. We show how a Bayesian analog of the classical fixed effects panel data model can be used to calculate the efficiency of each firm relative to the most efficient firm. However, absolute efficiency calculations are precluded in this model and inference on efficiencies can be quite sensitive to prior assumptions. Accordingly, we describe a Bayesian analog of the classical random effects panel data model which can be used for robust inference on absolute efficiencies. Throughout, we emphasize the computational methods necessary to carry out Bayesian inference. We show how random number generation from well-known distributions is sufficient to develop posterior simulators for a wide variety of models.
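The Gibbs sampler mentioned above can be sketched for the simplest case: a log-linear frontier y_i = x_i'β + v_i − z_i with normal measurement error v_i and exponentially distributed inefficiencies z_i, using data augmentation on the z_i. The priors, simulated data, and conditional updates below are illustrative assumptions for the sketch, not the chapter's exact specification:

```python
# Illustrative sketch (not the chapter's exact model): Gibbs sampler for
# y_i = x_i' beta + v_i - z_i, v_i ~ N(0, sigma^2), z_i ~ Exponential(mean lam),
# with flat-ish priors assumed for convenience.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# --- simulate a small data set ---
n, k = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.5])
sigma_true, lam_true = 0.2, 0.3
z_true = rng.exponential(lam_true, size=n)           # inefficiencies
y = X @ beta_true + rng.normal(0, sigma_true, n) - z_true

# --- Gibbs sampler with data augmentation on z ---
iters, burn = 3000, 1000
beta, sigma2, lam, z = np.zeros(k), 1.0, 1.0, np.ones(n)
draws = {"beta": [], "sigma2": [], "lam": []}
XtX_inv = np.linalg.inv(X.T @ X)

for it in range(iters):
    # beta | z, sigma2: normal regression of the "frontier output" y + z on X
    bhat = XtX_inv @ X.T @ (y + z)
    beta = rng.multivariate_normal(bhat, sigma2 * XtX_inv)
    # sigma2 | beta, z: inverse gamma (shape n/2, scale SSR/2)
    resid = y + z - X @ beta
    sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / (resid @ resid))
    # lam | z: conjugate gamma update for the exponential rate 1/lam
    lam = 1.0 / rng.gamma(n + 1.0, 1.0 / z.sum())
    # z_i | rest: N(x_i' beta - y_i - sigma2/lam, sigma2) truncated to [0, inf)
    mu = X @ beta - y - sigma2 / lam
    sd = np.sqrt(sigma2)
    z = stats.truncnorm.rvs(-mu / sd, np.inf, loc=mu, scale=sd,
                            random_state=rng)
    if it >= burn:
        draws["beta"].append(beta)
        draws["sigma2"].append(sigma2)
        draws["lam"].append(lam)

beta_mean = np.mean(draws["beta"], axis=0)
print("posterior mean of beta:", beta_mean)
```

Firm-specific efficiencies exp(−z_i) can be summarized by retaining the z draws in the same loop; every step above requires only random number generation from standard distributions, as the summary notes.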

Notes

1 Throughout this chapter, we will use the term "firm" to refer to the cross-sectional unit of analysis. In practice, it could also be an individual, a country, etc.

2 In this chapter we focus on production frontiers. However, by suitably redefining Y and X, the methods can be applied to cost frontiers.

4 This error reflects the stochastic nature of the frontier and we shall conveniently refer to it as "measurement error". The treatment of measurement error is a crucial distinction between econometric and DEA methods. Most economic data sets are quite noisy and, hence, we feel including measurement error is important. DEA methods can be quite sensitive to outliers since they ignore measurement error. Furthermore, since the statistical framework for DEA methods is nonparametric, confidence intervals for parameter and efficiency estimates are very difficult to derive. However, econometric methods require the researcher to make more assumptions (e.g. about the error distribution) than do DEA methods. Recently, there has been some promising work on using the bootstrap with DEA methods which should lessen some of the criticisms of DEA (see Simar and Wilson, 1998a, 1998b, and the references contained therein).

5 We use the terminology "sampling model" to denote the joint distribution of (y, z) given the parameters, and shall base the likelihood function on the marginal distribution of y given the parameters.

6 In van den Broeck et al. (1994) and Koop et al. (1995), the Erlang distribution (i.e. the gamma distribution with fixed shape parameter, here chosen to be 1, 2, or 3) was used for inefficiency. The computational techniques necessary to work with Erlang distributions are simple extensions of those given in this section.

7 As shown in Fernandez et al. (1997), the use of the full model with data augmentation also allows for the derivation of crucial theoretical results on the existence of the posterior distribution and moments.

8 The assumption that the inefficiencies are drawn from the exponential distribution with unknown common mean λ can be interpreted as a hierarchical prior for z_i. Alternatively, a classical econometrician would interpret this distributional assumption as part of the sampling model. This difference in interpretation highlights the fact that the division into prior and sampling model is to some extent arbitrary. See Fernandez et al. (1997) for more discussion of this issue.

9 Note that we have not formally proven that the posterior mean and variance of θ exist (although numerical evidence suggests that they do). Hence, we recommend using the posterior median and interquartile range of θ to summarize properties of the posterior. Since 0 < τ_i < 1, we know that all posterior moments exist for the firm-specific efficiencies.
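The median/IQR summary recommended here is easy to compute from retained posterior draws. A minimal sketch, using standard Cauchy noise as a stand-in for draws of a quantity whose moments may not exist (the Cauchy has no mean, yet its quantiles are perfectly well defined):

```python
# Summarizing a posterior by median and interquartile range, which remain
# well defined even when posterior moments may not exist.
import numpy as np

rng = np.random.default_rng(1)
draws = rng.standard_cauchy(10_000)   # stand-in for MCMC draws of theta

median = np.median(draws)
q25, q75 = np.percentile(draws, [25, 75])
iqr = q75 - q25
print(f"median={median:.3f}, IQR={iqr:.3f}")
```

For a standard Cauchy the quartiles sit at ±1, so the reported IQR should be close to 2 even though a sample mean of the same draws would never settle down.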

10 Recent work on bootstrapping DEA frontiers is promising to surmount this problem and this procedure seems to be gaining some acceptance.

11 This is where the assumption that the w_i's are 0-1 dummies is crucial.

12 The extension to a varying efficiency distribution as in (24.12) is trivial and proceeds along the lines of the previous model.

13 Of course, many of the issues which arise in the stochastic frontier model with panel data also arise in traditional panel data models. It is beyond the scope of the present chapter to attempt to summarize the huge literature on panel data. The reader is referred to Matyas and Sevestre (1996) or Baltagi (1995) for an introduction to the broader panel data literature.

14 Note that this implies we now deviate from Assumption 3 in subsection 2.2.

15 It is worth noting that the classical econometric analysis assigns the status of most efficient firm to one particular firm and measures efficiency relative to this. The present Bayesian analysis also measures efficiency relative to the most efficient firm, but allows for uncertainty as to which that firm is.

16 This procedure can be computationally demanding since P(τ_j,rel = 1 | y, x) and p(τ_i,rel | y, x, τ_j,rel = 1) must be calculated for every possible i and j. However, typically, P(τ_j,rel = 1 | y, x) is appreciable (e.g. > 0.001) for only a few firms and the rest can be ignored (see Koop et al., 1997, p. 82).

17 Koop et al. (1999, 2000) also allow the frontier to shift over time and interpret such shifts as technical change. In such a framework, it is possible to decompose changes in output growth into components reflecting input change, technical change, and efficiency change. The ability of stochastic frontier models with panel data to calculate such decompositions is quite important in many practical applications. Also of interest are Baltagi and Griffin (1988) and Baltagi, Griffin, and Rich (1995), which develop a more general framework relating changes in the production function with technical change in a nonstochastic frontier panel data model.