Abstract of The convergence rate of the TM algorithm of Edwards and Lauritzen: Edwards & Lauritzen (Biometrika 88, 2001, pp. 961-972) have recently proposed the TM algorithm for finding the maximum likelihood estimate when the likelihood can be truly or artificially regarded as a conditional likelihood and the full likelihood is more easily maximised. They presented a proof of convergence, provided that the algorithm is supplemented by a line search. In this note a simple expression, in terms of observed information matrices, is given for the convergence rate of the algorithm per se, when it converges; the result also elucidates in which situations the algorithm will require a line search. Essentially these are cases where the full model does not adequately fit the data.
Some key words: Conditional likelihood; exponential families; graphical chain models; iterative method; ML estimation.

Abstract Report 2001:3 Ancillarity and conditional inference for ML and interval estimates in some classical genetic linkage trials
The main object of study here is a classical example of linkage analysis, in which there are two separately but not jointly ancillary statistics, which are mutually exchangeable. In such cases it is not obvious how, or even whether, the statistical inference about the parameter of interest (here the recombination probability) should be a conditional inference. We consider various precision measures, viz.\ the observed and the expected (Fisher) information quantities, and various conditional expected values in between, and we compare their ability to quantify the precision of the parameter estimate, as well as to quantify the confidence to be attached to interval estimates. The general conclusion drawn is that there is not much to be gained but much to be risked by conditional inference in this example.
Some key words: Confidence, precision, recombination probability, relevance.
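The contrast between observed and expected (Fisher) information that the abstract refers to can be made concrete in a linkage-type multinomial likelihood. The sketch below uses the textbook intercross example (Rao's counts 125, 18, 20, 34 with cell probabilities (2+t)/4, (1-t)/4, (1-t)/4, t/4); these data and this parametrisation are an illustrative assumption, not necessarily the trials analysed in the report.

```python
# Hedged illustration: observed vs expected (Fisher) information in a
# classical linkage-type multinomial model. Counts and cell probabilities
# are the textbook intercross example (assumed here for illustration only).
from scipy.optimize import brentq

counts = (125, 18, 20, 34)          # y1, y2, y3, y4; cells (2+t)/4, (1-t)/4, (1-t)/4, t/4
n = sum(counts)

def score(t):
    """Derivative of the log-likelihood in t."""
    y1, y2, y3, y4 = counts
    return y1 / (2 + t) - (y2 + y3) / (1 - t) + y4 / t

def observed_info(t):
    """Observed information: minus the second derivative of the log-likelihood."""
    y1, y2, y3, y4 = counts
    return y1 / (2 + t) ** 2 + (y2 + y3) / (1 - t) ** 2 + y4 / t ** 2

def expected_info(t):
    """Expected (Fisher) information: expectation of the observed information."""
    return n / 4 * (1 / (2 + t) + 2 / (1 - t) + 1 / t)

t_hat = brentq(score, 1e-6, 1 - 1e-6)   # MLE of the linkage parameter
print(t_hat, observed_info(t_hat), expected_info(t_hat))
```

At the MLE the two information quantities differ (here the observed exceeds the expected), which is exactly the situation in which the choice of precision measure, and hence of conditioning, matters.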

Abstract Report 2007:22 A classical dataset from Williams, and its role in the study of supersaturated designs
A PlackettBurman type dataset from a paper by Williams (1968), with 28 observations and 24 two-level factors, has become a standard dataset for illustrating construction (by halving) of supersaturated designs (SSDs) and for a corresponding data analysis. The aim here is to point out that for several reasons this is an unfortunate situation. The original paper by Williams contains several errors and misprints. Some are in the design matrix, which will here be reconstructed, but worse is an outlier in the response values, which can be observed when data are plotted against the dominating factor. In addition, the data should better be analysed on log-scale than on original scale. The implications of the outlier for SSD analysis are drastic, and it will be concluded that the data should be used for this purpose only if the outlier is properly treated (omitted or modified).
Key words: half-fraction, log-transformation, outlier, Plackett-Burman, SSD
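The diagnostic the abstract describes, plotting (or regressing) the response against the dominating factor on log scale and looking for a gross outlier, can be sketched as follows. The data here are synthetic (28 runs, one dominant two-level factor, one planted outlier), not the Williams dataset.

```python
# Hedged sketch on synthetic data (NOT the Williams data): flag an outlier
# by regressing log(response) on a single dominating two-level factor.
import numpy as np

rng = np.random.default_rng(0)
x = np.repeat([-1.0, 1.0], 14)                  # dominating +/-1 factor, 28 runs
log_y = 3.0 + 0.8 * x + rng.normal(0.0, 0.1, 28)
y = np.exp(log_y)
y[5] *= 10.0                                    # plant one gross outlier

ly = np.log(y)                                  # analyse on log scale
b = np.polyfit(x, ly, 1)                        # straight-line fit in the factor
resid = ly - np.polyval(b, x)
flagged = int(np.argmax(np.abs(resid)))         # run with the largest residual
print(flagged)
```

On the original scale the same residual check is much less clear-cut, which is in line with the abstract's recommendation to work on log scale.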