Bird Study (1999) 46 (suppl.), S120-139

Program MARK: survival estimation from populations of marked animals

GARY C. WHITE* and KENNETH P. BURNHAM, Department of Fishery and Wildlife Biology, Colorado State University, Fort Collins, CO 80523, USA and Colorado Cooperative Fish and Wildlife Research Unit, Colorado State University, Fort Collins, CO 80523, USA

MARK provides parameter estimates from marked animals when they are re-encountered at a later time as dead recoveries, or live recaptures or re-sightings. The time intervals between re-encounters do not have to be equal. More than one attribute group of animals can be modelled. The basic input to MARK is the encounter history for each animal. MARK can also estimate the size of closed populations. Parameters can be constrained to be the same across re-encounter occasions, or by age, or group, using the parameter index matrix. A set of common models for initial screening of data is provided. Time effects, group effects, time × group effects and a null model of none of the above are provided for each parameter. Besides the logit function to link the design matrix to the parameters of the model, other link functions include the log-log, complementary log-log, sine, log and identity. The estimates of model parameters are computed via numerical maximum likelihood techniques. The number of parameters that are estimable in the model is determined numerically and used to compute the quasi-likelihood AIC value for the model. Both the input data and outputs for various models that the user has built are stored in the Results database, which contains a complete description of the model-building process. It is viewed and manipulated in a Results Browser window. Summaries available from this window include viewing and printing model output, deviance residuals from the model, likelihood ratio and analysis of deviance between models, and adjustments for over-dispersion.
Models can also be retrieved and modified to create additional models. These capabilities are implemented in a Microsoft Windows 95 interface. The online help system has been developed to provide all necessary program documentation.

Expanding human populations and extensive destruction and alteration of habitats continue to affect the world's fauna and flora. As a result, monitoring of biological populations has begun to receive increasing emphasis in most countries, including the less developed areas of the world. Use of marked individuals and capture-recapture theory play an important role in this process. Risk assessment in higher vertebrates can be done within the framework of capture-recapture theory. Population viability analyses must rely on estimates of vital rates of a population; often these can only be derived from the study of uniquely marked animals. The richness component of biodiversity can often be estimated in the context of closed-model capture-recapture. Finally, the monitoring components of adaptive management can be rigorously addressed in terms of the analysis of data from marked populations. Capture-recapture surveys have been used as a general sampling and analysis method to assess population status and trends in many biological populations. The use of marked

*Correspondence author.

© 1999 British Trust for Ornithology

individuals is analogous to the use of various tracers in studies of physiology, medicine and nutrient cycling. Recent advances in technology allow a wide variety of marking methods.

The motivation for developing MARK was to bring a common programming environment to the estimation of survival from marked animals. Marked animals can be re-encountered as either live or dead, in a variety of experimental frameworks. Prior to MARK, no program either easily combined the estimation of survival from both live and dead re-encounters, or allowed for the modelling of capture and recapture probabilities in a general modelling framework for estimation of population size in closed populations. We describe the general features of MARK and provide users with a general idea of how the program operates. Documentation for the program is provided in the Help file that is distributed with the program. Specific details for all the menu options and dialogue controls are provided in the Help file. We assume the reader is familiar with the Cormack-Jolly-Seber, capture-recapture and recovery models, including concepts like the logit link to incorporate covariates into models with a design matrix, multi-group models, model selection with Akaike's information criterion (AIC), and maximum likelihood parameter estimation. Cooch et al. explain many of these basics in a general primer, although the material is focused on the SURGE program. This paper describes how these methods can be used in MARK.

TYPES OF ENCOUNTER DATA USED BY MARK

MARK provides parameter estimates for five types of re-encounter data: (1) Cormack-Jolly-Seber models (live animal recaptures that are released alive), (2) band or ring recovery models (dead animal recoveries), (3) models with both live and dead re-encounters, (4) known fate (e.g. radiotracking) models and (5) some closed capture-recapture models.
Live recaptures

Live recaptures are the basis of the standard Cormack-Jolly-Seber (CJS) model. Marked animals are released into the population, usually by trapping them from the population. The marked animals are re-encountered either by catching them alive and re-releasing them, or often just by visual resighting. If marked animals are released into the population on occasion 1, then each succeeding capture occasion is one encounter occasion. Consider the following scenario:

Live releases → seen: encounter history 11, probability φp
Live releases → not seen (alive): encounter history 10, probability φ(1 − p)
Live releases → dead or emigrated: encounter history 10, probability 1 − φ

Animals survive from initial release to the second encounter with probability S_1, from the second encounter occasion to the third encounter occasion with probability S_2, and so on. The recapture probability at encounter occasion two is p_2, at encounter occasion three it is p_3, and so on. At least two re-encounter occasions are required to estimate the survival probability (S_1) between the first release occasion and the next encounter occasion in the full time-effects model. The survival probability between the last two encounter occasions is not estimable in the full time-effects model because only the product of survival and recapture probability for this occasion is identifiable. Generally, as in the diagram, the conditional survival probabilities of the CJS model are labelled as φ_1, φ_2, etc., because the quantity estimated is the probability of remaining available for recapture. Animals that emigrate from the study area are not available for recapture and so appear to have died in this model. Thus, φ_i = S_i(1 − E_i), where E_i is the probability of emigrating from the study area, and φ_i is termed the 'apparent' survival, because it is the probability that the animal remains alive and available for recapture and not, technically, the survival probability of marked animals in the population.
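To make the apparent-survival and recapture parameters concrete, the cell probabilities of the CJS model can be sketched in a few lines of Python. This is an illustrative fragment under the full time-effects model, not MARK's implementation; the function name and indexing convention are ours.

```python
def cjs_history_prob(history, phi, p):
    """Probability of one capture history under the CJS model,
    conditional on first release.
    history: string of '1'/'0'; the first '1' marks the release.
    phi[i]: apparent survival over interval i (0-indexed).
    p[i]:   recapture probability at the occasion ending interval i.
    """
    first = history.index('1')
    last = history.rindex('1')
    prob = 1.0
    # Between first release and last encounter: survive each interval,
    # and be seen or missed at each intervening occasion.
    for i in range(first, last):
        prob *= phi[i]
        if history[i + 1] == '1':
            prob *= p[i]            # recaptured at the next occasion
        else:
            prob *= 1.0 - p[i]      # alive but not seen

    def chi(i):
        # Probability of never being encountered after occasion i:
        # either dies/emigrates, or survives unseen and repeats.
        if i == len(history) - 1:
            return 1.0
        return (1.0 - phi[i]) + phi[i] * (1.0 - p[i]) * chi(i + 1)

    return prob * chi(last)
```

With one interval (phi = 0.8, p = 0.5), the histories '11' and '10' have probabilities 0.4 and 0.6, which sum to 1, illustrating why only the product of the final survival and recapture probabilities is identifiable.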
Estimates of population size (N) or births and immigration (B) of the Jolly-Seber model are not provided in MARK, as they are in POPAN or JOLLY and JOLLYAGE.

Dead recoveries

With dead recoveries (i.e. band, fish tag, or ring recovery models), animals are captured from the population, marked and released back into the population at each occasion. Later, marked animals are encountered as dead animals, typically from harvest or just found dead (e.g. gulls). The following diagram illustrates this scenario:

Live releases → live: encounter history 10, probability S
Live releases → dead, reported: encounter history 11, probability (1 − S)r
Live releases → dead, not reported: encounter history 10, probability (1 − S)(1 − r)

Marked animals are assumed to survive from one release to the next with survival probability S_i. If they die, the dead marked animals are reported during each period between releases with probability r_i. The survival probability and reporting probability prior to the last release cannot be estimated individually in the full time-effects model but only as a product. This parameterization differs from that of Brownie et al. in that their f_i is replaced by f_i = (1 − S_i) r_i. The r_i are equivalent to the k_i of life-table models. The reason for making this change is so that the encounter process, modelled with the r_i parameters, can be separated from the survival process, modelled with the S_i parameters. With the f parameterization, the two processes are both part of this parameter. Hence, developing more advanced models with the design matrix options of MARK is difficult, if not illogical, with the f parameterization. However, the negative side of this new parameterization is that the last S_i and r_i are confounded in the full time-effects model, since only the product (1 − S_i) r_i is identifiable, and hence estimable.

Both live and dead encounters

The model for the joint live and dead encounter data type was first published by Burnham, but with a slightly different parameterization than is used in MARK. In MARK, the dead encounters are not modelled with the f of Burnham, but rather as f_i = (1 − S_i) r_i, as discussed above for the dead encounter models.
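The S/r parameterization of dead recoveries can be sketched by writing out the expected cell probability for a recovery matrix: an animal released at occasion i and recovered in interval j must survive intervals i to j−1, die in interval j, and be reported. This is an illustration of the parameterization, not MARK code; the function name is ours.

```python
def recovery_cell_prob(S, r, i, j):
    """Probability that an animal released at occasion i (0-indexed)
    is recovered dead in interval j (j >= i), under the S/r
    parameterization in which the Brownie et al. recovery rate is
    f_j = (1 - S_j) * r_j."""
    prob = 1.0
    for k in range(i, j):
        prob *= S[k]                    # survives intervals i .. j-1
    return prob * (1.0 - S[j]) * r[j]   # dies in interval j, is reported
```

For example, with S = [0.7, 0.6] and r = [0.2, 0.3], a bird ringed at occasion 1 is recovered in the first interval with probability (1 − 0.7) × 0.2 = 0.06, showing how the survival and reporting processes factor apart.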
This method is a combination of the two above but allows the estimation of fidelity, F_i = (1 − E_i), the probability that the animal remains on the study area and is available for capture. As a result, the estimates of S_i are estimates of the survival probability of the marked animals and not the apparent survival (φ_i = S_i F_i), as discussed for the live encounter model. In the models discussed above, live captures and resightings, modelled with the p_i parameters, are assumed to occur over a short time-interval, whereas dead recoveries, modelled with the r_i parameters, extend over a much longer time-interval. The actual time of the dead recovery is not used in the estimation of survival for two reasons. First, it is often not known. Second, even if the exact time of recovery is known, little information is contributed if the recovery probability (r_i) varies during the time-interval.

Known fates

Known fate data assume that there are no nuisance parameters involved with animal captures or resightings. The data derive from radiotracking studies, although some radiotracking studies fail to follow all the marked animals and so would not meet the assumptions of this model (they would then need to be analysed by mark-encounter models). This scenario is illustrated by:

Release → (S_1) Encounter → (S_2) Encounter → (S_3) Encounter ...

where the probability of encounter on each occasion is 1.0 if the animal is alive.

Closed captures

Closed-capture data assume that all survival probabilities are 1.0 across the short time-intervals of the study. Thus, survival is not estimated. Rather, the probability of first capture (p_i) and the probability of recapture (c_i) are estimated, along with the number of animals in the population (N). The following diagram illustrates this scenario:

Occasion 1   Occasion 2   Occasion 3   Occasion 4 ...
p_1          p_2          p_3          p_4
             c_2          c_3          c_4

(p_i = first encounter; c_i = additional encounter(s))

where the c_i recapture probability parameters are shown under the initial capture (p_i) parameters. This data type is the same as that analysed with CAPTURE. All the likelihood models in CAPTURE can be duplicated in MARK. However, MARK allows additional likelihood-based models not available in CAPTURE, plus comparisons between groups and the incorporation of time-specific and/or group-specific covariates into the model. The main limitation of MARK for closed capture-recapture models is the lack of models incorporating individual heterogeneity. Individual covariates cannot be used in MARK with this data type because existing models in the literature have not yet been implemented.

Other models

Models for other types of encounter data are also available in MARK, including the robust design model, multi-strata model, Barker's extension to the joint live and dead encounters model, ring recovery models where the number of birds marked is unknown, and the Brownie et al. parameterization of ring recovery models.

PROGRAM OPERATION

MARK is operated by a Windows 95 interface. A 'batch' file mode of operation is available in that the numerical procedure reads an ASCII input file. However, models are constructed interactively with the interface much more easily than by creating them manually in an ASCII input file. Interactive and context-sensitive help is available at any time while working in the interface program. The help system is constructed with the Windows help system and provides full documentation of the program. All analyses in MARK are based on encounter histories. To begin construction of a set of models for a data set, the data must first be read by MARK from the Encounter Histories file. Next, the Parameter Index matrices (PIMs) can be manipulated, followed by the Design matrix.
These tools provide the model specifications to construct a broad array of models. Once the model is fully specified, the Run window is opened, where the link function is specified (and currently must be the same for all parameters), parameters can be 'fixed' to specific values and the name of the model is specified. Once a set of models has been constructed, likelihood-ratio tests between models can be computed, or analysis of deviance (ANODEV) tables constructed.

To begin an analysis, an Encounter Histories file must be created using an ASCII text editor (e.g. the Windows 95 Notepad or Wordpad editors). The format of the Encounter Histories file is similar to that of RELEASE and is identical to RELEASE for live recapture data without covariates. MARK does not provide data management capabilities for the encounter histories. Once the Encounter Histories file is created, MARK is started and File | New selected. The dialogue box shown in Fig. 1 appears. As shown in Fig. 1, you are requested to enter the number of encounter occasions, number of groups (e.g. sexes, ages, areas), number of individual covariates and, only if the multi-strata model was selected, the number of strata. Each of these variables can have additional input (by clicking the push button next to the input box). The type of data is specified on the left side, and the title for the data and name of the Encounter Histories file at the top (with one push button to help find the file, plus a second to examine its contents once selected). At the bottom of the screen is the OK button for proceeding once the necessary information is entered, a Cancel button to quit and a Help button to access the help file. When any of the controls on the window are highlighted, the Shift-F1 key can be used to obtain context-sensitive help for that specific control.

Encounter Histories file

To provide the data for parameter estimation, the Encounter Histories file is used.
This file contains the raw data on encounter histories

needed by MARK.

Figure 1. Dialogue box requesting the information to begin an analysis.

The format of the file depends on the data type, although all data types allow a conceptually similar encounter history format. The convention of MARK is that this file name ends in the INP suffix. The root part of an Encounter Histories file name dictates the name of the dBase file used to hold model results. For example, the input file MULEDEER.INP would produce a Results file with the name MULEDEER.DBF and two additional files (MULEDEER.FPT and MULEDEER.CDX) that would contain the memo fields holding model output and index ordering, respectively. Once the DBF, FPT and CDX files are created, the INP file is no longer needed. Its contents are now part of the DBF file.

Encounter Histories files do not contain any PROC statements (as in RELEASE) but only encounter histories or recovery matrices. The input file can have group label statements and comment statements as a reminder of its contents. These statements must end with a semicolon. The interactive interface adds the necessary program statements to produce parameter estimates with the numerical algorithm based on the model specified. The Encounter Histories file is incorporated into the Results database created to hold parameter estimates and other results. Because all results in the Results database depend on the encounter histories not changing, the input cannot be changed. Even if the values in the Encounter Histories file are changed, the Results file will not change. The only way to produce models from a changed Encounter Histories file is to incorporate the changed file into a new Results database (hence, start again). The Encounter Histories file can be viewed in the Results database by listing it in the results for a model and checking the list data checkbox in the Run window. Some simplified examples of Encounter Histories files follow.
The full set of data for each example is provided both in the MARK Help file and as an example input file distributed with the program. The first is a live recapture data set with two groups and six encounter occasions. Note that the release occasion is counted as an encounter occasion (in contrast to the protocol used in SURGE). The encounter history is coded for just the live encounters (i.e. LLLLLL) and the initial capture is counted as an encounter occasion. The encounter histories are given with '1' indicating a live capture or recapture and '0' meaning not captured. The number of animals in each group follows the encounter history. Negative values indicate animals that were not released again (i.e. losses on capture). The following example is the partial input for the example data in Burnham et al. (page 9). As might be expected, any input file used with

RELEASE will work with MARK if the RELEASE-specific PROC statements are removed.

[Example live-recapture encounter histories omitted.]

Next, a joint live recapture and dead recovery encounter histories file is shown, with only one group but five encounter occasions. The encounter histories are alternating live (L) recaptures and dead (D) recoveries (i.e. LDLDLDLDLD), with '1' indicating a recapture or recovery and '0' indicating no encounter. Encounter histories always start with an L and end with a D in MARK (this is not restrictive if the actual data end with a live capture). The number after the encounter history is the number of animals with this history for group one, just as with RELEASE. Following the frequency for the last group, a semi-colon must be used to end the statement.

[Example joint live and dead encounter histories omitted.]

The next example is the summarized input for a dead recovery data set with 15 release occasions and 15 years of recovery data. Even though the raw data are read as recovery matrices, encounter histories are created internally in MARK. The triangular matrices represent two groups: adults, followed by young. Following each upper-triangular matrix is the number of animals marked and released into the population each year. This format is similar to that used by Brownie et al. The input lines identified with the phrase 'recovery matrix' are required to identify the input as a recovery matrix and are not interpreted as an encounter history (Table 1).

No automatic capability to handle non-triangular recovery matrices has been included in MARK. Non-triangular matrices are generated when marking of new animals ceases but recoveries continue. Such data sets can be handled by forming a triangular recovery matrix with the addition of zeros for the number ringed and recovered. During analysis, the parameters associated with these zero data should be fixed to logical values (i.e. r_i = 0) to reduce numerical problems with non-estimable parameters.
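The encounter-history statements in these examples share a simple line format: a history string, one frequency per group, and a terminating semicolon. A minimal reader for such lines might look like the following sketch (assuming only the conventions described above; parse_inp_line is our name, not part of MARK).

```python
def parse_inp_line(line):
    """Parse one encounter-histories statement such as '110100 12 5;'.
    Returns (history, [frequencies per group]); a negative frequency
    flags animals not released again (losses on capture). Blank lines
    and '/* ... */' comment lines yield None. Simplified sketch only."""
    line = line.strip()
    if not line or line.startswith('/*'):
        return None
    fields = line.rstrip(';').split()
    history = fields[0]
    freqs = [int(f) for f in fields[1:]]
    return history, freqs
```

For instance, parse_inp_line("110100 12 5;") yields the history '110100' with frequencies 12 and 5 for two groups.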
Dead recoveries can also be coded as encounter histories in the LDLDLDLDLDLD format. The following is an example of dead recoveries only, in which an animal is never captured alive after its initial capture (i.e. none of the encounter histories has more than a single '1' in an L column). This example has 15 encounter occasions and one group.

[Example dead-recovery encounter histories omitted.]

The next example is the input for a data set with known fates. As with dead recovery matrices, the summarized input is used to create encounter histories. Each line presents the number of animals monitored for one time-interval, in this case a week. The first value is the number of animals monitored, followed by the number that died during the interval. Each group is provided in a separate matrix. In the following example, a group of Black Ducks is monitored for eight weeks. Initially, 8 ducks were monitored, and one died during the first week. The remaining 7 ducks were monitored during the second week, when two died. However, four more were lost from the study for other reasons, resulting in a reduction in the numbers of animals monitored. As a result, only 1 duck was available for monitoring

Table 1. Example recovery-matrix input for two groups.

recovery matrix group = 1;
[matrix entries omitted]
recovery matrix group = 2;
[matrix entries omitted]

during the third week. The phrase 'known fate' on the first line of input is required to identify the input as this special format, instead of the usual encounter history format.

known fate group = 1;
[weekly counts of animals monitored and deaths omitted]

The following input is an example of a closed capture-recapture data set. The capture histories are specified as a single '1' or '0' for each occasion, representing captured (1) or not captured (0). Following the capture history is the frequency, or count, of the number of animals with this capture history, for each group. In the example, two groups are provided. Individual covariates are not allowed with closed captures because the models of Huggins have not been implemented in MARK. This data set could have been stored more compactly by not representing each animal on a separate line of input. The advantage of entering the data as shown, with only a '0' or '1' as the capture frequency, is that the input file can also be used with CAPTURE.
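Because the encounter probability is 1.0 for live animals, the known-fate likelihood reduces to a binomial survival estimate within each interval, and the overall survival is the product of the interval estimates. A sketch (our own, with hypothetical counts in the test; censored animals simply drop out of the next interval's total, as in the Black Duck example):

```python
def known_fate_survival(intervals):
    """Interval and overall survival estimates from known-fate data.
    intervals: list of (n_monitored, n_died) pairs, one per
    time-interval. Animals lost to the study for other reasons are
    handled by reducing n_monitored in the next interval.
    Returns (list of interval estimates, product estimate)."""
    estimates = [(n - d) / n for n, d in intervals]
    overall = 1.0
    for s in estimates:
        overall *= s        # product over the monitored intervals
    return estimates, overall
```

This is the familiar binomial/Kaplan-Meier-style calculation; the point of MARK's known-fate data type is to embed it in the same PIM and design-matrix machinery as the other models.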

[Example closed capture-recapture encounter histories omitted.]

Additional examples of Encounter History files are provided in the help document distributed with the program.

Parameter Index matrices

The Parameter Index matrices (PIMs) allow constraints to be placed on the parameter estimates. There is a parameter matrix for each type of basic parameter in each group model, with each parameter matrix shown in its own window. As an example, suppose that two groups of animals are marked. Then, for live recaptures, two Apparent Survival (φ) matrices (windows) would be displayed and two Recapture Probability (p) matrices (windows) would be shown. Likewise, for dead recovery data for two groups, two Survival (S) matrices and two Reporting Probability (r) matrices would be used, for four windows. When both live and dead encounters are modelled, each group would have four parameter types: S, r, p and F. Thus, eight windows would be available. Only the first window is opened by default. Any (or all) of the PIM windows can be opened from the PIM menu option. Likewise, any PIM window that is currently open can be closed and later opened again.

The Parameter Index matrices determine the number of basic parameters that will be estimated (i.e. the number of rows in the design matrix) and, hence, the PIMs must be constructed before use of the Design Matrix window. PIMs may reduce the number of basic parameters, with further constraints provided by the design matrix. Commands are available to set all the parameter matrices to a particular format (e.g. all constant, all time-specific or all age-specific) or to set the current window to a particular format. Included on the PIM window are push buttons to close the window (values of parameter settings are not lost; they are just not displayed), display the Help screen, display the PIM chart to show the relationship among the PIM values, and to increment or decrement (+ and -, respectively) all the index values in the PIM window by one.
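The way shared indices in a PIM constrain parameters to be equal can be sketched programmatically. The following toy builder (our own simplification, covering only apparent-survival PIMs for live-recapture data) reproduces the triangular index patterns used in the examples that follow: distinct indices for a group × time model, shared indices across groups for a time-only model, and a single index per group when time effects are dropped.

```python
def build_pims(n_occasions, n_groups, time_effect=True, group_effect=True):
    """Construct apparent-survival PIMs, one triangular matrix per
    group. Rows are release cohorts, columns are survival intervals;
    entries are 1-based parameter indices. Sharing an index between
    cells constrains those parameters to be equal."""
    k = n_occasions - 1                          # survival intervals
    pims = []
    for g in range(n_groups):
        rows = []
        for cohort in range(k):
            row = []
            for interval in range(cohort, k):
                if time_effect and group_effect:
                    idx = g * k + interval + 1   # phi(g x t)
                elif time_effect:
                    idx = interval + 1           # phi(t): groups share
                elif group_effect:
                    idx = g + 1                  # phi(g): times share
                else:
                    idx = 1                      # phi(.)
                row.append(idx)
            rows.append(row)
        pims.append(rows)
    return pims
```

With five occasions and two groups, the group × time PIMs use indices 1-4 for group 1 and 5-8 for group 2, matching the worked example below.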
Parameter matrices can be manipulated to specify various models. The following are the parameter matrices for live recapture data to specify a {φ(g × t) p(g × t)} model for a data set with five encounter occasions (resulting in four survival intervals and four recapture occasions) and two groups.

Apparent Survival group 1:
1 2 3 4
  2 3 4
    3 4
      4

Apparent Survival group 2:
5 6 7 8
  6 7 8
    7 8
      8

Recapture Probabilities group 1:
9 10 11 12
  10 11 12
     11 12
        12

Recapture Probabilities group 2:
13 14 15 16
   14 15 16
      15 16
         16

In this example, parameter 1 is apparent survival for the first interval for group 1 and parameter 2 is apparent survival for the second interval for group 1. Parameter 7 is apparent survival for the third interval for group 2 and parameter 8 is apparent survival for the fourth interval for group 2. Parameter 9 is the recapture probability for the second occasion for group 1 and parameter 10 is the recapture probability for the third occasion for group 1. Parameter 15 is the recapture probability for the fourth occasion for group 2 and parameter 16 is the recapture probability for the fifth occasion for group 2. Note that the capture probability for the first occasion does not appear in the Cormack-Jolly-Seber model and thus does not appear in the parameter index matrices for recapture probabilities. To reduce this model to {φ(t) p(t)}, the following parameter matrices would work.

Apparent Survival groups 1 and 2 (shared):
1 2 3 4
  2 3 4
    3 4
      4

Recapture Probabilities groups 1 and 2 (shared):
5 6 7 8
  6 7 8
    7 8
      8

In the above example, the parameters by time and type are constrained to be equal across groups. The following parameter matrices have no time effect but do have a group effect. Thus, the model is {φ(g) p(g)}.

Apparent Survival group 1:
1 1 1 1
  1 1 1
    1 1
      1

Apparent Survival group 2:
2 2 2 2
  2 2 2
    2 2
      2

Recapture Probabilities group 1:
3 3 3 3
  3 3 3
    3 3
      3

Recapture Probabilities group 2:
4 4 4 4
  4 4 4
    4 4
      4

Additional examples of PIMs demonstrating age and cohort models are provided with the program's help system. Also, Cooch et al. provide many more examples of how to structure the parameter indices. The Parameter Index chart displays graphically the relationship between parameters across the attribute groups and time (Fig. 2). The concept of a PIM derives from SURGE, with graphical manipulation of the PIM first demonstrated by SURPH. However, a key difference in the implementation in MARK from that in SURGE is that the PIMs can allow overlap of parameter indices across parameter types, and thus the same parameter estimate could be used for both a φ and a p. Although such a model is unlikely for just live recaptures or dead recoveries, we can visualize such models with joint live and dead encounters.

Design matrix

Additional constraints can be placed on parameters with the Design matrix. The concept of a design matrix comes from general linear models (GLM). The design matrix (X) is multiplied by the parameter vector (β) to produce the original parameters (θ: φ, S, p, r, etc.) via a link function. For instance, logit(θ) = Xβ uses the logit link function to link the design matrix to the vector of original parameters (θ). The elements of θ are the original parameters, whereas the columns of matrix X correspond to the reduced parameter vector β. The vector β can be thought of as the 'likelihood parameters' because they appear in the likelihood. They are linked to the derived, or real, parameters θ.
Assume that the PIM model is {φ(g × t) p(g × t)}, as shown above for five encounter occasions. To specify the fully additive {φ(g + t) p(g + t)} model, where parameters vary temporally in parallel, the Design matrix must be used. The Design matrix is opened with the Design menu option. It always has the same number of rows as there are parameters in the PIMs, but the number of columns can be variable. A choice of Full or Reduced is offered when the Design menu option is selected. The full Design matrix has the same number of columns as rows and defaults to an identity matrix, whereas the reduced Design matrix allows you to specify the number of columns and is initialized to all zeros. Each parameter specified in the PIMs will now be computed as a linear combination of the columns of the design matrix. Each parameter in the PIMs has its own row in the matrix. Thus, the Design matrix provides a set of constraints on the parameters in the PIMs by reducing the number of parameters (number of rows) from the number of unique values in the PIMs to the number of columns in the matrix. The concept of a design matrix for use with capture-recapture data was taken from SURGE. However, in MARK a single design matrix

Figure 2. Example of a Parameter Index chart depicting the {φ(t) p(g)} model for live recapture (Cormack-Jolly-Seber) data with six encounter occasions (resulting in five survival intervals) and two attribute groups. This chart allows verification of the parameter indices without manual checking of each PIM window.

applies to all parameters, whereas in SURGE the design matrices are specific to a parameter type (i.e. either apparent survival or recapture probability). The more general implementation in MARK allows parameters of different types to be modelled by the same function of one or more covariates. As an example, parallelism could be enforced between live recapture and dead recovery probabilities, or between survival and fidelity. Also, unlike SURGE, MARK does not assume an intercept in the design matrix. If no design matrix is specified, the default is an identity matrix, where each parameter in the PIMs corresponds to a column in the design matrix. MARK always constructs and uses a design matrix. The following is an example of a design matrix for the additive {φ(g + t) p(g + t)} model used to demonstrate various PIMs with five encounter occasions. In this model, the time effect is the same for each group, with the group effect additive to this time effect. In the following matrix, column 1 is the group effect for apparent survival, columns 2-5 are the time effects for apparent survival, column 6 is the group effect for recapture probabilities and columns 7-10 are the time effects for the recapture probabilities. The first four rows correspond to the φ for group 1, the next four rows to φ for group 2, the next four rows to p for group 1, and the last four rows to p for group 2. Note that the coding for the group effect dummy variable is '0' for the first group and '1' for the second group.
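The 16 × 10 design matrix just described can be generated mechanically; the following sketch (our own construction, using the column layout given in the text: a 0/1 group dummy followed by one column per time effect, for φ and then for p) makes the row and column bookkeeping explicit.

```python
def additive_design_matrix(n_intervals=4, n_groups=2):
    """Design matrix X for the additive {phi(g+t) p(g+t)} model with
    five encounter occasions and two groups. Columns (1-based):
    1 = group effect for phi, 2-5 = time effects for phi,
    6 = group effect for p, 7-10 = time effects for p.
    Rows: four phi's for group 1, four for group 2, then likewise
    for p. Returned as a list of row lists."""
    n_cols = 2 * (1 + n_intervals)
    rows = []
    for block in range(2):                    # 0 = phi rows, 1 = p rows
        offset = block * (1 + n_intervals)
        for g in range(n_groups):
            for t in range(n_intervals):
                row = [0] * n_cols
                row[offset] = g               # group dummy: 0 or 1
                row[offset + 1 + t] = 1       # time-effect column
                rows.append(row)
    return rows
```

Because each parameter row picks up one time column plus (for group 2) the group column, the time effects are forced to be the same for both groups, which is exactly the 'parallel on the link scale' meaning of the additive model.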

The design matrix can also be used to provide additional information from time-varying covariates. As an example, suppose rainfall (in cm) was measured for each of the four survival intervals in the above example. This information could be used to model survival effects with the following model, where each group has a different intercept for survival but a common slope. The model name would be {φ(g + rainfall) p(g + t)}. Column 1 codes the intercept, column 2 codes the offset of the intercept for group 2, and column 3 codes the rainfall variable.

A similar design matrix could be used to evaluate differences in the trend of survival for the two groups. The rainfall covariate in column 3 is replaced by a time-trend variable, resulting in an analysis of covariance on the φ's (parallel regressions with different intercepts).

Individual covariates are also incorporated into an analysis via the Design matrix. By specifying the name of an individual covariate in a cell of the Design matrix, MARK is told to use the value of this covariate in the Design matrix when the capture history for each individual is used in the likelihood. As an example, suppose that two individual covariates are included: age (0 = subadult, 1 = adult) and weight at time of initial capture. The names given to these variables are, naturally, AGE and WEIGHT and were assigned in the dialogue box where the encounter histories file was specified (Fig. 1). The following design matrix would use weight as a covariate, with an intercept term and a group effect. The model name would be {φ(g + weight) p(g + t)}.

1 0 weight
1 0 weight
1 0 weight
1 0 weight
1 1 weight
1 1 weight
1 1 weight
1 1 weight

Each of the eight apparent survival probabilities would be modelled with the same slope parameter for WEIGHT, but on an individual animal basis. Thus, time is not included in the relationship. However, a group effect is included in survival, so that each group would have a different intercept.
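Under the logit link, the {φ(g + weight)} rows above translate directly into a regression on the logit scale. The following illustrative fragment (the function and parameter names are ours, not MARK output labels) shows how one animal's apparent survival would be computed from its group and weight:

```python
import math

def phi_from_weight(beta_intercept, beta_group, beta_weight, group, weight):
    """Apparent survival for one animal under the {phi(g + weight)}
    design rows above: logit(phi) = intercept
                                  + group-offset * I(group == 2)
                                  + slope * weight.
    All beta names are illustrative, not MARK's labels."""
    x = beta_intercept + beta_group * (group == 2) + beta_weight * weight
    return math.exp(x) / (1.0 + math.exp(x))   # inverse logit
```

With a positive weight slope, heavier animals get higher apparent survival, and the group offset shifts the intercept without changing the slope, which is the additive structure the design matrix encodes.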
For example, to test the hypothesis that the relationship between survival and weight changes with each time-interval, the following design matrix (Table 1) would allow four different weight relationships, one for each survival interval. The model would be named {φ[g + (t x weight)] p(g + t)}.

Table 1. Design matrix for the {φ[g + (t x weight)] p(g + t)} model (apparent survival rows).

1 0 weight 0 0 0
1 0 0 weight 0 0
1 0 0 0 weight 0
1 0 0 0 0 weight
1 1 weight 0 0 0
1 1 0 weight 0 0
1 1 0 0 weight 0
1 1 0 0 0 weight

Further examples of how to construct the design matrix are provided in the MARK interactive help system, and numerous menu options are described to avoid having to build a design matrix manually.

Run window

The Run window is opened by selecting the Run menu option. This window (Fig. ) allows specification of the Run Title, Model Name, parameters to Fix to a specified value, and the Link function and Variance Estimation algorithm to be used. Various program options can be selected, such as whether to list the raw data (encounter histories) and/or variance-covariance matrices in the output, plus other program options that concern the numerical optimization of the likelihood function to obtain parameter estimates. The Title entry box allows you to modify the title printed on the output for a set of data. Normally, this should be set for the first model for which parameters are estimated, and then left unchanged. The Model Name entry box is where a name for the model you have specified with the PIMs and Design matrix is entered. Various model naming conventions have developed in the literature, but we prefer the procedure of Lebreton et al.

Figure. Example of the Run window for MARK.

Fixing parameters

Parameters can be fixed, rather than estimated from the data, by setting them to a specified value using the Fix parameters button on the Run window. A dialogue window, listing all the parameters, is presented, with edit boxes in which to enter the desired value for the fixed parameter. Fixing a parameter is a useful method for determining whether it is confounded with another parameter. For example, the last φ and p are confounded in the time-specific Cormack-Jolly-Seber model, as are the last S and r in the time-specific dead recoveries model. You can set the last p or r to 1.0 to force the survival parameter to be estimated as the product.

Link functions

A link function links the linear model specified in the design matrix with the survival, recapture, reporting and fidelity parameters specified in the PIMs. MARK supports six different link functions. The default is the sine function because it is generally more computationally efficient for parameter estimation. For the sine link function, the linear combination from the design matrix (Xβ, where β is the vector of parameters and X is the design matrix) is converted to the interval [0, 1] by the link function:

parameter value = [sin(Xβ) + 1]/2.

Other link functions include the logit:

parameter value = exp(Xβ)/[1 + exp(Xβ)],

the log-log:

parameter value = exp[-exp(Xβ)],

the complementary log-log:

parameter value = 1 - exp[-exp(Xβ)],

the log:

parameter value = exp(Xβ),

and the identity:

parameter value = Xβ.

The log and identity link functions do not constrain the parameter value to the interval [0, 1] and so can cause numerical problems when optimizing the likelihood. MARK uses the sine link function to obtain initial estimates for parameters, and then transforms the estimates to the parameter space of the log or identity link functions. It then re-optimizes the likelihood function when those link functions are requested.
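The six link functions can be written out as a quick sketch. This is illustrative only; the exact sign convention of the log-log form follows the reconstruction above, and MARK computes these internally.

```python
import numpy as np

# The six link functions listed above, each mapping the linear predictor
# eta = X @ beta to a parameter value. Sketch only; the log-log sign
# convention here follows the reconstructed formulas above.
links = {
    "sine":     lambda eta: (np.sin(eta) + 1.0) / 2.0,
    "logit":    lambda eta: np.exp(eta) / (1.0 + np.exp(eta)),
    "loglog":   lambda eta: np.exp(-np.exp(eta)),
    "cloglog":  lambda eta: 1.0 - np.exp(-np.exp(eta)),
    "log":      lambda eta: np.exp(eta),
    "identity": lambda eta: eta,
}

# At eta = 0 the sine and logit links both give 0.5, which is the default
# starting value MARK uses for parameter estimates.
for name, f in links.items():
    print(name, f(0.0))
```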
The sine link should only be used with design matrices that contain a single '1' in each row (e.g. an identity matrix); the identity matrix is the default when no design matrix is specified. The logit link is better for non-identity design matrices, because the sine link will reflect around the parameter boundary, and not enforce monotonic relationships, when multiple 1's occur in a single row or covariates are used. The sine link is the best link function to enforce parameter values in the [0, 1] interval and yet obtain correct estimates of the number of parameters estimated, mainly because the parameter value does reflect around the interval boundary. In contrast, the logit link allows the parameter value to approach the boundary asymptotically, which can cause numerical problems and suggest that the parameter is not estimable. The identity link is the best link function for determining the number of parameters estimated when the [0, 1] interval does not need to be enforced, because no parameter is at a boundary that might be confused with the parameter not being estimable. The maximum likelihood estimate (MLE) of p must be, and always is, in [0, 1], even with an identity link. However, φ or S can mathematically exceed one, yet the likelihood is computable. So the exact MLE of φ or S can exceed one. The mathematics of the model do not know there is an interpretation of these parameters that says they must all be in [0, 1]. It is also mathematically possible for the MLE of r or F to be greater than one. It is only required that the multinomial cell probabilities in the likelihood stay in the interval [0, 1]. When they do not, the program has computational problems. With too general a model (i.e. more parameters than supported by the available data), it is common that the exact MLE will occur with some estimates greater than one (but this result depends on which basic model is used).

Variance estimation

Four different procedures are provided in MARK to estimate the variance-covariance matrix of the estimates. The first (option Hessian) is the inverse of the Hessian matrix obtained as part of the numerical optimization of the likelihood function. This approach is not reliable, in that the resulting variance-covariance matrix is not particularly close to the true variance-covariance matrix. It should only be used when the standard errors are not of interest and the number of parameters that were estimated is already known. The only reason for including this method in the program is that it is the fastest. The second method (option Observed) computes the information matrix (matrix of second partials of the likelihood function) from the numerical derivatives of the probabilities of each capture history, known as the cell probabilities. The information matrix is computed as the sum across capture histories of the partial derivative for cell i, times the partial derivative for cell j, times the observed cell frequency, divided by the cell probability squared. Because the observed cell frequency is used in place of the expected cell frequency, the label for this method is 'observed'. This method cannot be used with closed capture data because the likelihood involves more than just the capture history cell probabilities. The third method (option Expected) is much the same as the observed method, but instead of the observed cell frequency, the expected value (equal to the size of the cohort times the estimated cell probability) is used. This method generally overestimates the variances of the parameters because information is lost from pooling all the unobserved cells (i.e. all the capture histories that were never observed are pooled into one cell). Again, this method cannot be used with closed capture data because the likelihood involves more than just the capture history cell probabilities.
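The 'observed' information-matrix summation described above can be sketched numerically. All values here are invented for illustration; in MARK the cell probabilities and their derivatives come from the fitted model.

```python
import numpy as np

# Sketch of the 'Observed' method: the (j, k) entry of the information
# matrix is the sum over capture histories of
# (dp_h/dtheta_j)(dp_h/dtheta_k) * n_h / p_h**2, as described above.
def observed_information(cell_probs, cell_grads, obs_freqs):
    w = obs_freqs / cell_probs**2                  # per-history weight n_h / p_h^2
    return cell_grads.T @ (w[:, None] * cell_grads)

p = np.array([0.5, 0.3, 0.2])                      # made-up cell probabilities
g = np.array([[0.1, 0.0],                          # made-up derivatives, 2 params
              [0.0, 0.2],
              [-0.1, -0.2]])
n = np.array([50.0, 30.0, 20.0])                   # observed cell frequencies
I = observed_information(p, g, n)
print(I.shape)   # (2, 2)
```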
The fourth method (option 2ndPart) computes the information matrix directly, using central difference approximations. This method provides the most accurate estimates of the standard errors and is the default and preferred method. However, it requires the most computation, and is therefore the slowest, because the likelihood function has to be evaluated for a large set of parameter values to compute the numerical derivatives. Because the rank of the variance-covariance matrix is used to determine the number of parameters that were actually estimated, using different methods will sometimes result in an indication of a different number of parameters estimated, and hence a different value of the corrected AIC (AICc).

Estimation options

On the right side of the Run window, options concerning the numerical optimization procedure can be specified. 'Provide initial parameter estimates' allows the user to specify initial values for starting the numerical optimization process. This option is useful for models that do not converge well from the default starting values. 'Standardize Individual Covariates' allows the user to standardize individual covariates to values with a mean of zero and a standard deviation of one. This option is useful for individual covariates with a wide range of values that may cause numerical optimization problems. The option 'Use Alt. Opt. Method' provides a second numerical optimization procedure, which may be useful for a particularly poorly behaved model. When multiple parameters are not estimable, 'Mult. non-identifiable par.' allows the numerical algorithm to loop so as to identify these parameters sequentially, by fixing parameters determined to be not estimable to a value of 0.5. This process often fails to work well, since the user can fix non-estimable parameters to zero or one more intelligently. 'Set digits in estimates' allows the user to specify the number of significant digits to be determined in the parameter estimates.
This affects the number of iterations required to compute the estimates (the default is seven). 'Set function evaluations' allows the user to specify the number of function evaluations allowed to compute the parameter estimates (the default is 000). 'Set number of parameters' allows the user to specify the number of parameters that are estimated in the model (i.e. the program does not use singular value decomposition to determine the number of estimable parameters). This option is generally unnecessary, but does allow the user to specify the correct number of estimable parameters in cases where the program does this incorrectly.

Estimation

When the values are correctly entered in the Run window, the OK button is clicked to proceed with parameter estimation. Alternatively, the Cancel button can be clicked to return to the model specification screens. A set of Estimation windows is opened to monitor the progress of the numerical estimation process. Progress and a summary of the results are reported in the visible window. With Windows 95, this Estimation window can be minimized, and the specifications for another model can be developed while the estimation is in progress. When the estimation process is complete, the Estimation window will close, and a message box reporting the results and asking if they should be appended to the Results database will appear (possibly just as an icon at the bottom of the screen). Clicking the OK button in the message box will append the results. Many (more than six, but depending on the machine's capabilities) estimation windows can be operating at once and, eventually, each will end in this way. If an estimation window is ended prematurely by clicking on its close icon, 'No' should be clicked in the message box so that the incomplete model is not appended.

Multiple models

Often, a range of models involving time and group effects is desired at the beginning of an analysis to evaluate the importance of these effects on each of the basic parameters. The Multiple Models option, available in the Run menu selection, generates a list of models based on constant parameters across groups and time (.), group-specific parameters constant with time (g), time-specific parameters constant across groups (t) and parameters varying with both groups and time (g x t). If only one group is involved in the analysis, then only the (.) and (t) models are provided.
This list is generated for each of the basic parameter types in the model. Thus, for live recapture data with more than one group, a total of 16 models is provided to select for estimation (four models of φ times four models of p). Similarly, for a joint live recapture and dead recovery model with only one group, each of the four parameters would have the (.) and (t) models, giving a total of 16 (2 x 2 x 2 x 2) models. Parameters are not fixed to allow for non-estimable parameters. It is not necessary to run every model in the list. Selection of one or more models from the list means that the program does not provide the same level of interaction. For example, the results of an estimation will automatically be appended to the Results database. This feature provides an easy way of adding a large number of models to the Results database without constant attention.

Results browser

The Results Browser window (Fig. ) allows examination of output from previously generated models, which is stored in the Results file. The Results file is a dBase file, with memo fields to hold the numerical estimation output and the residuals from the model. It is named with the same root name as the input file used to create it, but with the DBF extension. In addition, an FPT file is created to hold the memo fields (output and residuals), and a CDX file holds the index orderings (Model Name, QAICc, Number of Parameters, and Deviance). These three files (DBF, FPT and CDX) should be kept together. Once the Results file has been created, the input file containing the encounter histories is no longer necessary (this information is stored in the Results file). The Results Browser displays a summary table of model results, including the Model Name, QAICc, Delta QAICc and Deviance. QAICc is computed as:

QAICc = [-2 log(Likelihood)]/c + 2K + 2K(K + 1)/(n - K - 1)

where c is the quasi-likelihood scaling parameter, K is the number of parameters estimated and n is the effective sample size.
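The QAICc computation above can be sketched directly; the deviances, parameter counts and sample size below are invented for illustration.

```python
# Sketch of the QAICc formula given above: quasi-likelihood-adjusted
# -2log(L), plus 2K, plus the small-sample correction 2K(K+1)/(n-K-1).
def qaicc(neg2loglike, c_hat, K, n):
    return neg2loglike / c_hat + 2 * K + 2 * K * (K + 1) / (n - K - 1)

models = {"phi(g+t) p(g+t)": (812.4, 9),   # (-2log(L), K), made-up numbers
          "phi(.) p(.)":     (840.1, 2)}
n, c_hat = 400, 1.0

scores = {name: qaicc(d, c_hat, K, n) for name, (d, K) in models.items()}
best = min(scores.values())
for name, q in scores.items():
    print(name, round(q - best, 2))   # Delta QAICc relative to the best model
```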
The Delta QAICc is the difference between the QAICc of the current model and that of the model with the minimum QAICc. Deviance is the difference in -2log(Likelihood) between the current model and the saturated model (the model with a parameter for every unique encounter history). In addition, the numerical output for the model is

included in a memo field in the Results database, and can be viewed by clicking on the Output, Specific Model and NotePad menu choices. Other menu options allow retrieval of a previous model (including the Parameter Index Matrices (PIMs) and Design matrix), creation of a new model, printing of numerical output from a model, production of a table of the model summary statistics in a NotePad window, and graphical display of the deviance residuals or Pearson residuals for a particular model (Fig. 5). With observed (O) and expected (E) values for each capture history, a deviance residual is defined for each capture history as:

sign(O - E) x sqrt{2[O log(O/E) - (O - E)]/c}

where sign is the sign (plus or minus) of the value of O - E, sqrt is the square root, and log is the natural log. The value of c is the overdispersion scale parameter and is normally taken as one. A Pearson residual is defined as (O - E)/sqrt(E x c). To see which observation is causing a particular residual value, the mouse cursor can be placed on the plotted point for the residual. Clicking the left button gives a description of this residual, including the attribute group to which the observation belongs, the encounter history, the observed value, the expected value and the residual value. Other menu options are available to change the quasi-likelihood scaling parameter (c), or modify the number of parameters identifiable in the model (which changes the QAICc value). For any model shown in the Results Browser window, and for which the variance-covariance matrix was computed, the variance components estimators described by Burnham can be computed and a graph of the results produced. You are asked to specify both a set of parameters for which the shrinkage estimators are to be computed and the assumed model structure for the expected parameters. Three choices of model structure are provided: constant or mean, linear trend, and a user-specified model that could include covariates.
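The two residual definitions above can be sketched as follows. The deviance residual uses the reconstructed form sign(O - E) x sqrt(2[O log(O/E) - (O - E)]/c), whose square-root argument is non-negative; O is assumed to be positive here.

```python
import math

# Sketch of the residual definitions described above.
def deviance_residual(obs, exp, c=1.0):
    sign = 1.0 if obs >= exp else -1.0
    inner = 2.0 * (obs * math.log(obs / exp) - (obs - exp)) / c
    return sign * math.sqrt(inner)

def pearson_residual(obs, exp, c=1.0):
    return (obs - exp) / math.sqrt(exp * c)

print(round(pearson_residual(12.0, 9.0), 3))    # 1.0
print(round(deviance_residual(12.0, 9.0), 3))   # 0.951
```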
Variance components estimation is mainly used for a parameter type when that parameter is estimated under a time-saturated model. Variation across many attribute groups can also be estimated (i.e. some form of random effects model). Under the Tests menu selection, you can construct likelihood-ratio and ANODEV tests, and also evaluate the probability level of chi-square and F statistics. To construct likelihood-ratio tests, you can select two or more models; tests between all pairs of the selected models will be computed. The user must ensure that the models are properly nested to obtain valid likelihood-ratio tests. ANODEV provides a means of evaluating the impact of a covariate by comparing the amount of deviance explained by the covariate with the amount of deviance not explained by it. For ANODEV tests, the user must select three models:

1. Global: the model with the largest number of parameters, which explains the total deviance;
2. Covariate: the model with the covariate that you want to test; and
3. Constant: the model with the smallest number of parameters, which explains only the mean level of the effect you are examining.

As an example, to compute the ANODEV for survival with a linear trend {S(T)} model, you would specify the Covariate model as {S(T)}, the Global model as {S(t)} and the Constant model as {S(.)}. The three models you select will automatically be classified as the Global, Covariate or Constant model, based on their number of parameters. MARK will not tell you if the three models you select are not properly nested to form the ANODEV, because it has no means of identifying this problem.

NUMERICAL ESTIMATION PROCEDURES

MARK computes the log(likelihood) based on the encounter histories:

log(Likelihood) = sum over the unique encounter histories of [No. of animals with this encounter history] x log[Pr(observing this encounter history)]

The Pr(observing this encounter history) is computed by parsing the encounter history, equivalent to the procedure demonstrated by Burnham (his Table ). Identical encounter histories are grouped to minimize computer time, except when individual covariates preclude such grouping.
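The grouped summation above can be sketched in a few lines. The history probabilities below are invented placeholders; MARK derives each Pr(history) by parsing the history under the current model.

```python
from collections import Counter
from math import log

# Sketch of the log-likelihood summation described above: identical
# encounter histories are grouped, and each unique history contributes
# count * log Pr(history).
def log_likelihood(histories, prob_of_history):
    counts = Counter(histories)
    return sum(n * log(prob_of_history(h)) for h, n in counts.items())

probs = {"11": 0.3, "10": 0.7}                  # made-up Pr(history) values
data = ["11", "10", "10", "11", "10"]
ll = log_likelihood(data, probs.get)
print(round(ll, 3))   # -3.478
```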

Numerical optimization based on a quasi-Newton approach is used to maximize the likelihood. Initial parameter estimates are needed to start this process. The design matrix is assumed to be an identity matrix if no design matrix is specified. With all parameters in the β vector set to zero for the sine link function, the resulting initial value of each parameter estimate (i.e. φ, p, S, r, etc.) is 0.5. The first step is to optimize the likelihood using the sine link function to obtain a set of estimates for β. These estimates are then transformed with a linear matrix transformation to obtain approximate estimates for the desired link function if it is not the sine. Optimization continues with the desired link function to give final estimates of β. Unequal time-intervals between encounter occasions are handled by taking the length of the time-interval as the exponent of the survival estimate for the interval (i.e. S raised to the power of the interval length). This approach is also used for φ and F, because both of these basic parameters apply to an interval. For the typical case of equal time-intervals (all lengths equal to 1), this function has no effect. However, suppose the second time-interval is two increments in length, with the rest one increment. This function has the desired consequences: the survival estimates for each interval are comparable, but the increased span of time for the second interval is accommodated. Thus, models where the same survival probability applies to multiple intervals can be evaluated, even though survival intervals are of different length. This technique is applied to all models where the length of the time-intervals varies. The information matrix (matrix of second partial derivatives of the likelihood function) is computed with the method specified by the user.
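The unequal time-interval adjustment described above amounts to a one-line exponent rule, sketched here:

```python
# Sketch of the time-interval adjustment described above: the survival
# probability for an interval is the per-unit-time survival raised to
# the power of the interval length.
def interval_survival(s_per_unit, interval_length):
    return s_per_unit ** interval_length

print(interval_survival(0.9, 1.0))   # 0.9 (equal unit intervals: no effect)
print(interval_survival(0.9, 2.0))   # ~0.81 (a two-unit interval uses S squared)
```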
The singular value decomposition of this matrix is then performed via the LINPACK subroutine DSVDC to obtain its pseudo-inverse (the variance-covariance matrix of β) and the vector of singular values arranged in descending order of magnitude. The number of estimable parameters is taken as the rank of the information matrix, determined by inspecting the vector of singular values. If the smallest singular value is less than a criterion value specific to each method of computing the variance-covariance matrix, the matrix is considered not to be of full rank. Then the maximum ratio of consecutive elements of this vector is determined, and the rank of the matrix is taken as the number of elements in the vector above this maximum ratio. In the case of a (declared) singular information matrix, the parameter corresponding to the smallest singular value is identified in the output so that, if desired, the user can provide additional constraints on the model to remove the inestimable parameters. One option is to fix the parameter to a specific value. The variance-covariance matrix of β is then converted to the variance-covariance matrix of the estimated parameters (θ) specified in the PIMs, based on the delta method, which is the same as obtaining the information matrix for θ and inverting it. Likewise, the estimates of β are converted to estimates of the parameters specified in the PIMs (i.e. θ). For problems that include individual covariates, the estimates for the back-transformed parameters are for the first individual listed in the input file after the encounter histories are sorted into ascending order.

OPERATING SYSTEM REQUIREMENTS

Windows 95 is required to run MARK. No other special software is required. However, because of the large amount of numerical computation needed to produce parameter estimates, a fast Windows 95 computer is desirable. There are no fixed limits on the maximum number of parameters, encounter occasions or attribute groups.
The more memory available, the larger the problem that can be solved. Generally, machines with 6 Mb of memory or more perform best, but MARK runs satisfactorily on machines with less memory. About 7 Mb of disk space is required for the program.

PROGRAM AVAILABILITY

MARK can be downloaded from the Web page http:///gwhite/software.html where instructions for installation are also provided.

ACKNOWLEDGEMENTS

Dr David R. Anderson and Dr Alan Franklin contributed to the design and debugging of this program.
