Preface
Data analysis is very important in epidemiological research. The capacity of computing facilities has been steadily increasing, moving state-of-the-art epidemiological studies along in step with computer advancement. Currently, there are many commercial statistical software packages widely used by epidemiologists around the world. For developed countries, the cost of software is not a major problem. For developing countries, however, the real cost is often prohibitive, and many researchers end up relying on pirated copies of the software. Freely available software packages are limited in number and readiness of use. EpiInfo, for example, is free and useful for data entry and simple data analysis, but advanced data analysts find it too limited in many aspects. For example, it is not suitable for data manipulation in longitudinal studies, its regression analysis facilities cannot cope with repeated measures and multi-level modelling, and its graphing facilities are limited.

A relatively new and freely available software package called R is promising. Supported by leading statistical experts worldwide, it has almost everything that an epidemiological data analyst needs. However, it is difficult to learn and to use compared with similar statistical packages for epidemiological data analysis such as Stata. The purpose of this book is therefore to bridge this gap by making R easy to learn for researchers from developing countries and to promote its use.

My experience in epidemiological studies spans over twenty years with a special fondness for teaching data analysis. Inspired by the spirit of the open-source software philosophy, I have spent a tremendous effort exploring the potential and use of R. For four years, I have been developing an add-on package for R that allows new researchers to use the software with enjoyment. More than twenty chapters of lecture notes and exercises have been prepared with datasets ready for self-study. Supported by WHO, TDR and the Thailand Research Fund, I have also run a number of workshops for this software in developing countries including Thailand, Myanmar, North Korea, Maldives and Bhutan, where R and Epicalc were very much welcomed. With this experience, I hereby propose that the use of this software should be encouraged among epidemiological researchers, especially those who cannot afford to buy expensive commercial software packages.


R is an environment that can handle several datasets simultaneously. Users get access to variables within each dataset either by copying it to the search path or by including the dataset name as a prefix. The power of R in this respect, however, is also a drawback in data manipulation. When creating a variable or modifying an existing one without prefixing the dataset name, the new variable is isolated from its parent dataset. If prefixing is the choice, the original data are changed but not the copy in the search path. Careful users need to remove the copy from the search path and recopy the new dataset into it. The procedure in this respect is clumsy. Not being tidy will eventually end up with too many copies in the search path, overloading the system or confusing the analyst as to where a variable is actually located.

Epicalc presents a concept solution for common types of work where the data analyst works on one dataset at a time using only a few commands. In Epicalc the user can virtually eliminate the need to specify the dataset and can avoid overloading the search path effectively and efficiently. In addition to making the tidying of memory easy to accomplish, Epicalc makes it easy to recognise variables by adopting variable labels or descriptions which have been prepared in other software such as SPSS or Stata, or prepared locally by Epicalc itself.

R has very powerful graphing functions that the user has to spend time learning. Epicalc exploits this power by producing a nice plot of the distribution automatically whenever a single variable is summarised. A breakdown of the first variable by a second, categorical variable is also simple, and the graphical results are automatically displayed. This automatic graphing strategy is also applied to one-way and two-way tabulation. Descriptions of the variables and the value or category labels are fully exploited in these descriptive graphs.

Additional epidemiological functions added by Epicalc include calculation of sample size, matched 1:n (n can vary) tabulation, kappa statistics, drawing of ROC curves from a table or from logistic regression results, population pyramid plots from age and sex, and follow-up plots. R has several advanced regression modelling functions such as multinomial logistic regression, ordinal logistic regression, survival analysis and multi-level modelling. By using Epicalc, nice tables of odds ratios and 95% CIs are produced, ready for simple transfer into a manuscript document with minimal further modification required. Although use of Epicalc implies a different way of working with R from conventional use, installation of Epicalc has no effect on any existing or new functions of R. Epicalc functions only increase the efficiency of data analysis and make R easier to use.
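As a minimal sketch of this working style (assuming the Epicalc package and its supplied datasets have been installed and loaded as described in Chapter 1; the dataset name is one of the examples used later in this book), the analyst selects a dataset once and then refers to its variables directly:

> use(Familydata)    # place the dataset in the second search position
> des()              # describe the variables and their labels
> summ(age)          # summary statistics plus an automatic plot

No dataset prefix is needed for 'age', and no stray copy of the data frame is left behind in the search path.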


This book is essentially about learning R with an emphasis on Epicalc. Readers should have some background in basic computer usage. With R, Epicalc and the supplied datasets, the users should be able to go through each lesson learning the concepts of data management, related statistical theories and the practice of data analysis and powerful graphing.

The first four chapters introduce R concepts and simple handling of important basic elements such as scalars, vectors, matrices, arrays and data frames. Chapter 5 deals with simple data exploration. Date and time variables are defined and dealt with in Chapter 6 and fully exploited in a real dataset in Chapter 7. Descriptive statistics and one-way tabulations are automatically accompanied by corresponding graphs, making it rather unlikely that important information is overlooked. Finally, time plots of exposure and disease onsets are plotted with a series of demonstrating commands. Chapter 8 continues to investigate the outbreak by two-way tabulation. Various kinds of risk assessment, such as the risk ratio and protective efficacy, are analysed with numeric and graphic results. Chapter 9 extends the analysis of the dataset to deal with levels of association or odds ratios. Stratified tabulation, the Mantel-Haenszel odds ratio, and the test of homogeneity of odds ratios are explained in detail. All results are complemented by simultaneous plots. With these graphs, the concept of confounding is made more understandable.

Before proceeding further, the reader gets a thorough exercise in data cleaning and standard data manipulation in Chapter 10. Simple looping commands are introduced to increase the efficiency of data management. Subsequently, and from time to time in the book, readers will learn how to develop these loops to create powerful graphs. Scatter plots, simple linear regression and analysis of variance are presented in Chapter 11. Stratified scatter plots to enhance the concept of confounding and interaction for continuous outcome variables are given in Chapter 12. Curvilinear models are discussed in Chapter 13. Linear modelling is extended to generalized linear modelling in Chapter 14.

For binary outcome variables, Chapter 15 introduces logistic regression with additional comparison with the stratified cross-tabulation learned in Chapter 9. The concept of a matched case-control study is discussed in Chapter 16 with matched tabulation for 1:1 and 1:n matching. Finally, conditional logistic regression is applied. Chapter 17 introduces polytomous logistic regression using a case-control study in which one type of case series is compared with two types of control groups. Ordinal logistic regression is applied for ordered outcomes in Chapter 18.


For a cohort study with grouped exposure datasets, Poisson regression is used in Chapter 19. Extra-Poisson regression for overdispersion is also discussed, including modelling the outcome using the negative binomial error distribution. Multi-level modelling and longitudinal data analysis are discussed in Chapter 20. For cohort studies with individual follow-up times, survival analysis is discussed in Chapter 21 and the Cox proportional hazard model is introduced in Chapter 22. In Chapter 23 the focus is on analyzing datasets involving attitudes, such as those encountered in the social sciences. Chapter 24 deals with day-to-day work in calculation of sample sizes, and the technique of documentation that all professional data analysts must master is explained in Chapter 25. Some suggested strategies for handling large datasets are given in Chapter 26. The book ends with a demonstration of the tableStack command, which dramatically shortens the preparation of a tidy stack of tables with a special technique of copy and paste into a manuscript.

At the end of each chapter some references are given for further reading. Most chapters also end with some exercises to practice on. Solutions to these are given at the end of the book.

Colour
It is assumed that the readers of this book will simultaneously practice the commands and see the results on the screen. The explanations in the text sometimes describe the colour of graphs that appear in black and white in this book (the reason for this is purely to reduce the printing costs). The electronic copy of the book, however, does include colour.

Explanations of fonts used in this book
MASS          An R package or library
Attitudes     An R dataset
plot          An R function
summ          An Epicalc function (italic)
'abc'         An R object
'pch'         An argument to a function
'saltegg'     A variable within a data frame
"data.txt"    A data file on disk

Chapter 1: Starting to use R

This chapter concerns first use of R, covering installation, how to obtain help, the syntax of R commands and additional documentation. Note that this book was written for Windows users; however, R also works on other operating systems.

Installation
R is distributed under the terms of the GNU General Public License. It is freely available for use and distribution under the terms of this license. The latest version of R and Epicalc and their documentation can be downloaded from CRAN (the Comprehensive R Archive Network). The main web site is http://cran.r-project.org/ but there are mirrors all around the world. Users should download the software from the nearest site. R runs on the three common contemporary operating systems: Linux, MacOS X and Windows.

To install R, first go to the CRAN website and select your operating system from the top of the screen. For Windows users, click the Windows link and follow the link to the base subdirectory. On this page you can download the setup file for Windows, which at the time of publication of this book was R-2.6.1-win32.exe. Click this link and click the "Save" button. The set-up file for R is around 28Mb. To run the installation, simply double-click this file and follow the instructions.

After installation, a shortcut icon of R should appear on the desktop. Right-click this R icon to change its start-up properties. Suppose the work related to this book will be stored in a folder called 'C:\RWorkplace'. The 'Properties' of the icon should have the 'Start in:' text box filled with 'C:\RWorkplace' (do not type the single quote signs ' and '; they are used in this book to indicate objects or technical names). This is the folder where you want R to work. Otherwise, the input and output of files will be done in the program folder, which is not good practice. Replace the default 'Start in' folder with your own working folder. You can create multiple shortcut icons with different start-in folders for each project you are working on.

R detects the main language of the operating system in the computer and tries to use menus and dialog boxes in that language. For example, if you are running R on Windows XP in the Chinese language, the menus and dialog boxes will appear in Chinese. Since this book is written in English, it is advised to set the language to English so that the responses on your computer will be the same as those in this book. In the 'Shortcut' tab of the R icon properties, add Language=en at the end of the 'Target'. Include a space before the word 'Language'.

So, the Target text box for the R-2.6.1 version icon would be:

"C:\Program Files\R\R-2.6.1\bin\Rgui.exe" Language=en

To use this book efficiently, the Epicalc package needs to be installed and loaded. In addition, a specialised text editor such as Crimson Editor or Tinn-R must be installed on your computer.

Text Editors

Crimson Editor
This software can be installed in the conventional fashion as all other software, i.e. by executing the setup.exe file and following the instructions. Crimson Editor has some nice features that can assist the user when working with R. It is very powerful for editing script or command files used by various software programs, such as C++, PHP and HTML files. Line numbers can be shown, and open and closed brackets can be matched. These features are important because they are commonly used in the R command language. Installation and set-up of Crimson Editor is explained in Chapter 25.

Tinn-R
Tinn-R is probably the best text file editor to use in conjunction with the R program. It is specifically designed for working with R script files. In addition to syntax highlighting of R code, Tinn-R can interact with R using specific menus and tool bars. This means that sections of commands can be highlighted and sent to the R console (sourced) with a single button click. Tinn-R can be downloaded from the Internet at: www.sciviews.org/Tinn-R.

Starting R Program
After modifying the start-up properties of the R icon, double-click the R icon on the desktop. The program should then start and the following output is displayed on the R console.

R version 2.6.1 (2007-11-26)
Copyright (C) 2007 The R Foundation for Statistical Computing
ISBN 3-900051-07-0

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

>

The output shown above was produced from R version 2.6.1, released on November 26, 2007. The second paragraph declares and briefly explains the warranty and license. The third paragraph gives information about contributors and how to cite R in publications. The fourth paragraph suggests a few commands for first-time users to try.

Within this document both the R commands and output lines will be in Courier New font, whereas the explanatory text is in Times New Roman. Epicalc commands are shown in italic, whereas standard R commands are shown in normal font style. R commands begin with the ">" sign, similar to what is shown at the R console window. You should not type the ">"; just type the commands.

The first thing to practice is to quit the program. Click the cross sign at the far right upper corner of the program window or type the following at the R console:

> q()

A dialog box will appear asking "Save workspace image?" with three choices: "Yes", "No" and "Cancel". Choose "Cancel" to continue working with R. If you choose "Yes", two new files will be created in your working folder. Any previous commands that have been typed at the R console will be saved into a file called '.Rhistory' while the current workspace will be saved into a file called ".RData". Notice that these two files have no prefix. In the next session of computing, when R is started in this folder, the image of the working environment of the last saved R session will be retrieved automatically, together with the command history. Continued use of R in this fashion (quitting and saving the unnamed workspace image) will result in these two files becoming larger and larger. Usually one would like to start R afresh every time, so it is advised to always choose "No" when prompted to save the workspace. Alternatively you may type:

> q("no")

to quit without saving the workspace image and prevent the dialog box message appearing.

Note that before quitting R you can save your workspace image by typing

> save.image("C:/RWorkplace/myFile.RData")

where 'myFile' is the name of your file. Then when you quit R you should answer "No" to the question.

R libraries & packages
R can be defined as an environment within which many classical and modern statistical techniques, called functions, are implemented. A few of these techniques are built into the base R environment, but many are supplied as packages. A package is simply a collection of these functions together with datasets and documentation. A library is a collection of packages, typically contained in a single directory on the computer.
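As a small illustration of the relationship between functions and packages (the exact output may differ slightly between R versions), the find function reports which loaded package an object comes from:

> find("sqrt")
[1] "package:base"
> find("t.test")
[1] "package:stats"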

To see which packages are currently loaded into memory you can type:

> search()
[1] ".GlobalEnv"        "package:methods"   "package:stats"
[4] "package:graphics"  "package:grDevices" "package:utils"
[7] "package:datasets"  "Autoloads"         "package:base"

The list shown above is the search path of R. When R is told to do any work, it will look for a particular object to work with from the search path. First, it will look inside '.GlobalEnv', which is the global environment. This will always be the first search position. If R cannot find what it wants here, it then looks in the second search position, in this case "package:methods", and so forth. Any function that belongs to one of the loaded packages is always available during an R session.

There are about 25 packages supplied with R (called "standard" or "recommended" packages) and many more are available through the CRAN web site. Only 7 of these packages are loaded into memory when R is executed.

Epicalc package
The Epicalc package can be downloaded from the web site http://cran.r-project.org. On the left pane of this web page, click 'Packages'. Move down along the alphabetical order of the various packages to find 'epicalc'. The short and humble description is 'Epidmiological calculator'. Click 'epicalc' to hyperlink to the download page. On this page you can download the Epicalc source code (.tar.gz), along with the documentation (.pdf), and the two binary versions for Macintosh (.tgz) and Windows (.zip). The file epicalc_version.zip ('version' increases with time) is a compressed file containing the fully compiled Epicalc package. The version number is in the suffix. For example, epicalc_2.6.1.1.zip is the binary file for use on the Windows operating system and the version of Epicalc is 2.6.1.1.

The Epicalc package is updated from time to time to have bugs (errors in the programme) fixed, to improve the features of existing functions (commands) and to include new functions. Usually only one installation session is needed, unless you want to overwrite the old package with a newer one of the same name. You will also need to reinstall this package if you install a new version of R.

Installation of this package must be done within R itself. First, click 'Packages' on the menu bar at the top of the window. Choose 'Install packages from local zip files...'. When the navigating window appears, browse to find the file and open it.
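If the computer is connected to the Internet, an alternative to the manual download described above is to let R fetch and install the package directly from CRAN (shown here only as a sketch; a reachable CRAN mirror will be asked for):

> install.packages("epicalc")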

Successful installation will result in:

> utils:::menuInstallLocal()
package 'epicalc' successfully unpacked and MD5 sums checked
updating HTML package descriptions

Installation is now complete; however, functions within Epicalc are still not available until the following command has been executed:

> library(epicalc)

Note the use of lowercase letters. When the console accepts the command quietly, we can be reasonably confident that the command has been accepted. Otherwise, errors or warnings will be reported. A common warning is a report of a conflict. This warning is, most of the time, not very serious. It just means that an object (usually a function) with the same name already exists in the working environment. In this case, R will give priority to the object that was loaded more recently. The command library(epicalc) must be typed every time a new session of R is run.

Updating packages
Whenever a new version of a package is released, it is advised to keep up to date by removing (unloading) the old one and loading the new one. To unload the Epicalc package, you may type the following at the R console:

> detach(package:epicalc)

After typing the above command, you may then install the new version of the package as mentioned in the previous section. If there are any problems, you may need to quit R and start afresh.

Rprofile.site
Whenever R is run it will execute the commands in the "Rprofile.site" file, which is located in the C:\Program Files\R\R-2.6.1\etc folder. Remember to replace the R version with the one you have installed. You may edit this file and insert the command above. By including the command library(epicalc) in the "Rprofile.site" file, every time R is run the Epicalc package will be automatically loaded and ready for use. Your Rprofile.site file should look something like this:

library(epicalc)

# Things you might want to change
# options(papersize="a4")
# options(editor="notepad")
# options(pager="internal")

# to prefer Compiled HTML help
# options(chmhelp=TRUE)
# to prefer HTML help
# options(htmlhelp=TRUE)

On-line help
On-line help is very useful when using software, especially for first-time users. Self-studying is also possible from the on-line help of R, although with some difficulty. This is particularly true for non-native speakers of English, where manuals can often be too technical or wordy. It is advised to combine the use of this book as a tutorial and the on-line help as a reference manual.

On-line help documentation comes in three different versions in R. The default version is to show help information in a separate window within R. The other versions are HTML (htmlhelp=TRUE) and compiled HTML (chmhelp=TRUE), which can be set in the "Rprofile.site" file mentioned previously. The latter version is Windows specific and, if chosen, help documentation will appear in a Windows help viewer. Each help format has its own advantages and you are free to choose the format you want.

To get help on the 'help' function you can type

> help(help)

or perhaps more conveniently

> ?help

For fuzzy searching you can try

> help.search("...")

Replace the dots with the keyword you want to search for. This function also allows you to search on multiple keywords. You can use this to refine a query when you get too many responses.

For self study, type

> help.start()

The system will open your web browser from the main menu of R. 'An Introduction to R' is the section that all R users should try to read first. Another interesting section is 'Packages'. Click this to see what packages you have available. If the Epicalc package has been loaded properly, then this name should also appear in the list. Click 'Epicalc' to see the list of the functions available. Click each of the functions one by one and you will see the help for that individual function. This information can also be obtained by typing 'help(myFun)' at the R console, where 'myFun' is the name of the function. Each help page is written in a simple markup language that can be read by R and can be converted to LaTeX, which is used to produce the printed manuals.
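For instance, with a concrete keyword (any term of interest can be substituted; 'logistic' is used here purely as an illustration), the command below lists all installed help pages whose titles or keywords mention it:

> help.search("logistic")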

Having the Epicalc package in the search path means we can use all commands or functions in that package. To check this, type

> search()

You should see "package:epicalc" in the list. If the Epicalc package has not been loaded, then the functions contained inside will not be available for use. Other packages can be called when appropriate. For example, the package survival is necessary for survival analysis; we will encounter this in the corresponding section.

The order of the search path is sometimes important. For Epicalc users, it is recommended that any additional library should be called early in the session of R, i.e. before reading in and attaching to a data frame. This is to make sure that the active dataset will be in the second search position. More details on this will be discussed in Chapter 4.

Very often the user would want to know how to get other statistical analysis functions that are not available in a currently installed package. A good option is to search from the CRAN website using the 'search' feature located on the left side of the web page, which makes Google do a search within CRAN. The results would be quite extensive and useful. The user can then choose the website to go to for further learning.

Using R
A basic but useful purpose of R is to perform simple calculations.

> 1+1
[1] 2

When you type '1+1' and hit the <Enter> key, R will show the result of the calculation. For the square root of 25:

> sqrt(25)
[1] 5

The wording in front of the left round bracket is called a 'function'. The entity inside the bracket is referred to as the function's 'argument'. 'sqrt()' is a function, and when imposed on 25, the result is 5. To find the value of e, which is equal to 2.718282:

> exp(1)
[1] 2.718282

Similarly, the exponential value of -5, or e-5, would be

> exp(-5)
[1] 0.006738

Syntax of R commands
R will compute only when the commands are syntactically correct, or computer grammatical. For example, if the number of closed brackets exceeds the number of opened ones, the result is a syntax error.

> log(3.2))
Error: syntax error

However, if the number of closed brackets is fewer than the number of opened ones and the <Enter> key is pressed, the new line will start with a '+' sign, indicating that R is waiting for completion of the command. After the number of closed brackets equals the number of opened ones, computation is carried out and the result appears.

> log(3.8
+ )
[1] 1.335001

R objects
In the above simple calculations, the results are immediately shown on the screen and are not stored. To perform a calculation and store the result in an object, type:

> a = 3 + 5

We can check whether the assignment was successful by typing the name of the newly created object:

> a
[1] 8

More commonly, the assignment is written in the following way.

> a <- 3 + 5
> a
[1] 8

For ordinary users, there is no obvious difference between the use of = and <-. The difference applies at the R programming level and will not be discussed here. Although <- is slightly more awkward to type than =, the former technique is recommended to avoid any confusion with the comparison operator (==). Notice that there is no space between the two components of the assignment operator <-.

Now create a second object called 'b' equal to the square root of 36.

> b <- sqrt(36)
Then, add the two objects together.

> a + b
[1] 14

We can also compute the value on the left and assign the result to a new object called 'c' on the right, using the right assign operator, ->.

> a + 3*b -> c
> c
[1] 26

However, the following command does not work.

> a + 3b -> c
Error: syntax error

R does not recognise '3b'. The * symbol, which indicates multiplication, is needed.

In conclusion, when one types anything at the R console, the program will try to show the value of that object. If the signs = or <- or -> are encountered, the value will be stored to the object on the left of = and <-, or to the right hand side of ->.

The name of an object can consist of more than one letter. A dot can also be used as a delimiter for an object name.

> baht.per.dollar <- 40
> baht.per.dollar
[1] 40

> xyx <- 1
> xyx
[1] 1

A nonsense thing can be typed into the R console such as:

> qwert
Error: Object "qwert" not found

What is typed in is syntactically correct. The problem is that 'qwert' is not a recognizable function nor a defined object.

Character or string objects
Character or string means alphanumeric or letter. Examples include the name of a person or an address. Objects of this type cannot be used for calculation. Telephone numbers and post-codes are also strings.
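To see concretely that string objects cannot be used for calculation, one can try the following (the object name is purely illustrative):

> name <- "Somchai"
> name * 2
Error in name * 2 : non-numeric argument to binary operator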

A string object is created and displayed as follows:

> A <- "Prince of Songkla University"
> A
[1] "Prince of Songkla University"

R is case sensitive, so 'A' is not the same as 'a'. Both objects are retained in the workspace.

> a
[1] 8
> A
[1] "Prince of Songkla University"

Putting comments in a command line
In this book, as with most other programming documents, the author usually inserts some comments as a part of documentation to remind him/herself or to show some specific issue to the readers. R ignores any words following the # symbol. Thus, such a sentence can be used for comments. Examples:

> 3*3 = 3^2     # This gives a syntax error
> 3*3 == 3^2    # This is correct syntax-wise.
> 3*2 == 3^2    # Correct syntax but the result is FALSE

Logical: TRUE and FALSE
In the last few commands:

> 3*3 == 3^2
[1] TRUE

But

> 3*2 == 3^2
[1] FALSE

Note that we need two equals signs to check equality but only one for assignment.

> 3*2 < 3^2
[1] TRUE

Logical connection using & (logical 'and')
Both TRUE and FALSE are logical objects; they are both typed in upper case. Connection of more than one such object results in either TRUE or FALSE. If all are TRUE, the final result is TRUE. For example:

> TRUE & TRUE
[1] TRUE

.
Please remember that answering "No" is the preferred response in this book, as we recommend typing

> q("no")

to end each R session. Responding "Yes" here is just an exercise in understanding the concept of workspace images, which follows in Chapter 2.

References
An Introduction to R. ISBN 3-900051-12-7.
R Language Definition. ISBN 3-900051-13-5.
Both references above can be downloaded from the CRAN web site.

Exercises
Problem 1. The formula for sample size of a descriptive survey is

n = (1.96² / δ²) π (1 − π)

where n is the sample size, π is the prevalence in the population (not to be confused with the constant pi), and δ is half the width of the 95% confidence interval (precision). Compute the required sample size if the prevalence is estimated to be 30% of the population and the 95% confidence interval is not farther from the estimated prevalence by more than 5%.

Problem 2. Change the above prevalence to 5% and suppose each side of the 95% confidence interval is not farther from the estimated prevalence by more than 2%.

Problem 3. The term 'logit' denotes 'log{P/(1-P)}' where P is the risk or prevalence of a disease. Compute the logits from the following prevalences: 1%, 10%, 50%, 90% and 100%.
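Purely to illustrate how the formula above translates into R notation (the function name, prevalence and precision below are arbitrary and are not the answers to the problems):

> sample.size <- function(p, delta) 1.96^2 * p * (1 - p) / delta^2
> sample.size(p = 0.5, delta = 0.1)
[1] 96.04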

Chapter 2: Vectors

In the previous chapter, we introduced simple calculations and storage of the results of the calculations. In this chapter, we will learn slightly more complicated issues.

History and saved objects
Outside R, examine the working folder; you should see two new files: ".Rhistory", which recorded all the commands in the preceding R session, and ".Rdata", which is the working environment saved from the latest R session. ".Rhistory" is a text file and can be edited using any text editor such as Notepad, Crimson Editor or Tinn-R, while ".Rdata" is a binary file and only recognised by the R program.

Open R from the desktop icon. You may see this in the last line:

[Previously saved workspace restored]

This means that R has restored commands from the previous R session (or history) and the objects stored from this session. Press the up arrow key and you will see the previous commands (both correct and incorrect ones). Press <Enter> following the command, and the results will come up as if you continued to work in the previous session. For example:

> a
[1] 8
> A
[1] "Prince of Songkla University"

Both 'a' and 'A' are retained from the previous session. In addition, the Epicalc library is automatically loaded every time we start R (from the setting of the "Rprofile.site" file that we modified in the previous chapter). Therefore, under this setting, Epicalc will always be in the search path, regardless of whether the workspace image has been saved in the previous session or not.

Note: ______________________________________________________________________
The image saved from the previous session contains only objects in the '.GlobalEnv', which is the first position in the search path. The whole search path is not saved. For example, any libraries manually loaded in the previous session need to be reloaded.
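A quick way to see exactly which ordinary objects were restored (the output will of course depend on what was saved in the previous session) is:

> ls()
[1] "a" "A"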

If you want to remove the objects in the environment and the history, quit R without saving. Go to the 'start in' folder and delete the two files ".Rhistory" and ".Rdata". Then restart R. There should be no message indicating restoration of a previously saved workspace and no history of previous commands.

Concatenation
Objects of the same type, i.e. numeric with numeric, string with string, can be concatenated. In fact, a vector is an object containing concatenated, atomised (no more divisible) objects of the same type. To concatenate, the function 'c()' is used with at least one atomised object as its argument. Create a simple vector having the integers 1, 2 and 3 as its elements.

> c(1,2,3)
[1] 1 2 3

This vector has three elements: 1, 2 and 3. Press the up arrow key to reshow this command and type a right arrow to assign the result to a new object called 'd'. Then have a look at this object.

> c(1,2,3) -> d
> d

Do some calculation with 'd' and observe the results.

> d + 4
> d - 3
> d * 7
> d / 10
> d * d
> d ^ 2
> d / d
> d == d

In addition to numbers, words can be used to create string vectors.

> B <- c("Faculty of Medicine","Prince of Songkla University")
> B
[1] "Faculty of Medicine"          "Prince of Songkla University"

Vectors of systematic numbers
Sometimes a user may want to create a vector of numbers with a certain pattern. The following command will create a vector of integers from 1 to 10.

> x <- 1:10; x
 [1]  1  2  3  4  5  6  7  8  9 10

by=7) -> x > x [1] 3 10 17 24 31 38 45 52 59 66 73 80 87 94
In fact. When explicitly given. only a certain part of a vector needs to be used. Let's assume we have a vector of running numbers from 3 to 100 in steps of 7. to=-3. The function can be executed with at least two parameters. In this case.
Subsetting a vector with an index vector
In many instances. the order can be changed.For 5 repetitions of 13:
> rep(13. 'to' and 'by' is assumed if the words are omitted. If the 4th. 6th and 7th positions are required. from=10)
This rule of argument order and omission applies to all functions. then type:
> x[c(4.
> x[5] [1] 31
The number inside the square brackets '[]' is called a subscript. times=5) [1] 13 13 13 13 13
The function 'rep' is used to replicate values of the argument. For more details on 'seq' use the help feature.6. by = 3) [1] -1 2 5 8 11
In this case 'seq' is a function with three arguments 'from'.
> seq(by=-1. the value in the 5th position of the vector 'x' is 31. 'to' and 'by'. since the 'by' parameter has a default value of 1 (or -1 if 'to' is less than 'from'). What would be the value of the 5th number?
> seq(from=3.7)] [1] 24 38 45
17
. It denotes the position or selection of the main vector. to = 11. to=100. -3) [1] 10 9 8
7
6
5
4
3
2
1
0 -1 -2 -3
The order of the arguments 'from'. 23) [1] 10 11 12 13 14 15 16 17 18 19 20 21 22 23 > seq(10. For sequential numbers from -1 to 11 with an incremental step of 3 type:
> seq(from = -1.
> seq(10. but rather 94. the vector does not end with 100. 'from' and 'to'. since a further step would result in a number that exceeds 100.

> subset(x. thus all the chosen numbers are odd. type:
> x[-(1:4)] [1] 31 38 45 52 59 66 73 80 87 94
A minus sign in front of the subscript vector denotes removal of the elements of 'x' that correspond to those positions specified by the subscript vector.Note that in this example. Similarly. x/2!=trunc(x/2)) [1] 3 17 31 45 59 73 87
The operator ! prefixing an equals sign means 'not equal'. to comply with the R syntax. The same result can be obtained by using the 'subset' function. thus the concatenate function c is needed. then the comparison operator can simply be changed to !=. which means 'not equal'.6.
> B[2] [1] "Prince of Songkla University"
Using a subscript vector to select a subset
A vector is a set of numbers or strings. Similarly. For example. the object within the subscript can be a vector. a string vector can be subscripted. to choose only even numbers of the vector 'x' type:
> x[x/2 == trunc(x/2)] [1] 10 24 38 52 66 80 94
The function trunc means to truncate or remove the decimals. Application of a condition within the subscript results in a subset of the main vector. The condition that 'x' divided by 2 is equal to its truncated value is true iff (if and only if) 'x' is an even number.
> subset(x. 6. x/2==trunc(x/2))
If only odd numbers are to be chosen. to choose the elements of 'x' which are greater than 30 type:
> x[x>30] [1] 31 38 45 52 59 66 73 80 87 94
18
. The following would not be acceptable:
> x[4.7] Error in x[4. 7] : incorrect number of dimensions
To select 'x' with the first four elements omitted.

> agegr <.cut(age.15] 6 25 (15.
> data.
> is.60] (60. A round bracket in front of the group is exclusive or not including that value.15.60]" $class [1] "factor"
"(60.factor(agegr) [1] TRUE > attributes(agegr) $levels [1] "(0. The label of each group uses a square bracket to end the bin indicating that the last number is included in the group (inclusive cutting).60] 10 59 (15.frame function. which we can call 'children'.Creating a factor from an existing vector
An age group vector can be created from age using the cut function. who is 60 years old. who is 15 years old.60] 4 56 (15.100]
Note that the 5th person.60] 11 80 (60. with levels shown above. type:
> table(agegr) agegr (0. is in the second group.15] (15.100))
This creates 3 distinct groups.60] 9 60 (15.100] 2 8 1
There are two children.60] 7 40 (15. breaks=c(0. To obtain a frequency table of the age groups.
> summary(agegr) # same result as the preceding command > class(agegr) [1] "factor"
21
.100]"
The object 'agegr' is a factor.15] 2 23 (15.60] 3 48 (15. which combines (but not saves) the 2 variables in a data frame and displays the result. Note that the minimum and maximum of the arguments in cut are the outer most boundaries. agegr) age agegr 1 10 (0. eight adults and one elderly person.15]" "(15.frame(age. is classified into the first group and the 9th person. More details on this function is given the chapter 4.60] 5 15 (0.60] 8 21 (15. We can check the correspondence of 'age' and 'agegr' using the data.60. 'adults' and 'elderly'.

In R.The age group vector is a factor or categorical vector. 1. race and religion should always be factored.
> agegr1 <. Median Mean 3rd Qu.
Missing values
Missing values usually arise from data not being collected.909 2. missing age may be due to a person not giving his or her age. 60. and we want to draw a scatter plot in which the colours of the dots are to be classified by the different levels of 'sex'.160) > height [1] 100 150 NA 160 > weight <. Declaring a vector as a factor is very important.150. This will be demonstrated in future chapters. The unclassed value of a factor is used when the numeric (or integer) values of the factor are required. For example. For example.000
Categorical variables. abbreviated from 'Not Available'. which is explained in more detail in chapter 3.NA > b * 3 [1] NA > c <.
> b <.3 + b > c [1] NA
As an example of a missing value of a person in a vector series.NA. Any calculation involving NA will result in NA. classed as a factor.55) > weight [1] 33 45 60 55
22
. missing values are denoted by 'NA'.000 2. 45.c(100. for example sex.000 > class(agegr1) [1] "integer"
Max.unclass(agegr) > summary(agegr1) Min.000 2. particularly when performing regression analysis. Age group in this example is a factor although it has an ordered pattern. the colour argument to the plot function would be 'col = unclass(sex)'.000 1. if we are have a dataset containing a 'sex' variable. 3. which will be discussed in future chapters. 1st Qu. type the following commands:
> height <. It can be transformed into a simple numeric vector using the 'unclass' function.c(33.

omit(height)) [1] 3 > mean(na. although the length of this vector is available.omit(height)) [1] 136. na.omit() is an independent function that omits missing values from the argument object.rm' means 'not available (value) removed'.
> length(na.
> mean(height. the NA elements should be removed.rm=TRUE) [1] 136. 'na.
23
.6667
Thus na.rm = TRUE' is an internal argument of descriptive statistics for a vector.25 > mean(height) [1] NA
We can get the mean weight but not the mean height.6667
The term 'na. and is the same as when it is omitted by using the function na.
> length(height) [1] 4
In order to get the mean of all available elements.omit(). all weights are available but one height is missing.
> mean(weight) [1] 48.Among four subjects in this sample.

Exercises
Problem 1. Identify the persons who have the lowest and highest BMI and calculate the standard deviation of the BMI. Compute the sum of the elements of 'y' which are multiples of 7. Compute the value of 12 + 22 + 32 .. + 1002
Problem 2.
24
. Compute the body mass index (BMI) of each person where BMI = weight / height2. Create a vector called 'wt' corresponding to the family member's weights.000. The heights (in cm) and weights (in kg) of 10 family members are shown below:
          ht   wt
Niece    120   22
Son      172   52
GrandPa  163   71
Daughter 158   51
Yai      153   51
GrandMa  148   60
Aunty    160   50
Uncle    170   67
Mom      155   53
Dad      167   64
Create a vector called 'ht' corresponding to the heights of the 11 family members. Assign the names of the family members to the 'names' attribute of this vector.. Let 'y' be a series of integers running from 1 to 1.
Problem 3.

> a <- (1:10)
> a
 [1]  1  2  3  4  5  6  7  8  9 10
> dim(a)
NULL
Folding a vector to make an array is simple. Matrices and Tables
Real data for analysis rarely comes as a vector.1] [.2] [.] 2 4 6 8 10
25
.
Arrays
An array may generally mean something finely arranged. they come as a dataset containing many rows or records and many columns or variables.3] [.4] [. This is because R is an object-oriented program. matrices and tables. Just declare or re-dimension the number of rows and columns as follows:
> dim(a) <.c(2. R has a special ability to handle several arrays and datasets simultaneously. these datasets are called data frames. let us go through something simpler such as arrays. Before delving into data frames. R interprets rows and columns in a very similar manner.5) > a [. A dataset is basically an array. In most cases. Gaining concepts and skills in handing these types of objects will empower the user to manipulate the data very effectively and efficiently in the future.
> a <.Chapter 3: Arrays.5] [1.
Folding a vector into an array
Usually a vector has no dimension.] 1 3 5 7 9 [2. In R. an array consists of values arranged in rows and columns. In mathematics and computing. Moreover. Most statistical packages can handle only one dataset or array at a time.

] # for the first row and all columns of array 'a' a[.c(2.] 14 17 20 23 [3.] 15 18
In fact. . columns.3] [. The command 'dim(a) <.] and a[] both choose all rows and all columns of 'a' and thus are the same as typing 'a'.4] [1. Elements of this three-dimensional array can be extracted in a similar way.1] [. Specific rows and columns may be extracted by omitting one of the components.] 15 18 21 24
The first value of the dimension refers to the number of rows. The first subscript defines row selection.2] [1.1:24 > dim(b) <.array(1:24. an array requires two components.
> b[1:3.The numbers in the square brackets are the row and column subscripts.] 1 4 7 10 [2.c(3.3] # for all rows of the third column a[2.
26
. but for most epidemiological analysis are rarely used or needed. but keeping the comma.] 3 6 9 12 . 1 [.2] [. the second subscript defines column selection.] 2 5 8 11 [3.
> > > > a[1.
> b <. followed by number of columns and finally the number of strata.2] [. c(3. An array may also have 3 dimensions.2) # or b <. 2 [.1] [. from 2nd to 4th columns
The command a[.] 13 16 19 22 [2. .2:4] # 2nd row.4] [1.1:2.4.3] [. an array can have much higher dimensions.1] [.2] [. Individual elements of an array may be referenced by giving the name of the array followed by two subscripts separated by commas inside the square brackets. rows and subarrays using subscripts
While extracting a subset of a vector requires only one component number (or vector).4] # extract 1 cell from the 2nd row and 4th column a[2.
Extracting cells.5)' folds the vector into an array consisting of 2 rows and 5 columns.] 13 16 [2.2))
> b .] 14 17 [3.4.

fruit[2.c("orange"."mango") > Row.
> fruit <.cbind(fruits.fruit fruits fruits2 orange 5 1 banana 10 5 durian 1 3 mango 20 4
Alternatively.
> Row. type:
> Col. 5. either by column (using the function cbind) or by row (using the function rbind).
> fruit2 <. fruits2) > colnames(Col. 3.rbind(fruits."mango") > Col.fruit)
The total number of bananas is obtained by:
> sum(Col. 1.])
To obtain descriptive statistics of each buyer:
. the binding can be done by row. 'Row. In the above example.c(5. which are vectors of the same length.fruit orange banana durian mango fruits 5 10 1 20 fruits2 1 5 3 4
Transposition of an array
Array transposition means exchanging rows and columns of the array.Vector binding
Apart from folding a vector. Array transposition is achieved using the t function. an array can be created from vector binding. 4)
To bind 'fruits' with 'fruits2'.fruit) <. 20)
Suppose a second person buys fruit but in different amounts to the first person.
> t(Col."durian"."banana".fruits' is a transposition of 'Col.fruits' and vice versa.c("orange". 10. Let's return to our fruits vector.fruit) <."banana". fruits2)
We can give names to the rows of this array:
> rownames(Col.c(1.fruit) > t(Row.fruit)
Basic statistics of an array
The total number of fruits bought by both persons is obtained by:
> sum(Col.fruit <."durian".fruit <.

fruit.c("Somsri".fruit. "Somchai".
> fruits4 <. "Daeng".c(2.2). inserting the first element of 'fruits4' into the fourth row.fruit.c(20. In this situation R will automatically recycle the element of the shorter vector.2. may refer to each other without formal binding. an array can consist of character string objects. fruits3)
Note that the last element of 'fruits3' is removed before being added.] "Somsri" "Somchai" [2. fruits4)
Note that 'fruits4' is shorter than the length of the first vector argument.] "Daeng" "Veena"
Note that the elements are folded in colum-wise. 5. sequence. 3.
.
> Thais <. not row-wise. fruits3) fruits fruits2 fruits3 orange 5 1 20 banana 10 5 15 durian 1 3 3 mango 20 4 5 Warning message: number of rows of result is not a multiple of vector length (arg 2) in: cbind(Col.
Implicit array of two vectors of equal length
Two vectors. fruits4) fruits fruits2 fruits4 orange 5 1 1 banana 10 5 2 durian 1 3 3 mango 20 4 1 Warning message: number of rows of result is not a multiple of vector length (arg 2) in: cbind(Col. Thais [.1] [.
String arrays
Similar to a vector. especially with the same length.fruit)
To obtain descriptive statistics of each kind of fruit:
> summary(Row. "Veena") > dim(Thais) <.> summary(Col.fruit.3) > cbind(Col.2] [1.fruit)
Suppose fruits3 is created but with one more kind of fruit added:
> fruits3 <. 8) > cbind(Col. 15.c(1. with a warning.

which is an object returned from a regression analysis in a future chapter."Hat Yai".c("Bangkok".g."Chiang Mai") > postcode <. It has several mathematical properties and operations that are used behind statistical computations such as factor analysis.
Tables
A table is an array emphasizing the relationship between values among cells.postcode) cities postcode [1.
> cbind(cities.] "Chiang Mai" "50000"
Matrices
A matrix is a two-dimensional array. generalized linear modelling and so on. both displayed on the screen that can readily be seen and hidden as a returned object that can be used later. Usually.] "Hat Yai" "90110" [3. For exercise purposes. since all elements of an array must be of the same type. the numeric vector is coerced into a character vector.c(10000. a cross-tabulation between to categorical variables (using function table). cities=="Hat Yai") which(cities=="Hat Yai")
Note that when a character vector is binded with a numeric vector. a table is a result of an analysis. For example.
. thre are many ways to identify the order of a specific element. Users of statistical packages do not need to deal with matrices directly but some of the results of the analyses are in matrix form. cities=="Bangkok") [1] 10000
For a single vector. the following four commands all give the same result. e.> cities <.] "Bangkok" "10000" [2. we will examine the covariance matrix. 90110.
> > > > (1:length(cities))[cities=="Hat Yai"] (1:3)[cities=="Hat Yai"] subset(1:3. 50000) > postcode[cities=="Bangkok"] [1] 10000
This gives the same result as
> subset(postcode. to find the index of "Hat Yai" in the city vector.

list(Sex=sex. female.tapply(visits.1.
> table2 <.1.
> table2 <. the class of 'table2' is still a matrix.table(table2)
Summary of table vs summary of array
In R. age). female and female attend a clinic. then we can create this in R by typing. If the code is 1 for male and 2 for female.2)
Similarly.1.
.667 5
Although 'table1' has class table. the next two are old and the last one is young.table. male. if we characterize the ages of the patients as being either young or old and the first three patients are young. applying the function summary to a table performs a chi squared test of independence.table(sex.4. female.2.
> visits <. FUN=mean) Age Sex 1 2 1 1. One can convert it simply using the function as.2.as. list(Sex=sex.2.6) > table1 <. FUN=sum) > table2 Age Sex 1 2 1 1 4 2 11 5
To obtain the mean of each combination type:
> tapply(visits.c(1. Age=age). Age=age). table1 age sex 1 2 1 1 1 2 3 1
Note that table1 gives counts of each combination of the vectors sex and age while 'table2' (below) gives the sum of the number of visits based on the four different combinations of sex and age. respectively.5.2.1)
Suppose also that these patients had one to six visits to the clinic.
> age <.2.Suppose six patients who are male.3.c(1. and the codes for this age classification are 1 for young and 2 for old. then to create this in R type:
> sex <.2.000 4 2 3.c(1.

> list2 <. When properly displayed.123
$y [1] -0.375 [8] -1.547
0.8372
0.
> qqnorm(sample1)
The qqnorm function plots the sample quantiles.1103
The command qqnorm(sample1) is used as a graphical method for checking normality.4158
0. "fruits"))
This is equivalent to
> rm(list1).Note that the arguments of the function list consist of a series of new objects being assigned a value from existing objects or values.
> sample1 <.
. against the theoretical quantiles.375 -0. The creation of a list is not a common task in ordinary data analysis. boxplot(sample1) returns another list of objects to facilitate plotting of a boxplot.000 -0.9984 -0.123 -1.655 1.0645 2. Removing objects from the computer memory also requires a list as the argument to the function rm.547 -0. it also gives a list of the x and y coordinates.9595 -0.000
0.
> rm(list=c("list1".655
1. It is used here for the sake of demonstration of the list function only. However.9112 -0. Similarly.rnorm(10)
This generates a sample of 10 numbers from a normal distribution. While it produces a graph on the screen. or the corresponding expected values if the data were perfectly normally distributed. which can be saved and used for further calculation. $.5110 -0. or the sorted observed values.4772 -0.7763 [7] -0.qqnorm(sample1)
This stores the results into an object called list2. a list is sometimes required in the arguments to some functions. each new name is prefixed with a dollar sign.
> list2 $x [1] 0. rm(fruits)
A list may also be returned from the results of an analysis. but appears under a special class.

In this chapter. each variable can have lengthy variable descriptions. From Excel. Data from most software programs can be exported or saved as an ASCII file. which is character.csv" (comma separated values) format. the main structure of a data frame consists of columns (or variables) and rows (or records).txt" extension. the data can be saved as ". For most researchers. a very commonly used spreadsheet program. Simply open the Excel file and 'save as' the csv format. examples were given on arrays and lists. These contain the real data that most researchers have to work with. a data frame can consist of a column of 'idnumber'. these are sometimes called datasets. After
. including the ". a complete dataset can contain more than one data frame. usually having a ". Rules for subscripting. A data frame can also have extra attributes.xls" is originally an Excel spreadsheet. They can be transferred from one format to another through the ASCII file format. can have different classes of columns. All columns in an array are forced to be character if just one cell is a character. column or row binding and selection of a subset in arrays are directly applicable to data frames. A factor in a data frame often has 'levels' or value labels. on the other hand. a text file is the most common ASCII file. For example. data frames will be the main focus.R" command file discussed in chapter 25. Data frames are however slightly more complicated than arrays. In Windows.
Comparison of arrays and data frames
Many rules used for arrays are also applicable to data frames. For example. A data frame. However. This is an easy way to interface between Excel spreadsheet files and R.
Obtaining a data frame from a text file
Data from various sources can be entered using many different software programs.Chapter 4: Data Frames
In the preceding chapter. which is numeric and a column of 'name'. As an example suppose the file "csv1. There are several other files in ASCII format. They can also be created in R during the analysis. For example. These attributes can be transferred from the original dataset in other formats such as Stata or SPSS.

sex. Sometimes the file may not contain quotes. as.20 "B". the output file is called "csv1. the contents of which is:
"name".is=TRUE) > a name sex age 1 A F 20 2 B M 30 3 C F 40
The argument 'as.
> a <. For files with white space (spaces and tabs) as the separator.20 B.M.
> a <.table.csv"."M".30 C. The following command should therefore be typed:
> a$sex <.F.40
Note that the characters are enclosed in quotes and the delimiters (variable separators) are commas."F".
namesexage 1AF20 2BM30 3CF40
36
.is=TRUE)
The file "data2. the characters would have been coerced into factors.30 "C".csv". The variable 'name' should not be factored but 'sex' should.txt" is in fixed field format without field separators.is=TRUE' keeps all characters as they are. such as in the file "data1.40
For both files.csv". R will inform you that the object 'sex' cannot be found.table("data1."F". as.txt"."age" "A".'save as' into csv format.csv("csv1."sex". Had this not been specified. the command to use is read.read. the R command to read in the dataset is the same. as in the file "csv2.age A.read.factor(a$sex)
Note firstly that the object 'a' has class data frame and secondly that the names of the variables within the data frame 'a' must be referenced using the dollar sign notation. If not.
name. header=TRUE.F.txt".

A software program specially designed for data entry. to remove all objects in the current workspace without quitting R. such as the variable labels and descriptions.2). skip=1.epiinfo') but it is recommended to export data from Epidata (using the export procedure inside that software) to Stata format and use the function read. The first line.To read in such a file. the chance of human error is high with the spreadsheet or text mode data entry. It is also possible to enter data directly into R by using the function data. if the data size is large (say more than 10 columns and/or more than 30 rows). which is located in your working folder. "sex".
Clearing memory and reading in data
At the R console type:
> rm(list=ls())
The function rm stands for "remove".is=TRUE)
Data entry and analysis
The above section deals with creating data frames by reading in data created from programs outside R. must be skipped.dk.1. which is the header. such as Excel. type:
> zap()
. However. To see what objects are currently in the workspace type:
> ls() character(0)
The command ls() shows a list of objects in the current workspace.dta to read the dataset into R. automatic jumps and labelling of variables and values (codes) for each variable.entry. quit R and delete the file ". The name(s) of objects have class character. This will happen if you agreed to save the workspace image before quitting R.names = c("name". The result "character(0)" means that there are no ordinary objects in the environment. width=c(1.fwf is preferred. such as Epidata. The command above removes all objects in the workspace.epidata. If you do not see "character(0)" in the output but something else. as. Exporting data into Stata format maintains many of the attributes of the variables. Alternatively. The width of each variable and the column names must be specified by the user. Epidata has facilities for setting up useful constraints such as range checks. Their web site is: http://www.
> a <. There is a direct transfer between Epidata and R (using 'read. col. "age").read. the function read. is more appropriate. or rename it if you would like to keep the workspace from the previous R session. it means those objects were left over from the previous R session. To avoid this.txt".Rdata".fwf("data2.

Datasets included in Epicalc
Most add-on packages for R contain datasets used for demonstration and teaching. In this book, most of the examples use datasets from the Epicalc package. To check what datasets are available in all loaded packages in R type:

> data()

You will see names and descriptions of several datasets in various packages, such as datasets and epicalc.

Reading in data

Let's try to load an Epicalc dataset.

> data(Familydata)

The command data loads the Familydata dataset into the R workspace. If there is no error you should be able to see this object in the workspace.

> ls()
[1] "Familydata"

Viewing contents of a data frame

If the data frame is small such as this one (11 records, 6 variables), just type its name to view the entire dataset.

> Familydata
   code age  ht wt money sex
1     K   6 120 22     5   F
2     J  16 172 52    50   M
3     A  80 163 71   100   M
4     I  18 158 51   200   F
5     C  69 153 51   300   F
6     B  72 148 60   500   F
7     G  46 160 50   500   F
8     H  42 163 55   600   F
9     D  58 170 67  2000   M
10    F  47 155 53  2000   F
11    E  49 167 64  5000   M

To get the names of the variables (in order) of the data frame, you can type:

> names(Familydata)
[1] "code"  "age"   "ht"    "wt"    "money" "sex"

Another convenient function that can be used to explore the data structure is str.
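As a brief hedged illustration (the exact console formatting may vary with the R version), str reports the dimensions of the data frame and the class of each column:

> str(Familydata)   # shows 11 obs. of 6 variables, with each variable's class and first few values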

For a quick numerical overview, the base R function summary and the Epicalc function summ can both be applied to the whole data frame. The function summ gives the more concise output, showing one variable per line, and the same statistic from different variables is lined up in the same column. Information on each variable is complete, without any missing values, as the number of observations is 11 for every variable. The number of observations and the standard deviation are included in the summ report, replacing the first and third quartile values given by the original summary function from the R base library. The minimum and maximum are shown close to each other, enabling the range of the variable to be easily determined. In addition, the values 'F' and 'M' for the variable 'sex' have been replaced by the codes 1 and 2, respectively; descriptive statistics for factor variables use their unclassed values. Unclassing a factor variable converts the categories or levels into integers, where each level is stored as an integer starting from 1 for the first level of the factor. This is because R interprets factor variables in terms of levels. More discussion about factors will appear later.

Try the following commands:

> summary(Familydata$age)
> summ(Familydata$age)
> summary(Familydata$sex)
> summ(Familydata$sex)

The results are similar to the summary statistics of the whole dataset. Note that summ, when applied to a single variable, automatically gives a graphical output. This will be examined in more detail in subsequent chapters.

Extracting subsets from a data frame

A data frame has a subscripting system similar to that of an array. This is because a data frame is also a kind of list (see the previous chapter).

> typeof(Familydata)
[1] "list"

To choose only the third column of Familydata type:

> Familydata[,3]
 [1] 120 172 163 158 153 148 160 163 170 155 167

This is the same as

> Familydata$ht

Note that subscripting the data frame Familydata with a dollar sign $ and the variable name will extract only that variable.

To extract more than one variable, we can use either the index number of the variable or the name. For example, if we want to display only the first 3 records of 'ht', 'wt' and 'sex', then we can type:

> Familydata[1:3, c(3,4,6)]
   ht wt sex
1 120 22   F
2 172 52   M
3 163 71   M

We could also type:

> Familydata[1:3, c("ht","wt","sex")]
   ht wt sex
1 120 22   F
2 172 52   M
3 163 71   M

The condition in the subscript can be a selection criteria, such as selecting the females.

> Familydata[Familydata$sex=="F", ]
   code age  ht wt money sex
1     K   6 120 22     5   F
4     I  18 158 51   200   F
5     C  69 153 51   300   F
6     B  72 148 60   500   F
7     G  46 160 50   500   F
8     H  42 163 55   600   F
10    F  47 155 53  2000   F

Note that the conditional expression must be followed by a comma to indicate selection of all columns. In addition, two equals signs are needed in the conditional expression; recall that one equals sign represents assignment. Another method of selection is to use the subset function.

> subset(Familydata, sex=="F")

To select only the 'ht' and 'wt' variables among the females:

> subset(Familydata, sex=="F", select = c(ht, wt))

Note that the commands to select a subset do not have any permanent effect on the data frame. The user must save the result into a new object if further use is needed.

Adding a variable to a data frame

Often it is necessary to create a new variable and append it to the existing data frame. For example, we may want to create a new variable called 'log10money' which is equal to log base 10 of the pocket money.

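A minimal sketch of how such a variable might be appended (one common idiom; the original text's exact command may differ):

> Familydata$log10money <- log10(Familydata$money)   # new column added to the data frame
> names(Familydata)                                  # 'log10money' now appears as the last variable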
At this stage, it is possible that you may have made some typing mistakes. Some of them may be serious enough to make the data frame Familydata distorted or even not available from the environment. You can always refresh the R environment by removing all objects and then reading the dataset in afresh.

> zap()
> data(Familydata)
Attaching the data frame to the search path
Accessing a variable in the data frame by prefixing the variable with the name of the data frame is tidy but often clumsy, especially if the data frame and variable names are lengthy. Placing or attaching the data frame into the search path eliminates the tedious requirement of prefixing the name of the variable with the data frame. If we try to use a variable in a data frame that is not in the search path, an error will occur. Our data frame is not yet in the search path.

> summary(age)
Error in summary(age) : Object "age" not found

Try the following command:

> attach(Familydata)

The search path now contains the data frame in the second position. To check the search path type:

> search()
 [1] ".GlobalEnv"        "Familydata"        "package:methods"
 [4] "package:datasets"  "package:epicalc"   "package:survival"
 [7] "package:splines"   "package:graphics"  "package:grDevices"
[10] "package:utils"     "package:foreign"   "package:stats"
[13] "Autoloads"         "package:base"

Since 'age' is inside Familydata, which is now in the search path, computation of statistics on 'age' is now possible.

> summary(age)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   6.00   30.00   47.00   45.73   63.00   80.00

Attaching a data frame to the search path is similar to loading a package using the library function. The attached data frame, as well as the loaded packages, are actually read into R's memory and are resident in memory until they are detached. The general explanation of search() is given in Chapter 1.

If the same data frame is attached more than once, or if different objects share the same name, the consequences can be disastrous. Recall that every time a command is typed in and the <Enter> key is pressed, the system will first check whether it is an object in the global environment. If not, R checks whether it is a component of the remaining search path, that is, a variable in an attached data frame or a function in any of the loaded packages. Confusion arises if an independent object (e.g. a vector) is created in the global environment with the same name as the data frame, or if two different data frames in the search path each contain a variable with the same name. The data frame attached at position 2 may well be different to the object of the same name in another search position. In addition, a data frame can change at any time during a single session, as seen in the previous section where the variable 'log10money' was added and later removed. This is true even if the original data frame has been removed from the memory.

> rm(Familydata)
> search()

The data frame Familydata is still in the search path, allowing any variable within the data frame to be used.

> age
 [1]  6 16 80 18 69 72 46 42 58 47 49

Repeatedly loading the same library does not add to the search path because R knows that the contents of the library do not change during the same session. Re-attaching the same data frame, however, is possible and may eventually overload the system resources.

> data(Familydata)
> attach(Familydata)
The following object(s) are masked from Familydata (position 3):
    age code ht money sex wt

These variables are already in the second position of the search path; attaching again creates conflicts in variable names.

> search()
 [1] ".GlobalEnv"        "Familydata"        "Familydata"
 [4] "package:methods"   "package:datasets"  "package:epicalc"
 [7] "package:survival"  "package:splines"   "package:graphics"
[10] "package:grDevices" "package:utils"     "package:foreign"
[13] "package:stats"     "Autoloads"         "package:base"

The search path now contains two objects named Familydata in positions 2 and 3. Both have more or less the same set of variables with the same names. The data frame Familydata in the search path occupies the same amount of memory as the one in the current workspace, and all elements in the search path occupy system memory. Doubling of memory is not a serious problem if the data frame is small, but repeatedly attaching to a large data frame may cause R to not execute due to insufficient memory.

With these reasons in mind, it is good practice, firstly, to remove a data frame from the search path once it is not needed anymore. Detach both versions of Familydata from the search path.

> detach(Familydata)
> detach(Familydata)

Note that the command detachAllData() in Epicalc removes all attachments to data frames. The command zap() does the same, but in addition removes all non-function objects. In fact, the command zap() is equivalent to rm(list=lsNoFunction()) followed by detachAllData().

Secondly, remove any objects from the environment using rm(list=ls()) when they are not wanted anymore. Thirdly, do not define a new object (say a vector or matrix) that has the same name as a data frame in the search path. For example, we should not create a new vector called Familydata as we already have the data frame Familydata in the search path.

The 'use' command in Epicalc

Attaching to and detaching from a data frame is often tedious and cumbersome, and if there is more than one data frame in the workspace then users must be careful that they are attached to the correct data frame when working with their data. Most data analysis deals only with a single data frame. In order to reduce these steps of attaching and detaching, Epicalc contains a command called use which eases the process. At the R console type:

> zap()
> data(Familydata)
> use(Familydata)

The command use() reads in a data file from Dbase (.dbf), Stata (.dta), SPSS (.sav), EpiInfo (.rec) and comma separated value (.csv) formats, as well as those that come pre-supplied with R packages. In fact, all the datasets in Epicalc were originally in one of the file formats of .dta, .rec, .csv or .txt. These datasets in their original format can be downloaded from http://medipe.psu.ac.th/Epicalc/. If you download the files and set the working directory for R to the default folder "C:\RWorkplace", you do not need to type data(Familydata) and use(Familydata), but instead simply type:

> use("family.dta")

The original Stata file will be read into R and saved as .data, and not as Familydata. The dataset read in by use() is copied into memory in this default data frame called .data; if .data already exists, it will be overwritten by the new data frame, while the original Familydata remains untouched. In most parts of the book, we chose to tell you to type data(Familydata) and use(Familydata) instead of use("family.dta") because the dataset is already in the Epicalc package, which is readily available when you practice Epicalc to this point. However, putting "filename.extension" as the argument, such as use("family.dta") in this chapter or use("timing.dta") in the next chapter, may give you a real sense of reading actual files instead of the approach that is used in this book. In other words, it will make no difference whether you type data(Familydata) followed by use(Familydata) or simply use("family.dta").

The command use also automatically places the data frame, .data, into the search path as well as making it the default data frame. Type:

> search()

You will see that .data is in the second position of the search path. Thus des() is the same as des(.data), summ() is equivalent to summ(.data), and so forth.

> des()
> summ()

In order to show that .data is really in the memory, type:

> ls(all=TRUE)

You will see .data in the first position of the list. Type:

> ls()

You will see only the Familydata object, but not .data, because the name of this object starts with a dot and is classified as a hidden object.

.data is resistant to zap()

Type the following at the R console:

> zap()
> ls(all=TRUE)

The object Familydata is gone but .data is still there. However, the attachment to the search path is now lost:

> search()

In order to put it back into the search path, we have to attach to it manually.

> attach(.data)

The advantage of use() is not only that it saves time by making attach and detach unnecessary. A number of other commands from the Epicalc package work based on this strategy of making .data the default data frame, exclusively attached to the search path (all other data frames will be detached, unless the argument 'clear=FALSE' is specified in the use function). This strategy does not have any effect on the standard functions of R; the users of Epicalc can still use the other commands of R while still enjoying the benefit of Epicalc.

For straightforward data analysis, the command use() is sufficient to create this setting. In many cases where the data that is read in needs to be modified, it is advised to rename or copy the final data frame to .data, then detach from the old .data and re-attach to the most updated one in the search path.

The sequence of commands zap, use(datafile), des() and summ() is recommended for starting an analysis of almost all datasets in this book.

Exercises________________________________________________

With several datasets provided with Epicalc, such as those originally created in Epidata or Stata, use the last commands (zap, data, use, des, summ) to have a quick look at them.
. of observations = 11 Variable Class 1 code character 2 age integer 3 ht integer 4 wt integer 5 money integer 6 sex factor
Description Age(yr) Ht(cm. The other variables have class integer. In this chapter. The use function places the data frame into a hidden object called . This is usually created by the software that was used to enter the data.Chapter 5: Simple Data Exploration
Data exploration using Epicalc
In the preceding chapter, we learnt the commands zap for clearing the workspace and memory, use for reading in a data file, and des and summ for initially exploring the data frame, keeping in mind that these are all Epicalc commands. The use function places the data frame into a hidden object called .data, which is automatically attached to the search path. In this chapter, we will work with more examples of data frames as well as ways to explore individual variables.

> zap()
> data(Familydata)
> use(Familydata)
> des()

Anthropometric and financial data of a hypothetical family
No. of observations = 11
  Variable  Class      Description
1 code      character
2 age       integer    Age(yr)
3 ht        integer    Ht(cm.)
4 wt        integer    Wt(kg.)
5 money     integer    Pocket money(B.)
6 sex       factor

The first line after the des() command shows the data label, which is the descriptive text for the data frame. This is usually created by the software that was used to enter the data. Subsequent lines show variable names and individual variable descriptions. The variable 'code' is a character string while 'sex' is a factor; the other variables have class integer. A character variable is not used for statistical calculations but simply for labelling purposes or for record identification. Recall that a factor is what R calls a categorical or group variable. The remaining integer variables ('age', 'ht', 'wt' and 'money') are intuitively continuous variables. The variables 'code' and 'sex' have no variable descriptions due to omission during the preparation of the data prior to data entry.

The mean and median age. 80
50
. all values are stored internally as integers. The last variable. 6
max.73 157. in this case .d. since the minimum is 1 and the maximum is 2. 'sex'.11 14. 47 160 53 500 1 24.18 1023. height and weight are quite close together indicating relatively non-skewed distributions. Epicalc has another function that gives summary statistics for a numeric variable and a frequency table with level labels and codes for factors. the command summ gives summary statistics of all variables in the default data frame.87 1499. Their heights range from 120 to 172 (cm).3 12. This is very useful for numeric variables but less so for factors.18 1.18 54.000 (baht). Each of the six variables has 11 observations. the statistics are based on the unclassed values of this variable. If a factor has more than two levels.d. The variable 'money' has a mean much larger than the median signifying that the distribution is right skewed.505 min. The variable 'money' ranges from 5 to 5. is a factor. 6 120 22 5 1 max. Since the variable 'code' is class 'character' (as shown from the 'des()' command above). mean median s.11 ==================
min.
Codebook
The function summ gives summary statistics of each variable.4 percent of the subjects have the second level of the factor (in this case it is male). i. especially those with more than two levels.364
1 2 3 4 5 6
As mentioned in the previous chapter. the mean will have no useful interpretation. For factors. information about this variable is not shown. only 1 or 2 in this case.
> codebook() Anthropometric and financial data of a hypothetical family code : A character vector ================== age : Age(yr) obs. and their weights range from 22 to 71 (kg). The mean of 'sex' is 1.e. The ages of the subjects in this dataset range from 6 to 80 (years).727 47 24. 11 45.55 0. 80 172 71 5000 2
mean 45.364 indicating that 36. line by line.data. which means that there are no missing values in the dataset. 11 11 11 11 11 = 11 median s. However. We can see that there are two levels. name code age ht wt money sex Obs.> summ() Anthropometric and financial data of a hypothetical family No. of observations Var.

min.d. followed by frequency and percentage of the distribution.ht
: Ht(cm. If a variable label exists. There are 7 females and 4 males in this family. min. order Class # records Description 1 character 11
51
. the name of the table for the label of the levels is shown and the codes for the levels are displayed in the column. min.87 22 71 ================== money : Pocket money(B. the original label table is named 'sex1' where 1 = F and 2 = M.) obs.d. codebook deals with each variable in the data frame with more details. Note that the label table for codes of a factor could easily be done in the phase of preparing data entry using Epidata with setting of the ".) obs. then the label table of each variable will be exported along with the dataset. 11 1023.
> des(code) 'code' is a variable found in the following source(s): Var.chk" file.d. max. For 'sex'.55 5 5000 ================== sex : Label table: sex1 code Frequency Percent F 1 7 63.182 53 12. source .data Var.4 ==================
Unlike results from the summ function. The Epicalc codebook command fully utilizes this attribute allowing users to see and document the coding scheme for future referencing. 11 157.) obs. mean median s.3 120 172 ================== wt : Wt(kg. The function is therefore very useful. For factors. max. max. If the data is exported in Stata format. The output combines variable description with summary statistics for all numeric variables. mean median s.182 160 14. The output can be used to write a table of baseline data of the manuscript coming out from the data frame. it is given in the output. which is a factor. 11 54. mean median s.6 M 2 4 36.182 500 1499. We can also explore individual variables in more detail with the same commands des and summ by placing the variable name inside the brackets. The label tables are passed as attributes in the corresponding data frame.

but positioned freely outside the hidden data frame. 80
Distribution of Age(yr)
Subject sorted by X−axis values
20
40
60
80
52
. source .727 47 s.data.11 min. mean median 11 45. which is the part of . Using des() with other variables shows similar results.data Var. Next type:
> summ(age) Obs.The output tells us that 'code' is in . Now try the following command:
> summ(code)
This gives an error because 'code' is a character vector. order Class numeric 1 character # records Description 1 11
The output tells us that there are two 'codes'. we will delete the recently created object 'code'.d.
> rm(code)
After removal of 'code' from the global environment.
> code <. the latest des() command will describe the old 'code' variable. 6 max.GlobalEnv . and remains usable. Suppose we create an object. The first is the recently created object in the global environment.1 > des(code) 'code' is a variable found in the following source(s): Var. To avoid confusion. also called 'code'.data. .data. 24. The second is the variable inside the data frame.

represents each subject or observation sorted by the values of the variable.916 1
max. 4 etc.1:20 > summ(abc) Obs. The values increase from one observation to the next higher value. mean median 20 10. which is plotted at the bottom left. Since this increase is steady. the Y-axis. Now try the following commands:
> abc <.
53
. 3. then 2. the line is perfectly straight. 5. the smallest number is 1.d. the variable name will be presented instead. The main title of the graph contains a description of the variable after the words "Distribution of".The results are similar to what we saw from summ. min. labelled 'Subject sorted by X-axis values'. since the argument to the summ command is a single variable a graph is also produced showing the distribution of age. The other axis. The final observation is 20.5 10. If the variable has no description. For the object 'abc'. However. which is plotted at the top right. 20
Distribution of abc
Subject sorted by X−axis values
5
10
15
20
The object 'abc' has a perfectly uniform distribution since the dots form a straight line. A dot chart has one axis (in this case the X-axis) representing the range of the variable.5
s. The graph produced by the command summ is called a sorted dot chart.

thus these give a relatively steep increment on the Y-axis. 3. The ticks are placed at values of 1. In this session. min. 'side=2' denotes the Y-axis). 1:length(ht)) > sort(ht) [1] 120 148 153 155 158 160 163 163 167 170 172
54
.To look at a graph of age again type:
> summ(age) > axis(side=2.182 160 14. 5th. The 4th. The ticks are omitted by default since if the vector is too long. mean median s.303 120 172 > axis(side=2. the ticks would be too congested. From the 3rd observation to the 4th (42 years). up to 11 (which is the length of the vector age).
Distribution of Age(yr)
10 11 Subject sorted by X−axis values 1 2 3 4 5 6 7 8 9
20
40
60
80
To facilitate further detailed consideration. 11 157.
> sort(age) [1] 6 16 18 42 46 47 49 58 69 72 80
The relative increment on the X-axis from the first observation (6 years) to the second one (16 years) is larger than from the second to the third (18 years). the increment is even larger than the 1st one. the slope is relatively flat.d. In other words. 6th and 7th values are relatively close together. the sorted age vector is shown with the graph. Thus we observe a steep increase in the Y-axis for the second pair. the ticks will facilitate discussion. max. there is no dot between 20 and 40 years. 1:length(age))
The 'axis' command adds tick marks and value labels on the specified axis (in this case. 2.
> summ(ht) Obs.

There is a higher level of clustering of weight than height from the 2nd to 7th observations. The next two persons carry around 2. far away (in the X-axis) from the others.000 baht. the distribution is quite uniform. mean 11 1. 1 max.
56
.
> summ(sex) Obs.)
10 11 Subject sorted by X−axis values 1 2 3 4 5 6 7 8 9
0
1000
2000
3000
4000
5000
Next have a look at the distribution of the sex variable. as shown in the textual statistics) are male.d. For the distribution of the money variable.000 baht. The first seven persons carry less than 1. the values will show the name of the group. This is somewhat consistent with a theoretical exponential distribution. From the 8th to 11th observations.000 baht whereas the last carries 5.364 median 1 s.4%.
Distribution of Pocket money(B. these six persons have very similar weights. type:
> summ(money)
Money has the most skewed distribution. 0.5 min. When the variable is factor and has been labelled. 2
The graph shows that four out of eleven (36.

514 120
max. Most people are more acquainted with a dot plot than the sorted dot chart produced by summ. by=sex) For sex = F Obs. 14. In the figure above. When the sample size is small.) by sex
M
F
120
130
140
150
160
170
Clearly. 172
Distribution of Ht(cm. However. min. 163
median 168.
58
. mean median 7 151 155 For sex = M Obs.5
s. dotplot divides the scale into several small equally sized bins (default = 40) and stacks each record into its corresponding bin.d. Epicalc has another exploration tool called dotplot. we may simply compare the distributions of height by sex.916
min. plots by summ are more informative. there are three observations at the leftmost bin and one on the rightmost bin.
Dotplot
In addition to summ and tab1. males are taller than females.
> dotplot(money)
While the graph created from the summ command plots individual values against its rank. mean 4 168
s.d. the latter plot gives more detailed information with better accuracy.Since there are two sexes. 163
max. 3.
> summ(ht. The plot is very similar to a histogram except that the original values appear on the X-axis.

One may want to show even more information.When the sample size is large (say above 200). R can serve most purposes.) by sex
M
F
0
1000
2000
3000
4000
5000
The command summ easily produces a powerful graph.
59
. but the user must spend some time learning it.
Distribution of Pocket money(B. by=sex)
Distribution of Pocket money(B.)
20 Frequency 0
0
5
10
15
1000
2000
3000
4000
5000
> dotplot(money. dotplot is more understandable by most people.

"male"). To add the y-axis. the incremental pattern would not be seen. unlike its R base equivalent sort. the previous commands can be edited before executing again. If you make a serious mistake simply start again from the first line.
> dotchart(ht)
Had the data not been sorted. uclassing it gives a numeric vector with 1 for the first level (female) and 2 for the second level (male). To add the titles type:
> title(main="Distribution of height") > title(xlab="cms")
60
. y=10. Thus the black dots represent females and the red dots represent males. col=1:2. up to 9. Since 'sex' is a factor. More details on how to view or manipulate the palette can be found in the help pages. all the labels of the ticks will be horizontal to the axis. The whole data frame has been sorted in ascending order by the value of height. type the following command:
> axis(side=2. which represents gray. text.
> > > > > zap() data(Familydata) use(Familydata) sortBy(ht) .data
The command sortBy. where the number 1 represents black. the number 2 represents the red.col=1:2)
The argument 'pch' stands for point or plotting character. col=unclass(sex). pch=18)
Showing separate colours for each sex is done using the 'unclass' function. pch=18. Note that 'col' is for plot symbol colours and 'text. labels=code. which specifies the orientation of tick labelling on the axes. Colours can be specified in several different ways in R.data. legend=c("female". has a permanent effect on . Using the up arrow key. las=1)
The argument 'las' is a graphical parameter.at=1:length(ht).
> dotchart(ht.Let's draw a sorted dot chart for the heights. A legend is added using the 'legend' command:
> legend(x=130. Code 18 means the symbol is a solid diamond shape which is more prominent than pch=1 (a hollow round dot). One simple way is to utilise a small table of colours known as the palette. The command below should be followed step by step to see the change in the graphic window resulting from typing in each line. The default palette has 9 colours.col' is for text colour in the legend. When 'las=1'.

individual variables can be explored simply by summ(var. Further use of this command will be demonstrated when the number of observations is larger.var).Distribution of height
J D E H A G I F C B K female male
120
130
140 cms
150
160
170
To summarise. In addition to summary statistics. the sorted dot chart can be very informative. by=group.name) and summ(var.name. des and summ. The dotplot command trades in accuracy of the individual values with frequency dot plots. which is similar to a histogram. after use(datafile).
61
.

Chapter 6: Date and Time
One of the purposes of an epidemiological study is to describe the distribution of a population's health status in terms of time. date starting treatment and assessing outcome are elements needed to compute survival time.
Computation functions related to date
Working with dates can be computationally complicated. the follow-up time is usually marked by date of visit. In follow up studies. Try the following at the R console:
63
. minute and second. This is called an epoch. In an outbreak investigation. The time unit includes century. hour. month and day. Most data analyses. description of date of exposure and onset is crucial for computation of incubation period. which is a serial function of year. The basic task in working with dates is to link the time from a fixed date to the display of various date formats that people are familiar with. There are leap years. dates are stored as the number of days since 1st January 1970. days of the week and even leap seconds. In survival analysis. Birth date is necessary for computation of accurate age. In this chapter. year. months with different lengths. Different software use different starting dates for calculating dates. There are several common examples of the use of dates in epidemiological studies. R uses the first day of 1970 as its epoch (day 0). the emphasis will be on time. place and person. day. month. In other words. however deal more with a person than time and place. The chronological location of day is date. The most common unit that is directly involved in epidemiological research is day. Dates can even be stored in different eras depending on the calendar. with negative values for earlier dates.

"%b %d. while '%B' and '%b' represent the months. This time. it should work. Under some operating system conditions. summ(b) > setTitle("French"). 1970"
The function 'format' displays the object 'a' in a fashion chosen by the user. summ(b) > setTitle("Italian").numeric(a) [1] 0
The first command above creates an object 'a' with class Date. day. "C" is the motherland of R and the language "C" is American English. '%A' and '%a' are formats representing full and abbreviated weekdays.> a <. "C")
Now try the above format command again. '%b' denotes the month in the three-character abbreviated form. respectively. which varies from country to country. Try the following command:
> Sys. the value is 0. '%d' denotes the day value and '%Y' denotes the value of the year. Day 100 would be
> a + 100 [1] "1970-04-11"
The default display format in R for a Date object is ISO format. including the century. %Y") [1] "Jan 01. The American format of 'month.
> setTitle("German"). summ(b)
64
.a + (0:3) > b
Then change the language and see the effect on the R console and graphics device. such as the Thai Windows operating system. Try these:
> b <.as. R has the 'locale' or working location set by the operating system. year' can be achieved by
> format(a. When converted to numeric.Date("1970-01-01") > a [1] "1970-01-01" > class(a) [1] "Date" > as. '%b' and '%a' may not work or may present some problems with fonts. These are language and operating system dependent.setlocale("LC_ALL".

The command setTitle changes the locale as well as the fixed wording of the locale to match it. the three phrases often used in Epicalc ("Distribution of". Epicalc displays the results of the summ function in ISO format to avoid country biases. This is however a bit too complicated to demonstrate in this book. 4 1970-01-02 1970-01-02 <NA>
min. Manipulation of title strings. In case the dates are not properly displayed.
> format(b.up. To reset the system to your original default values. check whether the date format containing '%a' and '%b' works. variable labels and levels of factors using your own language means you can have the automatic graphs tailored to your own needs. Interested readers can contact the author for more information. "C")
Then. Thai and Chinese versions of Windows may give different results. To see what languages are currently available in Epicalc try:
> titleString() > titleString(return. and "Frequency") can be changed to your own language. For more details see the help for the 'titleString' function.d. You may try setTitle with different locales. mean median s. "%a %d%b%y") [1] "Thu 01Jan70" "Fri 02Jan70" "Sat 03Jan70" "Sun 04Jan70" > summ(b) obs.table=TRUE)
Note that these languages all use standard ASCII text characters. type
> setTitle("")
For languages with non-standard ASCII characters. like the vector 'b'. Note that '%a' denotes weekday in the three-character abbreviated form. "by".look. 1970-01-01 1970-01-04
65
. just solve the problem by typing:
> Sys. has the Xaxis tick mark labels in '%a%d%b' format. The graphic results in only a few range of days. max.setlocale("LC_ALL". The displayed results from these commands will depend on the operating system.

thus '%d' must also be in the middle position. This must be correspondingly specified in the format of the as. Create a vector of three dates stored as character:
> date1 <.csv) file format.c("07/13/2004". R can read in date variables from Stata files directly but not older version of EpiInfo with <dd/mm/yy> format."08/01/2004". which can only be day (since there are only 12 months).
66
. Slashes '/' separate month.as.is = TRUE' in the read.Date(date1. day and year. is in the middle position. "%m/%d/%Y")
The format or sequence of the original characters must be reviewed. When reading in data from a comma separated variable (."03/13/2005") > class(date1) [1] "character" > date2 <. Transferring date variables from one software to another sometimes results in 'characters' which are not directly computable by the destination software.Date command. In the first element of 'date1'. it is a good habit to put an argument 'as.Distribution of b
Subject sorted by X−axis values Thu01Jan
Fri02Jan
Sat03Jan
Sun04Jan
Reading in a date variable
Each software has its own way of reading in dates. This will be read in as 'character' or 'AsIs'. '13'. It is necessary to know how to create date variables from character format.csv command to avoid date variables being converted to factors.

c("12". When the year value is omitted. values of hour.Date(paste(day1.Date) > help(format. "%d%b%y") [1] "13Jul04" "01Aug04" "13Mar05"
Other formats can be further explored by the following commands:
> help(format. month and year presented. month1) [1] "12 07" "13 08" "14 12" > as.POSIXct)
It is not necessary to have all day.
Dealing with time variables
A Date object contains year. if only month is to be displayed."08"."12") > paste(day1. For example."14"). R automatically adds the current year of the system in the computer. "%d %m") [1] "2007-07-12" "2007-08-13" "2007-12-14"
The function paste joins two character variables together. Changing into the format commonly used in Epicalc is achieved by:
> format(date2. "%B") [1] "July" "August" "March"
To include day of the week:
> format(date2. For time. "%A")
Conversely.> date2 [1] "2004-07-13" "2004-08-01" "2005-03-13" > class(date2) [1] "Date"
The default date format is "%Y-%m-%d".c("07". you can type:
> format(date2. minute and second must be available. > month1 <.
67
. if there are two or more variables that are parts of date:
> day1 <. month and day values. "%a-%d%b") [1] "Tue-13Jul" "Sun-01Aug" "Sun-13Mar" > weekdays(date2) [1] "Tuesday" "Sunday"
"Sunday"
This is the same as
> format(date2.month1)."13".

time)
Min. day=14. hour=bedhr. Median Mean Max.ISOdatetime(year=2004.time <. tz="") > summ(bed. To recalculate the day type:
> bed.time)
Min. 13. the third otherwise. 14)
The ifelse function chooses the second argument if the first argument is TRUE. the day should be calculated based on the time that the participants went to bed. Median Mean Max.ISOdatetime(year=2004. 2004-12-14 00:00 2004-12-14 01:30 2004-12-14 08:09 2004-12-14 23:45
Distribution of bed. the day of the workshop. then the day should be December 13th. day=bed. tz="") > summ(bed.time <. If the participant went to bed between 12pm (midday) and 12am (midnight). sec=0.ifelse(bedhr > 12.day <. min=bedmin.day. month=12. month=12.
> bed. the function ISOdatetime is used. 2004-12-13 21:30 2004-12-14 00:22 2004-12-14 00:09 2004-12-14 02:30
69
. In fact.time
Subject sorted by X−axis values
5
10
15
00:00 03:00 06:00 09:00 12:00 15:00 18:00 21:00
The graph shows interrupted time.To create a variable equal to the time the participants went to bed. min=bedmin.
> bed. otherwise the day should be the 14th. hour=bedhr. sec=0.

up.up. 1:length(bed. col="blue".time.time) segments(bed.time' and the maximum of 'woke. woke. 1:n. but can be changed by the user if desired. pch=18. 1:n) points(woke.
> sortBy(bed.max(woke.time.
Displaying two variables in the same graph
The command summ of Epicalc is not appropriate for displaying two variables simultaneously.length(bed. col="red") title(main="Distribution of Bed time and Woke up time")
71
.time) > plot(bed. xlim=c(min(bed.time'. The argument yaxt="n" suppresses the tick labels on the Y-axis.
> > > > n <.time.time)). yaxt="n")
The argument 'xlim' (x-axis limits) is set to be the minimum of 'bed.up. pch=18. 1:n.Distribution of sleep.up. ylab=" ".time). Somebody slept very little.duration' are chosen.time. The original dotchart of R is the preferred graphical method.duration
Subject sorted by X−axis values
1
2
3
4 hours
5
6
7
8
A suitable choice of units for 'sleep.time).

Distribution of arrival. of observations =15 Variable Class Description 1 id integer code 2 gender factor gender 3 dbirth Date Date of birth 4 sleepy integer Ever felt sleepy in workshop 5 lecture integer Sometimes sleepy in lecture 6 grwork integer Sometimes sleepy in group work 7 kg integer Weight in Kg 8 cm integer Height in cm
73
. There was one male who was slightly late and one male who was late by almost one hour. Most males who had no responsibility arrived just in time. Females varied their arrival time considerably.time by gender
female
male
08:00
08:20
08:40
09:00
09:20
The command summ works relatively well with time variables.
> > > > zap() data(Sleep3) use(Sleep3) des()
Sleepiness among the participants in a workshop No. The following dataset contains subject's birth dates that we can use to try computing age. Quite a few of them arrived early because they had to prepare the workshop room. In this case.
Age and difftime
Computing age from birth date usually gives more accurate results than obtaining age from direct interview. it demonstrates that there were more females than males.
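A hedged sketch of the idea, assuming 'dbirth' is the Date variable in this dataset and taking the workshop date (2004-12-14, used earlier in this chapter) as the reference point; the original commands may differ:

> ref.date <- as.Date("2004-12-14")                              # assumed reference date
> age.in.days <- as.numeric(difftime(ref.date, dbirth, units = "days"))
> age.in.year <- age.in.days / 365.25                            # approximate age in years
> summ(age.in.year, by = gender)                                 # Epicalc summary by sex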

16 4.year. 10 28. 35. mean median s.86
Distribution of age. mean median s. Males have an obviously smaller sample size with the same range as women but most observations have relatively high values.712 20.03
max.17
For gender = female Obs.d. by=gender) For gender = male Obs.06 6.83 32.in.353 20.> summ(age. 4 29. This a missing value.year by gender
female
male
20
25 years
30
35
Note that there is a blank dotted line at the top of the female group. min.in. min.5 34. max.
75
.d.4 29.

Exercises________________________________________________
In the Timing dataset: Compute time since woke up to arrival at the workshop.
76
. time woke up and arrival time on the same axis. Plot time to bed.

abdominal pain and diarrhea. Most variable names are selfexplanatory. while code 90 represents totally missing information. or 'AsIs' in R terminology. On 25 August 1990. The dataset is called Outbreak. Variables are coded as 0 = no. which are in character format. Some participants experienced gastrointestinal symptoms. vomiting. the local health officer in Supan Buri Province of Thailand reported the occurrence of an outbreak of acute gastrointestinal illness on a national handicapped sports day. This variable records the number of pieces eaten by each participant. 'saltegg' (salted eggs) and 'water'. The ages of each participant are recorded in years with 99 representing a missing value. This chapter illustrates how the data can be described effectively. of observations =1094
77
.. Type the following at the R console:
> > > > zap() data(Outbreak) use(Outbreak) des()
No.
Quick exploration
Let's look at the data. such as: nausea. The variables 'exptime' and 'onset' are the exposure and onset times. Time and date data types are not well prepared and must be further modified to suit the need of the descriptive analysis. Missing values were coded as follows: 88 = "ate but do not remember how much".Chapter 7: An Outbreak Investigation:
Describing Time
An outbreak investigation is a commonly assigned task to an epidemiologist. Dr Lakkana Thaikruea and her colleagues went to investigate. 1 = yes and 9 = missing/unknown for three food items consumed by participants: 'beefcurry' (beef curry). a finger-shaped iced cake of choux pastry filled with cream. Also on the menu were eclairs.

We will first define the cases, examine the timing in this chapter and investigate the cause in the next section.

Case definition
It was agreed among the investigators that a case should be defined as a person who had any of the four symptoms: 'nausea', 'vomiting', 'abdpain' or 'diarrhea'. A case can then be computed as follows:
> case <- (nausea==1)|(vomiting==1)|(abdpain==1)|(diarrhea==1)

To incorporate this new variable into .data, we use the function label.var.
> label.var(case, "diseased")

The variable 'case' is now incorporated into .data as the 14th variable together with a variable description.
> des()
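To get a quick feel for how many subjects meet this case definition, one possibility (a sketch using Epicalc's one-way tabulation) is:

> tab1(case)   # frequency and percentage of diseased and non-diseased subjects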

78

Timing of exposure
For the exposure time, first look at the structure of this variable.
> str(exptime) Class 'AsIs' chr [1:1094] "25330825180000" "25330825180000"...

The values of this variable contain fourteen digits. The first four digits represent the year in the Buddhist Era (B.E.) calendar, which is equal to A.D. + 543. The 5th and 6th digits contain the two digits representing the month, the 7th and 8th represent the day, 9th and 10th hour, 11th and 12th minute and 13th and 14th second.
> day.exptime <- substr(exptime, 7, 8)

The day of exposure was 25th of August for all records (ignoring the 39 missing values). We can extract the exposure time in a similar fashion.
> hr.exptime <- substr(exptime, 9, 10) > tab1(hr.exptime)

These are also acceptable, although note that most minutes have been rounded to the nearest hour or half hour. The time of exposure can now be calculated.
> time.expose <- ISOdatetime(year=1990, month=8, day= day.exptime, hour=hr.exptime, min=min.exptime, sec=0)
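As is done for the onset time below, the new exposure-time variable can be labelled and summarised; a sketch (the description text here is our own wording):

> label.var(time.expose, "time of exposure")
> summ(time.expose)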

Of the subjects interviewed, 57.8% had a missing 'onset' and consequently a missing value on the derived variable 'day.onset'. This was due to either having no symptoms or the subject being unable to remember. Among those who reported the time, 429 had the onset on the 25th August. The remaining 33 had it on the day after.
> > > > > hr.onset <- substr(onset, 9, 10) tab1(hr.onset) min.onset <- substr(onset, 11, 12) tab1(min.onset) time.onset <- ISOdatetime(year = 1990, month = 8, day = day.onset, hour = hr.onset, min = min.onset, sec=0, tz="") > label.var(time.onset, "time of onset") > summ(time.onset)
Distribution of time of onset

81

Min. Median Mean 1990-08-25 15:00 1990-08-25 21:30 1990-08-25 21:40

Max. 1990-08-26 09:00

The upper part of the graph is empty due to the many missing values. Perhaps a better visual display can be obtained with a dotplot.
> dotplot(time.onset)
Distribution of time of onset
250 Frequency 0 15:00 50 100 150 200

18:00

21:00

00:00 HH:MM

03:00

06:00

09:00

Both graphs show the classic single-peak epidemic curve, suggesting a single point source. The earliest case had the onset at 3pm in the afternoon of August 25. The majority of cases had the onset in the late evening. By the next morning, only a few cases were seen. The last reported case occurred at 9am on August 26.

has 'pch' equal to -1 indicating no point is to be drawn.for. 25. The legend consists of three items as indicated by the character vector. 0).The plot pattern looks similar to that produced by 'summ(time. some text describing the key statistic of this variable is placed inside the plot area at 5pm and centred at 200. Finally. 1:n.expose. pch=c(20.
> with(data. 17. 1:n. y = 150. incubation period.26.graph.onset. A line joining each pair is now drawn by the segments command. The limits on the horizontal axis are from the minimum of time of exposure to the maximum of the time of onset. thus avoiding too much overlapping of the dots. srt = 90)
84
.2. The point characters and colours of the legend are specified in accordance with those inside the graph. 0. A legend is inserted to make the graph self-explanatory.for.8.
> legend(x = ISOdatetime(1990. which plots small solid circles. of exposure time and onset are 0 (no line) whereas that for incubation period is 1 (solid line).graph.0."Onset time".expose)'. time. labels = "median incubation period = 3. { points(time.1). The point character. pch=20) } )
The two sets of points are paired by subjects. lty=c(0.0.-1).20.5 hours". legend=c("Exposure time". The background of the legend was given lavender colour to supersede any lines or points behind the legend. These points are added in the following command:
> with(data."Incubation period").col=c("red". col = "grey45") } )
The complete list of built in colour names used by R can be found from colours(). 1:n. 'lty'. The last argument. y = 200. 'pch'. The line type.
> text(x = ISOdatetime(1990. The colours of the points and the lines are corresponding to that in the graph. is 20. bg="lavender")
The left upper corner of the legend is located at the right lower quadrant of the graph with the x coordinate being 2am and y coordinate being 150. allowing the points of onset to be put on the same graph.onset. 8. col="blue".0)."blue". { segments(time."grey45").

.
Exposure time & onset of food poisoning outbreak
Subject ID sorted by Exposure Time
400
Median incubation period = 3.The middle of the text is located at x = 19:00 and y = 200 in the graph. Savanpunyalert. P. J. 1995 An unusual outbreak of food poisoning.data. Pataraarechachai. Naluponjiragul. L. file = "Chapter7..5 hours
100
200
300
Exposure time Onset incubation period
0
11:00
15:00
19:00
23:00
03:00
07:00
Time (HH:MM)
Analysis of timing data has finished. In this case a rotation of 90 degrees gives the best picture.Rdata")
Reference
Thaikruea. The main data frame . U. Southeast Asian J Trop Med Public Health 26(1):78-85. The parameter 'srt' comes from 'string rotation'. Since the background colour is already grey. white text would be suitable.data is saved for further use in the next chapter.
85
..
> save(.

Exercise_________________________________________________
We recode the original time variable 'onset' right from the beginning using the command:
> onset[!case] <- NA

For the data frame that we are passing to the next chapter, has the variable 'onset' been changed? If not, why not, and how can we get a permanent change to the data frame that we are using? Note: the built-in Outbreak dataset cannot be modified.
86
.

9 to missing value.
> recode(eclair.factor(saltegg. This should be recoded first.data is ready for use des()
Recoding missing values
There are a number of variables that need to be recoded. labels=c("No". new.Chapter 8: An Outbreak Investigation:
Risk Assessment
The next step in analysing the outbreak is to deal with the level of risk.var(saltegg. the absolute missing value is 90. then recheck the data frame for the missing values."Yes")) saltegg <. "Beefcurry") label.var(beefcurry. The Epicalc command recode is used here.value = 99. water).
> > > > > > > zap() load("Chapter7. let's first load the data saved from the preceding chapter. NA)
The three variables can also be changed to factors with value labels attached.value = NA)
The variables with the same recoding scheme. 90. "Salted egg") label. saltegg.
> recode(var = age. NA) > summ()
87
. More details on this function are given in chapter 10.factor(beefcurry. They can be recoded together in one step as follows:
> recode(vars = c(beefcurry.factor(water. However. are 'beefcurry'.Rdata") ls(all=TRUE) # .data is there search() # No dataset in the search path use(.
> > > > > > beefcurry <. 9.var(water.data) search() # . old."Yes")) water <. The first variable to recode is 'age'. labels=c("No". "Water")
For 'eclair'. labels=c("No". 'saltegg' and 'water'."Yes")) label.

We will use the distribution of these proportions to guide our grouping of eclair consumption.
> tabpct(eclair. The first column of zero consumption has a very low attack rate. labels=c("0".4.lowest = TRUE. There is a tendency of increasing red area or attack rate from left to right indicating that the risk was increased when more pieces of eclair were consumed. The highest frequency is 2 pieces followed by 0 and 1 piece. therefore it should be a separate category. At this stage. Others who ate more than two pieces should be grouped into another category.">2"))
The argument 'include."1". The other numbers have relatively low frequencies. breaks = c(0.All variables look fine except 'eclair' which still contains the value 80 representing "ate but not remember how much". Only a few took half a piece and this could be combined with those who took only one piece. 2. 79). include. We will analyse its relationship with 'case' by considering it as an ordered categorical variable. case)
Distribution of diseased by eclair
0 0. those coded as '80' will be dropped due to the unknown amount of consumption as well as its low frequency.cut(eclair. 0. particularly the 5 records where 'eclair' was coded as 80.5 1 2 3 4 5 6 810 19 80 12 20
diseased
TRUE
FALSE
eclair
The width of the columns of the mosaic graph denotes the relative frequency of that category. Persons consuming 2 pieces should be kept as one category as their frequency is very high.
> eclairgr <. cross tabulation can be performed by using the Epicalc command tabpct. 1.lowest=TRUE' indicates that 0 eclair must be included in the lowest category."2".
88
. Finally.

9) (70.eat <.e. "pieces of > tabpct(eclairgr. The graph output is similar to the preceding one except that the groups are more concise.5) (54.1% among those heavy eaters of eclair. i. The next step is to create a binary exposure for eclair.
> label.eat.1) 1 54 51 (51.4) (48.1) ======== lines omitted ========= eclair eaten")
Total 294 (100) 105 (100) 446 (100) 127 (100)
Distribution of diseased by pieces of eclair eaten
0 1 2 >2
diseased
TRUE
FALSE
pieces of eclair eaten
The attack rate or percentage of diseased in each category of exposure. We now have a continuous variable of 'eclair' and a categorical variable of 'eclairgr'.data. case) ======== lines omitted ========= Row percent diseased pieces of eclai FALSE TRUE 0 279 15 (94. 'saltegg' and 'water'
89
. as shown in the bracket of the column TRUE.6) 2 203 243 (45.1% among those who did not eat any eclairs to 70.9) (5. 'beefcurry'.eclair > 0 > label.
> eclair. increases from 5.5) >2 38 89 (29. "eating eclair")
This binary exposure variable is now similar to the others.var(eclair.var(eclairgr.It is a good practice to label the new variable in order to describe it as well as put it into .

var(ageGrp.150 0.45] 1. which is another main feature of Epicalc.10] 3.
> des("????????") No.
> > > > > (age.272 (5. It indicates how many times the risk would increase had the subject changed their status from nonexposed to exposed.000
age by sex (percentage of each sex).150 1.483 16. Male 0.051 32. sex)) ageGrp <. The increment is considered in fold.5] 0.25] 6.age.150
Finally.000 (55.793 (25.502 1.011 (20.815 (50.359 (45.141 (30.902 (40. "Age Group") des() des("age*")
No.30] 11.55] 0.003 1.50] 0.Tabulation of Female [0.tab <.60] 0.261 (10.802 0.15] 46.250 (35. Let's return to the analysis of risk.108 3. of observations =1094 Variable Class 11 vomiting numeric 13 diarrhea numeric 18 eclairgr factor
Description
pieces of eclair eaten
We have spent some time learning these features of Epicalc for data exploration (a topic of the next chapter). both the table and age group can be saved as R objects for future use. of observations =1094 Variable Class 3 age numeric 20 ageGrp factor
Description Age Group
The des function can also display variables using wild card matching. Risk ratio – RR (also called relative risk) is the ratio of the risk of getting disease among the exposed compared with that among the non-exposed.
92
.20] 22.
Comparison of risk: Risk ratio and attributable risk
There are basically two methods for comparing the risk of disease in different exposure groups.583 33.817 8.196 (15.201 1.40] 1.tab$ageGroup label.pyramid(age. thus has a mathematical notation of being a 'multiplicative model'.35] 6.

5% had they not eaten any eclairs. a increase of 11 fold.eat) eating case FALSE FALSE 279 TRUE 15 Total 294 Rne 0. could lead to a huge amount of resources spent in health services. Even a relatively low level of fraction of risk attributable to tobacco in the population. -.
> cs(case. The risk ratio is an important indicator for causation. Similarly 'Re' is 383/683 = 0.56 and 'Rt' is 398/977 = 0. The risk of getting the disease among those eating eclairs could have been reduced by 91% and the risk among all participants in the sports carnival could have been reduced by 87. say 20%. A reduction of 51% substantially reduces the burden of the sport game attendants and the hospital services.'Rne'. respectively. eclair. measures direct health burden and the need for health services.(Rt-Rne)/Rt*100 %
'Rne'. an absolute increase of 51% whereas the risk ratio is 'Re' / 'Rne'. frac. The risk difference is 'Re' . The risk difference has more public health implications than the risk ratio.(Re-Rne)/Re Attr. A risk ratio above 10 would strongly suggest a causal relationship.05. Attributable fraction population indicates that the number of cases could have been reduced by 87% had the eclairs not been contaminated.41.Risk difference on the other hand. and has the mathematical notation of an additive model.91 87. A high risk ratio may not be of public health importance if the disease is very rare. This outbreak was transient if we consider a chronic overwhelming problem such as cardio-vascular disease or cancer.05 eclair TRUE Total 300 579 383 398 683 977 Re Rt 0. pop.56 0.1 0. exp. Those who ate eclairs had a high chance (55%) of getting symptoms. on the other hand.58 10. frac.48
Risk
Risk difference (attributable risk) Risk ratio Attr. 'Re' and 'Rt' are the risks in the non-exposed. 'Rne' in this instance is 15/294 = 0. suggests the amount of risk gained or lost had the subject changed from non-exposed to exposed.51 0. exposed and the total population. -. The risk difference.
93
. The increase is absolute.99 8 15. The Epicalc command cs is used to analyse such relationships.41 Estimate Lower95 Upper95 0.44 0.
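As a rough worked illustration of these quantities, using the counts quoted above (15/294 cases among the non-exposed, 383/683 among the exposed, 398/977 overall); this is a sketch, not Epicalc output:

> Rne <- 15/294            # risk among the non-exposed, about 0.05
> Re  <- 383/683           # risk among the exposed, about 0.56
> Rt  <- 398/977           # risk in the total population, about 0.41
> Re - Rne                 # risk difference (attributable risk), about 0.51
> Re / Rne                 # risk ratio, about 11
> (Re - Rne) / Re          # attributable fraction among the exposed, about 0.91
> (Rt - Rne) / Rt          # attributable fraction for the population, about 0.87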

Attributable fraction exposure, however, has little to do with the level of disease burden in the population.

We have eclair as a cause of disease. There are some interventions that can prevent the disease, such as vaccination, education, law enforcement and improvement of the environment. In this case, let's assume that not eating eclairs is a prevention process.

> eclair.no <- !eclair.eat    # The ! sign means "NOT"
> cs(case, eclair.no)
             eclair.no
              FALSE   TRUE  Total
case FALSE      300    279    579
     TRUE       383     15    398
Total           683    294    977
                Rne     Re     Rt
               0.56   0.05   0.41

                                     Estimate  Lower95  Upper95
Risk difference (absolute change)       -0.51    -0.58    -0.44
Risk ratio                               0.09     0.07     0.12
protective efficacy (%)                  90.9
Number needed to treat (NNT)             1.96

The risk among the exposed (not eating eclair) is lower than that among the non-exposed (eating eclair). The risk difference changes sign to negative, and the risk ratio reciprocates to a small value of 0.09. Instead of displaying the attributable fraction exposure and attributable fraction population, the command now shows the protective efficacy and the number needed to treat (NNT). The protective efficacy is equal to 1 - RR and is therefore just another way of expressing the risk ratio. From the protective efficacy value, the exposure to the prevention program would have reduced the risk of the eclair eater (unexposed under this hypothetical condition) by 90.9%.

A reduction of risk of 0.51 comes from an intervention on one individual; a reduction of 1 would therefore need an intervention on 1/0.51 = 1.96 individuals. NNT is just the reciprocal of the negative of the risk difference. The lowest possible level of NNT is 1, or perfect prevention, which also has 100% protective efficacy. An intervention with a high NNT would need to be given to many individuals just to avert one unwanted event; to avert the same type of unwanted event, an intervention with a low NNT is preferred to one with a high NNT, although the cost must also be taken into account. NNT is part of the measurement of the worthiness of an intervention (either prevention or treatment) technology.
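To make the arithmetic behind the protective efficacy and NNT explicit, the following lines recompute them directly from the two risks. This is only a sketch in base R; the object names are illustrative and are not produced by the cs command.

> r.exposed    <- 15/294     # risk among those "exposed" to the prevention (did not eat eclairs)
> r.nonexposed <- 383/683    # risk among those not exposed (ate eclairs)
> rd <- r.exposed - r.nonexposed   # risk difference, about -0.51
> rr <- r.exposed / r.nonexposed   # risk ratio, about 0.09
> 1 - rr                           # protective efficacy, about 0.91
> -1/rd                            # number needed to treat, about 1.96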

Dose-response relationship

One of the criteria for causation is the evidence of a dose-response relationship. If a higher dose of exposure is associated with a higher level of risk in a linear fashion, then the exposure is likely to be the cause. We now explore the relationship between the risk of getting the disease and the number of eclairs consumed.

> cs(case, eclairgr)
         eclairgr
case         0     1     2    >2
  FALSE    279    54   203    38
  TRUE      15    51   243    89

For each group the output also gives the absolute risk, the risk ratio and its lower and upper 95% confidence limits, with the non-consumers as the reference group.

Chi-squared = 237, 3 d.f., P value = 0
Fisher's exact test (2-sided) P value = 0

[Figure: Risk ratio from a cohort study - risk ratio (log scale, 1 to 20) plotted against eclairgr (0, 1, 2, >2)]

The risk ratio increases as the dose of exposure to eclairs increases. The step from not eating to the first group (up to one piece) is wide, whereas further increases are shown at a flatter slope. The p values in the output are both zero; in fact they are not really zero, but have been rounded to three decimal places. The default rounding of decimals for odds ratios and relative risks is two, and for p-values it is three. See 'help(cs)' for more details on the arguments.

Before finishing this chapter, the current data is saved for further use.

> save(.data, file = "Chapter8.Rdata")

Exercise_________________________________________________
Compute the attributable risk and risk ratio of 'beefcurry', 'saltegg' and 'water'. Are these statistically significant? If so, what are the possible reasons?

Rdata") > use(.rm = TRUE) > m.323129
97
.mean(case))
Note that when there are missing values in the variable.0
The probability of being a case is 469/1094 or 42. In this situation where noncases are coded as 0 and cases as 1.rm =TRUE' in the argument.m. The assessment of risk in this chapter is changed from the possible cause. or
> mean(case)/(1 .9 Total 1094 100.7504. Conversely. the probability would be equal to odds/(odds+1). p/(1-p) is known as the odds. Let's first load the data saved from the preceding chapter. the 'mean' must have 'na. we now focus on confounding among various types of foods. the probability is
> mean(case)
On the other hand the odds of being a case is 469/625 = 0.mean(eclair. For example the odds of eating eclairs is:
> m.9%. The next step in analysing the outbreak is to deal with the level of risk. na.eclair /(1 .1 TRUE 469 42.eclair <.
> tab1(case) Frequency Percent FALSE 625 57.data)
Odds and odds ratio
Odds has a meaning related with probability. Confounding and
Interaction
Having assessed various parameters of risk of participants in the outbreak in the last chapter.Chapter 9: Odds Ratios.
> zap() > load("Chapter8.eclair) [1] 2. If 'p' is the probability.eat.

While a probability always ranges from 0 to 1, an odds ranges from 0 to infinity. For a cohort study we may compute the ratio of the odds of being a case among the exposed vs the odds among the non-exposed.

> table(case, eclair.eat)
       eclair.eat
case    FALSE  TRUE
  FALSE   279   300
  TRUE     15   383

The conventional method for computing the odds ratio is therefore:

> (383/300)/(15/279)
[1] 23.746

This is the same value as the ratio of the odds of being exposed among cases and among non-cases.

> (383/15)/(300/279)

It is also equal to the ratio between the cross-products.

> (383 * 279)/(300 * 15)

Epicalc has a function cc producing the odds ratio, its 95% confidence interval, performing the chi-squared and Fisher's exact tests and drawing a graph for the explanation.

> cc(case, eclair.eat)
       eating eclair
case    FALSE  TRUE  Total
  FALSE   279   300    579
  TRUE     15   383    398
  Total   294   683    977

OR = 23.68
95% CI = 13.74, 43.86
Chi-squared = 221, 1 d.f., P value = 0
Fisher's exact test (2-sided) P value = 0

The value of the odds ratio from the cc function is slightly different from the calculations that we have done, because the 'cc' function uses the exact method to calculate the odds ratio.
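For comparison with the exact interval reported by cc, a normal-approximation (Woolf) confidence interval can be computed directly from the four cell counts. This is only a sketch and is not part of Epicalc.

> n.case.exp <- 383; n.ctrl.exp <- 300        # exposed cases and non-cases
> n.case.non <- 15;  n.ctrl.non <- 279        # non-exposed cases and non-cases
> or <- (n.case.exp * n.ctrl.non)/(n.ctrl.exp * n.case.non)   # cross-product odds ratio
> se.log.or <- sqrt(1/n.case.exp + 1/n.ctrl.exp + 1/n.case.non + 1/n.ctrl.non)
> exp(log(or) + c(-1.96, 1.96) * se.log.or)   # approximate 95% CI for the odds ratio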

[Figure: Odds ratio from prospective/X-sectional study - odds of outcome (log scale, 1/16 to 1) for the non-exposed and exposed categories, annotated with OR = 23.68 and its 95% CI]

The vertical lines of the resulting graph show the estimate and 95% confidence intervals of the two odds of being diseased, non-exposed on the left and exposed on the right. The size of the box at the estimate reflects the relative sample size of each subgroup; there were more exposed than non-exposed subjects. The exposed group estimate, computed by the conventional method, is 383/300, or slightly higher than 1, while the non-exposed group has its estimate slightly below 1/16, since its real value is 15/279. The ratio of these two values is over 23 fold. The odds ratio and its confidence interval can also be obtained directly from Fisher's exact test.

> fisher.test(table(case, eclair.eat))$estimate
odds ratio
    23.681
> fisher.test(table(case, eclair.eat))$conf.int
[1] 13.736 43.862
attr(,"conf.level")
[1] 0.95

Confounding and its mechanism

For 'saltegg', the odds ratio can be similarly computed.

> cc(case, saltegg)
        saltegg
case        0      1  Total
  FALSE    66    554    620
  TRUE     21    448    469
  Total    87   1002   1089

The total number of valid records for this computation is 1,089, which is higher than the 977 of the cross-tabulation between 'case' and 'eclair.eat'. The value of the odds ratio is not as high as for eclairs, but it is of statistical significance. Both eclairs and salted eggs have significant odds ratios and were consumed by a large proportion of participants. There might be only one real cause and the other was just confounded by it. Let's check the association between these two variables.

> cc(saltegg, eclair.eat, graph = FALSE)
          eating eclair
saltegg    FALSE  TRUE  Total
  0           53    31     84
  1          241   647    888
  Total      294   678    972

OR = 4.58
Chi-squared = 47, 1 d.f., P value = 0
Fisher's exact test (2-sided) P value = 0

The odds ratio is well above one: those participants who ate salted eggs also tended to eat eclairs. Stratified analysis gives the details of the confounding as follows.

> mhor(case, saltegg, eclair.eat)
[Figure: Stratified prospective/X-sectional analysis - odds of outcome (log scale, 1/32 to 2) against exposure to salted egg, drawn separately for the two strata of eclair consumption. Annotations: eclair.eatFALSE: OR = 0.87; eclair.eatTRUE: OR = 1.02; MH-OR = 1.02; homogeneity test P value = 0.787. Outcome = case, Exposure = saltegg.]
A higher average odds on the right-hand side leads to the crude odds ratio being higher than one. upper lim. P value eclair. the number of eclair non-consumers (as represented by the size of the lower box) is higher than that of the consumers.36 0. This crude odds ratio misleads us into thinking that salted egg is another cause of the disease where in fact it was just confounded by eclairs.eat' and 'saltegg'.739 eclair.944 M-H Chi2(1) = 0 . or those not consuming salted eggs.787
The above analysis of association between the disease and salted egg is stratified by level of eclair consumption based on records that have valid values of 'case'. P value = 0.eat FALSE 0.=0.224 5. is the weighted average of the two odds ratios. The Mantel-Haenszel (MH) odds ratio. in this case 'eclair.07. The distance between the two lines is between 16 to 32 fold of odds. chi-squared 1 d. The centre of the left-hand side therefore tends to lie closer to the lower box. The mechanism of this confounding can be explained with the above graph.eat TRUE 1. We will focus on the first part at this stage and come back to the second part later. In other words. the (weighted average) odds of diseased among the salted egg consumers is therefore closer to the upper box. The upper line lies far above the lower line meaning that the subset of eclair eaters had a much higher risk than the non-eaters. on the left-hand side. It is important to note that the distribution of subjects in this study is imbalanced in relation to eclair and salted eggs consumption. P value = 0. there are alot more eclair eaters (upper box) than non-eaters (lower box).541 1. The second part suggests whether the odds ratio of these strata can be combined.
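The Mantel-Haenszel estimate reported by mhor can be cross-checked with the mantelhaen.test function from the stats package, which accepts a three-way table with the stratification variable as the last dimension. This is shown only as a cross-check; mhor remains the Epicalc way of obtaining the stratified graph.

> mantelhaen.test(table(case, saltegg, eclair.eat))   # common odds ratio across the two strata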

7 2.36)
Non−exposed Outcome= case . = 0. the two lines of strata are very close together indicating that 'saltegg' is not an independent risk factor.96 42. 42. the odds ratio of eclair.96.78 (13. P value = 0. Graphically. the odds for disease are close and the weighted average odds is therefore not influenced by the number of subjects.
. upper lim. chi-squared 1 d.f.8) and the MH odds ratio (24. a variable cannot confound another exposure variable.68).eat.eat
Exposed
Stratified by 'saltegg'.88) saltegg1: OR = 24. Exposure= eclair. 49.3 4. P value saltegg 0 19.4 8.68.736 1/32
Odds of outcome
saltegg0: OR = 19.12e-49 M-H Chi2(1) = 215.11 . saltegg) Stratified analysis by saltegg OR lower lim.32 (13.68 117.06e-07 saltegg 1 24.42e-51 M-H combined 24. Now we check whether the relationship between the disease and eclair is confounded by salted egg.3 and 24.8 13.3) are strong and close to the crude odds ratio (23. Thus not being an independent risk factor. the stratification factor must be an independent risk factor. 117. In each of the exposed and non-exposed groups.56 49.63 .Firstly.736
Stratified prospective/X−sectional analysis
2 1 1/2 1/4 1/8 1/16 homogeneity test P value = 0. P value = 0 Homogeneity test. there must be a significant association between the stratification factor and the exposure of interest.9 6.71) MH−OR = 24.31 (4.
> mhor(case. eclair.eat in both strata (19.3 13. Secondly.56.

P value beefcurry 0 5. chi-squared 1 d.33 fold or an odds ratio of 5. 21.33 1. The homogeneity test in the last line concludes that the odds ratios are not homogeneous.56 .08 (13. Eating beef curry increased the harmful effect of eclair or increased the susceptibility of the person to get ill by eating eclairs. Among those who had not eaten beef curry.49 68.33.12e-03 beefcurry 1 31. This increase is 5.eat. In statistics. In epidemiology.9 1.1 4.53.49.7 3. the odds of getting the disease among those not eating eclair was slightly below 1 in 6. this is called significant interaction. beefcurry) Stratified analysis by beefcurry OR lower lim. Exposure= eclair.39e-48 M-H Chi2(1) = 214. P value = 0 Homogeneity test. eclair. The odds increases to over 1 in 2 for those who ate eclairs only. the baseline odds among those eating beef curry only (left point of the green line) is somewhere between 1 in 32 and 1 in 16.eat'.
> mhor(case.33 (1.eat
Exposed
The slopes of the odds ratios of the two strata cross each other. 41.
. The odds however steps up very sharply to over 1 among the subjects who had eaten both eclairs and beef curry.007
Non−exposed Outcome= case .71) 1/8 1/16 1/32 beefcurry1: OR = 31. 68. In contrast.63 (16. the effect of 'eclair' was modified by 'beefcurry'.f.79e-56 M-H combined 24. which is the lowest risk group in the graph. We now check the effect of 'beefcurry' stratified by 'eclair.89) homogeneity test P value = 0.Interaction and effect modification
Let's analyse the association between eating eclairs and the developing acute gastrointestinal illness again but now using 'beefcurry' as the stratification factor. P value = 0. = 7.007
Stratified prospective/X−sectional analysis
1 1/2
Odds of outcome
1/4 beefcurry0: OR = 5.85.85 41.63 16.08 13. upper lim.23 .53 21.11) MH−OR = 24.

179 1. eclair.Rdata")
Exercise_________________________________________________
Analyse the effect of drinking water on the odds of the disease.55 0.78 .eat) Stratified analysis by eclair. 4.2396 M-H Chi2(1) = 1.83 0.4 (0. The homogeneity test also concludes that the two odds ratios are not homogeneous. We put the new variable 'eclair.376 0.eat.eat' into .77.
. The odds ratio among those eating eclairs is 2.data by using label.var(eclair.11.f.38 (0.0329 M-H combined 1.eat FALSE 0.var and save the whole data frame for future use with logistic regression. chi-squared 1 d. logistic regression is needed. = 6.009
Stratified prospective/X−sectional analysis
2 1 1/2 1/4 1/8 1/16 1/32
Odds of outcome
eclair.021 4. P value = 0.47 0.111 1. upper lim.24 Homogeneity test.eat TRUE 2.47) eclair. beefcurry. Tabulation and stratified graphs are very useful in explaining confounding and interaction. For a dataset with a larger number of variables. Check whether it is confounded with eating eclairs or other foods. they are limited to only two or three variables.769 2.
> label. P value eclair.55) homogeneity test P value = 0.eatFALSE: OR = 0.009
Non−exposed Outcome= case . Check for interaction. Exposure= beefcurry
Exposed
The effect of beef curry among those not eating eclairs tends to be protective but without statistical significance. The stratification factor eclair has modified the effect of beef curry from a non-significant protective factor to a significant risk factor.02.eatTRUE: OR = 2. 1.eat OR lower lim. file="chapter9.1446 eclair.18 (1.38 .83) MH−OR = 1. However.> mhor(case.data.401 0. 2. "ate at least some eclair") > save(.18 with statistical significance. P value = 0.

> summ(id)
Valid obs.     mean   median     s.d.   min.   max.
       251  125.996      126   72.597      1    251

[Figure: Distribution of id - subjects sorted by X-axis values, ranging from 0 to 250]

The graph looks quite uniformly distributed and, looking carefully at it, there is no noticeable irregularity. However, the mean of id (125.996) is not equal to what it should be.

> mean(1:251)
[1] 126

There must be some duplication and/or some gaps within these id numbers. To check for duplication, we can type the following:

> any(duplicated(id))
[1] TRUE

The result tells us that there is in fact at least one duplicated id. To specify the id of the duplicates, type:

> id[duplicated(id)]
[1] 215

We see that id = 215 has one duplicate. Further inspection of the data reveals that the record numbers involved are 215 and 216. These two records should be investigated as to which one is incorrect; one of them should be changed to 'id' = 216.
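Both problems - gaps and duplicates - can be listed at once by comparing the observed ids with the full expected sequence. This is plain base R, shown only as a quick check.

> setdiff(1:251, id)      # expected ids that never occur (the gaps)
> id[duplicated(id)]      # ids that occur more than once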
This method can handle either a vector or an array (several variables at the same time).

Missing values

This file is not ready for analysis yet. As is often the case, the data were coded using outlier numbers to represent missing codes. In this dataset, the value '9' represents a missing code for religion (3rd variable), patient education (4th variable), income group (5th variable) and reason for family planning (7th variable). We first explore the data with boxplots.

> boxplot(.data, horizontal=T, las=1, main="Family Planning Clinic")

[Figure: Family Planning Clinic - boxplots of id, age, relig, ped, income, am, reason, bps, bpd, wt and ht on a common scale from 0 to 1000]

The outlier values of 'bps', 'bpd' and 'ht' are rather obvious. These are confirmed with the numerical statistics from the summ command seen earlier in this chapter.

There are four methods of changing values to missing (NA). The first method is based on the function replace, which handles one vector or variable at a time. The second uses extraction and indexing with subscript '[]', which is by far the simplest method. The third method is based on the transform command. These three methods use commands that are native to R. The fourth method uses the recode command from Epicalc.

1 max. and finally recode for the remaining necessary variables. Thus. 'income' and 'reason'. mean median 250 1. The first argument. one may want to coerce the value of diastolic blood pressure to be missing if the systolic blood pressure is missing. It has no effect on the original values. Secondly. The final argument. the target vector. The replace function handles only one variable at a time.
Changing values with extraction and indexing
The first variable to be replaced with this method is the 6th one.
> replace(relig. 'am'. whenever 'relig' is equal to 9. 'NA'. whenever 'relig' is equal to 9. 'ped'. 'relig'. The second argument. relig==9. replace is a function.
> summ(.data$relig) Obs. The values obtained from this function must be assigned to the original values using the assignment operators. 'relig==9'. is the index vector specifying the condition. is the target vector containing values to be replaced. Note that the index vector. the variable has changed.
> summ(relig)
We wish to replace all occurrences of 9 with the missing value 'NA'.31 min. For example. which denotes age at first marriage. the index vector and the value. NA) -> . The replace function handles only one variable at a time. it will be replaced with 'NA'. or condition for change. Right now. in this case. 2
There was one subject with a missing value leaving 250 records for statistical calculations. is the new value that will replace the old value of 9.We will use the replace function for the 3rd variable. extraction and indexing for the 4th to 7th variables.data$relig
There are three essential arguments to the replace function. See the online help for more detailed information on its usage. need not be the same vector as the target vector.d. not a command. The remaining subjects have values of one and two only for 'religion'.
Replacing values in a data frame
We wish to replace all occurrences of 9 with the missing value 'NA'.

median and standard deviation are not correct due to this coding of missing values. the 4th. 8.8
109
.
'[.c(4. Instead of using the previous method.data$wt) Valid obs. For example. 'income' and 'reason'). .5.data.7)]==9] <.895 51.data that have a value of 9 are replaced with 'NA'.5. Note that the mean. any element in which the value equals 9 will be replaced by 'NA'.5.657 20
s.
> summ(.data[. and 7th variables of .> summ(. 5.5.NA
All the 4th.data[. For example.data$am[. this latter command is slightly more straightforward than the above one using the replace function.data$am==99] <. the alternative is:
> .NA' means the epression on the left is to be assigned a missing value (NA). 99
The value 99 represents a missing value code during data entry.c(4. wt)) -> .data marked by '[ ]'.data
The expression inside the function tells R to replace values of 'wt' that are greater than 99 with the NA value. 0 max. ' <.d.7)]==9]'means the subset of each particular column where
the row is equal to 9.data[.data$am) Valid obs. conditions and replacing value. 73. for these four variables. Now check the 'wt' variable inside the data frame. The above command can be explained as follows. 15
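If several variables share the same missing code, the same indexing idea can be wrapped in a small helper function. The function below is purely illustrative and is not part of Epicalc.

> set.missing <- function(dat, columns, code){
+   for(j in columns){
+     dat[which(dat[, j] == code), j] <- NA   # rows where this column equals the missing code
+   }
+   dat
+ }
> .data <- set.missing(.data, c(4, 5, 7), 9)  # same effect as the indexing command above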
max. mean median 246 51.NA
With the same three components of the target vector. to transform 'wt'
> transform(. Thus.d.
Transforming variables in a data frame
The function transform does a similar job as the previous methods described above. ('ped'. 5th.7)][. 5 and 7. This method can also be used for many variables with the same missing code.c(4.data[.c(4.45 s.83
min.
> . 5th and 7th variables all use the value 9 as the code for a missing value. wt=ifelse(wt>99. There are two layers of subsets of .7)] means extract all rows of columns 4.91 min. mean median 251 20. The resulting object is saved into the data frame. NA.

data. Similar to other commands in Epicalc. mean median 250 20.
Recoding values using Epicalc
The function recode in Epicalc was created to make data transformation easier. 170
Variable 'bps' in . Let's start with 'bps'. summ. 'bpd' and 'ht'. for example use.d. 31
When the variable 'am' is used as the argument of summ. The new . the variable 'am' that is used now is the updated one which has been changed from the command in the preceding section. as shown below.851 51. This automatic updating has also affected other variables in the search path that we changed before.data)
Notice that the variable 'bps' has been changed. new. Similar to the results of previous methods. 11. 3.data as the default data frame. In fact.
.value=999.22 min.value=NA) > summ(. 14.data will have all variable descriptions removed.9
Note that the transformed data frame does not keep the variable labels or descriptions with it.
> summ(am) Valid obs. transform did not change the 'wt' variable inside the data frame in the search path. des. It then looks in the search path.d. mean median 244 113. We require replacing the values '999' to a missing value for variables 'bps'.Note the two outliers on the left-hand side of the graph. recode has automatically detached the old data frame and attached to the new one.033 110 s. old.var. The command is simple. 99. 0 max. The command recode makes variable manipulation simpler than the above three standard R methods.
> recode(var=bps.data and that in the search path have been synchronised. So this method reduces the power of Epicalc. which does not exist.344 20 s.09 min. the program looks for an independent object called 'am'.9 s.
> summ(bps) Valid obs.d. The number of valid records is reduced to 244 and the maximum is now 170 not 999. 0 max. tab1 and label.06 min. Since the data frame in the search path ('search()[2]') has been updated with the new . the command recode is restricted to the setting of having . 15 max. mean median 251 52.
> summ(wt) Valid obs.
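The same change could also be made with one line of base R, shown here only to illustrate what recode has done to '.data'.

> .data$bps[.data$bps == 999] <- NA   # equivalent replacement of the missing code
> summ(.data$bps)                     # the maximum should now be 170, not 999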

main="Family Planning Clinic". A system separating variable labels from variable names is a better way of documentation. it is difficult to have intuitively understandable names for each variable. However. R does not come with a built-in variable labelling facility.var'
When there are only a few variables in the dataset. To correct these errors type:
> recode(wt. The box plot of all variables now has a different appearance.
> names(.data. such as 'age'. Firstly. las=1)
Family Planning Clinic
ht wt bpd bps reason am income ped relig age id
0
50
100
150
200
250
Labelling variables with 'label. NA) > summ()
It should be noted that after cleaning. NA) > recode(ht. the effective sample size is somewhat less than the original value of 251.
> boxplot(.The outlier is clearly seen (top left corner). the variable names of the data are displayed. or 'education'. 'sex'. when there are a large number of variables.data) [1] "id" "age" [7] "reason" "bps" "relig" "bpd" "ped" "wt" "income" "am" "ht"
. naming is not a problem. wt < 30. all of which are for common purposes. ht > 200. adds in this useful facility in a simple way. Epicalc however. horizontal=T.

it is not really a categorical variable.ped <.000 2. mean 226 3. Median 2.factor(ped.It is advised to keep each label short since it will be frequently used in the process of automatic graphical display and tabulation. "Bachelor degree"="6". During the analysis. 7
Note that there is no count for category 1 of 'ped'. otherwise it is optional. To convert a numeric vector to a categorical one use the 'factor' function. As mentioned previously. The argument 'exclude' is set to 'NULL' indicating no category (even missing or 'NA') will be excluded in the factoring process. medians and standard deviations.000
median 2
s. 3 = secondary school. "Secondary school"="3". When we summarise the statistics.
Labelling a categorical variable
Labelling values of a categorical variable is a good practice.
> summary(ped) Min. Primary="2". both outputs show means.list(None="1". 'txt’ or 'csv' formats.000 2. Vocational="5". In fact. It is therefore important to know how to label variables in R. 4 = high school.000 > summ(ped) Obs. 3. one can have: None="1" or "None"="1". 7 = other.d.data.
> label. 2
max. indicating a continuous. The data are numeric and therefore need to be converted into a factor. Others="7")
Each label needs to be enclosed in double quotes if it contains a space. 5 = vocational school.data) command or by summ.296 5. the best way to label variables is during the preparation of data entry using the data entry software.000 NA's 25. 7.
. numeric variable.296 Mean 3rd Qu. a labelled variable is much easier to understand and interpret than an unlabelled one. either by the summary(. However. According to the coding scheme: 1 = no education. exclude = NULL)
The new variable is a result of factoring the values of 'ped' in .
> educ <. For example. such as those directly imported from EpiInfo. It is a part of important documentation.000 Max. 1. "High school"="4". 1st Qu. 6 = bachelor degree. occasionally one may encounter an unlabelled dataset.66
min. 2 = primary school. The labels can be put into a list of 7 elements. In our example of the family planning data the variable 'ped' (patient's education level) is an unlabelled categorical variable. at this stage.

var(educ. especially if any sorting is done.label.data.> summary(educ) 2 3 4 117 31 20
5 26
6 16
7 <NA> 16 25
We can check the labels of a factor object using the levels command.data.ped > levels(educ) [1] "None" "Primary" [4] "High school" "Vocational" [7] "Others"
"Secondary school" "Bachelor degree"
Adding a variable to a data frame
Note that the variable 'educ' is not inside the data frame . The levels for the codes should be changed to meaningful words as defined previouisly. Remember that R has the capacity to handle more than one object simultaneously.
> label. incorporating all the important variables into the main data frame . although it is possible to go on analysing data with this variable outside the data frame. the whole data frame including the old and new variables can be written into another data format easily (see the function 'write.
> des() # same as before
To incorporate a new variable derived from the data frame . the variable can have a descriptive label.
> levels(educ) [1] "2" "3" "4" "5" "6" "7" NA
There are seven known levels. In addition.data is advised.
> levels(educ) <. of observations =251 Variable Class Description 1 id numeric ID code ============ Variables # 2 to 11 omitted ======= 12 educ factor education
.
> des() No. when necessary. simply label the variable name as follows. However. ranging from "2" to "7" and one missing level (NA). Note that these numbers are actually characters or group names.foreign' in the foreign package). There was no "1" in the data and correspondingly is omitted in the levels. "education")
Then recheck. More importantly.

4 6.0 10.data. The old 'free' variable outside the data frame is removed. The new data frame is attached to the search path.data. The new variable is labelled with a description. The tabulation can also be sorted.1 0.8 13.1 7.0 51.
. unless the argument 'pack=FALSE' is specified.
> tab1(educ) educ: education Frequency None 0 Primary 117 Secondary school 31 High school 20 Vocational 26 Bachelor degree 16 Others 16 NA's 25 Total 251 %(NA+) 0.0 100. the command label.0 %(NA-) 0.6 12.4 10.
Order of one-way tabulation
The new education variable can be tabulated.7 8. The new variable is incorporated into the data frame . A horizontal bar chart is produced when the number of groups exceeds 6 and the longest label of the group has more than 8 characters.0
Distribution of education
Missing Others Bachelor degree Vocational High school Secondary school Primary None 0 16 16 26 20 31 117 25
0
20
40
60
80
100
120
140
Frequency
The table and the graph show that most subjects had only primary education.var actually accomplishes five tasks.5 7.0 46.8 11. The old data frame is detached.For a variable outside .4 8.4 6.0 100.

it is very important to have preventive measures to minimise any errors during data collection and data entry. Whenever a variable is modified it is a good practice to update the variable inside the attached data frame with the one outside. In EpiInfo.0 %(NA-) 0. which is then incorporated into .8 18.var(ped2. In the remaining chapters. For example.6 7. If this had been properly done. treated for missing values and properly labelled. "level of education") des() tab1(ped2)
ped2 : level of education Frequency 0 117 31 20 42 16 25 251 %(NA+) 0.0 51.
Conclusion
In this chapter. then the difficult commands used in this chapter would not have been necessary.0
None Primary Secondary school High school Tertiary Others NA's Total
The two categories have been combined into one giving 42 subjects having a tertiary level of education. Missing values would better be entered with missing codes specific for the software.7 8.7 6.data at the end.8 13. we have looked at a dataset with a lot of data cleaning required. In real practice. Stata and SPSS these are period marks '. The analyst may want to combine two or more categories together into one. a constraint of range check is necessary in data entry. We can do this by creating a new variable.0 100.Collapsing categories
Sometimes a categorical variable may have too many levels.0 100.educ levels(ped2)[5:6] <.
> > > > > ped2 <. vocational and bachelor degree.' or simply left blank.6 12.0 16. which are the 5th and the 6th levels."Tertiary" label.0 46. For example. we will use datasets which have been properly entered. could be combined into one level called 'tertiary'.
. which can set legal ranges and several other logical checks as well as label the variables and values in an easy way.4 10.4 8.1 0. One of the best ways of entering data is to use the EpiData software.

the best way to update the data frame with new or modified variable(s) is to use label. which is a powerful command of Epicalc. and readers are encouraged to explore these very useful and powerful commands on their own. There are many other more advanced data management functions in R that are not covered in this chapter. Finally. reshape and merge. This command not only labels the variable for further use but also updates and incorporates the data frame with the variable outside. These include aggregate. Attachment to the new data frame is automatic.The best way to modify data is to use recode. It can work with one variable or a number of variables with the same recoding scheme or recoding a variable or variables under a condition. making data manipulation in R more smooth and simple.
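As a small taste of these functions, aggregate can summarise a variable within groups of another. The example below computes the mean systolic blood pressure by religion; it is only an illustration using this dataset's variable names.

> aggregate(bps ~ relig, data = .data, FUN = mean)   # mean bps for each religion code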
.var.

Read the file into R and use the commands in this chapter to clean the data.Exercises________________________________________________
The VCT dataset contains data from a questionnaire involving female sex workers from Phuket.
. Thailand in 2004.

With this small sample size it is somewhat straightforward to verify that there is no repetition of 'id' and no missing values. The records have been sorted in ascending order of 'worm' (number of worms), ranging from 32 in the first subject to 1,929 in the last one. Blood loss ('bloss'), however, is not sorted. The 13th record has the highest blood loss of 86 ml per day, which is very high. The objective of this analysis is to examine the relationship between these two variables.

Scatter plots

When there are two continuous variables, cross plotting is the first necessary step, as follows:

> plot(worm, bloss)

The above command gives a simple scatter plot with the first variable on the horizontal axis and the second on the vertical axis.

[Figure: scatter plot of bloss (0 to 80 ml per day) against worm (500 to 2000)]

The names of the variables are used for the axis labels, and there is no title. The axis labels can be modified and a title added by supplying extra arguments to the plot function.

> plot(worm, bloss, xlab="No. of worms", ylab="ml. per day",
+      main = "Blood loss by number of hookworms in the bowel")
> plot(worm. The objective of this analysis is to examine the relationship between these two variables.

of worms".
> text(worm.For a small sample size. ylab="ml. of worms
1500
2000
> plot(worm. type="n")
The above command produces an empty plot. bloss. per day". per day
20
0
40
60
80
500
1000 No. xlab="No. bloss. labels=id)
123
. The variable 'id' can be used as the text to write at the coordinates using the 'text' command. main="Blood loss by number of hookworms in the bowel". This is to set a proper frame for further points and lines. A value of "n" tells R not to plot anything. putting the identification of each dot can improve the information conveyed in the graph.
Blood loss by number of hookworms in the bowel
ml. The argument 'type' specifies the type of plot to be drawn.

one can look at the attributes of this model. Displaying the model by typing 'lm1' gives limited information. Most of them can be displayed with the summary function. Be careful not to confuse the letter "l" with the number "1".
> attr(lm1.
Components of a linear model
The function lm is used to perform linear modelling in R.lm(bloss ~ worm) > lm1 Call: lm(formula = bloss ~ worm) Coefficients: (Intercept) worm 10. "names") [1] "coefficients" [4] "rank" [7] "qr" [10] "call" "residuals" "fitted. a linear model using the above two variables should be fit to the data.
> lm1 <.04092
The model 'lm1' is created.values" "df.84733 0.
> summary(lm1)
124
. of worms 1500 2000
In order to draw a regression line. which look very similar. To get more information.Blood loss by number of hookworms in the bowel
13
80
15
60
ml.residual" "terms" "effects" "assign" "xlevels" "model"
There are 12 attributes. per day
12 11 4 8 9 10
40
14
20
3 2 1 0 6 57 500 1000 No. its summary and attributes of its summary.

7 on 13 degrees of freedom Multiple R-Squared: 0. The third section gives coefficients of the intercept and the effect of 'worm' on blood loss. This F-test more commonly appears in the analysis of variance table.8 ml per day. the median is close to zero. The second section gives the distribution of residuals.7502
3Q 4.30857 2.
125
.694 F-statistic: 32. R-squared and adjusted R-squared
> summary(aov(lm1)) Df Sum Sq Mean Sq F value worm 1 6192 6 192 32.81) of the median (0.3562
Max 34. (The calculation of Rsquared is discussed in the analysis of variance section below). The coefficient of 'worm' is 0.8118
Median 0.99 × 10-5) is equal to that tested by the t-distribution in the coefficient section.35) is. The pattern is clearly not symmetric. When there are many worms.716. Error t value Pr(>|t|) (Intercept) 10. the blood loss is estimated to be 10. it is highly significantly different from zero. The intercept is 10. Although the value is small.3896
Coefficients: Estimate Std. The multiple R-squared value of 0.84) and the first quartile is further left (-10.84733 5. The maximum is too far on the right (34.73 7e-05 Residual standard error: 13.38) compared to the minimum (-15.04 0.8 on 1 and 13 DF.0618.8 meaning that when there are no worms.04092 0.716 indicates that 71.99e-05
The first section of summary shows the formula that was 'called'. not significantly different from zero as the P value is 0.04 indicating that each worm will cause an average of 0. Adjusted R-squared: 0.6% of the variation in the data is explained by the model. This is however.00715 5. sum of squares and mean square of the outcome (blood loss) by sources (in this case there only two: worm + residuals).062 worm 0.
Analysis of variance table.8 Residuals 13 2455 189 Pr(>F) 7e-05
The above analysis of variance (aov) table breaks down the degrees of freedom. The last section describes more details of the residuals and hypothesis testing on the effect of 'worm' using the F-statistic.8461 -10. The adjusted value is 0. Otherwise. The P value from this section (6. p-value: 6.6942.75) than the third quartile (4.04 ml of blood loss per day. the level of blood loss can be very substantial.Call: lm(formula = bloss ~ worm) Residuals: Min 1Q -15.

sum((bloss-mean(bloss))^2).residual mean square) / total mean square.sum(residuals(lm1)^2).5 # See also the analysis of variance table
The sum of squares of worm or sum of squares of difference between the fitted values and the grand mean is:
> SSW <. the number of worms can reduce the total mean square (or variance) by: (total mean square . F [1] 32. one may consider the mean square as the level of variation.
> pf(F.
> resid. lower.6
The latter two sums add up to the first one.msq <.78
Using this F value with the two corresponding degrees of freedom (from 'worm' and residuals) the P value for testing the effect of 'worm' can be computed. Radj [1] 0.(var(bloss).SSW/resid.9904e-05
126
. The total sum of squares of blood loss is therefore:
> SST <. SST [1] 8647
The sum of squares from residuals is:
> SSR <. The R-squared is the proportion of sum of squares of the fitted values to the total sum of squares.msq)/var(bloss). Thus the number of worms can reduce or explain the variation by about 72%. the result is:
> F <.tail=FALSE) [1] 6.msq.sum((fitted(lm1)-mean(bloss))^2). SSW [1] 6191. SSR [1] 2455.69419
This is the adjusted R-squared shown in 'summary(lm1)' in the above section.
> SSW/SST [1] 0.
F-test
When the mean square of 'worm' is divided by the mean square of residuals. In such a case.residual mean square) / variance. df1=1.residual > Radj <.The so-called 'square' is actually the square of difference between the value and the mean.sum(residuals(lm1)^2)/lm1$df. df2=13. or (variance .resid. Instead of sum of squares.71603
This value of R-squared can also be said to be the percent of reduction of total sum of squares when the explanatory variable is fitted.

per day
12 11 4 8 910
40
14
20
3 2 6 1 0 57 500 1000 No. fitted values and residuals
A regression line can be added to the scatter plot with the following command:
> abline(lm1)
The regression line has an intercept of 10.res
Note that some residuals are positive and some are negative. lm1. col="pink")
Blood loss by number of hookworms in the bowel
13
80
15
60
ml.
> points(worm. that number of worms has a significant linear relationship with blood loss. The expected value is the value of blood loss estimated from the regression line with a specific value of 'worm'.The function pf is used to compute a P value from a given F value together with the two values of the degrees of freedom. col="blue")
A residual is the difference between the observed and expected value.04.
Regression line. Now the regression line can be drawn. fitted(lm1). The 13th residual has
127
.tail' is set to FALSE to obtain the right margin of the area under the curve of the F distribution. fitted(lm1). both the regression and analysis of variance give the same conclusion.
> segments(worm.res.8 and a slope of 0. pch=18. of worms 1500 2000
The actual values of the residuals can be checked from the specific attribute of the defined linear model. worm. In summary. The residuals can be drawn by the following command. The last argument 'lower. bloss.
> residuals(lm1) -> lm1.

However. sum(lm1. A better way to check normality is to plot the residuals against the expected normal score or (residual-mean) / standard deviation. the P value from the Shapiro-Wilk test is 0. The sum of the residuals and the sum of their squares can be checked.qqnorm(lm1.res) -> a
This puts the coordinates of the residuals into an object.res.
> qqnorm(lm1. the text symbols would have formed along the straight dotted line. a$y.
128
. labels=as. p-value = 0.5
The sum of residuals is close to zero whereas the sum of their squares is the value previously displayed in the summary of the model.the largest value (furthest from the fitted line). The distribution of residuals.res ^2) [1] 3.res). If the residuals were perfectly normally distributed.
> shapiro. However.res)
Numerically. if the model fits well.9968e-15 [1] 2455. it is difficult to draw any conclusion. respectively.res)
Checking normality of residuals
Plots from the above two commands do not suggest that residuals are normally distributed.
> sum(lm1.
> shapiro. Shapiro-Wilk test can also be applied. with such a small sample size.res)
Epicalc combines the three commands and adds the p-value of the test to the graph.character(id))
The X and Y coordinates are 'a$x' and 'a$y'. type="n") > text(a$x.0882 > qqline(lm1. The graph suggests that the largest residual (13th) is too high (positive) whereas the smallest value (7th) is not large enough (negative).8978. should be normal.test(lm1.
> hist(lm1. A common sense approach is to look at the histogram.res) Shapiro-Wilk normality test data: residuals (lm1) W = 0.08 suggesting that the possibility of residuals being normally distributed cannot be rejected.
> qqnorm(lm1. A reasonably straight line would indicate normality.

)") > label.
> label.var(smoke. is explained by random variation or other factors that were not measured. it is clear that blood loss is associated with number of hookworms. Interpret the results the best simple linear regression. "Smoke (mg/cu.
130
. each worm may cause 0. apart from hookworm.var(SO2. "SO2 (ppm.)")
Using scatter plots and linear regression check whether smoke or SO2 has more influence on logarithm of deaths.The above two diagnostic plots for the model 'lm1' can also be obtained from:
> windows(7. 4) > par(mfrow=c(1. which=1:2)
Final conclusion
From the analysis.04 ml of blood loss. The remaining uncertainty of blood loss.
Exercise_________________________________________________
Load the SO2 dataset and label the variables using the following commands. On average.m.2)) > plot.lm(lm1.

45. 224 max. Therefore.in.as. by = saltadd) For saltadd = no Obs. mean median 37 137. otherwise modelling would not be possible.624 80 s.428 106
Distribution of Systolic BP by Salt added on table
missing
yes
no 100 150 200
132
.25
The function as.25 days.39 min. 238
s. 201 max.in.numeric is needed to transform the units of age (difftime).days <.d. mean median 20 166.frame(sex.
> class(age.in.5 132 For saltadd = yes Obs. The calculation is based on 12th March 2001.> summary(data.birthdate
There is a leap year in every four years.numeric(age.Date("2001-03-12") .
> summ(sbp.
> age. the date of the survey. 80
max. min. mean median 43 163 171 For saltadd = missing Obs.days) [1] "difftime" > age <. 39. an average year will have 365. min.d.days)/365. 29.d.as. saltadd)) sex saltadd male :45 no :37 female:55 yes :43 NA's:20
The next step is to create a new age variable from birthdate.9 180
s.

A scatterplot of age against systolic blood pressure is now shown with the regression line added using the 'abline' function. previously mentioned in chapter 11.Recoding missing values into another category
The missing value group has the highest median and average systolic blood pressure. then the value is taken to be the slope of a line through the origin.
> lm1 <.8 57.Hg") > coef(lm1) (Intercept) 65. If this object has a 'coef' method.
> plot(age. we will ignore this group and continue the analysis with the original 'saltadd' variable consisting of only two levels. xlab = "Years".2709 F-statistic: 37. Error t value Pr(>|t|) (Intercept) 65.8422
. This function can accept many different argument forms. otherwise the first two values are taken to be the intercept and slope.saltadd levels(saltadd1) <.8 128. main = "Systolic BP by age".lm(sbp ~ age) > summary(lm1) Coefficients: Estimate Std.374 3. ylab = "mm. Adjusted R-squared: 0. "missing") saltadd1[is.1
Since there is not enough evidence that the missing group is important and for additional reasons of simplicity.2782. "yes".147 1. and it returns a vector of length 1. Before doing this however. p-value: 1.4 0. as is the case for 'lm1'.4484 0.1465 > abline(lm1)
age 1.05e-05 age 1. In order to create a new variable with three levels type:
> > > > saltadd1 <.78 on 1 and 98 DF. including a regression object.2997 6.c("no".56 on 98 degrees of freedom Multiple R-Squared: 0.8942 4."missing" summary(saltadd1) no yes missing 37 43 20 > summary(aov(age ~ saltadd1)) Df Sum Sq Mean Sq F value Pr(>F) saltadd1 2 114.64 Residuals 97 12421.712e-08
Although the R-squared is not very high.1465 14.8422 0. sbp.71e-08 Residual standard error: 33. a simple regression model and regression line are first fitted. the P value is small indicating important influence of age on systolic blood pressure.na(saltadd)] <.
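The 95% confidence intervals of the intercept and the slope can be listed directly with confint; this assumes the fitted object is 'lm1' from above.

> confint(lm1)   # 2.5% and 97.5% limits for the intercept and for age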

> lm2 <.lm(sbp ~ age + saltadd) > summary(lm2) ==================== Coefficients: Estimate Std.3331. the following step creates an empty frame for the plots:
> plot(age.304 0.5 mmHg.1291 15. Adding table salt increases systolic blood pressure significantly by approximately 23 mmHg.979 3.005 0.3158 F-statistic: 19. sbp.9094 6.3118 4.5526 0.23 on 2 and 77 DF. p-value: 1.68e-07
On the average. Adjusted R-squared: 0. Error t value Pr(>|t|) (Intercept) 63.Hg
100
150
200
30
40
50 Years
60
70
Subsequent exploration of residuals suggests a non-significant deviation from normality and no pattern.9340 3. xlab="Years". Similar to the method used in the previous chapter. The next step is to provide different plot patterns for different groups of salt habits.81e-06 saltaddyes 22. ylab="mm. type="n")
134
.Systolic BP by age
mm. Details of this can be adopted from the techniques discussed in the previous chapter and are omitted here. a one year increment of age increases systolic blood pressure by 1.Hg".83 on 77 degrees of freedom Multiple R-Squared: 0.001448 --Residual standard error: 30.7645 4.000142 age 1. main="Systolic BP by age".

b. age has a constant independent effect on systolic blood pressure.Add blue hollow circles for subjects who did not add table salt. The final task is to draw two separate regression lines for each group.coef(lm2)[2]
Now the first (lower) regression line is drawn in blue.
135
. Since model 'lm2' contains 3 coefficients.
> points(age[saltadd=="no"]. the intercept is the first plus the third coefficient:
> a1 <. sbp[saltadd=="no"]. col="blue")
Then add red solid points for those who did add table salt. then the other in red.
> coef(lm2) (Intercept) 63. The red line is for the red points of salt adders and the blue line is for the blue points of non-adders. the slope is fixed at:
> b <.909449
We now have two regression lines to draw.coef(lm2)[1] + coef(lm2)[3]
For both groups. the red points are higher than the blue ones but mainly on the right half of the graph. In this model. col = "blue") > abline(a = a1. Look at the distributions of the points of the two colours. pch = 18)
Note that the red dots corresponding to those who added table salt are higher than the blue circles.129112 age 1.552615 saltaddyes 22. col = "red")
Note that X-axis does not start at zero. The intercept for non-salt users will be the first coefficient and for salt users will be the first plus the third.
> abline(a = a0. Thus the intercept for the non-salt users is:
> a0 <. one for each group. The slope for both groups is the same. a new model with interaction term is created. b.
> points(age[saltadd=="yes"]. Thus the intercepts are out of the plot frame. the command abline now requires the argument 'a' as the intercept and 'b' as the slope. To fit lines with different slopes. sbp[saltadd=="yes"].coef(lm2)[1]
For the salt users. col="red".

009 0.0066 20.3981 3. p-value: 4.0065572 age 1.2418547 saltaddyes age:saltaddyes -12. Error t value Pr(>|t|) (Intercept) 78.697965 age:saltaddyes 0.Hg
100
150
200
30
40
50 Years
60
70
The next step is to prepare a model with different slopes (or different 'b' for the abline arguments) for different lines. 'age * saltadd' is the same as 'age + saltadd + age:saltadd'.390 0.528e-07
In the formula part of the model.
136
.3186 F-statistic: 13. The model needs an interaction term between 'addsalt' and 'age'.lm(sbp ~ age * saltadd) > summary(lm3) Call: lm(formula = sbp ~ age * saltadd) =============== Coefficients: Estimate Std.3445.2419 0.2540 31.2539696 0.255441 --Multiple R-Squared: 0.7198851
The first coefficient is the intercept of the fitted line among non-salt users.Systolic BP by age
mm. Adjusted R-squared: 0.000267 *** age 1.
> coef(lm3) (Intercept) 78.146 0. They can also be checked as follows.4574 -0.824 0.
> lm3 <. The four coefficients are displayed in the summary of the model.003558 ** saltaddyes -12.7199 0.31 on 3 and 76 DF.4128 3.6282 1.

col = 2) > legend("topleft". b = b0. the second term and the fourth are all zero (since age is zero) but the third should be kept as such. ylab="mm.coef(lm3)[2] + coef(lm3)[4]
These terms are used to draw the two regression lines. sbp. main="Systolic BP by age".
> a0 <.
137
. legend = c("Salt added".
> b0 <. representing the non-salt adders and the salt adders.coef(lm3)[2] > b1 <.Hg 100 150
30
40
50 Years
60
70
Note that 'as. The slope for the salt users group includes the second and the fourth coefficients since 'saltaddyes' is 1.
> plot(age. col=as. the second coefficient alone is enough since the first and the third are not involved with each unit of increment of age and the fourth term has 'saltadd' being 0.For the intercept of the salt users. pch=18.coef(lm3)[1] > a1 <.Hg". b = b1. lty=1.numeric(saltadd)) > abline(a = a0.coef(lm3)[1] + coef(lm3)[3]
For the slope of the non-salt users. respectively. col = 1) > abline(a = a1. This term is negative. The intercept of salt users is therefore lower than that of the non-users. Redraw the graph but this time with black representing the non-salt adders. xlab="Years". These colour codes come from the R colour palette. "No salt added").numeric(saltadd)' converts the factor levels into the integers 1 (black) and 2 (red)."black"))
Systolic BP by age
Salt added No salt added
200 mm. col=c("red".

the systolic blood pressure of two groups are not much different as the two lines are close together on the left of the plot. at the age of 25.
138
. the difference is 5.96mmHg among the salt adders. The two slopes just differ by chance. The coefficient of the interaction term 'age:saltaddyes' is not statistically significant. age modifies the effect of adding table salt.
Exercise_________________________________________________
Plot systolic and diastolic blood pressures of the subjects. Interaction is a statistical term whereas effect modification is the equivalent epidemiological term. the procedures for computation of these two levels of difference are skipped in these notes). Increasing age increases the difference between the two groups. In this aspect. use red colour of males and blue for females as shown in the following figure. salt adding modifies the effect of age. [Hint: segments]
Systolic and diastolic blood pressure of the subjects
200
blood pressure
150
100
50
0 0 20 40 Index 60 80 100
Check whether there is any significant difference of diastolic blood pressure among males and females after adjustment for age.24+0.24mmHg per year among those who did not add salt but becomes 1. Thus.7mmHg. (For simplicity. For example. the difference is as great as 38mmHg.This model suggests that at the young age. On the other hand the slope of age is 1. At 70 years of age.72 = 1.

5
Sample Quantiles
Residuals
−1.0
1.data. p-value: 0.76944 age 0.0 1. There are too few points of fitted values in the model.102650 0. indicating that perhaps we need to include a quadratic term of age in the model.2
2.332 on 8 degrees of freedom Multiple R-Squared: 0.lm(log10(money) ~ age + I(age^2)) > summary(lm3) Coefficients: Estimate Std. The next step is to fit a regression line.Normal Q−Q plot of Residuals
Shapiro−Wilk test P value = 0.8
−0.875.0
2.0
0.844 F-statistic: 28 on 2 and 8 DF. A regression line is a line joining fitted values.30 0.11 0.6
2. Adding the quadratic age term improves the model substantially and is statistically significant. To fit a regression line under the log scale but with a linear (non-log scale) value would be too complicated.0 0. Adjusted R-squared: 0.0
−0.000201 -6. age2 = (6:80)^2)
142
.6866
Residuals vs Fitted Values
1.5 0.
> new <.30 0.000243
Both the adjusted and non-adjusted R-squared values are high.5
0.
> lm3 <.frame(age = 6:80.0
Theoretical Quantiles
Fitted. the values of the high residuals are in the middle of the range of the fitted values.001268 0.0 −0. A better way would be to transform 'money' into a new variable on a log base 10 scale and fit a new model with a quadratic term of age.5
−1.00010 I(age^2) -0.0
−1.017641 7.5 −1.values
The residuals now look normally distributed. Error t value Pr(>|t|) (Intercept) 0.4
2.0
0.5
1.338502 0. A new data frame is now created to include a new 'age' variable ranging from 6 to 80 (which is the age range of our subjects) and the corresponding age-squared term. a task that is not straightforward.00023 Residual standard error: 0.0
2.5
0.5 1.125355 0.8
3. However.

> predict1 <.predict. Then the value drops when age increases.4
The corresponding age is
> new$age[which. labels = code) > lines(new$age.) > text(age. The maximum value of 'predict1' is
> max(predict1) [1] 3. ylab = "log10(money)".5
G C
B
I
2.0
A J
1. log10(money).5
K 20 40 age 60 80
Maximum value in the quadratic model
The quadratic model explains that. The money carried increases with age and peaks between 40-50 years of age.0
D
H
log10(money) 2. more precise mathematical calculation from the coefficients can be obtained as follows:
143
.max(predict1)] [1] 49
However.lm(lm3.0
1. col = "blue")
Relationship between age and money carried
E
3. new) > plot(age.5
F
3.2012
The corresponding money value is
> 10^max(predict1) [1] 1589.Then the predicted values of this data frame are computed based on the last model. a young person such as "K" who is 5 years old carries very little money. predict1. main="Relationship between age and money carried". log10(money). type="n".

Note that the first line is the same as previous plots. The second line, however, differentiates sex with colour. When 'sex', which is a factor, is unclassed, the values become the numerical order of the levels: "F" is coded 1 and "M" is coded 2, as given in the default colour palette of R.

> age.frame2.male <- data.frame(age = 6:80, age2 = (6:80)^2, sex = factor("M"))
> predict2.male <- predict.lm(lm4, age.frame2.male)

The first command creates a data frame containing the variables used in 'lm4'. Note that the 'sex' here is confined to males. The second command creates a new vector based on 'lm4' and the new data frame. First we draw the line for males.

> lines(age.frame2.male$age, predict2.male, col = 2)

Finally the line for females.

> age.frame2.female <- data.frame(age = 6:80, age2 = (6:80)^2, sex = factor("F"))
> predict2.female <- predict.lm(lm4, age.frame2.female)
> lines(age.frame2.female$age, predict2.female, col = 1)

[Figure: Relationship between age and money carried — separate fitted curves for males (red) and females (black)]

The red line is located consistently above the black line, since our model did not include an interaction term. For every value of age, males tend to carry more money than females by the same multiplicative factor. The difference is, however, not statistically significant.

From age to age group

So far, we have analysed the effect of age as a continuous variable. In most epidemiological data analysis, age is often transformed into a categorical variable by cutting it into age groups. For this small dataset, we divide the subjects into children, adults and elderly subjects, with the two cut points placed at 20 and 40 years of age.
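A minimal sketch of how the grouping and the corresponding model (referred to below as 'lm5') might be set up; the treatment of ages falling exactly on a cut point is an assumption here:

> agegr <- cut(age, breaks = c(0, 20, 40, 100),
    labels = c("Child", "Adult", "Elder"))   # three groups from the two cut points
> table(agegr)
> lm5 <- lm(log10(money) ~ sex + agegr)      # sex-adjusted model with age group
> summary(lm5)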

There are two age group parameters in the model, 'agegrAdult' and 'agegrElder'. The first level, "Child", is omitted since it is the referent level. This means the other levels will be compared to this level. Adults carried 10^1.578 or approximately 38 times more money than children, which is statistically significant. Elders carried 10^0.8257 = 6.7 times more money than children, but this is not statistically significant. Moreover, elderly persons did not carry significantly more money than adults. We could check the pattern of contrasts as follows:

> contrasts(agegr)
      Adult Elder
Child     0     0
Adult     1     0
Elder     0     1

The columns of the matrix are the variables appearing in the model. The rows show all the levels, "Child", "Adult" and "Elder". The column 'Adult' in the model is equal to 1 when 'agegr' is equal to "Adult" and zero otherwise. The column 'Elder' is 1 when 'agegr' is "Elder" and zero otherwise. There is no column for 'Child'. When both 'Adult' and 'Elder' are equal to zero, the model then predicts the value for 'agegr' being "Child". If "Adult" is required to be the referent level, the contrasts can be changed.

> contrasts(agegr) <- contr.treatment(levels(agegr), base=2)

The above command changes the referent group to level 2, i.e. "Adult".

> contrasts(agegr)
      Child Elder
Child     1     0
Adult     0     0
Elder     0     1

The 'Adult' column is now missing. Refitting the model with the new contrasts gives:

> summary(lm(log10(money) ~ sex + agegr))
================== Lines omitted =================
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
================== Lines omitted =================
agegrChild    -1.578      0.408   -3.87   0.0062
agegrElder    -0.752      0.408   -1.84   0.1079
================== Lines omitted =================

Note that the coefficient of 'Child' is the negative of that of 'Adult' from model 'lm5'. Other types of contrast can also be specified. See the references for more details.

References

Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Fourth edition. Springer.
Exercise_________________________________________________
What will happen in 'lm3' if log base 2 is used instead of log base 10? Would the conclusion be the same?
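A minimal sketch of how to check this (the object name 'lm3.log2' is illustrative):

> lm3.log2 <- lm(log2(money) ~ age + I(age^2))
> summary(lm3.log2)   # coefficients rescale by log2(10); t values, P values and R-squared are unchanged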

Chapter 14: Generalized Linear Models

From lm to glm

Linear modelling using the lm function is based on the least squares method. The concept is to minimise the sum of squares of residuals. Modelling with lm is equivalent to analysis of variance, or 'aov'. The only difference is that the former focuses on the coefficients of the independent variables whereas the latter focuses on their sums of squares. Generalized linear modelling (GLM) is, as it is called, more general than just linear modelling. The method is based on the likelihood function. When the likelihood is maximised, the coefficients and variances (and subsequently the standard errors) of the independent variables are obtained. While classical linear modelling assumes that the outcome variable is defined on a continuous scale, such as blood loss in the previous examples (as well as assuming normality of errors and constant variance), GLM can handle outcomes that are expressed as proportions, Poisson distributed (counts) and others such as those from the gamma and negative binomial distributions. We will first start with the outcome on a continuous scale, as in the previous example of blood loss and hookworm infection.

> zap()
> data(Suwit)
> use(Suwit)
> bloodloss.lm <- lm(bloss ~ worm)
> summary(bloodloss.lm)

The results are already shown in the previous chapter. Now we perform a generalised linear regression model using the function glm. For the glm function the default family is the Gaussian distribution, and so this argument can be omitted.

> bloodloss.glm <- glm(bloss ~ worm)

Note that 'bloodloss.glm' also has class lm in addition to its own class glm. The two sets of attributes are similar, with more sub-elements for 'bloodloss.glm'. Sub-elements of the same names are essentially the same. In this setting, the 'deviance' from the glm is equal to the sum of squares of the residuals.

> sum(bloodloss.glm$residuals^2)
[1] 2455.468
> bloodloss.glm$deviance
[1] 2455.468

Similarly, the 'null.deviance' is equal to the total sum of squares of the differences of the individual amounts of blood loss from the mean blood loss.

> sum((bloss - mean(bloss))^2)
[1] 8647.044
> bloodloss.glm$null.deviance
[1] 8647.044

Some of the attributes of the 'glm' object are rarely used but some, such as 'aic', are very helpful. There will be further discussion on this in future chapters.

Attributes of model summary

> attributes(summary(bloodloss.lm))
$names
 [1] "call"          "terms"       "residuals"    "coefficients"
 [5] "aliased"       "sigma"       "df"           "r.squared"
 [9] "adj.r.squared" "fstatistic"  "cov.unscaled"

$class
[1] "summary.lm"

> attributes(summary(bloodloss.glm))
$names
 [1] "call"           "terms"          "family"
 [4] "deviance"       "aic"            "contrasts"
 [7] "df.residual"    "null.deviance"  "df.null"
[10] "iter"           "deviance.resid" "coefficients"
[13] "aliased"        "dispersion"     "df"
[16] "cov.unscaled"   "cov.scaled"

$class
[1] "summary.glm"

A large proportion of the elements of both sets of attributes repeat those of the models. The additional attributes include the R squared in the 'lm' model and the covariance matrix ('cov.unscaled') in both models. This covariance matrix is used for calculation of the standard errors and 95% confidence intervals of the coefficients.
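These two quantities play the same roles as the residual and total sums of squares in ordinary linear regression, so an R-squared-like quantity can be sketched directly from the glm object (it equals the usual R squared only because the family here is gaussian with an identity link):

> 1 - bloodloss.glm$deviance/bloodloss.glm$null.deviance   # proportion of "deviance explained"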

Covariance matrix

When there are two or more explanatory variables, and they are not independent, the collective variation is denoted as covariance (compared to variance for a single variable). It is stored as a symmetrical matrix since one variable can covary with each of the others. A covariance matrix can be 'scaled' or 'unscaled'. The one from the 'lm' model gives 'cov.unscaled' while 'glm' gives both.

> vcov(bloodloss.glm)   # or summary(bloodloss.glm)$cov.scaled
              (Intercept)          worm
(Intercept)   28.18090491 -2.822006e-02
worm        -2.822006e-02  5.108629e-05

> summary(bloodloss.glm)$cov.unscaled
              (Intercept)          worm
(Intercept)  0.1491983716 -1.494057e-04
worm        -1.494057e-04  2.704665e-07

The latter covariance matrix can also be obtained from the summary of the ordinary linear model.

> summary(bloodloss.lm)$cov.unscaled

The scaling factor is, in fact, the dispersion, or sigma squared, which is the sum of squares of residuals divided by the degrees of freedom of the residual. Thus the first matrix can be obtained from

> summary(bloodloss.lm)$cov.unscaled * summary(bloodloss.glm)$dispersion

or

> summary(bloodloss.lm)$cov.unscaled * summary(bloodloss.lm)$sigma^2

or

> summary(bloodloss.lm)$cov.unscaled * sum(summary(bloodloss.lm)$residuals^2)/13

The scaled covariance matrix is used for computing standard errors of the coefficients. The diagonal term of this matrix, where the row name is the same as the column name, is the variance of the coefficient under that name. Taking the square root of this term will result in the standard error of the coefficient.

Computation of standard errors, t values and 95% confidence intervals

The standard error of 'worm' is

> vcov(bloodloss.glm)[2,2]^.5 -> se2
> se2
[1] 0.0071475

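As a compact sketch of the same idea, the standard errors of both coefficients can be taken in one step from the diagonal of the scaled covariance matrix (the object name 'se.all' is illustrative):

> se.all <- sqrt(diag(vcov(bloodloss.glm)))
> se.all                                         # standard errors of (Intercept) and worm
> coef(summary(bloodloss.glm))[, "Std. Error"]   # the same values from the coefficient table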
This can be checked against the summary of the coefficients.

> coef(summary(bloodloss.glm))
               Estimate  Std. Error  t value   Pr(>|t|)
(Intercept) 10.84732700 5.308569000 2.043362 0.06183205
worm         0.04092205 0.007147467 5.725392 0.00006990

Therefore, the 't value' can be computed from division of the coefficient by the standard error:

> coef(summary(bloodloss.glm))[2,1] / summary(bloodloss.glm)$cov.scaled[2,2]^.5 -> t2
> t2

or

> 0.04092205 / 0.007147467
[1] 5.7254

The P value is the probability that 't' can be at this or a more extreme value. The more extreme can be on both sides or signs of the t value. Therefore, the P value is computed from

> pt(q=t2, df=13, lower.tail=FALSE) * 2
[1] 6.9904e-05

This value is equal to that in the summary of the coefficients. More details on the computation of a probability from the t distribution can be searched from 'help(TDist)' or 'help(pt)'.

Finally, to compute the 95% confidence interval:

> beta2 <- coef(summary(bloodloss.glm))[2,1]
> beta2
[1] 0.04092205
> ci2 <- beta2 + qt(c(0.025, 0.975), 13) * se2
> ci2
[1] 0.02548089 0.05636321

R has a command to compute the 95% confidence interval of the model as follows:

> confint(bloodloss.lm)
                 2.5 %    97.5 %
(Intercept) -0.621139 22.315793
worm         0.025481  0.056363

The results are the same but faster. Note that the command confint(bloodloss.glm) gives a slightly different confidence interval. This is because the function uses the normal distribution instead of the t distribution and therefore it is not as appropriate.

Other parts of 'glm'

As mentioned before, the linear modelling or 'lm', after being generalized to become 'glm', can accommodate more choices of outcome variables. The model is said to have a 'family'. To check the family:

> family(bloodloss.glm)   # or bloodloss.glm$family
Family: gaussian
Link function: identity

Modelling by lm is equivalent to glm with family being 'gaussian'. The link function is 'identity', which means that the outcome variable is not transformed. Other types of 'family' and 'link' will be demonstrated in subsequent chapters.

Since the link function is 'identity', the 15 values of the linear predictors for this family of 'glm' are the same as the fitted values (of both the 'lm' and 'glm' models).

> all(fitted(bloodloss.glm) == predict(bloodloss.glm))
[1] TRUE

The 'glm' summarises the error using the 'deviance'. For the linear model, this value is equal to the sum of squares of the residuals. The interpretation of the error is the same as for the linear model: a larger deviance indicates a poorer fit.

> bloodloss.glm$deviance
[1] 2455.468
> sum(summary(bloodloss.lm)$res^2)
[1] 2455.468

Generalized linear modelling employs numerical iterations to achieve maximum likelihood. The maximum log likelihood can be obtained from the following function:

> logLik(bloodloss.glm)
'log Lik.' -59.51925 (df=3)

The value of the maximum likelihood is small because it is the product of probabilities. Its logarithmic form is therefore better to handle.

The higher (less negative) the log likelihood is, the better the model fits. However, each model has its own explanatory parameters. Having too many parameters can be inefficient. When fitting models one always strives for parsimony. An attribute of a model that balances the log-likelihood and the number of parameters is the AIC value. It is abbreviated from "Akaike Information Criterion" and is equal to -2×loglikelihood + k×npar, where k is the penalty factor (usually 2) and npar represents the number of parameters in the fitted model. The number of parameters of this model is 3. The AIC is therefore:

> -2*as.numeric(logLik(bloodloss.glm)) + 2*3
[1] 125.0385
> AIC(bloodloss.glm)
[1] 125.0385

The AIC is very useful when choosing between models from the same dataset. A high likelihood or good fit will result in a low AIC value. However, a large number of parameters also results in a high AIC. This and other important attributes will be discussed in more detail in subsequent chapters.
References

Dobson, A. J. (1990). An Introduction to Generalized Linear Models. London: Chapman and Hall.

McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models. London: Chapman and Hall.
Exercise_________________________________________________

In the dataset BP, use the glm command with family=gaussian to analyse models predicting systolic blood pressure from age and adding table salt, with and without the interaction term. Use the AIC to choose the most efficient model.
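A minimal sketch of the kind of commands this exercise calls for. The variable names 'sbp' and 'saltadd' are assumptions about the BP dataset, and an 'age' variable may first need to be derived from a date of birth; check the actual names with des() before modelling:

> zap()
> data(BP)
> use(BP)
> des()                                                    # check the real variable names
> glm.bp1 <- glm(sbp ~ age + saltadd, family = gaussian)   # without interaction (names assumed)
> glm.bp2 <- glm(sbp ~ age * saltadd, family = gaussian)   # with the interaction term
> AIC(glm.bp1)
> AIC(glm.bp2)                                             # the lower AIC indicates the preferred model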

Chapter 15: Logistic Regression

Distribution of binary outcome

In epidemiological data, most of the outcomes are often binary or dichotomous. For example, in the investigation of the cause of a disease, the status of the outcome, the disease, is diseased vs non-diseased. For a mortality study, the outcome is usually died or survived. For a continuous variable such as weight or height, the single representative number for the population or sample is the mean or median. For dichotomous data, having the outcome is often represented with 1 and 0 otherwise, and the representative number is the proportion or percentage of one type of the outcome. For example, 'prevalence' is the proportion of the population with the disease of interest, and case-fatality is the proportion of deaths among the people with the disease.

Proportion is a simple, straightforward term. The other related term is 'probability', which is more theoretical. Probability denotes the likeliness of an event. For computation, the proportion is used as the estimated probability. For example, if there are 50 subjects, 7 with disease (coded 1) and 43 without disease (coded 0), then the mean is 7/50 = 0.14, which is the prevalence. The prevalence is thus the mean of the diseased (1 vs 0) values among the study sample.

Probability is useful due to its simplicity. For complex calculations such as logistic regression, however, log(odds), or the logit, is more feasible. If P is the probability of having a disease, 1-P is the probability of not having the disease, and the odds is thus P/(1-P).
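A small numerical illustration of these definitions, continuing the example of 7 diseased subjects out of 50:

> p <- 7/50         # estimated probability (prevalence)
> odds <- p/(1-p)   # odds of disease
> odds
[1] 0.1627907
> log(odds)         # the logit
[1] -1.81529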
The relationship between probability and odds, and mainly log(odds), can be plotted as follows.

> p <- seq(from=0, to=1, by=.01)
> odds <- p/(1-p)
> plot(log(odds), p, type="l", col="blue", ylab="Probability",
    main="Relationship between odds and probability", las=1)
> abline(h=.5)
> abline(v=0)

[Figure: Relationship between odds and probability — probability (0 to 1) plotted against log(odds) (-4 to 4)]

The probability has a minimum of 0, a maximum of 1 and a mid value of 0.5. The odds has its corresponding values at 0, infinity and 1. Log(odds), often called the 'logit', has a linear increment with corresponding extremes of -infinity and +infinity and 0 for the mid-point. The curve is called a logistic curve. Being on a linear and well-balanced scale, the logit is a more appropriate scale for a binary outcome than the probability itself.

In the medical field, the binary (also called dichotomous) outcome Y is often disease vs non-disease, dead vs alive, etc. Suppose there are independent or exposure variables X1 and X2. Modelling logit(Y|X) ~ βX is the general form of logistic regression. It means that the logit of Y given X (or under the condition of X), where X denotes one or more independent variables, can be determined by the sum of products between each specific coefficient and its value of X. With two variables, βX would be β0 + β1X1 + β2X2, where β0 is the intercept. The X can be age, sex, and other prognostic variables, sometimes called co-variates. Among these X variables, one or a few are under testing of the specific hypothesis; others are potential confounders.
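The same conversions are available directly in R: qlogis() is the logit function and plogis() converts a logit back into a probability, so the plotted relationship can be checked at a few points.

> qlogis(0.5)        # logit of the mid-point probability
[1] 0
> plogis(0)          # probability corresponding to a logit of 0
[1] 0.5
> plogis(c(-4, 4))   # probabilities at the extremes of the plotted logit range
[1] 0.01798621 0.98201379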

2e-09 =============== AIC: 535.Mathematically. 1 152.
Example: Tooth decay
The dataset Decay is a simple dataset containing two variables: 'decay'.strep. Hence. family=binomial.63 1 2 strep 436 95.
> zap() > data(Decay) > use(Decay) > des() No. The prevalence of having decayed teeth is equal to the mean of the 'decay' variable. sex. which indicates whether a person has at least one decayed tooth (1) or not (0). Error z value Pr(>|z|) (Intercept) -2.5
The outcome variable is 'decay'.strep 1.strep. i. To look at the 'strep' variable type:
> summ(strep)
The plot shows that the vast majority have the value at about 150.data) > summary(glm0) =============== Coefficients: Estimate Std. 0.25 105
s.5
max. and behaviour groups. of observations =436 Variable Class 1 decay numeric 2 strep numeric
Description Any decayed tooth CFU of mutan strep. it turns out that Pr(Y|X) is equal to exp(βX)/(1 + exp(βX)). Since the natural distribution of bacteria is logarithmic.63.var(log10.08 1. a transformed variable is created and used as the independent variable.d.e. which is a continuous variable.glm(decay~log10.strep <.276 6. 0 0.
> summ() No.log10(strep) > label. "Log strep base 10") > glm0 <. a group of bacteria suspected to cause tooth decay. 0. mean median 1 decay 436 0. data=.5
min.4e-07 log10.83
159
.554 0.681 0. name Obs.93 8. of observations =436 Var. For example.
> log10. prediction of probability of getting a disease under a given set of age.518 -4. which is binary and 'strep'. logistic regression is often used to compute the probability of an outcome under a given set of exposures. the number of colony forming units (CFU) of streptococci.48 53. etc. The exposure variable is 'strep'.

Both the coefficients of the intercept and 'log10.strep' are statistically significant. Pr(>|z|) for 'log10.strep' is the P value from Wald's test. This tests whether the coefficient, 1.681, is significantly different from 0; in this case it is. The estimated intercept is -2.554. This means that when log10.strep is 0 (or strep equals 1 CFU), the logit of having at least one decayed tooth is -2.554. We can then calculate the baseline odds and probability.

> exp(-2.554) -> baseline.odds
> baseline.odds
[1] 0.07777
> baseline.odds/(1+baseline.odds) -> baseline.prob
> baseline.prob
[1] 0.072158

There is an odds of 0.077, or a probability of 7.2%, of having at least one decayed tooth if the number of CFU of the mutan strep is at 1 CFU. For every unit increment of log10(strep), or an increment of 10 CFU, the logit will increase by 1.681. This increment of logit is constant, but the increment of probability is not, because the latter is not on a linear scale.

The probability at each point of CFU is computed by replacing both coefficients obtained from the model. For example, at 100 CFU the logit is:

> coef(glm0)[1] + log10(100)*coef(glm0)[2]
(Intercept)
     0.8078

which corresponds to a probability of exp(0.8078)/(1 + exp(0.8078)), or about 0.69. To see the relationship for the whole dataset:

> plot(log10.strep, fitted(glm0))

A logistic nature of the curve is partly demonstrated. To make it clearer, the ranges of the X and Y axes are both expanded to allow a more extensive curve fitting.

> plot(log10.strep, fitted(glm0), xlim = c(-2,4), ylim = c(0,1),
    xlab = " ", ylab = " ", xaxt = "n", las = 1)

Another vector of the same name 'log10.strep' is created in the form of a data frame for plotting a fitted line on the same graph.

> newdata <- data.frame(log10.strep = seq(from=-2, to=4, by=0.01))
> predicted.line <- predict.glm(glm0, newdata, type="response")

The values for the predicted line from the above command must be on the same scale as the 'response' variable. Since the response is either 0 or 1, the predicted line lies in between, i.e. it is the predicted probability for each value of log10(strep).

> lines(newdata$log10.strep, predicted.line, col="blue")
> axis(side=1, at=-2:4, labels=as.character(10^(-2:4)))
> title(main="Relationship between mutan streptococci \n and probability of tooth decay",
    xlab="CFU", ylab="Probability of having decayed teeth")

Note the use of the '\n' in the command above to separate a long title into two lines.

[Figure: Relationship between mutan streptococci and probability of tooth decay — fitted probability of having decayed teeth plotted against CFU on a log scale (0.01 to 10000)]

Logistic regression with a binary independent variable

The above example of caries data has a continuous variable, 'log10.strep', as the key independent variable. In most epidemiological datasets, the independent variables are often categorical. In this chapter, we will use logistic regression to fit a model when the suspected causes are categorical variables. Remember that we have a dataset on the outbreak of food poisoning in Thailand analysed in Chapters 7-9. Readers are advised to compare the results of logistic regression in this chapter with those from the stratified analysis in previous chapters.

> zap()
> load("chapter9.Rdata")
> use(.data)
> des()

We model 'case' as the binary outcome variable and take 'eclair.eat' as the only explanatory variable.

23. alpha=0.glm(case ~ eclair.001
eating eclair
Log-likelihood = -527.1])
The 95% confidence interval of the odds ratio is obtained from
> exp(coef(summary(glm0))[2. Epicalc manipulates this matrix and gives rise to a display more understandable by most epidemiologists.display(glm0.75 (13.
162
. decimal=2)
If the data frame has been specified in the glm command.display)
You can change the default values by adding the extra argument(s) in the command. Error z value Pr(>|z|) (Intercept) -2. of observations = 977 AIC value = 1059. data=.display are 95% for the confidence intervals and the digits are shown to two decimal places.96 * coef(summary(glm0))[2.data) > summary(glm0) Coefficients: Estimate Std.1) * 1.
> logistic.79) P(Wald's test) P(LR-test) < 0.1] + c(-1.6075 No.
> args(logistic.eatTRUE 3.75 is obtained from:
> exp(coef(summary(glm0))[2. The default values in logistic.001 < 0. i.82.01.eat. The P value from Wald's test is the same as that seen from the coefficient matrix of 'summary(glm0)'.display(glm0) Logistic regression predicting diseased OR (95% CI) 23. See the online help for details.03 <2e-16 eclair.
> logistic.display) > help(logistic.> glm0 <. The log-likelihood and the AIC value will be discussed later.923 0.167 0.265 -11.48 <2e-16 =================== Lines omitted =================
The above part of the display is actually a matrix from the object 'coef(summary(glm0))'.e. family=binomial.276 11.2
The odds ratio from the logistic regression is derived from exponentiation of the estimate.2])
These values are close to simple calculation of the 2-by-2 table discussed earlier in Chapter 9. the output will show the variable description instead of the variable names.40.

8236 No.2.0.=2 < 0. While results from Wald's test depend on the reference level of the explanatory variable.
> glm1 <.52. data=.08) < 0.03.99) 0. If we relevel the reference level to be 2 pieces of eclair. However.6 P(Wald's test) P(LR-test) < 0. 'glm0'. R assumes that the first level of an independent factor is the referent level.glm(case ~ eclairgr.001 < 0.display(glm1)
Logistic regression predicting diseased OR(95%CI) pieces of eclair eaten: ref.display also contains the 'LR-test' result.eat'. the LR-test does not add further important information because Wald's test has already tested the hypothesis.1. When the independent variable has more than two levels.=0 1 17.21) 0. By default. which checks whether the likelihood of the given model. this depends on the reference level.001 < 0.001 < 0. Wald's test gives a different impression.82.The output from logistic.66) >2 43. family=binomial. would be significantly different from the model without 'eclair.display(glm2)
Logistic regression predicting diseased OR(95%CI) P(Wald's test) P(LR-test) pieces of eclair eaten: ref. the LR-test is more important than Wald's test as the following example demonstrates. of observations = 972 AIC value = 1041.glm(case ~ eclairgr. data=.
163
.data) logistic.data) > logistic. For an independent variable with two levels.89. family=binomial. ref="2") pack() glm2 <.27 (12.96 (1.28.49) 2 22.79 (0. We will return to this discussion in a later chapter.275 >2 1.56 (22. which in this case would be the "null" model. the LR-test is concerned only with the contribution of the variable as a whole and ignores the reference level.001 1 0.91) Log-likelihood = -516.relevel(eclairgr.001
Interpreting Wald's test alone.21.38.001 0 0.
> > > > eclairgr <.33.82. one would conclude that all levels of eclair eaten would be significant.002 ==============================================================
The results show that eating only one piece of eclair does not reduce the risk significantly compared to eating two pieces.04 (0.57 (9.

99) 0. crude.001 0. family = binomial.53.93) 0.display(glm4.1. In fact.glm(case ~ eclairgr + saltegg. of observations = 972 AIC value = 1043.08) 1 0.2. Methods to handle missing values are beyond the scope of this book and for reasons of simplicity are ignored here. The P value of 'saltegg' is shown as 0 due to rounding. try 'saltegg' as the explanatory variable. OR(95%CI) P(Wald) P(LR-test) < 0.79 (0.001 P(LR-test) < 0.51. To check whether the odds ratio is confounded by 'eclairgr'.79 (0.54 (1. Epicalc.display(glm3) Logistic regression predicting case OR (95% CI) saltegg: 2.03.00112.4.6
The odds ratios of the explanatory variables in glm4 are adjusted for each other.001
The odds ratio for 'saltegg' is statistically significant and similar to that seen from the cross-tabulation in Chapter 9.001 0.37 (1.53. which is not less than 0. The number of valid records is also higher than the model containing 'eclairgr'.21) 0.001 0. family=binomial) > logistic.22) Yes vs No Log-likelihood = -736.001 0.823 No.04 (0.28.1.975 Yes vs No Log-likelihood = -516. The crude odds ratios are exactly the same as from the previous models with only single variable.001.975
saltegg: 2. data=.279 1.04 (0.96 (1. of observations = 1089 AIC value = 1478 P(Wald's test) < 0. the two explanatory variables are put together in the next model. for aesthetic reasons.Next.01 (0.08) < 0.0.p.4.52.002 1.03.28. displays P values as '< 0.99) P value < 0. Note: ______________________________________________________________ One should always be careful when analysing data that contain missing values.001' whenever the original value is less than 0.002 adj.
164
. it is 0.=2 0 0.glm(case ~ saltegg.
> glm3 <.2.1.99) 0.3.96 (1.998 No.value=TRUE)
Logistic regression predicting case crude OR(95%CI) eclairgr: ref.0.001.data) > logistic.21) >2 1. Readers are advised to deal with missing values properly prior to conducting their analysis.
> glm4 <.275 0.

16) 0.The adjusted odds ratios of 'eclairgr' do not change suggesting that it is not confounded by 'saltegg'.04 (0.76) 1.display is actually obtained from the lrtest command.99) 1.2.2 >2 1.07) < 0. whereas the odds ratio of 'saltegg' is celarly changed towards unity.48.41. When there is more than one explanatory variable.53) 0. of observations = 972 AIC value = 1031.19.2.001 0 0. which compares the current model against one in which the particular variable is removed.79 (0.92 (0. The MantelHaenszel method only gives the odds ratio of the variable of main interest.001 1 0.0009809 .glm(case~eclairgr+saltegg+sex.
> lrtest(glm4.=2 < 0.08) 0.1.58 (1.49.1. we can compare models 'glm4' and 'glm2' using the lrtest command.975
The P value of 0.19. Logistic regression gives both the adjusted odds ratios simultaneously.006 saltegg: 2. An additional advantage is that logistic regression can handle multiple covariates simultaneously.display(glm5)
Logistic regression predicting case crude OR(95%CI) adj.02.21) 0.2.808 < 0.37 (1.001
Log-likelihood = -509. glm2) Likelihood ratio test for MLE method Chi-squared 1 d.08) Male vs Female 0. OR(95%CI) P(Wald's test) P(LR-test) eclairgr: ref. The difference between the adjusted odds ratio and the crude odds ratio is an indication that 'saltegg' is confounded by 'eclairgr'.3. family=binomial) > logistic.35. and now has a very large P value.2.82 (1.975 is the same as that from 'P(LR-test)' of 'saltegg' obtained from the preceding command.85 (1.04 (0. = 0.807 < 0. while keeping all remaining variables.0
165
. which is an independent risk factor.0. The test determines whether removal of 'saltegg' in a model would make a significant difference than if it were kept.f.03.96 (1.52. P value =
0.28.75 (0.001 0.8) 0. These adjusted odds ratios are close to those obtained from the Mantel-Haenszel method shown in chapter 9.1.0.
> glm5 <.5181 No. 'P(LR-test)' from logistic.99) Yes vs No sex: 1. Now that we have a model containing two explanatory variables.

Examine the following model where the variables 'eclairgr' and 'beefcurry' are specified as an interaction term.2) >2 2 (1. 'eclairgr:beefcurry'. family=binomial) > logistic.5.
166
. Interpretation of the P values from Wald's test suggests that the interaction may not be significant.5.0. OR(95%CI) P(Wald's test) P(LR-test) < 0. The former is equivalent to 'x1+ x2+x1:x2'.8 No.0. the P value from the LR-test is more important.1.5 (0.33.6) (Yes vs No) adj. The reason for not being able to confound is its lack of association with either of the preceding explanatory variables.4 (0.2) 1:Yes 1. Computation of the LR-test P values for the main effects.glm(case ~ eclairgr*beefcurry.3.41 1.5) 0 0.The third explanatory variable 'sex' is another independent risk factor.3) 0.03 0. Since females are the reference level. in fact it is decisive.52 0. If an interaction is present.5) 0. 'eclairgr' and 'beefcurry'. However. The value of 0. is the interaction term.1. the effect of one variable will depend on the status of the other and thus they are not independent.8 (0. at least one of which must be categorical.001
eclairgr:beefcurry: ref. This variable is not a confounder to either of the preceding variables because it has not substantially changed the odds ratios of any of them (from 'glm4').1.03 indicates that both 'eclairgr' and 'beefcurry' are not acting independently from each other.001 0.3 (0.6
0. is not possible since models without main effects (but with interaction terms) have the same Log-likelihood as ones with the main effects included. males have an increased odds of 90% compared to females. of observations = 972 AIC value = 1039.=2 0 0 (0. In other words.display(glm6.2.1.4. decimal=1)
Logistic regression predicting diseased crude OR(95%CI) eclairgr: ref.9) Log-likelihood = -511.6) 0.6.7 (0.1) 1 0.39 0.
> glm6 <.7.3. males and females were not different in terms of eating eclairs and salted eggs.53 < 0.9.3.7) >2:Yes 4. The crude odds ratios for the interaction terms are also not applicable.5 (0.09 0.
Interaction
An interaction term consists of at least two variables.11
The last term.1 (0.8 (0. In R the interaction term can be specified in two ways: 'x1*x2' or 'x1:x2'.3) beefcurry: 2.=2:No 0:Yes 0.7 (1.1.

167
. Subsequent models use the 'eclair. family = binomial.na(eclair. At the first step. Note that the glm command also allows a subset of the dataset to be specified.saltegg 1 1026 1036 <none> 1026 1038 . a subset of the dataset is created to make sure that all the variables have valid (non missing) records. We let R select the model with lowest AIC. data=complete.relevel(eclairgr. the AIC is 1038. removal of 'saltegg' would give the lowest AIC and is therefore chosen and used for the next step.eclair. direction = "both") Start: AIC= 1038.na(sex)) > glm8 <. subset=!is. The top one having the lowest AIC is the best one.na(beefcurry) & !is.data <.Readers may like to relevel the 'eclairgr' variable back to the original reference level (ref=0) and compare the output. the new deviance and the new AIC.data) > logistic.eat' variable instead of 'eclairgr' in order to simplify the output.eat:beefcurry Df Deviance AIC <none> 1026 1036 .
> eclairgr <.sex 1 1039 1049 Step: AIC= 1036.step(glm8.eclair.5 case ~ eclair. The command step removes each independent variable and compares the degrees of freedom reduced.glm(case~eclairgr*beefcurry.
> complete.eat * beefcurry + saltegg + sex.eat) & !is.display(glm7)
Stepwise selection of independent variables
The following section demonstrates stepwise selection of models in R.5. ref="0") > pack() > glm7 <. family=binomial.subset(.eat * beefcurry + saltegg + sex Df Deviance AIC .5 case ~ eclair.eat + beefcurry + sex + eclair. data=.na(saltegg) & !is. The results are increasingly sorted by AIC.eat:beefcurry 1 1030 1038 + saltegg 1 1026 1038 .eat:beefcurry 1 1030 1040 .sex 1 1039 1047
Initially. First.
> modelstep <.data)
The model may be too excessive.glm(case ~ eclair.data.

"Yes" is compared to "No". the way most epidemiological studies are designed for.display(modelstep.00059 beefcurry -0.9 (2. The odds ratio for 'sex' is that of males compared to females.601 3.58 0. Now.903 0.eatTRUE 2. crude=FALSE)
Logistic regression predicting case adj. of observations = 972 AIC value = 1036.672 0.47) 4.25.06 0.039 LR-test < 0.163 3. we check the results. Error z value Pr(>|z|) (Intercept) -2. It should be noted that stepwise regression is limited to exploration and often not suitable for specific hypothesis testing.067 0. However.
Interpreting the odds ratio
Let's look more carefully at the final model.In the second selection phase. not removing any remaining independent variable gives the lowest AIC. It tends to remove all non-significant independent variables from the model.685 2.573 -1.
> logistic.eat'.43.2296 No.8 (1.25) 1.4 (0.44 0. Eating beef curry by itself is a protective factor. OR(95%CI) 7.59 0. For 'eclair.001 < 0.eatTRUE:beefcurry 1.001 0.
> summary(modelstep) =================== Lines omitted ================== Coefficients: Estimate St.494 -5.66) 0.eatTRUE:beefcurryYes Log-likelihood = -513.115 < 0.001 0.2.412 0.5
All the three variables 'eclair. Thus the selection process stops with the remaining variables kept. 'beefcurry' and 'sex' are dichotomous. the odds is increased and becomes positive.13.31. Sex is an independent risk factor. Eating eclairs is a risk factor.1 (1.71) P(Wald's) < 0. the effect of which was enhanced by eating beef curry.15.eat' it is TRUE vs FALSE and for 'beefcurry'.1.048
eclair.eat beefcurry: Yes vs No sex: Male vs Female eclair. when eaten with eclairs.00033 eclair. The odds ratios and their confidence intervals must still be calculated regardless of the statistical significance.
168
.586 0.001 0.001 < 0. In hypothesis testing one or a few independent variables are set for testing.07.41 6.3e-08 eclair.03923 =================== Lines omitted ==================
The final model has 'saltegg' excluded.11484 sexMale 0.

115 < 0.data$beefcurry. crude=FALSE) Logistic regression predicting case
eclair. 32. which means that males have approximately a 1. of observations = 972 AIC value = 1036.039 P(LR-test) < 0.47) (0.24 OR (95%CI) (16. However. 'beefcurry' and their interaction term 'eclair. The other two variables. family = binomial. the odds ratio is then 7.93) P(Wald's test) < 0.47 1. The required odds ratio can be obtained from computing the product of the appropriate odds ratio of the individual variables.The independent variable 'sex' has an odds ratio of approximately 1.001 0.eat beefcurry: No vs Yes sex: Male vs Female eclair.
> complete.9.data$beefcurry <.eat * beefcurry + sex.31.2296 No.eat:beefcurry' term is 0. The odds ratio of 'eclair.001 0.001 < 0.7. are interacting.8 0.2.8. Among the beef curry eaters. If 'beefcurry' is "No" (did not eat beef curry).3) (0.62.eat'.1 or approximately 32.display(glm9.eat' and beefcurry.0.4 2. A better way to get the odds ratio and 95% confidence interval for 'eclair.9 × 4. ref="Yes") > glm9 <.5
The odds ratio and 95% confidence interval of 'eclair.glm(case ~ eclair.eat' and 'beefcurry' are both 1).eat:beefcurry' need to be considered simultaneously.
169
.data) > logistic.eat' depends on the value of 'beefcurry' and vice versa.001 < 0.8 times higher risk than females. The odds ratio for eclair.eat' among those who ate beef curry are in the first row because the 'beefcurry' term in the second row and the interaction term in the last row are all 0.4.9. the standard errors and 95% confidence interval cannot be easily computed from the above result. Three terms 'eclair.59) (1. data = complete.001 0. the interaction term should be multiplied by 1 (since 'eclair.06.eatTRUE: beefcurryNo adj. 'eclair.relevel(complete.eat for this subgroup is therefore only 7.eat' among 'beefcurry' eaters is to relevel the variable and run the model again.048
Log-likelihood = -513.8. the 'eclair.

but separate columns. aggregated table."B") pack()
The Epicalc function pack identifies all free vectors with the same length as the number of records in .2. data=. the degrees of freedom is different.1. data2 has only four rows of data compared to .data and adds them into the data. The reshaped data. v.4) > data2 <.reshape(.data.
170
. timevar="death".
> glm(death ~ anc+clinic.data death 1 no 2 yes 3 no 4 yes 5 no 6 yes 7 no 8 yes anc clinic Freq old A 176 old A 12 new A 293 new A 16 old B 197 old B 34 new B 23 new B 4
This is a format with 'Freq' being a variable denoting numbers of subjects in each category.frame.3.3.4. idvar="condition".data)
The coefficients are the same as those from using the original dataset."new") clinic <.2.name="Freq".data$condition <.c("no".
> . However. Another data format for logistic regression is possible where the number of cases and number of controls of the same exposure are in the same row.c("old". binomial. Sometimes. These free vectors are then removed from the global environment.Other data formats
The above datasets are based on individual records.factor(clinic) levels(clinic) <.data.factor(death) levels(death) <.c("A".factor(anc) levels(anc) <."yes") anc <.
> > > > > > > > > > > zap() data(ANCtable) ANCtable use(ANCtable) death <. which has 8 rows.
> . ANCdata.c(1. direction="wide")
The variable 'condition' is created to facilitate reshaping. weight=Freq. the regression is required to be performed based on an existing. This variable is put as the 'weight' in the model.

The coefficients and standard errors from this command are the same as those above.no Freq. There were three groups of patients studied: ectopic pregnancy patients ('EP'). Try the following commands in R:
> > > > zap() data(Ectopic) use(Ectopic) des() No.yes. current clients who came for an induced abortion ('IA') and those who came for delivery ('deli'). which are occasionally found. Case-by-case format of data is most commonly dealt with in the actual data analysis. data=data2. at this stage.> data2 anc clinic condition Freq.yes 1 old A 1 176 12 3 new A 2 293 16 5 old B 3 197 34 7 new B 4 23 4
The first column in each row is the 'row. This data frame can be written to a text file with 'row. Logistic regression for 'data2' can be carried out as follows:
> glm(cbind(Freq. the latter two groups are combined and classed as the controls whereas the first group is classed as the cases. the residual deviance and AIC are much smaller due to the smaller number of degrees of freedom. are mainly of theoretical interest. However. The exposure of interest is 'hia' or history of previous induced abortion and a potential confounder is 'gravi' or level of gravidity. For simplicity.
More than 2 strata
The dataset Ectopic comes from a case-control study testing a hypothesis whether previous induced abortion is a risk factor for current ectopic pregnancy. of observations = 723 Variable Class Description id integer outc factor Outcome hia factor Previous induced abortion gravi factor Gravidity
1 2 3 4
> summ()
171
. family=binomial)
The left-hand side of the formula is a result of column binding the two outcome frequency columns. Freq.no) ~ anc + clinic.names' and the variable 'condition' (the third variable) omitted. The formats in ANCtable and 'data2'. The remaining parts of the commands remain the same as for the case-by-case format.names' of the data frame.

'help(mhor)'.Exercises________________________________________________
Problem 1. Problem 2. Problem 4. after adjustment for the effect of previous induced abortion ('hia'). Use the Hakimi dataset to do a similar analysis. compute the odds ratio and 95% confidence interval for combined exposure to 'eclair. Hint: 'help(xtabs)'. With the data frame 'complete.eat' and 'beefcurry' using the group who were exposed to neither eclair nor beef curry as the referent group.data'. In the Ectopic dataset.
176
. Use the ANCtable dataset and the function xtabs to create a stratified 2x2 table. Then use the mhor function to analyse the adjusted odds ratio. unclass 'gravi' and use logistic regression to investigate a dose response relationship (linear trend) between gravidity and risk of ectopic pregnancy. Problem 3.

drinking alcohol and working in the rubber industry are risk factors for oesophageal cancer. comparison is made within each matched set rather than one series against the other. the methods can still be applied to other matched case-control studies. This latter file is first used for matched pair analysis. particularly in the matched setting. of observations = 52 Variable Class 1 matset numeric 2 case numeric 3 smoking numeric 4 rubber numeric 5 alcohol numeric
Description
177
. The matching ratio varied from 1:1 to 1:6. and readers should consult the references at the end of this chapter. In the analysis of matched sets.Chapter 16: Matched Case Control Study
Examples in previous chapters have cases and control independently recruited. For a matched case control study. There are many good books on how to analyse case-control studies. when a case is recruited. Each case was matched with his/her neighbours of the same sex and age group. can be selected to match with the case in some parameters such as age and sex and other conditions such as being siblings or neighbours. the datasets VC1to1 and VC1to6 consist of data from a matched case-control study testing whether smoking. However. In this chapter. The examples in this chapter are for demonstration purposes only. The sample size is rather small for making solid conclusions. then the dataset should probably be analysed in a non-matched setting. a control. or a set of controls (more than one person).
> > > > zap() data(VC1to1) use(VC1to1) des()
No. If control series are chosen based on matching on only age and sex and the purpose of such selection is only to avoid imbalances. The file VC1to6 is the full dataset whereas VC1to1 has the number of controls per case reduced to 1 for all matched sets.

The level of contrast of history of smoking between the two based on matched pairs is called a conditional odds ratio. It is the value of the left lower corner cell divided by the right upper corner cell.data
179
. when the disease of interest is rare. 3. We now analyse the full dataset. use(VC1to6) des() summ()
No. it is often cost-effective to increase the number of controls per case. strata=matset) Number of controls = 1 No. this means that the ratio of discordant counts between cases having the exposure against controls having exposure is 1. of controls exposed No. where each case may have between 1 and 6 matched controls. smoking.
1:n matching
If there is no serious problem on scarcity of diseased cases. The efficiency (especially resources spent on collecting data from extra controls) is decreased but it means that the study may end sooner. the best ratio of matching is one case per control. of observations = 119 ================= lines omitted ============ > . of cases exposed 0 1 0 0 5 1 5 16 Odds ratio by Mantel-Haenszel method = 1 Odds ratio by maximum likelihood estimate (MLE) method = 1 95%CI= 0. However. Resources spent on collecting data from each individual will be most efficient regardless of whether the subject is a case or a control. In fact.29 .454
The two methods give the same values for the odds ratio. The MLE method also gives a 95% confidence interval of the estimate. Epicalc has a function matchTab that can be used to analyse the matched set (not necessary 1 case per 1 control) from the original dataset as follows:
> detach(wide) > rm(wide) # not required anymore > matchTab(case. In this case the conditional odds ratio (sometimes called McNemar's odds ratio) is 5/5 = 1.
> > > > zap() data(VC1to6).

of controls exposed No. However. of cases exposed 0 1 2 3 4 5 6 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 2 Odds ratio by Mantel-Haenszel method = 1.988 Odds ratio by maximum likelihood estimate (MLE) method = 2. shows that there are four matched sets with six controls per case. Let's use the Epicalc function matchTab instead.066 95%CI= 0. The last table. for example. strata=matset) Number of controls = 1 No. smoking. of cases exposed 0 1 0 0 0 1 0 3 Number of controls = 2 No. the conditional odds ratio for the 1:1 matched case-control study is based on the ratio of discordant exposures between cases and controls of
180
. One has case exposed and five of the six controls non-exposed. of cases exposed 0 1 2 0 0 0 1 1 1 1 0 ================= lines omitted ============ Number of controls = 6 No. One of them has case non-exposed and three out of the controls exposed.678 . The odds ratios from the two different datasets are slightly different. The remaining two sets have the case and all of the six controls exposed.matset case smoking rubber alcohol 1 1 1 1 0 0 2 1 0 1 0 0 3 2 1 1 0 1 4 2 0 1 1 0 ================= lines omitted ============ 116 26 0 0 0 0 117 26 0 1 1 0 118 26 0 0 0 0 119 26 0 1 1 1
It would be very cumbersome to reshape this data into a wide form.
Logistic regression for 1:1 matching
As discussed above. the effect of smoking on the outcome is still not statistically significant as the 95% confidence interval of the odds ratio contains the value 1.299
The command gives six tables based on the matched sets of the same size (cases per controls). of controls exposed No. of controls exposed No.
> matchTab(case. 6.

2.04 on 26 on 25 degrees of freedom degrees of freedom
In the above glm model.5 (1. which is the same as the result from the matched tabulation. Epicalc can display the results in a more convenient format.2895 3. the logit of which is 0.2.diff-1. In conditional logistic regression.diff
Log-likelihood = -18. there is no such intercept because the difference of the outcome is fixed to 1.044 Residual deviance: 36. the intercept is the expected value of the dependent variable (the variable on the left-hand side of the formula) when all the independent variables are equal to 0.display(co. Error z value Pr(>|z|) smoke.lr2 <.8 (1. the difference of the outcome (which is always 1 for the above reason) is predicted by the difference in smoking habit.044 AIC: 38.05 P(LR-test) 0.diff ~ smoke.
> co.9) 4.000 0. the odds ratio is exp(0) = 1. The 95% confidence interval of the odds ratio can be obtained from:
> exp(confint. decimal=1) Logistic regression predicting outcome.20.2) P(Wald's) 0. binomial) > logistic.default(co. adding the alcohol term. Usually.diff
smoke. With the coefficient of 0.lr1)) 2.lr1) Logistic regression predicting outcome.diff OR(95%CI) 1 (0.display(co.3.3.29.513
182
.23.66 0.diff 0.0437
Recall that the advantage of logistic regression is in its ability to handle more than one exposure variable.3.8) 4. of observations = 26 AIC value = 38.0218 No.diff crude OR(95%CI) adj.7 (0.4542
These values are exactly the same as those obtained from the matched tabulation.diff + alcohol.glm(outcome.5) 0.5 % smoke.66 0.45) P(Wald's test) P(LR-test) 1 -
smoke.diff alcohol.632 0 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 36.
> logistic. Run a logistic model again.Estimate Std.OR(95%CI) 1 (0.lr2.diff 0. There is an additional term '-1' in the right-hand side of the formula.5 % 97. which indicates that the intercept should be removed from the model.03
Log-likelihood = -15.

369 0. The original dataset in long format can be used.73 1. If the called command is used. and create the values of difference in the outcome and exposure variables. A simpler method of multivariate analysis of the VC1to1 dataset is to use the command clogit (short for 'conditional logit') from the survival package.
Conditional logistic regression
The above logistic regression analysis.81 0.182 2. 52).95 smoking 0.208 0.0814 Wald test = 3.
> > > > > zap() library(survival) use(.444 0.803 1. p=0.95 upper . the result will be the same.05 exp(coef) exp(-coef) lower . Moreover.92 alcohol 4. p=0.02 on 2 df.998 23.147 Score (logrank) test = 4. of observations = 26 AIC value = 35.data) clogit1 <.708 -0.clogit(case ~ smoking+alcohol+strata(matset)) summary(clogit1) n= 52 coef exp(coef) se(coef) z p smoking -0. which is based on manipulating the data.092 (max possible= 0. the method is applicable only for 1:1 matching.572 4. method = "exact")
The odds ratios and their 95% confidence intervals from clogit are the same as those obtained by modelling the difference.314 0.No. case) ~ smoking + alcohol + strata(matset). The statistical analyst needs to reshape the dataset.diff' has changed the coefficient of 'smoke.0991
The top section of the results reports that the clogit command actually calls another generic command coxph. The last section contains several test results.83 on 2 df.62 on 2 df.73 0. each of which indicates that the model is not significantly different from the null model (the model that does not include any predictor variables).5 ) Likelihood ratio test= 5. is still rather cumbersome.23 Rsquare= 0. p=0.66 alcohol 1.026
The introduction of 'alcohol.
> coxph(formula = Surv(rep(1.957 0.diff' substantially indicating that smoking is confounded by drinking alcohol.81 0.
183
.

The Analysis of Case-Control Studies (Statistical Methods in Cancer Research.20.81 (1. This conditional log likelihood can be used for comparison of nested models from the same dataset. 3.655 0.23.clogit(case ~ smoking + alcohol +rubber + strata(matset)) > attributes(clogit3) > clogit3$loglik [1] -37.The Epicalc function clogistic. which also indicates the level of fit.025
smoking: 1 vs 0 alcohol: 1 vs 0
No. OR(95%CI) 0.83) 52 adj.73 (0.2.
> clogit3 <.45) 4. The second sub-element is specific to the particular model. Choose the best fitting model. 1). Problem 2. Compare the results with those obtained from the conditional logistic regression analysis. Vol. is the same for all the conditional logistic regression models. of observations =
References
Breslow NE.89398
The element 'loglik' from each clogit command (analogous to 'logLik' of glm) contains two sub-elements. This test result can be seen from the display of the model.97. Day NE (1980).89489 -31.display(clogit1)
Conditional logistic regression predicting case : 1 vs 0 crude OR(95%CI) 1.23) P(Wald) 0. which is the conditional likelihood of the null model. The conditional logistic regression model gives neither the log likelihood nor AIC value but it does give the conditional log likelihood.66 0.display can be used to obtain a nicer output.29. Twice the absolute difference of the two sub-elements is equal to the likelihood ratio test for the model. The first sub-element.
184
.05 P(LR) 0. Try different models and compare their conditional log likelihoods.0 (0.5 (0. Carry out a matched tabulation for alcohol exposure in VC1to6.
Exercises________________________________________________
Problem 1. Int Agency for Research on Cancer.
> clogistic.18. Refer to the log likelihood and AIC values in the preceding chapter on generalized linear model.92) 4.

Chapter 17: Polytomous Logistic
Regression
Logistic regression is well known for the modelling of binary outcomes. In some occasions, the outcome can have more than two non-ordered categories. In chapter 15 we looked at the Ectopic dataset, which came from a study testing a hypothesis whether previous induced abortion is a risk factor for current ectopic pregnancy (EP). The outcome has two groups of controls: subjects coming for induced abortion services (IA) and women who delivered babies (Deli). Both groups were used to represent intra-uterine pregnancy. The outcome in this study has therefore three nominal categories.

The mosaic plot gives complicated information. The column of the plot is outcome, which is divided into EP, IA and Deli, as previously described. The sizes of the 3 “columns” are the same (241 subjects). Each row represents the three levels of gravidity (number of pregnancies): 1-2, 3-4 and > 4, respectively. The distribution of gravidity among the EP and IA groups are more or less the same, i.e. around a half having 1-2 pregancies, whereas among the women coming to deliver a baby, the percentage in this group is much higher (about 75%). Finally, information can be obtained from the different colours. Blue areas represent women who experienced previous induced abortion while white represents those who did not. In each column, such a percentage appears to increase with gravidity, i.e. women who have high gravidity will have a higher level of exposure to induced abortion in the past. Comparison among the three columns, which is the main hypothesis of this study, shows that the proportion of blue colour is highest among the EP group.

The upper part of the output concerns the iteration process of the neural network. The important part for epidemiology is in the 'Coefficients:' section. Interpretation of the coefficients of polytomous logistic regression is rather complicated, especially when the design has one group of cases and more than one group of controls. There are three outcome categories. The first one, 'EP', is the reference against which the two comparisons are made. The risk for being EP in this case is reverted to the chance of not being EP within the dataset. Since this study was a case control study, the intercept values should be ignored. The most important part is the coefficients of 'hia'. For those who had a history of induced abortion, the logit of being IA in this pregnancy changes by -0.90735 unit. This is equivalent to an odds ratio of exp(0.90735) or 0.403. "The odds of having intra-uterine pregnancy (and eventually came for induced abortion) is reduced by a factor of 0.403 if the subject had a history of induced abortion" can be rephrased as "The odds of having ectopic pregnancy (and therefore not in the IA group) is increased by 1/0.403, or a factor of 2.48". Similarly, the odds ratio for EP using Deli as the control is 1 / e-1.7258539 = 5.617. It is worth remembering that in the chapter on logistic regression, the odds ratio for history of previous induced abortion using two groups combined was obtained as follows:

Only the standard errors section is displayed because the coefficients section is shown above with the previous command and the correlation section is not directly related here. To obtain the z value for each cell, type:
> coef(s1) / s1$st -> z; z (Intercept) hiaever IA IA 3.6932 -4.6139 Deli 6.3136 -8.5943

High levels of 'z' indicate the coefficient is several times the value of the standard error. In other words, the coefficient is far away from 0, which the null hypothesis (of no association) is based on. P values can be further obtained by:
> pnorm(abs(z), lower.tail=FALSE)*2 -> p.values > p.values (Intercept) hiaever IA IA 2.2143e-04 3.9513e-06 Deli 2.7264e-10 8.3774e-18

Note that the absolute values of 'z' were used before computing the P values. The 95% confidence interval of the coefficients can be computed based on the coefficients and the standard errors.

The formatting of the output has been modified to fit on the page. The P values are coded with the number of asterisks conforming to those used in the summary of the 'glm' and 'lm' models. Odds ratios for the intercepts are irrelevant and are therefore omitted. As discussed previously, the odds ratios here are not for risk of ectopic pregnancy but for their reciprocals. To include the variable 'gravi' in the model, type:
> multi2 <- multinom(outc ~ hia + gravi) > mlogit.display(multi2)

Again, the formatting of the output has been modified to fit on the page. None of the coefficients and odds ratios of gravidity in this model are significant. However, this model has a much lower residual deviance compared to model 'multi1'. A reduction from 1507.464 to 1489.175 or 18.289 units at a cost of introducing four more parameters (two gravi levels for two outcomes) can be considered worthwhile since the P value from the chi-squared of 18.289 with 4 degrees of freedom is 0.001. Moreover, the AIC value from model 'multi2' of 1505.175 is obviously smaller than that from 'multi1' of 1515.464. For the final conclusion, after adjustment for gravidity, history of previous induced abortion significantly increases the risk for ectopic pregnancy. The odds ratio is 1/.33 or 3.03 if the client currently requesting for induced abortion is used as the referent group and 1/.225 or 4.4 if women who delivered a baby is the referent group. It is well known that induced abortion is often repeated. Current clients for this service usually experience more induced abortions than the general population. Ectopic pregnancy patients have even more experience of induced abortion than this group. Therefore, history of induced abortion is very likely a true risk factor for ectopic pregnancy.
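A small sketch of the calculation behind that P value: the reduction in residual deviance is referred to a chi-squared distribution with the four extra degrees of freedom.

> pchisq(1507.464 - 1489.175, df = 4, lower.tail = FALSE)   # approximately 0.001, as quoted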

Selection of referent outcome group
The outcome variable in a polytomous logistic regression is usually a factor containing more than two levels. The first level is usually taken as the referent level. The same results of the analysis could be obtained by creating three dummy outcome variables and using them in a matrix format with the cbind function.
> > > > > ep <- outc == "EP" ia <- outc == "IA" deli <- outc == "Deli" multi3 <- multinom(cbind(ep,ia,deli) ~ hia+gravi) summary(multi3)

> mlogit.display(multi3)

190

The above commands should give the same results as those from 'multi2' except that the names of the outcome groups are in lower case. Since the first column is always used as the referent group, one can exploit this method to shuffle the order of the outcome variables in order to change the referent group. For example, to use 'deli' as the referent level, 'deli' is put as the first column of the outcome matrix:

> multi4 <- multinom(cbind(deli, ep, ia) ~ hia + gravi)
> mlogit.display(multi4)

(Output table omitted: it lists the coefficients and relative risk ratios, RRR (95% CI), for the 'ep' and 'ia' outcomes with 'deli' as the referent group; the layout was lost in extraction.)

The output is relatively easy to interpret. Using delivery as the referent outcome, for a woman with a history of induced abortion the odds of being 'EP', or having an ectopic pregnancy in this admission, increased by 4.443 fold (which is highly significant), and that of being a (repeating) induced abortion patient increased by only 47 percent (OR = 1.47, which is non-significant). On the other hand, increasing gravidity does not independently increase the risk for ectopic pregnancy but significantly, and in a dose-response relationship fashion, increases the chance of being a client for induced abortion services in the current visit.

Exercises________________________________________________
In a fictitious trial of a vaccine on 120 mice, 75 were given the vaccine ('vac' = 1) while 45 were given a placebo ('vac' = 0). Among these were 35 young mice ('agegr' = 0) and 85 old mice ('agegr' = 1). There were three levels of outcomes: 1 = no change, 2 = became immune, 3 = died.

Outcome  vac  agegr  total
   1      0     0      25
   1      0     1      15
   1      1     0       4
   1      1     1       8
   2      0     0       1
   2      0     1       0
   2      1     0      25
   2      1     1      35
   3      0     0       3
   3      0     1       1
   3      1     0       2
   3      1     1       1

Problem 1. Is there any difference in age group among the two groups of vaccine recipients?
Problem 2. Is there any difference in outcomes between the vaccine and placebo treatment groups?
Problem 3. Is there any association between age group and outcome?

For light infection (1-1,999 epg), the young adults and the elder subjects had a higher risk than the children. Shoe wearing has a protective effect on both light and heavy infection; for heavy infection (2,000+ epg vs no infection), the odds are reduced by 72% if the subject wore shoes.

Modelling ordinal outcomes

Alternatively, since intensity is an ordered outcome variable, it is worth trying ordinal logistic regression. In such a model the coefficients of all independent variables are shared by the two cut points of the dependent variable. The command polr from the MASS package will do this, but first we have to tell R that the outcome is ordered.

> class(intense)        # "factor"
> intense.ord <- ordered(intense)
> class(intense.ord)    # "ordered" "factor"
> ord.hw <- polr(intense.ord ~ agegr + shoes)
> summary(ord.hw)

(Summary table omitted: the layout was lost in extraction. It shows positive coefficients for both age groups, a negative coefficient for shoe wearing, two intercepts for the cut points 0|1-1,999 and 1-1,999|2,000+, a residual deviance of 1204.920 and an AIC of 1214.920.)

This ordinal logistic regression model has two intercepts, one for each cut point of the outcome. The values of these intercepts are not so meaningful and can be ignored at this stage. Because the coefficients are shared, the logit of getting any infection (intense = 1-1,999 and 2,000+ epg) at the first cut point and the logit at the second cut point (intensity of 2,000+ epg vs any lower level of intensity) change in the same way with the covariates. Both age-group coefficients are positive, indicating that the risk of infection increases with age, while shoe wearing has a negative coefficient, indicating that it protects against both levels of infection.

> summary(ord.hw) -> s1
> attributes(s1)
(The $names component lists, among others, "coefficients", "zeta", "deviance", "df.residual", "fitted.values", "edf", "method", "contrasts", "nobs", "convergence", "niter", "xlevels" and "pc"; the $class is "summary.polr".)
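The polr summary reports t values but no P values. A minimal sketch (using the 's1' summary object above, and treating the t values as approximately standard normal, as was done for the polytomous model):

> tval <- s1$coefficients[, "t value"]
> pnorm(abs(tval), lower.tail = FALSE) * 2   # two-sided P values for the coefficients and intercepts
> exp(coef(ord.hw))                          # ordinal odds ratios for age group and shoe wearing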

Chapter 19: Poisson and Negative Binomial Regression

The Poisson distribution
In nature, an event usually takes place in a very small amount of time. At any given point of time, the probability of encountering such an event is very small. Instead of probability, measurement is focused on density, which means incidence or 'count' over a period of time. While time is one dimension, the same concept applies to the density of counts of small objects in a two-dimensional area or three-dimensional space. When one event is independent from another, the occurrence is at random. Mathematically, it can be proved that under this condition the densities in different units of time vary with a variance equal to the average density. When the probability of having an event is affected by some factors, a model is needed to explain and predict the density. Variation among different strata is explained by the factors. Within each stratum, the distribution is random.

Poisson regression
Poisson regression deals with outcome variables that are counts in nature (whole numbers or integers). Independent covariates are similar to those encountered in linear and logistic regression, which have been covered in previous chapters. In epidemiology, Poisson regression is used for analysing grouped cohort data, looking at incidence density among person-time contributed by subjects of similar characteristics of interest. Poisson regression is one of three common generalized linear models (GLM) used in epidemiological studies. The other two that are more commonly used are linear regression and logistic regression. There are two main assumptions for Poisson regression. Firstly, risk is homogeneous among person-times contributed by different subjects who have the same characteristics of interest (e.g. sex, age-group) and the same period. Secondly, asymptotically, or as the sample size becomes larger and larger, the mean of the counts is equal to the variance.
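A quick numerical illustration of the second assumption (a sketch only; the rate of 4.5 events per unit of person-time is arbitrary): counts simulated from a Poisson distribution have a variance close to their mean.

> x <- rpois(10000, lambda = 4.5)
> mean(x); var(x)   # both should be close to 4.5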

Benefits of Poisson regression models
Straightforward linear regression methods (assuming constant variance, normal errors) are not appropriate for count data for four main reasons:
1. the model might lead to the prediction of negative counts,
2. the variance of the response may increase with the mean,
3. the errors will not be normally distributed,
4. zero counts are difficult to handle in transformations.
Poisson regression eliminates some of the problems faced by other regression techniques. For example, in logistic regression, different subjects may have different person-times of exposure. Analysing risk factors while ignoring differences in person-times is therefore wrong. In survival analysis using Cox regression (discussed in chapter 22), only the hazard ratio and not the incidence density of each subgroup is computed. The analysts and the readers may not have a clear idea of the descriptive statistics of these baseline risks. Poisson regression, in other words, produces both 'baseline incidence density' as well as 'incidence density ratio' among strata, after adjusting for various other risk factors.

Example: Montana smelter study
The dataset Montana was extracted from an occupational cohort study conducted to test the association between respiratory deaths and exposure to arsenic in the industry. The main outcome variable is 'respdeath'. This is the count of the number of deaths among 'personyrs' or person-years of subjects in each category. The other variables are independent covariates including age group 'agegr', period of employment 'period', starting time of employment 'start' and the level of exposure to arsenic during the study period 'arsenic'. Read in the data first and examine the variables.

> zap()
> data(Montana)
> use(Montana)
> summ()
No. of observations = 114
(Summary table omitted: it lists the six variables respdeath, personyrs, agegr, period, start and arsenic, each with 114 observations; for example, respdeath ranges from 0 to 19 deaths and personyrs from 4.2 to 12451 person-years.)
Modelling with Poisson regression

> model1 <- glm(respdeath ~ period, offset = log(personyrs), family = poisson)
> summary(model1)
(Coefficient table omitted: the output shows an increasingly positive coefficient for each later period of employment relative to the earliest period, and an AIC of 596.)

The option 'offset = log(personyrs)' allows the variable 'personyrs' to be the denominator for the counts of 'respdeath'. A logarithmic transformation is needed since, for a Poisson generalized linear model, the link function is the natural log, and the default link for the Poisson family is the log link. An important criterion in the choice of a link function for various families of distributions is to ensure that the fitted values from the modelling stay within reasonable bounds. Specifying a log link (default for Poisson) ensures that the fitted counts are all greater than or equal to zero.

Note: ______________________________________________________________________
For more details on default links for various families of distributions related to generalized linear modelling, see the help in R under 'help(family)'.

The first model above, a Poisson regression with 'period' as the only independent variable, suggests that the death rate increased with time. The model can be tested for goodness of fit and checked for whether the Poisson assumptions mentioned earlier in the chapter have been violated.

Goodness of fit test
To test the goodness of fit of the Poisson model, type:

> poisgof(model1)
$results
[1] "Goodness-of-fit test for Poisson assumption"
$chisq
[1] 369.27
$df
[1] 110
$p.value
[1] 9.5784e-30
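The same test can be reproduced without Epicalc, assuming that poisgof compares the residual deviance against a chi-squared distribution with the residual degrees of freedom:

> pchisq(deviance(model1), df.residual(model1), lower.tail = FALSE)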

The component '$chisq' is actually computed from the model deviance, a parameter reflecting the level of errors. A large chi-squared value with small degrees of freedom results in a significant violation of the Poisson assumption (p < 0.05). The P value here is very small, indicating a poor fit. If only the P value is wanted, the command can be shortened:

> poisgof(model1)$p.value

Note: ______________________________________________________________________
It should be noted that this method is under the assumption of a large sample size. An alternative method is to fit a negative binomial regression model and check whether its dispersion parameter is different from 1, which is demonstrated in the latter section of this chapter.

We now add the second independent variable 'agegr' to the model.

> model2 <- glm(respdeath ~ agegr + period, offset = log(personyrs), family = poisson)
> AIC(model2)              # 396.64
> poisgof(model2)$p.value  # 0.00032951

The AIC has decreased remarkably from 'model1' to 'model2', indicating a poor fit of the first model. Model 'model2', however, still violates the Poisson assumption. Removal of 'period' further reduces the AIC but still violates the Poisson assumption to the same extent as the previous model.

> model3 <- glm(respdeath ~ agegr, offset = log(personyrs), family = poisson)
> AIC(model3)              # 394.47
> poisgof(model3)$p.value  # 0.0003295

The next step is to add the main independent variable 'arsenic1'.

> model4 <- glm(respdeath ~ agegr + arsenic1, offset = log(personyrs), family = poisson)
> summary(model4)
(Coefficient table omitted: the age groups above 40-49 years and the 'arsenic1' exposure categories all have positive and statistically significant coefficients; the null deviance is 376.02 on 113 degrees of freedom, the residual deviance 122.25 on 107 degrees of freedom, and the AIC 355.0.)
> poisgof(model4)$p.value  # 0.14869

'model4' has a much lower AIC than 'model3' and it now does not violate the assumption.

Linear dose response relationship
Alternatively, instead of having arsenic as a categorical variable, it can be included in the model as a continuous variable.

> model5 <- glm(respdeath ~ agegr + arsenic, offset = log(personyrs), family = poisson)
> summary(model5)
(Coefficient table omitted: the coefficient of the continuous 'arsenic' term is positive and highly significant.)
> poisgof(model5)$p.value  # 0.069942

A significant P value for the linear term implies that there is a linear dose-response relationship between exposure to arsenic and the risk for the disease. Although the linear term is significant, the AIC value of 'model5' is higher than that of 'model4'. It would therefore be better to keep arsenic as a factor. However, from 'model4' there does not appear to be any increase in the risk of death from more than 4 years of exposure to arsenic, so it may be worth combining the exposure into just two levels.

> arsenic2 <- arsenic1
> levels(arsenic2) <- c("<1 year", rep("1+ years", 3))
> label.var(arsenic2, "Exposure to arsenic")
> model6 <- glm(respdeath ~ agegr + arsenic2, offset = log(personyrs), family = poisson)
> summary(model6)
(Coefficient table omitted: the coefficient of 'arsenic21+ years' is 0.8109 and highly significant.)
> poisgof(model6)$p.value  # 0.13999

At this stage, we would accept 'model6' as the model of choice as it has the smallest AIC among all the models that we have tried. We conclude that exposure to arsenic for at least one year would increase the risk for the disease by exp(0.8109) or 2.25 times, with statistical significance.

Incidence density
In the Poisson model, the outcome is a count. In the general linear model, the relationship between the values of the outcome (as measured in the data and predicted by the model in the fitted values) and the linear predictor is determined by the link function. This link function relates the mean value of the outcome to its linear predictor. By default, the link function for the Poisson distribution is the natural logarithm. With the offset being log(person-time), the value of the outcome becomes log(incidence density). The matrix 'table.inc10000' (created previously) gives the crude incidence density by age group and period. Each of the Poisson regression models above can be used to compute the predicted incidence density when the variables in the model are given. For example, to compute the incidence density from a population of 100,000 people aged between 40-49 years who were exposed to arsenic for less than one year using 'model6', type:

> newdata <- as.data.frame(list(agegr="40-49", arsenic2="<1 year", personyrs=100000))
> predict(model6, newdata, type="response")
[1] 33.257

This population would have an estimated incidence density of 33.26 per 100,000 person-years.

Incidence density ratio
In a case control study, the odds ratio is used to compare the prevalence of exposure among cases and controls. If the disease is rare, the odds is close to the probability or risk, and this value is equal to the ratio between the odds of getting the disease among the exposed and the unexposed group. In a cohort study, the ratio of the risks for the two groups is called the 'risk ratio' or the 'relative risk'. The relative risk, however, ignores the duration of follow up, and in a real cohort study subjects do not always have the same follow-up duration. Therefore it is not a good measure of comparison of risk between the two groups. Instead, all subjects pool their follow-up times and this number is called 'person time', which is then used as the denominator for the event, resulting in 'incidence density'. Comparing the incidence density among two groups of subjects by their exposure status is fairer than comparing the crude risks. The ratio between the incidence densities of two groups is called the incidence density ratio (IDR), which is an improved form of the relative risk. In 'model6', to compute the incidence density ratio between the subjects exposed to arsenic for one or more years against those exposed for less than one year, we can divide the incidence among the former by that among the latter group.

> levels(newdata$arsenic2) <- c("<1 year", "1+ years")
> newdata <- rbind(newdata, list(agegr="40-49", arsenic2="1+ years", personyrs=100000))
> newdata
  agegr arsenic2 personyrs
1 40-49  <1 year     1e+05
2 40-49 1+ years     1e+05
> id <- predict(model6, newdata, type="response")
> idr.arsenic <- id[2]/id[1]
> idr.arsenic
[1] 2.2499

The above procedure starts by appending a new row to the data frame 'newdata', having everything the same as the first row except that the variable 'arsenic2' is "1+ years". The responses, or incidence densities, of the two conditions are then computed. The IDR is then obtained by dividing the incidence density for arsenic2 = "1+ years" by that for arsenic2 = "<1 year". A shorter way to obtain this IDR is to exponentiate the coefficient of the specific variable 'arsenic', which is the fifth coefficient in the model.

> coef(model6)
     (Intercept)       agegr50-59       agegr60-69       agegr70-79 arsenic21+ years
        -8.00865          1.47015          2.36611          2.62375          0.81087
> exp(coef(model6)[5])
arsenic21+ years
          2.2499

'idr.display' to get 95% CI of IDR
The following steps explain how the 95% confidence interval of the IDR for all variables can be obtained.

> coeff <- coef(model6)
> coeff.95ci <- cbind(coeff, confint(model6))

Note that confint(model6) provides a 95% confidence interval for the model coefficients.

> IDR.95ci <- round(exp(coeff.95ci), 1)[-1, ]

The required values are obtained by exponentiating the last matrix with the first row (the intercept) removed. The display is rounded to 1 decimal place for better viewing. Then the matrix columns are labelled and the 95% CI is displayed.

> colnames(IDR.95ci) <- c("IDR", "lower95ci", "upper95ci")
> IDR.95ci

A simpler way is to use the command idr.display in Epicalc.

> idr.display(model6, decimal=1)
Poisson regression predicting respdeath with offset = log(personyrs)
(Output table omitted: it lists crude and adjusted IDRs with 95% confidence intervals for each age group relative to 40-49 years and for arsenic exposure of one year or more, all with Wald's and LR-test P values below 0.001, followed by the log-likelihood of -171.8998, 114 observations and an AIC value of 353.8.)

The command idr.display gives results to 3 decimal places by default. This can easily be changed by the user.

Negative binomial regression
Recall that for Poisson regression one of the assumptions for a valid model is that the mean and variance of the count variable are equal. In practice, it is quite common for the variance of the outcome to be larger than the mean. This is called overdispersion. If a count variable is overdispersed, Poisson regression underestimates the standard errors of the predictor variables. When overdispersion is evident, one solution is to specify that the errors have a negative binomial distribution. The negative binomial distribution is a more generalized form of distribution used for 'count' response data, allowing for greater dispersion or variance of counts. Negative binomial regression gives the same coefficients as those from Poisson regression but gives larger standard errors. The interpretation of the results is the same as that from Poisson regression. Take an example of counts of water containers infested with mosquito larvae in a field survey. The data is contained in the dataset DHF99.

> library(MASS)
> data(DHF99); use(DHF99)
> des()
No. of observations = 300
  Variable    Class    Description
1 houseid     integer  no
2 village     integer  Village
3 education   factor   Educational level
4 containers  integer  # infested vessels
5 viltype     factor   Village type

Exercise_________________________________________________
Use step to select the best model predicting incidence densities of the Montana dataset. Compute the incidence density ratio for significant independent variables. Check the Poisson goodness of fit. Fit a negative binomial regression model to check the theta and its standard error term before conclusion whether there is any evidence of dispersion.
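A hedged sketch of the last step (the formula below simply reuses the covariates of 'model6' from this chapter; adjust it to whatever model is selected):

> library(MASS)
> nb1 <- glm.nb(respdeath ~ agegr + arsenic2 + offset(log(personyrs)))
> nb1$theta      # dispersion parameter
> nb1$SE.theta   # its standard error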

Chapter 20: Introduction to Multi-level Modelling

There are many other names for multi-level modelling, e.g. hierarchical modelling, mixed effects modelling, modelling with random effects. They are all the same, although each name has its own implication. In epidemiological studies, variables often have a hierarchy. For example, measurement of blood pressure belongs to an individual subject who can have more than one measurement. In this case, the individual person is at a higher hierarchy than each measurement. An individual, however, belongs to a family, all members of which may share several independent variables, such as ethnicity, housing, etc. In turn a family is usually a member of a village, and so forth. Thus the hierarchy can be country, province, district, village, family, individual and measurement. Certain independent variables will be at the individual measurement level, such as time of measurement. Some variables may belong to a higher hierarchical order, such as sex and age (individual), ethnicity (family), and distance from the capital city (village). Independent variables at different levels of the hierarchy should not be treated in the same way. For this reason multi-level modelling is also called hierarchical modelling. In simple modelling, modelling is usually meant for explanation of the relationship of variables in an informative and efficient manner. With, say, m ethnic groups under study, the number of parameters used to explain the effect of 'ethnic' is m-1 because the omitted one is used as the referent group. If the sample size is large and m is small, the number of parameters used would not be too high. On the other hand, if the sample size is small but the number of groups is high, e.g. 50 subjects each contributing multiple blood pressure measurements, the grouping variable would have too many levels to put into the model. In this situation, an average value for the group is computed and the individual members are treated as random effects without a parameter. This average or overall effect is called the fixed effect; the random effects must always have an average, which is used to estimate the overall effect. For this reason multi-level modelling is also called modelling with random effects. With the mixture of fixed and random effects in the same model, multi-level modelling is also called 'mixed effects modelling'.

Multi-level modelling is relatively new compared to other common types of modelling, such as linear and Poisson regression. There are variations in the methods of numerical iteration for computation of coefficients and standard errors; they generally give very close estimates but different standard errors. The estimates of the coefficients, variances and covariances, and the commands for computation, are available from the nlme library. The examples in this chapter are confined to the 'glmmPQL' function, or Generalized Linear Mixed Models using Penalized Quasi-Likelihood. It can handle all families used in GLMs with similar arguments in the command, except the additional terms defining the fixed and random effects. Readers are advised to explore other functions such as lme (linear mixed effects) and nlme (non-linear mixed effects).

From stratified analysis to random effects modelling
Analysis of the effect of putting additional table salt into the meal in chapter 12 was carried out having two strata, each with a relatively high number of subjects. The stratification factor (salt adding) has two levels, 'yes/no', but only one parameter in the model. In a setting with a high number of strata, each with a relatively small number of records, including individual strata would add too many parameters to the model, thus reducing the efficiency of explanation (too many variables used for explaining a small dataset). To solve this problem, each stratum is represented by the stratum mean, and each sample stratum is taken as a random member of the set of strata in the population. Therefore, regardless of how large the number of strata is, there would be only two parameters from the stratification factor: the mean and the variance (or standard deviation).

Example: Orthodontic Measurements
An example of such a situation is the dataset Orthodont. The growth of 27 children (16 boys and 11 girls) was assessed by measuring the distance from the pituitary to the pterygomaxillary fissure. Measurements were made on each child every 4 years (at ages 8, 10, 12 and 14 years). Each child can be initially interpreted as a stratum, and the 27 strata are taken as a random sample of an infinite population of strata.

Epicalc has a function called followup.494
min. des()
No.e. the dataset Orthodont can be used.
> followup. 2. Combining these two types of random and fixed effects. 16.825 0. name 1 distance 2 age 3 Subject 4 Sex
Description distance age Subject Sex
Obs. all subjects are assumed to have the same growth rate. the coefficient of 'age'.For the simplest multi-level modelling.
> > > > > > zap() library(MASS) # For the glmmPQL command library(nlme) # For the example dataset data(Orthodont) .93 2.407
median 23.5 8 1 1
max. For the intercept.data). 31. Be careful as some of the variable names in this data frame start with upper case. xlab="years")
213
.data <. ylab="mm".frame(Orthodont) use(.5 14 27 2
A follow-up plot is useful to visualize the data. the model is often called a 'mixed model'.02 11 14 1. is estimated as a single parameter. Once the library nlme has been loaded.plot. or the slope of the regression lines. outcome=distance.as.plot(id=Subject.d. time=age.col="multicolor") > title(main="PPMF distance by age". the model estimates the population 'mean intercept' and population standard deviation of the intercepts.25 7.data. The intercept has 'random effects' (for individual children) whereas the slope has a 'fixed effect' for the whole group. line.75 11 14 1
s. 108 108 108 108
mean 24. of observations =108 Variable Class 1 distance numeric 2 age numeric 3 Subject factor 4 Sex factor > summ() Var. which plots the outcome for each subject over time. i.

[Figure: "PPMF distance by age" -- a follow-up (spaghetti) plot of distance (mm) against age (8-14 years), one line per subject.]

To see whether there is a gender difference, we replace the 'lines' argument with the 'by' argument in the command.

> followup.plot(id=Subject, time=age, outcome=distance, by=Sex)
> title(main="PPMF distance by age", ylab="mm", xlab="years")

[Figure: "PPMF distance by age" with separate line types for Male and Female subjects.]

In both plots, it is evident that as age increases so does distance. Males generally had larger pituitary to pterygomaxillary fissure distances. The rates of individuals are however criss-crossing to a certain extent. Otherwise, the highest and the lowest lines are quite consistent.

Random intercepts model

For multi-level modelling, each subject is taken as a stratum. There are 27 intercepts, too many to have each of them as a parameter. Instead, a mean intercept is computed and the remaining are taken as random effects. The dependent variable is 'distance'. The independent variable is 'age', which has fixed effects (for all subjects). For this first model, the slopes are forced to be the same. The random effects (as indicated by the word 'random') is a constant of 1. The upper level of the model (following the '|' sign) is 'Subject', because the same subject has 4 repeated measurements. In other words, 'Subject' is at a higher level. The glmmPQL command handles the 'family' argument of the model in the same way as the glm command. Since the errors are assumed to be normally distributed, the family is specified as 'gaussian'.

> model0 <- glmmPQL(distance ~ age, random = ~1 | Subject, family = gaussian)

The above command creates a generalized linear multi-level model (glmm) using the Penalized Quasi-Likelihood (PQL) method of iteration.

> summary(model0)
Linear mixed-effects model fit by maximum likelihood
 Data: .data
  AIC BIC logLik
   NA  NA     NA
Random effects:
 Formula: ~1 | Subject
        (Intercept) Residual
StdDev:    2.072142 1.422728
Variance function:
 Structure: fixed weights
 Formula: ~invwt
Fixed effects: distance ~ age
                Value Std.Error DF  t-value p-value
(Intercept) 16.761111 0.8020244 80 20.89851       0
age          0.660185 0.0617993 80 10.68272       0
 Correlation:
    (Intr)
age -0.848
Standardized Within-Group Residuals:
        Min          Q1         Med          Q3         Max
-3.68695131 -0.53862941 -0.01232442  0.49100161  3.74701484
Number of Observations: 108
Number of Groups: 27

The 'AIC' and 'BIC' values are derived from 'logLik', the log likelihood. They will be used to compare the level of fit with other models using the same dataset and the same method of iteration. Note that AIC is equal to -2×logLik + 2×npar and BIC is equal to -2×logLik + log(n)×npar, where npar is the number of parameters in the model (in this model, four; namely, the standard deviations of intercepts and residuals, which are the random effects, and the coefficient of the fixed intercept and the fixed effect of age) and n is the number of observations (108). Random effects express themselves as standard deviations of errors. There are two parts of errors. The first part is the standard deviations of difference between the fixed intercept and the intercepts of individual subjects. The second part is the standard deviation of the residuals or the difference between the final predicted values and the observed values for each subject. There is no coefficient for these random effects terms because the means should be close to zero. This is because they are assumed to come from the standard normal distribution. The fixed part of the summary, similar to a conventional regression model, contains the coefficients and their standard errors. The coefficient of the intercept is 16.76. This means that on the average, at the age of 0, the PPMF distance for a child is expected to be 16.76 mm. The coefficient of age is 0.66. This means that for each birthday reached, an average child is expected to gain 0.66 mm length of PPMF distance. This coefficient is statistically significant as the standard error is relatively small, resulting in a large t-value and a small P value. The standardised residuals within groups (or within the child) are distributed with a certain degree of symmetry since the median is close to 0, and the lower and upper quartiles are relatively equidistant from the median, as are the minimum and the maximum. Finally, the model confirms that there were 27 children giving 108 records.

There are two parts of the coefficients: the fixed part and the random part. The fixed part, shown in the summary, is the average for all of the 27 strata (children). The fixed intercept is 16.761111, which means that the (average) estimated distance at birth (when age is 0) is 16.76 mm. For each increasing year of age, the PPMF distance increases by approximately two-thirds of a millimetre (0.66). The second or random part shows 'random intercepts only' since there is no variable in this part as specified by 'random ~ 1'. There are 27 (additional coefficients for) intercepts, one for each child. For the first child (M16) who has a negative random intercept, or starting distance, the mean intercept from the fixed part (16.76) must be subtracted by 0.9152788. The second person (M05) shares the same intercept. Altogether, the random intercepts range from -4.940849 (F10) to +4.899434 (M10). There are many other attributes worth exploring. The next interesting one is 'fitted(model0)', which contains the fitted or predicted values of each point of observation.
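The individual (random) intercepts mentioned above can be listed directly; a small sketch, using the nlme accessor functions on the 'model0' object:

> fixef(model0)   # the fixed intercept and the fixed effect of age
> ranef(model0)   # one intercept deviation for each of the 27 children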
> model0$fitted
      fixed Subject
1    22.043  25.377
2    23.363  26.697
3    24.683  28.017
4    26.004  29.338
5    22.043  21.463
6    23.363  22.783
7    24.683  24.104
8    26.004  25.424
==== Up to 108th person ==========

There are two columns of fitted values: fixed (average of each point of time) and random (by Subject). In fact, the fixed part has only four values predicting the average value for each value of age.

Each value has 27 repeated records. In other words, there are only four terms of fixed effects, each shared by all 27 subjects. The second component is predicting the intercept value for each subject, which varies from one child to another.
> followup.plot(id=Subject, time=age, outcome=fitted(model0), line.col="multicolor")
> title(main="Model 0: random intercepts", ylab="mm", xlab="years")

The X-coordinates for each line are the ages for that child. The corresponding Ycoordinates are the fitted values for the PPMF distance. Recall that there are two columns for the fitted values (for the fixed and random effects). The plot uses the second column, which is the predicted value for each child (random effects). The colour varies according to the (order of) 'Subject'.
[Figure: "Model 0: random intercepts" -- fitted PPMF distance (mm) against age (8-14 years), one line per child; all lines share the same slope but have different intercepts.]

The model fixes the coefficient of the slope, allowing only the intercepts to be a random variable. The next model releases the effects of age to become random with a mean value.
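The fitting command for this next model is not shown in this extract. A sketch of the presumed call (the text refers to the fitted object as 'glmm1' and later as 'model1') is to let both the intercept and the age slope vary by subject:

> model1 <- glmmPQL(distance ~ age, random = ~age | Subject, family = gaussian)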

Similar to 'model0', a graph can be plotted with the following commands.
> followup.plot(id=Subject, time=age, outcome=fitted(glmm1), line.col="multicolor")
> title(main="Model1: random intercepts and slopes", ylab="mm", xlab="years")


[Figure: "Model1: random intercepts and slopes" -- fitted PPMF distance (mm) against age (8-14 years), one line per child, each with its own intercept and slope.]

Model 'model0' is equivalent to a stratified analysis without interaction whereas 'model1' is equivalent to keeping an interaction term. The latter model suggests that each child has their own baseline distance (intercept) as well as their own growth rate. The graph shows different slopes for different subjects. The slopes are now a random effect as well as a fixed effect. In the random effects part, age has a standard deviation of 0.215 mm, which is relatively small compared to the randomness of the intercept (2.2 mm) and the residuals (1.3 mm). The variation due to differences in growth rate of the PPMF distance among subjects is small compared to the variation in baselines and the average growth rate. The correlation between age and intercept is negative (-0.585) in the random effects suggesting that the slope of the subjects tends to be flatter as the level of the Y-intercepts increases. The coefficients of the fixed effects for the intercept and age are not different from 'model0'. In fact the coefficients are the same as those from ordinary glm.
> summary(glm(distance ~ age, family=gaussian))

The standard errors from the generalised linear model are much higher than those of the multi-level models. These advanced models improve the precision of the estimates. In this example 'model1' has wider standard errors than 'model0'. When the age effect is partially individualised, the overall age effect reduces its precision. We have another independent variable 'Sex'. It would be interesting to examine whether the boys have larger distance than girls and whether the growth rates are different between the sexes.
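To see this comparison directly (a sketch, assuming 'model0' above and the ordinary glm fit below), the two coefficient tables can be placed side by side:

> summary(glm(distance ~ age, family = gaussian))$coefficients
> summary(model0)$tTable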

'Sex' is introduced as a pure fixed effect. In fact, it cannot be a random effect because there is no variation of sex in an individual subject. The growth lines are now separated by 'Sex'.
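The command that fits 'model2' is not included in this extract; presumably it adds 'Sex' as a fixed effect to the random-intercepts model, along the lines of this sketch:

> model2 <- glmmPQL(distance ~ age + Sex, random = ~1 | Subject, family = gaussian)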
> followup.plot(id=Subject, time=age, outcome=fitted(model2), by=Sex)
> title(main="Model2: random intercepts", ylab="mm", xlab="years")
[Figure: "Model2: random intercepts" -- fitted PPMF distance (mm) against age (8-14 years), with separate lines for Male and Female subjects.]

It is clear that the lines for males tend to be in the upper half of the plot whereas those for females tend to be in the lower part. To test whether the growth rates are different between the two sexes, an interaction term between age and sex is introduced.

> model3 <- glmmPQL(distance ~ age*Sex, random = ~1 | Subject, family = gaussian)
> summary(model3)
Linear mixed-effects model fit by maximum likelihood
 Data: .data
  AIC BIC logLik
   NA  NA     NA
Random effects:
 Formula: ~1 | Subject
        (Intercept) Residual
StdDev:    1.740851 1.369159
Variance function:
 Structure: fixed weights
 Formula: ~invwt
Fixed effects: distance ~ age * Sex
                   Value Std.Error DF   t-value p-value
(Intercept)    16.340625 0.9814310 79 16.649795  0.0000
age             0.784375 0.0779963 79 10.056564  0.0000
SexFemale       1.032102 1.5376069 25  0.671239  0.5082
age:SexFemale  -0.304830 0.1221968 79 -2.494580  0.0147
========= Remaining parts of output omitted ========

The interaction term between age and sex is significant. The coefficient of the main effect of 'Female' is 1.03, indicating that, under a linear growth assumption, at birth (where age is 0) girls have a longer average PPMF distance of 1.03 mm compared to boys. The coefficient of the interaction term is -0.30483, indicating that for each increment of one year girls gain about 0.3 mm less PPMF distance than boys. In other words, females have a shorter PPMF distance and a smaller growth rate.

> followup.plot(id=Subject, time=age, outcome=fitted(model3), by=Sex)
> title(main="Model3: random intercepts, fixed effects of age:sex", ylab="mm", xlab="years")
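From the fixed effects above, the average growth rate for girls can be obtained by adding the interaction term to the age coefficient (a small sketch using the nlme accessor, with coefficient names as printed in the summary):

> fixef(model3)["age"] + fixef(model3)["age:SexFemale"]   # about 0.48 mm per year for girls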

[Figure: "Model3: random intercepts, fixed effects of age:sex" -- fitted PPMF distance (mm) against age (8-14 years), with separate lines for male and female subjects.]

In conclusion, individual children have different baseline PPMF distances. Girls tended to have a higher PPMF distance at birth. However, boys have a faster growth rate than girls.

Note on lme4 package
Mixed effects modelling is a fast moving subject. A new package in R, called lme4, was introduced. The package contains a function called lmer, which is more efficient than the glmmPQL function in the MASS package and can accommodate more complicated types of nesting. For example, analysis of clinical visits could be simultaneously nested both by patient and by physician. While this feature is more advanced than what has been demonstrated in this chapter, this new package gives similar results for simple nesting. However, it is still in the experimental stage; for example, fitted values cannot be easily obtained. When this package is fully developed, it may replace the contents of this chapter.
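For comparison, a minimal sketch of the equivalent random-intercepts model in lme4 (function name and formula syntax as used by recent versions of that package):

> library(lme4)
> lmer(distance ~ age + (1 | Subject), data = Orthodont)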

of children living")
Problem 1. age and living in urban area on the probability of contraceptive use among the women.
> > > > # > > > > zap() data(Bang) use(Bang) label. "No. Compute the 95% confidence interval of their odds ratios.var(woman.children) label. Does number of living children have a linear dose response relationship with contraceptive use? Problem 3. "woman ID") Response variable label.var(age_mean. "age(yr) centred around mean") living. Use glmmPQL to compute the effects of the number of living children.var(living. "current contraceptive use") label.var(user.Exercises________________________________________________
The dataset Bang consists of a subset of data from the '1988 Bangladesh Fertility Survey'.factor(living.children. Does age have the same effect among urban and rural women on contraceptive use?
224
. Should age be a random effect? Problem 4.children <. Problem 2.

Chapter 21: Survival Analysis
Chapter 21: Survival Analysis

In a cohort study, a person is followed up from a starting time to the end of the study or to the time the follow-up has been terminated by the outcome event, whichever comes first. The outcome variable for each subject is therefore composed of 'time' and the 'status' at the end. Mathematically, the status is 1 if the event takes place and 0 otherwise. For subjects whose events take place before the end of the study, the total duration of time is known. For the subjects whose follow-up times end without the event, the end status is called 'censored' because the actual duration of time to the event is not known, or is 'censored' by the study. The event-free duration is an important outcome. For an unwanted event, the desired outcome is a longer event-free duration.

Example: Age at marriage
A data management workshop was carried out in 1997. Each of the 27 participants was asked to provide personal information on their sex, birth year, education level, marital status and year of marriage (for those who were married). The objective of this analysis is to use survival analysis methods to examine this dataset.

> library(survival)
> data(Marryage)
> use(Marryage)
> des()
No. of observations = 27
  Variable  Class    Description
1 id        integer
2 sex       factor
3 birthyr   integer  year of birth
4 educ      factor   level of eduction
5 marital   factor   marital status
6 maryr     integer  year of marriage
7 endyr     integer  year of analysis

> summ()
No. of observations = 27

Note that the original codes for the variable 'educ' were 2 = bach-, 3 = >bachelor. This was how the codes were defined in the original data entry program, and the label table associated with each categorical variable was kept with the data. In the output from the summ function, however, the numeric codes for 'educ' are displayed as 1 (bach-) and 2 (>bachelor). When R converts something to a factor the first level will always have an underlying code of 1. This anomaly is simply due to unclassing the levels of the factor variable in the output from the summ command. These numeric codes should not be confused with the original coding scheme; in fact, the codes were only used during the original entry of the data and are never used during data analysis. The variable 'endyr', fixed at 1997, is used for computation of age and age at marriage.

> age <- endyr - birthyr
> label.var(age, "Age")
> summ(age, by = marital)
(For marital = Single: 11 observations, mean age 31.18 years, median 32, s.d. 4.6, range 25 to 39. For marital = Married: 16 observations, mean age 37.38 years, median 37.5, range 29 to 45.)

[Figure: "Distribution of Age by marital status" -- sorted dot plots of age (25-45 years) for the Married and Single groups.]

There were 16 (59%) married participants. Clearly the married participants were older than the single ones.
min. This anomaly is simply due to unclassing the levels of the factor variable in the output from the summ command. These numeric codes should not be confused with the original coding scheme. 39
s. mean median s. 29
max. In fact.

maryr . married)) [1] 26 26 29 25+ 26 26+ 28 28 28 36+ 36 39+ 29 33+ [15] 25 31 27 34+ 37+ 26 27+ 25 27 26+ 28+ 30 32+ > head(data. the procedures used on this simple dataset can be applied to other survival type data. 45 36
Among the 16 married participants the mean age at marriage was 27.marr <.marr))
. we do not know this length of time. In order to analyse this data. surv. whereas most data for survival analysis come from follow up studies.marr. 6. For a married person.var(age.5 s.8:9]) No.marr. In the Marryage dataset.77 min. 25 25 max. The status variable must be either numeric or logical. married. age.marital == "Married" > time <. The whole essence of survival analysis is related to “time-to-event”. 'marital' is a factor and so must be converted to one of the formats specified above. there are two options. We will choose the logical format.
> (surv. FALSE=censored and TRUE=event. name 1 age 2 age.Surv(time.11 2. So their current age is used instead. Our data comes from a cross-sectional survey. age)
Note that time for married and unmarried subjects are generated differently.frame(age. mean 27 34. Values must be either 0=censored and 1=event. However. The survival object for marriage can now be created and compared against other variables.
Survival object in R
The survival library contains all the functions necessary to analyse survival type data.
> married <. age. If numeric.marr. we know exactly that the duration of time is their age at marriage. In most epidemiological studies 'time' is usually considered to be duration of follow up and the event is usually occurrence of an unwanted event.> age.marr obs.d.birthyr > label.marr <.85 16 27. we need to create an object of class Surv.data[. For an unmarried person. which combines the information of time and status in a single object. Their survival time stops at the year of marriage. In this dataset we are using age as the time variable and marriage as the event. of observations = 27 Var. or 1=censored and 2=event. such as death or disease recurrence.94 years. "Age at marriage") > summ(. but this is arbitrary. If logical.ifelse(married.94 median 34 27.

event and survival probability over time.] 44 1 26 1 26 1 [5. a 25 year old male.] 29 1 25 1 25 1 [4. subsets of variables sorted by 'time' are displayed by the following command.
Life table
A life table is a tabulation of the survival. Those participants had not married at the time of the workshop. i. age. and the 5th. The event (marriage) had already occurred.marr)[order(time). For further exploration.] 25 1 NA 0 25 0 [2. thus her status = 1.] 34 1 26 1 26 1 ======== subsequent lines omitted ========
The 'Surv' object consists of both 'time' and 'status'. his event is censored. was single. married.marr' are equal to their age at marriage.e. who were all married.1 2 3 4 5 6
age age.] 37 2 26 1 26 1 [7. so this is her time. The classical method for this analysis in the general population has been well developed for centuries. the method involves calculating the cumulative survival probability.] age sex age. In general. the overall life table can be achieved by:
. The second person was a 32 year old woman who had married at age 25.] 32 2 25 1 25 1 [3.] 43 2 26 1 26 1 [6.marr married surv 44 26 TRUE 26 43 26 TRUE 26 45 29 TRUE 29 25 NA FALSE 25+ 37 26 TRUE 26 26 NA FALSE 26+
For the first three subjects.marr married time status [1. the values are equal to their current age.
> cbind(age. the values of 'surv.marr. surv. For our simple dataset. sex. etc. His time is 25 and his status is 0.] 26 2 NA 0 26 0 [8. which is the product of the survival probabilities at each step. The first person. The plus sign indicates that the actual 'time' is beyond those values but were censored. For the 4th and the 6th subjects.

This person is censored (not married) so is included in this row but not in subsequent rows.event survival std.349 0.627 0.380 0.588 37 2 0 0.668 31 8 1 0.196 0.926 0.549 0.894 28 15 3 0. In fact.err lower95CI upper95CI 25 27 2 0. At this time.449 0.622 32 7 0 0.349 0.262 0.117 0. The survival probability (probability of getting married at this age) is calculated as (27-2)/27 = 0.0820 0.4)/24 = 0. The probabilities are therefore unchanged.1048 0.686 0. When multiplying this value with the previous probability in the first row.926.marr) time n.349 0.1080 0. and since the third row says that only 18 persons remained at the next time point. there is one person aged 25 years who is not shown. This computation of cumulative survival probability continues in a similar way until the end of the dataset.0504 0.survfit(surv.1080 0.622 34 5 0 0.399 0. The survival probability for time 26 is therefore (24 .1029 0.1025 0. censor=TRUE) Call: survfit(formula = surv.196 0. there were 27 subjects.588 39 1 0 0.349 0.588
The first row of the output says that at time 25 (when all participants were aged 25 which is everyone).262 0.772 0. The above Kaplan-Meier life table is a slight modification from the classical demographical method where the time interval is fixed (usually at every 5 years of age) and adjustment for incomplete information of exact time of event is taken into account. the cumulative probability is (25/27) x (20/24) = 0. there were no events (n.283 0.950 27 18 2 0. 2 subjects must have been censored.832 1.791 29 11 2 0.196 0.marr) > summary(fit.711 30 9 1 0.1054 0.622 33 6 0 0.262 0.1029 0.526 0.> fit <.1029 0. there were 24 persons remaining who had reached or passed their 26th birthday (27 started.833.risk n.238 0. 33.196 0.117 0. Note that at the time points of 32. 37 and 39 years.000 26 24 4 0. 34. two of whom were married at that time.117 0. 2 events and 1 censored at the end of the 25th year).event = 0).1029 0.1080 0. 4 events took place.772.622 36 4 1 0.0926 0. On the second row.
.

conf. type="s")
If 'xlim=c(25. xlim=c(25.summary(fit.int=F. the curve will be very similar to that produced by the standard command.risk" "n. it would cross the survival curve at the point of the median survival time. xlim=c(25. they can be set be FALSE.0
0.err" "lower" "upper" "call" $class [1] "summary. mark. to produce a stepped line plot.6
0.survfit"
"conf. If a horizontal line were drawn at probability 50%.2
0. If less than half of the subjects have experienced the event then the median survival time is undefined.
> abline(h=.0 26 28 30 32 34 36 38 40
. the two 95% confidence interval lines and the time marks for censored subjects are included in the plot. which is called a survival curve or 'Kaplan-Meier curve'. las=1)
The vertical axis is survival probability and the horizontal axis is time. To suppress them.
> plot(fit. 40)' is added to the command.
> plot(fit.time=F.Kaplan-Meier curve
The summary of a survival object reveals many sub-objects. censor=T) > attributes(km1) $names [1] "surv" "time" "n.int"
We can use this 'km1' object to plot 'time' vs 'surv'.event" "std.5.
> plot(km1$time.4
0. col="red")
1.8
0. km1$surv. lty=2. 38). 40))
When there is only one curve plotted.
> km1 <.

724
With this small sample size. When rho = 0 (by default) the log-rank or Mantel-Haenszel chi-squared test is performed.63 0.0
0.marr ~ sex) N Observed Expected (O-E)^2/E (O-E)^2/V sex=male 9 6 5.8
0.4 male female 0.Age at Marriage
Proportion 1.125 Chisq= 0. which specifies the type of test to use.
. More formal comparison among groups is explained in detail in the next section. the 95% confidence interval lines are omitted.
> survdiff(surv. This compares the expected number of events in each group against the observed values. p= 0.0 0 10 20 Time (years) 30 40
When there are multiple survival curves.37 0.
Statistical comparison among survival curves
Survival curves can be tested for statistical difference with the survdiff command.2
0. indicating that both the males and females in the workshop married at similar rates.125 sex=female 18 10 10. If the level of difference between these two groups is too high. the chi-squared value will be high and the P value will be small indicating that the curves are significantly different. the last one being 'rho'.0746 0. If rho = 1 then the Peto modification of the Gehan-Wilcoxon test (sometimes called the Peto test) is performed. The curves appear very similar. which places more weight on earlier events.6
0.marr ~ sex) Call: survdiff(formula = surv. The survdiff command actually has 5 arguments. the difference can simply be explained by chance alone.0376 0.1 on 1 degrees of freedom.

Exercises________________________________________________
The dataset Compaq contains data from a follow-up study on breast cancer in Europe evaluating whether patients in private hospital ('hospital') had better survival ('year').
Problem 1. Check the distribution of year of deaths and censoring.
Problem 2. Draw Kaplan-Meier curves for each hospital group with censoring marks shown on the curves. Display the numbers at risk at reasonably spaced time intervals.
Problem 3. Test the significance with and without adjustment for other potential confounders: age ('agegr'), stage of disease ('stage') and socio-economic level ('ses').
Chapter 22: Cox Regression

Cox's proportional hazard model
Similar to other types of outcome variables, survival outcomes can be tested for more than one predictor using regression modelling. There are many 'parametric regression' choices for the survival object. Each of them has a specific assumption about the distribution of the survival probability over time (the so-called hazard function). In epidemiological studies, the most popular regression choice for survival analysis is Cox regression, which has no assumption regarding the hazard function. Cox regression focuses on testing for differences of survival probability among groups with adjustment for confounding factors. The only important assumption it adheres to is 'proportional hazards'. Mathematically, the hazard rate h = h(t) is a function of (or depends on), say, n independent covariates X, where X denotes the vector X1, X2, X3, ..., Xn, each of which is Xi, i = 1, 2, 3, ..., n, and t is time. Under the proportional hazards assumption:

h(t, X) = h0(t) e^(Σ βiXi)

The left-hand side of the equation says that the hazard is influenced by time and the covariates. The right-hand side of the equation contains h0(t), which is the baseline hazard function when all the Xi are zero. This baseline hazard function is multiplied by e to the power of the summation of all the covariates weighted by the estimated coefficients, βi. This denotes that the summation of influences of one group over the other is a fixed proportion.

i.
> zap() > library(survival) Loading required package: splines > load("Marryage. To obtain its 95% confidence interval.304 2. We will use the data from the preceding chapter to examine the independent effect of sex on the age of marriage.19 0.e. the conditional probability.Rdata") > use(. or ratio.325 0. the proportional hazards assumption is unlikely to be violated.marr ~ sex) > cox1 =============================== coef exp(coef) se(coef) z p sexfemale -0. is 0. The right-hand side is the exponentiation of the sum of products of estimated coefficients and the covariate vector. If the two curves are parallel.844 0.844 suggesting an overall reduction of 16% hazard rate of females compared to males.522 -0. or proportion of subjects among different groups in getting the hazard. The hazard ratio. between the hazard of the group with exposure of X against the baseline hazard.35 ===============================
Testing the proportional hazards assumption
Graphically.844 1.95 sexfemale 0.data) > cox1 <.
. due to the independent effect of the ith variable. a summary of this 'coxph' object is necessary. Thus eβiXi is the increment of the hazard.170 0.
> summary(cox1) =============================== exp(coef) exp(-coef) lower . the curves of the two sexes can be compared after the vertical axis has been transformed by -log(log(y)) and plotted against log(time). which is now independent of time. is assumed constant. assumed constant over time.Consequently. Whenever there is an event.coxph(surv. or hazard ratio. X) = e∑ h0 (t)
The left-hand side is the proportion.
βi X i h(t. exp(coef). Xi.74
The coefficient is negative and non-significant.95 upper .

marr ~ sex) > plot(fit.0
−0.000883 0. A formal test of the proportional hazards assumption can be carried out as follows:
> cox.41).5 25
−2.zph(model1) -> diag1. fun="cloglog". xlim=c(25. the estimated coefficients. conf.0
−1.
Time trend of the hazard ratio
These attributes can be summarised in a graph by plotting the change of beta. col=c("red".survfit(surv. "blue"))
−2.int=FALSE.00756 0.5
0. This diagnostic result can be further explored. It is difficult to judge from the graph whether the assumption has been violated.5
−1. diag1 rho chisq p sexfemale 0. over time.zph(model1))
This graph should be read along with the previous results earlier in the chapter where the events and the information of sex of the subjects are sorted by time.
> diag1$x # x coordinates for plotting time > diag1$y # y coordinates for plotting beta coefficients > plot(cox.976
The evidence against the proportional hazards assumption is very weak.
.> fit <.0
30
35
40
The two curves cross more than once.

Subsequent points are plotted in the same fashion.831 1. surv. In between.coxph(surv. The probability of getting married for females is lower than for males when they are younger than 26 years or older than 29 years.Beta(t) for sexfemale
−4
−2
0
2
26
27
28 Time
29
30
32
> data. females have a higher probability of getting married.20 0.92. The hazard in 'diag1$y' is 1.95 upper. The duplicate values of beta result in a warning but this is not serious. In the 26th year.] age sex age.99 educ>bachelor 0.marr married surv. the test suggests that this finding can be simply explained by chance alone.frame(age. For multiple covariates the same principle applies.marr ~ sex + educ) > cox2 > summary(cox2) =================================================== exp(coef) exp(-coef) lower. A line is drawn to pass through these betas to illustrate the level of stability of the coefficient over time.975 1. age. there were four events of two males (beta = -3.95 sexfemale 0.19).marr 4 25 male NA FALSE 25+ 15 32 female 25 TRUE 25 22 29 male 25 TRUE 25 1 44 male 26 TRUE 26 2 43 female 26 TRUE 26 ========================================
The first two events occurred in the 25th year where one male and one female got married.marr.
> cox2 <.230 2.278 3.42 ===================================================
240
. However.43 and -2.marr)[order(time).16) and two females (beta = 1. sex.03 0. married.

01604 0. Again.
> zap()
241
.zph(cox2) -> diag2. socio-economic status and age.925 educ>bachelor 0.0321 0. they have a slightly higher chance of getting married than those with a lower education.0246 0. The reverse is true for the remaining times. diag2 rho chisq p sexfemale 0. Finally.
> cox.zph(cox2) -> diag2
> diag2
                  rho   chisq     p
sexfemale      0.0246 0.00885 0.925
educ>bachelor  0.0321 0.01547 0.901
GLOBAL             NA 0.01604 0.992

The test results are separated by each variable. Finally, a global test is performed, showing a non-significant result.

> diag2$x   # x coordinates for plotting time: same as diag1
> diag2$y   # two columns, one for each variable
> plot(cox.zph(cox2), var=1)   # for the first variable of y

The coefficients of sex with adjustment for education were not much changed.

> plot(cox.zph(cox2), var=2)

[Graph: Beta(t) for educ>bachelor plotted against time, 26 to 32 years]

The hazard rate for marriage of persons who had a higher education rises at around 27-29 years. By the late twenties, they have a slightly higher chance of getting married than those with a lower education. The reverse is true for the remaining times. However, these differences are not significant and can be explained by chance alone.

Stratified Cox regression

The above example had very few subjects, and not surprisingly the results were not significant. We now revisit the cancer dataset Compaq, which was used as the exercise at the end of the preceding chapter. The main aim now is to test whether breast cancer patients in private and public hospitals had different survival rates after adjusting for stage, socio-economic status and age.

> zap()
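In a stratified Cox model, the stratifying variable defines a separate baseline hazard for each of its levels instead of contributing an estimated coefficient, so no proportional hazards assumption is made for it. A minimal sketch of the two models compared in the exercises below, an ordinary model ('cox3') and one stratified by stage ('cox4'), might look like the following; the variable names, the event coding and the object names are assumptions for illustration, not the original commands.

# Illustrative sketch only: variable names ('hospital', 'stage', 'ses',
# 'agegr', 'time', 'status') and objects 'cox3', 'cox4' are assumed.
data(Compaq)
use(Compaq)
surv.ca <- Surv(time, status == 1)   # assumed follow-up time and event indicator

# cox3: ordinary Cox model with all covariates estimated
cox3 <- coxph(surv.ca ~ hospital + stage + ses + agegr)
summary(cox3)

# cox4: the same model stratified by stage, so each stage has its own
# baseline hazard and no proportionality is assumed for stage
cox4 <- coxph(surv.ca ~ hospital + strata(stage) + ses + agegr)
summary(cox4)

# check the stability of the hospital effect over time in both models
plot(cox.zph(cox3), var=1)
plot(cox.zph(cox4), var=1)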
Exercises________________________________________________
Problem 1.
Use the command plot(cox.zph) for 'cox3' and 'cox4' to check the change of hazard ratio of private hospital over time. Discuss the pattern of residuals.
Problem 2.
Could the other two variables (socio-economic status and age) be used as stratification factors?

.6 4.7 1.3 4.6 0. however seven and even nine-point scales can also be used. this kind of rating scale is sometimes used in epidemiological studies such as those involving quality of life. These levels are often referred to as a Likert scale. an epidemiologist should have some idea on the elementary methods of analysis of this kind of data.7 0.1 0.5 4. It also detects the items that need to be reversed before the scores of the items are summed or averaged.Chapter 23 Analysing Attitudes Data
The 'Attitudes' dataset
Although a study on attitudes is in the field of social sciences, an epidemiologist should have some idea of the elementary methods of analysis of this kind of data. A questionnaire on attitudes usually contains questions where the respondents specify their level of agreement to a statement. Traditionally a five-point scale is used; however, seven and even nine-point scales can also be used. These levels are often referred to as a Likert scale. Although mostly used in the field of psychometrics, this kind of rating scale is sometimes used in epidemiological studies, such as those involving quality of life. Epicalc offers the tableStack function to display the distribution of the scores of several variables that have the same rating scale. It also detects the items that need to be reversed before the scores of the items are summed or averaged. The Attitudes dataset comes from a survey on attitudes related to services among hospital staff. Its details can be sought from the following commands.
> help(Attitudes)
> data(Attitudes)
> use(Attitudes)
> des()
> summ()
To obtain a compact summary of each questionnaire item simply type:
> tableStack(qa1:qa18)
      qa1 qa2 qa3 qa4 qa5 qa6 qa7 qa8 qa9 qa10
1       0   0  30   0   0  17   0   0   0    1
2       0   2  52   0   3  19   3   5   0    1
3       7  13  25  10   5  58   7  20   4   16
4      54  60  20  90  39  29  68  59  41   74
5      75  61   9  36  89  12  58  52  91   44
count 136 136 136 136 136 135 136 136 136  136

An alternative way to select the items to reverse is to set the 'reverse' argument to TRUE.

> tableStack(qa1:qa18, reverse=TRUE)

The function will compute the correlation between each score of an item against a weighted average score of all the remaining ones. Items that are negatively correlated with this average will be automatically reversed. In the Attitudes dataset, these are items 3, 6, 12, 13, 16 and 17. Reversed items are shown with a cross (x) in the column titled "Reversed", indicating that the scale has been reversed for that item. The statistics for the total and average scores will likely change due to the reversed direction of scale of those items.
tableStack for logical variables and factors
All questions in the Attitudes dataset are integers, making it possible to obtain the statistics for each item as well as those for the total score and grand mean. If the classes of the variables are not numeric, only the frequency counts are shown. Let's explore the Oswego dataset, which contains data on 75 persons under investigation for the cause of acute food poisoning after a dinner party.

> data(Oswego)
> use(Oswego)
> des()
No. of observations = 75
   Variable    Class    Description
1  age         numeric
2  sex         AsIs
3  timesupper  numeric
4  ill         logical
5  onsetdate   AsIs
6  onsettime   numeric
7  bakedham    logical
8  spinach     logical
9  mashedpota  logical
10 cabbagesal  logical
11 jello       logical
12 rolls       logical
13 brownbread  logical
14 milk        logical
15 coffee      logical
16 water       logical
17 cakes       logical
18 vanilla     logical
19 chocolate   logical
20 fruitsalad  logical

Return to the Attitudes data and change all the variables to factors. This is often the case when the choices are labelled during data entry.

> data(Attitudes)
> use(Attitudes)
> scales <- list("strongly agree"=1, "agree"=2, "neutral"=3,
                 "disagree"=4, "strongly disagree"=5)
> for(i in 4:21){
    .data[,i] <- factor(.data[,i])
    levels(.data[,i]) <- scales
  }

The above sequence of commands simply converts the 4th to 21st columns of the data (items 'qa1' : 'qa18') into factors and assigns the values of each item a label corresponding to the elements in 'scales'. These are the levels of the items.

> des()

All the items should now be factors.

> tableStack(qa1:qa18)

Note that the columns are now labelled. However, using the tableStack function with this new data frame will result in the summary statistics not being shown. If summary statistics are desired, one would need to unclass all the variables in the data frame before using the function. If the data frame contains many variables, this would be quite a laborious task. Epicalc has a function to unclass all the variables inside a data frame, namely unclassDataframe, resulting in the variables being converted back to integers.
> unclassDataframe(qa1:qa18)
> des()
> tableStack(qa1:qa18, reverse=TRUE)
Cronbach's alpha
For this attitude survey data, the next step in the analysis is to calculate the reliability coefficient, namely Cronbach's alpha, which is a measure of the internal consistency of the questionnaire survey. Sometimes it is called the reliability coefficient since it reflects the consistency among the items. An analysis of attitude survey data would never be accepted by most social science journals unless Cronbach's alpha has been calculated. In brief, this coefficient reflects the level of correlation among all items of the same scale. If the value of this coefficient is too low (say, less than 0.7), the scale is considered to have rather low internal consistency, and the total or mean score calculated from these inconsistent items may not properly reflect the domain that the questions are trying to measure.
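The standardized coefficient can also be written directly in terms of the number of items and their average inter-item correlation. The short sketch below evaluates that textbook formula with the values reported further down by the alpha function (18 items, average inter-item correlation 0.1461); it is plain R arithmetic, not an Epicalc function.

# standardized Cronbach's alpha: alpha = k*r.bar / (1 + (k-1)*r.bar)
k <- 18           # number of items in the scale
r.bar <- 0.1461   # average inter-item correlation (from the alpha output)
k * r.bar / (1 + (k - 1) * r.bar)   # approximately 0.75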

qa3 x qa4 . qa8 .764488 0.730758 0.674438 0.739174 0.708 standardized value = 0.
The function first computes the covariance matrix among all selected variables. which can be found in most textbooks.722168 0.296922 description I have pride in my job I'm happy to give I feel difficulty in I can improve my A service person must I would change my job Devoting some personal Hard work will improve Smiling leads to trust I feel bad if I cannot A client is not always Experienced clients A client violating the Understanding colleagues Clients like this place Clients who expect our Clients are often self… Clients should be..rest) 0.732773 0.685817 0.744489 r(item.744674 0. qa9 .736795 0.The function alpha from Epicalc calculates Cronbach's alpha.748163 0.556550 0.757130 0.311757 0.720390 0.695617 Std.677646 0.733966 0. qa5 . qa16 x qa17 x qa18 . Alpha 0.392348 0.692937 0. var.007361 0.749653 0.Alpha 0.674703 0. qa12 x qa13 x qa14 .461288 0. The standardized alpha is appropriate when the variables are coded on a different scale. qa11 .691590 0.755929 0.725548 0.057229 0.764704 0.685665 0.415518 0. qa6 x qa7 .682247 0.728148 0. and allows the user to see the effect of removing each item on both the coefficient and the correlation between each item and the remaining ones. This matrix is then used to compute the average of the inter-item correlation.693353 0.obs' unstandardized value = 0.708569 0. qa2 . which is less commonly found in a study on attitudes.686278 0.691061 0. qa15 .563173 0.complete. such as the Attitudes dataset where all items have a scale of 1 to 5.7549 Item(s) reversed and new alpha if the item omitted:
Reversed qa1 .765974 0.
.128318 0.699929 0.303587 0.153067 0.720186 0. The first argument is the vector of variable names (without quotes) or column index of the variables in the data frame. qa10 .329396 0.739252 0.484181 0.088212 0.
> alpha(qa1:qa18. The unstandardized value is suitable when all items share the same value coding.729312 0..467432 0.410533 0.710688 0. The arguments for the function are similar to the tableStack function. Secondly.labels=FALSE) Number of items in the scale = 18 Sample size = 136 Average inter-item correlation = 0. the unstandardized and standardized alpha coefficients are computed based on a formula.282889 0.1461 Cronbach's alpha: cov/cor computed with 'pairwise.

A successful selection of items would be to have a questionnaire with not too many items yet with an acceptably high alpha coefficient. qa15:qa16.
. respectively.71 and the candidate items that could be removed to improve (increase) the alpha coefficients are items 10. the default value being TRUE.
> alpha(c(qa1:qa9. Further analysis could be pursued by successive omission of items. In this dataset that would result in lower alpha values and most likely to incorrect conclusions. qa18))
Further removal of items does not result in any improvement to the alpha coefficients. qa15:qa16. 12. The function also has a 'reverse' argument. Altogether. As indicated by the third section of the results. From the previous output. 11. qa12:qa18))
Both the unstandardized and standardized alpha coefficients have increased. alpha' are the unstandardized and standardized alpha coefficients.
> alpha(c(qa1:qa10. Consider removing item 11.
> alpha(c(qa1:qa10. obtained when each variable is omitted from the computation. 5 items were removed from the original 18 items to arrive at the best model.Finally. the alpha coefficients can be further increased by removing item 12. a table is shown. qa13:qa16. This somewhat tedious task can be automated by using another Epicalc function called alphaBest. 14 and 17. qa13. the unstandardized coefficient is 0. qa13:qa18))
and then item 17. qa18))
and then item 14. since it results in the highest alpha coefficient if it is removed and also has the lowest correlation with all other items. qa18))
and then item 10. qa13.
> alpha(c(qa1:qa10. then the scale of all items are assumed to be measured in the same direction.
> alpha(c(qa1:qa10. If set to FALSE. with items that have been automatically reversed marked with an 'x'. similar to the tableStack command with no 'by' argument given. The columns 'alpha' and 'Std.

which is the 20th. etc. By default. which can be saved back to the default data frame for further hypothesis testing. Similarly.
> tableStack(vars=wanted. 'qa3'. then 'standardized' should be set to TRUE.7620925 $removed qa11 qa12 qa17 qa14 qa10 14 15 20 17 13 $remaining qa1 qa2 qa3 4 5 6
qa4 7
qa5 qa6 qa7 qa8 qa9 qa13 qa15 qa16 qa18 8 9 10 11 12 16 18 19 21
The best Cronbach's alpha is achieved with the index of the items removed and the ones remaining listed. we first removed 'qa11'.7 to 4. If best selection is to be based on the standardized alpha. the remaining variables are 'qa1'.> alphaBest(qa1:qa18) $best.score <. The saved object 'b' contains the mean and total scores. standardized=TRUE)
The results are exactly the same in this case since all items have the same scale.0 using the original (perhaps naïve) method of keeping all items and without investigating the need to reverse items. 'qa17'.test(mean.
> alphaBest(qa1:qa18)$remaining -> wanted
The tableStack function accepts an integer vector for the 'vars' argument. which is the 6th. which is the 14th variable.7348
. the next step is to use the tableStack command on the wanted items saving the results to an R object.score ~ sex) # p-value = 0. then 'qa12'. with necessary reversing. and so on.score pack() des() t. which is the 15th.score total. The vector of 'remaining' items can be saved and further used in the tableStack command described previously. Saving the removed and the remaining items as an index has a very important advantage as shown next.
> alphaBest(qa1:qa18. reverse=TRUE. var. 'qa2'. the function selects the best model based on the unstandardized alpha coefficient. For example. which is the 5th. which the 4th variable.b$mean. To get the best final set of items.score <.
> > > > > mean.labels=FALSE) -> b
Note that now the mean score has increased from 3. The values of these two vectors are the index of the variables in the data frame.b$total.alpha [1] 0.

it is a good idea to explore the variables with tableStack. The total and average scores of the best selected model with items correctly reversed can be saved and ready for further analysis. If the variables are factors. of the item's scale. Psychometrika. 1951. use unclassDataframe to convert them to integers.
. Details of the tableStack command using the 'by' argument are described in Chapter 27 – "Table Stacking for a Manuscript".
References
Cronbach, L. J. 1951. Coefficient alpha and the internal structure of tests. Psychometrika, 16: 297-334.
The function determines the appropriate statistical test to use for all variables. J. when you have a dataset on attitudes. Coefficient alpha and internal structure of tests. var. There is actually no need to save the total or mean scores at this stage. 16: 297–334. initially without any of the items reversed.score). then the Wilcoxon rank sum test is used instead of the ttest. Check Cronbach's alpha using the functions alpha and subsequently alphaBest to get the best subsets of items that maximize alpha. Save the results to an object and put the 'remaining' items as the 'vars' argument to the final tableStack command with 'reverse=TRUE'. If the distribution is not normal. Have a careful look at the comparative distribution of the items and read each question (or variable description) to get an idea of the direction. L. by=sex. The items that should be reversed are usually the ones with the distribution contrary to the remaining majority.An alternative way of displaying results from hypothesis testing for difference between two genders in the items and mean score would be:
> tableStack(vars=c(wanted. mean.
Summary
In summary.

> library(psy)
> data(expsy)
> des(expsy)
> head(expsy)
Determine which of the items (it1 to it10) need to be reversed.
254
. Load this library and examine the expsy dataset. Find the best subset of the items using Cronbach's alpha coefficient.Exercise_________________________________________________
Download and install the psy library from the CRAN website.

Epicalc comes with four functions for sample size calculation. which can be for a case-control study. The third function is used for the comparison of two means. recruiting too many subjects into the study not only causes management and financial problems but also raises ethical concerns. cluster sampling is employed. The second is for comparison of two proportions. a survey with a sample size that is too small will not be able to detect a statistically significant effect if there truly is one. If a conclusion can be drawn from a small sample size. On the other hand. In clinical studies.
Field survey
The aim of a field survey is usually to document the prevalence in the population on a certain condition. cross-sectional study. such as helminthic infection. For many circumstances. and the other for comparing two proportions. consequently the costs involved in collecting data from all subjects would be high. the population size is large. The sample size required depends on the estimated prevalence and the level of errors of prevalence that the researcher can accept. cohort study or randomised controlled trial. The first one is for a prevalence survey. one for comparing two means. or coverage of a health service. The last one is for lot quality assurance sampling.Chapter 24: Sample size calculation
Sample size calculation is very important for an epidemiological study. In addition to these sample size calculations. recruiting more subjects than necessary may pose an unnecessary risk to the group of subjects whose treatment is known to be inferior. The advantage of this sampling method is that it reduces the time and budget for travelling to collect data. For most surveys. such as an immunization programme.
Functions to calculate sample size
Experimenting with functions to calculate sample sizes will enable new R users to understand the principles of arguments more quickly and meaningfully.
255
. there are two functions for computing the power of a comparative study.

This is the population size in which the survey is to be conducted. if p is estimated to be 30% but we still accept that the maximum error can result in 50% prevalence. If delta is not given. For example.0. deff can be large. 1 . popsize = FALSE. 30 and the sample size compensated by sampling more subjects from each selected village. In other words. The slight increase in sample size is more than offset by the large reduction in travelling costs. To have a look at the arguments of this function type:
> args(n. however. it will be ignored and assumed that the population is very large. deff = 1. The cluster sampling technique. subjects selected from the same cluster are usually not 'independent'. By definition. The function n. both of which are invalid.05)
The arguments to this function are as follows: p: The estimated prevalence as a proportion between 0 and 1. In general. Otherwise. any further increase in the population would have a rather little effect on the sample size.5 * min(c(p. Therefore the sample size estimated from a simple random sampling technique must be inflated to cover this 'alikeness among the same cluster' (or 'design effect') problem. and so would the required sample size. This can place a heavy burden on the travelling resources. delta has more influence on the sample size than p.for.for. People in the same villages often tend to be more similar to each other than from people from other villages in terms of disease risk and coverage of service etc.2. for simple random sampling. The default value is therefore quite acceptable for a rather low prevalence (say. A small population size will require a relatively smaller sample size. delta: The difference between the estimated prevalence and the margin of the confidence interval.5 . If the prevalence is in between these values. below 15%) or a rather high prevalence (say. popsize: Finite population size. simple random sampling may require 96 persons from 96 different villages to be surveyed. then half of p (or 1-p) would be too imprecise. say 5000. the lower limit of the confidence interval will be negative or the upper limit will be higher than 100%. When p is small. whichever is the smaller. alpha = 0. above 80%).p)). which is the adjustment factor for cluster sampling as explained above.survey in Epicalc is used for the calculation of the sample size for a survey.
256
. say. delta = 0. Instead. encounters another problem. the default value is set to be a half of either p or 1-p. deff is 1.3 = 0. deff: The design effect. If the value is FALSE. Usually when the size exceeds a certain value.For example. the number of villages can be reduced to. then delta is 0. The user should then give a smaller delta. In cluster sampling with a large cluster size and the level of similarity among subjects in the same cluster is high.survey) function(p. delta should be smaller than p.

05 in a large population. all the default values of the arguments can be accepted. is not given and so set at 1. If a survey is to be conducted with a small (less than 15%) prevalence. The sample size calculated is still relatively applicable even if cluster sampling is employed because of the small prevalence. a delta of 25% is too large. alpha is set at 0. With higher accuracy demands.05) Sample size for survey.05 Confidence limit = 95 % Delta = 0.05.alpha: Probability of a Type I error.5% to 7. The command then becomes:
> n. the value of 'deff' is usually greater than one.for. the required sample size will be increased. In standard situations. in standard 30-cluster sampling for assessment of immunization coverage where the prevalence is estimated to be near 80%. If the estimated prevalence is close to 50%. a 99% confidence limit. 'deff' for cluster sampling is usually close to unity. since it was omitted. If cluster sampling is employed under such a condition. in a large population. If the prevalence is low. The population size in this case is usually large and a 99% confidence limit is required instead of 95%. It would be better to reduce this to +5% or +10% of the prevalence. The population size is assumed to be very large and is thus not used in the calculation of the sample size.5% (from 2.survey(p=. Thus the confidence limit is 95%. Sample size = 292
The function sets the 'alpha' value at 0. then the sample size required is 292. The design effect.025 from the estimate. In this case. the function suggests that if a 95% confidence limit of 5% + 2. The argument 'delta' is automatically set to half of 5% or 0. 'deff' should be around 2. For example.05 and the confidence interval of p + delta is the 95% confidence limit of the prevalence. for example.5%) is desired for an estimated proportion of 0. the suggested calculation would be:
257
. 'deff'. Assumptions: Proportion = 0. In conclusion.025.

p2.05. power = 0.1. The power of a study is the probability of rejecting the null hypothesis when it is false.2p' is written for this purpose. the probability (p1) of getting cured (or improving) among subjects given a new treatment is compared with the probability (p2) of getting cured (or improving) among subjects given the old treatment. It is quite acceptable to have the power level set at 80%. In other words.05. In this situation it is the probability of detecting a statistically significant difference of proportions in the population. The argument alpha is the probability of committing a Type I error. alpha=. the difference in the two samples would be erroneously decided as statistically significant. Scientists usually allow a larger probability for a type II error than for a type I error. The type II error is simply 1-power.8.1 from the estimate.for. Assumptions: Proportion = 0. the average size per cluster would be 212/30 = 7 subjects. it is common practice to set the alpha value at 0. there will be a chance of 'alpha' that the null hypothesis will be rejected. If the two groups actually have the same proportion at the population level (the null hypothesis is true). Rejecting a new treatment that is actually better than the old one may
258
. alpha = 0.for. In a randomised controlled trial. ratio=1)
In a case-control study.8 Confidence limit = 99 % Delta = 0.for. deff=2. As before. Design effect = 2 Sample size = 212
With this total sample size of 212 and 30 clusters. As before. delta =. As the name indicates the function 'n. and is the probability of not rejecting the null hypothesis when it is false. This sample size could be used for a standard survey to assess immunization coverage in developing countries. the proportion (p1) of subjects exposed to a risk factor among the cases (diseased group) is compared against the proportion (p2) of subjects exposed among the controls (non-diseased group). the necessary arguments to this function can be examined as follows:
> args(n.> n. the probability (p1) of getting a disease among the exposed group is compared to the probability (p2) among the non-exposed group.
Comparison of two proportions
In epidemiological studies. In a cohort study. with the sample size from this calculation.01) Sample size for survey.2p) function(p1. which is in fact as large as that in the sample.8. comparison of two proportions is quite common.survey(p =.

The 'ratio' refers to the ratio of the number of subjects in sample 1 to the number of subjects in sample 2. as only p1 and p2 are needed to be input.5. the most efficient sample size (smallest size of total sample that can test the hypothesis) is achieved when the ratio between the two stratified groups is 1:1. say only 10 cases per year. in a cross-sectional study. if the collection of data per subject is fixed.2) Estimation of sample size for testing Ho: p1==p2 Assumptions: alpha power p1 p2 n2/n1 = = = = = 0. comparing two groups of treatment each of 50 subjects is much better than comparing 5 subjects in one group against 95 subjects in the other.for. the sample is non-contrived. The other arguments will be set to the default values automatically.2 1
Estimated required sample size: n1 = 45 n2 = 45 n1 + n2 = 90
The use of this function is not complicated. If the disease is rare. In conclusion. The ratio cannot be set at 1:1 but will totally depend on the setting.05 0. For example. the status of a subject on exposure and outcome is not known from the beginning. and the researcher wanted to complete the study early. For example. the value of the ratio must be specified in the calculation.probably be considered less serious than replacing the old treatment with a new one which is in fact not better. if a risk was determined to be as common as 50% among the diseased group and 20% among the control group. such as when a very rare disease is under investigation. only 45 cases and 45 controls are needed to test the hypothesis of no association. it might be quicker to finish the study with more than one control per case. In addition. he/she may increase the case:control ratio to 1:4
259
.2p(p1=0. For these three types of studies. the minimum sample size required to detect this difference for a case control study can be calculated by:
> n.8 0. Under these conditions where the ratios are not 1:1.5 0. In certain conditions. p2=0.

8 0.5 0.5)/(. in some instances. An increase in power from 0.2/(1-.2p(p1=0. power=0. a ratio of 1 case per 9 controls will reduce the required sample size to 23 cases (4 cases reduced) but increase the number of controls required to 207 (an increase of nearly 100). In other words. p2=0. and the odds ratio is 2. p2 and odds ratio are given.for.5.
Relationship between p1. p2=0. Fixing the ratio at 1:1
> n. Increasing the ratio above this has only a small effect on reduction of number of cases but a remarkably high effect on increasing the number of controls.2.
> . however 58 cases and 58 controls are required (an increase of 29% of the sample size required on both arms). However.2 4
Estimated required sample size: n1 = 27 n2 = 108 n1 + n2 = 135
Note that the ratio is n2/n1.9)
The output is omitted.for. the odds ratio would be the ratio of the two odds of exposure: p1/(1-p1) / {p2/(1-p2)}. For example. p2 and odds ratio in a case control study
To be consistent with the above agreement.2p(p1=0. ratio=4) Estimation of sample size for testing Ho: p1==p2 Assumptions: alpha power p1 p2 n2/n1 = = = = = 0. This study can be finished in less than 3 years instead of originally 4.5/(1-. For example.> n.05 0.8 to 0. the proportion of exposures among the cases (p1) and the required sample size can be calculated as follows:
260
.2.2)) [1] 4
Setting up p1 and p2 for calculation of sample size for a case control study is straightforward. if the proportion of exposures among the population (p2) is equal to 30%. It remains necessary then to find p1.9 also increases the requirement for the sample size considerably.5. there may be a demand to compute the sample size based on proportion of exposed in the general population (which is equal to the proportion among the controls due to the rarity of the disease) and the odds ratio.5 years.

the calculation is fairly straightforward. p2=0. For example.8 0.3 > or <. whether the calculation is based on the success rate or the failure rate. p2=. if treatment A gives a success rate of 90% and treatment B gives a success rate of 80%.4615385 0.2) ===== details omitted ========= n1 = 219 n2 = 219 n1 + n2 = 438
261
.or*odds2 > p1 <.for.4615385 > n.
> n.2p(p1.1.for.9.for.05 0.> p2 <. we may also say that treatment A and B have failure rates of 10% and 20% respectively. The calculation of sample sizes in both cases would yield the same result. [1] 0.p2/(1-p2) > odds1 <.8) ===== details omitted ========= n1 = 219 n2 = 219 n1 + n2 = 438 > n. In other words. the answer is the same.0.odds1/(1+odds1). In fact.3 1
Estimated required sample size: n1 = 153 n2 = 153 n1 + n2 = 306
The required sample size is larger than in the preceding example because the odds ratio to be detected is closer to unity.2 > odds2 <.2p(p1=.2p(p1=0.
Cohort study and randomised controlled trial
Given that p1 and p2 are the respective success rates among the two treatment or exposure groups. the level of difference to be detected is smaller.p2)
p1
Estimation of sample size for testing Ho: p1==p2 Assumptions: alpha power p1 p2 n2/n1 = = = = = 0.

p2=0. This will include 48 exposed and 192 non-exposed persons.8 0.2p(p1=0.2. which must be estimated from the prevalence of the exposure. This required sample size should be checked for adequacy of the other objective.2 0. This sample size for hypothesis testing is different from that for the descriptive purpose (which has been fully discussed above). i.e.Cross-sectional study: testing a hypothesis
A cross-sectional survey serves two purposes. With the prevalence of exposure being 20% the ratio n2:n1 would be 0. the prevalence of exposure might be estimated to be 20%.05. p1 and p2. For example. to describe the prevalence of exposure. which is estimated to be 20%. On the other hand. firstly to document the prevalence of a condition (either a disease or an exposure condition or both).2 = 4. in a survey. Calculation of the sample size for the second component (hypothesis testing) of the cross-sectional study should be based on the n.05 4
Estimated required sample size: n1 = 48 n2 = 192 n1 + n2 = 240
The total sample size for this cross-sectional survey to test the hypothesis is 240 subjects. the proportions. Similar to the cohort study and the randomised controlled trial.for.for.05 0. ratio=4) Estimation of sample size for testing Ho: p1==p2 Assumptions: alpha power p1 p2 n2/n1 = = = = = 0. the probabilities of getting a disease are 20% and 5% among the exposed and the non-exposed population. the value of the 'ratio' is the ratio between the exposed and nonexposed groups.8/0.2p function. and p2 is equal to the proportion of positive outcomes among the non-exposed group.
> n. secondly to test the association between the exposure and the outcome. should be orientated toward the outcome in each exposure group where p1 is equal to the proportion of positive outcomes among the exposed group.
262
.

> args(n.6 in a group of subjects and the expected corresponding standard deviations are 0.2 Confidence limit = 95 % Delta = 0. mu2.for.25. comparison of two means is not as common as that of two proportions. Assumptions: Proportion = 0. There are four compulsory arguments that a user must supply to the function.for. pain scores and quality of life.> n.2means) function(mu1. Thus.2) Sample size for survey.
Note: ______________________________________________________________________ Readers may be aware now that function arguments that include an equals sign followed by a value are optional. Thus the function for this calculation requires a few more arguments. the notation is straightforward. If omitted. there are also a lot of important health outcomes that are measured on a continuous scale. power=0. type the following command:
263
. namely the two means and their corresponding standard deviations. sd2. However. an error is generated. Examples of continuous outcomes include intelligence quotient.8)
Intuitively.2 and 0. however.survey(p=0. The value to the right of the sign is the default value used by the function when the argument is omitted.8 to 0. Sample size = 61
The required sample size of the descriptive study is smaller than that for hypothesis testing.
As an example. the difference of means of which can be of important social concern.
Comparison of two means
In epidemiology. compulsory. suppose a new therapeutic agent is expected to reduce the mean pain score from 0. sd1. ratio=1. To calculate the required sample size. This is mainly because a clinical or public health decision is mainly based on a hard-evidenced dichotomous outcome and less on the level of difference of the mean values. the latter (of 240 subjects) should be adopted.1 from the estimate.05. Two sample means usually have two different standard deviations. alpha=0. Arguments that do not include an equals sign are.

05 power = 0. If the percentage of defectives is estimated to be higher than a certain level. Thus.2means(mu1=0.6 sd1 = 0.
264
. sd1=0.2. mu2=0.2 sd2 = 0. The required sample size for this process is smaller than that for estimation of a prevalence or proportion. the whole lot is shipped to the market. sd1=0. A company takes a sample in order to check whether the lot of product is ready to be shipped. in the process of quality assurance of anti-TB drugs in southern Thailand. The difference between LQAS and other sampling methods is that LQAS does not estimate the exact percentage of defectives.2.
> n. For example.8 mu2 = 0. The LQAS method was employed to calculate the minimal sample size that is still sufficient to test whether the quality is acceptable. In fact. the mathematical formula for the calculation of the sample size does not require the exact values of mu1 and mu2.25) Estimation of sample size for testing Ho: mu1==mu2 Assumptions: alpha = 0.for.25 Estimated required sample size: n1 = 21 n2 = 21 n1 + n2 = 42
This anaesthesiological experiment would require 21 subjects in each group.2. It only checks whether the acceptable level is exceeded. the lot is rejected.> n.8. sd2=0.for. content assays and dissolution tests of the drug are rather expensive. changing the two means will have no effect on the calculated sample size. sd2=0. Thus the same results are obtained from the following command (output omitted). Otherwise.4. Health systems adopt LQAS mainly for surveillance of proportion of problems. the costs of checking can be decreased very considerably if the quality analysis of individual components is high.2means(mu1=0.25)
Lot quality assurance sampling
Lot quality assurance sampling (LQAS) was initially applied to manufacturing processes. mu2=0.6.8 mu1 = 0. If the difference in means and the standard deviations are fixed.

05 262
From this computation. the remaining lot of 10.000 by default.Suppose a highest acceptable proportion of defective specimens is set at 1 percent. The final sample size is 262.for.
265
. The actual proportion (whether it be higher or lower than this acceptable level) is not important. This means that if any of the 262 specimens is defective. then even if all randomly selected specimens were accepted. One of the easiest ways to understand this is to look at the computation results. then the lot is accepted. The maximum defective sample accepted is 0 (again the default). With this sample size. There are a few parameters controlling the sample size here. If the study suggests that the actual proportion is at this level or less. If the threshold is increased. say to 3%.000 will be rejected. you have wasted all those specimens that were tested. the researcher would take a random sample of 262 specimens and examine each one. the required sample size would be reduced (only 87 would be needed). all 10. If alpha is set to a stricter criterion. the proportion of 1% is considered to be exceeded and the lot is rejected. there is a 5% chance that there would be at least one defective specimen among the whole sample of 262. the acceptable proportion of the whole lot would be expected to be exceeded.5%. the threshold for the defective proportion (p) is set at 1%. Otherwise. the whole lot will be rejected.lqas(p=0. it would still not be certain that less than 1% of the whole lot was defective. then even if the percent defective is within the reasonable level.000-262 = 9. The lot size is assumed to be 10. say 1000. the sample size will increase. If the sample size is too big. say 20. This means that if the null hypothesis (the defective percentage is less than 1%) is true. Otherwise. With an optimal sample size.01 0.738 specimens can be marketed.
> n. The threshold proportion for a sample being accepted varies inversely with the sample size. This large sample size is excessive. should any of the randomly selected specimens be defective.01) Lot quality assurance sampling Method Population size Maximum defective sample accepted Probability of defect accepted Alpha Sample size required = = = = = = Normal approximation 10000 0 0. Alpha (the type I error rate) is usually set at 5%. If all of the specimens pass the tests. say 2. If the sample size is too small.

treat <.75 has a rather wide confidence interval.20. 1 d. Note that the power depends on the size of difference to be detected. you may type the following commands:
> > > > table1 <. However.placebo.as. P value = 0.table(table1) cc(cctable=table1) A B Total A 35 20 55 B 70 30 100 Total 105 50 155 OR = 0.treat.658 .5 * odds. . n2=50) alpha = 0.for. this can be any number.354 1.5.417 Fisher's exact test (2-sided) P value = 0.
Power determination for comparison of two proportions
Sometimes a reader may come across a study that reports no significant difference between two groups..placebo <.5 and the failure rate among the placebo group is the same.4 n1 = 105 n2 = 50 power = 0. Consider a trial with 105 subjects on one treatment arm consisting of 35 failures versus 50 subjects on a placebo with 20 failures.474
The odds ratio of 0. n1=105.4082
The sample size used in this study only had a 40% chance of finding a significant difference given that the treatment had an odds ratio of 0. It might be of interest to know the power of the sample size for this particular study if the true odds ratio is in fact 0.2) table1 <.odds.
> > > > > odds. p2=p. To set up this hypothetical data table.751 95% CI = 0.05 p1 = 0.20/30 odds. To obtain statistical significance for a large difference would require a smaller sample size than that for detecting a small difference if the power was kept the same.25 p2 = 0. the larger the required sample size. In theory.70.
266
. One may doubt whether the study had enough power to detect the significant difference if a clinically significant difference existed at the population level.c(35.30) dim(table1) <.treat) power.f.placebo <.treat/(1+odds.The maximum defective sample accepted is set at 0 by default in order to minimize the sample size.placebo p. The study was therefore inconclusive.c(2.2p(p1=p.20/50 p. the larger the number is.treat <.606 Chi-squared = 0.

mu2=100.1 power = 0. What is the power to determine an improvement of 5 units (new IQ = 100) if the parameters in the placebo groups and the standard deviation of the treatment group are not changed? Let group 1 represent the pupils on the placebo and group 2 be the pupils receiving the new treatment.7.7 sd2 = 10.
Power = 0.00
0.10 Ha: mu2 − mu1 = 5
0. the power to detect a difference of 5 points of IQ under these assumptions is approximately 90%.8988
With this relatively large sample size.20
Ho: mu2−mu1=0 0.Power for comparison of two means
Suppose a study reports that in a randomised controlled trial a micro-nutrient is given to 100 pupils and a placebo to another randomly selected 100. sd1=11.15
0. sd2=10.05
−2
0
2 mu2−mu1
4
6
8
267
. n1=100.1.25 mu1 = 95. sd2 = 10.1 and 95 ± 11.2means(mu1=95. n2=100) alpha = 0.8988
0. mu2 = 100 sd1 = 11. n2 = 100
0. By the end of the year.1 n1 = 100. the mean ± standard deviation of the IQ scores in the two respective groups is 98 ± 10. The command to calculate the power is:
> power.7.7.for.05 mu1 = 95 mu2 = 100 n1 = 100 n2 = 100 sd1 = 11.

Exercises________________________________________________
Problem 1.
Calculate the maximum sample size required to estimate the prevalence of respiratory tract infection, with a precision of 5%, in a target population consisting of children aged 1-5 years in a particular region of a developing country.
Problem 2.
A case-control study is carried out to determine the efficacy of a vaccine for the prevention of childhood tuberculosis with a placebo. Assume that 50% of the controls are not vaccinated. If the numbers of cases and controls are equal, what sample size is needed to detect, with 80% power and 5% type I error, an odds ratio of at least 2 in the target population?
Problem 3.
A randomised trial is to be conducted comparing two new treatments aimed at increasing the weights of malnourished children with a control group. The minimal worthwhile benefit is an increase in mean weight of 2.5kg, and the standard deviations of weight changes are believed to be 3.5kg. What are the required sample sizes, assuming that the control group is twice as large as each of the two treatment groups and an 80% power is required for each comparison?
the analyst needs to get acquainted with the dataset and the variables.
Reading in data files
If the dataset is in EpiInfo format (file extension = '. Examples include unbalanced brackets.Chapter 25: Documentation
Data can be analysed interactively as shown in the previous chapters or in a batch mode as shown in this chapter. Loading the necessary libraries for a particular purpose of the analysis.rec'). The user can simply press the up arrow key to retrieve the previous command and make the appropriate corrections. Under Epicalc. or comma separated values (".
269
. Typing and reading commands from the console is the most natural learning process. such as 'library(survival)' for analyzing survival data. the analyst types commands directly into the console and. This is very useful when he/she starts to learn the software for the first time. This learning phase of typing commands one at a time often results in several mistakes. Stata (". if there are no errors. unbalanced quotes and omission of delimiters (such as commas). obtains the output specific to that command. this can be done with the following steps: Starting with clearing the memory zap().dta"). which is mainly carried out interactively. 'library(nlme)' and 'library(MASS)' for multi-level modelling. This phase is often called 'Exploratory data analysis'. These mistakes however are easy to correct.sav"). SPSS (".csv") then it would be convenient to read in the data file with the command use("myFile") from the Epicalc library. The most common type of mistake is syntax error or violation of the rules imposed by the software. At the initial phase of the analysis.
Starting with interactive analysis
In the interactive mode. either syntactically or otherwise.
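Putting these steps together, a typical opening sequence for an interactive session might look like the sketch below; the file name and variable names are placeholders rather than a real dataset.

zap()                        # clear the memory
library(survival)            # load any libraries needed for the planned analysis
use("myFile.csv")            # read in the dataset (Epicalc)
des()                        # class and description of each variable
summ()                       # quick summary statistics of all variables
summ(age)                    # one variable at a time; check min, max and the graph
codebook()                   # explore categorical variables
tab1(sex)                    # one-way tabulation of a categorical variable
savehistory("myAnalysis.r")  # keep the typed commands for later editing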

The saved file should have a ". Commands typed in during the interactive modes often contain mistakes. Instructions for installation are given in Chapter 1. Explore categorical variables using codebook() and tab1(varname). Do not accept this name. Use Windows Explorer to create a new text file. This file stores all the commands that have been typed in. type help. right click and choose 'Open with' then choose Crimson Editor (cedt.
270
. they should be 'cleaned up' using an appropriate text editor. The appropriate command to read in the dataset is 'read.R' or '.rhistory" extension. such as 'Chapter1' or 'HIV' and make sure you include the '. These commands will be used for further analysis. Quickly explore summary statistics of the variables using summ().table' from the R base library. Explore each variable one at a time using the command summ(varname). The next step is to open the saved file with a text editor. Since these commands will be reused in the future.r" or ". Instead choose a name appropriate to the purpose of the file. your computer should open it with either Crimson Editor or Tinn-R. Windows will offer to name the file. say 'New Text Document.start() . Explore the class and description of variables using des(). If your computer's file associations have been successfully set up. The Notepad program that comes with Windows does not have these features and is thus not suitable for working with a long command file. which are both public domain software. If not. Crimson Editor and Tinn-R are recommended for this purpose.txt'. For other data file formats. The current recommended programs are Crimson Editor and Tinn-R. maximum and look at the graph to see a detailed distribution. Pay attention to the minimum.
Crimson Editor
There are many good text editors available for editing a command file.If the data is in another format. Save the commands that have been typed using 'savehistory("filename")'.exe) or Tinn-R (Tinn-R. A good one should be able to show line numbers and matching brackets. Double click this new file. Choose 'packages' and then 'foreign'. By default.r' extension. check whether the first line is a header (variable names) and check the type of the variable separator. Note that 'varname' and 'filename' in the above list should be replaced with the appropriate variable name and file name.exe).

If not. Finally Click 'OK'.
Tinn-R
The advantage of using Tinn-R over Crimson Editor is it's ability to interface or interact with R itself. Note that Crimson Editor can have multiple files opened simultaneously. select 'Customize. under the R menu. Viewing line numbers is strongly recommended. The user needs to activate this by clicking 'Document'. Language specification and key words for the R program will be available for any file opened with Crimson Editor.' at the very bottom of the list. Those who like to use the function keys instead of the mouse can set the 'hotkeys' of R. highlight 'Visual'. Next to 'Lang Spec'. just uncheck them one by one.. You may use this newly created file to customise Crimson Editor menus and file preferences. depending on the nature of the work. This turns green once the file has been saved. the level of complication is not high. Check 'Tool bar'. In the list of Syntax Types. Editing is not too difficult. The authors preference is to set F2 for sending a single line.. then 'Preferences. Check 'Show line numbers'. Click 'Tool's from the menu bar. In the Preferences box. Tinn-R has many other nice features similar to Crimson Editor that make working with R easier and more convenient. 'Syntax types'.The following section is specific for Crimson Editor only. For a command file with only a few simple steps. 'Tool bars/Views'. This can be set under the View menu. Position the cursor in the 'Description' text box and type R. 'Highlight active line' and 'Highlight matching pairs'.
Editing a command file
A command file can be very simple or very complicated. The function key F3 is preserved for Searching (and find again). 'Syntax types' and selecting 'R' from the list. F5 for sending the current whole command file without prior saving and F6 for saving the file and sending as 'source'. If you want to know what work they do.spc'. Any file that has been changed but not yet saved will have a red dot in its MDI File tab. The 'Preference' dialog box will appear with 'Syntax Type' highlighted under the 'File' option. Finally. scroll down until you see the first '-Empty-' position and select it with the mouse. for the line number. type 'R. The editing tasks include the following steps:
271
.key'. F4 for sending the selected block. But the R command file is still not automatically associated with Crimson Editor yet. and for 'Keywords' type 'R. Choose 'View'.. See if R is in the list of known file types. in blocks of lines.'.. 'MDI file tabs' and 'Status bar'. or even the whole command file. Users can type the commands into the Tinn-R editor and send them to the R console line by line. From the menu bar select 'Document'.

The last line 'savehistory("filename. echo=TRUE)'. Switch back to the command file and correct the error then return to the R console and rerun the command 'source("filename.Open the saved history file using either Crimson Editor or Tinn-R. If you use Crimson Editor. However. Remove any duplicate commands. If the block of commands contains an error. The lines that need to be skipped by R. objects not found or files not being able to be opened. you can simply highlight the commands that you want to send to R and using the mouse. Correct the typing mistakes by deleting lines of erroneous commands. If you use Tinn-R. In the above example. you may copy blocks of commands and paste them into the R console. Even when all syntax errors have been removed. followed by saving the file. such as typing mistakes in commands. Check the structure of the commands. Copying and pasting has the advantage of seeing different colours of commands (red) and output (blue) on the R console. n = -1. any mistake or error in the middle of a large block of commands may escape notice.r". In these situations. "?") : syntax error at 3: library(nlme 4: use("Orthodont. However the line number will not be given. click on the “send” icon (or press the hotkey) to perform the operation. Return to the command file and make the appropriate correction. use. It is highly recommended that comments be included throughout the command file to enable other readers to follow easily. then saving and sending commands as source will stop at the line containing the first error.dta")
The report on syntax errors usually includes the line numbers of the (first) error. the error occurs at line 3 (missing closing bracket).
Error in parse(file. there may remain other types of command errors. NULL. such as author's comments or commands that the analyst want to skip for the time being can begin with '#'. For example. the console will show the results in the console up to the error line. Make sure it includes proper order of key commands as suggested above (with zap. etc) .
272
. Correct any lines that have incorrect syntax.r")' should be removed because it is not needed in the batch mode.

Executing the command file at this stage will allow the analyst to check this part of results instantly. For example. 'col' (colour) etc. Any preceding sections could possibly be bypassed without much a problem. Eventually. Graphing however can involve many steps and may require the addition of extra graphical parameters.r")' at the R console is a standard method of making good use of an existing command file. the results coming out on the console can be too much to store in the console buffer or too much to read. To do so.The amount of commands typed into the command file should be optimal. One of R's advantages is in its graphing capabilities. It is often necessary to break or interrupt the command file to see the details or graph at some point.
Executing only a section of a command file
The above method. the line 'tab1(newvar)' may not be necessary and can be subsequently deleted or skipped by placing a '#' before it. ensures that the command file has no syntax errors and the system works well up to the point of 'xxx'. Sometimes. For example. Other parameters such as 'pch' (point character). The method however may add too much time if some of the data file and/or command file are large or the computation process is CPU intensive. the analyst may want to by-pass these preceding sessions to get quick results from the section in the later part of the command file. Save and run the file.
Breaking in the middle of the command file
Since there can be several commands in the command file executed continuously. R does not know what to do and thus stops with an error message. 'xlab' (X-axis label). insert a line where the break is required. commands to create a new categorical variable from a continuous variable and to check the distribution of this new variable (using 'tab1(newvar)') should be kept together. once established. Type a single meaningless word such as 'xxx' on that line. When the command file is executed to this point. a good graph may need several lines of commands to produce it.r")' to run the command file can be easily repeated by pressing the up arrow key and then <Enter>. The command at the R console 'source("filename. It is a good practice to have the newly added command lines containing one set of related actions. Changing the breaking point of 'xxx' from one place to another in the command file followed by saving it and rerunning 'source("filename. A graph created from a command line will also be overwritten by a subsequent graph. Once the new variable is assured. This can be done if and only if the preceding sections are not the requirement of the section needed. can be added in the next round of command editing. The output just before the 'xxx' can be fully explored and any graph that is currently displayed can be saved. It is a good idea to start with a simple graph in a command. 'lty' (line type). a section starting with 'zap()' or 'rm(list=ls())' will erase almost all objects and attachments.
273
.

named "myFile. If the expression in the first (round) bracket is FALSE. To prevent this confusion. issue the command sink(). These blank lines will make the skipped section easily visualised. This will return the route back to the screen. Crimson Editor and Tinn-R have a highlighting facility for matching brackets but the opening and the closing ones sought may be very far apart with several other curly brackets nested inside. The simplest way is to highlight the area of text with the mouse and copy it to the clipboard before pasting to a destination area such as a part of document text.. one simply inserts one line with the command:
if(FALSE){
just before the to-be-bypassed section. not to the screen. The errors are can then be investigated in the output file. An alternative method is to use the sink(file = "myFile. the solution is to type sink() at the console.. Thus to bypass a large section. i. if there are too many lines to do so and all the lines are contiguous.e.txt") can then be placed at the beginning of the command file and 'sink()' placed at the end of the file. all the commands inside the curly brackets {} will be bypassed. If this happens. several blank lines should be inserted before the command line 'if(FALSE){' and after the matching closing bracket. 'sink' should used only when all the commands have been tested to be error free. to stop diverting output to the file.txt") command to divert all the subsequent output text to a file. Since the results are diverted to a file.
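As a small illustration, the bypassed section might look like this (the commands inside the brackets are made-up examples):

if(FALSE){
  # --- exploratory section to skip for now ---
  summ(age)
  tab1(sex)
}   # the closing curly bracket ends the bypassed section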
Saving the output text
There are a number of methods to save the output text. and one line with a closing curly bracket at the end of that section.}' control construct. A complication of using sink arises when there is an error in subsequent commands. See 'help(sink)' for more details about the usage of this function.txt". However. To prevent this. To return to interactive mode.
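A minimal usage pattern for this pair of commands is sketched below; the file names are illustrative only.

sink(file = "myFile.txt")   # divert all subsequent console output to the file
source("myCommands.r")      # run the tested command file
sink()                      # stop diverting; output returns to the screen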
274
. for example no 'xxx' or other garbage allowed. The use of sink may be incorporated into the command file or carried out manually.Bypassing several lines in the command file
If a few lines need to be bypassed but not erased. The main problem with this method is finding and removing the matching curly brackets when the bypass is no longer required and the command file has been unused for a long time. The command sink(file="myFile. the easiest way is too put # in front. one can bypass them using the 'if(){. The whole section contained by the curly brackets will be skipped. the use will not recognize the error and the process will stop in the middle of confusion. Then submit the command file to R with the command source("command file").

jpg") png("filename. it is important that the device is turned off in order to write the graph contents to the file and reroute future graphical output to the screen. R limits the console window to the last 5.jpg". The Bitmap format may not have a sharp picture when the size is enlarged. instead of showing the graph on the screen. the graph can be routed to a file by issuing one of the following graphics device commands:
bmp("filename. The default destination file is "lastsave. Copying a graph to the clipboard and then pasting it to a program such as a word document or a PowerPoint presentation slide is simple.jpg") win. there is a limit to the number of lines that can be saved. postscript or PDF.'. Then copy or save the graph as mentioned above. To save a graph when commands are run from a source file.. When the commands that create the graph are executed.
Note: ______________________________________________________________________ This last method will not save output if the 'clear console' command has been issued. the graph can be saved in various other formats.pdf")
Each of these commands sets up the graphics device and must be followed by a command that creates the actual graph.off()
This rerouting method is useful because the whole process of the command file need not be interrupted in the middle by the method mentioned in the preceding paragraph. This will save all output currently in the console to a text file. such as JPEG. In addition.txt" but this can easily be changed. Therefore use this method only if your output is not very long. Alternatively. The commands below create a summary graph of the variable 'age' from the Outbreak dataset in Epicalc..000 or so lines that can be saved. which requires a final sink() to save and close the file. Alternatively.bmp") jpeg("filename.
dev.
Saving a graph
Routing of a graph to a file is simpler than routing the output text. The graph is routed to a file called "graph1. simply type 'xxx' after the graphing command to halt further execution of commands.
275
.Perhaps the simplest and best method to save the text output is to click 'File' at the menu bar and choose 'Save to File. Choose as a Bitmap or Metafile if the destination software can accept this format.wmf") pdf("filename. Click at the graph window and choose 'File' from the menu bar and 'Copy to the clipboard. A Metafile is slightly smaller in size and has a sharper line. The concept of turning the graphics device off after creating the graph is similar to using the sink command.metafile("filename.

jpg") summ(age) dev.> > > > > >
zap() data(Outbreak) use(Outbreak) jpeg("graph1.off()
The re-routing process can be done either interactively or inside a command file if there are no mistakes inside the graphics commands.
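Any of the other devices listed above can be substituted in the same pattern; for example, a PDF version of the same graph (the file name is arbitrary) would be produced by:
> pdf("graph1.pdf")
> summ(age)
> dev.off()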

Chapter 26: Strategies of Handling Large Datasets

The datasets given in the Epicalc package and used in this book are relatively small, both in the number of records and the number of variables. In real life, a data analyst often faces over 50 variables and several thousand records. The requirements for such analytical processing include a large amount of computing memory, a fast CPU, large hard disk space and efficient data handling strategies. Without these requirements, data analysis may take too long or may not even be possible.

Clearing R memory

R can handle many objects in one session. If the amount of memory is limited, it is a good practice to clear all unnecessary objects from the working environment and detach from all unnecessary data frames. Therefore, it is advisable to start any new project with the zap() command.
> zap()

Simulating a large dataset

Instead of using an existing large dataset, let's create a data frame containing 30,000 records with 161 variables.
> data1 <- rnorm(30000*160)
> dim(data1) <- c(30000, 160)
> data1 <- data.frame(id=1:30000, data1)
The first variable is called 'id' and the remaining 160 variables contain the random numbers; the rnorm function is used to generate these from a standard normal distribution. The naming of the remaining 160 variables can be achieved using two nested for loops and the built-in R constant 'letters', which consists of the lower-case letters of the English alphabet. The outer loop generates the first character of the variable names (a – h). The inner loop then pastes the numbers 1 – 20 onto these letters, separating the letters and numbers with a full stop.
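As an optional check not shown in the text, the memory footprint of this simulated data frame can be inspected with base R, which for 30,000 rows and 161 columns is roughly 37 Mb:
> print(object.size(data1), units = "Mb")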

labels")[1] <.
> des(select=1:20)
Only the first 10 variables.c(namesVar.. To show a subset of the variables in the data frame. To look at the variable descriptions of variables starting with "a. large output will usually scroll off the screen. without having to scroll up and down the screen. making viewing awkward." followed by only one character. paste(i.paste("Variable No."ID number" > for(i in 2:161){ attr(data1.")) } } > names(data1)[2:161] <. Then we move to see the next twenty.> namesVar <. and so forth.. type:
> des(select="a*")
In these case. If one wants to see only the variables names that start with "a". Glancing at about 20 variables at a time will allow users to see the variable descriptions more carefully.?")
278
. j. "var. using the attr function. type:
> des(select="a. there are 20 of them. depending on the speed of your computer. i) } > use(data1)
Describing a subset of variables
After entering commands at the R console.". sep=". specify the select argument in the des function.NULL > for (i in letters[1:8]) { for(j in 1:20){ namesVar <. "var. This process should only take a few seconds. their class and description will be shown.namesVar
Then give a variable description to each variable.
> attr(data1.labels")[i] <.
> des(select=21:40)
.

When you are satisfied that the commands work correctly. then you can apply it to the whole dataset.1' was generated from a standard normal distribution. as can be seen from
> des(.01)
The above command would again keep only 300 of the original number of records. but not the variables. The Epicalc function keepData can be used to select a subset of records from the whole data frame. of observations =300 Variable Class Description 1 id integer ID number 2 a. which has a mean of 0 and is symmetric about this mean. This is done by specifying a number between 0 and 1 for the 'sample' argument.data is just a subset of the original one. 2 ========= lines omitted=========================
which suggests that .data will be changed from having 30. The criteria for keeping records can also be specified using the 'subset' argument:
> keepData(subset=a. When testing R commands it may be better to just keep a subset of records.1 numeric Variable No. This method of selecting subsets of records can be applied to a real data frame.
> des()
The reduction is about a half since the variable 'a.000 to having only 300 records with the same number and description of variables. If one wants to use the original data frame. thus reducing the time involved.data)
Note that the first few lines read:
(subset) No. such as keeping the records of only one sex or a certain age group.1 < 0)
You will see a reduction of the total records.
> keepData(sample=0. simply type
> use(data1)
An alternative to specifying the number of records to randomly keep is to specify a percentage of the original records.
> keepData(sample=300)
The data frame ..
279
.Keeping only a subsample
Working with such a large data set can be time-consuming.

Further analysis can then be carried out more quickly. the first few lines of the file can then be edited to use the full original data frame in the final analysis.20) > des()
Variables from 'a. To exclude the last 10 items of each section. if the size of the data frame is large. and the commands are well organized.20'.1' and 'g. the wildcard feature of Epicalc can be exploited.
280
. the analyst can choose one or more of the above strategies to reduce the size.1' to 'g .
> use(data1) > keepData(exclude = "????") > des()
All the variables with a name of length four characters have been removed.20' have been excluded but note that the number of records remains the same. If all the commands are documented in a file as suggested by the previous chapter.1:g.
> use(data1) > keepData(exclude = a. Return to the original data frame and exclude the variables between 'a. As mentioned before.Data exclusion
The keepData function can also be used to exclude variables.

Chapter 27 Table Stacking for a Manuscript

Readers of this book may wonder why simple statistical tests such as the t-test, chi-squared test and non-parametric tests are rarely mentioned or explained in detail. They are often used in the initial comparison of groups, which is commonly presented as the first table in most epidemiological manuscripts. All of these tests can be produced by one single Epicalc command, tableStack. In chapter 23, this command was used extensively, in parallel with the commands alpha and alphaBest, to display the distribution of each variable; an additional (and also more important) goal there was to compute the mean and total scores with the items correctly reversed where necessary. In this chapter, the same function is used again, but with the 'by' argument included.

Concept of 'tableStack'

Epidemiological and clinical manuscripts often have the objective of testing certain hypotheses in human subjects. These subjects are usually grouped by type of exposure (in a cohort or an interventional study) or by outcome (in a case-control study). This grouping variable is initially analysed against baseline characteristics in the first table of the manuscript and against the variables of hypothesis testing in the second table. The orientation of the tables usually has the group variable as the column and the other variables as the rows. If a row variable is a factor, one can use either the table function from standard R or tabpct from Epicalc, both of which show a cross-tabulation of the variables. This is then subjected to statistical testing using either a chi-squared test or Fisher's exact test, and the results can go directly into the manuscript.
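As a small illustration (a sketch using the Ectopic dataset that appears later in this chapter), the cross-tabulation of a factor row variable against the grouping variable could be obtained with either of the following:
> data(Ectopic)
> use(Ectopic)
> table(hia, outc)
> tabpct(hia, outc)
tableStack, described below, wraps such cross-tabulations together with the appropriate statistical tests in a single call.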

If the row variable is on a continuous scale, the choice of statistics and tests depends on the distribution of the variable. If the data are normally distributed, means and standard deviations are the two commonly displayed statistics; for data with skewed or non-normal distributions, the median and inter-quartile range (25th and 75th percentiles) are often used. For normally distributed data the t-test, for testing between two groups, and one-way anova, for testing more than two groups, are used. For non-normal data, non-parametric tests are favoured, i.e. the Wilcoxon rank sum test for two groups and the Kruskal-Wallis test for more than two groups. In practice, the required table could be obtained with the tapply or aggregate functions in the base and stats packages of R: tapply gives one statistic for each subgroup at a time, whereas aggregate gives multiple statistics of the subgroups. In doing so, however, the analyst has to go through various steps of exploring the distributions, computing different statistics for the subgroups and then copying the results into the manuscript, usually with some time-consuming formatting required. This labourious work is easily accomplished by the Epicalc function tableStack, which creates and stacks several tables, with the appropriate statistics and tests, together into one convenient table.

Example

All datasets with at least one factor variable can be used for trial. Let's start with the dataset Familydata, a small dataset previously explored in chapter 4.
> zap()
> data(Familydata)
> use(Familydata)
> des()
Anthropometric and financial data of a hypothetical family
No. of observations = 11
  Variable Class     Description
1 code     character
2 age      integer   Age(yr)
3 ht       integer   Ht(cm.)
4 wt       integer   Wt(kg.)
5 money    integer   Pocket money(B.)
6 sex      factor
The data contains only one factor variable, 'sex'. Now we create a summary table of all variables by each level of sex, in a nice stacked format, with the appropriate statistical test shown.

P value t (9 df): t = 0.5) Rank sum: W = 3 51(50. with 9 degrees of freedom and non-significant P value.
> tableStack(age:money.68) t (9 df): t = 1. The test statistic is small (t=0. by=sex)
The output table consists of four variables. p-value = 0.4(656.047
Pocket money(B.6) 0. pocket money was determined to be normally distributed and a t-test was carried out with a non-significant result.8683.3) M 50.) median(IQR) 42.08262
Moreover.8(26.5(166. so the median is shown instead of the mean.01542
283
.> tableStack(vars=2:5. p-value = 0.5 0. Age is determined to be normally distributed.) median(IQR) Wt(kg.5 155(150.) mean(SD) 586.test(money ~ sex) Bartlett test of homogeneity of variances data: money by sex Bartlett's K-squared = 5. Finally.170.5(61.5.627
Rank sum: W = 0.1)
0.54) 65.159) 168.014 Test stat.218
The numeric argument of 'vars' can also be replaced with the variable names. Height and weight are significantly different among males and females.1)
0. Note that for such a small sample size our conclusions are not so sound.9(24. df = 1.8722.5). which come from the second to the fifth (vars=2:5) in the dataset. Both variables are determined to have non-normal distributions. by=sex)
F Age(yr) mean(SD) Ht(cm.test(lm(money ~ sex)$residuals) Shapiro-Wilk normality test data: lm(money ~ sex)$residuals W = 0.33 1787.5(2326. the assumption of equal variance of residuals can be checked with
> bartlett. One can check the assumption for normality of residuals of 'money' by typing
> shapiro. thus a t-test is conducted to test for a difference between the mean ages of males and females.5. The inter-quartile range (IQR) is shown instead of the standard deviation (SD) and the Wilcoxon rank sum test is conducted instead of the t-test.

284
. by=sex. Chi(2) = 78.05.01.labels' is FALSE.5) 46(19. it is useful for data exploration. While this is not ready to 'copy & paste' to the manuscript. by=sex.labels=FALSE)
EP 3 : hia never IA 61(25. The latest command has a P value of 0.6) 131(54.2) 85(35. the variable index and variable name is displayed instead of the variable label. An abnormal category for the row variables.6) 158(65. One can try with other variables in this dataset to get familiar with the reasons for choosing parametric and non-parametric tests. name. may indicate a need to recode that variable before the final version can be used. test=FALSE) > tableStack(age:money.1) 13(5. For example:
> tableStack(age:money. by=outc. not P>0.5) 87(36. such as wrong levels of labeling.Epicalc has preset the significance level for the Shapiro-Wilk and Bartlett tests to switch the results from using the t-test to using the Wilcoxon rank sum test at P>0. of observations = 723 Variable Class 1 id integer 2 outc factor 3 hia factor 4 gravi factor > table(outc) outc EP IA Deli 241 241 241 > tableStack(hia:gravi.5) 182(75.72 P value < 0.4) 121(50. the name of the test. Users can also specify different output features.1) 37(15.3) ever IA 180(74. or rows with too small numbers.test=FALSE) > tableStack(age:money. var. iqr=c(age. not enough to activate this switching.001
Description Outcome Previous induced abortion Gravidity
Note that when 'var.3) 35(14.001 110(45. money))
More examples
The 'by' argument in the tableStack function can also have more than 2 levels. and the variables to apply non-parametric statistical tests to.
> data(Ectopic) > use(Ectopic) > des() No.4) < 0.7) 4 : gravi 1-2 3-4 >4 IA Deli Test stat.18 117(48.015. by=sex. such as not showing statistical test results.4) 83(34.4) Chi(4) = 46.

4) Gravidity 1-2 117(48.3) 110(45. percent="row")
Note that 'percent' should be set to "row" if we want to compare the percentage of the outcome variable which has been designated to the column variable. package="MASS") use(Cars93) des() tableStack(vars=4:25.20.7) Max.5(11.23) 22(19.1) 85(35.7%.5) 3-4 87(36. test=FALSE.9. The above two data frames have row variables consisting of only one type.6) 20(44.1) >4 37(15.8) Driver only 23(47.6.3(13. thus the prevalence is 74.48 7(15. the percentage of having 1-2 previous pregnancies is highest in the 'Deli' group and the difference of gravidity is also highly significant.4) 83(34. such as setting the output of hypothesis testing to FALSE or showing column percentages. a cross-tabulation for that variable against the 'by' variable is displayed.5) 121(50.1(11.22.001).672 Test stat.
> tableStack(hia:gravi. by=outc.4) 35(14.3)
0. either continuous variables or factors. test=FALSE) EP IA Deli Previous induced abortion never IA 61(25.
> > > > data(Cars93. The table indicates that there are 241 records of EP (women with an ectopic pregnancy).30) 30(25.6) ever IA 180(74.24.3) 46(19.812
USA Min.28. by=outc.Price median(IQR) MPG.4) > tableStack(hia:gravi.489
0.6) 158(65. Let's try with one with a mixture of both.5.7) 19.26.7) 131(54.33) Chi(2) = 0.city median(IQR) MPG.When the row variable is a factor.4) 18(40)
0.4.786
285
.191
AirBags Driver & Passenger 9(18.5) 21. 180 had a previous history of induced abortion. These default settings of the arguments can always be overruled. For 'gravi'.5) 13(5. by=Origin)
non-USA 16. Association of the outcome (column variable) and more than one row variable suggests potential confounding problems that require further analysis.9) None 16(33. Out of these.4(15.4)
Price median(IQR) 16.2) 182(75. P value Rank sum test 0.037
0.5) Rank sum test 20(18.3(9.26) Rank sum test 28(26. This is much higher than the corresponding IA group (54%) as well as in the delivery group (34%).9) Rank sum test 0.1.7(12.highway median(IQR)
Rank sum test 18. The chi-squared test is highly significant (P < 0.19.Price median(IQR) 14.

7.6) Fisher's test 0(0) 22(45. There are four factor variables. The other continuous variables are all tested with a t-test.4) 1(2.919
0.Price median(IQR) MPG.25.8.5) 0(0) 3(6.6)
================== remaining lines omitted =================
286
.7) 7.Price median(IQR) Price median(IQR) Max.7(12.3(13.19.column=T. by=Origin. rate of fuel consumption and power. are either non-normally distributed or have a highly different variance between the two origins of cars.5.4) 18(40)
16(17.26.
> tableStack(vars=4:25.011
================== remaining lines omitted =================
Some of the variables.23.26)
21(18.4) 11(24.column=TRUE)
In this case.3) 7(15.6.23)
22(19.3)
18.1(11.8) 5(11.
> tableStack(vars=4:25.2)
0. number of cylinders (Cylinders) violates the assumptions of the chi-squared test.3(9.highway median(IQR) AirBags Driver & Passenger Driver only None 14.8) 9(18.8) 0(0) 20(41.7(10. thus were tested with the non-parametric rank sum test.2.9) 16(33. type of drive train (DriveTrain) and availability of manual transmission (Man.24.
Colum of total
If required.1) 33(73.22.3)
7(15.2) 43(46.25)
28(26.2) 34(36.6) 20(44. total.3)
16.5) 9. test=F)
USA Min. an additional column of the total can be shown.30)
30(25.7) 19.6(14.trans. such as those related to price.3)
20(18. and so Fisher's exact test was used.9) 4.20.17 5(10.2) 1(2.4.1.avail) were tested with a chi-squared test. The two-sided P-value is very small indicating that pattern of cylinders between cars of US and non-US origin is significantly different.city median(IQR) MPG.28.33)
28(26. by=Origin.4(15.7) 6(12.8) 23(47.DriveTrain 4WD Front Rear Cylinders 3 4 5 6 8 rotary
Chi(2) = 0.7(12. On the other hand. total. omitting the test may look better.31)
9(18.4) non-USA Total
16.5(11.5) 21.4) 34(70. Location of airbags (AirBags).7) 27(60) 2(4.20.9.

file="table1.5) Poor 248(23. etc.8) 285(26. by="junk")
Exporting 'tableStack' and other tables into a manuscript
R has a useful function to write a matrix. Then copy and paste the output table to your manuscript document.8) 240(22. by=Origin. by="none") Total stage Stage 1 530(49.tableStack(vars=4:25. the table can easily be copied into the manuscript.6)
ses Rich 279(26.
> > > > data(Compaq) use(Compaq) des() tableStack(vars=4:6.In some occasions.6) Stage 4 63(5. the first table may be a description of information of the subjects on staging.csv(table1.
287
. which should contain the file "table1. age group.9) Age group <40 40-49 50-59 60+
296(27. sex.3)
In fact. table or data frame into a comma separated variable (csv) file that is readable by Excel.csv") > getwd()
The last command shows the current working directory. only the total column is worth displaying.
> table1 <. For example.8) 243(22.csv".7) Stage 3 81(7. After being read into Excel.8) Stage 2 390(36. in the Compaq dataset.
> tableStack(vars=4:6. Go to that directory and open the file in Excel to see the results.2) High-middle 383(36) Poor-middle 154(14. data=Cars93) > write. the "none" string can be replaced with any quoted value with the same results.

No need for education for interpretation. the power of discrimination is lost. the width of each box is determined by its sample size but not in linear proportion. Even a small difference can be noticed if the sample size is not large. 'dotplot' is more friendly when the sample size is large
Dotplot
292
.
Missing values Dotchart Dotplot Boxplot Missing values are placed as empty space on the top of each stratum Missing values are not shown.
Suitability related to sample size and number of strata Dotchart Most suitable when the sample size is not too large e. Viewers must be educated to give proper interpretation. Large number of strata can be a problem. Information on relative frequency is best conveyed by this graph. However. Similar problem with 'summ(var)' on the issue of stratification. Since adjacent values are often forced into the same bin. Flat or slow rising indicates low frequency whereas sharp or steep rising indicates high frequency. The length of the box is counter-intuitive. especially when the sample sizes among strata are grossly imbalanced. a short part means high density and a long part means low density. Thickness of strata determined by the height of the most frequent bin.Power to discriminate different values Dotchart Dotplot Boxplot Discrimination power is high. as indicated in the command.g. therefore. it can be visually distorted.
Dotplot Boxplot
Information on sample size in each stratum Dotchart Dotplot Boxplot Thickness of strata determined by the sample size. Poor discrimination power as most of the dots disappear in the box. < 200. Since the box is divided into two parts with more or less the same amount of data.
Perception for frequency distribution of the values Dotchart Empty space in the graph promptly conveys the information that there is no data in the area. When 'varwidth=TRUE'. Many people do not have this knowledge to interpret the result. Missing values are not shown.

The 'onset' that had been changed was the free vector created by the command
> onset[!case] <. check again:
> addmargins(table(. From the command
> onset[!case] <. both 'onset' and 'case' were those in the second position of the search path 'search()'.data in the next chapter would give no problem. One might think that these foods would have been contaminated. pch=18.data. The vectors in . 2. eclair.> title(ylab="Subject sorted by bed time") > legend("topleft". the variable 'time.
Chapter 8
Both 'beefcurry' and 'saltegg' have significant attributable risk and risk ratio.data$onset. does not have this problem.data$onset. "arrival time"). legend=c("Bed time". Using this variable in the .data$case))
By this method. This is discussed in the next chapter.47. As seen from
> addmargins(table(."red". the free vector 'onset' will be removed. water) # OR =1. the recode command in Epicalc should be used.14. In fact. . !case."black"). To get a permanent effect. "woke up time".data$case))
Three non-cases had reported onset time. However. .data and in 'search()[2]' which is not changed.onset'. In this command. These two copies are then different from the free vector which was created/modified by the command.eat.NA
itself.85 > table(case.data and in 'search()[2]' would also be automatically synchronised to the new value. NA)
Then.
> recode(onset. water)
294
. 95%CI = 0. col=c("blue". bg="cornsilk")
Chapter 7
No. which was an attached copy of . The first and the second one in . a POSIXt object.NA
there would be three copies of 'onset'.
Chapter 9
> cc(case. the increase in risk from consumption of these is due to confounding.

771 0.Date("2001-03-12") .
Chapter 12
> > > > > > > > > > zap() data(BP) use(BP) age. When the SO2 concentration in the air is doubled. log2(deaths)) > abline(lm5)
From the regression coefficient and the graph. the number of deaths will increase by x 0. Error t value Pr(>|t|) (Intercept) 48. the loge(SO2) also increases by 0.374 times.as. dbp).as.max(sbp)). For every unit increment of log2(SO2).45843.ylab="blood pressure") n <.9647 9. Given x is a positive number. The modelling for outcome variable that is discrete counting number can be more appropriately dealt with Poisson regression in chapter 19.The coefficients of log(SO2) from lm4 and of log2(SO2) from lm5 are the same: 0.32e-06 sexfemale 7. col=unclass(sex)) title(main="Systolic and diastolic blood pressure of the subjects") > summary(lm(dbp ~ sex + age)) ======================= Coefficients: Estimate Std.ylim=c(0.in. y=c(sbp.days <.9412 0.4928 5.14e-06 =======================
After adjusting for age.0797 age 0. This coefficient is thus independent of the base of logarithm. the number of deaths will increase by 2 0.2243 4.45843 or 1.158 1.birthdate age <.458 units.0798 1.458 units.192 1.45843 times. the difference between sexes is not statistically significant.pch=" ".
Chapter 13
All the conclusions are independent of the base for logarithm and must be the same. for every increment of SO2 by x times.25 sortBy(sbp) plot(sbp.
296
.length(sbp) segments(x=n.in.numeric(age. Similarly. for every unit increment of loge(SO2).1813 5. the log2(deaths) increases by 0.
> plot(log2(SO2).days)/365. This means that the relationship between these two variables is on the power scale.

final) > poisgof(model. as the number exceeds three. The odds almost doubles if the woman had two children and almost triples for women with three living children.
> ordinal.step(glm(respdeath ~ agegr + period + arsenic1 + start. A one year increment of age is associated with about a 3 percent reduction of odds of use.display(model.glmmPQL(user ~ urban + age_mean + living. increasing the number of living children does not have a linear dose-response relationship with use. data=.bang1)$fixed)
Note that urban women have two times the odds of using contraceptives compared to rural women.final) > idr.bang1 <. random = ~ age_mean | district.data) > logLik(model.
Chapter 19
> > > > data(Montana) use(Montana) arsenic1 <.or. However. family=poisson. data=.bang2 <.glmmPQL(user ~ urban+ age_mean+ living.children. binomial.
Chapter 20
Problem 1
> model.ord)
In conclusion.bang1)
To compute the 95% confidence interval of odds ratios
> exp(intervals(model. offset=log(personyrs). random=~1 | district.display(model.data) > summary(model. the odds of use does not further increase.children.The AIC = 189. data = .bang1) # -4244.final <.312 (df=8)
304
. Problem 3
> model.data)) > summary(model. Problem 2 From the last output. family=binomial.arsenic != "<1 year" model. which is better (lower) than the polytomous model. Moreover. workers who started to work from 1925 had significantly lower risk than those who had started earlier.037.final)
Note that using 'arsenic1' in the model is better than using 'arsenic' suggesting no evidence of a dose-response relationship. both drugs and being male have significant reduction on pain.

The alternating clustering of deaths and censoring would not be detected if the exploratory analysis was not done carefully.ca ~ ~ ~ ~ hospital) hospital + strata(stage)) hospital + strata(agegr)) hospital + strata(ses))
The difference of survival between patients from the two types of the hospitals is highly significant despite the adjustments.6 0. Multivariate adjustment using Cox regression is presented in chapter 22. Note that adjustment can only be done one variable at a time using this approach.Note that deaths are uniformly distributed in the first five years where there were only two censored observations. There is one patient who survived 15. Problem 2
> surv. "blue"). Problem 3
> > > > survdiff(surv.text = levels(hospital).2 0.ca ~ hospital). main="Breast Cancer Survival")
[Kaplan-Meier plot omitted: "Breast Cancer Survival", survival probability (0 to 1) against time 0–15 years, with separate curves for public hospital and private hospital patients]
Note the very dense censoring immediately after the 5th and the 10th years.ca survdiff(surv. On the other hand.ca survdiff(surv. legend.4 0. there was a lot censoring between the 5th and the 6th years where there were very few deaths.ca <.Surv(year. col = c("red".

var = 1)
Beta(t) for hospitalPrivate hospital
-4
-2
0
2
4
6
0. Problem 2
> plot(cox. Unfortunately.99
1. we could not further investigate this finding.ca ~ hospital + stage model5 > cox.
307
.00494
Models based on stratification by socio-economic status and by age still violate the proportional hazard assumption. A notable feature of the graph is that there are two clusters of residuals.Chapter 22
Problem 1
> coxph(surv.4
4.3
7
9.6
3.00802 + ses + strata(agegr)) -> value = 0.8
2. Some extreme positive values are sparsely found at the top of the plot whereas the majority lie in another cluster within 0 to -3 units of beta. This may suggest that the data actually came from more than one group of patients.2
Time
The hazard ratio looks relative stable and slightly on the negative side for most of the time period.zph(model6) # Global test p + strata(ses) + agegr) -> value = 0.24
0.zph(model4).ca ~ hospital + stage model6 > cox.zph(model5) # Global test p > coxph(surv.

titleString unclassDataframe use zap
Replace commonly used words in Epicalc graph title Unclass factor(s) in the default data frame Quick command to read in data and attach Remove objects and detach all data frames
Epicalc Datasets
ANCdata ANCtable Attitudes BP Bang Compaq DHF99 Decay Ectopic Familydata HW93 Hakimi Marryage Montana Oswego Outbreak Planning SO2 Sleep3 Suwit Timing VC1to1. 1988 Dataset on cancer survival Dataset for exercise on predictors for mosquito larva infestation Dataset on tooth decay and mutan streptococci Dataset of a case-control study looking at history of abortion as a risk factor for ectopic pregnancy Dataset of a hypothetical family Dataset from a study on hookworm prevalence and intensity Dataset on effect of training personnel on neonatal mortality Dataset on age at marriage Dataset on arsenic exposure and respiratory deaths Dataset from an outbreak of food poisoning in the US Dataset from an outbreak of food poisoning on a sportsday. VC1to6 Dataset on effect of new antenatal care method on mortality Dataset on effect of new ANC method on mortality (as a table) Dataset from an attitude survey among hospital staff Dataset on blood pressure and determinants Dataset from a Bangladesh fertility survey. Thailand 1990. waking up and arrival at a workshop Datasets on a matched case-control study of esophageal cancer
313
. labelling and recoding Dataset on air pollution and deaths in UK Dataset on sleepiness in a workshop Hookworm infection and blood loss: SEAJTM 1970 Dataset on bed time. Dataset for practicing cleaning.

Epicalc. Epicalc. Hat Yai. especially in the developing countries.cran. is the result of a collaborative effort involving contributions from all over the world. written by Virasakdi Chongsuvivatwong of Prince of Songkla University. Epicalc has been welcomed by students and users alike. Equally. it helps young epidemiologists to learn the key terms and concepts based on numerical and graphical results of the analysis. The increasing complexity of research projects and associated analytical requirements led to the development of R in the late 1990s. The Special Programme for Research and Training in Tropical Diseases (TDR) sponsored by UNICEF/UNDP/World Bank/WHO has supported the preparation of an R add-on package. and is highly extensible. On one hand. On the other hand. has been well accepted by members of the R core-team and the package is downloadable from CRAN (Comprehensive R Archive Network) <http://www. R provides a wide variety of statistical and graphical techniques.org> which is mirrored by 69 academic institutes in 29 countries. it assists data analysts in data exploration and management. 2007
314
. an open-source statistical software initially written by Robert Gentleman and Ross Ihaka of the Statistics Department of the University of Auckland. where the need for computer software and the cost of some software applications has often been at odds.About Epicalc
Open source and free software has been a mainstay for researchers. Thailand. to enable R to more easily deal with epidemiological data. Steven Wayling Research Training Special Programme for Research and Training in Tropical Diseases (TDR) World Health Organization October.r-project. The current version of R.