Introduction to simglm

The simglm package aims to define a consistent framework for simulating regression models - including single level and multilevel models. This will hopefully allow the user to quickly simulate data for a class, project, or even a dissertation.

Installation

Development currently happens on GitHub. To install the package, use the devtools package:
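A minimal install sketch (assuming the package lives at the lebebr01/simglm GitHub repository):

```r
# install.packages("devtools")  # if devtools is not already installed
devtools::install_github("lebebr01/simglm")
library(simglm)
```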

This should load the devtools package, install the simglm package, and finally load the simglm package. The package has not yet been tested on a Mac, but I do not anticipate any installation problems there.

Simulate Data

The master function that handles the simulation grunt work is the sim_reg function. As always, you can do ?sim_reg to pull up the help file for the function.

Single Level

Let's look at a simple single level regression example to get started:
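A sketch of such a call is below. The variable names (act, diff, numCourse) and parameter values are purely illustrative, and the exact argument structure of cov_param may differ across simglm versions; this assumes the version where cov_param takes mean, sd, and var_type:

```r
library(simglm)

fixed <- ~ 1 + act + diff + numCourse + act:numCourse  # one sided formula, intercept shown explicitly
fixed_param <- c(2, 4, 1, 3.5, 2)                      # one regression weight per term
cov_param <- list(mean = c(0, 0, 0), sd = c(4, 3, 3),  # covariates only: no intercept, no interaction
                  var_type = c("single", "single", "single"))
n <- 150
error_var <- 3
with_err_gen <- "rnorm"

temp_single <- sim_reg(fixed = fixed, fixed_param = fixed_param,
                       cov_param = cov_param, n = n, error_var = error_var,
                       with_err_gen = with_err_gen, data_str = "single")
```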

A few things to highlight about the above syntax. First, the object fixed is a one sided formula giving the names of the variables to be included in the simulated data. The intercept is shown directly in the formula here, but can also be omitted (similar to linear models in R); I like to include the 1 as a reminder that I do in fact want an intercept. The next object, fixed_param, contains the regression weights for the fixed effects; it must be the same length as the number of terms in fixed (or one larger if the 1 is not explicitly stated in the fixed object). Next, cov_param represents the mean, standard deviation, and type of each variable from the fixed object (the type must be "single" for single level regression). The cov_param object must contain all variables except the intercept and any interactions.

The rest of the arguments are straightforward: n is the sample size, error_var is the error variance, and with_err_gen is the distribution of the residuals. Finally, in the function call itself, data_str must be "single" in this instance to reflect a single level regression.

Finally, looking at the output from the above call:

```r
head(temp_single)
```

As can be seen from the data, the first 5 columns represent the raw data used in the simulation. The column labeled "Fbeta" is the matrix multiplication of the design matrix (the first 5 columns in this case) by the fixed_param object above (the desired values for the fixed effects). The "err" column contains the simulated errors, the column labeled "sim_data" is the simulated response (taking "Fbeta" + "err"), and lastly there is an ID variable reflecting the individuals.

You could then use simple regression on these data to see how the simulation went:
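For instance, fitting the generating model back to the simulated data (a sketch, using the illustrative variable names from the single level example; the column sim_data holds the simulated response):

```r
# estimates should land near the fixed_param values used in the simulation
summary(lm(sim_data ~ 1 + act + diff + numCourse + act:numCourse,
           data = temp_single))
```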

Adding Factor/Categorical/Ordinal Variables

To add a factor, categorical, or ordinal variable, just append one of the following to the end of the variable name in the fixed object: ".f", ".c", ".o", "_f", "_c", or "_o". These indicate the variable is a discrete variable of some sort. By default, all variables with a ".f" or ".c" will be treated as factors and expanded using dummy coding. If ".o" is used in the variable names, the variable will be discrete but treated as continuous in the simulation (i.e. only a single fixed effect variable). See below for an example.

The next thing to add is a new object called fact_vars. This object must be a list that contains numlevels and var_type. Other optional features will be added in the future to increase functionality - such as including value labels, user specified probabilities, etc. Once these are passed into the sim_reg() function, the simulated data now looks like the following.
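A sketch of such a call might be (the number of levels and all parameter values here are made up for illustration; with the ".o" suffix the variable contributes only a single fixed effect):

```r
fixed <- ~ 1 + act.o + diff                   # act.o is discrete but treated as continuous
fixed_param <- c(0.8, 1, 0.5)
cov_param <- list(mean = 0, sd = 2,           # covers diff only; act.o is handled by fact_vars
                  var_type = "single")
fact_vars <- list(numlevels = 5, var_type = "single")

temp_single_o <- sim_reg(fixed = fixed, fixed_param = fixed_param,
                         cov_param = cov_param, fact_vars = fact_vars,
                         n = 150, error_var = 3, with_err_gen = "rnorm",
                         data_str = "single")
```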

```r
head(temp_single_o)
```

Correlated predictor variables

Correlated predictor variables are an easy addition. The additional argument cor_vars takes a vector of correlations between the predictor variables (note this does not include the intercept, factor variables, or any interaction variables). These correlations are combined with the standard deviations specified in the cov_param argument to form a covariance matrix. Then, through a Cholesky decomposition, the correlations between the variables are induced prior to simulating the response variable. The default is no correlation between the covariates. Below is an example.
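A sketch with three correlated covariates (the values are illustrative; I am assuming cor_vars is ordered by covariate pairs as listed in the comment, which may differ by simglm version):

```r
fixed <- ~ 1 + act + diff + numCourse
fixed_param <- c(2, 4, 1, 3.5)
cov_param <- list(mean = c(0, 0, 0), sd = c(4, 3, 3),
                  var_type = rep("single", 3))
cor_vars <- c(0.3, 0.4, 0.1)   # assumed order: act-diff, act-numCourse, diff-numCourse

temp_single <- sim_reg(fixed = fixed, fixed_param = fixed_param,
                       cov_param = cov_param, cor_vars = cor_vars,
                       n = 150, error_var = 3, with_err_gen = "rnorm",
                       data_str = "single")

# check that the generated covariates roughly recover the requested correlations
cor(temp_single[, c("act", "diff", "numCourse")])
```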

Editing distributions for covariates

The distribution of covariates can also be changed to any R distribution function (e.g. rnorm or rt). These are called via the cov_param argument using dist_fun. Any additional parameters besides sample size are called via the optional opts argument within cov_param. The distributions need not be the same for each covariate. Below is an example generating two covariates using two different distributions and correlating them.
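A sketch generating one covariate from rt and one from rgamma, correlated at 0.3 (distribution parameters and all other values are invented for illustration; opts passes the extra arguments each distribution function needs beyond the sample size):

```r
fixed <- ~ 1 + act + diff
fixed_param <- c(2, 1, 0.5)
cov_param <- list(dist_fun = c("rt", "rgamma"),
                  var_type = c("single", "single"),
                  opts = list(list(df = 5),      # extra argument for rt
                              list(shape = 2)))  # extra argument for rgamma
cor_vars <- 0.3

temp_single_o <- sim_reg(fixed = fixed, fixed_param = fixed_param,
                         cov_param = cov_param, cor_vars = cor_vars,
                         n = 1000, error_var = 3, with_err_gen = "rnorm",
                         data_str = "single")
```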

Skewness and kurtosis can be used to check that these distributions were actually generated as desired. For example, the skewness and kurtosis of the rt covariate can be computed with e1071::skewness(temp_single_o$act) and e1071::kurtosis(temp_single_o$act), and similarly for the rgamma covariate with e1071::skewness(temp_single_o$diff) and e1071::kurtosis(temp_single_o$diff).

Heterogeneity

It is also possible to build heterogeneity of variance into the simulation design. This is done through two arguments, homogeneity and heterogeneity_var. Below is a possible specification within a single level framework. Specifying homogeneity = FALSE indicates that some level of heterogeneity is desired. Finally, a variable name is passed as a character string to the heterogeneity_var argument; this is the variable on which heterogeneity in the residuals depends (i.e. the variable used to form groups).
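A sketch of such a specification (values are illustrative; note error_var is now a vector, with one variance per group formed from the heterogeneity variable):

```r
fixed <- ~ 1 + diff
fixed_param <- c(2, 1)
cov_param <- list(mean = 0, sd = 2, var_type = "single")

temp_hetero <- sim_reg(fixed = fixed, fixed_param = fixed_param,
                       cov_param = cov_param, n = 1000,
                       error_var = c(1, 8, 1),        # 3 values -> 3 groups of "diff"
                       with_err_gen = "rnorm", data_str = "single",
                       homogeneity = FALSE, heterogeneity_var = "diff")
```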

The outcome of the heterogeneity from the simulation is best shown with a figure.

As you can see from the figure, the error variance is larger in the middle of the variable "diff" and smaller in the tails. Internally, the variable "diff" was cut into 3 groups (based on the length of the error_var argument) and the variance values passed to error_var were used to create the heterogeneity. In many instances, however, heterogeneity is thought of with regard to groups. This can be shown with the following:
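A sketch with a grouping factor (all names and values are invented; I am also assuming heterogeneity_var takes the suffixed variable name as it appears in fixed, which may differ by simglm version):

```r
fixed <- ~ 1 + group.f + diff
fixed_param <- c(2, 1, 0.5, 0.75)             # intercept + 2 dummy codes + diff
cov_param <- list(mean = 0, sd = 2, var_type = "single")   # diff only
fact_vars <- list(numlevels = 3, var_type = "single")      # group.f

temp_hetero_group <- sim_reg(fixed = fixed, fixed_param = fixed_param,
                             cov_param = cov_param, fact_vars = fact_vars,
                             n = 1000, error_var = c(1, 4, 8),  # one variance per group
                             with_err_gen = "rnorm", data_str = "single",
                             homogeneity = FALSE,
                             heterogeneity_var = "group.f")
```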

Nested Data

Highlighting the new arguments needed, the first is a one sided formula, random, which specifies the terms from the fixed portion that vary randomly across clusters. Related to random, random_param is a list of characteristics used when simulating the random effects. This list must contain two named components: random_var, the variances of the terms specified in the one sided random formula, and rand_gen, a quoted distribution function (e.g. 'rnorm') used to simulate the random effects. The random_var component must be the same length as the number of terms in random.

The optional named arguments to the random_param argument include ther, ther_sim, and cor_vars. First, cor_vars is a vector of correlations between random effects, the default is no correlation between random effects. The argument ther represents a vector of the theoretical mean and standard deviation of the random effects. This can be used to center simulated values at 0 or scale them to have a standard deviation of 1. The following standardization formula is used: $z = \frac{X - \mu}{\sigma}$, where $\mu$ and $\sigma$ are the values specified within the ther argument. The default values are 0 and 1 for $\mu$ and $\sigma$ respectively so that no scaling is performed. The ther_sim argument does the same process as ther, but the $\mu$ and $\sigma$ values are empirically drawn by sampling one million values from the rand_gen specified. Lastly, depending on the rand_gen function specified, additional arguments may be needed for that distribution function and can be specified as well. For example, if rand_gen = 'rchisq', including df = 1 would simulate a chi-square distribution with one degree of freedom.

The specification of the covariates is still done with the cov_param argument, however, now the var_type portion of cov_param must be either "level1" or "level2" to represent either level 1 or level 2 variables respectively. Note that the time variable is not included in the cov_param argument and is automatically specified as discrete starting from 0. In the future, differing time scales will be expanded.

The other new arguments are straightforward: p is the within cluster sample size (i.e. how many repeated measurements) and data_str is now "long" for longitudinal data.
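Putting these pieces together, a longitudinal specification might look like the following sketch (variable names and values are illustrative; note that time is not included in cov_param, and diff is level 1 while act is level 2):

```r
fixed <- ~ 1 + time + diff + act + time:act
random <- ~ 1 + time + diff
fixed_param <- c(4, 2, 6, 2.3, 7)
random_param <- list(random_var = c(7, 4, 2),  # one variance per term in random
                     rand_gen = "rnorm")
cov_param <- list(mean = c(0, 0), sd = c(1.5, 4),
                  var_type = c("level1", "level2"))
n <- 150   # number of clusters
p <- 30    # repeated measurements per cluster

temp_long <- sim_reg(fixed = fixed, random = random,
                     fixed_param = fixed_param, random_param = random_param,
                     cov_param = cov_param, n = n, p = p,
                     error_var = 4, with_err_gen = "rnorm",
                     data_str = "long")
```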

The simulated data now look like the following:

```r
head(temp_long)
```

This structure is very similar to before, except now there are columns for the specific random effects (denoted by the lower case b's), a column reflecting the combined contribution of the random effects, and lastly two ID variables: one reflecting the within cluster ID and another the cluster ID.

Checking how the simulation worked with the following:
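One way to check is to fit the generating mixed model with lme4 (a sketch; I am assuming the cluster ID column in the simulated data is named clustID, matching the output shown above):

```r
library(lme4)

# fixed and random parts mirror the formulas used in the simulation
lmer(sim_data ~ 1 + time + diff + act + time:act +
       (1 + time + diff | clustID),
     data = temp_long)
```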

One note when looking at the output: standard deviations are given, not the variances that were input into the simulation function.

Specifying Time Variable

For longitudinal designs, the time variable (linear slope) is generated going from 0 to the number of observations minus 1. Therefore, if there are 5 time points (measurement occasions), the time variable by default will be a sequence from 0 to 4 indexing by 1. An optional named argument within cov_param, named time_var, allows the user to specify the time scale. Below is an example with 5 time points, where the first three measurement occasions are separated by 6 months, the last two occasions are measured 2 years apart, and the time variable is expressed on that scale (here, years).
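A sketch of that specification (values are illustrative; p matches the 5 entries of time_var):

```r
cov_param <- list(mean = c(0, 0), sd = c(1.5, 4),
                  var_type = c("level1", "level2"),
                  time_var = c(0, 0.5, 1, 3, 5))   # time in years

temp_long_time <- sim_reg(fixed = ~ 1 + time + diff + act,
                          random = ~ 1 + time,
                          fixed_param = c(4, 2, 6, 2.3),
                          random_param = list(random_var = c(7, 4),
                                              rand_gen = "rnorm"),
                          cov_param = cov_param,
                          n = 150, p = 5,          # 5 measurement occasions
                          error_var = 4, with_err_gen = "rnorm",
                          data_str = "long")
```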

Cross-Sectional Data

A similar framework can be used for cross-sectional data, such as when students are nested within schools. The only thing that would need to be changed from the longitudinal portion above is the data_str argument from "long" to "cross" for cross-sectional data.
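A sketch of a cross-sectional call (illustrative values; there is no time variable, so every fixed effect appears in cov_param):

```r
temp_cross <- sim_reg(fixed = ~ 1 + diff + act,
                      random = ~ 1,
                      fixed_param = c(4, 6, 2.3),
                      random_param = list(random_var = 7, rand_gen = "rnorm"),
                      cov_param = list(mean = c(0, 0), sd = c(1.5, 4),
                                       var_type = c("level1", "level2")),
                      n = 150,        # number of schools
                      p = 30,         # students per school
                      error_var = 4, with_err_gen = "rnorm",
                      data_str = "cross")
```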

One last note about cross-sectional data: as a default for longitudinal data, time is always considered to be in the model in some fashion (as noted above when discussing the cov_param object). Cross-sectional data has no automatic time variable, so when specifying the cov_param and fact_vars objects, ensure information about all variables is included.

Categorical Data for Nested Designs

The same framework used to specify categorical data for single level regression designs is used for nested designs. Just assign the ".f", ".c", or ".o" (or "_f", "_c", or "_o") suffix to the end of the variable name and the function will take care of the rest.

Bugs/Feature Requests

Lastly, for any bugs or feature requests, open an issue on the GitHub repository; I will work to resolve them as quickly as possible. See the simglm GitHub repository.