Nonexperimental Designs

The most frequently used experimental design type for research in industrial and organizational psychology and a number of allied fields is the nonexperiment. This design type differs from both the randomized experiment and the quasi-experiment in several important respects. Prior to describing the nonexperimental design type, we note that the article on experimental designs in this section considers basic issues associated with (a) the validity of inferences stemming from empirical research and (b) the settings within which research takes place. Thus, the same set of issues is not addressed in this entry.

Attributes of Nonexperimental Designs

Nonexperimental designs differ from both quasi-experimental designs and randomized experimental designs in several important respects. Overall, these differences render research using nonexperimental designs far weaker, in terms of internal validity and several other criteria, than research using either of the alternative design types.

Measurement of Assumed Causes

In nonexperimental research, variables that are assumed causes are measured, as opposed to being manipulated. For example, a researcher interested in testing the relation between organizational commitment (an assumed cause) and worker productivity (an assumed effect) would have to measure the levels of these variables. Because commitment levels were measured, the study would have little, if any, internal validity. Note, moreover, that the internal validity of such research would not be at all improved by a host of data analytic strategies (e.g., path analysis, structural equation modeling) that purport to allow for inferences about causal connections between and among variables (Stone-Romero, 2002; Stone-Romero & Rosopa, 2004).

Nonrandom Assignment of Participants and Absence of Conditions

In nonexperiments, there are typically no explicitly defined research conditions. For example, a researcher interested in assessing the relation between job satisfaction (an assumed cause) and organizational commitment (an assumed effect) would simply measure the level of both variables. Because participants were not randomly assigned to conditions in which the level of job satisfaction was manipulated, the researcher would be left in the uncomfortable position of not having information about the many variables that were confounded with job satisfaction. Thus, the internal validity of the study would be a major concern. Moreover, even if the study involved the comparison of scores on one or more dependent variables across existing conditions over which the researcher had no control, the researcher would have no control over the assignment of participants to those conditions. For example, a researcher investigating the assumed effects of incentive systems on firm productivity in several manufacturing firms would have no control over the attributes of such systems. Again, this would serve to greatly diminish the internal validity of the study.

Measurement of Assumed Dependent Variables

In nonexperimental research, assumed dependent variables are measured. Note that the same is true of both randomized experiments and quasi-experiments. However, there are very important differences among the three experimental design types that warrant attention. More specifically, in the case of well-conducted randomized experiments, the researcher can be highly confident that the scores on the dependent variable(s) were a function of the study’s manipulations. Moreover, in quasi-experiments with appropriate design features, the investigator can be fairly confident that the study’s manipulations were responsible for observed differences on the dependent variable(s). However, in nonexperimental studies, the researcher is placed in the uncomfortable position of having to assume that what he or she views as dependent variables are indeed effects. Regrettably, in virtually all nonexperimental research, this assumption rests on a very shaky foundation. Thus, for example, in a study of the assumed effect of job satisfaction on intentions to quit a job, what the researcher assumes to be the effect may in fact be the cause. That is, individuals who have decided to quit for reasons that were not based on job satisfaction could, in the interest of cognitive consistency, view their jobs as not being satisfying.

Control Over Extraneous or Confounding Variables

Because nonexperimental research does not benefit from the controls (e.g., random assignment to conditions) that are common to studies using randomized experimental designs, there is relatively little potential to control extraneous variables. As a result, the results of nonexperimental research tend to have little, if any, internal validity. For instance, assume that a researcher conducted a nonexperimental study of the assumed causal relation between negative affectivity and job-related strain and found these variables to be positively related. It would be inappropriate to conclude that these variables were causally related. At least one important reason for this is that the measures of these constructs have common items. Thus, any detected relation between them could well be spurious (Stone-Romero, 2005).
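The spuriousness problem described above can be illustrated with a small simulation. The sketch below is hypothetical and not from the original entry: it simply generates two statistically independent constructs, builds a scale score for each that includes some shared (overlapping) items, and shows that the observed scale scores correlate positively even though the constructs themselves are unrelated.

```python
# Hypothetical illustration (not from the original entry): two measures that
# share items can correlate even when the underlying constructs do not.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Independent latent constructs: no causal or correlational link between them.
neg_affect = rng.normal(size=n)   # negative affectivity
strain = rng.normal(size=n)       # job-related strain

# Responses to items that appear on BOTH scales (e.g., overlapping mood items).
shared_items = rng.normal(size=n)

# Each observed scale score mixes its own construct with the shared items.
na_score = neg_affect + shared_items
strain_score = strain + shared_items

r = np.corrcoef(na_score, strain_score)[0, 1]
print(round(r, 2))  # clearly positive, despite independent constructs
```

Under these assumptions the shared items alone induce an observed correlation of roughly .50, which a naive reader could easily mistake for evidence of a causal relation.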

In hopes of bolstering causal inference, researchers who do nonexperimental studies often measure variables that are assumed to be confounds and then use such procedures as hierarchical multiple regression, path analysis, and structural equation modeling to control them. Regrettably, such procedures have little potential to control confounds. There are at least four reasons for this. First, researchers are seldom aware of all of the relevant confounds. Second, even if all of them were known, it is seldom possible to measure more than a few of them in any given study and use them as controls. Third, to the degree that the measures of confounds are unreliable, procedures such as multiple regression will fail to fully control for the effects of measured confounds. Fourth, and finally, because a large number of causal models may be consistent with a given set of covariances among a set of variables, statistical procedures are incapable of providing compelling evidence about the superiority of any given model over alternative models.
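The third reason above, that unreliable measures of confounds lead to incomplete statistical control, can also be demonstrated directly. The following sketch is hypothetical and not from the original entry: x and y share a common cause but have no direct causal link, and partialling out an error-laden measure of that common cause (via simple ordinary least squares residuals) leaves a clearly nonzero partial correlation, whereas partialling out the error-free confound removes the relation entirely.

```python
# Hypothetical illustration (not from the original entry): regression-based
# control removes a confound completely only when the confound is measured
# without error; an unreliable measure leaves residual confounding behind.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

confound = rng.normal(size=n)          # true common cause of x and y
x = confound + rng.normal(size=n)      # assumed cause; no direct effect on y
y = confound + rng.normal(size=n)      # assumed effect
confound_obs = confound + rng.normal(size=n)  # unreliable measure (reliability about .50)

def residualize(v, c):
    """OLS residual of v after regressing it on c (both centered)."""
    v, c = v - v.mean(), c - c.mean()
    return v - (v @ c / (c @ c)) * c

def partial_r(a, b, c):
    """Correlation of a and b after partialling out c."""
    return np.corrcoef(residualize(a, c), residualize(b, c))[0, 1]

print(round(partial_r(x, y, confound), 2))      # near zero: full control
print(round(partial_r(x, y, confound_obs), 2))  # clearly positive: incomplete control
```

Under these assumptions, controlling for the perfectly measured confound drives the partial correlation to about zero, while controlling for the unreliable measure leaves a partial correlation of roughly .33, so a researcher relying on the adjusted estimate would still draw an erroneous causal inference.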