Name and Define 6 techniques used to control for threats to internal validity

1. Double-Blind: neither Ss nor experimenters know group assignments; controls for effects of experimenter bias (and subject expectancies)
2. Random Assignment: randomly assign Ss to control/experimental groups so as to obtain equivalency among those groups on known and unknown extraneous variables.
3. Matching: Used to control for the effects of a specific extraneous variable. E.g., pair potential Ss on similarity in IQ score and split the pairs between the two groups. Useful when N is small and random assignment therefore might not obtain equivalency on the potential confound.
4. Blocking: Turn the potential confound into another IV so that its effects on the DV can be isolated.
5. Holding the Extraneous Variable Constant: use only Ss who are similar in terms of the potential confound (e.g., use only high-IQ subjects); the trade-off is reduced external validity.
6. Analysis of Covariance (ANCOVA): statistical technique, analogous to post-hoc matching, that adjusts DV scores so that Ss are equalized in terms of their status on the extraneous variable. Like matching, it only controls for variables that have been identified and measured.
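Techniques 2 and 3 above can be sketched in code. This is a minimal illustration, not a standard library routine: the subject tuples, `score` key, and seed are invented for the example. Random assignment shuffles the whole pool; matched assignment ranks Ss on the matching variable (e.g., IQ), pairs adjacent Ss, and splits each pair at random between the two groups.

```python
import random

def random_assignment(subjects, seed=None):
    """Randomly split subjects into control and experimental groups."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

def matched_assignment(subjects, score, seed=None):
    """Pair Ss with similar scores on the matching variable, then split
    each pair at random between the two groups.  (With an odd N the
    lowest-ranked leftover subject is dropped.)"""
    rng = random.Random(seed)
    ranked = sorted(subjects, key=score)
    control, experimental = [], []
    for i in range(0, len(ranked) - 1, 2):
        a, b = ranked[i], ranked[i + 1]
        if rng.random() < 0.5:       # coin flip decides which group gets which member
            a, b = b, a
        control.append(a)
        experimental.append(b)
    return control, experimental

# Hypothetical Ss as (name, IQ) tuples
ss = [("A", 130), ("B", 95), ("C", 128), ("D", 99), ("E", 110), ("F", 112)]
ctrl, expt = matched_assignment(ss, score=lambda s: s[1], seed=1)
```

Every matched pair ends up with one member in each group, so the groups are equivalent on IQ by construction rather than by chance.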
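The ANCOVA adjustment (technique 6) can also be sketched. This is a hand-rolled illustration of the classic adjustment formula, not a full ANCOVA (no F-test): each DV score is corrected by the pooled within-group regression slope times the subject's distance from the grand covariate mean, dv_adj = dv - b_w * (cov - grand_mean). The group names and data below are hypothetical.

```python
def ancova_adjust(groups):
    """groups: dict mapping group name -> list of (covariate, dv) pairs.
    Returns a dict of adjusted DV scores using the pooled within-group
    regression slope b_w:  dv_adj = dv - b_w * (cov - grand_mean_cov)."""
    # Pooled within-group slope = sum of within-group cross-products
    # divided by sum of within-group covariate sums of squares.
    sxy = sxx = 0.0
    for obs in groups.values():
        mx = sum(c for c, _ in obs) / len(obs)
        my = sum(d for _, d in obs) / len(obs)
        sxy += sum((c - mx) * (d - my) for c, d in obs)
        sxx += sum((c - mx) ** 2 for c, _ in obs)
    b = sxy / sxx
    all_cov = [c for obs in groups.values() for c, _ in obs]
    grand = sum(all_cov) / len(all_cov)
    return {g: [d - b * (c - grand) for c, d in obs]
            for g, obs in groups.items()}

# Hypothetical data: dv = 0.5*cov + group effect (10 vs 20), with the
# experimental group enjoying higher covariate scores (the confound).
groups = {"ctrl": [(90, 55), (100, 60), (110, 65)],
          "expt": [(100, 70), (110, 75), (120, 80)]}
adjusted = ancova_adjust(groups)
```

Raw group means differ by 15 (60 vs 75), but after equalizing Ss on the covariate the adjusted means differ by 10 (62.5 vs 72.5), the true treatment effect.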

Define External Validity:

Extent to which results can be generalized to other times, settings, or people

Name and describe 6 threats to external validity

1. Interaction between Selection and Tx: the effects of the tx observed in the sample would not generalize to other members of the population of interest.
2. Interaction between History and Tx: the effects of the tx would not generalize to other settings and/or times.
3. Interaction between Testing and Tx: the pretest itself accounts for changes in the DV, so results can't be generalized to Ss who don't receive a pretest. E.g., "pretest sensitization": Ss are oriented to the purposes of the research study or their susceptibility to tx effects is increased.
4. Demand Characteristics: cues in the research setting allow Ss to guess the research hypothesis, which might lead them to try to confirm (or disprove) it.
5. Hawthorne Effect: the tendency for Ss to behave differently because they are participating in research (and therefore are being observed).
6. Order Effects (aka Carryover Effects or Multiple Treatment Interference): occur in repeated measures designs; the effect of one tx is due in part to Ss having received a previous tx. Results can't be generalized to settings in which the client will receive only one tx.
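Order effects are commonly handled by counterbalancing, i.e., assigning equal numbers of Ss to every possible treatment order so carryover is spread evenly across conditions. A minimal sketch of complete counterbalancing (the treatment labels "A", "B", "C" are hypothetical):

```python
from itertools import permutations

def counterbalanced_orders(treatments):
    """All possible orderings of the treatments (complete counterbalancing).
    Assigning equal numbers of Ss to each order means every tx appears in
    every ordinal position equally often."""
    return list(permutations(treatments))

orders = counterbalanced_orders(["A", "B", "C"])
# 3 treatments -> 3! = 6 orders; each tx occupies each position in 2 of them
```

With k treatments this produces k! orders, which is why complete counterbalancing is only practical for small k; partial schemes (e.g., Latin squares) are used otherwise.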