MCS 3030 - Chapter 3.docx

School: University of Guelph
Department: Marketing and Consumer Studies
Course: MCS 3030
Professor: Lloyd Hetherington
Semester: Fall

Chapter 3: The Theory of Measurement
 Measurement is the process of observing and recording the observations that are
collected as a part of a research effort
 Four major levels of measurement: Nominal, ordinal, interval, and ratio
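As a quick illustration, the four levels can be sketched in Python (the variable names and values below are hypothetical, chosen only to show what each level permits):

```python
# Hypothetical examples of the four levels of measurement.

nominal = ["male", "female", "female"]   # categories only; no order
ordinal = ["low", "medium", "high"]      # ordered, but gaps between values are not equal
interval = [20.0, 25.0, 30.0]            # equal intervals, no true zero (e.g., degrees Celsius)
ratio = [0.0, 5.0, 10.0]                 # equal intervals plus a true zero (e.g., income)

# Only some statistics are meaningful at each level:
# nominal -> mode; ordinal -> median; interval -> mean; ratio -> ratios of values
assert ratio[2] / ratio[1] == 2.0  # "twice as much" is meaningful only for ratio data
```

The level of measurement matters because it determines which statistics are meaningful; treating ordinal data as if it were interval is a common analysis error.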
Construct Validity
Construct Validity: Refers to the degree to which inferences can legitimately be made from the operationalizations in your study to the theoretical constructs on which those operationalizations are based
Operationalization: The act of translating a construct into its manifestation (i.e., translating the idea of your treatment or program into the actual program, or translating the idea of what you want to measure into the real measure)
Your translation of the idea or construct into something real and concrete
Example: Let's say you have an idea for a treatment or program you would like to create. The operationalization is the program or treatment itself, as it exists after you create it
 Construct validity is the degree to which the actual (operationalized) program
accurately reflects the idea (the program as you conceptualize or envision it)
 The population of interest in your study is the construct, and the sample is your
operationalization
 The construct validity question, “How well does my sample represent the idea of the
population?” merges with the external validity question, “How well can I generalize from
my sample to the population?”
Translation Validity: A type of construct validity related to how well you translated the
idea of your measure into its operationalization
Criterion-Related Validity: You examine whether the operationalization behaves the way it should according to some criterion, based on your understanding of the construct
This type of validity is a more relational approach to construct validity
Translation Validity
Face Validity: A type of validity that assures that “on its face” the operationalization seems
like a good translation of the construct
o Does the way you are measuring the construct appear to measure what you
want it to?
Example: You might look at a measure of math ability, read through the questions, and decide
if it seems like this is a good measure of math ability
Example 2: You might observe a teenage pregnancy prevention program and conclude that it is indeed a teenage pregnancy prevention program
 If this is all you do to assess face validity, it would clearly be weak evidence because it is essentially a subjective judgment call. (Note: Just because it is weak evidence doesn't mean that it is wrong. You need to rely on your subjective judgment throughout the research process)
 You can improve the quality of face-validity assessment considerably by making it more systematic
Example: If you were trying to assess the face validity of a math-ability measure, it would be more convincing if you sent the test to a carefully selected sample of experts on math-ability testing and they all reported back with the judgment that your measure appears to be a good measure of math ability
Content Validity: A check of the operationalization against the relevant content domain for
the construct
o The content domain is like a comprehensive checklist of the traits of your
construct. This approach assumes that you have a good detailed description of
the content domain, something that’s not always true
Example: You might lay out all of the characteristics of a teenage pregnancy prevention program. You would probably include in this domain specification the definition of the target group, a description of whether the program is preventive in nature (as opposed to treatment-oriented), and the content that should be included, such as basic information on pregnancy, the use of abstinence, birth control methods, and so on. Then, armed with these characteristics, you create a checklist to use when examining your program. Only programs that have these characteristics can legitimately be defined as teenage pregnancy prevention programs
Criterion-Related Validity
 In criterion-related validity, you check the performance of your operationalization against some criterion
Predictive Validity: A type of construct validity based on the idea that your measure is able to predict what it theoretically should be able to predict
Example: You might theorize that a measure of math ability should be able to predict how well a person will do in an engineering-based profession
 A high correlation would provide evidence for predictive validity; it would show that
your measure can correctly predict something that you theoretically think it should be
able to predict
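A minimal sketch of this check, assuming hypothetical math-ability scores and job-performance ratings (the names and data below are illustrative, not from the text):

```python
# Checking predictive validity: correlate a measure (math-ability scores)
# with a criterion it should theoretically predict (job-performance ratings).
# All data here are hypothetical.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

math_scores = [55, 62, 70, 78, 85, 91]         # the operationalized measure
job_ratings = [2.1, 2.8, 3.0, 3.6, 4.2, 4.5]   # the criterion it should predict

r = pearson_r(math_scores, job_ratings)
print(round(r, 3))  # a high positive r is evidence of predictive validity
```

With real data you would also consider sample size and statistical significance, not just the size of the correlation.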
Concurrent Validity: An operationalization’s ability to distinguish between groups that it
should theoretically be able to distinguish between.
Example: If you come up with a way of assessing depression, your measure should be able to distinguish between people who are diagnosed as depressed and those diagnosed as paranoid schizophrenic
 As in any discriminating test, the results are more powerful if you are able to show that you can discriminate between two similar groups than if you can show that you can discriminate between two groups that are very different
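A toy illustration of the idea, with hypothetical depression-measure scores for the two diagnosed groups:

```python
# Concurrent validity: a depression measure should score the diagnosed-depressed
# group higher than a contrasting diagnostic group. Scores are hypothetical.

depressed = [28, 31, 25, 34, 29]       # scores for people diagnosed as depressed
schizophrenic = [14, 18, 12, 16, 15]   # scores for a contrasting diagnostic group

mean_d = sum(depressed) / len(depressed)
mean_s = sum(schizophrenic) / len(schizophrenic)
print(mean_d > mean_s)  # True: the measure distinguishes the groups as theorized
```

In practice you would back this comparison with a significance test rather than a simple difference in means.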
Convergent Validity: The degree to which the operationalization is similar to (converges on) other operationalizations to which it should be theoretically similar
Example: To show the convergent validity of a Head Start program, you might gather evidence that shows that the program is similar to other Head Start programs.
Discriminant Validity: The degree to which concepts that should not be related
theoretically are, in fact, not interrelated in reality
Example: To show the discriminant validity of a Head Start program, you might gather evidence that shows that the program is not similar to other early childhood programs that don't label themselves as Head Start programs
 Construct validity refers to the degree to which inferences can legitimately be made from the operationalizations in your study to the theoretical constructs on which those operationalizations were based
o It is an assessment of how well your actual programs or measures reflect your
ideas or theories
 Convergent and discriminant validity are both considered subcategories or subtypes of
construct validity. The important thing to recognize is that they work together; if you
can demonstrate that you have evidence for both convergent and discriminant validity,
you have by definition demonstrated that you have evidence for construct validity
o Neither one alone is sufficient for establishing construct validity
 Measures of constructs that theoretically should be related to each other are, in fact,
observed to be related to each other (that is, you should be able to show a
correspondence or convergence between similar constructs)
 Measures of constructs that theoretically should not be related to each other are, in fact,
observed not to be related to each other (that is, you should be able to discriminate
between dissimilar constructs)
 Correlations between theoretically similar measures should be “high”, whereas
correlations between theoretically dissimilar measures should be “low”
 Convergent correlations should be as high as possible and discriminant ones should be
as low as possible
o Convergent correlations should always be higher than the discriminant ones
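This pattern can be sketched with hypothetical correlation values (the measure names and numbers below are invented for illustration):

```python
# The convergent/discriminant pattern: correlations among measures of the same
# construct should be higher than correlations with measures of a different
# construct. All measure names and correlation values are hypothetical.

convergent = {("self_esteem_A", "self_esteem_B"): 0.83,
              ("self_esteem_A", "self_esteem_C"): 0.78}
discriminant = {("self_esteem_A", "locus_of_control"): 0.12,
                ("self_esteem_B", "locus_of_control"): 0.09}

# Evidence for construct validity requires BOTH patterns at once:
# every convergent correlation exceeds every discriminant one.
ok = min(convergent.values()) > max(discriminant.values())
print(ok)  # True here; with real data you would compute these correlations
```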
Convergent Validity
 To establish convergent validity, you need to show that measures that should be related
are in reality related.
See Figure 3-2
Discriminant Validity  To establish discriminant validity, you need to show that measures that should not be
related are in reality not related
See figure 3 – 3
Threat to Construct Validity: Any factor that causes you to make an incorrect conclusion
about whether your operationalized variables (i.e., your program or outcome) reflect well
the construct they are intended to represent
Inadequate Preoperational Explication of Constructs: Altogether, this phrase means you didn't do a good enough job defining what you meant by the construct before you tried to translate it into a measure or program
Mono-Operation Bias: A threat to construct validity that occurs when you rely on only a single implementation of your independent variable, cause, program, or treatment in your study
 You use only one version of the treatment or program in your study.
Note: It is only relevant to the independent variable, cause, program, or treatment in your study. It does not pertain to measures or outcomes
o If you only use a single version of a program in a single place at a single point in time, you may not be capturing the full breadth of the concept of the program
Solution: Try to implement multiple versions of your program
Mono-Method Bias: A threat to construct validity that occurs because you use only a single method of measurement or observation
Note: It is only relevant to your measures or observations, not to your programs or causes. Otherwise, it is essentially the same issue as mono-operation bias
Interaction of Testing and Treatment
 If you are worried that a pretest makes your program participants more sensitive or
receptive to the treatment, randomly assign your program participants into two groups,
where one group gets the treatment and the other doesn’t
The Social Threats to Construct Validity
Hypothesis Guessing: Most people don’t just participate passively in a research project.
They guess at what the real purpose of the study is. Therefore, they are likely to base their
behavior on what they guess, not just on your treatment
Example: In an educational study conducted in a classroom, students might guess that the key
dependent variable has to do with class participation levels. If they increase their participation
not because of your program but because they think that’s what you’re studying, you cannot
label the outcome as an effect of the program; this makes it a construct validity threat
Evaluation Apprehension: Many people are anxious about being evaluated. Some are even phobic about testing and measurement situations. If their apprehension makes them perform poorly, you certainly can't label that as a treatment effect
 Another form of evaluation apprehension concerns the human tendency to want to look good or look smart, and so on. If, in their desire to look good, participants perform better, you would be wrong to label this as a treatment effect
 If it is appropriate, you may want to tell them that there are no right or wrong answers and that they aren't being judged or evaluated based on what they say or do
Experimente