A. Hypotheses. Research hypotheses for ordinal level dependent
data may deal with mean or median ranks. In such a case, the null
hypothesis takes the form H0: the mean (μ) or median (Md)
rank of one group is equal to that of the other(s):

H0: μ rank of Group 1 = μ rank of Group 2, or

H0: Md rank of Group 1 = Md rank of Group 2.

Researchers also could compare the distribution of one
set of data with the distribution of another, examining such hypotheses
as H0: The two distributions do not differ. The research
hypothesis suggests the alternative that the two distributions differ.

B. Measurement of Dependent Variables. Though the dependent
variable data originally may have been interval or quasi-interval data,
the actual form of the data analyzed involves some form of rank order
data (ordinal level measurement).

C. Conducting the Test.

--Selecting the appropriate test statistic is not always simple, since
nonparametric tests often reveal information about more than one
characteristic of interest to the researcher. For instance, the
Mann-Whitney U test and the Kolmogorov-Smirnov two-sample test
examine whether distributions differ, but they could differ by their
means, by their shapes, or by their variances (D. R. Anderson, Sweeney,
& Williams, 2003, p. 772; StatSoft, 2003b, ¶ 14).

D. Checking Assumptions. Though there are few assumptions
underlying nonparametric tests (which makes them very convenient),
occasionally some assumptions must be checked.

II. Comparing Ranks of One Group to Presumed Population
Characteristics: Analogous Tests to One-Sample t Tests

The One-Sample
Runs Test

--This test assumes that researchers are able to track the order of
occurrence of observations. This statistic also frequently is applied
to nominal level data, such as the order in which men and women arrive
at the beginning of a class.

--For samples over 20, the test statistic for the one-sample runs test
is:

z = (r − μr) / σr

where μr = (2n1n2) / (n1 + n2) + 1 and
σr = √[2n1n2(2n1n2 − n1 − n2) / ((n1 + n2)²(n1 + n2 − 1))], and where

r is the number of uninterrupted runs of events above or below
the median,

n1 is the number of scores above the median, and

n2 is the number of scores below the median.
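The large-sample formula can be cross-checked outside SPSS with a short Python sketch; `one_sample_runs_z` is a hypothetical helper name, and dropping scores that fall exactly on the median is an assumed convention (texts vary on handling such ties):

```python
import math
from statistics import median

def one_sample_runs_z(scores):
    """Large-sample (n > 20) z statistic for the one-sample runs test.

    Scores falling exactly on the median are dropped (an assumed
    convention; texts vary on how to handle such ties).
    """
    med = median(scores)
    signs = [s > med for s in scores if s != med]  # True = above median
    n1 = sum(signs)                 # number of scores above the median
    n2 = len(signs) - n1            # number of scores below the median
    # r: number of uninterrupted runs above or below the median
    r = 1 + sum(signs[i] != signs[i - 1] for i in range(1, len(signs)))
    mu_r = (2 * n1 * n2) / (n1 + n2) + 1
    sigma_r = math.sqrt(
        (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2))
        / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    )
    return (r - mu_r) / sigma_r
```

A strictly alternating sequence produces far more runs than chance expects (a large positive z), while a clustered sequence produces very few (a large negative z).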

--Using SPSS to Conduct the One-Sample Runs
Test

one-sample runs test: a nonparametric test
that examines the randomness of the occurrence of sequences in a set of
observations.

The Kolmogorov-Smirnov One-Sample Test

--This test uses a cumulative frequency
distribution. Though a cumulative frequency distribution is not a
standard normal curve, the standard normal curve sometimes is used to
define the cumulative distribution expected of the data. Hence, this test
often is used to check on the normal distribution of responses.

--This test assumes:

·randomization and

·a theoretically based cumulative frequency distribution
of ranks (which means there must be an underlying continuum for the data
under examination).

·Sometimes the test is used for variables that are simple
dichotomies.

cumulative frequency distribution: a running total of all the events
through each interval or class.

--The test has high power efficiency.

Power efficiency: the power a test has
relative to the sample sizes used (Plonsky, 1997, ¶ 2).

--For samples over 20, the test statistic for the Kolmogorov-Smirnov
one-sample test is:

D = maximum |Fo/N − S/N|

where

Fo is the cumulative observed frequency value for each
ranking level,

S is the cumulative expected frequency value for each ranking
level, and

N is the number of events in the study.
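As a sketch of the computation (not the SPSS procedure), D is the largest absolute difference between the observed cumulative proportions and a theoretical cumulative distribution; `ks_one_sample_d` and its `cdf` argument are hypothetical names for any callable returning theoretical cumulative probabilities:

```python
def ks_one_sample_d(scores, cdf):
    """Kolmogorov-Smirnov one-sample D: the largest absolute difference
    between the empirical cumulative proportion and the theoretical
    cumulative distribution `cdf` (a sketch under the assumptions above)."""
    xs = sorted(scores)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # compare the theoretical CDF against the empirical step function
        # just before and just after each ordered observation
        d = max(d, abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
    return d
```

Evenly spaced data compared against a matching uniform CDF yield a small D; the same data compared against a badly mismatched CDF yield a large D.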

--Using SPSS to Conduct the
Kolmogorov-Smirnov One-Sample Test

III. Comparing Ranks from Two Sample Groups

A.
Independent Groups: Analogous to Two-Sample t Tests

Independent groups: separate categories of
events or data.

1.
Median Test

Median test: a nonparametric test that
examines whether two different sample groups have been drawn from
a population with the same median.

--This test assumes that the dependent variable is measured on an
ordinal scale (even so, the median typically is computed from data that
actually are on the interval or ratio level).

--Limitations:

·when samples are
quite small, such as when the total number of events is under 20, or
when any expected frequency is under 5, researchers should use the
Fisher’s exact test;

·if any data points
fall exactly on the median, the researchers must make some adjustments,
either by deleting data points (if large original samples are available)
or by phrasing the research hypothesis to explore the number of events
that are above the median.

--The test statistic is:

χ² = N(|AD − BC| − N/2)² / [(A + B)(C + D)(A + C)(B + D)]

where

A and B are the numbers of events above the combined median in the
first and second groups, C and D are the numbers of events at or below
the combined median in the first and second groups, and

N is the number of events.
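A minimal Python sketch of the statistic, assuming the standard 2 × 2 layout (groups × above/at-or-below the combined median) and the continuity correction; `median_test_chi2` is a hypothetical helper name:

```python
from statistics import median

def median_test_chi2(group1, group2):
    """Median test via the 2x2 chi-square with continuity correction
    (a sketch; the above/at-or-below split is an assumed convention)."""
    grand_median = median(list(group1) + list(group2))
    a = sum(x > grand_median for x in group1)   # group 1, above the median
    b = sum(x > grand_median for x in group2)   # group 2, above the median
    c = len(group1) - a                         # group 1, at or below
    d = len(group2) - b                         # group 2, at or below
    n = a + b + c + d
    return (n * (abs(a * d - b * c) - n / 2) ** 2
            / ((a + b) * (c + d) * (a + c) * (b + d)))
```

Two completely non-overlapping groups produce a large chi-square; remember the note above about switching to Fisher's exact test when expected frequencies are small.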

2.
Wald-Wolfowitz Runs Test

Wald-Wolfowitz runs test: a
nonparametric test that examines whether two samples differ in central
tendency, variance, skewness, or any other distribution pattern.

--This test assumes:

·randomness;

·that the dependent variable initially was a continuous
variable

--The test statistic is:

z = (r − μr) / σr

where μr = (2n1n2) / (n1 + n2) + 1 and
σr = √[2n1n2(2n1n2 − n1 − n2) / ((n1 + n2)²(n1 + n2 − 1))], and where

r is the number of runs,

n1 is the number of events in the first group, and

n2 is the number of events in the second group.

--Limitations:

·First, the test
merely identifies that there is a difference in the two compared
samples. A statistically significant test result does not reveal whether
any effects are related to differences in means or differences in the
dispersion of data.

·Second, unless all
the tied ranks are from members of the same groups, the number of runs
may not be correctly identified.
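The same large-sample z formula can be sketched for the two-sample case by counting runs of group membership in the pooled, ordered data; `wald_wolfowitz_z` is a hypothetical name, and tied scores across groups are not specially handled here (per the limitation noted above):

```python
import math

def wald_wolfowitz_z(sample1, sample2):
    """Large-sample z for the Wald-Wolfowitz runs test (a sketch):
    pool the two samples, sort, and count runs of group membership."""
    labeled = sorted([(x, 1) for x in sample1] + [(x, 2) for x in sample2])
    groups = [g for _, g in labeled]
    r = 1 + sum(groups[i] != groups[i - 1] for i in range(1, len(groups)))
    n1, n2 = len(sample1), len(sample2)
    mu_r = (2 * n1 * n2) / (n1 + n2) + 1
    sigma_r = math.sqrt(
        (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2))
        / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    )
    return (r - mu_r) / sigma_r
```

Two fully separated samples form only two runs (a large negative z), while perfectly interleaved samples form the maximum number of runs (a large positive z).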

--Using SPSS to Conduct
the Wald-Wolfowitz Runs Test

3. Test for Large Samples: Mann-Whitney U Test

--Though it is most often used by researchers who are interested in
comparing central tendencies, this test actually compares differences in
distributions, including differences other than differences in means.
“Theoretically, in large samples the Mann-Whitney test can detect
differences in spread even when the medians are very similar” (Hart,
2001, p. 391). Hence, when reporting results, researchers should report
the features of the data (medians and shapes) as well as significance
statistics.

--The method is a more powerful option than the Wald-Wolfowitz runs
test, is not plagued by difficulties related to tied ranks, and may be
used when the underlying population distributions are not normal.

--Assumptions:

·randomization, and

·that the underlying
data are from a continuous distribution, even though the test uses only
the continuum of ranks.

--The test statistic is:

z = (U − n1n2/2) / √[n1n2(n1 + n2 + 1) / 12], where

U is the larger of the two following formulae:

U1 = n1n2 + n1(n1 + 1)/2 − R1 or U2 = n1n2 + n2(n2 + 1)/2 − R2

where

n1 is the number of events in the smaller group,

n2 is the number of events in the larger group, and

R1 and R2 are the sums of the ranks assigned to the smaller and larger
groups when all events are ranked together.

--Limitation: Large numbers of
tied ranks tend to make the Mann-Whitney U test very conservative.
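The U and z computations can be sketched as follows; `_ranks` and `mann_whitney` are hypothetical helper names, and mean ranks are assigned to ties:

```python
import math

def _ranks(values):
    """Assign ranks 1..N, giving tied values the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def mann_whitney(smaller, larger):
    """Return (U, z): the larger of the two U values and the large-sample
    normal approximation (a sketch of the formulas above)."""
    n1, n2 = len(smaller), len(larger)
    pooled_ranks = _ranks(list(smaller) + list(larger))
    r1 = sum(pooled_ranks[:n1])   # sum of ranks for the smaller group
    r2 = sum(pooled_ranks[n1:])   # sum of ranks for the larger group
    u = max(n1 * n2 + n1 * (n1 + 1) / 2 - r1,
            n1 * n2 + n2 * (n2 + 1) / 2 - r2)
    z = (u - n1 * n2 / 2) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u, z
```

Two completely separated groups give the maximum possible U (n1 × n2) and a large z.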

--Using SPSS to Conduct the Mann-Whitney U test

Mann-Whitney U test: a
nonparametric test that examines the equality of two distributions.

4. Test for Small Samples: Kolmogorov-Smirnov Two-Sample Test

-- By using cumulative frequency distributions of ranks, this test
examines whether two sample distributions are the same.

--This test assumes:

·that data are measured on the ordinal level, and

·that data come from an underlying continuous distribution.

--The test statistic is:

χ² = 4D² [n1n2 / (n1 + n2)]

where

D is the largest absolute difference between cumulative frequency
distributions, and

n1 and n2 are the number of events
in the first and second groups.

·For the two-sample
test, the degrees of freedom for chi-square are equal to two.

--Though the Kolmogorov-Smirnov two-sample test has greater power
efficiency than the Mann-Whitney U test when applied to small
samples, the Mann-Whitney U test has superior power efficiency
with large samples. Hence, researchers generally are advised to use the
Kolmogorov-Smirnov two-sample test when the total sample size is 40 or
fewer events. When the total sample size is greater than 40, other
nonparametric statistical tools, such as the Mann-Whitney U test,
are preferred.
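A short sketch of the D statistic and its chi-square approximation (with 2 degrees of freedom, as noted above); `ks_two_sample` is a hypothetical helper name:

```python
def ks_two_sample(sample1, sample2):
    """Return (D, chi2): D is the largest absolute difference between the
    two empirical cumulative distributions; chi2 is the large-sample
    approximation chi2 = 4 * D**2 * n1*n2 / (n1 + n2) (a sketch)."""
    n1, n2 = len(sample1), len(sample2)
    d = max(
        abs(sum(x <= t for x in sample1) / n1
            - sum(x <= t for x in sample2) / n2)
        for t in sorted(set(sample1) | set(sample2))
    )
    chi2 = 4 * d * d * (n1 * n2) / (n1 + n2)
    return d, chi2
```

Two completely non-overlapping samples give D = 1, the largest possible difference.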

--Using SPSS to Conduct the Kolmogorov-Smirnov Two-Sample Test

B. Dependent (Matched) Groups

--These dependent scores may be “before and after” tests from the same
people, or they may reflect situations where researchers deal with
groups of people who may have influenced others’ responses.

--Unlike the sign test, which simply compares the signs of matched pairs
of scores, the Wilcoxon Matched Pairs Signed Ranks test assesses the
sizes and directions of the ranked differences.

--The Wilcoxon Matched Pairs Signed Ranks test has greater power
efficiency than the sign test and may be used to test whether the mean
or median of a single population is equal to any given value.

--This test assumes:

·because the
magnitude of differences is to be assessed, that the dependent variable
originally was measured on the interval or quasi-interval scale;

·that the two sets
of scores are related in some way, such as testing subjects before and
after some treatment or using participants as their own controls;

·that “the
distribution of differences between the two populations in the pairs,
two-sample case is symmetric” (Aczel, 1989, p. 770). Among other things,
in a symmetrical distribution, the mean and median are the same.

--The null hypothesis is:

H0: The distributions
of the two populations are not different.

--If one assumes that differences
between the two population distributions involve the locations of the
mean and median, the researcher may make directional hypotheses
because of the assumption of symmetrical distributions.

--The test statistic is:

z = [T − n(n + 1)/4] / √[n(n + 1)(2n + 1) / 24]

where

T is the smaller sum of ranks with the same sign. (To identify
this term, the researcher subtracts the pretest scores from the posttest
scores. Then, the absolute values of these differences are ranked from
the lowest to the highest. In the case of ties, the mean of the tied
ranks is assigned to all the tied examples. Next, the researcher looks
at the differences and determines which sign (positive or negative) is
least frequent. To compute T, the ranks of all the differences
with the least frequent sign are summed); and

n is the number of matched pairs.
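The steps for computing T, plus the large-sample z approximation, can be sketched as below; `wilcoxon_t_and_z` is a hypothetical helper name, and dropping zero differences is an assumed convention:

```python
import math

def wilcoxon_t_and_z(pretest, posttest):
    """Wilcoxon matched pairs signed ranks: return (T, z) following the
    steps above (a sketch; pairs with zero difference are dropped)."""
    diffs = [post - pre for pre, post in zip(pretest, posttest) if post != pre]
    n = len(diffs)
    # rank the absolute differences, assigning mean ranks to ties
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    neg = sum(r for r, d in zip(ranks, diffs) if d < 0)
    t = min(pos, neg)  # T: the smaller sum of like-signed ranks
    z = (t - n * (n + 1) / 4) / math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return t, z
```

With nine positive differences and one large negative one, T is the single negative rank, and z falls below zero.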

--Using SPSS to Conduct the Wilcoxon Matched Pairs Signed Ranks Test

IV.
Comparing Ranks From More Than Two Sample Groups: Analogous Tests to
One-Way ANOVA

--The logic of the Kruskal-Wallis H test is similar to that of
one-way analysis of variance.

--When applied to two sample groups, the Kruskal-Wallis H test
and the Mann-Whitney U test are equivalent.

--The Kruskal-Wallis H test examines the null hypothesis:

H0:
The distributions of the populations are not different.

As is the case with the Mann-Whitney U test, this statistic deals
with differences in the distributions, one characteristic of which is
the mean or median. Though the Kruskal-Wallis H test directly
explores differences located in the distributions, any differences
may stem from different medians, means, modes, and/or shapes of
the distributions.

--Because this test does not
assume that there is an underlying normal distribution to the data, it
has become a popular tool for researchers who are uncomfortable assuming
normal distributions or homogeneity of variances in their data sets.

--Assumptions:

·randomization,

·that the groups are
independent groups of data, and

·that the underlying
data are from a continuous distribution, even though the test uses only
the continuum of ranks.

--Test statistic:

H = [12 / (N(N + 1))] Σ (Rj² / nj) − 3(N + 1)

where the sum is taken across the k groups,

N is the number of events in the study,

k is the number of groups,

nj is the number of events in each group j, and

Rj is the sum of the ranks in each group.

--To correct for the number of
tied ranks, one divides the H statistic by:

1 − [ΣT / (N³ − N)]

where

T is the number of ties cubed (t³)
minus the number of ties (t) for each set of tied ranks, and

N is the number of events in the study.

·H is distributed as chi-square with degrees of
freedom equal to the number of groups minus 1.
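The H statistic and the tie correction can be sketched together; `kruskal_wallis_h` is a hypothetical helper name, and the correction divides H by 1 − Σ(t³ − t)/(N³ − N):

```python
from collections import Counter

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H with the tie correction (a sketch): rank all
    events together, compute H, then apply the tie correction."""
    pooled = [x for g in groups for x in g]
    n = len(pooled)
    # rank all events together, giving tied values the mean of their ranks
    xs = sorted(pooled)
    rank_of = {}
    i = 0
    while i < n:
        j = i
        while j + 1 < n and xs[j + 1] == xs[i]:
            j += 1
        rank_of[xs[i]] = (i + j) / 2 + 1
        i = j + 1
    h = 12 / (n * (n + 1)) * sum(
        sum(rank_of[x] for x in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)
    # tie correction: T = t**3 - t for each set of t tied scores
    tie_sum = sum(t ** 3 - t for t in Counter(pooled).values() if t > 1)
    return h / (1 - tie_sum / (n ** 3 - n)) if tie_sum else h
```

Without ties the correction divisor is 1 and H is returned unchanged; ties shrink the divisor and so enlarge H slightly.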

--Regarding follow-up: the Kruskal-Wallis H test directly
explores differences located in the distributions, but any differences
may stem from different medians, means, modes, and/or shapes of
the distributions. The mean rank of the entire sample and of each sample
group is computed using the formulae

R̄ = (N + 1) / 2 for the entire sample, R̄i = ΣRi / ni, and R̄j = ΣRj / nj

where

Ri is each instance of a rank in the initial group to
be compared with another,

ni is the number of ranks in the initial comparison
group,

Rj is each instance of a rank in the next group to be
compared with the initial group, and

nj is the number of ranks in the next group to be
compared with the initial group.


These formulae identify the mean ranks for each group. To test for
differences, researchers compute |R̄i − R̄j| to
identify the difference in mean ranks between each pair of groups.

To test if these differences are
statistically significant, the researcher compares each difference to
the following critical value:

√[χ²α,k−1 · (N(N + 1) / 12) · (1/ni + 1/nj)]

where

χ²α,k−1 is
the critical value of chi-square at the specified α (alpha risk) and
degrees of freedom

equal to k − 1 (number of sample groups minus one),

N is the number of events in the study,

ni is the number of events in the initial group in the
comparison, and

nj is the number of events in the second group in the
comparison.

--Using SPSS to Conduct the Kruskal-Wallis H Test

Kruskal-Wallis H Test:
a nonparametric test that compares two or more groups of ordinal
data.

B. The Friedman Two-Way Analysis of Variance

--The Friedman Two-Way Analysis of Variance is a
nonparametric alternative to the mixed-effects analysis of variance
design. Despite its name, this test is not a two-way ANOVA for
two fixed effects, and it does not test directly for interaction
effects. It actually is a randomized block (mixed-effects) design for
rank order data. The procedure is
an extension of the Wilcoxon Matched Pairs Signed Ranks test for
situations where there are more than two groups of scores to be
examined.

--A significant test statistic indicates that there is a
difference somewhere among the groups compared. Multiple comparison
tests currently are not available to determine the locations of the
differences among more than two groups.

--Assumptions:

·samples are related
to each other in some way, and

·that the dependent
variable is measured on the ordinal level.

-- The test statistic for the Friedman two-way analysis of variance is:

χ²r = [12 / (Nk(k + 1))] Σ Rj² − 3N(k + 1)

where N is the number of matched sets of scores, k is the number of
conditions, and Rj is the sum of the ranks for each condition j.

Friedman Two-Way Analysis of Variance: a
nonparametric test designed to test whether two or more dependent
samples of ordinal dependent variable data differ (despite its name,
this test is not a two-way ANOVA for two fixed effects)
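Assuming the standard Friedman formula, with scores ranked within each matched set (N sets, k conditions); `friedman_chi2` is a hypothetical helper name:

```python
def friedman_chi2(blocks):
    """Friedman chi-square (a sketch): `blocks` is a list of N matched
    sets, each holding one score per condition; scores are ranked within
    each set (mean ranks for ties), and the condition rank sums feed
    chi2_r = 12 / (N k (k + 1)) * sum(Rj**2) - 3 N (k + 1)."""
    n, k = len(blocks), len(blocks[0])
    rank_sums = [0.0] * k
    for block in blocks:
        for j, score in enumerate(block):
            below = sum(s < score for s in block)
            tied = sum(s == score for s in block)
            rank_sums[j] += below + (tied + 1) / 2  # mean rank within the set
    return (12 / (n * k * (k + 1)) * sum(r * r for r in rank_sums)
            - 3 * n * (k + 1))
```

A perfectly consistent ordering across all matched sets yields the maximum chi-square for that N and k, while identical scores within every set yield zero.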