Note: These pages are prepared with MathJax. MathJax is an open source
JavaScript display engine for mathematics that works in all browsers.
See http://mathjax.org for details on supported browsers, accessibility,
copy-and-paste, and other features.

Rating

Section Starter Question

Suppose a simple random walk of 6 steps with \(Y_i = \pm 1\) has 4 steps \(+1\) and 2 steps \(-1\). Explicitly enumerate all the possible walks. In how many ways does the walk stay less than 2 throughout the entire walk until step 6?
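A brute-force enumeration makes the starter question concrete. This sketch (the function name and step encoding are illustrative, not from the text) lists every placement of the two \(-1\) steps and counts the walks whose partial sums stay below 2 until step 6:

```python
from itertools import combinations

# Enumerate all 6-step walks with four +1 steps and two -1 steps,
# and count those whose partial sums T_1, ..., T_5 stay below 2.
# (Illustrative helper; the walk starts at T_0 = 0.)
def count_walks_staying_below(height=2, n=6, downs=2):
    count = 0
    for down_positions in combinations(range(n), downs):
        steps = [-1 if t in down_positions else 1 for t in range(n)]
        partial_sums = []
        total = 0
        for s in steps:
            total += s
            partial_sums.append(total)
        # stay strictly below `height` at every time before step n
        if all(t < height for t in partial_sums[:-1]):
            count += 1
    return count
```

There are \(\binom{6}{2} = 15\) walks in all; the count above picks out those that first reach 2 exactly at step 6.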

Key Concepts

The Hitting Time Theorem expresses a conditional probability. The Hitting Time Theorem says that under reasonable, or even relatively weak, conditions on the steps, the conditional probability that an \(n\)-step random walk starting at 0, given that it ends at \(k > 0\), first hits \(k\) at time \(n\) is \(k/n\).

If the \(Y_i\) are independent, identically distributed random variables, or the joint probability distribution of \((Y_1,\dots,Y_n)\) is invariant under rotations or exchangeable, then

\[ \mathbb{P}\left[T_n = k,\ T_t < k \text{ for all } t < n\right] = \frac{k}{n} \cdot \mathbb{P}\left[T_n = k\right]. \]

The Hitting Time Theorem is equivalent to the Ballot Theorem.

The conditions of invariance under rotations and exchangeability are
weaker than the traditional requirement that Yi
be independent and identically distributed.

Vocabulary

The Hitting Time Theorem shows that the conditional probability that an \(n\)-step random walk with steps \((Y_1,\dots,Y_n)\), having a joint probability distribution satisfying reasonable conditions and starting at 0, given that it ends at height \(k > 0\), first hits \(k\) at time \(n\) is \(k/n\).

If the joint probability distribution of \((Y_1,\dots,Y_n)\) is the same as the joint probability distribution of the rotated sequence \((Y_{r+1},\dots,Y_n,Y_1,\dots,Y_r)\), then we say the distribution is invariant under rotations.

A random vector \((Y_1,\dots,Y_n)\) is exchangeable if the joint probability distribution is invariant under any permutation of \((Y_1,\dots,Y_n)\).

Mathematical Ideas

Background and History

The Ballot Theorem can be interpreted as the probability that, during the vote counting in an election between two candidates, the winning candidate is always ahead of the other. Suppose that in an election with \(n\) votes, candidate A receives \(a\) votes and candidate B receives \(b\) votes, where \(a - b = k\) for some positive integer \(k\). Assume that all ballot permutations are equally likely during the counting. Dividing the count from the specialized ballot theorem by the total number of ballot permutations gives the probability \(k/n\); see The Ballot Theorem and the Reflection Principle.

The problem can be restated as: the probability that an \(n\)-step random walk taking independent and identically distributed \(\pm 1\) steps stays positive after time 0, given that it ends at height \(k > 0\), is \(k/n\). If each step has probability \(1/2\), this probability can be derived easily using the reflection principle, an argument appearing in many textbooks. In fact, it is possible to prove the same result under weaker hypotheses, [?], [?].

The Hitting Time Theorem says the conditional probability that an \(n\)-step random walk starting at 0, given that it ends at \(k > 0\), first hits \(k\) at time \(n\) is \(k/n\).

Remark. Note that:

The Hitting Time Theorem expresses a conditional probability.

By time reversal and symmetry around \(k\), the Hitting Time Theorem is equivalent to the statement that the conditional probability that an \(n\)-step random walk taking independent and identically distributed \(\pm 1\) steps stays positive after time 0, given that it ends at height \(k > 0\), is \(k/n\).

Note that the Hitting Time Theorem is different from the Gambler's Ruin. In its usual form, the Gambler's Ruin gives the probability that a random walk, given that it starts from a positive value, ever hits the value \(k\) before it hits the value 0.

The Hitting Time Theorem is also different from the Positive Walks Theorem. The Positive Walks Theorem gives the probability that an \(n\)-step random walk starting at 0 is always positive throughout the remainder of the walk, without regard to the value of the endpoint.

The Hitting Time Theorem has other applications to branching
processes and random graphs, see [?] for references.

In 1949 Otter gave the first proof of the Hitting Time Theorem for \(k = 1\). Kemperman in 1961 and Dwass in 1969 gave proofs for \(k \ge 2\) using generating functions and the Lagrange inversion formula. For simple Bernoulli random walks making steps of size \(\pm 1\), a proof using the Reflection Principle is in [?]. See [?] for references and discussion.

The Hitting Time Theorem for Independent Identically Distributed Random
Variables

Fix \(n \ge 1\). Let \((Y_1,\dots,Y_n)\) be a sequence of random variables \(Y_i\) taking values in \(\{\dots,-2,-1,0,1\}\). Note that this is more general than, but includes, the standard random walk where the random variables take values in \(\{-1,1\}\). Define the walk \(T = (T_0,\dots,T_n)\) as the usual piecewise linear function defined by the values \(T_i = \sum_{j=1}^{i} Y_j\) at the integers \(i = 0,1,2,\dots,n\), with \(T_0 = 0\). Note that because the only positive value assumed by the steps \(Y_i\) is 1, for the random walk \(T\) to gain a height \(k\), it has to pass through all \(k-1\) intermediate integer values.
Theorem 1 (Hitting Time Theorem). If the \(Y_i\) are independent and identically distributed random variables, then for \(k \ge 0\),

\[ \mathbb{P}\left[T_n = k,\ T_t < k \text{ for all } t < n\right] = \frac{k}{n} \cdot \mathbb{P}\left[T_n = k\right]. \]

Proof. The proof proceeds by induction on \(n \ge 1\). When \(n = 1\), both sides are 0 when \(k > 1\) or \(k = 0\), and both are equal to \(\mathbb{P}[Y_1 = 1]\) when \(k = 1\). This is the base case of the induction.

For the induction step, take \(n \ge 2\) and consider the case \(k = 0\). In this case, the left side requires \(T_t < 0\) for all \(t < n\), yet \(T_0 = 0\), so the event is empty and the probability is 0. The right side is also equal to 0, so the case \(k = 0\) is satisfied. Thus, we may assume that \(k \ge 1\).

So take \(n \ge 2\) and consider the case \(k > 0\). Condition on the first step to obtain

\[ \mathbb{P}\left[T_n = k,\ T_t < k \text{ for all } t < n\right] = \sum_{s=-\infty}^{1} \mathbb{P}\left[T_n = k,\ T_t < k \text{ for all } t < n \mid Y_1 = s\right] \mathbb{P}\left[Y_1 = s\right]. \]

For a given \(s\) and \(m = 0,\dots,n-1\), let \(T^{(s)}_m = T_{m+1} - s\). Because the steps are independent, identically distributed random variables, by the random walk Markov property, with \(T_0 = 0\) and conditionally on \(Y_1 = s\), the random walk \(\{T^{(s)}_m\}_{m=0}^{n-1}\) has the same distribution as the random walk path \(\{T_m\}_{m=0}^{n-1}\).

Figure 1: Illustration of \(\{T_m\}_{m=0}^{n-1}\) and \(\{T^{(s)}_m\}_{m=0}^{n-1}\) for \(s = -1\), shown in red.

That is,

\[ \mathbb{P}\left[T_n = k,\ T_t < k \text{ for all } t < n \mid Y_1 = s\right] = \mathbb{P}\left[T^{(s)}_{n-1} = k - s,\ T^{(s)}_t < k - s \text{ for all } t < n-1\right] = \frac{k-s}{n-1}\, \mathbb{P}\left[T^{(s)}_{n-1} = k - s\right] \]

where in the last equality we have used the induction hypothesis, which is allowed since \(n - 1 \ge 1\) and \(k \ge 1\) with \(s \le 1\), so \(k - s \ge 0\). Because the steps are independent, identically distributed random variables, \(\mathbb{P}[T^{(s)}_{n-1} = k - s] = \mathbb{P}[T_n = k \mid Y_1 = s]\) (see Figure 1). Substitute into the summation to obtain

\[ \mathbb{P}\left[T_n = k,\ T_t < k \text{ for all } t < n\right] = \sum_{s=-\infty}^{1} \frac{k-s}{n-1}\, \mathbb{P}\left[T_n = k \mid Y_1 = s\right] \mathbb{P}\left[Y_1 = s\right]. \]

Temporarily ignore the \(n - 1\) in the denominator and consider the summation.

Figure 2: Example of the Hitting Time Theorem with \(n = 6\), \(k = 2\), \(a = 4\), \(b = 2\).

Example Application of the Hitting Time Theorem

The following example is adapted from [?] and [?]. The original problem
is:

The Kentucky Derby is on Saturday, and a field of 20 horses is slated to run “the fastest two minutes in sports” in pursuit of the right to be draped with a blanket of roses. But let's consider, instead, the Lucky Derby, where things are a little more bizarre:

The bugle sounds, and 20 horses make their way to the starting gate for the first annual Lucky Derby. These horses, all trained at the mysterious Riddler Stables, are special. Each second, every Riddler-trained horse takes one step. Each step is exactly one meter long. But what these horses exhibit in precision, they lack in sense of direction. Most of the time, their steps are forward (toward the finish line) but the rest of the time they are backward (away from the finish line). As an avid fan of the Lucky Derby, you've done exhaustive research on these 20 competitors. You know that Horse One goes forward 52 percent of the time, Horse Two 54 percent of the time, Horse Three 56 percent, and so on, up to the favorite filly, Horse Twenty, who steps forward 90 percent of the time. The horses' steps are taken independently of one another, and the finish line is 200 meters from the starting gate.

Handicap this race and place your bets! In other words, what are the odds (a percentage is fine) that each horse wins?

It is easy to create a simulation for this scenario. A sample script is in the scripts section below. The simulation of 2000 races suggests that the probability of horse 20 winning is about 70.48% and the probability of horse 19 winning is about 22.86%.

Consider a particular horse. For each step, suppose it moves forward with probability \(p\) (in this example \(p > 1/2\)) or backward with probability \(1 - p\). Recall that reaching an even-numbered position \(2k\) (such as 200 in the problem statement) can only occur in an even number of steps, say \(2n\). Then the horse must take \(n + k\) forward steps and \(n - k\) backward steps. The horse starts at position 0 and we would like to know the probability that it will reach position \(2k\) (where specifically in the example \(k = 100\) with \(2k = 200\)) in exactly \(2n\) steps. For convenience, adopt the following notation:

\(2k\) is a fixed parameter, the value to be reached;

\(2n\) is a possible number of steps required to reach the value \(2k\), where \(2n \ge 2k\);

let \(H\) be the random variable for the hitting time, that is, \(H = 2n\) if \(T_{2n} = 2k\) and \(T_t < 2k\) for all \(t < 2n\); and

for \(2n \ge 2k\),

\[ \mathbb{P}\left[H = 2n\right] = \mathbb{P}\left[T_{2n} = 2k,\ T_t < 2k \text{ for all } t < 2n\right] = \frac{k}{n} \cdot \mathbb{P}\left[T_{2n} = 2k\right] = \frac{k}{n} \binom{2n}{n+k} p^{n+k} (1-p)^{n-k} = P(2n; 2k, p). \]

This probability mass distribution does not seem to have a common name, but it is
still possible to compute the statistics of this distribution.
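The mass function \(P(2n; 2k, p)\) is straightforward to evaluate directly. A minimal sketch (function name assumed, using the exact binomial coefficient from the Python standard library):

```python
from math import comb

# P(2n; 2k, p) = (k/n) * C(2n, n+k) * p^(n+k) * (1-p)^(n-k)
def hitting_pmf(n, k, p):
    return (k / n) * comb(2 * n, n + k) * p ** (n + k) * (1 - p) ** (n - k)

# For horse 20 in the example (p = 0.9, 2k = 200), the mass is
# concentrated near 2n = 250, so a modest truncation captures nearly all of it.
total = sum(hitting_pmf(n, 100, 0.9) for n in range(100, 501))
```

Here `total` comes out indistinguishable from 1, consistent with \(P(2n;2k,p)\) being a probability mass distribution.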

Theorem 2.

\(P(2n; 2k, p)\) is a probability mass distribution, that is,

\[ \sum_{n=k}^{\infty} P(2n; 2k, p) = 1. \]

\(\mathbb{E}\left[H\right] = \dfrac{2k}{2p-1}\), that is,

\[ \sum_{n=k}^{\infty} 2n \, P(2n; 2k, p) = \frac{2k}{2p-1}. \]

\(\operatorname{Var}\left[H\right] = \dfrac{8kp(1-p)}{(2p-1)^3}\), that is,

\[ \sum_{n=k}^{\infty} \left(2n - \frac{2k}{2p-1}\right)^2 P(2n; 2k, p) = \frac{8kp(1-p)}{(2p-1)^3}. \]

Proof. Starting with

\[ \sum_{n=k}^{\infty} \frac{k}{n} \binom{2n}{n+k} p^{n+k} (1-p)^{n-k}, \]

change the variables by \(n = j + k\) to rewrite the summation as

\[ p^{2k} \sum_{j=0}^{\infty} \frac{k}{j+k} \binom{2(j+k)}{j} p^{j} (1-p)^{j}. \]

This series, resembling a binomial series expansion, suggests that the series might have a closed form. In fact, using Maple to find a closed form solution, obtain

\[ \sum_{j=0}^{\infty} \frac{k}{j+k} \binom{2(j+k)}{j} x^{j} = \frac{2^{2k}}{\left(1 + \sqrt{1 - 4x}\right)^{2k}}. \]

In turn, the closed form for the power series suggests that it might be possible to express the moment-generating function in closed form, again using Maple to find a closed form expression.

For the horse race example, the mean, variance, and standard deviation of the number of steps required for each horse to reach position 200 are in Table 1.
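The closed forms for \(\mathbb{E}[H]\) and \(\operatorname{Var}[H]\) in Theorem 2 can be spot-checked numerically against the mass function. This sketch (function and variable names assumed) works with log-probabilities via `lgamma` so the huge binomial coefficients never overflow, using horse 1's values \(p = 0.52\), \(2k = 200\):

```python
from math import lgamma, log, exp

def log_hitting_pmf(n, k, p):
    # log P(2n; 2k, p) = log(k/n) + log C(2n, n+k) + (n+k) log p + (n-k) log(1-p)
    log_comb = lgamma(2 * n + 1) - lgamma(n + k + 1) - lgamma(n - k + 1)
    return log(k / n) + log_comb + (n + k) * log(p) + (n - k) * log(1 - p)

k, p = 100, 0.52
pmf = {2 * n: exp(log_hitting_pmf(n, k, p)) for n in range(k, 20001)}
mean = sum(h * q for h, q in pmf.items())
var = sum((h - mean) ** 2 * q for h, q in pmf.items())
# mean is close to 2k/(2p-1) = 5000, var to 8kp(1-p)/(2p-1)^3 = 3,120,000
```

The truncation at \(2n = 40000\) is far in the tail for these parameters, so the truncated sums agree with the closed forms to many digits.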

Horse    p      Mean         Variance      StdDev

 1      0.52   5000.        3120000.      1766.3522
 2      0.54   2500.         388125.       622.99679
 3      0.56   1666.6667     114074.07     337.74853
 4      0.58   1250.          47578.125    218.12410
 5      0.60   1000.          24000.       154.91933
 6      0.62    833.33333     13634.259    116.76583
 7      0.64    714.28571      8396.5015    91.632426
 8      0.66    625.           5478.5156    74.016995
 9      0.68    555.55556      3731.1385    61.083046
10      0.70    500.           2625.        51.234754
11      0.72    454.54545      1893.3133    43.512220
12      0.74    416.66667      1391.7824    37.306600
13      0.76    384.61538      1037.7788    32.214574
14      0.78    357.14286       781.70554   27.958997
15      0.80    333.33333       592.59259   24.343225
16      0.82    312.5           450.43945   21.223559
17      0.84    294.11765       341.94993   18.491888
18      0.86    277.77778       258.05898   16.064214
19      0.88    263.15789       192.44788   13.872559
20      0.90    250.            140.625     11.858541

Table 1: Statistics for the hitting time distribution for each horse in the race.
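Table 1 can be regenerated directly from the closed forms of Theorem 2; a short sketch (variable names assumed):

```python
# Mean 2k/(2p-1) and variance 8kp(1-p)/(2p-1)^3 for each horse;
# k = 100 since the finish line is at 2k = 200 meters.
k = 100
rows = []
for horse in range(1, 21):
    p = 0.50 + 0.02 * horse
    mean = 2 * k / (2 * p - 1)
    var = 8 * k * p * (1 - p) / (2 * p - 1) ** 3
    rows.append((horse, p, mean, var, var ** 0.5))

for horse, p, mean, var, sd in rows:
    print(f"{horse:5d}  {p:4.2f}  {mean:10.4f}  {var:12.3f}  {sd:10.6f}")
```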

The probability distributions seem to be approximately normal in shape; see Figure 3. This is reasonable since the hitting time to level \(2k\) is the sum of \(2k\) random one-level hitting times: first the hitting time from the initial point 0 to first reach 1, then the independent hitting time to first reach 2 from 1, and so on. The one-level hitting times are independent since the individual steps of the random walk are independent. Thus, as the sum of many independent random variables, the distribution of the hitting time to \(2k\) is approximately normal.

Suppose each horse \(i\) finishes in \(h_i\) steps. Computing the probability that a specific horse \(j\) wins the race amounts to finding the probability that \(h_i > h_j\) for all \(i \ne j\). Since the probability distributions are approximately normal, approximate this probability with

\[ \mathbb{P}\left[h_i > h_j,\ i \ne j\right] \approx \int_{-\infty}^{\infty} \mathbb{P}\left[h_j = x\right] \prod_{i \ne j} \mathbb{P}\left[h_i > x\right] dx = \int_{-\infty}^{\infty} \frac{1}{\sigma_j} \phi\!\left(\frac{x - \mu_j}{\sigma_j}\right) \prod_{i \ne j} \left(1 - \Phi\!\left(\frac{x - \mu_i}{\sigma_i}\right)\right) dx. \]

The probability calculation uses the usual notation: \(\phi\) is the standard normal probability density (pdf), \(\Phi\) is the cumulative distribution function (cdf) of the standard normal distribution, and \(\mu_k\) and \(\sigma_k\) are the means and standard deviations computed in Table 1.

Analytic integration of the probability is impractical, and even the numerical integration of

\[ H_j(x) = \frac{1}{\sigma_j} \phi\!\left(\frac{x - \mu_j}{\sigma_j}\right) \prod_{i \ne j} \left(1 - \Phi\!\left(\frac{x - \mu_i}{\sigma_i}\right)\right) \]

is challenging. Taking \(H_{20}(x)\) as an example, the product term

\[ \prod_{i \ne 20} \left(1 - \Phi\!\left(\frac{x - \mu_i}{\sigma_i}\right)\right) \]

is approximately 1 for \(x < \mu_{19} - 3\sigma_{19} \approx 221.5402\). Since each factor is less than 1 and \(1 - \Phi((x - \mu_{19})/\sigma_{19}) \approx 0\) for \(x > \mu_{19} + 3\sigma_{19} \approx 304.7756\), the product decreases to approximately 0 for \(x > 304.7756\). The other factor \(\frac{1}{\sigma_{20}} \phi\!\left(\frac{x - \mu_{20}}{\sigma_{20}}\right) \approx 0\) outside the interval \((\mu_{20} - 3\sigma_{20},\ \mu_{20} + 3\sigma_{20}) \approx (214.4244, 285.5756)\) and has a maximum value of \(\phi(0)/\sigma_{20} \approx 0.03364\) at \(x = 250\). Therefore, \(H_{20}(x)\) is approximately 0 outside the interval \((214.4244, 285.5756)\) and reaches a maximum of approximately \(H_{20}(250) \approx 0.02633\). Computing the product of so many small magnitude factors could lead to numerical underflow, so instead the exponential of the sum and difference of the respective logarithms is computed.

The low variation, with an aspect ratio of \(\left(\phi(0)/\sigma_{20}\right)/(6\sigma_{20}) \approx 1/2115\), makes even adaptive integration challenging. The default integration routine of R gives a nonsensical answer of 1.311008. With the additional R library pracma of advanced numerical analysis routines, the quad function using adaptive Simpson quadrature yields 0.7086704, and the quadl function using adaptive Lobatto quadrature yields the very similar 0.7078003. Numerical integration with Scientific Python using either general purpose integration or Romberg integration gives similar results.
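For comparison with the R and Scientific Python results quoted above, here is a self-contained sketch using only the Python standard library (`statistics.NormalDist` supplies \(\phi\) and \(\Phi\)); the integration interval and step count are choices of this sketch, not from the text:

```python
from math import exp, log
from statistics import NormalDist

# Normal approximations to each horse's hitting time, from Theorem 2.
k = 100
dists = []
for horse in range(1, 21):
    p = 0.50 + 0.02 * horse
    mean = 2 * k / (2 * p - 1)
    sd = (8 * k * p * (1 - p) / (2 * p - 1) ** 3) ** 0.5
    dists.append(NormalDist(mean, sd))

def integrand(x, j):
    # density that horse j finishes at x, times the probability that
    # every other horse finishes later; summing logs avoids underflow
    log_prod = sum(log(1.0 - d.cdf(x)) for i, d in enumerate(dists) if i != j)
    return dists[j].pdf(x) * exp(log_prod)

# Composite Simpson's rule over (mu_20 - 6 sigma_20, mu_20 + 6 sigma_20),
# the interval where H_20(x) is not negligible.
j = 19
a = dists[j].mean - 6 * dists[j].stdev
b = dists[j].mean + 6 * dists[j].stdev
m = 2000
h = (b - a) / m
win20 = h / 3 * sum(
    (1 if i in (0, m) else 4 if i % 2 else 2) * integrand(a + i * h, j)
    for i in range(m + 1)
)
```

On this bracketed interval the integrand is smooth, and `win20` lands near the pracma quadrature values of about 0.708.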

Therefore we estimate horse 20
has about a 71%
chance of winning. Repeating the analysis with the necessary changes, horse
19 has about a
21% chance of winning.
One of the other 18 horses
winning has about an 8%
chance.

The Hitting Time Theorem Under Rotation Invariance

Fix \(n \ge 1\). Let \((Y_1,\dots,Y_n)\) be a sequence of random variables \(Y_i\) taking values in \(\{\dots,-2,-1,0,1\}\). Define the walk \(T = (T_0,\dots,T_n)\) as the usual piecewise linear function defined by the values \(T_i = \sum_{j=1}^{i} Y_j\) at the integers \(i = 0,1,2,\dots,n\), with \(T_0 = 0\). Note also that because the only positive value assumed by the steps \(Y_i\) is 1, for the random walk \(T\) to gain a height \(k\), it has to pass through all \(k-1\) intermediate integer values. Given \(r\) with \(0 < r \le n\), define the rotation of \(T\) by \(r\) as the walk \(T^{(r)} = (T^{(r)}_0,\dots,T^{(r)}_n)\) corresponding to the rotated sequence \((Y_{r+1},\dots,Y_n,Y_1,\dots,Y_r)\). Another way to represent the rotation of the sequence is

\[ T^{(r)}_t = \begin{cases} T_{t+r} - T_r & 0 \le t \le n - r; \\ T_n - T_r + T_{t+r-n} & n - r < t \le n. \end{cases} \]

Note that \(T^{(r)}_n = T_n - T_r + T_{n+r-n} = T_n\), and the entire walk rotated by \(n\) is the original walk, \(T^{(n)} = T\). We say that \(T^{(r)}\) peaks at \(n\) if \(T^{(r)}_t < T^{(r)}_n = T_n\) for all \(t < n\). See Table 2 for an example.
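A few lines of code make the rotation concrete. This sketch (helper names assumed) rebuilds the \(r = 5\) rotation of the walk in Table 2:

```python
# Rotate the step sequence (Y_1, ..., Y_n) by r, then rebuild the walk
# T^(r) as cumulative sums; the walk below is the one in Table 2.
def rotate(ys, r):
    return ys[r:] + ys[:r]

def walk_values(ys):
    path, total = [], 0
    for y in ys:
        total += y
        path.append(total)
    return path  # T_1, ..., T_n (T_0 = 0 is implicit)

ys = [-1, 1, -1, -1, 1, 1, -1, -1, 1, 1]
t5 = walk_values(rotate(ys, 5))
# t5 reproduces the T^(5) row of Table 2: [1, 0, -1, 0, 1, 0, 1, 0, -1, 0]
```

Note that the endpoint is unchanged by rotation, as the representation above requires: `t5[-1]` equals `walk_values(ys)[-1]`.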

j         1   2   3   4   5   6   7   8   9  10

Y_j      -1   1  -1  -1   1   1  -1  -1   1   1
T_j      -1   0  -1  -2  -1   0  -1  -2  -1   0
Y^(5)_j   1  -1  -1   1   1  -1   1  -1  -1   1
T^(5)_j   1   0  -1   0   1   0   1   0  -1   0

Table 2: Example of a rotation of a Bernoulli random walk with \(Y_i = \pm 1\), \(n = 10\), and \(r = 5\).

Lemma 3. If \(T_n = k \ge 1\), then exactly \(k\) rotations of \(T\) peak at \(n\).

Proof. Set \(M = \max\{T_0,\dots,T_n\}\). If \(T_r \le T_s\) for some \(r\) and \(s\) such that \(0 \le s < r \le n\), then \(T^{(r)}_{n-r+s} = T_n + T_s - T_r \ge T_n\), so \(T^{(r)}\) can only peak at \(n\) if \(r = r_i = \min\{t > 0 \mid T_t = i\}\) for some \(i\) satisfying \(0 = T_0 < i \le M\). That is, \(T^{(r)}\) can only peak at \(n\) if there are no indices prior to \(r\) with values greater than \(T_r\). Since the values of \(T_j\) have to pass through all intermediate integer values, \(r\) must be \(r_i = \min\{t > 0 \mid T_t = i\}\). The property of passing through all intermediate values implies that \(0 < r_1 < r_2 < \cdots < r_M\). Therefore, for \(1 \le i \le M\),

\[ \max\{T^{(r_i)}_t \mid 0 \le t \le n - r_i\} = \max\{T_t : r_i \le t \le n\} - T_{r_i} = M - i, \]

and

\[ \max\{T^{(r_i)}_t \mid n - r_i \le t < n\} = T_n + \max\{T_t \mid 0 \le t < r_i\} - T_{r_i} = T_n - 1. \]

It follows that \(T^{(r_i)}\) peaks at \(n\) if and only if \(M - T_n < i \le M\). □
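Lemma 3 is easy to check numerically. For the walk of Table 3, \(T_{10} = 4\), so exactly 4 of the 10 rotations should peak at \(n = 10\); this sketch (helper names assumed) confirms which ones:

```python
# A rotated walk peaks at n when every earlier value, including T_0 = 0,
# is strictly below the final value T_n.
def peaks_at_n(ys):
    path, total = [0], 0
    for y in ys:
        total += y
        path.append(total)
    return all(v < path[-1] for v in path[:-1])

ys = [1, 1, -1, 1, -1, 1, -1, 1, 1, 1]   # walk of Table 3, T_10 = 4
peaking = [r for r in range(1, 11) if peaks_at_n(ys[r:] + ys[:r])]
# peaking == [1, 2, 9, 10], the rotations by r_1, r_2, r_3, r_4
```

The count `len(peaking)` equals \(T_n = 4\), exactly as the lemma predicts.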

Example. This example is a specific illustration of the proof of the lemma with \(n = 10\) and the random walk given in Table 3.

j         1   2   3   4   5   6   7   8   9  10

Y_j       1   1  -1   1  -1   1  -1   1   1   1
T_j       1   2   1   2   1   2   1   2   3   4
Y^(1)_j   1  -1   1  -1   1  -1   1   1   1   1
T^(1)_j   1   0   1   0   1   0   1   2   3   4
Y^(2)_j  -1   1  -1   1  -1   1   1   1   1   1
T^(2)_j  -1   0  -1   0  -1   0   1   2   3   4
Y^(3)_j   1  -1   1  -1   1   1   1   1   1  -1
T^(3)_j   1   0   1   0   1   2   3   4   5   4
Y^(4)_j  -1   1  -1   1   1   1   1   1  -1   1
T^(4)_j  -1   0  -1   0   1   2   3   4   3   4
Y^(5)_j   1  -1   1   1   1   1   1  -1   1  -1
T^(5)_j   1   0   1   2   3   4   5   4   5   4
Y^(6)_j  -1   1   1   1   1   1  -1   1  -1   1
T^(6)_j  -1   0   1   2   3   4   3   4   3   4
Y^(7)_j   1   1   1   1   1  -1   1  -1   1  -1
T^(7)_j   1   2   3   4   5   4   5   4   5   4
Y^(8)_j   1   1   1   1  -1   1  -1   1  -1   1
T^(8)_j   1   2   3   4   3   4   3   4   3   4
Y^(9)_j   1   1   1  -1   1  -1   1  -1   1   1
T^(9)_j   1   2   3   2   3   2   3   2   3   4

Table 3: Example of the lemma with \(n = 10\), counting the rotations which peak at 10.

For the example, \(M = \max\{T_0,\dots,T_{10}\} = 4\). For the example, \(1 = T_5 \le T_4 = 2\) for \(r = 5\) and \(s = 4\), such that \(0 \le 4 = s < r = 5 \le n = 10\). Now consider \(T^{(5)}_{10-5+4} = T^{(5)}_9 = T_{10} + T_4 - T_5 = 4 + 2 - 1 = 5 \ge 4\). Therefore, \(T^{(5)}\) cannot peak at \(n = 10\). Table 2 also illustrates a rotation that cannot peak at \(n = 10\).

The definition of \(r_i\) is \(r_i = \min\{t > 0 \mid T_t = i\}\). For the example, \(0 < 1 = r_1 < 2 = r_2 < 9 = r_3 < 10 = r_4\). The next point in the lemma is that for \(1 \le i \le M\),

\[ \max\{T^{(r_i)}_t \mid 0 \le t \le n - r_i\} = \max\{T_t : r_i \le t \le n\} - T_{r_i} = M - i. \]

For this example, Table 3 gives all values for comparison, recalling that \(T^{(10)} = T\).

r_i    max{T^(r_i)_t | 0 ≤ t ≤ n−r_i}    max{T_t : r_i ≤ t ≤ n} − T_{r_i}    M−i

 1                  3                              4−1=3                    4−1=3
 2                  2                              4−2=2                    4−2=2
 9                  1                              4−3=1                    4−3=1
10                  0                              4−4=0                    4−4=0

Finally, the last point in the lemma is that for \(1 \le i \le M\),

\[ \max\{T^{(r_i)}_t : n - r_i \le t < n\} = T_n + \max\{T_t \mid 0 \le t < r_i\} - T_{r_i} = T_n - 1. \]

For this example, Table 3 gives all values for comparison, recalling that \(T^{(10)} = T\).

Definition. Call two walks equivalent if one is a rotation of the other. This defines an equivalence relation on the set of \(n\)-step walks.
Theorem 4 (Hitting Time Theorem under rotation invariance). If \(k \ge 1\) and the joint probability distribution of \((Y_1,\dots,Y_n)\) is invariant under rotations, then

\[ \mathbb{P}\left[T_n = k,\ T_t < k \text{ for all } t < n\right] = \frac{k}{n} \cdot \mathbb{P}\left[T_n = k\right]. \]

Proof. Consider a single equivalence class, and let \(x = (x_1,\dots,x_n)\) be the sequence of increments of one member of the class. Every walk in the class ends at the same height \(k\), and the class contains either \(n\) walks, or \(n/p\) walks for some divisor \(p\) of \(n\) if \((x_1,\dots,x_n)\) happens to be periodic. In either case, the lemma implies that if \(k \ge 1\) and the joint probability distribution of \(Y = (Y_1,\dots,Y_n)\) is invariant under rotations, then

\[ \mathbb{P}\left[T_n = k,\ T_t < k \text{ for all } t < n,\ Y \text{ equivalent to } x\right] = \frac{k}{n} \cdot \mathbb{P}\left[T_n = k,\ Y \text{ equivalent to } x\right]. \]

Choose one sequence of increments \(x\) from each equivalence class and sum over them to obtain the result. □

Remark. The condition that the joint probability distribution of \((Y_1,\dots,Y_n)\) is invariant under rotations here is weaker than the traditional requirement that the \(Y_i\) be independent and identically distributed.

Remark. This proof is another version of the Cycle Lemma from the proof of the Ballot Theorem.

The Hitting Time Theorem Under Exchangeability

Definition. A random vector \((Y_1,\dots,Y_n)\) is exchangeable if the joint probability distribution of \((Y_1,\dots,Y_n)\) is invariant under any permutation of \((Y_1,\dots,Y_n)\).

Remark. The proof is not given here; see [?]. The idea of the proof is that the joint probability distribution of \((Y_1,\dots,Y_n)\) conditioned on \(\sum_{i=1}^{n} Y_i = k\) is still exchangeable. This implies that the conditional expectation \(\mathbb{E}[Y_i \mid T_n = 0]\) is independent of \(i\). Then the conditional expectation step still holds, allowing the completion of the induction argument.

Sources

The history and background is adapted from [?]. The proof for independent, identically distributed random variables is adapted from [?]. The proof under rotation invariance is adapted from the article by Kager, [?]. The comments about exchangeability are adapted from [?].

Algorithms, Scripts, Simulations

Algorithm

Enumerating all walks

Set the number \(a\) of upsteps and the number \(b\) of downsteps. Then the total number of steps is the length \(l = a + b\). There are then \(\binom{a+b}{b}\) ways to place the downsteps. In a sequence of nested loops, systematically place the \(b\) downsteps in an \((a+b) \times \binom{a+b}{b}\) matrix of ones. Then cumulatively sum this matrix down the columns to get all possible walks, and save the matrix for later plotting or analysis. Note that this algorithm does not scale well because of the inherent growth of the binomial coefficients.
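A Python transcription of this enumeration (using `itertools` combinations rather than the nested loops described; names assumed):

```python
from itertools import accumulate, combinations

# Build every walk with a upsteps and b downsteps: choose each of the
# C(a+b, b) down-step position sets, then cumulatively sum the steps.
def all_walks(a, b):
    n = a + b
    walks = []
    for downs in combinations(range(n), b):
        steps = [-1 if i in downs else 1 for i in range(n)]
        walks.append(list(accumulate(steps)))
    return walks

walks = all_walks(4, 2)
# len(walks) == 15 == C(6, 2), and every walk ends at height 4 - 2 = 2
```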

Simulating the Lucky Derby Example

Set all parameters for the simulations, using a sufficient length of the race for most of the fastest horses to complete the race. Simulate all races in a vectorized manner, using the usual technique of creating uniformly distributed random variates, comparing row-wise to the Bernoulli probability, and cumulatively summing. Then, using vectorized match functions, extract the index of the winning horse (breaking ties with minimum index), and count the number of times each horse wins with the histogram counter.
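A plain-Python version of this simulation (a step-by-step loop rather than the vectorized array technique described; the race length, seed, and race count are arbitrary choices of this sketch):

```python
import random

# Simulate one Lucky Derby race; all horses step simultaneously,
# ties broken by minimum index as in the algorithm description.
def race(ps, finish=200, max_steps=100000):
    positions = [0] * len(ps)
    for _ in range(max_steps):
        for i, p in enumerate(ps):
            positions[i] += 1 if random.random() < p else -1
        winners = [i for i, pos in enumerate(positions) if pos >= finish]
        if winners:
            return winners[0]
    return None

ps = [0.50 + 0.02 * h for h in range(1, 21)]
random.seed(2)
wins = [0] * 20
for _ in range(200):
    wins[race(ps)] += 1
# horse 20 (index 19) should collect the most wins, roughly 70 percent
```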

Computing the Win Probability in the Lucky Derby Example

Using a normal approximation to the first hitting time distribution for a Bernoulli random walk, construct the probability density for the 20th walk winning by multiplying the density and the survival functions, assuming independence. Then, using different numerical quadrature routines, compute the approximate total probability of the 20th walker winning.

Problems to Work for Understanding

Use the Hitting Time Theorem to prove the Ballot Theorem.

Using the Reflection Principle, prove that the probability that an \(n\)-step random walk taking independent and identically distributed steps \(\pm 1\) with probability \(1/2\) stays positive after time 0, given that it ends at height \(k > 0\), is \(k/n\).

Write a paragraph explaining in detail the differences and connections among the Hitting Time Theorem, the Positive Walks Theorem, and the Gambler's Ruin. Provide specific examples using numerical values for the parameters.

Consider the case of a 7-step walk with \(Y_i = \pm 1\) with 4 steps \(+1\) and 3 steps \(-1\). Make a chart of all possible walks, counting those that satisfy the Hitting Time Theorem.


Show that

\[ \sum_{n=k}^{\infty} P(2n; 2k, p) = 1 \quad \text{(sums to 1)}, \]

\[ \sum_{n=k}^{\infty} 2n \, P(2n; 2k, p) = \frac{2k}{2p-1} \quad \text{(computing the mean)}, \]

\[ \sum_{n=k}^{\infty} \left(2n - \frac{2k}{2p-1}\right)^2 P(2n; 2k, p) = \frac{8kp(1-p)}{(2p-1)^3} \quad \text{(computing the variance)}. \]

Modify the scripts for the probability of winning the horse race to calculate the probability of horses 18, 17, … winning. At what stage do the calculations become infeasible?

If available, modify the scripts for the probability of winning the horse race to calculate the probability using either different numerical integration algorithms or infinite intervals or both. How do the results compare to the results in the horse race example?

Modify the scripts to display all walks with 4 steps \(+1\) and 3 steps \(-1\) to illustrate the Hitting Time Theorem.

Consider

\[ \mathbb{P}\left[T_n = k,\ T_t < k \text{ for all } t < n\right] = \frac{k}{n} \cdot \mathbb{P}\left[T_n = k\right]. \]

Provide a detailed argument that when \(n = 1\), both sides are 0 when \(k > 1\) or \(k = 0\), and are equal to \(\mathbb{P}[Y_1 = 1]\) when \(k = 1\). This is the base case of the induction.

Give an example of a set of random variables with joint probability
distribution that is exchangeable, but the set of random variables
violates the condition “independent and identically distributed”.
Keep the example as small and simple as possible.

Give an example of a set of random variables with joint
probability distribution that is invariant under rotations, but the
set of random variables violates the condition “independent and
identically distributed”. Keep the example as small and simple as
possible.

Give an example of a set of random variables with joint probability distribution that is exchangeable, but the set of random variables violates the condition of invariance under rotations. Keep the example as small and simple as possible.

Reading Suggestion:

Outside Readings and Links:

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable effort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don't guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reflects the thoughts, interests and opinions of its author. They do not explicitly represent official positions or policies of my employer.