Schedules of reinforcement are the rules that determine how often
an organism is reinforced for a particular behavior. The pattern of
reinforcement, in turn, shapes the animal's pattern of responding.
A schedule of reinforcement is either continuous or partial. The
Fire Chief Rabbit to the left was not reinforced every time it pulled
the lever that "operated" the fire truck; in other words, the rabbit's
lever pulling was reinforced on a partial, or intermittent,
schedule. There are four basic partial schedules of
reinforcement. These schedules are based on reinforcing the
behavior as a function of (a) the number of responses that have occurred or
(b) the length of time since the last reinforcer was available. The
four basic partial schedules are: Fixed Ratio, Variable Ratio, Fixed Interval,
and Variable Interval.

Continuous Schedule

The continuous schedule of
reinforcement involves the delivery of a reinforcer every single time that a
desired behavior is emitted. Behaviors are learned quickly with a
continuous schedule of reinforcement and the schedule is simple to
use. As a rule of thumb, it helps to reinforce the animal every
time it performs the behavior while the behavior is being learned. Later,
once the behavior is well established, the trainer can switch to a partial
(intermittent) schedule. If Keller Breland (left) reinforces the behavior
(touching the ring with nose) every time the behavior occurs, then Keller is
using a continuous schedule.
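The rule behind a continuous schedule can be sketched in a few lines of Python (the function name and the list-of-responses representation are illustrative, not part of any standard toolkit):

```python
def continuous_schedule(responses):
    """Continuous reinforcement (CRF): every response is reinforced."""
    # Each instance of the behavior earns a reinforcer.
    return [True for _ in responses]

# Three ring-touches -> three reinforcers.
decisions = continuous_schedule(["touch", "touch", "touch"])
print(sum(decisions))  # 3
```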

Partial (Intermittent) Schedule

With a partial
(intermittent) schedule, only some of the instances of behavior are
reinforced, not every instance. Behaviors are shaped and learned more
slowly with a partial schedule of reinforcement (compared to a continuous
schedule). However, behavior reinforced under a partial schedule is more
resistant to extinction.

Partial schedules of reinforcement are based either on a time
interval that must pass before the next reinforcer becomes available, or
on the number of responses that must occur before the next response is
reinforced. Schedules based on how many responses have occurred are
referred to as ratio schedules and can be either fixed-ratio or
variable-ratio schedules. Schedules based on elapsed time are referred
to as interval schedules and can be either fixed-interval or
variable-interval schedules.

Fixed Ratio Schedule

Ratio schedules involve
reinforcement after a certain number of responses have been emitted. The
fixed ratio schedule uses a constant number of responses. For
example, if the rabbit is reinforced after every fifth lever pull, it is
being reinforced on an FR 5 schedule.
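An FR schedule can be sketched as a simple counting rule in Python (the function name and parameters are illustrative only):

```python
def fixed_ratio(n_responses, ratio=5):
    """FR schedule: reinforce every `ratio`-th response (e.g. FR 5)."""
    # Response i (counting from 1) is reinforced when i is a multiple of `ratio`.
    return [i % ratio == 0 for i in range(1, n_responses + 1)]

# Ten lever pulls on FR 5: reinforcers arrive after pulls 5 and 10.
print(fixed_ratio(10))  # [False]*4 + [True] + [False]*4 + [True]
```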

Variable Ratio Schedule

Variable ratio schedules deliver reinforcement after a number of
responses that varies around an average. For
example, the Fire Chief Rabbit's lever pulling, which made it appear that it
was operating the fire truck, was reinforced on a variable-ratio
schedule. Reinforcement occurred after an average of 3 pulls on the
lever. Sometimes the reinforcer was delivered after 2 pulls, sometimes
after 4 pulls, sometimes after 3 pulls, etc. If the average was about
every 3 pulls, this would be a VR 3 schedule. Variable ratio schedules
maintain high and steady rates of the desired behavior, and the behavior is
very resistant to extinction.
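A minimal sketch of the VR 3 rule described above, assuming for simplicity that each required count is drawn uniformly from 2 to 4 pulls (real VR schedules can use other distributions with the same mean):

```python
import random

def variable_ratio(n_responses, mean_ratio=3, rng=None):
    """VR schedule sketch: reinforce after a response count that varies
    around `mean_ratio` (drawn uniformly from mean-1 to mean+1 here,
    a simplifying assumption)."""
    rng = rng or random.Random(42)          # seeded for reproducibility
    decisions, since_last = [], 0
    target = rng.randint(mean_ratio - 1, mean_ratio + 1)
    for _ in range(n_responses):
        since_last += 1
        if since_last >= target:            # this pull earns the reinforcer
            decisions.append(True)
            since_last = 0
            target = rng.randint(mean_ratio - 1, mean_ratio + 1)
        else:
            decisions.append(False)
    return decisions

# Over 30 pulls, reinforcers arrive unpredictably every 2-4 pulls,
# averaging about every 3 pulls.
pulls = variable_ratio(30)
```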

Fixed Interval Schedule

Interval schedules involve
reinforcing a behavior after an interval of time has passed. In a fixed
interval schedule, the interval of time is always the same. In an FI
3-second schedule, the first response after three seconds have passed will be
reinforced, but no response made before the three seconds have passed will be
reinforced. ABE did not use this type of schedule very often.
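The FI rule can be sketched by tracking the time of the last reinforcer (the function name and the list-of-response-times representation are illustrative only):

```python
def fixed_interval(response_times, interval=3.0):
    """FI schedule: the first response at least `interval` seconds after
    the previous reinforcer is reinforced; earlier responses are not."""
    decisions, last_reinforcer = [], 0.0
    for t in response_times:
        if t - last_reinforcer >= interval:
            decisions.append(True)          # first response after the interval
            last_reinforcer = t             # the interval restarts here
        else:
            decisions.append(False)
    return decisions

# FI 3 s: responses at 1 s and 2 s go unreinforced; the 3.5 s response is
# reinforced, the clock restarts, so 4 s is not reinforced but 7 s is.
print(fixed_interval([1.0, 2.0, 3.5, 4.0, 7.0]))  # [False, False, True, False, True]
```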

Variable Interval Schedule

Variable interval schedules involve reinforcing a behavior after a variable
interval of time has passed. In a variable interval schedule, the interval of
time is not always the same but varies around some average length of
time. In a VI 3-second schedule, the first response after an interval
averaging three seconds will be reinforced; no response made before that
interval has elapsed will be reinforced. After an
animal learns the schedule, the rate of behavior tends to be steadier than
with a fixed interval schedule. ABE did not use this type of schedule
very often.
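The VI rule differs from FI only in that each new interval is drawn at random around the mean. A sketch, assuming for simplicity that intervals are drawn uniformly between 0.5x and 1.5x the mean (not the only way to generate a VI schedule):

```python
import random

def variable_interval(response_times, mean_interval=3.0, rng=None):
    """VI schedule sketch: like FI, but each required interval is drawn
    around `mean_interval` (uniform on 0.5x-1.5x the mean here, a
    simplifying assumption)."""
    rng = rng or random.Random(42)          # seeded for reproducibility
    decisions, last_reinforcer = [], 0.0
    wait = rng.uniform(0.5 * mean_interval, 1.5 * mean_interval)
    for t in response_times:
        if t - last_reinforcer >= wait:
            decisions.append(True)          # first response after this interval
            last_reinforcer = t
            wait = rng.uniform(0.5 * mean_interval, 1.5 * mean_interval)
        else:
            decisions.append(False)
    return decisions

# Respond every half second for 20 seconds; reinforcers arrive at
# unpredictable times averaging roughly every 3 seconds.
times = [0.5 * i for i in range(1, 41)]
decisions = variable_interval(times)
```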