Faults and fault-tolerance
One of the selling points of a distributed system is that the
system will continue to perform even if some components /
processes fail.
Cause and effect
• Study what causes what.
• We view the effect of failures at our level of
abstraction, and then try to mask it, or recover from it.
• Be familiar with the terms MTBF (Mean Time Between
Failures) and MTTR (Mean Time To Repair)
Classification of failures
• Omission failure
• Crash failure
• Software failure
• Transient failure
• Temporal failure
• Security failure
• Byzantine failure
Crash failures
Crash failure is irreversible. How can we distinguish between a process
that has crashed and a process that is running very slowly?
In a synchronous system, it is easy to detect a crash failure (using heartbeat
signals and timeouts), but in asynchronous systems, detection is never accurate.
Some failures may be complex and nasty. Arbitrary deviation from
program execution is a form of failure that may not be as “nice” as a
crash. Fail-stop failure is a simple abstraction that mimics crash failure
even when program execution becomes arbitrary. Such implementations help
detect which processor has failed. If a system cannot tolerate fail-stop
failure, then it cannot tolerate crash failure either.
Omission failures
Message lost in transit. May happen due to
various causes, like
– Transmitter malfunction
– Buffer overflow
– Collisions at the MAC layer
– Receiver out of range
Transient failure
(Hardware) Arbitrary perturbation of the global state. May be
induced by power surge, weak batteries, lightning, radio-
frequency interferences etc.
(Software) Heisenbugs, are a class of temporary internal
faults and are intermittent. They are essentially permanent
faults whose conditions of activation occur rarely or are not
easily reproducible, so they are harder to detect during the
testing phase.
Over 99% of bugs in IBM DB2 production code are non-
deterministic and transient
Byzantine failure
Anything goes! Includes every conceivable form
of erroneous behavior.
Numerous possible causes. Includes malicious
behaviors (like a process executing a different program
instead of the specified one) too.
Most difficult kind of failure to deal with.
Software failures
• Coding error or human error
• Design flaws
• Memory leak
• Incomplete specification (example: Y2K)
Many failures (like crash, omission etc) can be caused by
software bugs too.
Specification of faulty behavior
program example1;
define x : boolean (initially x = true);
{a, b are messages}
do {S}: x → send a     {specified action}
□ {F}: true → send b   {faulty action}
od

A possible trace: aaaabaaabbaaaaaaa…
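The faulty behavior above can be exercised with a small simulation. The sketch below is an illustrative Python fragment (the function name and the `fault_prob` parameter are my own); at each step either the specified action S or the fault action F fires, producing traces like the one shown:

```python
import random

def run_example1(steps=10, fault_prob=0.3, seed=5):
    """Simulate program example1: guard x (always true here) enables the
    specified action 'send a'; the fault action 'send b' is always
    enabled and fires with probability fault_prob."""
    rng = random.Random(seed)
    x, trace = True, []
    for _ in range(steps):
        if x and rng.random() >= fault_prob:
            trace.append('a')   # {S}: x -> send a  {specified action}
        else:
            trace.append('b')   # {F}: true -> send b  {faulty action}
    return ''.join(trace)
```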
Specifying Byzantine Faults
program example2;
define j : integer, flag : boolean, x : buffer;
{a, b are messages}
initially j = 0, flag = false;
do ~flag ∧ message = a → x := a; flag := true
□ (j < N) ∧ flag → send x to j; j := j + 1
□ j = N → j := 0; flag := false
od

F : (Byzantine) flag → x := b  {b ≠ a}
Specifying Byzantine Faults
program example3;
define k : integer, x : boolean;
initially k = 0, x = true;
S: do k > 2 → send k; k := k + 1
□ x ∧ (k = 2) → send k; k := k + 1
□ k ≥ 3 → k := 0
F: □ x → x := false
□ ~x ∧ (k = 2) → send 9; k := k + 1
od
Specifying Temporal Failures
program example4 {for process j};
define f[i] : boolean {initially f[i] = false};
S: do ~f[i] ∧ message received from process i → skip
F: □ timeout (i,j) → f[i] := true
od
Fault-tolerance
F-intolerant vs. F-tolerant systems: an F-tolerant system is one that
tolerates failures of type F.
Four types of tolerance:
- Masking
- Non-masking
- Fail-safe
- Graceful degradation
Fault-tolerance
P is the invariant of the original fault-free system.
Q represents the worst possible behavior of the system when failures
occur; it is called the fault span, and P ⊆ Q.
Q is closed under S or F.
Fault-tolerance
Masking tolerance: P = Q
(neither safety nor liveness is violated).
Non-masking tolerance: P ⊂ Q
(safety property may be temporarily violated, but not liveness).
Eventually the safety property is restored.
Classifying fault-tolerance
Masking tolerance.
Application runs as it is. The failure does not have a visible impact.
All properties (both liveness & safety) continue to hold.
Non-masking tolerance.
Safety property is temporarily affected, but not liveness.
Example 1. Clocks lose synchronization, but recover soon thereafter.
Example 2. Multiple processes temporarily enter their critical sections,
but thereafter, the normal behavior is restored.
Backward error-recovery vs. forward error-recovery
Backward vs. forward error recovery
Backward error recovery
When the safety property is violated, the computation rolls
back and resumes from a previous correct state.
Forward error recovery
The computation does not care about getting the history right, but
moves on, as long as eventually the safety property is restored.
True for stabilizing systems.
Classifying fault-tolerance
Fail-safe tolerance
Given safety predicate is preserved, but liveness may be affected
Example. Due to a failure, no process can enter its critical section for
an indefinite period. In a traffic crossing, a failure turns the lights in
both directions red.
Graceful degradation
Application continues, but in a “degraded” mode. Much depends on
what kind of degradation is acceptable.
Example. Consider message-based mutual exclusion. Processes will
enter their critical sections, but not in timestamp order.
Failure detection
The design of fault-tolerant systems becomes easier if
failures can be detected. It depends on
1. the system model, and
2. the type of failure.
Asynchronous systems are trickier. We first focus
on synchronous systems only.
Detection of crash failures
Crash failures can be detected using heartbeat messages
(a periodic “I am alive” broadcast) and timeouts, provided that
- the largest time to execute a step is known, and
- channel delays have a known upper bound.
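This detection scheme can be sketched in Python. The fragment below is illustrative (the class name and the `PERIOD` / `MAX_DELAY` constants are assumptions); it declares a process crashed once it has been silent longer than one heartbeat period plus the channel delay bound:

```python
import time

# Hypothetical heartbeat-based crash detector for a synchronous system:
# each process is assumed to broadcast "I am alive" at least every PERIOD
# seconds, and channel delay is assumed bounded by MAX_DELAY seconds.
PERIOD = 1.0
MAX_DELAY = 0.5
TIMEOUT = PERIOD + MAX_DELAY   # silence beyond this implies a crash

class CrashDetector:
    def __init__(self, processes):
        now = time.monotonic()
        self.last_heard = {p: now for p in processes}

    def on_heartbeat(self, p):
        # record the arrival time of p's latest "I am alive" message
        self.last_heard[p] = time.monotonic()

    def suspects(self):
        # every process silent for longer than TIMEOUT is declared crashed
        now = time.monotonic()
        return {p for p, t in self.last_heard.items() if now - t > TIMEOUT}
```

In an asynchronous system this same detector is unreliable: a slow process is indistinguishable from a crashed one.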
Detection of omission failures
For FIFO channels: use sequence numbers with messages.
For non-FIFO channels with a bounded propagation delay: use timeouts.
What about non-FIFO channels for which the upper bound of the
delay is not known? Use unbounded sequence numbers and
acknowledgments. But acknowledgments may be lost too, causing
unnecessary re-transmission of messages :-(
Let us look at how a real protocol deals with omission…
Tolerating crash failures
Triple modular redundancy (TMR) masks any single failure: three units
B0, B1, and B2 each compute f(x) on the same input x, and a voter
outputs the majority of the three results.
N-modular redundancy masks up to m failures, when N = 2m + 1.
What if the voting unit fails?
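A minimal Python sketch of the voting unit (function names are my own): with N = 2m + 1 replicas, the majority value survives up to m faulty outputs.

```python
from collections import Counter

def vote(outputs):
    """Majority vote over replica outputs; with N = 2m + 1 replicas,
    up to m faulty values are masked."""
    value, count = Counter(outputs).most_common(1)[0]
    assert count > len(outputs) // 2, "no majority: too many failures"
    return value

def tmr(f, x, faulty_output=None):
    """TMR (N = 3, m = 1): run three replicas of f on x and vote.
    faulty_output, if given, is injected into one replica to model
    a single failure."""
    results = [f(x), f(x), f(x)]
    if faulty_output is not None:
        results[1] = faulty_output   # inject a single faulty replica
    return vote(results)
```

The slide's closing question still applies: the voter itself is a single point of failure unless it, too, is replicated.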
Tolerating omission failures
A central theme in networking: routers may drop messages, but
reliable end-to-end transmission is an important requirement.
This implies that the communication must tolerate loss,
duplication, and re-ordering of messages.
Stenning’s protocol
{program for process S}
define ok : boolean; next : integer;
initially next = 0, ok = true, both channels are empty;
do ok → send (m[next], next); ok := false
□ (ack, next) is received → ok := true; next := next + 1
□ timeout (R,S) → send (m[next], next)
od

{program for process R}
define r : integer;
initially r = 0;
do (m[ ], s) is received ∧ s = r → accept the message;
send (ack, r); r := r + 1
□ (m[ ], s) is received ∧ s ≠ r → send (ack, r-1)
od
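The protocol can be simulated in Python. This is a simplified sketch, not the literal guarded-command program: loss is injected with a seeded random stream, and the timeout branch is modeled by simply re-entering the loop to resend.

```python
import random

def stenning(messages, loss=0.3, seed=1):
    """Simulate Stenning's stop-and-wait protocol over channels that
    lose both messages and acks; a timeout triggers retransmission."""
    rng = random.Random(seed)
    next_, r, delivered = 0, 0, []          # sender seq, receiver seq
    while next_ < len(messages):
        # sender: send (m[next], next); a lost message models omission
        if rng.random() < loss:
            continue                         # message lost; timeout resends
        s = next_                            # sequence number received
        if s == r:                           # receiver: accept in order
            delivered.append(messages[s])
            r += 1
        # receiver sends its ack; the ack itself may also be lost
        if rng.random() < loss:
            continue                         # ack lost; sender retransmits
        next_ = r                            # sender advances on ack
    return delivered
```

Duplicates caused by lost acks are rejected by the `s == r` test, so each message is delivered exactly once and in order.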
Observations on Stenning’s protocol
Both messages and acks may be lost.
Q. Why is the last ack reinforced by R when s ≠ r?
A. It is needed to guarantee progress.
Progress is guaranteed, but the protocol
is inefficient due to low throughput.
Sliding window protocol
The sender continues the send action
without receiving the acknowledgements of at most
w messages (w > 0); w is called the window size.
The sender may transmit m[next] for any next with
last + 1 ≤ next ≤ last + w, where last is the highest
sequence number acknowledged so far; the receiver
accepts messages in order.
Sliding window protocol
{program for process S}
define next, last, w : integer;
initially next = 0, last = -1, w > 0;
do last + 1 ≤ next ≤ last + w → send (m[next], next); next := next + 1
□ (ack, j) is received → if j > last → last := j □ j ≤ last → skip fi
□ timeout (R,S) → next := last + 1 {retransmission begins}
od

{program for process R}
define j : integer;
initially j = 0;
do (m[next], next) is received →
if j = next → accept message; send (ack, j); j := j + 1
□ j ≠ next → send (ack, j-1)
fi
od
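A corresponding Python sketch of the sliding window protocol, with the same simplifications as before (seeded loss injection, FIFO delivery, timeout modeled by resetting `next`):

```python
import random

def sliding_window(messages, w=3, loss=0.3, seed=7):
    """Simulate the sliding window protocol on a FIFO lossy channel:
    the sender may run up to w messages ahead of the last ack."""
    rng = random.Random(seed)
    last, nxt, j, accepted = -1, 0, 0, []   # sender state; receiver seq j
    while last < len(messages) - 1:
        if last + 1 <= nxt <= min(last + w, len(messages) - 1):
            s, nxt = nxt, nxt + 1            # send (m[nxt], nxt)
            if rng.random() < loss:
                continue                     # message lost in transit
            if s == j:                       # receiver accepts in order
                accepted.append(messages[s])
                j += 1
            ack = j - 1                      # ack sent on duplicates too
            if rng.random() < loss:
                continue                     # ack lost
            last = max(last, ack)
        else:
            nxt = last + 1                   # timeout: retransmission begins
    return accepted
```

The two lemmas that follow can be checked against this simulation: every message is accepted exactly once, and in sequence order.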
Why does it work?
Lemma. Every message is accepted exactly once.
Lemma. m[k] is always accepted before m[k+1].
(Argue that these are true.)
Observation. Uses unbounded sequence number.
This is bad. Can we avoid it?
Theorem
If the communication channels are non-FIFO, and the
message propagation delays are arbitrarily large, then
using bounded sequence numbers, it is impossible to
design a window protocol that can withstand the (1)
loss, (2) duplication, and (3) reordering of messages.
Why unbounded sequence no?
Suppose m[k] was sent with sequence number k, a copy m′ of it is
later retransmitted, and a new message m″ also reuses the same
sequence number k. We want to accept m″ but reject m′.
How is that possible?
Alternating Bit Protocol
ABP is a link layer protocol. It works on FIFO channels only, and
guarantees reliable message delivery with a 1-bit sequence
number (this is the traditional version with window size = 1).
Study how this works.
Alternating Bit Protocol
program ABP;
{program for process S}
define sent, b : 0 or 1; next : integer;
initially next = 0, sent = 1, b = 0, and channels are empty;
do sent ≠ b → send (m[next], b); next := next + 1; sent := b
□ (ack, j) is received → if j = b → b := 1 - b □ j ≠ b → skip fi
□ timeout (R,S) → send (m[next-1], b)
od

{program for process R}
define j : 0 or 1; {initially j = 0}
do (m[ ], b) is received →
if j = b → accept the message; send (ack, j); j := 1 - j
□ j ≠ b → send (ack, 1 - j)
fi
od
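A Python sketch of ABP under the same lossy-channel model. The 1-bit sequence number `b` alternates between successive messages, and duplicates are recognized because the receiver's expected bit `j` no longer matches.

```python
import random

def abp(messages, loss=0.3, seed=3):
    """Alternating Bit Protocol on a FIFO lossy channel: a 1-bit
    sequence number alternates between successive messages."""
    rng = random.Random(seed)
    nxt, b, j, accepted = 0, 0, 0, []       # sender bit b, receiver bit j
    while nxt < len(messages):
        # sender transmits (m[nxt], b); a timeout resends the same frame
        if rng.random() < loss:
            continue                         # frame lost; resend on timeout
        if j == b:                           # receiver: fresh message
            accepted.append(messages[nxt])
            j = 1 - j
        # receiver acks the bit it received; the ack may be lost
        if rng.random() < loss:
            continue                         # ack lost; same frame resent
        b = 1 - b                            # sender: ack matched, flip bit
        nxt += 1                             # move to the next message
    return accepted
```

When an ack is lost, the retransmitted frame carries the old bit, fails the `j == b` test, and is discarded rather than accepted twice.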
How TCP works
Supports end-to-end logical connection between any two
computers on the Internet. Basic idea is the same as those of
sliding window protocols. But TCP uses bounded sequence
numbers!
It is safe to re-use a sequence number when it is unique. With
high probability, a random 32- or 64-bit number is unique. Also,
current sequence numbers are flushed out of the system after a
time 2d, where d is the round-trip delay.
How TCP works
Sender → Receiver: SYN, seq = x
Receiver → Sender: SYN, seq = y, ack = x+1
Sender → Receiver: ACK, ack = y+1
Sender → Receiver: send (m, y+1)
Receiver → Sender: ack (y+2)
How TCP works
• Three-way handshake. Sequence numbers are unique w.h.p.
• Why is the knowledge of roundtrip delay important?
• What if the window is too small / too large?
• What if the timeout period is too small / too large?
• Adaptive retransmission: receiver can throttle sender
and control the window size to save its buffer space.
Distributed Consensus
Reaching agreement is a fundamental problem in distributed
computing. Some examples are
Leader election / mutual exclusion
Commit or abort in distributed transactions
Reaching agreement about which process has failed
Clock phase synchronization
Air traffic control systems: all aircraft must have the same view
If there is no failure, then reaching consensus is trivial: an all-to-all
broadcast followed by applying a choice function. Consensus in the
presence of failures can, however, be complex.
Problem Specification
input        output
p0: u0  →  v
p1: u1  →  v
p2: u2  →  v
p3: u3  →  v
Here, v must be equal to the value at some input line.
Also, all outputs must be identical.
Problem Specification
Termination. Every non-faulty process must eventually decide.
Agreement. The final decision of every non-faulty process
must be identical.
Validity. If every non-faulty process begins with the same
initial value v, then their final decision must be v.
Asynchronous Consensus
Seven members of a busy household decided to hire a cook, since they do not
have time to prepare their own food. Each member separately interviewed
every applicant for the cook’s position. Depending on how it went, each
member voted "yes" (means “hire”) or "no" (means “don't hire”).
These members will now have to communicate with one another to reach a
uniform final decision about whether the applicant will be hired. The process
will be repeated with the next applicant, until someone is hired.
Consider various modes of communication…
Asynchronous Consensus
Theorem.
In a purely asynchronous distributed system,
the consensus problem is impossible to solve
if even a single process crashes.
This famous result is due to Fischer, Lynch, and Paterson
(commonly known as FLP 85).
Proof
Bivalent and Univalent states
A decision state is bivalent, if starting from that state, there exist
two distinct executions leading to two distinct decision values 0 or 1.
Otherwise it is univalent.
A univalent state may be either 0-valent or 1-valent.
Proof
Lemma.
No execution can lead from a 0-valent to a 1-valent
state or vice versa.
Proof.
Follows from the definition of 0-valent and 1-valent states.
Proof
Lemma. Every consensus protocol must have a bivalent initial state.
Proof by contradiction. Suppose not. Then every initial state is univalent.
Order the initial states so that consecutive states differ in the input of
exactly one process:

s[0] =   0 0 0 0 0 0 … 0 0 0  {0-valent}
         0 0 0 0 0 0 … 0 0 1
         0 0 0 0 0 0 … 0 1 1
         …
s[n-1] = 1 1 1 1 1 1 … 1 1 1  {1-valent}

Somewhere in this chain a 0-valent state s[j] is followed by a 1-valent
state s[j+1], differing only in the jth position. What if process (j+1)
crashes at the first step?
Proof
The adversary tries to prevent the system from reaching consensus.

Lemma. In a consensus protocol, starting from any initial bivalent
state, there must exist a reachable bivalent state T, such that every
action taken by some process p in state T leads to either a 0-valent
or a 1-valent state.

(Figure: from a bivalent state Q the protocol reaches bivalent states
S, R, U, T; from T, action 0 leads to the 0-valent state T0 and
action 1 leads to the 1-valent state T1.)

Actions 0 and 1 from T must be taken by the same process p. Why?
Proof of FLP (continued)
Assume shared memory communication.
Also assume that p ≠ q. Various cases are possible.

Case 1. In state T, q writes (leading to the 1-valent state T1)
and p reads (leading to the 0-valent state T0).
• Starting from T, let e1 be a computation that excludes any step by p
and ends with decision 1. Such a computation must exist, since p can
crash at any time.
• Let p crash after reading. Then e1 is a valid computation from T0 too.
To all non-faulty processes, these two computations are identical, but the
outcomes are different! This is not possible!
Proof (continued)
Case 2. In state T, both p and q write on the same variable, and
p writes first: q's write leads to the 1-valent state T1, and
p's write leads to the 0-valent state T0.
• From T, let e1 be a computation that excludes any step by p
and ends with decision 1.
• Let p crash after writing. Then e1 is a valid computation from T0 too.
To all non-faulty processes, these two computations are identical,
but the outcomes are different!
Proof (continued)
Case 3. Let both p and q write, but on different variables: from T,
q's write leads to the 1-valent state T1, and p's write leads to the
0-valent state T0.
Then regardless of the order of these writes, both computations lead
to the same intermediate global state Z. Is Z 1-valent or 0-valent?
Proof (continued)
Similar arguments can be made for communication using
the message passing model too (see Lynch's book). These
lead to the fact that p and q cannot be distinct processes, so
p = q. Call p the decider process.
What if p crashes in state T? No consensus is reached!
Conclusion
• In a purely asynchronous system, there is no solution to
the consensus problem if a single process crashes.
• Note that this is true for deterministic
algorithms only. Solutions do exist for the
consensus problem using randomized algorithms,
or using the synchronous model.
Byzantine Generals Problem
Describes and solves the consensus problem
on the synchronous model of communication.
- Processor speeds have lower bounds and
communication delays have upper bounds.
- The network is completely connected
- Processes undergo Byzantine failures, the worst
possible kind of failure
Byzantine Generals Problem
• n generals {0, 1, 2, ..., n-1} decide about whether to "attack" or
to "retreat" during a particular phase of a war. The goal is to
agree upon the same plan of action.
• Some generals may be "traitors" and therefore send either no
input, or send conflicting inputs to prevent the "loyal"
generals from reaching an agreement.
• Devise a strategy, by which every loyal general eventually
agrees upon the same plan, regardless of the action of the
traitors.
Byzantine Generals
(Figure: four generals 0-3. Generals 0 and 1 vote Attack = 1;
generals 2 and 3 vote Retreat = 0. The traitor may send out
conflicting inputs, so the collected vectors differ: generals 0, 2,
and 3 collect {1, 1, 0, 0}, while general 1 collects {1, 1, 0, 1}.)
Every general will broadcast his judgment to everyone else.
These are inputs to the consensus protocol.
Byzantine Generals
We need to devise a protocol so that every peer
(call it a lieutenant) receives the same value from
any given general (call it a commander). Clearly,
the lieutenants will have to use secondary information.
Note that the roles of the commander and the
lieutenants will rotate among the generals.
Interactive consistency specifications
IC1. Every loyal lieutenant receives
the same order from the
commander.
IC2. If the commander is loyal, then
every loyal lieutenant receives
the order that the commander
sends.
(Figure: a commander issuing an order to his lieutenants.)
The Communication Model
Oral Messages
Messages are not corrupted in transit.
Messages can be lost, but the absence of message can be detected.
When a message is received (or its absence is detected), the receiver
knows the identity of the sender (or the defaulter).
OM(m) represents an interactive consistency protocol
in the presence of at most m traitors.
An Impossibility Result
Using oral messages, no solution to the Byzantine
Generals problem exists with three or fewer
generals and one traitor. Consider the two cases:
(a) Commander 0 is loyal and sends 1 to both lieutenants, but
lieutenant 2 is a traitor and relays 0 to lieutenant 1.
(b) Commander 0 is a traitor and sends 1 to lieutenant 1 but 0 to
lieutenant 2; each lieutenant truthfully relays what he received.
In both cases lieutenant 1 receives 1 from the commander and 0
from lieutenant 2, so he cannot distinguish (a) from (b).
Impossibility result
Using oral messages, no solution to the Byzantine
Generals problem exists with 3m or fewer generals
and m traitors (m > 0).
Hint. Divide the 3m generals into three groups of m generals
each, such that all the traitors belong to one group. This scenario
is no better than the case of three generals and one traitor.
The OM(m) algorithm
OM(m) is a recursive algorithm: OM(m) invokes OM(m-1),
which invokes OM(m-2), and so on, down to OM(0).
OM(0) is a direct broadcast.
The OM(m) algorithm
1. The commander i sends out a value v (0 or 1).
2. If m > 0, then every lieutenant j ≠ i, after
receiving v, acts as a commander and
initiates OM(m-1) with everyone except i.
3. Every lieutenant collects (n-1) values:
(n-2) values sent by the other lieutenants using
OM(m-1), and one value received directly from the
commander i. He then picks the majority of
these values as the order from i.
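The three steps can be sketched recursively in Python. This is an illustrative fragment; the particular traitor behavior here (a traitorous commander sending alternating conflicting orders) is one arbitrary choice among many possible Byzantine behaviors.

```python
def om(m, commander, value, lieutenants, traitors):
    """OM(m) sketch: returns each lieutenant's decided value for the
    commander's order. `traitors` is the set of traitor ids."""
    # Step 1: the commander sends v to every lieutenant. A traitorous
    # commander sends conflicting orders (alternating 0/1, an assumption).
    received = {}
    for k, i in enumerate(lieutenants):
        received[i] = (k % 2) if commander in traitors else value
    if m == 0:
        return received
    # Step 2: each lieutenant j acts as commander for OM(m-1) and
    # relays the value it received to all other lieutenants.
    reports = {i: [] for i in lieutenants}
    for j in lieutenants:
        others = [i for i in lieutenants if i != j]
        sub = om(m - 1, j, received[j], others, traitors)
        for i in others:
            reports[i].append(sub[i])
    # Step 3: each lieutenant decides on the majority of the direct
    # value and the (n-2) relayed values.
    decided = {}
    for i in lieutenants:
        vals = [received[i]] + reports[i]
        decided[i] = 1 if sum(vals) * 2 > len(vals) else 0
    return decided
```

With n = 4 and m = 1: if a lieutenant is the traitor, both loyal lieutenants decide the loyal commander's value (IC2); if the commander is the traitor, all three lieutenants still agree on one value (IC1).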
Example of OM(1)
(Figure: OM(1) with four generals. In (a) the commander is loyal and
sends 1 to the three lieutenants, but one lieutenant is a traitor and
relays conflicting values. In (b) the commander is a traitor and sends
conflicting values, which the lieutenants relay honestly. In both cases
every loyal lieutenant picks the majority of the three values he collects.)
Example of OM(2)
(Figure: OM(2) with seven generals. The commander initiates OM(2) by
sending v to lieutenants 1-6; each lieutenant i then acts as a commander
and relays its value v_i to the others via OM(1), which in turn invokes
OM(0), the direct broadcast.)
Proof of OM(m)
Lemma.
Let the commander be loyal, and let n > 2m + k,
where m is the maximum number of traitors.
Then OM(k) satisfies IC2.
(Figure: a loyal commander with m traitors and
n - m - 1 loyal lieutenants.)
Proof of OM(m)
Proof.
If k = 0, then the result trivially holds.
Let it hold for k = r (r ≥ 0), i.e., OM(r) satisfies IC2.
We have to show that it holds for k = r + 1 too.
Since n > 2m + r + 1, we have n - 1 > 2m + r, so OM(r) holds
for the lieutenants in the bottom row. Each loyal lieutenant will
collect n - m - 1 identical good values and m bad values, so
the bad values are voted out
(n - m - 1 > m + r implies n - m - 1 > m).
The final theorem
Theorem. If n > 3m where m is the maximum number of
traitors, then OM(m) satisfies both IC1 and IC2.
Proof. Consider two cases:
Case 1. Commander is loyal. The theorem follows from
the previous lemma (substitute k = m).
Case 2. Commander is a traitor. We prove it by induction.
Base case: m = 0 is trivial.
(Induction hypothesis) Let the theorem hold for m = r.
We have to show that it holds for m = r+1 too.
Proof (continued)
There are n > 3(r + 1) generals and r + 1 traitors. Excluding
the commander, there are more than 3r + 2 generals, of which
r are traitors. So more than 2r + 2 lieutenants are loyal. Since
3r + 2 > 3r, OM(r) satisfies IC1 and IC2.
Proof (continued)
In OM(r+1), a loyal lieutenant chooses the majority from
(1) more than 2r + 1 values obtained from the loyal
lieutenants via OM(r),
(2) the r values from the traitors, and
(3) the value received directly from the commander.
The values collected in parts (1) & (3) are the same for all loyal lieutenants –
it is the same value that these lieutenants received from the commander.
Also, by the induction hypothesis, in part (2) each loyal lieutenant receives
identical values from each traitor. So every loyal lieutenant collects the same set of values.
Acknowledgements
This part relies heavily on Dr. Sukumar Ghosh’s
Iowa University Distributed Systems course
22C:166