CS 3110 Lecture 20
Balanced Binary Trees

Sets and maps are important and useful abstractions.
We've seen various ways to implement an abstract data type for sets and maps;
data structures that implement sets can be adapted to implement maps
as well. It's time to look at an implementation of sets that is
asymptotically efficient and useful in practice: balanced binary trees.

Binary trees have two advantages over the asymptotically more efficient hash
table: first, they support nondestructive update with the same asymptotic
efficiency. Second, they store their values (or keys, in the case of a
map) in order, which makes range queries and in-order iteration possible.

For simplicity, we will implement sets of some element type, which we will call
value. We assume we have a comparison function
compare : value * value -> order.

The signature that we will work with is a little different from that in
Lecture 8:
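The signature itself isn't reproduced in these notes; a minimal sketch of the
kind of interface we have in mind (the names here are illustrative, not the
exact signature from Lecture 8) might look like:

```ocaml
(* A minimal set signature. These names are illustrative,
   not the exact signature from Lecture 8. *)
module type SET = sig
  type elt                         (* the type of elements *)
  type set                         (* the abstract type of sets *)
  val empty  : set                 (* the empty set *)
  val add    : elt -> set -> set   (* nondestructive insertion *)
  val mem    : elt -> set -> bool  (* membership test *)
  val remove : elt -> set -> set   (* nondestructive removal *)
end
```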

An important property of a search tree is that it can easily be used to
implement an ordered set or ordered map:
a set (map) that abstractly keeps its elements in sorted order.
Although the signature above doesn't show them, ordered sets
generally provide operations for finding the minimum and maximum elements of the
set, for iterating over all the elements between two given elements, and for
extracting (or iterating over) the ordered subset of elements that fall within a range:
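What such additions might look like, sketched in OCaml over int elements for
concreteness (a sorted-list representation is used here only to make the
sketch runnable; the names are illustrative):

```ocaml
(* Sketch of ordered-set extras over a sorted-list representation.
   Invariant: the list is in strictly increasing order. *)
type set = int list

(* The smallest element, if any: the head of the sorted list. *)
let min_elt (s : set) : int option =
  match s with [] -> None | x :: _ -> Some x

(* The largest element, if any: the last element of the list. *)
let max_elt (s : set) : int option =
  match List.rev s with [] -> None | x :: _ -> Some x

(* All elements x with lo <= x <= hi, in order. *)
let range (lo : int) (hi : int) (s : set) : int list =
  List.filter (fun x -> lo <= x && x <= hi) s
```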

Binary search trees

A binary search tree
is a binary tree with the following representation invariant: for
any node n, every node in the left subtree of n has a value less than
that of n, and every node in the right subtree of n has a value greater
than that of n. And the entire left and right subtrees satisfy the
same invariant.

Given such a tree, how do you perform a lookup operation? Start from the
root, and at every node, if the value of the node is what you are looking for,
you are done; otherwise, recursively look up in the left or right subtree
depending on the value stored at the node. In code:
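The lecture's original code isn't reproduced here; a sketch in OCaml, using
int elements for concreteness (the type and function names are illustrative):

```ocaml
(* An unbalanced binary search tree over ints. *)
type tree = Leaf | Node of tree * int * tree

(* Is x a member of t? Walk down from the root, going left or
   right depending on how x compares to the value at each node. *)
let rec mem (x : int) (t : tree) : bool =
  match t with
  | Leaf -> false
  | Node (l, v, r) ->
      if x = v then true          (* found it *)
      else if x < v then mem x l  (* can only be in the left subtree *)
      else mem x r                (* can only be in the right subtree *)
```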

Adding an element is similar: you perform a lookup until you find the empty node that
should contain the value. This is a nondestructive update, so as the recursion completes,
a new tree is constructed that is just like the old one except that it has a new node (if needed).
In code:
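A sketch of nondestructive insertion, again in OCaml with int elements and an
illustrative tree type (self-contained, so the type is repeated here):

```ocaml
type tree = Leaf | Node of tree * int * tree

(* Insert x, rebuilding the spine of nodes on the path from the
   root down to the empty spot where x belongs. *)
let rec add (x : int) (t : tree) : tree =
  match t with
  | Leaf -> Node (Leaf, x, Leaf)            (* the empty node x should fill *)
  | Node (l, v, r) ->
      if x = v then t                       (* already present: reuse old tree *)
      else if x < v then Node (add x l, v, r)
      else Node (l, v, add x r)
```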

What is the running time of these operations? Since add is just a lookup
with an extra node creation, we focus on lookup. Clearly, its running time
is O(h), where
h is the height of the tree. What's the worst-case height of a tree? A tree of n
nodes can degenerate into a single long branch of height n (imagine adding the
numbers 1, 2, 3, 4, 5, 6, 7
in order into a binary search tree). So the worst-case running time of lookup is
still O(n), for n
the number of nodes in the tree.

If a tree with n nodes is kept balanced,
its height is O(lg n), which leads
to a lookup operation running in time O(lg n).

How can we keep a tree balanced? It can become unbalanced during element
addition or deletion. Most approaches add or delete an element
just as in a normal binary search tree, and then perform some kind of tree
surgery to rebalance it. Examples of
balanced binary search tree data structures include

AVL (or height-balanced) trees (1962)

2-3 trees (1970's)

Red-black trees

In each of these, we ensure asymptotic complexity of O(lg n) by enforcing
a stronger invariant on the data structure than just the binary search tree
invariant.

Red-Black Trees

Red-black trees are a fairly simple and very efficient data structure for
maintaining a balanced binary tree. The idea is to strengthen the rep invariant
so a tree has height logarithmic in n. To help enforce the invariant, we color
each node of the tree either red or black. Where it matters, we
consider the color of an empty tree to be black.
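One way to represent such trees in OCaml (illustrative names, int elements for
concreteness):

```ocaml
(* Every node carries a color; empty trees (Leaf) count as black. *)
type color = Red | Black
type rbtree = Leaf | Node of color * rbtree * int * rbtree
```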

Here are the new conditions we add to the binary search tree rep invariant:

1. No red node has a red parent.

2. Every path from the root to an empty node has the same number of black
nodes: the black height of the tree. Call this BH.

If a tree satisfies these two conditions, it must also be the case that
every subtree of the tree also satisfies the conditions. If a subtree violated
either of the conditions, the whole tree would also.

With this invariant, the longest possible path from the root to an empty
node would alternately contain red and black nodes; therefore it is at most
twice as long as the shortest possible path, which only contains black nodes. If
n is the number of nodes in the tree, the longest
path cannot have a length greater than twice the length of the paths in a
perfect binary tree: 2 lg n,
which is O(lg n). Therefore,
the tree has height O(lg n) and
the operations are all as asymptotically efficient as we could expect.

Another way to see this is to think about just the black nodes in the tree.
Suppose we snip all the red nodes out of the tree by connecting each black node
to its closest black descendants. Then we have a tree whose leaves are all
at depth BH, and whose branching factor ranges between 2 and 4. Such a tree
must contain at least 2^BH nodes, and so must the whole
tree when we add the red nodes back in. So n is Ω(2^BH),
and therefore the black height BH is O(lg n). But invariant 1 says that the longest
path has length at most 2·BH. So the height h is O(lg n) too.

How do we check for membership in red-black trees? Exactly the same way as for
general binary trees.

More interesting is the add operation. We add by replacing
the empty node that a standard add into a binary
search tree would replace. We also color the new node red to ensure that
invariant #2 is preserved. However, we may destroy invariant #1 in doing so, by
producing two red nodes, one the parent of the other. In order to restore this
invariant we will need to consider not only the two red nodes, but also their
black parent; otherwise the red-red conflict cannot be fixed while preserving
the black height. The next figure shows all
the possible cases that may arise:

Notice that in each of these trees, the values of the nodes in a,b,c,d must
have the same relative ordering with respect to x, y, and z:
a<x<b<y<c<z<d. Therefore, we can perform a local tree
rotation to restore the invariant locally, while possibly breaking
invariant 1 one level up in the tree:

        Ry
       /  \
     Bx    Bz
    / \    / \
   a   b  c   d

By performing a rebalance of the tree at that level, and all the levels
above, we can locally (and incrementally) enforce invariant #1. In the end, we
may end up with two red nodes, one of them the root and the other the child of
the root; this we can easily correct by coloring the root black.
The code
(which really shows the power of pattern matching!) is as follows:
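The lecture's original code isn't reproduced in these notes; a sketch in
OCaml, in the style of Okasaki's well-known formulation (names illustrative,
int elements for concreteness):

```ocaml
type color = Red | Black
type rbtree = Leaf | Node of color * rbtree * int * rbtree

(* Rewrite any of the four red-red-under-black cases into the single
   balanced form shown in the figure: red y with black children x and z.
   Any other node is left unchanged. *)
let balance = function
  | Black, Node (Red, Node (Red, a, x, b), y, c), z, d
  | Black, Node (Red, a, x, Node (Red, b, y, c)), z, d
  | Black, a, x, Node (Red, Node (Red, b, y, c), z, d)
  | Black, a, x, Node (Red, b, y, Node (Red, c, z, d)) ->
      Node (Red, Node (Black, a, x, b), y, Node (Black, c, z, d))
  | c, l, v, r -> Node (c, l, v, r)

let insert (x : int) (t : rbtree) : rbtree =
  let rec ins = function
    | Leaf -> Node (Red, Leaf, x, Leaf)         (* new nodes start out red *)
    | Node (c, l, v, r) ->
        if x < v then balance (c, ins l, v, r)  (* fix invariant on the way up *)
        else if x > v then balance (c, l, v, ins r)
        else Node (c, l, v, r)                  (* already present *)
  in
  match ins t with
  | Node (_, l, v, r) -> Node (Black, l, v, r)  (* recolor the root black *)
  | Leaf -> Leaf                                (* cannot happen *)
```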

This code walks back up the tree from the point of insertion, fixing the
invariant at every level. At red nodes we don't try to fix the invariant; we
let the recursive walk continue until a black node is found. When the walk
reaches the top, the color of the root node is restored to black, which is
needed in case balance leaves the root red.

Removing elements

Removing an element from a red-black tree works analogously. We start
with BST element removal and then rebalance. Here is how to remove
elements from a binary search tree. The key is that when an interior (nonleaf)
node is removed, we simply splice it out if it has zero or one child;
if it has two children, we replace its value with the next value in the tree,
which must be found inside its right child.
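A sketch of this removal for a plain (unbalanced) BST in OCaml, with int
elements and illustrative names:

```ocaml
type tree = Leaf | Node of tree * int * tree

(* The smallest value in a nonempty tree: follow left children. *)
let rec min_elt = function
  | Leaf -> failwith "min_elt: empty tree"
  | Node (Leaf, v, _) -> v
  | Node (l, _, _) -> min_elt l

let rec remove (x : int) (t : tree) : tree =
  match t with
  | Leaf -> Leaf                              (* x not present *)
  | Node (l, v, r) ->
      if x < v then Node (remove x l, v, r)
      else if x > v then Node (l, v, remove x r)
      else (match l, r with
        | Leaf, _ -> r                        (* zero or one child: splice out *)
        | _, Leaf -> l
        | _, _ ->
            let s = min_elt r in              (* successor is in the right child *)
            Node (l, s, remove s r))
```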

Balancing the tree during removal from a red-black tree requires considering
more cases. Deleting a black element from the tree creates the possibility that
some path in the tree has too few black nodes, breaking the black-height
invariant (2); the solution is to consider that path to contain a
"doubly black" node. A series of tree rotations and recolorings can then
eliminate the doubly black node by propagating the "blackness" up until a red
node can be converted to a black node, or until the root is reached, where it
can be changed from doubly black to black without breaking the invariant.