
Chapter 3: Trees

Section 3.1 Fundamental Properties of Trees

Suppose your city is planning to construct a rapid rail system. They want to construct the most economical system possible that will meet the needs of the city. Certainly, a minimum requirement is that passengers must be able to ride from any station in the system to any other station. In addition, several alternate routes are under consideration between some of the stations, and some of these routes are more expensive to construct than others. How can the city select the most inexpensive design that still connects all the proposed stations?

The model for this problem associates vertices with the proposed stations and edges with all the proposed routes that could be constructed. The edges are labeled (weighted) with their proposed costs. To solve the rapid rail problem, we must find a connected graph (so that all stations can be reached from all other stations) with the minimum possible sum of the edge weights. Note that a graph with minimum edge weight sum must be acyclic, since otherwise we could remove the most expensive edge (largest weight) on a cycle, obtaining a new connected graph with a smaller weight sum. What we desire is a connected, acyclic graph (hence, a tree) with minimum possible edge weight sum. Such a tree is called a minimum weight spanning tree.

Before we develop algorithms for finding minimum weight spanning trees, let's investigate more of the properties of trees. Trees are perhaps the most useful of all special classes of graphs. They have applications in computer science and mathematics, as well as in chemistry, sociology and many other studies. Perhaps what helps make trees so useful is that they may be viewed in a variety of equivalent forms.

Theorem 3.1.1 A graph T is a tree if, and only if, every two distinct vertices of T are joined by a unique path.

Proof. If T is a tree, by definition it is connected. Hence, any two vertices are joined by at least one path.
If the vertices u and v of T are joined by two or more different paths, then a cycle is produced in T, contradicting the definition of a tree. Conversely, suppose that T is a graph in which any two distinct vertices are joined by a unique path. Clearly, T must be connected. If there is a cycle C containing the vertices u and v, then u and v are joined by at least two paths, contradicting the hypothesis.

Hence, T must be a tree.

Theorem 3.1.2 A (p, q) graph T is a tree if, and only if, T is connected and q = p − 1.

Proof. Given a tree T of order p, we will prove that q = p − 1 by induction on p. If p = 1, then T = K_1 and q = 0. Now, suppose the result is true for all trees of order less than p and let T be a tree of order p ≥ 2. Let e = uv ∈ E(T). Then, by Theorem 3.1.1, T − e contains no u−v path. Thus, T − e is disconnected and, in fact, has exactly two components (see Chapter 3, exercise 1). Let T_1 and T_2 be these components. Then T_1 and T_2 are trees, and each has order less than p; hence, by the inductive hypothesis, |E(T_i)| = |V(T_i)| − 1 for i = 1, 2. Now we see that

|E(T)| = |E(T_1)| + |E(T_2)| + 1 = (|V(T_1)| − 1) + (|V(T_2)| − 1) + 1 = p − 1.

Conversely, suppose T is a connected (p, q) graph with q = p − 1. In order to prove that T is a tree, we must show that T is acyclic. Suppose T contains a cycle C and that e is an edge of C. Then T − e is connected and has order p and size p − 2. But this contradicts exercise 7 in Chapter 2. Therefore, T is acyclic and hence, T is a tree.

We summarize various characterizations of a tree in the following theorem.

Theorem 3.1.3 The following are equivalent for a (p, q) graph T:

1. The graph T is a tree.
2. The graph T is connected and q = p − 1.
3. Every pair of distinct vertices of T is joined by a unique path.
4. The graph T is acyclic and q = p − 1.

In any tree of order p ≥ 3, any vertex of degree at least 2 is a cut vertex. However, every nontrivial tree contains at least two vertices of degree 1, since it contains at least two vertices that are not cut vertices (see Chapter 2, exercise 28). The vertices of degree 1 in a tree are called end vertices or leaves. The remaining vertices are called internal vertices. It is also easy to see that every edge in a tree is a bridge.
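The second characterization in Theorem 3.1.3 gives an immediate computational test for treeness. The following sketch (in Python, assuming an adjacency-list representation; the code is illustrative and not part of the text) checks that q = p − 1 and that the graph is connected:

```python
from collections import deque

def is_tree(adj):
    """Condition 2 of Theorem 3.1.3: T is a tree iff T is connected
    and q = p - 1.  `adj` maps each vertex to the set of its neighbors."""
    p = len(adj)
    q = sum(len(nbrs) for nbrs in adj.values()) // 2  # each edge counted twice
    if q != p - 1:
        return False
    # breadth-first search from an arbitrary vertex to test connectivity
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == p

# a path on 4 vertices is a tree; the 4-cycle is not (there q = p)
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
cycle = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {3, 1}}
```

Checking q = p − 1 first lets the connectivity search double as an acyclicity test, exactly as in the equivalence of conditions 1, 2 and 4.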

Every connected graph G contains a spanning subgraph that is a tree, called a spanning tree. If G is itself a tree, this is clear. If G is not a tree, simply remove edges lying on cycles in G, one at a time, until only bridges remain. Typically, there are many different spanning trees in a connected graph. However, if we are careful, we can construct a spanning tree in which the distance from a distinguished vertex v to every other vertex in the tree equals the distance from v to that vertex in the original graph. Such a spanning tree is said to be distance preserving from v, or v-distance preserving. The following result is originally from Ore [7].

Theorem 3.1.4 For every vertex v of a connected graph G, there exists a v-distance preserving spanning tree T.

Proof. The graph constructed by the breadth-first search algorithm starting at v is a tree and is clearly distance preserving from v.

The following result shows that there are usually many trees embedded as subgraphs in a graph.

Theorem 3.1.5 Let G be a graph with δ(G) ≥ m and let T be any tree of order m + 1; then T is a subgraph of G.

Proof. We proceed by induction on m. If m = 0, the result is clear since T = K_1 is a subgraph of any graph. If m = 1, the result is also clear since T = K_2 is a subgraph of every nonempty graph. Now, assume the result holds for any tree of order m and any graph of minimum degree at least m − 1. Let T be a tree of order m + 1 and let G be a graph with δ(G) ≥ m. To see that T is a subgraph of G, consider an end vertex v of T, and suppose that v is adjacent to w in T. Since T − v is a tree of order m and δ(G) ≥ m − 1, we see from the inductive hypothesis that T − v is a subgraph of G. Since deg_G w ≥ m and T − v has order m, the vertex w has an adjacency in G outside of V(T − v). But this implies that T is a subgraph of G.

Section 3.2 Minimum Weight Spanning Trees

To solve the rapid rail problem, we now want to determine how to construct a minimum weight spanning tree.
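The proof of Theorem 3.1.4 is constructive: breadth-first search from v keeps exactly the edge along which each vertex is first reached, so tree distances agree with graph distances. A minimal sketch (the adjacency-list representation is an assumption, not from the text):

```python
from collections import deque

def bfs_spanning_tree(adj, v):
    """Grow a v-distance preserving spanning tree (Theorem 3.1.4):
    breadth-first search from v keeps only the edge on which each
    vertex is first reached, so tree distances equal graph distances."""
    parent = {v: None}
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u          # tree edge {u, w}
                dist[w] = dist[u] + 1
                queue.append(w)
    tree_edges = {frozenset((w, p)) for w, p in parent.items() if p is not None}
    return tree_edges, dist

# a 4-cycle: both neighbors of v = 1 stay at distance 1 in the tree
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
edges, dist = bfs_spanning_tree(adj, 1)
```

Note that an arbitrary spanning tree need not be distance preserving: a spanning path of the 4-cycle above puts one neighbor of v at distance 3.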
The first algorithm is from Kruskal [6]. The strategy of

the algorithm is very simple. We begin by choosing an edge of minimum weight in the graph. We then continue by selecting from the remaining edges an edge of minimum weight that does not form a cycle with any of the edges we have already chosen. We continue in this fashion until a spanning tree is formed.

Algorithm 3.2.1 Kruskal's Algorithm.
Input: A connected weighted graph G = (V, E).
Output: A minimum weight spanning tree T = (V, E(T)).
Method: Find the next edge e of minimum weight w(e) that does not form a cycle with those already chosen.

1. Let i ← 1 and T ← ∅.
2. Choose an edge e of minimum weight such that e ∉ E(T) and such that T ∪ {e} is acyclic. If no such edge exists, then stop; else set e_i ← e and T ← T ∪ {e_i}.
3. Let i ← i + 1, and go to step 2.

Theorem 3.2.1 When Kruskal's algorithm halts, T induces a minimum weight spanning tree.

Proof. Let G be a nontrivial connected weighted graph of order p. Clearly, the algorithm produces a spanning tree T; hence, T has p − 1 edges. Let E(T) = {e_1, e_2, ..., e_{p−1}} and let w(T) = Σ_{i=1}^{p−1} w(e_i). Note that the order of the edges listed in E(T) is also the order in which they were chosen, and so w(e_i) ≤ w(e_j) whenever i ≤ j. From the collection of all minimum weight spanning trees, let T_min be chosen with the property that it has the maximum number of edges in common with T. If T is not a minimum weight spanning tree, then T and T_min are not identical. Let e_i be the first edge of T (following our listing of edges) that is not in T_min. If we insert the edge e_i into T_min, we get a graph H containing a cycle. Since T is acyclic, there exists an edge e on this cycle that is not in T. The graph H − e is also a spanning tree of G and

w(H − e) = w(T_min) + w(e_i) − w(e).

Since w(T_min) ≤ w(H − e), it follows that w(e) ≤ w(e_i). However, by the algorithm, e_i is an edge of minimum weight such that ⟨{e_1, e_2, ..., e_i}⟩ is acyclic. However, since the edges e_1, e_2, ..., e_{i−1}, e all come from T_min,

⟨{e_1, e_2, ..., e_{i−1}, e}⟩ is also acyclic. Thus, we have that w(e_i) = w(e) and w(H − e) = w(T_min). That is, the spanning tree H − e is also of minimum weight, but it has more edges in common with T than T_min has, contradicting our choice of T_min and completing the proof.

Example 3.2.1 We demonstrate Kruskal's algorithm on the graph of Figure 3.2.1.

Figure 3.2.1 A weighted graph to test Kruskal's algorithm.

1. i ← 1 and T ← ∅.
2. e_1 = v_2v_4, T ← T ∪ {v_2v_4} and i ← 2.
3. e_2 = v_2v_3, T ← T ∪ {v_2v_3} and i ← 3.
4. e_3 = v_4v_5, T ← T ∪ {v_4v_5} and i ← 4.
5. e_4 = v_1v_4, T ← T ∪ {v_1v_4} and i ← 5.
6. Halt (with the minimum spanning tree shown in Figure 3.2.2).

In the performance of Kruskal's algorithm, it is best to sort the edges in order of nondecreasing weight prior to beginning the algorithm. On average, this can be done in O(q log q) time using either a quicksort or a heap sort (see [3]). With this in mind, can you determine the average time complexity of Kruskal's algorithm?
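Algorithm 3.2.1 can be sketched as follows. The cycle test of step 2 is implemented here with a union-find structure, a standard choice that the text does not prescribe, and the weights below are illustrative only, since the weights of Figure 3.2.1 are not reproduced here:

```python
def kruskal(p, edges):
    """Kruskal's algorithm (Algorithm 3.2.1): repeatedly take a cheapest
    edge that does not close a cycle, tracked with a union-find structure.
    `edges` is a list of (weight, u, v) triples; vertices are 1..p."""
    parent = list(range(p + 1))

    def find(x):                            # root of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):           # nondecreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # e does not form a cycle
            parent[ru] = rv
            tree.append((w, u, v))
        if len(tree) == p - 1:
            break
    return tree

# an illustrative weighted graph on v1..v5 (weights are assumptions,
# not those of Figure 3.2.1)
edges = [(1, 2, 4), (2, 2, 3), (2, 4, 5), (3, 1, 4), (4, 1, 2), (4, 3, 5)]
mst = kruskal(5, edges)
```

With the edges pre-sorted, each step is dominated by the near-constant union-find operations, which is the point of the complexity question above.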

Figure 3.2.2 A minimum spanning tree for the graph of Figure 3.2.1.

Kruskal's algorithm is an example of a type of algorithm known as greedy. Simply stated, greedy algorithms proceed by selecting the choice that looks best at the moment. This local point of view can sometimes work very well, as it does in this case. However, the reader should not think that all processes can be handled so simply. In fact, we shall see examples later in which the greedy approach can be arbitrarily bad.

There are several other algorithms for finding minimum weight spanning trees. The next result is fundamental to these algorithms.

Theorem 3.2.2 Let G = (V, E) be a weighted graph. Let U ⊆ V and let e have minimum weight among all edges from U to V − U. Then there exists a minimum weight spanning tree that contains e.

Proof. Let T be a minimum weight spanning tree of G. If e is an edge of T, we are done. Thus, suppose e is not an edge of T and consider the graph H = T + e, which must contain a cycle C. Note that C contains e and at least one other edge f = uv, where u ∈ U and v ∈ V − U. Since e has minimum weight among the edges from U to V − U, we see that w(e) ≤ w(f). Since f is on the cycle C, if we delete f from H, the resulting graph is still connected and, hence, is a tree. Further, w(H − f) ≤ w(T), and hence H − f is the desired minimum weight spanning tree containing e.

This result directly inspired the following algorithm from Prim [8]. In this algorithm we continually expand the set of vertices U by finding an edge e from U to V − U of minimum weight. The vertices of U induce a tree throughout this process. The end vertex of e in V − U is then incorporated into U, and the process is repeated until U = V. For convenience, if e = xy, we denote w(e) by w(x, y). We also simply consider the tree T induced by the vertex set U.
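This expansion process can be sketched as follows; maintaining, for each vertex outside U, the cheapest known edge into it from U is one standard O(|V|²) realization (the array bookkeeping here is an assumption, not taken from the text):

```python
def prim(adj, p):
    """Grow U from vertex 1, repeatedly adding a minimum-weight edge
    from U to V - U (Theorem 3.2.2 guarantees this is safe).
    `adj[u][v]` is w(u, v); vertices are 1..p."""
    INF = float("inf")
    in_U = [False] * (p + 1)
    best = [INF] * (p + 1)     # cheapest known edge from U to each vertex
    via = [0] * (p + 1)        # the endpoint of that edge inside U
    best[1] = 0
    tree, total = [], 0
    for _ in range(p):
        # pick the vertex of V - U nearest to U
        u = min((x for x in range(1, p + 1) if not in_U[x]),
                key=lambda x: best[x])
        in_U[u] = True
        if via[u]:
            tree.append((via[u], u))
            total += best[u]
        for v, w in adj[u].items():        # update candidate edges
            if not in_U[v] and w < best[v]:
                best[v], via[v] = w, u
    return tree, total

# the same illustrative weighted graph used for Kruskal's algorithm
adj = {1: {2: 4, 4: 3}, 2: {1: 4, 3: 2, 4: 1}, 3: {2: 2, 5: 4},
       4: {1: 3, 2: 1, 5: 2}, 5: {3: 4, 4: 2}}
tree, total = prim(adj, 5)
```

Each of the p rounds scans all vertices once, which is the source of the O(|V|²) bound quoted below.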

time). Hence, Prim's algorithm requires O(|V|^2) time. At this stage we must point out that the corresponding problem of finding minimum weight spanning trees in digraphs is much harder. In fact, there is no known polynomial algorithm for solving such a problem.

Section 3.3 Counting Trees

Let's turn our attention now to problems involving counting trees. Although there is no simple formula for determining the number of nonisomorphic trees of a given order, if we place labels on the vertices, we are able to introduce a measure of control on the situation. We say two graphs G_1 and G_2 are identical if V(G_1) = V(G_2) and E(G_1) = E(G_2). Now we consider the question of determining the number of nonidentical spanning trees of a given graph (that is, on a given number of vertices). Say G = (V, E), and for convenience let V = {1, 2, ..., p}. For p = 2, there is only one tree, namely K_2. For p = 3, there are three such trees (see Figure 3.3.1).

Figure 3.3.1 The spanning trees on V = {1, 2, 3}.

Cayley [1] determined a simple formula for the number of nonidentical spanning trees on V = {1, 2, ..., p}. The proof presented here is from Prüfer [9]. This result is known as Cayley's tree formula.

Theorem 3.3.1 (Cayley's tree formula) The number of nonidentical spanning trees on p distinct vertices is p^(p−2).

Proof. The result is trivial for p = 1 or p = 2, so assume p ≥ 3. The strategy of this proof is to find a one-to-one correspondence between the set of spanning trees on p vertices and the p^(p−2) sequences of length p − 2 with entries from the set {1, 2, ..., p}. We demonstrate this correspondence with two algorithms, one that finds a sequence corresponding to a tree and one that finds a tree corresponding to a sequence. In what follows, we will identify each vertex with its label. The algorithm for finding the

sequence that corresponds to a given tree is:

1. Let i ← 1.
2. Let j ← the end vertex of the tree with smallest label. Remove j and its incident edge e = jk. The ith term of the sequence is k.
3. If i = p − 2, then halt; else i ← i + 1 and go to step 2.

Since every tree of order at least 3 has two or more end vertices, step 2 can always be performed. Thus, we can produce a sequence of length p − 2. Now we must show that no sequence is produced by two or more different trees and that every possible sequence is produced from some tree. To accomplish these goals, we show that the mapping that assigns sequences to trees also has an inverse.

Let w = n_1, n_2, ..., n_{p−2} be an arbitrary sequence of length p − 2 with entries from the set V. Each time (except the last) that an edge incident to vertex k is removed from the tree, k becomes the next term of the sequence. The last edge incident to vertex k may never actually be removed, if k is one of the final two vertices remaining in the tree. Otherwise, the last time that an edge incident to vertex k is removed, it is because vertex k has degree 1, and, hence, the other end vertex of the edge was the one inserted into the sequence. Thus,

deg_T k = 1 + (the number of times k appears in w).

With this observation in mind, the following algorithm produces a tree from the sequence w:

1. Let i ← 1.
2. Let j be the least vertex such that deg_T j = 1. Construct an edge from vertex j to vertex n_i and set deg_T j ← 0 and deg_T n_i ← deg_T n_i − 1.
3. If i = p − 2, then construct an edge between the two vertices of degree 1 and halt; else set i ← i + 1 and go to step 2.

It is easy to show that this algorithm selects the same vertex j as the algorithm for producing the sequence from the tree (Chapter 3, exercise 7). It is also easy to see that a tree is constructed. Note that at each step of the algorithm, the selection of the next vertex is forced and, hence, only one tree can be produced. Thus, the inverse mapping is produced and the result is proved.

Example 3.3.1 The Prüfer mappings. We demonstrate the two mappings determined in the proof of Cayley's theorem. Suppose we are given the tree T of Figure 3.3.2.

Figure 3.3.2 The tree T.

Among the leaves of T, vertex 1 has the minimum label, and it is adjacent to 7; thus, n_1 = 7. Our next selection is vertex 2, adjacent to vertex 4, so n_2 = 4. Our tree now appears as in Figure 3.3.3.

Figure 3.3.3 The tree after the first two deletions.

The third vertex selected is 3, so n_3 = 4. We then select vertex 4; thus, n_4 = 7. Finally, we select vertex 6, setting n_5 = 5. What remains is just the edge from 5 to 7; hence, we halt. The sequence corresponding to the tree T of Figure 3.3.2 is 7, 4, 4, 7, 5.

To reverse this process, suppose we are given the sequence s = 7, 4, 4, 7, 5. Then we note that:

deg 1 = 1, deg 2 = 1, deg 3 = 1, deg 4 = 3, deg 5 = 2, deg 6 = 1, deg 7 = 3.

According to the second algorithm, we select the vertex of minimum label with degree 1; hence, we select vertex 1. We then insert the edge from 1 to n_1 = 7. Now set deg 1 = 0 and deg 7 = 2 and repeat the process. Next, we select vertex 2 and insert the edge from 2 to n_2 = 4:

Figure 3.3.4 The reconstruction after two passes.

Again reducing the degrees, deg 2 = 0 and deg 4 = 2. Next, we select vertex 3 and insert the edge from 3 to n_3 = 4 (see Figure 3.3.5).
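The two mappings being demonstrated can be sketched directly from the algorithms in the proof of Theorem 3.3.1. Reading the deletions above, the tree T has edges 17, 24, 34, 47, 56 and 57; the code below (an illustration, not part of the text) encodes T and decodes its sequence:

```python
def prufer_encode(adj, p):
    """Tree -> sequence: repeatedly delete the end vertex of smallest
    label, recording its neighbor (first algorithm of Theorem 3.3.1).
    `adj` maps each vertex 1..p to the set of its neighbors."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    seq = []
    for _ in range(p - 2):
        j = min(v for v in adj if len(adj[v]) == 1)   # smallest leaf
        k = adj[j].pop()                              # its unique neighbor
        adj[k].discard(j)
        del adj[j]
        seq.append(k)
    return seq

def prufer_decode(seq, p):
    """Sequence -> tree: deg_T k = 1 + (times k appears in seq); then
    repeatedly join the least vertex of degree 1 to the next term,
    finishing with the two remaining vertices of degree 1."""
    deg = {v: 1 for v in range(1, p + 1)}
    for k in seq:
        deg[k] += 1
    edges = set()
    for k in seq:
        j = min(v for v in deg if deg[v] == 1)
        edges.add(frozenset((j, k)))
        deg[j] -= 1
        deg[k] -= 1
    u, v = (x for x in deg if deg[x] == 1)
    edges.add(frozenset((u, v)))
    return edges

# the tree T of this example
T = {1: {7}, 2: {4}, 3: {4}, 4: {2, 3, 7},
     5: {6, 7}, 6: {5}, 7: {1, 4, 5}}
```

Encoding T yields 7, 4, 4, 7, 5, and decoding that sequence recovers T, exhibiting the two mappings as mutual inverses.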

Figure 3.3.5 The reconstruction after three passes.

Now, select vertex 4 and insert the edge from 4 to n_4 = 7. This is followed by the selection of vertex 6 and the insertion of the edge from 6 to n_5 = 5. Finally, since i = p − 2, we end the construction by inserting the edge from 5 to 7, which completes the reconstruction of T.

An alternate expression for the number of nonidentical spanning trees of a graph is from Kirchhoff [5]. This result uses the p × p degree matrix C = [c_ij] of G, where c_ii = deg v_i and c_ij = 0 if i ≠ j. The result is known as the matrix-tree theorem. For each pair (i, j), let the matrix B_ij be the (n − 1) × (n − 1) matrix obtained from the n × n matrix B by deleting row i and column j. Then det B_ij is called the minor of B at position (i, j), and (−1)^(i+j) det B_ij is called the cofactor of B at position (i, j).

Theorem 3.3.2 (The matrix-tree theorem) Let G be a nontrivial graph with adjacency matrix A and degree matrix D. Then the number of nonidentical spanning trees of G is the value of any cofactor of D − A.

Example 3.3.2 Consider the triangle on the vertex set {1, 2, 3}. We can use the matrix-tree theorem to calculate the number of nonidentical spanning trees of this graph as follows. The matrices

A =
    0 1 1
    1 0 1
    1 1 0

and

D =
    2 0 0
    0 2 0
    0 0 2

are easily seen to be the adjacency matrix and degree matrix for this graph, while

D − A =
     2 −1 −1
    −1  2 −1
    −1 −1  2

Thus, the cofactor at position (1, 1) is

det
     2 −1
    −1  2
= 4 − 1 = 3.

These three spanning trees are easily found, since deleting any single edge of the triangle leaves a spanning tree.

Section 3.4 Directed Trees

As with connectivity, directed edges create some additional complications with trees. Initially, we need to decide exactly what we want a directed tree to be. For our purposes, the following definition is most useful: A directed tree T = (V, E) has a distinguished vertex r, called the root, with the property that for every vertex v ∈ V, there is a directed r−v path in T, and the underlying undirected graph induced by V is also a tree. As with trees, directed trees have many possible characterizations. We consider some of them in the next theorem. If there is an edge e in a digraph D with the property that for some pair of vertices u, v in D, e lies on every u−v path, then we say that e is a bridge in D.
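The cofactor computation of Example 3.3.2 can be checked mechanically. A minimal sketch with a hand-rolled determinant (no external libraries assumed; indices are 0-based in the code):

```python
def det(M):
    """Determinant by Laplace expansion along the first row
    (adequate for the small matrices in these examples)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cofactor(B, i, j):
    """(-1)^(i+j) times the minor of B at position (i, j), 0-indexed."""
    Bij = [row[:j] + row[j + 1:] for r, row in enumerate(B) if r != i]
    return (-1) ** (i + j) * det(Bij)

# D - A for the triangle of Example 3.3.2
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
D = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]
DA = [[D[i][j] - A[i][j] for j in range(3)] for i in range(3)]
counts = [cofactor(DA, i, i) for i in range(3)]
```

Every diagonal cofactor evaluates to 3, and the theorem asserts that the off-diagonal cofactors agree as well.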

Figure 3.4.1 A directed tree with root r.

Theorem 3.4.1 The following conditions are equivalent for the digraph T = (V, E):

1. The digraph T is a directed tree.
2. The digraph T has a root r, and for every vertex v ∈ V, there is a unique r−v path in T.
3. The digraph T has a root r with id r = 0, and for every v ≠ r, id v = 1, and there is a unique directed r−v path in T.
4. The digraph T has a root r, and for every v ∈ V, there is an r−v path, and every arc of T is a bridge.
5. The graph underlying T is connected, and in T there is a vertex r with id r = 0, while for every other vertex v ∈ V, id v = 1.

Proof. Our strategy is to show the following string of implications: 1 ⇒ 2 ⇒ 3 ⇒ 4 ⇒ 5 ⇒ 1.

To see that 1 ⇒ 2, assume that T has a root r, that there are paths from r to v for every v ∈ V, and that the underlying graph of T is a tree. Since there is an r−v path in T and since the underlying graph is a tree, this r−v path must be unique.

To see that 2 ⇒ 3, assume that T has a root r and a unique directed path from r to every vertex v ∈ V. Suppose that e = u → r is an arc of T. Since there is a directed path from r to u, the arc e completes a directed cycle containing r. But then there are at least two paths from r to r, namely the trivial path and the path obtained by following the arcs of this cycle. This contradicts the hypothesis; hence, id r = 0. Now, consider an arbitrary vertex v ≠ r. Clearly, id v ≥ 1 since there is a directed r−v path in T. Suppose that id v > 1; in fact, suppose that e_1 = v_1 → v and e_2 = v_2 → v are two arcs into v. Note that T contains a directed r−v_1 path P_1 and a directed r−v_2 path P_2. By adding the arc e_1 to P_1 and adding e_2 to the path P_2, we obtain two different

r−v paths, producing a contradiction. If v ∈ P_1 (or P_2), then the segment of P_1 from r to v and the segment of P_2 followed by the arc e_2 are two different r−v paths in T, again producing a contradiction.

To see that 3 ⇒ 4, note that the deletion of any arc e = u → v means that v is unreachable from r; hence, each arc must be a bridge.

To see that 4 ⇒ 5, suppose that T has root r and that every arc is a bridge. Since any arc into r could be deleted without changing the fact that there are r−v paths to all other vertices v, no such arc can exist. Hence, id r = 0. If v ≠ r, then id v ≥ 1 since there is a directed r−v path in T. Suppose e_1 and e_2 are two arcs into v. Then the path P from r to v cannot use both of these arcs. Thus, the unused arc can be deleted without destroying any r−v path. But this contradicts the fact that every arc is a bridge. Hence, id v = 1 for every vertex v ≠ r.

To see that 5 ⇒ 1, assume that the graph G underlying T is connected, id r = 0 and id v = 1 for all v ≠ r. To see that there is an r−v path for any vertex v, let P_G be an r−v path in G. Then P_G corresponds to a directed path in T, for otherwise some arc along P_G is oriented incorrectly and, hence, either id r > 0 or id w > 1 for some w ≠ r. Similarly, G must be acyclic, or else a cycle in G would correspond to a directed cycle in T.

A subgraph T of a digraph D is called a directed spanning tree if T is a directed tree and V(T) = V(D). In order to be able to count the number of nonidentical directed spanning trees of a digraph D, we need a useful variation of the adjacency matrix. For a digraph D with m_jk arcs from vertex j to vertex k, we define the indegree matrix as A_i(D) = A_i = [d_jk], where

d_jk = id v_j if j = k, and d_jk = −m_jk if j ≠ k.

Using the indegree matrix, we can obtain another characterization of directed trees (Tutte [10]).

Theorem 3.4.2 A digraph T = (V, E) is a directed tree with root r if, and only if, A_i(T) = [d_jk] has the following properties:
1. The entry d_jj = 0 if j = r, and the entry d_jj = 1 otherwise.
2. The minor at position (r, r) of A_i(T) has value 1.

Proof. Let T = (V, E) be a directed tree with root r. By Theorem 3.4.1, condition (1)

must be satisfied. We now assign an ordering to the vertices of T as follows:

1. The root r is numbered 1.
2. If the arc u → v is in T, then the number i assigned to u is less than the number j assigned to v.

This numbering is done by assigning the neighbors of r the numbers 2, 3, ..., (1 + od r); we then continue the numbering with the vertices at distance 2 from r, then number those at distance 3 from r, etc. The indegree matrix A_i* = [d*_jk] (with row and column ordering according to our new vertex ordering) has the following properties:

1. d*_11 = 0.
2. d*_jj = 1 for j = 2, 3, ..., |V|.
3. d*_jk = 0 if j > k.

Note that A_i* can be obtained from the original indegree matrix A_i by permuting rows and performing the same permutations on the columns. Since such permutations do not change the determinant except possibly in sign, and since each row permutation is matched by the corresponding column permutation, the two minors are equal. The value of the minor obtained from A_i* by deleting the first row and column and computing the determinant is easily seen to be 1, since the remaining submatrix is upper triangular with every diagonal entry equal to 1.

Conversely, suppose that A_i satisfies conditions (1) and (2). By (1) and Theorem 3.4.1, either T is a directed tree or its underlying graph contains a cycle C. The root r is not a member of C, since id r = 0 and id v = 1 for all other vertices. Thus, C must be of the form C: x_1, x_2, ..., x_a, x_1, where x_i ≠ r for each i = 1, 2, ..., a. Any of these vertices may have other arcs going out, but no other arcs may come into these vertices. Thus, each column of A_i corresponding to one of these vertices must contain exactly one +1 (on the main diagonal) and exactly one −1, and all other entries are 0. Each row of this submatrix either has all zero entries or contains exactly one +1 and one −1. But then the sum of the entries in these columns is zero and, hence, the minor at position (r, r) is zero. This contradicts condition (2), and the result is proved.

Corollary 3.4.1
If D is a digraph with indegree matrix A_i and the minor of A_i at position (r, r) is zero, then D is not a directed tree with root r.

Corollary 3.4.2 The number of directed spanning trees with root r of a digraph D equals the minor of A_i(D) resulting from the deletion of the rth row and column.
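Corollary 3.4.2 translates directly into a computation: build the indegree matrix and evaluate the appropriate minor. In the sketch below the 3-vertex digraph is hypothetical, chosen only for illustration, with arcs written as (tail, head):

```python
def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def rooted_tree_count(arcs, p, r):
    """Corollary 3.4.2: the number of directed spanning trees rooted
    at r equals the minor of the indegree matrix at (r, r).
    Vertices are labeled 1..p."""
    Ai = [[0] * p for _ in range(p)]
    for u, v in arcs:                 # arc u -> v
        Ai[v - 1][v - 1] += 1         # indegree id v on the diagonal
        Ai[u - 1][v - 1] -= 1         # -m_uv off the diagonal
    minor = [row[:r - 1] + row[r:]
             for i, row in enumerate(Ai) if i != r - 1]
    return det(minor)

# a hypothetical digraph (an assumption for illustration only)
arcs = [(2, 1), (1, 2), (3, 2), (1, 3), (2, 3)]
counts = [rooted_tree_count(arcs, 3, r) for r in (1, 2, 3)]
```

For this digraph the counts are 3, 2 and 1, which can be confirmed by listing the directed spanning trees by hand.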

Proof. Define the digraph D_G obtained from G in the natural manner; that is, for every edge e = uv in G, replace e by the symmetric pair of arcs u → v and v → u to form D_G. Now, let r ∈ V(D_G) (hence, r ∈ V(G)). There is a one-to-one correspondence between the set of spanning trees of G and the set of directed spanning trees of D_G rooted at r. To see that this is the case, suppose that T is a spanning tree of G and suppose that e = uv is an edge of T. In the directed tree T* rooted at r, we insert the arc u → v if d_T(u, r) < d_T(v, r), and we insert the arc v → u otherwise. Hence, for every spanning tree T of G, we obtain a distinct directed spanning tree T* of D_G. Conversely, given a directed tree T* rooted at r, we can simply ignore the directions on the arcs to obtain a spanning tree T of G.

Example 3.4.1 Determine the number of spanning trees rooted at vertex 1 of the digraph D of Figure 3.4.2.

Figure 3.4.2 A digraph D with three spanning trees rooted at 1.

We begin by constructing the indegree matrix A_i of the digraph D. The determinant resulting from the deletion of row 1 and column 1 can then be found as

det
     2 −1
    −1  2
= 4 − 1 = 3.

Hence, D has three spanning trees rooted at vertex 1. Consulting Figure 3.4.3(a), we see these spanning trees are exactly those shown. For spanning trees rooted at vertex 2, we find that there are two such trees, while the number rooted at vertex 3 is one. These are shown in Figure 3.4.3(b).

Figure 3.4.3 (a) Directed spanning trees of D rooted at 1 and (b) those rooted at 2 and at 3.

Suppose we now consider the undirected case. Let G = (V, E) be an undirected graph. We form a digraph D_G from G as follows: Let V(D_G) = V(G), and for every edge e = uv in G, we form two arcs, e_1 = u → v and e_2 = v → u. If r ∈ V(G), then there is a 1−1 correspondence between the set of spanning trees of G and the set of directed spanning trees of D_G rooted at r. To see that this is the case, let T be a spanning tree of G. If the edge e = uv is in T and if d_T(u, r) < d_T(v, r), then select e_1 for the directed spanning tree T_D; otherwise, select e_2. Conversely, given a directed spanning tree T_D of D_G, it is easy to see that simply ignoring the directions of the arcs of T_D produces a spanning tree of G. Thus, to compute the number of spanning trees of G, begin by forming D_G. If there are m_ij arcs from vertex i to vertex j, then let A_i(D_G) = [d_ij], where

d_ij = deg_G v_i if i = j, and d_ij = −m_ij if i ≠ j.

Thus, applying Corollary 3.4.2 produces the desired result; the choice of r makes no difference. As an example of this, consider the triangle of Example 3.3.2. We earlier determined that it has three spanning trees. We now verify this again, using our result on digraphs. First we form D_G.

Figure 3.4.4 D_G for the graph of Example 3.3.2.

Now form

A_i(D_G) =
     2 −1 −1
    −1  2 −1
    −1 −1  2

But note that A_i(D_G) equals the matrix D − A of Example 3.3.2 (this is no coincidence!). Hence, applying the matrix-tree theorem, we must get the number of spanning trees, with the choice of r making no difference.

Section 3.5 Optimal Directed Subgraphs

We now wish to consider a problem for digraphs similar to the minimal spanning tree problem for graphs; that is, given a weighted digraph D, we want to find a minimum weight acyclic subgraph of D. Notice that we are not restricting the subgraphs under consideration to directed trees or even to spanning subgraphs, but rather to a somewhat larger class of digraphs often called branchings. A subgraph B = (V, E*) of a digraph D = (V, E) is a branching if B is acyclic and id v ≤ 1 for every v ∈ V. If for exactly one vertex r, id r = 0, and for all other vertices v, id v = 1, then B is a directed tree with root r. For finding optimum branchings, the following idea is useful. An arc e = u → v is called critical if it satisfies the following two conditions:

1. w(e) < 0.
2. w(e) ≤ w(e′) for all other arcs e′ = z → v.
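The selection of critical arcs, one per vertex where possible, can be sketched as follows; the weighted digraph at the end is illustrative only (an assumption, not an example from the text):

```python
def critical_subgraph(arcs):
    """Select one critical arc into each vertex (minimum-branching
    convention): an arc u -> v with w(e) < 0 whose weight is minimal
    among arcs entering v.  `arcs` is a list of (u, v, w) triples;
    the chosen arc set E_c is returned."""
    best = {}
    for u, v, w in arcs:
        if w < 0 and (v not in best or w < best[v][2]):
            best[v] = (u, v, w)       # cheapest negative arc into v so far
    return set(best.values())

# an illustrative weighted digraph; here the critical subgraph happens
# to be acyclic, so by Proposition 3.5.1 it is a minimum weight branching
arcs = [(1, 2, -3), (3, 2, -1), (1, 3, -2), (2, 1, 4)]
Ec = critical_subgraph(arcs)
```

For maximum branchings one would instead keep positive arcs of maximum weight, reversing both inequalities exactly as noted below.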

Form the arc set E_c ⊆ E by selecting one critical arc entering each vertex of V, whenever such an arc exists. Using this arc set, we obtain the critical subgraph C = (V, E_c). Karp [4] showed the relationship between critical subgraphs and minimum (as well as maximum) branchings. (For maximum branchings, merely reverse the inequalities in the above definition of critical arcs.)

Proposition 3.5.1 Let C = (V, E_c) be a critical subgraph of a weighted digraph D = (V, E). Then

1. Each vertex of C is on at most one cycle.
2. If C is acyclic, then C is a minimum weight branching.

Proof. (1) Suppose that v is on two directed cycles. Then there must be a vertex with indegree at least 2, which contradicts the way we selected arcs for C.

(2) It is clear that if C is acyclic, then it is a branching. Suppose the vertex v has no negatively weighted arcs entering it in D. Then in a branching B of D, either B has no arc entering v or we can remove the arc entering v without increasing the weight of B. It is clear that C has no arc entering v. If the vertex v does have negatively weighted arcs entering it in D, then the arc entering v contained in E_c is of minimum weight. Thus, no branching can have a smaller weighted arc at v and, hence, C is a minimum weight branching.

It is possible to have many different branchings in a given digraph. In fact, some of these branchings may be very similar; that is, they may have a large number of arcs in common with one another. Indeed, it is possible that simply deleting one arc and inserting another creates a new branching. If B = (V, E_B) is a branching and if e = u → v is an arc of D that is not in B, then we say that e is eligible relative to B if there is an arc e′ ∈ E_B such that B′ = (V, (E_B − e′) ∪ {e}) is also a branching. We can characterize eligible arcs using directed paths.

Theorem 3.5.1 Let B be a branching of the digraph D and let e = u → v be an arc of D that is not in B.
Then e is eligible relative to B if, and only if, there is no directed v−u path in B.

Proof. Suppose there is a directed v−u path in B. Then, when e is inserted into B, a directed cycle is formed. The removal of the arc in B entering v (if any exists) does not destroy this cycle. Thus, e is not eligible.

Conversely, if there is no directed v−u path in B, then inserting e cannot form a directed cycle. The arc set that results from the insertion of e fails to be a branching only if there is already an arc entering v. Removing any such arc ensures that the result is a branching and, hence, e is eligible.

There is a strong tie between the set of eligible arcs and the arc set of a branching. In fact, we can show that there is a time when they are nearly the same.

Theorem 3.5.2 Let B = (V, E_B) be a branching and C a directed circuit of the digraph D. If no arc of E(C) − E_B is eligible relative to B, then |E(C) − E_B| = 1.

Proof. Since B is acyclic, it contains no circuits. Thus, E(C) − E_B ≠ ∅. Let e_1, e_2, ..., e_k be the arcs of E(C) − E_B in the order in which they appear in C. Say

C: u_1, e_1, v_1, P_1, u_2, e_2, v_2, P_2, ..., u_k, e_k, v_k, P_k, u_1

is the circuit, where the P_i's are the directed paths lying in both B and C. Since e_1 is not eligible relative to B, by Theorem 3.5.1 there must be a directed path P* in B from v_1 to u_1. This path must leave P_1 at some point, enter v_k and continue on to u_1; moreover, P* cannot enter the path P_k after v_k, for then B would have two arcs entering the same vertex. Similarly, e_j is not eligible relative to B and, thus, there must be a directed path from v_j to u_j in B, and this path must leave P_j at some point and enter P_{j−1} at v_{j−1}. But if k ≥ 2, we now see that B contains a directed circuit: from v_1, along a section of P_1, to a path leading to v_k, via part of P_k, to a path leading to v_{k−1}, and so on, until it finally returns to v_1. Since B is circuit-free, k = 1.

Theorem 3.5.3 Let C = (V, E_C) be a critical subgraph of the digraph D = (V, E). For every directed circuit C* in C, there exists a maximum branching B = (V, E_B) such that |E(C*) − E_B| = 1.

Proof. Among all maximum branchings of D, let B be one that contains the maximum number of arcs of C. Let e = u → v ∈ E_C − E_B.
If e is eligible, then (E_B ∪ {e}) − {e′ : e′ enters v in B} determines another maximum branching (since e lies in the critical subgraph, it is an arc of maximum weight entering v, so the exchange cannot decrease the weight), and this branching contains more arcs of C than does B; thus, we have a contradiction to our choice of B. Hence, no arc of E_C − E_B is eligible relative to B and, by the last theorem, |E(C*) − E_B| = 1 for every directed circuit C* of C.

Thus, we see that in trying to construct maximum branchings, we can restrict our attention to those branchings which have all arcs (except one per circuit) in common with a critical subgraph C = (V, E_C). Edmonds [2] realized this and developed an algorithm to construct maximum branchings. His approach is as follows: Traverse D, examining vertices and arcs. The vertices are placed in the set B_v as they are examined, and the arcs are placed in the set B_e if they have been selected for a branching. The set B_e is always the arc set of a branching. Examining a vertex simply means selecting the critical arc e into that vertex, if one exists. We then check to see whether e forms a circuit with the arcs already in B_e. If e does not form a circuit, then it is inserted in B_e and we begin examining another vertex. If e does form a circuit, then we "restructure" D by shrinking all the vertices of the circuit to a single new vertex and assigning new weights to the arcs that are incident with this new vertex. We then continue the vertex examination until all vertices of the final restructured digraph have been examined. The final restructured digraph contains these new vertices, while the vertices and arcs of the circuits corresponding to the new vertices have been removed. The set B_e then contains the arcs of a maximum branching for this restructured digraph. The reverse process of replacing the new vertices by the circuits that they represent then begins. The arcs placed in B_e during this process are chosen so that B_e remains a maximum branching for the digraph at hand. The critical phase of the algorithm is the rule for assigning weights to arcs when circuits are collapsed to single vertices. This rule effectively forces our choice of arcs for B_e when the reconstruction is performed. Let C_1, C_2, …, C_k be the circuits of the critical graph (V, H). Let e_i^m be an arc of minimum weight on C_i and, for a vertex v of C_i, let ē be the arc of C_i which enters v.
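With ē and e_i^m as just defined, each arc entering the collapsed circuit from outside receives the weight w(e) − w(ē) + w(e_i^m). A minimal sketch of this reweighting step (the dictionary-based representation and the function name are assumptions for illustration):

```python
def reweight_entering_arcs(weights, circuit):
    """Sketch of the reweighting rule used when a circuit C_i is shrunk:
    an arc e = (u, v) entering the circuit at v gets the new weight
        w(e) - w(e_bar) + w(e_min),
    where e_bar is the circuit arc entering v and e_min is a minimum
    weight arc of the circuit.  `weights` maps arcs (tail, head) to
    weights; `circuit` lists the arcs of C_i."""
    on_circuit = {u for u, v in circuit} | {v for u, v in circuit}
    w_min = min(weights[a] for a in circuit)
    entering = {v: (u, v) for (u, v) in circuit}   # circuit arc entering v
    new_weights = {}
    for e, w in weights.items():
        u, v = e
        if e in circuit:
            continue                    # circuit arcs disappear with the shrink
        if v in on_circuit and u not in on_circuit:
            new_weights[e] = w - weights[entering[v]] + w_min
        else:
            new_weights[e] = w          # all other arcs keep their weight
    return new_weights

# Two-arc circuit a <-> b, with an outside vertex x pointing into it.
w = {("a", "b"): 5, ("b", "a"): 4, ("x", "a"): 3, ("x", "b"): 3}
print(reweight_entering_arcs(w, [("a", "b"), ("b", "a")]))
# {('x', 'a'): 3, ('x', 'b'): 2}
```

The arc into b is penalized more heavily than the arc into a because choosing it forces the algorithm to discard the heavier circuit arc a → b during reconstruction.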
For an arc e = u → v entering C_i from outside, define the new weight w̄ as:

w̄(e) = w(e) − w(ē) + w(e_i^m).

Edmonds's algorithm is now presented.

Algorithm 3.5.1 Edmonds's Maximum Branching Algorithm.
Input: A weighted digraph D = (V, E).
Output: The set B_e of arcs in a maximum branching.
Method: The digraph is shrunk around circuits. Here G_i = (V_i, E_i) denotes the current restructured digraph, with G_0 = D.
1. B_v ← ∅, B_e ← ∅ and i ← 0.
2. If B_v = V_i, then go to step 13.
3. For some v ∉ B_v with v ∈ V_i, do steps 4–12:
4. B_v ← B_v ∪ {v}.

5. Find an arc e = x → v in E_i of maximum weight.
6. If w(e) ≤ 0, then go to step 2.
7. If B_e ∪ {e} contains a circuit C_i, then do steps 8–10:
8. i ← i + 1.
9. Construct G_i by shrinking C_i to a new vertex u_i.
10. Update B_e and B_v and the necessary arc weights.
11. B_e ← B_e ∪ {e}.
12. Go to step 2.
13. While i ≥ 1, do steps 14–17:
14. Reconstruct G_{i−1} and rename some arcs in B_e.
15. If u_i was a root of an out-tree in B_e, then B_e ← B_e ∪ {e : e ∈ C_i and e ≠ e_i^m};
16. else B_e ← B_e ∪ {e : e ∈ C_i and e ≠ ē_i}.
17. i ← i − 1.
18. w(B) ← Σ_{e ∈ B_e} w(e).

Section 3.6 Binary Trees

One of the principal uses of trees in computer science is in data representation. We use trees to depict the relationship between pieces of information, especially when the usual linear (list-oriented) representation seems inadequate. Tree representations are even more useful when slightly more structure is given to the tree. In this section, all trees will be rooted. We will consider the trees to be leveled; that is, the root r will constitute level 0, the neighbors of r will constitute level 1, the neighbors of the vertices on level 1 that have not yet been placed in a level will constitute level 2, etc. With this structure, if v is on level k, the neighbors of v on level k + 1 are called the children (or descendants) of v, while the neighbor of v on level k − 1 (if it exists) is called the parent (or father, or predecessor) of v. A binary tree is a rooted, leveled tree in which any vertex has at most two children. We refer to the descendants of v as the left child and right child of v. In any drawing of this tree, we always place the root at the top and the vertices of level 1 below the root, etc. The left child of v is always placed to the left of v in the drawing and the right child of v is

always placed to the right of v. With this orientation of the children, these trees are said to be ordered. If every vertex of a binary tree has either two children or no children, then we say the tree is a full binary tree.

Figure 3.6.1 A binary tree.

The additional structure that we have imposed in distinguishing between the left and right child means that we sometimes obtain distinct binary trees even when ordinary graph isomorphism holds. The trees of Figure 3.6.2 are examples of this situation.

Figure 3.6.2 Two different binary trees of order 5.

The height of a leveled tree is the length of a longest path from the root to a leaf or, alternately, the largest level number of any vertex. As an example of the use of binary trees in data representation, suppose that we wish to represent the arithmetic expression a + 3b = a + (3 ∗ b). Since this expression involves binary operations, it is natural to try to use binary trees to depict these relations. For example, the quantity 3b represents the number 3 multiplied by the value of b. We can represent this relationship in a binary tree as shown in Figure 3.6.3.

Figure 3.6.3 Representing the arithmetic expression 3b.
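In a program, such an ordered binary tree is naturally represented by a small vertex record holding the data and the two (possibly empty) children; the class below is an illustrative sketch, not notation from the text:

```python
class Node:
    """A vertex of an ordered binary tree."""
    def __init__(self, data, left=None, right=None):
        self.data = data      # the information stored at this vertex
        self.left = left      # left child, or None
        self.right = right    # right child, or None

# The tree representing 3b: the operator * is the root and its
# operands 3 and b are its left and right children.
times = Node("*", Node("3"), Node("b"))
print(times.data, times.left.data, times.right.data)   # * 3 b
```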

Now the quantity represented by 3b is to be added to the quantity represented by a. We repeat the tree depiction of this relationship to obtain another binary tree (see Figure 3.6.4).

Figure 3.6.4 The expression a + 3b represented using a binary tree.

Once we have a representation for data using a tree, it is also necessary to recover this information and its inherent relationships. This usually involves examining the data contained in (or represented within) the vertices of the tree. This means we must somehow visit the vertices of the tree in an order that allows us not only to retrieve the necessary information but also to understand how these data are related. This visiting process is called a tree traversal, and it is done with the aid of the tree structure and a particular set of rules for deciding which neighbor we visit next. One such traversal, the inorder traversal (or symmetric order), is now presented.

Algorithm 3.6.1 Inorder Traversal of a Binary Tree.
Input: A binary tree T = (V, E) with root r.
Output: An ordering of the vertices of T (that is, the data contained within these vertices, received in the order of the vertices).
Method: Here "visit" the vertex simply means perform the operation of your choice on the data contained in the vertex. The traversal is begun with the call inorder(r).

procedure inorder(v)
1. If T ≠ ∅, then
2. inorder(left child of v)
3. visit the present vertex v
4. inorder(right child of v)
end
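Algorithm 3.6.1 translates directly into a short recursive routine. The sketch below reproduces the traversal; the vertex class and the generator style are implementation choices, not part of the text:

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def inorder(v):
    """Algorithm 3.6.1: traverse the left subtree, visit the
    vertex, then traverse the right subtree."""
    if v is not None:
        yield from inorder(v.left)    # step 2
        yield v.data                  # step 3 ("visit")
        yield from inorder(v.right)   # step 4

# The expression tree for a + 3*b: + at the root, a on the left,
# and the subtree for 3*b on the right.
root = Node("+", Node("a"), Node("*", Node("3"), Node("b")))
print(" ".join(inorder(root)))        # a + 3 * b
```

Here "visit" simply yields the vertex's data; any other operation could be substituted at step 3.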

This recursive algorithm amounts to performing the following steps at each vertex:
1. Go to the left child if possible.
2. Visit the vertex.
3. Go to the right child if possible.

Thus, on first reaching a vertex v, we immediately proceed to its left child if one exists. We only visit v after we have completed all three operations on all vertices of the subtree rooted at the left child of v. Applying this algorithm to the tree of Figure 3.6.4 and assuming that "visit the vertex" simply means write down the data contained in the vertex, we obtain the following traversal. First, we begin at the root vertex and immediately go to its left child. On reaching this vertex, we immediately attempt to go to its left child. However, since it has no left child, we then "visit" this vertex; hence, we write the data a. We now attempt to visit the right child of this vertex; again, the attempt fails. We have now completed all operations on this vertex, so we backtrack (or recurse) to the parent of vertex a. Thus, we are back at the root vertex. Having already performed step 1 at this vertex, we now visit this vertex, writing the data +. Next, we visit the right child of the root. Following the three steps, we immediately go to its left child, namely vertex 3. Since it has no left child, we visit it, writing its data 3. Then we attempt to go to the right child (which fails), and so we recurse to its parent. We now write the data of this vertex, namely ∗. We proceed to the right child, attempt to go to its left child, write out its data b, attempt to go to the right child, recurse to ∗ and recurse to the root. Having completed all three instructions at every vertex, the algorithm halts. Notice that the data were written as a + 3 ∗ b. We have recovered the expression.

Two other useful and closely related traversal algorithms are the preorder and postorder traversals. The only difference between these traversals is the order in which we apply the three steps.
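These two variations, described next, can be sketched in the same recursive style as the inorder routine; only the position of the visit changes (again, the vertex class is an implementation choice, not notation from the text):

```python
class Node:
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def preorder(v):
    """Visit the vertex, then the left subtree, then the right subtree."""
    if v is not None:
        yield v.data
        yield from preorder(v.left)
        yield from preorder(v.right)

def postorder(v):
    """Traverse the left subtree, then the right subtree, then visit."""
    if v is not None:
        yield from postorder(v.left)
        yield from postorder(v.right)
        yield v.data

root = Node("+", Node("a"), Node("*", Node("3"), Node("b")))
print(" ".join(preorder(root)))    # + a * 3 b
print(" ".join(postorder(root)))   # a 3 b * +
```

On the expression tree these two orders produce the familiar prefix and postfix (reverse Polish) forms of a + 3 ∗ b.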
In the preorder traversal, we visit the vertex, go to the left child and then go to the right child. In the postorder traversal, we go to the left child, go to the right child and, finally, visit the vertex. Can you write the preorder and postorder algorithms in recursive form?

Another interesting application of binary trees concerns the transmission of coded data. If we are sending a message across some medium, such as an electronic cable, the characters of the message are sent one at a time, in some coded form. Usually, that form is a binary number (that is, a sequence of 0s and 1s). Since speed is often important, it is helpful to shorten the encoding scheme as much as possible,

while still maintaining the ability to distinguish between the characters. An algorithm for determining binary codes for characters, based on the frequency of use of these characters, was developed by Huffman [3]. Our aim is to assign very short code numbers to frequently used characters, thereby attempting to reduce the overall length of the binary string that represents the message. As an example of Huffman's construction, suppose that our message is composed of characters from the set {a, b, c, d, e, f} and that the corresponding frequencies of these characters are (13, 6, 7, 12, 18, 10). Huffman's technique is to build a binary tree based on the frequency of use of the characters. More frequently used characters appear closer to the root of this tree, and less frequently used characters appear in lower levels. All characters (and their corresponding frequencies) are initially represented as the root vertices of separate trivial trees. Huffman's algorithm uses these trees to build other binary trees, gradually merging the intermediate trees into one binary tree. The vertices representing the characters of our set will be leaves of the Huffman tree. Each internal vertex of the tree will represent the sum of the frequencies of the leaves of the subtree rooted at that vertex. For example, suppose we assign each of the characters of our message and its corresponding frequency to the root vertex of a trivial tree. From this collection of trees, select the two trees whose roots have the smallest frequencies. In case of ties, randomly select among the trees having the smallest frequencies. In this example the frequencies selected are 6 and 7. Make these two root vertices the left and right children of a new vertex, with this new vertex having frequency 13 (Figure 3.6.5). Return this new tree to our collection of trees.

Figure 3.6.5 The first stage of Huffman's algorithm.
Again, we choose from the collection the two trees whose roots have the lowest frequencies, 10 and 12, and again form a larger tree by inserting a new vertex whose frequency is 22 and that has vertex 10 and vertex 12 as its left and right children, respectively. Again, return this tree to the collection. Repeating this process a third time, we select the vertices 13 and 13. Following the construction, we obtain the tree of Figure 3.6.6.
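The merging stage can be replayed with a priority queue. The sketch below uses Python's heapq and assumes the example frequencies (13, 6, 7, 12, 18, 10); it records the frequency of each internal vertex as it is created:

```python
import heapq

def huffman_merge_frequencies(freqs):
    """Repeatedly combine the two smallest root frequencies, as in
    Huffman's construction, returning each new internal vertex's
    frequency in order of creation."""
    heap = list(freqs)
    heapq.heapify(heap)
    created = []
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        created.append(a + b)
        heapq.heappush(heap, a + b)   # the merged tree rejoins the collection
    return created

# The example frequencies for a, b, c, d, e, f:
print(huffman_merge_frequencies([13, 6, 7, 12, 18, 10]))
# [13, 22, 26, 40, 66]
```

The sequence of internal frequencies 13, 22, 26, 40, 66 matches the merges carried out above, ending at the root of the final tree.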

We return this tree to the collection and again repeat this process. The next two roots selected have frequencies 18 and 22. We build the new tree and return it to the collection. Finally, only the roots 26 and 40 remain. We select them and build the tree shown in Figure 3.6.7. In addition to constructing the tree, we also place a value of 0 on the edge from any vertex to its left child and a value of 1 on the edge from any vertex to its right child. Note that this choice is arbitrary and could easily be reversed. We can read the code for each of the characters by following the unique path from the root 66 to the leaf representing the character of interest. The entire code is given in Table 3.6.1.

Figure 3.6.6 The new tree formed.

Figure 3.6.7 The final Huffman tree.

Next, suppose we are presented with an encoded message string, say, for example, the string 010111011000001010110. Assuming this encoded string was created from the Huffman tree of our example, we can decode it by again using the Huffman tree. Beginning at the root, we use the next digit of the message to decide which edge of the tree we will follow. Initially, we follow the edge from vertex 66 to vertex 26, then the edge to vertex 13 and then the edge to vertex 6. Since we are now at a leaf of the Huffman tree, the first character of the

message is the letter b, as it corresponds to this leaf.

character   code
a           00
b           010
c           011
d           111
e           10
f           110

Table 3.6.1 The Huffman codes for the example character set.

Return to the root and repeat this process on the remaining digits of the message. The next three digits are 111, the code that corresponds to d, followed in turn by 011 (c), 00 (a), 00 (a), 010 (b), 10 (e) and 110 (f). Thus, the encoded message was bdcaabef.

Algorithm 3.6.2 Construction of a Huffman Tree.
Input: Ordered frequencies (f_1, f_2, …, f_n) corresponding to the characters (a_1, a_2, …, a_n).
Output: A Huffman tree with leaves corresponding to the frequencies above.
Method: From a collection of trees, select the two root vertices corresponding to the two smallest frequencies. Insert a new vertex and make the two selected vertices the children of this new vertex. Return this tree to the collection of trees and repeat this process until only one tree remains.
1. If n = 2, then halt, having formed the tree whose root f_1 + f_2 has left child f_1 and right child f_2.
2. If n > 2, then reorder the frequencies so that f_1 and f_2 are the two smallest frequencies. Let T_1 be the Huffman tree resulting from the algorithm being recursively applied to the frequencies (f_1 + f_2, f_3, …, f_n), and let T_2 be the Huffman tree that results from calling the algorithm recursively on the frequencies (f_1, f_2). Halt the algorithm with the tree that results from substituting T_2 for

some leaf of T_1 (namely, the leaf having value f_1 + f_2).

Note that the algorithm does not produce a unique tree. If several frequencies are equal, their positions in the frequency list and, hence, their positions in the tree can vary. Can you find a different Huffman tree for the example data? What conditions would produce a unique Huffman tree?

Huffman trees are, in a sense, the optimal structure for binary encoding. That is, we would like to show that the Huffman code minimizes the length of encoded messages whose characters and frequencies match those used in the construction of the Huffman tree. Our measure of the efficiency of the code is called the weighted path length of the coding tree and is defined to be

Σ_{i=1}^{n} f_i l_i,

where f_i is the frequency of the ith letter and l_i is the length of the path from the root of the Huffman tree to the vertex corresponding to the ith letter. The weighted path length is a reasonable measure to minimize since, when this value is divided by Σ_{i=1}^{n} f_i (that is, the total number of characters being encoded), we obtain the average length of the encoding per character.

Theorem 3.6.1 A Huffman tree for the frequencies (f_1, f_2, …, f_n) has minimum weighted path length among all full binary trees with leaves f_1, f_2, …, f_n.

Proof. We proceed by induction on n, the number of frequencies. If n = 2, the weighted path length of any full binary tree is f_1 + f_2, which is exactly the value we obtain from the algorithm. Now suppose that n ≥ 3 and assume that the result holds for all Huffman trees with fewer than n leaves. Reorder the frequencies (if necessary) so that f_1 ≤ f_2 ≤ ⋯ ≤ f_n. Since there are only finitely many full binary trees with n leaves, there must be one, call it T, with minimum weighted path length. Let x be an internal vertex of T whose distance from the root r is maximum (among the internal vertices).
If f_1 and f_2 are not the frequencies of the leaves of T that are children of x, then we can exchange the frequencies of the children of x, say f_i and f_j, with f_1 and f_2 without increasing the weighted path length of T. This follows since f_i ≥ f_1 and f_j ≥ f_2, and the interchange moves f_i closer to the root and f_1 farther away from the root. But T has minimum weighted path length and, thus, its value cannot decrease. Hence, there must be a tree with minimum weighted path length that does have f_1 and f_2 as the frequencies of the children of an internal vertex at maximum distance from the root (again, this
