
Module 8: Trees and Graphs

Theme 1: Basic Properties of Trees

A (rooted) tree is a finite set of nodes such that

1. there is a specially designated node called the root;
2. the remaining nodes are partitioned into m >= 0 disjoint sets T1, ..., Tm such that each of these sets is itself a tree. The sets T1, ..., Tm are called subtrees of the root, and m is the degree of the root.

The above is an example of a recursive definition, as we have already seen in previous modules.

Example 1: In Figure 1 we show a tree with three subtrees, among them T1 and T3, rooted at the children of the root. We now introduce some terminology for trees:

A tree consists of nodes or vertices that store information and often are labeled by a number or a letter. In Figure 1 the nodes are labeled by letters.

An edge is an unordered pair of nodes (usually drawn as a segment connecting two nodes). The segment between a parent and any of its children in Figure 1 is an edge.

The number of subtrees of a node is called its degree. For example, the root of the tree in Figure 1 is of degree three, while a node with two subtrees is of degree two. The maximum degree of all nodes is called the degree of the tree. A leaf or a terminal node is a node of degree zero.

A node that is not a leaf is called an interior node or an internal node. Roots of subtrees of a node v are called the children of v, while v is known as the parent of its children.

Children of the same parent are called siblings.

The ancestors of a node are all the nodes along the path from the root to that node.

The descendants of a node are all the nodes along the paths from that node down to the terminal nodes below it.


Figure 1: Example of a tree.

The level of a node is defined by letting the root be at level zero1, while a node at level k has its children at level k + 1. For example, the root in Figure 1 is at level zero, its children are at level one, their children are at level two, and the remaining nodes are at level three.

The depth of a node is its level number. The height of a tree is the maximum level of any node in this tree. The height of the tree presented in Figure 1 is three.

A tree is called a k-ary tree if every internal node has no more than k children. A tree is called a full k-ary tree if every internal node has exactly k children. A complete tree is a full tree up to the last but one level, that is, the last level of such a tree is not full. A binary tree is a tree with k = 2. The tree in Figure 1 is a 3-ary tree, which is neither a full tree nor a complete tree.

An ordered rooted tree is a rooted tree where the children of each internal node are ordered. We usually order the subtrees from left to right. Therefore, for a binary (ordered) tree the subtrees are called the left subtree and the right subtree. A forest is a set of disjoint trees.

Now we study some basic properties of trees. We start with a simple one.

Theorem 1. A tree with n nodes has n - 1 edges.

Proof. Every node except the root has exactly one incoming edge (the edge from its parent). Since there are n - 1 nodes other than the root, there are n - 1 edges in a tree.
1 Some authors prefer to set the root to be at level one.

The next result summarizes our basic knowledge about the maximum number of nodes and the height.

Theorem 2. Let us consider a binary tree.
(i) The maximum number of nodes at level i is 2^i for i >= 0.
(ii) The maximum number of all nodes in a tree of height h is 2^(h+1) - 1.
(iii) If a binary tree of height h has n nodes, then h >= log2(n + 1) - 1.
(iv) If a binary tree of height h has L leaves, then h >= log2 L.

Proof. We first prove (i) by induction. It is easy to see that it is true for i = 0, since there is only one node (the root) at level zero. Now, by the induction hypothesis, assume there are no more than 2^i nodes at level i. We must prove that at level i + 1 there are no more than 2^(i+1) nodes. Indeed, every node at level i may have no more than two children. Since there are at most 2^i nodes on level i, we must have no more than 2 * 2^i = 2^(i+1) nodes at level i + 1. By mathematical induction we prove (i).

To prove (ii) we use (i) and the summation formula for the geometric progression. Since every level i has at most 2^i nodes and there are no more than h + 1 levels, the total number of nodes cannot exceed

2^0 + 2^1 + ... + 2^h = 2^(h+1) - 1,

which proves (ii).

Now we prove (iii). If a tree has height h, then the maximum number of nodes by (ii) is 2^(h+1) - 1, which must be at least as big as n, that is, n <= 2^(h+1) - 1. This implies that h >= log2(n + 1) - 1, and completes the proof of (iii). Part (iv) can be proved in exactly the same manner (see also Theorem 3 below).

Exercise 8A: Consider an m-ary tree (i.e., nodes have degree at most m). Show that at level i there are at most m^i nodes. Conclude that the total number of nodes in a tree of height h is at most (m^(h+1) - 1)/(m - 1).
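The bounds of Theorem 2 are easy to check numerically. The sketch below (Python; not part of the original notes) verifies (i)-(iii) for perfect binary trees of small heights.

```python
import math

def max_nodes_at_level(i):
    # Theorem 2(i): a binary tree has at most 2^i nodes at level i
    return 2 ** i

def max_nodes(h):
    # Theorem 2(ii): at most 2^(h+1) - 1 nodes in a tree of height h
    return 2 ** (h + 1) - 1

for h in range(10):
    n = max_nodes(h)
    # (ii) is the sum of (i) over levels 0..h (geometric progression)
    assert n == sum(max_nodes_at_level(i) for i in range(h + 1))
    # (iii): a binary tree with n nodes has height >= log2(n+1) - 1
    assert h >= math.log2(n + 1) - 1 - 1e-9

print("Theorem 2 bounds verified for h = 0..9")
```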

Finally, we prove one result concerning a relationship between the number of leaves and the number of nodes of higher degrees.

Figure 2: Illustration to Theorem 3.

Theorem 3. Let us consider a nonempty binary tree with n0 leaves and n2 nodes of degree two. Then

n0 = n2 + 1.

Proof. Let n be the total number of all nodes, and n1 the number of nodes of degree one. Clearly

n = n0 + n1 + n2.

On the other hand, if e is the number of edges, then, as already observed in Theorem 1, we have e = n - 1. But every edge goes from a node to one of its children, so counting edges by their upper endpoints also gives

e = n1 + 2n2.

Comparing the last two displayed equations we obtain n0 + n1 + n2 - 1 = n1 + 2n2, hence n0 = n2 + 1, which proves our theorem. In Figure 2 the reader can verify that this relationship holds, as predicted by Theorem 3.
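Theorem 3 can be checked directly by counting both kinds of nodes. The sketch below (Python, using a hypothetical nested-tuple representation of a binary tree: a leaf is any non-tuple value, an internal node is a pair whose missing child is None; none of this is from the notes) confirms n0 = n2 + 1 on a small tree.

```python
def count(node):
    """Return (leaves, degree_two_nodes) for a binary tree given as
    nested tuples: a leaf is any non-tuple value, an internal node is
    a pair (left, right) where a missing child is None."""
    if not isinstance(node, tuple):
        return (1, 0)              # a leaf has degree zero
    left, right = node
    if left is None or right is None:
        child = left if left is not None else right
        return count(child)        # a degree-one node adds to neither count
    l1, d1 = count(left)
    l2, d2 = count(right)
    return (l1 + l2, d1 + d2 + 1)  # this node has degree two

# a small example tree containing degree-one and degree-two nodes
tree = ((("a", None), "b"), ("c", ("d", "e")))
leaves, deg2 = count(tree)
assert leaves == deg2 + 1          # Theorem 3: n0 = n2 + 1
print(leaves, deg2)                # 5 4
```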

Theme 2: Tree Traversals

Trees are often used to store information. In order to retrieve such information we need a procedure to visit all nodes of a tree. We describe here three such procedures, called inorder, postorder and preorder traversals. Throughout this section we assume that trees are ordered trees (ordered from left to right).

Definition. Let T be an (ordered) rooted tree.

1. If T is null, then the empty list is the preorder, inorder and postorder traversal of T.
2. If T consists of a single node, then that node is the preorder, inorder and postorder traversal of T.
3. Otherwise, let T1, T2, ..., Tk be the nonempty subtrees of the root.





Figure 3: Illustration to preorder, inorder and postorder traversals.

The preorder traversal of T is the following: the root of T, followed by the nodes of T1 in preorder, then the nodes of T2 in preorder, and so on, up to the nodes of Tk in preorder (cf. Figure 3).

The inorder traversal of T is the following: the nodes of T1 in inorder, followed by the root of T, followed by the nodes of T2, ..., Tk in inorder (cf. Figure 3).

The postorder traversal of T is the following: the nodes of T1 in postorder, followed by the nodes of T2 in postorder, and so on, followed by the root of T (cf. Figure 3).

Example 2: Let us consider the tree in Figure 1, whose root has three subtrees. The preorder traversal of T lists the root first; then it visits the first subtree (listing its root and then visiting its subtrees in the same manner), then the second subtree and its subtrees, and finally the third subtree and its subtrees.

The inorder traversal of T begins with the first subtree: we must first traverse it in inorder, which in turn must start with its own leftmost subtree. We descend in this way until we reach a single node, which we list. Then we move backward and list the root of that subtree, continue with its right subtree, and so on; only after the whole first subtree has been listed do we list the root of T, and then we continue in the same manner with the remaining subtrees.

Finally, the postorder traversal of T lists the nodes of the first subtree in postorder, then the nodes of the second and third subtrees in postorder, and the root of T at the very end.
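The three traversals translate directly into recursive procedures. The sketch below (Python, with a hypothetical Node class; the tree built here is an illustration, not the tree of Figure 1) lists a small binary tree in all three orders.

```python
class Node:
    def __init__(self, label, left=None, right=None):
        self.label, self.left, self.right = label, left, right

def preorder(t):
    # root first, then left subtree, then right subtree
    return [] if t is None else [t.label] + preorder(t.left) + preorder(t.right)

def inorder(t):
    # left subtree, then root, then right subtree
    return [] if t is None else inorder(t.left) + [t.label] + inorder(t.right)

def postorder(t):
    # left subtree, then right subtree, then root last
    return [] if t is None else postorder(t.left) + postorder(t.right) + [t.label]

# an illustrative tree:      A
#                           / \
#                          B   C
#                         / \   \
#                        D   E   F
t = Node("A", Node("B", Node("D"), Node("E")), Node("C", None, Node("F")))
print(preorder(t))   # ['A', 'B', 'D', 'E', 'C', 'F']
print(inorder(t))    # ['D', 'B', 'E', 'A', 'C', 'F']
print(postorder(t))  # ['D', 'E', 'B', 'F', 'C', 'A']
```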

a = 0, b = 10, c = 110, d = 1110, e = 1111

Figure 4: A tree representing a prefix code.

Theme 3: Applications of Trees

We discuss here two applications of trees, namely building optimal prefix codes (known as Huffman codes) and evaluating arithmetic expressions.

Huffman Code
Coding is a mapping from a set of letters (symbols, characters) to a set of binary sequences. For example, we can set a = 0, b = 1 and c = 01 (however, as we shall see, this is not a good code).

But why encode? The main reason is to find a (one-to-one) coding such that the length of the coded message is as short as possible (this is called data compression). However, not every coding is good: if we are not careful, we may produce an encoded message that we won't be able to decode uniquely. For example, with an encoding like the one above, the received message 01 can be decoded in more than one way: either as the two letters ab or as the single letter c.

In order to avoid such decoding problems, we need to construct special codes known as prefix codes. A code is called a prefix code if the bit string for a letter never occurs as the first part of the bit string for another letter. In other words, no codeword is a prefix of another codeword. (By a prefix of the string a1 a2 ... an we mean a1 a2 ... ak for some k <= n.)

It is easy to construct prefix codes using binary trees. Let us assume that we want to encode a subset of the English characters. We build a binary tree with leaves labeled by these characters, and we label the edges of the tree by the bits 0 and 1: say, the edge to a left child of a node is labeled by 0, while the edge to a right child is labeled by 1. The code associated with a letter is the sequence of labels on the path from the root to the leaf containing this character. We observe that by assigning characters to leaves only, we assure the prefix code property (if we labeled an internal node by a character, then the path, that is, the code of this node, would be a prefix of the codes of all characters assigned to nodes below this internal node).

Example 3: In Figure 4 we draw a tree and the associated prefix code. In particular, we find that a = 0, b = 10, c = 110, d = 1110 and e = 1111. Indeed, no codeword is a prefix of another codeword. Therefore a message built from these codewords, for instance 010110, can be uniquely decoded (here as abc).
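Decoding a prefix code needs no lookahead: read bits until the bits read so far match a codeword, emit that letter, and start over. A sketch in Python (the code table is the one from Figure 4; the helper functions are illustrations, not from the notes):

```python
CODE = {"a": "0", "b": "10", "c": "110", "d": "1110", "e": "1111"}
DECODE = {bits: ch for ch, bits in CODE.items()}

def encode(text):
    return "".join(CODE[ch] for ch in text)

def decode(bits):
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in DECODE:            # a codeword is complete; since no
            out.append(DECODE[cur])  # codeword is a prefix of another,
            cur = ""                 # this match is unambiguous
    assert cur == "", "leftover bits: not a valid codeword sequence"
    return "".join(out)

msg = "abcde"
assert decode(encode(msg)) == msg
print(encode(msg))  # 01011011101111
```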

It should be clear that there are many ways of encoding a message. But intuitively, one should assign shorter codewords to more frequent symbols in order to get, on average, as short a code as possible. We illustrate this in the following example.

Example 4: Let {a, b, c, d, e} be the set of symbols that we want to encode. The probabilities of these symbols and two different codes are shown in Table 1. Observe that both codes are prefix codes. Let us now compute the average code lengths L1 and L2 for both codes. We have

L1 = 3(0.12 + 0.40 + 0.15 + 0.08 + 0.25) = 3.00,
L2 = 3(0.12) + 2(0.40) + 2(0.15) + 3(0.08) + 2(0.25) = 2.20.

Thus the average length of the second code is shorter, and if there is no other constraint this code should be used.

Table 1: Two prefix codes.

Symbol       a      b      c      d      e
Probability  0.12   0.40   0.15   0.08   0.25
Code 1       000    001    010    011    100
Code 2       000    11     01     001    10

Let us now consider the general case. Let s1, ..., sn be symbols with the corresponding probabilities p1, ..., pn. For a code C, the average code length L(C) is defined as

L(C) = p1 l1 + p2 l2 + ... + pn ln,

where li is the length of the codeword assigned to si. Indeed, as discussed in Module 7, to compute the average length of the code we must compute the sum of the products frequency x length. We want to find a code such that the average length L(C) is as short as possible.

The above is an example of a simple optimization problem: we are looking for a code (a mapping from a set of symbols to a set of binary strings) such that the average code length L(C) is the smallest. It turns out that this problem is easy to solve. In 1952 Huffman proposed the following solution:

1. Select the two symbols, say x and y, that have the lowest probabilities, and replace them by a single (imaginary) symbol, say z, whose probability is the sum of the probabilities of x and y.

2. Apply Step 1 recursively until you exhaust all symbols (the final imaginary symbol has total probability equal to one).

3. The code for each original symbol is obtained by taking the code for the imaginary symbol z (defined in Step 1) with 0 appended for the code of x and 1 appended for the code of y.

This procedure, which can be proved to be optimal, is best implemented on trees, as explained in the following example.

Example 5: We find the best code for the symbols with the probabilities defined in Table 1 of the previous example. The construction is shown in Figure 5. We first observe that symbols d and a have the smallest probabilities (0.08 and 0.12). So we join them, building a small tree with a new node of total probability 0.20. Now we have a new set of symbols with probabilities 0.40, 0.15, 0.25 and 0.20, respectively. We apply the same algorithm as before: we choose the two symbols with the smallest probabilities (a tie is broken arbitrarily). In our case these happen to be c (0.15) and the new node (0.20). We build a new node of probability 0.35 and construct a tree as shown. Continuing this way, we end up with the tree shown in the figure. Now we read off the codes; with the edge labeling of Figure 5 one obtains

b = 0, e = 10, c = 110, d = 1110, a = 1111.

This is our Huffman code, with the average code length

L = 1(0.40) + 2(0.25) + 3(0.15) + 4(0.08) + 4(0.12) = 2.15.

Observe that this code is better than the other two codes discussed in the previous example.
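Huffman's procedure maps naturally onto a priority queue of subtrees: repeatedly remove the two subtrees of smallest probability and merge them. A sketch in Python using the standard heapq module (tie-breaking, and hence the exact codewords, may differ from Figure 5, but the average code length does not):

```python
import heapq

def huffman(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> codeword."""
    # heap entries are (probability, tie_breaker, tree); a tree is either
    # a symbol (leaf) or a pair of trees (internal node)
    heap = [(p, i, s) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, t1 = heapq.heappop(heap)    # the two smallest probabilities
        p2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, counter, (t1, t2)))
        counter += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")    # left edge labeled 0
            walk(tree[1], prefix + "1")    # right edge labeled 1
        else:
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

probs = {"a": 0.12, "b": 0.40, "c": 0.15, "d": 0.08, "e": 0.25}
codes = huffman(probs)
avg = sum(probs[s] * len(codes[s]) for s in probs)
print(round(avg, 2))  # 2.15
```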

Evaluation of Arithmetic Expressions

Computers often must evaluate arithmetic expressions like the one shown in (1), built from operands (variables and numbers) and operators such as +, -, * and /. How can we evaluate such expressions efficiently? It turns out that a tree representation may help in transforming such arithmetic expressions into others that are easier for computers to evaluate.

Let us start with a computer representation. We restrict our discussion to binary operators (i.e., operators that need two operands, like + or *). Then we build a binary tree such that:

1. Every leaf is labeled by an operand.
2. Every interior node is labeled by an operator.

Suppose a node is labeled by a binary operator op, the left child of this node represents an expression E1, and the right child represents an expression E2. Then the node labeled by op represents the expression (E1 op E2). Such a tree is shown in Figure 6.

Let us have a closer look at the expression tree shown in Figure 6. Suppose someone gives you such a tree. Can you quickly find the arithmetic expression? Indeed, you can! If we traverse the tree in this figure inorder, we recover the original expression. The problem with this approach is that we need to keep parentheses around each internal expression. In order to avoid them, we change the infix notation to

Figure 5: The construction of a Huffman tree and a Huffman code. (The figure shows the successive merges: d (0.08) and a (0.12) into a node of probability 0.20, then c (0.15) with it into 0.35, then e (0.25) with it into 0.60, and finally b (0.40) with it into the root of probability 1.00.)


Figure 6: An expression tree.

either Polish notation (also called prefix notation) or reverse Polish notation (also called postfix notation), as discussed below.

Let us first introduce some notation. We write op for an operator (one of +, -, *, /, ^, where ^ denotes the power operation), while E1 and E2 are expressions. The standard way of representing arithmetic expressions shown above is called the infix notation; it can be written symbolically as E1 op E2. In the prefix notation (or Polish notation) we shall write

op E1 E2,

while in the postfix notation (or reverse Polish notation) we write

E1 E2 op.

Observe that parentheses are not necessary. The expression shown in (1) can likewise be written down in its postfix and prefix notations.

How can we generate the prefix and postfix notations from the infix notation? Actually, this is easy. We first build the expression tree, and then traverse it in preorder to get the prefix notation, and in postorder to find the postfix notation. Indeed, consider the expression tree shown in Figure 6: the postorder traversal gives exactly its postfix notation, and the preorder traversal leads us to its prefix notation.

Exercise 8B: Write the expression from (1) in the postfix and prefix notations.
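The correspondence between traversals and notations can be sketched in code. The example below (Python; the expression (a + b) * c is a hypothetical illustration chosen here, not expression (1) of the notes) produces all three notations from one expression tree.

```python
class Expr:
    def __init__(self, op, left=None, right=None):
        self.op, self.left, self.right = op, left, right
    def is_leaf(self):
        return self.left is None and self.right is None

def infix(t):
    # inorder traversal; parentheses are required around each subexpression
    return t.op if t.is_leaf() else f"({infix(t.left)} {t.op} {infix(t.right)})"

def prefix(t):
    # preorder traversal; no parentheses needed
    return t.op if t.is_leaf() else f"{t.op} {prefix(t.left)} {prefix(t.right)}"

def postfix(t):
    # postorder traversal; no parentheses needed
    return t.op if t.is_leaf() else f"{postfix(t.left)} {postfix(t.right)} {t.op}"

# illustrative tree for (a + b) * c
t = Expr("*", Expr("+", Expr("a"), Expr("b")), Expr("c"))
print(infix(t))    # ((a + b) * c)
print(prefix(t))   # * + a b c
print(postfix(t))  # a b + c *
```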

Theme 4: Graphs
In this section we present basic definitions and notation for graphs. As we mentioned in the Overview, graphs are applied to solve various problems in computer science and engineering, such as finding the shortest path between cities, building reliable computer networks, etc. We postpone an in-depth discussion of graphs to IT 320.

A graph is a set of points (called vertices) together with a set of lines (called edges). There is at most one edge between any two vertices. More formally, a graph G = (V, E) consists of a pair of sets, where V is a set of vertices and E is the set of edges.

Example 6: In Figure 7 we present some graphs that will be used to illustrate our definitions. The first graph consists of a set of vertices and a set of edges shown in the figure; the second graph shown there turns out to be a tree.

Now we list a number of useful notations associated with graphs.

Two vertices are said to be adjacent if there is an edge between them; an edge {u, v} is said to be incident to the vertices u and v. For example, in Figure 7 the endpoints of any drawn edge are adjacent, and that edge is incident to both of them.

A multigraph has more than one edge between some vertices. Two edges between the same two vertices are said to be parallel edges. A pseudograph is a multigraph with loops. An edge is a loop if its start and end vertices are the same vertex.

A directed graph or digraph has directed edges, that is, ordered pairs of vertices. Each edge (u, v) has a start vertex u and an end vertex v. For example, the last graph in Figure 7 is a digraph, with its directed edges drawn as arrows.
Figure 7: Examples of graphs.


A labeled graph is a graph with a one-to-one and onto mapping of vertices to a set of unique labels, e.g., names of cities.

Two graphs G and H are isomorphic, written G = H, iff there exists a one-to-one correspondence between their vertex sets which preserves adjacency.

A subgraph of G is a graph having all of its vertices and edges in G; G is then a supergraph of such a subgraph. A spanning subgraph is a subgraph containing all vertices of G. For example, in Figure 7 one obtains a spanning subgraph of the first graph by deleting some of its edges but none of its vertices.

A walk in a graph is an alternating sequence of vertices and edges in which every edge joins the vertices immediately preceding and following it. The length of a walk is the number of edges on it. A walk is a trail if all of its edges are distinct, a path if all of its vertices are distinct, and a cycle if it is a path except that it starts and ends at the same vertex.

A graph is connected if there is a path between any two vertices of the graph. A vertex is isolated if there is no edge having it as one of its endpoints. Thus a connected graph has no isolated vertices. The reader may check which of the graphs in Figure 7 are connected.

The girth of a graph G, denoted g(G), is the length of a shortest cycle of G. The circumference of a graph G, denoted c(G), is the length of a longest cycle, and is undefined if no cycle of G exists.

A graph is called planar if it can be drawn in the plane so that edges intersect only at points corresponding to nodes of the graph.

Let d(u, v) be the length of a shortest path between vertices u and v, if any exists. Then for all vertices u, v and w:

1. d(u, v) >= 0;
2. d(u, v) = 0 iff u = v;
3. d(u, v) = d(v, u);
4. d(u, w) <= d(u, v) + d(v, w) (triangle inequality).

Thus d defines a distance on graphs.

The degree of a vertex v, denoted deg(v), is the number of edges incident to v. Since every edge is incident to exactly two vertices,

deg(v1) + deg(v2) + ... + deg(vn) = 2|E|,

that is, the sum of vertex degrees is equal to twice the number of edges. The reader should verify this on Figure 7.
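The degree-sum identity can be verified on any edge list. A small sketch in Python (the graph here is a hypothetical example, not one of the graphs in Figure 7):

```python
# an undirected graph given as a set of edges (unordered pairs)
edges = {("u", "v"), ("v", "w"), ("w", "u"), ("w", "x")}

def degrees(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1   # each edge is incident to
        deg[v] = deg.get(v, 0) + 1   # exactly two vertices
    return deg

deg = degrees(edges)
assert sum(deg.values()) == 2 * len(edges)   # sum of degrees = 2|E|
print(sorted(deg.items()))  # [('u', 2), ('v', 2), ('w', 3), ('x', 1)]
```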

A graph G is regular of degree r if every vertex has degree r. One of the graphs in Figure 7 is regular.

A complete graph Kn on n vertices has an edge between every pair of distinct vertices. Thus a complete graph Kn is regular of degree n - 1, and has n(n - 1)/2 edges. Observe that K3 is a triangle.

A bipartite graph, also referred to as bicolorable or a bigraph, is a graph whose vertex set can be separated into two disjoint sets V1 and V2 such that there is no edge between two vertices of the same set: for each edge (u, v), either u is in V1 and v is in V2, or u is in V2 and v is in V1. A complete bipartite graph Km,n is a bigraph with |V1| = m and |V2| = n that contains all edges between V1 and V2.

A free tree (unrooted tree) is a connected graph with no cycles. Equivalently, each of the following characterizes a free tree G with n vertices:

1. G is connected, but if any edge is deleted the resulting graph is no longer connected.
2. If u and v are distinct vertices of G, then there is exactly one simple path from u to v.
3. G has no cycles and has n - 1 edges.
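Characterization 3 suggests a cheap test for free trees: check that the graph has exactly n - 1 edges and is connected. A sketch in Python using breadth-first search (the example graphs are hypothetical):

```python
from collections import deque

def is_free_tree(vertices, edges):
    # condition 3: a free tree on n vertices has exactly n - 1 edges ...
    if len(edges) != len(vertices) - 1:
        return False
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # ... and is connected: BFS from any vertex must reach all of them
    start = next(iter(vertices))
    seen, queue = {start}, deque([start])
    while queue:
        for w in adj[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(vertices)

assert is_free_tree({1, 2, 3, 4}, [(1, 2), (1, 3), (3, 4)])   # a tree
assert not is_free_tree({1, 2, 3}, [(1, 2), (2, 3), (3, 1)])  # a cycle
assert not is_free_tree({1, 2, 3, 4}, [(1, 2), (3, 4)])       # too few edges
```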

A graph is acyclic if it contains no cycles. In a digraph the out-degree of a vertex v, denoted outdeg(v), is the number of edges with their initial vertex being v. Similarly, the in-degree of a vertex v, denoted indeg(v), is the number of edges with their final vertex being v. Clearly, for any digraph the sum of the in-degrees over all vertices equals the sum of the out-degrees over all vertices, and both are equal to the number of edges (cf. the digraph in Figure 7). An acyclic digraph contains no directed cycles; it has at least one point of out-degree zero and at least one point of in-degree zero.

A directed graph is said to be strongly connected if for any two vertices u and v there is an oriented path from u to v and an oriented path from v to u. The digraph in Figure 7 is not strongly connected, since there is a pair of its vertices with no oriented path between them.

If a graph contains a walk that traverses each edge exactly once, goes through all the vertices, and ends at the starting point, then the graph is said to be Eulerian, that is, it contains an Eulerian trail. None of the graphs in Figure 7 has an Eulerian trail.

If there is a path through all vertices that visits every vertex exactly once, then it is called a Hamiltonian path. If it ends at the starting point, then we have a Hamiltonian cycle.

The square G^2 of a graph G is the graph on the same vertex set which contains an edge (u, v) whenever there is a path of length at most two between u and v in G. The powers G^3, G^4, ... are defined similarly: G^k contains an edge between any two vertices that are connected in G by a path of length smaller than or equal to k. The graph that contains an edge between any two vertices connected by a path of any length in G is called the transitive closure of G.
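The transitive closure can be computed by Warshall's algorithm: for each intermediate vertex k, add the edge (u, v) whenever u reaches k and k reaches v. A sketch in Python on a boolean adjacency matrix (the digraph is a hypothetical example):

```python
def transitive_closure(adj):
    """adj: n x n boolean adjacency matrix of a digraph.
    Returns the reachability matrix (Warshall's algorithm, O(n^3))."""
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):
        for u in range(n):
            if reach[u][k]:
                for v in range(n):
                    if reach[k][v]:
                        reach[u][v] = True   # a path u -> k -> v exists
    return reach

# hypothetical digraph: 0 -> 1 -> 2, and vertex 3 isolated
adj = [
    [False, True,  False, False],
    [False, False, True,  False],
    [False, False, False, False],
    [False, False, False, False],
]
closure = transitive_closure(adj)
assert closure[0][2]        # 0 reaches 2 through 1
assert not closure[2][0]    # but not the other way around
```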