
1. What steps should one take when solving a problem using a computer?

First, construct an exact model in terms of which we can express allowed solutions. Any branch of mathematics or science can be used to model the problem domain. Once we have a mathematical model, we can specify a solution in terms of that model.

2. Explain some issues when dealing with the representation of real-world objects in a computer program.
- How real-world objects are modeled as mathematical entities;
- The set of operations that we define over these mathematical entities;
- How these entities are stored in a computer's memory (fields in records, and how these records are arranged in memory: arrays or linked structures);
- The algorithms used to perform these operations.

3. Explain the notions: model of computation, computational problem, problem instance, algorithm, and program.
Model of computation: an abstract sequential computer, called a Random Access Machine (RAM), with a uniform cost model. Computational problem: a specification, in general terms, of inputs and outputs and the desired input/output relationship. Problem instance: a particular collection of inputs for a given problem. Algorithm: a method of solving a problem which can be implemented on a computer. Program: a particular implementation of some algorithm.

4. Show the algorithm design algorithm.
AlgorithmDesign(informal problem)
1 formalize problem (mathematically) [Step 0]
2 repeat
3   devise algorithm [Step 1]
4   analyze correctness [Step 2]
5   analyze efficiency [Step 3]
6   refine
7 until algorithm good enough
8 return algorithm

5. What might be the resources considered in algorithm analysis?
Running time, memory usage, number of accesses to secondary storage, number of basic arithmetic operations, network traffic.

6. Explain the big-oh class of growth.
O(g(n)) is the set of functions f such that f(n) <= c*g(n) for some constant c > 0 and all n > N (g is an upper bound for f). O(g) is the set of functions that grow no faster than g; g(n) describes the worst-case behavior of an algorithm that is O(g). Examples: n lg n + n = O(n^2); lg^k n = O(n) for every fixed k in N.

7. Explain the big-omega class of growth.
Ω(g(n)) is the set of functions f such that f(n) >= c*g(n) for some constant c > 0 and all n > N: the class of functions that grow at least as fast as g(n). g(n) describes the best-case behavior of an algorithm that is Ω(g).

8. Explain the big-theta class of growth.
Θ(g(n)) is the class of functions f(n) that grow at the same rate as g(n), i.e. f is both O(g) and Ω(g).
Example: n^2/2 - 3n = Θ(n^2). We have to determine c1 > 0, c2 > 0 and n0 in N such that c2*n^2 <= n^2/2 - 3n <= c1*n^2 for every n >= n0. Dividing by n^2 yields c2 <= 1/2 - 3/n <= c1, which is satisfied for c2 = 1/14, c1 = 1/2, n0 = 7.
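Stated compactly, the three classes from questions 6-8 are, in the usual notation (with the constants and the threshold n0 quantified explicitly):

```latex
% Formal definitions behind questions 6-8 (f, g : N -> nonnegative reals)
\begin{align*}
O(g)      &= \{\, f \mid \exists\, c > 0,\ \exists\, n_0 :\; f(n) \le c\, g(n) \text{ for all } n \ge n_0 \,\} \\
\Omega(g) &= \{\, f \mid \exists\, c > 0,\ \exists\, n_0 :\; f(n) \ge c\, g(n) \text{ for all } n \ge n_0 \,\} \\
\Theta(g) &= O(g) \cap \Omega(g)
\end{align*}
```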

9. What are the steps in mathematical analysis of nonrecursive algorithms?
Decide on a parameter n indicating input size. Identify the algorithm's basic operation. Determine worst, average and best case for input of size n. Set up a summation for C(n) reflecting the algorithm's loop structure. Simplify the summation using standard formulas.

10. What are the steps in mathematical analysis of recursive algorithms?
Decide on a parameter n indicating input size. Identify the algorithm's basic operation. Determine worst, average and best case for input of size n. Set up a recurrence relation and initial condition(s) for C(n), the number of times the basic operation will be executed for an input of size n. Solve the recurrence to obtain a closed form, or estimate the order of magnitude of the solution.

11. From lowest to highest, what is the correct order of the complexities O(n^2), O(3n), O(2n), O(n^2 lg n), O(1), O(n lg n), O(n^3), O(n!), O(lg n), O(n)?
O(1), O(lg n), O(n), O(2n), O(3n), O(n lg n), O(n^2), O(n^2 lg n), O(n^3), O(n!). (Here 2n and 3n are linear functions, so O(n), O(2n) and O(3n) are in fact the same class.)

12. What are the complexities of T1(n) = 3n lg n + lg n; T2(n) = 2^n + 3n + 25; and T3(n, k) = k + n, where k <= n? From lowest to highest, what is the correct order of the resulting complexities?
T1: O(n lg n); T2: O(2^n); T3: O(n). Order: T3, T1, T2.

13. Suppose we have written a procedure to add m square matrices of size n x n. If adding two square matrices requires O(n^2) running time, what is the complexity of this procedure in terms of m and n?
O(m*n^2): the procedure performs m - 1 matrix additions, each costing O(n^2).

14. Suppose we have two algorithms to solve the same problem. One runs in time T1(n) = 400n, whereas the other runs in time T2(n) = n^2. What are the complexities of these two algorithms? For what values of n might we consider using the algorithm with the higher complexity?
T1: O(n), T2: O(n^2). For n < 400, since there n^2 < 400n.

15. How do we account for calls such as memcpy and malloc in analyzing real code?
Although these calls often depend on the size of the data processed by an algorithm, they are really more of an implementation detail than part of the algorithm itself. Usually calls such as memcpy and malloc are regarded as executing in a constant amount of time. Generally, they can be expected to execute very efficiently at the machine level, regardless of how much data they are copying or allocating. Of course, their exact efficiency may depend on the computer on which they execute, as well as other factors (particularly in the case of malloc, which depends on the state of the system at the moment it is called).

16. Explain the stack ADT (abstract data type).
LIFO principle: elements can be inserted at any time, but only the last one inserted can be removed. A stack supports two main operations: push(x) inserts element x onto the top of the stack (input: element; output: none); pop() removes the top stack element and returns it, and if the stack is empty an error occurs (input: none; output: element). Other support operations: size() returns the number of elements in the stack; isEmpty() returns a Boolean indicating whether the stack is empty; top() returns the top element of the stack without removing it.
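A minimal sketch of the stack ADT from question 16, as an array-backed stack of fixed capacity in C; the type and function names (Stack, stack_push, stack_pop) are illustrative, not from any particular library:

```c
#include <stdio.h>
#include <stdlib.h>

#define STACK_CAPACITY 12

typedef struct {
    int data[STACK_CAPACITY];
    int size;                 /* number of elements currently stored */
} Stack;

/* push(x): insert x on top of the stack; returns 0 on success, -1 if full */
int stack_push(Stack *s, int x) {
    if (s->size == STACK_CAPACITY) return -1;
    s->data[s->size++] = x;
    return 0;
}

/* pop(): remove and return the top element; error if the stack is empty */
int stack_pop(Stack *s) {
    if (s->size == 0) {
        fprintf(stderr, "pop from empty stack\n");
        exit(EXIT_FAILURE);
    }
    return s->data[--s->size];
}

/* top(): return, but do not remove, the top element */
int stack_top(const Stack *s) { return s->data[s->size - 1]; }

int main(void) {
    Stack s = { .size = 0 };
    stack_push(&s, 32);
    stack_push(&s, 11);
    printf("%d\n", stack_pop(&s)); /* prints 11: LIFO order */
    return 0;
}
```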

17. Explain the list ADT.
The list supports three main operations: insert(x) inserts element x at the front of the list; delete(x) removes an element from the front of the list (error if the list is empty); search(k) searches the list for an element with key k. Other support methods: size() returns the number of elements in the list; isEmpty() returns a Boolean value indicating whether the list is empty; first()/last() return, but do not remove, the first/last element of the list; prev(x)/next(x) return the element preceding/succeeding x in the list.

18. There are occasions when arrays have advantages over linked lists. When are arrays preferable?
Linked lists have advantages over arrays when we expect to insert and remove elements frequently. However, arrays offer some advantages when we expect the number of random accesses to overshadow the number of insertions and deletions. Arrays are strong in this case because their elements are arranged contiguously in memory, which allows any element to be accessed in O(1) time by its index. Arrays are also advantageous for storage because they do not require additional pointers to keep their elements linked together.

19. Explain the queue ADT.
FIFO principle: elements may be inserted at any time, but only the element which has been in the queue the longest may be removed. Two fundamental operations: enqueue(o) inserts element o at the rear of the queue; dequeue() removes the element at the front of the queue and returns it. Other support methods: size(), isEmpty(), front().

20. Sometimes we need to remove an element from a queue out of sequence (i.e., from somewhere other than the head). What would be the sequence of queue operations to do this if, in a queue of five requests req1, ..., req5, we wish to process req1, req3 and req5 immediately while leaving req2 and req4 in the queue in order? What would be the sequence of linked list operations to do this if we morph the queue into a linked list?
req1 = dequeue(), process req1; req2 = dequeue(), enqueue(req2), i.e. remove req2 from the front and reinsert it at the rear; req3 = dequeue(), process req3; req4 = dequeue(), enqueue(req4); req5 = dequeue(), process req5. The queue is left holding req2 and req4, in this order. With a linked list we could instead walk to req1, req3 and req5 and remove each of them in place, leaving req2 and req4 untouched in their original order.

21. Recall that each of the linked list data structures presented at the laboratory has a size member. The SLList and DLList data structures also contain a first and last member. Why are each of these members included?
By updating these members dynamically as elements are inserted and removed, we avoid the O(n) runtime complexity of traversing the list each time its last element or size is requested. By maintaining these members, fetching a list's first or last element or its size becomes an O(1) operation, without adding any complexity to the operations for inserting and removing elements, as the sketch below shows.
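A sketch of the point made in question 21: by maintaining first, last and size on every insertion and removal, the corresponding queries cost O(1). The SLList shown here is a hypothetical stand-in for the laboratory structure, not its actual definition; a caller initializes first = last = NULL and size = 0:

```c
#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *next;
} Node;

typedef struct {
    Node *first;  /* head of the list */
    Node *last;   /* tail of the list */
    int size;     /* element count, maintained incrementally */
} SLList;

/* Insert at the front: O(1), and first/last/size stay current. */
void sllist_insert_front(SLList *l, int value) {
    Node *n = malloc(sizeof *n);
    n->value = value;
    n->next = l->first;
    l->first = n;
    if (l->last == NULL) l->last = n;  /* list was empty */
    l->size++;
}

/* Because the members are maintained on every insert and remove,
 * these queries are O(1) instead of an O(n) traversal. */
int sllist_size(const SLList *l)    { return l->size; }
Node *sllist_first(const SLList *l) { return l->first; }
Node *sllist_last(const SLList *l)  { return l->last; }
```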

22. When would you use a doubly-linked list instead of a singly-linked one? Why?
We should use a doubly-linked list when we need access to both the predecessor and the successor of an element. Having both links makes it easier to traverse the list in either direction and to operate on an element (for example, remove it) given only a pointer to it.

23. Show the result of inserting the numbers 32, 11, 22, 15, 17, 2, -3 in a doubly linked list with a sentinel.

24. Show the result of inserting the numbers 32, 11, 22, 15, 17, 2, -3 in a circular queue of capacity 9.
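For question 24, a minimal sketch of a circular (ring-buffer) queue in C: the front index and the computed rear index wrap around modulo the capacity. Names are illustrative; a caller initializes head = size = 0:

```c
#define QUEUE_CAPACITY 9

typedef struct {
    int data[QUEUE_CAPACITY];
    int head;   /* index of the front element */
    int size;   /* number of stored elements */
} CircularQueue;

/* enqueue(o): insert at the rear; returns -1 when the buffer is full */
int cq_enqueue(CircularQueue *q, int x) {
    if (q->size == QUEUE_CAPACITY) return -1;
    q->data[(q->head + q->size) % QUEUE_CAPACITY] = x;  /* rear slot */
    q->size++;
    return 0;
}

/* dequeue(): remove and return the front element (caller ensures size > 0) */
int cq_dequeue(CircularQueue *q) {
    int x = q->data[q->head];
    q->head = (q->head + 1) % QUEUE_CAPACITY;
    q->size--;
    return x;
}
```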

25. Show the result of inserting the numbers 32, 11, 22, 15, 17, 2, -3 in a stack of capacity 12.

26. Determine the value returned by the function (depending on n), and the worst-case running time, in big-Oh notation, of the following program.

27. Determine the value returned by the function (depending on n), and the worst-case running time, in big-Oh notation, of the following program.

28. Define the term "rooted tree" both formally and informally.
Informally: a collection of elements called nodes, one of which is distinguished as the root, together with a relation (parenthood) that imposes a hierarchical structure on the nodes. Formally: a single node by itself is a tree, and that node is also the root of the tree. Assume n is a node and T1, T2, ..., Tk are trees with roots n1, n2, ..., nk; we construct a new tree by making n the parent of the nodes n1, n2, ..., nk.

29. Define the terms ancestor, descendant, parent, child, sibling as used with rooted trees.
Ancestor: every node on the path from a node p to the root is called an ancestor of p. Descendant: the node p is a descendant of all its ancestors. Parent: node p is a parent of node c if p and c are adjacent and p is on a higher level (closer to the root); node c is then a child of p. Siblings: nodes having the same parent.

30. Define the terms path, height, depth, level as used with rooted trees.
Path: a sequence <n1, n2, ..., nk> such that ni is the parent of n(i+1) for 1 <= i < k; the length of a path is its number of nodes minus 1. The height of a vertex v is the length of the longest path from v to a leaf. The depth of a vertex v is the length of the path from the root to v. The level of a vertex v is height(T) - depth(v).

31. Show the preorder traversal of the tree given in Fig. 1.
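A sketch of preorder and postorder for a general tree such as the one in Fig. 1, using the leftmost child - right sibling node layout that question 38 asks about (field names are illustrative):

```c
#include <stdio.h>

/* Leftmost child - right sibling representation: each node stores its
 * first child and the next sibling to its right. */
typedef struct TreeNode {
    int label;
    struct TreeNode *leftmost_child;
    struct TreeNode *right_sibling;
} TreeNode;

/* Preorder: visit the node, then recursively traverse each subtree,
 * left to right. */
void preorder(const TreeNode *t) {
    if (t == NULL) return;
    printf("%d ", t->label);
    for (const TreeNode *c = t->leftmost_child; c != NULL; c = c->right_sibling)
        preorder(c);
}

/* Postorder: traverse the subtrees left to right, then visit the node. */
void postorder(const TreeNode *t) {
    if (t == NULL) return;
    for (const TreeNode *c = t->leftmost_child; c != NULL; c = c->right_sibling)
        postorder(c);
    printf("%d ", t->label);
}
```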

Fig. 1. Example tree.

32. Show the postorder traversal of the tree given in Fig. 1.

33. Show the inorder traversal of the tree given in Fig. 1.

34. Construct the tree whose preorder traversal is 1, 2, 5, 3, 6, 10, 7, 11, 12, 4, 8, 9 and whose inorder traversal is 5, 2, 1, 10, 6, 3, 11, 7, 12, 8, 4, 9.

35. Construct the tree whose postorder traversal is 5, 2, 10, 6, 11, 12, 7, 3, 8, 9, 4, 1 and whose inorder traversal is 5, 2, 1, 10, 6, 3, 11, 7, 12, 8, 4, 9.

36. Show the vector contents for an implementation of the tree in Fig. 1.

37. Show the contents of the data structures (in a sketch) for an implementation of the tree in Fig. 1 using lists of children.

38. Show the contents of the data structures (in a sketch) for an implementation of the tree in Fig. 1 using the leftmost child - right sibling method.

39. Show the binary search tree which results after inserting the nodes with keys 5, 2, 10, 6, 11, 12, 7, 3, 8, 9, 4, 1, in that order, in an empty tree.

40. Show the binary search tree resulting after deleting keys 10, 5 and 6, in that order, from the binary search tree of Fig. 2.

41. How do we find the smallest node in a binary search tree? What is the runtime complexity to do this in both an unbalanced and a balanced binary search tree, in the worst case? How do we find the largest node in a binary search tree? What are the runtime complexities for this?
Smallest node: the leftmost node in the tree, reached by following left-child pointers from the root. The cost is O(h), where h is the height of the tree: O(log n) in the worst case for a balanced tree, O(n) in the worst case for an unbalanced one. Largest node: the rightmost node, found symmetrically by following right-child pointers, with the same complexities.

42. Compare the performance of the operations insert (add), delete and find for arrays, doubly-linked lists and BSTs.
Arrays are simple and fast, but inflexible; lists and trees are simple and flexible. Add: arrays O(1), or O(n) if the array must stay sorted; lists O(1); BSTs O(log n) when balanced. Delete: arrays O(n); lists O(1) given a pointer to the node, or O(n) if a specific node must first be found; BSTs O(log n) when balanced. Find: arrays O(n), or O(log n) if the array is sorted; lists O(n); BSTs O(log n) when balanced.
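A minimal C sketch for questions 39-41: BST insertion plus the leftmost/rightmost walks that find the smallest and largest keys. It assumes distinct integer keys; names are illustrative:

```c
#include <stdlib.h>

typedef struct BSTNode {
    int key;
    struct BSTNode *left, *right;
} BSTNode;

/* Insert key, keeping the BST property: smaller keys go left, larger right. */
BSTNode *bst_insert(BSTNode *root, int key) {
    if (root == NULL) {
        BSTNode *n = malloc(sizeof *n);
        n->key = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key < root->key) root->left = bst_insert(root->left, key);
    else                 root->right = bst_insert(root->right, key);
    return root;
}

/* Smallest key: follow left children to the leftmost node.
 * O(h) steps: O(log n) in a balanced tree, O(n) in the worst case. */
BSTNode *bst_min(BSTNode *root) {
    while (root != NULL && root->left != NULL) root = root->left;
    return root;
}

/* Largest key: symmetric, follow right children. */
BSTNode *bst_max(BSTNode *root) {
    while (root != NULL && root->right != NULL) root = root->right;
    return root;
}
```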

43. What is the purpose of static BSTs and what criteria are used to build them?
They reduce search time when the set of keys is fixed and the access frequencies are known in advance: the tree is built so that the more frequently accessed keys are kept closer to the root.

44. If we had two functions, bitree_rem_left (for removing the left subtree) and bitree_rem_right (for removing the right subtree), why should a postorder traversal be used to remove the appropriate subtree? Could a preorder or inorder traversal have been used instead?
In a postorder traversal the removal starts with the leaves: the leftmost leaf is removed, then the rightmost, and only then their parent. A preorder or inorder traversal could not be used, because after removing a node it would no longer be possible to reach its descendants in order to remove them.

45. When might we choose to make use of a tree with a relatively large branching factor, instead of a binary tree, for example?
A large branching factor makes the tree shallower, so far fewer nodes lie on any root-to-leaf path. This pays off when accessing a node is expensive, for example when the nodes reside on secondary storage, because each search then needs far fewer node accesses.

46. In a binary search tree, the successor of some node x is the next largest node after x. For example, in a binary search tree containing the keys 24, 39, 41, 55, 87, 92, the successor of 41 is 55. How do we find the successor of a node in a binary search tree? What is the runtime complexity of this operation?
If x has a right subtree, the successor is the smallest (leftmost) node of that subtree. Otherwise it is the lowest ancestor of x whose left subtree contains x; searching down from the root, this is the last node at which we turned left. Either way we follow at most one root-to-leaf path, so the cost is O(h): O(log n) in a balanced tree, O(n) in the worst case for an unbalanced one.

47. A multiset is a type of set that allows members to occur more than once. How would the runtime complexities of inserting and removing members with a multiset compare with the operations for inserting and removing members from a set?
If both are stored the same way, e.g. as sorted sequences, the asymptotic complexities are the same. Insertion into a multiset can even be slightly cheaper, since there is no need to check whether the member is already present.

48. The symmetric difference of two sets consists of those members that are in either of the two sets, but not both. The notation for the symmetric difference of two sets S1 and S2 is S1 Δ S2. How could we implement a symmetric difference operation using the set operations union, intersection and difference? Could this operation be implemented more efficiently some other way?
S = (S1 ∪ S2) \ (S1 ∩ S2), or equivalently S = (S1 \ S2) ∪ (S2 \ S1). It can be done more efficiently in a single pass: for sets kept as sorted sequences, one simultaneous scan of S1 and S2 emits exactly the members found in one set but not the other, avoiding the intermediate sets built by the three-operation formulation (see the sketch below).
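A sketch of the single-pass alternative mentioned in question 48, assuming the sets are stored as sorted int arrays:

```c
/* Symmetric difference of two sorted int arrays in a single pass.
 * Elements present in exactly one input are appended to out; the length
 * of out is returned. out must have room for n1 + n2 elements.
 * Running time: O(n1 + n2). */
int symmetric_difference(const int *s1, int n1,
                         const int *s2, int n2, int *out) {
    int i = 0, j = 0, k = 0;
    while (i < n1 && j < n2) {
        if (s1[i] < s2[j])      out[k++] = s1[i++];
        else if (s2[j] < s1[i]) out[k++] = s2[j++];
        else { i++; j++; }      /* member of both sets: skip it */
    }
    while (i < n1) out[k++] = s1[i++];  /* leftovers are in one set only */
    while (j < n2) out[k++] = s2[j++];
    return k;
}
```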

49. Sketch the algorithm for HashInsert in a hash table using open addressing.
We successively examine the hash table looking for an unoccupied slot; the sequence of slots probed depends on the key we wish to insert (j = h(k, i) is the slot probed on trial i).
HashInsert(B, k)
1 i := 0
2 repeat
3   j := h(k, i)
4   if B[j] = NIL
5     then B[j] := k
6          return j
7     else i := i + 1
8 until i = m
9 error "hash table overflow"

50. Why are hash tables good for random access but not sequential access? For example, in a database system in which records are to be accessed in a sequential fashion, what is the problem with hashing?
Because the hash function disperses the keys in an apparently random way, records that are consecutive in key order end up scattered across the table. There is therefore no efficient way to visit the records in sequential (sorted) order.

51. What is the worst-case performance of searching for an element in a chained hash table? How do we ensure that this case will not occur?
O(n), when the table degenerates into a single linked list because all the keys collide. We should choose hash and compression functions that spread the keys uniformly, minimizing the number of collisions.

52. What is the worst-case performance of searching for an element in an open-addressed hash table? How do we ensure that this case will not occur?
O(n), when all the keys collide and a probe sequence must scan the whole table. We avoid it by using a good hash function and rehashing strategy, and by keeping the load factor well below 1.

53. Explain the generation of hash codes using memory addresses, integer cast and component sum.
Memory address: we reinterpret the memory address of the key object as an integer. Integer cast: we reinterpret the bits of the key as an integer; suitable for keys whose length is at most the number of bits of the integer type. Component sum: we partition the bits of the key into components of fixed length (e.g. 16 or 32 bits) and sum the components, ignoring overflows; suitable for numeric keys of fixed length greater than or equal to the number of bits of the integer type.

54. Explain the generation of hash codes using polynomial accumulation.
We partition the bits of the key into a sequence of components of fixed length (e.g. 8, 16 or 32 bits), a0, a1, ..., a(n-1), and evaluate the polynomial p(x) = a0 + a1*x + a2*x^2 + ... + a(n-1)*x^(n-1) at a fixed value x, ignoring overflows. Suitable for strings.

55. How can one implement a compression function using the MAD technique?
MAD = Multiply, Add and Divide: h(y) = (a*y + b) mod m, where a and b are nonnegative integers such that a mod m != 0; otherwise every integer would map to the same value b.

56. Explain the quadratic probing rehashing strategy.
h(k, i) = (h'(k) + c1*i + c2*i^2) mod m, where h' is an auxiliary hash function, i (0 <= i <= m-1) is the trial number, and c1 and c2 are auxiliary constants with c2 != 0. The first probe checks B[h'(k)]; subsequent probe positions depend quadratically on i. It suffers from a secondary clustering effect.

57. Explain the double hashing rehashing strategy.
h(k, i) = (h1(k) + i*h2(k)) mod m, where h1 and h2 are auxiliary hash functions. The first probe checks position B[h1(k)]; successive positions are h2(k) slots (mod m) beyond the previous one, so the probe sequence depends in two ways on the key k. h2(k) and m must be relativelyly prime to allow the whole table to be searched. To ensure this condition, either take m = 2^k and make h2(k) always generate an odd number, or take m prime and make h2(k) return a positive integer smaller than m. Example: h1(k) = k mod m; h2(k) = 1 + (k mod m'), with m' slightly smaller than m.
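A sketch combining the HashInsert loop of question 49 with the double-hashing probe of question 57, using the hash functions of exercise 59 below (h1(x) = x mod 16, h2(x) = 1 + (x mod 13)); the value -1 stands in for NIL. Note that with m = 16 a probe sequence only visits every slot when h2(k) is odd; the loop bound guards termination regardless:

```c
#include <stdio.h>

#define M 16          /* table size N */
#define M_PRIME 13    /* N' for the secondary hash */
#define NIL (-1)

static int h1(int k) { return k % M; }
static int h2(int k) { return 1 + (k % M_PRIME); }

/* h(k, i) = (h1(k) + i*h2(k)) mod m: successive probes are h2(k) apart. */
int hash_insert(int table[M], int k) {
    for (int i = 0; i < M; i++) {
        int j = (h1(k) + i * h2(k)) % M;
        if (table[j] == NIL) {
            table[j] = k;
            return j;
        }
    }
    return -1;  /* hash table overflow */
}

int main(void) {
    int table[M], keys[] = {5, 2, 10, 6, 11, 12, 7, 3, 8, 9, 4, 1};
    for (int j = 0; j < M; j++) table[j] = NIL;
    for (int i = 0; i < 12; i++) hash_insert(table, keys[i]);
    for (int j = 0; j < M; j++) printf("%2d: %d\n", j, table[j]);
    return 0;
}
```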

58. Show the hash table which results after inserting the values 5, 2, 10, 6, 11, 12, 7, 3, 8, 9, 4, 1 in a chained hash table with N = 5 and hash function h(x) = x mod N.

59. Show the hash table which results after inserting the values 5, 2, 10, 6, 11, 12, 7, 3, 8, 9, 4, 1 in an open-addressing hash table with N = 16 and N' = 13, using double hashing. The hash functions are h1(x) = x mod N and h2(x) = 1 + (x mod N').

60. What are the operations for the priority queue ADT?
insert and deleteMin, plus createEmpty for the initialization of the data structure. Additional support operations: min() returns, but does not remove, an entry with the smallest key; size(); isEmpty().

61. Compare the performance of priority queues using sorted and unsorted lists.
Unsorted list: insert takes O(1) time (we can insert the item at the beginning or end of the list); deleteMin and min take O(n) time (we have to scan the entire list to find the smallest key). Sorted list: insert takes O(n) time (we have to find the place where the item belongs); deleteMin and min take O(1) time (the smallest item is at the beginning of the list).

62. What is a partially ordered tree?
A partially ordered tree is a binary tree whose leaves appear only on the lowest one or two levels; at the lowest level, where some leaves may be missing, all missing leaves are to the right of the leaves that are present. The priority of each node v is no greater than the priority of the children of v.

63. Show the result of inserting the value 14 in the POT of Fig. 2.
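For questions 60-63, a minimal array-based min-heap insert: the new key is appended at the first free leaf position and sifted up until the partial order property (parent <= children) holds again. Names and capacity are illustrative:

```c
#define HEAP_CAPACITY 64

typedef struct {
    int keys[HEAP_CAPACITY];
    int size;
} MinHeap;

/* insert: append at the first free leaf position, then bubble the key up
 * while it is smaller than its parent (parent of slot i is (i-1)/2).
 * O(log n), since the tree is complete and has height floor(lg n). */
void heap_insert(MinHeap *h, int key) {
    if (h->size == HEAP_CAPACITY) return;  /* full; a real version signals an error */
    int i = h->size++;
    h->keys[i] = key;
    while (i > 0 && h->keys[(i - 1) / 2] > h->keys[i]) {
        int tmp = h->keys[i];               /* swap with parent */
        h->keys[i] = h->keys[(i - 1) / 2];
        h->keys[(i - 1) / 2] = tmp;
        i = (i - 1) / 2;
    }
}
```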

Fig. 2. A partially ordered tree.

64. Explain the notion "heap".
A heap is stored as a complete binary tree. A binary tree of height h is complete iff:
- it is empty, or
- its left subtree is complete of height h-1 and its right subtree is completely full of height h-2, or
- its left subtree is completely full of height h-1 and its right subtree is complete of height h-1.
In addition, the keys satisfy the partial order property: the key of each node is no greater than the keys of its children (for a min-heap).

65. What is an AVL tree?
An AVL tree is a binary search tree with a balance condition: for every node in an AVL tree T, the heights of the left (TL) and right (TR) subtrees can differ by at most 1: |hL - hR| <= 1.

66. Draw the AVL tree which results from inserting the keys 52, 04, 09, 35, 43, 17, 22, 11 in an empty tree.

67. Draw the AVL tree resulting after deleting node 09 from the tree of Fig. 3.

Fig. 3. An AVL tree.

68. Describe the left-right double rotation in an AVL tree.
A left rotation around the left child of a node, followed by a right rotation around the node itself. With k1 < k2 < k3, where k3 is the unbalanced node, k1 its left child and k2 the right child of k1, the two rotations make k2 the topmost node of the subtree (see the sketch below).

69. Draw the AVL tree resulting after deleting node 35 from the tree of Fig. 4.
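The rotations of question 68 as pointer manipulations, sketched over a bare BST node; a real AVL implementation would also update the stored heights or balance factors:

```c
typedef struct AVLNode {
    int key;
    struct AVLNode *left, *right;
} AVLNode;

/* Single right rotation around k2: its left child k1 becomes the root
 * of the subtree, and k1's old right subtree moves under k2. */
AVLNode *rotate_right(AVLNode *k2) {
    AVLNode *k1 = k2->left;
    k2->left = k1->right;
    k1->right = k2;
    return k1;   /* new subtree root */
}

/* Single left rotation: the mirror image. */
AVLNode *rotate_left(AVLNode *k1) {
    AVLNode *k2 = k1->right;
    k1->right = k2->left;
    k2->left = k1;
    return k2;
}

/* Left-right double rotation around k3 (k1 < k2 < k3): first rotate left
 * around the left child k1, then right around k3; k2 ends up on top. */
AVLNode *rotate_left_right(AVLNode *k3) {
    k3->left = rotate_left(k3->left);
    return rotate_right(k3);
}
```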

Fig. 4. Another AVL tree.

70. What can you say about the running time for AVL tree operations?
- A single restructure is O(1), using a linked-structure binary tree.
- find is O(log n): the height of the tree is O(log n), and no restructuring is needed.
- insert is O(log n): the initial find is O(log n), and restructuring up the tree while maintaining heights is O(log n).
- remove is O(log n): the initial find, restructuring up the tree and maintaining heights are all O(log n).

71. What is a 2-3 tree?
A tree in which each interior node has 2 or 3 children, and each path from the root to a leaf has the same length.

72. Show the 2-3 tree which results after inserting the key 13 in the tree of Fig. 5.

Fig. 5. A 2-3 tree.

73. Show the 2-3 tree which results after deleting the key 13 from the tree of Fig. 6.

Fig. 6. Another 2-3 tree.

74. What is a 2-3-4 tree?
"2-3-4" refers to how many links to child nodes can potentially be contained in a given node. For non-leaf nodes there are three arrangements:
- a node with one data item always has 2 children;
- a node with two data items always has 3 children;
- a node with three data items always has 4 children.
A non-leaf node always has one more child than data items. Empty nodes are not allowed.

75. What were the disjoint sets with union and find designed for?
They apply to problems where we start with a collection of objects, each in a set by itself, combine sets in some order, and from time to time ask which set a particular object is in. Equivalence classes: if an equivalence relation is defined on a set S, then S can be partitioned into disjoint subsets S1, S2, ..., Sn. Equivalence problem: given a set S and a sequence of statements of the form a ≡ b, process the statements in order in such a way that at any time we are able to determine in which equivalence class a given element belongs.

76. Define the operations of the union-find set ADT.
union(A, B) takes the union of the components A and B and calls the result either A or B, arbitrarily. find(x) returns the name of the component of which x is a member. initial(A, x) creates a component named A that contains only the element x.

77. Draw a sketch showing a lists implementation for the union-find set ADT with the sets 1: {1, 4, 7}; 2: {2, 3, 6, 9}; 8: {8, 11, 10, 12}.

78. Draw a sketch showing a tree forest implementation for the union-find set ADT with the sets 1: {1, 4, 7}; 2: {2, 3, 6, 9}; 8: {8, 11, 10, 12}.

79. How can one speed up union-find ADT operations?
When performing a union, make the root of the smaller tree point to the root of the larger tree. With this rule, performing n union-find operations takes O(n log n) time: each time we follow a pointer, we move to a subtree at least double the size of the previous one, so any find follows at most O(log n) pointers. Path compression: after performing a find, redirect all the pointers on the path just traversed so that they all point directly to the root. Both speed-ups appear in the sketch below.
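A sketch of the forest implementation with both speed-ups from question 79, union by size and path compression, over elements 0..N-1 (array indices stand for the elements; names are illustrative):

```c
#define N_ELEMENTS 16

int parent[N_ELEMENTS];   /* parent[x] == x means x is a root */
int tree_size[N_ELEMENTS]; /* subtree sizes, valid for roots */

void uf_initial(void) {
    for (int x = 0; x < N_ELEMENTS; x++) { parent[x] = x; tree_size[x] = 1; }
}

/* find(x): walk to the root, then compress the traversed path so every
 * node on it points directly to the root. */
int uf_find(int x) {
    int root = x;
    while (parent[root] != root) root = parent[root];
    while (parent[x] != root) { int next = parent[x]; parent[x] = root; x = next; }
    return root;
}

/* union(a, b): make the root of the smaller tree point to the root of
 * the larger one, keeping the trees shallow. */
void uf_union(int a, int b) {
    int ra = uf_find(a), rb = uf_find(b);
    if (ra == rb) return;
    if (tree_size[ra] < tree_size[rb]) { int t = ra; ra = rb; rb = t; }
    parent[rb] = ra;
    tree_size[ra] += tree_size[rb];
}
```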
