
Advanced Data Structures using ‘C’ - Graphs and their Applications – II


Unit 4 Graphs and their Applications – II
Structure:
4.1 Introduction
Objective
4.2 Depth First Search (DFS) Algorithm
4.2.1 Sample Problem: n Queens [Traditional]
4.2.1.1 Depth First Search (DFS) Implementation
4.2.1.2 Complexity
Self Assessment Questions
4.3 Breadth First Search (BFS)
4.3.1 Sample Problem: Knight Cover [Traditional]
4.3.1.1 Breadth First Search (BFS) Implementation
4.3.1.2 Complexity
Self Assessment Questions
4.4 Depth First with Iterative Deepening (DF-ID)
4.4.1 Complexity
4.5 Comparison of DFS, BFS & DFS+ID
Self Assessment Questions
4.6 Sample Problems
4.6.1 Super-prime Rib
4.6.2 Betsy's Tour
4.6.3 Udder Travel
4.6.4 Desert Crossing
4.6.5 Addition Chains
4.7 Informed Search
4.7.1 Best First Search
4.7.2 A* Search
Self Assessment Questions
4.8 An Application of Graph
4.9 Amortized Analysis
4.9.1 Aggregate Analysis
4.9.2 The potential method
4.9.3 Properties of the Potential Function
4.9.4 The Dynamic Hash Table
4.9.5 A Dynamic Hash Table that both expands and contracts
Self Assessment Questions
4.10 Summary
4.11 Terminal Questions
4.1 Introduction
In this unit the applications of graphs are discussed, mainly with respect to search algorithms. DFS, BFS and DF-ID are discussed, with the analysis of examples such as the movement of knights on an n x n chess board. Several related practical examples are also discussed. Informed search methods, an application of graphs, and amortized cost analysis are introduced.
Objective: By the end of this unit the reader must be able to answer
 How DFS, BFS and DF-ID searches are used, and how these methods compare with one another
 How to apply these search algorithms to related problems other than the examples given here
 What the informed search algorithms are and how they work
 How cost-effective these search algorithms are, and how to carry out an amortized analysis
4.2 Depth First Search (DFS) Algorithm
Form a one-element queue consisting of the root node. Until the queue is empty or the goal has been reached, determine if the first element in the queue is the goal node. If the first element is the goal node, do nothing. If it is not, remove it from the queue and add its children, if any, to the FRONT of the queue. If the goal node has been found, announce success; otherwise announce failure.
Note: This implementation differs from BFS only in where the first element's children are inserted: DFS inserts them at the FRONT of the queue, while BFS inserts them at the BACK. The worst case for DFS is the best case for BFS and vice versa. However, avoid using DFS when the search trees are very large or have infinite maximum depth.
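For concreteness, here is a minimal C sketch of this procedure using an explicit stack (insertion at the "front" of the queue); the small example tree and the goal value are assumptions made purely for illustration.

#include <stdio.h>

#define MAX_NODES 8
#define NO_CHILD  (-1)

/* children[n] lists the children of node n in a small example tree rooted at 0 */
static const int children[MAX_NODES][2] = {
    {1, 2}, {3, 4}, {5, 6}, {7, NO_CHILD},
    {NO_CHILD, NO_CHILD}, {NO_CHILD, NO_CHILD},
    {NO_CHILD, NO_CHILD}, {NO_CHILD, NO_CHILD}
};

int dfs(int root, int goal)
{
    int stack[MAX_NODES];
    int top = 0;

    stack[top++] = root;                 /* one-element "queue" holding the root */
    while (top > 0) {                    /* until the queue is empty             */
        int node = stack[--top];         /* examine the first element            */
        if (node == goal)
            return 1;                    /* goal found: announce success         */
        for (int c = 1; c >= 0; c--)     /* add children to the FRONT            */
            if (children[node][c] != NO_CHILD)
                stack[top++] = children[node][c];
    }
    return 0;                            /* queue exhausted: announce failure    */
}

int main(void)
{
    printf("goal %sfound\n", dfs(0, 5) ? "" : "not ");
    return 0;
}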
4.2.1 Sample Problem: n Queens [Traditional]
Place n queens on an n x n chess board so that no queen is attacked by another queen.
4.2.1.1 Depth First Search (DFS) Implementation
The most obvious solution to code is to add queens recursively to the board one by one, trying all possible queen placements. It is easy to exploit the fact that there must be exactly one queen in each column: at each step in the recursion, just choose where in the current column to put the queen.
1. search(col)
2.   if filled all columns
3.     print solution and exit
4.   for each row
5.     if board(row, col) is not attacked
6.       place queen at (row, col)
7.       search(col+1)
8.       remove queen at (row, col)
Calling search(0) begins the search. This runs quickly, since there are relatively few choices at each step: once a few queens are on the board, the number of non-attacked squares goes down dramatically. This is an example of depth first search, because the algorithm iterates down to the bottom of the search tree as quickly as possible: once k queens are placed on the board, boards with even more queens are examined before examining other possible boards with only k queens. This is acceptable, but sometimes it is desirable to find the simplest solutions before trying more complex ones. Depth first search checks each node in a search tree for some property. The algorithm searches the tree by going down as far as possible and then backtracking when necessary, making a sort of outline of the tree as the nodes are visited.
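The same recursion can be rendered directly in C. The sketch below is one possible implementation, with the board size N and the attack test as illustrative assumptions; it counts solutions rather than printing and exiting at the first one.

#include <stdio.h>
#include <stdlib.h>

#define N 8                  /* board size; an assumed value, change as needed */

static int row_of[N];        /* row_of[c] = row of the queen placed in column c */
static long solutions = 0;

/* Returns 1 if a queen at (row, col) is attacked by any queen in columns 0..col-1. */
static int attacked(int row, int col)
{
    for (int c = 0; c < col; c++) {
        if (row_of[c] == row) return 1;                 /* same row      */
        if (abs(row_of[c] - row) == col - c) return 1;  /* same diagonal */
    }
    return 0;
}

static void search(int col)
{
    if (col == N) {              /* filled all columns: record the solution */
        solutions++;
        return;
    }
    for (int row = 0; row < N; row++) {
        if (!attacked(row, col)) {
            row_of[col] = row;   /* place queen at (row, col) */
            search(col + 1);
            /* removing the queen is implicit: row_of[col] is overwritten next pass */
        }
    }
}

int main(void)
{
    search(0);                   /* calling search(0) begins the search */
    printf("%d-queens solutions: %ld\n", N, solutions);
    return 0;
}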
4.2.1.2 Complexity
Suppose there are d decisions that must be made. (In this case d = n, the number of columns we must fill.) Suppose further that there are c choices for each decision. (In this case c = n also, since any of the rows could potentially be chosen.) Then the entire search will take time proportional to c^d, i.e., an exponential amount of time. This scheme requires little space, though: since it only keeps track of as many decisions as there are to make, it requires only O(d) space.
Self Assessment Questions
1. What is the significance of a Depth First Search (DFS)? How is it implemented?
2. Discuss the complexity of DFS.
4.3 Breadth First Search (BFS)
Form a one-element queue consisting of the root node. Until the queue is empty or the goal has been reached, determine if the first element in the queue is the goal node. If the first element is the goal node, do nothing (or stop, depending on the situation). If the first element is not the goal node, remove the first element from the queue and add the first element's children, if any, to the BACK of the queue. If the goal node has been found, announce success; otherwise announce failure. The side effects of BFS:
1. Memory requirements are a bigger problem for BFS than execution time.
2. Time is still a major factor, especially when the goal node is at the deepest level.
4.3.1 Sample Problem: Knight Cover [Traditional]
Place as few knights as possible on an n x n chess board so that every square is attacked. A knight is not considered to attack the square on which it sits.
4.3.1.1 Breadth First Search (BFS) Implementation
In this case, it is desirable to try all the solutions with only k knights before moving on to those with k+1 knights. This is called breadth first search. The usual way to implement breadth first search is to use a queue of states:
1. process(state)
2.   for each possible next state from this one
3.     enqueue next state
4. search()
5.   enqueue initial state
6.   while !empty(queue)
7.     state = get state from queue
8.     process(state)
This is called breadth first search because it searches an entire row (the breadth) of the search tree before moving on to the next row. For the search tree used previously, breadth first search visits the nodes in this order:
It first visits the top node, then all the nodes at level 1, then all at level 2, and so on.
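A minimal C sketch of this queue-based scheme is given below; the small example graph (a tree rooted at node 0, stored as an adjacency matrix) and the goal value are assumptions for illustration.

#include <stdio.h>

#define MAX_NODES 8

/* adjacency matrix of a small example graph (tree rooted at node 0) */
static const int adj[MAX_NODES][MAX_NODES] = {
    /* 0 */ {0,1,1,0,0,0,0,0},
    /* 1 */ {0,0,0,1,1,0,0,0},
    /* 2 */ {0,0,0,0,0,1,1,0},
    /* 3 */ {0,0,0,0,0,0,0,1},
};

int bfs(int root, int goal)
{
    int queue[MAX_NODES];
    int head = 0, tail = 0;
    int visited[MAX_NODES] = {0};

    queue[tail++] = root;              /* form a one-element queue holding the root */
    visited[root] = 1;

    while (head < tail) {              /* until the queue is empty              */
        int node = queue[head++];      /* remove the first element              */
        if (node == goal)
            return 1;                  /* goal found: announce success          */
        for (int next = 0; next < MAX_NODES; next++)
            if (adj[node][next] && !visited[next]) {
                visited[next] = 1;
                queue[tail++] = next;  /* add children to the BACK of the queue */
            }
    }
    return 0;                          /* queue exhausted: announce failure     */
}

int main(void)
{
    printf("goal %sfound\n", bfs(0, 6) ? "" : "not ");
    return 0;
}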
4.3.1.2 Complexity
Whereas depth first search required space proportional to the number of decisions (there were n columns to fill in the n queens problem, so it took O(n) space), breadth first search requires space exponential in the number of choices. If there are c choices at each decision and k decisions have been made, then there are c^k possible boards that will be in the queue for the next round. This difference is quite significant given the space restrictions of some programming environments.
Self Assessment Questions
1. Explain the Breadth First Search (BFS) algorithm. What are its implications?
4.4 Depth First with Iterative Deepening (DF-ID)
An alternative to breadth first search is iterative deepening. Instead of a single breadth first search, run d depth first searches in succession, each search allowed to go one row deeper than the previous one. That is, the first search is allowed only to explore to row 1, the second to row 2, and so on. This "simulates" a breadth first search at a cost in time but a savings in space.
1. truncated_dfsearch(nextpos, depth)
2.   if board is covered
3.     print solution and exit
4.   if depth == 0
5.     return
6.   for i from nextpos to n*n
7.     put knight at i
8.     truncated_dfsearch(i+1, depth-1)
9.     remove knight at i
10. dfid_search
11.   for depth = 0 to max_depth
12.     truncated_dfsearch(0, depth)
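A possible C rendering of the same idea is sketched below on a small example tree rather than the knight-cover board; the tree, the goal value and the maximum depth are illustrative assumptions.

#include <stdio.h>

#define MAX_NODES 8
#define NO_CHILD  (-1)

/* children[n] lists the children of node n in a small example tree rooted at 0 */
static const int children[MAX_NODES][2] = {
    {1, 2}, {3, 4}, {5, 6}, {7, NO_CHILD},
    {NO_CHILD, NO_CHILD}, {NO_CHILD, NO_CHILD},
    {NO_CHILD, NO_CHILD}, {NO_CHILD, NO_CHILD}
};

/* depth-limited (truncated) depth first search */
static int truncated_dfsearch(int node, int goal, int depth)
{
    if (node == goal)
        return 1;                       /* goal reached at or above this depth */
    if (depth == 0)
        return 0;                       /* depth limit reached: give up on this branch */
    for (int c = 0; c < 2; c++) {
        int child = children[node][c];
        if (child != NO_CHILD && truncated_dfsearch(child, goal, depth - 1))
            return 1;
    }
    return 0;
}

/* run successively deeper truncated searches until the goal is found */
int dfid_search(int root, int goal, int max_depth)
{
    for (int depth = 0; depth <= max_depth; depth++)
        if (truncated_dfsearch(root, goal, depth))
            return depth;               /* depth of the shallowest occurrence */
    return -1;                          /* not found within max_depth */
}

int main(void)
{
    printf("goal found at depth %d\n", dfid_search(0, 6, MAX_NODES));
    return 0;
}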
4.4.1 Complexity
The space complexity of iterative deepening is just the space complexity of depth first search: O(d). The time complexity, on the other hand, is more involved. Each truncated depth first search stopped at depth k takes c^k time. Then if d is the maximum number of decisions, depth first iterative deepening takes c^0 + c^1 + c^2 + ... + c^d time.
If c = 2, this sum is c^(d+1) - 1, about twice the time that breadth first search would have taken. When c is more than two (i.e., when there are many choices for each decision), the relative overhead is even smaller: iterative deepening
cannot take more than twice the time that breadth first search would have taken, assuming there are always at least two choices for each decision.
4.5 Comparison of DFS, BFS & DFS+ID
Once you've identified a problem as a search problem, it's important to choose the right type of search. Here is some information that may be useful when making the decision.

In a Nutshell

Search   Time     Space    When to use
DFS      O(c^k)   O(k)     Must search the whole tree anyway, know the level the answers are on, or aren't looking for the shallowest answer.
BFS      O(c^d)   O(c^d)   Know the answers are very near the top of the tree, or want the shallowest answer.
DFS+ID   O(c^d)   O(d)     Want to do BFS, don't have enough space, and can spare the time.

Here d is the depth of the answer and k is the depth searched, with d <= k.

Remember the ordering properties of each search. If the program needs to produce a list sorted shortest solution first (in terms of distance from the root node), use breadth first search or iterative deepening. For other orders, depth first search is the right strategy.
If there isn't enough time to search the entire tree, use the algorithm that is more likely to find the answer. If the answer is expected to be in one of the rows of nodes closest to the root, use breadth first search or iterative deepening. Conversely, if the answer is expected to be in the leaves, use the simpler depth first search. Be sure to keep space constraints in mind. If
memory is insufficient to maintain the queue for breadth first search but time is available, use iterative deepening.
Self Assessment Questions
1. Compare and contrast DFS, BFS & DFS+ID approaches.
2. What are the various problem areas where the above methods are applicable?
4.6 Sample Problems
4.6.1 Super-prime Rib
A number is called super-prime if it is prime and every number obtained by chopping some number of digits from the right side of the decimal expansion is prime. For example, 233 is a super-prime, because 233, 23, and 2 are all prime. Print a list of all the super-prime numbers of length n, for n <= 9. The number 1 is not a prime. For this problem, use depth first search, since all the answers are going to be at the nth level (the bottom level) of the search.
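One way to code this in C is sketched below: extend a known super-prime one digit at a time, keeping only prime extensions. The chosen value of n and the simple trial-division primality test are assumptions made for illustration.

#include <stdio.h>

static int is_prime(long x)
{
    if (x < 2) return 0;
    for (long d = 2; d * d <= x; d++)
        if (x % d == 0) return 0;
    return 1;
}

/* value is already a super-prime with 'len' digits; extend it to 'target' digits */
static void search(long value, int len, int target)
{
    if (len == target) {            /* reached the bottom level: print and backtrack */
        printf("%ld\n", value);
        return;
    }
    /* appended digits must be odd; an even last digit would make the prefix composite */
    for (int digit = 1; digit <= 9; digit += 2) {
        long next = value * 10 + digit;
        if (is_prime(next))
            search(next, len + 1, target);
    }
}

int main(void)
{
    int n = 6;                      /* length of super-primes wanted; n <= 9 */
    int first[] = {2, 3, 5, 7};     /* the first digit must itself be prime */
    for (int i = 0; i < 4; i++)
        search(first[i], 1, n);
    return 0;
}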
4.6.2 Betsy's Tour
A square township has been partitioned into n^2 square plots. The Farm is located in the upper left plot and the Market is located in the lower left plot. Betsy takes a tour of the township going from Farm to Market by walking through every plot exactly once. Write a program that will count how many unique tours Betsy can take in going from Farm to Market for any value of n <= 6. Since the number of solutions is required, the entire tree must be searched, even if one solution is found quickly. So it doesn't matter from a time perspective whether DFS or BFS is used. Since DFS takes less space, it is the search of choice for this problem.
4.6.3 Udder Travel
The Udder Travel cow transport company is based at farm A and owns one cow truck which it uses to pick up and deliver cows between seven farms A, B, C, D, E, F, and G. The (commutative) distances between farms are given by an array. Every morning, Udder Travel has to decide, given a set of cow moving orders, the order in which to pick up and deliver cows to minimize the total distance traveled. Here are the rules:
1. The truck always starts from the headquarters at farm A and must return there when the day's deliveries are done.
2. The truck can only carry one cow at a time.
3. The orders are given as pairs of letters denoting where a cow is to be picked up followed by where the cow is to be delivered.
Your job is to write a program that, given any set of orders, determines the shortest route that takes care of all the deliveries, while starting and ending at farm A. Since all possibilities must be tried in order to ensure the best one is found, the entire tree must be searched, which takes the same amount of time whether using DFS or BFS. Since DFS uses much less space and is conceptually easier to implement, use that.
4.6.4 Desert Crossing
A group of desert nomads is working together to try to get one of their group across the desert. Each nomad can carry a certain number of quarts of water, and each nomad drinks a certain amount of water per day, but the nomads can carry differing amounts of water, and require different amounts of water. Given the carrying capacity and drinking requirements of each nomad, find the minimum number of nomads required to get at least one nomad across the desert.
All the nomads must survive, so every nomad that starts out must either turn back at some point, carrying enough water to get back to the start, or must reach the other side of the desert. However, if a nomad has surplus water when it is time to turn back, the water can be given to the others, provided they can carry it. Analysis: This problem is actually two recursive problems: one recursing on the set of nomads to use, the other on when the nomads turn back. Depth first search with iterative deepening works well here to determine the number of nomads required: first try whether any one nomad can make it across alone, then whether two can work together to get across, and so on.
4.6.5 Addition Chains
An addition chain is a sequence of integers such that the first number is 1, and every subsequent number is the sum of some two (not necessarily distinct) numbers that appear in the list before it. For example, 1 2 3 5 is such a chain, as 2 is 1+1, 3 is 2+1, and 5 is 2+3. Find the minimum length chain that ends with a given number. Analysis: Depth first search with iterative deepening works well here, as plain DFS has a tendency to first try 1 2 3 4 5 ... n, which is a very poor chain, while for BFS the queue grows too large very quickly.
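A hedged C sketch of this DF-ID approach is shown below; the target value and the restriction to increasing chains are illustrative assumptions (an increasing chain always suffices when only the minimum length is wanted).

#include <stdio.h>

#define MAX_LEN 32

static int chain[MAX_LEN];
static int found_len;

/* chain[0..len-1] is a valid prefix; try to reach 'target' using at most 'limit' elements */
static int extend(int len, int limit, int target)
{
    if (chain[len - 1] == target) {
        found_len = len;
        return 1;
    }
    if (len == limit)
        return 0;                           /* depth limit reached: backtrack */
    /* the next element is the sum of two (not necessarily distinct) earlier elements */
    for (int i = len - 1; i >= 0; i--)
        for (int j = i; j >= 0; j--) {
            int next = chain[i] + chain[j];
            if (next > chain[len - 1] && next <= target) {
                chain[len] = next;
                if (extend(len + 1, limit, target))
                    return 1;
            }
        }
    return 0;
}

int main(void)
{
    int target = 23;                        /* example target; an assumption */
    chain[0] = 1;
    for (int limit = 1; limit <= MAX_LEN; limit++)
        if (extend(1, limit, target)) {     /* first limit that works is minimal */
            printf("minimum chain length for %d is %d:", target, found_len);
            for (int i = 0; i < found_len; i++)
                printf(" %d", chain[i]);
            printf("\n");
            return 0;
        }
    return 0;
}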
4.7 Informed Search
Unlike uninformed search, informed search uses some knowledge about the problem that can be used to improve the choice of paths. Examples of informed search: Best First Search and heuristic searches such as A*.
4.7.1 Best First Search
We define an evaluation function f(n) = g(n), where g(n) is the estimated cost from node n to the goal. This search is "informed" because we do a calculation to estimate g(n) and always expand the node with the smallest estimate.
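As an illustration, the C sketch below always expands the frontier node with the smallest estimate g(n); the small example graph and the estimates themselves are assumptions made purely for the sake of the example.

#include <stdio.h>

#define N 5

/* adjacency matrix of a small example graph (1 = edge) */
static const int adj[N][N] = {
    {0,1,1,0,0},
    {0,0,0,1,0},
    {0,0,0,1,1},
    {0,0,0,0,1},
    {0,0,0,0,0},
};

/* g(n): estimated distance from node n to the goal (node 4); an assumed heuristic */
static const int g[N] = {3, 2, 1, 1, 0};

int best_first(int start, int goal)
{
    int open[N] = {0}, closed[N] = {0};
    open[start] = 1;

    for (;;) {
        /* choose the open node with the smallest estimate g(n) */
        int best = -1;
        for (int i = 0; i < N; i++)
            if (open[i] && (best < 0 || g[i] < g[best]))
                best = i;
        if (best < 0) return 0;          /* frontier empty: failure */
        if (best == goal) return 1;      /* goal reached: success   */

        open[best] = 0;                  /* expand it: move to closed set      */
        closed[best] = 1;
        for (int next = 0; next < N; next++)
            if (adj[best][next] && !closed[next])
                open[next] = 1;          /* put its successors on the frontier */
    }
}

int main(void)
{
    printf("goal %sreached\n", best_first(0, 4) ? "" : "not ");
    return 0;
}

Adding the cost incurred so far to the evaluation function, as in the A* variant discussed next, makes the ordering account for path length as well as the estimate.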
4.7.2 A* Search
f(n) = h(n) + g(n). Like Best First Search it uses g(n), the estimate to the goal, but it also uses h(n), the total cost incurred so far. This is the best search to consider if you know how to compute g(n).
Self Assessment Questions
1. Explain the different informed search algorithms and how they improve on uninformed searches such as BFS.
4.8 An Application of Graph
Input Description: A graph G.
Problem: Give an efficient, flexible data structure to represent G.
While there are several possible variations, the two basic data structures for graphs are adjacency matrices and adjacency lists.
 How big will your graph be? How many vertices will it have, both typically and in the worst case? Ditto for the number of edges. If your graph has 100 vertices, your adjacency matrix contains 10,000 entries. If your graph has 1,000 vertices, your adjacency matrix contains 1,000,000 entries. If your graph has 10,000 vertices, your adjacency matrix contains 100,000,000 entries -- so forget about it. Adjacency matrices work only for small or very dense graphs.
 How dense will your graph be? If the graph is very dense, meaning that a large fraction of the vertex pairs define edges, there is probably no compelling reason to use adjacency lists, since you will be doomed to using Theta(n^2) space anyway.
 Which algorithms will you be implementing? Certain algorithms are easier on adjacency matrices (such as all-pairs shortest path) and others on adjacency lists (such as most DFS-based algorithms). Adjacency matrices win for algorithms that repeatedly ask, "Is (i,j) in G?" However, most graph algorithms can be modified to eliminate such queries.
 Will you be modifying the graph over the course of your application, and if so, how? Repeated edge insertions and (particularly) deletions argue for adjacency matrices, or perhaps for fancier versions of adjacency lists such as binary search trees. However, more likely than modifying the topology of the graph is modifying the attributes of a vertex or edge of the graph, such as size, weight, or color. Attributes are best handled as extra fields in the vertex or edge records of adjacency lists.
Building a good general-purpose graph type is surprisingly tricky and difficult. For this reason, we suggest that you check out existing implementations (particularly LEDA) before hacking up your own. Note that it costs only time linear in the size of the larger data structure to convert between adjacency matrices and adjacency lists. This conversion is unlikely to be the bottleneck in any application, if you decide you want to use both data structures and have the space to store them. This usually isn't necessary but might prove simplest if you are confused about the alternatives.
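The two representations might be declared in C roughly as follows; the sizes and field names are illustrative assumptions, not a fixed library interface.

#include <stdlib.h>

#define MAXV 100                    /* maximum number of vertices we expect */

/* Adjacency matrix: Theta(V^2) space, O(1) test for "is (i,j) in G?" */
typedef struct {
    int nvertices;                  /* number of vertices actually used */
    int weight[MAXV][MAXV];         /* weight[i][j] != 0 means edge (i,j) exists */
} graph_matrix;

/* Adjacency list: Theta(V + E) space, good for sparse graphs and DFS/BFS */
typedef struct edgenode {
    int y;                          /* neighbouring vertex */
    int weight;                     /* edge weight or other attribute */
    struct edgenode *next;          /* next edge out of the same vertex */
} edgenode;

typedef struct {
    int nvertices;
    edgenode *edges[MAXV];          /* edges[i] = head of vertex i's edge list */
} graph_list;

/* insert a directed edge (x, y) into an adjacency-list graph */
void insert_edge(graph_list *g, int x, int y, int weight)
{
    edgenode *e = malloc(sizeof(edgenode));
    e->y = y;
    e->weight = weight;
    e->next = g->edges[x];          /* push onto the front of x's list */
    g->edges[x] = e;
}

int main(void)
{
    graph_list g = { 3, { NULL } }; /* three vertices, no edges yet */
    insert_edge(&g, 0, 1, 10);
    insert_edge(&g, 1, 2, 20);
    return 0;
}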
4.9 Amortized Analysis
A data structure is a container for holding data that must be maintained through a succession of inserts, removes, and find/retrieves (or overwrites). Using a worst-case analysis to determine the cost of each of a succession of these operations can grossly overstate both the total cost and the average cost per operation of "maintaining" this structure. This is particularly true for self-adjusting structures such as splay trees and heaps. A better approach is to use an amortized analysis which determines an amortized ("time" averaged) cost per operation that is determined by either:
1. Finding the total cost of n successive operations (when such a cost can be determined) and dividing by n, or
2. Associating a potential function with the data structure and evaluating the amortized cost of each operation as the cost of the operation plus the change in potential caused by the operation.
In general, we will be seeking to place a big-O bound on the amortized cost of each operation.
4.9.1 Aggregate Analysis
The first method of analysis is to simply add up the cost of n successive operations, divide by the number of operations, n, and obtain the average or amortized cost per operation. In order to employ this approach, one would have to know what each operation on the container structure was, and what its cost would be. This method can be employed quite successfully if we either evaluate after the fact what the total cost of the n observed operations was, or if we are working with a structure that only has a single operation (like the binary counter). In general, given a container, we will not know in advance the succession of operations that are to be requested.
Example 1: The Binary Counter
Consider an m-bit binary counter with only the increment operation. If we examine a succession of increments and count the number of bits that are flipped on each increment:

0 0 0 0 0 0 0 0 0 0 0 0   start state
0 0 0 0 0 0 0 0 0 0 0 1   1 bit changed
0 0 0 0 0 0 0 0 0 0 1 0   2 bits changed
0 0 0 0 0 0 0 0 0 0 1 1   1 bit changed
0 0 0 0 0 0 0 0 0 1 0 0   3 bits changed
0 0 0 0 0 0 0 0 0 1 0 1   1 bit changed
0 0 0 0 0 0 0 0 0 1 1 0   2 bits changed
0 0 0 0 0 0 0 0 0 1 1 1   1 bit changed
0 0 0 0 0 0 0 0 1 0 0 0   4 bits changed
0 0 0 0 0 0 0 0 1 0 0 1   1 bit changed
0 0 0 0 0 0 0 0 1 0 1 0   2 bits changed
0 0 0 0 0 0 0 0 1 0 1 1   1 bit changed
0 0 0 0 0 0 0 0 1 1 0 0   3 bits changed
0 0 0 0 0 0 0 0 1 1 0 1   1 bit changed
0 0 0 0 0 0 0 0 1 1 1 0   2 bits changed
0 0 0 0 0 0 0 0 1 1 1 1   1 bit changed
0 0 0 0 0 0 0 1 0 0 0 0   5 bits changed

We observe that only 1 bit is flipped every other increment (starting with the first), 2 bits are flipped on the second increment and every four increments after that (always on even numbered increments), 3 bits are flipped on the fourth increment and every eight increments afterward, 4 bits are flipped on the eighth increment and every sixteen increments thereafter, and so on. This observation can be put in the form of a formula as follows:
Number of flips in n increments = 1·[n/2] + 2·[n/4] + 3·[n/8] + ... + k·[n/2^k], where 2^(k-1) < n ≤ 2^k. This can be written more concisely as

    number of flips in n increments = Σ (i = 1 to ⌈lg n⌉) i·[n/2^i] ≈ n · Σ (i ≥ 1) i/2^i.

Next let 1/2^(i-1) = (1/2)^(i-1) = x^(i-1). Then

    Σ (i ≥ 1) i·x^(i-1) = d/dx Σ (i ≥ 0) x^i = d/dx (1/(1-x)) = 1/(1-x)^2,

so that, at x = 1/2,

    Σ (i ≥ 1) i·x^i = x/(1-x)^2 = (1/2)/(1/4) = 2,

and the number of bit flips in n increments is approximately 2n (and never more than 2n). The amortized cost per operation is therefore 2.
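The aggregate bound is easy to check experimentally. The small C program below increments an m-bit counter n times and reports the average number of bit flips per increment; the particular values of m and n are arbitrary choices for illustration.

#include <stdio.h>

#define M 12                         /* number of bits in the counter */

static int bits[M];                  /* bits[0] is the least significant bit */

/* increment the counter and return the number of bits flipped */
static int increment(void)
{
    int flips = 0, i = 0;
    while (i < M && bits[i] == 1) {  /* turn trailing 1s back to 0 */
        bits[i++] = 0;
        flips++;
    }
    if (i < M) {                     /* turn the first 0 bit into a 1 */
        bits[i] = 1;
        flips++;
    }
    return flips;
}

int main(void)
{
    long total = 0;
    int n = 1000;
    for (int i = 0; i < n; i++)
        total += increment();
    /* total/n comes out just under 2, the amortized cost per increment */
    printf("%ld flips in %d increments (%.3f per increment)\n",
           total, n, (double)total / n);
    return 0;
}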
4.9.2 The potential method
Another way of assigning an amortized cost to a succession of operations on a data structure is to associate a potential function with that data structure and assert that each operation on that structure has an amortized cost that includes the actual cost of the operation and the change in potential caused by that operation. For instance, if we consider the binary counter in the previous example and choose as our potential function the number of bits that are 1, then the result of an increment operation is given by the equation

    ĉ_i = c_i + Φ_i − Φ_(i−1)

where c_i is the actual cost of the ith operation and Φ_i is the potential after the ith operation. If we let b_(i−1) denote the number of bits set to one before the ith operation and t_i the number of 1 bits that are toggled to zero during the ith operation, then
    Φ_(i−1) = b_(i−1)   and   Φ_i = b_(i−1) − t_i + 1,

since exactly one 0 bit is always changed to a 1 during an increment. The potential after the ith operation equals the potential before it, minus the number of 1 bits toggled to zero, plus the one 0 bit toggled to a 1. The actual cost of the increment is c_i = t_i + 1, so the amortized cost of any increment is

    ĉ_i = c_i + Φ_i − Φ_(i−1) = (t_i + 1) + (b_(i−1) − t_i + 1) − b_(i−1) = 2.

This result can be rationalized as the cost of toggling a 0 bit to 1 (cost of 1) plus "paying in advance" (by adding to the potential) the cost of converting it back to a 0.
4.9.3 Properties of the Potential Function
The most difficult aspect of using amortized analysis to study the realistic costs involved in a sequence of operations on a data structure is to find an appropriate potential function. A good potential function should have the following properties:
1. It should be non-negative: if Φ_0 is the potential of the empty data structure and Φ_i the potential after the ith operation on this structure, then Φ_i ≥ 0.
2. It should be simple
3. It should be reflective of the actual cost of emptying the data structure.
4.9.4 The Dynamic Hash Table
Another example could be instructive. Consider a hash table that is initially of size 1, and every time the table becomes full and a new item is to be inserted into it, the table is made to double in size and all of the current elements re-hashed into the new table. After the initial size 1 table, the table can vary from being 1 item more than half full to being completely full.
If we let num_i denote the number of items in the table after the ith operation and size_i denote the size of the table after the ith operation, then we can define a load factor α_i as

    α_i = num_i / size_i,

the fraction of the table that is filled after the ith operation. Since the initial state of an expanded table is 1/2 full, and hence 2·num_i ≥ size_i, an appropriate choice of potential would be

    Φ_i = 2·num_i − size_i.

Then we will consider two situations:
1. an insertion into a table that is less than full
2. an insertion into a full table.
Case 1. Inserting into a table that is less than full

In the first instance, the amortized cost is

    ĉ_i = c_i + Φ_i − Φ_(i−1) = 1 + (2·num_i − size_i) − (2·num_(i−1) − size_(i−1)).

When the table is less than full we have 1/2 < α_i < 1, and can determine that

    size_i = size_(i−1)       (the size of the table is the same before and after the insertion)
    num_i  = num_(i−1) + 1    (the contents of the table are increased by 1)
    c_i    = 1                (the cost of an insert is the cost of doing a hash, i.e. O(1)).

Plugging these values into the above equation yields

    ĉ_i = 1 + (2·(num_(i−1) + 1) − size_(i−1)) − (2·num_(i−1) − size_(i−1)) = 3.

The amortized cost of this insert is 3, or more particularly O(1).
Case 2. Inserting into a full table
When the table is full, an expansion must occur before the new item is inserted. The cost of performing this insertion is, therefore, the cost of hashing all of the numi-1 current items into the new table plus the cost of inserting the new item.
    ĉ_i = c_i + Φ_i − Φ_(i−1) = num_(i−1) + 1 + (2·num_i − size_i) − (2·num_(i−1) − size_(i−1))

where in this case

    size_i = 2·size_(i−1)      (the table doubles in size before the ith insertion)
    num_i  = num_(i−1) + 1     (the number of items is incremented by 1)
    c_i    = num_(i−1) + 1     (as discussed above)
    num_(i−1) = size_(i−1)     (since the table is full).

The amortized cost is therefore determined to be

    ĉ_i = num_(i−1) + 1 + (2·(num_(i−1) + 1) − 2·size_(i−1)) − (2·num_(i−1) − size_(i−1))
        = num_(i−1) − size_(i−1) + 3 = 3.
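A hedged C sketch of such a doubling table is given below; the open-addressing "hash" (key mod size) and the absence of error checking are simplifications made for illustration. The reported average cost per insert stays below the amortized bound of 3 derived above.

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int *slot;      /* slot[i] holds a key, or -1 if empty */
    int size;       /* current capacity */
    int num;        /* number of items stored */
} dyntable;

static long cost = 0;                           /* total actual cost of all inserts */

static void place(dyntable *t, int key)         /* one unit of actual cost */
{
    int i = (unsigned)key % t->size;
    while (t->slot[i] != -1)                    /* linear probing */
        i = (i + 1) % t->size;
    t->slot[i] = key;
}

void insert(dyntable *t, int key)
{
    if (t->num == t->size) {                    /* table full: double and re-hash */
        int old_size = t->size;
        int *old = t->slot;
        t->size *= 2;
        t->slot = malloc(t->size * sizeof(int));
        for (int i = 0; i < t->size; i++) t->slot[i] = -1;
        for (int i = 0; i < old_size; i++)
            if (old[i] != -1) { place(t, old[i]); cost++; }
        free(old);
    }
    place(t, key);
    cost++;
    t->num++;
}

int main(void)
{
    dyntable t = { malloc(sizeof(int)), 1, 0 }; /* table of initial size 1 */
    t.slot[0] = -1;
    int n = 1000;
    for (int key = 0; key < n; key++)
        insert(&t, key);
    printf("%ld total cost for %d inserts (%.3f per insert)\n",
           cost, n, (double)cost / n);
    return 0;
}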
4.9.5 A Dynamic Hash Table that both expands and contracts
Let our hash table expand (double in size) when inserting into a full table and contract to half its size when removing from a table that is 1/4 full. The potential function for this table is given below.
The potential function is therefore

    Φ_i = 2·num_i − size_i       if α_i ≥ 1/2
    Φ_i = size_i/2 − num_i       if α_i < 1/2.

We have already examined the cases of inserting into a full and a non-full table when α(T) ≥ 1/2. Let us examine the cases of removing an element from a table when α(T) < 1/2.

Case 3. Removing an element from a table when 1/4 < α(T) < 1/2

In this instance, the amortized cost is

    ĉ_i = c_i + Φ_i − Φ_(i−1) = 1 + (size_i/2 − num_i) − (size_(i−1)/2 − num_(i−1)).

When the table is less than half full we have 1/4 < α_i < 1/2, and can determine that

    size_i = size_(i−1)       (the size of the table is the same before and after the removal)
    num_i  = num_(i−1) − 1    (the contents of the table are decreased by 1)
    c_i    = 1                (the cost of a remove is the cost of doing a hash, i.e. O(1)).

Plugging these values into the above equation yields

    ĉ_i = 1 + (size_(i−1)/2 − (num_(i−1) − 1)) − (size_(i−1)/2 − num_(i−1)) = 2.

The amortized cost of this remove is 2, or more particularly O(1).

Case 4. Removing an element from a table that is 1/4 full

When the table is 1/4 full, a contraction must occur before the next item is removed. The cost of performing this removal is, therefore, the cost of hashing all of the num_(i−1) current items into the new table plus the cost of removing the next item:

    ĉ_i = c_i + Φ_i − Φ_(i−1) = num_(i−1) + 1 + (size_i/2 − num_i) − (size_(i−1)/2 − num_(i−1))
where in this case

    size_i = size_(i−1)/2       (the table halves in size before the ith removal)
    num_i  = num_(i−1) − 1      (the number of items is decreased by 1)
    c_i    = num_(i−1) + 1      (as discussed above)
    num_(i−1) = size_(i−1)/4    (since the table is 1/4 full).

Substituting into the previous equation, the amortized cost is determined to be

    ĉ_i = num_(i−1) + 1 + (size_(i−1)/4 − (num_(i−1) − 1)) − (size_(i−1)/2 − num_(i−1))
        = num_(i−1) − size_(i−1)/4 + 2 = 2 = O(1).

Self Assessment Questions
1. Explain the amortized analysis methods and the potential function with respect to a data structure.
4.10 Summary
Depth first search is a useful algorithm, but it should be avoided when the search trees are very large or have infinite maximum depth. Breadth first search forms a one-element queue consisting of the root node; until the queue is empty, the first element in the queue is examined to see whether it is the goal node. BFS has disadvantages such as a large memory requirement and a long execution time when the goal node is at the deepest level. DF-ID is an alternative to BFS and is an iterative deepening search method. Aggregate analysis, the potential method and the dynamic hash table are the amortized analysis techniques used for the cost analysis of a succession of operations on a data structure.
4.11 Terminal Questions
1. Construct a B-tree (t = 3) by inserting in order the keys:
M A R I S T B L U E F O X. Show the step-by-step construction of this B-tree.
2. Given the B-tree (figure 1) shown below:
a. Show the step-by-step process of inserting key E into this tree.
b. Show the step-by-step process of removing key P from this tree.
c. Show the step-by-step process of removing key R from this tree.
3. Given the B-tree (figure 2) shown below.
a. Show the step-by-step process of removing key D from this tree.
b. Show the step-by-step process of removing key Q from this tree.
c. Show the step-by-step process of removing key E from this tree.
4. Suppose that for each unvisited vertex in a graph we write the vertex number and the time (a counter initialized to 1 that increments each time a new node is visited) when it is first visited. Perform a depth-first search on the graph in figure 3 and indicate the order in which the vertices were visited.
5. For each edge in the directed acyclic graph in figure 4 below determine its type. Perform a depth-first search for each of the vertices in the order of their vertex number and indicate the count when each was first visited (start) and when the dfs was finished (finish). Use this information plus the color of the vertex to determine if an edge is a tree edge, cross edge, forward edge, or back edge.
6. Show that if a Decrement operation were included in the k-bit counter, n operations could cost as much as O(nk) time.
7. A sequence of n operations is performed on a data structure. The ith operation costs i if i is an exact power of 2 and 1 otherwise. Use an aggregate method of analysis to determine the amortized cost per operation.
8. Consider an ordinary binary heap data structure with n elements that supports the operations insert and delete min in O(lg n) worst-case time. Give a potential function such that the amortized cost of insert is O(lg n) and the amortized cost of delete min is O(1), and show that it works.
9. Suppose we contract a dynamic hash table by 1/2 when its load factor, α, drops below 1/4. Using the potential function Φ(T) = |2·num[T] – size[T]|, show that the amortized cost of a remove that uses this strategy is bounded above by a constant. (Consider both the case when α after the remove drops below 1/4 and the case when it does not.)
10. Consider the dynamic hash table in which an expansion (doubling) occurs whenever the table becomes ½ full (i.e., it is ½ full before an insert), and a contraction (by ½) occurs when it becomes 1/8 full (before a remove). Let

    Φ = 2·(num[T] – ¼·size[T])   if α ≥ ¼, and
    Φ = ¼·size[T] – num[T]       if α < ¼.

What is the amortized cost of
a. An insert when α = ½?
b. A remove when α = ¼?
c. A remove when α = 1/8 before the remove?
