
# The Design and Analysis of Algorithms

## Algorithm

An algorithm is any well-defined computational procedure that takes some value or values as input and produces some value or values as output. An algorithm is said to be correct if, for every input, it halts with the correct output. An incorrect algorithm might not halt on some inputs, or it might halt with the wrong output.

## How to specify an algorithm

An algorithm can be specified in English, as a computer program, as pseudocode, etc. The only requirement is that the specification must provide a precise description of the computational procedure to be carried out.

## Kinds of problems solved by algorithms

Efficient algorithms solve problems in many different fields using less time and less memory. Some examples that illustrate the indispensability of efficient algorithms are as follows:

1. The Internet makes heavy use of algorithms to find good routes on which data will travel.
2. A search engine uses algorithms to find the pages on which particular information resides.
3. Algorithms are necessary for the survival of e-commerce. In today's world we buy almost everything online, so there is an absolute need to guard sensitive data such as credit card numbers and passwords. Public-key cryptography and digital signatures are used in this scenario; these technologies are based on numerical algorithms and number theory.
4. Algorithms are used in the manufacturing industry to identify shortest-path routes.
5. An ISP may wish to identify the locations at which to place its resources so that the maximum number of customers are served (linear programming).

## The analysis of algorithms

The analysis of algorithms is the theoretical study of computer-program performance and resource usage.

## Sorting

The factors that affect the time taken to finish the algorithm are as follows:

a. The type of input (whether it is fully sorted, partially sorted, or reverse sorted).
b. The size of the input (10 elements, 1,000 elements, 10^6 elements, etc.).
c. Generally we are interested in knowing the upper bound on the running time, as it gives a guarantee that the program will execute no longer than that bound.

## The power of algorithms

Algorithms devised to solve the same problem often differ dramatically in their efficiency. These differences can be much more significant than differences due to hardware and software. For example, insertion sort takes time c1\*n^2 (where c1 is a constant independent of n), while merge sort takes time c2\*n lg n (where c2 is a constant independent of n).
Insertion sort usually has a smaller constant factor than merge sort, so that c1 < c2. We shall see that the constant factors can be far less significant in the running time than the dependence on the input size n. Where merge sort has a factor of lg n in its running time, insertion sort has a factor of n, which is much larger. Although insertion sort is usually faster than merge sort for small input sizes, once the input size n becomes large enough, merge sort's advantage of lg n vs. n will more than compensate for the difference in constant factors. No matter how much smaller c1 is than c2, there will always be a crossover point beyond which merge sort is faster.
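The crossover point can be found numerically. The sketch below (not part of the original notes) simply scans for the smallest n at which c2\*n\*lg n beats c1\*n^2, using the illustrative constants c1 = 2 and c2 = 50 that appear in the worked example that follows:

```python
import math

def crossover(c1, c2):
    """Smallest n (n >= 2) at which c2 * n * lg n becomes
    cheaper than c1 * n^2.

    c1 and c2 are the hypothetical constant factors from the
    running-time estimates c1*n^2 (insertion sort) and
    c2*n*lg n (merge sort).
    """
    n = 2
    while c1 * n * n <= c2 * n * math.log2(n):
        n += 1
    return n

# With c1 = 2 and c2 = 50, insertion sort is cheaper only for
# inputs smaller than the crossover point.
print(crossover(2, 50))  # -> 190
```

Below n = 190 the smaller constant wins; above it, the lg n factor dominates, no matter how the constants are chosen.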

For a concrete example, let us pit a faster computer (computer A) running insertion sort against a slower computer (computer B) running merge sort. Each must sort an array of one million numbers. Suppose that computer A executes one billion (10^9) instructions per second and computer B executes only ten million (10^7) instructions per second, so that computer A is 100 times faster than computer B in raw computing power. To make the difference even more dramatic, suppose that the world's craftiest programmer codes insertion sort in machine language for computer A, and the resulting code requires 2n^2 instructions to sort n numbers (here, c1 = 2). Merge sort, on the other hand, is programmed for computer B by an average programmer using a high-level language with an inefficient compiler, with the resulting code taking 50n lg n instructions (so that c2 = 50). To sort one million numbers, computer A takes

    2 * (10^6)^2 instructions / 10^9 instructions per second = 2,000 seconds,

while computer B takes

    50 * 10^6 * lg(10^6) instructions / 10^7 instructions per second ≈ 100 seconds.
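The arithmetic can be checked directly. This small sketch plugs the instruction counts (2n^2 and 50n lg n) and machine speeds (10^9 and 10^7 instructions per second) from the example into Python:

```python
import math

n = 10**6  # one million numbers to sort

# Hypothetical instruction counts from the example.
insertion_instructions = 2 * n**2               # 2n^2 (c1 = 2)
merge_instructions = 50 * n * math.log2(n)      # 50 n lg n (c2 = 50)

time_a = insertion_instructions / 10**9  # computer A: 10^9 instr/sec
time_b = merge_instructions / 10**7      # computer B: 10^7 instr/sec

print(time_a)            # -> 2000.0 seconds
print(round(time_b, 1))  # -> 99.7 seconds, i.e. about 100
```

Despite running on a machine 100 times slower, merge sort finishes in roughly 100 seconds against insertion sort's 2,000.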

By using an algorithm whose running time grows more slowly, even with a poor compiler, computer B runs 20 times faster than computer A! The advantage of merge sort is even more pronounced when we sort ten million numbers: where insertion sort takes approximately 2.3 days, merge sort takes under 20 minutes. In general, as the problem size increases, so does the relative advantage of merge sort.

## The big idea of asymptotic analysis

a. Ignore the constants (machine-dependent factors, etc.).
b. Look at the growth of the running time T(n) as n -> infinity.

Common sense: just because an algorithm is asymptotically better than another does not guarantee that it performs better in practice. If the crossover point n0 is a huge number, it is possible in practice that the asymptotically worse algorithm is better. So we should use asymptotic analysis as a tool and not rely on it exclusively.
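To make the two running-time shapes concrete, here is a minimal sketch (in Python; not from the original notes) of both algorithms. The asymptotic claims are about how the step counts grow with n, not about any particular machine, so the exact timings printed will vary:

```python
import random
import time

def insertion_sort(a):
    """Sort list a in place; roughly c1*n^2 steps on random input."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]  # shift larger elements right
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    """Return a sorted copy of a; roughly c2*n*lg n steps."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

if __name__ == "__main__":
    for n in (1000, 2000, 4000):
        xs = [random.randrange(n) for _ in range(n)]
        t0 = time.perf_counter()
        insertion_sort(xs[:])
        t1 = time.perf_counter()
        merge_sort(xs)
        t2 = time.perf_counter()
        # As n doubles, insertion sort's time roughly quadruples,
        # while merge sort's time only slightly more than doubles.
        print(n, round(t1 - t0, 4), round(t2 - t1, 4))
```

Timing the two functions while doubling n is a direct way to see T(n) growth with the constants ignored, which is exactly the viewpoint asymptotic analysis takes.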