Step 1: Set i to 1
Step 2: If i > n, go to Step 7
Step 3: If A[i] = x, go to Step 6
Step 4: Set i to i + 1
Step 5: Go to Step 2
Step 6: Print element x found at index i; go to Step 8
Step 7: Print element not found
Step 8: Exit
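As a minimal illustration of these steps, here is a C++ rendering (0-based indexing, as C++ arrays use; linear_search is an illustrative name and the sample array is hypothetical):

#include <iostream>
using namespace std;

// Direct translation of the steps above (0-based indexing)
int linear_search(int A[], int n, int x)
{
    for (int i = 0; i < n; i++)   // Steps 1, 4, 5: advance i until it reaches n
        if (A[i] == x)            // Step 3: compare A[i] with x
            return i;             // Step 6: element found
    return -1;                    // Step 7: element not found
}

int main()
{
    int A[] = {33, 10, 27, 31};
    cout << linear_search(A, 4, 31);   // prints 3
    return 0;
}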
For a binary search to work, it is mandatory for the target array to be sorted. We
shall learn the process of binary search with a worked example. The following is our
sorted array, and let us assume that we need to search for the location of value 31
using binary search:
10 14 19 26 27 31 33 35 42 44
(locations 0 through 9)
First we find the midpoint: Mid = Low + (High - Low) / 2 = 0 + (9 - 0) / 2 = 4.
Now we compare the value stored at location 4 with the value being searched, i.e.
31. We find that the value at location 4 is 27, which is not a match. As 31 is
greater than 27 and we have a sorted array, we know that the target value
must be in the upper portion of the array.
We change our Low to Mid + 1 and find the new mid value again:
Low = Mid + 1
Mid = Low + (High - Low) / 2
With Low = 5 and High = 9, our new mid is 5 + (9 - 5) / 2 = 7. We compare the value
stored at location 7 with our target value 31.
The value stored at location 7 (35) is not a match; rather, it is more than what we
are looking for. So the value must lie in the lower part from this location, and we
set High = Mid - 1 = 6.
Hence, we calculate the mid again: Mid = 5 + (6 - 5) / 2 = 5.
We compare the value stored at location 5 with our target value and find that it is
a match.
Binary search halves the number of searchable items at each step and thus reduces
the count of comparisons to be made to a very small number.
Binary search has a huge advantage in time complexity over linear search: linear
search has a worst-case complexity of O(n), whereas binary search has O(log n).
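The walkthrough above translates into a short C++ sketch (a minimal illustration; binary_search is an illustrative name and the array is the example used above):

#include <iostream>
using namespace std;

// Returns the index of key in the sorted array a[0..n-1], or -1 if absent
int binary_search(int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;   // avoids overflow of (low + high) / 2
        if (a[mid] == key)
            return mid;                     // match found
        else if (a[mid] < key)
            low = mid + 1;                  // key lies in the upper portion
        else
            high = mid - 1;                 // key lies in the lower portion
    }
    return -1;                              // key not present
}

int main()
{
    int a[] = {10, 14, 19, 26, 27, 31, 33, 35, 42, 44};
    cout << binary_search(a, 10, 31);       // prints 5, as in the walkthrough
    return 0;
}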
There are cases where the location of the target data may be known in advance. For
example, consider a telephone directory in which we want to search for the telephone
number of Rishabh. Here, linear search and even binary search will seem slow, as we
can directly jump to the memory space where names starting with 'R' are stored.
Example:
Step 1: Consider the sorted array {1, 2, 3, 4, 5, 6, 7, 8} and a key value of 7.
Step 2: Skip the first three elements (1, 2, 3) in the array and check whether the
fourth value (4) is equal to or greater than the key value (7).
Step 3: If not, skip the next three elements (5, 6, 7) in the array and check
whether the eighth value (8) is equal to or greater than the key value (7). In this
case it is greater than the key value.
Step 4: Now, using the linear search algorithm, move backwards from value
8 (the boundary value) towards value 4 (the previously checked value) to find the
key value (7).
Step 5: Thus, using the linear search algorithm, the key value is located at
position array[6].
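The same steps can be sketched in C++ (a hedged illustration: jump_search is an illustrative name, and it uses the common sqrt(n) block size rather than the fixed skip-three interval of the example above, but it finds the key at the same position):

#include <iostream>
#include <cmath>
using namespace std;

// Jump search on a sorted array: probe every `step` elements, then scan
// linearly backwards inside the block that brackets the key.
int jump_search(int a[], int n, int key)
{
    if (n == 0) return -1;
    int step = (int)sqrt((double)n);        // common block size choice
    int i = step - 1;
    while (i < n && a[i] < key)             // jump forward until a[i] >= key
        i += step;
    if (i >= n) i = n - 1;                  // clamp to the last element
    while (i >= 0 && a[i] > key)            // linear scan backwards
        i--;
    if (i >= 0 && a[i] == key)
        return i;
    return -1;                              // key not present
}

int main()
{
    int a[] = {1, 2, 3, 4, 5, 6, 7, 8};
    cout << jump_search(a, 8, 7);           // prints 6, matching Step 5 above
    return 0;
}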
Q2) WAP for the following :-
#include <iostream>
using namespace std;
int main()
{
    int a[50], j, n, i, s;
    cout << "Enter the number of elements in the array: ";
    cin >> n;
    cout << "Enter the array: ";
    for (i = 0; i < n; i++)
        cin >> a[i];
    for (i = 0; i < n; i++)
    {
        for (j = i + 1; j < n; j++)
        {
            if (a[i] > a[j])
            {
                // swapping: keep the smaller value at position i
                s = a[j];
                a[j] = a[i];
                a[i] = s;
            }
        }
    }
    cout << "Sorted array: ";
    for (i = 0; i < n; i++)
        cout << a[i] << " ";
    return 0;
}
#include <iostream>
using namespace std;
int main()
{
    int a[20], j, n, i, min;
    cout << "Enter the number of elements in the array: ";
    cin >> n;
    cout << "Enter the array: ";
    for (i = 0; i < n; i++)
        cin >> a[i];
    for (i = 0; i < n - 1; i++)
    {
        min = i;                    // index of the smallest element so far
        for (j = i + 1; j < n; j++)
        {
            if (a[j] < a[min])
                min = j;
        }
        int s;
        // swapping the smallest element into position i
        s = a[min];
        a[min] = a[i];
        a[i] = s;
    }
    cout << "Sorted array: ";
    for (i = 0; i < n; i++)
        cout << a[i] << " ";
    return 0;
}
#include <iostream>
using namespace std;

// Merge the two sorted halves arr[l..m] and arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
    int n1 = m - l + 1, n2 = r - m;
    int L[25], R[25];               // temporary copies of the two halves
    for (int x = 0; x < n1; x++)
        L[x] = arr[l + x];
    for (int y = 0; y < n2; y++)
        R[y] = arr[m + 1 + y];
    int i = 0, j = 0, k = l;
    while (i < n1 && j < n2)
    {
        if (L[i] <= R[j])
            arr[k++] = L[i++];
        else
            arr[k++] = R[j++];
    }
    while (i < n1)                  // copy any leftovers from the left half
        arr[k++] = L[i++];
    while (j < n2)                  // ... and from the right half
        arr[k++] = R[j++];
}

// Recursively split, sort both halves, then merge them
void merge_sort(int arr[], int l, int r)
{
    if (l < r)
    {
        int m = l + (r - l) / 2;
        merge_sort(arr, l, m);
        merge_sort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}

int main()
{
    int a[20], n, i;
    cout << "Enter the number of elements in the array: ";
    cin >> n;
    cout << "Enter the array: ";
    for (i = 0; i < n; i++)
        cin >> a[i];
    merge_sort(a, 0, n - 1);
    cout << "Sorted array: ";
    for (i = 0; i < n; i++)
        cout << a[i] << " ";
    return 0;
}
#include <iostream>
using namespace std;
void quick_sort(int[], int, int);
int partition(int[], int, int);
int main()
{
    int a[50], n, i;
    cout << "How many elements? ";
    cin >> n;
    cout << "Enter array elements: ";
    for (i = 0; i < n; i++)
        cin >> a[i];
    quick_sort(a, 0, n - 1);
    cout << "Array after sorting: ";
    for (i = 0; i < n; i++)
        cout << a[i] << " ";
    return 0;
}

// Sort the sub-array a[l..u]
void quick_sort(int a[], int l, int u)
{
    if (l < u)
    {
        int j = partition(a, l, u);   // pivot ends up at index j
        quick_sort(a, l, j - 1);
        quick_sort(a, j + 1, u);
    }
}

// Hoare-style partition with a[l] as the pivot
int partition(int a[], int l, int u)
{
    int v = a[l], i = l, j = u + 1, temp;
    do
    {
        do
            i++;
        while (i <= u && a[i] < v);   // scan right for an element >= pivot
        do
            j--;
        while (v < a[j]);             // scan left for an element <= pivot
        if (i < j)
        {
            temp = a[i];
            a[i] = a[j];
            a[j] = temp;
        }
    } while (i < j);
    a[l] = a[j];                      // place the pivot in its final position
    a[j] = v;
    return j;
}
#include <iostream>
using namespace std;

// Sift a[i] down until the subtree rooted at i is a max-heap
// (heapsize is the index of the last element in the heap)
void max_heapify(int a[], int i, int heapsize)
{
    int l = 2 * i + 1, r = 2 * i + 2, largest = i;
    if (l <= heapsize && a[l] > a[largest])
        largest = l;
    if (r <= heapsize && a[r] > a[largest])
        largest = r;
    if (largest != i)
    {
        int tmp = a[i];
        a[i] = a[largest];
        a[largest] = tmp;
        max_heapify(a, largest, heapsize);
    }
}

void build_max_heap(int a[], int heapsize)
{
    int i;
    for (i = heapsize / 2; i >= 0; i--)
    {
        max_heapify(a, i, heapsize);
    }
}

/*void heap_sort(int a[], int heapsize)
{
    int i, tmp;
    build_max_heap(a, heapsize);
    for (i = heapsize; i > 0; i--)
    {
        tmp = a[i];
        a[i] = a[0];
        a[0] = tmp;
        heapsize--;
        max_heapify(a, 0, heapsize);
    }
}*/

// Insert x at the end of the heap and sift it up to its proper place
void insert(int a[], int x, int &n)
{
    int i = n;
    a[n++] = x;
    while (i > 0 && a[(i - 1) / 2] < a[i])
    {
        int tmp = a[i];
        a[i] = a[(i - 1) / 2];
        a[(i - 1) / 2] = tmp;
        i = (i - 1) / 2;
    }
}

int main()
{
    int i, x, heapsize, n;
    int a[50];
    cout << "Enter the number of terms in the heap: ";
    cin >> n;
    cout << "Enter the elements: ";
    for (i = 0; i < n; i++)
        cin >> a[i];
    heapsize = n - 1;               // index of the last element
    cout << "\n";
    build_max_heap(a, heapsize);
    for (i = 0; i < n; i++)
        cout << a[i] << " ";
    cout << "\nNumber to be inserted: ";
    cin >> x;
    insert(a, x, n);
    for (i = 0; i < n; i++)
        cout << a[i] << " ";
    return 0;
}
#include <iostream>
using namespace std;
// Gap-insertion sort: start with a large gap and halve it each pass
void ShellSort(int arr[], int n)
{
    for (int gap = n / 2; gap > 0; gap /= 2)
        for (int i = gap; i < n; i++)
        {
            int temp = arr[i], j;
            for (j = i; j >= gap && arr[j - gap] > temp; j -= gap)
                arr[j] = arr[j - gap];   // shift larger elements right by gap
            arr[j] = temp;
        }
}
int main()
{
    int n, i;
    cout << "\nEnter the number of data elements to be sorted: ";
    cin >> n;
    int arr[50];
    for (i = 0; i < n; i++)
    {
        cout << "Enter element " << i + 1 << ": ";
        cin >> arr[i];
    }
    ShellSort(arr, n);
    cout << "Sorted array: ";
    for (i = 0; i < n; i++)
        cout << arr[i] << " ";
    return 0;
}
1. The time complexity of bubble sort in the worst case is O(N^2), which makes it quite
inefficient for sorting large data volumes. It is O(N^2) because it settles only one item
into place in each iteration, and in iteration i it has to compare n - i elements.
2. The time complexity of bubble sort in the best case is O(N). When the given data set is
already sorted, bubble sort can identify this in one single pass, hence O(N): while
iterating from i = 0 to arr.length, if no swap is required, the array is already sorted
and we can stop there.
3. Bubble sort can identify when the list is sorted and can stop early, as the sketch
after this list shows.
4. It is a stable sort, i.e., it does not change the relative order of elements with equal
keys.
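A minimal sketch of bubble sort with this early-exit flag (illustrative code):

#include <iostream>
using namespace std;

// Bubble sort with the early-exit optimisation from points 2 and 3 above:
// if a full pass makes no swap, the array is already sorted.
void bubble_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        bool swapped = false;
        for (int j = 0; j < n - i - 1; j++)   // n - i - 1 comparisons per pass
        {
            if (a[j] > a[j + 1])
            {
                int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                swapped = true;
            }
        }
        if (!swapped)   // best case: a single O(n) pass over sorted input
            break;
    }
}

int main()
{
    int a[] = {1, 2, 3, 4, 5};     // already sorted: finishes after one pass
    bubble_sort(a, 5);
    for (int i = 0; i < 5; i++) cout << a[i] << " ";
    return 0;
}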
Selecting the lowest element requires scanning all n elements (this takes n - 1
comparisons) and then swapping it into the first position. Finding the next lowest
element requires scanning the remaining n - 1 elements, and so on. The total is
(n - 1) + (n - 2) + ... + 2 + 1 = n(n - 1) / 2
= O(n^2) comparisons (the counting sketch below confirms this).
Stable: No (swapping over a distance can reorder equal keys).
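A small illustrative sketch: instrumenting selection sort with a comparison counter reproduces the n(n - 1)/2 figure (the sample array is hypothetical):

#include <iostream>
using namespace std;

int main()
{
    int a[] = {5, 3, 8, 1, 9, 2};
    int n = 6;
    long comparisons = 0;
    for (int i = 0; i < n - 1; i++)
    {
        int min = i;
        for (int j = i + 1; j < n; j++)
        {
            comparisons++;                        // one comparison per scanned element
            if (a[j] < a[min])
                min = j;
        }
        int s = a[min]; a[min] = a[i]; a[i] = s;  // swap into position i
    }
    // For n = 6: 5 + 4 + 3 + 2 + 1 = 15 = n(n - 1) / 2 comparisons
    cout << "Comparisons: " << comparisons << "\n";
    return 0;
}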
(iii) Merge Sort:
Merge sort always divides the array into two halves and takes linear time to merge
them, giving the recurrence T(n) = 2T(n/2) + O(n), whose solution is O(n log n) in
all cases.
(iv) Quick Sort:
The recurrence is T(n) = T(k) + T(n - k - 1) + O(n). The first two terms are for the
two recursive calls, the last term is for the partition process; k is the number of
elements which are smaller than the pivot.
The time taken by QuickSort depends upon the input array and partition strategy.
Following are three cases.
Worst Case: The worst case occurs when the partition process always picks the greatest
or smallest element as the pivot. If we consider the above partition strategy, where
the last element is always picked as the pivot, the worst case occurs when the array
is already sorted in increasing or decreasing order. The recurrence for the worst
case is
T(n) = T(0) + T(n - 1) + O(n), i.e. T(n) = T(n - 1) + O(n),
and expanding it gives cn + c(n - 1) + ... + c, whose solution is O(n^2).
Best Case: The best case occurs when the partition process always picks the middle
element as the pivot. The recurrence for the best case is
T(n) = 2T(n/2) + O(n),
whose solution is O(n log n).
Average Case:
To do average case analysis, we need to consider all possible permutations of the
array and calculate the time taken by every permutation, which does not look easy.
We can get an idea of the average case by considering the case when the partition
puts O(n/10) elements in one set and O(9n/10) elements in the other set. The
recurrence for this case is
T(n) = T(n/10) + T(9n/10) + O(n),
and its solution is also O(n log n).
Although the worst-case time complexity of QuickSort is O(n^2), which is more than
that of many other sorting algorithms like Merge Sort and Heap Sort, QuickSort is
faster in practice because its inner loop can be implemented efficiently on most
architectures and for most real-world data. QuickSort can be implemented in different
ways by changing the choice of pivot, so that the worst case rarely occurs for a
given type of data (see the sketch below). However, merge sort is generally
considered better when the data is huge and stored in external storage.
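A hedged sketch of one such pivot variation, randomized pivot selection; it assumes the partition() function from the quick sort program above, and random_partition is an illustrative name:

#include <cstdlib>

int partition(int a[], int l, int u);   // as defined in the program above

// Pick a random pivot, swap it to position l, then reuse the ordinary partition
int random_partition(int a[], int l, int u)
{
    int p = l + rand() % (u - l + 1);   // random index in [l, u]
    int tmp = a[l]; a[l] = a[p]; a[p] = tmp;
    return partition(a, l, u);          // a[l] is now the (random) pivot
}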
Each key is looked at once for each digit (or letter, if the keys are alphabetic) of
the longest key. Hence, if the longest key has m digits and there are n keys, radix
sort has order O(m*n).
However, if we look at these two values, the size of the keys is relatively small
compared to the number of keys. For example, with six-digit keys we could have a
million different records.
In such cases, the size of the keys is not significant, and this algorithm is of
linear complexity, O(n).
Let there be d digits in the input integers. Radix sort takes O(d*(n+b)) time, where
b is the base for representing numbers; for example, for the decimal system, b is 10.
What is the value of d? If k is the maximum possible value, then d would be
O(log_b(k)). So the overall time complexity is O((n+b) * log_b(k)), which looks like
more than the time complexity of comparison-based sorting algorithms for a large k.
Let k <= n^c, where c is a constant. In that case, the complexity becomes
O(n log_b(n)). If we set b = n, we get the time complexity O(n). In other words, we
can sort an array of integers with range from 1 to n^c if the numbers are represented
in base n (or every digit takes log_2(n) bits).
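A minimal LSD radix sort sketch in base b = 10 (illustrative names and sample data; each pass is one counting sort over a digit, matching the O(d*(n+b)) analysis above):

#include <iostream>
using namespace std;

// Counting sort on one digit: exp is 1, 10, 100, ... (base b = 10 here)
void count_sort_by_digit(int a[], int n, int exp)
{
    int output[50], count[10] = {0};
    int i;
    for (i = 0; i < n; i++)
        count[(a[i] / exp) % 10]++;           // histogram of this digit
    for (i = 1; i < 10; i++)
        count[i] += count[i - 1];             // prefix sums give end positions
    for (i = n - 1; i >= 0; i--)              // walk backwards to keep it stable
        output[--count[(a[i] / exp) % 10]] = a[i];
    for (i = 0; i < n; i++)
        a[i] = output[i];
}

// d passes over the digits, one counting sort each: O(d * (n + b))
void radix_sort(int a[], int n)
{
    int max = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > max) max = a[i];
    for (int exp = 1; max / exp > 0; exp *= 10)
        count_sort_by_digit(a, n, exp);
}

int main()
{
    int a[] = {170, 45, 75, 90, 802, 24, 2, 66};
    radix_sort(a, 8);
    for (int i = 0; i < 8; i++)
        cout << a[i] << " ";                  // 2 24 45 66 75 90 170 802
    return 0;
}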
Analysis of Heap Sort time complexity: heap sort's worst-case, best-case, and
average-case time complexity is guaranteed O(n log n), and its space complexity
is O(1).
The height of a complete binary tree containing n elements is log(n). To fully
heapify an element whose subtrees are already max-heaps, we need to keep comparing
the element with its left and right children and pushing it downwards until it
reaches a point where both its children are smaller than it. In the worst-case
scenario, we will need to move an element from the root to a leaf node, making a
multiple of log(n) comparisons and swaps.
During the build_max_heap stage, we do that for n/2 elements, so the worst-case
complexity of the build_heap step is (n/2)*log(n) ~ n log n.
During the sorting step, we exchange the root element with the last element and
heapify the root element. For each element, this again takes log n worst-case time,
because we might have to bring the element all the way from the root to a leaf.
Since we repeat this n times, the heap_sort step is also n log n. Also, since the
build_max_heap and heap_sort steps are executed one after another, their costs add
rather than multiply, and the overall complexity remains in the order of n log n.
The sorting is also performed in O(1) space.
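The commented-out heap_sort in the program above implements exactly these two steps; here is a cleaned-up sketch, assuming the max_heapify and build_max_heap routines defined there (heapsize is the index of the last element):

void heap_sort(int a[], int heapsize)
{
    build_max_heap(a, heapsize);      // O(n log n) by the analysis above
    for (int i = heapsize; i > 0; i--)
    {
        int tmp = a[i];               // move the current maximum (the root) to the end
        a[i] = a[0];
        a[0] = tmp;
        max_heapify(a, 0, i - 1);     // restore the max-heap on a[0..i-1]: O(log n)
    }
}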
The worst case is O(n^2) and the best case is O(n log n), which is reasonable for
Shell sort. The best case is when the array is already sorted: the inner comparison
never succeeds, making the inner while loop a constant-time operation. Using the
bounds used for the other loops gives O(n log n). A best case of O(n) is reached by
using a constant number of increments.
Given the upper bound for each loop, we get O((log n) * n^2) for the worst case. But
add another variable for the gap size g. The number of compares/exchanges needed in
the inner while loop is now <= n/g, and the number of compares/exchanges in the
middle while loop is <= n^2/g. Adding the upper bounds on the number of
compares/exchanges for each gap together gives:
n^2 + n^2/2 + n^2/4 + ... <= 2n^2, which is in O(n^2).
This matches the known worst-case complexity for the gaps used here.
Consider an array where all the even-positioned elements are greater than the median.
The odd- and even-positioned elements are not compared until we reach the last
increment of 1. The number of compares/exchanges needed for that last pass is
Ω(n^2), as the sketch below demonstrates.
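A small illustrative sketch that constructs such an input for the halving gap sequence and counts the element shifts. All gaps except the final 1 are even here, so even and odd positions never interact until the last pass, which then behaves like insertion sort on a badly interleaved array:

#include <iostream>
using namespace std;

int main()
{
    const int n = 16;
    int arr[n];
    // Interleave: even indices get the upper half of the values, odd indices
    // the lower half. Every gap-2 (and gap-4, gap-8) subsequence is already
    // sorted, so only the final gap-1 pass does any work.
    for (int i = 0; i < n / 2; i++)
    {
        arr[2 * i] = n / 2 + i + 1;   // upper half at even positions
        arr[2 * i + 1] = i + 1;       // lower half at odd positions
    }
    long shifts = 0;
    for (int gap = n / 2; gap > 0; gap /= 2)
        for (int i = gap; i < n; i++)
        {
            int temp = arr[i], j;
            for (j = i; j >= gap && arr[j - gap] > temp; j -= gap)
            {
                arr[j] = arr[j - gap];
                shifts++;             // count element moves; Theta(n^2) overall
            }
            arr[j] = temp;
        }
    cout << "Shifts: " << shifts << "\n";   // all incurred in the gap-1 pass
    return 0;
}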