
ALGORITHM

Divide-and-conquer is a general algorithm design technique that solves a problem by dividing it
into several smaller subproblems of the same type (ideally, of about equal size), solving each of
them recursively, and then combining their solutions to get a solution to the original problem.
Many efficient algorithms are based on this technique, although it can be both inapplicable and
inferior to simpler algorithmic solutions.

4. Quicksort is a divide-and-conquer sorting algorithm that works by partitioning its input
elements according to their value relative to some preselected element (the pivot). Quicksort is
noted for its superior efficiency among n log n algorithms for sorting randomly ordered arrays,
but also for its quadratic worst-case efficiency.

ALGORITHM Quicksort(A[l..r])
// Sorts a subarray by quicksort
// Input: A subarray A[l..r] of A[0..n-1], defined by its left and right indices l and r
// Output: The subarray A[l..r] sorted in nondecreasing order
if l < r
    s ← Partition(A[l..r]) // s is a split position
    Quicksort(A[l..s-1])
    Quicksort(A[s+1..r])

ALGORITHM Partition(A[l..r])
// Partitions a subarray by using its first element as a pivot
// Input: A subarray A[l..r] of A[0..n-1], defined by its left and right indices l and r (l < r)
// Output: A partition of A[l..r], with the split position returned as this function's value
p ← A[l]
i ← l; j ← r + 1
repeat
    repeat i ← i + 1 until A[i] >= p
    repeat j ← j - 1 until A[j] <= p
    swap(A[i], A[j])
until i >= j
swap(A[i], A[j]) // undo last swap when i >= j
swap(A[l], A[j])
return j
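The two routines above can be combined into a short runnable sketch in Python. The function names are mine, and the loop breaks before the last swap instead of doing the pseudocode's swap-and-undo step; an explicit bound check replaces the sentinel the pseudocode relies on.

```python
def partition(A, l, r):
    """Hoare-style partition using A[l] as the pivot; returns the split position."""
    p = A[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and A[i] < p:   # bound check instead of a sentinel
            i += 1
        j -= 1
        while A[j] > p:              # stops at A[l] = p at the latest
            j -= 1
        if i >= j:
            break                    # break instead of swap-and-undo
        A[i], A[j] = A[j], A[i]
    A[l], A[j] = A[j], A[l]          # put the pivot in its final position
    return j

def quicksort(A, l=0, r=None):
    """Sorts A[l..r] in place."""
    if r is None:
        r = len(A) - 1
    if l < r:
        s = partition(A, l, r)       # s is a split position
        quicksort(A, l, s - 1)
        quicksort(A, s + 1, r)
```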
5. Mergesort is a divide-and-conquer sorting algorithm. It works by dividing an input array into
two halves, sorting them recursively, and then merging the two sorted halves to get the original
array sorted. The algorithm's time efficiency is in Θ(n log n) in all cases, with the number of key
comparisons being very close to the theoretical minimum. Its principal drawback is a significant
extra storage requirement.

ALGORITHM Mergesort(A[0..n-1])
// Sorts array A[0..n-1] by recursive mergesort
// Input: An array A[0..n-1] of orderable elements
// Output: Array A[0..n-1] sorted in nondecreasing order
if n > 1
    copy A[0..⌊n/2⌋-1] to B[0..⌊n/2⌋-1]
    copy A[⌊n/2⌋..n-1] to C[0..⌈n/2⌉-1]
    Mergesort(B[0..⌊n/2⌋-1])
    Mergesort(C[0..⌈n/2⌉-1])
    Merge(B, C, A)

ALGORITHM Merge(B[0..p-1], C[0..q-1], A[0..p+q-1])
// Merges two sorted arrays into one sorted array
// Input: Arrays B[0..p-1] and C[0..q-1], both sorted
// Output: Sorted array A[0..p+q-1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
    if B[i] <= C[j]
        A[k] ← B[i]; i ← i + 1
    else
        A[k] ← C[j]; j ← j + 1
    k ← k + 1
if i = p
    copy C[j..q-1] to A[k..p+q-1]
else
    copy B[i..p-1] to A[k..p+q-1]
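Both routines translate almost directly into runnable Python (the array copies become list slices; function names are mine):

```python
def merge(B, C, A):
    """Merges sorted lists B and C into A (which has length len(B) + len(C))."""
    i = j = k = 0
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:
            A[k] = B[i]; i += 1
        else:
            A[k] = C[j]; j += 1
        k += 1
    if i == len(B):
        A[k:] = C[j:]   # B exhausted: copy the rest of C
    else:
        A[k:] = B[i:]   # C exhausted: copy the rest of B

def mergesort(A):
    """Sorts list A in place by recursive mergesort."""
    n = len(A)
    if n > 1:
        B = A[:n // 2]      # first half
        C = A[n // 2:]      # second half
        mergesort(B)
        mergesort(C)
        merge(B, C, A)
```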

Dynamic programming is a technique for solving problems with overlapping subproblems.


Typically, these subproblems arise from a recurrence relating a solution to a given problem with
solutions to its smaller subproblems of the same type. Dynamic programming suggests solving
each smaller subproblem once and recording the results in a table from which a solution to the
original problem can be then obtained.
Applicability of dynamic programming to an optimization problem requires the problem to
satisfy the principle of optimality: an optimal solution to any of its instances must be made up of
optimal solutions to its subinstances. Among many other problems, solving a knapsack problem
by a dynamic programming algorithm exemplifies an application of this technique to difficult
problems of combinatorial optimization.

knapsack problem: given n items of known weights w1,...,wn and values v1,...,vn and a
knapsack of capacity W, find the most valuable subset of the items that fit into the knapsack.
Algorithm:
// Input: n items with weights w1,...,wn and values v1,...,vn, and knapsack capacity W
// Output: V(n, W), the value of an optimal subset
// Initialization of first column and first row
for i ← 0 to n do V(i, 0) ← 0
for j ← 0 to W do V(0, j) ← 0
// Complete the remaining entries row by row
for i ← 1 to n do
    for j ← 1 to W do
        if wi <= j
            V(i, j) ← max{V(i-1, j), V(i-1, j - wi) + vi}
        else
            V(i, j) ← V(i-1, j)
return V(n, W)
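The table-filling scheme above, sketched in runnable Python with 0-based item indices (function name and the small test instance are mine):

```python
def knapsack(weights, values, W):
    """Returns the maximum total value of items fitting in capacity W
    (0/1 knapsack, bottom-up dynamic programming)."""
    n = len(weights)
    # V[i][j]: best value using the first i items with capacity j
    V = [[0] * (W + 1) for _ in range(n + 1)]   # first row and column are 0
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if weights[i - 1] <= j:   # item i fits: take the better option
                V[i][j] = max(V[i - 1][j],
                              V[i - 1][j - weights[i - 1]] + values[i - 1])
            else:                     # item i does not fit
                V[i][j] = V[i - 1][j]
    return V[n][W]
```

For example, with weights (2, 1, 3, 2), values (12, 10, 20, 15), and W = 5, the optimal subset is items 2, 3, and 4 with total value 37.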

Greedy strategy for the knapsack problem (the algorithm below solves the fractional variant):

The greedy technique suggests constructing a solution to an optimization problem through a


sequence of steps, each expanding a partially constructed solution obtained so far, until a
complete solution to the problem is reached. On each step, the choice made must be feasible,
locally optimal, and irrevocable.

ALGORITHM Greedy-Fractional-Knapsack(w[1..n], p[1..n], W)
// Assumes items are sorted in nonincreasing order of p[i]/w[i]
for i = 1 to n
    do x[i] = 0
weight = 0
for i = 1 to n
    if weight + w[i] ≤ W then
        x[i] = 1
        weight = weight + w[i]
    else
        x[i] = (W - weight) / w[i]
        weight = W
        break
return x
Let us consider that the capacity of the knapsack is W = 60 and the list of provided items is
shown in the following table:

Item          A     B     C     D
Profit        280   100   120   120
Weight        40    10    20    24
Ratio (pi/wi) 7     10    6     5

The provided items are not sorted by pi/wi. After sorting, the items are as shown in the
following table:

Item          B     A     C     D
Profit        100   280   120   120
Weight        10    40    20    24
Ratio (pi/wi) 10    7     6     5

Solution
After sorting all the items according to pi/wi, first all of B is chosen, as the weight of B is less
than the capacity of the knapsack. Next, item A is chosen, as the available capacity of the
knapsack is greater than the weight of A. Now, C is chosen as the next item. However, the
whole item cannot be chosen, as the remaining capacity of the knapsack is less than the weight
of C.
Hence, a fraction of C (i.e. (60 − 50)/20) is chosen.
Now, the total weight of the selected items equals the capacity of the knapsack. Hence, no more
items can be selected.
The total weight of the selected items is 10 + 40 + 20 * (10/20) = 60
And the total profit is 100 + 280 + 120 * (10/20) = 380 + 60 = 440
This is the optimal solution. We cannot gain more profit by selecting any different combination
of items.
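The greedy computation above can be sketched in runnable Python. The function name is mine; the items are sorted by ratio inside the function rather than beforehand, and it returns the total profit rather than the solution vector x. On the table's instance it reproduces the profit of 440.

```python
def fractional_knapsack(weights, profits, W):
    """Greedy fractional knapsack: returns the maximum achievable profit."""
    # Sort items by profit/weight ratio, best ratio first
    items = sorted(zip(weights, profits),
                   key=lambda wp: wp[1] / wp[0], reverse=True)
    total = 0.0
    remaining = W
    for w, p in items:
        if w <= remaining:           # whole item fits
            total += p
            remaining -= w
        else:                        # take only the fitting fraction and stop
            total += p * (remaining / w)
            break
    return total
```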
Kruskal’s algorithm is another greedy algorithm for the minimum spanning tree problem. It
constructs a minimum spanning tree by selecting edges in nondecreasing order of their weights
provided that the inclusion does not create a cycle. Checking the latter condition efficiently
requires an application of one of the so-called union-find algorithms.
MST_KRUSKAL (G, w)
1. A ← ∅ // A will ultimately contain the edges of the MST
2. for each vertex v in V[G]
3.     do MAKE_SET(v)
4. sort the edges of E by nondecreasing weight w
5. for each edge (u, v) in E, taken in sorted order
6.     do if FIND_SET(u) ≠ FIND_SET(v)
7.         then A = A ∪ {(u, v)}
8.             UNION(u, v)
9. return A
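A compact Python sketch of MST_KRUSKAL (the edge representation, the names, and the path-halving union-find are my own choices):

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v) with vertices 0..n-1.
    Returns the list of MST edges for a connected graph."""
    parent = list(range(n))          # MAKE_SET for every vertex

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):    # nondecreasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                 # inclusion creates no cycle
            parent[ru] = rv          # UNION
            mst.append((w, u, v))
    return mst
```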
Prim’s algorithm is a greedy algorithm for constructing a minimum spanning tree of a weighted
connected graph. It works by attaching to a previously constructed subtree a vertex closest to the
vertices already in the tree.

ALGORITHM Prim(G)
//Prim's algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G = (V, E)
//Output: ET, the set of edges composing a minimum spanning tree of G
VT ← {v0} //the set of tree vertices can be initialized with any vertex
ET ← ∅
for i ← 1 to |V| − 1 do
    find a minimum-weight edge e∗ = (v∗, u∗) among all the edges (v, u)
    such that v is in VT and u is in V − VT
    VT ← VT ∪ {u∗}
    ET ← ET ∪ {e∗}
return ET
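A possible Python sketch of Prim's algorithm. Instead of the pseudocode's linear scan for a minimum-weight crossing edge, it keeps candidate edges in a heap; the adjacency-list representation and names are assumptions of this sketch.

```python
import heapq

def prim(graph, start):
    """graph: dict mapping each vertex to a list of (weight, neighbour).
    Returns the MST as a list of (u, v, weight) edges."""
    visited = {start}                # VT, initialized with any vertex
    heap = [(w, start, v) for w, v in graph[start]]
    heapq.heapify(heap)
    mst = []                         # ET
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)   # minimum-weight crossing edge
        if v in visited:
            continue                 # both ends already in the tree
        visited.add(v)
        mst.append((u, v, w))
        for w2, v2 in graph[v]:      # new crossing edges from v
            if v2 not in visited:
                heapq.heappush(heap, (w2, v, v2))
    return mst
```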
Dijkstra’s algorithm solves the single-source shortest-path problem of finding shortest paths
from a given vertex (the source) to all the other vertices of a weighted graph or digraph. It works
like Prim’s algorithm but compares path lengths rather than edge lengths. Dijkstra’s algorithm
always yields a correct solution for a graph with nonnegative weights.

ALGORITHM Dijkstra(G, s)
//Dijkstra’s algorithm for single-source shortest paths
//Input: A weighted connected graph G = (V, E) with nonnegative weights and its vertex s
//Output: The length dv of a shortest path from s to v and its penultimate vertex pv,
//        for every vertex v in V
Initialize(Q) //initialize priority queue to empty
for every vertex v in V do
    dv ← ∞; pv ← null
    Insert(Q, v, dv) //initialize vertex priority in the priority queue
ds ← 0; Decrease(Q, s, ds) //update priority of s with ds
VT ← ∅
for i ← 0 to |V| − 1 do
    u∗ ← DeleteMin(Q) //delete the minimum priority element
    VT ← VT ∪ {u∗}
    for every vertex u in V − VT that is adjacent to u∗ do
        if du∗ + w(u∗, u) < du
            du ← du∗ + w(u∗, u); pu ← u∗
            Decrease(Q, u, du)
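A Python sketch of this algorithm. Python's heapq has no Decrease operation, so this version pushes a new entry and lazily skips stale ones, which computes the same distances; the adjacency-list representation and names are mine.

```python
import heapq

def dijkstra(graph, s):
    """graph: dict mapping each vertex to a list of (neighbour, weight).
    Returns (dist, prev): shortest distances from s and penultimate vertices."""
    dist = {v: float('inf') for v in graph}   # dv ← ∞
    prev = {v: None for v in graph}           # pv ← null
    dist[s] = 0
    pq = [(0, s)]                             # priority queue of (dv, v)
    visited = set()                           # VT
    while pq:
        d, u = heapq.heappop(pq)              # DeleteMin
        if u in visited:
            continue                          # stale entry: skip
        visited.add(u)
        for v, w in graph[u]:
            if d + w < dist[v]:               # relax edge (u, v)
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))   # stands in for Decrease
    return dist, prev
```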
FLOYD'S ALGORITHM: Given a weighted connected graph (undirected or directed), the
all-pairs shortest-paths problem asks to find the distances, i.e., the lengths of the shortest paths,
from each vertex to all other vertices. This is one of several variations of the problem involving
shortest paths in graphs. Because of its important applications to communications, transportation
networks, and operations research, it has been thoroughly studied over the years. Among recent
applications of the all-pairs shortest-paths problem is precomputing distances for motion planning
in computer games.

ALGORITHM Floyd(W[1..n, 1..n])
//Implements Floyd’s algorithm for the all-pairs shortest-paths problem
//Input: The weight matrix W of a graph with no negative-length cycle
//Output: The distance matrix of the shortest paths’ lengths
D ← W //copying is not necessary if W can be overwritten
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            D[i, j] ← min{D[i, j], D[i, k] + D[k, j]}
return D
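The triple loop translates directly into Python (0-based indices; the function name and the use of `float('inf')` for missing edges are conventions of this sketch):

```python
def floyd(W):
    """All-pairs shortest paths; W is an n×n weight matrix with
    float('inf') marking absent edges. Returns the distance matrix."""
    n = len(W)
    D = [row[:] for row in W]   # copy W so it is not overwritten
    for k in range(n):          # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D
```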

Traveling salesman problem: Find the shortest tour through n cities with known positive integer
distances between them (find the shortest Hamiltonian circuit in a complete graph with positive
integer weights).
Algorithm: Traveling-Salesman-Problem
C({1}, 1) = 0
for s = 2 to n do
    for all subsets S ⊆ {1, 2, 3, ..., n} of size s and containing 1
        C(S, 1) = ∞
        for all j ∈ S and j ≠ 1
            C(S, j) = min {C(S − {j}, i) + d(i, j) for i ∈ S and i ≠ j}
return minj C({1, 2, 3, ..., n}, j) + d(j, 1)
Analysis
There are at most 2^n · n subproblems, and each one takes linear time to solve. Therefore,
the total running time is O(2^n · n^2).
Example
In the following example, we will illustrate the steps to solve the travelling salesman problem.

From the above graph, the following table is prepared.

d    1    2    3    4

1    0   10   15   20

2    5    0    9   10

3    6   13    0   12

4    8    8    9    0

S = Φ
Cost(2, Φ, 1) = d(2, 1) = 5
Cost(3, Φ, 1) = d(3, 1) = 6
Cost(4, Φ, 1) = d(4, 1) = 8

S = 1
Cost(i, S, 1) = min {Cost(j, S − {j}, 1) + d[i, j]}
Cost(2, {3}, 1) = d[2, 3] + Cost(3, Φ, 1) = 9 + 6 = 15
Cost(2, {4}, 1) = d[2, 4] + Cost(4, Φ, 1) = 10 + 8 = 18
Cost(3, {2}, 1) = d[3, 2] + Cost(2, Φ, 1) = 13 + 5 = 18
Cost(3, {4}, 1) = d[3, 4] + Cost(4, Φ, 1) = 12 + 8 = 20
Cost(4, {3}, 1) = d[4, 3] + Cost(3, Φ, 1) = 9 + 6 = 15
Cost(4, {2}, 1) = d[4, 2] + Cost(2, Φ, 1) = 8 + 5 = 13

S = 2
Cost(2, {3, 4}, 1) = min { d[2, 3] + Cost(3, {4}, 1) = 9 + 20 = 29,
                           d[2, 4] + Cost(4, {3}, 1) = 10 + 15 = 25 } = 25
Cost(3, {2, 4}, 1) = min { d[3, 2] + Cost(2, {4}, 1) = 13 + 18 = 31,
                           d[3, 4] + Cost(4, {2}, 1) = 12 + 13 = 25 } = 25
Cost(4, {2, 3}, 1) = min { d[4, 2] + Cost(2, {3}, 1) = 8 + 15 = 23,
                           d[4, 3] + Cost(3, {2}, 1) = 9 + 18 = 27 } = 23

S = 3
Cost(1, {2, 3, 4}, 1) = min { d[1, 2] + Cost(2, {3, 4}, 1) = 10 + 25 = 35,
                              d[1, 3] + Cost(3, {2, 4}, 1) = 15 + 25 = 40,
                              d[1, 4] + Cost(4, {2, 3}, 1) = 20 + 23 = 43 } = 35

The minimum cost path is 35.


Start from Cost(1, {2, 3, 4}, 1): the minimum comes from d[1, 2], so when s = 3 we select the
path from 1 to 2 (cost 10) and go backwards. When s = 2, the minimum for Cost(2, {3, 4}, 1)
comes from d[2, 4], so we select the path from 2 to 4 (cost 10) and go backwards. When s = 1,
the minimum for Cost(4, {3}, 1) comes from d[4, 3], so we select the path from 4 to 3 (cost 9)
and go to the S = Φ step, where Cost(3, Φ, 1) = d[3, 1] = 6. The optimal tour is therefore
1 → 2 → 4 → 3 → 1 with total cost 10 + 10 + 9 + 6 = 35.
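The recurrence traced above can be sketched in Python (here the tour is anchored at city 1, which becomes index 0, and the function name is mine). On the distance matrix from the table it reproduces the minimum cost of 35.

```python
from itertools import combinations

def held_karp(d):
    """Exact TSP by dynamic programming over subsets (Held-Karp).
    d: n x n distance matrix; the tour starts and ends at city 0."""
    n = len(d)
    # C[(S, j)]: cost of a shortest path from 0 to j visiting exactly
    # the cities in S (a frozenset containing j but not 0)
    C = {}
    for j in range(1, n):
        C[(frozenset([j]), j)] = d[0][j]
    for s in range(2, n):
        for subset in combinations(range(1, n), s):
            S = frozenset(subset)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, i)] + d[i][j]
                                for i in S if i != j)
    full = frozenset(range(1, n))
    # close the tour by returning to city 0
    return min(C[(full, j)] + d[j][0] for j in full)
```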

Backtracking constructs its state-space tree in the depth-first-search fashion in the majority of
its applications. If the sequence of choices represented by a current node of the state-space tree
can be developed further without violating the problem’s constraints, it is done by considering
the first remaining legitimate option for the next component. Otherwise, the method backtracks
by undoing the last component of the partially built solution and replaces it by the next
alternative.

The subset-sum problem: find a subset of a given set A = {a1,...,an} of n positive integers
whose sum is equal to a given positive integer d. For example, for A = {1, 2, 5, 6, 8} and d = 9,
there are two solutions: {1, 2, 6} and {1, 8}. Of course, some instances of this problem may have
no solutions. It is convenient to sort the set’s elements in increasing order. So, we will assume
that a1 < a2 < ... < an.

Algorithm:
• accept n: number of items in the set
• accept their values s[1..n], sorted in increasing order
• accept d: the desired subset sum
• initialise x[i] = -1 for all i
• check whether a solution is possible (i.e., the total sum is at least d)
• if possible, call SumOfSub(0, 1, sum of all elements)

SumOfSub (s, k, r)
// Values of x[j], 1 <= j < k, have already been determined
// Node creation at level k takes place; also call for creation at level k+1 if possible
// s = sum of the elements chosen among s[1..k-1]; r = sum of s[k..n]
// Generate the left child, i.e., include element k in the solution
• set x[k] = 1
• if (s + s[k] = d) then a subset has been found; print the solution
• else if (s + s[k] + s[k+1] <= d)
    then SumOfSub(s + s[k], k+1, r − s[k])
// Generate the right child, i.e., exclude element k
• if (s + r − s[k] >= d) and (s + s[k+1] <= d)
    then { x[k] = 0; SumOfSub(s, k+1, r − s[k]) }
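A runnable sketch of SumOfSub in Python (0-based indices; solutions are collected in a list rather than printed, the chosen elements stand in for the x vector, and bounds checks guard the k+1 accesses). On the example A = {1, 2, 5, 6, 8} with d = 9 it finds the two solutions {1, 2, 6} and {1, 8}.

```python
def subset_sum(a, d):
    """Backtracking subset-sum: returns all subsets of a summing to d."""
    a = sorted(a)                       # elements in increasing order
    solutions = []

    def sum_of_sub(s, k, r, chosen):
        # s: sum of chosen elements among a[0..k-1]; r: sum of a[k..n-1]
        chosen.append(a[k])             # left child: include a[k]
        if s + a[k] == d:
            solutions.append(list(chosen))
        elif k + 1 < len(a) and s + a[k] + a[k + 1] <= d:
            sum_of_sub(s + a[k], k + 1, r - a[k], chosen)
        chosen.pop()                    # right child: exclude a[k]
        if (k + 1 < len(a)
                and s + r - a[k] >= d   # remaining elements can still reach d
                and s + a[k + 1] <= d): # next element does not overshoot
            sum_of_sub(s, k + 1, r - a[k], chosen)

    if a and sum(a) >= d:               # solution possible at all?
        sum_of_sub(0, 0, sum(a), [])
    return solutions
```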
Hamiltonian circuit problem:

Determine whether a given graph has a Hamiltonian circuit—a path that starts and ends at the
same vertex and passes through all the other vertices exactly once.

A Hamiltonian cycle (or Hamiltonian circuit) is a Hamiltonian Path such that there is an edge (in
graph) from the last vertex to the first vertex of the Hamiltonian Path. Determine whether a given
graph contains Hamiltonian Cycle or not. If it contains, then print the path. Following are the
input and output of the required function.

Input:
A 2D array graph[V][V] where V is the number of vertices in graph and graph[V][V] is
adjacency matrix representation of the graph. A value graph[i][j] is 1 if there is a direct edge
from i to j, otherwise graph[i][j] is 0.

Output:
An array path[V] that should contain the Hamiltonian Path. path[i] should represent the vertex i
in the Hamiltonian Path. The code should also return false if there is no Hamiltonian Cycle in the
graph.

Algorithm:
hamiltonian(p, index)
if index > n then
    the path is complete; display the values in p
else
    for each node v in G
        if v can be added to the path then
            add v to path p and call hamiltonian(p, index+1)
end hamiltonian

a---b---f---e---d---c---a
a---d---e---f---b---c---a
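A backtracking sketch of the routine above in Python, over the adjacency-matrix input described earlier (vertex 0 is fixed as the start of the cycle; the function names are mine):

```python
def hamiltonian_cycle(graph):
    """graph: adjacency matrix (list of lists of 0/1).
    Returns a Hamiltonian cycle as a vertex list starting and ending
    at vertex 0, or None if no such cycle exists."""
    n = len(graph)
    path = [0]                      # fix vertex 0 as the start

    def extend(index):
        if index == n:              # path is complete:
            return graph[path[-1]][path[0]] == 1   # edge back to start?
        for v in range(1, n):       # try each candidate vertex
            # v can be added if it is adjacent to the last vertex
            # and not already on the path
            if graph[path[-1]][v] == 1 and v not in path:
                path.append(v)
                if extend(index + 1):
                    return True
                path.pop()          # backtrack: undo the last choice
        return False

    if extend(1):
        return path + [0]           # close the cycle
    return None
```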
