1. INTRODUCTION
Grid computing is the combination of computer
resources from multiple administrative domains applied to a
common task, usually to a scientific, technical or business
problem that requires a great number of computer
processing cycles or the need to process large amounts of
data. One of the main strategies of grid computing is using
software to divide and apportion pieces of a program
among several computers, sometimes up to many thousands.
Grid is a form of distributed computing whereby a "super virtual computer" is composed of a cluster of
networked, loosely coupled computers acting in concert to
perform very large tasks. What distinguishes grid
computing from conventional cluster computing systems is
that grids tend to be more loosely coupled, heterogeneous,
and geographically dispersed. Also, while a computing grid
may be dedicated to a specialized application, it is often
constructed with the aid of general-purpose grid software libraries and middleware.
2. GENETIC ALGORITHM
The steps in the genetic algorithm are as follows:
Step 1-Initial population generation (Initial population of
chromosomes is generated using uniform distribution,
where each chromosome represents a particular sequence of
assigning tasks to machines.)
Step 2-Evaluation (Calculating the total execution time for
each sequence of assignment).
Step 3-While (stopping criteria not met)
{
selection( )
crossover( )
mutation( )
evaluation( )
}
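Taken together, these steps form the driver loop below; this is a minimal C sketch, and the fixed iteration count k used as the stopping criterion is an assumption for illustration (the text only says "stopping criteria not met"):

    /* Driver for the GA steps above; the operator bodies are sketched
     * in the detailed descriptions that follow. */
    void population_creation(void);
    void evaluation(void);
    void selection(void);
    void crossover(void);
    void mutation(void);

    void genetic_algorithm(int k)             /* k = no. of iterations */
    {
        population_creation();                /* Step 1 */
        evaluation();                         /* Step 2 */
        for (int iter = 0; iter < k; iter++)  /* Step 3: stopping criterion */
        {
            selection();
            crossover();
            mutation();
            evaluation();
        }
    }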
The detailed descriptions of the above methods are as
follows.
Step 1. Population creation(): A set of 200 chromosomes is generated from a uniform distribution for a given set of tasks, using the structure

struct
{
    int c[10];     /* c[j] = machine assigned to task j */
    int ct;        /* completion time */
} chromosome[200];

where c[10] represents a particular chromosome with a maximum of 10 tasks.

Fig. 1: An example chromosome (a sequence of machine assignments, one per task) together with its completion time.
So, to create 200 chromosomes, the code is

for i = 1 to 200 do
{
    for j = 1 to n (= 10) do
    {
        chromosome[i].c[j] = read the value from the user;
    }
}
Evaluation(): Consider the chromosome represented by Fig. 1. The fitness value of this chromosome is the completion time of all tasks under the above assignment to machines:

for i = 1 to 200
{
    for j = 1 to n    (where n is the no. of tasks, 5 here)
    {
        chromosome[i].ct += j * chromosome[i].c[j];
    }
}
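The structure, population creation and evaluation above can be rendered as the following compilable C sketch; the machine count M and the random initialization (in place of reading assignments from the user) are assumptions for illustration:

    #include <stdlib.h>

    #define POP 200              /* population size, as in the text     */
    #define N   10               /* maximum no. of tasks per chromosome */
    #define M   5                /* assumed no. of machines             */

    typedef struct {
        int c[N + 1];            /* c[j] = machine assigned to task j (1-based) */
        int ct;                  /* completion time, used as the fitness value  */
    } Chrom;

    Chrom chromosome[POP + 1];

    /* Step 1: generate 200 chromosomes from a uniform distribution. */
    void population_creation(int n)
    {
        for (int i = 1; i <= POP; i++)
            for (int j = 1; j <= n; j++)
                chromosome[i].c[j] = 1 + rand() % M;  /* random machine, not user input */
    }

    /* Step 2: fitness = completion time, using the text's rule ct += j * c[j]. */
    void evaluation(int n)
    {
        for (int i = 1; i <= POP; i++) {
            chromosome[i].ct = 0;
            for (int j = 1; j <= n; j++)
                chromosome[i].ct += j * chromosome[i].c[j];
        }
    }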
Selection(): The fitness value of the ith chromosome is chromosome[i].ct.

for i = 1 to 200 {
    totalfitnessvalue += chromosome[i].ct;
}
for i = 1 to 200 {
    p[i] = chromosome[i].ct / totalfitnessvalue;
    if (random() < p[i])
    {
        read no-of-copy from the user;
        for j = 1 to no-of-copy {
            for k = 1 to n
            {
                chromosome[j].c[k] = chromosome[i].c[k];
            }
        }
    }
}
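A C sketch of this roulette-wheel style selection follows; the slot bookkeeping (variable next) and the use of rand() are assumptions for illustration, and the copy probability p[i] follows the paper's ct/total rule as written (for a minimisation objective one would normally invert the fitness):

    #include <stdlib.h>

    #define POP 200

    typedef struct { int c[11]; int ct; } Chrom;   /* as defined for Step 1 */

    void selection(Chrom chromosome[], int n, int no_of_copy)
    {
        double total = 0.0;
        for (int i = 1; i <= POP; i++)
            total += chromosome[i].ct;             /* total fitness value */

        int next = 1;                              /* slot receiving the next copy */
        for (int i = 1; i <= POP; i++) {
            double p = chromosome[i].ct / total;   /* p[i] = ct / totalfitnessvalue */
            if ((double)rand() / RAND_MAX < p) {
                for (int j = 0; j < no_of_copy && next <= POP; j++, next++)
                    for (int k = 1; k <= n; k++)
                        chromosome[next].c[k] = chromosome[i].c[k];
            }
        }
    }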
Cross-Over(): Let us consider a single-point crossover at index = 4, pairing consecutive chromosomes and exchanging all genes from the crossover point onward:

for (i = 1; i < 200; i += 2)
{
    for j = 4 to n {
        store-value = chromosome[i].c[j];
        chromosome[i].c[j] = chromosome[i+1].c[j];
        chromosome[i+1].c[j] = store-value;
    }
}
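The same exchange in C; the pairing of consecutive chromosomes and the parameter point = 4 mirror the pseudocode above:

    typedef struct { int c[11]; int ct; } Chrom;   /* as defined for Step 1 */

    /* Single-point crossover: pairs (1,2), (3,4), ... swap all genes
     * from the crossover point onward. */
    void crossover(Chrom chromosome[], int n, int point)
    {
        for (int i = 1; i < 200; i += 2) {
            for (int j = point; j <= n; j++) {
                int tmp                = chromosome[i].c[j];
                chromosome[i].c[j]     = chromosome[i + 1].c[j];
                chromosome[i + 1].c[j] = tmp;
            }
        }
    }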
Mutation(): Let us consider the chromosome presented in Fig. 1. To get a better completion time for the system, the 1st and 3rd positions are mutated, i.e. c[1] is replaced by 3 and c[3] is replaced by 2.

for (i = 1; i <= 200; i++)
{
    for (j = 1; j <= n; j++)
    {
        if (random() < p)    /* p is the mutation probability associated with each position of the array */
            chromosome[i].c[j] = an arbitrary value from the set of machines;
    }
}
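A C sketch of the mutation operator; the per-position probability p_mut and the machine count n_machines are parameters assumed for illustration:

    #include <stdlib.h>

    typedef struct { int c[11]; int ct; } Chrom;   /* as defined for Step 1 */

    /* Each position mutates with probability p_mut; a mutated position
     * is replaced by an arbitrary machine from 1..n_machines. */
    void mutation(Chrom chromosome[], int n, int n_machines, double p_mut)
    {
        for (int i = 1; i <= 200; i++)
            for (int j = 1; j <= n; j++)
                if ((double)rand() / RAND_MAX < p_mut)
                    chromosome[i].c[j] = 1 + rand() % n_machines;
    }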
Time complexity of GA = k(T(Population creation) +
T(Evaluation) + T(selection) + T(cross-over)+T(Mutation)),
where k is no. of iterations.
T(Population creation)=200*n=O(n)
T(Evaluation)= 200*n=O(n)
T(selection)= 200+200*no-of-copy*n=O(n)
T(cross-over)= 100*n= O(n)
T(Mutation)=O(n)
So T(GA)=k*O(n)=O(kn)
When k is a constant,
T(GA)=O(n)
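For illustration, taking k = 400 iterations and n = 50 tasks (the largest values on the axes of Figs. 4 and 5), T(GA) grows in proportion to kn = 400 x 50 = 20,000 elementary operations, against kn^3 = 400 x 50^3 = 5 x 10^7 for the \Theta(kn^3) model of the next section.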
3. NEURAL NETWORK MODEL_1
The proposed model can be depicted by the block diagram shown in Fig. 2. In the proposed neural network model of Grid scheduling, the output at any particular neuron is given by equation (1):
O_j = f(net_j)                                      (1)
net_j = \sum_{i=1}^{n} I_i W_{i,j}                  (2)
I_i = T_i                                           (3)
\Delta w = -\eta \, \partial E / \partial w         (4)

where I_i is the input from the ith input neuron, W_{i,j} is the weight between neuron i and neuron j, T_i is the ith task, E is the output error and \eta is the learning rate.

Fig. 2: Back-propagation network with an input layer (a set of tasks) and an output layer (a set of machines).
The training and assignment procedures are as follows.

Training():
do
{
    for (a = 0; a < I; a++)
    {
        for (b = 0; b < I; b++)
        {
            net_j = \sum_{i=1}^{n} T_i w_{i,j}, where all T_i = i
            Step 3:
            if (f(net_j) != T_a) then
            {
                \partial E/\partial w is calculated, where E = (1/2)(T_a - O)^2 and O = f(net_j)
                \Delta w = -\eta \, \partial E/\partial w
                w[a][b] += \Delta w
            }
        }
    }
} while (E > 0);    /* repeat until the outputs match the targets */

Assignment(): for a task a, the machine s with the minimum weight is selected:

min = w[a][1]; s = 1;
for (k = 2; k <= j; k++)
{
    if (min > w[a][k]) then
    {
        min = w[a][k];
        s = k;
    }
}
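A compilable C sketch of one training pass under these rules; the sigmoid activation and the learning rate eta are assumptions for illustration (the paper does not fix f or \eta), and the gradient is written out for that choice of f:

    #include <math.h>

    #define NT 10                  /* assumed no. of tasks / neurons (n <= NT) */

    double w[NT + 1][NT + 1];      /* weight matrix                 */
    double eta = 0.1;              /* assumed learning rate         */

    double f(double net) { return 1.0 / (1.0 + exp(-net)); }  /* assumed sigmoid */

    /* One pass over all (a, b) pairs: compute O = f(net_b) and apply the
     * delta rule dw = -eta * dE/dw with E = 0.5 * (Ta - O)^2.
     * Returns the accumulated error so the caller can loop until it vanishes. */
    double train_once(int n, const double T[], const double Ta[])
    {
        double err = 0.0;
        for (int a = 1; a <= n; a++) {
            for (int b = 1; b <= n; b++) {
                double net = 0.0;
                for (int i = 1; i <= n; i++)
                    net += T[i] * w[i][b];               /* net_b = sum_i T_i w_ib */
                double O = f(net);                       /* eq. (1)                */
                double dEdw = -(Ta[b] - O) * O * (1.0 - O) * T[a]; /* sigmoid grad */
                w[a][b] += -eta * dEdw;                  /* eq. (4)                */
                err += 0.5 * (Ta[b] - O) * (Ta[b] - O);  /* E = (1/2)(Ta - O)^2    */
            }
        }
        return err;
    }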
T(assignment) = \Theta(n^2)
T(proposed model_1) = T(training) + T(assignment) = \Theta(n^3) + \Theta(n^2) = \Theta(n^3)
When k is not a constant, T(proposed model_1) = \Theta(kn^3).
4. NEURAL NETWORK MODEL_2
Here the features of a task are taken at the input layer, so the number of neurons at the input layer is equal to the number of features of a task. The output layer contains a single neuron, and different machines are assigned particular output values. The hidden layer may have any number of neurons. The system is trained with various samples of tasks. The test task is then applied to this trained system, and the output value tells which machine to assign the task to.
This model is depicted by the block diagram in Fig. 3.
In this proposed Model 2, the features of the task to be scheduled are taken at the input layer. The features (F1, F2, ... of Table 2) take values between 0 and 1.
Fig. 3: Back-propagation network with an input layer (a set of tasks) and an output layer (one output m/c).
Table 1
Output value    Machine scheduled
0.2             M2
0.3             M4
...             M1, ...
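A small C helper reflecting this mapping is sketched below; the rounding rule assumes the i/10 encoding used for machines in the training samples of Table 2 and is only an illustration (Table 1 itself lists the mapping explicitly):

    /* Map the normalised output value (0..1) of the single output neuron
     * to a machine index, assuming machine M_i is encoded as the value
     * i/10 as in Table 2 (e.g. 0.2 -> M2). */
    int machine_for_output(double op_o)
    {
        int m = (int)(op_o * 10.0 + 0.5);  /* round to the nearest encoded machine */
        return m < 1 ? 1 : m;
    }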
During training the weights are corrected by \Delta w = -\eta \, \partial E/\partial w, computed for each feature F_i. Let this trained system be named NeuroGridSchedular.
For example, the weights of the trained system may be

W[ ] = | 0.4  0.1  0.4  ... |
       | 0.3  0.0  0.5  ... |
       | 0.3  0.9  0.1  ... |
       | ...  ...  ...      |

V[ ] = | 0.5  0.7  0.2  ... |
With f features at the input layer and m output values, the training procedure of NeuroGridSchedular can be written as:

{
    b = 1;
    do
    {
        while (b++ < m)
        {
            for (a = 1; a <= f; a++)
                net[b] += T[a] * w[a][b];
        }
        for (b = 0; b < m; b++)
        {
            if (f(net[b]) != T_a) then
            {
                \Delta w = -\eta \, \partial E/\partial w;
                for (a = 0; a < f; a++)
                {
                    for (b = 0; b < m; b++)
                    {
                        w[a][b] += \Delta w;
                    }
                }
            }
        }
    } while (E > 0);
}

With NoI neurons at the input layer and NOH neurons at the hidden layer, the feed-forward computation of the trained system is

IPH[i] = \sum_{j=1}^{NoI} OPI[j] * W[j][i],    OPH[i] = f(IPH[i])
IPO = \sum_{j=1}^{NOH} OPH[j] * V[j],          OPO = f(IPO)

Test task => m[normalised value(OPO)], where m is the mapping from the normalised output value to a machine (Table 1).

T(proposed model_2) = O(kn), which is O(n) when k is a constant.
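A compilable C sketch of this feed-forward pass; the hidden-layer size NOH and the sigmoid activation are assumptions for illustration:

    #include <math.h>

    #define F   5                /* no. of task features (F1..F5 of Table 2) */
    #define NOH 4                /* assumed no. of hidden neurons            */

    double W[F + 1][NOH + 1];    /* input-to-hidden weights W[ ]  */
    double V[NOH + 1];           /* hidden-to-output weights V[ ] */

    double f(double net) { return 1.0 / (1.0 + exp(-net)); }  /* assumed sigmoid */

    /* Feed-forward pass of NeuroGridSchedular:
     * IPH[i] = sum_j OPI[j] * W[j][i], OPH[i] = f(IPH[i]),
     * IPO = sum_i OPH[i] * V[i],       OPO = f(IPO).         */
    double feed_forward(const double OPI[F + 1])
    {
        double OPH[NOH + 1];
        for (int i = 1; i <= NOH; i++) {
            double IPH = 0.0;
            for (int j = 1; j <= F; j++)
                IPH += OPI[j] * W[j][i];
            OPH[i] = f(IPH);
        }
        double IPO = 0.0;
        for (int i = 1; i <= NOH; i++)
            IPO += OPH[i] * V[i];
        return f(IPO);           /* OPO: normalised value mapped to a machine via Table 1 */
    }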
5. COMPARATIVE STUDY
The time complexities of the three approaches are:
GA   - \Theta(kn), i.e. O(n) when k is a constant.
NN_1 - O(kn^3), i.e. O(n^3) when k is a constant.
NN_2 - O(kn), i.e. O(n) when k is a constant.
Figs. 4 and 5 plot these complexities against the no. of iterations and the no. of tasks respectively.
6. CONCLUSION
The simulation study of this work is done by writing pseudocode for the Genetic Algorithm approach, NeuralNetworkModel_1 and NeuralNetworkModel_2, and finding their time complexity. This time complexity gives the average completion time in terms of the maximum no. of iterations, the no. of tasks and the no. of machines. In these models we therefore need to know the Grid environment information, i.e. machine heterogeneity, task heterogeneity, size of tasks and so on. From the comparative study it has been found that GA gives better performance than the proposed Neural Network models for independent task scheduling.
ACKNOWLEDGEMENT
I would like to acknowledge and extend my heartfelt gratitude to Dr. P.C. Saxena, Retired Professor, J.N.U., New Delhi, for his valuable guidance and support.
Table 2
Training data no.    F1     F2     F3     F4     F5     Machines
1                    0.2    0.5    0.1    0.6    0.7    0.1 (M1)
2                    0.5    0.6    0.1    0.9    0.6    0.2 (M2)
3                    0.5    0.8    0.2    0.3    0.4    0.3 (M3)
...                  ...    ...    ...    ...    ...    ...
Fig. 4: Time complexity of GA, NNM1 and NNM2 plotted against the no. of iterations (0-400).

Fig. 5: Time complexity of GA, NNM1 and NNM2 plotted against the no. of tasks. This graph shows NeuralNetworkModel_2 and GA have less time complexity than NeuralNetworkModel_1 for a fixed no. of iterations. For smaller numbers of tasks, all three have nearly the same order of time complexity.