
(IJCNS) International Journal of Computer and Network Security, Vol. 2, No. 6, June 2010

Universal Tool for University Course Schedule Using Genetic Algorithm

Mohamed Tahar Ben Othman, Senior Member IEEE
College of Computer, Qassim University, Saudi Arabia
mtothman@gmail.com, bn_athaman@qu.edu.sa

Abstract: Timetabling is one of the most important administrative activities that take place in academic institutions. Timetable scheduling takes a huge effort at the beginning of each year/semester for every university program. It may also result, if it is not well done, in problems in the registration process as well as an abnormal class start. For this reason, several studies have been conducted on timetable scheduling automation. The number of constraints that have to be satisfied makes timetable scheduling a difficult task; the problem is known to be NP-hard. A large number of variants of the timetabling problem have been proposed in the literature, which differ from each other based on the type of institution involved (university or school) and the type of constraints. In this paper, we resolve the problem of timetable scheduling using a genetic algorithm. The originality of our work comes from its universality: the different variables and constraints are defined by the end-user, and then the genetic algorithm is run to provide the optimal or near-optimal solution.

Keywords: Scheduling, Timetabling, Genetic Algorithm, hard and soft constraints.

1. Introduction

Scheduling is the process of allocating tasks to a set of resources. Generally, these tasks require the use of varied resources while the resources, on the other hand, are limited in nature with respect to quantity as well as the time that they are available for. This allocation has to respect a number of constraints. Timetable scheduling is one of the areas most studied over the last decade. Among the wide variety of timetabling problems, educational timetabling is one of the most studied from a practical viewpoint. It is one of the most important and time-consuming tasks which occur periodically (i.e. each year/semester) in all academic institutions. The quality of the timetable has a great impact on a broad range of parties including lecturers, students and administrators [20]. University timetable scheduling is the assignment to each course of a timeslot, a teacher, a classroom, etc., so that certain conditions for these resources and for the students' registrations are respected. Because of the number of constraints that have to be dealt with and the limitation in resources, this problem is considered NP-hard. The constraints that have to be respected are called hard constraints. On the other hand, a good schedule should also be efficient, for example minimizing wasted time between classes for students as well as teachers. These constraints are generally considered soft constraints, which may not all be respected in an acceptable solution.

Timetable scheduling can be seen as a finite Constraint Satisfaction Problem (CSP). A CSP is a problem with a finite set of variables, where each variable is associated with a finite domain. Relationships between variables constrain the possible instantiations they can take at the same time. A solution to a CSP is an instantiation of all the variables with values from their respective domains, satisfying all the constraints. In many CSPs, such as resource planning and timetable scheduling, some solutions are better than others [21]. To solve a CSP, one must find the solution tuple that instantiates the variables with values from their respective domains such that these instantiations do not violate any of the constraints. Most of the research conducted so far either studies a solution for a particular type of institution or provides a system or language to describe the different constraints and variables and generate the timetable. In this paper we provide a tool that, on one side, gives the possibility to define any number of variables and constraints and runs a genetic algorithm to provide a number of optimal or near-optimal solutions and, on the other side, interacts with the user to modify any solution by providing assistance to check whether any constraint is broken by such a modification.

The rest of this paper is organized as follows: in the next section we give a brief description of the genetic algorithm. Section 3 presents some related works. The problem description and the proposed solution are given in Section 4. Finally, the discussion and conclusion are presented in Section 5.

2. Genetic Algorithm

The genetic algorithm is a method for solving both constrained and unconstrained optimization problems that is based on natural selection, the process that drives biological evolution. The genetic algorithm repeatedly modifies a population of individual solutions. At each step, the genetic algorithm selects individuals by a given selection method (e.g. random) from the current population to be parents and uses them to produce the children for the next generation. Over successive generations, the population "evolves" toward an optimal solution. The genetic algorithm can be used to solve a variety of optimization problems that are not well suited for standard optimization algorithms, including problems in

which the objective function is discontinuous, stochastic, or highly nonlinear [22]. The genetic algorithm uses three main types of rules at each step to create the next generation from the current population. Selection rules select the individuals, called parents, that contribute to the population at the next generation. Crossover rules combine two parents to form children for the next generation. Mutation rules apply random changes to individual parents to form children.

Figure 1 shows the structure of the tool we built. It consists of two main layers: the first layer gives the ability to define variables and constraints and, after interfacing with the genetic algorithm layer, the different solutions (Fig. 2) can be manipulated via a user-friendly interface. The second layer is responsible for the genetic algorithm processing. The fitness function and the initial population are auto-generated from the variables and constraints definition (Fig. 4). Figs. 2-3 show the plot of the best fitness obtained after the 200 generations over which the GA ran.

[Figure 1 diagram: Layer 1 — variables and constraints definition, solutions manipulation, interactive schedule interface; Layer 2 — initial population generation, selection, fitness evaluation, crossover and mutation, new population, until finished.]

Figure 1: Tool Structure.

3. Related works

Several research papers have been published in the last decade which address different aspects of "the timetabling problem". Indeed, there is even an international conference on the "Practice and Theory of Automated Timetabling" (PATAT), which has taken place biannually since 1995. A large variety of solving techniques has been tried out for the solution of timetabling problems [1-23]. Most research has concentrated on automatically solving difficult static timetabling problems. Here we present some works that are related to genetic algorithms. J. Matthews concentrated on solving dynamic timetabling problems [24], in which a change of circumstance would require the modification of an existing timetabling solution. The paper [25] describes a genetic algorithm-based approach with two main stages for solving the course timetabling problem. A local search is applied to the algorithm at each stage. The first stage eliminates the violations of hard constraints, and the second one attempts to minimize the violations of soft constraints while keeping the number of hard constraint violations at zero. J. J. Moreira in [26] presents a solution method for the problem of automatic construction of timetables for exams. Among several mathematical models of representation, the final option was a matrix model, which is justified by the benefits this model presents when used in the algorithm solution. The method of solution is a meta-heuristic that includes a genetic algorithm.

4. Problem description

Making a course schedule is one of those NP-hard problems. The problem can be solved using a heuristic search algorithm to find an optimal solution, but this works only for simple cases. For more complex inputs and requirements, finding a considerably good solution can take a while, or it may be impossible. This is where genetic algorithms come into the game. When making a course schedule, many constraints (number of professors, students, classes and classrooms, size of classrooms, laboratory equipment in classrooms and many others) should be taken into consideration. These constraints can be divided into two major types: hard and soft constraints.

• Hard constraints: if one is broken, the provided schedule is not considered an acceptable solution.
• Soft constraints: if one is broken, the provided schedule is still considered an acceptable solution.

These constraints are represented by a set of predicates P. These predicates are generally set as parameters and should be satisfied to get the best solution. Generally, a predicate returns true if it is satisfied and false if it is not. Each predicate Pi is assigned a penalty αi if Pi is not satisfied. This penalty depends on the type of the constraint (hard or soft) and on the importance of the requirement within the same type (e.g. having a professor teaching different courses at the same time is generally more critical than having different courses with different professors in the same classroom at the same time).

4.1. Chromosome

A chromosome is represented by the set of classes. A class is a tuple of course (c), day (d), time (t), professor (p), classroom (r), level (l) and a list of students (s). As the students' registration is generally done after the schedule is built, we omit the list of students from the class in the first phase.

Chrom = {(ci, di, ti, pi, ri, li) / i = 1..N}    (1)

where N is the number of courses.

A course can have more than one time slot weekly. The chromosome is represented using an N×V matrix, where N is the number of courses and V is the number of variables (V = 6 in this example).

4.2. Fitness

The objective function is calculated for each constraint C over the chromosome:

f(C) = 1 if the constraint C is violated, 0 otherwise    (2)

The general fitness is calculated over all courses as the summation of the violated constraints times their weights, as follows:

Fitness = Σc Σi αi f(Ci)    (3)

where αi represents the weight attached to the constraint Ci. Ci is a set of predicates that all have to be satisfied for Ci to be satisfied. This fitness function has to be minimized to obtain a better solution. We chose a constant weight α for all hard constraints and a constant weight β for all soft constraints. This yields the following fitness equation in this particular case:

Fitness = α * Σc Σi∈h f(Ci) + β * Σc Σi∈s f(Ci)    (4)

where h and s denote the sets of hard and soft constraints, respectively. In our measurements the value of α is taken as 2 and the value of β as 1. For example, suppose we have the following conditions:

• A professor can have only one class at a time (C1, Hard).
• A classroom can handle only one course at a time (C2, Hard).
• Two time slots for the same course cannot be on the same day (C3, Soft).
• Courses at the same level cannot be at the same time (C4, Hard).
• No course on Monday between 11:00-12:30 (reserved slot) (C5, Hard).

Then these constraints are written in the form in which they are violated:

∀ (ci, di, ti, pi, ri, li) and (cj, dj, tj, pj, rj, lj), i ≠ j:
  C1 = (di = dj) & (ti = tj) & (pi = pj)
  C2 = (di = dj) & (ti = tj) & (ri = rj)
  C3 = (ci = cj) & (di = dj)    (5)
  C4 = (li = lj) & (ti = tj)
  C5 = (di = Mon) & (ti = 11:00-12:30)

4.3. Crossover

The crossover operation combines two parent chromosomes and then creates an offspring that will be inserted in the new generation. The two parent chromosomes are selected either randomly or according to their fitness (the two having the best fitness in the remaining pool). A set of rows is selected randomly from both parents. The row indexes should be the same for both parents so that the variables' domains are always respected. The crossover is made by switching the values of some variables, taken randomly from the variables that can be modified, between two rows with the same index.

4.4. Mutation

The mutation operation is very simple. It just takes a random number of rows in the chromosome matrix and moves each to another randomly chosen slot (changing the day and/or the time) or location. In every operation, the domain of each variable is respected. Note that the tool works with variable values as indexes to the real values. The real values, for example courses' names, professors' names, etc., can be bound to these variables. The relationships between the different variables should be set at the beginning of the process (e.g. which professor or set of professors is willing to teach a given course).

5. Results

Figure 2 shows the average number of clashes for each constraint during the 200 generations. Figure 4 presents the best fitness plot for different populations during 200 generations. In order to test the effectiveness of our objective function for solving the timetabling problem, which is built by giving different weights to hard and soft constraints, we provide in Figure 4 the history of constraint violations per type (hard and soft). Figure 4 demonstrates, on one side, that the convergence is faster for hard than for soft constraints and, on the other side, that the only clashes that may remain (Figure 3) are due to soft constraints.

Figure 2: Average number of clashes per constraint.
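The class encoding of Section 4.1, the weighted fitness of equations (2)-(5), and the mutation operator of Section 4.4 can be sketched as follows. This is an illustrative sketch only, not the paper's actual tool: the function names (`violations`, `fitness`, `mutate`) and the sample data are invented here, while the weights α = 2 and β = 1 and the clash rules C1-C5 follow the example above.

```python
import random

# A class (gene) is a tuple (course, day, time, professor, room, level);
# a chromosome is a list of N such tuples (the N x V matrix with V = 6).
ALPHA, BETA = 2, 1  # weights for hard and soft constraint violations

def violations(chrom):
    """Count hard and soft clashes C1..C5 over all pairs of classes."""
    hard = soft = 0
    for i in range(len(chrom)):
        ci, di, ti, pi, ri, li = chrom[i]
        if di == "Mon" and ti == "11:00-12:30":      # C5 (hard): reserved slot
            hard += 1
        for j in range(i + 1, len(chrom)):
            cj, dj, tj, pj, rj, lj = chrom[j]
            same_slot = (di == dj) and (ti == tj)
            if same_slot and pi == pj:               # C1 (hard): professor clash
                hard += 1
            if same_slot and ri == rj:               # C2 (hard): classroom clash
                hard += 1
            if ci == cj and di == dj:                # C3 (soft): same course, same day
                soft += 1
            if same_slot and li == lj:               # C4 (hard): level clash
                hard += 1
    return hard, soft

def fitness(chrom):
    """Equation (4): weighted sum of violations, to be minimized."""
    hard, soft = violations(chrom)
    return ALPHA * hard + BETA * soft

def mutate(chrom, days, times, rooms, n_rows=1):
    """Section 4.4: move randomly chosen rows to other random slots,
    keeping course, professor and level within their domains."""
    chrom = list(chrom)
    for i in random.sample(range(len(chrom)), n_rows):
        c, d, t, p, r, l = chrom[i]
        chrom[i] = (c, random.choice(days), random.choice(times),
                    p, random.choice(rooms), l)
    return chrom
```

For instance, two classes taught by the same professor in the same day/time slot break C1 once and score a fitness of 2 · 1 = 2 under these weights.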

Figure 3: Best fitness.

Figure 4: Constraint violation per type.

6. Discussion and Conclusion

In this paper we introduced a tool to resolve the timetable scheduling problem with different constraints. This tool uses a genetic algorithm to provide the optimal or near-optimal solution. Genetic algorithms have been widely used for timetabling but, as many studies indicated, the solution to timetabling differs from one institution to another because of the different types of institutions and the different constraints within the same type. The originality of our work comes from the hybrid solution the tool supports. In fact, most research was looking for the optimal solution to the timetabling problem with respect to a set of constraints. This NP-hard problem requires heuristic methods, among which is the GA solution. The problem is that a GA may, on one side, run into local minima and, on the other side, the solution may not be the wanted solution, even if it has the best fitness, because some soft constraints were not satisfied. For this reason, our tool shows a number of solutions obtained from the GA and then gives the user the possibility to manipulate the timetable so as to get the needed solution, by recalculating the fitness to see whether a constraint is broken each time a modification is done. By this hybrid solution, it is not required that a GA solution be an optimal one. On the other hand, the other originality of this tool is that it can be adapted to any institution, as it gives the possibility to define the different variables and the different constraints. The next step for this tool is to support more types of constraints so that it can be used not only for course schedules but for most types of timetable scheduling generation.

Acknowledgements: I would like to acknowledge the financial support of this work from the Deanship of Scientific Research, Qassim University. Also, I would like to thank Professor Abdullah Ibrahiem Al-Shoshan and Dr. Abdulnaser Rachid for their remarks.

References
[1] D. Abramson, "Constructing school timetables using simulated annealing: sequential and parallel algorithms", Management Science, vol. 37(1), January 1991, pp. 98-113.
[2] P. Adamidis and P. Arapakis, "Evolutionary Algorithms in Lecture Time tabling", Proceedings of the 1999 IEEE Congress on Evolutionary Computation (CEC '99), IEEE, 1999, pp. 1145-1151.
[3] S.C. Brailsford, C.N. Potts and B.M. Smith, "Constraint Satisfaction Problems: Algorithms and Applications", European Journal of Operational Research, vol. 119, 1999, pp. 557-581.
[4] E. K. Burke and J. P. Newall, "A New Adaptive Heuristic Framework for Examination Time tabling Problems", University of Nottingham, Working Group on Automated Time tabling, TR-2002-1, http://www.cs.nott.ac.uk/TR-cgi/TR.cgi?tr=2002-1
[5] M.W. Carter, "A Survey of Practical Applications of Examination Time tabling Algorithms", Operations Research, vol. 34, 1986, pp. 193-202.
[6] M.W. Carter and G. Laporte, "Recent Developments in Practical Course Time tabling", in: Burke, E., Carter, M. (Eds.), The Practice and Theory of Automated Time tabling II: Selected Papers from the 2nd Int'l Conf. on the Practice and Theory of Automated Time tabling, Springer Lecture Notes in Computer Science Series, vol. 1408, 1998, pp. 3-19.
[7] A. Colorni, M. Dorigo, and V. Maniezzo, "Genetic algorithms – A new approach to the timetable problem", in Lecture Notes in Computer Science - NATO ASI Series, Vol. F 82, Combinatorial Optimization (Akgul et al., Eds.), Springer-Verlag, 1990, pp. 235-239.
[8] A. Hertz, "Tabu search for large scale time tabling problems", European Journal of Operational Research, vol. 54, 1991, pp. 39-47.
[9] B. Paechter, A. Cumming, M.G. Norman, and H. Luchian, "Extensions to a memetic time tabling system", in E.K. Burke and P.M. Ross (Eds.), Proceedings of the 1st International Conference on the Practice and Theory of Automated Time tabling, 1995.
[10] A. Schaerf, "A Survey of Automated Time tabling", Artificial Intelligence Review, vol. 13(2), 1999, pp. 87-127.
[11] A. Tripathy, "A lagrangian relaxation approach to course time tabling", Journal of the Operational Research Society, vol. 31, 1980, pp. 599-603.
[12] G.M. White and P.W. Chan, "Towards the Construction of Optimal Examination Timetables", INFOR, vol. 17, 1979, pp. 219-229.
[13] M. Ben Othman, G. A. Azim, A. Hamdi-Cherif, "Pairwise Sequence Alignment Revisited – Genetic Algorithms and Cosine Functions", NAUN conference, Attica, Greece, June 1-3, 2008.
[14] M. Ben Othman, A. Hamdi-Cherif, "Genetic algorithm and scalar product for pairwise sequence alignment", IJC International Journal of Computer, NAUN North Atlantic University Union, November 2008.
[15] A. Bhaduri, "University Timetable Scheduling using Genetic Artificial Immune Network", 2009 International Conference on Advances in Recent Technologies in Communication and Computing, 978-0-7695-3845-7/09.
[16] A. Wren, "Scheduling, Time tabling and Rostering – A Special Relationship", in The Practice and Theory of Automated Time tabling: Selected Papers from the 1st Int'l Conf. on the Practice and Theory of Automated Time tabling, Burke, E., Ross, P. (Eds.), Springer Lecture Notes in Computer Science Series, vol. 1153, 1996, pp. 46-75.
[17] S.C. Brailsford, C.N. Potts and B.M. Smith, "Constraint Satisfaction Problems: Algorithms and Applications", European Journal of Operational Research, vol. 119, 1999, pp. 557-581.
[18] S. Kazarlis, V. Petridis and P. Fragkou, "Solving University Time tabling Problems Using Advanced Genetic Algorithms", Science Series, vol. 1408, 1998, pp. 3-19.
[19] A. Bhaduri, "Genetic Artificial Immune Network for Training Artificial Neural Networks", in Proc. of the IEEE International Advance Computing Conference, 2009, pp. 1998-2001.
[20] R. Qu, E. Burke, B. McCollum, L. T.G. Merlot and S. Y. Lee, "A Survey of Search Methodologies and Automated Approaches for Examination Time tabling", Computer Science Technical Report No. NOTTCS-TR-2006-4, University of Nottingham.
[21] L.T. Leng, "Guided Genetic Algorithm", Ph.D. thesis, Department of Computer Science, University of Essex, Colchester, United Kingdom, 1999.
[22] www.systemtechnik.tu-ilmenau.de/~pohlheim/GA_Toolbox/
[23] S. D. Öncül, D. S. Ö. Aykaç, D. Bayraktar and D. Çelebi, "A Review Of Time tabling And Ressource Allocation Models For Light-Rail Transaction Systems", International Conference for Prospects for Research in Transport and Logistics on a Regional Global Perspective, February 12-14, 2009, Istanbul, Turkey.
[24] J. Matthews, "A Constraint Modeling Language for Timetables", Project Report, June 2006.
[25] S. Massoodian, A. Esteki, "A Hybrid Genetic Algorithm for Curriculum Based Course Time tabling", February 2008.
[26] J. J. Moreira, "A System for Automatic Construction of Exam Timetable Using Genetic Algorithms", ISLA - Rua de Cabo Borges, 55, 4430-032 V. N. de Gaia, Portugal. Revista de Estudos Politécnicos, Polytechnical Studies Review, 2008, Vol. VI, nº 9, ISSN: 1645-9911.

Mohamed Tahar Ben Othman received his Ph.D. in Computer Science from the National Polytechnic Institute of Grenoble (INPG), France, in 1993, and his Master degree in Computer Science from ENSIMAG "École Nationale Supérieure d'Informatique et de Mathématiques Appliquées de Grenoble" in 1989. He received a Senior Engineer Diploma in Computer Science from the Faculty of Science of Tunis. This author became a Member (M) of IEEE in 1997, and a Senior Member (SM) in 2007. He worked as a post-doc researcher in LGI (Laboratoire de Génie Logiciel) in Grenoble, France, between 1993 and 1995; as Dean of the Faculty of Science and Engineering at the University of Science and Technology in Sanaa, Yemen, between 1995 and 1997; as Senior Software Engineer at Nortel Networks, Canada, between 1998 and 2001; and as Assistant Professor in the Computer College at Qassim University in Saudi Arabia from 2002 until present. His research interest areas are wireless networks, ad hoc networks, communication protocols, artificial intelligence, and bioinformatics.

Appendix

Figure 2: Different solutions.



Figure 3: Available classrooms.

Figure 4: Constraint definition.



Formal Specification for Implementing Atomic Read/Write Shared Memory in Mobile Ad Hoc Networks Using the Mobile Unity

Fatma A. Omara (1), Reham A. Shihata (2)

(1) Computer Science Department, Information Systems and Computers Faculty, Cairo University, Cairo, Egypt. Fatma_omara@yahoo.com
(2) Mathematics Department, Science Faculty, EL Minufiya University, Shebin El-Kom, Egypt. rehamteacher@yahoo.com

Abstract: This paper considers the Geoquorum approach for implementing atomic read/write shared memory in mobile ad hoc networks. This problem in distributed computing is revisited in the new setting provided by the emerging mobile computing technology. A simple solution tailored for use in ad hoc networks is employed as a vehicle for demonstrating the applicability of formal requirements and design strategies to the new field of mobile computing. The approach of this paper is based on well-understood techniques in specification refinement, but the methodology is tailored to mobile applications and helps designers address novel concerns such as logical mobility, invocations, and specific-condition constructs. The proof logic and programming notation of Mobile UNITY provide the intellectual tools required to carry out this task. Also, quorum systems are investigated in highly mobile networks in order to reduce the communication cost associated with each distributed operation.

Keywords: Formal Specification, Mobility, Mobile Ad Hoc Networks, the Quorum Systems.

1. Introduction

Formal notations led to the development of specification languages; formal verification contributed to the application of mechanical theorem provers to program checking; and formal derivation, a class of techniques that ensures correctness by construction, has the potential to reshape the way software will be developed in the future. Program derivation is less costly than post factum verification, is incremental in nature, and can be applied with varying degrees of rigor in conjunction with, or completely apart from, program verification. More significantly, while verification is tied to analysis and support tools, program derivation deals with the very essence of the design process, the way one thinks about problems and constructs solutions [1]. An initial, highly abstract specification is gradually refined up to the point where it contains so much detail that writing a correct program becomes trivial. Program refinement uses a correct program as a starting point and alters it until a new program satisfying some additional desired properties is produced.

Mobile systems, in general, consist of components that may move in a physical or logical space. If the components that move are hosts, the system exhibits physical mobility. If the components are code fragments, the system is said to display logical mobility, also referred to as code mobility. Code on demand, remote evaluation, and mobile agents are typical forms of code mobility. Of course, many systems entail a combination of both logical and physical mobility (as explained in our related work). The potentially very large number of independent computing units, a decoupled computing style, frequent disconnections, continuous position changes, and the location-dependent nature of the behavior and communication patterns present designers with unprecedented challenges [1][2]. While formal methods may not yet be ready to deliver complete practical systems, the intellectual complexity of the undertaking clearly can benefit enormously from the rigor associated with a precise design process, even if it is employed only in the design of the most critical aspects of the system. The attempt to answer the question raised earlier consists of a formal specification and derivation of our communication protocol for ad hoc mobile systems, carried out by employing the Mobile UNITY proof logic and programming notation. Mobile UNITY provides a notation for mobile system components, a coordination language for expressing interactions among the components, and an associated proof logic. This highly modular extension of the UNITY model extends both the physical and logical notations to accommodate specification of, and reasoning about, mobile programs that exhibit dynamic reconfiguration.

Ensuring the availability and the consistency of shared data is a fundamental task for several mobile network applications. For instance, nodes can share data containing configuration information, which is crucial for carrying out cooperative tasks. The shared data can be used, for example, to coordinate the duty cycle of mobile nodes to conserve energy while maintaining network connectivity. The consistency and the availability of the data play a crucial role in that case, since the loss of information regarding the sleep/awake cycle of the nodes might compromise network connectivity. The consistency and availability of the shared data are also relevant when tracking mobile objects, or in disaster relief applications where mobile nodes have to coordinate distributed tasks without the aid of a fixed communication infrastructure. This can be attained via read/write shared memory, provided each node maintains a copy of the data regarding the damage assessment and dynamically updates it by issuing write operations. Also in this case it is important that the data produced by the

mobile nodes does not get lost, and that each node is able to 2.1 Quorum Systems under Mobility Model
Retrieve the most up-to-date information. Strong data con-
sistency guarantees have applications also to road safety, This section shows here that quorums proposed for
detection and avoidance of traffic accidents, or safe driving static networks are not able to guarantee data consistency
assistance [3].The atomic consistency guarantee is widely and availability if assumptions A1 and A2 hold, because the
used in distributed systems because it ensures that the minimum quorum intersection is not sufficiently large to
distributed operations (e.g., read and write operations) cope with the mobility of the nodes. In fact, since read/write
performed on the shared memory are ordered consistently operations are performed over a quorum set, in order to
with the natural order of their invocation and response time, guarantee data consistency each read quorum must intersect
and that each local copy is conforming to such an order. a quorum containing the last update. We show that there are
Intuitively, this implies that each node is able to retrieve a scenarios that invalidate this condition in case of quorum
copy showing the last completed update, which is crucial in systems Qg with non-empty quorum intersection, and in case
cooperative tasks. However, the implementation of a fault- of dissemination quorum systems Qd with minimum quorum
tolerant atomic read/write shared memory represents a intersection equal to f + 1, where f is the maximum number
challenging task in highly mobile networks because of the of failures [10].
lack of a fixed infrastructure or nodes that can serve as a
backbone. In fact, it is hard to ensure that each update 2.1.1 Generic Quorum System
reaches a subset of nodes that is sufficiently large to be
retrieved by any node and at any time, if nodes move along It is a set of subsets of a finite universe U such that,
unknown paths and at high speed. The focal point model any two subsets (quorums) intersect (consistency property)
provides a first answer to this challenge since it masks the dynamic nature of mobile ad hoc networks by a static model. More precisely, it associates mobile nodes with fixed geographical locations called focal points. According to this model, a focal point is active at some point in time if its geographical location contains at least one active mobile node. As a result, a focal point becomes faulty when every mobile node populating that sub-region leaves it or crashes. The merit of this model is that it allows node mobility to be studied in terms of failures of stationary abstract points, and coordination protocols for mobile networks to be designed in terms of static abstract nodes [1] [3].

2. Related Work

In this section, quorum systems in highly mobile networks are investigated in order to reduce the communication cost associated with each distributed operation. Our analysis is driven by two main goals: (1) guarantee data availability, and (2) reduce the amount of message transmissions, thus conserving energy. The availability of the data is strictly related to the liveness and response time of the recovery protocol, since focal point failures occur continuously, triggered by the motion of nodes. Quorum systems are well-known techniques designed to enhance the performance of distributed systems, for example by reducing the access cost per operation and the load. A quorum system of a universe U is a set of subsets of U, called quorums, such that any pair of quorums intersects. In this paper, quorum systems are analyzed under conditions of high node mobility. Note that the universe U of our quorum systems is FP, a set of n stationary focal points. This choice allows us to study node mobility in terms of continuous failures of stationary nodes. In the next section, two examples of quorum systems are analyzed; we show that they are not always able to guarantee data consistency and availability under the mobility constraints, and we provide in Lemma 1 a condition on the size of the minimum quorum intersection that is necessary to guarantee these properties [10].

and, there exists at least one subset of correct nodes (availability property). The second condition ensures data availability and poses the constraint f < n/2. In our system model, where nodes continuously fail and recover, this condition is not sufficient to guarantee data availability. For instance, in an implementation of a read/write atomic memory based on Qg, the liveness of the read protocol can be violated, since it terminates only after receiving a reply from a full quorum of active focal points. Therefore, since the recovery operation involves a read operation, data can become unavailable [10][11].

2.1.2 Dissemination Quorum Systems

They satisfy a stronger consistency property, but are insufficient if failures occur continuously. An f-fail-prone system ß ⊆ 2^U of U is defined as a set of subsets of faulty nodes of U, none of which is contained in another, and such that some B ∈ ß contains all the faulty nodes (whose number does not exceed f).

Definition 1. A dissemination quorum system Qd of U for an f-fail-prone system ß is a set of subsets of U with the following properties:

(i) Q1 ∩ Q2 ⊄ B, ∀ Q1, Q2 ∈ Qd, ∀ B ∈ ß;
(ii) ∀ B ∈ ß, ∃ Q ∈ Qd : Q ∩ B = Ø.

Dissemination quorum systems tolerate less than n/3 failures. Unfortunately, since in our system model an additional focal point might fail between the invocation and the response time of a distributed operation, more than f focal points in a quorum set can be non-active at the time they receive the request. As a result, data availability can be violated. The following lemma provides a condition on the minimum quorum intersection size (lower bound) that is necessary to guarantee data consistency and availability under our system model, provided nodes fail and recover [12].
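To make Definition 1 concrete, its two properties can be checked mechanically on a small universe. The following sketch is our own illustration (the universe size n = 7, f = 2, and the threshold quorum construction are assumptions chosen for the example, not taken from the paper):

```python
from itertools import combinations

# Toy check of Definition 1 (our own illustration; n = 7, f = 2 and the
# threshold construction below are assumptions, not from the paper).
U = set(range(7))                                   # universe of n = 7 nodes
f = 2                                               # needs f < n/3
fail_prone = [set(B) for B in combinations(U, f)]   # every f-subset may fail
q_size = -(-(len(U) + f + 1) // 2)                  # ceil((n + f + 1) / 2) = 5
quorums = [set(Q) for Q in combinations(U, q_size)]

# (i) consistency: no quorum intersection is contained in a faulty set
assert all(not (Q1 & Q2) <= B
           for Q1 in quorums for Q2 in quorums
           for B in fail_prone)
# (ii) availability: for every failure pattern, some quorum survives intact
assert all(any(not (Q & B) for Q in quorums) for B in fail_prone)
print("both properties of Definition 1 hold on this toy universe")
```

With n = 7 and f = 2, any two quorums of size 5 share at least 2·5 − 7 = 3 nodes, so their intersection can never fit inside a 2-element faulty set; and the complement of any faulty set is itself a quorum, which gives property (ii).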
(IJCNS) International Journal of Computer and Network Security, 9
Vol. 2, No. 6, June 2010
Lemma 1. An implementation φ of a read/write shared memory built on top of a quorum system Q of a universe FP of stationary nodes that fail and recover according to assumptions A1 and A2 guarantees data availability only if |Q1 ∩ Q2| > f + 1 for any Q1, Q2 ∈ Q. It ensures atomic consistency only if |Q1 ∩ Q2| > f + 2 for any Q1, Q2 ∈ Q.

2.2 The MDQ Quorum Systems

This section introduces a new class of quorum systems, called Mobile Dissemination Quorum (MDQ) systems, that satisfies the condition in Lemma 1.

Definition 2. An MDQ system Qm of FP is a set of subsets of FP such that |Q1 ∩ Q2| > f + 2 for any Q1, Q2 ∈ Qm. Note that, in contrast with Qg, the liveness of the distributed operations performed over a quorum set is guaranteed by the minimum number of alive nodes contained in any quorum. As a result, in case of failures the sender does not need to access another quorum in order to complete the operation. This improves the response time in case of faulty nodes and reduces the message transmissions. Let us consider now the following MDQ system:

Qopt = { Q : (Q ⊆ FP) ∧ (|Q| = ⌈(n + f + 3)/2⌉) }

Lemma 2. Qopt is an MDQ system, and f ≤ n − 3.

Proof: Since |Q1 ∪ Q2| = |Q1| + |Q2| − |Q1 ∩ Q2| for any Q1, Q2 ∈ Qopt, and |Q1 ∪ Q2| ≤ n, then |Q1 ∩ Q2| ≥ n + f + 3 − n = f + 3.

In addition, Qopt tolerates up to n − 3 failures, since the size of a quorum cannot exceed n, that is, ⌈(n + f + 3)/2⌉ ≤ n. This implies (f + 3 − n)/2 ≤ 0, i.e., f ≤ n − 3. Note that Qopt is highly resilient (in the trivial case f = n − 3, Qopt = {FP}). Clearly, there is a trade-off between resiliency and access cost, since the access cost per operation increases with the maximum number of failures. Moreover, our assumption of connectivity among active focal points becomes harder to guarantee as f becomes larger. It is important to note that the minimum intersection size between two quorums of Qopt is equal to f + 3. We prove in the following section that there exists an implementation of atomic memory built on top of Qopt. This shows that f + 3 is the minimum quorum intersection size necessary to guarantee data consistency and data availability under our mobility model. Therefore, Qopt is optimal in the size of the minimum quorum intersection, that is, in terms of message transmissions, since the sender can compute a quorum consisting of its ⌈(n + f + 3)/2⌉ closest nodes. This is particularly advantageous in sensor networks because it can lead to energy savings [12].

2.3 An Implementation of Read/Write Atomic Memory

In this section we show that Qopt, the quorum system with minimum intersection size f + 3, is able to guarantee data consistency and availability under our system model and mobility constraints. We prove this by showing that there exists an implementation φ of atomic read/write memory built on top of Qopt. Our implementation consists of a suite of read, write, and recovery protocols, and is built on top of the focal points and the Qbcast abstraction [13].

2.3.1 The Qbcast Service

We say that a focal point Fi is faulty at time t if focal point region Gi does not contain any active node at time t, or Fi is not connected to a quorum of focal points. In our implementation, each read, write, and recovery request is forwarded to a quorum of focal points. This task is performed by the Qbcast service. It is tailored for the MDQ system and designed to hide lower-level details. Qbcast guarantees reliable delivery. It is invoked using the interface qbcast(m), where m is the message to transmit, containing one of the request tags write, read, confirm. The notation {si}i∈Q̄ ← qbcast(m, Q) denotes the Qbcast invocation over quorum Q, with {si}i∈Q̄ the set of replies, where Q̄ ⊆ Q. We call the subset Q̄ the reply set associated with request m. This set plays a crucial role in proving data availability and atomic consistency. Upon receiving a request m, Qbcast computes a quorum Q ∈ Qopt and transmits message m to each focal point in Q. It is important to note that qbcast(m) returns only if the node receives, within T time units, at least |Q| − (f + 1) replies from Q. If this does not occur, it waits for a random delay and retries later, since if this happens the focal point is faulty by our definition. Note that if read (or write) operations occur more frequently than write (or read) operations, we can reduce message transmissions by distinguishing between read and write quorums and making read (or write) quorums smaller. However, for simplicity of presentation we do not distinguish between read and write quorums [12] [13].

2.3.2 Protocols

The high-level description of the read/write/recovery protocols is illustrated in Figure 1. Each mobile node maintains a copy of the state s associated with the shared variable x, which is a compound object containing the value s.val of x, a timestamp s.t representing the time at which a node issued update s.val, and a confirmed tag that indicates whether s.val was propagated to a quorum of focal points. Each node can issue write, read, and recovery operations. A new state is generated each time a node issues a write operation.
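The write/read state flow just described (a two-phase write, and a read that returns the highest-timestamp state and confirms it if necessary) can be sketched as a small simulation. This is our own simplified, failure-free model: replicas are local Python objects and qbcast is an in-process call, not the paper's implementation.

```python
from dataclasses import dataclass

# Rough sketch of the write/read protocols (our simplified, failure-free
# model: replicas are local objects and qbcast is an in-process call).
@dataclass
class State:
    val: object
    t: float          # timestamp s.t
    confirmed: bool   # was s.val propagated to a full quorum?

replicas = {fp: State(None, 0.0, True) for fp in range(5)}  # n = 5 focal points

def qbcast_write(s, quorum):
    for fp in quorum:                       # phase 1: <write, s>
        if s.t > replicas[fp].t:
            replicas[fp] = State(s.val, s.t, False)
    for fp in quorum:                       # phase 2: <confirm, s>
        if replicas[fp].t == s.t:
            replicas[fp].confirmed = True

def qbcast_read(quorum):
    replies = [replicas[fp] for fp in quorum]   # the reply set
    s = max(replies, key=lambda r: r.t)         # state with highest timestamp
    if not s.confirmed:                         # finish an interrupted write
        qbcast_write(s, quorum)
    return s.val

qbcast_write(State("v1", 1.0, False), quorum=[0, 1, 2, 3])
print(qbcast_read([1, 2, 3, 4]))   # quorums intersect, so this returns v1
```

Because any two quorums intersect, a read over a different quorum still sees the highest-timestamp state, and the extra confirm step is what preserves consistency when the original writer fails between its two phases.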
Write(v):
  s ← {v, t, unconfirmed, rand}
  {acki}i∈Q̄ ← qbcast(<write, s>)
  {acki}i∈Q̄ ← qbcast(<confirm, s>)

Read/recovery():
  {si}i∈Q̄ ← qbcast(<read>)
  s ← state({si}i∈Q̄)
  if (s not confirmed)
    {acki}i∈Q̄ ← qbcast(<confirm, s>)
  return s.val

Figure 1. Write/Read/Recovery Protocols.

2.3.3 Write Protocol

A node C requesting a write of v computes a new state s consisting of value v, the current timestamp, the tag unconfirmed, and a random identifier rand. It transmits its update to a quorum of focal points via the Qbcast service by invoking qbcast(<write, s>) and successively qbcast(<confirm, s>) to make sure that a quorum of focal points received the update. Upon receiving a write request, each non-faulty focal point (including recovering ones) replaces its state with the new state s only if the associated timestamp s.t is higher than the timestamp of its local state, and sets its write tag to unconfirmed. This tag is set to confirmed upon receiving the confirm request sent in the second phase of the write protocol, or sent in the second phase of the read protocol in case the node that issued the write operation could not complete it due to failure [13].

2.3.4 Read Protocol

In the read protocol, a node C invokes qbcast(<read>), which forwards the read request to a quorum Q of focal points. Each non-faulty focal point in Q replies by sending a copy of its local state s. Upon receiving a set of replies from the Qbcast service, node C computes the state with the highest timestamp and returns the corresponding value. If the tag of s is equal to unconfirmed, it sends a confirm request. This is to guarantee the linearizability of the operations performed on the shared data in case a write operation did not complete due to client failure [11].

2.3.5 Recovery Protocol

It is invoked by a node C upon entering an empty region Gi. More precisely, C broadcasts a join request as soon as it enters a new focal point region and waits for replies. If it does not receive any reply within 2d time units, where d is the maximum transmission delay, it invokes the recovery protocol, which works in the same way as the read protocol [12].

2.4 Analysis

In this section, the key steps to prove the atomic consistency and data availability of the implementation presented in this paper are shown.

A. Data Availability

The availability of the data is a consequence of our failure model and of the Qbcast service. The following lemmas are useful to prove it and will also be used in showing atomic consistency [13].

Lemma 3. The Qbcast service invoked by an active focal point terminates within T time units from the invocation time.

Proof: This is true since an active focal point is able to communicate with a quorum of focal points because of Definition 2, and because at most f + 1 focal points in Q can be faulty when the request reaches their focal point regions. In fact, because of assumptions A1 and A2, at most f + 1 focal points can appear to be faulty during T time units. Therefore, at least |Q| − (f + 1) focal points in a quorum reply. This proves our thesis, since the Qbcast service guarantees reliable delivery and the maximum round-trip transmission delay is equal to T.

The following lemma and Theorem 1 are a straightforward derivation of the liveness of the Qbcast service.

Lemma 4. An active focal point recovers within T time units.

Theorem 1. This implementation of atomic read/write shared memory guarantees data availability.

Lemma 5. At any time in the execution there are at most f + 1 faulty and recovering focal points.

Proof: Because of assumptions A1 and A2, and Lemma 4, there are at most f + 1 faulty and recovering focal points during any time interval [t, t + τ] for any time t in the execution. This can occur if there are f faulty focal points before t and, during [t, t + τ], one of these faulty focal points recovers and another one fails.

B. Atomic Consistency

There exists a total ordering of the operations with certain properties. We need to show that the total order is consistent with the natural order of invocations and responses. That is, if o1 completes before o2 begins, then o1 <a o2.

Lemma 6. The reply set Q̄ associated with a request satisfies the following properties:

(i) |Q̄| ≥ ⌈(n − f)/2⌉;
(ii) |Q̄ ∩ Q| ≥ 2 for any Q ∈ Qopt.

Proof: The first property holds because the Qbcast service completes only upon receiving at least |Q| − (f + 1) replies from a quorum of servers. Therefore,

|Q̄| ≥ ⌈(n + f + 3)/2⌉ − (f + 1) ≥ ⌈(n − f + 1)/2⌉ ≥ ⌈(n − f)/2⌉
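The counting bounds used above (the Qopt quorum size from Lemma 2 and the reply-set size in property (i)) can be sanity-checked numerically. The enumeration below is our own illustration, not part of the proof:

```python
import math

# Numeric check (our own illustration) of the counting used in Lemma 2
# and in property (i) of Lemma 6.
def q_size(n, f):
    return math.ceil((n + f + 3) / 2)      # quorum size in Q_opt

for n in range(4, 60):
    for f in range(0, n - 2):              # resiliency bound: f <= n - 3
        q = q_size(n, f)
        assert q <= n                      # quorums fit in FP
        assert 2 * q - n >= f + 3          # minimum pairwise intersection
        # reply set: at least |Q| - (f + 1) replies arrive
        assert q - (f + 1) >= math.ceil((n - f) / 2)
print("Lemma 2 and Lemma 6(i) bounds hold for all tested (n, f)")
```

The worst-case overlap 2|Q| − n comes from inclusion-exclusion with |Q1 ∪ Q2| ≤ n, so the assertions mirror the derivations in the text term by term.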
Since |Q̄ ∪ Q| = |Q̄| + |Q| − |Q̄ ∩ Q| and |Q̄ ∪ Q| ≤ n, then

|Q̄ ∩ Q| ≥ ⌈(n − f)/2⌉ + ⌈(n + f + 3)/2⌉ − n

Therefore, since ⌈a/2⌉ + ⌈b/2⌉ ≥ ⌈(a + b)/2⌉ for any a, b ∈ R, then

|Q̄ ∩ Q| ≥ ⌈n + 3/2⌉ − n = 2.

Lemma 7. Let o1 be a write operation with associated state s1. Then, at any time t in the execution with t > res(o1) there exists a subset Mt of active focal points such that:

(i) |Mt| ≥ ⌈(n − f)/2⌉ − 1 (equality holds only if f focal points are faulty and one is recovering);
(ii) the state s of each focal point in Mt at time t is such that s1 ≤s s.

Proof: Let us denote t1 = res(o1) and I = [t1, t]. We prove the lemma by induction on the number k of subintervals W1, ..., Wi, ..., Wk of I of size ≤ τ, such that Wi = [t1 + (i − 1)τ, t1 + iτ] for i = 1, ..., k, and [t1, t] ⊆ ∪ki=1 Wi. We want to show that at any time t there exists a subset Mt satisfying properties (i) and (ii).

If k = 1, there exists a subset Mt of active focal points whose state is ≥ s1. It consists of the reply set Q̄ associated with o1, less an eventual additional failure occurring in [t1, t]. Therefore, because of Lemma 6 and assumptions A1 and A2 of our failure model, |Mt| ≥ ⌈(n − f)/2⌉ − 1.

The equality holds only if f + 1 focal points in Q did not receive the o1 request and one of the focal points in Q̄ fails during [t1, t]. This can occur only if one focal point recovers, because of assumption A1. In addition, the state of any recovering focal point in W1 is ≥ s1 because Mt ∩ Q ≠ Ø for each Q ∈ Qm. In fact,

|Q ∩ Mt| ≥ ⌈(n − f)/2⌉ + ⌈(n + f + 3)/2⌉ − n − 1 ≥ 1

Therefore, each focal point that recovered during W1 can be accounted in set Mt after its recovery. Therefore, |Mt| = ⌈(n − f)/2⌉ − 1 only if f focal points are faulty and one is recovering.

3. Methodology and Notation of Mobile UNITY

This section provides a gentle introduction to Mobile UNITY. A significant body of published work is available for the reader interested in a more detailed understanding of the model and its applications to the specification and verification of Mobile IP [4], and to the modeling and verification of mobile code, among others. Each UNITY program comprises a declare, always, initially, and assign section. The declare section contains a set of variables that will be used by the program; each is given a name and a type. The always section contains definitions that may be used for convenience in the remainder of the program or in proofs. The initially section contains a set of state predicates which must be true of the program before execution begins. Finally, the assign section contains a set of assignment statements. In each section, the symbol [] is used to separate the individual elements (declarations, definitions, predicates, or statements). Each assignment statement is of the form x := e if p, where x is a list of program variables, e is a list of expressions, and p is a state predicate called the guard [5]. When a statement is selected, if the guard is satisfied, the right-hand-side expressions are evaluated in the current state and the resulting values are stored in the variables on the left-hand side. The standard UNITY execution model involves a non-deterministic, weakly fair execution of the statements in the assign section. The execution of a program starts in a state satisfying the constraints imposed by the initially section. At each step, one of the assignment statements is selected and executed. The selection of the statements is arbitrary but weakly fair, i.e., each statement is selected infinitely often in an infinite execution [5] [6]. All executions are infinite. The Mobile UNITY execution model is slightly different, due to the presence of several new kinds of statements, e.g., the reactive statement and the inhibit statement described later. A toy example of a Mobile UNITY program is shown below.

Program host(i) at λ
  Declare
    token: integer
  Initially
    token = 0
  Assign
    Count:: token := token + 1
    Move:: λ := Move(i, λ)
End host

The name of the program is host, and instances are indexed by i. The first assignment statement in host increases the token count by one. The second statement models movement of the host from one location to another. In Mobile UNITY, movement is reduced to value assignment of a special variable λ that denotes the location of the host. We use Move(i, λ) to denote some expression that captures the motion patterns of host(i) [6].

The overall behavior of this toy-example host is to count tokens while moving. The program host(i) actually defines a class of programs parameterized by the identifier i. To create a complete system, we must create instances of this program. As shown below, the Components section of
the Mobile UNITY program accomplishes this. In our example we create two hosts and place them at initial locations λ0 and λ1.

System ad-hoc network
  Program host(i) at λ
    ……………
  End host
  Components
    host(0) at λ0
    host(1) at λ1
  Interactions
    host(0).token, host(1).token := host(0).token + host(1).token, 0
      when (host(0).λ = host(1).λ) ∧ (host(1).token ≠ 0)
    inhibit host(1).Move and host(0).Move
      when (host(0).λ = host(1).λ) ∧ (host(1).token > 10)
End ad-hoc network

Unlike UNITY, in Mobile UNITY all variables are local to each component. A separate section specifies coordination among components by defining when and how they share data. In mobile systems, coordination is typically location dependent. Furthermore, in order to define the coordination rules, statements in the Interactions section can refer to variables belonging to the components themselves using a dot notation. The section may be viewed as a model of physical reality (e.g., communication takes place only when hosts are within a certain range) or as a specification for desired system services. The operational semantics of the inhibit construct is to strengthen the guard of the affected statements whenever the when clause is true. The statements in the Interactions section are selected for execution in the same way as those in the component programs. Thus, without the inhibit statement, host(0) and host(1) may move away from each other before the token collection takes place, i.e., before the first interaction statement is selected for execution. With the addition of the inhibit statement, when two hosts are co-located and host(1) holds more than ten tokens, both hosts are prohibited from moving until host(1) has fewer than eleven tokens [7]. The inhibit construct adds both flexibility and control over the program execution. In addition to its programming notation, Mobile UNITY also provides a proof logic, a specialization of temporal logic. As in UNITY, safety properties specify that certain state transitions are not possible, while progress properties specify that certain actions will eventually take place. The safety properties include unless, invariant, and stable [7]:

• p unless q asserts that if the program reaches a state in which the predicate (p ∧ ¬q) holds, p will continue to hold at least as long as q does not, which may be forever.
• stable p is defined as p unless false, which states that once p holds, it will continue to hold forever.
• inv p means ((INIT ⇒ p) ∧ stable p), i.e., p holds initially and throughout the execution of the program. INIT characterizes the program's initial state.

The basic progress properties include ensures, leads-to, until, and detects:

• p ensures q states that if the program reaches a state where p is true, p remains true as long as q is false, and there is one statement that, if selected, is guaranteed to make the predicate q true. This is used to define the most basic progress property of programs.
• p leads-to q states that if the program reaches a state where p is true, it will eventually reach a state in which q is true. Notice that in leads-to, p is not required to hold until q is established.
• p until q, defined as ((p leads-to q) ∧ (p unless q)), is used to describe a progress condition which requires p to hold up to the point when q is established.
• p detects q is defined as (p ⇒ q) ∧ (q leads-to p).

All of the predicate relations defined above represent a short-hand notation for expressions involving Hoare triples quantified over the set of statements in the system. Mobile UNITY and UNITY logic share the same predicate relations. Differences become apparent only when one examines the definitions of unless and ensures and the manner in which they handle the new programming constructs of Mobile UNITY. Here are some properties the toy example satisfies:

(1) (host(0).token + host(1).token = k)
    unless (host(0).token + host(1).token > k)
    — The total count will not decrease.
(2) host(0).token = k leads-to host(0).token > k
    — The number of tokens on host(0) will eventually increase.

In the next section we employ the Mobile UNITY proof logic to give a formal requirements definition for the geoquorum approach (the application of the paper).

4. The Geoquorum Approach (The Application)

In this paper the Geoquorum algorithm is presented for implementing atomic read/write shared memory in mobile ad hoc networks. This approach is based on associating abstract atomic objects with certain geographic locations. It assumes the existence of focal points, geographic areas that are normally "populated" by mobile nodes. For example, a focal point may be a road junction or a scenic observation point. Mobile nodes that happen to populate a focal point participate in implementing a shared atomic object, using a replicated state machine approach. These objects, which are called focal point objects, are prone to occasional failures when the corresponding geographic areas are depopulated. The Geoquorums algorithm uses the fault-prone focal point objects to implement atomic read/write operations on a fault-tolerant virtual shared object. The Geoquorums algorithm uses a quorum-based strategy in which each quorum consists of a set of focal point objects. The quorums are used to maintain the consistency of the shared memory and to tolerate limited failures of the focal point objects, which may be caused by depopulation of the corresponding geographic areas. A mechanism for changing the set of quorums is presented, thus improving efficiency [8]. Overall, the new Geoquorums
algorithm efficiently implements read/write operations in a highly dynamic, mobile network. In this section, a new approach to designing algorithms for mobile ad hoc networks is presented. An ad hoc network uses no pre-existing infrastructure, unlike cellular networks that depend on fixed, wired base stations. Instead, the network is formed by the mobile nodes themselves, which co-operate to route communication from sources to destinations. Ad hoc communication networks are, by nature, highly dynamic. Mobile nodes are often small devices with limited energy that spontaneously join and leave the network. As a mobile node moves, the set of neighbors with which it can directly communicate may change completely. The nature of ad hoc networks makes it challenging to solve the standard problems encountered in mobile computing, such as location management, using classical tools. The difficulties arise from the lack of a fixed infrastructure to serve as the backbone of the network. In this section a new approach is developed that allows existing distributed algorithms to be adapted for highly dynamic ad hoc environments. One such fundamental problem in distributed computing is implementing atomic read/write shared memory [8]. Atomic memory is a basic service that facilitates the implementation of many higher-level algorithms. For example, one might construct a location service by requiring each mobile node to periodically write its current location to the memory. Alternatively, a shared memory could be used to collect real-time statistics, for example, recording the number of people in a building. Here, a new algorithm for atomic multi-writer/multi-reader memory in mobile ad hoc networks is presented. The problem of implementing atomic read/write memory is divided into two parts. First, we define a static system model, the focal point object model, that associates abstract objects with certain fixed geographic locales. The mobile nodes implement this model using a replicated state machine approach. In this way, the dynamic nature of the ad hoc network is masked by a static model. Moreover, it should be noted that this approach can be applied to any dynamic network that has a geographic basis. Second, an algorithm is presented to implement read/write atomic memory using the focal point object model. The implementation of the focal point object model depends on a set of physical regions, known as focal points. The mobile nodes within a focal point cooperate to simulate a single virtual object, known as a focal point object. Each focal point supports a local broadcast service, LBcast, which provides reliable, totally ordered broadcast. This service allows each node in the focal point to communicate reliably with every other node in the focal point. The LBcast service is used to implement a type of replicated state machine, one that tolerates joins and leaves of mobile nodes. If a focal point becomes depopulated, then the associated focal point object fails. (Note that it doesn't matter how a focal point becomes depopulated, be it as a result of mobile nodes failing, leaving the area, going to sleep, etc. Any depopulation results in the focal point failing.) The Geoquorums algorithm implements an atomic read/write memory algorithm on top of the geographic abstraction, that is, on top of the focal point object model. Nodes implementing the atomic memory use a Geocast service to communicate with the focal point objects. In order to achieve fault tolerance and availability, the algorithm replicates the read/write shared memory at a number of focal point objects. In order to maintain consistency, accessing the shared memory requires updating certain sets of focal points known as quorums. An important aspect of our approach is that the members of our quorums are focal point objects, not mobile nodes. The algorithm uses two sets of quorums: (i) get-quorums and (ii) put-quorums, with the property that every get-quorum intersects every put-quorum. There is no requirement that put-quorums intersect other put-quorums, or get-quorums intersect other get-quorums. The use of quorums allows the algorithm to tolerate the failure of a limited number of focal point objects. Our algorithm uses a Global Positioning System (GPS) time service, allowing it to process write operations using a single phase; prior single-phase write algorithms made other strong assumptions, for example, relying either on synchrony or on single writers. This algorithm guarantees that all read operations complete within two phases, but allows for some reads to be completed using a single phase: the atomic memory algorithm flags the completion of a previous read or write operation to avoid using additional phases, and propagates this information to various focal point objects [9]. As far as we know, this is an improvement on previous quorum-based algorithms. For performance reasons, at different times it may be desirable to use different sets of get-quorums and put-quorums. For example, during intervals when there are many more read operations than write operations, it may be preferable to use smaller get-quorums that are well distributed, and larger put-quorums that are sparsely distributed. In this case a client can rapidly communicate with a get-quorum, while communicating with a put-quorum may be slow. If the operational statistics change, it may be useful to reverse the situation. The algorithm presented here includes a limited "reconfiguration" capability: it can switch between a finite number of predetermined quorum systems, thus changing the available put-quorums and get-quorums. As a result of the static underlying focal point object model, in which focal point objects neither join nor leave, it isn't a severe limitation to require the number of predetermined quorum systems to be finite (and small). The resulting reconfiguration algorithm, however, is quite efficient compared to prior reconfigurable atomic memory algorithms. Reconfiguration doesn't significantly delay read or write operations, and as no consensus service is required, reconfiguration terminates rapidly.

The mathematical notation for the geoquorum approach:
- I, the totally ordered set of node identifiers.
- i0 ∈ I, a distinguished node identifier in I that is smaller than all other identifiers in I.
- S, the set of port identifiers, defined as N>0 × OP × I, where OP = {get, put, confirm, recon-done}.
- O, the totally ordered, finite set of focal point identifiers.
- T, the set of tags, defined as R≥0 × I.
- U, the set of operation identifiers, defined as R≥0 × S.
- X, the set of memory locations. For each x ∈ X:
  - Vx, the set of values for x
  - v0,x ∈ Vx, the initial value of x
- M, a totally ordered set of configuration names.
- c0 ∈ M, a distinguished configuration in M that is smaller than all other names in M.
- C, the totally ordered set of configuration identifiers, defined as R≥0 × I × M.
- L, the set of locations in the plane, defined as R × R.

Figure 2. Notations used in the Geoquorums algorithm.

Variable Types for Atomic Read/Write Object in the Geoquorum Approach for Mobile Ad Hoc Networks

The specification of a variable type for a read/write object in the geoquorum approach for mobile ad hoc networks is presented. A read/write object has the following variable type (see Figure 3) [8].

Put/get variable type τ

State
  tag ∈ T, initially <0, i0>
  value ∈ V, initially v0
  config-id ∈ C, initially <0, i0, c0>
  confirmed-set ⊆ T, initially Ø
  recon-ip, a Boolean, initially false

Operations
  put(new-tag, new-value, new-config-id)
    if (new-tag > tag) then
      value ← new-value
      tag ← new-tag
    if (new-config-id > config-id) then
      config-id ← new-config-id
      recon-ip ← true
    return put-ack(config-id, recon-ip)
  get(new-config-id)
    if (new-config-id > config-id) then
      config-id ← new-config-id
      recon-ip ← true
    confirmed ← (tag ∈ confirmed-set)
    return get-ack(tag, value, confirmed, config-id, recon-ip)
  confirm(new-tag)
    confirmed-set ← confirmed-set ∪ {new-tag}
    return confirm-ack
  recon-done(new-config-id)
    if (new-config-id = config-id) then
      recon-ip ← false
    return recon-done-ack()

Figure 3. Definition of the put/get variable type τ.

4.1 Operation Manager

In this section the Operation Manager (OM) is presented, an algorithm built on the focal point object model. As the focal point object model contains two entities, focal point objects and mobile nodes, two specifications are presented, one for the objects and one for the application running on the mobile nodes [8] [9].

4.1.1 Operation Manager Client

This automaton receives read, write, and recon requests from clients and manages quorum accesses to implement these operations (see Figure 4). The Operation Manager (OM) is the collection of all the operation manager clients (OMi, for all i in I). It is composed of the focal point objects, each of which is an atomic object with the put/get variable type:

Operation Manager Client Transitions

Input write(val)i
  Effect:
    current-port-number ← current-port-number + 1
    op ← <write, put, <clock, i>, val, recon-ip, <0, i0, c0>, Ø>

Output write-ack()i
  Precondition:
    conf-id = <time-stamp, pid, c>
    if op.recon-ip then
      ∀ c′ ∈ M, ∃ P ∈ put-quorums(c′): P ⊆ op.acc
    else
      ∃ P ∈ put-quorums(c): P ⊆ op.acc
    op.phase = put
    op.type = write
  Effect:
    op.phase ← idle
    confirmed ← confirmed ∪ {op.tag}

Input read()i
  Effect:
    current-port-number ← current-port-number + 1
    op ← <read, get, ⊥, ⊥, recon-ip, <0, i0, c0>, Ø>

Output read-ack(v)i
  Precondition:
    conf-id = <time-stamp, pid, c>
    if op.recon-ip then
      ∀ c′ ∈ M, ∃ G ∈ get-quorums(c′): G ⊆ op.acc
    else
      ∃ G ∈ get-quorums(c): G ⊆ op.acc
    op.phase = get
    op.type = read
    op.tag ∈ confirmed
    v = op.value
  Effect:
    op.phase ← idle

Internal read-2()i
  Precondition:
    conf-id = <time-stamp, pid, c>
    if op.recon-ip then
      ∀ c′ ∈ M, ∃ G ∈ get-quorums(c′): G ⊆ op.acc
    else
      ∃ G ∈ get-quorums(c): G ⊆ op.acc
    op.phase = get
    op.type = read
    op.tag ∉ confirmed
  Effect:
    current-port-number ← current-port-number + 1
    op.phase ← put
    op.recon-ip ← recon-ip
    op.acc ← Ø

Output read-ack(v)i
  Precondition:
    conf-id = <time-stamp, pid, c>
(IJCNS) International Journal of Computer and Network Security, 15
Vol. 2, No. 6, June 2010

If op. recon-ip then distributed clients through the geocast service, to implement
√ C/ ∈ M, э P ∈ put-quorums(C/): P C op. acc an atomic object (with port set q=s)corresponding to a
Else particular focal point. We refer to this algorithm as the
Э P ∈ put-quorums(C): P C op. acc Focal Point Emulator (FPE). The FPE client has three basic
Op. phase=put purposes. First, it ensures that each invocation receives at
Op. type=read most one response (eliminating duplicates).Second, it
v=op. value abstracts away the geocast communication, providing a
Effect: simple invoke/respond interface to the mobile node [9].
Op. phase idle Third, it provides each mobile node with multiple ports to
Confirmed confirmed U {op. tag} the focal point object; the number of ports depends on the
Input recon (conf-name)i atomic object being implemented. The remaining code for
Effect: the FPE server is in fig .5.When a node enters the focal
Conf-id <clock, i, conf-name> point, it broadcasts a join-request message using the LBcast
Recon-ip true service and waits for a response. The other nodes in the
Current-port-number focal point respond to a join-request by sending the current
Current-port-number +1 state of the simulated object using the LBcast service. As an
Op < recon, get, ┴, ┴, true, conf-id, Ø> optimization, to avoid unnecessary message traffic and
Internal recon-2(cid) i collisions, if a node observes that someone else has already
Precondition responded to a join-request, and then it does not respond.
√ C/ ∈ M, э G ∈ get-quorums(C/): G C op. acc Once a node has received the response to its join-request,
√ C/ ∈ M, э P ∈ put-quorums(C/): P C op. acc then it starts participating in the simulation, by becoming
Op. type=recon active. When a node receives a Geocast message containing
Op. phase=get an operation invocation, it resends it with the Lbcast service
Cid=op. recon-conf-id to the focal point, thus causing the invocation to become
Effect ordered with respect to the other LBcast messages (which
Current-port-number are join-request messages, responses to join requests, and
Current-port-number +1 operation invocations ).since it is possible that a Geocast is
Op. phase put received by more than one node in the focal point ,there is
Op. acc Ø some bookkeeping to make sure that only one copy of the
Output recon-Ack(c) i same invocation is actually processed by the nodes. There
Precondition exists an optimization that if a node observes that an
Cid=op. recon-conf-id invocation has already been sent with LBcast service, then it
Cid= <time-stamp, Pid, c> does not do so. Active nodes keep track of operation
Э P ∈ put-quorums(C): P C op. acc invocations in the order in which they receive them over the
Op. type=recon LBcast service. Duplicates are discarded using the unique
Op. phase=put operation ids. The operations are performed on the
Effect: simulated state in order. After each one, a Geocast is sent
If (conf-id=op. recon-conf-id) then back to the invoking node with the response. Operations
Recon-ip false complete when the invoking node with the response.
Op. phase idle Operations complete when the invoking node remains in the
Input geo-update (t, L) i same region as when it sent the invocation, allowing the
Effect: geocast to find it. When a node leaves the focal point, it re-
Clock 1 initializes its variables .A subtle point is to decide when a
Fig .4 Operation Manager Client Read/Write/Recon and node should start collecting invocations to be applied to its
Geo-update Transitions for Node replica of the object state. A node receives a snapshot of the
state when it joins. However by the time the snapshot is
received, it might be out of date, since there may have been
some intervening messages from the LBcast service that
4.2 Focal Point Emulator Overview
have been received since the snapshot was sent. Therefore
The focal point emulator implements the focal point the joining node must record all the operation invocations
object Model in an ad hoc mobile network. The nodes in a that are broadcast after its join request was broadcast but
focal point (i.e. in the specified physical region) collaborate before it received the snapshot .this is accomplished by
to implement a focal point object. They take advantage of having the joining node enter a "listening" state once it
the powerful LBcast service to implement a replicated state receives its own join request message; all invocations
machine that tolerates nodes continually joining and leaving received when a node is in either the listening or the active
.This replicated state machine consistently maintains the state are recorded, and actual processing of the invocations
state of the atomic object, ensuring that the invocations are can start once the node has received the snapshot and has
performed in a consistent order at every mobile node [8].In the active status. A precondition for performing most of
this section an algorithm is presented to implement the focal these actions that the node is in the relevant focal point.
point object model. the algorithm allows mobile nodes This property is covered in most cases by the integrity
moving in and out of focal points, communicating with requirements of the LBcast and Geocast services, which
16 (IJCNS) International Journal of Computer and Network Security,
Vol. 2, No. 6, June 2010

imply that these actions only happen when the node is in the Precondition:
appropriate focal point [8]. Peek (geocast-queue) =m
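The join protocol just described (join-request, then listening, then active once the snapshot arrives) can be sketched as a small state machine. The Python below is an illustrative reconstruction for exposition only; the class and method names are invented here and do not come from the paper, and message transport is abstracted away.

```python
# Illustrative sketch (assumed names) of the FPE join protocol:
# a node moves idle -> joining -> listening -> active, and records
# invocations from the moment it sees its own join-request so that
# none are missed between the snapshot being sent and received.

class FPEServer:
    def __init__(self, node_id, v0):
        self.node_id = node_id
        self.val = v0                    # replica of the simulated state
        self.status = "idle"
        self.join_id = None
        self.pending_ops = []            # recorded invocations

    def join(self, clock):
        self.join_id = (clock, self.node_id)
        self.status = "joining"
        return ("join-req", self.join_id)          # sent via LBcast

    def on_join_req(self, jid):
        if self.status == "joining" and jid == self.join_id:
            # our own request came back ordered: start listening
            self.status = "listening"
        elif self.status == "active":
            return ("join-ack", jid, self.val)     # share current state

    def on_join_ack(self, jid, v):
        if self.status == "listening" and jid == self.join_id:
            self.val = v                           # adopt the snapshot
            self.status = "active"

    def on_invoke(self, inv):
        # recorded while listening OR active, applied only when active
        if self.status in ("listening", "active"):
            self.pending_ops.append(inv)
```

For example, a node that hears its own join-request starts recording invocations immediately, so an invocation arriving before the snapshot is still captured and can be replayed once the node becomes active.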
Focal Point Emulator Server Transitions

Internal join( )obj,i
  Precondition:
    location ∈ FP-location
    status = idle
  Effect:
    join-id ← <clock, i>
    status ← joining
    enqueue(LBcast-queue, <join-req, join-id>)

Input LBcast-rcv(<join-req, jid>)obj,i
  Effect:
    if ((status = joining) ∧ (jid = join-id)) then
      status ← listening
    if ((status = active) ∧ (jid ∉ answered-join-reqs)) then
      enqueue(LBcast-queue, <join-ack, jid, val>)

Input LBcast-rcv(<join-ack, jid, v>)obj,i
  Effect:
    answered-join-reqs ← answered-join-reqs ∪ {jid}
    if ((status = listening) ∧ (jid = join-id)) then
      status ← active
      val ← v

Input geocast-rcv(<invoke, inv, oid, loc, FP-loc>)obj,i
  Effect:
    if (FP-loc = FP-location) then
      if (<inv, oid, loc> ∉ pending-ops ∪ completed-ops) then
        enqueue(LBcast-queue, <invoke, inv, oid, loc>)

Input LBcast-rcv(<invoke, inv, oid, loc>)obj,i
  Effect:
    if ((status = listening ∨ status = active) ∧
        (<inv, oid, loc> ∉ pending-ops ∪ completed-ops)) then
      enqueue(pending-ops, <inv, oid, loc>)

Internal simulate-op(inv)obj,i
  Precondition:
    status = active
    peek(pending-ops) = <inv, oid, loc>
  Effect:
    (val, resp) ← δ(inv, val)
    enqueue(geocast-queue, <response, resp, oid, loc>)
    enqueue(completed-ops, dequeue(pending-ops))

Internal leave( )obj,i
  Precondition:
    location ∉ FP-location
    status ≠ idle
  Effect:
    status ← idle
    join-id ← <0, i0>
    val ← v0
    answered-join-reqs ← Ø
    pending-ops ← Ø
    completed-ops ← Ø
    LBcast-queue ← Ø
    geocast-queue ← Ø

Output LBcast(m)obj,i
  Precondition:
    peek(LBcast-queue) = m
  Effect:
    dequeue(LBcast-queue)

Output geocast(m)obj,i
  Precondition:
    peek(geocast-queue) = m
  Effect:
    dequeue(geocast-queue)

Input get-update(l, t)obj,i
  Effect:
    location ← l
    clock ← t

Fig. 5 FPE server transitions for client i and object obj of variable type τ = <V, v0, invocations, responses, δ>

5. Problem Specification

The methodology for formal specification of the geoquorum approach is illustrated by considering a set of mobile nodes with identifiers taking values from 0 through (N-1), moving through space. Initially some of the nodes are idle while others are active. Nodes communicate with each other while in range. A node can become idle at any time but can be reactivated if it encounters an active node. The basic requirement is that of determining that all nodes are idle and storing that information in a Boolean flag (claim) located on some specific node, say node i0. Formally, the problem reduces to:

Stable W    (S1)

Claim detects W    (P1)

where W is the condition

W = <∧ i: 0 ≤ i < N :: idle[i]>    (D1)

(S1) is a safety property stating that once all nodes are idle, no node ever becomes active again. (P1) is a progress property requiring the flag claim to eventually record the system's quiescence. We use idle[i] to express the quiescence of a node and define active[i] to be its negation. It is important to note that the problem definition in this case does not depend on the nature of the underlying computation.

5.1 Formal Derivation

In this section, specification refinement techniques are employed toward the goal of generating a programming solution that accounts for the architectural features of ad hoc networks, which form opportunistically as nodes move in and out of range. The refinement process starts by capturing high-level behavioral features of the underlying application [14] [15]. In each case, we provide the informal motivation behind that particular step and show the resulting changes to the specification. As an aid to the reader, each refinement concludes with a list of specification statement labels that captures the current state of the refinement process, as in:

Refinement 0: P1, S1
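As a concrete, purely illustrative reading of (S1) and (P1), the tiny monitor below evaluates W over the idle flags and latches claim once the system is quiescent. The function names and the list representation of the idle flags are assumptions made for this example only, not part of the specification.

```python
# Illustrative sketch of the detection problem (S1)/(P1):
# W holds when every node is idle; the flag `claim` on node i0
# must eventually (and permanently) record that quiescence.

def quiescent(idle):
    # W = <∧ i: 0 <= i < N :: idle[i]>
    return all(idle)

def step(idle, claim):
    """One observation step of the monitor on node i0.

    `claim` latches: once set, it stays set, matching the
    stability of W asserted by (S1)."""
    return claim or quiescent(idle)
```

The refinements that follow replace the direct evaluation of W with a distributed counting mechanism, since no single node can observe all idle flags at once.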
5.1.1 Refinement 1: Activation Principle

A node invocation may become put, get, confirm, or recon-done. The safety properties of these computations can be expressed as:

Get[i]    (S2)
unless
<∃ id: config-id ≠ new-config-id :: config-id > new-config-id>

Put[i]    (S3)
unless
<∃ id, T: config-id ≠ new-config-id, tag ≠ new-tag :: config-id > new-config-id, tag > new-tag>

Confirm[i]    (S4)
unless
<∃ T: tag ≠ new-tag :: tag > new-tag>

Recon-done[i]    (S5)
unless
<∃ id, ip: config-id ≠ new-config-id :: recon-ip = false>

In the preceding properties, all the invocations of the application and their conditions are determined.

Refinement 1: P1, S2, S3, S4, S5.

5.1.2 Refinement 2: Parameters Based Accounting

Frequent node movement makes direct counting inconvenient, but we can accomplish the same thing by associating an id and port-number with each invocation node. Independent of whether it is currently idle or active, each node in the system holds zero or more ids. The advantage of this approach is that ids can be collected and then counted at the collection point. If we define D to be the number of ids in the system and I to be the number of confirm nodes, i.e.,

D ≡ <+ i :: id[i]>    (D2)

I ≡ <+ i: confirm[i] :: 1>    (D3)

the relationship between the two is established by the invariant:

Inv. D = I    (S6)

By adding this constraint to the specification, the quiescence property (W) may be replaced by the predicate (D = N), where N is the number of nodes in the system. Property (P1) is then replaced by:

Claim detects D = N    (P2)

with the collection mechanism left undefined for the time being.

Refinement 2: P2, S2, S3, S4, S5, S6.

5.1.3 Refinement 3: Config-Ids Increasing

To maintain the invariant that the number of ids in the system reflects the number of idle nodes, activation of an idle node requires that the number of ids increase by one. Therefore, when an active node put-invokes an idle node, they must increase config-id between them. To express this, we add a history variable config-id′[i], which maintains the value config-id[i] held before it last changed. The put invocation is a history state of the put-ack state node, and the safety property of the put invocation is as follows:

Put[i]    (S7)
unless
Put-ack[j]

This captures the requirement that, when node i activates and becomes node j, the config-id of node j must increase in the same step. Clearly, this new property strengthens properties (S2), (S3), (S6).

Refinement 3: P2, S4, S5, S7.

5.1.4 Refinement 4: Operations-Id Collection

According to the FPE server and FPE client algorithms, the completed operations are arranged in rank of the operations that have been simulated, initially Ø, and there exists val ∈ V holding the current value of the simulated atomic node, initially v0. oid is a history id of the operation; new-oid is the current id of the operation. To simplify our narration, a host with a higher id is said to rank higher than a node with a lower id. oid(v0) should eventually collect all N oids. We will accomplish this by forcing nodes to pass their oids to lower ranked nodes. For this, we introduce two definitions:

L = <+ i: obj[i] :: oid[i]>    (D4)

counts the number of oids idle agents hold. Obviously, L = N when all nodes are idle. We also add

w = <max i: L = N ∧ oid[i] > 0 :: i>    (D5)
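The oid passing described above, where every holder hands its oids down to a lower ranked node until node 0 holds all N of them, can be simulated with a short script. This is an illustrative sketch only: the function name, the list representation of holdings, and the specific "always pass to rank w − 1" hand-off pattern are assumptions for the example, not part of the derivation.

```python
# Illustrative simulation of oid collection: the highest ranked
# holder repeatedly passes all of its oids to a lower ranked node,
# so the index of the highest ranked holder strictly decreases
# until node 0 has collected everything.

def collect_ids(holdings):
    """holdings[i] = number of oids node i holds; the total is
    conserved by every hand-off. Returns the trace of the highest
    ranked holder after each step."""
    n = len(holdings)
    trace = []
    while True:
        holders = [i for i in range(n) if holdings[i] > 0]
        w = max(holders)              # highest ranked holder
        trace.append(w)
        if w == 0:
            return trace              # node 0 has collected all oids
        # pass all of w's oids to a lower ranked node (here: w - 1)
        holdings[w - 1] += holdings[w]
        holdings[w] = 0
```

Running it on four nodes that each start with one oid gives the trace 3, 2, 1, 0: the highest ranked holder strictly decreases, which is exactly the progress the next refinement formalizes.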
Definition (D5) keeps track of the highest ranked node holding oids. After all nodes are idle, we will force w to decrease until it reaches 0. When w = 0 and L = N, obj(v0) will have collected all the oids. At this stage we add a new progress property,

w = k > 0 until w < k    (P3)

that begins to shape the progress of oid passing. As mentioned, the until property is a combination of a safety property (the unless portion) and a progress property (the leads-to portion). As long as the highest ranked oid holder passes its oids to a lower ranked node, we can show that all the oids will reach obj(v0) without having to restrict the behavior of any node except node(w) = obj(w). We can then replace (P2) with

Claim detects (w = 0)    (P4)

Refinement 4: P2, P3, S4, S5, S7

5.1.5 Refinement 5: Pairwise Communication

According to the code for the FPE client and FPE server discussed in Section 3.2, a node can clearly only activate another node or pass join-ids (jids) to another node if the two parties can communicate. To accomplish this, we introduce the predicate com(i, j), which holds if and only if nodes i and j can communicate.

We begin this refinement with a new safety property:

Idle[i]    (S8)
unless
<∃ j: j ≠ i :: active′[j] ∧ active[j] ∧ active[i] ∧ (join-id[i] + join-id[j] = (join-id)′[i] + (join-id)′[j] - 1) ∧ com(i, j)>

This requires that nodes i and j be in communication when j activates i. We also add the property:

Join-id[j] > 0 ∧ L = N ∧ j ≠ 0    (P5)
until
Join-id[j] = 0 ∧ L = N ∧ <∃ i < j :: com(i, j)>

This requires a node to pass its jids and, when it does, to have been able to communicate with a lower ranked node. As we leave this refinement, we separate property (P5) into its two parts: a progress property,

Join-id[j] > 0 ∧ L = N ∧ j ≠ 0    (P6)
leads-to
Join-id[j] > 0 ∧ L = N ∧ <∃ i < j :: com(i, j)>

and a safety property,

Join-id[j] > 0 ∧ L = N ∧ j ≠ 0    (S9)
unless
Join-id[j] = 0 ∧ L = N ∧ <∃ i < j :: com(i, j)>

Refinement 5: P2, P3, P6, S4, S5, S7, S8, S9

5.1.6 Refinement 6: Contact Guarantee

Property (P6) conveys two different things. First, it ensures that a node with join-ids will meet a lower ranked node, if one exists. Second, it requires the join-ids to be passed to the lower ranked node. The former requires us to either place restrictions on the movement of the mobile nodes or make assumptions about the movement. For this reason, we refine property (P6) into two obligations. The first,

Join-id[j] > 0 ∧ L = N    (P7)
leads-to
Join-id[j] > 0 ∧ L = N ∧ <∃ i < j :: com(i, j)>

guarantees that a node with join-ids will meet a lower ranked node. The second,

Join-id[j] > 0 ∧ L = N ∧ <∃ i < j :: com(i, j)>    (P8)
leads-to
Join-id[j] = 0 ∧ L = N ∧ <∃ i < j :: com(i, j)>

forces a node that has met a lower ranked node to pass its join-ids. At the point of passing, communication is still available. These two new properties replace property (P6).

Refinement 6: P2, P3, P7, P8, S4, S5, S6, S7, S8, S9

6. Conclusions and Future Work

In this paper we presented the formal derivation of the geoquorum approach for mobile ad hoc networks. This formal specification is built from Mobile UNITY primitives and proof logic. The paper also provides strong evidence that a formal treatment of mobility and its applications is not only feasible but also worthwhile, given the complexities of mobile computing. There is an open area for using this specification to mechanistically construct the program text: first defining the program components, and then deriving the program statements directly from the final specification into the resulting program (called a system in Mobile UNITY). Also, we have reviewed a small set of mobility constraints that are necessary to ensure strong data guarantees in highly mobile networks. We have also discussed, as a survey, quorum systems in highly mobile networks and devised a condition that is necessary for a quorum system to guarantee data consistency and availability under our mobility constraints. This work leaves several open questions, such as the problem of
dealing with network partitions and periods of network instability in which our set of assumptions is invalid.

Acknowledgment

The authors would like to thank the Conference INFOS 2010 (Cairo University) reviewers for their constructive comments and suggestions.

References

[1] A. Smith, H. Balakrishnan, M. Goraczko, N. Priyantha, "Support for Location: Tracking Moving Devices with the Cricket Location System", in: Proceedings of the 2nd International Conference on Mobile Systems, Applications, and Services, June 2004.
[2] S. Gilbert, N. Lynch, A. Shvartsman, "RAMBO II: Rapidly Reconfigurable Atomic Memory for Dynamic Networks", in: Proceedings of the International Conference on Dependable Systems and Networks, June 2003, pp. 259-269.
[3] B. Liu, P. Brass, O. Dousse, P. Nain, D. Towsley, "Mobility Improves Coverage of Sensor Networks", in: Proceedings of Mobile Ad Hoc Networking and Computing, May 2005, pp. 300-308.
[4] R. Friedman, M. Gradinariu, G. Simon, "Locating Cache Proxies in MANETs", in: Proceedings of the 5th International Symposium on Mobile Ad Hoc Networks, 2004, pp. 175-186.
[5] J.-P. Luo, J.-P. Hubaux, P. Eugster, "PAN: Providing Reliable Storage in Mobile Ad Hoc Networks with Probabilistic Quorum Systems", in: Proceedings of the 4th International Symposium on Mobile Ad Hoc Networking and Computing, 2003, pp. 1-12.
[6] D. Tulone, "Mechanisms for Energy Conservation in Wireless Sensor Networks", Ph.D. Thesis, Department of Computer Science, University of Pisa, Dec 2005.
[7] W. Zhao, M. Ammar, E. Zegura, "A Message Ferrying Approach for Data Delivery in Sparse Mobile Ad Hoc Networks", in: Proceedings of the 5th International Symposium on Mobile Ad Hoc Networking and Computing, May 2004, pp. 187-198.
[8] S. Dolev, S. Gilbert, N. Lynch, A. Shvartsman, J. Welch, "GeoQuorums: Implementing Atomic Memory in Mobile Ad Hoc Networks", in: Proceedings of the 17th International Conference on Distributed Computing, October 2003, pp. 306-320.
[9] H. Wu, R. Fujimoto, R. Guensler, M. Hunter, "MDDV: A Mobility-Centric Data Dissemination Algorithm for Vehicular Networks", in: Proceedings of the 1st International Workshop on Vehicular Ad Hoc Networks, Oct 2004, pp. 47-56.
[10] J. Polastre, J. Hill, D. Culler, "Versatile Low Power Media Access for Wireless Sensor Networks", in: Proceedings of SenSys, 2004.
[11] S. PalChanduri, J.-Y. Le Boudec, M. Vojnovic, "Perfect Simulations for Random Mobility Models", in: Annual Simulation Symposium, 2005, pp. 72-79. Available from: http://www.cs.rice.edu/santa/ research/mobility
[12] J.-Y. Le Boudec, M. Vojnovic, "Perfect Simulation and Stationarity of a Class of Mobility Models", in: Proceedings of Infocom, 2005.
[13] T. Hara, "Location Management of Replication Considering Data Update in Ad Hoc Networks", in: Proceedings of the 20th International Conference AINA, 2006, pp. 753-758.
[14] T. Hara, "Replication Management for Data Sharing in Mobile Ad Hoc Networks", Journal of Interconnection Networks 7(1), 2006, pp. 75-90.
[15] Y. Sawai, M. Shinohara, A. Kanzaki, T. Hara, S. Nishio, "Consistency Management Among Replicas Using a Quorum System in Ad Hoc Networks", in: Proceedings of MDM, 2006, pp. 128-132.
Power Management in Wireless Ad-hoc Networks with Directional Antennas

Mahasweta Sarkar1 and Ankur Gupta2

1 Department of Electrical and Computer Engineering, San Diego State University, San Diego, California, USA
Email: msarkar2@mail.sdsu.edu
2 Department of Electrical and Computer Engineering, San Diego State University, San Diego, California, USA
Email: ankur275@gmail.com

Abstract: Power control in wireless ad-hoc networks has been known to significantly increase network lifetime and also impact system throughput positively. In this paper, we investigate the virtues of power control in an ad-hoc network where nodes are equipped with directional antennas. We use a cross-layer power optimization scheme. The novelty of our work lies in the fact that we exploit the knowledge of the network topology to dynamically change our power control scheme. The information about the network topology is retrieved from the Network Layer. Specifically, we consider three different network topologies: (i) a sparse topology, (ii) a dense topology and (iii) a cluster-based topology. The MAC layer implements a dynamic power control scheme which regulates the transmission power of a transmitting node based on the node density at the receiver node. In addition, a transmitting node can adjust its transmission power to the optimal level if the SINR value at the receiving node is known. Our cross-layer power control algorithm takes this fact into account. Extensive simulation in NS2 shows that in all three topologies our cross-layer design (use of directional antennas with power control) leads to prolonged network lifetime and significantly increased system throughput.

Keywords: ad hoc network, power management, directional antenna, cross layer

1. Introduction

An ad-hoc or short-lived network is a network of two or more mobile devices connected to each other without the help of intervening infrastructure. In contrast to a fixed wireless network, an ad-hoc network can be deployed in remote geographical locations and requires minimum setup and administration costs. Moreover, the integration of an ad-hoc network with a bigger network, such as the Internet, or a wireless infrastructure network increases the coverage area and application domain of the ad-hoc network. However, communication in an ad-hoc network between different hosts that are not directly linked is an issue not only for search and rescue operations, but also for educational and business purposes.

An ad-hoc network can be classified into two main types: mobile ad-hoc networks and mobile ad-hoc sensor networks. Unlike typical sensor networks, which communicate directly with the centralized controller, a mobile ad-hoc sensor network follows a broader sequence of operational scenarios, thus demanding a less complex setup procedure. A mobile ad-hoc sensor or hybrid ad-hoc network consists of a number of sensors spread in a geographical area. Each sensor is capable of mobile communication and has some level of intelligence to process signals and to transmit data. In order to support routed communications between two mobile nodes, the routing protocol determines the node connectivity and routes packets accordingly. This makes a mobile ad-hoc sensor network highly adaptable, so that it can be deployed in almost all environments, such as military operations, security in shopping malls and hotels, or locating a vacant parking spot.

Each node in the network is equipped with an antenna (omnidirectional or directional) which enables it to find its neighbors and communicate with them. Since the antenna plays such an important role in the communication, the choice of the antenna becomes critical. In the existing schemes, an omnidirectional antenna was used both at the transmitter and at the receiver end. Although this configuration does enable us to achieve the task of communication, there are many drawbacks to using an omnidirectional antenna as compared to a directional one. Extensive research in MAC and PHY layer protocols is being conducted to support these antennas. With the advancement in antenna technology it has become possible to use array antennas and purely directional antennas in ad-hoc networks. Some of the advantages of using directional antennas as compared to omnidirectional antennas are increased range, spatial reuse and reduced collisions. Since the antenna beam can be pointed in a particular direction and all the energy can be focused there, the range is increased. Unlike omnidirectional antennas, by using directional antennas we can have more nodes communicating in the same space, thereby exploiting spatial reuse and increasing the throughput of the system. Since omnidirectional antennas transmit and receive in all directions, they are more prone to collisions, which is not the case when using directional antennas. Now that the choice of antennas has been discussed, we turn to the main focus of this paper, which is power control. The first drawback of using the fixed-power approach is that it negatively impacts channel utilization by not allowing concurrent transmissions to take place over the reserved floor. The second drawback of the fixed-power approach is that the received power may be far more than necessary to achieve the required signal-to-interference-and-noise ratio (SINR), thus wasting the node's energy and shortening its lifetime. Therefore, there is a need for a solution, possibly a
cross-layer one that allows concurrent transmissions to take place in the same vicinity and simultaneously conserves energy.

Ad-hoc networks can be broadly classified into three topologies, namely sparse, dense and cluster form. Since the MAC and Network layers play an important role in conserving power and increasing the system throughput, it is essential to have a cross-layer design that can support different topologies. The MAC layer in a system controls how and when a node accesses the medium. This also includes the transmission power used for communication. If a transmitting node is aware of the ideal SINR value at the receiver, it can adjust its transmitting power such that only the required amount of power is used for communication. In doing so, not only does the transmitter save power (compared to transmitting at full power) but it also interferes with fewer nodes (as increased power would mean increased range). In the case of a sparse network, we introduce the "locally aware" power management protocol, in which a receiver can inform its transmitter of the minimum required SINR (to be able to decode the signal) so that the transmitter can adjust its transmitting power accordingly. In a dense topology a receiver may face interference from more nodes than in a sparse topology. We propose the "globally aware" protocol here. In such a case it is essential that the receiver transmits its minimum required SINR value in all directions. This will ensure that all the transmitters in this receiver's neighborhood will control their power such that they no longer interfere with this receiver but can still carry on with their own communications. Both the sparse and the dense topology designs are supported by a power-aware routing protocol at the Network layer. Since a sensor network is a network in which nodes gather relevant data about an activity and send it to an actor node which processes the data further, it is important that efficient routing is performed from the sensor nodes to the actor nodes. Cluster topologies are very common in ad-hoc sensor networks to record signals of interest at certain places only. Fig. 1 shows the difference between a homogeneous and a non-homogeneous spatial distribution of nodes. We study the changes in power consumed and throughput when all cluster nodes transmit at the same power compared to when there is power control. Power control is implemented by making sure that each cluster uses its own power level such that it does not interfere with other cluster nodes. This simple cross-layer design helps prolong the network lifetime and also shows an increase in throughput.

Figure 1. Difference between a Homogeneous and Cluster Form Spatial Distribution of Nodes

In this paper we start by looking at the power management schemes used in ad-hoc networks in Section II. We then present our cross-layer design in Section III. The simulation setup in Section IV is followed by results and analysis in Section V. We conclude this paper in Section VI, followed by the references in Section VII.

2. Previous Work

The research community has done a lot of work to suggest potential ways of power control in ad-hoc networks. In [1] Rong Zheng and Robin Kravets proposed an on-demand power management scheme for ad-hoc networks. It is an extensible on-demand power management framework for ad-hoc networks that adapts to traffic load. Nodes maintain soft-state timers that determine power management transitions. By monitoring routing control messages and data transmission, these timers are set and refreshed on demand. Nodes that are not involved in data delivery may go to sleep as supported by the MAC protocol. This soft state is aggregated across multiple flows and its maintenance requires no additional out-of-band messages. The implementation is a prototype of the framework in the NS-2 simulator that uses the IEEE 802.11 MAC protocol. Simulation studies using this scheme with the Dynamic Source Routing protocol show a reduction in energy consumption near 50% when compared to a network without power management under both long-lived CBR traffic and on-off traffic loads, with comparable throughput and latency. Preliminary results also show that it outperforms existing routing backbone election approaches. In [2] Zongpeng Li and Baochun Li present a probabilistic power management scheme for ad-hoc networks. It introduces Odds, an integrated set of energy-efficient and fully distributed algorithms for power management in wireless ad-hoc networks. Odds is built on the observation that explicit and periodic re-computation of the backbone topology is costly with respect to its additional bandwidth overhead, especially when nodes are densely populated or highly mobile. Building on a fully probabilistic approach, Odds seeks to make a minimum-overhead, perfectly balanced, and fully localized decision on each node with respect to when and how long it needs to enter standby mode to conserve energy. Such a decision does not rely on periodic message broadcasts in the local neighborhood, so that Odds is scalable as node density increases. Detailed mathematical analysis, discussions and simulation results have shown that Odds is indeed able to achieve its objectives while operating in a wide range of density and traffic loads. Power control schemes using purely directional transmission and reception have not been researched in detail. Exploiting the advantages of directional antennas [13, 14, 15, 20] shows considerable improvement in the overall network performance.

MAC layer solutions have been proposed for a long time. In [3] a scheme for controlling the transmission power is presented, looking more specifically at the effects of using different transmit powers on the average power consumption and end-to-end network throughput in a wireless ad-hoc environment. This power management approach reduces the
system power consumption and thereby prolongs the battery life of mobile nodes. Furthermore, the invention improves the end-to-end network throughput as compared to other ad-hoc networks in which all mobile nodes use the same transmit power. The improvement is due to the achievement of a tradeoff between minimizing interference ranges, reduction in the average number of hops to reach a destination, reducing the probability of having isolated clusters, and reducing the average number of transmissions, including retransmissions due to collisions. The present invention provides a network with better end-to-end throughput performance and lowers the transmit power. [6] and [8] also discuss transmission power control.

Topology control [4], [9] schemes have also been proposed. In [4] the authors present Span, a power-saving technique for multi-hop ad-hoc wireless networks that reduces energy consumption without significantly diminishing the capacity or connectivity of the network. Span builds on the observation that when a region of a shared-channel wireless network has a sufficient density of nodes, only a small

control when nodes are non-homogeneously dispersed in space. In such situations, one seeks to employ per-packet power control depending on the source and destination of the packet. This gives rise to a joint problem which involves not only power control but also clustering. Three solutions for joint clustering and power control are presented. The first protocol, CLUSTERPOW, aims to increase the network capacity by increasing spatial reuse. The second, Tunnelled CLUSTERPOW, allows a finer optimization by using encapsulation. The last, MINPOW, whose basic idea is not new, provides an optimal routing solution with respect to the total power consumed in communication. The contribution includes a clean implementation of MINPOW at the network layer without any physical layer support. All three protocols ensure that packets ultimately reach their intended destinations.

Power-aware routing protocols such as Power Aware AODV [10] also contribute to conserving the network power. Changes have been suggested to the NS2 routing structure to accommodate a power-aware routing protocol that is aimed at increasing the network lifetime. [17, 18, 19] also
number of them need be on at any time to forward traffic for propose energy efficient schemes for wireless networks.
active connections. Span is a distributed, randomized
algorithm where nodes make local decisions on whether to 3. Proposed Scheme
sleep, or to join a forwarding backbone as a coordinator.
Each node bases its decision on an estimate of how many of 3.1 Power Control: The MAC Perspective
its neighbors will benefit from it being awake and the
amount of energy available to it. Authors use a randomized In this paper we propose a cross layer design in which the
algorithm where coordinators rotate with time, MAC layer and the Network layer functions can be modified
demonstrating how localized node decisions lead to a to achieve lesser power consumption and increase
connected, capacity-preserving global topology. throughput. The system mimics the 802.11 standard and
Improvement in system lifetime due to Span increases as the uses directional antennas at the physical layer. Since
ratio of idle-to-sleep energy consumption increases, and different situations demand different control mechanisms,
increases as the density of the network increases. the solutions in this paper are aimed at the three most
Simulations show that with a practical energy model, system common topologies in wireless ad-hoc networks, namely
lifetime of an 802.11 network in power saving mode with sparse, dense and cluster forms. A power controlled MAC
Span is a factor of two better than without. Span integrates protocol decides the power level for transmission for a
nicely with 802.11. When run in conjunction with the particular node. It is the power used to access the channel
802.11 power saving mode, Span improves communication and also to carry on subsequent transmissions. Researchers
latency, capacity, and system lifetime. A cone-based solution have used different metrics to determine the optimum
that guarantees network connectivity was proposed in [9]. transmission power; the SINR of the receiving node being
Each node i gradually increases its transmission power until the most popular [add ref]. We assume that there is a
it finds at least one neighbor in every cone of angle. Node i mechanism like GPS that enables the nodes to be aware of
starts the algorithm by broadcasting a “Hello” message at the network topology. This enables the receivers to calculate
low transmission power and collecting replies. It gradually the SINR level based on the number of transmitters and
increases the transmission power to discover more neighbors their distance from this receiver. Not only does this prolong
and continuously caches the direction in which replies are network lifetime but also allows more nodes to communicate
received. It then checks whether each cone of angle contains at the same time. In the proposed protocols the collision
a node. The protocol assumes the availability of directional avoidance information does not prevent interfering nodes
information (angle-of-arrival), which requires extra from accessing the channel. Instead, it bounds the
hardware. This scheme does not seem to work for transmission powers of future packets generated by these
directional antennas [16] as most of the replies would be lost nodes. This is unlike the RTS/CTS packets used in the
depending on where the source’s antenna is pointing. Some 802.11 scheme.
researchers proposed the use of a synchronized global
signaling channel to build a global network topology To understand what this collision avoidance information is
database, where each node communicates only with its and how nodes can make use of it, consider the transmission
nearest N neighbors (N is a design parameter). This of a packet from some node i to some node j. Let SINR (i,j)
approach, however, requires a signaling channel in which be the SINR at node j for the desired signal from node i.
each node is assigned a dedicated slot. Then,
A scheme based on clustering [5] in ad-hoc networks was
also presented. This paper discusses the problem of power
SINR(i,j) = P(i,j) / ( ∑P(k,j) + hj )                (1)

where,
P(i,j) = received power at node j for a transmission from node i
∑P(k,j) = power received as noise at j from its neighbors
hj = thermal noise (default value in NS2 is 101 dBm for the 802.11 std.) at node j.

A packet is correctly received if the SINR is above a certain threshold (say, SINRth) that reflects the QoS of the link. By allowing nearby nodes to transmit concurrently, the interference power at receiver j increases, and so SINR(i,j) decreases. Therefore, to be able to correctly receive the intended packet at node j, the transmission power at node i must be computed while taking into account potential future transmissions in the neighborhood of receiver j. This is achieved by incorporating an interference margin in the computation of SINR(i,j). This margin represents the additional interference power that receiver j can tolerate while ensuring coherent reception of the upcoming packet from node i. Nodes at some interfering distance from j can now start new transmissions while the transmission i → j is taking place. The interference margin is incorporated by scaling up the transmission power at node i beyond what is minimally needed to overcome the current interference at node j. For our simulations, an SINR margin of 10 dB ensures correct reception. Also, it is to be kept in mind that the minimum transmission power should not be allowed to drop below a threshold given by

(2)

where,
Pt(i,j) = power transmitted by node i such that the transmission range does not exceed node j
Pr(j,i) = power received by node j when node i transmits
Pmax = maximum power for this configuration

1) Sparse Networks: The Locally Aware Protocol

This protocol targets networks which are sparse. Since the network is not crowded, one receiver does not have multiple transmitters in its range. The communication between a pair of nodes starts when the transmitter senses the channel to be idle and sends an RTS to the intended receiver at maximum power. This ensures that the signal strength is high enough to maintain the SINR level at the receiver in case of interference. It also helps in case the intended receiver is farther away from the transmitter; in such a case, increased power means increased range. When the receiver hears the RTS successfully, it waits for a SIFS amount of time and responds with a CTS frame. The receiver calculates the ideal transmitting power that the transmitter should use to carry on further communication. This calculation is based on the noise level at the receiver and the required SINR level to process a signal correctly. This information is included in the CTS frame. On receiving the CTS frame, the transmitter adjusts its power to the new level as specified by the CTS frame and continues further communication. In case the receiver needs to update the transmitter of any changes in the SINR level, it can do so by including the new power level information in any CTS frame which it would send out in response to a data frame. Since this protocol tries to control the power levels of node pairs, we call it local power control. Also, since the nodes are sufficiently far apart, the receiver does not need to inform all the nodes in the network of its new SINR level.

2) Dense Networks: A Globally Aware Protocol

The protocol which we have simulated states that each node in the network transmits with the optimum power (that which achieves the minimum required SINR) in an attempt to maximize the throughput of the network. When a transmitting node sends an RTS to the receiver successfully, the receiver piggybacks the SINR information in the CTS frame, which can then be used to decide the ideal transmitting power at the transmitter. It also broadcasts the collision avoidance information in all directions. When the neighboring nodes receive this information, they adjust their transmission power such that they can still carry on their conversation and not disrupt the other nodes. When a pair of communicating nodes experiences interference from some other node (the SINR goes down at the receiver), the collision avoidance information is updated and can be sent to the transmitter with a CTS (in response to a data frame). This allows nodes which are at a fair distance from other nodes to transmit at maximum power, and also restricts the power of nodes which are closer to each other. Now a node with a packet to transmit is allowed to proceed with its transmission if the transmission power will not disturb the ongoing receptions in the node's neighborhood beyond the allowed interference margin. Allowing for concurrent transmissions increases the network throughput and decreases contention delay.

Figure 2. A receiving node transmitting its collision avoidance information in all the directions for the "Globally Aware" protocol

3) Clusters: A Power Management Protocol
The cluster approach is defined for networks which tend to operate in small groups. If all the nodes were to transmit at a random level of transmission power, the network throughput would degrade since the neighboring nodes would cause interference. It is appropriate in such a scenario to restrict the power levels of different groups. Each member in a group uses the same power level. Different groups use different power levels depending on the size of the group. Since new members may be added to the cluster and old members might leave, a special node called the Cluster Head (CH) is elected. This node has knowledge of the topology of the network. It calculates the minimum required power so that all node pairs can communicate. This value is beaconed at short intervals so that all the nodes have the updated information and are transmitting with only the required power. Fig. 3 shows the reason for having each group use its own power level. Nodes can still communicate within the cluster and use much less power. Since reducing the power also means a reduction in range for the directional antennas, we see a better throughput result in our case. This is a result of better spatial reuse. Next we study the changes made to the Network layer.

Figure 3. Nodes inside a cluster can use less power to communicate

3.2 Power Control: The Network Perspective

Network layer solutions are energy-oriented and aim at reducing energy consumption, with throughput being a secondary factor. This includes intelligent routing based on immediate link costs, i.e. power-aware routing [10]. Every node in the network can compute the power consumed to send a packet to its one-hop neighbors. This allows nodes to choose the path which consumes the least power, thereby preserving the system power. We implement the power-aware AODV protocol, and simulation results show that network throughput increased marginally and system lifetime was preserved. The advantage of power-aware routing is that it can be integrated with the changed MAC layer to enhance the system performance. The novelty here lies in the physical layer, with directional antennas.

The Ad-hoc On Demand Distance Vector (AODV) routing algorithm is a routing protocol designed for ad-hoc mobile networks. AODV is capable of both unicast and multicast routing. It is an on-demand algorithm, meaning that it builds routes between nodes only as desired by source nodes. It maintains these routes as long as they are needed by the sources. Additionally, AODV forms trees which connect multicast group members. The trees are composed of the group members and the nodes needed to connect the members. AODV uses sequence numbers to ensure the freshness of routes. It is loop-free, self-starting, and scales to large numbers of mobile nodes.

AODV builds routes using a route request / route reply query cycle. When a source node desires a route to a destination for which it does not already have a route, it broadcasts a route request (RREQ) packet across the network. Nodes receiving this packet update their information for the source node and set up backwards pointers to the source node in the route tables. In addition to the source node's IP address, current sequence number, and broadcast ID, the RREQ also contains the most recent sequence number for the destination of which the source node is aware. A node receiving the RREQ may send a route reply (RREP) if it is either the destination or if it has a route to the destination with a corresponding sequence number greater than or equal to that contained in the RREQ. If this is the case, it unicasts a RREP back to the source. Otherwise, it rebroadcasts the RREQ. Nodes keep track of the RREQ's source IP address and broadcast ID. If they receive a RREQ which they have already processed, they discard the RREQ and do not forward it. As the RREP propagates back to the source, nodes set up forward pointers to the destination. Once the source node receives the RREP, it may begin to forward data packets to the destination. If the source later receives a RREP containing a greater sequence number, or the same sequence number with a smaller hop count, it may update its routing information for that destination and begin using the better route. As long as the route remains active, it will continue to be maintained. A route is considered active as long as there are data packets periodically travelling from the source to the destination along that path. Once the source stops sending data packets, the links will time out and eventually be deleted from the intermediate node routing tables. If a link break occurs while the route is active, the node upstream of the break propagates a route error (RERR) message to the source node to inform it of the now unreachable destination(s). After receiving the RERR, if the source node still desires the route, it can reinitiate route discovery.

With power control, AODV manages to preserve the network lifetime as it routes packets through nodes which have more power remaining. This means that when the AODV protocol calculates the next hop on receiving a RREQ packet, it considers the remaining power of all its neighboring nodes before sending a RREP packet. In such a case every node must be periodically updated about the power levels of the neighboring nodes. We do so by periodically broadcasting the remaining power information to immediate neighbors. If a node receives information that a neighboring node is running out of power, it must immediately calculate a new route to the destination. Although this protocol may face slightly larger delays (since longer routes may be chosen depending on the remaining power of neighbors), it manages to preserve network lifetime with the same throughput efficiency.
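The next-hop preference just described can be sketched in a few lines. This is a minimal illustration only, not the paper's NS2/Tcl implementation: the neighbor-table layout, its field names and the energy threshold are assumptions made for the example.

```python
# Sketch of power-aware next-hop selection: among neighbors that
# advertise a route to the destination (learned from periodic power
# broadcasts and RREQ/RREP exchanges), prefer the one with the most
# residual energy. The data structures here are hypothetical.

def choose_next_hop(neighbors, destination, min_energy=0.0):
    """neighbors: dict of node id -> {'residual_energy': float,
    'routes': {dest: seq_no}}. Returns the chosen next hop id,
    or None if no neighbor has a usable route."""
    candidates = [
        (info['residual_energy'], node)
        for node, info in neighbors.items()
        if destination in info['routes'] and info['residual_energy'] > min_energy
    ]
    if not candidates:
        return None  # would trigger a fresh route discovery (RREQ broadcast)
    # Route through the neighbor with the most power remaining,
    # trading a possibly longer path for longer network lifetime.
    return max(candidates)[1]

neighbors = {
    'B': {'residual_energy': 0.8, 'routes': {'D': 5}},
    'C': {'residual_energy': 0.3, 'routes': {'D': 6}},
    'E': {'residual_energy': 0.9, 'routes': {'F': 2}},
}
print(choose_next_hop(neighbors, 'D'))  # 'B': most energy among nodes routing to D
```

A real implementation would also respect AODV's sequence-number freshness rules; the sketch only captures the energy-preference step.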
4. Simulation Setup

Table 1 lists the parameters and their respective values that have been used in the simulations. We simulate the three topologies in NS2 on a Unix platform. The codes have been written in Tcl. In the next section we evaluate the performance of the cross-layer design. We perform simulations to obtain the network throughput and the average consumption of power in the network using our design. The packet size is 512 bytes unless specified otherwise. Each flow in the network transmits CBR traffic. We do not consider mobility in our simulations. For the radio propagation model, the two-ray path loss model is used. NS2 uses the 802.11 model for wireless networks. The power consumption has been calculated by considering the power used for transmission only, taking into consideration the number of directional antennas used. The Perl language has been used to calculate values from the trace files. The power consumed for receiving and carrier sensing has not been calculated. For our simulations, we consider a mobile ad-hoc network in a square area of dimensions 500m X 500m.

Table 1: Parameters and their values used in simulations

TOPOLOGY          PARAMETER           VALUE
Sparse Network    Number of nodes     8
                  Size of network     500 X 500
                  Simulation time     500 secs
                  Packet size         512 bytes
                  Bandwidth           1Mb
                  Traffic Type        CBR
                  Pmax                100mW
                  MinRecvPwr          1mW
Dense Network     Number of nodes     30
                  Size of network     500 X 500
                  Simulation time     500 secs
                  Packet size         512 bytes
                  Bandwidth           2Mb
                  Traffic Type        CBR
                  Pmax                100mW
                  MinRecvPwr          1mW
Cluster form      Number of nodes     16
                  Size of network     500 X 500
                  Simulation time     500 secs
                  Packet size         512 bytes
                  Bandwidth           2Mb
                  Traffic Type        CBR
                  Pmax                100mW
                  MinRecvPwr          1mW

5. Results and Analysis

We start this section by testing our protocol and studying the behavior of the network in different situations. We obtained data by varying different parameters such as the CBR rate and the number of nodes in a network. The CBR rate corresponds to how fast packets are generated and transmitted by the transmitter. Controlling this parameter means controlling the network load. Since a given network has only limited bandwidth, on increasing the CBR the throughput should first increase as more and more packets are sent successfully over the network. As the bandwidth is constrained, on increasing the CBR rate further, collisions occur, packets get dropped and the throughput decreases. So the network behaves best for a particular CBR in a given scenario. This claim is validated by Fig. 4, 5 and 6. It is clear that the throughput in all three types of networks increases as the CBR rate is increased, but starts to decrease and attains a steady state when the rate is increased further. When the available bandwidth of a network is fixed and the number of nodes is varied, the effect is very similar to changing the CBR. With a constant CBR, nodes added to a particular network increase the throughput of the network, as more packets are sent over the medium. It is noticed that when too many nodes start talking in the same space, collisions occur and the throughput decreases. From Fig. 7, 8 and 9 we can see that as we increase the number of nodes in the three cases, the throughput increases to a maximum value. When a very large number of nodes is present in any network, the throughput is much lower. For Fig. 7, 8 and 9 the number of nodes in each network is specified in Table 1. The CBR rate for the sparse network is 20 Kbps, and for the dense and cluster networks it is 40 Kbps. These tests show that even after making changes to the MAC, Network and Physical layers, the network behaves well.

It should be noted that in Fig. 7, when there are 18 nodes in the network, the throughput is very low. This is attributed to the fact that the cross-layer design for the sparse network is not effective when a large number of nodes are present. This is the point where we switch from the Locally Aware Scheme to the Globally Aware Scheme and we say our network is no longer sparse. Using the Globally Aware Scheme helps more nodes in the network be aware of the SINR values at the receivers, and they can then adjust their transmission power accordingly. Different networks may have different values for the number of nodes that makes them sparse or dense.
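The throughput values discussed above were extracted from NS2 trace files; as noted in the setup, the paper used Perl scripts for this step. Purely as an illustration, the same bookkeeping is sketched below in Python over a simplified, hypothetical trace format (one whitespace-separated record per line); the real NS2 trace format carries many more fields.

```python
# Illustrative only: compute throughput from a simplified trace.
# Assumed record layout per line: event ('s' sent, 'r' received,
# 'd' dropped), timestamp in seconds, payload size in bytes.

def throughput_kbps(trace_lines, duration_s):
    """Sum the bytes of successfully received packets and
    normalize by the simulation duration."""
    received_bytes = 0
    for line in trace_lines:
        fields = line.split()
        if fields and fields[0] == 'r':
            received_bytes += int(fields[2])
        # 's' and 'd' events do not count toward throughput
    return received_bytes * 8 / duration_s / 1000.0  # bits -> kbps

trace = [
    's 0.10 512',
    'r 0.12 512',
    'r 0.25 512',
    'd 0.30 512',
]
print(throughput_kbps(trace, duration_s=1.0))  # 8.192 kbps from 1024 received bytes
```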
Figure 4. Sparse Network: Throughput values for different CBR rates.

Figure 5. Dense Network: Throughput values for different CBR rates.

Figure 6. Cluster Network: Throughput values for different CBR rates.

Figure 7. Sparse Network: Throughput values with different numbers of nodes.

Figure 8. Dense Network: Throughput values with different numbers of nodes.

Figure 9. Cluster Network: Throughput values with different numbers of nodes.
Since we are aiming at conserving power and increasing the throughput, it is necessary to compare the performance of these three types of topologies when they are not empowered by the cross-layer design. In traditional networks, omni-directional antennas are used. We make the following comparisons: 1) between a network equipped with omni-directional antennas (802.11b) and a network using directional antennas with the CLPM implementation; 2) between a network equipped with directional antennas (802.11b-like) and a network using directional antennas with the CLPM implementation. The terms SN, DN and CN denote Sparse Network, Dense Network and Cluster Network respectively. In the cases where there is no power control, all the nodes transmit at the maximum allowable transmitting power. Simulation results for five different networks in each topology case (namely sparse, dense and cluster) are also shown at the end (Fig. 14, 15 and 16) to give the reader an idea of the variation in results.

Fig. 10 compares the throughput between the three topologies. It is to be kept in mind that the available bandwidth for the sparse network was 1Mb, while it was 2Mb for the dense network and the cluster form. The cluster form network shows the maximum improvement in throughput (13.33% more), which indicates that when all nodes in the cluster are allowed to transmit without power control, many collisions result.

Figure 10. Throughput comparison between the three topologies when using an omni-directional antenna and a directional antenna with our power scheme implemented.

Fig. 11 shows that implementing the power control design increases the throughput by nearly 6% (in the cluster form case) when compared to a directional antenna without power control. This is attributed to the fact that reducing the transmission power allowed more node pairs to communicate, and thus more packets were transmitted successfully. The dense network and the cluster form topologies also show better performance once the cross-layer design is used.

Figure 11. Throughput comparison between the three topologies when using a regular directional antenna and a directional antenna with our power scheme implemented.

In Fig. 12 we have compared the power consumed in the three topologies between a network having omni-directional antennas and a network having directional antennas with power control. It is clear from the results that in all three topologies, the network with a power control scheme performs better than a network with omni-directional antennas. The maximum allowable transmission power for a node in any case was 100mW and the MinRecvPwr was 1mW.

Figure 12. Comparison of power consumed in different topologies when using an omni-directional antenna versus a directional antenna with power control.

Fig. 13 shows the difference in power consumed in the three topologies when using a regular directional antenna versus a directional antenna with power control. Controlling the transmission power yields a lower value of consumed power as compared to a network where all the nodes are transmitting at the maximum power. The simulations show that reducing the transmission power of the nodes did not affect the throughput of the system negatively, and also helped in conserving the overall network power. Throughout the simulation, the receiver keeps the transmitter aware of the ideal transmitting power by sending its SINR value to the transmitter. The network without
power control uses nodes that transmit with maximum power all the time. This leads to wastage of energy and also hampers conversations going on in the nearby space. The overall network power is saved by almost 50% in the sparse topology, and by considerable amounts in the dense and cluster forms.

Figure 13. Comparison of power consumed in different topologies when using a regular directional antenna versus a directional antenna with power control.

Figure 14. Simulation results for power consumption for five different network scenarios used in the Sparse topology case.

Figure 15. Simulation results for power consumption for five different network scenarios used in the Dense topology case.

Figure 16. Simulation results for power consumption for five different network scenarios used in the Cluster form topology case.

6. Conclusion

We proposed CLPM, a cross-layer design for power management and throughput enhancement in wireless ad-hoc networks. Our primary objective was to exploit the advantages of directional antennas and develop a power control scheme for a sparse, a dense and a clustered network. The underlying MAC layer is responsible for controlling the transmission power of a node depending upon the minimum required SINR level at the receiver. This information is sent by the receiver to only the intended transmitter in the case of sparse networks. In the case of dense networks, the collision avoidance information is transmitted in all directions, since more transmitters may be within a receiver's range. This helped in not only conserving the power of nodes but also increasing the throughput, as more communications could take place in the same space. The Network layer runs a power-aware routing protocol called Power AODV, which aims at reducing the overall power consumed in routing packets. Performance comparison of this cross-layer design with an 802.11b network and an 802.11b-like network using directional antennas showed that it reduced the overall network power consumption and also increased the throughput. Thus, a network with a power management scheme implemented will have better performance than a network without such a scheme.

References

[1] Rong Zheng and Robin Kravets, "On-demand Power Management for Ad-hoc Networks", INFOCOM 2003.
[2] Zongpeng Li and Baochun Li, "Probabilistic Power Management for Wireless Ad-hoc Networks", Mobile Networks and Applications, October 2005.
[3] Srikanth Krishnamurthy, Tamer ElBatt and Dennis Connors, "Power management for throughput enhancement in wireless ad-hoc networks", in Proceedings of IEEE ICC 2000.
[4] Benjie Chen, Kyle Jamieson, Hari Balakrishnan and Robert Morris, "Span: An Energy-Efficient Coordination Algorithm for Topology Maintenance in Ad-hoc Wireless Networks", Wireless Networks, Vol. 8, Issue 5, September 2002.
[5] Vikas Kawadia and P. R. Kumar, "Power Control and Clustering in Ad-hoc Networks", INFOCOM 2003.
[6] A. I. El-Osery, D. Baird and S. Bruder, "Transmission power management in ad-hoc networks: issues and advantages", in Proceedings of the IEEE Networking, Sensing and Control, 2005.
[7] Marwan Krunz and Alaa Muqattash, "Transmission Power Control in Wireless Ad-hoc Networks: Challenges, Solutions, and Open Issues", IEEE Network, Sept-Oct 2004.
[8] Matthias Rosenschon, Markus Heurung and Joachim Habermann, "New Implementations into Simulation Software NS-2 for Routing in Wireless Ad-Hoc Networks", 2004.
[9] http://www.isi.edu/nsnam/ns/tutorial/
[10] Asis Nasipuri, Kai Li and Uma Reddy Sappidi, "Power Consumption and Throughput in Mobile Ad-hoc Networks using Directional Antennas", in Proceedings of the Eleventh International Conference on Computer Communications and Networks, 2002.
[11] Thanasis Korakis, Gentian Jakllari and Leandros Tassiulas, "A MAC protocol for full exploitation of Directional Antennas in Ad-hoc Wireless Networks", MobiHoc, June 1-3, 2003, Annapolis, Maryland, USA.
[12] ANSI/IEEE Std 802.11, "Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications", 1999.
[13] Z. Huang, Z. Zhang and B. Ryu, "Power control for directional antenna-based mobile ad-hoc networks", in Proc. Intl. Conf. on Wireless Comm. and Mobile Computing, pp. 917-922, 2006.
[14] Ram Ramanathan, "On the performance of ad-hoc networks with beamforming antennas", Proc. of MobiHoc '01, pages 95-105, October 2001.
[15] B. Alawieh, C. Assi and W. Ajib, "A Power Control Scheme for Directional MAC Protocols in MANET", IEEE WCNC 2007, pp. 258-263, 2007.
[16] Nader S. Fahmy, Terence D. Todd and Vytas Kezys, "Ad-hoc Networks with Smart Antennas Using IEEE 802.11-Based Protocols", in Proceedings of the IEEE International Conference on Communications (ICC), April 2002.
[17] P. Karn, "MACA - A New Channel Access Method for Packet Radio", in Proc. 9th ARRL Computer Networking Conference, 1990.
[18] E.-S. Jung and N. Vaidya, "A power control MAC protocol for ad-hoc networks", in 8th ACM Int. Conf. on Mobile Computing and Networking (MobiCom '02), pages 36-47, Atlanta, GA, September 2002.
[19] J.-P. Ebert, B. Stremmel, E. Wiederhold and A. Wolisz, "An Energy-efficient Power Control Approach for WLANs", Journal of Communications and Networks (JCN), 2(3): 197-206, September 2000.
[20] Mineo Takai, Jay Martin, Aifeng Ren and Rajive Bagrodia, "Directional virtual carrier sensing for directional antennas in mobile ad-hoc networks", MobiHoc '02, June 2002.

Authors Profile

Mahasweta Sarkar joined the department of Electrical and Computer Engineering at San Diego State University as an Assistant Professor in August 2006, after receiving her Ph.D. from UCSD in December 2005. She received her Bachelor's degree in Computer Science (Summa Cum Laude) from San Diego State University, San Diego in June 2000. Prior to joining SDSU, she worked as a Research Scientist at SPAWAR Systems Center, Point Loma, San Diego. Her research interests include protocol designs, flow control, scheduling and power management issues in Wireless Local Area Networks.

Ankur Gupta is currently a graduate research student under Dr. Mahasweta Sarkar at San Diego State University, San Diego. He received his Bachelor's degree in Electronics Engineering in June 2001 from Pune University, India. His research interests include analysing and studying the effect of power management protocols in wireless devices that employ directional antennas.
30 (IJCNS) International Journal of Computer and Network Security,
Vol. 2, No. 6, June 2010

A Bidding based Resource Allocation Scheme in WiMAX

Mahasweta Sarkar 1 and Padmasini Chandramouli 2

1 Department of Electrical and Computer Engineering, San Diego State University,
San Diego, California, 92182, United States
msarkar2@mail.sdsu.edu
2 Department of Electrical and Computer Engineering, San Diego State University,
San Diego, California, 92182, United States
padma_143608@yahoo.com

Abstract: WiMAX has traditionally focused on delivering Quality of Service (QoS) to users based on a pre-determined traffic classification model that broadly classifies traffic into four categories. In this paper, we argue that since applications today are diverse, clubbing them into merely four different groups does not do justice to their QoS requirements. We propose instead that QoS be delivered on the basis of an application's individual requirements, and we present a resource allocation scheme for WiMAX networks that abides by two basic principles: (A) we distinguish applications based on their resource demands and, unlike existing WiMAX schemes, allocate network resources to them according to their individual QoS requirements; (B) we grant "extra" resources, for a price, to applications that require them by conducting bidding. We show, through extensive analysis and simulation results, that the efficiency of our resource management scheme surpasses that of WiMAX while also yielding higher revenues.

Keywords: WiMAX, resource allocation, bidding strategy, revenue maximization.

1. Introduction

Fueled by a technologically savvy new generation of mobile and stationary end users, the wireless marketplace is on the cusp of rapid change and limitless opportunity. WiMAX is a broadband wireless technology that promises ubiquitous service to mobile data users. The current WiMAX standard (802.16) classifies traffic into the four following QoS service classes [1]: (a) Unsolicited Grant Service (UGS), to support Constant Bit Rate (CBR) traffic such as voice applications; (b) Real-Time Polling Service (RtPS), designed to support real-time traffic that generates variable-size data packets on a periodic basis, such as MPEG video; (c) Non-Real-Time Polling Service (NRtPS), designed to support non-real-time, delay-tolerant services that require variable-size data grant bursts on a regular basis, such as FTP traffic; and (d) Best Effort (BE) service, designed to support data streams that do not require stringent QoS support, such as HTTP traffic (web browsing applications).

However, we argue that since applications today are diverse, classifying them into merely four different groups does not do justice to their QoS requirements. Such broad classification ignores the finer requirements of an application, providing it with a non-customized service that might lead to over- or under-serving the application. For example, WiMAX traffic classification (IEEE 802.16) would club both a video conferencing application and a streaming video application into the "RtPS" traffic category, thereby extending identical network resources and service to both applications. In reality, a video conferencing application is much more sensitive to jitter and delay than a streaming video application; it should therefore require and be provided with more network resources and prompter network service than a streaming video application. WiMAX, however, chooses to treat both applications in the same fashion and allocates equal resources to both, thus under-serving the video conferencing application and over-serving the streaming video application. Another pair of applications currently classified under the UGS scheme suffers from a similar disparity: a voice chat application and a streaming audio application. Traditional resource allocation schemes do not acknowledge the varying resource and QoS requirements of various applications within the same service class, as the examples above illustrate.

As multimedia traffic starts to dominate Internet data traffic, the need to extend network resources and service to an application on a "per-need" basis will become not only necessary but crucial. Without such distinction, an Internet gaming application will receive the same service quality as an important video conferencing application, rendering the pseudo-customized network service model of IEEE 802.16 useless. This paper addresses this particular issue and proposes a resource allocation scheme that mitigates the problem at hand.

In doing so, however, we do not want to deviate from the existing WiMAX standard [1], as that would surely lead to backward compatibility problems. Also, in order to prevent any and every user from making a claim on "extra" resources, we grant extra resources to applications only for a price. This deters undeserving applications from claiming valuable network resources. To that extent, we introduce a resource auction principle [2] to formulate our resource allocation scheme. We assume that QoS-intense traffic will be willing to pay a higher price for "enhanced" service; hence the concept of bidding for resources, which not only prevents anarchy in the system but also maximizes the revenue earned by the ISPs.
Our resource allocation scheme splits the available resource, namely bandwidth, into a "fixed" pool and a "bidding" pool. At the onset, resources from the fixed pool are allocated to users by a standard WiMAX-like resource allocation algorithm. However, if users still have unsatisfied demands, they bid for extra resources, which may be allocated to them from the bidding pool if they win the bid. An eligible user who bids the highest amount for a bandwidth pool is the winner. We present a detailed description of the bidding process and of a user's eligibility to bid in Section 3.

The rest of the paper is organized as follows: Section 2 describes the system model and the premise of our problem. Section 3 describes the resource allocation scheme. In Section 4 we present numerical results and a discussion of those results. We conclude the paper in Section 5.

2. System Model

A WiMAX network comprises a Base Station (BS) that communicates wirelessly with several Subscriber Stations (SS). Each SS in turn communicates with several users. The BS is primarily responsible for allocating available resources to its SSs; a SS further allocates these resources to the users it caters to, as depicted in Figure 1. We have quantified the UGS, RtPS, NRtPS and BE demands of each SS in Table 1, and we will use the system in Figure 1 as the basis for explaining our resource allocation scheme. This allocation of resources by the BS to the SSs, and further by a SS to its users, is achieved via WiMAX's scheduling scheme. Note that the WiMAX standard (IEEE 802.16 [1]) does not specify any standard scheduling scheme; that is left to the judgment and decision of individual vendors.

Figure 1. WiMAX architecture depicting one BS catering to five SSs, each of which caters to a set of users. The resource demands of each of these users are also outlined in the figure.

Table 1: Bandwidth requirements of the SSs in Figure 1

Demand requirement | SS1 | SS2 | SS3 | SS4 | SS5
UGS+RtPS | 65 | 4 | 12 | 15 | 40
NRtPS+BE | 15 | 6 | 8 | 5 | 30

A distinctive feature of IEEE 802.16 is its Quality-of-Service (QoS) support [1]. As discussed in Section 1, it has four service classes to support real-time and non-real-time communication, namely UGS, RtPS, NRtPS and BE. We recognize that today's applications are extremely diverse, with varied QoS requirements. Applications that are categorized under the same service class by the conventional IEEE 802.16 traffic classification rules [1] might in reality require different services, as we illustrated with the example of a video conferencing application and a streaming video application in Section 1. This led us to a finer-grained traffic classification, especially within the UGS and RtPS classes. Note that since the NRtPS and BE traffic classes comprise applications that do not have stringent QoS requirements, finer-grained traffic classification in these classes would probably be unnecessary.

Our traffic classification scheme operates at the user level. We classify users into three categories, namely GOLD, SILVER and BRONZE. GOLD users are those that have a mix of all four traffic classes, namely UGS, RtPS, NRtPS and BE. SILVER users are those who have a mix of three traffic classes: RtPS, NRtPS and BE. BRONZE users are those who have only NRtPS and BE traffic at a particular point in time. We further classify GOLD and SILVER class users into priority classes 1, 2, 3 and 4. A SILVER user running an application with the most stringent QoS requirement falls under Priority Class 1, while a SILVER user with comparatively less stringent QoS requirements opts for priority class 2, 3 or 4. For example, a live video conference for an official meeting would probably demand Class 1 service, whereas a friendly Internet chat might be clubbed as Class 2 or 3 traffic.

Our scheme assumes that users with applications that have stringent QoS requirements will also be willing to pay for the extra resources they demand from the network. A real-life analogue of this concept is an end user paying a higher fee to its ISP for increased bandwidth allocation. We assume that GOLD and SILVER class users with a high component of delay-sensitive UGS and RtPS traffic in their traffic mix are the ones who participate in the "bidding" process to acquire extra resources from the network to support their "high maintenance" traffic. We also assume that these users have the willingness and the budget to bid/pay for the extra resource (bandwidth) that they demand from the network. We define a "bid" as the amount of money that a particular user is willing to pay for the extra resource it demands from the network. We exploit a user's willingness to pay for preferential treatment from the network as the basis for maximizing the revenue the ISP can earn by provisioning "additional" network resources to the "high priority class" users.
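The GOLD/SILVER/BRONZE classification above is rule-based and easy to express in code. The sketch below is our own illustration (the function name, tier labels and the handling of mixes the paper does not enumerate are ours), assuming a user is described by the set of traffic classes it currently carries:

```python
def classify_user(traffic_classes):
    """Map a user's current traffic mix to the GOLD/SILVER/BRONZE tiers of Section 2.

    GOLD   : all four classes (UGS, RtPS, NRtPS, BE)
    SILVER : RtPS, NRtPS and BE
    BRONZE : only NRtPS and/or BE
    """
    present = set(traffic_classes)
    if present == {"UGS", "RtPS", "NRtPS", "BE"}:
        return "GOLD"
    if present == {"RtPS", "NRtPS", "BE"}:
        return "SILVER"
    if present and present <= {"NRtPS", "BE"}:
        return "BRONZE"
    return "UNCLASSIFIED"  # a mix the paper does not enumerate (our own fallback)

print(classify_user(["UGS", "RtPS", "NRtPS", "BE"]))  # GOLD
print(classify_user(["NRtPS", "BE"]))                 # BRONZE
```

Only GOLD and SILVER users would then be further tagged with priority classes 1 through 4 and admitted to the bidding process.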
3. The Resource Allocation Scheme

We first present an overview of our resource allocation scheme before delving into the technical details. Resources are allocated to all users such that no user is ever starved, even if it does not have delay-sensitive traffic [3]-[6]. The total available resource (namely bandwidth) is split into two portions: (a) a "fixed" pool and (b) a "bidding" pool. First, resources from the fixed pool are allocated by the BS to the SSs based on a simple proportionality concept, which we describe shortly. However, if after this initial allocation the resource demands of the GOLD and SILVER class users are still not met, those users have the option of "bidding" for extra resources from the network. Eligible users send their bids to their respective SS, which passes them on to the BS. The BS, on the other hand, computes its own selling price for each chunk of resource it has put up for auction. Once it has accumulated the prices that various users are willing to pay for a particular resource, it decides which user to allocate the resource to; that user is then declared the winner of the "bid". We now present a formal description of our resource allocation scheme. Table 2 enumerates the variables and notation we use in explaining the scheme. In addition, we provide a numerical example which we believe will further help the reader understand the scheme.

Table 2: Notations, Variables and their Descriptions

Variable/Notation | Description
m | Number of SSs in the network
FP(i), i = 1, 2, …, m | Fixed Pool chunks numbered 1 through m, one per SS
n | Number of "Bidding Pool" chunks
BP(i), i = 1, 2, …, n | Bidding Pool chunks numbered 1 through n
[n, m] (initial cost matrix) | Cost incurred by the BS for allocating each bidding pool portion to each SS, irrespective of its eligibility to bid
[n, m] (final cost matrix) | Cost incurred by the BS for allocating each bidding pool portion BP(n) to the eligible SSs (m)
CC* | Lowest total cost incurred by the BS for allocating all its bidding chunks for auctioning
CC*BP(i) | Lowest total cost incurred by the BS for allocating its bidding chunks, excluding BP(i), for auctioning
Π BP(i) | Profit margin ($) associated with bidding pool portion 'i' {or BP(i)}, ∀ i = 1, 2, …, n
ER BP(i) | Expected revenue estimated by the BS from auctioning bandwidth pool 'i' {or BP(i)}, ∀ i = 1, 2, …, n

Let us assume that the total available bandwidth (the resource of concern) is 100 MB, of which 60 MB is earmarked for the fixed pool and 40 MB for the bidding pool. Note that this division of the total resource pool is not fixed and can have an impact on the performance of our scheme. In general, we have observed that when traffic consists mostly of the UGS and RtPS categories, it makes sense to allocate 40% or more of the resource to the "bidding" pool, whereas in scenarios where traffic comprises mainly the NRtPS and BE categories, assigning around 20% of the resource to the bidding pool shows good results. Quantifying these relationships is a future research project that we intend to undertake.

3.1 Resource Allocation from Fixed Pool

The scheme allocates a minimum bandwidth to all classes of users who are in need of bandwidth, regardless of the traffic type they are carrying. It is realistic to assume that the minimum bandwidth limit would be a variable dictated by the demand of the application and the resources of the ISP. However, priority is always given to the UGS and RtPS traffic classes over the NRtPS and BE traffic classes. The following five steps outline the resource allocation method from the fixed pool:

1. After allocating minimum bandwidth to all users, the BS broadcasts a message to all SSs requesting them to transmit their individual resource demands corresponding to each service type. Each SS accumulates demand information from its users and transmits it to the BS.

2. The BS first considers the UGS and RtPS demand of each SS and splits the fixed pool bandwidth proportionally:

FP(i) = [D_UGS+RtPS(i) / Σj D_UGS+RtPS(j)] × BW_fixed   (1)

where D_UGS+RtPS(i) denotes the UGS+RtPS demand of SS(i) and BW_fixed the fixed pool bandwidth. Using the network in Figure 1 as an example, SS1 has 65 MB of UGS and RtPS bandwidth requirement, and the total UGS and RtPS demand of all SSs is 136 MB. Thus, going by the above equation, the bandwidth allocated to SS1 from the fixed pool is FP(1) = (65×60)/136 = 31 MB. Note that the bandwidth allocated to a SS never exceeds that SS's demand. Similarly, the bandwidth for UGS and RtPS traffic allocated to SS2 through SS5 in Figure 1 would be FP(2) = 2 MB, FP(3) = 6 MB, FP(4) = 3 MB and FP(5) = 19 MB respectively. Figure 2 depicts the proportional splitting of fixed pool resources among the five SSs with regard to their UGS and RtPS traffic.
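The proportional split of Step 2 can be sketched in a few lines. This is our own illustration (the function name is ours); note that it applies the bare proportional rule of Eq. (1), capped at each SS's demand, whereas the paper's worked figures also fold in the minimum-bandwidth guarantee, so the two sets of numbers differ slightly:

```python
def split_fixed_pool(demands, pool_bw):
    """Eq. (1)/(2): FP(i) = demand_i / sum(demands) * pool_bw,
    capped so that no SS ever receives more than it asked for."""
    total = sum(demands.values())
    return {ss: min(d, d * pool_bw / total) for ss, d in demands.items()}

# UGS+RtPS demands from Table 1, split over the 60 MB fixed pool
ugs_rtps = {"SS1": 65, "SS2": 4, "SS3": 12, "SS4": 15, "SS5": 40}
alloc = split_fixed_pool(ugs_rtps, 60)
print({ss: round(bw, 1) for ss, bw in alloc.items()})
```

The same helper serves for Step 3, by passing the NRtPS+BE demands instead, and for Step 5, where each SS re-runs the split over its own users' demands.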
3. However, if there are no users with UGS and RtPS traffic, the fixed pool is split proportionally according to the BE and NRtPS demands:

FP(i) = [D_NRtPS+BE(i) / Σj D_NRtPS+BE(j)] × BW_fixed   (2)

4. After allocating bandwidth to the UGS and RtPS traffic of all SSs, if resources are left over, they are proportionally split and allocated to the SSs to meet their BE and NRtPS traffic requirements.

5. Once the BS has allocated bandwidth to the SSs, each SS applies the same proportionality principle (as explained in Step 2) to split the resources granted to it by the BS amongst its users.

Let us again consider SS1 in Figure 1. The total (UGS+RtPS) demand of SS1 is 65 MB, and the (UGS+RtPS) demand of user1 is 20 MB. The bandwidth allocated to SS1 by the BS for (UGS+RtPS) traffic is 31 MB (as per our previous computation). The bandwidth allocated by SS1 to user1 is therefore 20×31/65 = 9.5 MB. Similarly, the other users get the following chunks of bandwidth: user2 = 25×31/65 = 12 MB; user3 = 15×31/65 = 7 MB; user4 = 5×31/65 = 2.5 MB. In order to ensure fairness in resource allocation, each and every user in the network is allocated a minimum chunk of bandwidth (as long as it has a bandwidth requirement) regardless of the traffic type it is carrying. This is especially important to prevent resource starvation, which would lead to unsatisfied users in the system.

Figure 2: A pie chart depicting the bandwidth allocation to the five SSs from the "Fixed Pool" based on their UGS and RtPS traffic demands (SS1: 51%, SS2: 3%, SS3: 10%, SS4: 5%, SS5: 31%).

3.2 Resource Allocation from Bidding Pool

If resources from the fixed pool are not sufficient to satisfy the demands of the GOLD and SILVER class users, those users resort to "bidding" for extra resources. This extra resource is allocated by the BS to the winner of the bid from the "bidding" pool of resources. Going by our example, the unsatisfied demands of the SSs are: SS1 = 49 MB, SS2 = 8.9 MB, SS3 = 14 MB, SS4 = 17 MB, and SS5 = 50 MB. Note that the unsatisfied demand of an SS comprises only the resource demands of the GOLD and SILVER users belonging to that particular SS; the unsatisfied traffic demand of BRONZE users is not considered, since they do not bid for resources.

In order to implement some amount of fairness in the scheme, a SS is deterred from bidding repeatedly for extra resources (thus decreasing the chance of other SSs winning a bid) by the introduction of a "penalty" cost. A user/SS who has won a bidding pool portion in a previous bidding game incurs a fixed penalty cost if it participates in a new bidding game. This is mainly to dissuade the same user/SS from winning pool portions frequently. There is also a "flat fee" to cover overhead costs.

Assuming that the resource allocation scheme is run every T = 3 seconds, and that the SSs are required to pay a flat fee of $0.15 and a penalty fee of $0.01 for using 1 MB of resource from the bidding pool, we calculate the Initial Amount payable by each SS as:

Initial Amount_SS(i) = flat fee + penalty fee × (number of previous winnings of SS(i)), where i = 1, 2, …, (# of SSs)   (3)

Table 3 lists the unsatisfied bandwidth requirements and the Initial Amount payable by each SS, based on an assumed number of prior bid victories for each SS.

Table 3: Unsatisfied bandwidth and associated cost

SS | Unsatisfied Demand | # of times SS won previously (assumption) | Amount to be paid for using 1 MB of resource for T = 3 s, incl. penalty
SS1 | 49 MB | 3 | 0.15 + 0.01×3 = 0.18
SS2 | 8.9 MB | 0 | 0.15
SS3 | 14 MB | 1 | 0.15 + 0.01×1 = 0.16
SS4 | 17 MB | 4 | 0.15 + 0.01×4 = 0.19
SS5 | 50 MB | 2 | 0.15 + 0.01×2 = 0.17

3.2.1 Resource Partitioning in Bidding Pool

The BS first collects information about the unsatisfied resource demands from all its SSs. The BS then divides the bidding pool into portions proportional to the unsatisfied demand of each SS. For example, a bidding pool with an available bandwidth of 40 MB will be divided into five parts, P1 through P5 (one for each SS that reports an unsatisfied demand), as follows:

BP(i) = [UnsatisfiedDemand(i) / Σj UnsatisfiedDemand(j)] × BW_bidding   (4)

Thus, with a bidding pool of 40 MB and a total unsatisfied BW request of 138.9 MB, SS1 will be allocated BP(1) = 49×40/138.9 = 14.1 MB. Similar calculations yield BP(2) = 2.56 MB, BP(3) = 4.03 MB, BP(4) = 4.89 MB and BP(5) = 14.39 MB for SS2 through SS5 in Figure 1 respectively (Step 2).
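Equations (3) and (4) can be checked with a few lines of code. The sketch below is our own (function names are ours); it reproduces the per-MB prices of Table 3 and the bidding pool portions BP(1) through BP(5):

```python
def initial_amount(flat_fee, penalty_fee, prior_wins):
    """Eq. (3): per-MB price an SS pays for bidding-pool resource in one round."""
    return flat_fee + penalty_fee * prior_wins

def partition_bidding_pool(unsatisfied, pool_bw):
    """Eq. (4): split the bidding pool in proportion to unsatisfied demand."""
    total = sum(unsatisfied.values())
    return {ss: d * pool_bw / total for ss, d in unsatisfied.items()}

unsat = {"SS1": 49, "SS2": 8.9, "SS3": 14, "SS4": 17, "SS5": 50}  # MB, Table 3
wins  = {"SS1": 3, "SS2": 0, "SS3": 1, "SS4": 4, "SS5": 2}        # assumed prior victories

prices = {ss: initial_amount(0.15, 0.01, w) for ss, w in wins.items()}
pools  = partition_bidding_pool(unsat, 40)  # 40 MB bidding pool

print({ss: round(p, 2) for ss, p in prices.items()})
print({ss: round(bw, 2) for ss, bw in pools.items()})
```

Rounding to two decimals gives BP(1) = 14.11 MB and BP(4) = 4.90 MB; the paper quotes the slightly truncated values 14.1 MB and 4.89 MB.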
3.2.2 Determining Initial Cost Matrix

The BS first determines the minimum price for 1 MB of bandwidth for a time duration of 3 seconds (as outlined in Step 1). The BS then determines the initial cost matrix [n, m], which represents the cost of allocating each chunk (portion) of the bidding pool to each SS. Recall that 'n' represents the number of chunks that the bidding pool resource has been split into, while 'm' represents the number of SSs in the system. Each entry of the matrix is calculated as:

[j, i] = Initial Amount_SS(i) × BP(j), where i = 1, 2, …, m and j = 1, 2, …, n   (5)

For example, SS1 requires $0.18 to access 1 MB of bandwidth for a duration of 3 seconds, as calculated in Table 3. Thus the associated cost for SS1 to access BP(1), which comprises a chunk of 14.1 MB of bandwidth, would be $0.18 × 14.1 = $2.53.

Similarly, the minimum costs incurred by SS1 through SS5 when bidding for the bandwidth chunks BP(1) through BP(5) are depicted in the initial cost matrix (Table 4). We will henceforth use this matrix to determine the eligibility of an SS to participate in the bidding process.

Table 4: Initial Cost Matrix

 | SS1 | SS2 | SS3 | SS4 | SS5
BP1 | 0.18×14 = 2.52 | 0.15×14 = 2.1 | 0.16×14 = 2.24 | 0.19×14 = 2.66 | 0.17×14 = 2.4
BP2 | 0.18×2.56 = 0.45 | 0.15×2.56 = 0.375 | 0.16×2.56 = 0.4 | 0.19×2.56 = 0.475 | 0.17×2.56 = 0.425
BP3 | 0.18×4 = 0.72 | 0.15×4 = 0.6 | 0.16×4 = 0.64 | 0.19×4 = 0.76 | 0.17×4 = 0.68
BP4 | 0.18×5 = 0.9 | 0.15×5 = 0.75 | 0.16×5 = 0.8 | 0.19×5 = 0.95 | 0.17×5 = 0.85
BP5 | 0.18×14.4 = 2.6 | 0.15×14.4 = 2.16 | 0.16×14.4 = 2.3 | 0.19×14.4 = 2.7 | 0.17×14.4 = 2.45

3.2.3 Eligibility of SS to Participate in Bidding

The BS determines the eligibility of a SS to participate in the bidding process based on the following factors:

(a) Budget Factor
The budget factor is the first criterion that determines the eligibility of a SS to participate in the bidding process. A SS that cannot afford a specific (or any) pool portion is not eligible to bid for it. Each SS has a certain budget for bidding. Let us suppose that SS1 through SS5 have budgets of $4, $2, $1, $1.5 and $3 respectively. Since a SS cannot exceed its budget during bidding, a SS can only bid for pool portions that cost less than or equal to its budget. Note that a SS with sufficient monetary resources can bid for multiple chunks of bandwidth. Thus SS1, with a budget of $4, can bid for all the chunks from BP(1) through BP(5), whereas SS2, with a budget of $2, can bid only for BP(2), BP(3) and BP(4). Table 5 enumerates the cost that a SS incurs to bid for each chunk of bandwidth (BP(1), …, BP(5)); a '*' denotes the SS's inability to bid for that particular chunk. Thus, due to budget constraints, SS2, SS3 and SS4 are unable to bid for BP(1) and BP(5), while SS1 and SS5 can bid for all bandwidth chunks from BP(1) through BP(5).

Table 5: SS eligibility to bid based on its budget

 | SS1 | SS2 | SS3 | SS4 | SS5
BP1 | 2.52 | * | * | * | 2.4
BP2 | 0.45 | 0.375 | 0.4 | 0.475 | 0.425
BP3 | 0.72 | 0.6 | 0.64 | 0.76 | 0.68
BP4 | 0.9 | 0.75 | 0.8 | 0.95 | 0.85
BP5 | 2.6 | * | * | * | 2.45

(b) BW Demand Factor
In addition to the budgetary constraint, SSs are allowed to bid only for bandwidth chunks that offer bandwidth less than or equal to the demand of the SS. This is enforced in order to prevent resource wastage. The total bandwidth (BW) provided by each chunk or bidding pool portion (BP) over a duration of 3 seconds is calculated as BP(i)×3, leading to 42.3 MB, 7.68 MB, 12.09 MB, 14.67 MB and 43.17 MB of bandwidth available in the five pool portions for 3 seconds each, as depicted in Table 6.

Table 6: Total bandwidth available in each pool for 3 seconds

BP1×3 | BP2×3 | BP3×3 | BP4×3 | BP5×3
42.3 MB | 7.68 MB | 12.09 MB | 14.67 MB | 43.17 MB

Thus SS4, with an unsatisfied BW of 17 MB, can bid only for the bandwidth pools BP(2), BP(3) and BP(4), while SS1 and SS5 can bid for all pool portions. Similarly, SS2, with an unsatisfied BW of 8.9 MB, can bid only for BP(2), while SS3, with an unsatisfied BW of 14 MB, can bid for BP(2) and BP(3) only. Taking both the budget and the BW constraints into consideration, we compute the final cost matrix (Table 7). Once again, a '*' in a table entry denotes the corresponding SS's ineligibility to bid for that particular bandwidth pool (BP). Henceforth we will use the final cost matrix for all further calculations.
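The two eligibility filters can be applied mechanically to the Table 4 costs. The sketch below is our own illustration; it takes the costs, budgets, chunk sizes and unsatisfied demands of Tables 3 through 6 as given data and reproduces the '*' pattern of the final cost matrix (Table 7), with `None` standing in for '*':

```python
ss_names = ["SS1", "SS2", "SS3", "SS4", "SS5"]
cost = {  # Initial Cost Matrix (Table 4), $ per 3-second round
    "BP1": [2.52, 2.1, 2.24, 2.66, 2.4],
    "BP2": [0.45, 0.375, 0.4, 0.475, 0.425],
    "BP3": [0.72, 0.6, 0.64, 0.76, 0.68],
    "BP4": [0.9, 0.75, 0.8, 0.95, 0.85],
    "BP5": [2.6, 2.16, 2.3, 2.7, 2.45],
}
chunk_mb = {"BP1": 42.3, "BP2": 7.68, "BP3": 12.09, "BP4": 14.67, "BP5": 43.17}  # Table 6
budget = {"SS1": 4.0, "SS2": 2.0, "SS3": 1.0, "SS4": 1.5, "SS5": 3.0}
unsat_mb = {"SS1": 49, "SS2": 8.9, "SS3": 14, "SS4": 17, "SS5": 50}              # Table 3

final = {}
for bp, costs in cost.items():
    row = {}
    for ss, c in zip(ss_names, costs):
        affordable = c <= budget[ss]          # (a) budget factor
        fits = chunk_mb[bp] <= unsat_mb[ss]   # (b) BW demand factor
        row[ss] = c if (affordable and fits) else None
    final[bp] = row

for bp in sorted(final):
    print(bp, {ss: ("*" if v is None else v) for ss, v in final[bp].items()})
```

Running this yields exactly the star pattern of Table 7: for instance, SS2 survives only on BP(2), and SS3 loses BP(4) on the bandwidth-demand filter even though it could afford it.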
Table 7: Final Cost Matrix

 | SS1 | SS2 | SS3 | SS4 | SS5
BP1 | 2.52 | * | * | * | 2.4
BP2 | 0.45 | 0.375 | 0.4 | 0.475 | 0.425
BP3 | 0.72 | * | 0.64 | 0.76 | 0.68
BP4 | 0.9 | * | * | 0.95 | 0.85
BP5 | 2.6 | * | * | * | 2.45

3.2.4 Determining Expected Revenue, Initial Bid Price and Profit Margin

The next step is to estimate the minimum revenue, i.e. the expected revenue (Step 4c), that the BS expects to make in the bidding game. To estimate it, we need to calculate the initial bid price (Step 4a) and the profit margin (Step 4b). The BS calculates the initial bidding price for each bandwidth chunk (or BP) and also estimates the profit margin corresponding to each BP that it expects to gain at the end of the bidding. For example, using Table 7 as a reference, the BS will estimate the profit margin that it would acquire by allocating the bandwidth chunk BP(1) to either SS1 or SS5 (note that both these SSs are eligible to bid for BP(1)). The BS will of course allocate the chunk to the SS that maximizes its profit, making that particular SS the winner of the bid.

(a) Initial Bid Price
The initial bid price for each pool portion is equal to the minimum cost that an eligible SS has to pay for that particular pool portion. Considering the final cost matrix from Table 7, the initial bid prices are calculated as:

Initial bid price of BP(row) = min { [row, column] } ∀ column ∈ {1,…,m} and [row, column] ≠ '*'   (6)

Thus:
Initial bid price of BP(1) = min { [1, SS1], [1, SS5] } = $2.4
Initial bid price of BP(2) = $0.375
Initial bid price of BP(3) = $0.64
Initial bid price of BP(4) = min { [4, SS1], [4, SS4], [4, SS5] } = $0.85
Initial bid price of BP(5) = min { [5, SS1], [5, SS5] } = $2.45

(b) Profit Margin
Our profit margin calculation is inspired by the standard profit margin calculations in a Nash bidding strategy, as outlined in any standard economics textbook such as [7]. The BS calculates its profit margin for each chunk of the bandwidth pool (or BP) as follows.

(i) First, the minimum cost that the BS can earn by auctioning its entire bandwidth in the reserved pool is calculated. This cost is represented as CC*. In order to calculate CC*, the BPs are arranged in decreasing order of magnitude; referring to Table 6, we have BP(5) > BP(1) > BP(4) > BP(3) > BP(2). Each bandwidth pool (or BP) is then assigned the SS that is eligible to bid for that particular BP AND offers the minimum price for it. Note that an SS cannot be associated with two different BPs even if it is the lowest bidder on both of them; again, this is enforced to ensure fairness. Thus, referring to the final cost matrix in Table 7, BP(5) is associated with the minimum bidder, SS5, amongst its eligible bidders SS1 and SS5. The second biggest bandwidth pool is BP(1), which is associated with SS1. (Note that though SS5 had a lower bid on BP(1) than SS1, SS5 has already been associated with BP(5) and cannot be associated again with BP(1).) Similarly, applying the above policy, BP(4), BP(3) and BP(2) are associated with SS4, SS3 and SS2 respectively.

Thus, the minimum cost incurred by the BS for allocating all its BPs for auctioning is:

CC* = SS5's cost for BP(5) + SS1's cost for BP(1) + SS4's cost for BP(4) + SS3's cost for BP(3) + SS2's cost for BP(2)   (7)

Plugging in the values from Table 7, we have:
CC* = $2.45 + $2.52 + $0.95 + $0.64 + $0.375 = $6.935.

(ii) The BS now iteratively calculates a CC* value corresponding to each of the bandwidth pool portions (BPs). To do so, it eliminates that particular bandwidth chunk (BP) and computes another minimum-cost assignment without it. We denote these calculations CC*BP(i), ∀ i ∈ {1,…,n}. For example:

CC*BP(1) = SS5's cost for BP(5) + SS1's cost for BP(4) + SS3's cost for BP(3) + SS2's cost for BP(2)   (8)

Plugging in the values from Table 7, we have:
CC*BP(1) = $2.45 + $0.90 + $0.64 + $0.375 = $4.365.

(iii) Finally, the profit margin associated with each bandwidth chunk is calculated as:

Π BP(i) = CC* − CC*BP(i), ∀ i ∈ {1,…,n}   (9)

Thus, the profit margin associated with BP(1) is
Π BP(1) = CC* − CC*BP(1) = 6.935 − 4.365 = $2.57.

(c) Expected Revenue
The BS then calculates the expected revenue (ER) associated with each bandwidth pool chunk, i.e. the revenue it expects to earn by auctioning that particular chunk of bandwidth:

ER BP(i) = Π BP(i) + initial bid price of BP(i), ∀ i ∈ {1,…,n}   (10)

Thus, ER BP(1) = initial bid price of BP(1) + Π BP(1) = $2.4 + $2.57 = $4.97. The expected revenues for the other bandwidth pools are determined similarly. We use the expected revenue to discourage a SS from excessively overbidding for a particular pool portion. During the bidding game, once a SS's bid reaches the expected revenue associated with the bandwidth pool it is bidding for, the BS allocates that bandwidth pool to that SS. If more than one SS reaches the expected revenue for a pool portion, the highest bidder amongst them gets the bandwidth. Thus we prevent a scenario of "survival of the wealthiest".
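The CC* computation in step (i) is a greedy minimum-cost assignment: walk the pools in decreasing order of size and give each to its cheapest eligible, still-unassigned SS. The sketch below is our own (names are ours); it recomputes CC*, CC*BP(1) and hence Π BP(1) = $2.57 directly from the Table 7 entries:

```python
final = {  # Final Cost Matrix (Table 7); None marks an ineligible SS ('*')
    "BP1": {"SS1": 2.52, "SS2": None, "SS3": None, "SS4": None, "SS5": 2.4},
    "BP2": {"SS1": 0.45, "SS2": 0.375, "SS3": 0.4, "SS4": 0.475, "SS5": 0.425},
    "BP3": {"SS1": 0.72, "SS2": None, "SS3": 0.64, "SS4": 0.76, "SS5": 0.68},
    "BP4": {"SS1": 0.9, "SS2": None, "SS3": None, "SS4": 0.95, "SS5": 0.85},
    "BP5": {"SS1": 2.6, "SS2": None, "SS3": None, "SS4": None, "SS5": 2.45},
}
ORDER = ["BP5", "BP1", "BP4", "BP3", "BP2"]  # decreasing pool size (Table 6)

def min_cost_assignment(pools):
    """Assign each pool, largest first, to its cheapest eligible unassigned SS."""
    taken, total = set(), 0.0
    for bp in pools:
        winner = min((ss for ss, c in final[bp].items()
                      if c is not None and ss not in taken),
                     key=lambda ss: final[bp][ss])
        taken.add(winner)
        total += final[bp][winner]
    return total

cc_star = min_cost_assignment(ORDER)                               # Eq. (7)
cc_no_bp1 = min_cost_assignment([p for p in ORDER if p != "BP1"])  # Eq. (8)
profit_bp1 = cc_star - cc_no_bp1                                   # Eq. (9)
print(round(cc_star, 3), round(cc_no_bp1, 3), round(profit_bp1, 2))
```

Note how dropping BP(1) frees SS1, which then takes BP(4) at $0.90 in place of SS4's $0.95, matching the per-term breakdown of Eq. (8).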
3.2.5 The Auction

The BS broadcasts the initial bidding prices (calculated in Step 4a) for each BP to all eligible SSs with unsatisfied bandwidth demands. Each SS adds a random amount to the initial bid price of each pool portion that it is eligible to bid for; while doing this, a SS can never exceed its budget. The SS then makes a price offer to the BS. If the price meets the BS's calculated ER value for that specific bandwidth pool, the SS is granted access to that bandwidth pool and the pool is taken off the auction. Note that though a SS can bid for different pool portions, it is allowed to win only one bandwidth pool per game.

3.2.6 Resource Distribution by SS

Once a SS wins a bandwidth pool that it bid for, it distributes the bandwidth amongst its users based on the offers made by the individual users. GOLD class users with delay-sensitive and resource-intensive applications (RtPS or UGS) have the highest budgets, hence bid higher than other users, and hence have a higher probability of winning the biggest chunk of the resource. It is at this level that the budget based on priority class plays an important role. Also note that users, and hence SSs, can bid for multiple resource pools but can win only one. This is done to prevent a particular user from hogging the entire network resource and to ensure fairness amongst users of the various priority classes.

3.3 Nash Equilibrium is Reached in our Resource Allocation Scheme – Discussion

The above resource allocation scheme has been designed such that it attains Nash Equilibrium. A rigorous proof of this statement can be found in [8]; in this section we provide a qualitative discussion. Nash Equilibrium (NE) is a concept in game theory that has been widely used in various areas of engineering for attaining optimal solutions to a variety of problems. NE is a profile of strategies in which each player's strategy is an optimal response to all the other players' strategies. Therefore, when a system attains NE, a player's payoff does not increase if that player unilaterally deviates from the NE strategy. In this paper, we ensure that the bidding "game" reaches NE by enforcing the following rules, which must be followed by the users, SSs and BS participating in the game:

1. Users with stringent delay and jitter requirements are allocated higher budgets, increasing their chances to win, because a user cannot exceed its budget while bidding.
2. Users are allowed to bid only for those bidding pools that meet their demand. They are not allowed to bid for more resources than they require, which avoids unnecessary resource wastage.
3. The BS calculates the Expected Revenue (ER) at the beginning of the bidding game to prevent users and SSs from overbidding or underbidding.
4. In case the BS's ER price cannot be met by any SS, the BS still has to allocate the bandwidth pool to the highest

Qualitatively, we attempt to prove to the reader that our resource allocation scheme attains NE by showing that:
1. The bidding format leads to a resource allocation that generates the maximum revenue that could be earned from a bandwidth pool.
2. No user can benefit by changing its bid offer unilaterally.

3.3.1 Necessary Condition for NE

From the game theory literature, we know that our resource allocation scheme can claim to attain Nash Equilibrium if the scheme can guarantee that it generates the maximum revenue that can possibly be generated by allocating a certain BP to a certain SS [8],[9]; no other allocation could have maximized the revenue earned by the BS. Let us assume that our resource allocation scheme allocates BP(i) to SS(a), ∀ i ∈ {1,…,n} and a ∈ {1,…,m}. For our scheme to have attained NE,

Revenue{BP(i) → SS(a)} ≥ Revenue{BP(y) → SS(x)}   (11)

for any combination of 'x' and 'y', ∀ x ∈ {1,…,m} and ∀ y ∈ {1,…,n}.

Now consider the bandwidth pools BP(1) and BP(5). SS2, SS3 and SS4 cannot bid for these pools because of budget and bandwidth limitations; hence those allocations are not possible. From Table 7, we determine that SS2 is allocated BP(2). SS3 could have been allocated BP(2) or BP(3), but was preferably allocated BP(3), as BP(3) offered the bigger bandwidth chunk, which catered to the high unsatisfied bandwidth requirement of SS3. SS4 could have been allocated BP(2), BP(3) or BP(4), but was allocated BP(4), as that particular bandwidth chunk best suits its needs. Hence the allocation of SS2 to BP(2), SS3 to BP(3) and SS4 to BP(4) is the best possible allocation abiding by the bidding rules. This allocation will generate a revenue of $0.98 + $1.00 + $1.50 = $3.48.

On the other hand, SS1 and SS5 have the resources to bid for all bandwidth pools and hence would choose to bid for the biggest chunks of bandwidth, namely BP(5) and BP(1). Thus two possible final combinations could arise, namely SS1 to BP(1) and SS5 to BP(5), or SS1 to BP(5) and SS5 to BP(1). Let us examine the revenues (R1 and R2) generated by each of these combinations, along with the already decided combinations of SS2 to BP(2), SS3 to BP(3) and SS4 to BP(4):

R1 = (SS1-BP(1), (SS2-BP(2), SS3-BP(3), SS4-BP(4)), SS5-BP(5)) = $4.79 + $3.48 + $3 = $11.48 and
R2 = (SS1-BP(5), (SS2-BP(2), SS3-BP(3), SS4-BP(4)), SS5-BP(1)) = $5.23 + $3.48 + $3 = $11.78.

Thus, we find R2 > R1. However, SS1 has higher demand requirements and the highest budget, hence is the probable winning combination, as
bidder. SS1 will outbid SS5 for BP(5).
5. The BS penalizes SS and SS further penalizes users for Thus is the winning combination that generates the
participating in bidding game again when they have won highest revenue possible than any other combination (we
resources from the previous games. This is to ensure fair have argued that other combinations will be eliminated as
allocation of resources to all SS and hence users. The per the bidding rules) and it is the allocation that our
penalty can be reset after a certain period of time. scheme proposes. We have thus qualitatively and
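The pool-eligibility part of these rules can be sketched in a few lines of Python. The pool sizes, the expected-revenue prices, and the helper name `eligible_pools` are illustrative assumptions, not values from the paper:

```python
def eligible_pools(demand_mb, budget, pools):
    """Return the bandwidth-pools an SS may bid for under the rules above:
    the pool must meet the SS's demand (rule 2) and the expected price set
    by the BS must be within the SS's budget (rule 1)."""
    return [name
            for name, (size_mb, expected_price) in pools.items()
            if size_mb >= demand_mb and expected_price <= budget]

# Illustrative pools: name -> (size in MB, BS's expected-revenue price).
pools = {
    "BP(1)": (60, 4.50),
    "BP(2)": (10, 0.90),
    "BP(3)": (20, 1.00),
    "BP(4)": (30, 1.40),
    "BP(5)": (60, 5.00),
}

# An SS with modest demand and budget is shut out of the large pools.
print(eligible_pools(demand_mb=15, budget=1.20, pools=pools))  # ['BP(3)']
```

Rule 2 may also bar an SS from pools far larger than its demand; the sketch only checks the lower bound, which is the part of the rule the revenue argument above relies on.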
(IJCNS) International Journal of Computer and Network Security, 37
Vol. 2, No. 6, June 2010

numerically proven that our resource allocation scheme attains Nash Equilibrium.

3.3.2 No user can gain by changing its strategy (bid offer) unilaterally
We have already numerically shown in the previous section that the winning combination generates the maximum revenue. The SSs' bidding strategies are restricted by their bandwidth requirement and budget. Hence SS2, SS3 and SS4 cannot bid for pool portions BP(1) and BP(5); SS2 cannot bid for BP(4) due to its lower bandwidth requirement, though it has a higher budget than SS4; SS3 can only bid for BP(3) because its demand requirement is higher than other bandwidth-pools, namely BP(2), and for the same reason SS4 was best allocated BP(4). Again, SS1 and SS5 would maximize the revenue only by opting for BP(1) and BP(5). Any other combination of SSs and BPs will not lead to a higher revenue or more apt bandwidth need fulfillment. Thus no SS will gain by changing its strategy and bidding for a bandwidth-pool other than what our resource allocation scheme spells out. We have thus proved the second required criterion for our scheme to have attained NE.

4. Simulation Set up and Results
Our resource allocation scheme aims at providing applications with the bandwidth they require without confining them to the pre-determined bandwidth allocation that WiMAX decides for them based on their service class. Table 8 tabulates the simulation parameters.

Table 8: Simulation parameters

Traffic Type: Constant Bit Rate traffic
Total available bandwidth: 180 MB
Bandwidth available in fixed pool for the fixed bandwidth algorithm: 180 MB
Bandwidth available in fixed pool for our scheme: 120 MB
Bandwidth available in bidding pool for our scheme: 60 MB
Number of SS in the system: 25
Max. number of users per SS: 18
Penalty for participating in repeated bidding (by SS): $0.01
Penalty for participating in repeated bidding (by user): $0.006
Demand of Gold Class users: 1.5 MB
Demand of Silver Class users: 1.5 MB
Demand of Bronze (NRtPS and BE) users: 1.5 MB

In addition, our scheme also maximizes the revenue earned by ISPs by offering this customized bandwidth allocation at a monetary cost. We compare our "bidding" resource allocation scheme with WiMAX's "fixed" resource allocation scheme, where the entire available bandwidth is chopped up into four chunks for the four different traffic classes. WiMAX charges a flat rate from all users of all classes.

Figures 3(a) and 3(b) demonstrate the other important aspect of our scheme, which is revenue maximization. The X-axis represents systems with an increasing number of GOLD and SILVER users. The Y-axis denotes the revenue earned from these users. We see that WiMAX's fixed allocation and flat-fee-for-all-users scheme leads to higher revenue as the number of users in the system increases, but the "bidding" scheme leads to a much higher (as much as ~50%) revenue earned, as it forces users to pay proportionally to their resource demand.

Figure 3(a): Revenue earned by varying number of gold users
Figure 3(b): Revenue earned by varying number of silver users

Figures 4(a) and 4(b) compare bandwidth allocation to the UGS and RtPS traffic classes in our bidding scheme and the fixed scheme of WiMAX. Our bidding scheme further classifies UGS and RtPS traffic into four priority classes based on the applications' bandwidth and delay requirements. The bidding scheme takes into account these varying requirements and proportionally allocates bandwidth to these four different classes within the UGS and RtPS class (denoted by the brown bars), whereas WiMAX's fixed allocation scheme does not make this distinction and allocates bandwidth equally to all applications within the UGS and RtPS class (as denoted by the green bars).

Figure 5(a) shows a distribution of bandwidth by the BS in a system comprising 80% GOLD and SILVER users and 20% BRONZE users. The available bandwidth is 180 MB and the demand on the bandwidth amongst all the users is 250 MB.

Figure 4(a): Comparison of bandwidth allocated to Gold users via WiMAX's "Fixed" and our proposed "Bidding" Resource Allocation Scheme (green bars denote the WiMAX scheme while the brown bars denote our Bidding Scheme)
Figure 4(b): Comparison of bandwidth allocated to Silver users via WiMAX's "Fixed" and our proposed "Bidding" Resource Allocation Scheme (green bars denote the WiMAX scheme while the brown bars denote our Bidding Scheme)

Figure 5(b) shows a distribution of bandwidth by the BS in a system comprising 50% GOLD and SILVER users and 50% BRONZE users. The available bandwidth is 180 MB and the demand on the bandwidth amongst all the users is 180 MB. Figure 5(c) shows a distribution of bandwidth by the BS in a system comprising 20% GOLD and SILVER users and 80% BRONZE users. The available bandwidth is 180 MB and the demand on the bandwidth amongst all the users is 240 MB.

We showcase that even under such stringent bandwidth demand requirements, our scheme delivers bandwidth allocation in the most optimum fashion. The blue bars in Figure 5 denote the total bandwidth demand of the different traffic classes (marked on the X-axis), the green bars denote the bandwidth allocation under WiMAX's fixed scheme, whereas the brown bars denote the bandwidth allocation as per our bidding scheme. As evident from Figure 5, the bandwidth demand of UGS and RtPS traffic is met more efficiently in our scheme as compared to WiMAX's fixed scheme.

Figure 5(a): Bandwidth allocated by WiMAX's Fixed and our Bidding scheme to a system comprising 80% Gold and Silver users.
Figure 5(b): Bandwidth allocated by WiMAX's Fixed and our Bidding scheme to a system comprising 50% Bronze users.
Figure 5(c): Bandwidth allocated by WiMAX's Fixed and our Bidding scheme to a system comprising 80% Bronze users.

In addition, the nRtPS and BE traffic doesn't starve either. This is because, based on the traffic mix, the BS intelligently divides the available bandwidth into a ratio of Fixed Pool : Bidding Pool depending on the percentage of Gold, Silver and Bronze users. For example, if the system has 80% Gold and Silver users as in Figure 5(a), the BS divides the available bandwidth into a 20:80 ratio of Fixed Pool : Bidding Pool, so that Gold and Silver users get as much resources as possible to satisfy their demands. However, if 80% of users are Bronze, then the BS divides the available bandwidth into an 80:20 ratio of Fixed Pool : Bidding Pool. Thus more resources are available for the Bronze users to meet their needs even after catering to the GOLD and SILVER users efficiently.

We have experimented with several traffic mixes and our scheme always emerges the winner in terms of resource allocation when compared to WiMAX's fixed scheme.
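The exact Fixed:Bidding split rule is left open by the paper (its mathematical formulation is stated as future work in the Conclusion). As a hedged sketch, one simple choice consistent with the two data points above (20:80 for 80% Gold/Silver users, 80:20 for 80% Bronze users) is to give the bidding pool a share equal to the Gold/Silver percentage:

```python
def pool_split(total_bw_mb, gold_silver_pct):
    """Split total bandwidth into (fixed_pool, bidding_pool) sizes in MB.

    Assumption: the bidding pool's share equals the percentage of
    Gold/Silver users, which reproduces the 20:80 and 80:20 examples
    in the text; the paper does not commit to an exact formula."""
    bidding = total_bw_mb * gold_silver_pct / 100
    fixed = total_bw_mb - bidding
    return fixed, bidding

print(pool_split(180, 80))  # (36.0, 144.0) -> a 20:80 Fixed:Bidding split
print(pool_split(180, 20))  # (144.0, 36.0) -> an 80:20 Fixed:Bidding split
```

Any monotone rule with these endpoints would serve; the linear interpolation is only the simplest candidate.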

Figures 6(a) and 6(b) provide enough evidence to support this argument. The figures show that our scheme (indicated by the brown bar) allocates higher resources to UGS and RtPS requirements as the number of Silver and Gold users in the system increases, and hence performs better than the WiMAX fixed scheme (indicated by the green bar).

Figure 6(a): Resource allocated to UGS classes by WiMAX fixed scheme (Green Bar) and Bidding scheme (Red Bar)
Figure 6(b): Resource allocated to RtPS classes by WiMAX fixed scheme (Green Bar) and Bidding scheme (Red Bar)

We have also compared the performance of our scheme in satisfying BE (Figure 7(b)) and nRtPS (Figure 7(a)) requirements with increasing Gold, Silver and Bronze users. As the number of Gold and Silver users increases, more resources are allocated to them and less to Bronze users. The simulation result in Figure 7 shows that the existing WiMAX scheme performs slightly better than the Bidding scheme when it comes to Bronze users. This is because the Bidding scheme allocates more resources to Gold and Silver users than the WiMAX fixed scheme.

Figure 7(a): Resource allocated to nRtPS by WiMAX fixed scheme (Green Line) and Bidding scheme (Red Line)
Figure 7(b): Resource allocated to BE by WiMAX fixed scheme (Green Line) and Bidding scheme (Red Line)

5. Conclusion
In this paper we have addressed the issue of broad traffic classification in WiMAX, which we perceive as inappropriate in today's world of varied applications and their resource requirements. We suggest a finer-grained traffic classification within the realms of WiMAX's service classes. We devise a resource scheduling algorithm using a combination of WiMAX's fixed resource allocation scheme and a Nash equilibrium based bidding scheme. We have framed rules and regulations to ensure that optimum resource allocation is achieved, without wasting any resources.
Revenue maximization is an offspring of our resource allocation scheme, achieved by taking advantage of a user's willingness to pay a higher price for extra resources. Simulation results show that our "bidding" resource allocation scheme not only allocates resources optimally and more efficiently than WiMAX's fixed resource allocation scheme but also maximizes the revenue earned by the ISPs for the same bandwidth allocation to users. Our future research entails mathematically formulating the partitioning of bandwidth into the fixed and the bidding pool based on the traffic mix to optimize the resource allocation.

References
[1] ANSI/IEEE Std 802.16, "IEEE 802.16 Standard: Broadband Wireless Metropolitan Area Network," 2000.
[2] Ha'ikel Yaiche and Ravi R. Mazumdar, "A Game Theoretic Framework for Bandwidth Allocation and Pricing in Broadband Networks," IEEE/ACM Transactions on Networking, October 2000.
[3] Isto Kannisto, Timo Hamalainen, Jyrki Joutsensalo, Eero Wallenius and TeliaSonera, "Adaptive Scheduling Method for WiMAX Basestations," in Proceedings of the European Wireless Conference, 2007.
[4] Jyrki Joutsensalo, Ari Viinikainen, Mika Wikström and Timo Hämäläinen, "Bandwidth allocation and pricing in multimode network," in Proceedings of IEEE Advanced Information Networking and Applications (AINA), 2006.

[5] Aymen Belghith, Loutfi Nuaymi and Patrick Maillé, "Pricing of Real-Time Applications in WiMAX Systems," in Proceedings of the IEEE Vehicular Technology Conference, Calgary, British Columbia, Canada, Fall 2008.
[6] Siamak Ayani and Jean Walrand, "Increasing Wireless Revenue with Service Differentiation," in Proceedings of the 3rd ACM Q2SWiNet, 2007.
[7] Richard Engelbrecht-Wiggans and Martin Shubik, Auctions, Bidding, and Contracting: Uses and Theory (Studies in Game Theory and Mathematical Economics), New York University Press, 1983.
[8] Haili Song, Chen-Ching Liu and Jacques Lawarrée, "Nash Equilibrium Bidding Strategies in a Bilateral Electricity Market," IEEE Transactions on Power Systems, February 2002.
[9] Ken Binmore, Game Theory: A Very Short Introduction, Oxford University Press, USA, 2007.
[10] A. Belghith and L. Nuaymi (ENST Bretagne, Rennes), "Comparison of WiMAX Scheduling Algorithms and Proposals for the rtPS QoS Class," in Proceedings of the 14th European Wireless Conference (EW 2008), June 2008.

Authors Profile

Mahasweta Sarkar joined the department of Electrical and Computer Engineering at San Diego State University as an Assistant Professor in August 2006, after receiving her Ph.D. from UCSD in December 2005. She received her Bachelor's degree in Computer Science (Summa Cum Laude) from San Diego State University, San Diego in June 2000. Prior to joining SDSU, she worked as a Research Scientist at SPAWAR Systems Center, Point Loma, San Diego. Her research interests include protocol designs, flow control, scheduling and power management issues in Wireless Local Area Networks.

Padmasini Chandramouli is currently pursuing a Master's Degree in Electrical Engineering at San Diego State University. She received her Bachelor's degree in Electrical Engineering from Nagarjuna University, India in April 2003. Her areas of interest are scheduling in WiMAX, MAC sublayer design and VLSI system design.

Relative Performance of Scheduling Heuristics in Heterogeneous Computing Environment

Ehsan Ullah Munir1, Shengfei Shi2, Zhaonian Zou2, Muhammad Wasif Nisar1, Kashif Ayyub1 and Muhammad Waqas Anwar3

1 Department of Computer Science, COMSATS Institute of Information Technology, Quaid Avenue, Wah Cantt 47040, Pakistan
ehsanmunnir@gmail.com
2 School of Computer Science and Technology, Harbin Institute of Technology, Harbin, 150001, PR China
3 Department of Computer Science, COMSATS Institute of Information Technology, Abbottabad 22060, Pakistan

Abstract: Heterogeneous computing (HC) environment consists of different resources connected with high-speed links to provide a variety of computational capabilities for computing-intensive applications having multifarious computational requirements. The problem of optimal assignment of tasks to machines in an HC environment is proven to be NP-complete, requiring the use of heuristics to find a near-optimal solution. In this work we conduct a performance study of task scheduling heuristics in an HC environment. Overall we have implemented 16 heuristics, among them 7 proposed in this paper. Based on experimental results we specify the circumstances under which one heuristic will outperform the others.

Keywords: Heterogeneous computing, Task scheduling, Performance evaluation, Task Partitioning heuristic

1. Introduction
Heterogeneous computing (HC) environment consists of different resources connected with high-speed links to provide a variety of computational capabilities for computing-intensive applications having multifarious computational requirements [1]. In an HC environment an application is decomposed into various tasks, and each task is assigned to the machine best suited for its execution in order to minimize the total execution time. Therefore, an efficient assignment scheme responsible for allocating the application tasks to the machines is needed; formally this problem is named task scheduling [7]. Developing such strategies is an important area of research and it has gained a lot of interest from researchers [3, 19, 20]. The problem of task scheduling has gained tremendous attention and has been extensively studied in other areas such as computational grids [8] and parallel programs [14].

The problem of an optimal assignment of tasks to machines is proven to be NP-complete, requiring the use of heuristics to find a near-optimal solution [4, 12]. A plethora of heuristics has been proposed for the assignment of tasks to machines in an HC environment [13, 15, 17, 18, 22]. Each heuristic makes different underlying assumptions to produce a near-optimal solution; however, no work reports which heuristic should be used for a given set of tasks to be executed on different machines.

Provided with a set of tasks {t1, t2,…,tm}, a set of machines {m1, m2,…,mn}, and the expected time to compute of each task ti on each machine mj, ETC(ti, mj) (1 ≤ i ≤ m, 1 ≤ j ≤ n), in the current study we find the task assignment strategy that gives the minimum makespan.

For task selection in a heterogeneous environment different criteria can be used, e.g. the minimum, maximum or average of the expected execution time across all machines. In the current work we propose a new heuristic based on task partitioning, which considers the minimum (min), maximum (max), average (avg), median (med) and standard deviation (std) of the expected execution time of a task on different machines as selection criteria. We call each selection criterion a key. Each heuristic uses only one key. The scheduling process for the proposed heuristics works like this: all the tasks are sorted in decreasing order of their key, then these tasks are partitioned into k segments, and after this scheduling is performed in each segment.

A large number of experiments were conducted on synthetic datasets; the Coefficient of Variation (COV) based method was used for generating the synthetic datasets, which provides greater control over the spread of heterogeneity [2]. A comparison among existing heuristics is conducted and new heuristics are proposed. Extensive simulation results illustrate the circumstances when one heuristic would outperform the other heuristics in terms of average makespan. To the best of our knowledge, there is no literature available that proposes, for a given ETC, which heuristic should be used; so this work is a first attempt at this problem.

2. Related Work
Many heuristics have been developed for task scheduling in heterogeneous computing environments. Min-min [9] gives the highest priority for scheduling to the task which can be completed earliest. The idea behind the Min-min heuristic is to finish each task as early as possible and hence it schedules the tasks with the selection criterion of minimum earliest completion time. The Max-min [9] heuristic is very similar to Min-min, but gives the highest priority to the task

with the maximum earliest completion time for scheduling. The idea behind Max-min is to overlap long-running tasks with short-running ones. The Heaviest Task First (HTF) heuristic [23] computes each task's minimum execution time on all machines, and the task with the maximum such execution time is selected. The selected task is the heaviest task among all tasks (note that the Max-min algorithm selects the task with the latest minimum completion time, which may not be the heaviest one). This heaviest task is then assigned to the machine on which it has the minimum completion time. Eight dynamic mapping heuristics are given and compared in [13]; however, the problem domain considered there involves priorities and multiple deadlines. In the Segmented Min-min heuristic [22] the tasks are divided into four groups based on their minimum, maximum or average expected execution time, and then Min-min is applied on each group for scheduling. The Sufferage heuristic [17] is based on the idea that better mappings can be generated by assigning a task to the machine that would suffer most in terms of expected completion time if that particular machine were not assigned to it. Sufferage assigns each task its priority according to its sufferage value. For each task, the sufferage value is equal to the difference between its best completion time and its second-best completion time. The detailed procedure and its comparison with some widely used heuristics are presented in [17]. In [6] a new criterion to minimize the completion time of non-makespan machines is introduced. It is noted that although the completion time of a non-makespan machine can be reduced, this can increase the overall system makespan as well. Dynamic mapping heuristics for a class of independent tasks are studied in [17] and three new heuristics are proposed. Balanced Minimum Completion Time (BMCT), a heuristic for scheduling independent tasks, is given and compared in [18]; it works in two steps: in the first step it assigns each task to the machine which minimizes the execution time, and in the second phase it tries to balance the load by swapping tasks in order to minimize the completion time. In [5] a comparison of eleven heuristics is given and the Min-min heuristic is declared the best among all the heuristics considered, based on the makespan criterion. A minimum standard deviation first heuristic is proposed in [16], where the task having the minimum standard deviation is scheduled first.

The work in this research differs from the related work in that here different keys are used as a selection criterion suiting the ETC type. Moreover, for any given ETC we provide a near-optimal solution by using the heuristic best suited to that specific ETC type.

3. Problem Statement
Let T = {t1, t2,…,tm} be a set of tasks, M = {m1, m2,…,mn} be a set of machines, and let the expected time to compute (ETC) be an m × n matrix where the element ETCij represents the expected execution time of task ti on machine mj. For clarity, we denote ETCij by ETC(ti, mj) in the rest of the paper. The machine availability time, MAT(mj), is the earliest time at which machine mj can complete the execution of all the tasks that have previously been assigned to it (based on the ETC entries for those tasks). The completion time (CT) of task ti on machine mj is equal to the execution time of ti on mj plus the machine availability time of mj, i.e.

CT(ti, mj) = ETC(ti, mj) + MAT(mj)

Makespan (MS) is equal to the maximum value of the completion times of all tasks, i.e.

MS = max MAT(mj) for (1 ≤ j ≤ n)

Provided with T, M and ETC, our objective is to find the task assignment strategy that minimizes the makespan.

4. Task Partitioning Heuristic
In a heterogeneous environment different criteria can be used for task selection; examples are the minimum, maximum or average of the expected execution time across all machines. In the task partitioning heuristic we use the minimum (min), maximum (max), average (avg), median (med) and standard deviation (std) of the expected execution time of a task on different machines as selection criteria, hereafter referred to as keys. Given a set of tasks T = {t1, t2,…,tm}, a set of machines M = {m1, m2,…,mn} and an expected time to compute (ETC) matrix, the working of the proposed heuristic can be explained as follows: we compute the sorting key for each task (for each heuristic only one key is used for sorting), then we sort the tasks in decreasing order of their sorting key. Next the tasks are partitioned into k disjoint equal-sized groups. Last, the tasks in each group gx are scheduled using the following procedure:

Procedure 1
a) For each task ti in group gx, find the machine mj which completes the task earliest.
b) If machine mj is available, i.e. no task is assigned to the machine, then assign the task to the machine and remove it from the list of tasks.
c) If a task tk is already assigned to the machine, i.e. machine mj is not available, then compute the difference between the minimum earliest completion time (CT) and the second smallest earliest CT over all machines for ti and tk respectively.
   1. If the difference value for ti is larger than that of tk, then ti is assigned to machine mj.
   2. If the difference value for ti is less than that of tk, then no change is made to the assignment.
   3. If the differences are equal, we compute the difference between the minimum earliest completion time and the third smallest earliest completion time for ti and tk respectively, and repeat 1-3. Every time step 3 is selected, the difference between the minimum earliest completion time and the next earliest completion time (the fourth, the fifth, …) for ti and tk is computed. If all the differences are the same, then the task is selected deterministically, i.e. the oldest task is chosen.
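The MAT, CT and makespan definitions of the Problem Statement can be checked with a small sketch; the tiny ETC matrix and the assignment below are made up for illustration:

```python
def makespan(schedule, etc, n_machines):
    """Compute the makespan of a schedule, following the definitions above:
    each assignment advances MAT(mj) by ETC(ti, mj), so that
    CT(ti, mj) = ETC(ti, mj) + MAT(mj), and MS = max_j MAT(mj)."""
    mat = [0] * n_machines             # machine availability times
    for task, machine in schedule:     # tasks in scheduling order
        mat[machine] += etc[task][machine]
    return max(mat)

# A 3-task, 2-machine ETC matrix (rows: tasks, columns: machines).
etc = [[3, 5],
       [2, 4],
       [6, 1]]

# Assign t0 and t1 to m0, and t2 to m1: MAT becomes [5, 1].
print(makespan([(0, 0), (1, 0), (2, 1)], etc, n_machines=2))  # 5
```

Any heuristic in this paper can be scored by feeding its resulting schedule through such a function and comparing makespans.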

Now the proposed Task Partitioning algorithm can be summed up in the following steps:

Task Partitioning Heuristic
1. Compute the sorting key for each task:
   Sub-policy 1 (avg): compute the average value of each row in the ETC matrix, keyi = ∑j ETC(ti, mj) / n.
   Sub-policy 2 (min): compute the minimum value of each row in the ETC matrix, keyi = minj ETC(ti, mj).
   Sub-policy 3 (max): compute the maximum value of each row in the ETC matrix, keyi = maxj ETC(ti, mj).
   Sub-policy 4 (med): compute the median value of each row in the ETC matrix, keyi = medj ETC(ti, mj).
   Sub-policy 5 (std): compute the standard deviation value of each row in the ETC matrix, keyi = stdj ETC(ti, mj).
2. Sort the tasks in decreasing order of their sorting key (for each heuristic only one key is used for sorting).
3. Partition the tasks evenly into k segments.
4. Apply Procedure 1 to schedule each segment.

A scenario ETC is given in Table 1 to describe the working of the proposed heuristic. All machines are assumed to be idle for this case. The sorting key used for the algorithm is the average (avg), i.e. tasks are sorted in decreasing order of their average value. Table 2 shows the task partitioning; the tasks are partitioned into three segments, which implies k = 3. Table 3 shows how the results are derived using Procedure 1. Figure 1 gives a visual representation of the task assignment for the proposed heuristic.

Table 1: Scenario ETC matrix

Task no  m1  m2  m3  m4
t1       17  19  31  17
t2        2   4   2   5
t3       18  11  12   7
t4        3   4   6  13
t5        4   2   2   3
t6       10   9  11   7
t7       13  26  28  10
t8        9   6   4   4
t9       10  13   8   5
t10       5   4   7   9
t11       7   9   6  13
t12      14   6  12   8
t13      14   8  12  20
t14      16   9  16  15
t15      18  11   5   7

Table 2: Task partitioning

Task no  m1  m2  m3  m4  Avg
t1       17  19  31  17  21.00
t7       13  26  28  10  19.25
t14      16   9  16  15  14.00
t13      14   8  12  20  13.50
t3       18  11  12   7  12.00
t15      18  11   5   7  10.25
t12      14   6  12   8  10.00
t6       10   9  11   7   9.25
t9       10  13   8   5   9.00
t11       7   9   6  13   8.75
t4        3   4   6  13   6.50
t10       5   4   7   9   6.25
t8        9   6   4   4   5.75
t2        2   4   2   5   3.25
t5        4   2   2   3   2.75

Table 3: Execution process of Procedure 1 on each group

Execution process on group 1
1st pass    min. CT  difference
t1 → m1     17       0
t14 → m2     9       6
t3 → m4      7       4
2nd pass    min. CT  difference
t7 → m4     17       11
t13 → m3    12       5

Execution process on group 2
1st pass    min. CT  difference
t15 → m3    17       3
t12 → m2    15       9
2nd pass    min. CT  difference
t6 → m2     24       0
t9 → m4     22       3
t11 → m3    23       1

Execution process on group 3
1st pass    min. CT  difference
t4 → m1     20       8
2nd pass    min. CT  difference
t10 → m1    25       3
t8 → m4     26       1
3rd pass    min. CT  difference
t2 → m3     25       2
4th pass    min. CT  difference
t5 → m2     26       1

Figure 1. Visual representation of task assignment in the task partitioning heuristic

4.1 Heuristics Notation
In the task partitioning heuristic tasks are sorted based on average, minimum, maximum, median and standard
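Steps 1-3 of the algorithm (key computation, sorting, and partitioning) can be sketched directly; the per-segment assignment of step 4 (Procedure 1) is omitted here for brevity. Run on the Table 1 ETC with the avg key and k = 3, the sketch reproduces the first segment of Table 2 (t1, t7, t14, t13, t3):

```python
from statistics import mean, median, pstdev

def sort_key(row, policy="avg"):
    """One of the five sorting keys: avg, min, max, med or std of a
    task's expected execution times across all machines."""
    funcs = {"avg": mean, "min": min, "max": max,
             "med": median, "std": pstdev}
    return funcs[policy](row)

def partition_tasks(etc, k, policy="avg"):
    """Sort task indices in decreasing key order, then split them
    evenly into k segments (steps 1-3 of the heuristic)."""
    order = sorted(range(len(etc)),
                   key=lambda i: sort_key(etc[i], policy),
                   reverse=True)
    size = len(order) // k
    return [order[g * size:(g + 1) * size] for g in range(k)]

# ETC matrix of Table 1; row index i holds task t(i+1).
etc = [
    [17, 19, 31, 17], [2, 4, 2, 5], [18, 11, 12, 7], [3, 4, 6, 13],
    [4, 2, 2, 3], [10, 9, 11, 7], [13, 26, 28, 10], [9, 6, 4, 4],
    [10, 13, 8, 5], [5, 4, 7, 9], [7, 9, 6, 13], [14, 6, 12, 8],
    [14, 8, 12, 20], [16, 9, 16, 15], [18, 11, 5, 7],
]

groups = partition_tasks(etc, k=3)
print([i + 1 for i in groups[0]])  # [1, 7, 14, 13, 3] -> matches Table 2
```

Whether the paper's std key is the population or sample standard deviation is not stated; `pstdev` (population) is used here as an assumption, which does not affect the avg-key example.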

deviation, and each heuristic is named as TPAvg, TPMin, 5.2 Comparative Performance Evaluation
TPMax, TPMed and TPStd. The algorithms Segmented The performance of the heuristic algorithm is evaluated by
min-min (med) and Segmented min-min (std) are also the average makespan of 1000 results on 1000 ETCs
implemented for the evaluation purpose. The naming generated by the same parameters. In all the experiments,
conventions and source information for all existing and the size of ETCs is 512×16, the value of k = 3, the mean of
proposed heuristics are detailed in Table 4. task execution time µtask is 1000, and the task COV Vtask is
in [0.1, 2] while the machine COV Vmachine is in [0.1, 1.1].
Table 4: Summary of compared heuristics

No Name Referenc No Name Reference The motivation behind choosing such heterogeneous ranges
e is that in real situation there is more variability across
H1 TPAvg New H9 Smm-avg [22] execution times for different tasks on a given machine than
H2 TPMin New H10 Smm-min [22] the execution time for a single task across different
H3 TPMax New H11 Smm-max [22] machines.
H4 TPMed New H12 Smm-med New
H5 TPStd New H13 Smm-std New The range bar for the average makespan of each heuristic
H6 Min-min [9] H14 MCT [17] shows a 95% confidence interval for the corresponding
H7 Max-min [9] H15 minSD [16] average makespan. This interval represents the likelihood
H8: Sufferage [17]    H16: HTF [23]

5. Experimental Results and Analysis

5.1 Dataset

In the experiments, the COV-based ETC generation method is used to simulate different HC environments by changing the parameters µtask, Vtask and Vmachine, which represent the mean task execution time, the task heterogeneity, and the machine heterogeneity, respectively. The COV-based method provides greater control over the spread of the execution time values than the common range-based method used previously [5, 10, 11, 21].

The COV-based ETC generation method works as follows [2]: First, a task vector q of expected execution times with the desired task heterogeneity is generated following a gamma distribution with mean µtask and standard deviation µtask × Vtask. The input parameter µtask is used to set the average of the values in q. The input parameter Vtask is the desired coefficient of variation of the values in q. The value of Vtask quantifies task heterogeneity, and is larger for high task heterogeneity. Each element of the task vector q is then used to produce one row of the ETC matrix following a gamma distribution with mean q[i] and standard deviation q[i] × Vmachine, such that the desired coefficient of variation of the values in each row is Vmachine, another input parameter. The value of Vmachine quantifies machine heterogeneity, and is larger for high machine heterogeneity.

… that makespans of task assignment for that type of heuristic fall within the specified range. That is, if another ETC matrix (of the same type) is generated, and the specified heuristic generates a task assignment, then the makespan of the task assignment would be within the given interval with 95% certainty. In our experiments we have also considered two metrics in the comparison of heuristics. Such metrics have also been considered by [18]:
• The number of best solutions (denoted by NB) is the number of times a particular method was the only one that produced the shortest makespan.
• The number of best solutions equal with another method (denoted by NEB) counts those cases where a particular method produced the shortest makespan but at least one other method also achieved the same makespan. NEB is the complement of NB.

The proposed heuristics are compared with 11 existing heuristics. Experiments are performed with different ranges of task and machine heterogeneity.

In the first experiment we fixed the value of Vtask = 2 and then increased the value of Vmachine from 0.1 to 1.1 in increments of 0.2. The results of NB and NEB are shown in Table 5 (best values shown in bold). From the values we can see that for high values of Vmachine, H16 is the best heuristic; in all other cases one of the proposed heuristics H2 or H5 outperforms all other heuristics. Figure 2 gives the comparison of the average makespan of all the heuristics considered.
(IJCNS) International Journal of Computer and Network Security, 45
Vol. 2, No. 6, June 2010

Table 5: NB and NEB values when Vtask is fixed at 2

Vmachine     H1  H2  H3  H4  H5  H6  H7  H8  H9 H10 H11 H12 H13 H14 H15 H16


0.1 NB 86 197 169 78 245 0 0 96 0 0 0 0 0 0 0 4
NEB 97 27 48 92 29 0 2 18 0 0 0 0 0 0 0 2
0.3 NB 101 252 112 132 90 0 0 213 0 1 0 0 0 0 0 0
NEB 62 54 48 62 52 0 1 49 0 0 0 0 0 0 0 4
0.5 NB 101 352 98 106 65 0 0 92 0 1 1 1 1 0 0 19
NEB 105 84 104 103 99 0 1 90 1 0 1 1 0 0 0 10
0.7 NB 82 350 62 89 47 0 0 45 1 2 4 1 2 0 0 146
NEB 100 59 98 96 99 0 2 89 0 0 2 1 1 0 0 32
0.9 NB 60 199 43 62 44 0 0 11 5 2 2 4 0 0 0 381
NEB 103 78 115 103 110 0 14 94 1 0 2 0 1 2 0 90
1.1 NB 17 69 22 21 16 0 0 9 0 1 0 3 1 0 0 575
NEB 167 156 160 163 160 0 47 156 1 0 3 1 2 5 0 202
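The NB and NEB counts reported in Tables 5–8 are straightforward to compute from per-trial makespans; a minimal sketch (function and variable names are ours):

```python
def count_nb_neb(trials):
    """Count NB and NEB per heuristic.

    trials: list of dicts mapping heuristic name -> makespan for one trial.
    NB counts trials where a heuristic was the unique minimum; NEB counts
    trials where it achieved the minimum but tied with at least one other.
    """
    names = list(trials[0])
    nb = {h: 0 for h in names}
    neb = {h: 0 for h in names}
    for trial in trials:
        best = min(trial.values())
        winners = [h for h, m in trial.items() if m == best]
        if len(winners) == 1:
            nb[winners[0]] += 1
        else:
            for h in winners:
                neb[h] += 1
    return nb, neb

nb, neb = count_nb_neb([{'H2': 10.0, 'H5': 12.0},
                        {'H2': 7.0, 'H5': 7.0}])
```

In this two-trial example, H2 wins the first trial alone (`nb['H2'] == 1`) and ties with H5 in the second, so both heuristics get one NEB count.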

Figure 2. Average makespan of the heuristics when Vtask = 2 and (a) Vmachine = 0.1; (b) Vmachine = 0.3; (c) Vmachine = 0.5; (d) Vmachine = 0.7; (e) Vmachine = 0.9; (f) Vmachine = 1.1

In the second experiment we fixed the value of Vtask = 1.1 and then increased the value of Vmachine from 0.1 to 1.1 in increments of 0.2. The results of NB and NEB are shown in Table 6. From the values it is clear that here, in all cases, one of the proposed heuristics H2 or H5 is the best. Figure 3 gives the comparison of the average makespan of all the heuristics considered.

Table 6: NB and NEB values when Vtask is fixed at 1.1

Vmachine     H1  H2  H3  H4  H5  H6  H7  H8  H9 H10 H11 H12 H13 H14 H15 H16


0.1 NB 141 159 150 150 372 0 0 0 0 0 0 0 0 0 0 0
NEB 24 2 5 21 6 0 0 0 0 0 0 0 0 0 0 0
0.3 NB 139 284 199 161 211 0 0 0 0 0 0 0 0 0 0 0
NEB 2 4 2 3 1 0 0 0 0 0 0 0 0 0 0 0
0.5 NB 129 445 154 127 142 0 0 0 0 0 0 0 0 0 0 0
NEB 1 2 2 0 1 0 0 0 0 0 0 0 0 0 0 0
0.7 NB 84 613 97 82 102 0 0 0 3 10 1 2 0 0 0 0
NEB 3 2 4 3 1 0 0 0 0 0 0 0 0 0 0 0
0.9 NB 78 586 80 63 91 0 0 0 8 59 5 14 1 0 0 2
NEB 6 8 6 7 4 0 0 1 0 2 0 0 0 0 0 1
1.1 NB 66 505 76 73 63 0 0 1 28 24 4 24 4 0 0 92
NEB 20 24 17 16 14 0 0 10 3 0 1 1 1 0 0 11

Figure 3. Average makespan of the heuristics when Vtask = 1.1 and (a) Vmachine = 0.1; (b) Vmachine = 0.3; (c) Vmachine = 0.5; (d) Vmachine = 0.7; (e) Vmachine = 0.9; (f) Vmachine = 1.1

In the third experiment we fixed the value of Vtask = 0.6 and then increased the value of Vmachine from 0.1 to 1.1 in increments of 0.2. The results of NB and NEB are shown in Table 7. From the values it is clear that here, in all cases, the proposed heuristic H5 outperforms all other heuristics. Figure 4 gives the comparison of the average makespan of all the heuristics.

Table 7: NB and NEB values when Vtask is fixed at 0.6

Vmachine     H1  H2  H3  H4  H5  H6  H7  H8  H9 H10 H11 H12 H13 H14 H15 H16


0.1 NB 81 80 78 79 682 0 0 0 0 0 0 0 0 0 0 0
NEB 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0.3 NB 73 42 143 76 663 0 0 0 0 0 0 0 0 0 0 0
NEB 1 1 3 0 1 0 0 0 0 0 0 0 0 0 0 0
0.5 NB 84 20 254 118 520 0 0 0 0 0 0 0 0 0 0 0
NEB 3 0 2 0 3 0 0 0 0 0 0 0 0 0 0 0
0.7 NB 127 13 285 130 441 0 0 0 0 0 0 0 0 0 0 0
NEB 2 0 3 1 2 0 0 0 0 0 0 0 0 0 0 0
0.9 NB 150 33 313 144 354 0 0 0 0 0 0 0 0 0 0 0
NEB 2 0 2 4 4 0 0 0 0 0 0 0 0 0 0 0
1.1 NB 138 124 245 158 313 0 0 0 0 6 0 0 0 0 0 1
NEB 4 9 5 8 5 0 0 0 0 1 0 0 0 0 0 0

Figure 4. Average makespan of the heuristics when Vtask = 0.6 and (a) Vmachine = 0.1; (b) Vmachine = 0.3; (c) Vmachine = 0.5; (d) Vmachine = 0.7; (e) Vmachine = 0.9; (f) Vmachine = 1.1

In the fourth experiment we fixed the value of Vtask = 0.1 and then increased the value of Vmachine from 0.1 to 1.1 in increments of 0.2. The results of NB and NEB are shown in Table 8. From the values it is clear that here, in all cases, the proposed heuristic H5 outperforms all other heuristics. Figure 5 gives the comparison of the average makespan of all the heuristics.

Table 8: NB and NEB values when Vtask is fixed at 0.1

Vmachine     H1  H2  H3  H4  H5  H6  H7  H8  H9 H10 H11 H12 H13 H14 H15 H16


0.1 NB 0 0 0 0 1000 0 0 0 0 0 0 0 0 0 0 0
NEB 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0.3 NB 0 0 0 0 1000 0 0 0 0 0 0 0 0 0 0 0
NEB 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0.5 NB 0 0 14 0 986 0 0 0 0 0 0 0 0 0 0 0
NEB 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0.7 NB 0 0 84 5 910 0 0 0 0 0 0 0 0 0 0 0
NEB 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0.9 NB 8 0 215 10 763 0 0 0 0 0 0 0 0 0 0 0
NEB 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1.1 NB 41 0 311 28 619 0 0 0 0 0 0 0 0 0 0 0
NEB 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Figure 5. Average makespan of the heuristics when Vtask = 0.1 and (a) Vmachine = 0.1; (b) Vmachine = 0.3; (c) Vmachine = 0.5; (d) Vmachine = 0.7; (e) Vmachine = 0.9; (f) Vmachine = 1.1
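The makespan plotted in Figures 2–5 is the finishing time of the most heavily loaded machine under a given task-to-machine assignment. A sketch of that computation (names are ours, assuming the usual definition of makespan for independent tasks):

```python
def makespan(etc, assignment):
    """etc[t][m]: expected time to compute task t on machine m.
    assignment[t]: index of the machine that task t is mapped to.
    Returns the finishing time of the busiest machine.
    """
    finish = [0.0] * len(etc[0])
    for t, m in enumerate(assignment):
        finish[m] += etc[t][m]
    return max(finish)

# Three tasks on two machines: machine 0 gets tasks 0 and 2.
example = makespan([[2.0, 3.0], [4.0, 1.0], [5.0, 5.0]], [0, 1, 0])
# example == 7.0 (machine 0 finishes at 2 + 5, machine 1 at 1)
```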

From all these experiments we conclude that in most circumstances one of the proposed heuristics H2 or H5 outperforms the existing heuristics in terms of average makespan. In the remaining cases H16 performs better.

5.3 Algorithm to find the best heuristic

Based on the values of Vtask and Vmachine we divide ETC matrices into three different regions. If the values of Vtask and Vmachine are both high (here Vtask = 2 and 0.9 <= Vmachine <= 1.1) then the ETC falls in region 1; if either of them is medium (here Vtask = 1.1 or 0.3 <= Vmachine <= 0.7) then it falls in region 2; and if either of them is low (here 0.1 <= Vtask <= 0.6 or 0.1 <= Vmachine <= 0.2) then it falls in region 3. Figure 6 shows the three regions and the best heuristic for each region.

                  COV of Machines
COV of Tasks    0.1  0.3  0.5  0.7  0.9  1.1
       2         H5   H2   H2   H2  H16  H16
       1.1       H5   H2   H2   H2   H2   H2
       0.6       H5   H5   H5   H5   H5   H5
       0.1       H5   H5   H5   H5   H5   H5
(Region 1: both high; Region 2: either medium; Region 3: either low)

Figure 6. Division of ETC into different regions

The procedure for finding the best heuristic is given below in Algorithm Best Heuristic, which suggests the best heuristic depending on the ETC type.

Algorithm Best Heuristic
Input: expected time to compute matrix (ETC)
Output: best heuristic

  Compute Vtask and Vmachine
  if Vtask is high and Vmachine is high then
      ETC belongs to region1
  else if Vtask is medium or Vmachine is medium then
      ETC belongs to region2
  else if Vtask is low or Vmachine is low then
      ETC belongs to region3
  end if
  switch (region)
      case region1: return H16
      case region2: return H2
      case region3: return H5
  end switch

6. Conclusions

Optimal assignment of tasks to machines in a HC environment has been proven to be an NP-complete problem. It requires the use of efficient heuristics to find near-optimal solutions. In this paper, we have proposed, analyzed and implemented seven new heuristics. A comparison of the proposed heuristics with the existing heuristics was also performed in order to identify the circumstances in which one heuristic outperforms the others. The experimental results demonstrate that in most circumstances one of the proposed heuristics H2 or H5 outperforms all the existing heuristics. Based on these experimental results, we are also able to suggest, given an ETC, which heuristic should be used to achieve the minimum makespan.

Acknowledgements

The authors are thankful to COMSATS Institute of Information Technology for providing the support for this

research. This work is supported by the Key Program of the National Natural Science Foundation of China under Grant No. 60533110. A preliminary version of portions of this document appeared in the proceedings of Modelling, Computation and Optimization in Information Systems and Management Sciences (MCO'08).

References

[1] Ali, S., Braun, T.D., Siegel, H.J., Maciejewski, A.A., Beck, N., Boloni, L., Maheswaran, M., Reuther, A.I., Robertson, J.P., Theys, M.D., Yao, B., 2005. Characterizing resource allocation heuristics for heterogeneous computing systems. In: A.R. Hurson (Ed.), Advances in Computers, vol. 63: Parallel, Distributed, and Pervasive Computing, Elsevier, Amsterdam, The Netherlands, p.91-128.

[2] Ali, S., Siegel, H.J., Maheswaran, M., Ali, S., Hensgen, D., 2000. Task Execution Time Modeling for Heterogeneous Computing Systems, Proceedings of the 9th Heterogeneous Computing Workshop, p.85–200.

[3] Barbulescu, L., Whitley, L.D., Howe, A.E., 2004. Leap Before You Look: An Effective Strategy in an Oversubscribed Scheduling Problem. Proceedings of the 19th National Conference on Artificial Intelligence, p.143–148.

[4] Baca, D.F., 1989. Allocating modules to processors in a distributed system, IEEE Transactions on Software Engineering, 15(11):1427–1436.

[5] Braun, T.D., Siegel, H.J., Beck, N., Bölöni, L.L., Maheswaran, M., Reuther, A.I., Robertson, J.P., Theys, M.D., Yao, B., Hensgen, D., Freund, R.F., 2001. A comparison of eleven static heuristics for mapping a class of independent tasks onto heterogeneous distributed computing systems. Journal of Parallel and Distributed Computing, 61(6):810–837.

[6] Briceno, L.D., Oltikar, M., Siegel, H.J., Maciejewski, A.A., 2007. Study of an Iterative Technique to Minimize Completion Times of Non-Makespan Machines, Proceedings of the 17th Heterogeneous Computing Workshop.

[7] El-Rewini, H., Lewis, T.G., Ali, H.H., 1994. Task Scheduling in Parallel and Distributed Systems, PTR Prentice Hall, New Jersey, USA.

[8] Foster, I., Kesselman, C., 1998. The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufman Publishers, San Francisco, CA, USA.

[9] Freund, R.F., Gherrity, M., Ambrosius, S., Campbell, M., Halderman, M., Hensgen, D., Keith, E., Kidd, T., Kussow, M., Lima, J.D., Mirabile, M., Moore, L., Rust, B., Siegel, H.J., 1998. Scheduling Resources in Multi-User, Heterogeneous, Computing Environments with SmartNet, Proceedings of the 7th Heterogeneous Computing Workshop, p.184-199.

[10] Ritchie, G., Levine, J., 2004. A hybrid ant algorithm for scheduling independent jobs in heterogeneous computing environments, Proceedings of the 23rd Workshop of the UK Planning and Scheduling Special Interest Group.

[11] Ritchie, G., Levine, J., 2003. A fast, effective local search for scheduling independent jobs in heterogeneous computing environments, Proceedings of the 22nd Workshop of the UK Planning and Scheduling Special Interest Group, p.178-183.

[12] Ibarra, O.H., Kim, C.E., 1977. Heuristic algorithms for scheduling independent tasks on non-identical processors, Journal of the ACM, 24(2):280–289.

[13] Kim, J.K., Shivle, S., Siegel, H.J., Maciejewski, A.A., Braun, T.D., Schneider, M., Tideman, S., Chitta, R., Dilmaghani, R.B., Joshi, R., Kaul, A., Sharma, A., Sripada, S., Vangari, P., Yellampalli, S.S., 2007. Dynamically mapping tasks with priorities and multiple deadlines in a heterogeneous environment. Journal of Parallel and Distributed Computing, 67(2):154-169.

[14] Kwok, Y.K., Ahmad, I., 1999. Static scheduling algorithms for allocating directed task graphs to multiprocessors, ACM Computing Surveys, 31(4):406–471.

[15] Kwok, Y.K., Maciejewski, A.A., Siegel, H.J., Ahmad, I., Ghafoor, A., 2006. A semi-static approach to mapping dynamic iterative tasks onto heterogeneous computing systems, Journal of Parallel and Distributed Computing, 66(1):77-98.

[16] Luo, P., Lu, K., Shi, Z.Z., 2007. A revisit of fast greedy heuristics for mapping a class of independent tasks onto heterogeneous computing systems, Journal of Parallel and Distributed Computing, 67(6):695-714.

[17] Maheswaran, M., Ali, S., Siegel, H.J., Hensgen, D., Freund, R.F., 1999. Dynamic Matching and Scheduling of a Class of Independent Tasks onto Heterogeneous Computing Systems, Proceedings of the 8th IEEE Heterogeneous Computing Workshop, p.30–44.

[18] Sakellariou, R., Zhao, H., 2004. A Hybrid Heuristic for DAG Scheduling on Heterogeneous Systems, Proceedings of the 13th Heterogeneous Computing Workshop.

[19] Shestak, V., Chong, E.K.P., Maciejewski, A.A., Siegel, H.J., Bemohamed, L., Wang, I., Daley, R., 2005. Resource Allocation for Periodic Applications in

a Shipboard Environment. Proceedings of the 14th Heterogeneous Computing Workshop.

[20] Shivle, S., Siegel, H.J., Maciejewski, A.A., Sugavanam, P., Banka, T., Castain, R., Chindam, K., Dussinger, S., Pichumani, P., Satyasekaran, P., Saylor, W., Sendek, D., Sousa, J., Sridharan, J., Velazco, J., 2006. Static allocation of resources to communicating subtasks in a heterogeneous ad hoc grid environment, Journal of Parallel and Distributed Computing, 66(4):600–611.

[21] Shivle, S., Sugavanam, P., Siegel, H.J., Maciejewski, A.A., Banka, T., Chindam, K., Dussinger, S., Kutruff, A., Penumarthy, P., Pichumani, P., Satyasekaran, P., Sendek, D., Smith, J., Sousa, J., Sridharan, J., Velazco, J., 2005. Mapping subtasks with multiple versions on an ad hoc grid, Parallel Computing, Special Issue on Heterogeneous Computing, 31(7):671–690.

[22] Wu, M.Y., Shu, W., Zhang, H., 2000. Segmented min-min: A Static Mapping Algorithm for Meta-Tasks on Heterogeneous Computing Systems. Proceedings of the 9th Heterogeneous Computing Workshop, p.375–385.

[23] Yarmolenko, V., Duato, J., Panda, D.K., Sadayappan, P., 2000. Characterization and Enhancement of Static Mapping Heuristics for Heterogeneous Systems. International Conference on Parallel Processing, p.437-444.

Authors Profile

Ehsan Ullah Munir received his Ph.D. degree in Computer Software & Theory from Harbin Institute of Technology, Harbin, China in 2008. He completed his Masters in Computer Science at Barani Institute of Information Technology, Pakistan in 2001. He is currently an assistant professor in the Department of Computer Science at COMSATS Institute of Information Technology, Pakistan. His research interests are task scheduling algorithms in heterogeneous parallel and distributed computing.

Shengfei Shi received the B.E. degree in computer science from Harbin Institute of Technology, China, in 1995, and his MS and PhD degrees in computer software and theory from Harbin Institute of Technology, China, in 2000 and 2006 respectively. He is currently an associate professor in the School of Computer Science and Technology at Harbin Institute of Technology, China. His research interests include wireless sensor networks, mobile computing and data mining.

Zhaonian Zou is a PhD student in the Department of Computer Science and Technology, Harbin Institute of Technology, China. He received his bachelor and master degrees from Jilin University in 2002 and 2005 respectively. His research interests include data mining, query processing, and task scheduling.

Muhammad Wasif Nisar is a PhD candidate at the School of Computer Science and Technology, Institute of Software, GUCAS, China. He received his BSc in 1998 and MSc in Computer Science in 2000 from the University of Peshawar, Pakistan. His research interests include software estimation, software process improvement, distributed systems, databases, and CMMI-based project management.

Kashif Ayyub is an MS(CS) student at Iqra University, Islamabad Campus, Pakistan. His research interests include NLP and OCR for the Urdu language. He is currently a Lecturer in the Computer Science Department, COMSATS Institute of Information Technology, Wah Cantt, Pakistan.

Muhammad Waqas Anwar received his Ph.D. degree in Computer Science from Harbin Institute of Technology, Harbin, China in 2008. He is currently an assistant professor in the Department of Computer Science at COMSATS Institute of Information Technology, Abbottabad, Pakistan. His research areas are NLP and Computational Intelligence.

Solution for Green Computing

Rajguru P.V.1, Nayak S.K.2, More D.S.3

1 Department of Computer Science and IT, Adarsh College, Hingoli (Maharashtra), India
  prakash_rajgure@yahoo.com
2 Head, Department of Computer Science, Bahirji Smarak Mahavidyalaya, Basmathnagar, Dist-Hingoli (Maharashtra), India
  sunilnayak1234@yahoo.com
3 Head, Department of Environmental Science, Bahirji Smarak Mahavidyalaya, Basmathnagar, Dist-Hingoli (Maharashtra), India
  dilipmore123@gmail.com

Abstract: Environmental and energy conservation issues have taken center stage in the global business arena in recent years. The reality of rising energy costs and their impact on international affairs, coupled with increased concern over the global warming climate crisis and other environmental issues, has shifted the social and economic consciousness of the business community.
Green Computing, or Green IT, is the practice of implementing policies and procedures that improve the efficiency of computing resources in such a way as to reduce the environmental impact of their utilization. Green Computing is founded on the "triple bottom line" principle, which defines an enterprise's success based on its economic, environmental and social performance. This philosophy holds that, given that there is a finite amount of available natural resources, it is in the interest of the business community as a whole to decrease its dependence on those limited resources to ensure long-term economic viability. Just as the logging industry long ago learned that it needs to plant a tree for each one it cuts, today's power-consuming enterprises must maximize the conservation of energy until renewable forms become more readily available. This is often referred to as "sustainability": the ability of the planet to maintain a consistent level of resources to ensure the continuance of the existing level of society and commercial enterprise.
In this paper we discuss the green computing approach and give green computing solutions that will help toward an eco-friendly environment.

Keywords: Computer System, Flat Panel Display, Energy Conservation, Energy Star.

1. Introduction

Over the last fifteen years, computers have transformed the academic and administrative landscape. There are now over 100 computers on the campus of Adarsh College, Hingoli. Personal computer (PC) operation alone may directly account for approximately 1,50,000 Rs. per year at Adarsh College, Hingoli. Computers generate heat and require additional cooling, which adds to energy costs. Thus, the overall energy cost of personal computers at Adarsh College is more likely around 2,25,000 Rs. approximately. Meeting computer cooling needs in summer (and winter) often compromises the efficient use of building cooling and heating systems by requiring colder fan discharge temperatures. In the summer, these temperatures may satisfy computer lab cooling needs while overcooling other spaces.

Figure 1. Green computing environment

Given Adarsh College's commitment to energy conservation and environmental stewardship, we must address the issue of responsible computer use; by adopting conserving practices, annual savings of approximately 40,000 Rs. are possible.

2. Green Computing Approach

Figure 2. Green Keyboard

2.1 Green use

Reducing the energy consumption of computers and other information systems, as well as using them in an environmentally sound manner.

2.2 Green disposal

Refurbishing and reusing old computers, and properly recycling unwanted computers and other electronic equipment.

2.3 Green design

Designing energy-efficient and environmentally sound components, computers, servers, cooling equipment, and data centers.

2.4 Green manufacturing

Manufacturing electronic components, computers, and other associated subsystems with minimal impact on the environment.

These four paths span a number of focus areas and activities, including:
• Design for environmental sustainability.
• Energy-efficient computing.
• Power management.
• Server virtualization.
• Responsible disposal and recycling.
• Green metrics, assessment tools, and methodology.
• Environment-related risk mitigation.
• Use of renewable energy sources.
• Eco-labeling of IT products.

Figure 3. Green computing environment cycle

3. Energy Consumption of a PC

A typical desktop PC system comprises the computer itself (the CPU or the "box"), a monitor, and a printer. The CPU may require approximately 100 watts of electrical power. Add 50-150 watts for a 15-17 inch monitor, proportionately more for larger monitors. The power requirements of conventional laser printers can be as much as 100 watts or more when printing, though much less when idling in a "sleep mode." Inkjet printers use as little as 12 watts while printing and 5 watts while idling. How a user operates the computer also factors into energy costs. First let us take the worst case scenario, continuous operation. Assuming you operate a 200 watt PC system day and night every day, the direct annual electrical cost would be over 1500 Rs. In contrast, if you operate your system just during normal business hours, say 40 hours per week, the direct annual energy cost would be about 1000 Rs, plus, of course, the cost of providing additional cooling. Considering the tremendous benefits of computer use, neither of the above cost figures may seem like much, but think of what happens when these costs are multiplied by the many thousands of computers in use at CU. The wasted energy cost adds up quickly.

Here are some tested suggestions that may make it possible for you to reduce your computer energy consumption by 80 percent or more while still retaining most or all productivity and other benefits of your computer system, including network connectivity.

4. Solutions for green computing

4.1 Energy efficiency

Maximizing the power utilization of computing systems by reducing system usage during non-peak time periods.

4.2 Reducing Electronic Waste

Physical technology components (keyboards, monitors, CPUs, etc.) are often non-biodegradable and highly toxic. Several business and governmental directives have been enacted to promote the recycling of electronic components, and several hardware manufacturers have developed biodegradable parts.

4.3 Employing thin clients

These systems provide only basic computing functionality (and are sometimes even diskless), relying on remote systems to perform their primary processing activity. Since antiquated systems can be used to perform this function, electronic waste is reduced. Alternatively, new thin client devices are now available that are designed for low power consumption.

4.4 Telecommuting

Providing the facilities necessary to allow employees to work from home in order to reduce transportation emissions.

4.5 Remote Administration

Allowing administrators to remotely access, monitor and repair systems significantly decreases the need for physical travel to remote offices and customer sites. As with telecommuting, this reduced travel eliminates unnecessary carbon emissions.

4.6 Green Power Generation

Many businesses have chosen to implement clean, renewable energy sources, such as solar and wind, to partially or completely power their business.

4.7 Green Computing Practices

You can take a giant step toward environmentally

responsible or “green” computing by conserving energy with 5 Recommendations


your computer. But green computing involves other import
steps as well. These pertain to paper use, toner cartridges, Environmentally responsible computer use implies not
disposal of old computer equipment and purchasing buying new equipment unless there is a demonstrated need.
decisions when considering new computer equipment. Thus, before buying new equipment, consider the following
questions:
4.8 Reducing Paper Waste • Do you really need a new computer or printer?
• Can you meet your needs (with less expense and
Rather than creating a paperless office, computer use has
vastly increased paper consumption and paper waste. Here environmental damage) by upgrading existing
are some suggestions for reducing waste: equipment?
• Can you find a solution in software rather than
• Print as little as possible.
hardware?
• Review and modify documents on the screen and
• If you do need new equipment, buy efficient and
use print.
buy green. Do research online and talk to the
• Preview the document.
Buffalo Chip in the UMC about purchasing
• Minimize the number of hard copies and paper
environmental and socially responsible equipment.
drafts you make.
• Buy only “Energy Star” computers, monitors and
• Instead of printing save information to disks.
printers. Flat panel monitors use about half of the
4.9 Recycle Waste Paper electricity of a cathode-ray tube (CRT) display.
• Buy and use recycled paper in your printers and • Buy a monitor only as large as you really need. A
copiers. From an environmental point of view, the 17 inch monitor uses 30 percent more energy than
best recycled paper is 100 percent post consumer a 15 inch monitor when each is in an active mode.
recycled content. • Buy ink jet printers, not laser printers. These use
• Save e-mail whenever possible and avoid needless 80 to 90 percent less energy than laser printers and
printing of e-mail messages. print quality can be excellent.
• Use e-mail instead of faxes or send faxes directly • Create network and share printer and other
from your computer to eliminate the need for a equipments.
hard copy. When you must fax using hard copies, • Consider leasing equipment as an alternative to
save paper using a “sticky” fax address note and purchasing. Leased equipment is typically
not a cover sheet. refurbished or recycled, and packaging is reduced.
• On larger documents, use smaller font sizes Of all these, “Energy Efficiency” provides the greatest
(consistent with readability) to save paper. potential for quick return on investment, ease of
• If your printer prints a test page whenever it is implementation, and financial justification. Several
commercial solutions for improving computing energy
turned on, disable this unnecessary feature.
efficiency have recently become available and we strongly
• Before recycling paper, which has print on only one
recommend the adoption of such a solution not only for its
side, set it aside for use as scrap paper or in environmental implications, but also for its common-sense
printing drafts. reduction on IT infrastructure costs.
• When documents are printed or copied, use double
sided printing and copying. If possible, use the
multiple pages per sheet option on printer proper 6. Conclusions
ties. The Research Paper contains useful techniques, guidelines
• When general information-type documents must be described suitable for reducing energy consumption of any
shared within an office, try circulating them instead computer institute or a personnel user of computer. By
of making an individual copy for each person. This following these simple guidelines lot of energy can be saved
can also be done easily by e-mail. and thousands of rupees can be saved at village, taluka,
4.10 Reusing and recycling district or any metro city ultimately saving the environment
& conserving the energy.
Adarsh College & institutions in Hingoli city generates
many printer toner, ink jet cartridges and batteries a year. References
Instead of tossing these in the garbage, they can be recycled,
saving resources and reducing pollution and solid waste. To [1] “Energy Saving Guidelines for PCs”.
recycle spent toner or ink jet cartridges (printer and some www.colorado.edu
fax), deposit them at electronic dumping site. To recycle /its/docs/energy.html “
batteries, drop them off at any of the battery collection Shop. [2] “The Computer for the 21st Century” Scientific
Computer diskettes may be inexpensive, but why keep American,
buying more? Diskettes with outdated information on them [3] www.adarshcollege.org
can be reformatted and reused. When you are done with [4] Adarsh College, Hingoli Accounts Department.
your diskettes, recycle them. [5] http://en.wikipedia.org/wiki/Green_computing
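Returning to the arithmetic of Section 3, the rupee figures fold an electricity tariff into the totals; making the tariff an explicit parameter shows how the estimate scales with usage hours. This is an illustrative sketch only: the function name and the 5 Rs/kWh tariff are our assumptions, not figures from the paper.

```python
def annual_energy_cost(watts, hours_per_week, tariff_rs_per_kwh=5.0):
    """Direct annual electricity cost of running a device, in rupees."""
    kwh_per_year = watts / 1000.0 * hours_per_week * 52
    return kwh_per_year * tariff_rs_per_kwh

# Worst case from Section 3: a 200 W PC system running day and night,
# versus the same system used only during 40 office hours per week.
continuous = annual_energy_cost(200, 24 * 7)
office_hours = annual_energy_cost(200, 40)
```

Whatever tariff is assumed, cutting from continuous operation to office hours reduces the cost by the factor 40/168, a reduction of about 76 percent, in line with the 80-percent savings the paper says are achievable with conserving practices.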


Authors Profile

Rajguru Prakash Vithoba received the M.C.M. from Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, and the M.Phil. degree in Computer Science from Alagappa University, Karaikudi, in 2005 and 2008 respectively. Since 2005 he has been working as a Sr. Lecturer in Computer Science at Adarsh Education Society, Hingoli.

S.K. Nayak received the M.Sc. degree in Computer Science from S.R.T.M.U., Nanded. In 2000 he joined Bahirji Smarak Mahavidyalaya, Basmathnagar, as a lecturer in Computer Science, and since 2002 he has been Head of the Computer Science department. He is pursuing a Ph.D. He has attended many national and international conferences, workshops and seminars, and has 10 international publications. His areas of interest are ICT, rural development, and bioinformatics.

More D.S. received the M.Sc. degree in Environmental Science from S.R.T.M.U., Nanded (Maharashtra), India in 2006. He is working as Head of the Department of Environmental Science at Bahirji Smarak Mahavidyalaya, Basmathnagar, Dist-Hingoli (Maharashtra), India.

BER Analysis of BPSK Signal in AWGN Channel


Dr. Mohammad Samir Modabbes
Aleppo University, Faculty of Electrical and Electronic Engineering,
Department of Communications Engineering, Syria
msmodabbes@hotmail.com

Abstract: This paper presents extensive results of an experimental and simulation test of the additive white Gaussian noise (AWGN) channel effect on a binary phase-shift keying (BPSK) modulated signal. The bit error rate (BER) is measured, and simulation results are then provided to verify the accuracy of the error rate measurements. Results for all measurements and simulations are presented in curves. Results show that coherent detection of the BPSK signal offers approximately 60-70% improvement in BER over non-coherent detection.

Keywords: Wireless Communications, BPSK detection, noise, BER analysis

1. Introduction

Multipath propagation in wireless cellular systems leads to small-scale fading. The Rayleigh distribution is a useful model for describing fading conditions for medium or large cellular systems. The Ricean distribution is another important fading model for describing small-scale fading. In [1] a precise BER expression for a BPSK signal corrupted by Ricean-faded co-channel interfering signals was obtained; their work focused on Ricean-faded co-channel interfering signals. In [2] the BER performance of a BPSK orthogonal frequency division multiplexing (OFDM) system is analyzed in the case of AWGN, but their work focused on the OFDM system only. In [3] an exact expression is derived for the bit error rate of a synchronous maximum-likelihood receiver used to detect a band-limited BPSK signal in the presence of a similar BPSK signal, additive white Gaussian noise and imperfect carrier phase recovery. It is shown that the receiver is more robust to carrier phase mismatch at high signal-to-interference ratios (SIR) than at low SIR.

2. Error performance for band-pass modulation

Band-pass modulation is the process by which an information signal is converted to a sinusoidal waveform [4, 5]:

v(t) = V(t) sin[ω0 t + φ(t)]    (1)

where, in digital systems, the duration of the carrier sinusoid is called a digital symbol. The probability of symbol error (Ps) is one of the performance evaluation parameters of correct reception. The probability of incorrect detection of any receiver is termed the probability of symbol error Ps, but it is often convenient to specify system performance by the probability of bit error Pb. A symbol error does not mean that all the bits within the symbol are in error; in other words, Pb is less than or equal to Ps. Thus band-pass modulation can be defined as the process whereby the amplitude, frequency or phase of an RF carrier, or a combination of them, is varied or takes M discrete values in accordance with the information to be transmitted. The efficiency of a digital band-pass modulation technique depends on the Eb/N0 parameter, which can be expressed as the ratio of average signal power to average noise power (SNR) [1, 4].

3. Probability of bit error for non-coherent detection of band-pass modulation

When the receiver does not exploit the phase reference of the carrier to detect the received signals, the process is called non-coherent detection. Thus a complexity reduction in non-coherent systems can be achieved at the expense of error performance. For the non-coherent analysis we consider two equally likely band-pass modulated signals (i = 1, 2) at the input of the detector, where:

ri(t) = si(t) + n(t)    (2)

ri(t) - the received signal
si(t) - the transmitted signal
n(t) - the noise signal

The two band-pass modulated signals can be defined by:

si(t) = √(2E/T) cos(ω0 t + φ),  0 ≤ t ≤ T    (3)

E is the signal energy per bit; T is the bit duration. Considering that the noise in the received signal n(t) is white Gaussian noise, we use the decision rule that chooses the minimum error [5, 8]. This rule for the detector, stated in terms of decision regions, is: whenever the received signal ri(t) is located in region1, choose signal s1; when it is located in region2, choose signal s2, as in figure (1). γ0 = (a1 + a2)/2 represents the optimum threshold for minimizing the probability of making an incorrect decision, given equally likely signals. For the case when s1 is sent, such that r(t) = s1(t) + n(t), the output z2(T) is a Gaussian noise random variable only, without a signal component, so it has a Rayleigh distribution, while z1(T) has a Rician distribution, since the input to the envelope detector is a sinusoid plus noise.
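The signal model of equations (2)-(3) and the threshold decision rule can be sketched numerically. The following Python sketch is illustrative only (not the authors' code): unit bit energy, a 1 ms bit and 48 kHz sampling are assumptions, while the 2.4 kHz carrier matches the modulator described in the measurement setup.

```python
# Illustrative sketch of equations (1)-(3): generate a band-pass BPSK
# signal, add white Gaussian noise per equation (2), and decide by
# correlating against s1(t) with threshold gamma0 = 0 (the midpoint
# for antipodal, equally likely signals). All values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

E, T = 1.0, 1e-3              # signal energy per bit, bit duration (s)
f0, fs = 2400.0, 48000.0      # carrier and sampling frequencies (Hz)
n_samples = 48                # one bit: T * fs samples
t = np.arange(n_samples) / fs

# Equation (3): the two band-pass signals, with phases 0 and pi
s1 = np.sqrt(2 * E / T) * np.cos(2 * np.pi * f0 * t)
s2 = np.sqrt(2 * E / T) * np.cos(2 * np.pi * f0 * t + np.pi)

# Equation (2): received signal = transmitted signal + Gaussian noise
r = s1 + rng.normal(0.0, 0.5, n_samples)

# Correlate r(t) with s1(t); the sign of the statistic selects
# region1 (s1) or region2 (s2) relative to the threshold gamma0 = 0.
z = np.dot(r, s1) / n_samples
decision = 's1' if z > 0 else 's2'
```

Because the two signals are antipodal, the correlator statistic is strongly positive when s1 was sent, so the decision falls in region1 as the rule prescribes.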

We can compute Pb by integrating the area under either of the probability density functions p(z/s1) or p(z/s2):

Figure 1. The two conditional probability density functions p(z/s1) and p(z/s2), with region2 and region1 separated by the decision line at the optimum threshold γ0

pb = (1/2) exp(−A² / 4σ0²)    (4)

where A = √(2E/T) and σ0² is the noise variance. We can express the noise at the filter output as:

σ0² = N0 W    (5)

W = R bits/s = 1/T is the filter bandwidth. Thus equation (4) becomes:

pb = (1/2) exp(−Eb / 2N0)    (6)

where Eb = A²T/2 is the signal energy per bit and N0 is the noise power per hertz. Equation (6) gives the probability of bit error for non-coherent detection of FSK and is identical to the probability of bit error for non-coherent detection of OOK signals, since an envelope detector is used for detection [1, 7]. From equations (5) and (6) it is clear that the error performance depends on the filter bandwidth and that Pb is proportional to W.

4. Probability of bit error for non-coherent detection of BPSK modulation

For a standard PSK signal of the form:

s(t) = A cos(ω1 t + φ1(t))    (7)

the digital modulation is carried in the angle of s(t) by φ1(t), which assumes discrete values from a set of M equally spaced points in [0, 2π] at the sample times, T seconds apart. Thus the Nth message or baud is modulated by:

φ1(NT) = 2πk/M;  k = 0, 1, 2, …, M−1    (8)

where each of the M values of k is equally probable. In BPSK there are no fixed decision regions in the signal space, but the decision is based on the phase difference between successively received signals. Thus the probability of bit error is equal to [5]:

pb = (1/2) exp(−Eb / N0)    (9)

5. Probability of bit error for coherent detection of BPSK

For a coherent receiver, which compares the received wave with the un-modulated carrier, A cos(ω1 t), and instantly produces the signed phase difference between the two [6, 9], an M-ary symbol is transmitted in one baud through the value of k. Considering that the external thermal noise in the received signal n(t) is white Gaussian noise, it is modeled as the usual Gaussian random process with uniform spectral density [10, 11]. Hence:

n(t) = n1(t) cos ω1 t − n2(t) sin ω1 t    (10)

where n1(t) and n2(t) are stationary, independent, zero-mean Gaussian random processes with power σ². At a certain instant, the combined input signal at the detector is given by:

s′(t) = s(t) + n(t)    (11)

The detector examines the difference between the phase of the received signal and the reference phase and decides which symbol was transmitted. Assuming equal a priori symbol probabilities, for a proper decision we need to define decision thresholds by dividing the circle into regions. For example, for the case of M = 8, if at the instant of detection the phase of the received signal lies within the region 0 ≤ θ ≤ π/4, we decide that the transmitted symbol corresponds to the value k = 1; but for BPSK (M = 2), we decide that the transmitted symbol corresponds to the value k = 1 if the phase of the received signal lies within the region 0 ≤ θ ≤ π.

6. Measurement setup

The tests were conducted using the measurement system shown in figure (2), which simulates the transmission medium over which digital communication takes place, with a frequency range from about 160 Hz to 70 kHz. The modulator produces a BPSK signal with a carrier frequency of 2.4 kHz. A random pulse generator injects a variable amount of noise into the input signal, with frequencies in the range 75 Hz to 600 Hz. Amplitude noise can be measured by determining the bit error rate (BER), which is the number of incorrect bits received relative to the total number of bits transmitted. The error counter in the error rate calculations block measures the number of incorrect bits received (the error count), where the

transmitted and received data are compared bit by bit in the comparator; if the bits do not match, an error pulse is generated. The errors are totalized in a counter over a fixed time period (106 ms is the time required for 128 data bits) generated by a one-shot. Each time the counter is reset, a 106 ms one-shot is triggered, and the error pulses from the XOR gate are totalized by the counter only during the 106 ms frame. Then a display indicates how many errors occurred within the time interval.

Figure 2. BPSK measurement system

A performance comparison of coherent and non-coherent BPSK detection in the presence of noise was made at:
• cutoff frequency of the low-pass filter: 1.5 kHz
• total number of bits transmitted: 128 bits
• SNR calculated for a 4 Vp-p (2.828 Vrms) input signal amplitude and a variable noise signal amplitude.

Figure (3) shows measurement results of BER versus SNR for the coherent and non-coherent detection of BPSK modulation at different noise amplitudes.

Figure 3. Probability of bit error versus SNR for measurement BPSK detection

7. Simulation results

A simulation of the BPSK transmission channel with AWGN was made using SIMULINK. Figure (4) shows simulation results of BER versus SNR for the coherent and non-coherent detection of BPSK modulation. Figure (5) shows a comparison between the simulation and measurement results of BER versus SNR for coherent and non-coherent detection of BPSK modulation.

Figure 4. Probability of bit error versus SNR for simulation BPSK detection

Figure 5. Probability of bit error versus SNR for measurement and simulation BPSK detection

8. Conclusions

Results show that coherent detection of BPSK offers approximately 60-70% improvement in BER over non-coherent detection. Simulation results offer approximately 40-50% improvement over the measurement results. The measurement and simulation results lead us to conclude that BPSK modulation has the best performance with coherent detection.
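The error-counting procedure of the measurement setup (bit-by-bit comparison against the transmitted data) and the non-coherent expressions (6) and (9) can be cross-checked with a short Monte Carlo sketch. This is an illustrative baseband model with coherent threshold detection, not the authors' SIMULINK model or measurement hardware; the block size, seed and Eb/N0 points are assumptions.

```python
# Cross-check sketch: estimate the BER of baseband BPSK over AWGN by
# comparing detected and transmitted bits in an error counter, and
# evaluate the paper's non-coherent expressions (6) and (9) alongside.
import numpy as np

rng = np.random.default_rng(1)

def ber_montecarlo(ebn0_db, n_bits=100_000):
    """Count bit errors for antipodal BPSK (+/- sqrt(Eb)) in AWGN."""
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                       # 0 -> -1, 1 -> +1
    noise = rng.normal(0, np.sqrt(1 / (2 * ebn0)), n_bits)
    detected = (symbols + noise > 0).astype(int) # threshold gamma0 = 0
    errors = np.count_nonzero(detected != bits)  # the error counter
    return errors / n_bits

def pb_eq6(ebn0):   # non-coherent FSK/OOK, equation (6)
    return 0.5 * np.exp(-ebn0 / 2)

def pb_eq9(ebn0):   # non-coherent (differentially detected) BPSK, eq. (9)
    return 0.5 * np.exp(-ebn0)

for snr_db in (0, 5, 10):
    ebn0 = 10 ** (snr_db / 10)
    print(snr_db, ber_montecarlo(snr_db), pb_eq6(ebn0), pb_eq9(ebn0))
```

The simulated coherent BER falls below both non-coherent expressions at every Eb/N0, consistent with the paper's observation that coherent detection of BPSK performs best.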

References

[27] M. K. Simon and M. S. Alouini, 2000, "Digital Communication over Fading Channels: A Unified Approach to Performance Analysis", John Wiley & Sons.
[28] Zheng Du, J. Cheng and C. Norman, 2007, "BER Analysis of BPSK Signals in Ricean-Faded Co-channel Interference", IEEE Transactions on Communications, Vol. 55, No. 10.
[29] P. C. Weeraddana, N. Rajatheva and H. Minn, 2009, "Probability of Error Analysis of BPSK OFDM Systems with Random Residual Frequency Offset", IEEE Transactions on Communications, Vol. 57, No. 1.
[30] A. M. Rabiei and N. C. Beaulieu, 2007, "Exact Error Probability of a Band-limited Single-Interferer Maximum Likelihood BPSK Receiver in AWGN", IEEE Transactions on Wireless Communications, Vol. 6, No. 1.
[31] M. S. Modabbes, 2004, "Performance Analysis of Shift Keying Modulation in the Presence of Noise", 1st International Conference on Information and Communication Technologies: From Theory to Applications (ICTTA'04), Proceedings, pp. 271-272.
[32] Soon H. Oh and Kwok H. Li, 2005, "BER Performance of BPSK Receivers Over Two-Wave with Diffuse Power Fading Channels", IEEE Transactions on Wireless Communications, Vol. 4, No. 4.
[33] J. M. López-Villegas and J. J. Sieiro Cordoba, 2005, "BPSK to ASK Signal Conversion Using Injection-Locked Oscillators - Part I: Theory", IEEE Transactions on Microwave Theory and Techniques, Vol. 53, No. 12.
[34] R. Kwan and C. Leung, 2002, "Optimal Detection of a BPSK Signal Contaminated by Interference and Noise", IEEE Communications Letters, Vol. 6, No. 6.
[35] H. Roelofs, R. Srinivasan and W. van Etten, 2003, "Performance estimation of M-ary PSK in co-channel interference using fast simulation", IEE Proceedings - Communications, Vol. 150, No. 5.
[36] H. Meng, Y. L. Guan and S. Chen, 2005, "Modeling and Analysis of Noise Effects on Broadband Power-Line Communications", IEEE Transactions on Power Delivery, Vol. 20, No. 2.
[37] A. M. Rabiei and N. C. Beaulieu, 2007, "An Analytical Expression for the BER of an Individually Optimal Single Co-channel Interferer BPSK Receiver", IEEE Transactions on Communications, Vol. 55, No. 1.

Author Profile

Mohammad Samir Modabbes received the B.S. degree in Electronic Engineering from the University of Aleppo in 1982, and the M.S. and Ph.D. degrees in Communications Engineering from the High Institute of Communications in Sankt Petersburg in 1988 and 1992, respectively. He is working as an Associate Professor at the Faculty of Electrical and Electronic Engineering, Department of Communications Engineering, University of Aleppo, Syria. Since 2006 he has been working as an Associate Professor at Qassim University, College of Computer, Department of Computer Engineering, in Saudi Arabia. His research interests are: analysis and performance evaluation of data transmission systems, and wireless communication systems.

The Use of Mobile Technology in Delivering E-Learning Contents: UNITEN Experience
Mohd Hazli.M.Z, Mohd Zaliman Mohd Yusoff and Azlan Yusof
Universiti Tenaga Nasional, College of Information Technology,
KM7 Jalan Kajang-Puchong, 43009 Kajang, Selangor, Malaysia.
{hazli, zaliman, azlany}@uniten.edu.my

Abstract: E-learning has become more affordable with cheaper but faster Internet connections. This has encouraged the development of mobile applications and the integration of such applications into conventional e-learning systems. This paper presents a working prototype of an SMS-based mobile learning system, which includes the delivery and dissemination of interactive learning contents to students using mobile phones at Universiti Tenaga Nasional (UNITEN). Challenges and issues related to the deployment of the SMS-based application are also discussed.

Keywords: e-Learning, mobile technology, short message service application

1. Introduction

In today's modern life, mobile communication activities are very popular among teenagers and adolescents. This has resulted in the rise of many mobile applications, including mobile learning [2]. In fact, mobile learning presents great potential, as it offers a unique learning experience which allows students to learn their lessons anywhere, anytime and in a very personalized way. Moreover, it provides enriching, enlivening value as well as adding variety to conventional lessons or courses.

There are several benefits that the mobile learning approach can offer. For example, it encourages both independent and collaborative learning experiences. Attewell [2] reported that many learners who took part in a mobile learning class enjoyed the opportunity of using mobile devices in their learning sessions. Besides, the mobile learning approach helps to combat resistance to the use of ICT and bridges the gap between mobile phone literacy and ICT literacy [9]. For example, it is reported in [6] that, post-participation, a number of learners among displaced young adults studying ESOL (English for Speakers of Other Languages) who had previously avoided using PCs actively continued working with their PCs on tasks such as writing letters. In fact, for some learners, their computer skills and confidence were enhanced to such an extent that they felt able to offer support and assistance to their peers. Besides that, several successful results of the benefits gained from mobile learning environments were also reported by researchers in Sweden, Italy and the UK [9].

While mobile learning is regarded as the new promising learning paradigm, the adaptation of this learning approach to the Malaysian community must be done carefully [10]. One of the potential drawbacks to the success of the implementation of the mobile learning approach is the delivery technology [10]. Compared to developed countries, Malaysia is still lacking in telecommunication infrastructure, especially in the area of critical broadband services, Bluetooth and Wi-Fi technologies [10]. Thus, it is a challenge for Malaysian researchers to study the delivery method best suited to the Malaysian environment. Without the right setting, the potential of mobile learning technology cannot be utilized by Malaysian students. A suitable delivery method is important so that the progress of mobile learning can be accelerated by providing better learning opportunities for students. In the next section, the results of a survey that was carried out to study students' perspectives with regard to mobile learning are presented.

2. The SMS-based Mobile Learning System (SMLS)

A survey was conducted amongst students of Universiti Tenaga Nasional (UNITEN) to study their perspective on mobile device usage in obtaining information related to their studies. This includes the dissemination of learning contents such as information on lecture notes, quizzes, class information and other related student activities. There were 55 unpaid students of UNITEN involved in this study. They represent students from various degree levels (from foundation to final-year students) and different degree programs (Engineering as well as IT background students), including those with positions in students' clubs and societies.

Results from the survey indicate that a very significant percentage (83%) of respondents agreed that using SMS to obtain information related to their studies and daily activities in the university was more effective and convenient. Besides that, the coverage of SMS applications is wider compared to social networking or instant messengers, as it has a better penetration level in this country. In fact, the penetration level of cellular phones in Malaysia is more than 98%, compared to only 21% broadband penetration [10]. Underpinned by the survey results, we developed a working prototype to incorporate mobile technology in delivering learning contents. The SMLS application was developed for both students and lecturers. For students, the SMLS application allows them to request various information related to their learning routines, such as checking the class schedule, downloading assignments and obtaining their assessment marks.

As for lecturers, they can use the system to disseminate important announcements to students, such as posting the postponement of a class or giving instructions for downloading notes or assignments. Moreover, the SMLS

application provides web-based interfaces for lecturers to update student information (e.g. students' contact numbers and email addresses) and class information, and to upload notes or assignments to the SMLS server (see Figure 1).

Figure 1. Interface of SMLS

3. Design and Implementation

3.1 System Architecture
Figure 2 below shows the architecture of the SMS technology used in the SMLS prototype. The prototype is developed based on current SMS technology. To reduce the development time and concentrate more on the development of the learning contents, a third-party SMS gateway service was deployed. The third-party SMS gateway provides the application programming interface between the SMLS and the service provider's SMS centre.

Figure 2. SMLS system architecture

Each SMS gateway has a unique number known as the shortcode. The shortcode is used by the SMS centre of a mobile service provider to route the message to the respective SMS gateway. In our context, the shortcode used is 32244. This means that when a mobile phone user sends an SMS to 32244, the SMS centre of the respective mobile service provider will route the message to the specific SMS gateway server. Based on the first keyword in the message, the gateway will then make an HTTP GET request to SMLS. Upon receiving the request, the SMLS server will verify the message and process the request, as explained later in this section. An example of an SMS sent by a user is as below:

To: 32244
SMS2U cseb364 notes adam@yahoo.com.my

Figure 3. Sending SMS to SMLS

The first string, "sms2u", is the keyword for the SMLS application (see Figure 3). The gateway will invoke the HTTP request using a GET method to the SMLS server, together with several other parameters, for processing as below:

http://smls.net/receive.php?from=0123456789&text=sms2u+cseb324+notes+adam@yahoo.com.my.&time=13+May+2010+17:03:52&msgid=10001&shortcode=32244

After processing the string, SMLS will return the result accordingly. Basically the result contains 4 parameters, as described in Table 1 below:

Table 1: Return string parameters
Sequence | Parameter name | Description
1 | Type | Message type: 0 - normal text, 1 - ringtone, 2 - operator logo, 4 - picture message
2 | To | The receiver's mobile number, which must be in international format, for example: 60123456789
3 | Text | The content of the message, which must be URL-encoded
4 | Price | 30 - RM0.30

An example of a return string is as below:

0,60123456789,CSEB364%20notes%20had%20been%20sent%20to%20adam%40yahoo.com.my.,30
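To make the request/response contract concrete, here is a minimal Python sketch (the paper's actual handler is a PHP script, receive.php, whose code is not shown): it parses the parameters the gateway passes in its HTTP GET request and assembles the four-field return string of Table 1. The message layout ("sms2u" + course code + request keyword + email) follows the paper's examples; the handler logic and reply wording are assumptions.

```python
# Illustrative sketch of the SMLS request/response cycle: parse the
# gateway's GET parameters, check the "sms2u" keyword, and build the
# Table 1 return string (type, to, URL-encoded text, price in cents).
# Not the authors' receive.php; the dispatch logic is an assumption.
from urllib.parse import parse_qs, quote, urlparse

def handle_request(url):
    """Parse a gateway GET request and build the Table 1 reply string."""
    q = parse_qs(urlparse(url).query)
    sender = q['from'][0]
    words = q['text'][0].split()
    if len(words) < 4 or words[0].lower() != 'sms2u':
        return None                       # not a valid SMLS request
    course, request, email = words[1], words[2], words[3]
    # Here the real server would query MySQL or invoke sendmail for
    # a 'notes' request; this sketch only builds the acknowledgement.
    reply = f"{course.upper()} {request} had been sent to {email}."
    # Return string per Table 1: type 0 (normal text), recipient,
    # URL-encoded message body, premium price of 30 cents (RM0.30).
    return ','.join(['0', sender, quote(reply), '30'])

reply_string = handle_request(
    'http://smls.net/receive.php?from=0123456789'
    '&text=sms2u+cseb324+notes+adam@yahoo.com.my'
    '&time=13+May+2010+17:03:52&msgid=10001&shortcode=32244')
```

The resulting string has the same shape as the paper's example reply, with spaces encoded as %20 and the @ sign as %40, ready for the gateway to forward to the SMS centre.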

The above string contains the message type code (0), the recipient phone number (60123456789), the result or information requested in URL-encoded form, and the premium charge (30) in cents. The gateway then processes the string and forwards the message to the respective SMS centre of the particular service provider. Figure 4 shows an example SMS sent by a user to 32244 and the reply from SMLS:

From: 32244
CSEB364 notes had been sent to adam@yahoo.com.my.

Figure 4. SMS replies from SMLS

3.2 Server Components of SMLS
In general, SMLS comprises two main components: the Information On Demand (IOD) module and the Bulk SMS module. The IOD module is used to handle information per user request. This is a one-to-one service type: if information is requested by a user, the information is returned or sent only to that particular user. In general, the server components of the SMLS are represented in Figure 5 below.

Figure 5. SMLS server components: Apache web server, PHP application scripts, MySQL database, IOD modules, Bulk SMS modules, mail server and FTP server

The Bulk SMS module is used when SMLS needs to send a message to multiple users. For example, let us assume that a lecturer needs to send an announcement from his mobile phone. In this case, the lecturer can send an SMS containing special keywords together with the announcement to be made to SMLS, as below:

sms2u announce cseb324 class is postponed.

Upon receiving the request, SMLS will invoke the Bulk SMS module to send the announcement to the phone numbers listed under the CSEB324 class. The status of the operation will then be sent back to the lecturer's cellular phone. Besides the IOD and Bulk SMS modules, there are also a database server (MySQL) and PHP modules, which control the flow and processing of user requests. They also act as the interfaces between the SMS gateway and other web services, like email and FTP. So, upon receiving the HTTP GET request, the web server will execute specific PHP scripts according to the keyword. In some cases, when necessary, the script will invoke the database server to establish a connection to the database, or request the mail server to send an email to an email address.

Figure 6 describes the flow of SMLS processes. On receiving an SMS, SMLS will first verify the message and then perform the required task. Every incoming SMS is logged into an incoming-SMS database table. For every valid keyword, the required task is performed (i.e. either make a query to the database and get the result, or invoke the sendmail function to send emails). An acknowledgment is then sent through the SMS gateway, then to the SMS centre, and eventually to the student's mobile device for every valid SMS request.

Figure 6. General flowchart of SMLS: start; log the received message; process the SMS string; if the request is invalid, return an error message; otherwise obtain the relevant information, return the information requested, record the log, and end

3.3 System Functionality
Table 2 below describes the tasks and applications offered by the SMLS system to students and lecturers. SMLS has been successfully implemented for students of the CSEB324 Data

Structure and Algorithms class during Semester 2, 2009/2010, in UNITEN.

Table 2: SMLS functionalities
Actor | Task | Description
Lecturer | Announcement | Send an announcement to all students in a class. The announcement can be in the form of an instruction to students or a notification related to their class.
Student | Request notes | Students can request notes, and the notes will then be emailed to their email address.
Student | Check class information | Students can check the class schedule, class announcements or any class-related information.
Student | Download quiz | Students can request a quiz paper to be emailed to their email.
Student | Review questions | Students can request review questions for a chapter, for refreshment before class.
Student | Class synopsis | Students can request a class or chapter synopsis, for refreshment before or after class.
Student | Check marks | Students can check their marks (quizzes, tests or any assessment marks).

4. Issues and Challenges

Although SMLS works as planned, there are several important issues to be highlighted.

4.1 Reliability and performance issue
The reliability of the current implementation of the SMLS prototype architecture relies on several servers. However, these servers are not owned by the university - they belong to the service provider (i.e. the SMS centre and the SMS gateway). As such, the performance of the SMLS prototype depends much on the performance and availability of both servers. During the trial, there were cases where the reply took longer than 15 seconds, which was caused by the gateway server. In some isolated cases, the reply even failed to reach the user, solely due to network connection problems. Often, the problem was due to connectivity failure between the third-party SMS gateway and the service provider's SMS centre. Thus, to overcome the performance and reliability issues, two solutions are proposed. First, the university can choose a third-party SMS gateway provider with a better reliability and performance track record. This option is simple, but the recurring costs (i.e. maintenance, upgrading) of such a gateway can be very expensive. A better option is for the university to set up its own SMS gateway server. Although the initial setup cost is expensive, the university can recoup its investment from the charges imposed by the service provider on the user.

4.2 Cost of SMS issue
Since SMLS uses a third-party SMS gateway, the cost of sending IOD to the user is charged at a premium rate, with a minimum of RM0.30 per message received. This amount is on top of the standard SMS rate imposed by the respective service provider. Students with financial constraints were a bit reluctant to use the system unless they had no choice. For an SMS message, about 40% to 60% of the premium charge goes to the service provider. Whatever remains is shared between the third-party SMS gateway and the content provider (in this case the university). If the university owned the gateway server, almost half of the cost could be subsidized by the university. Furthermore, the mobile service providers could play their role by reducing the premium charged for education purposes as part of their corporate social responsibility projects.

4.3 Attestation and Privacy issue
Despite the ability of the SMLS prototype system to detect students' mobile phones, the issue of user verification and authentication still poses a major challenge in the SMLS environment. It is hard to develop a function that is able to authenticate the user's identity in the SMLS environment. Although some mobile phones, such as PDAs, have a user authentication function using a fingerprint to authenticate the authorized phone user, such mobile phones are seldom used by students. In most cases, access to the mobile device is open to everyone; thus, whoever has access to the phone can use it to send SMS and obtain private information such as a student's marks, or maybe answer a quiz on behalf of another student. In this pilot project, we simply assume that the sender is the trustworthy student. However, in order for an SMLS system to be widely used and accepted, an attestation framework has to be developed.

4.4 Copyright of the learning material
Academic materials are usually copyrighted. In a mobile learning environment, materials such as lecture notes and questions are disseminated through mobile phones without the copyright notice; hence, the intellectual property of the material is not protected. As such, students could easily reproduce the material and disseminate it to other students. Therefore, serious consideration should be given to means of protecting lecturers' intellectual property rights over their learning materials.

4.5 Equal opportunity among students
In a mobile learning environment, the use of mobile

technology gadgets is mandatory. In the SMLS context, a student should have a mobile phone with SMS capability. Although mobile phone penetration in the country is high (almost 100%), there are still students who cannot afford one. Unlike personal computers, mobile phones are personal accessories and are therefore not provided by the university. Thus, this creates an unequal learning opportunity for those who cannot afford to buy a mobile phone. Serious actions should be taken to bridge the gap between students, as the possession of a mobile phone is an important requirement if the SMLS were to be implemented on a larger scale, especially in third-world countries.

4.6 University policy and acceptance
We also looked into the university policy on extra charges to be paid by the students. Since the university is bound by the rules and regulations of the Ministry of Higher Education, extra charges to the students have to be addressed carefully. This is to avoid any discrepancies with the current rules and regulations on charging students. As a pilot project, SMLS is implemented only as an alternative and to complement the current web-based technology. Students are not forced to use the system; instead, they opt to use the SMLS voluntarily, as it brings a new learning environment experience to students.

5. Conclusion

With the intention of enriching students' learning experience, SMLS has been developed. SMLS is an approach to implementing e-learning using existing and inexpensive technology. Using SMLS, the learning process can be done in almost any place with very minimal infrastructure - a hand phone. Further research and work need to be done to improve the system so that more complicated content, such as multimedia content, can be delivered. Besides that, more work should be done on the issues and challenges discussed in this paper in order to provide a more conducive environment for students and lecturers to interact via SMLS. It is hoped that this project will not only generate new knowledge, but will also provide a new learning paradigm, as it could improve students' learning interest and eventually contribute to the improvement of their performance.

References

[1] Albanese, M and Mitchell, S (1993). Problem-based learning: a review of the literature on its outcomes and implementation issues. Academic Medicine, 68: 52-81.
[2] Attewell, J (2005). Mobile technologies and learning: A technology update and m-learning project summary. London: Learning and Skills Development Agency. At http://www.LSDA.org.uk, retrieved on 30/10/2009.
[3] Attewell, J and Savill-Smith, C (2003). M-learning and social inclusion - focusing on learners and learning. Proceedings of MLEARN 2003: Learning with Mobile Devices. London, UK: Learning and Skills Development Agency, 3-12.
[4] … of papers. London: Learning and Skills Development Agency. At www.LSDA.org.uk/files/PDF/1440.pdf, accessed Nov 2009.
[5] Bacsich, P, Ash, C, Boniwell, K and Kaplan, L (1999). The Costs of Networked Learning. Sheffield, UK: Sheffield Hallam University. Available online at: www.shu.ac.uk/cnl/report1.html.
[6] Corlett, D, Sharples, M, Chan, T and Bull, S (2004). A mobile learning organiser for university students. Proceedings of the 2nd International Workshop on Wireless and Mobile Technologies in Education. JungLi, Taiwan: IEEE Computer Society, 35-42.
[7] Johnsen, ET (2004). Telenor on the EDGE. 3G Americas, at www.3gamericas.org/English/PressReleases/PR200469682.html, accessed Nov 2009.
[8] Lonsdale, P, Baber, C, Sharples, M and Arvanitis, TN (2003). A context-awareness architecture for facilitating mobile learning. Proceedings of MLEARN 2003: Learning with Mobile Devices. London, UK: Learning and Skills Development Agency, 79-85.
[9] Lonsdale, P, Baber, C, Sharples, M, Byrne, W, Arvanitis, T, Brundell, P and Beale, H (2004). Context awareness for MOBIlearn: creating an engaging learning experience in an art museum. Proceedings of MLEARN 2004. Bracciano, Rome: LSDA.
[10] Malaysian Communications and Multimedia Commission (SKMM), 2009. Brief Industry Trends Report: 2H 2008. At http://www.skmm.gov.my, accessed April 2010.
[11] NRPB (2004). Mobile Phones and Health 2004: Report by the Board of NRPB. Didcot, Oxfordshire: National Radiation Protection Board.
[12] Prensky, M (2004). What Can You Learn from a Cell Phone? - Almost Anything. At www.marcprensky.com/writing/Prensky-What_Can_You_Learn_From_a_Cell_Phone-FINAL.pdf, accessed November 2009.

Authors Profile

Mohd Hazli M.Z obtained his Bachelor of Computer Science and M.Sc. (Computer Science) from Universiti Teknologi Malaysia in 1998 and 2001, respectively. He started his career as a system engineer and developer in an IT company in 1998, and joined UNITEN in 2001 as a Lecturer in the Software Engineering Department. He is currently pursuing his PhD in Software Testing.

Mohd Z. M. Yusoff obtained his BSc and MSc in Computer Science from Universiti Kebangsaan Malaysia in 1996 and 1998, respectively. He started his career as a Lecturer at UNITEN in 1998 and has been appointed as a Principal Lecturer at UNITEN since 2008. He has produced and presented more than 60 papers for local and international conferences. His research interest includes modeling and applying emotions in various domains including educational systems and software
[4] Attewell J, Savill-Smith C (eds) (2004). Learning agents, modeling trust in computer forensic and integrating agent
with mobile devices: research and development –a book in knowledge discovery system.
64 (IJCNS) International Journal of Computer and Network Security,
Vol. 2, No. 6, June 2010
Azlan Yusof obtained his BSc in Computer Science and MSc (Computer Science) from Universiti Teknologi Malaysia in 2000 and 2002 respectively. He started his career at UNITEN in 2003 as a lecturer in the Software Engineering Department.
Automated Detection and Recognition of Face in a Crowded Scene

Chandrappa D N1, M Ravishankar2 and D R RameshBabu3

1 SJB Institute of Technology, Bangalore, India
Chandrappa.dn@gmail.com
2 Professor ISE, Dayananda Sagar College of Engineering, Bangalore, India
ravishankarmcn@gmail.com
3 Professor CSE, Dayananda Sagar College of Engineering, Bangalore, India
Abstract: Face detection and recognition play an important role in the process of biometric identification, which happens in two phases: first face detection and then recognition. Face detection is performed by feature extraction, considering a skin color model and performing skin segmentation and morphological operations under different environmental variations. It extends to extracting the skin color regions by skin likelihood, after which the obtained face is cropped, scaled and stored in a specified location using digital image processing techniques. For the purpose of face recognition, the cropped face images are projected onto a feature space (face space) that best encodes the variation among known face images, using the Eigenface method to identify the authenticated person.

Keywords: Face detection, Face recognition, Skin Segmentation, Skin-Color model

1. Introduction

Automatic recognition of individuals based on their biometric characteristics has attracted researchers' attention in recent years due to the advancement of image analysis methods and the emergence of significant commercial applications. There is an ever increasing demand for automated personal identification in a wide variety of applications ranging from low to high security, such as Human Computer Interaction (HCI), access control, surveillance, airport screening, smart cards and security. Even though biometrics such as fingerprints and iris scans deliver very reliable performance, the human face remains an attractive biometric because of its advantages over other forms of biometrics.

The face detection algorithm of "Human Skin Colour Clustering for Face Detection" [5] uses a 2D chromatic color space derived from RGB for detecting skin color pixels. Some color spaces have the property of separating the luminance component from the chromatic component, and with that at least partial independence of chromaticity and luminance is achieved; such color spaces are, for example, YUV and HSI. The original face detection algorithm [1] was developed to work best under standard daylight illumination. After the skin color classification is done for every pixel of the image, the skin region segmentation is performed. Unsuitable regions are then eliminated on the basis of geometric properties of the face. Face Recognition using Eigen-faces [6] recognizes a person by comparing characteristics of the face to those of known individuals. The effectiveness of the eigenface method [7] is evaluated by comparing false acceptance rates, false rejection rates and equal error rates for verification operations on a large test set of facial images which present typical difficulties when attempting recognition, such as strong variations in lighting conditions and changes in facial expression.

The proposed method introduces a skin color model that helps to detect faces under different environmental variations; after this stage, a model is proposed for recognizing the faces from the crowded image. This paper is organized as follows: Section 2 discusses face detection using skin color, segmentation, morphological operations, region labeling, Euler's test and template matching, and recognition using Eigen-faces. Section 3 discusses the results of the proposed method, and Section 4 concludes the paper.

2. Proposed Method

In the proposed method, the goal is to detect the presence of faces in a crowded image using a chrominance method and to perform recognition using an eigenvector algorithm. The system is composed of two stages: face detection with feature extraction, and face recognition.

2.1 Face Detection

Face detection is achieved by detecting areas that indicate a human face. It is based on using grayscale and color information and implementing an algorithm to detect faces independently of the background color of the scene.

2.1.1 Skin Color Model

An RGB image, when compared to other color models like YUV or HSI, has the disadvantage of not clearly separating the mere color (chroma) and the intensity of a pixel, which makes it very difficult to robustly distinguish skin colored regions, as they often show large intensity differences due to lighting and shadows. In the RGB space, the triple component (r,g,b) represents not only color but also luminance. Luminance may vary across a person's face due
to the ambient lighting and is not a reliable measure for separating skin from non-skin regions.

In order to segment human skin regions from non-skin regions based on color, there is a need for a reliable skin color model that is adaptable to people of different skin colors and to different lighting conditions. Luminance has to be removed from the color representation in the chromatic color space. Chromatic colors, also known as "pure" colors in the absence of luminance, are defined by the normalization process shown below:

r = R/(R+G+B)    (1)
b = B/(R+G+B)    (2)

The color green is redundant after the normalization because r + g + b = 1.

Chromatic colors have been used effectively to segment color images in many applications. Although the skin colors of different people appear to vary over a wide range, they differ much less in color than in brightness.

A total of 50 skin samples from 20 color images were used to determine the color distribution of human skin in the chromatic color space. As the skin samples were extracted from color images, they were filtered using a low-pass filter to reduce the effect of noise in the samples. This skin color model can transform a color image into a grayscale image such that the gray value at each pixel shows the likelihood of the pixel belonging to skin. A sample color image and its resulting skin-likelihood image are shown in Figure 1. All skin regions (like the face, the hands and the arms) appear brighter than the non-skin regions.

(a). The Original color Image  (b). The skin-likelihood image

Figure 1. Sample color image and its resulting skin-likelihood image

2.1.2 Skin Segmentation

Since the skin regions are brighter than the other parts of the image, they can be segmented from the rest of the image through a thresholding process. Also, since people with different skins have different likelihoods, an adaptive thresholding process is required to achieve the optimal threshold value for each case. The adaptive thresholding is based on the observation that stepping the threshold value down may intuitively increase the segmented region. Using this technique of adaptive thresholding, many images yield good results; the skin-colored regions are effectively segmented from the non-skin colored regions. The skin segmented image of a color image resulting from this technique is shown in Figure 2. The next stage in the process is morphological operations, which involve tools for extracting image components that are useful in the representation and description of region shape, such as boundaries and skeletons, using erosion and dilation.

Figure 2. Skin Segmented Image

2.1.3 Morphological Operations

The basic morphological tools like filling, erosion and dilation are applied to the binary image, with '1' representing skin pixels and '0' representing non-skin pixels, in order to separate skin areas which are closely connected (when two faces overlap slightly). The dilated binary image is multiplied with the binary image from the segmentation process to maintain the holes, as shown in Figure 3. The next stage is to analyze each region, through region labeling, to determine whether it is a face region or a non-face region.

(a). Eroded Image  (b). Dilated Image

Figure 3. After Morphological operation

2.1.4 Region Labeling

The binary image resulting from the morphological operations needs to be labeled such that each clustered group of pixels can be identified as a single region, so that each region can be analyzed further to determine if it is a face region or not. That is, instead of 1 and 0, each region is labeled as 1, 2, 3 and so on; pixels with value 0 are unchanged. A function is used to show each labeled region in a different color, as shown in Figure 4.

The next stage is to conduct the Euler test on each skin region: a face region should have at least one hole inside it.
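The chromatic normalization of equations (1) and (2) is simple to state in code. The sketch below is only an illustration of the transform (the function name and sample pixel values are ours, not from the paper); it shows why the model tolerates brightness changes: scaling a pixel's intensity leaves its chromatic coordinates unchanged.

```python
def to_chromatic(pixel):
    """Map an (R, G, B) pixel to chromatic coordinates (equations 1-2).

    g = G/(R+G+B) is redundant because r + g + b = 1, so only (r, b)
    are kept, as in the paper's skin color model.
    """
    R, G, B = pixel
    s = R + G + B
    if s == 0:
        return (0.0, 0.0)  # a pure black pixel carries no chroma information
    return (R / s, B / s)

# A skin-toned pixel and the same pixel at half brightness map to the
# same chromatic point: luminance has been factored out.
bright = to_chromatic((220, 160, 130))
dark = to_chromatic((110, 80, 65))
```

In the full pipeline, each pixel's (r, b) value would then be scored against the distribution fitted to the 50 skin samples to produce the skin-likelihood image of Figure 1.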
Figure 4. Colored labeled region

2.1.5 Euler Test

This stage processes one region at a time. The objective is to reject regions which have no holes, as mentioned in the previous section. After analyzing several images, it was decided that a skin region should have at least one hole inside it. To determine the number of holes inside a region, we use the formula

E = C – H    (3)

where E = Euler number, C = connected components and H = holes. The next stage conducts template matching by cross correlation between a template face and the grayscale region.

Figure 5. After Euler's test

2.1.6 Template Matching

This is the final stage of face detection, where cross correlation between the template face and the grayscale region is performed (the grayscale region is obtained by multiplying the binary region with the original grayscale image). The template face is an average frontal face of a person. This final stage involves the following processes. Firstly, the width, height, orientation and centroid of the binary region under consideration are computed. Then, the template face image is resized and rotated, and its centroid is placed on the centroid of the region in the original grayscale image with only one region, as shown in Figure 6. The rotated template needs to be cropped properly, and its size needs to match that of the region. Then the cross correlation is calculated; a threshold of 0.2 was set by experiment, and any region with a lower cross correlation value is rejected.

Figure 6. Template Matching

2.1.7 Final Detection Output

A rectangular box marks each detected face, as shown in Figure 7. Once we get the face regions and crop them, each can be converted to a gray image of size 92 x 112 pixels and saved as an individual image for further processing for face recognition.

(a) Face detected  (b) Cropped Image  (c) Gray converted resized image

Figure 7. Final face detection

2.2 Face Recognition

Humans can recognize human faces with little difficulty even in complex environments; the human face is therefore a natural and attractive biometric. Most approaches to automating human face recognition analyze face images automatically, so experimental results can be collected and approaches evaluated quickly by using the eigenface method.

The PCA approach [6] treats every image of the training set as a vector in a very high dimensional space. The eigenvectors of the covariance matrix of these vectors incorporate the variation amongst the face images. Each image in the training set has its own contribution to the eigenvectors (variations), which can be displayed as an 'eigenface' representing its contribution to the variation between the images. In each eigenface some sort of facial variation can be seen which deviates from the original image.

The face image to be recognized is projected onto this face space. This yields the weights associated with the eigenfaces, which linearly approximate the face or can be used to reconstruct it. These weights are compared with the weights of the known face images for the image to be recognized as a known face used in the training set. The face image is then classified as the face with the minimum Euclidean distance.
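The Euler test of Section 2.1.5 (equation (3), E = C − H) can be sketched without any image library. The counting below is a hypothetical, dependency-free stand-in for the paper's implementation; it treats foreground as 8-connected and background as 4-connected, a common convention, and counts as holes the background components that never touch the image border.

```python
from collections import deque

def euler_number(grid):
    """Euler number E = C - H (eq. 3) for a binary image given as a
    list of rows of 0/1 values. C: 8-connected foreground components;
    H: 4-connected background components fully enclosed by foreground."""
    rows, cols = len(grid), len(grid[0])

    def flood(r, c, value, seen, neigh):
        # Breadth-first flood fill; returns the component's pixels.
        comp = [(r, c)]
        seen.add((r, c))
        q = deque(comp)
        while q:
            y, x = q.popleft()
            for dy, dx in neigh:
                ny, nx = y + dy, x + dx
                if (0 <= ny < rows and 0 <= nx < cols
                        and (ny, nx) not in seen and grid[ny][nx] == value):
                    seen.add((ny, nx))
                    comp.append((ny, nx))
                    q.append((ny, nx))
        return comp

    eight = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    four = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    seen, C = set(), 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                C += 1
                flood(r, c, 1, seen, eight)

    seen, H = set(), 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0 and (r, c) not in seen:
                comp = flood(r, c, 0, seen, four)
                # A hole is a background component that stays strictly interior.
                if all(0 < y < rows - 1 and 0 < x < cols - 1 for y, x in comp):
                    H += 1
    return C - H
```

On a single region with one hole (eyes, mouth) this returns C − H = 0 or less, which is what the face test accepts; a solid hole-free blob returns 1 and is rejected.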
2.2.1 Calculating Eigen faces

Let a face image Γ(x,y) be a two-dimensional N by N array of intensity values. An image may also be considered as a vector of dimension N², so that a typical image of size 256 x 256 pixels becomes a vector of dimension 65,536, or equivalently, a point in 65,536-dimensional space. An ensemble of images, then, maps to a collection of points in this huge space.

Let the training set of face images be Γ1, Γ2, Γ3, …, ΓM. The average face of the set is defined by

Ψ = (1/M) Σ_{n=1}^{M} Γn.

Each face differs from the average by the vector Φn = Γn − Ψ. An example training set is shown in Figure 8(a), with the average face Ψ.

This set of very large vectors is then subject to Principal Component Analysis, which seeks a set of M orthonormal vectors µn which best describe the distribution of the data. The kth vector µk is chosen such that

λk = (1/M) Σ_{n=1}^{M} (µk^T Φn)²    (4)

is a maximum, subject to the orthonormality constraint

µl^T µk = 1 if l = k, and 0 otherwise.

The vectors µk and scalars λk are the eigenvectors and eigenvalues, respectively, of the covariance matrix

C = (1/M) Σ_{n=1}^{M} Φn Φn^T = A A^T    (5)

where the matrix A = [Φ1 Φ2 … ΦM]. The matrix C, however, is N² by N², and determining its N² eigenvectors and eigenvalues is an intractable task for typical image sizes. If the number of data points in the image space is less than the dimension of the space (M < N²), there will be only M − 1, rather than N², meaningful eigenvectors.

Consider the eigenvectors νn of A^T A, such that

A^T A νn = λn νn.

These vectors determine linear combinations of the M training set face images that form the eigenfaces µn:

µn = Σ_{k=1}^{M} νnk Φk = A νn,   n = 1, …, M    (6)

With this analysis the calculations are greatly reduced, from the order of the number of pixels in the images (N²) to the order of the number of images in the training set (M). In practice, the training set of face images will be relatively small (M < N²), and the calculations become quite manageable. The associated eigenvalues allow us to rank the eigenvectors according to their usefulness in characterizing the variation among the images. The training set, normalized training set and eigenfaces are shown in Figure 8.

Figure 8. (a) Training set (b) Normalized training set (c) Eigenfaces

3. Result and Discussion

Results were evaluated in the MATLAB environment for different lighting conditions with uniform as well as non-uniform backgrounds. Faces in crowded scenes were detected using the skin color model algorithm; the different stages are analysed experimentally in Figure 1 to Figure 7. The experiment involves a set of 20 images tested under different conditions of lighting, intensity, and uniform and non-uniform backgrounds, as shown in Figures 9 to 12. The obtained face is then cropped, scaled and stored in a specified location for eigenface recognition.

(a). Original Image  (b). Detected Face

Figure 9: Very bright light condition (e.g. sunlight) under different background conditions
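The dimensionality-reduction trick of Section 2.2.1 (equations (4)-(6)) can be sketched in a few lines: instead of diagonalizing the N² x N² covariance C = A Aᵀ, one works with the M x M matrix L = Aᵀ A and maps its eigenvectors back through A. The sketch below is ours, not the paper's MATLAB code, and uses plain power iteration to find only the dominant eigenvector:

```python
import math

def top_eigenface(faces, iters=200):
    """First eigenface via the reduced M x M problem of Section 2.2.1.

    Rather than diagonalizing the N^2 x N^2 covariance C = A A^T (eq. 5),
    we build L = A^T A (M x M), find its dominant eigenvector by power
    iteration, and map it back through A (eq. 6).
    faces: M images flattened to equal-length lists of floats.
    """
    M, n = len(faces), len(faces[0])
    mean = [sum(f[i] for f in faces) / M for i in range(n)]           # average face Psi
    A = [[f[i] - mean[i] for i in range(n)] for f in faces]           # row i is Phi_i
    L = [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(M)]
         for i in range(M)]                                           # L = A^T A
    v = [1.0] + [0.0] * (M - 1)  # deterministic start vector (assumed
                                 # not orthogonal to the dominant eigenvector)
    for _ in range(iters):       # power iteration on the small M x M matrix
        w = [sum(L[i][j] * v[j] for j in range(M)) for i in range(M)]
        norm = math.sqrt(sum(x * x for x in w))
        if norm == 0.0:
            break
        v = [x / norm for x in w]
    mu = [sum(v[i] * A[i][k] for i in range(M)) for k in range(n)]    # mu = A v (eq. 6)
    norm = math.sqrt(sum(x * x for x in mu)) or 1.0
    return [x / norm for x in mu]
```

A full recognizer would keep the top several eigenfaces, project each Φn onto them to obtain weight vectors, and classify a probe face by minimum Euclidean distance between weight vectors, as described in Section 2.2.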
(a). Original Image  (b). Detected Face

Figure 10: Varying light intensity condition

(a). Original Image  (b). Detected Face

Figure 11: Very bright light and uniform background condition

(a). Original Image  (b). Detected Face

Figure 12: Very bright light and non-uniform background condition

Table 1: Comparison of proposed method with different samples

Face Detection
Samples   Input faces   Faces detected   Efficiency (%)
Sample1   14            13               92.8
Sample2   5             5                100
Sample3   8             7                80

Face Recognition
Samples   Input faces   Faces recognized   Efficiency (%)
Sample1   13            12                 92.3
Sample2   5             5                  100
Sample3   7             7                  100

4. Conclusion
From the presented results it can be concluded that the detection algorithm, based only on skin color and feature extraction, works by considering the skin color model and skin likelihood and performing skin segmentation and morphological operations. It works even with images taken on non-uniform backgrounds and under different lighting and intensity conditions. Face recognition is done by the PCA-based eigenface method, which has been widely cited as a prominent application of PCA to pattern analysis. It is a simple and fast method that works reasonably well, though it is not very robust when dealing with some variations, such as changes in size. The main drawback of the proposed method is that when the background color is similar to skin color, it is difficult to detect the skin areas in the image.

References
[1] Ming Hu, Qiang Zhang and Zhiping Wang, "Application of Rough Sets to Image Pre-processing for Face Detection", IEEE International Conference on Information and Automation, Zhangjiajie, China, June 20-23, 2008.
[2] Sina Jahanbin, Hyohoon Choi, "Automated facial feature detection and face recognition using Gabor features on range and portrait images", IEEE, 2008.
[3] Xiaogang Wang and Xiaoou, "Face Photo-Sketch Synthesis and Recognition", IEEE Trans. on Pattern Analysis and Machine Intelligence.
[4] Randazzo Vincenzo, Usai Lisa, "An Improvement of AdaBoost for Face-Detection with Motion and Color Information", IEEE 14th International Conference on Image Analysis and Processing (ICIAP), 2007.
[5] Jure Kovac, Peter Peer and Franc Solina, "Human Skin Colour Clustering for Face Detection", IEEE Region 8, EUROCON 2003, vol. 2, pp. 144-148.
[6] M. Turk and A. Pentland, "Face recognition using eigenfaces", Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1991, pp. 586-591.
[7] Thomas Heseltine, Nick Pears, Jim Austin and Zezhi Chen, "Face Recognition: A Comparison of Appearance-Based Approaches", VIIth Digital Image Computing: Techniques and Applications, Sun C., Talbot H., Ourselin S. and Adriaansen T. (Eds.), Sydney, 10-12 Dec. 2003.
Authors Profile

Chandrappa D N is a research scholar. He is working as an Assistant Professor in the Dept. of Electronics and Communication Engineering at SJB Institute of Technology, Bangalore-60. He has a B.E degree in Electronics & Communication and an M.E degree in Power Electronics from University Visveswaraiah College of Engineering, and is currently pursuing a PhD in Image Processing. He has 10 years of teaching experience and has guided many UG and PG projects.

M Ravishankar received his Ph.D degree in Computer Science from the University of Mysore. He is working as a Professor in the Department of Information Science and Engineering at Dayananda Sagar College of Engineering, Bangalore, and his research areas include speech processing, image processing and computer vision.

D R RameshBabu received his Ph.D degree in Computer Science from the University of Mysore. He is working as a Professor in the Department of Computer Science and Engineering at Dayananda Sagar College of Engineering, Bangalore, and his research area includes digital image processing.
Clustering Based Machine Learning Approach for Detecting Database Intrusions in RBAC Enabled Databases

Udai Pratap Rao1, G. J. Sahani2, Dhiren R. Patel3

1 Dept. of Computer Engineering, S.V. National Institute of Technology, Surat, Gujarat, INDIA
upr@coed.svnit.ac.in
2 Dept. of Computer Engineering, SVIT, Vadodara, Gujarat, INDIA
gurcharan_sahani@yahoo.com
3 Dept. of Computer Science & Engineering, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, INDIA
dhiren@iitgn.ac.in
Abstract: Database security is an important issue for any organization. Data stored in databases is very sensitive and hence must be protected from unauthorized access and manipulation. Database management systems provide a number of mechanisms to stop unauthorized access, but intelligent hackers are able to break the security of database systems. Most database systems are vulnerable, or the environment in which a database system resides may be vulnerable, and people who know such vulnerabilities can easily get access to the database. Unauthorized suspicious activities can be trapped by database management systems, but there are some authorized users who can violate the security constraints, and traditional database mechanisms are not sufficient to handle such attacks. Early detection of any authorized or unauthorized malicious access is very important for database recovery and for limiting the loss that can occur due to manipulation of data. There are a number of intrusion detection systems for network systems, but these IDSs cannot detect database intrusions, and very few IDS mechanisms for databases have been proposed. Here we propose an unsupervised machine learning approach for intrusion detection in databases enabled with a Role Based Access Control (RBAC) mechanism.

Keywords: Database Security, Clustering Technique, Malicious Transactions

1. Introduction

Databases not only allow the efficient management and retrieval of huge amounts of data, but also provide mechanisms that can be employed to ensure the integrity of the stored data. The data in these databases may range from credit card numbers to personal information like medical records, and unauthorized access to or modification of such data results in big losses to customers. So, database security has become an important issue for most organizations. Recently a number of database attack incidents have occurred and large numbers of customer records have been stolen. Most of the attacks succeeded because of bad coding of database applications or exploitation of database system vulnerabilities. Web applications are the main sources of database attacks. Attackers may attack databases for several reasons, and they may devise newer techniques of database attack over time. In today's network environment it is necessary to protect our data from attackers. Database attacks are mainly of two types: 1) intentional unauthorized attempts to access or destroy private data; 2) malicious actions executed by authorized users to cause loss or corruption of critical data.

Although a number of approaches are available to detect unauthorized attempts to access data, attackers succeed in attacking systems because of their vulnerabilities. As database security mechanisms are not designed primarily to detect intrusions, there are many cases where the execution of malicious sequences of SQL commands (transactions) cannot be detected. Therefore it becomes necessary to employ an intrusion detection system [1]. In case a computer system is compromised, early intrusion detection is the key to recovering lost or damaged data without much complexity. When an attacker or a malicious user updates the database, the resulting damage can spread very quickly to other parts of the database.

An Intrusion Detection System (IDS) provides good protection from attacks aimed at taking down access to the network, such as Distributed Denial of Service attacks and TCP SYN Flood attacks. But such systems cannot detect malicious database activity done by users.

In recent years, researchers have proposed a variety of approaches for increasing intrusion detection efficiency and accuracy [2]-[5], but most of these efforts concentrated on detecting intrusions at the network or operating system level. There have been very few ID mechanisms specifically tailored to database systems, and they are not capable of detecting malicious data corruption; reasonable effort is therefore required in the area of database intrusion detection. Intrusion detection systems determine the normal behavior of users accessing the database, and any deviation from such behavior is treated as an intrusion. There are mainly two models of intrusion detection, namely anomaly detection and misuse detection. The anomaly detection model bases its decision on a profile of a user's normal behavior. It analyzes a user's current session and compares it with the profile representing his normal behavior. An alarm is raised if a significant deviation is found
72 (IJCNS) International Journal of Computer and Network Security,
Vol. 2, No. 6, June 2010

during the comparison of session data and user's profile. generated as compared to the approach presented in [8].
This type of system is well suited for the detection of More rules generated reduce false alarms. But it is also not
previously unknown attacks. The main disadvantage is that, well suited approach for role based database access. Kamra
it may not be able to describe what the attack is and may et. al [10] have proposed a role based approach for detecting
sometimes have high false positive rate. In contrast, a malicious behavior in RBAC (role based access control)
misuse detection model takes decision based on comparison administered databases. Classification technique is used to
of user's session or commands with the rule or signature of deduce role profiles of normal user behavior. An alarm is
attacks previously used by attackers. raised if roles estimated by classification for given user is
We are presenting unsupervised machine learning different than the actual role of a user. The approach is well
approach for database intrusion detections in databases suited for databases which employ role based access control
enabled with role based access control (RBAC) mechanism. mechanism. It also addresses insider threats scenario
It means number of roles has been defined and assigned to directly. But limitation of this approach is that it is query-
users of database systems. Keeping database security in based approach and it cannot extract correlation among
view, proper privileges are assigned to these roles. queries in the transaction.
The rest of this paper is organized as follows. In section
2, we discuss related background. In section 3, a detailed 3. Our Approach
overview about our approach is given. In section 4, analysis
The approach we are presenting is a transaction level
and result of our approach is presented. Finally in section 5
approach. Attributes referred together for read and write
we conclude with the references at the end.
operations in transactions play important role in defining
normal behavior of user’s activities.
2. Related Work For example consider the following transaction:
Application of machine learning techniques to database
security is an emerging area of research. There are various Begin transaction
approaches that use machine learning/data mining select a1,a2,a3 from t1 where a1= 25;
update t2 set a4= a2+ 1.2(a3);
techniques to enhance the traditional security mechanisms
End transaction
of databases. Bertino et al. [6] have proposed a framework
based on anomaly detection techniques to detect malicious Where t1 and t2 are tables of the database and a1, a2, a3
behavior of database application programs. Association rule are the attributes of table t1 and a4, a5 are the attributes of
mining techniques are used to determine normal behavior of table t2 respectively.
application programs. Query traces from database logs are This example shows the correlation between the two
used for this purpose. This scheme may suffer from high queries of the transaction. It states that after issuing select
detection overhead in case of a large number of distinct template queries, i.e., the number of association rules to be maintained will be large. DEMIDS is a misuse-detection system tailored for relational database systems [7]. It uses audit log data to derive profiles describing typical patterns of accesses by database users. The main drawback of the approach presented in [7] is a lack of implementation and experimentation: the approach has only been described theoretically, and no empirical evidence has been presented of its performance as a detection mechanism. Yi Hu and Brajendra Panda proposed a data mining approach [8] for intrusion detection in database systems. This approach determines the data dependencies among the data items in the database system, and read and write dependency rules are generated to detect intrusions. The approach is novel, but its scope is limited to detecting malicious behavior in user transactions. Within that as well, it is limited to user transactions that conform to the read-write patterns assumed by the authors. Also, the system is not able to detect malicious behavior in individual read-write commands, its false alarm rate may be high, and it does not hold good for different access roles. Sural et al. [9] have presented an approach for extracting dependencies among attributes of a database using weighted sequence mining. They have taken the sensitivity of data items into consideration in the form of weights. An advantage of this approach is that more rules are

query, the update query should also be issued by the same user and in the same transaction. The approach presented in [10] can easily detect the attributes which are to be referred together, but it cannot detect the queries which are to be executed together. This example shows the correlation between the two queries of the transaction: it states that after issuing the select query, the update query should also be issued by the same user and in the same transaction. Our approach extracts this correlation among the queries of a transaction. In this approach, the database log is read to extract the list of tables accessed by a transaction and the lists of attributes read and written by it. The extracted information is represented in the following structure format:

(Read, TB-Acc[ ], Attr-Acc[ ][ ], Write, TB-Acc[ ], Attr-Acc[ ][ ])

where Read and Write are binary fields, TB-Acc[ ] is a binary vector of size equal to the number of relations in the database, and Attr-Acc[ ][ ] is a vector of N vectors, N being the number of relations in the database. If the transaction contains a select query then Read is equal to 1, otherwise it is 0. Similarly, if the transaction contains an update or insert query then Write is equal to 1, otherwise it is 0. Element TB-Acc[i] = 1 if the SQL command at hand accesses the i-th table and 0 otherwise. Element Attr-Acc[i][j] = 1 if the SQL command at hand accesses the j-th attribute of the i-th table and 0 otherwise. Table 1 shows the representation of the example transaction given above using this format.
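As an illustration (not the authors' code), the bit-structure encoding above and the binary similarity measure used later for clustering can be sketched in Python. The two-table schema t1(a1, a2, a3), t2(a4, a5) matches the paper's running example, while the function and variable names are our own:

```python
# Sketch of the (Read, TB-Acc[], Attr-Acc[][], Write, TB-Acc[], Attr-Acc[][])
# encoding for a toy schema matching the running example; names are ours.
TABLES = {"t1": ["a1", "a2", "a3"], "t2": ["a4", "a5"]}

def encode(read_access, write_access):
    """Each access map sends a table name to the set of attributes touched."""
    def half(access):
        bits = [1 if access else 0]                          # Read/Write flag
        bits += [1 if t in access else 0 for t in TABLES]    # TB-Acc[]
        for t, attrs in TABLES.items():                      # Attr-Acc[][]
            bits += [1 if a in access.get(t, set()) else 0 for a in attrs]
        return bits
    return half(read_access) + half(write_access)

def simm(t1, t2):
    """Binary similarity: ncount11 / (ncount11 + ncount10 + ncount01)."""
    n11 = sum(1 for x, y in zip(t1, t2) if x == 1 and y == 1)
    n10 = sum(1 for x, y in zip(t1, t2) if x == 1 and y == 0)
    n01 = sum(1 for x, y in zip(t1, t2) if x == 0 and y == 1)
    return n11 / (n11 + n10 + n01)

# tr1: select a1,a2 from t1; update t2 set a4
tr1 = encode({"t1": {"a1", "a2"}}, {"t2": {"a4"}})
# tr2: select a1,a3 from t1; update t2 set a5
tr2 = encode({"t1": {"a1", "a3"}}, {"t2": {"a5"}})
print("".join(map(str, tr1)))    # 1101100010100010
print(round(simm(tr1, tr2), 3))  # 0.556  (= 5 / (5 + 2 + 2))
```

With this layout the encoder reproduces the bit patterns given for the example transactions tr1 and tr2 in the next section, and simm() reproduces their reported similarity of roughly 55.5%.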
Table 1: Representation of example transaction

Rd  t1  t2  a1  a2  a3  a4  a5
1   1   0   1   1   1   0   0

Table 1: (Continued)

Wt  t1  t2  a1  a2  a3  a4  a5
1   0   1   0   0   0   1   0

where Rd = Read and Wt = Write.

The values of the fields of the above structure form the normal behavior of the transactions issued by a user; a violation of this behavior is detected as anomalous. The overall approach is depicted in Figure 1.

[Figure 1 shows the pipeline: the database log (history transactions) is preprocessed into read items and write items; clustering (learning phase) produces clusters (role profiles); a user transaction from the current session is compared against the profiles (detection phase); an outlier raises an alarm, while valid transactions update the new DB log.]

Figure 1. Overview of the proposed approach

Information about the roles of the users who issued the transactions and the data items read and written by these transactions is gathered from the database log. After gathering the history transactions from the database log, they are preprocessed and stored as binary bits representing the items read and items written by each transaction, in the form of the structure presented above. Data generated in this form constitutes the dataset for clustering. Clustering forms groups of similar transactions; these groups represent the normal behavior of the users who issued such transactions, i.e., the role profiles of the users who are authorized to issue such transactions. Once the role profiles are generated, the next goal is to predict the group of a new incoming transaction. If the incoming transaction is found to be a member of any of the clusters, then it is considered a valid transaction. If the incoming transaction is detected as an outlier, then it is considered an invalid transaction and an alarm is generated. Valid transactions are fired on the database and are added to the database log. As the history transactions are to be partitioned into a number of groups, we have used the k-means clustering algorithm, the fastest among the partitioning clustering algorithms. Training tuples generated from the database log have binary data fields; therefore, similarity measures of binary variables can be used for clustering such tuples. The similarity measure between two tuples used by the clustering algorithm of our approach is as follows:

simm(t1, t2) = ncount11 / (ncount11 + ncount10 + ncount01)

where
ncount11 – number of binary fields that have value 1 in both tuples t1 and t2;
ncount10 – number of binary fields that have value 1 in tuple t1 and value 0 in tuple t2;
ncount01 – number of binary fields that have value 0 in tuple t1 and value 1 in tuple t2.

For example, consider the following transactions:

Transaction tr1:
Begin Transaction
select a1,a2 from t1;
update t2 set a4;
End Transaction
Corresponding bit pattern: 110 1100010100010

Transaction tr2:
Begin Transaction
select a1,a3 from t1;
update t2 set a5;
End Transaction
Corresponding bit pattern: 110 1010010100001

ncount11 = 5, ncount10 = 2, ncount01 = 2

The similarity measure of tr1 and tr2 is therefore simm(tr1, tr2) = 5/(5+2+2) = 55.5 %.

An advantage of our unsupervised approach is that the role information of the transactions need not be logged in the database log. Behaviors of the users belonging to the same
role are grouped into the same cluster. The approach is also well suited for users with more than one role; only the detection phase needs to be generalized.

4. Result and Analysis
For verification of our approach, we generated a number of database tables with a number of attributes. We defined a number of roles and generated a number of transactions for these roles. Based on these transactions, we also generated a large number of tuples as a training dataset. For detection, we generated a number of valid as well as invalid transactions. We tested our approach by supplying valid as well as invalid transactions, and our approach detected these transactions with full accuracy. We considered all the possible ways of generating valid and invalid transactions and obtained the proper result in all cases. Our approach correctly detects the correlations among the commands of a transaction: when we issued valid transactions with one of their SQL commands eliminated, they were detected as invalid transactions, and when we issued transactions with all the desired SQL commands, they were detected as valid transactions. Training time also varied linearly with the number of training tuples, as expected. Figure 2 shows the nature of training time versus the number of training tuples.

[Figure 2 plots training time against the number of training tuples.]

Figure 2. Training Time Vs Training Data

5. Conclusion
In this paper we have proposed a new unsupervised machine learning approach to database intrusion detection for databases in which the role-based access control (RBAC) mechanism is enabled. It considers the correlations among the queries of a transaction and detects them accordingly. It does not require role information to be logged in the database log. The clusters of transactions generated can also provide guidelines to the database administrator for role definitions.

References
[1] Fredrik Valeur, Darren Mutz, and Giovanni Vigna, "A learning-based approach to the detection of sql attacks," In Proceedings of the International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment (DIMVA), pages 123-140, 2003.
[2] Lee, V. C. S., Stankovic, J. A., Son, S. H., "Intrusion Detection in Real-time Database Systems Via Time Signatures," In Proceedings of the Sixth IEEE Real Time Technology and Applications Symposium, pages 121-128, 2000.
[3] Marco Vieira and Henrique Madeira, "Detection of Malicious Transactions in DBMS," IEEE Proceedings - 11th Pacific Rim International Symposium on Dependable Computing, pp. 8, Dec 12-14, 2005.
[4] Ashish Kamra, Elisa Bertino, and Evimaria Terzi, "Detecting anomalous access patterns in relational databases," The International Journal on Very Large Data Bases (VLDB), 2008.
[5] Wai Lup Low, Joseph Lee, Peter Teoh, "DIDAFIT: Detecting intrusions in databases through fingerprinting transactions," ICEIS 2002 - Databases and Information Systems Integration, pages 121-127, 2002.
[6] Elisa Bertino, Ashish Kamra, and James Early, "Profiling database application to detect sql injection attacks," IEEE International Performance, Computing, and Communications Conference (IPCCC) 2007, pages 449-458, April 2007.
[7] C. Y. Chung, M. Gertz, and K. Levitt, "DEMIDS: a misuse detection system for database systems," In Integrity and Internal Control in Information Systems: Strategic Views on the Need for Control, IFIP TC11 WG11.5 Third Working Conference, pages 159-178, 2000.
[8] Yi Hu and Brajendra Panda, "A data mining approach for database intrusion detection," In SAC '04: Proceedings of the 2004 ACM symposium on applied computing, pages 711-716, New York, NY, USA, 2004.
[9] Abhinav Srivastava, Shamik Sural and A. K. Majumdar, "Database intrusion detection using weighted sequence mining," Journal of Computers, Vol. 1, No. 4, pages 8-12, July 2006.
[10] Elisa Bertino, Ashish Kamra and Evimaria Terzi, "Intrusion detection in rbac-administered databases," In Proceedings of the Applied Computer Security Applications Conference (ACSAC), 2005.
Secure Password Authenticated Three Party Key Agreement Protocol

Pritiranjan Bijayasingh (1) and Debasish Jena (2)

(1) Department of Computer Science and Engineering, International Institute of Information Technology, Bhubaneswar 751 013, India
pritiprbs@yahoo.com

(2) Centre for IT Education, Biju Patnaik University of Technology, Orissa, India
jdebasishjena@gmail.com
Abstract: In the past, several key agreement protocols have been proposed based on a pre-shared password mechanism. With the development of communication technology, it is necessary to construct a secure end-to-end channel between clients. In this paper, an improved version of the password-authenticated key exchange protocol proposed by J. Kim et al. is presented. The proposed scheme is secure against the Denial of Service attack, the Denning-Sacco attack, etc., and provides perfect forward secrecy. Hence the proposed scheme is superior to the previous scheme.

Keywords: Trusted third party, Kerberos, Denial of Service, and Key Exchange.

1. Introduction
Secure communication between two users on a computer network is possible using either single-key (conventional) encryption or public-key encryption. In both systems, key establishment protocols are needed so the users can acquire keys to establish a secure channel. In single-key systems, the users must acquire a shared communication key; in public-key systems, the users must acquire each other's public keys.

In secure communications, one of the most important security services is user authentication. Before the communicating parties start a new connection, their identities should be verified. In the client-server model, password-based authentication is a favorable method for user authentication because of its easy-to-memorize property. As passwords are easy to remember and selected from a small space, a cryptanalyst can mount several attacks against them, such as guessing and dictionary attacks. Based on different cryptographic assumptions, various protocols have been proposed to achieve secure password-authenticated key exchange [7, 9-12, 19-20] and prevent these ever-present attacks.

In the literature, most password-authenticated key exchange schemes assume that two parties share a common password. The two parties use their shared password to generate a secure common session key and perform key confirmation with regard to the session key. Most of these schemes consider authentication between a client and a server.

In 2002, Byun et al. proposed a new password-authenticated key exchange protocol between two clients with different passwords, which they call the Client-to-Client Password-Authenticated Key Exchange (C2C-PAKE) protocol. In this protocol two clients pre-share their passwords either with a single server (called the single-server setting) or with two respective servers (called the cross-realm setting). However, in [13] Chen has shown that the C2C-PAKE protocol in the cross-realm setting is not secure against a dictionary attack from a malicious server in a different realm. In 2004, J. Kim et al. showed that the C2C-PAKE protocol is also vulnerable to the Denning-Sacco attack by an insider adversary. They have also modified the protocol to overcome the Denning-Sacco insider adversary attack.

In this paper, the weaknesses of the modified C2C-PAKE protocol are presented. Furthermore, a protocol which repairs the problems of the modified C2C-PAKE protocol is proposed.

The remainder of the paper is organized as follows. In section 2, a brief overview of the modified C2C-PAKE protocol is given along with its weaknesses. The improved version of the modified C2C-PAKE protocol is introduced in section 3. Next, in section 4, we briefly discuss the formal security notions of the proposed protocols. The paper concludes with concluding remarks and some directions for future work in section 5.

2. Review of Modified C2C-PAKE Protocol
In this section, the modified C2C-PAKE protocol in the cross-realm setting is described along with its weaknesses.

2.1 Modified C2C-PAKE Protocol

2.1.1 Computational Assumptions
The scheme is based on numerical and computational assumptions. Let p, q be sufficiently large primes such that q | p − 1, and let G be a subgroup of Z*_p of order q. During the initialization step, a generator g ∈ G and hash functions (H1, H2, H3, H4, H5) are published. All protocols throughout the paper are based on the discrete logarithm assumption (DLA) and the Diffie-Hellman protocol.
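As a toy illustration of this setup (with deliberately small, insecure parameters of our own choosing, not from the paper), the subgroup construction and the commutativity of modular exponentiation that the Diffie-Hellman-based protocols below rely on can be checked in Python:

```python
# Toy parameters only: p, q prime with q | p - 1; g generates the
# order-q subgroup G of Z_p*. Real deployments need large primes.
p, q = 23, 11                 # q divides p - 1 = 22
g = pow(2, (p - 1) // q, p)   # g = 2^((p-1)/q) mod p = 4, an element of G

assert (p - 1) % q == 0
assert g != 1 and pow(g, q, p) == 1   # g has order q, so it generates G

# Commutativity of exponentiation underlying Diffie-Hellman key agreement:
# raising g to the exponents r, t, s gives the same value in any order.
r, t, s = 3, 7, 5
lhs = pow(pow(pow(g, r, p), t, p), s, p)   # ((g^r)^t)^s mod p
rhs = pow(pow(pow(g, s, p), t, p), r, p)   # ((g^s)^t)^r mod p
assert lhs == rhs                          # both equal g^(r*t*s) mod p
```

The same commutativity is what lets the two clients and the server(s) in the following protocols blind and re-blind exponentials in different orders and still arrive at a common key.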
2.1.2 Protocol Description
KDCA and KDCB are key distribution centers which store (Alice's identity, ID(A), and password, pwa) and (Bob's identity, ID(B), and password, pwb) respectively. K is a symmetric key shared between KDCA and KDCB.

1. Alice chooses x ∈R Z*_p randomly, computes and sends Epwa(g^x), ID(A) and ID(B) to KDCA.

2. KDCA obtains g^x by decrypting Epwa(g^x). KDCA selects r ∈R Z*_p randomly and makes TicketB = EK(g^(x·r), g^r, ID(A), ID(B), L), where L is the lifetime of TicketB. Then KDCA sends TicketB, ID(A), ID(B) and L to Alice.

3. Upon receiving the message from KDCA, Alice forwards TicketB to Bob with ID(A).

4. Bob chooses y ∈R Z*_p randomly and computes Epwb(g^y). Then he sends Epwb(g^y), ID(A) and ID(B) to KDCB with TicketB.

5. KDCB obtains g^(x·r) and g^r by decrypting TicketB, selects r' ∈R Z*_p randomly and computes g^(x·r·r') and g^(r·r'). Next KDCB sends g^(x·r·r') and g^(r·r') to Bob.

6. Bob makes cs = H1(g^(x·y·r·r')) using g^(x·r·r') and y. Then Bob chooses a random number a ∈R Z*_p and computes Ecs(g^a) and g^(r·r'·y). Finally, Bob sends Ecs(g^a) and g^(r·r'·y) to Alice.

7. Alice can also compute cs using g^(r·r'·y) and x. Next, Alice selects b ∈R Z*_p randomly and computes the session key sk = H2(g^(ab)) and Ecs(g^b). Finally she sends Esk(g^a) and Ecs(g^b) for session key confirmation.

8. After receiving Esk(g^a) and Ecs(g^b), Bob gets g^b by decrypting Ecs(g^b) with cs and computes sk with g^b and a. Bob verifies g^a by decrypting Esk(g^a) with sk. Bob sends Esk(g^b) to Alice to confirm the session key.

9. Alice verifies g^b by decrypting Esk(g^b) with sk.

2.2 Cryptanalysis of Modified C2C-PAKE Protocol
Let an outside attacker having knowledge of the whole cross-realm architecture come in between client A and KDCA. In the first message transfer from A to KDCA, he may snoop the message and modify IDB to some ID, say IDC, which is a legitimate client in the other realm. Upon receiving the message from A, KDCA makes TicketC = EK(g^(x·r), g^r, IDA, IDC, L), which is not the intended ticket for client B. When KDCA sends the message to A in step 2, the attacker can again change IDC to IDB. Hence, A does not know what happened in between, as she cannot decrypt the ticket.

In the same way the attacker may snoop the message in step 4 and modify IDB to IDC. After decrypting the ticket, KDCB assumes that client A wants to communicate with client C, as he receives the same IDC from both the message and the ticket. He may discard Epwb(g^y) as a redundant message, because he cannot discard TicketC, as it is issued by KDCA. In step 5 KDCB sends the message to client C, and the rest of the steps are executed normally. In this way, client A establishes the intended session key with client C, not with the actual client B. In the modified C2C-PAKE protocol there is no verifiable message through which both A and B can verify their identities after step 4. In this way, the attacker may trap client C and communicate with client A pretending to be client B. This causes an identity mis-binding attack.

In step 2, the attacker may modify the lifetime L of the ticket to a higher value than the value specified by KDCA, while KDCB obtains the actual value of L. As client A has a high L value, she may request KDCB for service within the enhanced time period, but KDCB is unable to provide the service after the actual lifetime L. This causes a Denial of Service (DoS) attack.

3. Proposed Protocol
In our proposed protocol every host is registered with the TTP with a different password. The hosts and the TTP agree upon a family of commutative one-way hash functions which is used for host authentication. One-way hashes of the passwords are stored instead of the plaintext versions of the passwords. A one-way function is a function F such that for each x in the domain of F, it is easy to compute y = F(x), but given F(x) it is computationally infeasible to find any x.

3.1 Notations
The following notations are used in this paper.

Alice, Bob: honest hosts
TTP: Trusted Third Party
IDA, IDB: identities of Alice and Bob
pwa, pwb: passwords of Alice and Bob
EK(X): encryption of plaintext X using key K
DK(X): decryption of ciphertext X using key K
SK: session key between A and B
H(pwa): one-way hash of the password of A
g: generator of the cyclic group
p, q: large prime numbers
A → B : M: A sends message "M" to B
TicketB: Kerberos ticket issued to A for service from B
sgnA( . ): signature generated using the private key of A
K: shared secret key between TTP1 and TTP2

3.2 Proposed Protocol
Here we describe the steps involved in the protocol in detail. g, p and q are global parameters shared by the protocol participants.

3.2.1 Single-Server Setting
Alice and Bob choose passwords pwa and pwb respectively and then transfer them to the TTP through a secure channel. The TTP stores (IDA, H(pwa)) and (IDB, H(pwb)) in its database. The following shows the steps:

i. Alice randomly chooses a number r and computes g^r (mod p). The computed value is encrypted using H(pwa) along with the IDs of the participating hosts. Alice then sends the calculated values to the server.
A → T : H(pwa)[IDA, IDB, g^r (mod p)]

ii. Server T then decrypts the received packet using the H(pwa) of host A stored in its database to recover g^r (mod p). The server then randomly chooses a number t, computes g^(r·t) (mod p) and encrypts the computed value, concatenated with the IDs of the two communicating entities, using the H(pwb) stored in its database. The computed value is then sent to Bob by the server.
T → B : H(pwb)[IDA, IDB, g^(r·t) (mod p)]

iii. After receiving the packet, Bob decrypts it using H(pwb) to get g^(r·t) (mod p). He chooses a random number s and computes an ephemeral key Ke = g^(r·t·s) (mod p). Then he computes g^s (mod p), concatenates it with IDA, encrypts the resulting value using H(pwb) and sends it to the server T.
B → T : H(pwb)[IDB, g^s (mod p)]

iv. After receiving the packet, the server decrypts it using H(pwb) to recover g^s (mod p). Then it computes g^(s·t) (mod p), encrypts it using H(pwa) and sends it to Alice.
T → A : H(pwa)[g^(s·t) (mod p)]

v. Alice decrypts the received packet using H(pwa) to get g^(s·t) (mod p). Then she calculates the ephemeral key Ke = g^(s·t·r) (mod p). Alice chooses a random number a, computes RA = g^a (mod p), encrypts RA using the ephemeral key Ke and sends it to Bob along with her own ID.
A → B : IDA, EKe(RA)

vi. Bob recovers RA by decrypting the received packet using the ephemeral key Ke. He randomly chooses a number b as his private key, computes RB = g^b (mod p) and encrypts RB using the ephemeral key Ke. Then he calculates the session key SK = (RA)^b (mod p) = g^(ab) (mod p). Bob concatenates RA, RB and Alice's ID and creates a signature of the result using his private key b, to be verified by Alice. The signature is encrypted using the session key SK and sent to Alice along with the encrypted value of RB.
B → A : EKe(RB), ESK(sgnB(IDA, RA, RB))

vii. Alice finds RB after decrypting with Ke. She computes the session key SK = (RB)^a (mod p) = g^(ba) (mod p). After calculating the session key, Alice verifies Bob's signature. If the signature is verified, Alice concatenates Bob's ID, RA and RB. Then she signs the result with her own private key, encrypts the signature with the session key SK and sends it to Bob.
A → B : ESK(sgnA(IDB, RA, RB))

viii. If Bob verifies Alice's signature correctly, then he is assured that the same session key SK is held by both of them.

3.2.2 Dual-Server Setting
Alice and Bob choose their distinct passwords pwa and pwb and register themselves with TTP1 and TTP2 respectively, using their passwords, through a secure channel. TTP1 and TTP2 store (IDA, H(pwa)) and (IDB, H(pwb)) respectively in their own databases. We assume that TTP1 and TTP2 share a secret key K. The steps involved are described below:

i. A randomly chooses r and computes g^r (mod p). Then she encrypts the computed value, concatenated with the IDs of the participating hosts, using H(pwa) and sends the calculated value to TTP1.
A → TTP1 : H(pwa)[IDA, IDB, g^r (mod p)]

ii. TTP1 obtains g^r (mod p) after decrypting the received packet with the H(pwa) of host A stored in its database. TTP1 then randomly chooses a number t and computes g^(r·t) (mod p) and g^t (mod p). It prepares TicketB = EK(g^(r·t) (mod p), g^t (mod p), IDA, IDB, L). It concatenates the IDs of the hosts with L and encrypts the resulting value with H(pwa). TTP1 sends TicketB to A along with the encrypted values of IDA, IDB and L. L is the lifetime of TicketB.
TTP1 → A : H(pwa)[IDA, IDB, L], TicketB
TicketB = EK[g^(r·t) (mod p), g^t (mod p), IDA, IDB, L]

iii. After receiving the message, A forwards IDA and TicketB to TTP2.
A → TTP2 : IDA, TicketB

iv. Upon receiving the message, TTP2 decrypts TicketB using the shared key K to recover g^(r·t) (mod p) and g^t (mod p). It selects a random number t' and computes g^(r·t·t') (mod p) and g^(t·t') (mod p). Next, TTP2 concatenates the computed values with the IDs of both hosts, encrypts the resulting value using H(pwb) and sends it to B.
TTP2 → B : H(pwb)[IDA, IDB, g^(r·t·t') (mod p), g^(t·t') (mod p)]
v. B obtains g^(r·t·t') (mod p) and g^(t·t') (mod p) after decrypting the message. He chooses a random s and finds the ephemeral key Ke = g^(r·t·t'·s) (mod p). He also randomly selects his private key b and calculates his public key RB = g^b (mod p). Next, RB concatenated with the IDs of the hosts is encrypted using the ephemeral key Ke. Then he sends this encrypted value and g^(t·t'·s) (mod p) to A.
B → A : g^(t·t'·s) (mod p), EKe(IDA, IDB, RB)

vi. After receiving the message, A finds the ephemeral key Ke using r and g^(t·t'·s) (mod p). She also recovers RB from the message and chooses her private and corresponding public keys as a and RA respectively, where RA = g^a (mod p). She computes the session key SK = (RB)^a (mod p) = g^(ba) (mod p). A concatenates RA, RB and IDB and creates a signature of the result using her private key a. The signature is encrypted using the session key SK and sent to B along with the encrypted value of RA.
A → B : EKe(RA), ESK(sgnA(IDB, RA, RB))

vii. Upon receiving the message, B obtains RA by decrypting the message, computes the intended session key SK = (RA)^b (mod p) = g^(ab) (mod p), and verifies A's signature. If the signature is verified, B creates a signature in exactly the same way as done by A and encrypts it using the session key SK. Then he sends the encrypted signature to A.
B → A : ESK(sgnB(IDA, RA, RB))

viii. Finally, A verifies the signature and, if it is verified, is assured that both hosts hold the same session key SK.

4. Security Analysis of Proposed Protocol
In this section, the security of the proposed protocols is analysed. Our proposed protocols are secure against the types of attacks considered in [13, 14], including the identity mis-binding attack and the Denial of Service attack.

4.1 Identity Mis-binding Attack
Unlike the modified C2C-PAKE protocol, the IDs of the communicating entities are encrypted using the one-way hash value of the passwords in the proposed protocol. So the adversary cannot change any of the IDs of the hosts. As a result, the proposed protocols are secure against the identity mis-binding attack.

4.2 Denial of Service Attack
In the dual-server setting of the proposed protocol, the lifetime L of the ticket issued by TTP1 is encrypted using the key K in TicketB as well as using H(pwa) for transmission to entity A. Hence, the value of L remains the same for A and TTP2 as specified by TTP1. As a result, the proposed protocol averts the Denial of Service attack.

4.3 Perfect Forward Secrecy
An adversary with pwa (or pwb) can easily compute g^r by decrypting H(pwa)[g^r]. But these values do not help to compute Ke or SK in old sessions, because the session key generation is based on the Diffie-Hellman problem. Therefore the proposed protocol provides perfect forward secrecy.

4.4 Denning-Sacco Attack
We now show that our protocol is secure against the Denning-Sacco attack. As for the original C2C-PAKE protocol, we classify an adversary into two types: an insider adversary and an outsider adversary.

4.4.1 In case of Outsider Adversary
An outsider adversary with the session keys Ke and SK can compute g^a, g^b and all conversations in the protocol. But he cannot verify a candidate password pwa' (or pwb') of pwa (or pwb), since he cannot get r (or s), which is a random secret value of A (or B).

4.4.2 In case of Insider Adversary with pwa
We show that an adversary cannot mount a dictionary attack on pwb. To verify a candidate password pwb' of pwb, he must get g^s. Since the value of s is a random number of B, he cannot compute a valid g^s.

4.4.3 In case of Insider Adversary with pwb
Similar to the case of the insider adversary with pwa, he must get g^r to verify a candidate password pwa' of pwa. Since the value of r is a random number of A, he cannot compute a valid g^r.

4.5 Dictionary Attack
In case of compromise of pwa or pwb, an adversary can mount a dictionary attack if he gets g^r or g^s. However, he cannot mount a dictionary attack, as analyzed for the Denning-Sacco attack.

4.6 On-line Guessing Attack, Man-in-the-Middle Attack and Replay Attack
The analysis is the same as for the original C2C-PAKE protocol with regard to the on-line guessing attack, the man-in-the-middle attack and the replay attack.

4.7 Chen's Attack
Regarding Chen's attack, there is no verifiable ciphertext based on a password in TicketB, so the protocol is secure against the dictionary attack by a malicious TTP2.

5. Conclusion
From the security analysis, we conclude that the proposed protocol meets all the security requirements defined in [13]. Furthermore, the protocol is secure against a dictionary attack from a malicious server in a different realm.

References
[1] Menezes A., Oorschot P. van and Vanstone S., Handbook of Applied Cryptography, CRC Press, 1996.
[2] Schneier Bruce, Applied Cryptography: Protocols and Algorithms, John Wiley and Sons, 1994.
[3] Stallings Williams, Cryptography and Network Security, 3rd Edition, Pearson Education, 2004.
[4] B. A. Forouzan, Cryptography and Network Security, Tata McGraw Hill, Special Indian Edition, 2007.
[5] W. Diffie and M. Hellman, "New Directions in Cryptography," IEEE Transactions on Information Theory IT-11, pp. 644-654, November 1976.
[6] Y. Her-Tyan and S. Hung-Min, "Simple Authenticated Key Agreement Protocol Resistant to Password Guessing Attacks," ACM SIGOPS Operating Systems Review, vol. 36, no. 4, pp. 14-22, October 2002.
[7] M. Steiner, G. Tsudik, and M. Waidner, "Refinement and Extension of Encrypted Key Exchange," ACM Operating System Review, vol. 29, no. 3, pp. 22-30, 1995.
[8] Y. Ding and P. Horster, "Undetectable On-Line Password Guessing Attacks," ACM Operating System Review, vol. 29, no. 4, pp. 77-86, October 1995.
[9] C. L. Lin, H. M. Sun, and Hwang, "Three-Party Encrypted Key Exchange: Attacks and a Solution," ACM Operating System Review, vol. 34, no. 4, pp. 12-20, October 2000.
[10] M. Bellare, D. Pointcheval and P. Rogaway, "Authenticated key exchange secure against dictionary attacks," in Eurocrypt 2000, LNCS 1807, pp. 139-155, Springer-Verlag, 2000.
[11] E. Bresson, O. Chevassut and D. Pointcheval, "New security results on encrypted key exchange," in PKC 2004, LNCS 2947, pp. 145-158, Springer-Verlag, Mar. 2004.
[12] V. Boyko, P. MacKenzie and S. Patel, "Provably secure password-authenticated key exchange using Diffie-Hellman," in Eurocrypt 2000, LNCS 1807, pp. 156-171, Springer-Verlag, May 2000.
[13] J. W. Byun, I. R. Jeong, D. H. Lee and C. S. Park, "Password-authenticated key exchange between clients with different passwords," in ICICS '02, LNCS 2513, pp. 134-146, Springer-Verlag, Dec. 2002.
[14] L. Chen, "A Weakness of the Password-Authenticated Key Agreement between Clients with Different Passwords Scheme," document circulated for consideration at the 27th SC27/WG2 meeting, Paris, France, 2003-10-20/24, 2003.
[15] J. Kim, S. Kim, J. Kwak and D. Won, "Cryptanalysis and improvement of password authenticated key exchange scheme between clients with different passwords," in ICCSA '04, LNCS 3043, pp. 895-902, Springer-Verlag, May 2004.
[16] D. Denning, G. Sacco, "Timestamps in key distribution protocols," Communications of the ACM, Vol. 24, No. 8, pp. 533-536, 1981.
[17] O. Goldreich and Y. Lindell, "Session-key generation using human memorable passwords only," in Crypto 2001, LNCS 2139, pp. 408-432, Springer-Verlag, Aug. 2001.
[18] J. Katz, R. Ostrovsky and M. Yung, "Efficient password-authenticated key exchange using human-memorable passwords," in Eurocrypt 2001, LNCS 2045, pp. 475-494, Springer-Verlag, May 2001.
[19] S. Jiang and G. Gong, "Password-based Key Exchange with Mutual Authentication," in SAC 2004, LNCS 3006, pp. 291-306, Springer-Verlag, 2004.
[20] S. Kulkarni, D. Jena, and S. K. Jena, "A novel secure key agreement protocol using trusted third party," International Journal of Computer Science and Security, Volume (1): Issue (1), pp. 11-18, 2007.

Authors Profile

Pritiranjan Bijayasingh received the B.E. degree in Computer Science and Engineering from Balasore College of Engineering and Technology in 2005. He joined Balasore College of Engineering and Technology as a Lecturer on 26.08.2005. He is now pursuing his M.Tech degree at International Institute of Information Technology, Bhubaneswar, Orissa, India. His research area of interest is Information Security.

Debasish Jena was born on 18th December, 1968. He received his B.Tech degree in Computer Science and Engineering, his Management degree and his M.Tech degree in 1991, 1997 and 2002 respectively. He joined the Centre for IT Education as Assistant Professor on 01.02.2006. He submitted his thesis for the Ph.D. at NIT, Rourkela on 5th April 2010. In addition to his responsibilities, he was also IT Consultant to the Health Society, Govt. of Orissa, for a period of 2 years from 2004 to 2006. His research areas of interest are Information Security, Web Engineering, Bio-Informatics and Database Engineering.
A Specific Replication Strategy of Middleware Resources in Dense MANETs

Anil Kumar (1), Parveen Gupta (2), Pankaj Verma (3) and Vijay Lamba (4)

(1) C.S.E. Dept., H.C.T.M., Kaithal (INDIA), akj_jakhar@yahoo.com
(2) C.S.E. Dept., APIIT, Panipat (INDIA), pk223475@yahoo.com
(3) C.S.E. Dept., H.C.T.M., Kaithal (INDIA), Justpankajverma@gmail.com
(4) E.C.E. Dept., H.C.T.M., Kaithal (INDIA)
Abstract: Many dynamic services are applicable to ad hoc networks in which portable devices are scattered over a limited spatial region, such as a shopping mall or an airport, and all mobile peers cooperate autonomously without a fixed deployed network infrastructure. In this paper a middleware Replication Strategy is proposed to manage, retrieve, and disseminate replicas of data/service components among cooperating nodes in a dense MANET. Guidelines are given to enable optimistic lightweight resource replication capable of tolerating node exits/failures. The Replication Strategy adopts original approximated solutions, specifically designed for dense MANETs, that achieve good scalability and limited overhead for dense MANET configuration (node identification and manager election), for replica distribution/retrieval, and for lazily-consistent replica degree maintenance.

Keywords: Replication, MANET, MANET Identification, Explored Node.

1. Introduction

There are only a few proposals that face the challenging issue of resource replication in mobile environments with the goal of increasing accessibility and effectiveness [1]. In particular, this paper proposes a middleware solution, called Replication Strategy in Dense MANETs, that transparently disseminates, retrieves, and manages replicas of common-interest resources among cooperating nodes in dense MANETs [2]. The Replication Strategy has the main goal of improving the availability of data and service components, and of maintaining a desired replication degree for needed resources, independently of possible (and unpredictable) exits of replica-hosting nodes from the dense MANET. The paper presents how the Replication Strategy addresses the primary challenging issues of dense MANETs, i.e., the determination of nodes belonging to the dense region, the sensing of nodes entering/exiting the dense MANET, and the dynamic election of replica managers responsible for orchestrating resource replication and maintenance. The paper then explains the Replication Strategy protocols for dense MANET identification and manager election; extensive simulation results about the Replication Strategy prototype, whose performance shows the effectiveness and the limited overhead of the proposed solution and confirms the suitability of the application-level middleware approach even in conditions of high node mobility, notes about potential security issues, related work, and conclusions end the article.

2. REPLICATION STRATEGY SERVICES ON MANET

The Replication Strategy provides replica distribution, replica retrieval, and replica degree maintenance facilities in a managed format, as follows.

2.1 Replica Distribution
The RD facility operates to transparently distribute resource replicas in the dense MANET [3] [4]. When a delegate enters a dense region, it communicates the descriptions of its shared resources, represented according to the Resource Description Framework (RDF) [5], to the replica manager, which autonomously decides the values of some parameters that influence the resource replication process.

2.2 Replica Retrieval
The RR facility has the goal of retrieving resource replicas at provision time on the basis of the resource RDF-based descriptions [6], i.e., of dynamically determining the IP address of one suitable node hosting the requested resource and the unique name of the replica on that node. Resource retrieval is a hard task in MANETs, where a static infrastructure is not continuously available, thus preventing the usage of a fixed centralized lookup service known by all participants [8]. The usage of a single centralized repository with replica placement information is not viable [9]: i) a large number of requests could overwhelm the single point of centralization; ii) the repository would represent a single point of failure; and iii) the repository would have to be updated with strict consistency requirements not to hinder resource accessibility.

2.3 Replica Degree Maintenance
Proactive RDM solutions available in the literature, such as [7], do not fit the provisioning environments addressed by the Replication
Strategy. In fact, proactive RDM approaches usually require GPS-equipped wireless nodes that continuously monitor their mutual positions to foresee network exits, thus producing non-negligible network/computing overhead. The Replication Strategy RDM, instead, implements a reactive solution with very low communication overhead by relaxing the constraint of anytime-perfect consistency in the number of available replicas.

3. Replication Strategy on Dense MANET

The Replication Strategy determines which nodes belong to the dense MANET and which ones among them have to play the role of replica managers.

3.3 Identification of Dense MANET
The Replication Strategy adopts a simple protocol where any node autonomously determines whether it belongs to the dense MANET or not. A node is in the dense MANET DM(n) only if the number of its neighbors, i.e., of the nodes at single-hop distance, is greater than n. Each node autonomously discovers the number of its neighbors by exploiting simple single-hop broadcast discovery messages. By delving into finer details, at any time one Replication Strategy node can start the process of dense MANET identification/update; in the following, we call that node the initiator. The initiator starts the original Replication Strategy protocol for dense MANET identification by broadcasting a discovery message that includes the number of neighbors (NoN) required to belong to the dense region and the identity of the sender. That number can be decided autonomously depending on the desired degree of connectivity redundancy: typically, a value between 10 and 20 ensures a sufficiently large set of alternative connectivity links. When receiving the discovery message, each node willing to participate replies, forwards the message to its single-hop neighbors if it has not already sent that message, and updates a local list with the IP addresses of detected neighbors.

3.4 Replica Manager Election
The Replication Strategy middleware works to assign the manager role to a node located in a topologically central position. The protocol explores manager candidates within a subset of nodes of the dense MANET, called Explored Nodes (ENs). Figure 1 shows a practical example of application of that guideline. The first step of the protocol considers node H: its farthest node is I, located at 4-hop distance, so H is tagged with the value of that distance (H4 in the figure). Then the Replication Strategy manager election protocol considers A, because it is the first node along the path from H to I: A's farthest node is I, at 3-hop distance (A3). At the next iteration, the protocol explores node D, which is chosen as replica manager according to the termination criteria described in the remainder of this section. Node D can reach any other node in the depicted dense MANET with a maximum path of two hops. Let us observe that the Replication Strategy also provides a simple way to react to manager exits from the dense MANET. If the manager realizes it is going to exit, e.g., because its battery power is lower than a specified threshold, it delegates its role to the first neighbor node found with suitable battery charge and local memory. In case the manager abruptly fails, any resource delegate that senses the manager unavailability can trigger a new election procedure.

Figure 1. Replication Strategy exploring the sequence of ENs from H to A and A to D.

After having informally introduced the main guidelines of the protocol, let us now precisely specify how the manager election works. Each EN executes three operations: i) it determines the number of hops of the shortest paths connecting it to any farthest node in the dense MANET (the maximum of those hop numbers is called EN_value); ii) it identifies its neighbors located in the direction of its farthest nodes (forwarding_neighbors); and iii) it autonomously chooses the next EN among all the unexplored forwarding neighbors of already explored ENs with the lowest associated values. To take possible device heterogeneity into account, the Replication Strategy promotes the exploration only of ENs suitable to play the role of replica manager once elected. For instance, if a potential EN device has insufficient memory or too low a battery level (compared with configurable Replication Strategy thresholds), it is excluded from the manager election protocol. The protocol ends when either the Replication Strategy heuristic criterion determines that there are no more promising nodes, or the current EN_value = Min_Int((worst explored EN_value)/2), where Min_Int(x) returns the least integer greater than x. Since the Replication Strategy considers bi-directional links among MANET nodes, when the above equation is verified it is easy to demonstrate that the Replication Strategy has reached the optimal solution for the manager election. In particular, the Replication Strategy combines two different strategies against manager assignment degradation, one proactive and one reactive, which operate, respectively, over large and medium time periods (Tp and Tr, with Tp >> Tr). The proactive maintenance strategy establishes that the current manager always triggers a new manager election after Tp seconds. In addition to probabilistically improving the centrality of the manager position, the periodic re-execution of the election process contributes to distributing the burden of the role among different nodes, thus avoiding depleting the energy resources of a single participant. Moreover, let us rapidly observe that only the nodes located in the proximity of the dense MANET topology center have a high probability of assuming the manager role. Therefore, a target_EN_value can easily be determined, equal to the EN_value of the current manager, thus speeding up manager election and reducing the protocol overhead.
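The quantity driving this exploration, a node's EN_value, is its eccentricity in the connectivity graph: the hop distance to its farthest reachable node (H4, A3, D2 in the Figure 1 example). A minimal sketch, using a hypothetical 4-hop chain in place of the figure's topology:

```python
from collections import deque

def eccentricity(adj, node):
    """EN_value of `node`: hop distance to its farthest reachable node,
    computed with a breadth-first search over the connectivity graph."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

# Hypothetical chain H - A - D - E - I standing in for Figure 1's MANET.
adj = {
    "H": ["A"],
    "A": ["H", "D"],
    "D": ["A", "E"],
    "E": ["D", "I"],
    "I": ["E"],
}
# The greedy walk H -> A -> D moves toward decreasing eccentricity.
walk = [(n, eccentricity(adj, n)) for n in ("H", "A", "D")]
```

On this chain the tagged values decrease along the walk (H4, A3, D2), matching the figure's sequence, and D, with the minimum eccentricity of 2, would be elected manager.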
In addition to the above proactive degradation counteraction, the Replication Strategy exploits a reactive strategy that consists in repeating the farthest-node determination at regular Tr periods, with the goal of understanding the current manager's distance from the optimal placement. If the distance of the manager's farthest nodes, i.e., its new_EN_value, has increased compared with the distance estimated at the moment of its election, i.e., its EN_value, the Replication Strategy concludes that the manager has moved away from the topological center in a significant way; in that case, the manager itself triggers a new election process. The following subsections detail the adopted heuristics to limit the number of explored ENs and the exploited solution to determine, given a node, its farthest nodes in the dense MANET.

The Replication Strategy provides two tuning parameters that enable dense MANET administrators to trade between the quality of the manager election protocol and its performance. The first parameter, Desired_Accuracy, permits the initiator to tune the approximation considered acceptable for the election solution (see Figure 2). The second parameter, Max_Consecutive_Equal_Solutions, is introduced by observing that, when the Replication Strategy election protocol approaches the optimal solution, it often explores other candidate nodes without improving the current best EN_value. For each explored solution equal to the current best, the Replication Strategy increments a counter; the counter resets when the Replication Strategy finds a new solution outperforming the old best. The adopted heuristic stops the iterations when the counter reaches Max_Consecutive_Equal_Solutions. Figure 2 shows the pseudo-code of the Replication Strategy election protocol.

explored_List = ø; forwarder_List = ø;
best_Node = ø; best_Value = MAX; worst_Value = 0;
unexplored_List = {Initiator};
while (unexplored_List != ø)
{
    EN = Head(unexplored_List);
    EN_Value = Distance_From_Farthest(EN);
    explored_List = explored_List U {EN};
    unexplored_List = unexplored_List - {EN};
    forwarder_List = Get_Promising_Neighbors(EN);
    forwarder_List = forwarder_List - explored_List;
    if ((EN_Value == Min_Int(worst_Value/2)) ||
        (EN_Value <= worst_Value * Desired_Accuracy)) exit;
    if (EN_Value < best_Value)
    {
        best_Node = EN; best_Value = EN_Value;
        consecutive_Equal_Solutions = 0;
        unexplored_List = forwarder_List;
    }
    if (EN_Value > worst_Value)
    {
        worst_Value = EN_Value;
        if ((best_Value == Min_Int(worst_Value/2)) ||
            (best_Value <= worst_Value * Desired_Accuracy)) exit;
    }
    if (EN_Value == best_Value)
    {
        consecutive_Equal_Solutions++;
        if (consecutive_Equal_Solutions ==
            Max_Consecutive_Equal_Solutions) exit;
        unexplored_List = unexplored_List U forwarder_List;
    }
}
Print(best_Node)

Figure 2. Pseudo-code of the Replication Strategy manager election protocol.

The elected manager also performs replica distribution, replica retrieval, and replica degree maintenance.

4. Experimental Results

To evaluate the approach, we have implemented the Replication Strategy protocols for dense MANET identification, manager election, and resource retrieval in the NS2 simulator. The simulations have three main goals, which are explained below.

4.1 Network Overhead
First, we have carefully evaluated the network overhead of the Replication Strategy protocols for dense MANET identification and manager election, to verify that the DMC proposals are lightweight enough for the addressed deployment scenario. We have tested the two protocols in different simulation environments, with a number of nodes ranging from 100 to 1100 (increasing by 100 nodes at each step). For both protocols we have measured the average number of messages sent by each participant, over a set of more than 1,000 simulations. The results reported in Figure 3 are normalized to the number of nodes actively participating in the protocol. The dense MANET identification protocol is designed to determine participant nodes by requiring only one local broadcast from each node reachable from the initiator. Given the dependence of manager election network overhead on the number of iterations, we have carefully investigated the convergence of the Replication Strategy protocol while varying the number of dense MANET nodes (see Figure 4). The results are average values over a set of simulations where the role of initiator is assigned to a different node at each simulation. The experimental results about the number of iterations have demonstrated an almost linear dependence on the dense MANET diameter and have shown to be negligibly affected by the choice of the initiator node. Let us briefly observe that the non-monotonic growth of the overhead trace in Figure 3 is due to different factors. First, the dense MANET diameter only increases in correspondence with some threshold values of the number of participants. Regarding the latency imposed by the Replication Strategy manager election protocol, let us briefly observe that it mainly depends on three factors: one, obviously, is the number of dense MANET participants; the others are two configurable Replication Strategy timers that establish, respectively, the message delay for preventing broadcast storms in farthest-node determination and the time interval waited by the current EN before delegating its role. All the presented results have been obtained by setting the former to 2.5 s and the latter to 5 s per hop of the diameter. These
values guarantee that the current EN passes node exploration responsibility only after having received most replies from farthest nodes, thus achieving its correct EN_value.

Figure 3. Number of iterations needed for the Replication Strategy manager election protocol.

Figure 4. Messages sent/received per node in dense MANET identification and manager election.

4.2 Manager Election Inaccuracy
We have measured the protocol's accuracy in assigning the manager role to a node close to the actual topology center of the dense MANET. We have run over 200 simulations in the most populated scenario of 1100 nodes and, for each simulation, we have measured the election inaccuracy, defined as the hop distance between the manager chosen by the Replication Strategy protocol and the actual optimal solution. The results in Figure 5 are obtained by starting each election from a different initiator node. In more than 90% of the runs, the Replication Strategy protocol has identified either optimal solutions or quasi-optimal solutions at 1-hop distance from the actual optimum. The average inaccuracy is only 0.385 hops, which represents a largely acceptable value for the addressed application scenario.

Figure 5. Inaccuracy of the Replication Strategy manager election protocol.

4.3 Impact of Node Mobility on the Accuracy of the Replication Strategy Dense MANET Identification Protocol
To test the robustness of the Replication Strategy solutions, we have evaluated the accuracy of the dense MANET identification protocol by varying the mobility characteristics of the network nodes. Let us rapidly note that the manager election protocol executes for very limited time periods and re-starts its execution only after a long time interval; to a certain extent, it can be assumed to operate under static conditions. On the contrary, the dense MANET identification protocol should work continuously to maintain an almost consistent and updated view of the dense MANET participants, which is crucial for the effective working of the Replication Strategy solutions. For this reason, this sub-section focuses on the behavior of our dense MANET identification protocol as a function of node mobility. A pair of random movements of randomly chosen nodes occurs every M seconds, with M varying from 10 to 60. Any other node movement not producing an arrival/departure into/out of the dense MANET does not affect the behavior of the Replication Strategy identification solution at all. Figure 6 reports the dense MANET identification inaccuracy, defined as the difference between the number of dense MANET participants determined by the Replication Strategy protocol and its actual value. The inaccuracy is reported as a function of the mobility period M and for different values of the time period used for Hello packets. Each point in the figure represents an average value obtained by capturing the state of the network over 30 different runs. The figure shows that the average inaccuracy is very limited and always within a range that is definitely acceptable for lazily consistent resource replication in dense MANETs. As expected, the inaccuracy grows when node mobility grows, for fixed values of the Hello message period.

Figure 6. Accuracy of the Replication Strategy dense MANET identification protocol as a function of node mobility.
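The two quantities evaluated above can be made concrete with a minimal sketch: the DM(n) membership rule of Section 3.3 and the identification inaccuracy metric just defined. The neighbor counts and threshold below are hypothetical:

```python
def in_dense_manet(neighbor_count, n):
    """DM(n) membership rule (Section 3.3): a node belongs to the dense
    MANET only if its number of single-hop neighbors is greater than n."""
    return neighbor_count > n

def identification_inaccuracy(determined, actual):
    """Section 4.3 metric: difference between the number of participants
    determined by the identification protocol and the actual number."""
    return abs(determined - actual)

# Hypothetical snapshot of per-node neighbor counts, with threshold n = 10.
neighbor_counts = {"a": 14, "b": 9, "c": 12, "d": 11, "e": 7}
members = [node for node, count in neighbor_counts.items()
           if in_dense_manet(count, 10)]
# If node mobility made the protocol's view miss one actual member,
# the resulting identification inaccuracy is 1.
error = identification_inaccuracy(len(members) - 1, len(members))
```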
However, even for relatively high values of the Hello period, the Replication Strategy identification inaccuracy is negligible for the addressed application scenario (on average, always less than 1.7), for the whole range of node mobility that can be of interest for dense MANETs. Let us observe that this permits setting relatively long periods for Hello packets while still obtaining a low inaccuracy for the dense MANET identification, thus significantly reducing the message exchange overhead.

4.4 Replica Retrieval Message Overhead
We have also performed a wide set of simulations to evaluate the message overhead of the Replication Strategy RR strategy, i.e., the overall number of search messages exchanged within the dense MANET, on average, when a requester looks for the IP address of a node hosting a replica of the needed resource. To this purpose, we compare the Replication Strategy RR results on message traffic with the network overhead that would be generated by a trivial retrieval strategy based on flooding query messages to all nodes belonging to the dense MANET (in the following we call that strategy Query Flooding, QF). Figure 7 reports the Replication Strategy RR plot as a function of the rep_Hops parameter (the configurable number of IRPs disseminated in the dense MANET) in the most populated scenario of 1100 nodes. The figure shows the average number of search messages necessary to find the first IRP, normalized to the number of dense MANET participants. As expected, the number of search messages per node decreases as the number of disseminated IRPs grows. Most important, even for a very limited number of IRPs, the Replication Strategy expanding-ring RR strategy largely outperforms QF, by providing a suitable trade-off between RR complexity, provision-time RR traffic, and IRP distribution overhead.

Figure 7. Message overhead for resource retrieval depending on the number of disseminated IRPs.

5. Conclusion
Dense MANETs can benefit from the assumption of a high node population to enable lazily consistent forms of replication for resources of common interest. This can significantly increase resource availability notwithstanding unpredictable node movements into and out of dense MANETs. The Specific Replication Strategy project middleware demonstrates how it is possible to achieve lightweight and effective replica management. The performance results obtained are encouraging further research activities related to the Specific Replication Strategy. First, we are evaluating the impact of the proposed protocols on the performance of Replication Strategy-supported applications. In particular, we are carefully evaluating in which deployment conditions (dense MANET diameter and number of participants, node mobility, resource distribution/request ratio, average replica size, …) the traffic reduction due to the central position of the replica manager offsets the overhead incurred in its election. Finally, as already stated in previous parts of the paper, we are working on more sophisticated replica retrieval solutions, which also take into account simple security mechanisms for authorized resource access and incentive-based promotion of node collaboration.

References
[1] T. Qing, D.C. Cox, “Optimal Replication Algorithms for Hierarchical Mobility Management in PCS Networks”, IEEE Wireless Communications and Networking Conf. (WCNC), 2002.
[2] P. Bellavista, A. Corradi, E. Magistretti, “Lightweight Replication Middleware for Data and Service Components in Dense MANETs”, 1st IEEE Int. Symp. on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), June 2005.
[3] M.J. Freedman, E. Freudenthal, D. Mazières, “Democratizing Content Publication with Coral”, 1st USENIX/ACM Symp. on Networked Systems Design and Implementation, Mar. 2004.
[4] C. Bettstetter, G. Resta, P. Santi, “The Node Distribution of the Random Waypoint Mobility Model for Wireless Ad Hoc Networks”, IEEE Transactions on Mobile Computing, Jul.-Sept. 2003.
[5] S. Decker, P. Mitra, S. Melnik, “Framework for the Semantic Web: an RDF Tutorial”, IEEE Internet Computing, Vol. 4, No. 6, Nov.-Dec. 2000.
[6] I. Aydin, C.-C. Shen, “Facilitating Match-Making Service in Ad Hoc and Sensor Networks using Pseudo Quorum”, 11th IEEE Int. Conf. on Computer Communications and Networks, Oct. 2002.
[7] M. Boulkenafed, V. Issarny, “A Middleware Service for Mobile Ad Hoc Data Sharing, Enhancing Data Availability”, 4th ACM/IFIP/USENIX Middleware, June 2003.
[8] K. Chen, K. Nahrstedt, “An Integrated Data Lookup and Replication Scheme in Mobile Ad Hoc Networks”, SPIE Int. Symp. on the Convergence of Information Technologies and Communications (ITCom 2001), Aug. 2001.
[9] M. Tamori, S. Ishihara, T. Watanabe, T. Mizuno, “A Replica Distribution Method with Consideration of the Positions of Mobile Hosts on Wireless Ad-hoc Networks”, 22nd Int. Conf. on Distributed Computing Systems Workshops, July 2002.
Reliable Soft-Checkpoint Based Fault Tolerance
Approach for Mobile Distributed Systems

Surender Kumar1, R.K. Chauhan2 and Parveen Kumar3

1 Haryana College of Technology & Management (H.C.T.M), Department of I.T.,
Ambala Road, Kaithal-136027, India, ssjangra@rediffmail.com
2 Kurukshetra University, Kurukshetra (K.U.K), Department of Computer Sc. & Applications,
Kaithal-Pehowa Road, Kurukshetra-136119, India, rkcdcsa@gmail.com
3 Meerut Institute of Engg. & Tech. (M.I.E.T), Department of Computer Sc. & Engineering,
N.H. 58, Delhi-Roorkee Highway, Baghpat Road Bypass Crossing, Meerut-250005, India, Pk223475@yahoo.com
Abstract: A checkpoint is a local state of a process saved on stable storage. In coordinated checkpointing, processes take their checkpoints in such a manner that the resulting global state is consistent. It mostly follows a two-phase commit structure: in the first phase, processes take tentative checkpoints, and in the second phase, these tentative checkpoints are made permanent. In case a single system fails or replies negatively, processes roll back to the last consistent checkpointed state. In mobile distributed systems a process is considered to run on an MH without stable storage, with a wireless link between the MH and its MSS. So, during checkpointing, an MH has to transfer a large amount of checkpoint data to its local MSS over the wireless network. Since MHs are prone to failure, on the failure of an MH these checkpoints become unnecessary. Therefore, transferring such unnecessary checkpoints and then rolling back after a failure may waste a large amount of computation power, bandwidth, time, and energy. In this paper we propose a soft-checkpoint based checkpointing algorithm which is suitable for mobile distributed systems. Our soft-checkpoints are stored on the local MHs and have no transfer cost. These soft-checkpoints are saved to stable storage only on receiving the commit message or during the failure of a node, and are discarded locally after receiving the abort message. Hence, in case of failure our soft-checkpoint approach has much less overhead compared to tentative checkpoints.

Keywords: Checkpointing, Coordinated checkpointing, Uncoordinated checkpointing, Fault Tolerance, Mobile Distributed Systems, Domino-free

1. Introduction
Checkpointing and rollback recovery techniques for fault tolerance in distributed systems have been studied extensively in the literature. However, little attention has been devoted to fault-tolerance techniques for mobile distributed systems. Today, mobile users have become common in distributed systems due to availability, cost, and mobile connectivity. Mobile distributed systems raise new issues such as mobility, low bandwidth of wireless channels, disconnections, limited battery power, and the lack of reliable stable storage on mobile nodes. Due to these unique features of mobile systems, it is not appropriate to directly apply checkpointing protocols designed for fixed-network distributed systems to mobile systems.

Checkpointing algorithms are classified into two main categories: uncoordinated and coordinated. In the uncoordinated approach each process takes its checkpoints independently, without the knowledge of the other processes. This approach is simple, but suffers from the domino effect. In the coordinated checkpointing approach, processes take checkpoints in such a manner that the resulting global state is consistent. So, this approach is domino-free [2] and stores the minimum number of checkpoints on stable storage (at most two).

In an MDCS the checkpoint taken by an MH needs to be transferred to its current MSS due to the lack of stable storage at the MH level [2]. Most of the existing coordinated checkpointing protocols try to minimize the number of processes that take checkpoints and to make the checkpointing algorithms non-blocking. These algorithms follow a two-phase commit structure [1]-[5]. In the first phase, processes take tentative checkpoints, and in the second phase, these are made permanent. The main advantage is that only one permanent checkpoint and at most one tentative checkpoint need to be stored. In the case of a fault, processes roll back to the last checkpointed state [1].

2. Related Work and Problem Formulation

2.1 Related work
The work presented in this paper shows a performance improvement over the work reported in [1]-[5]. These algorithms try to make the checkpointing algorithm either minimum-process, or non-blocking, or both minimum-process and non-blocking.

All these algorithms follow a two-phase commit structure. In the first phase, processes take temporary checkpoints when they receive the checkpoint request. These tentative checkpoints are stored on the stable storage of the MSS. In the second phase, if an MSS learns that all of its processes to
whom it sends checkpoint request, have taken their tentative that all processes takes their hard checkpoint successfully,
checkpoints successfully, initiator MSS sends the commit then it forward commit message. This approach assumes
message to all the participating node. However, in case of a that each site has its own local log, sufficient volatile
single failure or reply aborted, initiator process broadcast memory and can therefore rollback or commit the
the aborted message. After receiving commit request, transaction reliably.
processes convert their tentative checkpoints into permanent
ones and on receiving abort message, every process that has 3. Soft-checkpoint based Checkpointing
already taken a temporary checkpoint must discard it. Later Approach:
the checkpointing algorithm has to be restarted again from
the previous consistent global state. As to take a checkpoint, 3.5 System model
an MH has to transfer a large amount of data to its local
MSS over the wireless network. Since the wireless network A mobile distributed system is a distributed system where
has low bandwidth and the MHs have relatively low some of the processes are running on mobile hosts
computation power. During failure theses transferred (MHs)[7]. It consists of Static Hosts (SHs), Mobile
checkpoint become useless and discard later. These Hosts(MHs) and the Mobile Support Stations(MSSs). So, the
algorithms have higher checkpoint latency and recovery mobile distributed system can be considered as consisting of
time as transferring such temporary checkpoints on stable “n” MHs and “m” MSSs. The static network provides
storage may waste a large amount computation power, reliable, sequenced delivery of messages between any two
bandwidth, energy and time. MSSs, with arbitrary message latency. Similarly, the
wireless network within a cell ensures FIFO delivery of
2.2 Problem formulation messages between an MSS and a local MH. The links are
FIFO in nature. An MH communicates with other nodes of
In mobile distributed system multiple MHs are connected
system via special nodes called mobile support station
with their local MSS through the wireless links. A process is
(MSS).An MH can directly communicate with an MSS only
considering as a MH without stable storage. During
if the MH is physically located within the cell serviced by
checkpointing, an MH has to transfer a large amount of
MSS. A static node that has no support to MH can be
checkpointed data (like variables, control information,
considered as an MSS with no MH. A cell is a geographical
register value and environments etc.) to its local MSS over
area around an MSS in which it can support an MH .An
the wireless network. So, it consume resources to transfer
MH can change its geographical position freely from one
data and to rollback in case of any failure to its consistent
cell to another cell or even area covered by no cell .At any
state. If even a single MH fails to take a checkpoint all the
given instant of time an MH may logically belong to only
checkpoint which is taken on MSS must be rollback. So, it
one cell; its current cell defines the MH’s location and the
increases the checkpoint latency, recovery time during and
MH is considered local to MSS providing wireless coverage
disk overhead during the failure.
in the cell. If an MH does not leave the cell, then every
The objective of the present work is to design a
message sent to it from local MSS would receive in
checkpointing approach that is suitable for mobile
sequence in which they are sent.
computing environment. Checkpointing and recovery
protocol for Mobile computing environment demands for
efficient use of the limited resources of the mobile environment, i.e., wireless bandwidth, battery power, and memory. Consider a mobile environment in which an MSS has 1000 MHs in its minimum set. During the first phase, 999 MHs take their temporary checkpoints successfully and one MH fails to take its checkpoint. In such a case, the checkpointing process must be aborted; the MHs discard their temporary checkpoints and the system restarts its execution from a previous consistent global checkpoint saved on stable storage during fault-free operation. Observe that taking a temporary checkpoint in stable storage and later discarding it affects the bandwidth utilization and wastes the MHs' limited battery power.

2.3 Basic idea

The basic idea of the proposed scheme is to store the checkpoint as a soft checkpoint in memory until the time a process receives the hard-checkpoint request from the initiator. The initiator asks all processes in minset[] whether they agree to the soft checkpoint or not. If one process replies abort, or fails to respond within a timeout period, then the initiator broadcasts an abort message, and after receiving ABORT the processes roll back to their previous consistent state locally. So this approach has no transfer cost. If initiator know […]

3.6 The Soft checkpoint approach

Therefore, in the present work we emphasize making the computation faster, eliminating the overhead of taking a temporary checkpoint on stable storage, and utilizing the available fast memory. In this context we propose a soft-checkpoint based approach, in which processes take their temporary checkpoints as soft checkpoints in their main memory. The approach works as follows.

3.2.4 Actions at the initiator Pj

The initiator can be in one of four states during the checkpointing process: Initial state, Waiting state, Hard checkpointing state, and COMMITTED/ABORTED state, as shown in the state transition diagram of Figure 1. Each state is followed by its previous state, and the initiator works as follows.
1. When Pj initiates the checkpointing algorithm:
{take_soft-checkpoint; increment csn; set weight = 1; compute minset[]; send a take-soft-checkpoint request to all processes belonging to minset, along with minset[], csn and weightj = weightj/2;} /* If Pi initiates its (x+1)th checkpoint, then the set of processes on which Pi depends (directly or transitively) in its xth checkpoint is the minimum set [8]. */
2. The initiator waits for responses to the soft-checkpoint
request.
3. If (timeout/failure)
{broadcast Global_ABORT; go to step 5.}
else if (weight == 1) // positive responses received from all
{send the take-hard-checkpoint request message; old_csnj := soft checkpointj}
else
go to step 2.
4. If (ACKs regarding taking the hard checkpoint are received from all processes in minset[])
{broadcast the Global_COMMIT message; go to step 5.}
else
{broadcast the Global_ABORT message;}
5. END

A process takes the soft checkpoint on whichever of the following conditions occurs first:
• on the receipt of a take-soft-checkpoint request from the initiator;
• upon receiving a computation message, if (m.csn > csni[k]);
• if the local time to take a soft checkpoint expires.

Figure 1. State transition diagram of the initiator process

3.2.5 Actions at any process Pi which belongs to minsetj[]

A process Pi which is part of minsetj[] has four states: Initial, Prepared to take soft checkpoint, Prepared to take hard checkpoint, and COMMITTED/ABORTED. Each state is followed by its previous state. The actions to be taken during these states are shown in Figure 2. Process Pi works in the different conditions as follows:
On the receipt of a take-soft-checkpoint request:
• take the soft checkpoint;
• increment soft_csn;
• send a reply to the initiator.
On sending a computation message to Pj:
• send csni[i] and minset with the message.
Upon receiving a computation message from any process Pk:
if (m.csn > csni[k])
{ csni[k] := m.csn; increment csni[i]; take the soft checkpoint; process the message; reset Recvi, Recvi[k] := 1; }
else if (m.csn = csni[j])
process the message; update Recvi if necessary;
else // m.csn < csni[j]
process the message;
On receiving a Global_ABORT message:
• discard the soft checkpoint from the volatile memory.
On receiving a Global_COMMIT message:
• discard the old hard checkpoint, if any;
• convert the soft checkpoint into a hard checkpoint.

Figure 2. State transition diagram of process Pi

3.2.6 Increasing the Reliability of the Proposed Approach

Soft checkpoints are necessarily less reliable than hard checkpoints, because they are stored in the volatile memory of the MHs. Hence, there is a great need to convert these soft checkpoints into hard (permanent) checkpoints. These soft
checkpoints are converted into hard checkpoints based on the following conditions:
• On receipt of the commit message from the initiator.
• At the time of node disconnection and handoff: in a mobile distributed system, an MH may get disconnected voluntarily or involuntarily. In a voluntary disconnection, the MH transfers its soft checkpoint to the stable storage of its MSS before disconnection; we call this a disconnected checkpoint. In such a case, the MSS that receives the disconnected checkpoint coordinates on behalf of the disconnected MH. Involuntary disconnections are treated as faults [8].
• On the basis of maxsoft: the number of soft checkpoints stored per hard checkpoint is called maxsoft, and it depends on the quality of service of the current network [9].

4. Comparison with the Hard Checkpointing Approach

In this section we compare our soft-checkpoint based approach with the hard checkpointing approach from different perspectives.

4.1 Checkpoint Latency
Checkpoint latency is the time needed to save the checkpoint. It is observed that there is a big difference between the latencies of the soft and hard checkpointing based approaches. This is due to the fact that the soft-checkpoint based approach uses fast volatile memory and does not have any transmission cost.

4.2 Transmission Cost
The soft-checkpoint based approach does not have any transmission cost, as the checkpoints are stored locally in volatile memory instead of on the stable storage of the MSS through a wireless link.

Table 1: Soft vs. Hard Checkpointing Approach

Parameter                 Hard-based          Soft-based
Checkpoint latency        High                Low
Transmission cost         High                Low
Recovery time             High                Low
CPU overhead              High                High
Main memory requirement   Low                 High
Reliability               High                Low
Efficiency                Low                 High
Portability               High                Low
Additional hardware       Not required        Additional processors
Suitability               For large systems   For small systems
Power consumption         High                Low

4.3 Recovery Time
It is observed that the recovery time of the soft-checkpoint based approach is much less if the failure occurs before converting the soft checkpoint into a hard checkpoint, and it is comparable in the other case.

5. Conclusion

In our proposed approach, a process in minset[] takes a soft checkpoint first, and the soft checkpoint is discarded if the process receives an abort message from the initiator. Soft checkpoints are saved in the main memory of the mobile hosts (MHs), and a soft checkpoint is saved on the stable storage of the MSS at a later time only if the processes receive the hard-checkpoint request from the initiator. As a result, the soft-checkpoint based approach requires low battery power of the MHs, low checkpoint latency, low transmission cost, and low recovery time, due to the reduced disk accesses of the MSS by the MHs. As the soft-checkpoint approach is less reliable, to make it reliable we transfer the soft checkpoints to stable storage when the conditions given in Section 3.2.3 occur.

References
[1] Cao G. and Singhal M., "On coordinated checkpointing in Distributed Systems," IEEE Transactions on Parallel and Distributed Systems, vol. 9, no. 12, pp. 1213-1225, Dec. 1998.
[2] G. Cao and M. Singhal, "Mutable Checkpoints: A New Checkpointing Approach for Mobile Computing Systems," IEEE Transactions on Parallel and Distributed Systems, vol. 12, no. 2, pp. 157-172, Feb. 2001.
[3] Cao G. and Singhal M., "On the Impossibility of Min-process Non-blocking Checkpointing and an Efficient Checkpointing Algorithm for Mobile Computing Systems," In Proceedings of the International Conference on Parallel Processing, pp. 37-44, August 1998.
[4] Elnozahy E.N., Johnson D.B. and Zwaenepoel W., "The Performance of Consistent Checkpointing," In Proceedings of the 11th Symposium on Reliable Distributed Systems, pp. 39-47, October 1992.
[5] R. Koo and S. Toueg, "Checkpointing and Roll-back Recovery for Distributed Systems," IEEE Transactions on Software Engineering, pp. 23-31, January 1987.
[6] Elnozahy E.N., Alvisi L., Wang Y.M. and Johnson D.B., "A Survey of Rollback-Recovery Protocols in Message-Passing Systems," ACM Computing Surveys, vol. 34, no. 3, pp. 375-408, 2002.
[7] Acharya A. and Badrinath B. R., "Checkpointing Distributed Applications on Mobile Computers," In Proceedings of the 3rd International Conference on Parallel and Distributed Information Systems, pp. 73-80, September 1994.
[8] L. Kumar, M. Misra, R.C. Joshi, "Low overhead optimal checkpointing for mobile distributed systems," In Proceedings of the 19th IEEE International Conference on Data Engineering, pp. 686-688, 2003.
[9] N. Neves and W. Fuchs, "Adaptive Recovery for Mobile Environments," In Proceedings of the IEEE High-Assurance Systems Engineering Workshop, October 21-22, 1996, pp. 134-141. Also in Communications of the ACM, vol. 40, no. 1, pp. 68-74, January 1997.
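The two-phase soft/hard coordination of Sections 3.2.4 and 3.2.5 can be sketched in a few lines. This is a minimal single-machine simulation under simplifying assumptions: there is no real message passing, the csn/weight bookkeeping is omitted, and all class and function names are illustrative rather than taken from the paper.

```python
# Minimal sketch of the soft/hard two-phase checkpoint commit described
# in Sections 3.2.4-3.2.5. All names are illustrative.

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.soft_ckpt = None   # volatile (main-memory) checkpoint
        self.hard_ckpt = None   # stable-storage checkpoint
        self.state = 0          # application state being checkpointed

    def take_soft_checkpoint(self):
        self.soft_ckpt = self.state   # cheap: stays in main memory
        return True                   # reply "agreed" to the initiator

    def commit(self):
        self.hard_ckpt = self.soft_ckpt   # convert soft -> hard (stable)
        self.soft_ckpt = None

    def abort(self):
        self.soft_ckpt = None   # discard; no stable-storage I/O needed


def coordinate(minset):
    """Phase 1: soft checkpoints; phase 2: hard commit, as in Sec. 3.2.4."""
    replies = [p.take_soft_checkpoint() for p in minset]
    if not all(replies):        # a timeout/failure is modelled as False
        for p in minset:
            p.abort()           # Global_ABORT: purely local, volatile undo
        return "ABORTED"
    for p in minset:            # all agreed: convert to hard checkpoints
        p.commit()
    return "COMMITTED"
```

Because an abort only discards main-memory state, the 999-of-1000 failure example of Section 2.2 costs no wireless transfer under this scheme.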
Cryptography of a Gray Level Image Using a Modified Hill Cipher

V. U. K. Sastry 1*, D. S. R. Murthy 2, S. Durga Bhavani 3

1 Dept. of Computer Science & Engg., SNIST, Hyderabad, India, vuksastry@rediffmail.com
2 Dept. of Information Technology, SNIST, Hyderabad, India, dsrmurthy.1406@gmail.com
3 School of Information Technology, JNTUH, Hyderabad, India, sdurga.bhavani@gmail.com
Abstract: In this paper, we have used a modified Hill cipher for encrypting a gray level image. Here, we have illustrated the process by considering a couple of examples. The security of the image is totally achieved, as the encrypted version of the original image does not reveal any feature of the original image.

Keywords: Cryptography, Cipher, Gray level image, Encrypted image, Modular arithmetic inverse.

1. Introduction

The study of the cryptography of gray level images [1-3] by using block ciphers has gained considerable impetus in recent years. The transformation of an image from its original form to some other form, such that it cannot be deciphered, is really an interesting one.
In a recent investigation [4, 5], we have developed two large block ciphers by modifying the Hill cipher [3]. In these ciphers, the key is of size 512 bits and the plaintext is of size 2048 bits. In one of the papers [6], the plaintext matrix is multiplied by the key on one side and by its modular arithmetic inverse on the other side. From the cryptanalysis and the avalanche effect, we have noticed that the cipher is a strong one and it cannot be broken by any cryptanalytic attack.
In the present paper, our objective is to develop a block cipher and to use it for the cryptography of a gray level image. Here, we have taken a key containing 64 decimal numbers (as it was in [1]), generated a key matrix of size 32 x 32 by extending the key in a special manner (discussed later), and applied it in the cryptography of a gray level image.
In Section 2, we have developed a procedure for the cryptography of a gray level image. In Section 3, we have used an example and illustrated the process. Finally, in Section 4, we have drawn conclusions from the analysis.

2. Development of a Procedure for the Cryptography of a Gray Level Image

Consider a gray level image whose gray level values can be represented in the form of a matrix given by
P = [Pij], i = 1 to n, j = 1 to n. (2.1)
Here, each Pij lies between 0 and 255.
Let us choose a key K, and let it be represented in the form of a matrix given by
K = [Kij], i = 1 to n, j = 1 to n, (2.2)
where each Kij is in the interval [0, 255].
Let
C = [Cij], i = 1 to n, j = 1 to n, (2.3)
be the matrix obtained on encryption.
The process of encryption and the process of decryption, which are quite suitable for the problem on hand, are given in Fig. 1.
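Fig. 1 itself (with its Mix()/IMix() stages) is not reproduced above. As a sketch only, the two-sided Hill core that the introduction attributes to [6] — multiplying the plaintext matrix by the key on one side and by its modular arithmetic inverse on the other, i.e. C = K P K⁻¹ (mod 256) — can be written as follows, with an illustrative 2 x 2 key in place of the paper's 32 x 32 key:

```python
# Core of a modified-Hill round as described for [6]: C = K * P * K_inv (mod 256).
# The Mix()/IMix() bit-mixing stages of Fig. 1 are omitted; the 2x2 key below
# is illustrative only, not the 32x32 key used in the paper.

def matmul_mod(A, B, m=256):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % m
             for j in range(n)] for i in range(n)]

K     = [[3, 2], [1, 1]]        # det = 1, so K is invertible mod 256
K_inv = [[1, 254], [255, 3]]    # adjugate of K reduced mod 256

def encrypt(P):
    return matmul_mod(matmul_mod(K, P), K_inv)

def decrypt(C):
    return matmul_mod(matmul_mod(K_inv, C), K)
```

Decryption recovers P exactly because K K⁻¹ ≡ K⁻¹ K ≡ I (mod 256), the property the paper later states as (3.15).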
Figure 1. The process of encryption and the process of decryption

Here, Mix() is a function used for thoroughly mixing the decimal numbers (on converting them into binary bits) arising in the process of encryption at each stage of iteration. IMix() is a function which represents the reverse process of Mix(). For a detailed discussion of these functions, and of the algorithms involved in the processes of encryption and decryption, we refer to [1].

3. Illustration of the cryptography of the image

Let us choose a key Q consisting of 64 numbers. This can be written in the form of a matrix given by [matrix not reproduced].
Now we obtain the key matrix K given by [matrix not reproduced].
The length of the secret key (which is to be transmitted) is 512 bits. On using this key, we can generate a new key E in the form [not reproduced], where U = Q^T, in which T denotes the transpose of a matrix, and R and S are obtained from Q and U as follows. On interchanging the 1st row and the 8th row of Q, the 2nd row and the 7th row of Q, etc., we get R. Similarly, we obtain S from U. Thus, we have E.
The size of this matrix is 16 x 16. It can be further extended to a matrix L of size 32 x 32, where H = E^T, in which T denotes the transpose of a matrix, and F and G are obtained from E and H as follows. On interchanging the 1st row and the 16th row of E, the 2nd row and the 15th row of E, etc., we get F. Similarly, we obtain G from H. Thus, we have L.
Here, J is obtained from H by rotating, circularly, two rows in the downward direction. The aforementioned operations are performed for (1) enhancing the size of the key matrix to 32 x 32, and (2) obtaining the modular arithmetic inverse of K in a trial and error manner. The modular arithmetic inverse of K is obtained as [matrix not reproduced].
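The key-extension step described above (a transpose plus row interchanges, applied once to go from 8 x 8 to 16 x 16 and again to reach 32 x 32) can be sketched as follows. The displayed matrices are not reproduced in the source, so the block arrangement [[A, B], [C, D]] used here is an assumption, and the sample key Q is illustrative:

```python
# Sketch of the key-extension step of Section 3: an 8x8 key Q is grown to
# 16x16 and then 32x32 using only transposes and row interchanges. The
# source does not reproduce the block layout, so the arrangement
# [[A, B], [C, D]] below is an assumption.

def transpose(M):
    return [list(row) for row in zip(*M)]

def rows_interchanged(M):
    # interchange the 1st and last rows, the 2nd and second-last, etc.
    return [list(row) for row in reversed(M)]

def extend(A):
    """Double an n x n matrix to 2n x 2n from A, its row-interchange,
    its transpose, and the transpose's row-interchange."""
    D = transpose(A)             # U = Q^T (resp. H = E^T)
    B = rows_interchanged(A)     # R (resp. F)
    C = rows_interchanged(D)     # S (resp. G)
    top = [a + b for a, b in zip(A, B)]
    bottom = [c + d for c, d in zip(C, D)]
    return top + bottom

Q = [[(7 * i + j) % 256 for j in range(8)] for i in range(8)]  # illustrative key
E = extend(Q)   # 16 x 16
L = extend(E)   # 32 x 32
```

The paper additionally rotates H circularly to obtain J when forming K; that step is omitted here since the corresponding matrices are not reproduced.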
From (3.8) and (3.10), we can readily find that
K K^(-1) mod 256 = K^(-1) K mod 256 = I. (3.15)
Let us consider the image of a hand, which is given below.

Figure 2. Image of a Hand

This image can be represented in the form of a binary matrix P given by (3.16) [matrix not reproduced], where 1 denotes black and 0 denotes white.
On adopting the iterative procedure given in Fig. 1, we get the encrypted image C [matrix not reproduced]. On using (3.8), (3.10), (3.17), and the procedure for decryption (see Fig. 1(b)), we get back the original binary image P, given by (3.16).
From the matrix C, on connecting each 1 with its neighbouring 1, we get an image which is in a zigzag manner (see Fig. 3).

Figure 3. Encrypted image of the hand

It is interesting to note that the original image and the encrypted image differ totally; the former exhibits all the features very clearly, while the latter does not reveal anything.

4. Conclusions

In this analysis, we have made use of a modified Hill cipher for encrypting a binary image. Here we have illustrated the procedure by considering a pair of examples: (1) the image of a hand, and (2) the image of the upper half of a person.
Here, we have noticed that the encrypted image is totally different from the original image, and the security of the image is completely enhanced, as no feature of the original image can be traced out in any way from the encrypted image.
This analysis can be extended to the images of signatures and thumb impressions.
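Property (3.15), K K⁻¹ ≡ K⁻¹ K ≡ I (mod 256), can only hold when det K is coprime to 256, i.e. odd. A minimal 2 x 2 illustration of computing such a modular arithmetic inverse via the adjugate (the paper obtains the inverse of its 32 x 32 key by trial and error, so this routine is illustrative, not the authors' method):

```python
# Computing a modular arithmetic inverse mod 256 for a 2x2 matrix and
# checking property (3.15). The 2x2 case is illustrative only; the paper
# finds the inverse of its 32x32 key K in a trial and error manner.

def inv2x2_mod(M, m=256):
    a, b = M[0]
    c, d = M[1]
    det = (a * d - b * c) % m
    det_inv = pow(det, -1, m)   # exists iff det is coprime to m (odd for m=256)
    return [[( d * det_inv) % m, (-b * det_inv) % m],
            [(-c * det_inv) % m, ( a * det_inv) % m]]

def matmul_mod(A, B, m=256):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % m
             for j in range(n)] for i in range(n)]

K = [[7, 2], [3, 5]]            # det = 29, odd, hence invertible mod 256
K_inv = inv2x2_mod(K)
```

Note that `pow(det, -1, m)` (Python 3.8+) raises `ValueError` when det is even, mirroring the fact that such a key has no modular arithmetic inverse mod 256.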
References
[1] Hossam El-din H. Ahmed, Hamdy M. Kalash, and Osama S. Farag Allah, "Encryption Efficiency Analysis and Security Evaluation of RC6 Block Cipher for Digital Images", International Journal of Computer, Information, and Systems Science, and Engineering, Vol. 1, No. 1, pp. 33-39, 2007.
[2] M. Zeghid, M. Machhout, L. Khriji, A. Baganne, and R. Tourki, "A Modified AES Based Algorithm for Image Encryption", World Academy of Science, Engineering and Technology, Vol. 27, pp. 206-211, 2007.
[3] Bibhudendra Acharya, Saroj Kumar Panigrahy, Sarat Kumar Patra, and Ganapati Panda, "Image Encryption Using Advanced Hill Cipher Algorithm", International Journal of Recent Trends in Engineering, Vol. 1, No. 1, May 2009.
[4] V. U. K. Sastry, D. S. R. Murthy, S. Durga Bhavani, "A Block Cipher Involving a Key Applied on Both the Sides of the Plain Text", International Journal of Computer and Network Security (IJCNS), Vol. 1, No. 1, pp. 27-30, Oct. 2009.
[5] V. U. K. Sastry, D. S. R. Murthy, S. Durga Bhavani, "A Block Cipher Having a Key on One Side of the Plain Text Matrix and its Inverse on the Other Side", accepted for publication in International Journal of Computer Theory and Engineering (IJCTE), Vol. 2, No. 5, Oct. 2010.
[6] William Stallings, Cryptography and Network Security: Principles and Practice, Third Edition, Pearson, 2003.
Proxy Blind Signature based on ECDLP

Asis Kumar Tripathy 1, Ipsita Patra 2, and Debasish Jena 3

1,2 Department of Computer Science and Engineering, International Institute of Information Technology, Bhubaneswar 751 013, India; asistripathy@gmail.com, ipsi.lucky@gmail.com
3 Center For IT Education, Biju Pattanaik University of Technology, Bhubaneswar, India; debasishjena@hotmail.com
Abstract: A proxy blind signature combines the properties and behavior of the two most common digital signature schemes, i.e., the proxy signature and the blind signature. In this scheme, the proxy signer generates the blind signature on behalf of the original signer without knowing the content of the message. Proxy blind signatures can be used in various applications such as e-voting, e-payment, and mobile agent communication. In this paper, a cryptanalysis of the DLP-based proxy blind signature scheme with low computation by Aung et al. has been done, and an efficient proxy blind signature scheme based on ECDLP has been proposed.

Keywords: Proxy signature, Blind signature, Proxy blind signature, ECDLP.

1. Introduction

1.1 Digital Signature
A digital signature is a digital code that can be attached to an electronically transmitted message and that uniquely identifies the sender. Digital signatures are especially important for electronic commerce and are a key component of most authentication schemes. To be effective, digital signatures must be unforgeable. There are a number of different encryption techniques to guarantee this level of security.

1.2 Proxy Signature
The proxy signature scheme is a kind of digital signature scheme. In the proxy signature scheme, one user, called the original signer, can delegate his/her signing capability to another user, called the proxy signer. This is similar to a person delegating his/her seal to another person in the real world.

1.3 Blind Signature
The signer cannot determine which transformed message received for signing corresponds with which digital signature, even though the signer knows that such a correspondence must exist.

1.4 Proxy Blind Signature
A proxy blind signature scheme is a digital signature scheme which combines the properties of the proxy signature and blind signature schemes. A proxy blind signature scheme is a protocol played by two parties in which a user obtains a proxy signer's signature for a desired message and the proxy signer learns nothing about the message. With such properties, the proxy blind signature scheme is useful in several applications such as e-voting, e-payment and mobile agent environments. In a proxy blind signature scheme, the proxy signer is allowed to generate a blind signature on behalf of the original signer.

1.5 Properties of the Proxy Blind Signature Scheme
In addition to the properties of a digital signature, a proxy blind signature should satisfy the following properties.
1. Distinguishability: The proxy signature must be distinguishable from the normal signature.
2. Non-repudiation: Neither the original signer nor the proxy signer can sign a message instead of the other party. Both the original signer and the proxy signer cannot deny their signatures against anyone.
3. Unforgeability: Only a designated proxy signer can generate a valid proxy signature for the original signer (even the original signer cannot do it).
4. Verifiability: The receiver of the signature should be able to verify the proxy signature in a similar way to the verification of the original signature.
5. Identifiability: Anyone can determine the identity of the corresponding proxy signer from a proxy signature.
6. Prevention of misuse: It should be guaranteed that the proxy key pair is used only for creating proxy signatures that conform to the delegation information. In case of any misuse of the proxy key pair, the responsibility of the proxy signer should be determined explicitly.
7. Unlinkability: When the signature is revealed, the proxy signer cannot identify the association between the message and the blind signature he generated. When the signature is verified, the signer knows neither the message nor the signature associated with the signature scheme.

In this paper, a cryptanalysis of Aung et al. [12] has been done and an efficient proxy blind signature scheme based on ECDLP has
been proposed. The proposed scheme satisfies all the properties of a proxy blind signature scheme.
The rest of this paper is organized as follows. In Section 2, some related work is discussed. An overview of Aung et al.'s DLP-based proxy blind signature scheme with low computation is given in Section 3. In Section 4, a cryptanalysis of Aung et al.'s scheme is done. An introduction to ECC is given in Section 5. In Section 6, the proposed scheme is described. A security analysis of the proposed scheme is done in Section 7. In Section 8, the efficiency of the proposed scheme is compared with the previous schemes, and concluding remarks are given in Section 9.

2. Related Work

D. Chaum [4] introduced the concept of a blind signature scheme in 1982. In 1996, Mambo et al. [2] introduced the concept of the proxy signature. The two types of scheme, proxy unprotected (the proxy and the original signer can both generate a valid proxy signature) and proxy protected (only the proxy can generate a valid proxy signature), ensure, among other things, non-repudiation and unforgeability.
The first proxy blind signature was proposed by Lin and Jan [1] in 2000. Recently, Tan et al. [7] introduced a proxy blind signature scheme which ensures the security properties of both underlying schemes, viz., the blind signature schemes and the proxy signature schemes; the scheme is based on the Schnorr blind signature scheme. Lee et al. [3] showed that a strong proxy signature scheme should have the properties of strong unforgeability, verifiability, strong identifiability, strong non-repudiation, and prevention of misuse.
Hung-Min Sun and Bin Tsan Hsieh [8] showed that the Tan et al. [6] schemes do not satisfy the unforgeability and unlinkability properties. In addition, they also pointed out that the Lal and Awasthi [7] scheme does not possess the unlinkability property either. In 2004, Xue and Cao [9] showed that there exists one weakness in the Tan et al. scheme [5] and the Lal et al. scheme [7], since the proxy signer can get the link between the blind message and the signature or plaintext with great probability. Xue and Cao introduced the concept of strong unlinkability, and they also proposed a proxy blind signature scheme.
In 2007, Li et al. [10] proposed a proxy blind signature scheme using a verifiable self-certified public key, and their scheme is more efficient than the schemes published earlier. Recently, Xuang Yang and Zhaoping Yu [11] proposed a new scheme and showed that their scheme is more efficient than that of Li et al. [10]. In 2009, Aung et al. [12] proposed a new proxy blind signature scheme which satisfied all the security requirements of both the blind signature scheme and the proxy signature scheme.

3. Overview of Aung et al.'s DLP-Based Proxy Blind Signature Scheme with Low Computation

In this section, the DLP-based proxy blind signature scheme with low computation is discussed. (The displayed equations of Sections 3 and 6 were not preserved in this extraction; only their numbers are kept.)

3.1 Proxy Delegation Phase

The original signer selects a random number and computes:
… (1)
… (2)
and sends them, along with the warrant, to the proxy signer. The proxy signer then checks:
… (3)
If it is correct, P accepts it and computes the proxy signature secret key as follows:
… (4)
Note: the corresponding proxy public key is … .

3.2 Blind Signing Phase

The proxy signer selects a random number and computes:
… (5)
and then sends it to the signature asker. To obtain the blind signature of a message m, the original signer randomly chooses two random numbers and computes:
… (6)
… (7)
… (8)
If the result is 0, then a new tuple has to be selected; otherwise it is sent to the proxy signer. After receiving it, the proxy signer computes:
… (9)
and sends the signed message back.

3.3 Extraction Phase

On receiving it, the signature asker computes:
… (10)
Finally, the signature of the message is … .
3.4 Verification Phase

The recipient of the signature can verify the proxy blind signature by checking whether
… (11)
where … . If it is true, the verifier accepts it as a valid proxy blind signature; otherwise he rejects it.

Verifiability
The verifier can verify the proxy blind signature by checking that … holds.

4. Cryptanalysis of Aung et al.'s Scheme

The scheme does not satisfy the property of verifiability as claimed in Aung et al.'s [12] scheme: the verification steps are not correct.

5. Elliptic Curve Cryptography

5.1 Elliptic Curve over a Finite Field

The elliptic curve operations defined on real numbers are slow and inaccurate due to round-off error. Cryptographic operations need to be faster and accurate. To make operations on an elliptic curve accurate and more efficient, ECC is defined over two finite fields: the prime field and the binary field. The field is chosen with a finitely large number of points suited for cryptographic operations.

5.1.1 EC over a Prime Field

The equation of the elliptic curve over a prime field F_p is
y^2 mod p = (x^3 + ax + b) mod p,
where (4a^3 + 27b^2) mod p ≠ 0. Here, the elements of the finite field are integers between 0 and p − 1. All the operations such as addition, subtraction, division, and multiplication involve integers between 0 and p − 1. The prime number p is chosen such that there is a finitely large number of points on the elliptic curve to make the cryptosystem secure. Standards for Efficient Cryptography (SEC) specifies curves with p ranging between 112 and 521 bits. The algebraic rules for point addition and point doubling can be adapted for elliptic curves over F_p.

5.1.2 Point Addition

Consider two distinct points J and K such that J = (xJ, yJ) and K = (xK, yK). Let L = J + K, where L = (xL, yL); then
xL = s^2 − xJ − xK,
yL = s(xJ − xL) − yJ, (12)
where s = (yJ − yK)/(xJ − xK) is the slope of the line through J and K. If K = −J, then J + K = O, where O is the point at infinity. If K = J, then J + K = 2J and the point doubling equations are used. Also, J + K = K + J.

5.1.3 Point Doubling

Consider a point J such that J = (xJ, yJ), where yJ ≠ 0. Let L = 2J, where L = (xL, yL). Then
xL = s^2 − 2xJ,
yL = s(xJ − xL) − yJ, (13)
where s = (3xJ^2 + a)/(2yJ) is the tangent at point J and a is one of the parameters chosen with the elliptic curve. If yJ = 0, then 2J = O, where O is the point at infinity.

6. Proposed Scheme

In this section, we propose an efficient proxy blind signature scheme based on ECC. The proposed scheme is divided into five phases: system parameters, proxy delegation, blind signing, signature extraction, and signature verification.

6.1 System Parameters

The entities involved are three parties. We denote the x-coordinate of a point on the elliptic curve by … . The scheme is constructed as follows. We make the conventions that lowercase letters denote elements in … and capital letters denote points on the curve … .
O: the original signer
P: the proxy signer
A: the signature asker
…: the original signer O's secret key
…: the original signer O's public key
…: the proxy signer P's secret key
…: the proxy signer P's public key
…: the designated proxy warrant, which contains the identity information of the original signer and the proxy
signer, the message type to be signed by the proxy signer, the delegation limits of authority, valid periods of delegation, etc.
h(.): a secure one-way hash function
||: the concatenation of strings

6.2 Proxy Delegation Phase

The original signer randomly chooses …, 1 < … < n, and computes:
… (14)
… (15)
and sends them, along with the warrant, to the proxy signer. The proxy signer then checks:
… (16)
If the equation holds, P computes the proxy signature secret key as follows:
… (17)
and the corresponding proxy public key … .

6.3 Blind Signing Phase

The proxy signer randomly chooses …, where 1 < … < n, and computes:
… (18)
and then sends it to the signature asker A. To obtain the blind signature of a message m, the original signer O randomly chooses two random numbers u, v and computes:
… (19)
… (20)
… (21)
If the result is 0, then A has to select a new tuple (u, v); otherwise O sends e to P. After receiving e, the proxy signer P computes:
… (22)

6.4 Extraction Phase

On receiving it, A computes:
… (23)
Finally, the signature of the message is … .

6.5 Verification Phase

The recipient of the signature can verify the proxy blind signature by checking whether
… (24)
where … . If it is true, the verifier accepts it as a valid proxy blind signature; otherwise he rejects it.

7. Security Analysis of the Proposed Scheme

The scheme has a stronger security property because the ECDLP is more difficult than the DLP. While maintaining the security, the scheme requires less data size and fewer computations, so it is efficient. In this section, we show that our scheme satisfies all the security requirements of a strong proxy blind signature.

Distinguishability: On the one hand, the proxy blind signature contains the warrant … . On the other hand, anyone can verify the validity of the proxy blind signature, so he can easily distinguish the proxy blind signature from a normal signature.

Non-repudiation: The original signer does not obtain the proxy signer's secret key, and the proxy signer does not obtain the original signer's secret key. Thus, neither the original signer nor the proxy signer can sign in place of the other party. At the same time, through the valid proxy blind signature, the verifier can confirm that the signature of the message has been entitled by the original signer, because the verifier must use the original signer's public key during the verification. Likewise, the proxy signer cannot repudiate the signature. Hence, the scheme offers the non-repudiation property.

Unforgeability: Suppose an adversary (including the original signer and the receiver) wants to impersonate the proxy signer to sign the message m. He can intercept the delegation information, but he cannot obtain the proxy signature secret key. From Equation (15), we know that only the proxy signer holds the proxy signature secret key. Because 1 < … < n, the adversary can obtain the proper proxy signature secret key by guessing it with at most a probability of … . That is, anyone else (even the original signer and the receiver) can forge the proxy blind signature successfully only with a probability of … .

Verifiability: The proposed scheme satisfies the property of verifiability. The verifier can verify the proxy blind signature by checking that … holds. This is because:
=
=

Identifiability: The proxy blind signature contains the warrant …. Moreover, the verification equation includes the original signer O's public key and the proxy signer P's public key. Hence, anyone can determine the identity of the corresponding proxy signer from a proxy signature.

Prevention of misuse: The proposed scheme can prevent proxy key pair misuse because the warrant mw includes the original signer and proxy signer identity information, the type of message to be signed by the proxy signer, the delegation period, etc. With the proxy key, the proxy signer cannot sign messages that have not been authorized by the original signer.

Unlinkability: During generation of the signature, the proxy signer has the view of the transcripts. Since these are specified by the original signer for all signatures under the same delegation condition, proxy unlinkability holds if and only if there is no conjunction between … and …. This is obvious from Equations (16)-(21). The value … is only included in Equation (17) and connected to … through Equation (18); to exploit this, one must be able to compute R, which is masked with two random numbers. Similarly, … and … may be associated with the signature through Equations (19) and (20) respectively; these attempts fail again due to the random numbers. Even if they are combined, the number of unknowns is still greater than the number of equations. So the proposed scheme indeed provides the proxy blindness property.

8. Efficiency of the proposed scheme
The proposed scheme has been proved to be correct. It also satisfies the properties of a proxy blind signature scheme, i.e. distinguishability, non-repudiation, verifiability, unforgeability, identifiability, prevention of misuse and unlinkability. The proposed scheme is compared with known proxy blind signature schemes, as given below.

Table 1: Comparative statement among proxy blind signature schemes

Scheme            Basis                                 80-bit security strength
IFP-based         integer factorization problem (IFP)   1024-bit modulus
DLP-based         discrete logarithm problem (DLP)      |p| = 1024, |q| = 160
Proposed scheme   ECDLP                                 m = 163 bits

9. Conclusion
We analyzed that Aung et al.'s DLP-based proxy blind signature scheme with low computation does not satisfy the verifiability property of a proxy blind signature scheme. Compared with Aung et al.'s scheme, we present a more efficient and secure proxy blind signature scheme that overcomes the pointed-out drawback of the Aung et al. scheme. We proved that our scheme is more efficient and secure than the previous schemes.

References
[1] W. D. Lin, and J. K. Jan, "A security personal learning tools using a proxy blind signature scheme", Proc. of International Conference on Chinese Language Computing, 2000, pp. 273-277.
[2] M. Mambo, K. Usuda, and E. Okamoto, "Proxy signatures: Delegation of the power to sign messages", IEICE Transactions on Fundamentals, E79-A (1996), pp. 1338-1353.
[3] B. Lee, H. Kim, and K. Kim, "Strong proxy signature and its application", Australasian Conference on Information Security and Privacy (ACISP 2001), LNCS 2119, Springer-Verlag, Sydney, 2001, pp. 603-608.
[4] D. Chaum, "Blind signature for untraceable payments", Advances in Cryptology: Proceedings of CRYPTO 82, Springer-Verlag, New York, 1983, pp. 199-203.
[5] Z. W. Tan, Z. J. Liu, and C. M. Tang, "A proxy blind signature scheme based on DLP", Journal of Software, Vol. 14, No. 11, 2003, pp. 1931-1935.
[6] Z. Tan, Z. Liu, and C. Tang, "Digital proxy blind signature schemes based on DLP and ECDLP", MM Research Preprints, No. 21, MMRC, AMSS, Academia, Beijing, 2002, pp. 212-217.
[7] S. Lal, and A. K. Awasthi, "Proxy Blind Signature Scheme", Journal of Information Science and Engineering, Cryptology ePrint Archive, Report 2003/072. Available at http://eprint.iacr.org/.

[8] Hung-Min Sun, and Bin-Tsan Hsieh, "On the Security of Some Proxy Blind Signature Schemes", Australasian Information Security Workshop (AISW 2004), Dunedin: Australian Computer Society Press, 2004, pp. 75-78.
[9] Q. S. Xue, and Z. F. Cao, "A new proxy blind signature scheme with warrant", IEEE Conference on Cybernetics and Intelligent Systems (CIS and RAM 2004), Singapore, 2004, pp. 1385-1390.
[10] J. G. Li, and S. H. Wang, "New Efficient Proxy Blind Signature Scheme Using Verifiable Self-certified Public Key", International Journal of Network Security, Vol. 4, No. 2, 2007, pp. 193-200.
[11] Xuan Yang, and Zhaoping Yu, "Efficient Proxy Blind Signature Scheme based on DLP", International Conference on Embedded Software and Systems (ICESS 2008).
[12] Aung Nway Oo, and Nilar, "The DLP based proxy blind signature scheme with low computation", 2009 Fifth International Joint Conference on INC, IMS and IDC.

Authors Profile

Asis Kumar Tripathy received the B.E. degree in Information Technology from Balasore College of Engineering and Technology in 2006. He joined the same college as a Lecturer on 26.12.2006. He is now pursuing his M.Tech degree at the International Institute of Information Technology, Bhubaneswar, Orissa, India. His research area of interest is Information Security.

Ipsita Patra received the MCA degree from the National Institute of Technology, Rourkela. She is now pursuing her M.Tech degree at the International Institute of Information Technology, Bhubaneswar, Orissa, India. Her research areas of interest are Information Security and Network Security.

Debasish Jena was born on 18th December, 1968. He received his B.Tech degree in Computer Science and Engineering, his Management degree and his M.Tech degree in 1991, 1997 and 2002 respectively. He joined the Centre for IT Education as an Assistant Professor on 01.02.2006. He submitted his Ph.D. thesis at NIT, Rourkela on 5th April 2010. His research areas of interest are Information Security, Web Engineering, Bio-Informatics and Database Engineering.

Comparative Study of Continuous Density Hidden Markov Models (CDHMM) and Artificial Neural Network (ANN) in Infant's Cry Classification System
Yousra Abdulaziz Mohammed, Sharifah Mumtazah Syed Ahmad
College of IT
Universiti Tenaga Nasional (UNITEN)
Kajang, Malaysia
yosra@uniten.edu.my, smumtazah@uniten.edu.my

Abstract: This paper describes a comparative study between the continuous density hidden Markov model (CDHMM) and the artificial neural network (ANN) on an automatic infant cry classification system whose main task is to classify and differentiate between pain and non-pain cries of infants. In this study, Mel Frequency Cepstral Coefficients (MFCC) and Linear Prediction Cepstral Coefficients (LPCC) are extracted from the audio samples of infant cries and are fed into the classification modules. Two well-known recognition engines, ANN and CDHMM, are constructed and compared. The ANN system (a feedforward multilayer perceptron network with backpropagation using the scaled conjugate gradient learning algorithm) is applied. The continuous Hidden Markov Model classification system is trained with the Baum-Welch algorithm on a pair of local feature vectors. After optimizing the systems' parameters through preliminary experiments, CDHMM gives the best identification rate at 96.1%, which is much better than the 79% of ANN, whereby in general the systems based on MFCC features perform better than those that utilize LPCC features.

Keywords: Artificial Neural Networks, Continuous Density Hidden Markov Model, Mel Frequency Cepstral Coefficient, Linear Prediction Cepstral Coefficients, Infant Pain Cry Classification

1. Introduction

Infants often use cries as a communication tool to express their physical, emotional and psychological states and needs [1]. An infant may cry for a variety of reasons, and many scientists believe that there are different types of cries which reflect different states and needs of infants; thus it is possible to analyze and classify infant cries for clinical diagnosis purposes.

A number of research works along this line have been reported, many of which are based on Artificial Neural Network (ANN) classification techniques. Petroni and Malowany [4], for example, used three different varieties of supervised ANN techniques, namely a simple feed-forward network, a recurrent neural network (RNN) and a time-delay neural network (TDNN), in their infant cry classification system. In their study, they attempted to recognize and classify three categories of cry, namely 'pain', 'fear' and 'hunger', and the results demonstrated that the highest classification rate was achieved by using a feed-forward neural network. Another research work, carried out by Cano [8], used Kohonen's self-organizing maps (SOM), basically a variety of unsupervised ANN technique, to classify different infant cries. A hybrid approach that combines fuzzy logic and neural networks has also been applied in a similar domain [7].

Apart from the traditional ANN approach, another infant cry classification technique studied is the Support Vector Machine (SVM), reported by Barajas and Reyes [2]. Here, a set of Mel Frequency Cepstral Coefficients (MFCC) was extracted from the audio samples as the input features. On the other hand, Orozco and Garcia [6] used the linear prediction technique to extract the acoustic features from the cry samples, which are then fed into a feed-forward neural network recognition module.

The Hidden Markov Model is based on a double stochastic process, whereby the first process produces a set of observations which in turn can be used indirectly to reveal another hidden process that describes the state evolution [13]. This technique has been used extensively to analyze audio signals, such as in biomedical signal processing [17] and speech recognition [18]. The prime objective of this paper is to compare the performance of an automatic infant cry classification system applying two different classification techniques, Artificial Neural Networks and the continuous Hidden Markov Model.

Here, a series of observable feature vectors is used to reveal the cry model and hence assist in its classification. First, the paper describes the overall architecture of an automatic recognition system whose main task is to differentiate an infant's 'pain' cries from 'non-pain' cries. The performance of both systems is compared in terms of recognition accuracy, classification error rate and F-measure under the use of two different acoustic features, namely Mel Frequency Cepstral Coefficients (MFCC) and Linear Prediction Cepstral Coefficients (LPCC). Separate phases of system training and system testing are carried out on two different sample sets of infant cries recorded from a group of babies ranging from newborns up to 12 months old.

The organization of the rest of this paper is as follows. Section 2 describes the nature of the datasets used in the study. Section 3 presents the overall architecture of the recognition system, in terms of the features extracted and the classification techniques. Section 4 details the experimental set-ups and the results of the findings. Section 5 concludes the paper and highlights potential areas for future work.

2. Experimental Data Sets

The infant cry corpus collected is a set of 150 pain samples and 30 non-pain samples recorded at random time intervals. The babies selected for recording vary from newborns up to 12 months old, a mixture of both healthy males and females. The recordings are then sampled at 16000 Hertz. It is important to highlight here that the pain cry episodes are the result of the pain stimulus applied during routine immunization at a local pediatric clinic in Darnah, Libya. Recordings resulting from anger or hunger were considered non-pain utterances and were recorded in quiet rooms at various infants' homes. Recordings were made on a digital player at a sample rate of 8000 Hertz and 4-bit resolution, with the microphone placed 10 to 30 centimeters away from the infant's mouth. The audio signals were then transferred to a sound editor for analysis and re-sampled at 16000 Hertz with 16-bit resolution [3, 4, 5]. The final digital recordings were stored as WAV files.

3. System Overview

In this paper, we consider two recognition engines for the infant cry classification system. The first is an artificial neural network (ANN), a very popular pattern matching technique in the field of speech recognition; the feedforward multilayer perceptron (MLP) network with a backpropagation learning algorithm is the best known ANN. The other technique is a continuous density hidden Markov model (CDHMM). The overall system is depicted in Figure 1 below.

Figure 1. Infant Cry Recognition System Architecture

From Figure 1, we can say that the infant cry recognition system involves three main tasks:
• Signal Preprocessing
• Feature Extraction
• Pattern Matching

3.1 Pre-Processing
The first step in the proposed system is the pre-processing step, which requires the removal of the 'silent' periods from the recorded sample. Recordings with cry units lasting at least 1 second from the moment of the stimulus event were used for the study [4, 6]. A cry unit is defined as the duration of the vocalization only during expiration [5].

The audio recordings were then divided further into segments of exactly 1 second length, each representing a pre-processed cry segment, as recommended by [2, 4, 6, 8]. Before these one-second segments can be used for feature extraction, a process called pre-emphasis is applied. Pre-emphasis aims at reducing the high spectral dynamic range, and is accomplished by passing the signal through an FIR filter whose transfer function is given by:

H(z) = 1 − α·z^(−1)    (1)

A typical value for the pre-emphasis parameter α is 0.97. Consequently, the output is formed as follows:

y(n) = s(n) − α·s(n−1)    (2)

where s(n) is the input signal and y(n) is the output signal of the first-order FIR filter.

Every 1-second segment is thereafter divided into frames of 50 milliseconds, with successive frames overlapping by 50%. The next step is to apply a window function to each individual frame in order to minimize discontinuities at the beginning and end of each frame. Typically the window function used is the Hamming window, which has the following form:

w(n) = 0.54 − 0.46·cos(2πn/(N−1)), 0 ≤ n ≤ N−1    (3)

Given the above window function and assuming that there are N samples in each frame, we obtain the following signal after windowing:

yw(n) = y(n)·w(n), 0 ≤ n ≤ N−1    (4)

Figure 2. Hamming Window

Of the 150 pain and 30 non-pain recording samples, 625 and 256 one-second cry segments were obtained respectively. Of these 881 cry segments, 700 were used for system training and 181 were used for system testing. It is important to use a separate set of cry segments for training and testing in order to avoid biased testing results.
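As a concrete illustration of the pre-processing chain described in Section 3.1 — pre-emphasis (Equations 1-2), division into 50 ms frames with 50% overlap, and Hamming windowing (Equations 3-4) — here is a minimal NumPy sketch. It is not the authors' implementation; the function and parameter names are ours, assuming 16 kHz, 1-second segments as described in the text:

```python
import numpy as np

def preprocess(segment, fs=16000, alpha=0.97, frame_ms=50, overlap=0.5):
    """Pre-emphasize a cry segment, then split it into overlapping
    Hamming-windowed frames (Eqs. 1-4 of the text)."""
    # Eq. 2: y(n) = s(n) - alpha * s(n-1), taking y(0) = s(0)
    y = np.append(segment[0], segment[1:] - alpha * segment[:-1])

    frame_len = int(fs * frame_ms / 1000)      # 50 ms -> 800 samples at 16 kHz
    hop = int(frame_len * (1 - overlap))       # 50% overlap -> 400-sample hop
    n_frames = 1 + (len(y) - frame_len) // hop

    # Eq. 3: w(n) = 0.54 - 0.46 * cos(2*pi*n / (N - 1)), 0 <= n <= N-1
    window = np.hamming(frame_len)

    # Eq. 4: multiply each frame by the window
    frames = np.stack([y[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return frames

segment = np.random.randn(16000)               # a dummy 1-second segment
frames = preprocess(segment)
print(frames.shape)                            # (39, 800)
```

Each 1-second segment thus yields 39 windowed frames, which are then passed on to MFCC or LPCC extraction.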

3.2 Feature Extraction

3.2.1 Mel-Frequency Cepstral Coefficients (MFCC)
MFCCs are among the more popular parameters used by researchers in the acoustic research domain. They have the benefit of capturing the important characteristics of audio signals. Cepstral analysis calculates the inverse Fourier transform of the logarithm of the power spectrum of the cry signal; the calculation of the mel cepstral coefficients is illustrated in Figure 3.

Figure 3. Extraction of MFCC from Audio Signals

The cry signal must first be divided into overlapping blocks (in our experiment the blocks are Hamming-windowed), each of which is then transformed into its power spectrum. Human perception of the frequency content of sounds does not follow a linear scale: it is approximately linear with logarithmic frequency beyond about 1000 Hz. The mel frequency warping is most conveniently done by utilizing a filter bank with filters centered according to mel frequencies, as shown in Figure 4.

Figure 4. Mel Spaced Filter Banks

The mapping is usually done using the approximation (where fmel is the perceived frequency in mels):

fmel(f) = 2595 × log(1 + f/700)    (5)

These vectors were normalized so that all the values within a given 1-second sample lie between ±1, in order to decrease their dynamic range [4].

3.2.2 Linear Prediction Cepstral Coefficients (LPCC)
LPCCs are Linear Prediction Coefficients (LPC) in the cepstrum domain. The basis of linear prediction analysis is that a given speech sample can be approximated by a linear combination of the past p speech samples [9]. The coefficients can be calculated by either the autocorrelation or the covariance method, directly from the windowed portion of the audio signal [10]. In this study the LPC coefficients were calculated using the autocorrelation method with the Levinson-Durbin recursion algorithm. The LPCCs can then be derived from the LPCs using the following recursion:

c0 = r(0)    (6)
cm = am + Σ(k=1..m−1) (k/m)·ck·am−k,  for 1 ≤ m ≤ p    (7)
cm = Σ(k=m−p..m−1) (k/m)·ck·am−k,  for m > p    (8)

where r is derived from the LPC autocorrelation matrix, p is the so-called prediction order, am represents the mth LPC coefficient, and cm is the mth LPCC; the recursion is run up to the number of LPCCs that need to be calculated.

3.2.3 Delta coefficients
It has been shown that system performance may be enhanced by adding time derivatives to the static parameters [10]. The first-order derivatives are referred to as delta features and can be calculated as shown in formula (9):

dt = Σ(θ=1..Θ) θ·(ct+θ − ct−θ) / (2·Σ(θ=1..Θ) θ²)    (9)

where dt is the delta coefficient at time t, computed in terms of the corresponding static coefficients ct−Θ to ct+Θ, and Θ is the size of the delta window.

3.3 Pattern Matching

3.3.1 Artificial Neural Network (ANN) Approach
Neural networks are defined as systems which have the capability to model highly complex nonlinear problems and are composed of many simple processing elements that operate in parallel and whose function is determined by the network's structure, the strength of its connections, and the processing carried out by the processing elements or nodes. The feedforward multilayer perceptron (MLP) network architecture using a backpropagation learning algorithm is one of the most popular neural networks. It consists of at least three layers of neurons: an input layer, one or more hidden layers and an output layer. Processing elements or neurons in the input layer only act as buffers for distributing the input signal xi to neurons in the hidden layer. The hidden and output layers have a non-linear activation function. Backpropagation is a supervised learning algorithm for calculating the change of weights in the network. In the forward pass, the weights are fixed and the input vector is propagated through the network to produce an output. An output error is calculated from the difference between the actual output and the target; this error is propagated backwards through the network to make changes to the weights [19].

For this work, a feed-forward multilayer perceptron using full connections between adjacent layers was trained and tested with the input patterns described above in a supervised manner with the scaled conjugate gradient backpropagation learning algorithm, since it has shown better results in classifying infant cries than other NN algorithms [3, 20]. The number of computations in each iteration is significantly reduced, mainly because no line search is required. Different feature sets were used in order to
determine the set that results in the optimum recognition rate. Sets of 12 MFCC, 12 MFCC + 1st derivative, 12 MFCC + 1st & 2nd derivatives, 20 MFCC, 16 LPCC and 16 LPCC + 1st derivative were used. Two frame lengths, 50 ms and 100 ms, were used in order to determine the choice that gives the best results. Two architectures were investigated in this study: first with one hidden layer, then with two hidden layers. The number of hidden neurons was varied to obtain optimum performance. The number of neurons in the input layer is decided by the number of elements in the feature vector. The output layer has two neurons, one for each cry class. The activation function used in all layers in this work is the hyperbolic tangent sigmoid transfer function 'TANSIG'. Training stops when either of these conditions occurs:
• The maximum number of epochs (repetitions) is reached. We established 500 epochs at maximum because above this value the convergence curve does not show any significant change.
• The network performance is minimized to the goal, i.e. the mean squared error is less than 0.00001.

Figure 5. MLP Neural Network

3.3.2 Hidden Markov Model (HMM) approach
The continuous HMM is chosen over its discrete counterpart since it avoids the loss of critical signal information during the discrete symbol quantization process and provides better modeling of continuous signal representations such as the audio cry signal [12]. However, the computational complexity of CHMMs is higher than that of DHMMs, and training normally takes more time [11].
An HMM is specified by the following:
• N, the number of states in the HMM;
• πi = P(si), the prior probability of state si being the first state of a state sequence. The collection of πi forms the vector π = {π1, ..., πN};
• aij, the transition coefficients, giving the probability of going from state si immediately to state sj. The collection of aij forms the transition matrix A;
• the emission probability of a certain observation o when the model is in state sj. The observation o can be either discrete or continuous [11]. However, in this study a continuous HMM is applied, whereby the continuous observation density bj(o) indicates the probability density function (pdf) over the observation space for the model being in state sj.

For the continuous HMM, the observations are continuous and the output likelihood of an HMM for a given observation vector can be expressed as a weighted sum of M multivariate Gaussian probability densities [15], as given by the equation

bj(ot) = Σ(m=1..M) Cjm·N(ot; μjm, Σjm),  for 1 ≤ j ≤ N    (10)

where ot is a d-dimensional feature vector, M is the number of Gaussian mixture components, m is the mixture index (1 ≤ m ≤ M), and Cjm is the mixture weight for the mth component, which satisfies the constraints
• Cjm > 0, and
• Σ(m=1..M) Cjm = 1, for 1 ≤ j ≤ N,
where N is the number of states, j is the state index, μjm is the mean vector of the mth Gaussian probability density function, and N(·) is the most efficient density function widely used without loss of generality [11, 15], which is defined as

N(o; μ, Σ) = (1 / ((2π)^(d/2)·|Σ|^(1/2)))·exp(−(1/2)·(o − μ)^T·Σ^(−1)·(o − μ))    (11)

In the training phase, in order to derive the models for both 'pain' and 'non-pain' cries (i.e. to derive the λpain and λnon-pain models respectively), we first make a rough guess about the parameters of an HMM; based on these initial parameters, more accurate parameters can be found by applying the Baum-Welch algorithm. This mainly requires solving the learning problem of HMMs, as highlighted by Rabiner in his HMM tutorial [13]. The re-estimation procedure is sensitive to the selection of initial parameters. The model topology is specified by an initial transition matrix. The state means and variances can be initialized by clustering the training data into as many clusters as there are states in the model with the K-means clustering algorithm and estimating the initial parameters from these clusters [16].
The basic idea behind the Baum-Welch algorithm (also known as the Forward-Backward algorithm) is to iteratively re-estimate the parameters of a model so as to obtain a new model with a better set of parameters λ', which satisfies the following criterion for the observation sequence O:

P(O | λ') ≥ P(O | λ)    (12)

where λ denotes the given parameters. By setting λ = λ' at the end of every iteration and re-estimating a better parameter set, the probability of O can be improved until some threshold is reached. The re-estimation procedure is guaranteed to find a local optimum. The flow chart of the training procedure is shown in Figure 6.
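The per-state output likelihood of Equations (10)-(11) — a weighted sum of M multivariate Gaussian densities — can be sketched as follows. This is an illustrative NumPy sketch assuming diagonal covariance matrices; the names and the toy mixture parameters are ours, not values from the paper:

```python
import numpy as np

def emission_likelihood(o, weights, means, variances):
    """b_j(o) = sum_m C_jm * N(o; mu_jm, Sigma_jm) for one state j (Eq. 10),
    with each N(.) a d-dimensional Gaussian with diagonal covariance (Eq. 11)."""
    d = o.shape[0]
    b = 0.0
    for c, mu, var in zip(weights, means, variances):
        norm = 1.0 / np.sqrt((2.0 * np.pi) ** d * np.prod(var))
        b += c * norm * np.exp(-0.5 * np.sum((o - mu) ** 2 / var))
    return b

# Toy state with M = 2 mixture components over d = 2 features;
# the mixture weights satisfy C_j1 + C_j2 = 1 as required.
weights = np.array([0.6, 0.4])
means = np.array([[0.0, 0.0], [1.0, 1.0]])
variances = np.ones((2, 2))
b = emission_likelihood(np.zeros(2), weights, means, variances)
```

In a full CDHMM these per-state likelihoods feed the forward-backward recursions on which the Baum-Welch re-estimation described above is built.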

Figure 6. Training Flow Chart

Once the system training is completed, system testing is carried out to investigate the accuracy of the recognition system. The classification is done with a maximum-likelihood classifier: the model with the highest probability with respect to the observation sequence, i.e. the one that maximizes P(Model | Observations), is the natural choice:

λ* = arg max over λ of P(O | λ)    (13)

This is called the maximum likelihood (ML) estimate. The best model in the maximum likelihood sense is therefore the one that is most probable to generate the given observations.
In this study, separate untrained samples from each class were fed into the HMM classifier and compared against the trained 'pain' and 'non-pain' models. This testing process mainly requires solving the evaluation problem of HMMs, as highlighted by Rabiner [13]. The classification follows this algorithm:

If P(test_sample | λpain) > P(test_sample | λnon-pain)
then test_sample is classified 'pain'
else test_sample is classified 'non-pain'    (14)

4. Experimental Results
A total of 700 cry segments of 1 second duration is used during system training. Each feature vector is extracted at 50 ms windows with 75% overlap between adjacent frames. The size of the input data frame used was 50 ms and 100 ms, in order to determine which would yield the best results for this application. These settings were used in all the experiments reported in this paper, which compare the performance of both types of infant cry recognition systems described above utilizing 12 MFCCs (with 26 filter banks) and 16 LPCCs (with 12th-order LPC) as acoustic features. The effect of the dynamic coefficients of both features was also investigated.
After optimizing the systems' parameters by performing some preliminary experiments, it was found that a fully connected (ergodic) HMM topology with five states and eight Gaussian mixtures per state is the best choice to model the cry utterances, because it scored the best identification rate. On the other hand, a fully connected feedforward MLP NN trained with backpropagation with the scaled conjugate gradient algorithm, with one hidden layer and five hidden neurons, gave the best recognition rates. The best results for both systems were obtained using a 50 ms frame length.
The results on the training and testing datasets are evaluated using standard performance measures, defined as follows:
• The classification accuracy, which is calculated by taking the percentage of correctly classified test samples:

System Accuracy (%) = (xcorrect / T) × 100%    (15)

where xcorrect is the total number of correctly classified test samples and T is the overall total number of test samples.
• The classification error rate (CER), which is defined as follows [14]:

CER (%) = ((fp + fn) / T) × 100%    (16)

• The F-measure, which is defined as follows [14]:

Precision = tp / (tp + fp)    (17)
Recall = tp / (tp + fn)    (18)
F-measure = (2 × Precision × Recall) / (Precision + Recall)    (19)

The F-measure varies from 0 to 1, with a higher F-measure indicating better performance [14]. Here tp is the number of pain test samples successfully classified, fp is the number of misclassified non-pain test samples, fn is the number of misclassified pain test samples, and tn is the number of correctly classified non-pain test samples.
Both systems performed optimally with 20 MFCCs and a 50 ms window. For the NN-based system, the hierarchy of one hidden layer with 5 hidden nodes proved to be the best, while an ergodic HMM with 5 states and 8 Gaussians per state resulted in the best recognition rates. The optimum recognition rates obtained were 96.1% for the HMM trained with 20 MFCC, whereas for the ANN the highest recognition rate was 79%, also using 20 MFCC. For the systems trained with LPCCs, the best recognition rate obtained for the HMM was 78.5%, using 16 LPCC+DEL, 10 Gaussians, 5 states and a 50 ms window, whereas for the ANN it was 70.2%, using 16 LPCC and a 50 ms window.
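The measures of Equations (15)-(19) follow directly from the four confusion counts tp, fp, fn and tn. Below is a small sketch; the counts used are made-up for illustration and are not the paper's results:

```python
def evaluate(tp, fp, fn, tn):
    """Accuracy (Eq. 15), classification error rate (Eq. 16) and
    F-measure (Eqs. 17-19) of a pain/non-pain classifier."""
    total = tp + fp + fn + tn                   # T, the total number of test samples
    accuracy = (tp + tn) / total * 100.0        # Eq. 15
    cer = (fp + fn) / total * 100.0             # Eq. 16: misclassified / T
    precision = tp / (tp + fp)                  # Eq. 17
    recall = tp / (tp + fn)                     # Eq. 18
    f_measure = 2 * precision * recall / (precision + recall)  # Eq. 19
    return accuracy, cer, f_measure

# Hypothetical confusion counts on a 181-sample test set (illustration only)
acc, cer, f = evaluate(tp=80, fp=5, fn=10, tn=86)
print(round(acc, 2), round(cer, 2), round(f, 2))    # 91.71 8.29 0.91
```

Note that accuracy and CER always sum to 100%, which matches the paired values reported in Table 1 (e.g. 96.1% accuracy against a 3.87% CER, up to rounding).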
Table 1: Overall System Accuracy

Performance    MFCC            LPCC
Measures       ANN    HMM      ANN    HMM
F-measure      0.83   0.97     0.75   0.84
CER %          20.99  3.87     29.83  21.55
Accuracy %     79     96.1     70.2   78.5

Figure 7. Comparison between ANN and HMM using MFCC

Figure 8. Comparison between ANN and HMM using LPCC

5. Conclusion and Future Work

In this paper we applied both ANN and CDHMM to identify infant pain and non-pain cries. From the obtained results, it is clear that the HMM proved to be the superior classification technique in the task of recognizing and discriminating between infants' pain and non-pain cries, with a 96.1% classification rate using 20 MFCCs extracted at 50 ms. From our experiments, the optimal parameters of the CHMM developed for this work were 5 states and 8 Gaussians per state, whereas the ANN architecture that yielded the highest recognition rates had one hidden layer and 5 hidden nodes. The results also show that in general the system accuracy is better with MFCC than with LPCC features.

References
[1] J. E. Drummond, M. L. McBride, "The Development of Mother's Understanding of Infant Crying", Clinical Nursing Research, Vol. 2, pp. 396-441, 1993.
[2] Sandra E. Barajas-Montiel, Carlos A. Reyes-Garcia, "Identifying Pain and Hunger in Infant Cry with Classifiers Ensembles", Proceedings of the 2005 International Conference on Computational Intelligence for Modelling, Control and Automation, and International Conference on Intelligent Agents, Web Technologies and Internet Commerce, Vol. 2, pp. 770-775, 2005.
[3] Yousra Abdulaziz, Sharrifah Mumtazah Syed Ahmad, "Infant Cry Recognition System: A Comparison of System Performance based on Mel Frequency and Linear Prediction Cepstral Coefficients", Proceedings of the International Conference on Information Retrieval and Knowledge Management (CAMP 10), Selangor, Malaysia, pp. 260-263, March 2010.
[4] M. Petroni, A. S. Malowany, C. C. Johnston, B. J. Stevens, "A Comparison of Neural Network Architectures for the Classification of Three Types of Infant Cry Vocalizations", Proceedings of the IEEE 17th Annual Conference in Engineering in Medicine and Biology Society, Vol. 1, pp. 821-822, 1995.
[5] H. E. Baeck, M. N. Souza, "Study of Acoustic Features of Newborn Cries that Correlate with the Context", Proceedings of the IEEE 23rd Annual Conference in Engineering in Medicine and Biology, Vol. 3, pp. 2174-2177, 2001.
[6] Jose Orozco Garcia, Carlos A. Reyes Garcia, "Mel-Frequency Cepstrum Coefficients Extraction from Infant Cry for Classification of Normal and Pathological Cry with Feed-forward Neural Networks", Proceedings of the International Joint Conference on Neural Networks, Vol. 4, pp. 3140-3145, 2003.
[7] Cano-Ortiz, D. I. Escobedo-Becerro, "Clasificacion de Unidades de Llanto Infantil Mediante el Mapa Auto-Organizado de Kohonen" [Classification of Infant Cry Units with the Kohonen Self-Organizing Map], I Taller AIRENE sobre Reconocimiento de Patrones con Redes Neuronales, Universidad Católica del Norte, Chile, pp. 24-29, 1999.
[8] I. Suaste, O. F. Reyes, A. Diaz, C. A. Reyes, "Implementation of a Linguistic Fuzzy Relational Neural Network for Detecting Pathologies by Infant Cry Recognition", Advances in Artificial Intelligence - IBERAMIA, Vol. 3315, pp. 953-962, 2004.
[9] Jose Orozco, Carlos A. Reyes-Garcia, "Implementation and Analysis of Training Algorithms for the Classification of Infant Cry with Feed-forward Neural Networks", IEEE International Symposium on Intelligent Signal Processing, Puebla, Mexico, pp. 271-276, 2003.
[10] Eddie Wong, Sridha Sridharan, "Comparison of Linear Prediction Cepstrum Coefficients and Mel-Frequency Cepstrum Coefficients for Language Identification", Proceedings of the International Symposium on Intelligent Multimedia, Video and Speech Processing, Hong Kong, May 24, 2001.
[11] Hesham Tolba, "Comparative Experiments to Evaluate the Use of a CHMM-Based Speaker Identification Engine for Arabic Spontaneous Speech", 2nd IEEE International Conference on Computer Science and Information Technology (ICCSIT), pp. 241-245, 2009.
[12] Joseph Picone, "Continuous Speech Recognition Using Hidden Markov Models", IEEE ASSP Magazine, pp. 26-41, 1991.
[13] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition",
(IJCNS) International Journal of Computer and Network Security, 105
Vol. 2, No. 6, June 2010

Proceedings of the IEEE, Vol. 77, No. 2, pp. 257–286, 1989.
[14] Yang Liu, Elizabeth Shriberg, "Comparing Evaluation Metrics for Sentence Boundary Detection," in Proc. IEEE Int. Conf., Vol. 4, pp. 185–188, 2007.
[15] Jun Cai, Ghazi Bouselmi, Yves Laprie, Jean-Paul
Haton, “Efficient likelihood evaluation and dynamic
Gaussian selection for HMM-based speech
recognition”, Computer Speech and Language, vol.23,
pp. 47–164, 2009.
[16] B. Resch, “Hidden Markov Models, A tutorial of the
course computational intelligence”, Signal Processing
and Speech Communication Laboratory.
[17] Dror Lederman, Arnon Cohen, Ehud Zmora, “On
the use of Hidden Markov Models in Infants' Cry
Classification”, 22nd IEEE Convention, pp 350 – 352,
2002.
[18] Mohamad Adnan Al-Alaoui, Lina Al-Kanj, Jimmy
Azar, Elias Yaacoub, “Speech Recognition Using
Artificial Neural Networks And Hidden Markov
Models”, The 3rd International Conference on Mobile and
Computer Aided Learning, IMCL Conference, Amman,
Jordan, 16-18 April 2008.
[19] Sawit Kasuriya, Chai Wutiwiwatchai, Chularat
Tanprasert, “Comparative Study of Continuous Hidden
Markov Models (CHMM) and Artificial Neural
Network (ANN) on Speaker Identification System”,
International Journal of Uncertainty, Fuzziness and
Knowledge-Based Systems (IJUFKS), Vol. 9, Issue: 6,
pp. 673-683, 2001.
[20] José Orozco, Carlos A. Reyes García, "Detecting
Pathologies from Infant Cry Applying Scaled
Conjugate Gradient Neural Networks”, Proceedings of
European Symposium on Artificial Neural Networks,
pp 349 – 354, 2003.

Scalable ACC-DCT Based Video Compression Using Up/Down Sampling Method

G. Suresh (1), P. Epsiba (2), Dr. M. Rajaram (3), Dr. S.N. Sivanandam (4)

(1) Research Scholar, CAHCET, Vellore, India. geosuresh@gmail.com
(2) Lecturer, Department of ECE, CAHCET, Vellore, India. talk2epsi@yahoo.co.in
(3) Professor & Head, Department of EEE, GCE, Thirunelveli, India. rajaramgct@rediffmail.com
(4) Professor & Head, Department of CSE, PSG Tech, Coimbatore, India.

Abstract: In this paper, we propose a scalable ACC-DCT based video compression scheme using an up/down sampling approach, which aims to fully exploit the pertinent temporal redundancy in video frames to improve compression efficiency with less processing complexity. Generally, a video signal has high temporal redundancy due to the high correlation between successive frames. This redundancy has not been exploited enough by current video compression techniques. Our model consists of a 3D-to-2D transformation of the video frames that allows exploring the temporal redundancy of the video using 2D transforms, avoiding the computationally demanding motion compensation step. This transformation turns the spatio-temporal correlation of the video into high spatial correlation: the technique transforms each group of pictures (GOP) into one picture (the Accordion representation) with high spatial correlation. The model also incorporates an up/down sampling method (SVC) based on a combination of forward and backward discrete cosine transform (DCT) coefficients. As this kernel has various symmetries enabling efficient computation, a fast algorithm for the DCT-based scalability concept is also proposed. For further improvement of the scalable performance, an adaptive filtering method is introduced, which applies different weighting parameters to the DCT coefficients. The decorrelation of the resulting pictures by the DCT yields efficient energy compaction, and therefore a high video compression ratio. Many experimental tests have been conducted to prove the method's efficiency, especially at high bit rates and with slow-motion video. The proposed method is well suited for video surveillance applications and for embedded video compression systems.

Keywords: SVC, Group of Pictures (GOP), ACC-DCT, Up/Down Sampling.

1. Introduction
The main objective of video coding in most video applications is to reduce the amount of video data for storage or transmission purposes without affecting the visual quality. The desired video performance depends on application requirements in terms of quality, disk capacity and bandwidth. For portable digital video applications, highly integrated real-time video compression and decompression solutions are more and more required. Currently, motion estimation based encoders are the most widely used in video compression. Such encoders exploit inter-frame correlation to provide more efficient compression. However, the motion estimation process is computationally intensive; its real-time implementation is difficult and costly [1][2]. This is why the motion-based video coding standard MPEG [12] was primarily developed for stored video applications, where the encoding process is typically carried out off-line on powerful computers. It is therefore less appropriate as a real-time compression process for a portable recording or communication device (video surveillance cameras and fully digital video cameras). In these applications, efficient low cost/complexity implementation is the most critical issue. Thus, research turned towards the design of new coders better adapted to the requirements of new video applications. This led some researchers to exploit 3D transforms in order to capture temporal redundancy. Coders based on a 3D transform produce a video compression ratio close to that of motion estimation based coding, with less complex processing [3][4][5][6]. However, 3D transform based video compression methods treat the redundancies in the 3D video signal uniformly, which can reduce their efficiency, since the variation of pixel values in the spatial and temporal dimensions is not uniform and the redundancies do not have the same pertinence. Often the temporal redundancies are more relevant than the spatial ones [3]. It is possible to achieve more efficient compression by further exploiting the redundancies in the temporal domain; this is the basic purpose of the proposed method. The proposed method consists of projecting the temporal redundancy of each group of pictures into the spatial domain, to be combined with the spatial redundancy in one representation with high spatial correlation. The obtained representation is then compressed as a still image with a JPEG coder. The rest of the paper is organized as follows: Section 2 gives an overview of the basic definition of the three-dimensional DCT. Section 3 gives the basics of the proposed method and the modifications made to improve the compression ratio and reduce the complexity. Experimental results are discussed in Section 4. Section 5 concludes the paper with a short summary.
2. Definitions

2.1. Three-dimensional DCT
The discrete cosine transform (DCT) [4][7] has energy packing efficiency close to that of the optimal Karhunen-Loeve transform. In addition, it is signal independent and can be computed efficiently by fast algorithms. For these reasons, the DCT is widely used in image and video compression. Since the common three-dimensional DCT kernel is separable, the 3D DCT is usually obtained by applying the one-dimensional DCT along each of the three dimensions. Thus, the N x N x N 3D DCT can be defined as

C(u,v,w) = α(u)α(v)α(w) Σx Σy Σt p(x,y,t) cos[(2x+1)uπ/2N] cos[(2y+1)vπ/2N] cos[(2t+1)wπ/2N],

where x, y, t run from 0 to N−1, α(0) = √(1/N) and α(k) = √(2/N) for k > 0.

3. Proposed Method
The fundamental idea is to represent a video sequence in a highly correlated form; thus we need to expose both the spatial and the temporal redundancy in the video signal. The input of our encoder is the video cube, i.e. a number of frames. This video cube is decomposed into temporal frames, which are gathered into one 2D frame. The next step consists of coding the obtained frame. Normally, the variation of the 3D video signal is much smaller in the temporal domain than in the spatial domain; the pixels of a 3D video signal are more correlated in the temporal domain [3].
A single pixel is denoted p(x,y,t), where p is the pixel value, x and y are the pixel's spatial coordinates, and t is the time instant. The following assumption is the basis of the proposed model, in which we try to place pixels that have a very high temporal correlation in spatial adjacency:

|p(x,y,t) − p(x,y,t+1)| < |p(x,y,t) − p(x+1,y,t)|    (3)

To exploit this assumption, the temporal decomposition of the 3D video signal is carried out; the temporal and spatial decomposition of one 8x8x8 video cube [8][9][10] is presented in Figure 1. The Accordion representation (spatial adjacency) is obtained from this basic assumption.

Figure 1. Temporal and spatial decomposition of one 8x8x8 video cube.

The Accordion representation is formed by collecting the video cube pixels that have the same column rank; these columns have a stronger correlation than spatially adjacent ones. To improve the correlation in the representation, we reverse the direction of the even frames. This places in spatial adjacency the pixels having the same coordinates in the different frames of the video cube. The example in Figure 2 shows that the Accordion representation also minimizes the distance between pixels that are correlated in the source.

Figure 2. Accordion Representation Example

Building on the Accordion representation, a new concept originating from the scalable video coding (SVC) technique [16][17][18][19][20] is adopted: an up/down sampling method using the DCT, whose kernel has a large degree of symmetry enabling efficient computation. Thus, a fast algorithm for the up/down sampling method is also included in our proposed method. For a further performance improvement, an adaptive filtering method for DCT up/down sampling is applied [13][14][15], which applies different weighting parameters to each DCT coefficient. We then introduce a quantization model and entropy coding (RLE/Huffman) for further performance improvement of the proposed system. The overall construction of the proposed model is shown in Figure 3.

Figure 3. Complete Constructional Details of the Proposed Model
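As a concrete illustration of Section 2.1, the separable 3-D DCT can be computed by applying a 1-D DCT along each of the three dimensions in turn. The following pure-Python sketch (unoptimized, no fast algorithm; function names are ours) shows the idea:

```python
import math

def dct1d(v):
    """Orthonormal 1-D DCT-II of a list of samples."""
    n = len(v)
    out = []
    for u in range(n):
        s = sum(v[x] * math.cos(math.pi * (2 * x + 1) * u / (2 * n)) for x in range(n))
        alpha = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
        out.append(alpha * s)
    return out

def dct3d(cube):
    """Separable N x N x N 3-D DCT of a cube of nested lists indexed [x][y][t]:
    the 1-D DCT is applied along t, then y, then x."""
    n = len(cube)
    # along t (innermost lists)
    a = [[dct1d(cube[x][y]) for y in range(n)] for x in range(n)]
    # along y
    b = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for x in range(n):
        for w in range(n):
            col = dct1d([a[x][y][w] for y in range(n)])
            for v in range(n):
                b[x][v][w] = col[v]
    # along x
    c = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for v in range(n):
        for w in range(n):
            col = dct1d([b[x][v][w] for x in range(n)])
            for u in range(n):
                c[u][v][w] = col[u]
    return c
```

For a constant cube, all of the energy is packed into the single DC coefficient C(0,0,0), which is the energy-compaction property the paper relies on.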

3.1. Algorithm

1. Decompose the video into GOPs (groups of pictures/frames).
2. (a) Spatial adjacency separation (Accordion representation) of the GOP, where N is the GOP size and L and H are the frame width and height:

For x = 0 to (L*N)-1 do
  For y = 0 to H-1 do
    If (floor(x/N) mod 2) != 0 then
      n = (N-1) - (x mod N)
    else
      n = x mod N
    end if
    I_ACC(x, y) = I_n(floor(x/N), y)
  end for
end for

with, equivalently, n = (floor(x/N) mod 2)(N-1) + (1 - 2(floor(x/N) mod 2))(x mod N).

(b) Inverse transformation (recovering the N frames):

For n = 0 to N-1 do
  For x = 0 to L-1 do
    For y = 0 to H-1 do
      If (x mod 2) != 0 then
        X_ACC = x*N + (N-1) - n
      else
        X_ACC = x*N + n
      end if
      I_n(x, y) = I_ACC(X_ACC, y)
    end for
  end for
end for

with, equivalently, X_ACC = x*N + (x mod 2)(N-1) + n(1 - 2(x mod 2)).

3. Decompose the resulting frame into 8x8 blocks.
4. Apply the down-sampling/up-sampling filter with the DCT.
5. Quantize the obtained coefficients.
6. Zig-zag scan the obtained coefficients.
7. Entropy (Huffman) code the coefficients.

Figure 4. GUI Model
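Step 2 of the algorithm maps each GOP to one wide frame and back. The following Python sketch is a direct transcription (variable names are ours; frames are nested lists indexed [frame][row][column]):

```python
def accordion(frames):
    """Forward Accordion transform (step 2(a)): N frames of size H x L become
    one H x (L*N) image. Output column x takes column floor(x/N) of frame n,
    with the frame order reversed inside every odd column group."""
    N = len(frames)
    H = len(frames[0])
    L = len(frames[0][0])
    acc = [[0] * (L * N) for _ in range(H)]
    for x in range(L * N):
        if (x // N) % 2 != 0:
            n = (N - 1) - (x % N)   # odd column group: reversed frame order
        else:
            n = x % N
        for y in range(H):
            acc[y][x] = frames[n][y][x // N]
    return acc

def inverse_accordion(acc, N):
    """Inverse transform (step 2(b)): rebuild the N frames from the image."""
    H = len(acc)
    L = len(acc[0]) // N
    frames = [[[0] * L for _ in range(N * 0 + H)] for _ in range(N)]
    for n in range(N):
        for x in range(L):
            x_acc = x * N + ((N - 1) - n if x % 2 != 0 else n)
            for y in range(H):
                frames[n][y][x] = acc[y][x_acc]
    return frames
```

A round trip through the forward and inverse transforms recovers the original GOP exactly, which is what makes the representation usable inside a codec.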

4. Experimental Results

This section verifies the performance of the proposed low-complexity scalable ACC-DCT based video compression model. We summarize the experimental results with some analysis and comments. Evaluating the proposed method for different GOP values, the best compression rate is obtained with GOP = 8. Figure 4 shows the GUI model created to integrate the encoder and decoder sections. Figure 5 shows the progress of frame separation from the video sequence. The encoder model is shown in Figure 6, and Figure 7 is an example Accordion representation for one GOP video cube. Figure 8 shows the GUI for the decoding and validation process, and Figure 9 the GUI for reconstructed-output validation; the history of the entire simulation is stored in the variable ans. Finally, Figure 10 plots frame number vs. PSNR (dB), Figure 11 shows the orientation flow estimation of the sample sequence, and the strength of our proposed model is compared with other leading standards in Figure 13.

Figure 5. Frame Separation Model

Figure 6. Encoding Model
Figure 7. Accordion Representation example (Hall Monitor)

The simulation history (the MATLAB workspace variable ans) printed for one run:

    eventdata: []
    handles: [1x1 struct]
    q: 1
    str1: 'frame'
    str2: '.bmp'
    Bitstream: {[51782x1 double]}
    Bitst: 51782
    j1: 2
    f: 1
    filename_1: '1.bmp'
    Image1: [120x960 double]
    row: 244
    col: 356
    out: [120x960 double]
    Enc: [120x960 double]
    r: 120
    c: 960
    Input_filesize: 921600
    i: 120
    j: 960
    QEnc: [120x960 double]
    ZQEnc: [1x115200 double]
    Level: 8
    Speed: 0
    xC: {[1x115200 double]}
    y: [51782x1 double]
    Res: [2x4 double]
    cs: 4
    cc: 51782
    dd: 51782
    Compresed_file_size: 51782
    Comp_RATIO: 71.1908
    enctime: 345.6888

Figure 8. GUI for Decoding and Validation Process

Figure 9. GUI for Reconstructed output validation

Figure 10. Frame number vs. PSNR (dB) (Hall Monitor), comparing the proposed method with MPEG-4 over frames 0-100 (PSNR axis: 31-38 dB)

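The PSNR values plotted in Figure 10 follow the standard definition; a small sketch, assuming 8-bit pixels (peak value 255) and frames given as lists of pixel rows:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized frames."""
    n = 0
    se = 0.0
    for row_o, row_r in zip(original, reconstructed):
        for a, b in zip(row_o, row_r):
            se += (a - b) ** 2
            n += 1
    mse = se / n
    # Identical frames have zero MSE, hence infinite PSNR.
    return float('inf') if mse == 0 else 10.0 * math.log10(peak * peak / mse)
```

A uniform error of 16 grey levels, for example, gives a PSNR of roughly 24 dB, which matches the scale of the 31-38 dB range seen in Figure 10 for much smaller coding errors.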
Motion analysis, such as vector movement, frame-block variation with time, speed analysis and histogram analysis, is plotted and presented in Figure 12. To conclude the performance improvement of our proposed method, the PSNR value is calculated for different bit rates and compared with other popular methods, as shown in Figure 13. For comparison with representative lossy compression algorithms, two performance measures were used, as described in equations (4) and (5), where Tc is the time consumed to compress one frame.

Figure 11. Orientation Flow Estimation

Figure 12. Motion analysis of the video sample: (a) speed analysis, (b) histogram plot

Figure 13. Comparison of bit rate vs. PSNR (dB) for different standards

Table I shows the compression ratio and speed for a total of ninety-six input video sequences. The proposed scheme shows bit-rate savings of 3.19% on average over other schemes, and the average compression ratio is effectively increased compared to other methods.

Table 1: Comparison of Compression Ratio and Speed

Methods     CR%       CS (MB/sec)
                      Average    Worst
Previous    66.96     40.96      37.56
Proposed    71.1908   41.47      36.18
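Equations (4) and (5) are not reproduced in the source; as a hedged stand-in, the two measures of Table 1 can be computed from the frame size and Tc along the following lines (the exact formulas here are plausible assumptions, not the paper's definitions):

```python
def compression_metrics(original_bytes, compressed_bytes, t_c_seconds):
    """Compression ratio (CR%) as percentage size reduction, and compression
    speed (CS, MB/sec) from Tc, the time consumed to compress one frame."""
    cr_percent = (1.0 - compressed_bytes / original_bytes) * 100.0
    cs_mb_per_sec = original_bytes / t_c_seconds / 1e6
    return cr_percent, cs_mb_per_sec
```

For instance, compressing a 1 MB frame to 250 kB in 20 ms would give CR = 75% and CS = 50 MB/sec under these assumed definitions.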
5. Conclusion
In this paper, we successfully extended and implemented a scalable ACC-DCT based video compression scheme using an up/down sampling algorithm in MATLAB, and provided experimental results to show that our method outperforms the existing methods. The proposed encoding algorithm not only improves the coding efficiency but also reduces complexity. As discussed in the experimental section, the proposed method provides good rate-PSNR performance with good base-layer quality and low enhancement-layer quality; when an SVC coding scenario meets these circumstances, the proposed method should be useful. With the apparent gains in compression efficiency, we foresee that the proposed method could open new horizons in the video compression domain: it strongly exploits temporal redundancy with a minimum of processing complexity, which facilitates its implementation in embedded video systems. It presents some useful functions and features which can be exploited in domains such as video surveillance. At high bit rates, it gives the best compromise between quality and complexity, and it provides better performance than MJPEG and MJPEG2000 at almost all bit-rate values. Above 2000 kb/s, our compression method's performance becomes comparable to the MPEG-4 standard, especially for low-motion sequences.
A further development of this model could be to combine the Accordion representation with other transformations, such as the wavelet transform, and to estimate the degraded model of the compressed video image using a neural network. The neural network's strong predictive ability and high fault tolerance could improve the quality of the compressed image, improving the quantization parameter prediction in the video encoding and completing the video image reconstruction.

References
[1] E. Q. L. X. Zhou and Y. Chen, "Implementation of H.264 decoder on general purpose processors with media instructions," in SPIE Conf. on Image and Video Communications and Processing, Santa Clara, CA, pp. 224–235, Jan. 2003.
[2] M. B. T. Q. N. A. Molino, F. Vacca, "Low complexity video codec for mobile video conferencing," in Eur. Signal Processing Conf. (EUSIPCO), Vienna, Austria, pp. 665–668, Sept. 2004.
[3] S. B. Gokturk and A. M. Aaron, "Applying 3D methods to video for compression," in Digital Video Processing (EE392J) Projects, Winter Quarter, 2002.
[4] T. Fryza, Compression of Video Signals by 3D-DCT Transform, Diploma thesis, Institute of Radio Electronics, FEKT Brno University of Technology, Czech Republic, 2002.
[5] G. M.P. Servais, "Video compression using the three dimensional discrete cosine transform," in Proc. COMSIG, pp. 27–32, 1997.
[6] R. A. Burg, "A 3D-DCT real-time video compression system for low complexity single chip VLSI implementation," in the Mobile Multimedia Conf. (MoMuC), 2000.
[7] N. Ahmed, T. Natarajan, K.R. Rao, "Discrete cosine transform," IEEE Transactions on Computers, pp. 90–93, 1974.
[8] T. Fryza and S. Hanus, "Video signals transparency in consequence of 3D-DCT transform," in Radioelektronika 2003 Conference Proceedings, Brno, Czech Republic, pp. 127–130, 2003.
[9] N. Boinovi and J. Konrad, "Motion analysis in 3D DCT domain and its application to video coding," Vol. 20, pp. 510–528, 2005.
[10] E. Y. Lam and J. W. Goodman, "A mathematical analysis of the DCT coefficient distributions for images," Vol. 9, pp. 1661–1666, 2000.
[11] Information Technology—Coding of Moving Pictures and Associated Audio for Digital Storage Media at up to about 1.5 Mbit/s: Video, ISO/IEC 13818-2 (MPEG-2 Video), 1993.
[12] MPEG-4 Video Verification Model 8.0, ISO/IEC JTC1/SC29/WG11, MPEG97/N1796, 1997.
[13] Joint Scalable Video Model JSVM-5, Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG, JVT-S202, Geneva, Switzerland, 2006.
[14] S. Sun, Direct Interpolation for Upsampling in Extended Spatial Scalability, Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG, JVT-P012, Poznan, Poland, 2005.
[15] S. Sun, J. Reichel, E. Francois, H. Schwarz, M. Wien, and G. J. Sullivan, Unified Solution for Spatial Scalability, Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG, JVT-R018, Bangkok, Thailand, 2006.
[16] Y. Vatis, B. Edler, D. T. Nguyen, and J. Ostermann, "Motion and aliasing compensated prediction using a two-dimensional non-separable adaptive Wiener interpolation filter," in Proc. Int. Conf. Image Processing, Sep. 2005, pp. 894–897.
[17] A. Segall and S. Lei, Adaptive Upsampling for Spatially Scalable Coding, Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG, JVT-O010, Busan, Korea, 2005.
[18] A. Segall, Study of Upsampling/Down-Sampling for Spatial Scalability, Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG, JVT-Q083, Nice, France, 2005.
[19] A. Segall, Upsampling/Down-Sampling for Spatial Scalability, Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG, JVT-R070, Bangkok, Thailand, 2006.
[20] G. J. Sullivan, Resampling Filters for SVC Upsampling, Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG, JVT-R066, Bangkok, Thailand, 2006.
[21] Jun Ho Cho, Tae Gyoung Ahn, and Jae Hun Lee, "Lossless Image/Video Compression with Adaptive Plane Split," IEEE conference, 2009.

Authors Profile

G. Suresh received the B.E. degree in ECE from Priyadarshini Engineering College, affiliated to Madras University, in 2000, and the M.E. degree from Anna University, Chennai, in 2004. He is currently registered for a Ph.D. and doing research under Anna University, Chennai. His research focuses on video processing and
compression techniques, reconfigurable and adaptive logic design, and high-performance clock distribution techniques.

P. Epsiba received the B.E. degree in ECE from Ranipettai Engineering College, affiliated to Madras University, in 2002, and the M.E. degree from Anna University, Chennai, in 2008. She is currently working as a Lecturer at C. Abdul Hakeem College of Engineering and Technology. Her research focuses on video compression and filtering techniques, reconfigurable and adaptive logic design, and high-performance clock distribution techniques.

Dr. M. Rajaram is working as Professor and Head of the Department of EEE at GCE, Thirunelveli. He has more than twenty-five years of teaching and research experience. His research focuses on network security, image and video processing, FPGA architectures and power electronics.

Dr. S.N. Sivanandam is working as Professor and Head of the Department of CSE at PSG Tech, Coimbatore. He has more than thirty-two years of teaching and research experience. His research focuses on bioinformatics, computer communication, video compression and adaptive logic design.

Study the Effect of Node Distribution on QoS Multicast Framework in Mobile Ad hoc Networks

Mohammed Saghir

Computer Department, Hodeidah University, Yemen
Information Center, Hodeidah Governorate, Yemen
mshargee@yahoo.com

Abstract: The distribution of nodes in mobile ad hoc networks affects connectivity, network capacity, group membership and route length. In this paper, the effect of node distribution on the QoS multicast framework (FQM) is studied. To do this, extensive simulations are performed for FQM with two well-known node placement models: the Random placement model and the Uniform placement model. The performance of FQM with these placement models is studied under different node mobility and node density. The analysis of the simulation results shows that there is a difference between the performance of FQM-Uniform and FQM-Random when mobility is zero; this difference decreases as mobility increases.

Key words: MANET, RANDOM, UNIFORM.

1. Introduction
Wireless networking and multimedia applications are rapidly growing in importance. The motivation for supporting QoS multicasting in MANETs is the fact that multimedia applications are becoming important for group communication. Among the types of wireless networks, MANETs provide flexible communication at low cost. All communications are carried over wireless media without the help of wired base stations. The environment for MANETs is very volatile, so connections can be dropped at any moment. Distant nodes communicate over multiple hops, and nodes must cooperate with each other to provide routing. The challenges in MANETs are attributed to the mobility of intermediate nodes, the absence of a routing infrastructure, and the low bandwidth and computational capacity of mobile nodes. The network traffic is distributed through multi-hop paths, and the construction of these paths is affected by the node distribution. In addition, the node distribution affects connectivity, network capacity, group membership and route length in mobile ad hoc networks. Moreover, if mobile nodes are not distributed uniformly, some areas of the ad hoc network may not be covered. In some cases, mobile nodes are located in the middle of the simulation area and, as a result, have a higher average connectivity degree than nodes at the border [1].

This paper is structured as follows: Section 2 gives an overview of the previous work, whereas Section 3 describes the QoS multicast framework FQM and defines the uniform and random placement models. In Section 4, the simulation results of implementing FQM with the two different placement models are presented. Finally, Section 5 concludes this paper and mentions future work.

2. Literature Review
It has been proven that the capacity of a network does not increase with its size when nodes are stationary [2]. On the other hand, it has been proven that mobility increases the capacity of mobile ad hoc networks [3]. The performance of several placement models has been studied in previous work. In [4], a new algorithm (a Node Placement Algorithm for Realistic Topologies, NPART) was proposed to create realistic network topologies. This algorithm provides realistic topologies for different input data. The NPART algorithm is compared with the uniform placement model, and the simulation results show that it reflects the properties of user-initiated networks. The researchers in [5] modified Particle Swarm Optimization (PSO) using a genetic algorithm to optimize node density and improve QoS in sensing coverage. The nodes in the simulation area are divided into stationary nodes and mobile nodes, and the study focuses on how to optimize the mobile node distribution to improve the QoS of sensing coverage in a sensor network.
The properties of the random waypoint (RWP) mobility model are studied in [6], [7], [8], [9], [10], [11], [12], where the bugs that might occur when using this model are highlighted. The researchers concluded that the node distribution after a long simulation time is different from the initial node distribution. In addition, the random waypoint model and a Brownian-like model are studied in [8], which concluded that the concentration of nodes in the center of the simulation area in the RWP model depends on the choice of mobility parameters. Moreover, the effect of the RWP model on the node distribution is studied for square and circular areas in [13]; the behavior of mobile nodes under the RWP mobility model is outlined and analyzed, and some parameters for the distribution in the RWP model that make the movement of mobile nodes in a square area accurate are defined.

The previous studies focused on studying and updating node placement models and mobility models, while in this paper we study the effect of node placement models on the performance of the FQM framework, as the node distribution affects group membership and the construction of multi-hop communication paths in QoS multicast routing protocols.
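The two placement models compared in this paper can be sketched as follows; the grid partitioning for the uniform model is an illustrative assumption (the paper divides the area into one cell per node), not the exact GLOMOSIM configuration:

```python
import math
import random

def random_placement(num_nodes, width, height):
    """Random placement: each node is drawn independently over the whole area,
    so local density can vary from one region to another."""
    return [(random.uniform(0, width), random.uniform(0, height))
            for _ in range(num_nodes)]

def uniform_placement(num_nodes, width, height):
    """Uniform placement: the area is divided into one cell per node and a
    node is placed randomly within its own cell, equalising node degree."""
    cols = math.ceil(math.sqrt(num_nodes))
    rows = math.ceil(num_nodes / cols)
    cell_w, cell_h = width / cols, height / rows
    nodes = []
    for i in range(num_nodes):
        r, c = divmod(i, cols)  # cell (row, column) for node i
        nodes.append((c * cell_w + random.uniform(0, cell_w),
                      r * cell_h + random.uniform(0, cell_h)))
    return nodes
```

With the uniform model every cell contains exactly one node, which is what gives the equal average node degree discussed in Section 3.2.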
3. The FQM with Two Placement Models

3.1 The FQM QoS multicast framework
Multicast routing is more efficient in MANETs because they are inherently ready for multicast due to their broadcast nature, which avoids duplicate transmissions. Packets are only multiplexed when it is necessary to reach two or more destinations on disjoint paths. This advantage conserves bandwidth and network resources [14]. In our previous work [15], we proposed a cross-layer framework, FQM, to support QoS multicast applications in MANETs. Figure 1 gives an overview of the cross-layer framework FQM.

Figure 1. An Overview of the Functionalities of the Cross-Layer Framework While Receiving Flows

The first component of the framework is a new and efficient QoS multicast routing protocol (QMR). The QMR protocol is used to find and maintain the paths that meet the QoS requirements. Second, a distributed admission control is used to prevent nodes from being overloaded, by rejecting requests for new flows that would affect the ongoing flows.
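The distributed admission-control idea above can be sketched as follows; the class, its names and the kb/s bookkeeping are illustrative assumptions, not the FQM implementation:

```python
class AdmissionControl:
    """Minimal sketch: each node tracks the bandwidth reserved by admitted
    flows and rejects any new flow that would exceed its estimated
    available bandwidth, protecting the ongoing flows."""

    def __init__(self, available_kbps):
        self.available_kbps = available_kbps
        self.reserved_kbps = 0.0

    def request(self, flow_kbps):
        if self.reserved_kbps + flow_kbps > self.available_kbps:
            return False              # would affect ongoing flows: reject
        self.reserved_kbps += flow_kbps  # admit and reserve the bandwidth
        return True
```

For example, a node with 1000 kb/s of estimated available bandwidth would admit a 600 kb/s flow, reject a further 500 kb/s request, and still admit a 400 kb/s one.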

Third, an efficient scheme estimates the available bandwidth and provides this information to the other QoS schemes. Fourth, a source-based admission control is used to prevent new sources from affecting the ongoing sources if there is not enough available bandwidth for sending to all the members of the multicast group. Fifth, a cross-layer design provides several QoS schemes: a classifier, a shaper, dynamic rate control and a priority queue. These schemes work together to support real-time applications.

The traffic is classified and processed based on its priority. Control packets and real-time packets bypass the shaper and are sent directly to the interface queue at the MAC layer, while best-effort packets are regulated by the dynamic rate control. In terms of queue priority, control packets and real-time packets have higher priority than best-effort packets. All components of the framework cooperate to provide the required level of service.

3.2 The Uniform Placement Model
In the Uniform placement model, the simulation area is divided into a number of cells based on the number of mobile nodes in the simulation area. Within each cell, a node is placed randomly. The uniform placement model is set to create topologies with an equal average node degree, which improves connectivity.

3.3 The Random Placement Model
In the Random placement model, mobile nodes are placed randomly in a given area according to a probability distribution. Based on this probability distribution, the mobile node density differs from one area to another in the simulation area of the ad hoc network. The Random distribution is suitable as it reflects the real behavior of mobile nodes in ad hoc networks.

4. Performance Evaluation
We have conducted experiments using GLOMOSIM [16] to study the effect of node distribution on the QoS multicast framework (FQM). The main concern of these experiments is to study the effect of the random placement model and the uniform placement model on the FQM framework while supporting QoS multicast applications. The simulation was run using a MANET with different numbers of nodes moving over a rectangular 1000 m x 1000 m area for over 900 seconds of simulation time. Nodes in the simulation move according to the Random Waypoint mobility model provided by GLOMOSIM. The mobility speed ranges from 0 to 20 m/s and the pause time is 0 s. We used one multicast source sending to 15 multicast destinations in all experiments (assuming all destinations were interested in receiving from the source node). The radio transmission range is 250 m and the channel capacity is 2 Mbit/s. Each data point in this simulation represents the average result of ten runs with different initial seeds.
The performance of FQM with the uniform and random placement models is studied through the following performance metrics:
• Packet delivery ratio: the average ratio between the number of data packets received and the number of data packets that should have been received at each destination. This metric indicates the reliability of the proposed framework.
• Control overhead: the number of transmitted control packets (request, reply, acknowledgment) per data packet delivered. Control packets are counted at each hop. The available bandwidth in MANETs is limited, so it is very sensitive to the control overhead.
• Average latency: the average end-to-end delivery delay, computed by subtracting the packet generation time at the source node from the packet arrival time at each destination. Multimedia applications are very sensitive to packet delay; if a packet takes a long time to arrive at its destinations, it will be useless.
• Jitter: the variation in the latency of received packets, determined by calculating the standard deviation of the latency [17]. This is an important metric for multimedia applications and should be kept to a minimum.
• Group reliability: the ratio of the number of packets received by 95% of the destinations to the number of packets that should be received. This means that a packet is considered to be received only if it is received by 95% of the multicast group.

4.1 The Performance of FQM under Different Mobility
In this section, we study the performance of FQM with the uniform and random placement models under different mobility.

4.1.1 Packet Delivery Ratio (PDR)

Figure 2. Performance of PDR vs. mobility.

In a stationary network, the nodes always remain either in range of each other or out of range. When mobility is increased, the positions of the mobile nodes change. The performance of PDR vs. increasing mobility is given in Figure 2. The PDR of FQM-Uniform is significantly higher

than that in FQM-Random. In FQM-Uniform, the mobile nodes
are distributed uniformly, so the degrees of neighbor nodes
are equal and, as a result, the traffic load is balanced
across intermediate nodes.

In FQM-Random, the mobile nodes are distributed randomly, so
the degrees of nodes differ from one node to another and, as
a result, the traffic load may congest some intermediate
nodes. When mobility is increased, the distributions of
nodes are affected and, as a result, the difference in PDR
between FQM-Uniform and FQM-Random decreases.

4.1.2 Control Overhead (OH)

Figure 3. Performance of OH vs. mobility (OH per packet
delivered vs. mobility in m/s, for FQM-Random and
FQM-Uniform).

Figure 3 shows the control OH vs. increasing mobility. The
results show that the control OH for FQM-Uniform is lower
than for FQM-Random when the mobile nodes are static; this
is because the number of data packets received in
FQM-Uniform was higher than that received in FQM-Random. As
mobility increased, the number of data packets received in
FQM-Uniform decreased, and this affects the control OH per
packet. As a result, the differences between FQM-Uniform and
FQM-Random decrease.

4.1.3 Average Latency (AL)

Figure 4. Performance of AL vs. mobility (AL in ms vs.
mobility in m/s, for FQM-Random and FQM-Uniform).

In wireless networks such as IEEE 802.11, the mobile nodes
share the same channel and use the contention mechanism to
capture the channel, so the bandwidth availability is
affected by the number of surrounding nodes, and this
increases network delay and jitter. The results in Figure 4
show that the AL for FQM-Uniform is relatively lower than
for FQM-Random. In FQM-Uniform, the traffic is distributed
through different paths, while in FQM-Random the traffic is
congested and, as a result, the AL is increased. In
addition, when mobility is increased, the uniform
distribution of nodes is changed and, as a result, the AL in
FQM-Uniform increases.

4.1.4 Jitter

Figure 5. Performance of jitter vs. mobility (jitter in ms
vs. mobility in m/s, for FQM-Random and FQM-Uniform).

Jitter occurs due to temporary loss of wireless connections
and scheduling issues at the link layer [18]. The number of
hops in the path is affected by the node distribution and,
as a result, so is the jitter. Frequently changing routes
can increase the jitter, since the time for selecting
forwarding nodes and the delay variation between the old and
new routes increase it. Figure 5 gives an overview of the
performance of jitter vs. increasing mobility. The results
show that the jitter for FQM-Uniform is relatively lower
than for FQM-Random when the mobile nodes are static. When
mobility increases, the uniform node distribution is
affected and, as a result, the differences in jitter between
FQM-Uniform and FQM-Random decrease.

4.1.5 Group Reliability (GR)

The Group Reliability vs. increasing mobility is given in
Figure 6. The GR for FQM-Uniform is higher than that in
FQM-Random when mobility is zero. As mobility is increased,
the difference between FQM-Uniform and FQM-Random in group
reliability decreases, as discussed in Section 4.1.1.

Figure 6. Performance of GR vs. mobility (GR vs. mobility in
m/s, for FQM-Random and FQM-Uniform).
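For readers reproducing these experiments, the metrics of Section 4.1 can be computed directly from per-packet delivery traces. The sketch below is our own illustrative reading, not FQM code: the function names and trace format are assumptions, and group reliability follows the 95%-of-the-group rule used in this paper.

```python
def pdr(sent, received_per_member):
    """Packet delivery ratio: delivered fraction of the sent data
    packets, averaged over the multicast receivers."""
    if sent == 0 or not received_per_member:
        return 0.0
    return sum(r / sent for r in received_per_member) / len(received_per_member)

def average_latency(delays_ms):
    """AL: mean end-to-end delay of the delivered data packets."""
    return sum(delays_ms) / len(delays_ms) if delays_ms else 0.0

def jitter(delays_ms):
    """Jitter: mean absolute delay variation between packets
    delivered back to back."""
    if len(delays_ms) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

def group_reliability(receivers_per_packet, group_size, threshold=0.95):
    """GR: fraction of multicast packets received by at least 95%
    of the multicast group members (our reading of the rule)."""
    if not receivers_per_packet:
        return 0.0
    ok = sum(1 for n in receivers_per_packet if n >= threshold * group_size)
    return ok / len(receivers_per_packet)
```

With the 15-destination group of Section 4, for example, a packet counts toward GR only when at least 15 × 0.95 = 14.25, i.e. all 15, members receive it.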

4.2 The Performance of FQM under Different Node Density

In this section, we study the performance of FQM with the
uniform and random placement models under different node
densities with no mobility, to focus on the effect of node
density.

In a denser network, the probability that a mobile node
senses the activities of its neighbor nodes increases, so
the packet collisions caused by hidden terminals are reduced
[19]. In addition, when the network density increases, the
number of connections increases, so packets can find paths
to their destinations.

In a sparser network, the connectivity is small, so the
number of delivered data packets is low due to the lack of
routes [20]. On the other side, when the network density is
very high, the interference between nodes increases, which
increases collisions and reduces channel access, leading to
dropped data packets.

4.2.1 Packet Delivery Ratio (PDR)

The packet delivery ratio as a function of network density
for FQM-Uniform and FQM-Random is given in Figure 7. The
figure shows that there is a difference between the PDR of
FQM-Uniform and the PDR of FQM-Random. The difference
increases when the node density is increased. This is
because in FQM-Uniform the mobile nodes are uniformly
distributed, so when the node density is increased, the
number of paths increases, the traffic load is balanced and,
as a result, the PDR increases.

In FQM-Random, when the node density is increased with a
random distribution, the traffic is congested and the
available bandwidth is reduced as a result of increased
contention and collisions between mobile nodes.

Figure 7. PDR vs. network density (PDR vs. number of nodes,
100-250, for FQM-Random and FQM-Uniform).

4.2.2 Average Latency (AL)

The number of nodes in the neighborhood affects the
available bandwidth, and this increases the network delay
and jitter. Figure 8 reflects the difference between the AL
in FQM-Uniform and FQM-Random. The AL in FQM-Random
increases slightly as the network density increases.

Figure 8. AL vs. network density (AL in ms vs. number of
nodes, 100-250, for FQM-Random and FQM-Uniform).

4.2.3 Jitter

The performance of FQM as a function of network density is
described in Figure 9. The results describe the difference
between the jitter in FQM-Random and FQM-Uniform.

Figure 9. Jitter vs. network density (jitter in ms vs.
number of nodes, 100-250, for FQM-Random and FQM-Uniform).

4.2.4 Group Reliability (GR)

Figure 10 shows the performance of the group reliability vs.
increasing network density. From the figure, there is a
difference between the GR in FQM-Uniform and FQM-Random;
this is because the PDR in FQM-Uniform is higher than the
PDR in FQM-Random, as discussed in Section 4.2.1.

Figure 10. GR vs. network density (GR vs. number of nodes,
100-250, for FQM-Random and FQM-Uniform).
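The degree argument running through Sections 4.1 and 4.2 (uniform placement equalizes node degrees, random placement does not) can be illustrated with a small sketch. The 250 m radio range and 1000 m area match the setup of Section 4; the function names and the even grid used to model uniform placement are our own assumptions.

```python
import math
import random

def degrees(nodes, radio_range=250.0):
    """Number of neighbors within radio range, for every node."""
    return [sum(1 for j, q in enumerate(nodes)
                if j != i and math.dist(p, q) <= radio_range)
            for i, p in enumerate(nodes)]

def uniform_grid(n_side, area=1000.0):
    """Uniform placement modeled as an evenly spaced grid."""
    step = area / n_side
    return [(step * (i + 0.5), step * (j + 0.5))
            for i in range(n_side) for j in range(n_side)]

def random_placement(n, area=1000.0):
    """Random placement: positions drawn independently at random."""
    return [(random.uniform(0, area), random.uniform(0, area))
            for _ in range(n)]

def degree_spread(degs):
    """(max - min, mean) of the node degrees."""
    return max(degs) - min(degs), sum(degs) / len(degs)
```

On a 10 × 10 grid every interior node sees exactly the same number of neighbors, whereas random placement typically mixes dense clusters with sparse regions; that degree spread is what concentrates traffic on a few intermediate nodes in FQM-Random.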

4.3 The Performance of FQM under High while mobility is increased; this is because the average
Density and High Mobility latency is changed while mobility is increased.
The effect of mobility with low node density is discussed in
400
details in section 4.1. In this Section, the effect of high
360 FQM-Random FQM-Uniform
mobility with high density on the performance of FQM with
320
uniform and random placement models is studied. The node
280
density is 100 mobile nodes and mobility is 20 m/s.

Jitter (ms)
240
4.3.1. Packet Delivery Ratio (PDR) 200
The packet delivery ratio as a function of mobility with high
160
density for FQM-Uniform and FQM-Random is given in
120
Figure 11. The figure shows that the difference between
80
PDR for FQM-Uniform and FQM-Random is decreased
40
while mobility increased even the node density is high. This
is because the uniformly distribution is affected with high 0
0 20
mobility and as a result, the traffic is congested and
M obility (m/s)
available bandwidth is reduced.
100 FQM-Random FQM-Uniform Figure 13. Jitter as a Function of mobility with high density
90
4.3.4. Group Reliability (GR)
80
The performance of the group reliability with high mobility
70 and high node density is described in Figure 14. The Figure
60 shows that the difference between GR in FQM-Uniform and
PDR

50 FQM-Random is decreased while mobility is increased; this


is because the uniformly distribution is affected with high
40
mobility and as a result, the traffic is congested and
30
available bandwidth is reduced as discussed before in
20 Section 4.3.1.
10
0 20 100
Mobility (m/s) FQM-Random FQM-Uniform
90

80
Figure 11. PDR as a Function of mobility with high density 70

60
4.3.2. Average Latency (AL)
GR

The number of nodes in the neighborhood affects the 50

available bandwidth and this increases the network delay 40

and jitter. Figure 12 shows that the difference between the 30

AL in FQM-Uniform and FQM-Random is decreased while 20

mobility is increased even the node density is high. 10

0
0 20
300
Mobility (m/s)
270 FQM-Random FQM-Uniform

240
210 Figure 14. GR as a Function of mobility with high density
180
AL (ms)

150
120 5. Conclusion and Future work.
90
60 In this paper, we have studied the performance of FQM
30 framework with two placement models under different node
0 mobility and node density. From the results, the
0 20 performance of the QoS multicast framework FQM with
Mobility (m/s)
Uniform placement model (FQM-Uniform) is better than the
Figure 12. AL as a Function of mobility with high density performance of FQM with Random placement model (FQM-
Random) when the mobility of nodes is zero. Although the
4.3.3. Jitter Uniform placement model is superior Random placement
The results in Figure 13 reflect that the difference between model, the Random placement model is more suitable to
the jitter in FQM-Random and FQM-Uniform is decreased reflect the real behavior of nodes in mobile ad hoc networks.

This is because mobility is the main characteristic of a
mobile ad hoc network. Moreover, the Uniform placement model
is suitable in some applications of sensor networks where
static sensors are used.

The analysis of the simulation results shows that the
mobility model has the strongest effect on the performance
of the FQM QoS multicast framework, as it changes the
distribution of mobile nodes and, as a result, affects the
group membership and network capacity. In future work, we
intend to study the performance of the FQM QoS multicast
framework with different mobility models.

References

[1] C. Bettstetter and C. Wagner, "The Spatial Node
    Distribution of the Random Waypoint Model," in
    Proceedings of the First German Workshop on Mobile Ad
    Hoc Networks, 2002.
[2] P. Gupta and P. R. Kumar, "The Capacity of Wireless
    Networks," IEEE Transactions on Information Theory,
    vol. 46, no. 2, pp. 388-404, 2000.
[3] M. Grossglauser and D. Tse, "Mobility Increases the
    Capacity of Ad Hoc Wireless Networks," in Proceedings
    of IEEE INFOCOM, pp. 1360-1369, 2001.
[4] B. Milic and M. Malek, "NPART - Node Placement
    Algorithm for Realistic Topologies in Wireless
    Multi-hop Network Simulation," in Proceedings of the
    2nd International Conference on Simulation Tools and
    Techniques, 2009.
[5] P. Song, J. Li, K. Li and L. Sui, "Researching on
    Optimal Distribution of Mobile Nodes in Wireless Sensor
    Networks being Deployed Randomly," in Proceedings of
    the International Conference on Computer Science and
    Information Technology, 2008.
[6] C. Bettstetter, "Mobility Modeling in Wireless
    Networks: Categorization, Smooth Movement, and Border
    Effects," ACM Mobile Computing and Communications
    Review, vol. 5, no. 3, 2001.
[7] C. Bettstetter, H. Hartenstein, and X. Perez-Costa,
    "Stochastic Properties of the Random Waypoint Mobility
    Model," ACM/Kluwer Wireless Networks, 2004.
[8] D. M. Blough, G. Resta, and P. Santi, "A Statistical
    Analysis of the Long-Run Node Spatial Distribution in
    Mobile Ad Hoc Networks," in Proceedings of the ACM
    International Workshop on Modeling, Analysis, and
    Simulation of Wireless and Mobile Systems (MSWiM),
    2002.
[9] T. Camp, J. Boleng, and V. Davies, "A Survey of
    Mobility Models for Ad Hoc Network Research," Wireless
    Communications and Mobile Computing (WCMC), vol. 2,
    no. 5, pp. 483-502, 2002.
[10] E. Royer, P. Melliar-Smith, and L. Moser, "An Analysis
    of the Optimum Node Density for Ad Hoc Mobile
    Networks," in Proceedings of the IEEE International
    Conference on Communications (ICC), 2001.
[11] J. Song and L. Miller, "Empirical Analysis of the
    Mobility Factor for the Random Waypoint Model," in
    Proceedings of OPNETWORK, 2002.
[12] J. Yoon, M. Liu, and B. Noble, "Random Waypoint
    Considered Harmful," in Proceedings of IEEE INFOCOM,
    2003.
[13] C. Bettstetter, G. Resta and P. Santi, "The Node
    Distribution of the Random Waypoint Mobility Model for
    Wireless Ad Hoc Networks," IEEE Transactions on Mobile
    Computing, vol. 2, no. 3, July-September 2003.
[14] M. Hasana and L. Hoda, Multicast Routing in Mobile Ad
    Hoc Networks. Kluwer Academic Publishers, 2004.
[15] M. Saghir, T. C. Wan, and R. Budiarto, "A New
    Cross-Layer Framework for QoS Multicast Applications in
    Mobile Ad hoc Networks," International Journal of
    Computer Science and Network Security (IJCSNS), vol. 6,
    pp. 142-151, 2006.
[16] GloMoSim, http://pcl.cs.ucla.edu/projects/glomosim.
[17] K. Farkas, D. Budke, B. Plattner, O. Wellnitz, and L.
    Wolf, "QoS Extensions to Mobile Ad Hoc Routing
    Supporting Real-Time Applications," in Proceedings of
    the 4th ACS/IEEE International Conference on Computer
    Systems and Applications, Dubai, UAE, 2006.
[18] O. Farkas, M. Dick, X. Gu, M. Busse, W. Effelsberg,
    Y. Rebahi, D. Sisalem, D. Grigoras, K. Stefanidis, and
    D. Serpanos, "Real-time service provisioning for mobile
    and wireless networks," Computer Communications,
    vol. 29, pp. 540-550, 2006.
[19] C. Lin, H. Dong, U. Madhow, and A. Gersho, "Supporting
    real-time speech on wireless ad hoc networks:
    inter-packet redundancy, path diversity, and multiple
    description coding," in Proceedings of the 2nd ACM
    International Workshop on Wireless Mobile Applications
    and Services on WLAN Hotspots, USA, 2004.
[20] A. Nilsson, "Performance Analysis of Traffic Load and
    Node Density in Ad hoc Networks," in Proceedings of the
    Fifth European Wireless Conference, Spain, 2004.

Author Profile

Mohammed Saghir received his B.S. from Technology
University, Iraq, in 1998, his M.Sc. from Al al-Bayt
University, Jordan, in 2004, and his Ph.D. in Computer
Science from Universiti Sains Malaysia in 2008. He is
working as a lecturer at Hodeidah University, Yemen. His
current research interests include mobile ad hoc networks,
QoS multicast routing in MANETs, and WiMAX.

ICAR: Intelligent Congestion Avoidance Routing for Open
Networks

Dr. P. K. Suri (1) and Kavita Taneja (2)

(1) Dean, Faculty of Sciences; Dean, Faculty of Engineering;
Professor, Dept. of Comp. Sci. & Appl., Kurukshetra
University, Kurukshetra, Haryana, India
pksuritf25@yahoo.com

(2) Assistant Professor, M.M. Inst. of Comp. Tech. &
Business Mgmt., Maharishi Markandeshwar University, Mullana,
Haryana, India
kavitatane@gmail.com

Abstract: MANETs (Mobile Ad-hoc Networks), with all their
idiosyncrasies, have emerged as a promising networking theme
that allows groups of Nomadic Nodes (NN) to organize
themselves in order to form an immediate network, and that
too in the absence of any central entities for configuration
or routing (e.g. a DHCP server). MANETs are instant
distributed systems, where any node can become a source or
destination, is host as well as router, and all nodes are
considered peers. Hence, routes in MANETs are often
multi-hop in nature, with several intermediate nodes that
forward the data of their peers. Congestion control is an
important design criterion for efficient mobile computing.
The theme of any mobile computing is that the
"infrastructure-less" devices, or NN, must be capable of
locating each other and communicating automatically in a
dynamic, energy-efficient, fault-tolerant, latency-proof and
secure framework. Despite a lot of recent research, MANETs
lack load-balancing capabilities, and thus they fail to
provide the expected performance, especially in the case of
a large volume of traffic. In this paper, we present a
simple but very effective method to support Quality of
Service (QoS) through a load-balancing and congestion
avoidance routing scheme. This intelligent approach allows
an NN to select the next hop based on traffic sizes to
balance the load over the network. A simulation study is
presented to demonstrate the effectiveness of the proposed
scheme, and the results show that the proposed design
maximizes network performance in terms of reduced packet
loss rate and increased throughput.

Keywords: Mobile Ad Hoc Network (MANET), Nomadic Node (NN),
Intelligent Congestion Avoidance Routing (ICAR).

1. Introduction

Recent advances in wireless communication technology and
mobile computing have led to a virtual Big Bang of mobile
technology, resulting in explosive growth in the number of
NN intended to communicate via an anytime-anywhere network.
A Mobile Ad Hoc Network (MANET) is the promising answer: a
multihop wireless network [2] in which all NNs (Figure 1),
such as personal computers, personal digital assistants and
wireless phones, are wildly mobile and communicating. A
MANET provides an effective method for constructing an
instant network on the air that uses a distributed control
system by which all the NNs communicate with each other
without a central authority. But facilities come as a
package with challenges. Due to the limited transmission
range of wireless network interfaces [1], [3], [4], multiple
hops may be needed to exchange data between devices in the
network. Other challenges include dynamic topology, device
discovery, limited bandwidth, limited resources, limited
physical security, the infrastructure-less and self-operated
nature, poor transmission quality, and the topology
maintenance characteristic of open networks. It is evident
that existing solutions are not fully equipped to tame these
challenges to a reasonable level [5], [6]. Hence, computing
in such a commercialized hostile environment governed by
mobility (shown in Figure 1) must be intelligent to ensure a
high packet delivery ratio with minimum loss of bandwidth
and overhead, and hence QoS.

Figure 1. A Pure MANET

Existing routing protocols [11]-[12] tend to neglect the
issue of congested areas in open networks and hence are
handicapped in balancing the traffic load over the NNs,
creating highly congested regions in which the data packets
suffer a long buffering time and the NNs experience highly
contended access to the medium. On the other hand, the
proposed Intelligent Congestion Avoidance Routing (ICAR)
protocol avoids the creation of such regions by selecting
routes based on a traffic metric and not the shortest path.

The rest of the paper is as follows. Section 2 discusses
MANET applications and the major challenges encountered due
to NNs in open networks. Section 3 summarizes related work,
including existing routing protocols and their pitfalls.
Section 4 gives the protocol design of ICAR. We present the
implementation and performance evaluation of ICAR in Section
5, and finally the article is concluded in Section 6.

2. MANET: Applications and Challenges

MANET invokes popular communication architectures to offer
NNs the opportunity to share information and services in the
absence of any infrastructure. Today the commercial aspect
of MANET enables cars to exchange information about road
conditions, traffic jams or emergency situations, leading to
improved road safety, efficiency and driving comfort for the
customer. The beauty lies in the diversity of MANET
applications [7], [9], ranging from military, law
enforcement and national security, to disaster relief and
rescue missions like crisis management during natural
disasters, to the user-oriented brighter side. Commercially,
MANET can expand the capabilities of mobile phones and
support communication in exhibitions, conferences, sales
presentations, restaurants, etc.

There are several challenges in the field of MANETs, and to
list a few one can start with the autonomous behavior of
each node, leading to no centralized administration [8]. For
example, the frequent change in network topology due to the
NNs causes a great deal of control information to flow onto
the network. The small capacity of batteries and the
bandwidth limitation of wireless channels are major factors
[1], [3], [10]. Moreover, data access focused at a single
point may make communication impossible and degrade quality
of service. This becomes a serious consideration, especially
with recent trends toward transferring huge data, including
video.

3. Related Work

Congestion control is a key problem in mobile ad-hoc
networks. The standard TCP congestion control mechanism is
not able to handle the special properties of a shared
wireless multihop channel well. In particular, the frequent
changes of the network topology ruled by NNs and the shared
nature of the wireless channel pose significant challenges.
Many approaches have been proposed to overcome these
difficulties. The problems in using conventional routing
protocols in MANET [12] are as follows:
• Existing protocols cannot cope with frequent and
unpredictably changing topological connectivity [11].
• Conventional routing protocols could place a heavy
computational burden on mobile computers in terms of battery
power and network bandwidth.
• Inefficient convergence characteristics.
• Wireless media has limited range, unlike wired media.
One of the issues with routing in open networks concerns
whether nodes should keep track of routes to all possible
destinations or instead keep track of only those
destinations that are of immediate interest [18]. Routing
protocols are classified broadly into two main categories,
proactive routing protocols and reactive routing protocols
[12]. Proactive routing protocols are derived from the
legacy Internet. Proactive protocols that keep track of
routes for all destinations in the ad hoc network have the
advantage that communications with arbitrary destinations
experience minimal initial delay from the point of view of
the application. When the application starts, a route can be
immediately selected from the routing table. Such protocols
are called proactive because they store route information
even before it is needed. They are also called table driven
because routes are available as part of a well-maintained
table. Popular proactive routing protocols [12], [15] are
Destination-Sequenced Distance Vector (DSDV) and Global
State Routing (GSR). Since the network topology is dynamic,
when a link goes down, all paths that use that link are
broken and have to be repaired. If no applications are using
these paths, then the repair effort may be considered
wasted; it causes scarce bandwidth resources to be wasted
and may lead to further congestion at intermediate network
points. These protocols are applicable only to low-mobility
networks and are scalable in the number of flows and number
of nodes, but not in the frequency of topology change. In
contrast, reactive routing protocols establish the route to
a destination only when requested. To overcome the wasted
work of maintaining routes that are not required, on-demand,
or reactive, protocols have been designed. Reactive routing
protocols save the overhead of maintaining unused routes at
each node, but the latency for many applications will
drastically increase. Most applications are likely to suffer
a long delay when they start, because a route to the
destination will have to be acquired before the
communication can begin. Reactive protocols, namely Ad Hoc
On-Demand Distance Vector (AODV) [13] and Dynamic Source
Routing (DSR) [14], are suitable for networks with high
mobility and a relatively small number of flows. The
shortest path algorithm used in the existing routing
protocols does not provide optimal results when the primary
route is congested. Congestion-aware routing protocols
[16]-[17] were therefore proposed, due to the fact that
besides route failures, network congestion is the other
important cause of packet loss in MANETs.

4. ICAR: Protocol Design

The policy of design for mobile networks is that if two or
more routes for NN communication exist, then simply the path
with the minimum associated traffic metric is selected.
Objectives include maximizing network performance from the
application point of view while minimizing the cost of the
network itself in accordance with its capacity. In this
paper we develop a new routing framework for MANET that is
service oriented and covers the following issues:
• An intelligent routing algorithm, ICAR, that encompasses
an area with broadband wireless coverage and can dynamically
cope with congestion and path load.
• Supporting NNs that can act as routers for other network
devices, i.e., each node is an implicit router, further
extending the network.
ICAR protocol operation is illustrated in Figure 2. The
service required by device A resides at NN 'H'. The NN 'K'
can act as a route to NN 'A', helping it to reach the
service at NN 'H'. Additionally, NNs 'K' and 'H' can share
data about other services with NN 'A'. Suppose NN 'A'
requires a service which is being offered by NN 'H' at a
distance of 2 hops (as path length is a secondary issue
here), but the most congestion-free route is to be selected.
So 'A' broadcasts a request HELO looking for the service
through a lesser loaded path. Alongside the service request,
'A' also advertises its own services. All neighbor NNs
receiving the broadcast update their service directories.
NN 'K' has

information on the required service available with NN 'H';
therefore, on receiving 'A's request it will act as a pseudo
destination and unicast a response route ACK to 'A'
containing the route information of 'H' in addition to the
services offered by 'H' and by 'K' itself. Here one
additional field is proposed in the standard packet header.
This field is called the Traffic Field, and it is
initialized to zero by the source NN before broadcasting a
route discovery packet. Every intermediate NN that receives
the route discovery packet calculates its current total
traffic, based on its total neighbor NNs at that instant,
and adds it to the value of the Traffic Field of the
incoming packet. The result of the addition is assigned to
the Traffic Field before the NN rebroadcasts the packet.
When generating the route discovery Reply, besides copying
the route discovery packet's node list to the reply packet,
the target NN copies the value of the Traffic Field from the
route discovery packet to the reply packet. When the source
NN receives the Reply, it is in fact receiving the path to
the intended destination and the Traffic Metric associated
with that path. To allow the source NN to obtain more than
one path, the destination NN must reply to all route
discovery requests it receives, and the source of the route
discovery must wait for an interval of time after starting
the route discovery process. When the NN obtains more than
one path for the next hop, it simply selects the path with
the minimum associated Traffic Metric. ICAR also prohibits
intermediate NNs from replying to route discoveries using
their cached paths, which guarantees the utilization of
nearly current traffic information. When an NN sends a HELO
Message, it includes its current Local Traffic metric in the
message. Every NN maintains a list of its neighboring NNs
and their local traffic. When the NN receives a HELO Message
from a neighbor, it checks its list of neighbors. If the
neighbor is already on the list, it updates the neighbor's
Local Traffic; otherwise, it adds the new neighbor to the
list. ICAR defines the removal of a listed neighbor if it
fails to receive 3 consecutive HELO messages. The Local
Traffic entries in the neighbor list yield the Regional
Traffic of the NN. Such data sharing with the next-hop NN is
a dirty write-in and is likely to be successful, since the
availability of information about the service required by
'A' makes it highly probable that other related services
residing on 'H', and even 'K', might interest 'A'. So a
discovered route can be cached for further references, but
only for a span of time that is a function of the mobility
parameter. On receiving the response from 'K', 'A' updates
its service directory and routing table with the information
on 'H', the services of 'K', and the least congested path
from A to H. If more than one NN replies with the response,
then only the response having the route information with the
least congestion will be considered. If none of the NNs has
information about the service required by 'A', they will
re-broadcast. This message is rejected by the NNs that have
already received the request during the first rebroadcast,
by employing the (Sequence ID, Broadcast ID) in the request
message. Also, if more than 2 hops are involved, backward
pointers are set up and routing tables are updated on each
intermediate NN. Once the congestion-free route discovery is
made, the NNs along the route may be requested to volunteer
as pseudo destinations through a chain of unicast trusted
pseudo-destination requests.

Figure 2. ICAR Operation (NNs A-K exchange REQ and RACK
messages and update their routing tables).

5. Implementation and Performance Study

To evaluate the effectiveness of ICAR, we have simulated the
scheme and compared it to the Implicit Source Routing (ISR)
[19] protocol. ISR is a competitive routing protocol based
on the shortest path. We compare ICAR to ISR using the ns-2
network simulator [20], which includes a mobility extension
that was ported from CMU's Monarch Group's mobility
extension to ns. CMU's Monarch mobility extension to ns-2
allows the simulation of multi-hop ad hoc wireless networks.
The extension includes functionalities to simulate NN
movements and to transmit and receive on wireless channels
with a realistic radio propagation model. For the ISR
simulation, we used the latest version available from the
VINT project that comes with ns-2. That version includes DSR
with a full implementation of the Implicit Source Routing
(ISR) technique. The parameter values used by both protocols
are summarized in Table 1. Constant Bit Rate (CBR) traffic
was used in our simulation. The source and destination
devices were randomly selected, and each simulation shows
the results of 32 connections. To vary the size of the
traffic, we divided the 32 connections into four groups of
eight connections each. The size of the CBR packets was 128
bytes for the first group, 256 bytes for the second, 512
bytes for the third and 1024 bytes for the fourth group. We
modeled a 4 packets/sec send rate for all the groups. The
mobility patterns in our simulation followed the random
model in which each node starts at a random location,
randomly chooses a new location in a rectangular space
(1500 m x 300 m), and starts its trip to the new location at
a randomly chosen speed (uniformly distributed between 0-20
m/sec). After reaching its new location, the NN pauses for a
period of time (wait time) and then starts a new trip to a
new location. We varied the mobility of the NNs by varying
the wait time values. The results we present in this paper
are based on 50 simulation runs (50 nodes) with a run length
of 500 seconds.
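The Traffic Field mechanism described above can be summarized in a short sketch. This is a simplified model under our own naming, not ICAR source code: it abstracts away the packet format, MAC contention and timers, and enumerates paths recursively where a real flood would rebroadcast.

```python
def flood_route_discovery(graph, local_traffic, src, dst):
    """Collect every loop-free path from src to dst, accumulating
    the Traffic Field as ICAR describes: the source initializes it
    to zero and each intermediate NN adds its own local traffic
    before rebroadcasting.  `graph` maps NN -> neighbor list,
    `local_traffic` maps NN -> its current traffic estimate.
    """
    paths = []

    def rebroadcast(node, path, traffic):
        if node == dst:
            # the target copies the accumulated Traffic Field into its Reply
            paths.append((path, traffic))
            return
        for nxt in graph[node]:
            if nxt not in path:              # suppress duplicate receipt
                hop_traffic = traffic
                if nxt != dst:               # only intermediate NNs add load
                    hop_traffic += local_traffic[nxt]
                rebroadcast(nxt, path + [nxt], hop_traffic)

    rebroadcast(src, [src], 0)
    return paths

def select_route(paths):
    """Source rule: among all Replies, pick the path with the
    minimum associated Traffic metric (path length is secondary)."""
    return min(paths, key=lambda p: p[1]) if paths else None
```

On a diamond topology A-B-D / A-C-D with loads B = 5 and C = 1, for example, select_route prefers the path through C even though both paths have the same hop count.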

Table 1: Values used in simulation

Parameter              ICAR      ISR
Send buffer size       64        64
Routing table size     64        30
Cache                  on        off
HELO interval          dynamic   N.A.
Lost HELOs allowed     3         N.A.
WAIT                   600 ms    N.A.
Interface queue size   50        50

Ten runs of different traffic and mobility scenarios are
averaged to generate each data point. However, identical
traffic and mobility scenarios were used for both protocols.

We used three performance metrics to compare our scheme to
ISR. The first metric is the Packet Delivery Ratio, defined
as the percentage of data packets delivered to their
destination NN out of those sent by the source NN. The
second metric is the Routing Overhead of both protocols,
defined as the number of routing packets "transmitted" per
data packet "delivered." On multi-hop routes, each
transmission of a routing packet is counted as one
transmission. We chose not to include the forwarding
information carried in each data packet in our calculation
of the overhead, because its size is the same for both
protocols. The third metric is the average end-to-end delay
of the data packets.

... their loss. ICAR also outperformed ISR in the routing
overhead metric (see Figure 4). ISR incurred an average of
11.42 overhead packets per data packet more than ICAR. We
found that the higher the level of mobility, the higher the
difference in overhead between ICAR and ISR. At the highest
level of mobility, ISR incurred about 38.53 overhead packets
per data packet, whereas ICAR incurred about 1.04 overhead
packets per data packet. At the lowest level of mobility,
ISR incurred about 2.29 overhead packets per data packet,
while ICAR incurred about 0.42 overhead packets per data
packet. The reason for such high overhead is the additional
route discoveries incurred by ISR through its salvaging
process because of the congested network. We found that the
data packets experienced delay times long enough, due to the
mobility, to invalidate the routes of these packets. For
those packets to be salvaged, ISR initiates the Route
Discovery process to find alternative routes. ICAR also
surpassed ISR in the average end-to-end delay metric (see
Figure 5). The average end-to-end delay for ICAR was 2.59 s,
while it was 6.11 s for ISR. Generally, the average
end-to-end delay of ICAR was about three seconds less than
that of ISR. The difference is most significant at the
highest level of mobility, where the average end-to-end
delays for ICAR and ISR were 3.44 s and 8.70 s,
respectively.
Overhead packets per data

45
P 80 ISR 40
A ISR
70 ICAR 35
C ICAR
K 60 30
E
50 25
T
20
40
D
15
E 30
L. 10
Packet

20
5
R 10
A 0
T 0 0 100 200 300 400 500 600
I 0 100 200 300 400 500 600
o Pause Time (seconds)
Pause Time (seconds)

Figure 3. Packet Delivery Ratio Figure 4. Overhead


The third metric used is the Average End-to-End Delay of
the data packets. We used eleven pause time values (0, 50, 10
ISR
Average End-to-End Delay(seconds)

100, 150, 200, 250, 300, 350, 400, 450, and 500 s) to differ 9
ICAR
the mobility level (with 0 s pause time meaning continually 8
moving nodes and 500 s representing stationary nodes). 7

ICAR had a better delivery ratio than ISR (see Figure 3). 6

For the data packets sent, ICAR delivered an average of 5


4
19.65% higher than ISR. The difference in the delivery ratio z
3
between both protocols is significant at the high level of
2
mobility where the pause times are 0 s and 50 s. With pause
1
times 0 s and 50 s, ICAR delivered 26.21% and 20.12%
0
higher than ISR, respectively. At its best performance, ISR
0 100 200 300 400 500 600
did not even deliver 50% of data packets sent whereas;
Pause Time (seconds)
ICAR delivered 70% of the data packets that were sent.
These results are due to fact that ICAR distributed the traffic Figure 5. Average End-to-End Delay
among the NNs and in a way to avoid the creation of highly
congested areas. We found ISR concentrate the traffic
through centrally located NNs because it allows the NNs to The reason for this is that ISR does not balance the traffic
reply to path discoveries from their cached routes that over the NNs, it created highly congested regions in which
caused their interface queues to overflow and suffer from the data packets suffered a long buffering time and the NNs
high drop rates leading to packet collision and eventually experienced a highly contended access to the medium. On

the other hand, ICAR avoided the creation of such regions by selecting routes based on the traffic metric rather than the shortest path.

6. Conclusion and Future Work

In this paper, we presented the ICAR protocol, designed with the nature of MANETs in mind, to route traffic through the least congested paths. We have made an effort to take the discovery and delivery of services to the level of routing. ICAR is compared to a shortest-path-based routing protocol, namely ISR. ICAR distributes the load over a large network area, thus increasing spatial reuse. The simulation study has confirmed the advantages of ICAR over ISR. We intend to explore the application of the idea introduced in ICAR to other routing protocols in our future work.

References

[1] Mario Gerla, “From Battlefields to Urban Grids: New Research Challenges in Ad Hoc Wireless Networks,” Pervasive and Mobile Computing, 1(1), pp. 77-93, March 2005.
[2] Marco Conti, “Multihop Ad Hoc Networking: The Theory,” IEEE Communications Magazine, Vol. 45, pp. 78-86, 2008.
[3] R. Ramanathan and J. Redi, “A Brief Overview of Ad Hoc Networks: Challenges and Directions,” IEEE Communications Magazine, 40(5), pp. 20-22, 2002.
[4] S. Hariri, “Autonomic Computing: Research Challenges and Opportunities,” CAIP Seminar, February 2004.
[5] T. Camp, J. Boleng, and V. Davies, “A Survey of Mobility for Ad Hoc Network Research,” Wireless Communications and Mobile Computing (WCMC), pp. 483-502, 2002.
[6] A. Al Hanbali, A. A. Kherani, R. Groenevelt, P. Nain, and E. Altman, “Impact of Mobility on the Performance of Relaying in Ad Hoc Networks,” extended version, Computer Networks: The International Journal of Computer and Telecommunications Networking, 51(14), pp. 4112-4130, October 2007.
[7] S. Hadim, J. Al-Jaroodi, and N. Mohamed, “Middleware Issues and Approaches for Mobile Ad Hoc Networks,” In Proceedings of the 3rd IEEE Consumer Communications and Networking Conference (CCNC 2006), pp. 431-436, January 2006.
[8] C. Bettstetter, “On the Connectivity of Ad Hoc Networks,” The Computer Journal, 47(4), pp. 432-447, April 2004.
[9] Konrad Lorincz et al., “Sensor Networks for Emergency Response: Challenges and Opportunities,” IEEE Pervasive Computing, 3(4), pp. 16-23, October 2004.
[10] Lu Yan and Xinrong Zhou, “On Designing Peer-to-Peer Systems over Wireless Networks,” International Journal of Ad Hoc and Ubiquitous Computing, 3(4), pp. 245-254, June 2008.
[11] Michael Gerharz, Christian de Waal, Peter Martini, and Paul James, “Strategies for Finding Stable Paths in Mobile Wireless Ad Hoc Networks,” In Proceedings of the 28th Annual IEEE International Conference on Local Computer Networks, p. 130, October 20-24, 2003.
[12] Mina Masoudifar, “A Review and Performance Comparison of QoS Multicast Routing Protocols for MANETs,” Ad Hoc Networks, 7(6), pp. 1150-1155, August 2009.
[13] C. Perkins, E. Belding-Royer, and S. Das, “Ad hoc On-Demand Distance Vector (AODV) Routing,” RFC Editor, 2003.
[14] D. Johnson, “The Dynamic Source Routing (DSR) Protocol for Mobile Ad Hoc Networks for IPv4,” RFC 4728, February 2007.
[15] T. Clausen and P. Jacquet, “Optimized Link State Routing Protocol (OLSR),” IETF Mobile Ad Hoc Networks (MANET) Working Group, October 2003, www.ietf.org/rfc/rfc3626.txt.
[16] X. Gao, X. Zhang, D. Shi, F. Zou, and W. Zhu, “Contention and Queue-Aware Routing Protocol for Mobile Ad Hoc Networks,” WiCOM, September 2007.
[17] D. A. Tran and H. Raghavendra, “Congestion Adaptive Routing in Mobile Ad Hoc Networks,” IEEE Transactions on Parallel and Distributed Systems, 17(11), pp. 1294-1305, November 2006.
[18] Hui Xu and J. J. Garcia-Luna-Aceves, “Neighborhood Tracking for Mobile Ad Hoc Networks,” Computer Networks: The International Journal of Computer and Telecommunications Networking, 53(10), pp. 1683-1696, July 2009.
[19] Yih-Chun Hu and D. B. Johnson, “Implicit Source Routes for On-Demand Ad Hoc Network Routing,” In Proceedings of the ACM Symposium on Mobile Ad Hoc Networking & Computing (MobiHoc), Long Beach, California, USA, October 2001.
[20] VINT Project, The Network Simulator ns-2, http://www.isi.edu/nsnam/ns/, 2002.

Authors Profile

Dr. P. K. Suri received his Ph.D. degree from the Faculty of Engineering, Kurukshetra University, Kurukshetra, India, and his Master's degree from the Indian Institute of Technology, Roorkee (formerly known as Roorkee University), India. Presently, he is Dean, Faculty of Science, Kurukshetra University, and has been working as Professor in the Department of Computer Science & Applications, Kurukshetra University, Kurukshetra, India, since October 1993. He earlier worked as Reader in Computer Science & Applications at Bhopal University, Bhopal, from 1985-90. He has supervised six Ph.D.s in Computer Science, and thirteen students are working under his supervision. He has more than 110 publications in international/national journals and conferences. He is a recipient of ‘THE GEORGE OOMAN MEMORIAL PRIZE’ for the year 1991-92 and a RESEARCH AWARD, “The Certificate of Merit – 2000,” for the paper entitled “ESMD – An Expert System for Medical Diagnosis” from the INSTITUTION OF ENGINEERS, INDIA. His teaching and research activities include simulation and modeling, SQA, software reliability, software testing and software engineering processes, temporal databases, ad hoc networks, grid computing, and biomechanics.

Kavita Taneja obtained her M.Phil. (CS) from Alagappa University, Tamil Nadu, and her Master of Computer Applications from Kurukshetra University, Kurukshetra, Haryana, India. Presently, she is working as Assistant Professor in M.C.A. at M.M.I.C.T & B.M., M.M. University, Mullana, Haryana, India. She is pursuing a Ph.D. in Computer Science and Applications from Kurukshetra University. She has published and presented over 10 papers in national/international journals/conferences and bagged the BEST PAPER AWARD, 2007, at an international conference for the paper entitled “Dynamic Traffic-Conscious Routing for MANETs” at DIT, Dehradun. She is supervising five M.Phil. scholars in Computer Science. Her teaching and research activities include simulation and modeling and mobile ad hoc networks.

Using Word Distance Based Measurement for Cross-lingual Plagiarism Detection

Moslem Mohammadi1, Morteza Analouei2

1 Payam Nour University of Miandoab, Miandoab, Iran
Mo_mohammadi@comp.iust.ac.ir

2 Department of Computer Science and Engineering,
Iran University of Science and Technology, Tehran, Iran
analoui@iust.ac.ir

Abstract: A large number of articles and papers in various languages are available due to the expansion of information technology and information retrieval. Some authors translate these papers from their original languages into other languages and then claim them as their own. In this paper, a method for the detection of papers exactly translated from English to Persian is proposed. Before the main process begins, document classification is performed by the corner classification neural network (CC4) method to limit the set of investigated documents. For bi-lingual text processing, language unification is necessary; it is performed with a bi-lingual dictionary. After language unification, the suspicious text fragments are rephrased according to the original language. Rephrasing is performed like a crystallization process. Similarity is computed based on the positional distance of equivalent words in English and Persian; in effect, the paragraphs of a suspicious text are probed in the original text. The proposed system is tested under two conditions: first with rephrasing and the adapted distance-based similarity, and second with the traditional method. Comparing the outcomes of the two experiments shows that the proposed method's discriminability is good.

Keywords: plagiarism detection, document classification, bi-lingual text processing

1. Introduction and motivation

At first, there are several questions that will be discussed:
i. What is research?
ii. What is plagiarism?
iii. How is an essay criticized? and
iv. What is scientific misconduct?
These and similar questions are ambiguous: they do not have clear answers, and their scopes overlap. These terms are defined in Table 1. Some of the paradigms mentioned are in the academic dishonesty domain and some are not. As mentioned above, these fields have overlapping scopes, and the border line between correct and incorrect work must be made clear. Plagiarism is prevalent due to the widespread growth of digital information, especially text, on the internet and other devices. The simple definition of plagiarism is appropriating another author's writing or compilation. Plagiarism may be done through the manipulation of a resource, such as text, film, speech, or another literary compilation or artifact, and in various manners; Maurer [1] listed four broad categories of plagiarism as follows:
i. Accidental: due to lack of knowledge about plagiarism.
ii. Unintentional: probably the initiation of the same idea at the same time.
iii. Intentional: a deliberate act of copying someone's work without any credit or reference.
iv. Self-plagiarism: republishing one's own previously published work.
Jalali et al. [2] believe that the first two cases occur in non-English-speaking countries because authors are not competent enough in English to avoid plagiarism. They have suggested that plagiarism should be countered by a combination of measures such as explicit warnings, plagiarism detection software, disseminating knowledge, and improving academic writing skills.

In this paper we discuss text plagiarism, which can occur within the same language (monolingual) or across languages (cross-lingual). In the first case, a person copies, rewrites, or rephrases another's text in the same language, and plagiarism detection can be performed by traditional document similarity algorithms. In the second case, a plagiarist translates from a source language to a target language; for detection, we need extra knowledge bases such as a dictionary, a wordnet, and a word correlation table. Plagiarism detection is time consuming because there are many large text corpora that must be compared with the suspected document, though most of them are irrelevant. For this reason, if we eliminate irrelevant documents from the comparison corpora, we considerably decrease the required time. This is realized by classification methods that will be explained later.

The paper is organized as follows: in the next section the related work is reviewed, and document classification is discussed in Section 3. Thereafter, in Section 4, the proposed approach is explained; then the datasets and experimental results are analyzed.

Table 1: Plagiarism-related concepts

1 Research: Research can be defined as the search for knowledge or any systematic investigation to establish facts.
2 Plagiarism: Use or close imitation of the language and thoughts of another author and the representation of them as one's own original work.
3 Criticism: The judgment (using analysis and evaluation) of the merits and faults of the actions or work of another individual.
4 Assemblage: A work built primarily and explicitly from existing texts in order to solve a writing or communication problem in a new context.
5 Essay mill: An essay mill (or paper mill) is a ghostwriting service that sells essays and other homework writing to university and college students.
6 Ghostwriter: A ghostwriter is a professional writer who is paid to write books, articles, stories, reports, or other texts that are officially credited to another person.
7 Scientific misconduct: The violation of the standard codes of scholarly conduct and ethical behavior in professional scientific research (fabrication, plagiarism, ghostwriting).

2. Related works

The roots of plagiarism detection can be found in document similarity. Huang et al. [3] proposed a sense-based similarity measure for cross-lingual documents that uses senses for document representation and adopts fuzzy set functions for resemblance calculation. Uzuner et al. [4] identify plagiarism when works are paraphrased by using syntactic information related to the creative aspects of writing, such as ‘sentence-initial and -final phrase structure’, ‘semantic class of verb’, and ‘syntactic class of verb’. Barron et al. [5] proposed cross-lingual plagiarism analysis with a probabilistic method that calculates a bilingual statistical dictionary on the basis of the IBM-1 model from English and Spanish plagiarized examples; their proposal calculates the probabilistic association between two terms in two different languages. In [6] an approach based on a support vector machine (SVM) classifier has been proposed to determine the similarity between English and Chinese text; subsequently, semantic similarity among texts is evaluated by means of a language-neutral clustering technique based on Self-Organizing Maps (SOM). Selamat et al. [7] have used a Growing Hierarchical Self-Organized Map (GHSOM) to detect English documents translated into Arabic texts. Gustafson et al. [8] have proposed a method that relies on pre-computed word-correlation factors for determining sentence-to-sentence similarity, which handles various plagiarism techniques based on the substitution, addition, and deletion of words in sentences; the degree of resemblance of any two documents, used to detect the plagiarized copy, is then computed based on sentence similarity, and a visual representation of the matching sentences is generated.

3. Document classification

Document classification has an important place in text mining tasks because it limits the amount of text that must be processed. There are several methods for information classification, such as K-Nearest Neighbors (KNN), Naïve Bayes, Support Vector Machine (SVM) [9], the Corner Classification Neural Network (CC4) [10], ensemble classifiers [11], etc. In this system, document classification is performed by CC4, which is a kind of three-layered feed-forward neural network [12], as shown in Figure 1. The X vector is a dictionary that includes all words of the documents; in other words, it is the feature set of the documents to be classified. To save time, after stopwords are eliminated, we only use the 20 most frequently appearing words in a document for classification. Each entry of the X vector is 1 if its corresponding word is among the 20 most frequent words of the document; otherwise it is 0. The Y vector represents the class tags, and the k value is the number of classes, which in our system is 3. There are distinct classifiers for each language.

The non-iterative training phase is one important benefit of these networks. Incremental training is another benefit, which makes these networks appropriate for classifying enormous document collections; in fact, adding new training data to an existing CC4 network is very easy and simple. Furthermore, when the documents are almost the same size, the CC4 neural network is an effective document classification algorithm.

Figure 1. CC4 structure

4. Proposed method

Bi-lingual plagiarism can be viewed as a translation problem. Clumsy translators or plagiarists use a literal translation; in this case, the sentence lengths in both texts are very probably equal, but the innate features and structures of the two languages differ, so automatic detection of awkward translation seems easy. A professional plagiarist, however, can change the sentence sequence, split longer sentences into two or more sentences, or do the reverse. For these reasons, and other motivations that will be discussed, we used a distance-based method. Figure 2 is the schematic view of the proposed

method. There are three main tasks in the proposed system:
i. Document classification: as explained in the previous section, classification determines which documents must be processed.
ii. Document representation (translation): documents in different languages must be unified, as illustrated in the next section.
iii. Similarity calculation: this part of the system uses a word-distance-based method.

Figure 2. Architecture of the proposed system (Persian and English documents are classified, sorted documents are translated via a bi-lingual dictionary, and similarity evaluation produces a rate of similarity)

4.1. Document representation

In this research we deal with two kinds of documents, in the Persian and English languages, and the processing phase for similarity detection is accomplished in Persian. The Persian documents, after eliminating the non-verbal stopwords [13], are represented in the vector space model [14]. In this adapted model, each paragraph is placed in a vector. For detection of plagiarism from English documents into Persian, the unification of their languages must be performed. Since each word in an English text can have several meanings and equivalents in Persian, each vector of English words is changed into a jagged matrix of Persian words according to the bilingual dictionary. A schematic description of the details is given in Figure 3.

Figure 3. Document representation: a Persian paragraph T = (t1, t2, ..., tn) and the jagged matrix TW obtained by expanding each English word Wj of paragraph W into its Persian equivalents (pj1, pj2, ...)

4.2. Similarity evaluating

As mentioned before, each vector of English words is changed into a jagged matrix of Persian words, so the traditional vector space methods such as the Euclidean, cosine, Dice, and Jaccard similarity measures [15], which use the inner product of vectors, cannot be used for similarity calculation: one side of the problem is a vector and the other side is a matrix. Let t1, t2, ..., tn be the fragments forming a suspicious text T in Persian, and let w1, w2, ..., wm be a collection W of original fragments in English. According to the structural differences between the investigated languages, the smallest piece of text which must be processed is a paragraph. The sentence length in English and Persian differs [16], and one English sentence may be rendered as two or more sentences during translation. Furthermore, structural differences exist; for example, the verb is placed in the middle of an English sentence but at the end of a Persian sentence. Moreover, sentence boundary detection is an extra task that can thus be ignored.

At first, the jagged matrix TW is created according to the meanings of the words of W, as in Figure 3. The main aim is computing the similarity between T and TW. For this purpose, each word in T is probed in the TW matrix, and a vector (indi) based on the indices of matching words is created as in relation (1):

ind_i = { j | t_i ∈ TW_j }    (1)

The similarity measurement is inspired by crystallization. The crystallization process consists of two major events, nucleation and crystal growth. The main idea is the creation of text fragments like a crystal: the indi vectors are equivalent to the solvent, and the words in them are the solute. For crystal growth we need many nuclei; here we use the words in T whose indi vectors contain only one entry. After defining the nuclei, we describe the crystal growth process, which is performed based on the T vector; relation (2) describes it:

         | ind_i                       if t_i is a nucleus
CS_i  =  | 0                           if t_i has no equivalent in W       (2)
         | nearest index to a nucleus  otherwise

CS is the vector resembling the T vector that contains the nearest equivalence for each of the T words. The similarity of this resembled vector must be measured. For this, we use an adapted Euclidean distance [17]. The Euclidean distance of two vectors is computed as in relation (3), but in the adapted form we want the similarity to reflect the positional distance of words in the CS vector:

L(D1, D2) = ( Σ_i (d_1i − d_2i)² )^(1/2)    (3)

If most of the words of a text fragment in T are neighbors in CS, it is most probable that these two fragments are the same. For computing the amount of similarity we use relation (4).

sim(T, W) = Σ_{i=1}^{|T|} 1 / ( 1 + [ (CS_{i+1} − CS_i) / sl ]^p )    (4)

The T and W vectors have already been described. CSi shows the existence of word ti of T in the text fragment W and, in the case of existence, the location of that word in W. sl is the average sentence length: when the distance between two words equals sl, their similarity contribution equals 0.5. p is the slope of the curve; a large value expresses a sharp slope and diminishes the effect of two far-apart words on the similarity evaluation. Figure 4 shows the behavior of relation (4) for various values of |CSi+1 − CSi| (the distance of two words). There are several details in the computation and implementation, such as the treatment of words with a zero CSi value, that are not explained here.

Figure 4. Effect of distance on similarity with sl = 15 and p = 8

It is important how far words that are neighbors in the original paragraph lie from each other in the suspicious paragraph. Suppose tk and tk+1 are adjacent in T and wi and wi+d are their equivalents in W. If d, in other words the distance of the two words, is small, we can consider wi and wi+d to be in an identical sentence. Generalizing this fact, the W and T fragments are equal if most of the equivalents of the W words are neighbors in T. On the other side of the coin, we restate the T words by the reconstruction method illustrated for CS; if the CS contents come from different parts of the text or from various W, we cannot find a unique match for T. The total similarity between two documents with fragments T and W is computed by relation (5), where Pdoc and Edoc are respectively the Persian and the corresponding English document:

similarity(Pdoc, Edoc) = Σ_i Σ_j sim(T_i, W_j) / min(|Pdoc|, |Edoc|)    (5)

5. Experimental results

Dataset gathering is the main and most time-consuming challenge in bi-lingual research, especially for beginners in the domain. In order to evaluate the proposed approach we used a parallel corpus containing 200 documents in Persian and English that were collected and translated from the internet and then converted into .txt format. This corpus contains documents in English and their exact Persian translations from various genres, such as social short stories and computer science texts in the scope of computer networks and text mining. Furthermore, 100 Persian and 100 English documents of the same genres without corresponding translations in the corpus were added; these added documents were used for false positive evaluation. For the classification task, we assumed three types of documents, called “story”, “network”, and “text”.

While there is no lack of reports on the measurement of the semantic similarity of concepts, evaluation of the performance of semantic similarity measures between cross-lingual documents has not yet been standardized, because there is no universally recognized benchmark [3]. However, a simple rule of thumb is that a document should be very similar to its high-quality translation. With this assumption, the evaluation criteria adopted in this paper are the rate of successful matches between the Persian documents and their parallel English documents (true positive results) and, furthermore, the maximum similarity rate of documents that have no corresponding translation in the corpus (false positive results). Each Persian document (DPi) is classified by the Persian classifier, and all English documents from the same class are extracted for use in the similarity computation. The most similar document, along with its similarity rate, is determined for each Persian document. We rationally expect the similarity coefficient for unparallel documents to be close to 0 and, for the 200 parallel documents, near 1. For the final result, the mean of all similarities in the parallel and unparallel groups is calculated, as Table 2 shows.

Table 2: Mean similarities for parallel and unparallel documents with sl = 15 and p = 8

            Tf-based method    Proposed method
Parallel    0.76               0.81
Unparallel  0.35               0.09

The first column of Table 2 represents the traditional method's results, in which the similarity of two documents is computed based on the occurrences of one document's words in the other; in other words, this is a term-frequency-based method, and in this case rephrasing is not performed. The tf-based method has high precision in parallel document matching, but its error rate on unparallel documents is excessive. The second column shows the proposed method's results: here term frequency has no influence on similarity, which is computed based on distance alone. As the results disclose, the similarity rates of the two groups are clearly separated.

6. Conclusion

Plagiarism is a challenging problem in the research domain and in academic comportment. To distinguish the plagiarist from the researcher, automatic detection of plagiarism is necessary. In this paper we developed a system for the detection of exact translations from English to Persian. In bi-lingual plagiarism the system deals with two different languages, where each word in one language has several equivalents in the other; thus, selecting a suitable meaning is essential. In the proposed method, this is realized by a rephrasing task inspired by crystallization. Structural

differences between the investigated languages are another challenge in bi-lingual tasks; this problem is solved by using a distance-based similarity measurement. The obtained outcomes are very promising for detecting exactly translated texts. Using all valid words in the similarity computation is one of the best properties of the proposed method.

7. References

[1] H. Maurer, F. Kappe, B. Zaka, ‘Plagiarism - a survey’, Journal of Universal Computer Science, 2006, 12, (8), pp. 1050-1084.
[2] M. Jalalian, L. Latiff, ‘Medical researchers in non-English countries and concerns about unintentional plagiarism’, Journal of Medical Ethics and History of Medicine, 2009, 2, (2), pp. 1-2.
[3] H.H. Huang, H.C. Yang, Y.H. Kuo, ‘A Sense Based Similarity Measure for Cross-Lingual Documents’. Proc. Eighth International Conference on Intelligent Systems Design and Applications (ISDA'08), 2008.
[4] O. Uzuner, B. Katz, T. Nahnsen, ‘Using syntactic information to identify plagiarism’. Proc. 2nd Workshop on Building Educational Applications Using NLP, 2005, pp. 37-44.
[5] A. Barron-Cedeno, P. Rosso, D. Pinto, A. Juan, ‘On cross-lingual plagiarism analysis using a statistical model’, Proc. of PAN-08, 2008.
[6] C.H. Lee, C.H. Wu, H.C. Yang, ‘A Platform Framework for Cross-Lingual Text Relatedness Evaluation and Plagiarism Detection’. Proc. 3rd International Conference on Innovative Computing Information and Control (ICICIC'08), 2008, pp. 303-307.
[7] A. Selamat, H.H. Ismail, ‘Finding English and translated Arabic documents similarities using GHSOM’, 2008, pp. 460-465.
[8] N. Gustafson, M.S. Pera, Y.K. Ng, ‘Nowhere to hide: Finding plagiarized documents based on sentence similarity’. Proc. IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT'08), 2008, pp. 690-696.
[9] D. Zhang, W.S. Lee, ‘Question classification using support vector machines’. Proc. 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2003.
[10] C. Enhong, Z. Zhenya, A. Kazuyuki, W. Xu-fa, ‘An extended corner classification neural network based document classification approach’, Journal of Software, 2002, 13, (5), pp. 871-878.
[11] M. Mohammadi, H. Alizadeh, B. Minaei-Bidgoli, ‘Neural Network Ensembles Using Clustering Ensemble and Genetic Algorithm’. Proc. Third International Conference on Convergence and Hybrid Information Technology (ICCIT'08), 2008, pp. 761-766.
[12] M. Mohammadi, B. Minaei-Bidgoli, ‘Using CC4 neural networks for Persian document classification’. Proc. 2nd Iran Data Mining Conference (IDMC'08), Tehran, 2008.
[13] K. Sheykh Esmaili, A. Rostami, ‘A list of Persian stopwords’, Technical Report No. 2006-03, Semantic Web Research Laboratory, Sharif University of Technology, Tehran, Iran, 2006.
[14] G. Salton, A. Wong, C.S. Yang, ‘A vector space model for information retrieval’, Journal of the American Society for Information Science, 1975, 18, (11), pp. 613-620.
[15] H.F. Ma, Q. He, Z.Z. Shi, ‘Geodesic distance based approach for sentence similarity computation’. Proc. 2008 International Conference on Machine Learning and Cybernetics, 2008, pp. 2552-2557.
[16] C.D. Manning, H. Schütze, ‘Foundations of Statistical Natural Language Processing’, MIT Press, 2002.
[17] E. Greengrass, ‘Information retrieval: A survey’, University of Maryland, Baltimore County, 2000.
Experimental Comparison of Fiber Optic Link Impact on Shift Keying Modulations

Dr. Mohammad Samir Modabbes
Aleppo University, Faculty of Electrical and Electronic Engineering,
Department of Communications Engineering, Syria
msmodabbes@hotmail.com
Abstract: In this paper an experimental comparison of fiber optic communication link impact on shift keying modulated signals in the presence of external noise is made. The relation of BER versus SNR for each type of shift keying modulation is shown. A comparison measurement of voltage loss through an optical fiber link with different lengths is calculated. Results show that phase shift keying (PSK) modulation offers the advantage of being more immune to the light scattering and absorption losses of a fiber optic link and to external noise than the other shift keying modulations, and is preferred for optical fiber transmission.

Keywords: BPSK modulation, fiber optic communications, noise, BER analysis

1. Introduction

Fiber optics is widely used today and is becoming more common in everyday life. Its greatest use is in the field of communications, for the transmission of voice, video and data signals through small flexible threads of glass. Fiber optic cables far exceed the information capacity of coaxial cable. They are also smaller and lighter than conventional copper systems and are immune to electromagnetic interference and crosstalk [1, 2]. There are two main factors to consider when transmitting signals through optical fibers: the signal to noise ratio (SNR) and the bit error rate (BER).

2. Objective

Both absorption and scattering in an optical fiber depend on the light wavelength and on the nature of the signals traveling through it, and are specified by manufacturers in decibels per kilometer. In [3] the performance of eight-channel 10.7 Gb/s systems was computed using advanced modulation formats: four non-return-to-zero (NRZ)-type and three return-to-zero (RZ)-type formats with different phase characteristics.
In [4] a novel phase shift keying technique was proposed that uses optical delay modulation for fiber-optic radio links. Using only a 2x1 switch and a delay line, this technique enables modulation of a millimeter-wave carrier at bit rates of several gigabits per second or higher, where high-speed devices are not needed; but only binary phase shift keying was experimentally demonstrated. Therefore studying the effect of optical fibers on the different signals traveling through them is of great importance.
In [5] an exact analysis is made of the BER performance of generalized hierarchical PSK constellations under imperfect phase or timing synchronization over additive white Gaussian noise channels.

3. Methodology

Fiber attenuation in dB per km for the different types of shift keying modulated signals is determined using a phototransistor to measure relative light power. Light falling on the phototransistor controls its photocurrent, which is proportional to the relative light power. By measuring the light power (voltage or current) detected by the phototransistor for two different lengths of optical fiber, we can calculate the optical power ratio between the two lengths of that cable. Then, by dividing the power ratio by the length difference, we can calculate the optical power loss in dB/km as follows:

Power loss = (10 log(P1/P2)) / (L2 - L1)   [dB/km]   (1)

Where:
P1: output power of the first optical fiber
P2: output power of the second optical fiber
L1: length of the first optical fiber
L2: length of the second optical fiber

The effect of optical power loss and amplitude noise for each type of shift keying modulated signal can be measured by determining the relation of BER versus SNR. BER is the number of incorrect bits received relative to the total number of bits transmitted:

BER = incorrect received bits / total transmitted bits   (2)

SNR is the ratio of the input signal amplitude (Vinput) to the noise signal amplitude (Vnoise) in decibels [1, 5]:

SNR = 20 log(Vinput / Vnoise)   [dB]   (3)

4. Experimental Setup

The experimental measurements were conducted on a telephone channel simulator, in a lab environment, using experimental board No. AS91025 from the LABVOLT company, with glass fibers (1 and 5 m) and an input signal frequency of about half a kilohertz, as shown in figure (1). A real measurement environment requires long fibers (several kilometers) and high frequencies (tens of gigahertz).
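The quantities defined in equations (1)-(3) are straightforward to compute. The sketch below is illustrative Python; the paper contains no code, and all function names are mine:

```python
import math

def power_loss_db_per_km(p1, p2, l1_km, l2_km):
    # Eq. (1): optical power loss between two fiber lengths, in dB/km
    return 10 * math.log10(p1 / p2) / (l2_km - l1_km)

def ber(incorrect_bits, total_bits):
    # Eq. (2): bit error rate over one transmitted frame
    return incorrect_bits / total_bits

def snr_db(v_input, v_noise):
    # Eq. (3): signal-to-noise ratio in dB from signal and noise amplitudes
    return 20 * math.log10(v_input / v_noise)

# A 10:1 power ratio over one extra kilometre of fibre is a 10 dB/km loss,
# and a 10:1 amplitude ratio corresponds to 20 dB of SNR.
print(power_loss_db_per_km(10.0, 1.0, 0.0, 1.0))  # 10.0
print(snr_db(10.0, 1.0))                          # 20.0
print(ber(3, 128))                                # 0.0234375
```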
Even so, the results on the experimental board can be verified and extended to cover a real fiber transmission system.
The modulator produces the different types of shift keying modulated signals (ASK, OOK, FSK, PSK) with a carrier frequency of 2.4 kHz (approximately five times the highest frequency of the baseband signal). The modulated signals are then transmitted by the fiber optic transmitter (FOT) and received by the fiber optic receiver (FOR).
The FOT has an infrared LED light source with a peak wavelength of 820 nm and a typical spectral bandwidth of 45 nm (measured at 50% of the peak). As is known, the light source speed affects the bandwidth of a fiber optic system: the greater the bandwidth requirement, the greater the need to turn the light source on and off quickly. Therefore light source speed is defined in terms of the rise time (tr), and the following equation approximates the maximum bandwidth (Bwmax) [6, 7]:

Bwmax = 0.35 / tr   [Hz]   (4)

Figure 1. Telephone channel simulator (block diagram: INPUT to MOD to FOT, through the optical fiber to FOR, DEMOD and OUTPUT; with a carrier input to the modulator, a random pulse generator (RPG) injecting noise after the FOR, and an error counter with XOR comparator (COMP))

The LED has a typical rise time of 3 ns, which means that the FOT maximum bandwidth is approximately 120 MHz. The random pulse generator (RPG) injects a variable amount of noise into the received signal after the FOR stage and has frequencies up to 600 Hz.
An error counter measures the number of incorrect bits received (the error count). The transmitted and received data are compared bit by bit in a comparator (an XOR gate); if the bits do not match, an error pulse is generated. The error pulses are totalized in a counter over a fixed time period or frame (106 ms is the time required for 128 data bits) generated by a one-shot. Each time the counter is reset, a 106 ms one-shot is triggered, and the error pulses from the XOR gate are totalized by the counter only during the 106 ms frame.
A performance comparison of shift keying modulation transmission through a graded-index glass fiber optic link (62.5/125 μm) was made in the presence of noise, with a low pass filter cutoff frequency of 1.5 kHz and a total of 128 transmitted bits; SNR was calculated for a 4 Vp-p (2.828 Vrms) input signal amplitude and a variable noise signal amplitude.

5. Results and discussions

5.1 Voltage loss

Table (1) shows the received light voltage measured for the different modulated signals, detected by the phototransistor and transmitted over different lengths of glass fiber:

Table 1: Measured received light voltage [mV]

Fiber type          ASK     OOK     FSK     PSK
Glass fiber (1 m)   4.46    4.66    4.26    4.46
Glass fiber (5 m)   4.44    4.65    4.25    4.45

Table (2) shows the optical voltage loss in dB/m calculated for the different modulated signals as in equation (5):

Table 2: Calculated optical voltage loss [dB/m]

Fiber type    ASK        OOK        FSK        PSK
Glass fiber   9.7x10-3   4.6x10-3   5.1x10-3   4.8x10-3

Loss = (20 log(V1/V2)) / (L2 - L1)   [dB/m]   (5)

By comparing these results we conclude that the received light voltage and the voltage loss are affected by fiber length and fiber material. OOK and PSK modulations have the minimum voltage loss (~4 dB/km), while the worst, ASK modulation, has the maximum voltage loss (~9 dB/km).

5.2 BER vs. SNR

The graphics below show BER vs. SNR for the coherent detection of the different types of shift keying modulation, for different noise amplitudes and different lengths of glass cable:

Figure 2. BER versus SNR using 1 m glass fiber (curves for ASK, OOK, FSK and PSK; SNR from 6 to 18 dB)
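As a quick numerical check, equation (5) applied to the voltages of Table 1 reproduces the losses of Table 2 to within rounding. An illustrative Python sketch (names are mine, not from the paper):

```python
import math

def voltage_loss_db_per_m(v1, v2, l1_m, l2_m):
    # Eq. (5): optical voltage loss between two fibre lengths, in dB/m
    return 20 * math.log10(v1 / v2) / (l2_m - l1_m)

# Received light voltages [mV] for the 1 m and 5 m glass fibres, from Table 1
table1 = {"ASK": (4.46, 4.44), "OOK": (4.66, 4.65),
          "FSK": (4.26, 4.25), "PSK": (4.46, 4.45)}
for mod, (v_1m, v_5m) in table1.items():
    loss = voltage_loss_db_per_m(v_1m, v_5m, 1.0, 5.0)
    # values agree with Table 2 to within rounding of the published figures
    print(f"{mod}: {loss * 1e3:.1f}e-3 dB/m")
```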
Figure 3. BER versus SNR using 5 m glass fiber (curves for ASK, OOK, FSK and PSK; SNR from 6 to 18 dB)

By comparing all the graphics in figures (2 and 3) we conclude that attenuation increases with fiber length; ASK modulation has the worst performance; OOK and FSK modulations have a steady attenuation level over the whole SNR range; while for PSK modulation the attenuation decreases remarkably as the SNR increases.

6. Conclusions

An experimental comparison of fiber optic communication link impact on shift keying modulated signals (ASK, OOK, FSK, PSK) in the presence of external noise was made. The experimental measurements were conducted on a telephone channel simulator using board No. AS91025 from the LABVOLT company. Fiber attenuation was determined using a phototransistor to measure relative light power. The measurement results show that:
1. Losses due to light scattering and absorption can be determined by comparing different lengths of identical fiber.
2. Voltage loss increases with fiber length and is affected by wavelength and fiber material.
3. ASK modulation has the maximum loss in glass fiber, while FSK and OOK modulations have lower, nearly equal values.
4. For ASK and OOK, at a given signal to noise ratio, the BER decreases as the difference between the two carrier levels for the "1" and "0" states increases, which improves noise immunity and attenuation.
5. PSK modulation performs better than the other digital modulations in glass fiber, because its attenuation decreases to minimum values as the SNR increases.
These results lead us to conclude that PSK modulation offers the advantage of being more immune to the light scattering and absorption losses of an optical fiber and to external noise than the other shift keying modulations, and is preferred for data transmission systems with optical fiber links.

References
[1] B. Sklar, "Digital Communications: Fundamentals and Applications", Prentice-Hall, New Jersey, 2001.
[2] H. B. Killen, "Fiber Optic Communications", Prentice-Hall, New Jersey, 1991.
[3] Yihong M., S. Lobanov and S. Raghavan, "Impact of Modulation Format on the Performance of Fiber Optic Communication Systems with Transmitter Based Electronic Dispersion Compensation", Optical Society of America, 2007.
[4] Y. Doi, S. Fukushima, T. Ohno, Y. Matsuoka, and H. Takeuchi, "Phase Shift Keying Using Optical Delay Modulation for Millimeter-Wave Fiber-Optic Radio Links", Journal of Lightwave Technology, Vol. 18, No. 3, 2000.
[5] P. K. Vitthaladevuni and M. S. Alouini, "Effect of Imperfect Phase and Timing Synchronization on the Bit-Error Rate Performance of PSK Modulations", IEEE Transactions on Communications, Vol. 53, No. 7, 2005.
[6] H. Meng, Y. L. Guan, and S. Chen, "Modeling and Analysis of Noise Effects on Broadband Power-Line Communications", IEEE Transactions on Power Delivery, Vol. 20, No. 2, 2005.
[7] A. Demir, "Nonlinear Phase Noise in Optical Fiber-Communication Systems", Journal of Lightwave Technology, Vol. 25, No. 8, 2007.

Author Profile

Mohammad Samir Modabbes received the B.S. degree in Electronic Engineering from the University of Aleppo in 1982, and the M.S. and Ph.D. degrees in Communications Engineering from the High Institute of Communications in Sankt Petersburg in 1988 and 1992, respectively. He works as an Associate Professor at the Faculty of Electrical and Electronic Engineering, Department of Communications Engineering, University of Aleppo, Syria. Since 2006 he has been working as an Associate Professor at Qassim University, College of Computer, Department of Computer Engineering, Saudi Arabia. His research interests are the analysis and performance evaluation of data transmission systems and wireless communication systems.
A New Cluster-based Routing Protocol with Maximum Lifetime for Wireless Sensor Networks

Jalil Jabari Lotf1, Seyed Hossein Hosseini Nazhad Ghazani2, Rasim M. Alguliev3
Institute of Information Technology of Azerbaijan National Academy of Science, Baku, Azerbaijan
1 jalil.jabari@gmail.com, 2 S.HosseiniNejad@gmail.com, 3 rasim@science.az
Abstract: In recent years, one of the main concerns in wireless sensor networks has been to develop a routing protocol with an efficient and effective energy yield. Since the abilities of sensor nodes are limited, energy retention and a long network lifetime are important issues in wireless sensor networks. In the present paper the routing protocol MLCH, which has the characteristics of hierarchical routing and self-configuration, is proposed. MLCH corrects the current routing protocols according to radio beams and the number of cluster members in several directions. In this method, the clusters are distributed evenly in the network. The efficiency of the suggested algorithm was measured by simulations and by comparing its outcomes with those of previous methods. The results show that the suggested algorithm has a high efficiency.

Keywords: Wireless sensor network, routing, clustering.

1. Introduction

A sensor network is composed of many sensors which are deployed to examine a state in an environment. As the name suggests, in sensor networks the nodes have the responsibility of sensing and sending the received information through the network to a base station or sink. The nodes usually have limited energy resources which cannot be changed or recharged after they are exhausted, because of the unavailability of the nodes or the dominant conditions. Thus, the important issue in the design of routing algorithms for these networks is that they be optimal in energy usage. In order to decrease the energy consumption appropriately, we need data distribution techniques with an efficient energy yield. According to [1], there are three principal methods for data distribution: medium storage, local storage and external storage. In medium or local storage, the data are kept inside the network and the queries are sent to the nodes which store the desired data. In this paper we study external storage, in which the data are stored at a fixed center outside the network.
The life span of a sensor network can be defined as the interval between the time when the network starts to work and the time when the first or the last sensor loses its energy. Clustering is one of the energy yield techniques used to increase the lifetime of the sensor network [2, 3]. Increasing the lifetime of the sensors is often accompanied by data combination [4]. Each cluster chooses a node as a head-node and sends the collected data first to the head-node and then to the base station. The head-nodes can combine the data of the sensors to minimize the amount of data sent to the base station. And when the size of the network increases, the clusters can be organized hierarchically.
MLCH is a protocol based on clustering which minimizes the waste of energy in wireless sensor networks. This new algorithm creates the clusters according to the radio range and distributes them evenly throughout the network. The proposed method tries to improve the clusters' topology in order to minimize the energy consumption.
In section 2, some of the previous works on routing protocols for wireless sensor networks are summarized. In section 3, the suggested algorithm is explained in detail. Section 4 presents the experimental outcomes and the system efficiency, and section 5 concludes the paper and suggests future work.

2. Previous works

LEACH [5] is a protocol based on clustering which uses random rotation of the head-nodes in order to distribute the energy burden equally between the sensor nodes in a network. After the clusters are created, the head-nodes spread the TDMA timing schedule, which identifies the order in which the members of the clusters send to the head-node. Each node sends its data to the head-node in a unique period of time. When the last node has sent its data, a random head-node is chosen for the next period.
TEEN [6] is a protocol based on LEACH with two thresholds. First, as soon as the absolute amount of the sensed data exceeds a certain amount HT, the node should activate its sender and report it. Second, when the change in the sensed amount is higher than a specific amount ST, the node is forced to reactivate its sender and report the sensed data. Thus the node reports the data when the sensed amount is higher than HT or when the change in the amount of the sensed data is higher than ST.
PEGASIS [7] is one of the chain protocols based on LEACH. It is similar to multi-layer chain protocols, which create chains from the set of nodes. Each head-node sends and receives its data using only one neighbor, and the collected data moves from each head-node to the next. Finally, only one certain head-node sends the data to the base station.
HEED [8] is a distributed clustering method which uses the same methods as the previous algorithms. The only difference is that it uses several parameters instead of one parameter to form its sub-networks.
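The two TEEN thresholds described in this section reduce to a one-line predicate. A hypothetical Python sketch (the rule is as summarized above; the function name and signature are illustrative, not from [6]):

```python
def teen_should_report(sensed, last_reported, ht, st):
    """TEEN reporting rule: a node transmits when the sensed value exceeds
    the hard threshold HT, or when it has changed from the last reported
    value by more than the soft threshold ST."""
    return sensed > ht or abs(sensed - last_reported) > st

print(teen_should_report(5.0, 0.0, ht=4.0, st=10.0))  # True: exceeds HT
print(teen_should_report(2.0, 0.0, ht=4.0, st=1.0))   # True: change > ST
print(teen_should_report(2.0, 2.0, ht=4.0, st=1.0))   # False: stay silent
```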
The head-node selection in this algorithm is based on the amount of sending power in each node, which is used as the primary parameter. A second parameter is used to improve the communication inside each cluster and to identify, when a node is located within the communication range of several head-nodes, which one will be chosen as its head-node.
MLCH is a protocol based on clustering which tries to minimize the waste of energy in sensor networks. The key characteristics of MLCH are as follows:
1. The network lifetime increases.
2. It has self-configuration.
3. The head-nodes are distributed evenly.
4. Sending is hierarchical.
Since the energy consumption has a direct relation with the square of the distance, MLCH tries to cluster the sensors in such a way that the sensors' density is low and the distances inside the clusters are not large. In this protocol, the head-nodes are chosen in a way that they cover the whole area and can send their data to the base station hierarchically. Thus the network lifetime is increased, the head-nodes are distributed evenly, and the load is balanced.

3. The proposed algorithm

Here we discuss the problems related to one of the most common hierarchical routing protocols for wireless sensor networks and then introduce the suggested algorithm.

3.1 The problems with LEACH

LEACH uses a self-configuration method and decreases the energy consumption a great deal. However, it has some deficiencies too. First, LEACH does not consider the distribution of head-nodes: it chooses the head-nodes randomly and has no control over their even distribution in the environment. Figure (1) is an example showing head-nodes created by LEACH.

Figure 1. Five random choices of head-nodes in LEACH

The black nodes show the head-nodes. All of them are located in the top right corner, so the nodes on the left side consume a lot of energy to establish a connection with those head-nodes. The number of nodes in each cluster also has an unequal distribution: in figure 1, head-node 1 does not have more than 5 members, while head-node 5 has 50 members, so its energy will be exhausted very quickly. The nodes located far from head-node 5 will also consume a lot of energy to send their data. After all data have been collected from the members, the head-node sends them directly to the base station. It is clear that the head-nodes which are far from the base station will have problems with the direct sending of the data and cannot maintain the needed energy.

3.2 Clustering with maximum lifetime

To solve the above problems in LEACH, the clustering method and the kind of connections between the head-nodes and the base station should be changed and modified. We use MLCH to solve these two problems. The clustering structure of MLCH avoids the unequal distribution of the members of the clusters, and the hierarchical routing pattern avoids long-term direct connections between the head-nodes and the base station.
Our proposed algorithm consists of a setup phase and a steady-state phase (figure 2); each round includes both. In the setup phase, the head-nodes are distributed evenly. In the steady-state phase, first the clusters are created and then the sensor nodes send the sensed data to the head-node of the cluster they belong to. Each head-node combines the data received from its cluster nodes with its own data and sends it to the base station. Whenever the head-node of a cluster stops working, the steady-state phase ends and the system returns to the setup phase. The system continues this cycle until all the sensor nodes in the network stop working (or the simulation time is finished).

Figure 2. Round of sensor network operation (a round on the time axis consists of a setup phase followed by a steady-state phase)

3.2.1 Setup phase

Node number 1 sends a "Hello" message to its neighbors. The TTL of the "Hello" message is set in such a way that only the neighbors of a single node need to be collected. The radio range is also adjusted to a certain sending range, which limits the cluster area (e.g. a circle with a radius of A meters). This message informs the nodes located in the area concerned: "I am a head-node in layer X". All the nodes closer than a specified radius (e.g. B meters) receive and record this message. Furthermore, such nodes never forward the received message, because we require that only one head-node be present in one radio range. The nodes located at a distance between the sense radius and the radio range (in our assumptions, between B and A meters) also receive this message.
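The cluster formation implied by the setup phase, where each cluster is limited to the circle of radius A around a head-node and each sensor joins a reachable head, can be sketched as follows. This is an illustrative reading of the description, not the authors' code; all names are mine:

```python
import math

def form_clusters(positions, heads, radio_range):
    """Assign each non-head sensor to the nearest head-node lying within
    the radio range A; sensors out of range of every head stay unassigned."""
    clusters = {h: [] for h in heads}
    for node, (x, y) in positions.items():
        if node in heads:
            continue
        # distance from this sensor to every head-node
        reachable = [(math.hypot(x - positions[h][0], y - positions[h][1]), h)
                     for h in heads]
        dist, nearest = min(reachable)
        if dist <= radio_range:
            clusters[nearest].append(node)
    return clusters

positions = {"h1": (0, 0), "h2": (60, 0), "n1": (10, 0), "n2": (55, 5)}
print(form_clusters(positions, ["h1", "h2"], radio_range=30.0))
# {'h1': ['n1'], 'h2': ['n2']}
```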
From among the nodes located at a distance less than the sense radius that received the message, the node with the maximum ability is chosen as a head-node of the next layer; these chosen nodes become the new heads of the next layer. Also, for each of these head-nodes a successor is chosen, which starts working when the first one has problems or cannot work well; this is done to increase the fault tolerance of the network. After repeating these steps, the head-nodes are distributed evenly, as shown in figure 3.

Figure 3. Even distribution of head-nodes in the environment (head-nodes labeled 1-1, 2-1, 2-2, 3-1, 3-2, 3-3, 4-1, 4-2, 4-3, 5-1 spread over layers One to Five)

3.2.2 Steady-state phase

This phase includes three steps: the installation step, the permanence step, and the sending step. In the previous phase we chose our head-nodes, so in the installation step they are already distributed evenly. Now we create the clusters: each head-node can choose the sensors within its radio range as its cluster's members. Then a TDMA time window is scheduled for each member of the cluster in each period. In this step each node turns on its receiver, as in LEACH. The head-node then spreads a message containing the TDMA timing schedule. Each member of the cluster has the timing window which belongs to it, and the remaining ability of each sensor (used to choose a head-node for the next period), along with the sensed data, is reported to the head-node in the defined timing window.
Permanence step: Once the clusters have been created and the TDMA timing has been done, we can transfer the data. Under the assumption that the nodes always have some data to send, they send their energy status and their sensed data to the head-node in their allocated window. The cluster nodes adjust their transmission energy dynamically, according to the time span of sending the message and the size of the sent messages. In the permanence step only the head-node keeps its receiver always on, while the cluster members turn on their receivers only at their allocated times.
Sending step: Since most of the transmission energy is proportional to the square of the distance, when a head-node (e.g. A) wants to send its data to the base station, it first does some calculations. It calculates the function D(x) for all other head-nodes. The function D(x) is the distance cost of head-node A reaching the base station through a head-node X:

D(x) = d²(A, x) + d²(x, sink),   x ∈ {all other head-nodes}   (1)

Then the minimum of this function is compared with the square of the distance of head-node A from the base station:

min(D(x)) ≤ d²(A, sink)   (2)

If we find an intermediate head-node (say B) through which less energy is consumed, then head-node A sends its data to that intermediate head-node; otherwise it sends the data directly to the base station, as in LEACH. When the data reaches head-node B, the above algorithm is repeated, and this process continues until the data reach the base station. This sending method improves the energy consumption. First, if there are many head-nodes and the data are sent to the base station purely multi-hop, the cluster head nearest the base station loses its energy soonest, which causes head-nodes to stop working. Second, if the data are always sent directly as in LEACH, a cluster head far from the base station loses its energy sooner than the other head-nodes and stops working. Our protocol uses a combination of the two, which consumes much less energy than the other routing protocols.

4. Simulation and analysis

4.1 The simulation environment

The following parameters are used in the simulations. The network size is 100*100 m2 with 100 sensor nodes. The number of members in one cluster is 10, 15 or 20. We use a simple radio model with a transfer speed of 40 kbps. Each data packet is 36 bytes. The network topology is created with a random even distribution.

4.2 Efficiency measurement and analysis

The main operations of the sensor nodes in this system are divided into rounds. Each round includes a setup phase and a steady-state phase. In the setup phase, according to the clustering algorithms, the network is divided into several clusters, the routing is based on cost, and the data collection is organized regarding only the head-nodes and the base station (figure 3). In the data sending phase (steady state), the sensor nodes produce the sensed data and send them to the head-node of the cluster they belong to. Each head-node combines the data received from its cluster nodes with its own data and sends them to the base station along the route created by the steady-state phase.
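The sending-step decision of equations (1) and (2) can be sketched directly. Illustrative Python (function and variable names are mine; the paper gives only the formulas):

```python
def next_hop(heads, positions, a, sink):
    """Pick the relay for head-node `a` per Eqs. (1)-(2): forward through the
    intermediate head X minimising D(x) = d^2(A,x) + d^2(x,sink) whenever that
    cost does not exceed the direct cost d^2(A,sink); otherwise return None,
    meaning 'send straight to the base station'."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    direct = d2(positions[a], sink)
    candidates = [x for x in heads if x != a]
    if not candidates:
        return None  # no intermediate head exists: send directly
    best = min(candidates,
               key=lambda x: d2(positions[a], positions[x]) + d2(positions[x], sink))
    cost = d2(positions[a], positions[best]) + d2(positions[best], sink)
    return best if cost <= direct else None

pos = {"A": (0, 0), "B": (5, 0), "C": (0, 8)}
# Via B: 25 + 25 = 50, below the direct cost of 100, so B is chosen.
print(next_hop(["A", "B", "C"], pos, "A", sink=(10, 0)))  # B
```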
Figure 4. The process of the sensor network setup phase (flowchart: at each round, clusters are formed using the clustering algorithms, the cluster-heads are selected, and the system goes to the steady-state phase)

Figure 5. The process of the sensor network "steady-state" phase (flowchart: clusters are created and the nodes sense data, each node forwards its sensed data to its cluster-head, the cluster-head aggregates the data and sends it to the sink; when a cluster-head fails, the system goes back to the setup phase)

To evaluate the proposed algorithm, 6 different tests according to table (1) have been carried out.

Table 1: The characteristics of the tests done

Test number   Number of sensor nodes   Simulation time (seconds)   Number of members
1             100                      1000                        10
2             100                      1000                        15
3             100                      1000                        20
4             200                      1000                        10
5             200                      1000                        15
6             200                      1000                        20

The results of the experiments are shown in figures (6) to (11). The shown results are the averages over 50 simulation runs for each of the experiments.

Figure 6. Comparing protocols LEACH and MLCH (number of active nodes: 100, number of members: 10)

Figure 7. Comparing protocols LEACH and MLCH (number of active nodes: 100, number of members: 15)

Figure 8. Comparing protocols LEACH and MLCH (number of active nodes: 100, number of members: 20)
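The simulation field of Section 4.1 (sensor nodes placed with a random even, i.e. uniform, distribution over a 100*100 m2 area) can be generated as follows. The seed is an assumption added only for reproducibility of this sketch:

```python
import random

def random_topology(n_nodes=100, side=100.0, seed=1):
    """Place n_nodes sensors uniformly at random over a side x side field,
    matching the 'random even distribution' of the simulation environment."""
    rng = random.Random(seed)
    return [(rng.uniform(0.0, side), rng.uniform(0.0, side))
            for _ in range(n_nodes)]

nodes = random_topology()
print(len(nodes))  # 100
print(all(0.0 <= x <= 100.0 and 0.0 <= y <= 100.0 for x, y in nodes))  # True
```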
hierarchical tree. In the future, we want to solve the


250 problems dealing with the simultaneous sending of the
nodes’ data. Creating thousands of nodes and sending the
200
data simultaneously is difficult, and one of the resolutions can be to use CSMA/CA instead of TDMA. Characterization of the yield parameters can also be an alternative resolution. Finding the yield parameters, such as the number of members in each cluster and the radio range for MLCH, is also important.

Figure 9. Comparing protocols LEACH and MLCH. Number of active nodes: 200, number of members: 10.

Figure 10. Comparing protocols LEACH and MLCH. Number of active nodes: 200, number of members: 15.

Figure 11. Comparing protocols LEACH and MLCH. Number of active nodes: 200, number of members: 20.

5. Conclusions and Future Work

Here we suggested a new clustering-based protocol with maximum lifetime for wireless sensor networks, MLCH. MLCH improves LEACH by producing very evenly distributed clusters and by decreasing the unevenness of the cluster topology. MLCH uses the radio range to create a cluster in a certain environment, and it also modifies the connection distance of the head-nodes to the base station.
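The cluster-formation idea summarized above (radio-range-based clusters with a bounded number of members per cluster) can be illustrated with a small sketch. This is not the authors' implementation; the function name, the Euclidean-coordinate node model, and the greedy assignment policy are our assumptions:

```python
import math

def form_clusters(nodes, heads, radio_range, max_members):
    """Greedily attach each node to the nearest cluster head that is
    inside the radio range and still below the member cap, which keeps
    the clusters evenly sized."""
    clusters = {head: [] for head in heads}
    for node in nodes:
        # Candidate heads: within radio range and not yet full.
        candidates = [h for h in heads
                      if math.dist(node, h) <= radio_range
                      and len(clusters[h]) < max_members]
        if candidates:
            nearest = min(candidates, key=lambda h: math.dist(node, h))
            clusters[nearest].append(node)
    return clusters
```

A node that cannot reach any non-full head within the radio range is simply left unassigned in this sketch; a real protocol would re-run head election or widen the range.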

Design of Scheduling Algorithm for Ad hoc Networks with Capability of Differential Service
Seyed Hossein Hosseini Nazhad Ghazani 1, Jalil Jabari Lotf 2, Mahsa Khayyatzadeh 3, R. M. Alguliev 4

1 Institute of Information Technology of ANAS, Azerbaijan Republic
S.HosseiniNejad@gmail.com

2 Institute of Information Technology of ANAS, Azerbaijan Republic
Jalil.Jabari@gmail.com

3 Department of Electrical Engineering, Urmia University, Urmia, Iran
Mahsa_khayyatzadeh@yahoo.com

4 Institute of Information Technology of ANAS, Azerbaijan Republic
secretary@iit.ab.az

Abstract: A mobile ad hoc network (MANET) is a collection of mobile nodes that can communicate with each other without using any fixed infrastructure. To support multimedia applications such as video and voice, MANETs require an efficient routing protocol and quality of service (QoS) mechanism; moreover, an ad hoc network is mainly constrained by bandwidth. The scheduling proposed in this research adapts the contention window size of flows and also services the flows by selecting the fixed or minimally mobile nodes that provide network backbone access. The novelty of this model is that it is robust to mobility and variances in channel capacity, imposes no control message overhead on the network, and calculates the CW size with attention to the collision rate and the flow's allocated QoS.

Keywords: Ad Hoc, Scheduling, QoS, Routing.

1. Introduction

A MANET is essentially a special form of distributed system with features such as no fixed topology, no fixed connectivity, varying link capacity, and no central control, and it is constrained by the lack of resources. Due to these characteristics, routing in such networks experiences link failure more often. Hence, a routing protocol that supports QoS needs to consider the reasons for link failure to improve its performance. Many routing schemes and frameworks have been proposed to provide QoS support for ad hoc networks [1, 2, 3, 4, 5]. Among them, INSIGNIA [1] uses an in-band signaling protocol for the distribution of QoS information. SWAN [2] improves INSIGNIA by introducing an Additive Increase Multiplicative Decrease (AIMD)-based rate control algorithm. Both [3] and [4] utilize a distance vector protocol to collect end-to-end QoS information via either flooding or hop-by-hop propagation. CEDAR [5] proposes a core-extraction distributed routing algorithm that maintains a self-organizing routing infrastructure, called the "core".

These approaches do not consider the contentious nature of the MAC layer and the neighbor interference on multi-hop paths, which leads to inaccurate path quality prediction for real-time flows. Additionally, most of the work does not consider the fact that a newly admitted flow may disrupt the quality of service received by ongoing real-time traffic flows. Recently, other work has proposed performance improvements of MAC protocols and support for service differentiation. Many of these approaches specifically target IEEE 802.11 [6]. For example, studies in [1, 6, 12, 16] propose to tune the contention window sizes or the inter-frame spacing values to improve network throughput, while studies in [1, 4, 14, 23, 31] propose priority-based scheduling to provide service differentiation.

To support both types of applications in ad hoc networks, effective QoS-aware protocols must be used to allocate resources to flows and provide guarantees to real-time traffic in the presence of best-effort traffic [15].

2. QoS Framework [16]

2.1 Targeted network

This framework targets the support of real-time traffic in large-scale mobile networks. In these environments, fixed wireless routers may be placed to serve as a network backbone. The majority of proposed routing protocols place no preference on the selection of paths with regard to node mobility, i.e., highly mobile nodes have the same likelihood of inclusion on a communication path as stationary nodes. While best-effort traffic may be more tolerant to these events, the quality of real-time traffic will be significantly degraded and is likely to become unacceptable. The utilization of fixed wireless routers in these networks will

greatly improve the quality of real-time traffic by eliminating intermediate link breaks. Figure 2 illustrates an example network where real-time and best-effort traffic utilize different routes [16].

Figure 1. Functionality of the framework at the IP and MAC layers.

Figure 2. An example of the routes for different traffic.

2.2 Call setup for real-time traffic

When a real-time flow is requested, a call setup process is needed to acquire a valid transmission path that satisfies the QoS requirement. Call setup also enables effective admission control when the network utilization is saturated. This requires accurate estimation of the channel utilization and prediction of the flow quality, i.e., throughput or transmission delay.

The proposed QoS approach is based on a model-based resource estimation mechanism called MBRP [19]. By modeling the node back-off behavior of the MAC protocol and analyzing the channel utilization, MBRP provides both per-flow and aggregated system-wide throughput and delay [16].

Call setup

The call setup process is based on the modified AODV routing protocol and can be divided into a Request phase and a Reply phase. In the Request phase, the source node sends Route Request (RREQ) messages for the new flow. The RREQ packet reaches the destination if a path with the needed quality exists. During the Reply phase, the destination node sends a Route Reply (RREP) message along the reverse path to the source node [16].

2.3 Prioritized medium access

Communication in ad hoc networks occurs in a distributed fashion. There is no centralized point that can provide resource coordination for the network; every node is responsible for its own traffic and is unaware of other traffic [16]. In ad hoc networks, the priority scheduling algorithm is based on IEEE 802.11 [6]. Currently, there are several approaches that propose to provide service differentiation based on 802.11, by assigning different minimum contention window sizes (CW_min), Arbitrary Inter-Frame Spacing (AIFS), or back-off ratios to different types of traffic. These approaches can all provide differentiation; however, the parameters are typically statically assigned and cannot adapt to the dynamic traffic environment, which reduces the usage efficiency of the network [16].

We propose an adaptive scheme to address this trade-off. The basic idea is that, because the state of ad hoc networks can vary greatly due to mobility and channel interference, it is advantageous to adjust the back-off behavior according to the current channel condition. To achieve service differentiation, as well as to adapt to the current network usage, we combine the collision rate and the current QoS of a flow with the exponential back-off mechanism of IEEE 802.11. To do so, we classify flows into three types: delay-sensitive flows, bandwidth-sensitive flows, and best-effort flows. Delay-sensitive flows, such as conversational audio/video conferencing, require that packets arrive at the destination within a certain delay bound. Bandwidth-sensitive flows, such as on-demand multimedia retrieval, require a certain throughput. Best-effort flows, such as file transfer, can adapt to changes in bandwidth and delay. Due to the different requirements of the flows, each type of flow has its own contention window adaptation rule [15].

1) Delay-Sensitive Flows: For a delay-sensitive flow, the dominant QoS requirement is the end-to-end packet delay. To control delay, the end-to-end delay requirement d must be broken down into per-hop delay requirements. Each hop locally limits the packet delay below its per-hop requirement to maintain the aggregated end-to-end delay below d. In this paper, each node is assigned the same per-hop delay requirement, d/m, where m is the hop count of the flow:

CW^(n+1) = CW^(n) * (1 + a * (d/m - D^(n)) / (d/m))    (1)

where the superscript n represents the nth update iteration, D^(n) denotes the actual peak packet delay at the node during an update period, and a is a small positive constant (a = 0.1) [15].

2) Bandwidth-Sensitive Flows: For a bandwidth-sensitive flow, the dominant QoS requirement is throughput, which requires that at each node along the flow's route, the packet arrival rate of the flow matches the packet departure rate of the flow:

CW^(n+1) = CW^(n) + b * (q - Q^(n))    (2)

where q is a threshold value for the queue length that is smaller than the maximum capacity of the queue, Q^(n) represents the actual queue length, and b is a positive constant (b = 1). If Q is larger than q, the algorithm decreases CW to increase the packet departure rate and thereby decrease the queue length. If Q is smaller than q, the algorithm increases CW to decrease the packet departure rate and free up resources for other flows. As the queue size varies around the threshold value q, the average throughput of the flow matches its requirement [15].

3) Best-Effort Flows: Best-effort flows are tolerant to changes in service levels and do not have any hard requirements on bandwidth or packet delay. The purpose of updating the contention window size of best-effort flows is to prevent them from congesting the network and degrading the service level of real-time flows:

CW^(n+1) = CW^(n) * (1 + g * (f - F^(n)))    (3)

where f is a congestion threshold for the idle channel time, F^(n) is the actual idle channel time, and g is a positive constant (g = 0.1) [15]. When the average idle channel time F is smaller than the threshold value f, the network is considered congested and the contention window size of the best-effort traffic is increased to avoid decreasing the service level of real-time traffic. On the other hand, if the network is lightly loaded, so that the idle channel time is larger than f, the contention window size of best-effort traffic is decreased so that the idle bandwidth can be utilized [15].

In addition, to combine the collision rate with the exponential back-off mechanism, we use the following algorithm [16]:

Back-off = Rand[0, (2^r + R_col * pri) * CW_min] * Slot_Time    (4)

where R_col denotes the collision rate between a station's two successful frame transmissions, and pri is a variable associated with the priority level of the traffic. By applying Eq. (4), traffic with different priority levels will have different back-off behavior when collisions occur. Traffic with the same priority level will also have different back-off behavior when collisions occur, according to the flow's current status. Specifically, after a collision occurs, low-priority traffic will back off for longer, and consequently high-priority traffic will have a better chance of accessing the channel.

Figure 3. Markov chain model for the back-off window size.

In Eq. (5), W denotes the contention window size of the flow. According to Eq. (5), a station with a small CW has a higher probability of obtaining the channel and transmitting a packet than a station with a large CW. Using the above three contention window adaptation algorithms together with Eq. (4) ensures that flows dynamically adjust their contention parameters to meet their own QoS needs with attention to the collision rate.

4. Conclusion

Using the above three contention window adaptation algorithms (1), (2), (3) together with Eq. (4) ensures that real-time flows dynamically adjust their contention parameters to meet their own QoS needs with attention to the collision rate. A real-time flow that did not get its required QoS in the past, due to competition from other flows, decreases its contention window size, so that statistically it will have a higher chance of obtaining the channel in the future (Eq. (5)). A best-effort flow, on the other hand, increases its contention window size when the network is considered busy, and hence releases the channel to the real-time flows.

The novelty of this model is that it is robust to mobility and variances in channel capacity, imposes no control message overhead on the network, and calculates the CW size with attention to the collision rate and the flow's current QoS.
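The adaptation rules of Eqs. (1)-(3) and the back-off computation of Eq. (4) can be sketched in Python as follows. This is only an illustration of the update formulas, not the authors' implementation: the function names are ours, the measured inputs (peak delay, queue length, idle time, collision rate) are assumed to come from the MAC layer, and we read the exponent r in Eq. (4) as the retry counter of the exponential back-off, which the text leaves implicit:

```python
import random

ALPHA, BETA, GAMMA = 0.1, 1.0, 0.1  # constants used in Eqs. (1)-(3)

def cw_delay_sensitive(cw, d, m, peak_delay):
    """Eq. (1): scale CW by the relative slack between the per-hop
    delay budget d/m and the measured peak packet delay."""
    per_hop = d / m
    return cw * (1 + ALPHA * (per_hop - peak_delay) / per_hop)

def cw_bandwidth_sensitive(cw, q_threshold, queue_len):
    """Eq. (2): shrink CW when the queue grows beyond the threshold q,
    grow it when the queue drains below q."""
    return cw + BETA * (q_threshold - queue_len)

def cw_best_effort(cw, f_threshold, idle_time):
    """Eq. (3): grow CW when idle channel time falls below the
    congestion threshold f, throttling best-effort traffic."""
    return cw * (1 + GAMMA * (f_threshold - idle_time))

def backoff_time(cw_min, r_col, pri, retries, slot_time):
    """Eq. (4): back-off interval whose upper bound grows with the
    measured collision rate and the flow's priority variable."""
    upper = (2 ** retries + r_col * pri) * cw_min
    return random.uniform(0.0, upper) * slot_time
```

Each rule would be applied once per update period n to the flow's current CW^(n) at every node along its route.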

3. MODEL VALIDATION
In this section, we study the behavior of a station with a Markov model, and we obtain the stationary probability t that the station transmits a packet in a generic (i.e., randomly chosen) slot time. This probability does not depend on the access mechanism (i.e., Basic or RTS/CTS) employed. Consider a fixed number m of contending stations. In saturation conditions, each station always has a packet immediately available for transmission. According to the Markov model (Figure 3), the probability t is [18]:

t(p) = 2 / (1 + W + p * W * SUM_{i=0}^{m-1} (2p)^i)    (5)

References

[1] S. Lee, G.-S. Ahn, X. Zhang, and A. T. Campbell. "INSIGNIA: An IP-Based Quality of Service Framework for Mobile Ad Hoc Networks". Journal of Parallel and Distributed Computing, Special Issue on Wireless and Mobile Computing and Communications, vol. 60, pp. 374-406, 2000.
[2] G.-S. Ahn, A. Campbell, A. Veres, and L.-H. Sun. "Supporting Service Differentiation for Real-Time and Best-Effort Traffic in Stateless Wireless Ad Hoc Networks (SWAN)". IEEE Transactions on Mobile Computing, vol. 1, pp. 192-207, July-September 2002.
[3] S. Chen and K. Nahrstedt. "Distributed Quality-of-Service Routing in Ad-Hoc Networks". IEEE Journal on Selected Areas in Communications, vol. 17, pp. 1454-1465, August 1999.
[4] T. Chen, M. Gerla, and J. Tsai. "QoS Routing Performance in a Multi-hop, Wireless Network". In

Proceedings of the IEEE ICUPC '97, vol. 2, pp. 557-561, San Diego, CA, October 1997.
[5] P. Sinha, R. Sivakumar, and V. Bharghavan. "CEDAR: A Core-Extraction Distributed Ad Hoc Routing Algorithm". Proceedings of the IEEE Conference on Computer Communications (INFOCOM), pp. 202-209, New York, NY, 1999.
[6] IEEE Computer Society. "IEEE Standard for Wireless LAN Medium Access Control and Physical Layer Specification". November 1999.
[7] I. Ada and C. Castelluccia. "Differentiation Mechanisms for IEEE 802.11". Proceedings of the IEEE Conference on Computer Communications (INFOCOM), Anchorage, Alaska, April 2001.
[8] F. Calì, M. Conti, and E. Gregori. "Tuning of the IEEE 802.11 Protocol to Achieve a Theoretical Throughput Limit". IEEE/ACM Transactions on Networking, vol. 8, pp. 785-799, December 2000.
[9] T. S. Ho and K. C. Chen. "Performance Evaluation and Enhancement of CSMA/CA MAC Protocol for 802.11 Wireless LANs". Proceedings of the IEEE PIMRC, vol. 18, pp. 535-547, Taipei, Taiwan, October 1996.
[10] H. Kim and J. C. Hou. "Improving Protocol Capacity with Model-based Frame Scheduling in IEEE 802.11-operated WLANs". Proceedings of the Ninth Annual International Conference on Mobile Computing and Networking (MobiCom '03), pp. 190-204, San Diego, CA, September 2003.
[11] A. Banchs, X. Perez-Costa, and D. Qiao. "Providing Throughput Guarantees in IEEE 802.11e Wireless LANs". Proceedings of the 18th International Teletraffic Congress (ITC-18), Berlin, Germany, September 2003.
[12] V. Kanodia, C. Li, A. Sabharwal, B. Sadeghi, and E. Knightly. "Distributed Multi-Hop Scheduling and Medium Access with Delay and Throughput Constraints". Proceedings of the Seventh Annual International Conference on Mobile Computing and Networking (MobiCom '01), Rome, Italy, July 2001.
[13] R. Rozovsky and P. Kumar. "SEEDEX: A MAC Protocol for Ad Hoc Networks". Proceedings of the 2nd ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc '01), pp. 67-75, Long Beach, CA, October 2001.
[14] A. Veres, A. T. Campbell, M. Barry, and L.-H. Sun. "Supporting Service Differentiation in Wireless Packet Networks Using Distributed Control". IEEE Journal on Selected Areas in Communications, vol. 19, p. 2081, October 2001.
[15] Yaling Yang and Robin Kravets. "Distributed QoS Guarantees for Realtime Traffic in Ad Hoc Networks". Technical Report UIUCDCSR-2004-2446, June 2004.
[16] Yuan Sun, Elizabeth M. Belding-Royer, Xia Gao, and James Kempf. "Real-time Traffic Support in Large-Scale Mobile Ad hoc Networks". Proceedings of BroadWIM 2004, San Jose, CA, October 2004.
[17] Sally Floyd and Van Jacobson. "Random early detection gateways for congestion avoidance". IEEE/ACM Transactions on Networking, vol. 1, pp. 397-413, 1993.
[18] G. Bianchi. "Performance Analysis of the IEEE 802.11 Distributed Coordination Function". IEEE Journal on Selected Areas in Communications, vol. 18, pp. 535-547, March 2000.
[19] Y. Sun, X. Gao, E. M. Belding-Royer, and J. Kempf. "Model-based Resource Prediction for Multi-hop Wireless Networks". Proceedings of the 1st IEEE International Conference on Mobile Ad-hoc and Sensor Systems (MASS), Ft. Lauderdale, FL, October 2004.

An Optimization Problem for Evaluation of Image Segmentation Methods
Javad Alikhani koupaei, Marjan Abdechiri
Department of Mathematics, Payame Noor University, Isfahan, Iran.
verk500@yahoo.co.uk
marjan.abdechiri@qiau.ac.ir

Abstract: Image segmentation is one of the most important steps in movie and image processing and in machine vision applications. Several methods for evaluating image segmentation have recently been introduced. In this paper, we propose a new formulation of the evaluation of image segmentation methods. The strategy uses a probabilistic model that exploits the pixel information (mean and variance) in each region to balance under-segmentation and over-segmentation. Using this mechanism, the correlation of the pixels in each region is set dynamically through a probabilistic model, and the evaluation of image segmentation methods is then cast as an optimization problem. To solve this problem, we use the novel Imperialist Competitive Algorithm (ICA), which was recently introduced and performs well on some optimization problems. In this paper, a new Imperialist Competitive Algorithm using a chaotic map (CICA2) is proposed. In the proposed algorithm, the chaotic map is used to adapt the radius of the colonies' movement towards the imperialist's position, to enhance the capability of escaping from a local optimum trap. Some famous benchmarks are used to test the performance of the proposed metric. Simulation results show that this strategy can significantly improve the performance of unsupervised segmentation evaluation.

Keywords: Image segmentation, Imperialist Competitive Algorithm, Segmentation Evaluation.

1. Introduction

Image segmentation is used to partition an image into separate regions for image analysis and understanding. Different methods have been introduced for segmenting images. There are two main approaches to image segmentation: region segmentation and boundary detection. We consider region-based image segmentation methods because they give better results for texture images, although there is no appropriate scale for evaluating these algorithms yet. The most usual evaluation method is the visual one, in which the user visually inspects the different segmentation results at hand. Being time-consuming and yielding different results for different users are the disadvantages of this method.

In the supervised approach, different segmented images are compared with and evaluated against a ground-truth image that has been produced by experts or different users. This is the best method in terms of its high evaluation precision, and up to now most research has been done on supervised methods. In spite of their simplicity and low cost, however, these methods do not have proper efficiency, because of errors resulting from improper user choices, the long time spent examining the different existing segmentation methods, and the need to have the main segmented image of the intended image at hand.

The unsupervised approach, which does not require comparison with a manually segmented reference image, has received little attention. The key advantage of unsupervised segmentation evaluation is the ability to evaluate segmentations independently of a manually segmented reference image, which makes such metrics well suited to real-time processing systems. The unsupervised evaluation measures given up to now are based on the features of the image in a local area, the number of regions, and the number of pixels in each region. In this paper, we examine new scales for evaluating segmentation with unsupervised methods. We formulate the evaluation of image segmentation methods as an optimization problem, and we use the ICA algorithm to solve it.

So far, different evolutionary algorithms have been proposed for optimization. Among them, we can point to the search algorithms initially proposed by Holland, his colleagues, and his students at the University of Michigan; these nature-inspired search algorithms, which mimic the mechanism of natural selection, are known as Genetic Algorithms (GAs) [1,2]. The Particle Swarm Optimization algorithm was proposed by Kennedy and Eberhart [3,4] in 1995, while Simulated Annealing [5] and the Cultural Evolutionary algorithm (CE), developed by Reynolds and Jin [5], appeared in the early 1990s. The ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs; it is a member of the ant colony algorithms family of swarm intelligence methods. Initially proposed by Marco Dorigo in 1992 in his PhD thesis [6][7], the first such algorithm aimed to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. Differential evolution (DE) is an optimization algorithm originally due to Storn and Price [8][9]; it works on multidimensional real-valued functions which are not necessarily continuous or differentiable.

Recently, in 2007, a new algorithm [10] was introduced, inspired not by a natural phenomenon but by a socio-human one. This algorithm regards the process of imperialism as a stage of humanity's socio-political evolution. The Imperialist Competitive Algorithm connects the human and social sciences on one hand with the technical and mathematical sciences on the other, taking a completely new viewpoint on the optimization topic. In the ICA algorithm, the colonies move towards the imperialist country with a random radius of movement. In

[11], the CICA algorithm was proposed; it improves the performance of the ICA algorithm by using chaotic maps to adapt the angle of the colonies' movement towards the imperialist's position, enhancing the capability of escaping from a local optimum trap. The ICA algorithm has also been used for neural network learning based on the Chaotic Imperialist Competitive Algorithm [12].

In this paper, we propose a new formulation of the evaluation of image segmentation methods, solved with the Imperialist Competitive Algorithm. We present a study of unsupervised evaluation criteria that enable the quantification of the quality of an image segmentation result. The evaluation metric computes some statistics for each region in a segmentation result. The suggested scales evaluate segmentation methods by extracting image features in the spatial domain, and the evaluation is carried out by an evolutionary algorithm (ICA). These methods make the comparison by considering the segmented images and the main image. For this comparative study, we use two databases composed of 200 segmented images. We explain the suggested methods afterwards.

This article is organized as follows. Section 2 provides an introduction to the unsupervised evaluation criteria, highlights the most relevant ones, and reviews related work. In Section 3, we introduce the Imperialist Competitive Algorithm (ICA). In Section 4, we describe the proposed algorithm and the definition of the chaotic radius in the movement of the colonies towards the imperialist. In Section 5, we present the unsupervised evaluation methods and the optimization problem. In Section 6, we compare results and show the role of the correlation metric in our evaluation. Finally, in Section 7, we present a summary of our work and provide pointers to further work.

2. Related Work

The unsupervised approach, which does not require comparison with a manually segmented reference image, has received little attention; it is quantitative and objective. Supervised evaluation methods evaluate segmentation algorithms by comparing the resulting segmented image against a manually segmented reference image, which is often referred to as ground truth. The degree of similarity between the human- and machine-segmented images determines the quality of the segmented image. One benefit of supervised methods over unsupervised methods is that the direct comparison between a segmented image and a reference image is believed to provide a finer resolution of evaluation. Unsupervised methods are also known as stand-alone evaluation methods or empirical goodness methods [13].

Table 1. Classification of evaluation methods.

Analytic methods: attempt to characterize an algorithm itself in terms of principles, requirements, complexity, etc.
Empirical goodness methods: compute a "goodness" metric on the segmented image without a priori knowledge [14].
Empirical discrepancy methods: compute a measure of discrepancy between the segmented image output by an algorithm and a reference [15].
Region differencing: compute the degree of overlap of the cluster associated with each pixel in one segmentation [16][17][18].
Boundary matching: match boundaries between the segmentations and compute some summary statistic of match quality [16][19][20].
Information-based: formulate the problem as that of evaluating an affinity function that gives the probability of two pixels belonging to the same segment [16][21][22][23].

Unsupervised methods instead evaluate a segmented image based on how well it matches a set of features of segmented images as idealized by humans. To solve our problem we need unsupervised methods, since unsupervised evaluation is suitable for online segmentation in real-time systems, where a wide variety of images, whose contents are not known beforehand, need to be processed. For the evaluation of a segmented image we need the original image and some of the segmented images.

There are two major problems with segmentation, under-segmentation and over-segmentation [24][25], shown in Figure 1. We need to minimize the under- or over-segmentation as much as possible.

Figure 1. a) A ground-truth image. b) Under-segmented image. c) Over-segmented image.

In the case of under-segmentation, full segmentation has not been achieved, i.e., there are two or more regions that appear as one. In the case of over-segmentation, a region that would ideally be present as one part is split into two or more parts. These problems, though important, are not easy to resolve.

Recently a large number of unsupervised evaluation methods have been proposed. Without any a priori knowledge, most evaluation criteria compute some statistics on each region or class in the segmentation result. We consider region-based image segmentation methods. Most of these methods consider factors such as region uniformity, inter-region heterogeneity, region contrast, line contrast, line connectivity, texture, and shape measures [26]. An evaluation method proposed by Liu and Yang (1994) [27] computes the average squared color error of the segments, penalizing over-segmentation by a weight proportional to the square root of the number of segments. It requires no user-defined parameters and is independent of the contents and type of image. The evaluation function is

F(I) = sqrt(N) * SUM_{j=1}^{N} e_j^2 / sqrt(A_j)

where N is the number of segments, A_j is the number of pixels in segment j, and e_j^2 is the squared color error of region j. The F evaluation function has a very strong bias towards under-segmentation (segmentations with very few

regions) and penalizing over-segmentation by weighting


proportional to the square root of the number of segments.
This metric is independent of the type of image. F is bias
towards under-segmentation, so An evaluation methods has The cost of a country is found by evaluating the cost
been proposed by Borsotti et al.(1998), that extended F by function f at the variables . Then
penalizing segmentations that have many small regions of
the same size. Borsotti improved upon Liu and Yang’s
method, and improved F' by decreasing the bias towards
The algorithm starts with N initial countries and the
both over-segmentation and under-segmentation. Proposing
a modified quantitative evaluation (Q) [28], where best of them (countries with minimum cost) chosen as the
imperialists. The remaining countries are colonies that each
belong to an empire. The initial colonies belong to
The variance was given more influence in Q by dividing by the logarithm of the region size, and Q is penalized strongly when there are a large number of segments. So Q is less biased towards both under-segmentation and over-segmentation.

More recently, Zhang et al. (2004) proposed the evaluation function E, an information-theoretic measure based on the minimum description length (MDL) principle. Instead of using squared color error, this segmentation evaluation function uses region entropy as its measure of intra-region uniformity, measuring the entropy of pixel intensities within each region [29]. To prevent a bias towards over-segmentation, they define the layout entropy of the object features of all pixels in the image, where any two pixels in the same region have the same object feature. Pal and Bhandari also proposed an entropy-based segmentation evaluation measure for intra-region uniformity based on the second-order local entropy. Weszka and Rosenfeld proposed such a criterion with thresholding that measures the effect of noise in order to evaluate thresholded images. Based on the same idea of intra-region uniformity, Levine and Nazif also defined the criterion LEV1, which computes the uniformity of a region characteristic based on the variance of that characteristic. Complementary to intra-region uniformity, Levine and Nazif defined a disparity measure between two regions to evaluate the dissimilarity of regions in a segmentation result. We compare our proposed method against the evaluation functions F, E and Q.

3. Introduction of Imperialist Competitive Algorithm (ICA)
In this section, we introduce the ICA algorithm and chaos theory.

3.1. Imperialist Competitive Algorithm (ICA)
The Imperialist Competitive Algorithm (ICA) is a new evolutionary algorithm in the Evolutionary Computation field based on human socio-political evolution. The algorithm starts with an initial random population called countries. Some of the best countries in the population are selected to be the imperialists, and the rest form the colonies of these imperialists. In an N-dimensional optimization problem, a country is a 1×N array, defined as below:

country = [p_1, p_2, ..., p_N]

The colonies are divided among the imperialists according to their powers. To distribute the colonies among imperialists proportionally, the normalized cost of an imperialist is defined as follows:

C_n = c_n − max_i {c_i}

where c_n is the cost of the nth imperialist and C_n is its normalized cost. An imperialist with a higher cost value has a lower normalized cost value. Having the normalized cost, the power of each imperialist is computed as below, and based on that the colonies are distributed among the imperialist countries:

p_n = | C_n / Σ_{i=1}^{N_imp} C_i |

On the other hand, the normalized power of an imperialist is also assessed by its colonies. Then, the initial number of colonies of an empire will be

N.C._n = round(p_n · N_col)

where N.C._n is the initial number of colonies of the nth empire and N_col is the number of all colonies. To distribute the colonies among the imperialists, N.C._n of the colonies are selected randomly and assigned to the nth imperialist. The imperialist countries absorb the colonies towards themselves using the absorption policy. The absorption policy, shown in Fig. 2, forms the main core of this algorithm and causes the countries to move towards their minimum optima. The imperialists absorb these colonies towards themselves with respect to their power, as described in (8). The total power of each imperialist is determined by the power of both of its parts: the empire's power plus a percentage of its average colonies' power:

T.C._n = Cost(imperialist_n) + ξ · mean{Cost(colonies of empire_n)}

where T.C._n is the total cost of the nth empire and ξ is a positive number which is considered to be less than one.

In the absorption policy, the colony moves towards the imperialist by x units. The direction of movement is the vector from colony to imperialist, as shown in Fig. 2; in this figure, the distance between the imperialist and the colony is shown by d, and x is a random variable with uniform distribution between 0 and β × d, where β is greater than 1 and close to 2, so a proper choice can be β = 2. In our implementation, β is chosen in this range.
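For concreteness, the empire-initialization and colony-distribution steps above can be sketched as follows (a minimal Python sketch under our own naming; the cost function and problem dimensions are placeholders, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(country):
    """Placeholder cost function for illustration (not from the paper)."""
    return float(np.sum(country ** 2))

n_countries, n_imp, n_var = 20, 4, 2
countries = rng.uniform(-10, 10, size=(n_countries, n_var))
costs = np.array([sphere(c) for c in countries])

# The n_imp lowest-cost countries become imperialists; the rest are colonies.
order = np.argsort(costs)
imperialists, colonies = countries[order[:n_imp]], countries[order[n_imp:]]
imp_costs = costs[order[:n_imp]]

# Normalized cost C_n = c_n - max_i{c_i}, power p_n = |C_n / sum_i C_i|,
# and initial colony counts N.C._n = round(p_n * N_col), as in the text.
C = imp_costs - imp_costs.max()
p = np.abs(C / C.sum())
n_col = np.round(p * len(colonies)).astype(int)

# Absorption policy: a colony moves x ~ U(0, beta*d) along the vector to its
# imperialist (multiplying the full vector by u ~ U(0, beta) is equivalent).
beta = 2.0
d_vec = imperialists[0] - colonies[0]
new_colony = colonies[0] + rng.uniform(0, beta) * d_vec
```

Note that the rounded colony counts `n_col` may not sum exactly to the number of colonies; the remainder is usually assigned to the strongest empire.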
(IJCNS) International Journal of Computer and Network Security, 145
Vol. 2, No. 6, June 2010
x ∼ U(0, β × d)   (10)

In the ICA algorithm, to search different points around the imperialist, a random amount of deviation is added to the direction of colony movement towards the imperialist. In Fig. 2, this deflection angle is shown as θ, which is chosen randomly from a uniform distribution:

θ ∼ U(−γ, γ)

While moving toward the imperialist, a colony may reach a position with a lower cost than the imperialist itself; in that case, the colony and the imperialist exchange positions.

Figure 2. Moving colonies toward their imperialist.

In this algorithm, the imperialistic competition has an important role. During the imperialistic competition, the weak empires lose their power and their colonies. To model this competition, we first compute the probability of possessing all the colonies by each empire, considering the total cost of the empire:

N.T.C._n = T.C._n − max_i {T.C._i}

where T.C._n is the total cost of the nth empire and N.T.C._n is the normalized total cost of the nth empire. Having the normalized total cost, the possession probability of each empire is computed as below:

p_{p_n} = | N.T.C._n / Σ_{i=1}^{N_imp} N.T.C._i |

After a while, all the empires except the most powerful one will collapse, and all the colonies will be under the control of this unique empire.

4. Proposed Algorithm
In this paper, we propose a new Imperialist Competitive Algorithm using chaos theory (Chaotic Imperialist Competitive Algorithm, CICA2). The primary ICA algorithm uses a local search mechanism, like many evolutionary algorithms. Therefore, the primary ICA may fall into a local-minimum trap during the search process, and it is possible to get far from the global optimum. To solve this problem, we increase the exploration ability of the ICA algorithm using a chaotic behavior in the colony movement towards the imperialist's position. The intent is to improve the global convergence of the ICA and to prevent it from sticking to a local solution.

4.1. Definition of chaotic radius in the movement of colonies towards the imperialist
In this paper, to enhance the global exploration capability, chaotic maps are incorporated into ICA to improve the ability of escaping from a local optimum. The radius of movement is changed in a chaotic way during the search process. Adding this chaotic behavior to the absorption policy of the imperialist algorithm creates the proper conditions for the algorithm to escape from local peaks. Chaos variables are usually generated by well-known chaotic maps [30], [31]. Eq. (13) shows the chaotic (logistic) map used for adjusting the parameter (the radius of the colonies' movement towards the imperialist's position) in the proposed algorithm:

x_{k+1} = λ · x_k · (1 − x_k)   (13)

where λ is a control parameter and x_k is a chaotic variable in the kth iteration, belonging to the interval (0,1). During the search process, no value of x_k is repeated. The CICA algorithm is summarized in Fig. 3.

(1) Initialize the empires and their colonies' positions randomly.
(2) Compute the adaptive x (colonies' movement radius towards the imperialist's position) using the probabilistic model.
(3) Compute the total cost of all empires (related to the power of both the imperialist and its colonies).
(4) Pick the weakest colony (colonies) from the weakest empire and give it (them) to the empire that has the most likelihood to possess it (imperialistic competition).
(5) Eliminate the powerless empires.
(6) If there is just one empire, then stop, else continue.
(7) Check the termination conditions.

Figure 3. The CICA2 algorithm.

5. Unsupervised Image Segmentation and CICA2 algorithm
As mentioned before, the evaluations in those algorithms are biased towards under-segmentation or over-segmentation. At first, we compute the correlation, and then we present a new metric for the evaluation of segmented images. In this method, we extract statistical information about the image from each region to provide an adaptive evaluation. We propose a probabilistic model [32]-[35] to decrease the error of evaluation. The probabilistic model P(x) that we use here is a Gaussian distribution model. The joint probability distribution of the pixels is given by the product of the marginal probabilities:

P(x) = Π_i P(x_i)

where

P(x_i) = (1 / (σ √(2π))) · exp(−(x_i − µ)² / (2σ²))

The average, µ, and the standard deviation, σ, of the pixels of each region are approximated as below:

µ = (1/N) Σ_{i=1}^{N} x_i,   σ = sqrt((1/N) Σ_{i=1}^{N} (x_i − µ)²)

These statistics are used in the cost function of Eq. (21), where the mean of each region is computed both in the original image and in the gray-level segmented image.
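The per-region Gaussian statistics above can be sketched as follows (a minimal sketch; the function names and the use of NumPy are our own assumptions, not from the paper):

```python
import numpy as np

def region_stats(pixels):
    """Estimate the Gaussian parameters (mu, sigma) of one region's pixel intensities."""
    pixels = np.asarray(pixels, dtype=float)
    mu = pixels.mean()
    sigma = pixels.std()  # population standard deviation, as in the text
    return mu, sigma

def region_density(pixels, mu, sigma):
    """Joint density of a region's pixels as the product of Gaussian marginals.

    The product is accumulated in log-space to avoid underflow for large regions.
    """
    pixels = np.asarray(pixels, dtype=float)
    log_p = -0.5 * np.log(2 * np.pi * sigma ** 2) - (pixels - mu) ** 2 / (2 * sigma ** 2)
    return np.exp(log_p.sum())

region = [120, 122, 119, 121, 120, 118]   # hypothetical gray-level region
mu, sigma = region_stats(region)
density = region_density(region, mu, sigma)
```

The density returned here is the quantity that the correlation-update rules of the following section compare between the current and the previous region.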
Using this probabilistic model, the density of the pixels is computed in each region. If the pixel density in the current region is higher than in the previous region, then with probability 75% the previous correlation of the evaluation of the pixels is decreased, and with probability 25% it is increased:

corr_new = (1 − α) · corr_prev with probability 0.75;  corr_new = (1 + α) · corr_prev with probability 0.25   (18)

where corr_new is the current correlation of the pixel, corr_prev is the correlation of the previous region, and α is the constant by which the correlation of the evaluation is decreased or increased. The value of α is 0.5. Otherwise, if the pixel density in the current region is lower than in the previous region, then with probability 75% the previous correlation of the evaluation of the pixels is increased, and with probability 25% it is decreased:

corr_new = (1 + α) · corr_prev with probability 0.75;  corr_new = (1 − α) · corr_prev with probability 0.25   (19)

If the pixel density in the current region is higher than in the previous region, it means that the pixels may be in a good region. In Eq. (18), depending on the density of the pixel distribution, we set the correlation of the region so that each pixel can escape from the dense area with probability 25%, and with probability 75% the pixel stays in its region with a decreasing correlation.

Figure 4. A sequence from formulating the evaluation of image segmentation methods as an optimization problem.

In Eq. (19), if the pixel density in the current region is lower than in the previous region, each pixel with probability 25% stays in its region with a decreasing correlation, and with probability 75% stays in its region with an increasing correlation. In this way, a more efficient evaluation is provided all over the image.

A good segmentation evaluation should maximize the uniformity of the pixels within each segmented region and minimize the uniformity across the regions. We propose a new function for the evaluation of image segmentation; for this function we need a region-based segmentation. We compute the variances of the R, G and B pixels of each region (the color variances of an image segmented with K-means):

σ_i² = σ²_{R,i} + σ²_{G,i} + σ²_{B,i}   (20)

where σ_i is the variance of the pixel intensities in region i, as seen in Eq. (20), and N is the number of regions.

We formulated the evaluation of image segmentation methods as an optimization problem and solved this problem with ICA algorithms. This method has good precision for evaluating segmented images. The fitness function computes the deviation of the regions in the segmented image; the algorithm is shown in Fig. 5. This fitness function and the ICA algorithm together evaluate image segmentation algorithms. In this paper, the ICA algorithm is used to minimize the problem. We test this metric and show the results in the next section.

(1) Eq. (21) is the cost function.
(2) Initialize the empires and their colonies' positions randomly.
(3) Compute the total cost of all empires (related to the power of both the imperialist and its colonies).
(4) Pick the weakest colony (colonies) from the weakest empire and give it (them) to the empire that has the most likelihood to possess it (imperialistic competition).
(5) Eliminate the powerless empires.
(6) If there is just one empire, then stop, else continue.
(7) Check the termination conditions.

Figure 5. The CICA2 algorithm for the evaluation of image segmentation methods.

6. Experimental Results
We empirically studied the evaluation methods F, Q, E and the CICA2 algorithm on the segmentation results from two different segmentation algorithms. The first is Edge Detection and Image Segmentation (EDISON), developed by the Robust Image Understanding Laboratory at Rutgers University. We used EDISON to generate images that vary in the number of regions in the segmentation, to see how the evaluation methods are affected by the number of regions. The second segmentation algorithm is Canny, which is available in the Berkeley dataset. We use these two segmentation methods to do a preliminary study on the effectiveness of these quantitative evaluation methods on different segmentation parameterizations and segmentation techniques. We use two Berkeley datasets, with 1000 images and 1200 images, for computing the error in evaluation.

In this section, we analyze the previously presented unsupervised evaluation criteria. We describe experimental results to evaluate the CICA2 algorithm, and the results from four evaluation methods are examined and compared. We compute the effectiveness of F, Q, E and CICA2 based on their accuracy with respect to evaluations provided by a small group of human evaluators. In our first set of experiments, we vary the total number of regions in the segmentation (using EDISON to generate the segmentations) to study the sensitivity of these objective evaluation methods to the number of regions in the segmentation.

With an increase in the number of regions, the segmented images clearly look better to the observer, since more details are preserved. However, more regions do not necessarily
make a better segmentation, since over-segmentation can occur, and this is a problem for evaluations. The proposed algorithm is well suited for the evaluation of segmented images because the method has a controller for under-segmentation and over-segmentation: the correlation plays an important role in the evaluation, so the error of evaluation is lower.

The effectiveness is described by accuracy, which is defined as the number of times the evaluation measure correctly matches the human evaluation result, divided by the total number of comparisons in the experiment. We compute the effectiveness of F, Q, E and the CICA2 algorithm based on their accuracy with respect to the evaluations provided on four datasets, as shown in Table 2.

Table 2: Accuracy (%) of the evaluation measures

Dataset                           F        Q        E        CICA2 algorithm
1) Image Segmentation (EDISON)    73.3     76.22    74.81    80.09
2) Berkeley dataset (Canny)       64.3     68.22    63.81    75.85
3) 1200 images, Berkeley          71.01    73.35    75.50    84.63
4) 1000 images, Berkeley          62.43    68.6     71.32    83.43

The results, given in Table 2, once again demonstrate the bias of many of the evaluation methods towards under-segmentation. F and E achieve low accuracy in this experiment. On the other hand, the measures that are more balanced or less biased towards under-segmentation, i.e. Q and the CICA2 algorithm, achieve higher accuracy. Overall, the CICA2 algorithm performs best here. We evaluate an image with the four unsupervised evaluation measures and can see that the CICA2 algorithm is better than F, and that the CICA2 algorithm is not sensitive to under-segmentation and over-segmentation: it computes the correlation and the pixel density for each region and controls the error of under-segmentation and over-segmentation in the evaluation.

Figure 6. Run-time of the four metrics for the evaluation of 100 images (seconds).

Fig. 6 shows that the run-time for evaluating 100 images with the CICA2 algorithm is better than with E, F and Q.

Figure 7. Cost of the evaluation of segmented images, datasets 3 and 4. (The plot shows the evaluation cost of two data series over 100 generations, decreasing from about 120 towards zero.)

Fig. 7 shows that the error for evaluating the 100-image dataset with the CICA2 algorithm is near zero.

7. Conclusion and Future Work
In this paper, we presented an optimization method that objectively evaluates image segmentation, proposing a new formulation for the evaluation of image segmentation methods. We use a probabilistic model that utilizes the information of the pixels (mean and variance) in each region to balance under-segmentation and over-segmentation. Using this mechanism, the correlation of the pixels in each region is set dynamically with a probabilistic model, and the evaluation of image segmentation methods is then cast as an optimization problem. We first presented four segmentation evaluation methodologies and discussed the advantages and shortcomings of each type of unsupervised evaluation, among others. Subjective and supervised evaluations have their disadvantages: for example, they are tedious to produce, can vary widely from one human to another, and are time-consuming. Unsupervised segmentation evaluation methods offer the unique advantage that they are purely objective and do not require a manually-segmented reference image, which also suits systems embedded in real-time applications. We have demonstrated via our preliminary experiments that our unsupervised segmentation evaluation measure, the CICA2 algorithm, improves upon previously defined evaluation measures in several ways. In particular, F has a very strong bias towards images with very few regions and thus does not perform well. Q outperforms F but still disagrees with our human evaluators more often than E does. The correlation and the density in each region are important components in obtaining our results. Coding the evaluation problem, presenting a new cost function, and solving the resulting optimization problem are interesting directions for future research.
References
[1] H. Mühlenbein, M. Schomisch and J. Born, "The Parallel Genetic Algorithm as Function Optimizer", Proceedings of the Fourth International Conference on Genetic Algorithms, University of California, San Diego, pp. 270-278, 1991.
[2] J.H. Holland, "ECHO: Explorations of Evolution in a Miniature World", in J.D. Farmer and J. Doyne, editors, Proceedings of the Second Conference on Artificial Life, 1990.
[3] J. Kennedy and R.C. Eberhart, "Particle swarm optimization", in Proceedings of the IEEE International Conference on Neural Networks, Piscataway: IEEE, pp. 1942-1948, 1995.
[4] X. Yang, J. Yuan, J. Yuan and H. Mao, "A modified particle swarm optimizer with dynamic adaptation", Applied Mathematics and Computation, 189(2): pp. 1205-1213, 2007.
[5] X. Jin and R.G. Reynolds, "Using Knowledge-Based Evolutionary Computation to Solve Nonlinear Constraint Optimization Problems: A Cultural Algorithm Approach", in Proceedings of the IEEE Congress on Evolutionary Computation, 3: pp. 1672-1678, 1999.
[6] A. Colorni, M. Dorigo and V. Maniezzo, "Distributed Optimization by Ant Colonies", Proceedings of the First European Conference on Artificial Life, Paris, France, Elsevier Publishing, pp. 134-142, 1991.
[7] M. Dorigo, "Optimization, Learning and Natural Algorithms", PhD thesis, Politecnico di Milano, Italy, 1992.
[8] R. Storn and K. Price, "Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces", Journal of Global Optimization, 11: pp. 341-359, 1997.
[9] R. Storn, "On the usage of differential evolution for function optimization", Biennial Conference of the North American Fuzzy Information Processing Society (NAFIPS), pp. 519-523, 1996.
[10] E. Atashpaz-Gargari and C. Lucas, "Imperialist Competitive Algorithm: An Algorithm for Optimization Inspired by Imperialistic Competition", IEEE Congress on Evolutionary Computation (CEC 2007), pp. 4661-4667, 2007.
[11] H. Bahrami, K. Faez and M. Abdechiri, "Imperialist Competitive Algorithm using Chaos Theory for Optimization", UKSim-AMSS 12th International Conference on Computer Modelling and Simulation, 2010.
[12] M. Abdechiri, K. Faez and H. Bahrami, "Neural Network Learning based on Chaotic Imperialist Competitive Algorithm", The 2nd International Workshop on Intelligent Systems and Applications (ISA 2010), 2010.
[13] H. Zhang, J.E. Fritts and S.A. Goldman, "An Entropy-based Objective Evaluation Method for Image Segmentation", SPIE Electronic Imaging - Storage and Retrieval Methods and Applications for Multimedia, pp. 38-49, 2004.
[14] M.D. Levine and A. Nazif, "Dynamic measurement of computer generated image segmentations", IEEE Transactions on Pattern Analysis and Machine Intelligence, 7: pp. 155-164, 1985.
[15] W.A. Yasnoff, J.K. Mui and J.W. Bacus, "Error measures for scene segmentation", Pattern Recognition, 9: pp. 217-231, 1977.
[16] D. Martin, "An Empirical Approach to Grouping and Segmentation", PhD dissertation, Univ. of California, Berkeley, 2002.
[17] D. Martin, C. Fowlkes, D. Tal and J. Malik, "A Database of Human Segmented Natural Images and Its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics", Proc. Int'l Conf. Computer Vision, 2001.
[18] H.I. Christensen and P.J. Phillips, eds., "Empirical Evaluation Methods in Computer Vision", World Scientific Publishing, July 2002.
[19] J. Freixenet, X. Munoz, D. Raba, J. Marti and X. Cufi, "Yet Another Survey on Image Segmentation: Region and Boundary Information Integration", Proc. European Conf. Computer Vision, pp. 408-422, 2002.
[20] Q. Huang and B. Dom, "Quantitative Methods of Evaluating Image Segmentation", Proc. IEEE Int'l Conf. Image Processing, pp. 53-56, 1995.
[21] C. Fowlkes, D. Martin and J. Malik, "Learning Affinity Functions for Image Segmentation", Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2: pp. 54-61, 2003.
[22] M. Meila, "Comparing Clusterings by the Variation of Information", Proc. Conf. Learning Theory, 2003.
[23] M.R. Everingham, H. Muller and B. Thomas, "Evaluating Image Segmentation Algorithms Using the Pareto Front", Proc. European Conf. Computer Vision, 4: pp. 34-48, 2002.
[24] H. Zhang, J.E. Fritts and S.A. Goldman, "Image Segmentation Evaluation: A Survey of Unsupervised Methods", Computer Vision and Image Understanding (CVIU), 110(2): pp. 260-280, 2008.
[25] S. Chabrier, B. Emile, H. Laurent, C. Rosenberger and P. Marche, "Unsupervised evaluation of image segmentation: application to multispectral images", in Proceedings of the 17th International Conference on Pattern Recognition, 2004.
[26] Y.J. Zhang, "A survey on evaluation methods for image segmentation", Pattern Recognition, 29: pp. 1335-1346, 1996.
[27] J. Liu and Y.-H. Yang, "Multi-resolution color image segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(7): pp. 689-700, 1994.
[28] M. Borsotti, P. Campadelli and R. Schettini, "Quantitative evaluation of color image segmentation results", Pattern Recognition Letters, 19(8): pp. 741-747, 1998.
[29] Y.J. Zhang and J.J. Gerbrands, "Objective and quantitative segmentation evaluation and comparison", Signal Processing, 39: pp. 43-54, 1994.
[30] W.M. Zheng, "Kneading plane of the circle map", Chaos, Solitons & Fractals, 4: 1221, 1994.
[31] H.G. Schuster, "Deterministic chaos: an introduction", 2nd revised ed., Weinheim, Federal Republic of Germany: Physik-Verlag GmbH, 1988.
[32] A. Papoulis, "Probability, Random Variables and Stochastic Processes", McGraw-Hill, 1965.
[33] R.C. Smith and P. Cheeseman, "On the Representation and Estimation of Spatial Uncertainty", International Journal of Robotics Research, 5(4), 1986.
[34] T.K. Paul and H. Iba, "Linear and Combinatorial Optimizations by Estimation of Distribution Algorithms", 9th MPS Symposium on Evolutionary Computation, IPSJ, Japan, 2002.
[35] Y. Bar-Shalom, X. Rong Li and T. Kirubarajan, "Estimation with Applications to Tracking and Navigation", John Wiley & Sons, 2001.
Authors Profile

Javad Alikhani Koupaei received the B.S. degree in mathematics (application in computer) from the University of Esfahan in December 1993 and the M.S. degree in mathematics from Tarbiat Modarres University, Tehran, Iran, in February 1998, where he studied "A new recursive algorithm for a Gaussian quadrature formula via orthogonal polynomials". He is a faculty member at the Department of Mathematics, Payame Noor University, Isfahan, Iran.

Marjan Abdechiri received her B.S. degree in Computer Engineering from Najafabad University, Isfahan, Iran, in 2007. She is now pursuing Artificial Intelligence Engineering at Qazvin University. Her research interests are computational intelligence, image processing, machine learning, and evolutionary computation.
Segmentation of MR Brain Tumor using Parallel ACO

J. Jaya (1) and K. Thanushkodi (2)

(1) Research Scholar, Anna University, Chennai, Tamil Nadu, India. jaya_hindusthan@yahoo.co.in
(2) Director, Akshaya College of Engg. & Tech., Coimbatore, Tamil Nadu, India

Abstract: One of the most complex tasks in digital image processing is image segmentation. This paper proposes a novel image segmentation algorithm that uses a biologically inspired technique based on the ant colony optimization (ACO) algorithm, which has been applied to solve many optimization problems and offers good discretion, parallelism, robustness and positive feedback. The proposed new meta-heuristic algorithm operates on the image pixel data and a region/neighborhood map to form a context in which regions can merge. Hence, we segment the MR brain image using the ant colony optimization algorithm. Compared to traditional metaheuristic segmentation methods, the proposed method has the advantage that it can effectively segment fine details. The suggested image segmentation strategy is tested on a set of real-time MR brain images.

Keywords: MRI, Segmentation, Ant colony optimization, Registration.

1. Introduction
The field known as biomedical analysis has evolved considerably over the last couple of decades. The widespread availability of suitable detectors has aided the rapid development of new technologies for the monitoring, diagnosis and treatment of patients. Over the last century, technology has advanced from the discovery of x-rays to a variety of imaging tools such as magnetic resonance imaging, computed tomography, positron emission tomography and ultrasonography. The recent revolution in medical imaging resulting from techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) can provide detailed information about disease and can identify many pathologic conditions, giving an accurate diagnosis. Furthermore, new techniques are helping to advance fundamental biomedical research. Medical imaging is an essential tool for improving the diagnosis, understanding and treatment of a large variety of diseases [1, 7, 8].

The extraordinary growth experienced by the medical image processing field in recent years has motivated the development of many algorithms and software packages for image processing. There are also some efforts to develop software that can easily be modified by other researchers; for example, the Medical Imaging Interaction Toolkit is intended to fill the gap between algorithms and the final user, providing interaction capabilities to construct clinical applications. Therefore, many software packages for the visualization and analysis of medical data are available to the research community. Among them we can find commercial and non-commercial packages. Usually, the former are focused on specific applications or just on visualization, being more robust and stable, whereas the latter often offer more features to the end user.

2. Existing Methods
Several methods have been proposed to segment brain MR images. The methods are based either on the intrinsic structure of the data or on statistical frameworks. Structure-based methods rely on apparent spatial regularities of image structures such as edges and regions. However, their performance is not satisfying when the images contain noise, artifacts and local variations, which is often the case in real data. Instead, statistics-based methods use a probability model to classify the voxels into different tissue types based on the intensity distribution of the image. Methods based on statistical frameworks can be further divided into non-parametric and parametric methods. In non-parametric methods, the density model of the prior relies entirely on the data itself, e.g. the K-nearest-neighbors (K-NN) method. Non-parametric methods are adaptive, but their limitation is the need for a large amount of labeled training data. In contrast, parametric methods rely on an explicit functional form of the intensity density function of the MR image [10, 11].

3. Proposed Method
The proposed work consists of four stages:

Image Acquisition → Preprocessing → Enhancement → Segmentation

Figure 1. Block diagram of the proposed work.

3.1 Image Acquisition
The development of intra-operative imaging systems has contributed to improving the course of intracranial neurosurgical procedures. Among these systems, the 0.5T intra-operative magnetic resonance scanner of the Kovai
Medical Center and Hospital (KMCH; Signa SP, GE Medical Systems) offers the possibility to acquire 256×256×58 (0.86 mm, 0.86 mm, 2.5 mm) T1-weighted images with the fast spin echo protocol (TR=400 ms, TE=16 ms, FOV=220×220 mm) in 3 minutes and 40 seconds. The quality of every 256×256 slice acquired intra-operatively is fairly similar to images acquired with a 1.5T conventional scanner, but the major drawback of the intra-operative image is that the slice remains thick (2.5 mm).

Images of a patient obtained by an MRI scan are displayed as an array of pixels (a two-dimensional unit based on the matrix size and the field of view) and stored in Matlab 7.0. Here, grayscale or intensity images are displayed at the default size of 256×256. The brain MR images are stored in the database in JPEG format. Fig. 2 shows the image acquisition.

Figure 2. Image acquisition: a) before preprocessing, b) after preprocessing.

3.2 Preprocessing
Preprocessing functions involve those operations that are normally required prior to the main data analysis and extraction of information, and they are generally grouped as radiometric or geometric corrections. Radiometric corrections include correcting the data for sensor irregularities and unwanted sensor or atmospheric noise, removing non-brain voxels, and converting the data so that they accurately represent the reflected or emitted radiation measured by the sensor [9].

In this work, a tracking algorithm is implemented to remove film artifacts: the high-intensity film artifacts are removed from the MRI brain image. After the removal of film artifacts, the image still contains salt-and-pepper noise.

3.3 Enhancement
Image enhancement methods improve the visual appearance of the Magnetic Resonance Image (MRI). The role of the enhancement technique is the removal of high-frequency components from the image. This stage is used to enhance the smoothness towards piecewise-homogeneous regions and to reduce the edge-blurring effect. Conventional enhancement techniques such as the low-pass filter, median filter, Gabor filter, Gaussian filter, Prewitt edge-finding filter and normalization methods are employable for this work.

The proposed system performs enhancement using a weighted median filter to remove high-frequency components such as impulsive noise, salt-and-pepper noise, etc., and obtained high PSNR and ASNR values: PSNR = 0.924, ASNR = 0.929.

Figure 3. Enhancement, and performance evaluation of the enhancement stage.

3.4 Segmentation
Segmentation is the initial step for any image analysis. The task of segmenting brain MRI images is to obtain the locations of suspicious areas to assist radiologists in diagnosis. Image segmentation has been approached from a wide variety of perspectives: region-based approaches, morphological operations, multiscale analysis, fuzzy approaches and stochastic approaches have been used for MRI image segmentation, but with some limitations. Local thresholding is used by setting threshold
values for sub-images; it requires the selection of a window size and threshold parameters. Wu et al. presented an approach in which the threshold for a pixel is set as the mean value plus the Root Mean Square (RMS) noise value multiplied by a selected coefficient in a selected square region around the thresholded pixel. Kallergi et al. compared local thresholding and region growing methods, and showed that the local thresholding method has greater stability but is more dependent on parameter selection. Woods et al. used local thresholding by subtracting the average intensity of a 15×15 window from the processed pixel; region growing is then performed to group pixels into objects. Compared with the multi-tolerance region growing algorithm and the active contour model, the speed of that algorithm is more than an order of magnitude faster than the other two.

Edge detection is a traditional method for segmentation. Many operators, such as the Roberts gradient, Sobel gradient, Prewitt gradient and Laplacian operator, have been published in the literature. Some mathematical morphological operations, such as erosion, top-hat transformation, and complicated morphological filters with multi-structure elements, can also be used; these are good at dealing with geometrically analytic aspects of image analysis problems. Stochastic approaches have also been used to segment calcifications. Stochastic and Bayesian methods have provided a general framework to model images and to express prior knowledge. The Markov Random Field (MRF) model has been used to deal with the spatial relations between the labels obtained in an iterative segmentation process, assigning pixel labels iteratively.

4. Proposed Parallel Ant Colony System (ACS)
Ant Colony Optimization (ACO) is a population-based approach first designed by Marco Dorigo and coworkers, inspired by the foraging behavior of ant colonies [13, 14, 15]. Individual ants are simple insects with limited memory, capable of performing simple actions. However, the collective behavior of ants provides intelligent solutions to problems such as finding the shortest path from the nest to a food source. Ants foraging for food lay down quantities of a volatile chemical substance named pheromone, marking the path that they follow; ants smell pheromone and decide to follow paths with high pheromone with a high probability, thereby reinforcing them with a further quantity of pheromone [12, 16]. The probability that an ant chooses a path increases with the number of ants choosing the path at previous times

4.1 Pheromone Initialization
For each ant, assign the initial pheromone value T0, and for each ant select a random pixel from the image which has not been selected previously. To determine whether a pixel has been selected or not, a flag value is assigned to each pixel: initially the flag value is 0, and once the pixel is selected the flag is changed to 1. This procedure is followed for all the ants. For each ant, a separate column for the pheromone and flag values is allocated in the solution matrix.

4.2 Local Pheromone Update
Update the pheromone values for all the randomly selected pixels using the following equation:

Tnew = (1 − ρ) · Told + ρ · T0,

where Told and Tnew are the old and new pheromone values, and ρ is the rate-of-pheromone-evaporation parameter in the local update, ranging over [0,1], i.e., 0 < ρ < 1. Calculate the posterior energy function value for all the pixels selected by the ants from the solution matrix. The ant which generates the local minimum value is selected, and its pheromone is updated using the following equation:

Tnew = (1 − α) · Told + α · ∆ · Told,

where Told and Tnew are the old and new pheromone values, α is the rate-of-pheromone-evaporation parameter in the global update (called the track's relative importance), ranging over [0,1], i.e., 0 < α < 1, and ∆ is equal to (1 / Gmin). For the remaining ants, the pheromone is updated as Tnew = (1 − α) · Told, i.e., with ∆ assumed to be 0. Thus the pheromones are updated globally. This procedure is repeated for all the image pixels. At the final iteration, Gmin holds the optimum label of the image. To further improve the result, this entire procedure can be repeated any number of times; in our implementation, we use 20 iterations.

Figure 4. Segmented MRI.
and with the strength of the pheromone concentration laid 5. Conclusion
on it. In this paper, a meta heuristic based image segmentation
In this work, the labels created from the MRF approach was presented. The multi-agent algorithm takes
method and the posterior energy function values for each advantage of various models to represent the agent’s
pixel are stored in a solution matrix. The goal of this problem solving expertise within a knowledge base and
method is to find out the optimum label of the image that perform an inference mechanism that governs the agent’s
minimizes the posterior energy function value. Initially behavior in choosing the direction of their proceeding steps.
assign the values of number of iterations (N), number of This paper details an image segmentation method based on
ants (K), initial pheromone value (T0). [Hint: we are using the ant colony optimization algorithm. The improved
N=20, K=10, T0=0.001]. accuracy rate according to the experimental results is due to
better characterization of natural brain structure.
Experiments on both real and synthetic MR images show that the segmentation result of the proposed method has higher accuracy compared to existing algorithms.

References
[1] André Collignon, Dirk Vandermeulen, Paul Suetens, Guy Marchal, "3D multi-modality medical image registration using feature space clustering", SpringerLink, Volume 905, pp. 193-204, Berlin, 1995.
[2] Alexis Roche, Grégoire Malandain, Nicholas Ayache, Sylvain Prima, "Towards a better comprehension of medical image registration", Medical Image Computing and Computer-Assisted Intervention (MICCAI'99), Volume 1679, pp. 555-566, 1999.
[3] Aaron Lefohn, Joshua Cates, Ross Whitaker, "Interactive GPU-Based Level Sets for 3D Brain Tumor Segmentation", April 16, 2003.
[4] Ahmed Saad, Ben Smith, Ghassan Hamarneh, "Simultaneous Segmentation, Kinetic Parameter Estimation and Uncertainty Visualization of Dynamic PET Images", MICCAI, pp. 500-510, 2007.
[5] Albert K.W. Law, F.K. Law, Francis H.Y. Chan, "A Fast Deformable Region Model for Brain Tumor Boundary Extraction", IEEE, Oct 23, USA, 2002.
[6] Amini L., Soltanian-Zadeh H., Lucas C., "Automated Segmentation of Brain Structure from MRI", Proc. Intl. Soc. Mag. Reson. Med. 11, 2003.
[7] Ceylan C., Van der Heide U.A., Bol G.H., Lagendijk J.J.W., Kotte A.N.T.J., "Assessment of rigid multi-modality image registration consistency using the multiple subvolume registration (MSR) method", Physics in Medicine and Biology, pp. 101-108, 2005.
[8] Darryl de Cunha, Leila Eadie, Benjamin Adams, David Hawkes, "Medical ultrasound image similarity measurement by human visual system (HVS) modelling", SpringerLink, Volume 2525, pp. 143-164, Berlin, 2002.
[9] Dirk-Jan Kroon, "Multimodality non-rigid demon algorithm image registration", Robust Non-Rigid Point Matching, Volume 14, pp. 120-126, 2008.
[10] Duan Haibin, Wang Daobo, Zhu Jiaqiang and Huang Xianghua, "Development on ant colony algorithm theory and its application", Control and Decision, Vol. 19, pp. 12-16, 2004.
[11] Dorigo M., Di Caro G., Gambardella L.M., "Ant algorithms for discrete optimization", Artificial Life, Vol. 5, No. 2, 1999.
[12] Erik Dam, Marco Loog, Marloes Letteboer, "Integrating Automatic and Interactive Brain Tumor Segmentation", Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04), IEEE Computer Society.
[13] H. He, Y. Chen, "Artificial life for image segmentation", International Journal of Pattern Recognition and Artificial Intelligence 15 (6), 2001, pp. 989-1003.
[14] J. Liu, Y.Y. Tang, "Adaptive image segmentation with distributed behavior-based agents", IEEE Transactions on Pattern Analysis and Machine Intelligence 21, June 1999, pp. 544-551.
[15] John Ashburner, Karl J. Friston, "Rigid body registration", The Wellcome Dept. of Imaging Neuroscience, 12 Queen Square, London.
[16] M. Dorigo, Mauro Birattari and Thomas Stützle, "Ant Colony Optimization: Artificial Ants as a Computational Intelligence Technique", IEEE Computational Intelligence Magazine, Nov. 2006, pp. 28-39.
[17] P.S. Shelokar, V.K. Jayaraman and B.D. Kulkarni, "An ant colony approach for clustering", Analytica Chimica Acta 509, 2004, pp. 187-195.

Acknowledgement
The author wishes to thank Dr. M. Karnan for his guidance in this area, and also wishes to thank Dr. Pankaj Metha for his suggestions on tumor recognition, drawing on his knowledge and experience in the imaging area. The MRI image data was obtained from KMCH Hospital, Coimbatore, Tamil Nadu, India.
Session Based Load Optimization Techniques to Enhance Security & Efficiency of E-commerce Transactions

R.K. Pateriya1, J.L. Rana2 and S.C. Shrivastava3

1 Associate Professor, Department of Information Technology,
Maulana Azad National Institute of Technology (MANIT) Bhopal, India
pateriyark@gmail.com
2 Professor, Department of Computer Science and Engineering,
Maulana Azad National Institute of Technology (MANIT) Bhopal, India
jl_rana@yahoo.co.in
3 Professor, Department of Electronics Engineering,
Maulana Azad National Institute of Technology (MANIT) Bhopal, India
scs_manit@yahoo.co.in
Abstract: Today internet based e-commerce has become a trend and a business necessity. Secure Socket Layer (SSL) is the world standard for cyber security. An SSL session contains temporally and logically related request sequences from the same client. In e-commerce, session integrity is a critical metric. Overload on a server can lead e-commerce applications to considerable revenue losses: response times may grow to unacceptable levels and as a result the server may saturate or even crash. Session based admission control techniques are able to control the server load. The purpose of this paper is to review various session based admission control techniques to avoid server overload. Overload control is a critical goal, so that a system can remain operational even when the incoming request rate is several times greater than the system capacity; an admission control mechanism based on sessions will maximize the number of sessions completed successfully, allowing e-commerce sites to increase the number of transactions completed, generating higher benefits and optimizing performance.

Keywords: Admission control, Application servers, Overload control, Service differentiation

1. Introduction
E-Commerce is a growing phenomenon as consumers gain experience and comfort with shopping on the Internet. Most e-commerce web site applications are session-based. Access to a web service occurs in the form of a session consisting of a sequence of individual requests. Placing an order through the web site involves further requests relating to selecting a product, providing shipping information, arranging payment agreement and finally receiving a confirmation. So for a customer trying to place an order or a retailer trying to make a sale, the real measure of web server performance is its ability to process the entire sequence of requests needed to complete a transaction. The higher the number of sessions completed, the higher the amount of revenue that is likely to be generated, as discussed in [3]. Sessions that are broken or delayed at some critical stage, like checkout and shipping, could mean loss of revenue to the web site. Security between network nodes over the Internet is traditionally provided using Secure Socket Layer (SSL). It is commonly used for secure HTTP connections where credit card information is going to be sent along a network, and this gives e-commerce the confidence it needs to allow on-line banking and shopping. SSL provides an encrypted bi-directional data stream: data is encrypted at the sender's end and decrypted at the receiver's end. It can perform mutual authentication of both the sender and receiver of messages and ensure message confidentiality. This process involves certificates that are configured on both sides of the connection. Although providing these security capabilities does not introduce a new degree of complexity in web application structure, it increases remarkably the computation time needed to serve a connection, due to the use of cryptographic techniques, becoming a CPU intensive workload.
Two problems are typically encountered with deploying e-commerce Web sites, as presented in [1,2]. First is overload, where the volume of requests for content at a site temporarily exceeds the capacity for serving them and renders the site unusable. Second is responsiveness, where the lack of adequate response time leads to lowered usage of a site and, subsequently, reduced revenues. During overload conditions, the service's response times may grow to unacceptable levels, and exhaustion of resources may cause the service to behave erratically or even crash, causing denial of service. For this reason, overload prevention in these applications is a critical issue. Several mechanisms have been proposed in [1,3,4] to deal with overload, such as admission control, request scheduling, service differentiation and service degradation.
Request scheduling refers to the order in which concurrent requests should be served. A well known result from queuing theory is that shortest-remaining-processing-time-first (SRPT) scheduling minimizes queuing time. Better scheduling can always be complementary to any other mechanism. Service differentiation is based on differentiating classes of customers so that response times of preferred clients do not suffer in the presence of overload. Service degradation is based on avoiding refusing clients as
a response to overload, but reducing the service offered to clients, for example in the form of providing smaller content.
Admission control generally requires two components: knowing the load that a particular job will generate on a system, and knowing the capacity of that system. By keeping the maximum amount of load just below the system capacity, overload is prevented and peak throughput is achieved, as discussed in [1,2]. The goal of overload control is to prevent service performance from degrading in an uncontrolled fashion under heavy load; to achieve it, it is often desirable to shed load.
The rest of the paper is organized as follows: Section 2 gives an overview of session based admission control (SBAC) techniques. Section 3 presents SSL connection differentiation and admission control. Section 4 covers a CPU-utilization-based implementation of the SBAC mechanism. Section 5 describes the adaptive admission control technique. Section 6 gives a comparative study of SBAC techniques. Section 7 concludes.

2. Overview of SBAC Techniques
Admission control is based on reducing the amount of work the server accepts when it is faced with overload. For example, admission control on a per-request basis may lead to a large number of broken or incomplete sessions when the system is overloaded. Sessions have distinguishable features from individual requests that complicate the overload control. Session-based workload gives a new interesting angle to revisit and re-evaluate the definition of web server performance. It proposes to measure server throughput as the number of successfully completed sessions. The reason for the failure of admission control techniques that work on a per-request basis, discussed in [1,2], is that they lead to a large number of broken or incomplete sessions when the system is overloaded, and hence cause revenue loss. Session integrity is a critical metric in e-commerce.
Research in admission control can be roughly categorized under two broad approaches presented in [3,5]: first, reducing the amount of work required when faced with overload, and second, differentiating classes of customers so that response times of preferred clients do not suffer in the presence of overload. This paper focuses on the first approach, reducing the amount of work for admission control.
There are two desirable, but somewhat contradictory, properties for an admission control mechanism: stability and responsiveness, discussed in [1,2]. In the case when the server receives an occasional burst of new traffic while still being under a manageable load, stability, which takes into account some load history, is a desirable property for the admission control mechanism. It helps to maximize server throughput and to avoid unnecessary rejection of newly arrived sessions. However, if a server's load during previous time intervals is consistently high and exceeds its capacity, responsiveness is very important: the admission control policy should be switched on as soon as possible, to control and reject newly arriving traffic. There is a trade-off between these two desirable properties for an admission control mechanism.
The following two values help to check an admission control's goodness, discussed in [1,2]. First is the percentage of aborted requests, which the server can determine based on client-side closed connections. Aborted requests indicate that the level of service is unsatisfactory. Typically, aborted requests lead to aborted sessions, and could serve as a good warning sign of degrading server performance. Second is the percentage of connection refused messages sent by a server in the case of a full listen queue. Refused connections are the dangerous warning sign of an overloaded server and its inevitable poor session performance. If both of these values are zero, then it reveals that an admission control mechanism uses an adequate admission control function to cope with the current workload and traffic rate. A good admission control strategy minimizes the percentage of aborted requests and refused connections (ideally to 0) and maximizes the achievable server throughput. In the following sections we discuss three techniques of session based admission control and their comparative study.

3. SSL Connection Differentiation and Admission Control
Performance when using SSL connections is 7 times lower than when using normal connections. Based on SSL connection differentiation, a session-based adaptive admission control mechanism is implemented in [3,4]. This mechanism allows the server to avoid the throughput degradation and response time increments that occur during overload conditions. The server differentiates full SSL connections from resumed SSL connections and limits the acceptance of full SSL connections to the maximum number possible without overloading the available resources, while it accepts all the resumed SSL connections. In this way, this admission control mechanism maximizes the number of sessions completed successfully, allowing e-commerce sites based on SSL to increase the number of transactions completed, thus generating higher benefit.
The SSL protocol fundamentally has two phases of operation: the SSL handshake and the SSL record protocol. Two different SSL handshake types can be distinguished, as discussed in [3,4,9]: the full SSL handshake and the resumed SSL handshake. Most of the computation time required when using SSL is spent during the SSL handshake phase, which features the negotiation between the client and the server to establish an SSL connection. The full SSL handshake is negotiated when a client establishes a new SSL connection with the server, and requires the complete negotiation of the SSL handshake, including parts that need a lot of computation time to be accomplished. The resumed SSL handshake is negotiated when a client establishes a new HTTP connection with the server but resumes an existing SSL connection; the SSL session ID is reused, hence part of the SSL handshake negotiation can be avoided, reducing considerably the computation time for performing a resumed SSL handshake. Note that there is a big difference between the time to negotiate a full SSL handshake and the time to negotiate a resumed SSL handshake, i.e. 175 ms vs. 2 ms, as given in [3,4].
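The full/resumed distinction can be illustrated with a short sketch. This is not the implementation from [3,4]; it is a minimal Python illustration that classifies connections by session-ID reuse against a hypothetical session cache, using the handshake costs quoted above (175 ms full, 2 ms resumed) to estimate the CPU time a batch of connections would demand.

```python
# Minimal sketch (not the cited papers' code): classify incoming SSL
# connections as full or resumed by session-ID reuse and estimate the
# total handshake CPU cost. Costs are the figures quoted from [3,4].

FULL_HANDSHAKE_MS = 175.0    # full SSL handshake
RESUMED_HANDSHAKE_MS = 2.0   # resumed SSL handshake (session ID reused)

def classify_and_cost(connections, session_cache):
    """connections: iterable of session-ID strings (None = no ID offered).
    Returns (num_full, num_resumed, total_handshake_ms)."""
    full = resumed = 0
    total_ms = 0.0
    for sid in connections:
        if sid is not None and sid in session_cache:
            resumed += 1                      # resumes an existing SSL session
            total_ms += RESUMED_HANDSHAKE_MS
        else:
            full += 1                         # complete handshake negotiation
            total_ms += FULL_HANDSHAKE_MS
            if sid is not None:
                session_cache.add(sid)        # cache for future resumption
    return full, resumed, total_ms

cache = {"s1", "s2"}
print(classify_and_cost(["s1", "s3", None, "s2"], cache))
# two resumed (s1, s2) and two full => 2*2 + 2*175 = 354 ms
```

The two-orders-of-magnitude cost gap is what makes differentiating the two handshake types worthwhile: accepting one extra full connection costs roughly as much CPU as serving dozens of resumed ones.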
A session oriented adaptive mechanism discussed in [3,4] performs admission control based on SSL connection differentiation: it continuously monitors incoming secure connections to the server, performs online measurements distinguishing new SSL connections from resumed SSL connections, and decides which incoming SSL connections are accepted, hence maximizing the number of sessions successfully completed. This maximum depends on the processors available for the server and the computational demand required by the accepted resumed connections.
Following are the definitions of the variables used in the calculation, taken from [3,4]. K: the sampling interval (currently defined as 2 s); O(k): the number of resumed SSL connections that arrive at the server during that interval; CTO: the average computation time entailed by a resumed SSL connection; CTN: the average computation time entailed by a new SSL connection; N(k): the maximum number of new SSL connections that can be accepted by the server during that interval without overloading; A(k): the number of processors allocated to the server. The admission control mechanism periodically calculates, at the beginning of every sampling interval k, the maximum number of new connections allowed. Since resumed SSL connections have preference with respect to new SSL connections, all resumed SSL connections are accepted. (O(k) · CTO) is the computation time required by the already accepted resumed SSL connections. Hence the maximum number of new connections allowed is:
N(k) = (K · A(k) – O(k) · CTO) / CTN    (1)

Figure 1. Completed sessions by the original Tomcat with different numbers of processors

Figure 2. Completed sessions with overload control with different numbers of processors

The above figures, taken from [3,4], show a considerable improvement when using the overload control mechanism compared to the policy without overload control, and a maximized number of completed sessions.

4. CPU Utilization Based SBAC Mechanism
A simple implementation of session based admission control based on server CPU utilization is presented in [1,2]. It measures and predicts the server utilization, rejects new sessions when the server becomes critically loaded, and sends an explicit rejection message to the client of a rejected session.
U_ac is the admission control threshold which establishes the critical server utilization level at which to switch on the admission control policy; T1, T2, ..., Ti is a sequence of time intervals used for making a decision whether to admit (or to reject) new sessions during the next time interval, and this sequence is defined by the ac-interval length; f_ac is an ac-function used to evaluate the predicted utilization. Two different values for server utilization are distinguished: U_measured(i), the measured server utilization during Ti (the i-th ac-interval), and U_predicted(i+1), the predicted utilization computed using a given ac-function f_ac after ac-interval Ti and before a new ac-interval Ti+1 begins:
U_predicted(i+1) = f_ac(i+1)    (2)
f_ac(1) = U_ac    (3)
f_ac(i+1) = (1 – k) * f_ac(i) + k * U_measured(i)    (4)
where k is a damping coefficient between 0 and 1, called the ac-weight coefficient, which covers the space between ac-stable and ac-responsive policies. A web server with an admission control mechanism re-evaluates its admission strategy on the boundaries of intervals T1, T2, ..., Ti, .... Web server behavior for the next time interval Ti+1 is defined in [1,2] in the following way: if U_predicted(i+1) > U_ac, then any new session arriving during Ti+1 will be rejected, and the web server will process only requests belonging to already accepted sessions; if U_predicted(i+1) < U_ac, then the web server during Ti+1 functions in the usual mode, processing requests from both new and already accepted sessions. This is the simplest technique, but it does not give very satisfactory results.

5. Adaptive Admission Control Technique
The predictive admission control strategy and hybrid admission policies discussed in [1,2] allow the design of a powerful admission control mechanism which tunes and adjusts itself for better performance across different workload types and different traffic loads [2].
The predictive admission control strategy evaluates the observed workload and makes its prediction for the load in the nearest future. It consistently shows the best performance results for different workloads and different traffic patterns. For workloads with short average session length, the predictive strategy is the only strategy which provides both the highest server throughput in completed sessions and no (or practically no) aborted sessions.
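As an illustration of the utilization-based rule above, the ac-function update of equations (2)-(4) and the resulting admit/reject decision can be sketched in Python. This is a hypothetical sketch, not the implementation from [1,2]; the threshold U_AC and weight K_WEIGHT below are arbitrary example values.

```python
# Sketch of the CPU-utilization-based SBAC rule (equations (2)-(4)).
# Not the implementation from [1,2]; U_AC and K_WEIGHT are example values.

U_AC = 0.85      # critical utilization threshold U_ac, equation (3)
K_WEIGHT = 0.3   # ac-weight (damping) coefficient k, 0 < k < 1

def next_f_ac(f_ac_i, u_measured_i, k=K_WEIGHT):
    """Equation (4): exponentially weighted prediction of utilization."""
    return (1 - k) * f_ac_i + k * u_measured_i

def admit_new_sessions(u_predicted):
    """Admit new sessions in the next interval only while the
    predicted utilization stays at or below the threshold U_ac."""
    return u_predicted <= U_AC

# Simulate a few ac-intervals with measured utilizations.
f_ac = U_AC                                  # equation (3): f_ac(1) = U_ac
for u_measured in [0.50, 0.70, 0.95, 0.99]:
    f_ac = next_f_ac(f_ac, u_measured)       # equation (2): U_predicted(i+1)
    print(round(f_ac, 3), admit_new_sessions(f_ac))
```

A small k keeps the prediction close to its history (the ac-stable end of the spectrum), while k near 1 makes it track the latest measurement (the ac-responsive end), which is exactly the trade-off the ac-weight coefficient is meant to tune.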
The hybrid admission control strategy tunes itself to be more responsive or more stable on the basis of observed quality of service. It successfully combines the most attractive features of both responsive and stable policies. It improves performance results for workloads with medium to long average session length.

6. Comparative Study of SBAC Techniques
The CPU utilization based implementation presented in [1,2] is the simplest implementation of session based admission control, but it can break under certain rates and not work properly. The reason is that the decision whether to admit or reject new sessions is made at the boundaries of ac-intervals, and this decision cannot be changed until the next ac-interval. However, in the presence of a very high load, the number of accepted new sessions may be much greater than the server capacity, which inevitably leads to aborted sessions and poor session completion characteristics.
The hybrid admission control strategy covered in [2] tunes itself to be more responsive or more stable on the basis of observed quality of service. It successfully combines the most attractive features of both ac-responsive and ac-stable policies. It improves performance results for workloads with medium to long average session length.
The predictive admission control strategy, also covered in [2], estimates the number of new sessions a server can accept and still guarantee processing of all the future session requests. This adaptive strategy evaluates the observed workload and makes its prediction for the load in the nearest future. It consistently shows the best performance results for different workloads and different traffic patterns. For workloads with short average session length, the predictive strategy is the only strategy which provides both the highest server throughput in completed sessions and no (or practically no) aborted sessions.
The session-based adaptive overload control mechanism based on SSL connection differentiation and admission control presented in [3,4] prioritizes resumed connections to maximize the number of sessions completed, and also dynamically limits the number of new SSL connections accepted, depending on the available resources and the number of resumed SSL connections accepted, in order to avoid server overload.

7. Conclusion
SSL is commonly used for secure HTTP connections where sensitive information is going to be sent along networks. SSL session integrity is a critical metric in e-commerce. Overload can lead e-commerce applications to considerable revenue losses or may cause response times to grow to unacceptable levels; hence overload control is a critical goal. To meet this goal, either apply a predictive or hybrid overload control strategy based on session length, which tunes itself for better performance according to different workloads, or, as an alternative approach, apply the SSL connection differentiation and admission control technique, which prioritizes resumed SSL sessions over new sessions for overload control. These session-based admission control mechanisms will maximize the number of sessions completed successfully and allow e-commerce sites to increase the number of transactions completed, therefore helping to enhance security and performance.

References
[1] L. Cherkasova, P. Phaal, "Session Based Admission Control: a Mechanism for Improving the Performance of an Overloaded Web Server", HP Laboratories Report No. HPL-98-119, June 1998.
[2] L. Cherkasova, P. Phaal, "Session-based admission control: A mechanism for peak load management of commercial websites", IEEE Transactions on Computers 51 (6), pp. 669-685, 2002.
[3] Jordi Guitart, David Carrera, Vicenç Beltran, Jordi Torres and Eduard Ayguade, "Session-Based Adaptive Overload Control for Secure Dynamic Web Applications", Proceedings of the International Conference on Parallel Processing (ICPP), pp. 341-349, 2005.
[4] Jordi Guitart, Vicenç Beltran, David Carrera, Jordi Torres, Eduard Ayguade, "Designing an overload control strategy for secure e-commerce applications", 51 (15), pp. 4492-4510, 2007.
[5] M. Harchol-Balter, B. Schroeder, N. Bansal, M. Agrawal, "Size-based scheduling to improve web performance", ACM Transactions on Computer Systems 21 (2), pp. 207-233, 2003.
[6] D. Mosberger, T. Jin, "A tool for measuring web server performance", Workshop on Internet Server Performance (WISP'98) in conjunction with SIGMETRICS'98, Madison, Wisconsin, USA, pp. 59-67, 1998.
[7] S. Elnikety, E. Nahum, J. Tracey, W. Zwaenepoel, "A method for transparent admission control and request scheduling in e-commerce web sites", 13th International Conference on World Wide Web (WWW'04), New York, USA, pp. 276-286, 2004.
[8] B. Urgaonkar, P. Shenoy, "Cataclysm: Handling extreme overloads in internet services", Tech. Rep. TR03-40, Department of Computer Science, University of Massachusetts, USA, December 2003.
[9] H. Chen, P. Mohapatra, "Overload control in QoS-aware web servers", Computer Networks 42 (1), pp. 119-133, 2003.
[10] A.O. Freier, P. Karlton, C. Kocher, "The SSL Protocol. Version 3.0", November 1996. Available: http://wp.netscape.com/eng/ssl3/ssl-toc.htm

R.K. Pateriya holds an M.Tech. and a B.E. in Computer Science & Engineering and is working as Associate Professor in the Information Technology Department of MANIT Bhopal. He has a total of 17 years of teaching experience (PG & UG).
Dr. J.L. Rana is Professor & Head of the Computer Science & Engineering department at MANIT Bhopal. He received his PhD from IIT Mumbai and his M.S. from the USA (Hawaii). He has guided six PhDs.

Dr. S.C. Shrivastava is Professor & Head of the Electronics Engineering department at MANIT Bhopal. He has guided three PhDs and 36 M.Techs, and has presented nine papers in international and twenty papers in national conferences in India.