
P-PERCENT COVERAGE IN WIRELESS SENSOR NETWORKS

by

CHAITANYA SAMBHARA

Under the Direction of Yingshu Li

ABSTRACT

Coverage in a wireless sensor network reflects how well the network monitors an area. In many cases it is impossible to provide full coverage. The key challenges are to prolong the network lifetime and to ensure connectivity so as to provide a stable network. In this thesis we first define the p-percent coverage problem, in which only p percent of the whole area is required to be monitored. We propose two algorithms, Connected P-Percent Coverage Depth First Search (CpPCA-DFS) and Connected P-Percent Coverage Connected Dominating Set (CpPCA-CDS). Through simulations we then compare and analyze their efficiency and lifetime. We show that CpPCA-CDS provides a 5 to 20 percent better active node ratio at low node density. At high node density it achieves a better distribution of the covered area, while its lifetime is only 5 to 10 percent shorter than that of CpPCA-DFS. Overall, CpPCA-CDS provides up to 30 percent better distribution of the covered area.

INDEX WORDS: Wireless sensor network, Coverage problem, DFS, CDS, Active node
ratio
P-PERCENT COVERAGE IN WIRELESS SENSOR NETWORKS

by

CHAITANYA SAMBHARA

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of

Master of Science

in the College of Arts and Sciences

Georgia State University

2008
Copyright by

Chaitanya Sambhara

2008
P-PERCENT COVERAGE IN WIRELESS SENSOR NETWORKS

by

CHAITANYA SAMBHARA

Committee Chair: Yingshu Li

Committee: Rajshekhar Sunderraman


Anu Bourgeois

Electronic Version Approved:

Office of Graduate Studies


College of Arts and Sciences
Georgia State University
December 2008

DEDICATION

Dedicated to all the teachers who taught and educated me. I dedicate this to my father

Mr. Sambhara.V.R Jogarao, my mother Sambhara Sri Lakshmi and my brother for their

invaluable support. I dedicate this to my uncle Dr. Prakash Sambhara and my aunt Ramani

Sambhara for their constant support. I dedicate this to Dr. Raj Sunderraman, Wiwek Deshmukh

and Tushar Bhagat who supported me all along my endeavors. I thank the Almighty for

everything.

ACKNOWLEDGEMENTS

I am very grateful to my advisor Dr. Yingshu Li for introducing me to the subject of

sensor networks and encouraging me throughout this work. She gave me invaluable guidance

and support at every stage of this work. I wish to thank Yiwei Wu, Dr. Naixue Xiong, Shan Gao

and Wiwek Deshmukh for their support and friendship. Working with them has been a very

enjoyable learning experience.

I want to express my sincere and deep gratitude for the sustained guidance and encouragement I

have received from Dr. Yingshu Li and Dr. Rajshekhar Sunderraman who I credit for shaping

my career. I am very grateful to them.

- Chaitanya Sambhara

TABLE OF CONTENTS

ACKNOWLEDGEMENTS v

LIST OF TABLES viii

LIST OF FIGURES ix

CHAPTER

1. INTRODUCTION 1

Sensor Network 4

Application of Sensor Networks 6

Characteristic Models of Sensor Networks 6

Network Architecture and Communication Protocols 7

Coverage Problem 12

2. RELATED WORK 14

Coverage Problem in Wireless Sensor Networks 14

Energy Conservation in Wireless Sensor Networks 19

Summary and Comparative Analysis 23

3. P-PERCENT COVERAGE AND ALGORITHMS 24

Definitions 25

Connected P-Percent Coverage Depth First Search Algorithm 28

Connected P-Percent Coverage Connected Dominating Set Algorithm 30

4. SIMULATIONS AND CONCLUSION 35

Simulations 35

Analysis 41

Conclusion 43

REFERENCES 44

LIST OF TABLES

Table 1: Observed Data for p=0.6, p=0.7 and p=0.8 36



LIST OF FIGURES

Figure 1: Sensor Node Architecture 2

Figure 2: WeC Sensor Node 3

Figure 3: Mica Family 3

Figure 4: Telos Mote 3

Figure 5: Specification Prototype 3

Figure 6: WSN Topology 20

Figure 7: Duty Cycle of a Node 21

Figure 8: Bad Distribution of Nodes 25

Figure 9: Good Distribution of Nodes 25

Figure 10: Connected Dominating Set 27

Figure 11: CpPCA-DFS 29

Figure 12: CpPCA-CDS Phase 1 32

Figure 13: CpPCA-CDS Phase 3 34

Figure 14: Active Node Ratio for p=0.6 37

Figure 15: Active Node Ratio for p=0.7 37

Figure 16: Active Node Ratio for p=0.8 37

Figure 17: Average Sensing Void Distance p=0.6 38

Figure 18: Average Sensing Void Distance p=0.7 38

Figure 19: Average Sensing Void Distance p=0.8 39

Figure 20: Network Lifetime for p=0.6 39

Figure 21: Network Lifetime for p=0.7 40

Figure 22: Network Lifetime for p=0.8 40

Figure 23: Network Lifetime for p=0.6 41



Figure 24: Network Lifetime for p=0.7 42

Figure 25: Network Lifetime for p=0.8 42



1. INTRODUCTION

There are many situations where it is very important to measure and monitor the physical

quantities of an area. Commonly monitored parameters are temperature, humidity, pressure,

wind direction and speed, illumination intensity, vibration intensity, sound intensity, power-line

voltage, chemical concentrations, pollutant levels and vital body functions.

Sensor Node: A sensor is a hardware device that measures a change in a physical characteristic of the environment or a physical condition such as temperature, pressure or smoke. Sensor nodes are equipped with analog-to-digital converters that digitize the analog data sensed by the node. [1]

The important requirements [2] of a sensor node are that:

a) It should be small in size.

b) It should consume minimal energy.

c) It should be able to operate when unattended.

d) It should be able to adapt to the changing physical environment and conditions.

e) It should be able to operate in high volumetric density when required.

A sensor node operating in a network is capable of gathering sensory information, processing the gathered data and communicating with the other connected nodes in the network. The main components of a sensor node are a microcontroller, a transceiver, external memory, a power source and one or more sensors.



Figure 1: Sensor Node Architecture [3]

Transceiver: A transceiver, as shown in Figure 1, is a device that combines a transmitter and a receiver sharing common circuitry or a single housing. If no circuitry is shared between the transmit and receive functions, the device is called a transmitter-receiver.

External Memory: From an energy perspective, the most relevant kinds of memory are the on-chip memory of the microcontroller and flash memory; off-chip RAM is rarely, if ever, used. Flash memories are used because of their low cost and storage capacity. Memory requirements are very much application dependent.

Power Source: Power in a sensor node is consumed by sensing, communication and data processing. Data communication requires the most energy, while sensing and data processing are comparatively inexpensive. Power is stored either in batteries or capacitors; batteries are the main power supply for sensor nodes.

Microcontroller: The microcontroller performs tasks, processes data and controls the functionality of the other components in the sensor node.

Figure 2: WeC Sensor Node [5] Figure 3: Mica Family [5]

Figure 4: Telos Mote [5] Figure 5: Specification Prototype [5]

Figure 2 shows a WeC sensor node and Figure 3 shows two nodes that belong to the Mica family. Figure 4 shows a Telos mote, a mote commonly used for academic purposes. Figure 5 compares a node with the tip of a pen to show how small a sensor node can be.

Sensor Network

A sensor network consists of multiple sensor nodes, each of which may be equipped with one or more sensors. A sensor network normally constitutes a wireless ad-hoc network, which supports a multi-hop routing algorithm in which several nodes can forward data packets to the base station. The nodes can communicate among themselves, with the base station, or both. [1]

A few features and requirements of a sensor network are listed below.

Application specific

A sensor network is always application specific. Different types of networks are deployed for different applications depending on the need, the type of information to be gathered and the type of algorithm that best suits a certain application. Sensor networks can never be generalized for all applications and needs.

Environment interaction

Sensor networks need to interact with different kinds of environments depending on the need, and they should be scalable when the need arises. It is not uncommon that the data transfer at a given time is very low and, at the next instant when an event happens, the entire network needs to be active and handle a much larger amount of data transfer. Hence, highly scalable solutions are required for sensor networks.

Energy

The energy supply is very limited in a sensor node and for the entire network. It is very

important to minimize the energy consumption of the entire network. Most of the time, the

battery of the sensor node is not rechargeable and it is very important to prolong the life of the

node in order to prolong the life of the entire network.

Self configurability

Similar to ad hoc networks, WSNs are most likely required to self-configure into

connected networks, but the difference in traffic, energy trade-offs, etc. could require new

solutions. This includes the need for sensor nodes to learn about their geographical position.

Dependability and Quality of Service

In some cases, occasional delivery of a packet can be more than enough; in other cases, very high reliability requirements exist. The packet delivery ratio is an insufficient metric; what is relevant is the amount and quality of information that can be extracted at given sinks about the observed objects or area. Moreover, this information has to be put into perspective with the energy required to obtain it.

Data centric

Most importantly, the low cost and low energy supply will require, in many application

scenarios, redundant deployment of wireless sensor nodes. As a consequence, the importance of

any one particular node is considerably reduced as compared to traditional networks. More

important is the data that these nodes can observe. It is very important that the same data should

be sensed by multiple sensors so that failure of a single node does not affect the entire network.

Simplicity

Since sensor nodes are small and energy is scarce, the operating and networking software

must be simple. This simplicity may also require breaking with conventional layering rules for

networking software, since abstractions typically cost time and space.



Application of Sensor Networks

Based on such a technological vision, new types of applications become possible.

Applications include environmental control such as fire fighting or marine ground floor erosion

but also installing sensors on bridges or buildings to understand earthquake vibration patterns;

surveillance tasks of many kinds like intruder surveillance in premises; deeply embedding

sensing into machinery where wired sensors would not be feasible, e.g., because wiring would be

too costly, could not reach the deeply embedded points, limits flexibility, represents a

maintenance problem, or disallows mobility of devices; tagging mobile items like containers or

goods in a factory floor automation system or smart price tags for foods that can communicate

with the fridge; etc. Also classes of applications include car-to-car or in-car communication.

Characteristic Models of Sensor Networks

Depending on the application needs, the characteristics of sensor networks can vary. Some examples are:

Event Detection

In an event-driven model, the network reports to the base station only when it detects an event; otherwise the base station can safely assume that no event has occurred. In a typical event-based model, nodes are activated at certain intervals, but it is ensured that there is always a certain minimum number of nodes in an active state that can measure the event. As soon as an event occurs, these nodes may activate the other nodes and send the data packets to the base station in a timely manner, so that the event is detected soon and an appropriate action can be taken.



Continuous Data Delivery

In a continuous data delivery model, the sensor nodes continuously report the gathered data to the base station. The nodes have the capability to process the data and then transmit it to the other nodes or report directly to the base station; how this is done is application specific. The cost of transmission is much higher than the cost of processing the data, so it is always preferable to process the data among the nodes before sending useful information to the base station. In such models it is crucial to prolong the lifetime of the network, as some or all of the network is always active and transmitting data.

Observer-Initiated Data Delivery

In an observer-initiated data delivery model it is entirely up to the user when he/she wants to extract the information. Data transmission takes place only when the user requests it.

Hybrid Data Delivery Model

A hybrid model can be a combination of any of the models described above, depending mostly on the needs of the application. For example, a model may periodically send the data if the user does not extract the information within a certain time.

Network Architecture and Communication Protocols

Due to the principle differences in application scenarios and underlying communication

technology, the architecture of such WSNs will be drastically different both regarding a single

node and the network as a whole.

Network architecture

The network architecture as a whole has to take various aspects into account. The protocol architecture has to take both application-driven and energy-driven points of view. Quality of service, dependability, redundancy and imprecision in sensor readings all have to be considered.

The addressing structures in WSNs are likely to be quite different: Scalability and energy

requirements can demand an “address-free structure”. Distributed assignments of addresses can

be a key technique, even if these addresses are only unique in a two-hop neighborhood. Also,

geographic and data-centric addressing structures are required.

A crucial and defining property of WSNs will be the need and their capacity to perform

in-network processing. This pertains to aggregation of data when multiple sensor readings are

converge-casted to a single or multiple sinks, distributed signal processing, and the exploitation

of correlation structures in the sensor readings in both time and space. In addition, aggregating

data reduces the number of transmitted packets.

Based on such in-network processing, the service that a WSN offers at the level of an entire network is still an ill-defined concept. It is certainly not the transport of bits from one place to another, but any simple definition of a WSN service ("provides readings of environmental values upon request", etc.) is also not going to capture all possible application scenarios.

As these services are, partially and eventually, invoked by nodes outside the system,

gateway concepts are required: How to structure the integration of WSNs into larger networks,

where to bridge the different communication protocols (starting from physical layer upwards) are

open issues.

Communication protocols:

Physical layer

With respect to “classical” radio transmission, the main question is how to transmit as

energy efficiently as possible, taking into account all related costs (overhead, possible

retransmissions etc.). Comparatively little work exists regarding protocols well suited to the

needs of WSNs.

MAC

Medium access has been, and still is, one of the most active research areas for WSNs. In most of the work, the question is how to ensure that the sensor nodes can sleep as long as possible, given that a sleeping node is not able to communicate.

Link Layer

More recent, on-going work is targeted at taking into account the degree of redundancy

that an aggregated message carries on the link layer, which is much more specific to the situation

in wireless sensor networks.

Addressing Concepts

Addressing questions in WSNs deal with some issues that also appear in traditional ad hoc networks. For example, the problem of distributed address assignment leverages concepts from ad hoc networks, but has some WSN-specific twists. Geographic addresses are also important in WSNs, since they are required by many applications (e.g. environmental monitoring) and have proved very helpful in networking tasks like routing. More interestingly, content-based addresses seem a more natural match to WSN needs than conventional addresses.

Time synchronization

Since time plays a big role in WSNs, it is important to ensure that observations are

annotated with the correct time, to synchronize sleeping cycles, etc.

Localization

Localizing sensor nodes by means of the network itself, i.e., computing a sensor network

coordinate system, is an extraordinarily popular research area. Investigated mechanisms include

exploiting received signal strength indicators, time of arrival, time difference of arrival, or angle

of arrival. Additionally, problems like the integration of beacons or anchor nodes with precise

information, the iterative increase in precision by distributed algorithms, are popular and

important problems.

Topology Control

In a densely deployed network, performing a broadcast by simple flooding results in a

large overhead of unnecessarily repeated information as many nodes in the vicinity will repeat

the message, even though many other nodes have already done so.

Network Layer

Apart from MAC and topology control, the network layer is surely the area with the most

active research interest. It shares some commonalities with ad hoc networking, but the more

stringent requirements regarding scalability, energy efficiency and data-centricness require new

solutions. Nonetheless, the traditional routing problems of unicast, multicast, anycast, and

convergecast routing exist in WSN for various purposes; also, the less conventional geographic

routing and the relatively new and characteristic data-centric routing are present.

Unicast

In unicast, one node communicates with exactly one other sensor node, or with the base station, lying within its proximity (communication range).

Multicast

Similar to the unicast case, multicast is also a function that will be required in some WSN

application areas. In a multicast system, one node communicates with multiple nodes either

directly in its communication range or through hops. The maximum number of nodes to which it

can communicate is limited though.

Anycast

In anycast, a node sends data to, or tries to communicate with, any potential node it can find within its range. Anycast refers to the case where a message is sent to an object name that has potentially multiple instantiations in the network, and any of these will do (typically, the closest instantiation is preferred). This functionality is usually considered useful in the context of service discovery.

Convergecast

This concept describes the notion of collecting data from several sources at a central

point. It is likely to be a crucial abstraction in WSNs, and it ties in closely with Geographic

Routing.

Geographic routing

Geographic routing is the idea of using an area instead of a node identifier as the target of

a packet; any node that is positioned within the given area will be acceptable as a destination

node and can receive and process a message. In the context of sensor networks, such geographic

routing is evidently important to request sensor data from some region (“Request temperature in

living room”); it will also often be combined with some notion of multicast, specifically,

stochastically constrained multicast.
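The forwarding rule behind much geographic routing can be sketched as greedy forwarding toward the target region. This is a minimal illustration, not the scheme of any particular paper; the node names, coordinates and rectangular target region below are hypothetical.

```python
import math

def in_region(pos, region):
    """Check whether a position lies inside a rectangular target region."""
    (xmin, ymin), (xmax, ymax) = region
    return xmin <= pos[0] <= xmax and ymin <= pos[1] <= ymax

def greedy_geographic_forward(positions, neighbors, src, region):
    """Forward a packet hop by hop toward the centre of the target region.

    Each node hands the packet to the neighbour closest to the region
    centre; delivery succeeds once any node inside the region is reached.
    """
    (xmin, ymin), (xmax, ymax) = region
    centre = ((xmin + xmax) / 2, (ymin + ymax) / 2)
    dist = lambda n: math.dist(positions[n], centre)
    path, current = [src], src
    while not in_region(positions[current], region):
        nxt = min(neighbors[current], key=dist, default=None)
        if nxt is None or dist(nxt) >= dist(current):
            return path, False   # stuck in a local minimum (coverage void)
        path.append(nxt)
        current = nxt
    return path, True

# Hypothetical 4-node line topology; the target region lies on the right.
positions = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (3, 0)}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
path, ok = greedy_geographic_forward(positions, neighbors, "A", ((2.5, -1), (3.5, 1)))
```

A production protocol additionally needs a recovery mode for the "stuck" case, which this sketch only reports.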

Data-centric routing

Data-centric routing is perhaps the core abstraction of WSNs. A node with new or given sensor readings publishes the values; interested nodes can subscribe to such events. As an example, a node could subscribe to events like "provide me all readings that exceed a temperature of 50 degrees Celsius". Probably the most popular and often-cited approach in this context is "directed diffusion", even though some of its performance and functional characteristics are not entirely understood or explained.
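The publish/subscribe pattern behind data-centric routing can be illustrated with a small sketch. The class and the reading format below are our own hypothetical illustration; a real system such as directed diffusion also handles interest propagation and in-network routing, which this sketch omits.

```python
class Dissemination:
    """Minimal data-centric delivery: readings go to matching subscribers."""

    def __init__(self):
        self.subscriptions = []   # list of (predicate, sink) pairs

    def subscribe(self, predicate, sink):
        """Register interest, e.g. 'temperature readings above 50 C'."""
        self.subscriptions.append((predicate, sink))

    def publish(self, reading):
        """Deliver a reading to every sink whose predicate matches it."""
        node, attribute, value = reading
        for predicate, sink in self.subscriptions:
            if predicate(attribute, value):
                sink.append(reading)

inbox = []
net = Dissemination()
# The subscription from the example in the text: temperatures above 50 C.
net.subscribe(lambda attr, v: attr == "temperature" and v > 50, inbox)
net.publish(("n1", "temperature", 48))   # below threshold, ignored
net.publish(("n2", "temperature", 53))   # matches the subscription
net.publish(("n3", "humidity", 80))      # wrong attribute, ignored
```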

Coverage Problem

The coverage problem in sensor networks is a widely studied and researched topic. The initial motivation for sensors and the ad-hoc networks formed by sensor nodes was to detect and cover areas of interest where it would be economically feasible to deploy them. The coverage problem faces varied issues and challenges. The first, and foremost, is to prolong the lifetime of the network monitoring an area of interest. The second is to process the data efficiently: sensor nodes are equipped with computation abilities in addition to sensing and transmitting data, and the cost of transmitting data can be five hundred times that of processing it, so it is always preferable to compute and process the sensed data before transmitting it to the base station. The third issue is the stability of the network. Sensor nodes are prone to failure for varied reasons, so it is very important to ensure that enough nodes cover a certain area in case a few nodes fail. This also brings up the need to ensure connectivity: if an active node fails for some reason, other nodes should be able to replace it and ensure

active node fails due to some reason, we should have some other nodes to replace it and ensure

that the connectivity of the network is maintained. The fourth issue of concern is that, for a given sensor node, the transmission range is much larger than the sensing range; hence it is important to ensure a good coverage distribution of the network, with nodes distributed evenly, or more densely in the areas of particular interest.

Partial Coverage Problem

Not much work has been done on partial coverage, and in our work we address this problem. There are many applications where it is unnecessary and/or impossible to provide full coverage; instead, partial coverage is sufficient to monitor the area and estimate its conditions. In this work we address the partial coverage problem by providing a definition which we call p-percent coverage, which requires only p percent of the area to be monitored at any time. If p percent of the area is well monitored we can safely estimate the conditions of the rest of the area. The value of p varies and depends on the conditions and requirements of the user. In this work we monitor the area through simulations with different levels of p. We also address the issue of connectivity to provide a stable network. We propose two algorithms, Connected P-Percent Coverage Depth First Search and Connected P-Percent Coverage Connected Dominating Set, and test them for required coverage levels of sixty, seventy and eighty percent. Through simulations we show that the two algorithms obtain good results. We discuss the algorithms in detail in Chapter 3, and the simulations and analysis in Chapter 4.
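The p-percent coverage condition can be checked numerically by sampling the monitored area on a grid. The following is a simple sketch under stated assumptions (circular sensing disks of a common range rs, a rectangular area); the function names are our own, not part of the algorithms in this thesis.

```python
import math

def covered_fraction(sensors, rs, area, step=0.25):
    """Estimate the fraction of a rectangular area covered by sensors.

    sensors : list of (x, y) active-node positions
    rs      : common sensing range of all nodes
    area    : (width, height) of the monitored rectangle

    The area is sampled on a grid; a sample point counts as covered
    when some sensor lies within sensing range of it.
    """
    width, height = area
    covered = total = 0
    y = step / 2
    while y < height:
        x = step / 2
        while x < width:
            total += 1
            if any(math.dist((x, y), s) <= rs for s in sensors):
                covered += 1
            x += step
        y += step
    return covered / total

def is_p_percent_covered(sensors, rs, area, p):
    """True when at least a fraction p of the area is monitored."""
    return covered_fraction(sensors, rs, area) >= p
```

For example, a single node at the centre of a 2 x 2 area with rs = 2 covers the whole area, while the same node with rs = 0.5 falls well short of 60 percent coverage.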



2. RELATED WORK

Coverage Problem in Wireless Sensor Networks

Coverage in general is the measure of the “Quality of Service” of a sensor network.

Paper [6] discusses the K-coverage problem in sensor networks. The authors describe K as a user-specified value: the minimum number of sensor nodes required to monitor any given point in the field being monitored. If, at any given time, every point in the field is covered by at least K sensors, the region is said to be K-covered. The Sensor Scheduling for K-Coverage problem is NP-hard.
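The K-coverage condition at sampled field points can be expressed directly. This is a generic sketch assuming circular sensing disks of a common range rs, not the scheduling algorithm of [6]; the function names are our own.

```python
import math

def coverage_level(point, sensors, rs):
    """Number of sensors whose sensing disk (range rs) contains the point."""
    return sum(1 for s in sensors if math.dist(point, s) <= rs)

def is_k_covered(points, sensors, rs, k):
    """True when every sampled field point is covered by at least k sensors."""
    return all(coverage_level(p, sensors, rs) >= k for p in points)

# Hypothetical field sampled at two points, three sensors with range 1.5.
sensors = [(0, 0), (1, 0), (0, 1)]
level = coverage_level((0.5, 0.5), sensors, 1.5)   # all three disks contain it
```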

The algorithm they employ divides the sensors into disjoint subsets and builds a schedule that activates the subsets successively to extend the network lifetime. The authors propose a greedy algorithm which they call PCL (Perimeter Coverage Level) greedy selection, where the PCL is the number of sensors in the same set that cover a given point on a node's sensing-area perimeter. To make the algorithm work effectively and efficiently they propose a density control scheme for sensor deployment that reduces the number of unallocated sensors, deploying more nodes the closer the monitored area is to the border in order to extend the network lifetime. They do not discuss deploying nodes in the corners; the scheme assumes that the object of interest is in the center, which may not hold for a random deployment.

In [7] the authors discuss placing nodes in a specific manner. Their approach can deal with an arbitrarily shaped monitored area, as opposed to the open and/or rectangular areas most existing work depends on, and it allows sensors to have any ratio between communication range and sensing range. The objective is to place the sensors such that both connectivity and coverage are attained while the number of deployed sensors is minimized. The idea is to partition the arbitrarily shaped region into a number of sub-regions such that each sub-region is a polygon.

The problem then becomes deploying sensors for each sub-region. In the field of pre-deployment the following fact is widely known: three sensors with sensing range rs cover the maximum continuous area when they are located at the vertices of an equilateral triangle whose edge length is √3·rs. If the ratio between communication range and sensing range is at least √3, both the connectivity and coverage requirements are satisfied when sensors are placed at those vertices. The proposed algorithm works only with uniform sensor networks. The solution is definitely not optimal, since there may be unnecessary coverage overlap between neighboring sub-regions (besides that between sensors within the same sub-region). Also, it is not easy to place the sensors exactly at the positions computed by the algorithm, and it is really difficult to extend the algorithm to regions with curved portions on the border. There may be no redundant nodes, but using too many of them can make the system costly, and if this approach is used the lifetime of the network can be very low.
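The well-known equilateral-triangle placement fact (three sensors at the vertices of an equilateral triangle of side √3·rs) translates into a triangular lattice of positions: rows spaced 1.5·rs apart with a horizontal pitch of √3·rs, alternate rows offset by half a pitch. The sketch below generates such positions for a rectangular region; the function name and region shape are our own assumptions, not the sub-region algorithm of [7].

```python
import math

def triangular_lattice(width, height, rs):
    """Generate sensor positions on a triangular covering lattice.

    Rows are spaced 1.5*rs apart and positions within a row are
    sqrt(3)*rs apart, with every other row offset by half that pitch,
    so mutually nearest sensors form equilateral triangles of side
    sqrt(3)*rs.
    """
    pitch = math.sqrt(3) * rs          # horizontal spacing within a row
    row_h = 1.5 * rs                   # vertical spacing between rows
    positions = []
    row = 0
    y = 0.0
    while y <= height:
        x = pitch / 2 if row % 2 else 0.0   # offset every other row
        while x <= width:
            positions.append((x, y))
            x += pitch
        y += row_h
        row += 1
    return positions
```

Each generated point is exactly √3·rs from its nearest neighbours, matching the placement fact quoted above.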

In [8] the authors also consider the connectivity problem, attacking the combined problem by introducing the concept of a K-benefit path. The K-benefit is defined in terms of the number of "new" valid sub-elements a sensor contributes (a valid sub-element is an area covered by the same set of sensors and lying inside the considered region); a sub-element is "new" if it is currently covered by fewer than K sensors. The centralized algorithm in this paper constructs the set cover Sj by greedily adding, from the set of sensors S* ⊆ S, the sensor with the maximum K-benefit. The set S* is the set of sensors that can communicate with at least one sensor already in the set cover Sj, so connectivity is maintained. The authors also prove that the size of the set cover is at most 2r(log Kn)|OPT|, where r is the maximum communication distance, in terms of number of hops, between any two sensors whose sensing regions overlap. The paper is very good in that it gives various algorithms for consideration and compares them. However, redundancy of nodes is not addressed once nodes are deleted, which could be a big loss in large networks. Other issues not addressed are whether sensors can be planned during deployment and whether the number of nodes used should be minimized; moreover, the energy remaining in the nodes is never taken into consideration, only the distance from a certain node.

In "Coverage by Randomly Deployed Wireless Sensor Networks" [9] the authors define K-coverage similarly. This work addresses the probability that a region is covered: the deployment of nodes is random, and the paper studies how this probability changes as the number of sensors in the region or the sensing range changes. The authors consider other work in which the probabilistic study of K-coverage by a random point process is done for K=1, that is, each point in the region needs to be covered by at least one sensor; such work, however, does not assume a Euclidean metric, which is in fact more relevant to applications. In this paper the authors address the asymptotic (K+1)-coverage of a region by a uniform point process and show that, for any K greater than or equal to 1, the boundary effect dominates the probability of (K+1)-coverage.

In [10] the authors discuss how sensors can be efficiently deployed for maximum coverage when no global information is available. Their algorithm, Self-Deployment by Density Control (SDDS), uses density control at each node and is shown to outperform other self-deployment algorithms. Each node calculates its own density and then adjusts its location from high density to low density. Each node is assumed to be able to detect the obstacles in its sensing range and their locations, and to compute the angle θ and the distance between them. The density of sensor nodes in the communication area is defined as D = Na / km, where Na is the number of nodes within km. Each node is assumed to have a unique member ID and is given a cluster status of "Undecided", "Cluster-head" or "Member". To form a cluster, each node initially has the status undecided. The nodes communicate with each other and exchange their IDs, and the node with the smallest member ID is selected as the cluster-head. The cluster-heads so formed change their status to cluster-head and notify their neighbors. The nodes that receive messages from a cluster-head change their status to member and transmit their status to their immediate neighbors.
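The smallest-ID election described above can be sketched as a single-round rule. This is our own simplification of the scheme in [10]: a node whose ID is smaller than those of all its neighbors declares itself a cluster-head, and every other node joins the smallest-ID head it can hear, staying undecided if none is in range yet.

```python
def elect_clusters(neighbors):
    """One round of smallest-ID cluster formation.

    neighbors maps node id -> set of neighbour ids.  A node becomes a
    cluster-head when its id is the smallest in its neighbourhood;
    other nodes join the neighbouring head with the smallest id.
    """
    # A node with the locally smallest id elects itself cluster-head.
    heads = {n for n in neighbors if all(n < m for m in neighbors[n])}
    status = {}
    for n in neighbors:
        if n in heads:
            status[n] = ("cluster-head", n)
        else:
            nearby = sorted(h for h in neighbors[n] if h in heads)
            # In a single round a node may hear no head at all; it then
            # remains undecided until a later round of the protocol.
            status[n] = ("member", nearby[0]) if nearby else ("undecided", None)
    return status
```

On a small hypothetical topology such as {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}, node 1 becomes the head, nodes 2 and 3 join it, and nodes 4 and 5 remain undecided for this round.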

Phases in Density Control: The first phase is the initialization phase, in which the node activates itself, studies its surroundings and then becomes either a cluster-head or a member as described above. The second phase is "Goal Selection", in which the nodes calculate their local density to estimate their next position; once a node has the full angle and distance data, the density is calculated in every smaller communication area. The third phase is "Goal Resolution", in which a node decides which cluster it should belong to depending on the density of the area in which it resides. The fourth and final phase is "Execution", which ensures that the difference in density between any two given clusters is less than a threshold. This is crucial for keeping every cluster's density balanced: the smaller the difference, the more evenly the nodes have been deployed. The SDDS algorithm iterates through the above phases to ensure this.

In [11] the authors take a very unique and different approach to provide K-Coverage as it

uses “Hybrid Networks” which includes both static and mobile sensor nodes. Though some

nodes are considered to be mobile, but their mobility is restricted to move only once and limited

to move over only a short distance. The authors define a new metric “Over-Provisioning Factor”

to indicate the efficiency of network deployment. Mathematically it is defined as ή=Λ/k where Λ


18

is the required sensor density, k the defined k coverage and ή is the over-provisioning factor. The

smaller is the value of ή the more efficient is the deployment of nodes in providing k-coverage.

The most important reason for employing k-coverage is to improve the network lifetime. The authors prove in this work that mobility of nodes can significantly improve coverage, but they do not consider the cases where it can be very difficult to deploy mobile nodes. For example, in rough terrain and high-friction areas mobility can be quite a challenge, and the extra expense of a mobile node may not be fruitful since it may behave just like a static node. However, the significance of this work cannot be undermined, and there are quite a few applications where mobility can be an added advantage.

In [12] the authors discuss three different algorithms. These algorithms essentially divide the sensors into different covers and activate the covers iteratively in round-robin fashion. In the SET K-COVER problem we have a finite set S of elements which corresponds to the areas to be monitored. This formulation can also be used for event-driven querying and monitoring. The three algorithms discussed here are:

a) Randomized Algorithm: Each sensor is assigned to a cover chosen uniformly at random. This algorithm partitions the sensors into covers and is executed in parallel at each sensor, starting from initialization at time t=0.

b) Distributed Greedy Algorithm: As opposed to the Randomized Algorithm, this algorithm gives a deterministic guarantee that the produced partition covers at least half as many areas as the best possible partition. It works in two phases, namely a preprocessing phase and a partition phase.

c) Centralized Greedy Algorithm: This algorithm provides a better approximation ratio than the distributed greedy algorithm. It is similar to the Distributed Greedy Algorithm except that each area is assigned a different weight depending on the number of subsets containing a certain minimum area at a given time stamp which have not yet been assigned to a cover.
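The randomized algorithm in (a) can be sketched in a few lines; the sensor count, the value of k, and the fixed seed below are illustrative only.

```python
import random

def randomized_k_cover(sensor_ids, k, seed=0):
    """Assign each sensor independently and uniformly to one of k covers."""
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    covers = [[] for _ in range(k)]
    for s in sensor_ids:
        covers[rng.randrange(k)].append(s)
    return covers

covers = randomized_k_cover(range(12), k=3)
# The k covers are then activated one per round, in round-robin fashion.
```

Because every sensor lands in exactly one cover, the covers form a partition, which is the property the round-robin activation relies on.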

Energy Conservation in Wireless Sensor Networks

Conserving energy and prolonging the lifetime of a network is the single most important criterion in most research on sensor networks. In [13] the authors discuss several protocols for energy conservation. They divide the protocols into two broad categories, namely Topology Control Protocols and Power Save Protocols.

Topology Control Protocols: Topology control deals with coordinating the nodes' decisions regarding their transmission ranges so as to generate a network with desired properties while reducing the energy consumption at the nodes. Such protocols are very useful when the transmission range is small. Topology control protocols are fully distributed and rely on locally available information.

Power Save Protocols: In such protocols the nodes are put to sleep and activated according to the requirements and time intervals. Such protocols are implemented when the required transmission range is large, and maintaining connectivity is their biggest challenge. In such cases a certain node is assumed to play the role of a leader, controlling the sleeping and active status of the member nodes in its set. This node, sometimes called a gateway node, can be pre-assigned or can be elected by the nodes according to some pre-defined criterion; the remaining energy of a node is often the most influential one.
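A minimal sketch of electing the gateway by the remaining-energy criterion mentioned above; the energy table and the ID-based tie-break are illustrative assumptions.

```python
def elect_gateway(residual_energy):
    """residual_energy: {node_id: energy remaining}.
    The node with the most remaining energy becomes the gateway; ties go
    to the smaller node ID so every node reaches the same decision."""
    return min(residual_energy, key=lambda n: (-residual_energy[n], n))

leader = elect_gateway({7: 4.2, 3: 9.1, 5: 9.1})  # node 3 wins the tie
```

Re-running the election periodically lets the gateway role rotate as batteries drain, which is exactly why remaining energy is a natural criterion here.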

In [14] the authors describe a multi-criteria architecture. They use data aggregation to a single sink as discussed above; a sink node is necessary for power saving protocols to collect, process and possibly forward the data. The architecture they discuss is called the "Multi-Criteria Message Forwarding Architecture".

Figure 6: WSN Topology [14]

As shown in Figure 6, the architecture distinguishes three node types. Sensing Nodes (SN): sensing nodes sense the data and send it directly to the sink node; if the sink node is beyond the communication range, they pass the data packets on to communication nodes. Communication Nodes (CN): communication nodes only relay the data they receive from the sensing nodes towards the sink node. Sink Node: the final recipient of the communication nodes' traffic is the sink node, which can forward the data to another sink node or to the base station.

A typical node has a duty cycle according to which the node wakes up, sleeps and works.

Figure 7: Duty Cycle of a Node [14]

A node stops any communication or computation during the sleep mode. According to the duty cycle it changes its state to idle. When it listens to or receives a signal from a neighbor node, it activates, sends or receives the sensed data, and eventually goes back to idle and then finally to the sleep mode. In this work the authors present the idea of sleeping and active nodes to maximize the network lifetime.

In [15] the authors propose a new scheme, called energy aware routing, that occasionally uses sub-optimal paths to provide substantial gains. They propose this in contrast to protocols which are low power, scale with the number of nodes and are fault tolerant (to nodes that go up or down, or move in and out of range) but which find optimal paths and then burn the energy of the nodes along those paths, leaving the network with a wide disparity in the energy levels of the nodes and, eventually, disconnected subnets.

In the energy aware routing protocol the nodes in the network burn energy more equitably, so the nodes in the center of the network continue to provide connectivity for longer and the time to network partition increases. Hence, instead of a single path, a communication uses different paths at different times, so that no single path gets energy depleted. The protocol is also quick to respond to nodes moving in and out of the network, and has minimal routing overhead. The primary metric of interest is network survivability.

The approach they take is that every time data is to be sent from the source to the destination, one of the paths is randomly chosen according to the probabilities. This means that no path is used all the time, preventing energy depletion. Also, different paths are tried continuously, improving tolerance to nodes moving around the network. This is indeed an impressive idea for saving energy in a holistic manner across the entire network. One problem could be that in the case of event detection we may not really need to conserve energy on all the paths, because only a few possible paths need to be used, and only occasionally. Also, in some applications there is an energy imbalance among the nodes: a few nodes might be located where they are always busier than others, and it might not be possible to divert their work to some other randomly chosen path. We also cannot always assume that the energy replenishing capability of the nodes is exactly the same throughout.
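The probabilistic path choice can be sketched as below. The three candidate paths and their probabilities are illustrative stand-ins for the cost-derived probabilities used in [15].

```python
import random

def choose_path(paths, probabilities, rng=None):
    """Draw one path according to its probability, so no single path is
    used every time and no single route gets energy-depleted alone."""
    rng = rng or random.Random()
    return rng.choices(paths, weights=probabilities, k=1)[0]

paths = [("A", "B", "Sink"), ("A", "C", "Sink"), ("A", "D", "E", "Sink")]
probs = [0.5, 0.3, 0.2]
picked = choose_path(paths, probs, random.Random(1))
```

Over many transmissions the load spreads in proportion to the weights, which is what keeps the energy drain equitable across routes.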

In [17] the authors propose that sensor networks can increase their usable lifetimes by extracting energy from their environment. They propose to provide the network with the capability to automatically feed itself from its environment, which opens up the possibility of achieving significantly longer lifetimes and reducing the battery size. The idea is that more work can be extracted out of the same energy environment if the task distribution among nodes is adapted to the detailed characteristics of environmental energy availability. The harvesting problem is to extract the maximum work out of a given energy environment. The distributed framework is referred to as the environmental energy harvesting framework (EEHF). It adaptively learns the energy properties of the environment and the renewal opportunity at each node through local measurements, and makes the information available in a succinct form for use in energy aware task assignment such as load balancing, leader election for clustering techniques, and energy aware communication. In this work, however, the workload in the network may not follow the replenishment cycles: over some time intervals the energy consumed at a node with a recharging opportunity may be more than the energy gained from the environment, reducing its residual battery below that of a node with no recharging. Knowing only the residual energy may not be sufficient to decide how much extra energy can be consumed at the energy-rich nodes to save energy at the constrained nodes without jeopardizing the richer nodes' own lifetime.

Summary and Comparative Analysis

From the above discussion we can conclude that prior work on the coverage problem addressed only full coverage. Most of the work discussed above addressed the K-coverage problem and different ways to ensure connectivity. We also discussed the trade-offs between coverage, mobility, density and connectivity. The methods for prolonging the lifetime through energy saving protocols rely either on recharging, on planned distribution of nodes, or on putting nodes to sleep at certain intervals. Our work, however, focuses not on full coverage but on efficiently monitoring a region while also ensuring connectivity. We also focus on the active node ratio, which is missing in the works discussed above. We judge our algorithms based on the lifetime of the network, and we also evaluate our work on the basis of the ratio of nodes required to cover the required percentage of the area.



3. P-PERCENT COVERAGE AND ALGORITHMS

As discussed in the previous chapter, sensor networks find numerous applications in covering a region or an area for surveillance, target tracking and environmental monitoring, to name a few. However, as has been discussed throughout this study, WSNs are severely restricted by their constraints and limited capabilities. Most importantly, due to the resource constraints of WSNs it can be almost impossible and impractical to cover the entire region and provide full coverage. Even if one aims to cover the entire region, it may be too expensive an effort for something which can be done at less cost.

The idea discussed in this chapter is that it is not always necessary to attempt to cover the whole region. If a certain portion or a crucial region of the area being considered is covered and monitored properly, then the topography of the entire area can be well estimated. If only a partial area is covered, the efficiency of the coverage can be increased, subsequently prolonging the lifetime of the entire network. One of the most important goals of a sensor network is to prolong its lifetime, and this can be achieved through partial coverage without compromising the quality of service.

In this work we investigate the p-percent coverage problem. Instead of monitoring the entire region, it only requires p percent of the area to be monitored efficiently. The focus is also on maintaining the connectivity of the network while monitoring p percent of the area.

Before discussing further it is important to introduce a few concepts and definitions.

Definitions:

Definition 1: Coverage Increment

One metric used in p-percent coverage is the "coverage increment" of a node, which is determined by the node's coverage degree. The coverage increment Ci is the addition a node contributes to the current percentage of the area covered.

Definition 2: Sensing Void Distance

A very important criterion for determining the efficiency of an algorithm is the "sensing void distance" (dsv): the distance between a sensing void and the closest point within the sensing range of an active sensor. A sensing void is a region of the area under consideration in which no point is monitored by any active sensor node of the network. It is hence very important to distribute the nodes properly so as to minimize the sensing voids in the region. The sensing void plays a crucial role in determining the quality of the partial coverage, which in turn determines the coverage accuracy. The smaller the sensing void, the better, as shown in Figure 8 and Figure 9.

Figure 8: Bad Distribution of Nodes Figure 9: Good Distribution of Nodes



The node distribution shown in Figure 9 is much better than the distribution in Figure 8: the sensing void distance dsv is much smaller in the second case.
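On a discretized field, dsv can be estimated as sketched below. The grid step, field side, sensing range and node positions are illustrative, and the sketch averages the distance over all void grid points, matching the "average sensing void distance" compared later in the simulations.

```python
import math

def avg_sensing_void_distance(nodes, sensing_range, side, step):
    """Average distance from each uncovered grid point to the nearest
    covered grid point; 0.0 if the grid is fully covered."""
    pts = [(x, y) for x in range(0, side + 1, step)
                  for y in range(0, side + 1, step)]
    covered = [p for p in pts
               if any(math.dist(p, n) <= sensing_range for n in nodes)]
    voids = [p for p in pts if p not in covered]
    if not voids or not covered:
        return 0.0
    return sum(min(math.dist(v, c) for c in covered)
               for v in voids) / len(voids)

# A 400 x 400 m field sampled every 50 m, sensing range 50 m:
d_sv = avg_sensing_void_distance([(100, 100), (300, 300)], 50, 400, 50)
```

A finer grid step gives a better estimate at quadratically higher cost; the two-node example above corresponds to a poor distribution like Figure 8.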

The concept of a Connected Dominating Set (CDS) is used to address the p-percent coverage problem. Combining a CDS with p-percent coverage, the resulting algorithm is referred to as CpPCA-CDS, the "Connected P-Percent Coverage Algorithm - Connected Dominating Set". Before discussing the algorithms it is important to introduce the concept of a CDS.

Definition 3: Connected Dominating Set

A connected dominating set V′ of a graph G = (V,E) is a set of vertices such that

1. V′ is a dominating set of G, and

2. the subgraph induced by V′ is connected.

V′ is dominating because every vertex not in V′ is joined to at least one member of V′ by some edge.

In the network being considered, this translates to V being the node set and E the edge set. V′ is the subset of V such that every node not in V′ has a neighbor node in V′, and V′ is connected. The network topology is represented by an undirected graph G = (V,E), where two nodes u and v are neighbors only if they fall into each other's communication range. It is also assumed that there is no sensing void when all the sensor nodes in the network are active, i.e., the entire area can then be fully covered. The original application of a CDS is in constructing a routing topology.
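Both conditions of Definition 3 can be checked mechanically; the five-node graph below is hypothetical.

```python
from collections import deque

def is_cds(adj, d_set):
    """adj: {vertex: set of neighbors}; d_set: candidate CDS.
    Checks both conditions of Definition 3."""
    if not d_set:
        return False
    # Condition 1: every vertex outside d_set has a neighbor in d_set.
    dominating = all(v in d_set or adj[v] & d_set for v in adj)
    # Condition 2: the subgraph induced by d_set is connected (BFS
    # restricted to d_set must reach all of d_set).
    start = next(iter(d_set))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u] & d_set:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return dominating and seen == d_set

adj = {1: {2}, 2: {1, 3, 4}, 3: {2, 4}, 4: {2, 3, 5}, 5: {4}}
assert is_cds(adj, {2, 4})        # the "black" backbone of the example
assert not is_cds(adj, {1, 5})    # neither dominating nor connected
```

In Figure 10's terms, `d_set` corresponds to the black dominator nodes and everything outside it to the white dominatees.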



Figure 10 is an example of a connected dominating set. The black nodes form a CDS. The nodes that form the CDS, in this case the black nodes, are called dominators. The nodes connected to the CDS, in this case the adjacent white nodes, are called dominatees.

Figure 10: Connected Dominating Set

Connected p-percent coverage requires, in addition to covering p percent of the area, that the connectivity of the network be maintained. Maintaining connectivity can be very crucial in some applications: the network connectivity should be guaranteed for data querying and routing purposes. Any loss of connectivity, for example while a query is being processed through the sensor nodes, can hamper the readings, leading to severe data loss. Not only will it give an incorrect reading, it may also make the end user take an incorrect decision. If such a situation arises in an emergency, the resulting misjudgment can lead to severe loss of property and life.

We propose two ways of achieving connected p-percent coverage of the area and compare their efficiency by comparing their lifetimes and working node ratios. A comparative analysis is done in the end to determine the conditions under which each of these algorithms works better while providing p-percent coverage and maintaining connectivity. The first method is CpPCA-DFS, which uses the Depth First Search technique. The second method is to form a Connected Dominating Set (CDS) and then apply the Depth First Search technique to the nodes in the CDS to achieve p-percent coverage.

The two methods are as follows:

Connected P-Percent Coverage- Depth First Search (CpPCA-DFS)

A very simple way of implementing connected p-percent coverage is to use the Depth First Search (DFS) technique. DFS traverses a tree starting at the root and progresses by expanding the first child node that appears, going deeper until no node can be found that can be added to the tree; it then backtracks to the most recent node that has not finished exploring its children.

In this algorithm the node with the maximum weight W2 is explored first. After the addition of every node, the percentage of the area covered is updated and compared to the predefined required value for p. The algorithm works in a non-recursive fashion until either the p-percent coverage of the network has been achieved or all the nodes are exhausted. Through simulations we show that this algorithm is very simple and efficient, and provides a good network lifetime. However, the distribution of the covered area is very poor: the sensing void distance, an important criterion for evaluating this algorithm, is very large.
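The loop just described can be sketched non-recursively as below. The coverage increments and the weight W2 are supplied as callbacks standing in for the geometric bookkeeping, and the toy network is illustrative.

```python
def cppca_dfs(adj, root, p, cov_incr, w2):
    """Grow a DFS tree from root, always expanding the unvisited
    neighbor with the largest weight W2, until coverage reaches p."""
    active = [root]
    stack = [root]
    covered = cov_incr(root)
    while stack and covered < p:
        u = stack[-1]
        candidates = [v for v in adj[u] if v not in active]
        if not candidates:
            stack.pop()          # backtrack to the parent
            continue
        v = max(candidates, key=w2)
        active.append(v)
        stack.append(v)
        covered += cov_incr(v)   # the node's coverage increment Ci
    return active, covered

adj = {1: {2, 3}, 2: {1}, 3: {1, 4, 5}, 4: {3}, 5: {3}}
ci = {1: 0.30, 2: 0.20, 3: 0.25, 4: 0.10, 5: 0.15}
active, covered = cppca_dfs(adj, 1, 0.7, ci.get, ci.get)  # active == [1, 3, 5]
```

In the toy run the tree greedily follows the heaviest neighbors (1, then 3, then 5) and stops as soon as the accumulated increments reach p = 0.7, never visiting nodes 2 and 4.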



Figure 11 demonstrates a step by step example of how this algorithm works.

Figure 11: CpPCA-DFS

In Figure 11, the root node starts a DFS search and keeps exploring deep into the area. If a node can find another node in its communication range, that node is added to the tree; otherwise it traverses back to its parent, as shown in parts 5 and 9. It keeps exploring nodes and accumulating the coverage increment Ci until the p-percent condition is satisfied.



Connected P-Percent-Connected Dominating Set (CpPCA-CDS)

The Connected P-Percent-Connected Dominating Set algorithm works in three phases.

1. Construct a Connected Dominating Set.

2. Choose the nodes to cover p percent of the area by building a DFS search tree in the CDS formed in phase 1.

3. Add more nodes from outside the CDS to meet the P-Percent if it is not met in phase 2.

The first phase is to construct a Connected Dominating Set. The Connected Dominating Set and

the first phase are described below.

PHASE 1

A very important factor in a CDS is its size: routing is more efficient and the number of control messages smaller for a smaller CDS. In [18] the authors also consider the diameter as a metric for evaluating the constructed CDS. The diameter of the graph in [18] is defined as the length of the longest of the shortest paths between any pair of nodes in the graph. The authors also introduce a new concept called the Average Backbone Path Length (ABPL). The ABPL is defined as the sum of the hop distances between all pairs of nodes divided by the number of possible pairs of nodes; the diameter is the worst case of the ABPL. They also prove that two connected dominating sets with the same size and diameter can have different values of ABPL.
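The ABPL and diameter can be computed with plain BFS, as sketched here on a hypothetical four-node backbone chain:

```python
from collections import deque
from itertools import combinations

def hop_distance(adj, src, dst):
    """BFS hop count from src to dst (assumes a connected graph)."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None

def abpl(adj):
    """Sum of pairwise hop distances divided by the number of pairs."""
    pairs = list(combinations(adj, 2))
    return sum(hop_distance(adj, s, t) for s, t in pairs) / len(pairs)

chain = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
# ABPL = (1+2+3+1+2+1)/6 = 10/6, while the diameter (worst case) is 3.
```

Replacing the chain by a star with the same four nodes would keep the size but shrink both ABPL and diameter, illustrating why [18] treats them as separate metrics.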

They introduce a distributed algorithm to construct a CDS, CDS-Bounded Diameter-Distributed (CDS-BD-D), which generates a smaller ABPL compared to other works.



The algorithm used to construct the CDS is as follows:

1. The root builds a Breadth First Search tree.

2. The root marks itself BLACK and broadcasts a BLACK message.

3. The nodes receive BLACK or WHITE messages; a node u with the maximum weight W1 among its still-undecided neighbor siblings marks itself WHITE if it receives at least one BLACK message from either its parent or its siblings.

4. If the node with maximum weight W1 does not receive any BLACK message, it sends a CONNECT message to its parent with the maximum weight.

5. The parent and the node u mark themselves BLACK and broadcast a BLACK message again.

6. Once the colors of all the nodes are decided, a CDS is formed by all the nodes marked BLACK.
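The full message-passing construction is hard to condense faithfully into a few lines. As a hedged simplification, the internal (non-leaf) nodes of the root's BFS tree already form a connected dominating set, which conveys the shape of Phase 1's output, though without the W1-based reduction of the set's size.

```python
from collections import deque

def bfs_tree_cds(adj, root):
    """Return the internal nodes of a BFS tree rooted at root; for a
    connected graph these form a (not necessarily minimum) CDS."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    # Every tree leaf is dominated by its parent, and the non-leaf
    # nodes of a tree induce a connected subtree.
    internal = {p for p in parent.values() if p is not None}
    return internal or {root}   # a single node dominates a 1-node graph

adj = {1: {2, 3}, 2: {1, 4}, 3: {1}, 4: {2, 5}, 5: {4}}
backbone = bfs_tree_cds(adj, 1)   # {1, 2, 4}: leaves 3 and 5 are dominated
```

The real CDS-BD-D run would then shrink this backbone using the W1 weights while keeping the bounded-diameter guarantee.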

PHASE 2

The second phase in the Connected P-Percent-Connected Dominating Set Algorithm is as

follows.

1. The root selected in phase 1 starts a DFS search over only the nodes in the CDS.

2. With every node added (the one with the maximum weight W2), the current coverage percentage is calculated.

3. If the p-percent coverage is met, the algorithm terminates; otherwise go to step 4.

4. If p percent is not met after all the nodes in the CDS have been explored, the current coverage percentage is recorded and Phase 3 begins.



Figure 12 demonstrates a step by step example of Phase 1 of the algorithm.

Figure 12: CpPCA-CDS Phase 1



Figure 12 demonstrates the formation of a CDS. The first half shows the network around the root node before, and the second half shows the network after, the formation of the CDS.

PHASE 3

The third phase begins only if the required p percent of the area has not been covered in the second phase. Phase 3 is as follows:

1. The root checks whether the p-percent condition is satisfied.

2. If not, the root chooses a dominatee neighbor v with maximum weight W2 outside the CDS and checks whether the p-percent condition is satisfied.

3. If it is still not satisfied after step 2, then v chooses a dominatee neighbor w with the maximum W2. If the p-percent condition is satisfied the algorithm terminates; otherwise v passes the token back to the root node.

4. The root node repeats step 2 until the p-percent condition is satisfied.

Figure 13 demonstrates a step by step example of Phase 3 of the algorithm. The root node starts a DFS search within the constructed CDS. If the p-percent condition has not been satisfied, then nodes outside the CDS are added to the network. A node with maximum weight W2 is added repeatedly, according to the method explained in Phase 3, until p-percent coverage is achieved. Part 4 of the figure shows a complete set of dominator and dominatee nodes which together provide the p-percent coverage.



Figure 13: CpPCA-CDS Phase 3



4. SIMULATIONS AND CONCLUSION

Simulations

We performed simulations to evaluate the CpPCA-DFS and CpPCA-CDS algorithms. The results were averaged over 50 simulation runs, and the simulations were done for 60, 70 and 80 percent coverage.

Active Node Ratio:

The simulation results show that for lower network density (50 to 90 nodes) CpPCA-CDS provides a 5 to 20 percent better active node ratio. For dense networks (100 to 150 nodes), not surprisingly, the active node ratio for CpPCA-DFS is 0 to 5 percent better, because CpPCA-DFS is a much simpler algorithm. However, CpPCA-CDS provides a better distribution of the covered area while being only 5 to 10 percent more expensive than CpPCA-DFS.

Network Lifetime:

The simulation results show that for lower network density (50 to 90 nodes) the lifetime of the network under CpPCA-CDS is 0 to 10 percent longer. For dense networks (100 to 150 nodes), as the network size grows, the network lifetime for CpPCA-DFS is 0 to 5 percent better.

Simulation Parameters:

The nodes were deployed randomly and distributed uniformly.

Area of the field: 400 x 400 meters.

Sensing range of a node: 50 meters.

Communication range: 100 meters.

Weight Function W1: Degree x (Energy)^2

Weight Function W2: 3 Ci x Energy
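Read multiplicatively (an assumption, since the printed form of W2 is ambiguous), the two weight functions are as below; the degree, residual energy and coverage increment values are illustrative.

```python
def w1(degree, energy):
    """CDS-construction weight: node degree times residual energy
    squared (raised to the fourth power in the second simulation run)."""
    return degree * energy ** 2

def w2(cov_incr, energy):
    """Node-selection weight: coverage increment Ci times residual
    energy; the constant factor 3 does not change which candidate
    node maximizes W2."""
    return 3 * cov_incr * energy

assert w1(degree=4, energy=2.0) == 16.0
```

Because both weights multiply in the residual energy, raising its exponent (as done in the analysis section) shifts node selection toward energy-rich nodes at the expense of degree or coverage gain.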



Observed Data

Table 1: Observed Data for p=0.6, p=0.7 and p=0.8

Nodes  Percent  CDS Ratio  DFS Ratio  CDS Life OLD  DFS Life  CDS Life NEW
50 0.6 0.210669 0.227174 8 8 8
60 0.6 0.228589 0.242123 8 8 9
70 0.6 0.219119 0.239688 10 10 10
80 0.6 0.238968 0.24883 10 11 11
90 0.6 0.238814 0.249011 11 13 11
100 0.6 0.255433 0.24856 11 14 13
110 0.6 0.257555 0.256355 13 15 13
120 0.6 0.25695 0.251878 14 17 17
130 0.6 0.257271 0.247604 16 19 17
140 0.6 0.264548 0.255819 16 20 18
150 0.6 0.265733 0.255394 18 21 19
50 0.7 0.242589 0.264856 7 6 7
60 0.7 0.244175 0.265103 8 8 8
70 0.7 0.249976 0.271917 9 9 9
80 0.7 0.250647 0.269738 10 10 10
90 0.7 0.27011 0.278081 10 11 11
100 0.7 0.263963 0.277593 12 13 12
110 0.7 0.267337 0.279324 13 14 13
120 0.7 0.276049 0.282101 13 15 15
130 0.7 0.285626 0.278894 13 17 17
140 0.7 0.279488 0.276195 16 18 18
150 0.7 0.28587 0.27709 16 19 18
50 0.8 0.253394 0.321078 6 6 6
60 0.8 0.281734 0.303591 6 7 8
70 0.8 0.287142 0.303876 8 8 9
80 0.8 0.288075 0.296167 9 9 9
90 0.8 0.276801 0.289852 10 10 10
100 0.8 0.271386 0.283947 11 12 12
110 0.8 0.279487 0.280772 13 13 13
120 0.8 0.275539 0.277084 14 15 14
130 0.8 0.295754 0.290425 14 16 15
140 0.8 0.295007 0.28164 15 18 16
150 0.8 0.28639 0.285465 16 19 18

Active Node Ratio Comparison

[Chart: active node ratio vs. number of nodes (50 to 150) for CDS and DFS, p=0.6]

Figure 14: Active Node Ratio for p=0.6

[Chart: active node ratio vs. number of nodes (50 to 150) for CDS and DFS, p=0.7]

Figure 15: Active Node Ratio for p=0.7

[Chart: active node ratio vs. number of nodes (50 to 150) for CDS and DFS, p=0.8]

Figure 16: Active Node Ratio for p=0.8



It can be seen from the figures above that CpPCA-CDS provides a 5 to 20 percent better active node ratio at low node density (50 to 90 nodes). A lower active node ratio means that fewer nodes are required to cover the same area. As the network density grows, the active node ratio of the two algorithms is either equal or CpPCA-DFS performs up to 5 percent better.

Average Sensing Void Distance Comparison

[Chart: average sensing void distance vs. number of nodes for CpPCA-CDS and CpPCA-DFS, p=0.6]

Figure 17: Average Sensing Void Distance p=0.6

[Chart: average sensing void distance vs. number of nodes for CpPCA-CDS and CpPCA-DFS, p=0.7]

Figure 18: Average Sensing Void Distance p=0.7



[Chart: average sensing void distance vs. number of nodes for CpPCA-CDS and CpPCA-DFS, p=0.8]

Figure 19: Average Sensing Void Distance p=0.8

Network Lifetime Comparison

[Chart: network lifetime vs. number of nodes for CDS and DFS, p=0.6]

Figure 20: Network Lifetime for p=0.6



[Chart: network lifetime vs. number of nodes for CDS and DFS, p=0.7]

Figure 21: Network Lifetime for p=0.7

[Chart: network lifetime vs. number of nodes for CDS and DFS, p=0.8]

Figure 22: Network Lifetime for p=0.8

The simulation results show that the network lifetime for CpPCA-DFS is up to 5 percent longer

than CpPCA-CDS at low density (50 to 90 nodes). At higher node density (100 to 150) CpPCA-

DFS provides up to 10 percent better network lifetime.



Analysis

We then carefully reconsidered the criterion for adding a node to the CDS. We found that the network lifetime of CpPCA-CDS can be further improved if more weight is given to the remaining energy. The criterion for constructing the CDS was altered and the simulations were performed again. While constructing the CDS in the first round of simulations, the criterion for adding a sensor node to the CDS was the degree of the node times the square of its remaining energy. In the second round of simulations the weight of the remaining energy was increased to the fourth power: W1 = Degree x (Energy)^4. This makes sense because in CpPCA-CDS the CDS is constructed before all the nodes which will form the network for p-percent coverage are finalized, so if the network lifetime is to be increased, more weight should be given to the energy remaining in each individual node while constructing the CDS. The CDS provides better connectivity, and the results are shown below. We call the previous run of simulations CpPCA-CDS (old) and the new run CpPCA-CDS (new).

Network Lifetime CpPCA-CDS (old), CpPCA-CDS (new), CpPCA-DFS

[Chart: network lifetime vs. number of nodes for CDS Life OLD, DFS Life and CDS Life NEW, p=0.6]

Figure 23: Network Lifetime for p=0.6



[Chart: network lifetime vs. number of nodes for CDS Life OLD, DFS Life and CDS Life NEW, p=0.7]

Figure 24: Network Lifetime for p=0.7

[Chart: network lifetime vs. number of nodes for CDS Life OLD, DFS Life and CDS Life NEW, p=0.8]

Figure 25: Network Lifetime for p=0.8

The simulation results show that the lifetime of the network after changing the criterion for selecting a node for the CDS improves by up to 10 percent. The lifetime for CpPCA-CDS (new) is longer than in the previous run of simulations for CpPCA-CDS (old), and is on average only 5 percent less than for CpPCA-DFS.



Conclusion

In this thesis we investigated the connected p-percent coverage problem in sensor networks. We proposed two algorithms, CpPCA-DFS and CpPCA-CDS, and performed a comparative analysis of both. The simulation results show that for lower node density CpPCA-CDS provides a 5 to 20 percent better active node ratio and up to 10 percent better network lifetime. For higher node density, however, CpPCA-DFS performs up to 5 percent better in active node ratio and 5 to 10 percent better in network lifetime. In most applications where event detection is as crucial as coverage, CpPCA-CDS performs better, as it can give up to 30 percent better distribution of the covered area and is more stable. For future work, location-free algorithms can be tested to achieve partial coverage. It will also be interesting to see how partial coverage can be achieved in the presence of obstacles.



REFERENCES

[1] H. Karl and A. Willig, "A Short Survey of Wireless Sensor Networks," Berlin, October 2003. www.tkn.tu-berlin.de/publications/papers/TechReport_03_018.pdf

[2] http://en.wikipedia.org/wiki/Wireless_Sensor_Networks

[3] http://en.wikipedia.org/wiki/Sensor_node

[4] "A Simulation Framework for Sensor Networks in J-Sim." http://www.j-sim.org/v1.3/sensor/sensornets_tutorial.htm

[5] http://images.google.com/images?hl=en&q=sensor+node&btnG=Search+Images&gbv=2

[6] S. Gao, C. T. Vu, and Y. Li, "Sensor Scheduling for k-Coverage in Wireless Sensor Networks," 2nd International Conference on Mobile Ad-hoc and Sensor Networks (MSN 2006), Hong Kong, China, December 13-15, 2006.

[7] K. Kar and S. Banerjee, "Node Placement for Connected Coverage in Sensor Networks." ftp://ftp-sop.inria.fr/maestro/WiOpt03PDFfiles/kar40.pdf

[8] Z. Zhou, S. Das, and H. Gupta, "Connected K-Coverage Problem in Sensor Networks." http://www.cs.sunysb.edu/~samir/Pubs/k-coverage.pdf

[9] P.-J. Wan and C.-W. Yi, "Coverage by Randomly Deployed Wireless Sensor Networks," IEEE/ACM Transactions on Networking (TON), June 2006.

[10] R.-S. Chang and S.-H. Wang, "Deploying Sensors for Maximum Coverage in Sensor Networks," IWCMC '07: Proceedings of the 2007 International Conference on Wireless Communications and Mobile Computing, August 2007.

[11] W. Wang, V. Srinivasan, and K.-C. Chua, "Trade-offs Between Mobility and Density for Coverage in Wireless Sensor Networks," MobiCom '07: Proceedings of the 13th Annual ACM International Conference on Mobile Computing and Networking, September 2007.

[12] Z. Abrams, A. Goel, and S. Plotkin, "Set K-Cover Algorithms for Energy Efficient Monitoring in Wireless Sensor Networks," IPSN '04: Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks, April 2004.

[13] P. Kwaśniewski, E. Niewiadomska-Szynkiewicz, and I. Windyga, "Comparative Study of Wireless Sensor Networks Energy Efficient Topologies and Power Saving Protocols," Institute of Control and Computation Engineering, Warsaw University of Technology, and Research Academic Computer Network (NASK).

[14] M. Athanassoulis, I. Alagiannis, and S. Hadjiefthymiades, "Energy Efficiency in Wireless Sensor Networks: A Utility-Based Architecture," February 2007. http://cgi.di.uoa.gr/~grad0800/Publication-2-UBA.pdf

[15] R. C. Shah and J. M. Rabaey, "Energy Aware Routing for Low Energy Ad Hoc Sensor Networks," Berkeley Wireless Research Center, University of California, Berkeley. http://bwrc.eecs.berkeley.edu/Publications/2002/presentations/WCNC2002/wcnc.rahul.pdf

[17] A. Kansal and M. B. Srivastava, "An Environmental Energy Harvesting Framework for Sensor Networks," ISLPED '03: Proceedings of the 2003 International Symposium on Low Power Electronics and Design, August 2003.

[18] D. Kim, Y. Wu, Y. Li, F. Zou, and D.-Z. Du, "Constructing Minimum Connected Dominating Sets with Bounded Diameters in Wireless Networks," IEEE Transactions on Parallel and Distributed Systems, 2008.
