
A

Seminar Report

On
ANALYSIS AND IMPROVEMENT OF PROPHET
ROUTING ALGORITHM
Submitted in partial fulfillment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY

in

Information Technology
Submitted By

Piyush Kumar Srivastava (09118045)

Under the Guidance of

Dr. R. R. Janghel

Assistant Professor (IT)

NATIONAL INSTITUTE OF TECHNOLOGY, RAIPUR

CHHATTISGARH - 492010
(INDIA)
MAY, 201…..
Table of Contents

1. Introduction
1.1 Abc
1.2 Xyz
1.3 Stuv
2. Literature Review
2.1 Abc….
2.2 Xyz
2.3 Stuv

3. Gaps in the Literature (Objective)

4. Proposed Methodology (Algorithms, Methods, etc.)

4.1 Introduction
4.2 Details of ….
4.3 Algorithms….

5. Experimental Results (include figures, tables ……)

6. Conclusion and Future Work

7. References (IEEE Format)

1) Silberschatz, Abraham (2006). Operating System Principles (7th ed.). Wiley-India, p. 237.
Retrieved 29 January 2012.

2) Stuart, Brian L. (2008). Principles of Operating Systems.


3)
4)
List of Figures
Figure 1 MapReduce programming model
Figure 2 HDFS architecture
Figure 3 Starting the name node; the focused contents are Java processes
Figure 4 Performing a job; in the above example we are performing a word count
Figure 5 Results of the job performed

List of Tables

Table 1 MapReduce programming model
Table 2 HDFS architecture
Table 3 Starting the name node; the focused contents are Java processes
Table 4 Performing a job; in the above example we are performing a word count
Table 5 Results of the job performed
List of Abbreviations
β = (state what β denotes)
α = (state what α denotes)
ϕ = (state what ϕ denotes)
Introduction

2.1 Deadlocks

A deadlock is a situation in which two or more competing actions are each waiting for the
other to finish, and thus neither ever does.
In a transactional database, a deadlock happens when two processes, each within its own
transaction, update two rows of information but in the opposite order. For example, process A
updates row 1 and then row 2 in the exact timeframe in which process B updates row 2 and
then row 1. Process A cannot finish updating row 2 until process B is finished, but process B
cannot finish updating row 1 until process A is finished. No matter how much time is allowed
to pass, this situation will never resolve itself; because of this, database management systems
will typically kill the transaction of the process that has done the least amount of work.

Figure 1: Title
In an operating system, a deadlock is a situation that occurs when a process or thread enters a
waiting state because a requested resource is being held by another waiting process, which in
turn is waiting for another resource. If a process is unable to change its state indefinitely
because the resources it has requested are being used by another waiting process, then the
system is said to be in a deadlock.

Deadlock is a common problem in multiprocessing systems, parallel computing and
distributed systems, where software and hardware locks are used to manage shared resources
and implement process synchronization.
In telecommunication systems, deadlocks occur mainly due to lost or corrupt signals rather
than resource contention.

Examples
Any deadlock situation can be compared to the classic "chicken or egg" problem. It can also
be considered a paradoxical "Catch-22" situation. A real-world example is an illogical statute
passed by the Kansas legislature in the early 20th century, which stated:

"When two trains approach each other at a crossing, both shall come to a full stop and
neither shall start up again until the other has gone."

A simple computer-based example is as follows. Suppose a computer has three CD drives
and three processes. Each of the three processes holds one of the drives. If each process now
requests another drive, the three processes will be in a deadlock. Each process will be waiting
for the "CD drive released" event, which can only be caused by one of the other waiting
processes. Thus, it results in a circular chain, as the sketch below illustrates.
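
The circular chain can be reproduced directly with threads and locks. Below is a minimal Python sketch (my own illustration, not part of the report), in which three threads stand in for the processes and three locks stand in for the CD drives; the acquire timeout is only there so the demonstration terminates instead of hanging forever.

    import threading
    import time

    drives = [threading.Lock() for _ in range(3)]   # three "CD drives"

    def process(i):
        held = drives[i]                 # the drive this process already holds
        wanted = drives[(i + 1) % 3]     # the next drive it now requests
        held.acquire()
        time.sleep(0.1)                  # give every process time to grab its own drive
        if wanted.acquire(timeout=2):    # circular request: each waits on the next
            print(f"process {i} got both drives")
            wanted.release()
        else:
            print(f"process {i} is deadlocked waiting for drive {(i + 1) % 3}")
        held.release()

    threads = [threading.Thread(target=process, args=(i,)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Each thread ends up holding one lock and waiting on the next one in the circle, so all three time out; without the timeout they would wait forever.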
Moving on to the source-code level, a deadlock can occur even in the case of a single thread
and one resource (protected by a mutex). Assume there is a function func1() which does
some work on the resource, locking the mutex at the beginning and releasing it after it is done.
Next, somebody creates a different function func2() following that pattern on the same
resource (lock, do work, release) but decides to include a call to func1() to delegate a part of
the job. What happens is that the mutex is locked once when entering func2() and then
again at the call to func1(), resulting in a deadlock if the mutex is not reentrant.
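
The same situation can be reproduced with a few lines of Python, assuming the standard threading.Lock, which is not reentrant (this is my own sketch, not code from the report). Replacing Lock with RLock removes the deadlock, since a reentrant lock may be re-acquired by the thread that already holds it.

    import threading

    mutex = threading.Lock()    # non-reentrant; threading.RLock() would avoid the problem

    def func1():
        # Second acquisition by the same thread: with a plain Lock this blocks.
        if not mutex.acquire(timeout=2):
            print("func1: deadlock, the mutex is already held by this very thread")
            return
        try:
            print("func1: doing work on the resource")
        finally:
            mutex.release()

    def func2():
        with mutex:                       # first acquisition
            print("func2: doing work, delegating part of the job to func1")
            func1()                       # deadlocks here with a non-reentrant mutex

    func2()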

A deadlock is a condition where a process cannot proceed because it needs to obtain a
resource held by another process while it is itself holding a resource that the other process
needs.
We can consider two types of deadlock:
Communication deadlock: occurs when process A is trying to send a message to process B,
which is trying to send a message to process C, which is trying to send a message to A.
Resource deadlock: occurs when processes are trying to get exclusive access to devices,
files, locks, servers, or other resources. We will not differentiate between these types of
deadlock, since we can consider communication channels to be resources without loss of
generality.
Four conditions have to be met for deadlock to be present:
1. Mutual exclusion: a resource can be held by at most one process.
2. Hold and wait: processes that already hold resources can wait for another resource.
3. Non-preemption: a resource, once granted, cannot be taken away from a process.
4. Circular wait: two or more processes are waiting for resources held by one of the other
processes.
We can represent resource allocation as a graph where P ← R means that resource R is
currently held by process P, and P → R means that process P wants to gain exclusive access
to resource R.
Deadlock exists when a resource allocation graph has a cycle.
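
To make the cycle criterion concrete, here is a small Python sketch (my own illustration, not from the report) that detects a cycle in such a graph with a depth-first search. The graph is a dictionary mapping each node, process or resource, to the nodes its outgoing edges point to.

    def has_cycle(graph):
        WHITE, GREY, BLACK = 0, 1, 2            # unvisited, on current path, finished
        colour = {node: WHITE for node in graph}

        def visit(node):
            colour[node] = GREY
            for succ in graph.get(node, []):
                if colour.get(succ, WHITE) == GREY:    # back edge: a cycle exists
                    return True
                if colour.get(succ, WHITE) == WHITE and visit(succ):
                    return True
            colour[node] = BLACK
            return False

        return any(colour[n] == WHITE and visit(n) for n in graph)

    # P1 wants R2 which P2 holds, P2 wants R1 which P1 holds -> cycle -> deadlock
    graph = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
    print(has_cycle(graph))    # True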

2.2 Deadlock Handling:

Most current operating systems cannot prevent a deadlock from occurring. When a deadlock
occurs, different operating systems respond to it in different, non-standard ways. Most
approaches work by preventing one of the four Coffman conditions from occurring,
especially the fourth one. The major approaches are as follows.

Ignoring deadlock
In this approach, it is assumed that a deadlock will never occur. This is also an application of
the Ostrich algorithm. This approach was initially used by MINIX and UNIX. It is used
when the time intervals between occurrences of deadlocks are large and the data loss incurred
each time is tolerable.

Detection
Under deadlock detection, deadlocks are allowed to occur. The state of the system is then
examined to detect that a deadlock has occurred, and it is subsequently corrected. An
algorithm is employed that tracks resource allocation and process states, and rolls back and
restarts one or more of the processes in order to remove the detected deadlock. Detecting a
deadlock that has already occurred is easily possible, since the resources that each process has
locked and/or currently requested are known to the resource scheduler of the operating
system.
Deadlock detection techniques include, but are not limited to, model checking. This approach
constructs a finite state model on which it performs a progress analysis and finds all possible
terminal sets in the model. Each of these then represents a deadlock.
After a deadlock is detected, it can be corrected by using one of the following methods:
1. Process termination: one or more processes involved in the deadlock may be aborted. We
can choose to abort all processes involved in the deadlock. This ensures that the deadlock is
resolved with certainty and speed, but the expense is high, as partial computations will be
lost. Alternatively, we can choose to abort one process at a time until the deadlock is resolved.
This approach has high overhead, because after each abort an algorithm must determine
whether the system is still in deadlock. Several factors must be considered while choosing a
candidate for termination, such as the priority and age of the process.

2. Resource preemption: resources allocated to various processes may be successively
preempted and allocated to other processes until the deadlock is broken.

2.3 Deadlock Prevention:

Deadlock prevention works by preventing one of the four Coffman conditions from
occurring.

• Removing the mutual exclusion condition means that no process will have exclusive
access to a resource. This proves impossible for resources that cannot be spooled. But
even with spooled resources, deadlock could still occur. Algorithms that avoid mutual
exclusion are called non-blocking synchronization algorithms.

• The hold and wait or resource holding condition may be removed by requiring
processes to request all the resources they will need before starting up (or before
embarking upon a particular set of operations). This advance knowledge is frequently
difficult to satisfy and, in any case, is an inefficient use of resources. Another way is
to require processes to request resources only when they hold none; thus, they must first
release all their currently held resources before requesting all the resources they will
need from scratch. This too is often impractical, because resources may be
allocated and remain unused for long periods. Also, a process requiring a popular
resource may have to wait indefinitely, as such a resource may always be allocated to
some process, resulting in resource starvation. (These algorithms, such as serializing
tokens, are known as the all-or-none algorithms.)
• The no preemption condition may also be difficult or impossible to avoid, as a process
has to be able to hold a resource for a certain amount of time, or the processing outcome
may be inconsistent or thrashing may occur. However, the inability to enforce preemption
may interfere with a priority algorithm. Preemption of a "locked out" resource generally
implies a rollback and is to be avoided, since it is very costly in overhead. Algorithms
that allow preemption include lock-free and wait-free algorithms and optimistic
concurrency control.
• The final condition is the circular wait condition. Approaches that avoid circular waits
include disabling interrupts during critical sections and using a hierarchy to determine
a partial ordering of resources. If no obvious hierarchy exists, even the memory address of
resources has been used to determine ordering, and resources are requested in
increasing order of the enumeration. Dijkstra's solution can also be used. The sketch
after this list illustrates the ordered-acquisition idea.
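
The ordered-acquisition idea can be sketched in a few lines of Python (my own illustration, not code from the report): every code path takes its locks in one global order, here simply the order given by id(), so no circular wait can form.

    import threading

    def acquire_in_order(*locks):
        # Always acquire locks in a single global order so that no two
        # threads can ever end up waiting on each other in a circle.
        ordered = sorted(locks, key=id)
        for lock in ordered:
            lock.acquire()
        return ordered

    def release_all(locks):
        for lock in reversed(locks):
            lock.release()

    a, b = threading.Lock(), threading.Lock()

    def worker(name):
        held = acquire_in_order(a, b)    # same order regardless of the call site
        try:
            print(f"{name}: holding both locks")
        finally:
            release_all(held)

    threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()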

2.4 Deadlock Avoidance:

Deadlock can be avoided if certain information about processes is available to the operating
system before the allocation of resources, such as which resources a process will consume in
its lifetime. For every resource request, the system checks whether granting the request would
mean that the system enters an unsafe state, meaning a state that could result in deadlock. The
system then only grants requests that will lead to safe states.[1] In order for the system to be
able to determine whether the next state will be safe or unsafe, it must know in advance at
any time:

• resources currently available
• resources currently allocated to each process
• resources that will be required and released by these processes in the future

It is possible for a process to be in an unsafe state without this resulting in a deadlock. The
notion of safe/unsafe states only refers to the ability of the system to enter a deadlocked state
or not. For example, if a process requests resource A, which would result in an unsafe state, but
releases resource B, which would prevent a circular wait, then the state is unsafe but the
system is not in deadlock.

One known algorithm that is used for deadlock avoidance is the Banker's algorithm, which
requires the maximum resource usage of each process to be known in advance.[1] However,
for many systems it is impossible to know in advance what every process will request. This
means that deadlock avoidance is often impossible.
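
To make the safe-state test concrete, the following is a minimal Python sketch of the safety check at the heart of the Banker's algorithm for a single resource type (my own simplified illustration, not the report's code): a state is safe if the processes can be ordered so that each one's remaining need can be satisfied from what is free plus what earlier processes in the order will release.

    def is_safe(available, allocation, maximum):
        # available: free units of the resource
        # allocation[i]: units currently held by process i
        # maximum[i]: maximum units process i may ever request
        need = [m - a for m, a in zip(maximum, allocation)]
        finished = [False] * len(allocation)
        free = available
        progress = True
        while progress:
            progress = False
            for i, done in enumerate(finished):
                if not done and need[i] <= free:
                    free += allocation[i]    # process i can run to completion and release
                    finished[i] = True
                    progress = True
        return all(finished)

    # 10 units in total: 3 free, processes hold 2/3/2 and may need at most 4/6/7
    print(is_safe(3, [2, 3, 2], [4, 6, 7]))    # True: a safe ordering exists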
Two other algorithms are Wait/Die and Wound/Wait, each of which uses a symmetry-
breaking technique. In both algorithms there is an older process (O) and a younger process
(Y). Process age can be determined by a timestamp assigned at process creation time: smaller
timestamps belong to older processes, while larger timestamps represent younger processes.

Wait-for graph (WFG)

The state of the system can be modeled by a directed graph, called a wait-for graph (WFG).
In a WFG, nodes are processes and there is a directed edge from node P1 to node P2 if P1 is
blocked and is waiting for P2 to release some resource.
A system is deadlocked if and only if there exists a directed cycle or knot in the WFG.

Issues in Deadlock Detection:

• Deadlock handling using the approach of deadlock detection entails addressing two
basic issues: first, detection of existing deadlocks and, second, resolution of detected
deadlocks.
• Detection of deadlocks involves addressing two issues: maintenance of the WFG and
searching of the WFG for the presence of cycles (or knots).

Deadlock Detection in Distributed Systems:

1. Centralized deadlock detection
2. Distributed deadlock detection
3. Hierarchical deadlock detection

1. Centralized deadlock detection:

Since preventing and avoiding deadlocks is difficult, researchers have worked on
detecting the occurrence of deadlocks in distributed systems.
The presence of atomic transactions in some distributed systems makes a major
conceptual difference.
a. When a deadlock is detected in a conventional system, we kill one or more
processes to break the deadlock, leaving one or more unhappy users.
b. When a deadlock is detected in a system based on atomic transactions, it is
resolved by aborting one or more transactions.
c. But transactions have been designed to withstand being aborted.
d. When a transaction is aborted, the system is first restored to the state it had
before the transaction began, at which point the transaction can start again.
e. With a bit of luck, it will succeed the second time.
f. Thus the difference is that the consequences of killing off a process are much
less severe when transactions are used.
False deadlock:

• One possible way to prevent false deadlock is to use Lamport's algorithm to
provide global timing for the distributed system (a minimal clock sketch follows this list).
• When the coordinator gets a message that leads it to suspect a deadlock:
– It sends everybody a message saying "I just received a message with
timestamp T which leads to deadlock. If anyone has a message for me with an
earlier timestamp, please send it immediately."
– When every machine has replied, positively or negatively, the coordinator can
see whether the deadlock has really occurred or not.
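
As an illustration of the global-timing idea, here is a minimal Lamport logical clock in Python (my own sketch, not part of the report): every local event increments the counter, and every received message advances the clock past the sender's timestamp, so causally related events get consistently ordered timestamps.

    class LamportClock:
        def __init__(self):
            self.time = 0

        def tick(self):
            # local event, e.g. adding or deleting an edge in the local graph
            self.time += 1
            return self.time

        def send(self):
            # timestamp to attach to an outgoing message
            return self.tick()

        def receive(self, msg_time):
            # on receipt, advance past the sender's timestamp
            self.time = max(self.time, msg_time) + 1
            return self.time

    a, b = LamportClock(), LamportClock()
    t = a.send()           # machine A reports an edge change at time 1
    print(b.receive(t))    # machine B's clock jumps to 2, preserving the order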

2. Distributed deadlock detection:

One option is to use a centralized deadlock detection algorithm and try to imitate the
non-distributed algorithm.
a. Each machine maintains the resource graph for its own processes and
resources.
b. A centralized coordinator maintains the resource graph for the entire system.
c. When the coordinator detects a cycle, it kills off one process to break the
deadlock.
d. In updating the coordinator's graph, messages have to be passed:
i. whenever an arc is added to or deleted from the resource graph, a
message has to be sent to the coordinator;
ii. periodically, every process can send a list of arcs added and deleted
since the previous update;
iii. the coordinator asks for information when it needs it.

• The Chandy-Misra-Haas algorithm (a small simulation sketch follows this list):

– Processes are allowed to request multiple resources at once, so the growing
phase of a transaction can be sped up.
– The consequence of this change is that a process may now wait on two or more
resources at the same time.
– When a process has to wait for some resource, a probe message is generated
and sent to the process holding the resource. The message consists of three
numbers: the process being blocked, the process sending the message, and the
process receiving the message.
– When the message arrives, the recipient checks to see whether it itself is waiting
for any processes. If so, the message is updated, keeping the first number
unchanged and replacing the second and third fields by the corresponding
process numbers.
– The message is then sent to the process holding the needed resources.
– If a message goes all the way around and comes back to the original sender, i.e.
the process that initiated the probe, a cycle exists and the system is deadlocked.
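
The probe mechanism can be simulated in a few lines of Python (my own sketch, simplified to one outstanding wait per process, which the real algorithm does not require): wait_for maps each blocked process to the process it is waiting on, and a probe that returns to its initiator signals a cycle.

    def detect_deadlock(initiator, wait_for):
        # wait_for: maps a blocked process to the process holding what it needs
        probe = (initiator, initiator, wait_for.get(initiator))
        while probe[2] is not None:
            blocked, sender, receiver = probe
            if receiver == initiator:                  # probe returned to its initiator
                return True
            # the receiver forwards the probe to whoever it is itself waiting for
            probe = (blocked, receiver, wait_for.get(receiver))
        return False

    # P1 waits for P2, P2 waits for P3, P3 waits for P1 -> deadlock
    print(detect_deadlock("P1", {"P1": "P2", "P2": "P3", "P3": "P1"}))   # True
    print(detect_deadlock("P1", {"P1": "P2", "P2": "P3"}))               # False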

• There are several ways to break the deadlock:

– The process that initiated the probe commits suicide. This is overkill, because if
several processes initiate probes, they will all commit suicide even though only
one of them needs to be killed.
– Each process appends its id to the probe; when the probe comes back, the
originator can kill the process with the highest id by sending it a message.
(Hence, even with several probes, they will all choose the same victim.)

• A method that might work is to order the resources and require processes to acquire
them in strictly increasing order. This approach means that a process can never hold a
high resource and ask for a low one, thus making cycles impossible.
• With global timing and transactions in distributed systems, two other methods are
possible, both based on the idea of assigning each transaction a global timestamp at
the moment it starts.
• When one process is about to block waiting for a resource that another process is
using, a check is made to see which has the larger timestamp.
• We can then allow the wait only if the waiting process has a lower timestamp.
• The timestamp is always increasing if we follow any chain of waiting processes, so
cycles are impossible (we could use decreasing order if we liked).
• It is wiser to give priority to old processes because
– they have run longer, so the system has a larger investment in these processes;
– they are likely to hold more resources.
A young process that is killed off will eventually age until it is the oldest one in the system,
and that eliminates starvation.

Wait-die vs. wound-wait:

• As we have pointed out before, killing a transaction is relatively harmless, since by
definition it can be restarted safely later.
• Wait-die:
– If an old process wants a resource held by a young process, the old one will
wait.
– If a young process wants a resource held by an old process, the young process
will be killed.
– Observation: the young process, after being killed, will then start up again,
and be killed again. This cycle may go on many times before the old one
releases the resource.
• Once we are assuming the existence of transactions, we can do something that had
previously been forbidden: take resources away from running processes.
• When a conflict arises, instead of killing the process making the request, we can kill
the resource owner. Without transactions, killing a process might have severe
consequences. With transactions, these effects vanish magically when the
transaction dies.
• Wound-wait (we allow preemption and ancestor worship):
– If an old process wants a resource held by a young process, the old one will
preempt the young process: the young process is wounded (killed), restarts,
and waits.
– If a young process wants a resource held by an old process, the young process
will wait.
A small sketch of both decision rules is given below.
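
The two decision rules can be written down as a small Python sketch (my own illustration with hypothetical timestamp parameters, not code from the report); a smaller timestamp means an older transaction.

    def wait_die(requester_ts, holder_ts):
        # Older requester waits; younger requester dies (is killed and restarted later).
        return "wait" if requester_ts < holder_ts else "die"

    def wound_wait(requester_ts, holder_ts):
        # Older requester wounds (preempts) the younger holder; younger requester waits.
        return "wound holder" if requester_ts < holder_ts else "wait"

    # an old transaction (ts=1) conflicting with a young one (ts=9), and vice versa
    print(wait_die(1, 9), wait_die(9, 1))        # wait die
    print(wound_wait(1, 9), wound_wait(9, 1))    # wound holder wait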

Summary:

Centralized deadlock detection:

1. Large communication overhead.
2. The coordinator is a performance bottleneck.
3. Possibility of a single point of failure.

Distributed deadlock detection algorithms:

1. High complexity.
2. Detection of phantom deadlocks is possible.

Hierarchical deadlock detection algorithms:

1. Most common.
2. Efficient.
References

1. Silberschatz, Abraham (2006). Operating System Principles (7th ed.). Wiley-India,
p. 237. Retrieved 29 January 2012.
2. Padua, David (2011). Encyclopedia of Parallel Computing. Springer, p. 524. Retrieved
28 January 2012.
3. Schneider, G. Michael (2009). Invitation to Computer Science. Cengage Learning,
p. 271. Retrieved 28 January 2012.
4. Rollings, Andrew (2009). Andrew Rollings and Ernest Adams on Game Design. New
Riders, p. 421. Retrieved 28 January 2012.
5. Oaks, Scott (2004). Java Threads. O'Reilly, p. 64. Retrieved 28 January 2012.
6. Botkin, B. A. and Harlow, A. F. A Treasury of Railroad Folklore, p. 381.
7. Shibu (2009). Introduction to Embedded Systems (1st ed.). McGraw Hill Education,
p. 446. Retrieved 28 January 2012.
8. "Oracle Locking Survival Guide".
9. Stuart, Brian L. (2008). Principles of Operating Systems.
