

UNIT 2: Processes

Structure

2.0 Objective
2.1 Introduction
2.2 The Process Concept
2.3 Threads
2.4 Systems Programmers view of Processes
2.5 The Operating System view of Processes
2.6 Operating System Services for Process Management
2.7 Scheduling Algorithms
2.8 Performance Evaluation
2.9 Summary
2.10 Exercises
2.11 Suggested Readings

2.0 Objective
At the end of this chapter you should know:
What a process is.
The different states of a process.
The main components of a Process Control Block.
How a programmer views a process.
How the OS views a process.
The different state transitions a process can make.
The different Operating System services for process management.
The different types of schedulers and how they differ from each other.
The criteria used to select a scheduling algorithm.
The different scheduling algorithms.
The performance evaluation parameters.

2.1 Introduction

This chapter describes the process. It begins with the concept of a process and the different states that a process can attain while it resides in system memory. A process can pass through several phases depending on the time it takes to execute its workload and on the requests it makes during execution. The chapter then covers how a programmer views a process in the system and how this differs from the way the OS views the same process. Next come the different services that an OS provides for the management of a process, followed by the different types of schedulers that exist to handle the requests of a process. The schedulers help the system identify which process is to be executed at the current time and which one to suspend (based on a preemptive or non-preemptive approach). The chapter then discusses the different CPU scheduling algorithms that a process scheduler follows to schedule processes and assign them the CPU. The scheduling algorithms differ depending on the policy they follow, either preemptive or non-preemptive. At the end of the chapter, performance evaluation metrics are discussed that help compare and contrast the performance of the different scheduling algorithms. These metrics help assess an algorithm against several criteria and choose the best algorithm for a given scenario.

2.2 The Process Concept

An operating system runs a variety of programs, such as:

Batch system jobs
Time-shared system user tasks or programs

The terms job and process are used interchangeably in OS literature.
2.2.1 Process

A process is defined as a program in execution; in other words, an instance of a program in execution is known as a process. A program and a process are not the same, as shown in Figure 2.1. A process is more than the program code: a program is considered a 'passive' entity, whereas a process is considered an 'active' entity. Process execution must evolve in sequential fashion. Resources held by a process include memory, hardware state, the CPU, etc.

[Figure 2.1 contrasts a program, the static code (main, function 1, function 2), with a process, which adds run-time context: heap, stack, registers and the program counter (PC).]

Figure 2.1: Program and Process

Process memory is divided into four segments for efficient working, as shown in Figure 2.2 (a short code sketch follows the list):

1. The stack is used for local variables. Space on the stack is reserved for local variables when they are declared, and the space is released when the variables go out of scope. Note that the stack is also used for function return values, and the exact mechanisms of stack management may be language dependent.

[Figure 2.2 shows the process address space from the highest address (MAX) down to 0: the stack at the top, growing downward; the heap below it, growing upward; then the data and text segments at the bottom.]

Figure 2.2: A process in memory

2. The heap is used for dynamic memory allocation, and is managed via calls to malloc and free (or new and delete in C++).
3. The data section consists of static and global variables, allocated and initialized before main begins executing.
4. The text section consists of the compiled program code, read in from non-volatile storage when the program is launched.
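As a concrete illustration, the short C sketch below (a minimal example of ours, not from the text) shows where each kind of variable lives:

#include <stdlib.h>

int initialized = 42;   /* data segment: initialized global variable      */
static int zeroed;      /* data segment (BSS): zero-initialized variable  */

int main(void)          /* the compiled instructions form the text segment */
{
    int local = 7;                           /* stack: freed when main returns */
    int *dynamic = malloc(sizeof *dynamic);  /* heap: lives until free()       */

    *dynamic = local + initialized + zeroed;
    free(dynamic);                           /* release the heap space */
    return 0;
}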

When processes are swapped out of memory and later restored, additional information must also be stored and restored. Significant among this information are the values of all program registers and the program counter.

Note that the heap and the stack start at opposite ends of the process's free space and grow towards each other. If they should ever meet, then either a call to malloc or new will fail due to inadequate memory availability, or a stack overflow error will occur.

2.2.2 Process State or Process Lifecycle

As a process executes it changes its state. It may be in one of five states, as shown in Figure 2.3:

New or Start - The phase in which the process is being created.
Ready - The process has all the resources it requires to execute, but has to wait for processor availability, i.e. the CPU is not presently working on this process's instructions.
Running - The CPU is occupied by this process and its instructions are being executed.
Waiting/Blocked - The process cannot execute at the moment, as it is waiting for some event to occur or for some resource to become available. For example, the process may be waiting for keyboard input, access to a disk, an inter-process message, a timer to go off, or a child process to complete.
Terminated or Exit - The process has finished execution and reaches the terminated state. Here the process waits to be removed from main memory.

[Figure 2.3 shows the state diagram: admission takes a new process to Ready; scheduler dispatch moves Ready to Running; an interrupt or preemption returns Running to Ready; an I/O or event wait moves Running to Waiting/Blocked; when the I/O resource is granted or the event wait completes, the process returns to Ready; completion or exit moves Running to Terminated.]

Figure 2.3: Process State Diagram
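As a sketch of how a kernel might record these states (the names below are illustrative, not taken from any particular OS):

enum proc_state {
    STATE_NEW,        /* being created                          */
    STATE_READY,      /* has its resources, waiting for the CPU */
    STATE_RUNNING,    /* instructions currently executing       */
    STATE_BLOCKED,    /* waiting for an event or a resource     */
    STATE_TERMINATED  /* finished, awaiting removal from memory */
};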

2.2.3 Process Control Block (PCB)

For every process, the operating system maintains a data structure known as the Process Control Block. The PCB for each process encloses all the information about that process; through the PCB the OS controls the activities of the process. An integer process ID (PID) is associated with each PCB to identify it. The information a PCB holds about each process, needed to keep track of it, is listed below.

Process State - The current state of the process; it can be running, waiting, ready, etc.

Process ID - The operating system assigns a unique identifier to each process.

Child ID and parent process ID - These IDs are required at the time of process synchronization, generally when a parent needs to check whether its child process has terminated or not.

Priority - A numerical value assigned to a process at the time of its creation. The priority of a process may vary over its lifetime depending upon the nature of the process, the age of the process and the resources utilized by the process.
Process privileges - This is essential for allowing/disallowing access to system resources.
Program Status Word (PSW) - This is a snapshot, i.e. a copy, of the PSW taken when the process was last preempted or blocked. Loading this snapshot back into the PSW resumes the process.
PCB pointer - This field is used to link PCBs into lists for the purpose of scheduling.
CPU registers - The various CPU registers the process uses to hold information during its running state.
Program Counter - The address of the next instruction to be executed for the process is stored in the Program Counter.
CPU Scheduling Information - This holds the scheduling information, such as the priority information required to schedule the process.
Memory Management information - This contains information such as memory limits and the page table or segment table, depending upon the memory scheme used by the OS.
Accounting information - This contains information about the user and kernel CPU time consumed by the process, time limits, account numbers, execution ID, etc.
I/O Status information - This consists of the list of devices allocated to the process, open file tables, etc.

The PCB structure depends entirely on the operating system and may enclose different information in different operating system environments. Figure 2.4 shows a basic diagram of a PCB according to the information presented above.

PROCESS ID
PROCESS STATE
POINTER
PRIORITY
PROGRAM COUNTER
CPU REGISTERS

I/O INFORMATION
ACCOUNTING INFORMATION
etc.

Figure 2.4: Process Control Block (PCB)



The PCB of a process is retained during the whole lifetime of a process, and once the process
terminates it is removed.
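The C structure below is a simplified sketch of what a PCB might look like; real kernels (for example, Linux's task_struct) hold many more fields, and all names here are illustrative:

struct pcb {
    int           pid;            /* unique process ID                      */
    int           parent_pid;     /* for parent/child synchronization       */
    int           state;          /* ready, running, blocked, ... (2.2.2)   */
    int           priority;       /* may change over the process's lifetime */
    unsigned long pc;             /* saved program counter                  */
    unsigned long registers[16];  /* saved CPU registers                    */
    struct pcb   *next;           /* PCB pointer: links PCBs into queues    */
    /* ... memory management, accounting and I/O status information ... */
};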

2.3 Threads

A thread is a unit of execution which has its own program counter, stack and set of registers. The program counter keeps track of which instruction to execute next. System registers hold the current working variables of the thread. The stack contains the execution history. A thread also shares some information, such as the data segment, code segment and open files, with its peer threads. Threads are also referred to as lightweight processes. Threads provide a way to improve application performance through parallelism. The processor switches rapidly back and forth among the threads, giving an illusion that all the threads are running in parallel.

Each thread belongs to a single process only and no thread can exist outside a process. Each thread denotes a separate flow of control. Threads have been used successfully in implementing web servers and network servers. Threads provide the basis for parallel execution of applications on shared-memory multiprocessors. Figure 2.5 below shows the working of a single-threaded and a multi-threaded process.

Figure 2.5: Working of single-threaded and multi-threaded processes

The following table shows the difference between processes and threads.

Table 2.1: Difference between Process and Thread

S.No. | Process | Thread
1 | A process is heavyweight, or resource-intensive. | A thread is lightweight and consumes relatively fewer resources.
2 | Each process has its own memory space. | Threads use the memory of the process to which they belong.
3 | As each process has a different memory address space, inter-process communication is slow. | Threads of a process share the address space of the process, so inter-thread communication is fast.
4 | Context switching between processes is more expensive. | Context switching between threads is less expensive.
5 | A process does not share its memory with any other process. | Threads share their memory with other threads of the same process.

The advantages of threads are listed below:

Efficient communication.
It is more economical to create threads and to context switch among them.
Threads facilitate utilization of multiprocessor architectures to a greater scale and efficiency.
With threads, context switching time is minimized.
Use of threads provides concurrency within a process.
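The POSIX threads (pthreads) sketch below creates two peer threads that share the process's address space, while each gets its own stack and registers (compile with -pthread):

#include <pthread.h>
#include <stdio.h>

/* Each thread runs this function with its own stack and registers,
 * while sharing the process's globals and open files. */
static void *worker(void *arg)
{
    printf("thread %d running\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid[2];
    int id[2] = {1, 2};

    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, worker, &id[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);   /* wait for both peers to finish */
    return 0;
}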

Threads can be implemented in the following two ways:

1. User Level Threads - Threads managed in user space, without kernel involvement.
2. Kernel Level Threads - Threads managed directly by the OS kernel.
2.3.1 User Level Threads
Here the kernel is unaware of the existence of user threads; thread management is done by the application. The thread library contains the code for creating and destroying threads, for scheduling thread execution, for passing messages and data between threads, and for saving and restoring thread contexts. The application starts with a single thread.

The following are the advantages of user level threads:

Thread switching does not need kernel mode privileges.
Scheduling can be application specific.
User level threads can run on any OS.
User level threads can be quickly created and managed.

The following are the disadvantages of user level threads:

In a typical OS, most system calls are blocking, so when one user level thread blocks, the entire process blocks.
A multithreaded application cannot take advantage of multiprocessing, since the kernel sees only a single thread of execution.

2.3.2 Kernel Level Threads


Kernel level threads are managed by the operating system; there is no thread management code in the application area. Kernel level threads are supported directly by the OS, and any application can be programmed to be multithreaded. All the threads within an application are supported within a single process.

The kernel maintains context information for all threads within a process and also for the process as a whole. Scheduling by the kernel is done on a per-thread basis. The kernel performs thread creation, scheduling and management in kernel space. Compared to user threads, kernel threads are slower to create and manage.

The following are the advantages of kernel threads:

Kernel has full knowledge of all threads. Hence, the scheduler may decide to give more time to a process which has more threads than to a process which has fewer threads.

Kernel-level threads are especially good for applications that frequently block.

The following are the disadvantages of kernel level threads:

Kernel threads are slower to create and manage as compared to user threads.
To transfer control from one thread to another thread within the same process, the mode
needs to be switched to kernel mode.
2.3.3 Multithreading models
Some operating systems provide a facility to use both user-level threads and kernel-level threads at
the same time. A good example of such an operating system is Solaris.

Basically, multithreading models can be of three types:

Many to many model

Many to one model

One to one model



1. Many to Many model

The many-to-many model maps any number of user threads onto an equal or smaller number of kernel threads. Figure 2.6 shows a many-to-many threading model in which 6 user level threads have been mapped onto 6 kernel level threads. In a many-to-many model, developers can create as many user threads as required, and the corresponding kernel threads can run in parallel on a multiprocessor machine. This model provides the best level of concurrency: when a thread performs a blocking system call, the kernel can schedule another thread for execution.

Figure 2.6: Many-to-Many model

2. Many to One Model

In a many-to-one model, many user level threads are multiplexed onto a single Kernel-level thread.
Thread management is done in user space by the thread library. The entire process is blocked when a
thread makes a blocking system call. Only one thread can access the Kernel at a time. Therefore,
multiple threads cannot run in parallel on multiprocessors. Figure 2.7 shows the many-to-one
model.

Figure 2.7: Many-to-One model

3. One to One Model

In a one-to-one model, each user level thread is mapped to a single kernel level thread. This model provides more concurrency than the many-to-one model: when a thread makes a blocking system call, another thread can still run. It allows multiple threads to execute in parallel on multiprocessors. The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. Figure 2.8 shows the one-to-one model.

Figure 2.8: One-to-One model



2.4 Systems Programmers view of Processes

From the programmer's view, processes are required to achieve concurrent execution of a program. The main task of a concurrent program is to create child processes, as shown in Figure 2.9. It should assign priorities to processes, either to run some functions at the highest priority as required, to achieve computation speed-up, or to guard the parent process against errors. To achieve a common goal, the main process and child processes have to interact with each other. The interaction may require processes to coordinate activities among themselves or to exchange data.

The following four operations are implemented by the OS to support the programmer's view of processes:

1. Creation of child process and assignment of priorities to them.

2. Identifying the status of child processes.

3. Termination of child processes.

4. Sharing, communication, synchronization between processes.

[Figure 2.9(a) shows a process tree: a main program with three child processes. Figure 2.9(b) shows the processes sharing resources such as registers and a buffer area.]

Figure 2.9: (a) Process Tree and (b) Processes

To achieve the first three points, the following three system calls are made:

create_process: It is used to create a new process and assign a priority and a process id (unique identifier) to it. The process id is returned to the caller. It takes two parameters as input: a procedure name and an integer value. The procedure becomes the code component of the new process, and its address becomes the address of the next instruction to be executed in the process. The priority assigned to the new process is calculated by adding the integer value to the parent process's priority. For scheduling, it is assumed that a bigger numerical value represents a higher priority. The call returns a unique identifier for the new process.

status: It returns a code, alive or terminated, after checking the status of a process.

terminate: It terminates the specified child process (if mentioned), or terminates the caller itself if no process is explicitly mentioned.
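The calls above are generic; on UNIX-like systems the closest real equivalents are fork() (create a child), waitpid() (check a child's status) and exit()/kill() (terminate). A minimal sketch:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();               /* create a child process           */
    if (child == 0) {                   /* this branch runs in the child    */
        printf("child %d working\n", (int)getpid());
        exit(0);                        /* self-termination                 */
    }
    int status;
    waitpid(child, &status, 0);         /* parent checks the child's status */
    printf("child %d terminated\n", (int)child);
    return 0;
}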

2.4.1 Sharing, communication, synchronization between processes

The processes of a program or application need to communicate with each other because they all work towards a common goal. Table 2.2 summarizes the four types of interaction possible between processes.

Table 2.2: Types of Process Interaction

Interaction Description

Message Passing: Processes share information among themselves by passing messages to each
other.

Data Sharing: If several processes try to update shared data at the same time, the data may become inconsistent. Hence, if a process wants to access shared data, the processes must communicate and decide when it is safe to use it.

Signals: Signals are used to inform about exceptional situations happening to the
process.

Synchronization: In order to achieve the final common goal, processes must organize their
activities and execute their actions in desired sequence.

Message Passing: A message consists of some information that one process sends to another process. The receiving process may use that information in its execution, or copy and store it in its data structures. Both the source (sender) and destination (receiver) must agree on the information exchange process. Each process must know when it has to send or receive a message, so that the information exchange can become a standard protocol or convention between the processes.
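A minimal UNIX sketch of message passing between a parent and a child through a pipe (one possible mechanism; others include message queues and sockets):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                         /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                /* child: the receiver */
        char buf[32];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("received: %s\n", buf);
        return 0;
    }
    close(fd[0]);                     /* parent: the sender */
    write(fd[1], "hello", strlen("hello"));
    close(fd[1]);
    wait(NULL);                       /* wait for the exchange to finish */
    return 0;
}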

Data Sharing: If many processes update shared data concurrently, the shared variables may hold inconsistent values at some point in time. To avoid this situation, while shared data is being accessed by one process, any other process that wants to access the same data must be delayed. Thus data sharing by concurrently running processes comes at a cost.

Signals: A signal is used to report abnormal or exceptional conditions to a process, so that the process may take some specific action to handle those conditions. A signal handler is code that the process executes to handle the raised signal. The signal mechanism is modeled along the lines of interrupts: when a signal is delivered to a process, the operating system interrupts the process's execution and runs the signal handler declared by the process. Operating systems differ in the way they handle pending signals and resume process execution after the signal handler runs.
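A minimal POSIX sketch of declaring a signal handler; the handler only sets a flag, and the main flow resumes once the signal has been handled:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

/* The signal handler: the code the process executes when the signal
 * is delivered (the OS interrupts normal execution to run it). */
static void handler(int signum)
{
    (void)signum;
    got_signal = 1;
}

int main(void)
{
    signal(SIGINT, handler);     /* declare the handler for Ctrl-C  */
    while (!got_signal)
        pause();                 /* sleep until some signal arrives */
    printf("signal handled, resuming normal execution\n");
    return 0;
}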

Synchronization: Suppose an action Aj is to be executed only after action Ai has been executed. Then the process which has to execute action Aj is delayed until the other process finishes executing action Ai. An OS offers services to check whether another process has completed a specified action.

2.4.2 Concurrency and Parallelism

The quality of occurring at the same time is known as parallelism: two processes are parallel if they execute at the same time. Concurrency is an impression of parallelism. Thus, two processes are concurrent if there is an illusion that they are being executed in parallel whereas, in reality, only one of them may be executing at any time.

In an OS, concurrency is attained by interleaving the execution of processes on the CPU, which creates the illusion that these processes are operating at the same time. Parallelism is achieved by using several CPUs, as in a multiprocessor system, and executing different processes on these CPUs.

Concurrency benefits in the following ways:

In a multiprogramming OS, throughput increases by interleaving the operation of processes on a CPU, as an I/O action in one process overlaps with a computational task of another process.
In a time-sharing system, the interleaved operation of processes created by different users makes each user believe that he has a workstation to himself, although it is slower than the real workstation being used.
The interleaving of processes may lead to a speed-up in computation.

Parallelism can offer improved throughput because it runs processes on multiple CPUs at a time. It can also deliver high computation speed; however, the computation speed-up it gives is qualitatively different from that delivered through concurrency. When concurrency is used, speed-up is acquired by overlapping the I/O activities of one process with the CPU activities of other processes, whereas when parallelism is used, the CPU and I/O activities in one process can overlap with the CPU and I/O activities of another process.

The computation speed-up of an application through concurrency and parallelism depends on several factors:

Inherent parallelism within the application
Overhead of concurrency and parallelism
Model of concurrency and parallelism supported by the OS

2.5 Operating System's view of Processes

In the operating system's view, an element of computational work is known as a process; it is also defined as a program in execution mode. Hence the OS kernel's chief task is to regulate the operation of processes to give efficient utilization of the computer system. Accordingly, the kernel assigns resources to a process, shields the process and its resources from interference by other processes, and ensures that the process receives the CPU from time to time until its operation is completed. The key notion is that the OS monitors the process state, by which it can identify what a process is doing at any instant of time, when it requires the CPU, and when to terminate the process and release the resources it acquired.

The kernel is triggered when an event occurs that requires its attention: either an interrupt from hardware or a system call. The kernel implements four essential functions to govern the operation of processes, as shown in Figure 2.10 and discussed below:

1. Context save: Saving the state of the CPU and information regarding resources held by the process whose execution was interrupted.

2. Event handling: Investigating the situation that caused the interrupt, or the request by a process that led to the system call, and taking suitable actions to handle it.

3. Scheduling: Choosing the process to be executed next on the CPU.

[Figure 2.10 shows the kernel's control flow: an event triggers context save, then event handling, then scheduling, then dispatching, followed by exit from the kernel.]

Figure 2.10: Basic functions of OS Kernel for controlling the processes

4. Dispatching: Setting up access to the resources of the scheduled process and loading its saved CPU state into the CPU to resume or begin its operation.

The context save function is performed by the OS kernel to store relevant information about the process being interrupted. It is followed by execution of an appropriate event handling routine, which may prevent further operation of the interrupted process (for example, if a process A has made a system call to execute an I/O operation), or may allow some other process to execute (if the interrupt was caused by completion of that process's I/O operation). The kernel then performs the scheduling function to select a process and the dispatching function to resume or begin its operation. Hence, to implement scheduling, an OS must identify which processes need the CPU at any instant. The key to controlling the operation of processes is therefore to monitor all processes and see what each process is doing at any instant of time: executing on the CPU, queued for the CPU to be assigned to it, queued for an I/O operation to complete, or queued to be swapped into memory. The OS uses the process state to keep track of what a process is doing at any moment. The following section discusses the process state, the different states a process can be in, and the mechanisms by which the OS maintains information about the state of a process.

2.5.1 Process State and State Transitions

Process State: To keep track of what a process is doing at any instant of time, an operating system uses the notion of a process state. The process state identifies the nature of the current activity of a process.

The OS uses the process state, together with information such as the process id, to simplify its task. Basically the OS uses four states to represent a process's status: ready, blocked, running and terminated (see section 2.2.2 for detail).

A process is considered to be in the blocked state by the OS kernel if it has asked for a resource that is yet to be granted, or if it is waiting for some event to happen. The CPU should not be assigned to this type of process until its wait is over. The kernel changes the process state to ready when the requested resource is granted or the event it was waiting for occurs; such a process can then be considered for scheduling. When it is dispatched, the kernel changes the state of the process to running. The state is changed to terminated when process execution is complete or when the process is terminated by the OS kernel for some reason. Since a conventional computer system has only one CPU, at most one process can be in the running state, while several processes can be in the blocked, ready, and terminated states.

State Transitions: A state transition for a process P is a change in its current state. A state transition is triggered by the occurrence of some event, such as the start or end of an I/O operation. When the event occurs, the kernel identifies its effect on processes and modifies the state of the affected process accordingly.

When a process P in the running state requests I/O, its state is changed to blocked until its I/O event completes. At the end of the I/O event, P's state is changed from blocked to ready because P now needs the CPU again. Similar state changes are made when a process makes a request that cannot be satisfied immediately by the OS. A process's ready state is changed to running when it is dispatched, and its running state is changed to ready when it is preempted, either because a higher-priority process has become ready to execute or because its time slice has elapsed. Table 2.3 summarizes the causes of state transitions, and Figure 2.3 depicts the fundamental state transitions for a process. A new process is placed in the ready state after the resources required by it have been assigned. It may pass through the blocked, running, and ready states many times as a result of events, and ultimately moves to the terminated state.

Table 2.3: Fundamental state transition causes for a Process

State Transition | Description

ready → running | The process is dispatched. The CPU starts or resumes execution of the process's instructions.

blocked → ready | A request made by the process is granted, or an event for which it was waiting occurs.

running → ready | The process is preempted because the kernel chooses to schedule some other process. This switch occurs either because a higher-priority process is ready to be executed, or because the process's time slice has elapsed.

running → blocked | The executing process makes a system call to indicate that it wishes to wait until some resource request made by it is granted, or until a specific event happens in the system. Five chief reasons for blocking are:

Process desires an I/O operation
Process desires a resource
Process needs to wait for a stated interval of time
Process needs a message from another process
Process needs some action to be performed by another process

running → terminated | Execution of the program is finished. Five prime reasons for process termination are:

Self-termination: The executing process either finishes its job or finds that it cannot function meaningfully, and makes a system call such as "terminate me". Examples of the second condition are inconsistent or incorrect data, or inability to access data in a desired manner, e.g. improper file access privileges.

Termination by a parent: A process makes a system call such as terminate(P) to end a child process P, when it discovers that execution of the child process is no longer necessary or meaningful.

Exceeding resource utilization: An OS may bound the resources that may be consumed by a process. A process exceeding a resource bound would be terminated by the kernel.

Abnormal conditions during operation: The kernel terminates a process if an abnormal condition arises due to the instruction being executed, e.g. execution of an illegal instruction, execution of a privileged instruction, an arithmetic condition such as overflow, or a memory protection violation.

Incorrect interaction with other processes: The kernel may terminate a process if it gets involved in a deadlock.

A kernel requires supplementary states to describe the nature of the activity of a process that is not in one of the four essential states described earlier in section 2.2.2. A situation may arise where a process in the blocked or ready state gets swapped out of memory. The process must be swapped back into memory before it can resume its activity. Hence it is no longer simply in the blocked or ready state; the OS kernel must define a new state for it, which may be called a suspended process. When a suspended process is to resume its old activity, it should go back to the state it was in at the time of suspension.

To assist this state transition, the kernel may define several suspend states and place a suspended process into the appropriate one. Two suspend states, called ready swapped and blocked swapped, can be defined. Figure 2.11 displays the process states and state transitions. The transitions ready → ready swapped and blocked → blocked swapped are triggered by a swap-out action; the converse transitions happen when the process is swapped back into memory. The blocked swapped → ready swapped transition takes place if the request for which the process was waiting is granted even while the process is in a suspended state. Once it is swapped back into memory, its state changes to ready and it competes with other ready processes for the CPU. A new process is placed either in the ready state or in the ready swapped state depending on memory availability.

[Figure 2.11 extends the state diagram of Figure 2.3 with two suspend states: swap-out moves Ready to Ready swapped and Blocked to Blocked swapped; swap-in moves them back; Blocked swapped moves to Ready swapped when the awaited I/O resource is granted or the event wait completes.]

Figure 2.11: Process state and state transitions using two swapped states

2.5.2 Process Context and the Process Control Block

The OS kernel assigns resources to a process and schedules it for use of the CPU. Accordingly, the kernel's view of a process comprises two parts:

Data, stack and code of the process, and information regarding memory and other resources, such as files, assigned to it.

Information regarding program execution, such as the process state and the state of the CPU, including the process id, stack pointer, etc.
[Figure 2.12 shows the kernel's view of a process split into two structures: the process context (code, data, stack, memory and resource information, file pointers) and the process control block (process id, process state, PC value, priority).]

Figure 2.12: Kernel's View of a Process

These two parts of the kernel's view are contained in the process context and the process control block, i.e. the PCB (for details see section 2.2.3), respectively, as shown in Figure 2.12. This organization allows different OS modules to access significant process-related information conveniently and efficiently.

The process context consists of the following:

1. Address space of the process: The code, data and stack components of the process.

2. Memory allocation information: Information regarding the memory areas assigned to the process. This information is used by the memory management unit (MMU) during operation of the process.

3. Status of file processing activities: Information regarding the files being used, such as current positions in the files.

4. Process interaction information: Information essential to control communication of the process with other processes, e.g. the parent and child ids of the process, and inter-process messages sent to it that have not yet been delivered.

5. Resource information: Information regarding resources assigned to the process.

6. Miscellaneous information: Miscellaneous information required for the operation of the process.

2.6 Operating System Services for Process Management

A process is only ONE instance of a program that is executing; there may be many processes running the same program at a time. A service that an OS provides at runtime for the management of processes is known as a system call. These runtime services are predefined calls and can be invoked by a user process either directly, through a supervisor call written in the program code, or indirectly, by writing commands on the terminal that are translated into OS calls by the console monitor routine. The process management OS calls are:
CREATE: With this call the OS creates a new process with default or specified identifiers and attributes. The parameters passed to the call are a processID and attributes.
DELETE: Also known as DESTROY, EXIT or TERMINATE. This call makes the OS delete the specified process from memory. The parameter passed is the processID of the process to be deleted. The OS reclaims all the resources allocated to the deleted process.
ABORT: The forceful termination of a process. It is generally used to terminate processes that are malfunctioning.
FORK/JOIN: Another type of call to create or terminate a process. FORK and JOIN are used as a pair. FORK splits a sequence of instructions into two concurrently executing parts. JOIN is used to synchronize the caller with the termination of the named process.
SUSPEND: Also known as SLEEP or BLOCK. The process is deferred and kept in the suspended state.
RESUME: Also known as WAKEUP. This call targets suspended processes and brings them back into execution.
DELAY: Also known as SLEEP. The process is suspended for a specified time period. The parameters passed are a processID and a time.
GET_ATTRIBUTES: An inquiry to which the OS responds by returning the current values of the process's attributes. The parameters passed are a processID and an attribute set.
CHANGE_PRIORITY: An instance of the more general system call SET_PROCESS_ATTRIBUTE. It is not applicable to processes whose priority is static; it is used either to increase or to decrease the priority of a process. The parameters passed are a processID and a new_priority.

Each call is made in the form of a procedure call, with the parameters enclosed in parentheses.

The five most important activities of an operating system with respect to process management are:

Creation and removal of system or user processes.
Suspension and resumption of processes.
Process synchronization mechanisms.
Process communication mechanisms.
Deadlock handling mechanisms.

Process management in an OS can be viewed as shown in Figure 2.13. The OS instantiates, terminates and schedules process activities. Process management also involves resource allocation, including the CPU and memory, and offers the services a process may require.

[Figure 2.13 shows process management as the umbrella over processes and threads, process synchronization (deadlocks, message passing) and scheduling, including synchronization and scheduling in multiprocessor OSs.]

Figure 2.13: Overview of Process Management services given by OS

The process management section consists of a module for inter-process communication, which implements synchronization and communication between processes, together with the memory management and scheduling units.

Below is a list of selected operating system services:

Program Execution: The operating system must offer the capability to load a program into memory and execute it.
I/O Operations: The operating system must provide this ability, since users cannot perform I/O operations directly.
Communications: The exchange of information among processes that may be running on the same computer or on entirely different machines. This is usually implemented using shared memory or message passing.
Error Detection: The ability to identify errors in user programs; the OS must also check for data consistency.

2.7 Scheduling Algorithms

2.7.1 Process Scheduling

Process scheduling is the activity of the process manager that handles the removal of the executing process from the CPU and the selection of another process on the basis of a specific strategy. It is basically the decision-making component that decides which process to move from the ready state to the running state.

Process scheduling is an important part of a multiprogramming OS. Such operating systems permit more than one process to be loaded in executable memory at a time, and the loaded processes share the CPU using time multiplexing. The prime goal of process scheduling is to keep the CPU busy at all times and to provide minimum response time for all programs. To achieve this, the scheduler must apply suitable procedures for swapping processes IN and OUT of the CPU.

Schedulers fall into one of two common categories:

Preemptive scheduling: In the preemptive approach, the OS may interrupt the presently running process and move it back to the ready state. This happens when the operating system chooses to serve another process, preempting the presently executing one.
Non-preemptive scheduling: In the non-preemptive approach, as soon as a process moves into the running state, it continues executing until it blocks or terminates itself, waiting for I/O or for some other operating system service. Here the currently running process gives up the CPU willingly.

When an interrupt occurs or a new process arrives, preemptive policies may incur greater overhead than non-preemptive ones, but the preemptive approach may deliver better service.

2.7.2 Process Scheduling Queues

All PCBs are maintained by the OS in process scheduling queues. The OS keeps a separate queue for each process state, and the PCBs of all processes in the same execution state are placed in the same queue. When the state of a process changes, its PCB is removed from its current queue and moved to the queue of its new state.

The following process scheduling queues are maintained by the operating system (see Figure 2.14):

Job queue: This queue keeps all the processes in the system; processes entering the system are placed in this queue.
Ready queue: This queue holds the set of all processes residing in main memory, ready and waiting to execute. A new process is always placed in this queue.
Device queues: Processes that are waiting for a device to be allocated are kept in these queues. There are different queues for different I/O devices; processes blocked due to unavailability of an I/O device form that device's queue.

[Figure 2.14 shows processes flowing from the job queue to the ready queue to the CPU, exiting on completion, or moving to a device queue for I/O and then back to the ready queue.]

Figure 2.14: Process Scheduling Queues

Different policies are used by the OS to manage each queue. The OS scheduler determines how a process moves between the ready queue and the run queue, which can have only one entry per processor core on the system.

2.7.3 Types of Schedulers

Schedulers are special system software that handle process scheduling. Their chief duty is to select the jobs to be submitted into the system and to choose which process to execute, as can be seen in Figure 2.15. There are three types of schedulers:

1. Long-Term Scheduler
2. Short-Term Scheduler
3. Medium-Term Scheduler

[Figure 2.15 shows where each scheduler acts: long-term scheduling admits new processes to the ready (or ready-suspend) state; short-term scheduling moves processes between ready and running; medium-term scheduling moves processes between the ready/waiting states and their suspended counterparts. A running process exits to terminated, or moves to waiting on an event wait.]

Figure 2.15: Use of Scheduling during State Transitions

1. Long Term Scheduler: Also called the job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the job queue and loads them into memory for execution and CPU scheduling. Long-term scheduling is generally performed when a new process is created; it handles the transition of a process from the new state to the ready state.

The main objective of the job scheduler is to maintain a good degree of multiprogramming. It provides a balanced mix of jobs, such as processor-bound and I/O-bound. If the degree of multiprogramming is steady, the average process creation rate must equal the average departure rate of processes leaving the system. On some systems, the long-term scheduler may be minimal or absent; it is absent in time-sharing operating systems.

2. Short Term Scheduler: Also termed the CPU scheduler, it runs very often. Its chief objective is to increase system performance in accordance with a chosen set of criteria. It handles the state transition from the ready state to the running state: the CPU scheduler chooses a process from the ready queue and assigns the CPU to it.

Dispatcher is another name given to the short-term scheduler, as it makes the judgment on which process is to be executed next. Short-term schedulers are faster than long-term schedulers. The principal goal of this scheduler is to boost CPU performance and increase the rate of process execution.

3. Medium Term Scheduler: Medium-term scheduling is part of swapping. It removes processes from memory, which reduces the degree of multiprogramming. The medium-term scheduler handles the swapped-out processes.

A running process may move to the suspended state if it requests I/O; a suspended process cannot make any progress towards completion. In this circumstance, to remove the process from memory and create space for other processes, the suspended process is moved to secondary storage. This procedure is known as swapping, and the process is called a swapped-out or rolled-out process. Swapping is necessary to improve the process mix.

Under heavy load, this scheduler removes large processes from the ready queue for some time to allow smaller processes to execute, thereby reducing the number of processes in the ready queue.

Table 2.4: Comparison between Schedulers

S.No. | Long Term Scheduler | Short Term Scheduler | Medium Term Scheduler
1 | It chooses processes from the pool of processes and loads them into memory for execution. | It picks those processes that are ready for execution. | It can reinstate a process into memory so that the process can continue its execution.
2 | It controls the degree of multiprogramming. | It offers less control over the degree of multiprogramming. | It decreases the degree of multiprogramming.
3 | Its speed is slower than the short term scheduler. | Its speed is the fastest among the three. | Its speed lies in between the long term and short term schedulers.
4 | It is also known as the job scheduler. | It is also known as the CPU scheduler. | It is known as the process swapping scheduler.
5 | It is either absent or negligible in time-sharing systems. | It is also insignificant in time-sharing systems. | It is a part of time-sharing systems.

2.7.4 CPU Scheduling

CPU scheduling is a procedure which permits one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of some resource such as I/O. Its purpose is to increase CPU utilization, i.e. to make complete use of the CPU. The objective of CPU scheduling is to make the system fast, efficient and fair.

Scheduling Criteria:

Scheduling criteria are also termed the scheduling methodology. Scheduling is an important feature of multiprogramming, and different CPU scheduling algorithms have diverse properties. The criteria used to judge which scheduling algorithm is best are defined as follows:

CPU utilization: The aim is to keep the CPU as busy as possible, so that no CPU cycle goes unused. Ideally, the CPU would be working 100% of the time. In a real scenario, CPU usage ranges from about 40% (lightly loaded) to 90% (heavily loaded).

Throughput: The rate at which processes are completed per unit of time; in other words, the total amount of work done in a unit of time. This may vary from 10 per second to 1 per hour, depending on the specific processes.

Turnaround time: The total time taken to execute a specific process, i.e. the interval from the submission time of the process to its completion time (wall clock time):
Turnaround time = Waiting Time + Service Time

Waiting time: The sum of the periods spent waiting in the ready queue to obtain control of the CPU.

Load average: The average number of processes sitting in the ready queue waiting for their opportunity to acquire the CPU.

Response time: The time from the submission of a request until the first response is produced. Note that it is the period until the initial response, not the final response (the completion of process execution).

Fairness: Every process should get a justified (fair) share of the CPU.


In general, throughput and CPU utilization are to be maximized, while the other measures (waiting, turnaround and response times) are to be minimized.

2.7.5 Scheduling Algorithms

Short-term scheduling is governed by scheduling policies or scheduling algorithms. The primary objective of short-term scheduling is to allot CPU time in such a way as to improve one or more characteristics of system behavior.

For these scheduling algorithms it is assumed that only a single processor is present. A scheduling policy may be either preemptive or non-preemptive in nature. Scheduling algorithms decide which process from the ready queue is to be assigned to the CPU; the process scheduler takes this decision on the basis of the scheduling policy. For process scheduling, a process's arrival time and service time play the key role.

The scheduling algorithms are listed as follows:

First-come, first-served scheduling (FCFS) algorithm

Shortest Job First Scheduling (SJF) algorithm

Shortest Remaining time (SRT) algorithm

Non-preemptive priority Scheduling algorithm

Preemptive priority Scheduling algorithm

Round-Robin Scheduling algorithm

Highest Response Ratio Next (HRRN) algorithm

Multilevel Queue Scheduling algorithm

Multilevel Feedback Queue Scheduling algorithm

For understanding the various scheduling algorithms, the following example data is used:

PROCESS | ARRIVAL TIME | SERVICE TIME (BURST TIME) | PRIORITY
P1 | 0 | 4 | 2
P2 | 3 | 6 | 1
P3 | 5 | 3 | 3
P4 | 8 | 2 | 1

FIRST-COME, FIRST-SERVED SCHEDULING (FCFS) ALGORITHM



The First-come First-served scheduling algorithm follows a simple rule: the process that enters first should be completed first. As each process becomes ready, it joins the ready queue. When the currently running process finishes executing, the oldest process in the ready queue is selected for execution next; that is, of the processes available in the ready queue, the one that entered first is executed first. FCFS is poor in performance, as the average waiting time is often quite long. It is based on the non-preemptive scheduling approach.

From the above example (waiting and turnaround times here are measured from time 0):

PROCESS | SERVICE TIME (BURST TIME) | WAITING TIME | TURNAROUND TIME
P1 | 4 | 0 | 4
P2 | 6 | 4 | 10
P3 | 3 | 10 | 13
P4 | 2 | 13 | 15

Gantt chart:

P1 P2 P3 P4
0 4 10 13 15

Average Waiting Time = (P1 wait time + P2 wait time + P3 wait time + P4 wait time) / 4
= (0 + 4 + 10 + 13) / 4
= 27 / 4
= 6.75

Average Turnaround Time = (P1 turnaround + P2 turnaround + P3 turnaround + P4 turnaround) / 4
= (4 + 10 + 13 + 15) / 4
= 42 / 4
= 10.5
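A minimal C sketch of the FCFS computation above; like the table, it measures waiting and turnaround from time 0:

#include <stdio.h>

int main(void)
{
    int burst[] = {4, 6, 3, 2};            /* service times of P1..P4      */
    int n = 4, clock = 0;
    double total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {          /* serve strictly in queue order */
        int wait = clock;                  /* time spent waiting in queue   */
        clock += burst[i];                 /* run to completion             */
        total_wait += wait;
        total_tat += clock;                /* turnaround = completion time  */
        printf("P%d: wait=%d turnaround=%d\n", i + 1, wait, clock);
    }
    printf("avg wait=%.2f avg turnaround=%.2f\n", total_wait / n, total_tat / n);
    return 0;
}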
Advantages:

Best suited for long processes.
Minimum overhead on the processor.
The starvation problem does not occur.

Disadvantages:

A convoy effect arises: even a very small process has to wait for its turn to use the CPU, and a short process stuck behind a long process results in lower CPU utilization.
Throughput is not emphasized.

SHORTEST JOB FIRST (SJF) SCHEDULING ALGORITHM



This algorithm associates with each process the length of its next CPU burst. Shortest-job-first scheduling is also known as shortest process next (SPN). Among the processes available in the ready queue, the process with the shortest projected processing time is selected for execution; therefore, a short process will move to the front of the queue ahead of long jobs. If the next CPU bursts of two processes are the same, FCFS scheduling is applied to break the tie. The SJF scheduling algorithm is provably optimal: it gives the smallest average waiting time for a given set of processes. However, it cannot be applied directly at the level of short-term CPU scheduling, as there is no way of knowing the length of the next CPU burst in advance.

A non-preemptive SJF algorithm will permit the currently running process to finish.

For the above example:

CASE 1: When arrival time is not considered.

PROCESS | SERVICE TIME (BURST TIME) | WAITING TIME | TURNAROUND TIME
P1 | 4 | 5 | 9
P2 | 6 | 9 | 15
P3 | 3 | 2 | 5
P4 | 2 | 0 | 2

Gantt chart:

P4 P3 P1 P2
0 2 5 9 15

Average Waiting Time = (P1 wait time + P2 wait time + P3 wait time + P4 wait time) / 4
= (5 + 9 + 2 + 0) / 4
= 16 / 4
= 4

Average Turnaround Time = (P1 turnaround + P2 turnaround + P3 turnaround + P4 turnaround) / 4
= (9 + 15 + 5 + 2) / 4
= 31 / 4
= 7.75

CASE 2: When arrival time is given.

PROCESS | ARRIVAL TIME | SERVICE TIME (BURST TIME) | WAITING TIME | TURNAROUND TIME
P1 | 0 | 4 | 0 | 4
P2 | 3 | 6 | 4 - 3 = 1 | 7
P3 | 5 | 3 | 12 - 5 = 7 | 10
P4 | 8 | 2 | 10 - 8 = 2 | 4

(At t = 10, both P3 and P4 are waiting; SJF picks P4 first because its burst of 2 is shorter than P3's burst of 3.)

Gantt chart:

P1 P2 P4 P3
0 4 10 12 15

Average Waiting Time = (P1 wait time + P2 wait time + P3 wait time + P4 wait time) / 4
= (0 + 1 + 7 + 2) / 4
= 10 / 4
= 2.5

Average Turnaround Time = (P1 turnaround + P2 turnaround + P3 turnaround + P4 turnaround) / 4
= (4 + 7 + 10 + 4) / 4
= 25 / 4
= 6.25

Advantages

It gives better turnaround time performance, because a short job is given immediate preference over a currently executing longer job.
High throughput.

Disadvantages

The elapsed (execution-completed) time must be recorded for each process, which places an additional burden on the CPU.
Starvation is possible for the longer processes.

SHORTEST REMAINING TIME (SRT) FIRST SCHEDULING ALGORITHM

The preemptive version of Shortest Job First is known as the Shortest Remaining Time (SRT) First algorithm. It preempts the currently running process if the CPU burst time of a newly arrived process is shorter than the time remaining for the presently running process. It is difficult to implement in interactive systems, where the required CPU time is not known in advance; it is generally used in batch environments where preference is to be given to short jobs.

For the above example:

PROCESS | ARRIVAL TIME | SERVICE TIME (BURST TIME) | ELAPSED-TIME EVENTS | WAITING TIME | TURNAROUND TIME
P1 | 0 | 4 | P2 arrives at 3: remaining 4 - 3 = 1 < 6, so P1 continues | 0 | 4
P2 | 3 | 6 | P3 arrives at 5: remaining 6 - (5 - 4) = 5 > 3, so P2 is preempted; P4 arrives at 8: remaining 5 > 2, so P2 keeps waiting | (4 - 3) + (10 - 5) = 6 | 15 - 3 = 12
P3 | 5 | 3 | - | 0 | 3
P4 | 8 | 2 | - | 0 | 2

Gantt chart:

P1 P2 P3 P4 P2
0 4 5 8 10 15

Average Waiting Time = (P1 wait time + P2 wait time + P3 wait time + P4 wait time) / 4
= (0 + 6 + 0 + 0) / 4
= 6 / 4
= 1.5

Average Turnaround Time = (P1 turnaround + P2 turnaround + P3 turnaround + P4 turnaround) / 4
= (4 + 12 + 3 + 2) / 4
= 21 / 4
= 5.25

PRIORITY SCHEDULING

The Shortest Job First algorithm is a special case of the general priority scheduling algorithm. An integer called the priority is associated with each process, and the CPU is assigned to the process with the highest priority. Normally, the lowest integer value represents the highest priority. Processes of equal priority are scheduled in first-come, first-served order. Priority scheduling can be non-preemptive or preemptive.
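A minimal sketch of the selection step, using the convention above that a lower integer means higher priority (ties fall back to FCFS order because the scan keeps the earliest candidate):

/* Returns the index of the highest-priority ready process, or -1. */
int pick_highest_priority(const int priority[], const int ready[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (ready[i] && (best == -1 || priority[i] < priority[best]))
            best = i;
    return best;
}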

Non-Preemptive Priority Scheduling Algorithm

The CPU is allocated to the process with the highest priority after the presently running process finishes.

For the above example:

PROCESS | ARRIVAL TIME | SERVICE TIME (BURST TIME) | PRIORITY | WAITING TIME | TURNAROUND TIME
P1 | 0 | 4 | 2 | 0 | 4
P2 | 3 | 6 | 1 | 4 - 3 = 1 | 7
P3 | 5 | 3 | 3 | 12 - 5 = 7 | 10
P4 | 8 | 2 | 1 | 10 - 8 = 2 | 4

Gantt chart:

P1 P2 P4 P3
0 4 10 12 15

Average Waiting Time = (P1 wait time + P2 wait time + P3 wait time + P4 wait time) / 4
= (0 + 1 + 7 + 2) / 4
= 10 / 4
= 2.5

Average Turnaround Time = (P1 turnaround + P2 turnaround + P3 turnaround + P4 turnaround) / 4
= (4 + 7 + 10 + 4) / 4
= 25 / 4
= 6.25
Advantage
Gives the best response to the highest priority processes.
Disadvantage
Starvation is likely for the processes with the lowest priority.

Preemptive Priority Scheduling Algorithm


The CPU is assigned to the highest priority process immediately upon the arrival of that process. If a process of equal priority is already running, the CPU is assigned to the newcomer only after the presently executing process finishes, even if yet another equal-priority process arrives in the meantime.

For the above example:

PROCESS | ARRIVAL TIME | BURST TIME | PRIORITY | ELAPSED-TIME EVENTS | WAITING TIME | TURNAROUND TIME
P1 | 0 | 4 | 2 | P2 arrives at 3 with priority 1 < 2, so P1 is preempted and waits | 11 - 3 = 8 | 12
P2 | 3 | 6 | 1 | P3 arrives (1 < 3, P3 waits); P4 arrives (1 = 1, P4 waits) | 0 | 6
P3 | 5 | 3 | 3 | P4 in queue (3 > 1, P3 waits) | 12 - 5 = 7 | 10
P4 | 8 | 2 | 1 | P1 in queue (1 < 2, P1 waits) | 9 - 8 = 1 | 3
Gantt chart:

P1 P2 P4 P1 P3
0 3 9 11 12 15

Average Waiting Time = (P1 wait time + P2 wait time + P3 wait time + P4 wait time) / 4
= (8 + 0 + 7 + 1) / 4
= 16 / 4
= 4

Average Turnaround Time = (P1 turnaround + P2 turnaround + P3 turnaround + P4 turnaround) / 4
= (12 + 6 + 10 + 3) / 4
= 31 / 4
= 7.75

Advantage

Compared with the non-preemptive version, it gives excellent response to the highest priority process.

Disadvantage

Starvation may occur for the processes having the lowest priority.

ROUND-ROBIN SCHEDULING ALGORITHM

This type of scheduling algorithm targets time-sharing systems. It is analogous to FCFS with the addition of preemption. Round-robin scheduling is also known as time-slicing scheduling. The preemption is driven by a clock: a clock interrupt is generated at periodic intervals, commonly 10-100 ms. When the interrupt occurs, the presently executing process is placed in the ready queue and the next ready job is selected on a first-come, first-served basis. This approach is known as time-slicing, as each process is given a time slice before being preempted.

One of the following occurs:

The process has a CPU burst of less than the time slice, in which case it releases the CPU voluntarily, or
The CPU burst of the presently running process is longer than the time slice, in which case a context switch occurs and the process is placed at the end of the ready queue.

In round-robin scheduling, the primary design problem is deciding the length of the time quantum or time slice to be applied. If the slice is very short, then short-duration processes will move through rapidly.

Important definitions:

Context Switch: Moving the processor from one executing process (or job) to another; it involves saving and restoring the process's memory and register state.
Processor Sharing: Use of a time slice (quantum) so small that each of the n ready processes appears to run on its own processor at 1/n of the real speed.
Reschedule latency: How much time it takes from when a process becomes ready to run until it finally gets control of the CPU.

For the above example, suppose the time slice is 2 ms:

PROCESS   ARRIVAL TIME   SERVICE TIME (BURST TIME)   WAITING TIME                   TURNAROUND TIME
P1        0              4                           0 + (5 - 3) = 2                6
P2        3              6                           0 + (6 - 5) + (10 - 8) = 3     9
P3        5              3                           (8 - 5) + (14 - 10) = 7        10
P4        8              2                           12 - 8 = 4                     6

Slice by slice: P1 runs first and is preempted when P2 arrives, with 4 - 3 = 1 unit remaining. P2's first slice leaves 6 - 2 = 4 units and its second leaves 4 - 2 = 2 units, while P3 (which arrives during P2's first slice) and later P4 wait in the queue. P3's first slice leaves 3 - 2 = 1 unit, and P4 waits in the ready queue for its single turn.

Gantt chart:

P1   P2   P1   P2   P3   P2   P4   P3
0    3    5    6    8    10   12   14   15

Average Waiting Time = (P1wait time + P2wait time + P3wait time + P4wait time) / 4
= (2 + 3 + 7 + 4) / 4
= 16 / 4
= 4

Average Turnaround Time = (P1turnaroundtime + P2turnaroundtime + P3turnaroundtime + P4turnaroundtime) / 4


= (6 + 9 + 10 + 6) / 4
= 31 / 4
= 7.75

Advantages

In a general-purpose, transaction-processing or time-sharing system, the round-robin scheduling algorithm is an effective approach to apply.
It tries to give fair handling to all processes in the queue.
Processor overhead is low.
It provides excellent response time for processes having short CPU burst times.

Disadvantages

The quantum (time slice) value must be chosen carefully, as the algorithm depends heavily on it.
There is overhead in handling the clock interrupt.
Throughput becomes low if the time quantum chosen is too small.
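
The mechanics of round-robin are easy to express with a double-ended queue. The sketch below is illustrative only; note that conventions differ between texts on when a newly arrived process enters the queue relative to the preempted one, so the resulting schedule can differ slightly from the worked example above:

    from collections import deque

    # Minimal round-robin sketch with quantum q = 2 (processes sorted by arrival).
    procs = [("P1", 0, 4), ("P2", 3, 6), ("P3", 5, 3), ("P4", 8, 2)]
    q, time, i = 2, 0, 0
    ready = deque()
    remaining = {name: burst for name, _, burst in procs}
    finish = {}

    while remaining:
        if not ready:                                   # jump ahead if the queue is empty
            time = max(time, procs[i][1])
        while i < len(procs) and procs[i][1] <= time:   # admit processes that have arrived
            ready.append(procs[i][0]); i += 1
        name = ready.popleft()
        run = min(q, remaining[name])
        time += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= time:   # arrivals during this slice go first
            ready.append(procs[i][0]); i += 1
        if remaining[name] == 0:
            del remaining[name]
            finish[name] = time                         # completion time of this process
        else:
            ready.append(name)                          # preempted: back of the queue

    print(finish)    # turnaround = finish time - arrival time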

HIGHEST RESPONSE RATIO NEXT (HRRN) ALGORITHM

The guiding principle of Highest Response Ratio Next (HRRN) is the Response Ratio (RR). HRRN works like FCFS, but in the event of a tie (more than one process waiting to be scheduled) it computes the RR of every tied process and chooses the one with the highest value. HRRN not only favours shorter processes but also limits the waiting time of longer processes. It is a non-preemptive scheduling approach.

The Response Ratio is computed as follows:

RR = (Waiting Time + Burst Time) / Burst Time

For the above example:

PROCESS   ARRIVAL TIME   SERVICE TIME (BURST TIME)   WAITING TIME   RR                          TURNAROUND TIME
P1        0              4                           0              -                           4
P2        3              6                           4 - 3 = 1      -                           7
P3        5              3                           10 - 5 = 5     ((10 - 5) + 3) / 3 = 2.66   8
P4        8              2                           13 - 8 = 5     ((10 - 8) + 2) / 2 = 2      7

Gantt chart:

P1 P2 P3 P4
0 4 10 13 15

At t = 10, the RR of P3 is higher than the RR of P4, so P3 is assigned the CPU first.

Average Waiting Time = (P1wait time + P2wait time + P3wait time + P4wait time) / 4
= (0 + 1 + 5 + 5) / 4
= 11 / 4
= 2.75

Average Turnaround Time = (P1turnaroundtime + P2turnaroundtime + P3turnaroundtime + P4turnaroundtime) / 4


= (4 + 7 + 8 + 7) / 4
= 26 / 4
= 6.5

HRRN is an effort to avoid the starvation of predictably long-running jobs on busy systems with many short jobs, since a waiting job's response ratio keeps growing until it is eventually selected.
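
The decision rule can be written directly from the formula above. The following fragment (an illustration, not from the original text) reproduces the tie at t = 10 in the example, where P3 and P4 are both waiting:

    # HRRN decision sketch: pick the waiting process with the highest
    # response ratio RR = (waiting time + burst time) / burst time.
    def response_ratio(now, arrival, burst):
        return ((now - arrival) + burst) / burst

    candidates = {"P3": (5, 3), "P4": (8, 2)}    # name: (arrival, burst)
    now = 10                                     # the scheduling point in the example
    for name, (arr, burst) in candidates.items():
        print(name, "RR =", round(response_ratio(now, arr, burst), 2))
    best = max(candidates, key=lambda n: response_ratio(now, *candidates[n]))
    print("selected:", best)                     # P3 (RR 2.67 beats P4's 2.0)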

MULTILEVEL QUEUE SCHEDULING ALGORITHM

Multilevel queues are not an autonomous scheduling algorithm; they use other existing algorithms to group and schedule processes with similar characteristics. Basically, multilevel queue scheduling is applied in situations where the processes can be categorized into groups based on certain specifications such as CPU time, process type, memory size, I/O access, etc. In a multilevel queue scheduling algorithm there are 'm' queues, where 'm' is the number of groups formed by the processes based on their different properties, as shown in Figure 2.16 as an example. Each queue is allotted a priority, and each queue can have its own scheduling algorithm, such as FCFS or round-robin. For a process in a queue to get executed, all the queues of higher priority must be unoccupied, meaning the processes in those higher priority queues must have finished their execution. In this scheduling algorithm, once a process is assigned to a queue it is not allowed to move to any other queue.
Highest Priority Queue

    SYSTEM PROCESSES
    INTERACTIVE PROCESSES
    INTERACTIVE EDIT PROCESSES
    BATCH PROCESSES
    USER PROCESSES

Lowest Priority Queue

Figure 2.16: Multilevel Queue Scheduling

For example, I/O-bound processes can be organized in one queue and all CPU-bound processes can be arranged in another queue. The process scheduler then alternately selects processes from each queue and dispatches them to the CPU according to the algorithm allotted to that particular queue, as in the sketch below.
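
A dispatcher for a multilevel queue only needs to scan the queues in priority order. The sketch below is a minimal illustration; the queue names and their contents are invented for the example:

    from collections import deque

    # Multilevel queue dispatch sketch: take the next process from the
    # first (highest priority) non-empty queue; each queue could apply
    # its own policy (FCFS, round-robin, ...) internally.
    queues = [
        ("system",      deque()),
        ("interactive", deque(["edit_session"])),
        ("batch",       deque(["payroll_job", "report_job"])),
        ("user",        deque(["background_task"])),
    ]

    def dispatch():
        for name, q in queues:              # scan from highest priority downwards
            if q:
                return name, q.popleft()
        return None, None                   # every queue empty: CPU is idle

    print(dispatch())   # ('interactive', 'edit_session')
    print(dispatch())   # ('batch', 'payroll_job')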

MULTILEVEL FEEDBACK QUEUE SCHEDULING ALGORITHM

It is an improvement on multilevel queue scheduling in which a process can change queues, i.e. move from one queue to another. In this method, the ready queue is divided into several queues of different priorities. The system uses the CPU burst characteristic of a process to allot it to a queue: if a process consumes too much CPU time, it is moved to a queue of lower priority. This favours I/O-bound tasks and gives good input/output device utilization. A procedure called aging promotes processes from a lower priority queue to the next higher priority queue after an appropriate time interval.

In Figure 2.17, the process queues are presented from top to bottom in order of decreasing priority, and the topmost priority queue has the smallest CPU time slice. When a process from the topmost queue uses up its entire time slice on the CPU, it is placed in the next lower priority queue. A process in a lower queue is serviced only when all the queues above it are empty.

Advantage

A process that is postponed for too long in a lower priority queue may be relocated (aged) to a queue of higher priority.

Disadvantage

CPU overhead increases because processes are moved between queues.
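
The demotion mechanism can be sketched in a few lines. The fragment below is illustrative only (the queue count, the quanta and the two sample processes are assumptions, and promotion by aging is omitted for brevity):

    from collections import deque

    # Multilevel feedback queue sketch: three levels with growing quanta;
    # a process that uses its whole slice is demoted one level.
    quanta = [2, 4, 8]                        # level 0 = highest priority
    queues = [deque([("A", 7), ("B", 3)]), deque(), deque()]

    time = 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty level
        name, rem = queues[level].popleft()
        run = min(quanta[level], rem)
        time += run
        rem -= run
        if rem == 0:
            print(name, "finished at t =", time)
        else:
            # Used the full slice: treat as CPU-bound and demote one level.
            queues[min(level + 1, len(queues) - 1)].append((name, rem))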



Highest Priority Queue
Medium Priority Queue
Lowest Priority Queue
(Aging moves processes back up to higher priority queues.)

Figure 2.17: Multilevel Feedback Queue

2.8 Performance Evaluation

Some of the methods used to evaluate the performance of a scheduling algorithm are as follows:

Deterministic modeling,
Queuing models,
Simulations and
Implementation.

Gantt charts are often employed as visual representations.

2.8.1 Deterministic modeling

It involves defining a workload in advance, executing it on the chosen algorithm, and then comparing how the selected algorithm performs on that workload against other algorithms. This produces concrete numbers that are easy to understand, and it is fairly apparent which algorithm processes the given workload fastest. It does not, however, take varying workloads into account, which is why it is important to select several different categories of workload when testing an algorithm.

Moreover, deterministic modeling cannot fully describe real-world scenarios, because the demands that users and processes place on the system vary from one minute to the next and are therefore extremely difficult to capture in a fixed workload.

In the preceding examples, several algorithms were run on the same given workload to compute the average waiting time and average turnaround time. Comparing these results shows which algorithm works better in this scenario.

2.8.2 Queuing models

This is a quicker way of comparing two algorithms. Metrics such as CPU utilization, throughput and response time (see section 2.6.4 for details) are easy to compute. This category of modeling concerns situations where both CPU and I/O bursts are involved.

Little's Law relates the average number of tasks in a system (n), the average arrival rate of new tasks (λ), and the average time taken to complete a task (W) through the following formula:

n = λ × W
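
For instance (an illustrative example, not from the original text), if tasks arrive at an average rate of λ = 5 tasks per second and each task spends W = 2 seconds in the system on average, then n = 5 × 2 = 10 tasks are present in the system on average.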

Algebraic rearrangement of Little's Law yields several different scheduling metrics; for example, CPU utilization can be computed as the arrival rate times the average time to service a job. This type of modeling is effective for contrasting algorithms. A disadvantage of this kind of analysis is that the computed results are often estimates (approximations); with complex scheduling algorithms, simplifying assumptions must be made, and hence the calculated results are not a precise representation of real-world circumstances.

2.8.3 Simulations

Simulations run the same workload through program code that mimics the different algorithms, and the outcome of each run can then be analyzed. Models (scenarios) of different computing environments are generated; the software then executes the workloads under the different algorithms and environments, and the results are charted and compared for assessment. A benefit of this type of assessment is that many different environments and algorithms may be replicated and tested. Random number generators are used to reproduce real-world behaviour by varying the burst times, arrival rates and process sizes. This method of assessment can be time consuming and expensive, but it is more faithful to real-world situations than queuing models or deterministic modeling.
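
As a minimal sketch of this idea (the distributions and parameter values are assumptions, not from the original text), a simulator might synthesize a workload like this and then feed the same job list to each scheduling algorithm:

    import random

    random.seed(42)    # fixed seed so simulation runs are reproducible

    def synthetic_workload(n, mean_gap=2.0, mean_burst=5.0):
        """Generate n jobs as (name, arrival time, burst time) tuples."""
        t, jobs = 0.0, []
        for i in range(n):
            t += random.expovariate(1.0 / mean_gap)          # inter-arrival gap
            burst = random.expovariate(1.0 / mean_burst)     # CPU burst length
            jobs.append((f"J{i}", round(t, 2), round(burst, 2)))
        return jobs

    for job in synthetic_workload(4):
        print(job)    # feed these tuples to each simulated algorithm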

2.8.4 Implementation

This is the most expensive sort of assessment to carry out. As the name suggests, implementation proceeds by taking the scheduling algorithm, placing it inside an operating system, and executing it in production. Interpreting the outcomes is not always straightforward, since the computing environment keeps changing. The hidden cost of disruption to users is also a concern for this technique and should be seriously considered and weighed. However, implementation is the most precise way to identify the actual advantage of one scheduling algorithm over another, because the data collected on how the scheduler performs is real-world data. Implementation is a requirement for any scheduling algorithm to move from theory to practice.

A few parameters that may be considered for performance evaluation are as follows:

UTILIZATION: The fraction of time a system is in use.
    Utilization = in-use time / total observation time

THROUGHPUT: The number of jobs completed per unit time.
    Throughput = number of jobs / second

SERVICE TIME: The time a system needs to handle one request. (seconds)

QUEUEING TIME: The time spent in a queue waiting for service from the system. (seconds)

RESIDENCE TIME: The total time a request spends at a system.
    Residence Time = Service Time + Queueing Time

RESPONSE TIME: The time taken by a system to reply to a user process. (seconds)

THINK TIME: The time spent by the user of an interactive system deciding on the next request. (seconds)

The aim is to improve both the average value of these metrics and their variation. (But be careful: improving the average can still leave occasional very bad cases.)

NOTE: Some parameters are already defined in section 2.6.4.
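
As a final illustrative sketch (the observation figures are invented), the first few parameters above can be computed directly from one measurement window:

    # Computing the evaluation parameters from one observation window.
    observation_time = 60.0      # seconds the system was observed
    busy_time = 45.0             # seconds the system was actually in use
    jobs_completed = 180

    utilization = busy_time / observation_time        # 0.75
    throughput = jobs_completed / observation_time    # 3.0 jobs per second

    service_time, queueing_time = 0.20, 0.05          # seconds per request
    residence_time = service_time + queueing_time     # 0.25 seconds

    print(utilization, throughput, residence_time)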

2.9 Summary

This chapter covers the basic concepts related to processes in an operating system. A process is defined as a program in execution; several processes can simultaneously execute the same program. Processes change state during their execution, the state reflecting a process's current activity. In general, a process can exhibit the following states: new, ready, running, waiting or suspended, and terminated. A process scheduler is required to schedule processes, and schedulers can be of three types: long-term, short-term and medium-term. Based on scheduling algorithms, the scheduler decides when a process should get the CPU and for how much time. Scheduling algorithms can be of either type: preemptive or non-preemptive. Some common algorithms are FCFS, Round Robin, SJF and Priority scheduling. Every algorithm can be evaluated on certain performance evaluation parameters; these parameters help in judging which algorithm is best suited to a given environment and workload. The parameters generally considered for evaluation are average waiting time, turnaround time, throughput, CPU utilization, etc. Apart from these aspects, the chapter discusses how a user perceives a process in a system and, at the same time, how the OS perceives it: both use different views of a process, and the OS maintains structures such as the process control block to handle it.

2.10 Exercises

1. Which module gives control of the CPU to the process selected by the short-term scheduler?
A. dispatcher B. interrupt
C. scheduler D. none of the mentioned

2. The interval from the time of submission of a process to the time of completion is termed as:
A. waiting time B. turnaround time C. response time D. throughput

3. In priority scheduling algorithm:


A. CPU is allocated to the process with highest priority
B. CPU is allocated to the process with lowest priority
C. equal priority processes cannot be scheduled
D. none of the mentioned

4. The processes that are residing in main memory and are ready and waiting to execute are kept
on a list called:
A. job queue B. ready queue C. execution queue D. process queue

5. Dispatch latency is:


A. the speed of dispatching a process from running to the ready state
B. the time of dispatching a process from running to ready state and keeping the CPU idle
C. the time to stop one process and start running another one
D. None of these

6. For its execution, a process of an operating system needs


A. throughput B. timers C. resources D. both a and b

7. Which of the following are the states of a five-state process model?


i) Running ii) Ready iii) New iv) Exit
v) Destroy
A. i, ii, iii and v only B. i, ii, iv and v only
C. i, ii, iii, and iv only D. All i, ii, iii, iv and v

8. The process management function of an operating system kernel includes:


A. Process creation and termination. B. Process scheduling and dispatching
C. Process switching D. All of the above

9. Which of the following statements is true for a suspended process?


i) The process is not immediately available for execution.
ii) The process may be removed from suspended state automatically without removal order.
A. i only B. ii only C. i and ii only D. None

10. A process which has just terminated but has yet to relinquish its resources is called
A. A suspended process B. A zombie process
C. A blocked process D. A terminated process
11. What is the difference between a program and a process?
12. What is CPU utilization?
13. Explain the difference between busy waiting and blocking.

14. Explain the role of a Process control block (PCB).


15. Explain the function of the system calls along with the process state diagrams.
16. Compare preemptive and non-preemptive scheduling methods and explain in detail the
priority-based scheduling technique.
17. What are the motivations for short term, medium term and long term scheduling levels?
Explain with block schematics.
18. Define throughput and turnaround time.
19. Explain starvation. When and how starvation may occur?
20. Compare and contrast the round-robin, pre-emptive policy with shortest job first pre-emptive
policy.
21. Suppose the following jobs arrive for processing at the times indicated; each job will run for the
listed amount of time.

Jobs Arrival Time Burst Time (seconds)


1 0 8
2 0.4 4
3 1 1

Give Gantt charts illustrating the execution of these jobs using the non-preemptive FCFS and SJF
scheduling algorithms. Compute the average turnaround time and average waiting time of each job
for the above algorithms.

22. Given the following information about jobs:

Jobs Arrival Time Burst Time (seconds) Priority


1 0 8 3
2 0 4 2
3 0 6 1
4 0 1 4

All jobs arrive at time 0 (but in the order 1, 2, 3 & 4). Draw charts and calculate the average time to
complete (turnaround time) using the following scheduling algorithms: FCFS, SJF, Priority
scheduling and Round Robin (t = 2).
23. Put the following in chronological order in the context of the life of a process:
ready, suspended, execute, terminate, create.
24. What is the difference between the idle and blocked state of a process?
25. With the help of a state transition diagram, explain various states of a process.

2.11 Suggested Readings

1. Operating System Concepts by Abraham Silberschatz, Peter B. Galvin, Greg Gagne, Wiley.


2. Operating Systems: Internals and Design Principles by William Stallings, Pearson.
