
Anna University Engineering Question Bank
600207 - DESIGN AND ANALYSIS OF ALGORITHMS


• Unit - I
• Unit - II
• Unit - III
• Unit - IV
• Unit - V

PART - A

1. What is an operating system?


An OS is a program that manages the computer hardware. It also provides a basis for
application programs and acts as an intermediary between a user of a computer and the
computer hardware.
2. Define Mainframe Systems.
Mainframe computer systems are used for many commercial and scientific
applications. Mainframe systems grew from batch systems to time-shared systems.
3. What is Batch System?
The computer runs one application at a time. Early machines were physically
enormous and were run from a console. The most common input devices were card readers and
tape drives; the output devices were line printers, tape drives, and card punches.
The users did not interact with the computer directly; rather, they prepared a job consisting of
the program, the data, and some control information about the job, and submitted it to the
computer operator.
4. What is Time Share system?
An interactive computer provides direct communication between the user and the system. A time-
shared OS uses CPU scheduling and multiprogramming to provide each user with a small
portion of a time-shared computer. Each user has at least one separate program in memory;
a program loaded into memory and executing is commonly referred to as a process.
Time-sharing OSs are even more complex than multiprogrammed OSs. The OS has to provide
reasonable response time, and jobs may have to be swapped in and out of main memory to
disk, which provides a backing store for main memory.
5. Define Desktop System.
Personal computers appeared in the 1970s. During the first decade, PC operating systems
lacked the features needed to protect the OS from user programs. Modern PC operating
systems are both multi-user and multi-tasking. The goal of a PC OS is to maximize user
convenience and responsiveness. These systems include PCs running Microsoft Windows and
the Apple Macintosh. Microsoft has extended MS-DOS into a multitasking system, and the
Apple Macintosh OS has been ported to more advanced hardware. Linux, a UNIX-like OS
available for PCs, has also become popular recently.
6. List out the advantage of Multi-Processor System.
It is also called a parallel system or tightly coupled system. It has more than one
processor in close communication, sharing the computer bus, clock, and sometimes memory and
peripheral devices.
Advantages:
Increased throughput
Economy of scale
Increased reliability
7. What are the main reasons to build Distributed system?
It is a collection of processors that do not share memory or a clock; instead, each processor
has its own memory. The processors communicate with one another through various
communication networks, such as high-speed buses or telephone lines.
The four main reasons for building distributed systems:
Resource sharing
Computation speedup
Reliability
Communication
8. Define Clustered system?
Clustered systems, like parallel systems, gather together multiple CPUs to accomplish
computational work. Clustered systems differ from parallel systems in that they are composed of
two or more individual systems coupled together. There are two types of
clustering:
Symmetric clustering
Asymmetric clustering
9. What is Real time system?
Another special-purpose OS is the real-time system. A real-time system is used when rigid time
requirements have been placed on the operation of a processor or on the flow of data,
and is often used as a control device in a dedicated application. Systems that control scientific
experiments, medical imaging systems, industrial control systems, and certain display
systems are real-time systems.
A real-time system has well-defined, fixed time constraints; processing must be done within
the defined constraints. A real-time system functions correctly only if it returns the correct
result within its time constraints.
Real-time systems come in two flavors:
Hard real-time systems
Soft real-time systems
10. State on Hand held systems.
These include PDAs (Personal Digital Assistants) such as Palm Pilots, and cellular telephones with
connections to a network such as the Internet.
Developers of handheld systems face many challenges, most of which are due to the size of
such devices. The following issues relate to handheld systems:
Many handheld devices have between 512 KB and 8 MB of memory. As a result, the OS and
applications must manage memory very efficiently, forcing program developers to work
within the constraints of limited physical memory.
A second issue is the speed of the processors used in the devices. Faster processors require
more power and therefore larger batteries, so the OS and applications must be designed
not to tax the processor.
The last issue is the small display screens typically available. Familiar tasks such as reading
e-mail and browsing web pages must be condensed onto smaller displays. Some handheld
devices may use wireless technology such as Bluetooth.

11. List out the types of System Components.


Even though not all systems have the same structure, many modern operating systems
share the goal of supporting the following types of system components:
Process Management
Main-Memory Management
File Management
I/O System Management
Secondary-Storage Management
Networking
Protection System
Command Interpreter System

12. What are the services provide by Operating Systems?


The five services provided by operating systems for the convenience of users:
Program Execution
I/O Operations
File System Manipulation
Communications
Error Detection

13. What is Process Management?


The operating system manages many kinds of activities, ranging from user programs to
system programs like the printer spooler, name servers, file server, etc. Each of these activities
is encapsulated in a process. A process includes the complete execution context (code, data,
PC, registers, OS resources in use, etc.).
It is important to note that a process is not a program. A process is only one instance of a
program in execution; many processes can be running the same program. The five
major activities of an operating system in regard to process management are
• Creation and deletion of user and system processes.
• Suspension and resumption of processes.
• A mechanism for process synchronization.
• A mechanism for process communication.
• A mechanism for deadlock handling.
14. Definition of Process
The designers of MULTICS first used the term "process" in the 1960s. Since then, the term
process has been used somewhat interchangeably with 'task' or 'job'. The process has been given
many definitions, for instance:
A program in execution.
An asynchronous activity.
The 'animated spirit' of a procedure in execution.
The entity to which processors are assigned.
The 'dispatchable' unit.
15. What is Process State?
The process state consists of everything necessary to resume the process execution if it is
somehow put aside temporarily. The process state consists of at least following:
Code for the program.
Program's static data.
Program's dynamic data.
Program's procedure call stack.
Contents of general purpose registers.
Contents of program counter (PC).
Contents of program status word (PSW).
Operating system resources in use.

16. Define Process Creation


In general-purpose systems, some way is needed to create processes as needed during
operation. There are four principal events that lead to process creation:
• System initialization.
• Execution of a process-creation system call by a running process.
• A user request to create a new process.
• Initiation of a batch job.
Foreground processes interact with users. Background processes stay in the background,
sleeping, but suddenly spring to life to handle activity such as e-mail, web pages, printing,
and so on; background processes are called daemons. In UNIX, the fork system call creates
an exact clone of the calling process.

17. List out the Reasons for creation of a process


• User logs on.
• User starts a program.
• The operating system creates a process to provide a service, e.g., to manage a printer.
• Some program starts another process, e.g., Netscape calls xv to display a picture.
18. Define Process Termination.
A process terminates when it finishes executing its last statement. Its resources are
returned to the system, it is purged from any system lists or tables, and its process control
block (PCB) is erased, i.e., the PCB's memory space is returned to a free memory pool. A
process usually terminates for one of the following reasons:
• Normal Exit: Most processes terminate because they have done their job. This call is exit
in UNIX.
• Error Exit: The process discovers a fatal error. For example, a user tries to compile a
program that does not exist.
• Fatal Error: An error is caused by the process due to a bug in the program, for example,
executing an illegal instruction, referencing non-existent memory, or dividing by zero.
• Killed by Another Process: A process executes a system call telling the operating system
to terminate some other process. In UNIX, this call is kill. In some systems, when a process
is killed, all processes it created are killed as well (UNIX does not work this way).
19. List out the Process State Transitions?
Following are the six possible transitions among the five process states:
• Transition 1 occurs when process discovers that it cannot continue. If running process
initiates an I/O operation before its allotted time expires, the running process voluntarily
relinquishes the CPU.

This state transition is:

Block (process-name): Running → Block.

• Transition 2 occurs when the scheduler decides that the running process has run long
enough and it is time to let another process have CPU time.

This state transition is:

Time-Run-Out (process-name): Running → Ready.

• Transition 3 occurs when all other processes have had their share and it is time for the first
process to run again

This state transition is:

Dispatch (process-name): Ready → Running.

• Transition 4 occurs when the external event for which a process was waiting (such as
arrival of input) happens.
This state transition is:

Wakeup (process-name): Blocked → Ready.

• Transition 5 occurs when the process is created.

This state transition is:

Admitted (process-name): New → Ready.

• Transition 6 occurs when the process has finished execution.

This state transition is:

Exit (process-name): Running → Terminated.
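These transitions can be modeled as a small state machine. The following is a minimal C sketch (not part of the original question bank) encoding the five states and the six legal transitions; state and event names follow the answer above.

#include <stdio.h>
#include <string.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } pstate;

/* Map (state, event) to the next state; -1 marks an illegal transition. */
static int transition(pstate s, const char *event) {
    if (s == NEW     && strcmp(event, "admit") == 0)        return READY;
    if (s == READY   && strcmp(event, "dispatch") == 0)     return RUNNING;
    if (s == RUNNING && strcmp(event, "time-run-out") == 0) return READY;
    if (s == RUNNING && strcmp(event, "block") == 0)        return BLOCKED;
    if (s == BLOCKED && strcmp(event, "wakeup") == 0)       return READY;
    if (s == RUNNING && strcmp(event, "exit") == 0)         return TERMINATED;
    return -1;
}

int main(void) {
    const char *events[] = { "admit", "dispatch", "block", "wakeup" };
    pstate s = NEW;
    for (int i = 0; i < 4; i++) {
        s = (pstate)transition(s, events[i]);
        printf("after %-12s -> state %d\n", events[i], (int)s);
    }
    return 0;
}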


20. Define Process Control Block
A process in an operating system is represented by a data structure known as a process
control block (PCB) or process descriptor. The PCB contains important information about the
specific process including
• The current state of the process i.e., whether it is ready, running, waiting, or whatever.
• Unique identification of the process in order to track "which is which" information.
• A pointer to parent process.
• Similarly, a pointer to child process (if it exists).
• The priority of process (a part of CPU scheduling information).
• Pointers to locate memory of processes.
• A register save area.
• The processor it is running on.
The PCB is a central store of information that allows the operating system to locate key
information about a process. Thus, the PCB is the data structure that defines a process to
the operating system.
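As an illustration, a PCB can be sketched as a C structure. The field names and types below are hypothetical, chosen to mirror the list above; no real kernel uses exactly this layout.

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

struct pcb {
    int            pid;          /* unique identification of the process */
    proc_state     state;        /* current state of the process         */
    struct pcb    *parent;       /* pointer to the parent process        */
    struct pcb    *first_child;  /* pointer to a child process, if any   */
    int            priority;     /* part of CPU scheduling information   */
    void          *page_table;   /* pointer to locate process memory     */
    unsigned long  regs[16];     /* register save area                   */
    int            cpu;          /* the processor it is running on       */
};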
21. What is the Process Model
Even though in actuality there are many processes running at once, the OS gives each
process the illusion that it is running alone.
• Virtual time: The time used by just this process. Virtual time progresses at a rate
independent of other processes. Actually, this is not quite true: virtual time is typically
incremented a little during system calls used for process switching, so the more other
processes there are, the more "overhead" virtual time occurs.
• Virtual memory: The memory as viewed by the process. Each process typically believes it
has a contiguous chunk of memory starting at location zero. Of course this can't be true of
all processes (or they would be using the same memory), and in modern systems it is
actually true of no processes (the memory assigned is not contiguous and does not include
location zero).
Virtual time and virtual memory are examples of abstractions provided by the operating
system to the user processes so that the latter "sees" a more pleasant virtual machine
than actually exists.
22. What is meant by Process Hierarchies
• Modern general-purpose operating systems permit a user to create and destroy processes.
• In UNIX this is done by the fork system call, which creates a child process, and the exit
system call, which terminates the current process.
• After a fork both parent and child keep running (indeed they have the same program text)
and each can fork off other processes.
• A process tree results. The root of the tree is a special process created by the OS during
startup.
MS-DOS is not multiprogrammed, so when one process starts another, the first process is
blocked and waits until the second is finished.
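The fork/exit pair can be shown in a few lines of C. This is a standard POSIX example, assuming a UNIX-like system:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();             /* clone the calling process */
    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {          /* child: an exact clone */
        printf("child %d, parent %d\n", (int)getpid(), (int)getppid());
        exit(0);                    /* exit terminates the current process */
    } else {                        /* parent keeps running too */
        waitpid(pid, NULL, 0);      /* wait for the child to finish */
        printf("parent %d reaped child %d\n", (int)getpid(), (int)pid);
    }
    return 0;
}

Each process the parent forks becomes a node under it in the process tree.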
23. List out the Implementation of Processes
The OS organizes the data about each process in a table naturally called the process table.
Each entry in this table is called a process table entry or PTE.
• One entry per process.
• The central data structure for process management.
• A process state transition (e.g., moving from blocked to ready) is reflected by a change in
the value of one or more fields in the PTE.
• We have converted an active entity (process) into a data structure (PTE). Finkel calls this
the level principle: "an active entity becomes a data structure when looked at from a lower
level".
• The PTE contains a great deal of information about the process. For example,
o Saved value of registers when process not running
o Stack pointer
o CPU time used
o Process id (PID)
o Process id of parent (PPID)
o User id (uid and euid)
o Group id (gid and egid)
o Pointer to text segment (memory for the program text)
o Pointer to data segment
o Pointer to stack segment
o UMASK (default permissions for new files)
o Current working directory
o Many others
24. Define CPU scheduling.
CPU scheduling is the process of switching the CPU among various processes. CPU
scheduling is the basis of multiprogrammed operating systems. By switching the CPU among
processes, the operating system can make the computer more productive.

25. What is a Dispatcher?


The dispatcher is the module that gives control of the CPU to the process selected by the
short-term scheduler. This function involves:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program.
26. What is dispatch latency?
The time taken by the dispatcher to stop one process and start another running is known as
dispatch latency.
27. What are the various scheduling criteria for CPU scheduling?
The various scheduling criteria are
CPU utilization
Throughput
Turnaround time
Waiting time
Response time
28. Define throughput?
Throughput in CPU scheduling is the number of processes that are completed per unit time.
For long processes, this rate may be one process per hour; for short transactions,
throughput might be 10 processes per second.
29. What is turnaround time?
Turnaround time is the interval from the time of submission to the time of completion of a
process. It is the sum of the periods spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O.
30. What are Scheduling Algorithms?
CPU Scheduling deals with the problem of deciding which of the processes in the ready
queue is to be allocated the CPU.
Following are some scheduling algorithms we will study
• FCFS Scheduling.
• Round Robin Scheduling.
• SJF Scheduling.
• SRT Scheduling.
• Priority Scheduling.
• Multilevel Queue Scheduling.
• Multilevel Feedback Queue Scheduling.
31. What is First-Come-First-Served (FCFS) Scheduling?
First-Come-First-Served is the simplest scheduling algorithm. Processes are dispatched
according to their arrival time on the ready queue. Being a non-preemptive discipline, once a
process has the CPU, it runs to completion.
FCFS scheduling is fair in the formal or human sense of fairness, but it is unfair in
the sense that long jobs make short jobs wait and unimportant jobs make important jobs
wait.
FCFS is more predictable than most other schemes, since jobs run in arrival order. The FCFS
scheme is not useful in scheduling interactive users because it cannot guarantee good
response time. The code for FCFS scheduling is simple to write and understand. One of the
major drawbacks of this scheme is that the average waiting time is often quite long.
The First-Come-First-Served algorithm is rarely used as a master scheme in modern
operating systems, but it is often embedded within other schemes.
Other names of this algorithm are:
• First-In-First-Out (FIFO)
• Run-to-Completion
• Run-Until-Done
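Because FCFS runs jobs strictly in arrival order, the average waiting time can be computed in one pass over the burst times. A minimal sketch, using the classic textbook bursts 24, 3, 3 (illustrative values, not from this question bank):

#include <stdio.h>

int main(void) {
    int burst[] = { 24, 3, 3 };   /* CPU bursts in arrival order */
    int n = 3, wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += wait;            /* process i waits for everyone ahead of it */
        wait  += burst[i];
    }
    printf("average waiting time = %.2f\n", (double)total / n);
    /* prints 17.00: the waits are 0, 24, and 27 */
    return 0;
}

If the same three jobs arrived shortest-first, the waits would be 0, 3, and 6 (average 3.00), which is why a long first arrival hurts FCFS.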
32. What is Round Robin Scheduling?
Round Robin Scheduling is preemptive (at the end of time-slice) therefore it is effective in
time-sharing environments in which the system needs to guarantee reasonable response
times for interactive users.
The only interesting issue with the round robin scheme is the length of the quantum. Setting the
quantum too short causes too many context switches and lowers CPU efficiency. On the
other hand, setting the quantum too long may cause poor response time and approximates
FCFS.
In any event, the average waiting time under round robin scheduling is often quite long.
33. What is Shortest-Job-First (SJF) Scheduling?
Shortest-Job-First (SJF) is a non-preemptive discipline in which waiting job (or process) with
the smallest estimated run-time-to-completion is run next. In other words, when CPU is
available, it is assigned to the process that has smallest next CPU burst.
SJF scheduling is especially appropriate for batch jobs for which the run times are known
in advance. Since the SJF scheduling algorithm gives the minimum average waiting time for a
given set of processes, it is provably optimal by that measure.
34. What is Shortest-Remaining-Time (SRT) Scheduling?
• SRT is the preemptive counterpart of SJF and is useful in time-sharing environments.
• In SRT scheduling, the process with the smallest estimated run-time to completion is run
next, including new arrivals.
• In the SJF scheme, once a job begins executing, it runs to completion.
• In the SRT scheme, a newly arrived process with a shorter estimated run-time may preempt a
running process.
• The SRT algorithm has higher overhead than its counterpart SJF.
• SRT must keep track of the elapsed time of the running process and must handle
occasional preemptions.
• In this scheme, small arriving processes will run almost immediately. However, longer
jobs have even longer mean waiting times.
35. Define Priority Scheduling.
Each job is assigned a priority (externally, perhaps by charging more for higher priority) and
the highest priority ready job is run.
• Similar to "external priorities" above.
• If many processes have the highest priority, use RR among them.
• Can easily starve processes (see aging below for a fix).
• Can have the priorities changed dynamically to favor processes holding important
resources (similar to state-dependent RR).
• Many policies can be thought of as priority scheduling in which we run the job with the
highest priority (with different notions of priority for different policies).
36. What is Multilevel Queue Scheduling
A multilevel queue scheduling algorithm partitions the ready queue into several separate
queues.
In multilevel queue scheduling, processes are permanently assigned to one queue, based on
some property of the process, such as
• Memory size
• Process priority
• Process type
The algorithm chooses the process from the occupied queue that has the highest priority, and runs
that process either
• Preemptively or
• Non-preemptively
Each queue has its own scheduling algorithm or policy.
37. Define CPU/Process Scheduling
Scheduling the processor is often called "process scheduling" or simply "scheduling".
The objectives of a good scheduling policy include
• Fairness.
• Efficiency.
• Low response time (important for interactive jobs).
• Low turnaround time (important for batch jobs).
• High throughput [the above are from Tanenbaum].
• Repeatability. Dartmouth (DTSS) "wasted cycles" and limited logins for repeatability.
• Fairness across projects.
o "Cheating" in Unix by using multiple processes.
o TOPS-10.
o Fair share research project.
• Degrade gracefully under load.
38. Define Long Term Scheduling
• "Job scheduling": decide when to start jobs (i.e., do not necessarily start them when
submitted).
• Force users to log out and/or block logins if over-committed.
o CTSS (an early time-sharing system at MIT) did this to ensure decent interactive response
time.
o Unix does this if out of processes (i.e., out of PTEs).
o "LEM jobs during the day" (Grumman).
o Some supercomputer sites.
PART-B
1. Explain the evolution and views of the OS.
2. Discuss the system components.
3. Explain about the system calls groups.
4. Discuss about the OS services.
5. Explain the system Components.
6. Explain the function of operating system.
7. Explain the distributed system.
8. Discuss about the clustered system.
9. Explain about the file management.
10. Write short notes on the following
a. Mainframe systems
b. Batch system
c. Multi-Programming system.
d. Time shared OS
11. Explain the process states briefly.
12. Discuss about the Process Operations.
13. Discuss about the Process Termination.
14. Explain the Process Control Block.
15. Explain the Inter-Process Communication (IPC).
16. Explain the Proposals for Achieving Mutual Exclusion.
17. Explain the Scheduling Algorithms.
18. Explain the CPU/Process Scheduling.
19. Explain the Real time Scheduling.
20. State about the Multilevel Feedback Queues
Part-A
1. What are Concurrent Processes?
Concurrent processes are processes that overlap in time; several processes may be between
their starting and finishing points at once.
2. What are the uses of Resource Allocation?
The uses of resource allocation are:
1. We will focus on shared variables
2. Printers are actually controlled by one process called the printer server or printer daemon.
Modern operating systems do not use mutual exclusion techniques for controlling the
printer.
3. List out the Low-Level Techniques for Mutual Exclusion.
• In UNIX, if the operating system is about to make changes to its kernel data structures
(e.g., create a new process by changing the process table)
• Turn off interrupts; therefore, a process may get more time on the processor
• Process cannot lose the processor because the short term scheduler is run in response to a
timer interrupt
• Change data structure
• Keep the code very short without bugs
• Usually only one or two changes are needed here
• Allow interrupts again
• Used on Hercules
• Great for single-processor machines
4. Define Critical Section
The key to preventing trouble involving shared storage is to find some way to prohibit more
than one process from reading and writing the shared data simultaneously. The part of the
program where the shared memory is accessed is called the critical section. To avoid race
conditions and flawed results, one must identify the code in the critical sections of each thread.
The characteristic properties of the code that form a Critical Section are
• Codes that reference one or more variables in a “read-update-write” fashion while any of
those variables is possibly being altered by another thread.
• Codes that alter one or more variables that are possibly being referenced in “read-update-
write” fashion by another thread.
• Codes that use a data structure while any part of it is possibly being altered by another thread.
• Codes that alter any part of a data structure while it is possibly in use by another thread.
5. What is the Solution to the Critical Section Problem?
A solution to the critical section problem must satisfy:
Mutual exclusion
Progress
Bounded waiting
6. Define Semaphores.
A semaphore is a protected variable whose value can be accessed and altered only by the
operations P and V and an initialization operation.
Binary semaphores can assume only the value 0 or the value 1; counting semaphores, also
called general semaphores, can assume any nonnegative value.
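A hedged sketch of P and V using POSIX semaphores, where sem_wait plays P and sem_post plays V (compile with -pthread); initializing the semaphore to 1 makes it binary:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;                      /* binary semaphore guarding counter */
int counter = 0;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);         /* P: decrement, block if already 0 */
        counter++;                /* critical section */
        sem_post(&mutex);         /* V: increment, wake one waiter */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&mutex, 0, 1);       /* initial value 1 -> binary semaphore */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d\n", counter);   /* always 200000 */
    sem_destroy(&mutex);
    return 0;
}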
7. List out the type of semaphores.
• Counting Semaphores
• Binary Semaphores
7. What are the Techniques for Critical Section Problem?
Software
Peterson's algorithm: based on busy waiting (see the sketch after this list)
Semaphores: general facility provided by the operating system
Monitors: programming-language technique
Hardware
Exclusive access to a memory location: always assumed
Interrupts that can be turned off: must have only one processor for mutual exclusion
Test-and-Set: special machine-level instruction
Swap: atomically swaps the contents of two words
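Peterson's algorithm, referenced in the list above, is short enough to show in full. This is the textbook two-process form; on real hardware the shared variables would also need atomic operations or memory barriers.

#include <stdbool.h>

volatile bool flag[2] = { false, false };  /* who wants to enter */
volatile int  turn    = 0;                 /* whose turn to defer */

void enter_region(int self) {              /* self is 0 or 1 */
    int other = 1 - self;
    flag[self] = true;                     /* announce interest */
    turn = self;                           /* yield priority to the other */
    while (flag[other] && turn == self)
        ;                                  /* busy wait */
}

void leave_region(int self) {
    flag[self] = false;                    /* exit the critical section */
}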
8. What do you mean by Monitors?
Monitors are a higher-level construct, used in .NET (C#'s lock) and Java (synchronized), that
allow operation declarations inside of them. These operations are all mutually exclusive
(they are all the same critical section).
9. List out the Functions of monitors.
The two functions monitors offer to the operation declarations are wait(condition) and
signal(condition), which act as a mutex's down and up operations, both of which are only
accessible from inside the monitor. Both of those internal operations can act on any of the
conditions defined by the monitor.
10. What are the problems in monitors?
The problems in monitors are:
Bounded Buffer
Producer-Consumer
11. Define Deadlock
A set of processes is in a deadlock state if each process in the set is waiting for an event that
can be caused only by another process in the set. In other words, each member of the set of
deadlocked processes is waiting for a resource that can be released only by a deadlocked
process. None of the processes can run, none of them can release any resources, and none
of them can be awakened. It is important to note that the number of processes and the
number and kind of resources possessed and requested are unimportant.
12. What is meant by Preemptable and Non-Preemptable Resources?
Resources come in two flavors: preemptable and non-preemptable. A preemptable resource
is one that can be taken away from the process with no ill effects. Memory is an example of
a preemptable resource.
A non-preemptable resource is one that cannot be taken away from a process without
causing ill effects. For example, a CD recorder is not preemptable at an arbitrary moment.
Reallocating resources can resolve deadlocks that involve preemptable resources.
Deadlocks that involve non-preemptable resources are difficult to deal with.
13. What are the necessary and sufficient deadlock conditions?
The necessary and sufficient deadlock conditions are:
Mutual Exclusion Condition
Hold and Wait Condition
No-Preemptive Condition
Circular Wait Condition
14. What are the strategies for dealing with the deadlock problem?
There are four strategies for dealing with the deadlock problem:
The Ostrich Approach
Deadlock Detection and Recovery
Deadlock Avoidance
Deadlock Prevention
15. List out the Deadlock Prevention condition.
The Deadlock prevention conditions are:
Elimination of “Mutual Exclusion” Condition
Elimination of “Hold and Wait” Condition
Elimination of “No-preemption” Condition
Elimination of “Circular Wait” Condition
16. What does Deadlock Avoidance mean?
This approach to the deadlock problem anticipates deadlock before it actually occurs. It
employs an algorithm to assess the possibility that deadlock could occur, and acts
accordingly. This method differs from deadlock prevention, which guarantees that
deadlock cannot occur by denying one of the necessary conditions of deadlock.
If the necessary conditions for a deadlock are in place, it is still possible to avoid deadlock by
being careful when resources are allocated. Perhaps the most famous deadlock avoidance
algorithm, due to Dijkstra [1965], is the Banker's algorithm, so named because the process
is analogous to that used by a banker in deciding if a loan can be safely made.
17. What is meant by Deadlock Detection?
Deadlock detection is the process of actually determining that a deadlock exists and
identifying the processes and resources involved in the deadlock.
The basic idea is to check allocation against resource availability for all possible allocation
sequences to determine whether the system is in a deadlocked state. Of course, the deadlock
detection algorithm is only half of this strategy.
18. What is needed to recover when a deadlock is detected?
Recovery from a detected deadlock may require one of the following:
• Temporarily take resources away from deadlocked processes.
• Roll a process back to some checkpoint, allowing preemption of a needed resource, and
restart the process from the checkpoint later.
• Successively kill processes until the system is deadlock free.
19. What is the Hold and Wait Condition?
Requesting processes hold resources already allocated to them while waiting for requested
resources.
Explanation: There must exist a process that is holding a resource already allocated to it
while waiting for additional resources that are currently being held by other processes.
20. What is the Circular Wait Condition?
The processes in the system form a circular list or chain where each process in the list is
waiting for a resource held by the next process in the list.
21. What is a thread?
A thread, otherwise called a lightweight process (LWP), is a basic unit of CPU utilization; it
comprises a thread ID, a program counter, a register set, and a stack. It shares with other
threads belonging to the same process its code section, data section, and operating system
resources such as open files and signals.
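A minimal POSIX-threads sketch of the sharing described above: the two threads share the process's globals, but each gets its own stack (compile with -pthread).

#include <pthread.h>
#include <stdio.h>

int shared = 0;                   /* data section: visible to all threads */

void *run(void *arg) {
    int id = *(int *)arg;         /* arg points into main's stack */
    shared++;                     /* unsynchronized: a demo of sharing,
                                     not of synchronization */
    printf("thread %d sees shared = %d\n", id, shared);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, run, &id1);
    pthread_create(&t2, NULL, run, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}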
22. What are the benefits of multithreaded programming?
The benefits of multithreaded programming can be broken down into four major categories:
Responsiveness
Resource sharing
Economy
Utilization of multiprocessor architectures
23. Compare user threads and kernel threads.
User threads: supported above the kernel and implemented by a thread library at the user
level. Kernel threads: supported directly by the operating system.
User threads: thread creation and scheduling are done in user space, without kernel
intervention, so they are fast to create and manage. Kernel threads: thread creation,
scheduling, and management are done by the operating system, so they are slower to
create and manage than user threads.
User threads: a blocking system call will cause the entire process to block. Kernel threads:
if a thread performs a blocking system call, the kernel can schedule another thread in the
application for execution.
24. Define thread cancellation & target thread.
The thread cancellation is the task of terminating a thread before it has completed. A thread
that is to be cancelled is often referred to as the target thread. For example, if multiple
threads are concurrently searching through a database and one thread returns the result,
the remaining threads might be cancelled.
25. What are the different ways in which a thread can be cancelled?
Cancellation of a target thread may occur in two different scenarios:
Asynchronous cancellation: one thread immediately terminates the target thread.
Deferred cancellation: the target thread can periodically check whether it should terminate,
allowing it an opportunity to terminate itself in an orderly fashion.
26. Write some classical problems of synchronization?
The Bounded-Buffer Problem
The Readers-Writers Problem
The Dining Philosophers Problem
27. When do errors occur in the use of semaphores?
i. When a process interchanges the order of the wait and signal operations on the
semaphore mutex.
ii. When a process replaces signal(mutex) with wait(mutex).
iii. When a process omits the wait(mutex), or the signal(mutex), or both.
28. Define the term critical regions?
Critical regions are small and infrequent, so that system throughput is largely unaffected by
their existence. A critical region is a control structure for implementing mutual exclusion over
a shared variable.
29. What are the drawbacks of monitors?
1. The monitor concept is not implemented in most commonly used programming
languages.
2. There is the possibility of deadlock in the case of nested monitor calls.
30. What are the two levels in threads?
Threads are implemented in two ways:
1. User level
2. Kernel level
31. What are the methods for handling deadlocks?
The deadlock problem can be dealt with in one of three ways:
Use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a
deadlock state.
Allow the system to enter the deadlock state, detect it, and then recover.
Ignore the problem altogether, and pretend that deadlocks never occur in the system.
32. What are a safe state and an unsafe state?
A state is safe if the system can allocate resources to each process in some order and still
avoid a deadlock. A system is in a safe state only if there exists a safe sequence. A sequence
of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each Pi,
the resources that Pi can still request can be satisfied by the currently available resources plus
the resources held by all the Pj, with j < i. If no such sequence exists, then the system state is
said to be unsafe.
33. What is banker’s algorithm?
Banker’s algorithm is a deadlock avoidance algorithm that is applicable to a resource-
allocation system with multiple instances of each resource type. The two algorithms used for
its implementation are:
Safety algorithm: the algorithm for finding out whether or not a system is in a safe state.
Resource-request algorithm: if the resulting resource allocation is safe, the transaction is
completed and process Pi is allocated its resources. If the new state is unsafe, Pi must wait
and the old resource-allocation state is restored.
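A sketch of the safety-algorithm half of the Banker's algorithm, for a tiny made-up system of 3 processes and 2 resource types (the matrices are illustrative, not from the question bank):

#include <stdio.h>
#include <string.h>

#define N 3  /* processes */
#define M 2  /* resource types */

/* Returns 1 if a safe sequence exists, 0 otherwise. */
int is_safe(int avail[M], int alloc[N][M], int need[N][M]) {
    int work[M], finish[N] = { 0 };
    memcpy(work, avail, sizeof work);
    for (int done = 0; done < N; ) {
        int progressed = 0;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                        /* Pi can finish: reclaim it */
                for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                finish[i] = 1; done++; progressed = 1;
            }
        }
        if (!progressed) return 0;           /* nobody can finish: unsafe */
    }
    return 1;                                /* all can finish: safe */
}

int main(void) {
    int avail[M]    = { 3, 2 };
    int alloc[N][M] = { {1, 0}, {2, 1}, {0, 1} };
    int need[N][M]  = { {2, 2}, {1, 1}, {3, 1} };
    printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}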
34. Differentiate deadlock and starvation.
A set of processes is in a deadlock state when every process in the set is waiting for an event
that can be caused only by another process in the set.
Starvation or indefinite blocking is a situation where processes wait indefinitely within the
semaphore.
PART-B
1. Give a detailed description about deadlocks and its characterization
2. Explain about the methods used to prevent deadlocks
3. Write in detail about deadlock avoidance.
4. Explain the Banker’s algorithm for deadlock avoidance.
5. Give an account about deadlock detection.
6. What are the methods involved in recovery from deadlocks?
7. Explain what semaphores are, their usage, implementation given to avoid busy waiting
and binary semaphores.
8. Explain the classic problems of synchronization.
9. Write about critical regions and monitors.
10. Explain the Dining Philosophers problem of synchronization and its algorithm.
11. Explain the Producer-Consumer problem and its algorithm.

Part-A
1. What is the Memory Management Storage Hierarchy?
Medium-term scheduler (used in time-sharing systems)
• Manages processes waiting for resources
o Examples: waiting to be placed in main memory, waiting for I/O, or waiting to be placed in
the ready list of the short-term scheduler.
• Integrated with memory management routines (especially processes waiting for main
memory)
2. What is Volatile Storage?
• volatile - contents lost if power is interrupted
• non-volatile - can withstand power failures and system crashes
3. What is meant by Cache Memory?
• Source: FutureShop.ca web site
• A cache is a very fast block of memory that speeds up the performance of another device.
Frequently used data are stored in the cache. The computer looks in the cache first to see if
what it needs is there.
• Level 1 Cache is located directly inside the CPU itself, and stores frequently used data or
commands. Although relatively small, Level 1 Cache has the most direct effect on overall
performance.
• Level 2 Cache is located on the motherboard. It stores frequently used data from the
computer's main memory (RAM). In Intel Pentium chips, Advanced Transfer Cache is an
improved version of the Level 2 Cache, in which the cache memory operates at the same
speed as the processor, which is as much as four times the speed of a standard Level 2
Cache.
4. Define Contiguous Memory Allocation.
A straightforward method to allocate memory to several processes:
Memory is divided into two partitions: Resident OS in lower part, and user processes in
upper part
Assumes a single hunk of memory per process
Relocation-register scheme used to protect user processes from each other, and from
changing operating-system code and data
5. List out the Dynamic Storage Allocation Problem
First fit – allocate the first hole that fits (see the sketch after this list)
Best fit – allocate the smallest hole that fits
Worst fit – allocate the largest hole
First and best fit are better than worst fit
These methods cause external fragmentation
o The 50-percent rule gives 33% memory loss
May be solved by compaction
o Code must be relocated at execution
o Time consuming
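A toy first-fit allocator over a fixed list of holes, to make the placement policies concrete; the hole sizes are made up.

#include <stdio.h>

#define NHOLES 4
int hole[NHOLES] = { 100, 500, 200, 300 };   /* free hole sizes in KB */

/* Return the index of the first hole big enough, shrinking it; -1 if none. */
int first_fit(int request) {
    for (int i = 0; i < NHOLES; i++) {
        if (hole[i] >= request) {
            hole[i] -= request;   /* the leftover stays as a smaller hole */
            return i;
        }
    }
    return -1;
}

int main(void) {
    printf("212 KB -> hole %d\n", first_fit(212));  /* hole 1 (500 KB) */
    printf("417 KB -> hole %d\n", first_fit(417));  /* -1: 888 KB are free in
                                                       total, but no single hole
                                                       fits -- external
                                                       fragmentation */
    return 0;
}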
6. What is Swapping?
Move entire processes between main and secondary memory
Informally, the term \"swapping\" is also applied to the movement of partial processes
There are more processes than will fit in main memory
A simple memory management policy
Done by moving a complete process in or out of memory
o Process data moved include:
Process control block (PCB)
Data variables (i.e., heap, stack)
Instructions (in machine language)
7. What is Segmentation?
Divide a process up into logically connected chunks which vary in size
- e.g. segment: a logically connected chunk of a process' address space
- it is the virtual address space that is being divided up
8. What is Virtual Memory Operating System?
OS takes care of all aspects of address translation
Gives the user process the illusion that it has its own address space starting from 0
The user process address space can be larger than main memory
Thus, the actual size of the user process can be larger than main memory
Load the pieces of the process that are needed immediately during execution
Other parts of the process can be stored in the swap space (backing store)
9. What are the advantages of Virtual Memory?
Can have processes larger than available memory
OS, not the programmer, manages the details
Performance is satisfactory when we have "locality of reference"
10. Write the disadvantage of virtual memory.
Can be slow because the running process stops whenever a required page is in the swap
space
11. What is Static binding?
Bind before execution time
Process must execute in the same area of memory (loader loads it into this area)
i) Absolute code - compiler generates code with the physical address
ii) Relocatable code - loader translates the address in the object module into the physical
address
12.What is Dynamic Binding?
Delay address binding until execution time
Use relocation registers (called segment base registers ; based on the beginning of the
program or page)
Binds a unit of code every time it re-enters main memory
Sets the base register
All addresses have the value from the base register added on before being used
13. What does Swap Space mean?
Divided into pages
Assume the complete process is first loaded into the swap space
Pages go back and forth between swap space and main memory
When a program is loaded, put it into the swap space
Memory and page size is always a power of 2
14. What is Page Fault?
When a process tries to access an address whose page is not currently in memory
The process must be suspended; it leaves the processor and the ready list, and its status is
now "waiting for main memory"
15.List out the Paging Operation.
Fetch policy
o Demand fetching
o Anticipatory fetching
Placement Policy
Replacement Policy
16. List out the Page Replacement Algorithms.
RAND (Random)
MIN (minimum) or OPT (optimal)
FIFO (First In, First Out)
LRU (Least Recently Used)
NRU (Not Recently Used)
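FIFO is the easiest of these to sketch: the oldest resident page is always the victim. The reference string and frame count below are made up for illustration.

#include <stdio.h>

#define FRAMES 3
#define NREFS  8

int main(void) {
    int refs[NREFS] = { 7, 0, 1, 2, 0, 3, 0, 4 };
    int frame[FRAMES] = { -1, -1, -1 };
    int next = 0, faults = 0;          /* next: slot holding the oldest page */

    for (int i = 0; i < NREFS; i++) {
        int hit = 0;
        for (int f = 0; f < FRAMES; f++)
            if (frame[f] == refs[i]) hit = 1;
        if (!hit) {
            frame[next] = refs[i];     /* evict the oldest page */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);   /* 7 for this string */
    return 0;
}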
17. What is Pool Method?
Once a page has been selected for replacement/removal, it joins a pool of pages waiting
for removal
If referenced while in the pool, it is immediately reactivated and removed from the pool
Dirty pages in the pool are written to disk whenever an I/O device is available.
If a page has an up-to-date copy on disk, it is called a clean page.
The "page cleaning process" is the process that writes the dirty pages to disk.
When the pool is full and a page is needed, remove any clean page from the pool and
reallocate it to another process
Only have to stop the running process and wait for a page out if there are no clean pages
Page faults take twice as long if we have to write dirty pages to disk
Often save a number of pages and write them out together in order of disk address
Used in Windows NT and Windows 2000.
18. What are Shared Pages?
Pages can be marked as READ-ONLY, READ-WRITE, etc. according to what segment they
are in
Processes can share pages that are marked as READ-ONLY
For example, the pages containing the executable code will be the same for several users
who are all running the same program ("vi")
A page is kept if any process needs it
Other pages are marked as READ-WRITE or COPY-ON-WRITE, which means that a
separate copy will be made for any sharing process that changes the page
With COPY-ON-WRITE pages, if any process tries to change a page, then a copy is made,
that process's page table is updated, and then execution continues
19. What is Paged Segmentation?
Divide big segments into pages
Described in some more detail under Segmentation
20. What is Segmentation?
Divide a process's data into logical units, each called a segment
Better for error checking, because all the data in one segment is of the same type; e.g.
instructions cannot be changed (read-only)
E.g. an array (not in UNIX, but in VAX): checking bounds on an array[1 ... 100]
Make a segment large enough to hold exactly 100 entries
Then an array reference of array[101] gives a segmentation fault
Internal fragmentation: none, since each segment can be defined to be the proper size
(unless segment boundaries must fall on word addresses, say multiples of 256)
External fragmentation can be a serious problem
Table overhead and transportation costs are similar to paging
21. What is Segmentation Placement?
Placement: where to locate (place) a segment
best fit: smallest hole (contiguous space) the segment can fit in
first fit: first hole (contiguous space) the segment can fit in
worst fit: largest hole (contiguous space) the segment can fit in
- this idea is perhaps good, but simulation experiments show that after a while, it tends to
eliminate all large holes and only be able to satisfy requests for small holes
22. What is Replacement?
Replacement: which segment to choose for removal when there is inadequate space
available
Compaction is an alternative action (no disk operations and relatively fast)
Candidates: the segment of a terminated process, the segment of a blocked process, or the
segment of a ready process
Could design LRU, FIFO, NRU, etc. algorithms
Paging is the winning approach; it wastes little space with little management overhead
Paged segmentation is most common in current operating systems
First divide the process into relatively large segments (read/write/etc.)
Now divide the segments into pages
This wastes part of a page at the end of every segment
It acts like a paging algorithm
Part B
1. Explain the segmentation placement.
2. Discuss the paged segmentation.
3. Explain about the shared pages.
4. Write a note on the Pool method.
5. Explain about the Page Replacement Algorithms.
6. Discuss about the Effective Memory Access Time.
7. Write briefly about Memory Addressing.
8. Explain about the Swapping.
9. Write a note on Fixed Partitions.
10. Explain the Variable Partitions.

Part- A
1. Define Disks Scheduling
The ideal storage device is
o Fast
o Big (in capacity)
o Cheap
o Impossible
Disks are big and cheap, but slow.
2. List out the components of Disk Hardware
Show a real disk opened up and illustrate the components
Platter
Surface
Head
Track
Sector
Cylinder
Seek time
Rotational latency
Transfer time
3. What is meant by Error Handling?
Disk error rates have dropped in recent years. Moreover, bad-block forwarding is done by the
controller (or disk electronics), so this topic is no longer as important for the OS.
4. Define Track Caching
Often the disk/controller caches a track, since the seek penalty has already been paid. In fact
modern disks have megabyte caches that hold recently read blocks. Since modern disks cheat
and don't have the same number of blocks on each track, it is better for the disk electronics (and
not the OS or controller) to do the caching since it is the only part of the system to know the true
geometry.
5. What are RAM Disks?
Fairly clear: organize a region of memory as a set of blocks and pretend it is a disk.
A problem is that memory is volatile.
Often used during OS installation, before disk drivers are available (there are many types of
disk, but all memory looks the same, so only one RAM disk driver is needed).
6. What are Memory-Mapped Terminals?
Less dated. But it still discusses the character not graphics interface.
Today, the idea is to have the software write into video memory the bits to be put on the
screen and then the graphics controller converts these bits to analog signals for the monitor
(actually laptop displays and very modern monitors are digital).
But it is much more complicated than this. The graphics controllers can do a great deal of
video themselves (like filling).
This is a subject that would take many lectures to do well.
7. What is Terminal Hardware?
Quite dated. It is true that modern systems can communicate with a hardwired ASCII terminal, but
most don't. Serial ports are used, but they are normally connected to modems, and then some
protocol (SLIP, PPP) is used, not just a stream of ASCII characters.
8. List out the File structure.
Byte stream
Record stream
Varied and complicated beast
9. List out the File types
(Regular) files.
Directories: studied below.
Special files (for devices). Uses the naming power of files to unify many actions.
dir # prints on screen
dir > file # result put in a file
dir > /dev/tape # results written to tape
"Symbolic" links (similar to "shortcuts"): also studied below.
"Magic number": identifies an executable file.
There can be several different magic numbers for different types of executables.
unix: #!/usr/bin/perl
10. What is File access?
There are basically two possibilities, sequential access and random access (a.k.a. direct access).
Previously, files were declared to be sequential or random. Modern systems do not do this.
Instead all files are random and optimizations are applied when the system dynamically
determines that a file is (probably) being accessed sequentially.
With sequential access, the bytes (or records) are accessed in order (i.e., n-1, n, n+1, ...).
Sequential access is the most common and gives the highest performance. For some devices (e.g.
tapes) access "must" be sequential.
With random access, the bytes are accessed in any order. Thus each access must specify
which bytes are desired.
11. List out the File operations
Create: Essential if a system is to add files. Need not be a separate system call (can be merged
with open).
Delete: Essential if a system is to delete files.
Open: Not essential. An optimization in which the translation from file name to disk locations
is perform only once per file rather than once per access.
Close: Not essential. Free resources.
Read: Essential. Must specify the filename, file location, number of bytes, and a buffer into
which the data is to be placed. Several of these parameters can be set by other system calls, and in
many OSs they are.
Write: Essential if updates are to be supported. See read for parameters.
Seek: Not essential (could be in read/write). Specify the offset of the next (read or write)
access to this file.
Get attributes: Essential if attributes are to be used.
Set attributes: Essential if attributes are to be user settable.
Rename: Tanenbaum has strange words. Copy and delete is not acceptable for big files.
Moreover copy-delete is not atomic. Indeed link-delete is not atomic so even if link (discussed
below) is provided, renaming a file adds functionality.
12. What are Memory-Mapped Files?
Conceptually simple and elegant. Associate a segment with each file and then normal memory
operations take the place of I/O. Thus copyfile does not need fgetc/fputc (or read/write). Instead
it is just like memcpy:
while (*dest++ = *src++);
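On a POSIX system this idea is exposed through mmap. A hedged sketch that "copies" a file to standard output through plain memory reads rather than read() calls:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc != 2) return 1;
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) return 1;
    struct stat st;
    fstat(fd, &st);
    /* Map the whole file; the kernel pages it in on demand. */
    char *src = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (src == MAP_FAILED) return 1;
    write(STDOUT_FILENO, src, st.st_size);  /* the "copy": memory reads */
    munmap(src, st.st_size);
    close(fd);
    return 0;
}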
13. What are Path Names?
You can specify the location of a file in the file hierarchy by using either an absolute or a
relative path to the file.
An absolute path starts at the (or a, if we have a forest) root.
A relative path starts at the current (a.k.a. working) directory.
The special directories . and .. represent the current directory and the parent of the current
directory, respectively.
14. List out the Directory operations
Create: Produces an "empty" directory. Normally the directory created actually contains .
and .., so it is not really empty.
Delete: Requires the directory to be empty (i.e., to just contain . and ..). Commands are
normally written that will first empty the directory (except for . and ..) and then delete it. These
commands make use of file and directory delete system calls.
Opendir: Same as for files (creates a "handle").
Closedir: Same as for files.
Readdir: In the old days (of unix) one could read directories as files so there was no special
readdir (or opendir/closedir). It was believed that the uniform treatment would make
programming (or at least system understanding) easier as there was less to learn.
However, experience has taught that this was not a good idea since the structure of directories
then becomes exposed. Early unix had a simple structure (and there was only one). Modern
systems have more sophisticated structures and more importantly they are not fixed across
implementations.
Rename: As with files
Link: Add a second name for a file; discussed below.
Unlink: Remove a directory entry. This is how a file is deleted. But if there are many links
and just one is unlinked, the file remains.

15. What is involved in Implementing Files?


A disk cannot read or write a single word. Instead it can read or write a sector, which is often
512 bytes.
Disks are written in blocks whose size is a multiple of the sector size.
When we study I/O in the next chapter I will bring in some physically large (and hence old)
disks so that we can see what they look like and understand better sectors (and tracks, and
cylinders, and heads, etc.).
16. Linked allocation
The directory entry contains a pointer to the first block of the file.
Each block contains a pointer to the next.
Horrible for random access.
Not used.
17. What is FAT (file allocation table)?
Used by dos and windows (but not windows/NT).
Directory entry points to first block (i.e. specifies the block number).
A FAT is maintained in memory having one (word) entry for each disk block. The entry for
block N contains the block number of the next block in the same file as N.
This is linked allocation, but the links are stored separately. The time to access a random
block is still linear in the size of the file, but now all the references are to this one table,
which is in memory. So it is bad but not horrible for random access.
Size of table is one word per disk block. If one writes all blocks of size 4K and uses 4-byte
words, the table is one megabyte for each disk gigabyte. Large but not prohibitive.
If write blocks of size 512 bytes (the sector size of most disks) then the table is 8 megs per
gig, which might be prohibitive.
18. What are Inodes?
Used by unix.
Directory entry points to inode (index-node).
Inode points to first few data blocks, often called direct blocks.
The inode also points to an indirect block, which points to disk blocks.
The inode also points to a double indirect block, which points to indirect blocks.
For some implementations there are triple indirect blocks as well.
The inode is in memory for open files. So references to direct blocks take just one I/O.
For big files most references require two I/Os (indirect + data).
For huge files most references require three I/Os (double indirect, indirect, and data).
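The maximum file size follows from simple arithmetic. The sketch below assumes 1 KB blocks, 4-byte block numbers (so 256 pointers per indirect block), and 10 direct pointers; these parameters are illustrative and vary between implementations.

#include <stdio.h>

int main(void) {
    long block  = 1024;                /* bytes per block (assumed)     */
    long ptrs   = block / 4;           /* pointers per indirect block   */
    long direct = 10;                  /* direct pointers (assumed)     */
    long blocks = direct               /* direct blocks                 */
                + ptrs                 /* single indirect               */
                + ptrs * ptrs          /* double indirect               */
                + ptrs * ptrs * ptrs;  /* triple indirect               */
    printf("max file = %ld blocks = %ld KB (about %ld GB)\n",
           blocks, blocks, blocks / (1024 * 1024));
    return 0;
}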
19. What are shared files (links)?
"Shared" files is Tanenbaum's terminology.
More descriptive would be "multinamed files".
If a file exists, one can create another name for it (quite possibly in another directory).
This is often called creating a (or another) link to the file.
Unix has two flavors of links, hard links and symbolic links or symlinks.
Dos/windows has symlinks, but I don't believe it has hard links.
These links often cause confusion, but I really believe that the diagrams I created make it all
clear.
20. What are Symlinks?
Asymmetric multinamed files.
When a symlink is created another file is created, one that points to the original file.
21. What are Hard Links?
Symmetric multinamed files.
When a hard link is created, another name is created for the same file.
The two names have equal status.
It is not, I repeat NOT, true that one name is the "real name" and the other is "just a link".
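The two flavors map directly onto two POSIX calls. The file names below are illustrative:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Hard link: a second, equal-status name for the same inode. */
    if (link("data.txt", "alias.txt") != 0)
        perror("link");
    /* Symlink: a new little file that points at the original name. */
    if (symlink("data.txt", "shortcut.txt") != 0)
        perror("symlink");
    /* unlink removes one name; the inode survives while links remain. */
    if (unlink("alias.txt") != 0)
        perror("unlink");
    return 0;
}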
Part B
1. Explain about the Disk space management
2. Discuss about File System Performance
3. State about the Directories Implementation
4. Explain about Directories
5. Explain about the File structure
6. Explain about the Terminals
7. Explain about the Clocks
8. Discuss the Disk Arm Scheduling Algorithms
9. Explain Disk Scheduling


PART - A
1. What is UNIX?
UNIX is an operating system that nowadays runs on many computer systems.
An operating system is merely a computer program through which the user interacts with
the computer and its components and peripheral devices (processor, processes, files, disks,
terminals, printers, plotters, etc.).
2. What is meant by the kernel?
The kernel is very small and always resides in main memory. It consists of about
10,000 lines of C code and about 1,000 lines of assembly code. The small size of the kernel
makes it easy to understand, debug, or enhance.
The UNIX kernel is broadly divided into two parts:
Information Management
Process Management
3. List out the different types of files in UNIX.
• Ordinary Files
• Directory Files
• Special Files
• FIFO Files
4. Define Mounting a file system.
Mounting is usually done at the time of booting UNIX by the system administrator. If all users
with all their directories were to be supported all the time in one file system, it would be a
difficult task.
The mounting facility gives the system manager the flexibility to change or tune the file
system as per need. It also improves security.
5. What is the logical layout of the file system?
The logical layout of the file system is:
Part 1 - Boot Block
Part 2 - Super Block
Part 3 - Inode Block
Part 4 - Data Block
6. Define OPEN System Call
When a process wants to perform any operation on a file, it has to open it first. The format of
this system call is as follows:
fd=open(pathname, mode, flag, permissions)
fd - file descriptor
pathname - pathname of the file
mode - whether the file is to be opened in read or write mode
flag - indicator
permissions - access rights for reading or writing the file
7. What is meant by lseek?
lseek is also called random seek. The read and write system calls allow sequential reading
or writing of bytes with respect to the offset for the file maintained in the appropriate file
table entry; the system call for random seek allows this offset to be changed. The syntax of
this call is:
position = lseek(fd, offset, reference)
fd - file descriptor
offset - the new relative byte number (RBN)
reference - an indicator of how the offset is to be interpreted
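A hedged sketch of random access with lseek. Note that the POSIX prototype of open is open(pathname, flags, mode), slightly different from the generic four-argument form shown earlier; the file name is illustrative.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[16];
    int fd = open("data.txt", O_RDONLY);     /* fd: file descriptor */
    if (fd < 0) { perror("open"); return 1; }
    /* reference = SEEK_SET: the offset is a byte number from the start. */
    off_t pos = lseek(fd, 100, SEEK_SET);
    ssize_t n = read(fd, buf, sizeof buf);   /* reads bytes 100..115 */
    printf("at offset %ld, read %zd bytes\n", (long)pos, n);
    close(fd);
    return 0;
}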
8. List out the data structures maintained by UNIX.
The data structures maintained by UNIX are:
• Process Table (PT)
• u- area
• Per Process Region Table (Pregion)
• Region Table(RT)
• Page Map Tables (PMT)
• Kernel Stack(KS)
9. What is meant by swapping in UNIX?
A swap device is a part of the disk. Only the kernel can read data from the swap device or
write it back. The kernel allocates one block at a time to ordinary files, but in the case of the
swap device this allocation is done contiguously, to achieve higher I/O speed while swapping
processes in or out.
10. Define Demand Paging.
In demand paging, the process image is divided into equal-sized pages and physical
memory is divided into page frames of the same size.
The process image resides on the disk in the executable file. The blocks allocated to this
file by the kernel need not be contiguous. When the process starts executing, depending
upon the free physical page frames, a number of pages are loaded and execution begins;
the remaining pages are brought in on demand.
PART- B
1. What happens when
• A file is opened?
• A file with three links is deleted?
• A file requests for additional blocks?
• A shell program is invoked?
2. Describe the salient features of the file system of UNIX.
3. How does UNIX provide file protection?
4. Explain the merits and demerits of file protection.
5. Explain MOUNT / UNMOUNT in UNIX. What is its purpose?
