
BC0042 Operating Systems Assignment 2

Q1. What are the jobs of the CPU scheduler? Explain any two scheduling algorithms. Ans.


CPU Scheduler

Whenever the CPU becomes idle, it is the job of the CPU scheduler (a.k.a. the short-term scheduler) to select another process from the ready queue to run next. The storage structure for the ready queue and the algorithm used to select the next process are not necessarily a FIFO queue. There are several alternatives to choose from, as well as numerous adjustable parameters for each algorithm, which is the basic subject of this entire unit.

Preemptive Scheduling

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state, such as for an I/O request or invocation of the wait( ) system call.
2. When a process switches from the running state to the ready state, for example in response to an interrupt.
3. When a process switches from the waiting state to the ready state, say at completion of I/O or a return from wait( ).
4. When a process terminates.

For conditions 1 and 4 there is no choice: a new process must be selected. For conditions 2 and 3 there is a choice: either continue running the current process, or select a different one.

If scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive, or cooperative. Under these conditions, once a process starts running it keeps running until it either voluntarily blocks or finishes. Otherwise the system is said to be preemptive. Windows used non-preemptive scheduling up to Windows 3.x, and started using preemptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X, and preemptive scheduling since then. Note that preemptive scheduling is only possible on hardware that supports a timer interrupt. It is to be noted that preemptive scheduling can cause problems when two processes share data, because one process may get interrupted in the middle of updating shared data structures.

Preemption can also be a problem if the kernel is busy implementing a system call (e.g. updating critical kernel data structures) when the preemption occurs. Most modern UNIXes deal with this problem by making the process wait until the system call has either completed or blocked before allowing the preemption. Unfortunately this solution is problematic for real-time systems, as real-time response can no longer be guaranteed. Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting the section. Needless to say, this should only be done in rare situations, and only on very short pieces of code that will finish quickly (usually just a few machine instructions).

Dispatcher

The dispatcher is the module that gives control of the CPU to the process selected by the scheduler. This function involves switching context, switching to user mode, and jumping to the proper location in the newly loaded program. The dispatcher needs to be as fast as possible, as it is run on every context switch. The time consumed by the dispatcher is known as dispatch latency.

Scheduling Algorithms

The following subsections explain several common scheduling strategies, looking at only a single CPU burst each for a small number of processes. Obviously real systems have to deal with many more simultaneous processes executing their CPU-I/O burst cycles.

First-Come First-Served Scheduling (FCFS)

FCFS is very simple: just a FIFO queue, like customers waiting in line at the bank, the post office or a copying machine. Unfortunately, however, FCFS can yield some very long average wait times, particularly if the first process to get there takes a long time. For example, consider the following three processes:

Process  Burst Time
P1       24
P2       3
P3       3

In the first Gantt chart below, process P1 arrives first. The average waiting time for the three processes is (0 + 24 + 27) / 3 = 17.0 ms. In the second Gantt chart below, the same three processes have an average wait time of (0 + 3 + 6) / 3 = 3.0 ms. The total run time for the three bursts is the same, but in the second case two of the three finish much more quickly, and the other process is only delayed by a short amount.

FCFS can also block the system in a busy dynamic system in another way, known as the convoy effect. When one CPU-intensive process blocks the CPU, a number of I/O-intensive processes can get backed up behind it, leaving the I/O devices idle. When the CPU hog finally relinquishes the CPU, the I/O processes pass through the CPU quickly, leaving the CPU idle while everyone queues up for I/O, and then the cycle repeats itself when the CPU-intensive process gets back to the ready queue.

Shortest-Job-First Scheduling (SJF)

The idea behind the SJF algorithm is to pick the quickest little job that needs to be done, get it out of the way first, and then pick the next smallest job to do. (Technically this algorithm picks a process based on the next shortest CPU burst, not the overall process time.) For example, the Gantt chart below is based upon the following CPU burst times (and the assumption that all jobs arrive at the same time):

Process  Burst Time
P1       6
P2       8
P3       7
P4       3

In the case above the average wait time is (0 + 3 + 9 + 16) / 4 = 7.0 ms (as opposed to 10.25 ms for FCFS for the same processes).

SJF can be proven optimal in the sense that it gives the minimum average waiting time, but it suffers from one important problem: how do you know how long the next CPU burst is going to be? For long-term batch jobs this can be done based upon the limits that users set for their jobs when they submit them, which encourages them to set low limits, but risks their having to re-submit the job if they set the limit too low. However, that does not work for short-term CPU scheduling on an interactive system. Another option would be to statistically measure the run-time characteristics of jobs, particularly if the same tasks are run repeatedly and predictably. But once again that really isn't a viable option for short-term CPU scheduling in the real world.

A more practical approach is to predict the length of the next burst, based on some historical measurement of recent burst times for this process. One simple, fast, and relatively accurate method is the exponential average, which can be defined as follows:

estimate[i + 1] = alpha * burst[i] + (1.0 - alpha) * estimate[i]

In this scheme the previous estimate contains the history of all previous times, and alpha serves as a weighting factor for the relative importance of recent data versus past history. If alpha is 1.0, then past history is ignored, and we assume the next burst will be the same length as the last burst. If alpha is 0.0, then all measured burst times are ignored, and we just assume a constant burst time. Most commonly alpha is set at 0.5, as illustrated in Figure 5.3:
Fig. 5.3: Prediction of the length of the next CPU burst
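To make the formula concrete, the short C sketch below applies the exponential average with alpha = 0.5 to a sequence of measured burst times. The initial estimate of 10 and the measured burst values are made-up illustration data, not values taken from the figure.

#include <stdio.h>

int main(void) {
    double alpha = 0.5;               /* weighting factor from the formula */
    double estimate = 10.0;           /* initial guess for the first burst */
    double measured[] = { 6, 4, 6, 4, 13, 13, 13 };
    int n = sizeof measured / sizeof measured[0];

    for (int i = 0; i < n; i++) {
        printf("burst %d: predicted %.2f, actual %.0f\n",
               i + 1, estimate, measured[i]);
        /* estimate[i+1] = alpha * burst[i] + (1 - alpha) * estimate[i] */
        estimate = alpha * measured[i] + (1.0 - alpha) * estimate;
    }
    return 0;
}

Running it shows how the prediction drifts toward the recent burst lengths while never forgetting older history completely.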

SJF can be either preemptive or non-preemptive. Preemption occurs when a new process arrives in the ready queue that has a predicted burst time shorter than the time remaining in the process whose burst is currently on the CPU. Preemptive SJF is sometimes referred to as shortest remaining time first scheduling. For example, the following Gantt chart is based upon the following data:
Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5

The average wait time in this case is ((5 - 3) + (10 - 1) + (17 - 2) + 0) / 4 = 26 / 4 = 6.5 ms, where the final 0 is P2, which never waits. (This compares with 7.75 ms for non-preemptive SJF or 8.75 ms for FCFS.)
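The schedule above can be checked with a short simulation. The sketch below is a minimal shortest-remaining-time-first loop over the four example processes; it is written only to reproduce the wait times quoted here, not as a real scheduler.

#include <stdio.h>

int main(void) {
    /* arrival and burst times from the table above */
    int arrival[] = { 0, 1, 2, 3 };
    int burst[]   = { 8, 4, 9, 5 };
    int remaining[4], finish[4];
    int n = 4, done = 0, t = 0;

    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {
        int cur = -1;
        /* pick the arrived, unfinished process with the shortest remaining time */
        for (int i = 0; i < n; i++)
            if (arrival[i] <= t && remaining[i] > 0 &&
                (cur < 0 || remaining[i] < remaining[cur]))
                cur = i;
        if (cur < 0) { t++; continue; }   /* CPU idle: nothing has arrived yet */
        remaining[cur]--;                 /* run the chosen process for 1 ms */
        t++;
        if (remaining[cur] == 0) {        /* process completes at time t */
            finish[cur] = t;
            done++;
        }
    }

    double total = 0;
    for (int i = 0; i < n; i++) {
        int wait = finish[i] - arrival[i] - burst[i];   /* turnaround minus burst */
        printf("P%d waits %d ms\n", i + 1, wait);
        total += wait;
    }
    printf("average wait = %.1f ms\n", total / n);      /* prints 6.5 */
    return 0;
}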

Q2. What do you mean by Deadlock? How can deadlock be prevented? Ans.
Introduction

Recall that one definition of an operating system is a resource allocator. There are many resources that can be allocated to only one process at a time, and we have seen several operating system features that allow this, such as mutexes, semaphores or file locks. Sometimes a process has to reserve more than one resource. For example, a process which copies files from one tape to another generally requires two tape drives. A process which deals with databases may need to lock multiple records in a database.

A deadlock is a situation in which two computer programs sharing the same resource are effectively preventing each other from accessing the resource, resulting in both programs ceasing to function. The earliest computer operating systems ran only one program at a time. All of the resources of the system were available to this one program. Later, operating systems ran multiple programs at once, interleaving them. Programs were required to specify in advance what resources they needed so that they could avoid conflicts with other programs running at the same time. Eventually some operating systems offered dynamic allocation of resources: programs could request further allocations of resources after they had begun running. This led to the problem of deadlock.

Deadlock Prevention

The difference between deadlock avoidance and deadlock prevention is a little subtle. Deadlock avoidance refers to a strategy where, whenever a resource is requested, it is only granted if it cannot result in deadlock. Deadlock prevention strategies involve changing the rules so that processes will not make requests that could result in deadlock.

Here is a simple example of such a strategy. Suppose every possible resource is numbered (easy enough in theory, but often hard in practice), and processes must make their requests in order; that is, they cannot request a resource with a number lower than any of the resources that they have been granted so far. Deadlock cannot occur in this situation.

As an example, consider the dining philosophers problem. Suppose each chopstick is numbered, and philosophers always have to pick up the lower-numbered chopstick before the higher-numbered chopstick. Philosopher 5 picks up chopstick 4, philosopher 4 picks up chopstick 3, philosopher 3 picks up chopstick 2, and philosopher 2 picks up chopstick 1. Philosopher 1 is hungry and, without this rule, would pick up chopstick 5, thus causing deadlock. However, if the lower-number rule is in effect, he/she has to pick up chopstick 1 first, and it is already in use, so he/she is blocked. Philosopher 5 picks up chopstick 5, eats, and puts both chopsticks down, allowing philosopher 4 to eat. Eventually everyone gets to eat.

An alternative strategy is to require all processes to request all of their resources at once, and either all are granted or none are granted. Like the above strategy, this is conceptually easy but often hard to implement in practice, because it assumes that a process knows what resources it will need in advance.
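As a rough illustration of the resource-ordering strategy, the sketch below uses two numbered POSIX mutexes and a helper that always acquires the lower-numbered one first, whatever order the caller asks for. The lock numbering, the worker routine and the resource array are invented for this example.

#include <pthread.h>

/* Two numbered resources, protected by mutexes. */
static pthread_mutex_t resource[2] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

static void lock_pair(int a, int b) {
    int lo = a < b ? a : b;
    int hi = a < b ? b : a;
    pthread_mutex_lock(&resource[lo]);   /* always lower number first */
    pthread_mutex_lock(&resource[hi]);
}

static void unlock_pair(int a, int b) {
    pthread_mutex_unlock(&resource[a]);
    pthread_mutex_unlock(&resource[b]);
}

static void *worker(void *arg) {
    (void)arg;
    lock_pair(1, 0);      /* requested "out of order", granted in order */
    /* ... use both resources ... */
    unlock_pair(0, 1);
    return 0;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, 0, worker, 0);
    pthread_create(&t2, 0, worker, 0);
    pthread_join(t1, 0);
    pthread_join(t2, 0);
    return 0;
}

Because every thread takes resource 0 before resource 1, a circular wait can never form, so this particular deadlock is prevented by construction (compile with -pthread).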

Q3. Explain the algorithm of Peterson's method for mutual exclusion. Ans.
Mutual exclusion by Peterson's method: The algorithm uses two variables: flag, a boolean array, and turn, an integer. A true flag value indicates that the process wants to enter the critical section. The variable turn holds the id of the process whose turn it is. Entrance to the critical section is granted for process P0 if P1 does not want to enter its critical section or if P1 has given priority to P0 by setting turn to 0.

flag[0] = false;
flag[1] = false;
turn = 0;

/* Process 0 */
while (true) {
    flag[0] = true;
    turn = 1;
    while (flag[1] && turn == 1)
        /* no operation */;
    /* critical section */;
    flag[0] = false;
    /* remainder */;
}

/* Process 1 */
while (true) {
    flag[1] = true;
    turn = 0;
    while (flag[0] && turn == 0)
        /* no operation */;
    /* critical section */;
    flag[1] = false;
    /* remainder */;
}
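The listing above is the textbook form. As a runnable sketch, and not part of the original presentation, the version below expresses the same two-process protocol using C11 sequentially consistent atomics and POSIX threads, since on modern hardware the plain loads and stores of the textbook version could be reordered. The shared counter and the iteration count are just demonstration values.

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

static atomic_bool flag[2];      /* "I want to enter" for each process */
static atomic_int turn;          /* whose turn it is to yield */
static long counter;             /* protected by the critical section */

static void *worker(void *arg) {
    int me = (int)(long)arg, other = 1 - me;
    for (int i = 0; i < 100000; i++) {
        atomic_store(&flag[me], true);
        atomic_store(&turn, other);
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                    /* busy-wait until it is safe to enter */
        counter++;               /* critical section */
        atomic_store(&flag[me], false);
    }
    return 0;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, 0, worker, (void *)0L);
    pthread_create(&t1, 0, worker, (void *)1L);
    pthread_join(t0, 0);
    pthread_join(t1, 0);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}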

Q4. Explain how the block size affects the I/O operation to read a file. Ans.

Figure 1 shows the general I/O structure associated with many medium-scale processors. Note that the I/O controllers and main memory are connected to the main system bus. The cache memory (usually found on-chip with the CPU) has a direct connection to the processor, as well as to the system bus.

Figure 1: A general I/O structure for a medium-scale processor system

Note that the I/O devices shown here are not connected directly to the system bus; they interface with another device called an I/O controller. In simpler systems, the CPU may also serve as the I/O controller, but in systems where throughput and performance are important, I/O operations are generally handled outside the processor.

Until relatively recently, the I/O performance of a system was somewhat of an afterthought for system designers. The reduced cost of high-performance disks, permitting the proliferation of virtual memory systems, and the dramatic reduction in the cost of high-quality video display devices have meant that designers must pay much more attention to this aspect to ensure adequate performance in the overall system. Because of the different speeds and data requirements of I/O devices, different I/O strategies may be useful, depending on the type of I/O device which is connected to the computer.

Because the I/O devices are not synchronized with the CPU, some information must be exchanged between the CPU and the device to ensure that the data is received reliably. This interaction between the CPU and an I/O device is usually referred to as handshaking. For a complete handshake, four events are important:

1. The device providing the data (the talker) must indicate that valid data is now available.
2. The device accepting the data (the listener) must indicate that it has accepted the data. This signal informs the talker that it need not maintain this data word on the data bus any longer.
3. The talker indicates that the data on the bus is no longer valid, and removes the data from the bus. The talker may then set up new data on the data bus.
4. The listener indicates that it is not now accepting any data on the data bus. The listener may use data previously accepted during this time, while it is waiting for more data to become valid on the bus.

Note that the talker and the listener each supply two signals. The talker supplies a signal (say, data valid, or DAV) at step (1). It supplies another signal (say, data not valid, or not-DAV) at step (3). Both these signals can be coded as a single binary value (DAV) which takes the value 1 at step (1) and 0 at step (3). The listener supplies a signal (say, data accepted, or DAC) at step (2). It supplies a signal (say, data not now accepted, or not-DAC) at step (4). It, too, can be coded as a single binary variable, DAC. Because only two binary variables are required, the handshaking information can be communicated over two wires, and the form of handshaking described above is called a two-wire handshake. Other forms of handshaking are used in more complex situations; for example, where there may be more than one controller on the bus, or where the communication is among several devices. Figure 2 shows a timing diagram for the signals DAV and DAC which identifies the timing of the four events described previously.

Figure 2: Timing diagram for the two-wire handshake

Either the CPU or the I/O device can act as the talker or the listener. In fact, the CPU may act as a talker at one time and a listener at another. For example, when communicating with a terminal screen (an output device) the CPU acts as a talker, but when communicating with a terminal keyboard (an input device) the CPU acts as a listener.
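The four-step protocol can be mimicked in a few lines of code. The sketch below is only a toy model of the two-wire handshake described above; the bus variable, the data words and the print-out are invented for illustration, since real handshaking happens in hardware signals rather than program variables.

#include <stdio.h>
#include <stdbool.h>

static bool DAV, DAC;   /* the two handshake wires */
static int bus;         /* the shared data bus */

int main(void) {
    int words[] = { 0x41, 0x42, 0x43 };
    for (int i = 0; i < 3; i++) {
        bus = words[i]; DAV = true;       /* step 1: talker says data valid */
        int received = bus; DAC = true;   /* step 2: listener accepts data  */
        DAV = false;                      /* step 3: talker drops data valid */
        DAC = false;                      /* step 4: listener ready again    */
        printf("transferred 0x%02X (DAV=%d DAC=%d)\n", received, DAV, DAC);
    }
    return 0;
}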

Q5. Explain programmed I/O and interrupt I/O. How do they differ? Ans.
In programmed (program-controlled) I/O, the CPU itself polls the device status and transfers every word of data, which both ties the CPU up in a busy-wait loop and can delay service to the device. Interrupt-controlled I/O reduces the severity of these two problems by allowing the I/O device itself to initiate the device service routine in the processor. This is accomplished by having the I/O device generate an interrupt signal which is tested directly by the hardware of the CPU. When the interrupt input to the CPU is found to be active, the CPU itself initiates a subprogram call to somewhere in the memory of the processor; the particular address to which the processor branches on an interrupt depends on the interrupt facilities available in the processor.

The simplest type of interrupt facility is where the processor executes a subprogram branch to some specific address whenever an interrupt input is detected by the CPU. The return address (the location of the next instruction in the program that was interrupted) is saved by the processor as part of the interrupt process. If there are several devices which are capable of interrupting the processor, then with this simple interrupt scheme the interrupt handling routine must examine each device to determine which one caused the interrupt. Also, since only one interrupt can be handled at a time, there is usually a hardware priority encoder which allows the device with the highest priority to interrupt the processor, if several devices attempt to interrupt the processor simultaneously. In Figure 3, the handshake-out outputs would be connected to a priority encoder to implement this type of I/O; the other connections remain the same. (Some systems use a daisy chain priority system to determine which of the interrupting devices is serviced first. Daisy chain priority resolution is discussed later.)

In most modern processors, interrupt return points are saved on a stack in memory, in the same way as return addresses for subprogram calls are saved. In fact, an interrupt can often be thought of as a subprogram which is invoked by an external device. If a stack is used to save the return address for interrupts, it is then possible to allow one interrupt to interrupt the interrupt handling routine of another interrupt. In modern computer systems, there are often several priority levels of interrupts, each of which can be disabled, or masked. There is usually one type of interrupt input which cannot be disabled (a non-maskable interrupt) which has priority over all other interrupts. This interrupt input is used for warning the processor of potentially catastrophic events such as an imminent power failure, to allow the processor to shut down in an orderly way and to save as much information as possible.

Most modern computers make use of vectored interrupts. With vectored interrupts, it is the responsibility of the interrupting device to provide the address in main memory of the interrupt servicing routine for that device. This means, of course, that the I/O device itself must have sufficient intelligence to provide this address when requested by the CPU, and also to be initially programmed with this address information by the processor. Although somewhat more complex than the simple interrupt system described earlier, vectored interrupts provide such a significant advantage in interrupt handling speed and ease of implementation (i.e., a separate routine for each device) that this method is almost universally used on modern computer systems. Some processors have a number of special inputs for vectored interrupts (each acting much like the simple interrupt described earlier). Others require that the interrupting device itself provide the interrupt address as part of the process of interrupting the processor.
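To illustrate the idea behind vectored interrupts, the sketch below models a vector table in plain C: each device is assumed to supply a vector number, which simply indexes an array of handler functions, so no polling of devices is needed. The vector numbers, device names and handler bodies are all invented for the example.

#include <stdio.h>

typedef void (*isr_t)(void);     /* an interrupt service routine */

static void keyboard_isr(void) { puts("keyboard serviced"); }
static void disk_isr(void)     { puts("disk serviced"); }
static void timer_isr(void)    { puts("timer tick"); }

static isr_t vector_table[256];  /* one slot per possible vector number */

static void dispatch(unsigned char vector) {
    if (vector_table[vector])
        vector_table[vector]();  /* jump via the table, no device polling */
}

int main(void) {
    vector_table[0x21] = keyboard_isr;   /* device-supplied vector numbers */
    vector_table[0x2E] = disk_isr;
    vector_table[0x20] = timer_isr;

    dispatch(0x2E);                      /* simulate a disk interrupt */
    dispatch(0x21);                      /* simulate a keyboard interrupt */
    return 0;
}

The contrast with the simple (non-vectored) scheme is that a single shared handler would instead have to query every registered device to find the one that raised the interrupt.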

Q6. Explain briefly the architecture of the Windows NT operating system. Ans.


Architecture of the Windows NT operating system line

The Windows NT operating system family's architecture consists of two layers (user mode and kernel mode), with many different modules within both of these layers. User mode in the Windows NT line is made up of subsystems capable of passing I/O requests to the appropriate kernel mode software drivers by using the I/O manager. Two subsystems make up the user mode layer of Windows 2000: the Environment subsystem (which runs applications written for many different types of operating systems) and the Integral subsystem (which operates system-specific functions on behalf of the environment subsystem).

Kernel mode in Windows 2000 has full access to the hardware and system resources of the computer. The kernel mode stops user mode services and applications from accessing critical areas of the operating system that they should not have access to. The Executive interfaces with all the user mode subsystems. It deals with I/O, object management, security and process management. The hybrid kernel sits between the Hardware Abstraction Layer and the Executive to provide multiprocessor synchronization, thread and interrupt scheduling and dispatching, and trap handling and exception dispatching. The microkernel is also responsible for initializing device drivers at boot-up. Kernel mode drivers exist in three levels: highest-level drivers, intermediate drivers and low-level drivers. The Windows Driver Model (WDM) exists in the intermediate layer and was mainly designed to be binary- and source-compatible between Windows 98 and Windows 2000. The lowest-level drivers are either legacy Windows NT device drivers that control a device directly or can be a PnP hardware bus.

User mode

The user mode is made up of subsystems which can pass I/O requests to the appropriate kernel mode drivers via the I/O manager (which exists in kernel mode). Two subsystems make up the user mode layer of Windows 2000: the Environment subsystem and the Integral subsystem.

The environment subsystem was designed to run applications written for many different types of operating systems. None of the environment subsystems can directly access hardware; they must request access to memory resources through the Virtual Memory Manager that runs in kernel mode. Also, applications run at a lower priority than kernel mode processes. Currently, there are three main environment subsystems: the Win32 subsystem, an OS/2 subsystem and a POSIX subsystem.

The Win32 environment subsystem can run 32-bit Windows applications. It contains the console as well as text window support, shutdown and hard-error handling for all other environment subsystems. It also supports Virtual DOS Machines (VDMs), which allow MS-DOS and 16-bit Windows 3.x (Win16) applications to be run on Windows. There is a specific MS-DOS VDM which runs in its own address space and which emulates an Intel 80486 running MS-DOS 5. Win16 programs, however, run in a Win16 VDM. Each program, by default, runs in the same process, thus using the same address space, and the Win16 VDM gives each program its own thread to run on. However, Windows 2000 does allow users to run a Win16 program in a separate Win16 VDM, which allows the program to be preemptively multitasked, as Windows 2000 will pre-empt the whole VDM process, which only contains one running application. The OS/2 environment subsystem supports 16-bit character-based OS/2 applications and emulates OS/2 1.x, but not 2.x or later OS/2 applications. The POSIX environment subsystem supports applications that are strictly written to either the POSIX.1 standard or the related ISO/IEC standards.

The integral subsystem looks after operating system specific functions on behalf of the environment subsystem. It consists of a security subsystem, a workstation service and a server service. The security subsystem deals with security tokens, grants or denies access to user accounts based on resource permissions, handles logon requests and initiates logon authentication, and determines which system resources need to be audited by Windows 2000. It also looks after Active Directory. The workstation service is an API to the network redirector, which provides the computer with access to the network. The server service is an API that allows the computer to provide network services.

Kernel mode

Windows 2000 kernel mode has full access to the hardware and system resources of the computer and runs code in a protected memory area. It controls access to scheduling, thread prioritization, memory management and the interaction with hardware. The kernel mode stops user mode services and applications from accessing critical areas of the operating system that they should not have access to; user mode processes must ask the kernel mode to perform such operations on their behalf. Kernel mode consists of executive services, which are themselves made up of many modules that do specific tasks, kernel drivers, a microkernel and a Hardware Abstraction Layer, or HAL.
Executive

The Executive interfaces with all the user mode subsystems. It deals with I/O, object management, security and process management. It contains various components, including the I/O Manager, the Security Reference Monitor, the Object Manager, the IPC Manager, the Virtual Memory Manager (VMM), a PnP Manager and a Power Manager, as well as a Window Manager which works in conjunction with the Windows Graphics Device Interface (GDI). Each of these components exports a kernel-only support routine that allows other components to communicate with one another. Grouped together, the components can be called executive services. No executive component has access to the internal routines of any other executive component.

The object manager is a special executive subsystem that all other executive subsystems must pass through to gain access to Windows 2000 resources, essentially making it a resource management infrastructure service. The object manager is used to reduce the duplication of object resource management functionality in other executive subsystems, which could potentially lead to bugs and make development of Windows 2000 harder. To the object manager, each resource is an object, whether that resource is a physical resource (such as a file system or peripheral) or a logical resource (such as a file). Each object has a structure, or object type, that the object manager must know about.

When another executive subsystem requests the creation of an object, it sends that request to the object manager, which creates an empty object structure that the requesting executive subsystem then fills in. Object types define the object procedures and any data specific to the object. In this way, the object manager allows Windows 2000 to be an object-oriented operating system, as object types can be thought of as classes that define objects. Each instance of an object that is created stores its name, parameters that are passed to the object creation function, security attributes and a pointer to its object type. The object also contains an object close procedure and a reference count to tell the object manager how many other objects in the system reference that object, and thereby determines whether the object can be destroyed when a close request is sent to it. Every object exists in a hierarchical object namespace.
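As a very rough illustration of the object manager idea described above, and not the real NT data structures, the sketch below gives each object a type with a close procedure and a reference count; the object is destroyed only when the last reference is released. All names and fields are invented for the example.

#include <stdio.h>
#include <string.h>

typedef struct object object_t;

typedef struct {
    const char *type_name;
    void (*close)(object_t *);         /* type-specific close procedure */
} object_type_t;

struct object {
    char name[64];                     /* name in the object namespace */
    const object_type_t *type;         /* pointer to the object type */
    int ref_count;                     /* how many references exist */
};

static void file_close(object_t *o) { printf("closing object %s\n", o->name); }

static const object_type_t file_type = { "File", file_close };

static void ob_reference(object_t *o) { o->ref_count++; }

static void ob_dereference(object_t *o) {
    if (--o->ref_count == 0)           /* destroy only when nothing references it */
        o->type->close(o);
}

int main(void) {
    object_t f = { "", &file_type, 0 };
    strcpy(f.name, "\\Device\\Example");
    ob_reference(&f);                  /* two subsystems reference the object */
    ob_reference(&f);
    ob_dereference(&f);                /* still referenced, not closed */
    ob_dereference(&f);                /* last reference released, close runs */
    return 0;
}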
