
Misc Topics in Computer Networks

Question 1

Which one of the following is not a client server application?

A Internet chat

B Web browsing

C E-mail

D ping
GATE CS 2010 Misc Topics in Computer Networks

Question 1 Explanation:
Ping is not a client-server application. Ping is a network administration utility used to test the
reachability of a host on an Internet Protocol (IP) network. In ping, there is no server that provides a service.

Question 2

Match the following:

(P) SMTP (1) Application layer

(Q) BGP (2) Transport layer

(R) TCP (3) Data link layer

(S) PPP (4) Network layer

(5) Physical layer

A P2Q1R3S5

B P1Q4R2S3

C P1Q4R2S5

D P2Q4R1S3

Misc Topics in Computer Networks GATE-CS-2007



Question 2 Explanation:
See question 4 of http://www.geeksforgeeks.org/computer-networks-set-10/
Question 3

In the following pairs of OSI protocol layer/sub-layer and its functionality, the INCORRECT pair is

A Network layer and Routing

B Data Link Layer and Bit synchronization

C Transport layer and End-to-end process communication

D Medium Access Control sub-layer and Channel sharing

Misc Topics in Computer Networks GATE-CS-2014-(Set-3)



Question 3 Explanation:
1) Yes, Network layer does Routing

2) No, Bit synchronization is provided by the Physical Layer

3) Yes, Transport layer provides End-to-end process communication

4) Yes, Medium Access Control sub-layer of the Data Link Layer provides Channel sharing.

Question 4

Choose the best matching between Group 1 and Group 2.
Group-1               Group-2

P. Data link          1. Ensures reliable transport of data
                         over a physical point-to-point link

Q. Network layer      2. Encodes/decodes data for physical
                         transmission

R. Transport layer    3. Allows end-to-end communication
                         between two processes

                      4. Routes data from one network
                         node to the next

A P-1, Q-4, R-3

B P-2, Q-4, R-1

C P-2, Q-3, R-1


D P-1, Q-3, R-2

Misc Topics in Computer Networks GATE-CS-2004



Question 4 Explanation:
Data link layer is the second layer of the OSI model. It is responsible for data transfer between
nodes on the network and provides a point-to-point local delivery framework, so P matches with 1.
Network layer is the third layer of the OSI model. It is responsible for forwarding data packets
and routing through intermediate routers, so Q matches with 4. Transport layer is the fourth layer of the
OSI model. It is responsible for delivering data from process to process, so R matches with 3.
Thus, A is the correct option.

Question 5

Which of the following is NOT true with respect to a transparent bridge and a router?

A Both bridge and router selectively forward data packets

B A bridge uses IP addresses while a router uses MAC addresses

C A bridge builds up its routing table by inspecting incoming packets

D A router can connect between a LAN and a WAN

Misc Topics in Computer Networks GATE-CS-2004

Question 5 Explanation:
Statement B is reversed, so it is false: a transparent bridge works at the data link layer and uses MAC
addresses, while a router works at the network layer and uses IP addresses. The other statements are true.

Question 6

Host A sends a UDP datagram containing 8880 bytes of user data to host B over an Ethernet LAN.
Ethernet frames may carry data up to 1500 bytes (i.e. MTU = 1500 bytes). Size of UDP header is 8 bytes
and size of IP header is 20 bytes. There is no option field in the IP header. How many IP
fragments will be transmitted, and what will be the content of the offset field in the last fragment?

A 6 and 925

B 6 and 7400

C 7 and 1110

D 7 and 8880

Misc Topics in Computer Networks GATE-CS-2015 (Set 2)



Question 6 Explanation:
UDP data = 8880 bytes
UDP header = 8 bytes

IP Header = 20 bytes

Total size excluding IP header = 8880 + 8 = 8888 bytes.

Each fragment can carry up to 1500 - 20 = 1480 bytes of data, so

number of fragments = ceil(8888 / 1480) = 7

Refer to the Kurose book slides on IP (the offset field is always scaled by 8).

Offset of last fragment = (1480 * 6) / 8 = 1110

Question 7

Two hosts are connected via a packet switch with 10^7 bits per second links. Each link has a
propagation delay of 20 microseconds. The switch begins forwarding a packet 35 microseconds after it
receives the same. If 10000 bits of data are to be transmitted between the two hosts using a packet size
of 5000 bits, the time elapsed between the transmission of the first bit of data and the reception of the
last bit of the data in microseconds is _________.

A 1075

B 1575

C 2220

D 2200
Misc Topics in Computer Networks GATE-CS-2015 (Set 3)

Question 7 Explanation:
Since the network uses a switch, every packet goes through two links: one from source to switch, and
one from switch to destination. With 10000 bits of data and a packet size of 5000 bits, two packets are
sent. The sender host transmits the first packet to the switch; the transmission time is 5000/10^7 s,
which is 500 microseconds. After 500 microseconds, the second packet is transmitted. The first packet
reaches the destination in 500 + 35 + 20 + 20 + 500 = 1075 microseconds. While the first packet is
travelling to the destination, the second packet starts its journey after 500 microseconds, and the rest
of its time overlaps with the first packet. So the overall time is 1075 + 500 = 1575 microseconds.

Question 8

Which one of the following statements is FALSE?
A TCP guarantees a minimum communication rate

B TCP ensures in-order delivery

C TCP reacts to congestion by reducing sender window size

D TCP employs retransmission to compensate for packet loss

Misc Topics in Computer Networks GATE-IT-2004

Question 8 Explanation:
Statement A is false: TCP provides reliable, in-order delivery and retransmits lost packets, but it makes
no guarantee of a minimum communication rate.

Question 9

Which one of the following statements is FALSE?

A HTTP runs over TCP

B HTTP describes the structure of web pages

C HTTP allows information to be stored in a URL

D HTTP can be used to test the validity of a hypertext link

Misc Topics in Computer Networks GATE-IT-2004



Question 9 Explanation:
HTML, not HTTP, describes the structure of a web page. HTTP is the set of rules for transferring files
(text, graphic images, sound, video, and other multimedia files) on the World Wide Web.

Question 10

A serial transmission T1 uses 8 information bits, 2 start bits, 1 stop bit and 1 parity bit for each character.
A synchronous transmission T2 uses 3 eight-bit sync characters followed by 30 eight-bit information
characters. If the bit rate is 1200 bits/second in both cases, what are the transfer rates of T1 and T2?

A 100 characters/sec, 153 characters/sec

B 80 characters/sec, 136 characters/sec

C 100 characters/sec, 136 characters/sec

D 80 characters/sec, 153 characters/sec
Misc Topics in Computer Networks GATE-IT-2004

Question 10 Explanation:

Serial transmission:
Total number of bits per character = 8 + 2 + 1 + 1 = 12 bits. Bit rate = 1200 bits/sec.
Transfer rate = 1200 / 12 = 100 characters/sec.
Synchronous transmission:
Each block carries 3 + 30 = 33 characters, of which 30 are information. At 1200 bits/sec the line moves
1200 / 8 = 150 characters/sec, so transfer rate = 150 * (30/33) = 136 characters/sec.
Thus, option (C) is correct.

Question 11

In a sliding window ARQ scheme, the transmitter's window size is N and the receiver's window size is M.
The minimum number of distinct sequence numbers required to ensure correct operation of the ARQ
scheme is

A min (M, N)

B max (M, N)

C M+N

D MN

Misc Topics in Computer Networks GATE-IT-2004



Question 11 Explanation:
In a general sliding window ARQ scheme, the sending process sends a number of
frames without waiting for an ACK (acknowledgement) from the receiver. The sender
window size is N and the receiver window size is usually 1, which means the sender
can transmit N frames to its peer before requiring an ACK. The receiver keeps track of
the sequence number of the next frame it expects to receive and sends that number
with every ACK. In this question, however, the sender window size is N and the
receiver window size is M, so the receiver will accept up to M frames instead of 1.
For such a scheme to work correctly, a total of M + N distinct sequence numbers is
needed. This solution is contributed by Namita Singh.

Question 12

Which one of the following protocols is NOT used to resolve one form of address to another one?

A DNS

B ARP

C DHCP

D RARP
Misc Topics in Computer Networks GATE-CS-2016 (Set 1)

Question 12 Explanation:
DHCP is used to assign IP addresses dynamically. All the others are used to convert one form of address to another.

Question 13

Identify the correct sequence in which the following packets are transmitted on the network by a host
when a browser requests a webpage from a remote server, assuming that the host has just been
restarted.

A HTTP GET request, DNS query, TCP SYN

B DNS query, HTTP GET request, TCP SYN

C DNS query, TCP SYN, HTTP GET request

D TCP SYN, DNS query, HTTP GET request

Misc Topics in Computer Networks GATE-CS-2016 (Set 2)



Question 13 Explanation:
Step 1: When the browser requests a webpage, say www.geeksforgeeks.org, the host must first find the
IP address for that domain name. Since the host has just been restarted, its cache is empty, so the
client's computer sends a DNS query to one of its Internet service provider's DNS servers. Step 2: Once
the server's IP address is known, a TCP connection must be established for further communication. TCP
requests a connection by sending a TCP SYN message, which the server answers with a SYN-ACK, and
the client then sends an ACK back to the server (the 3-way handshake). Step 3: Once the connection has
been established, the HTTP protocol comes into the picture: the browser requests the webpage using the
GET method, sending an HTTP GET request. Hence, the correct sequence for the transmission of packets
is DNS query, TCP SYN, HTTP GET request. This explanation has been contributed by Namita Singh.

Question 14

Consider the following statements about the timeout value used in TCP. i. The timeout value is set to the
RTT (Round Trip Time) measured during TCP connection establishment for the entire duration of the
connection. ii. Appropriate RTT estimation algorithm is used to set the timeout value of a TCP connection.
iii. Timeout value is set to twice the propagation delay from the sender to the receiver. Which of the
following choices hold?

A (i) is false, but (ii) and (iii) are true

B (i) and (iii) are false, but (ii) is true

C (i) and (ii) are false, but (iii) is true

D (i), (ii) and (iii) are false


Transport Layer Misc Topics in Computer Networks Gate IT 2007

Question 14 Explanation:
Time-out timer in TCP: one can't use the static timer of the data link layer (DLL), which is a
hop-to-hop connection, since nobody knows how many hops lie on the path from sender to
receiver: TCP uses the IP service and the path may vary from time to time. So dynamic
timers are used in TCP. The time-out value should increase or decrease depending on traffic,
to avoid unnecessary congestion due to retransmissions. There are three algorithms for this
purpose: 1. the basic algorithm, 2. Jacobson's algorithm, 3. Karn's modification. Solution:
1. The timeout value is set to the RTT (Round Trip Time) measured during TCP connection
establishment for the entire duration of the connection. - FALSE. The timeout value
can't be fixed for the entire duration, as that would make the timer static; we
need a dynamic timer for the timeout.
2. An appropriate RTT estimation algorithm is used to set the timeout value of a TCP
connection. - TRUE. All three algorithms above are appropriate RTT estimation
algorithms used to set the timeout value dynamically.
3. The timeout value is set to twice the propagation delay from the sender to the receiver. -
FALSE. The timeout value is set to twice the propagation delay in the data link
layer, where the hop-to-hop distance is known, not in TCP.
This solution is contributed by Sandeep Pandey.

Question 15

A firewall is to be configured to allow hosts in a private network to freely open TCP connections and send
packets on open connections. However, it will only allow external hosts to send packets on existing open
TCP connections or connections that are being opened (by internal hosts) but not allow them to open TCP
connections to hosts in the private network. To achieve this the minimum capability of the firewall should
be that of

A A combinational circuit

B A finite automaton

C A pushdown automaton with one stack

D A pushdown automaton with two stacks


Misc Topics in Computer Networks Network Security Gate IT 2007

Question 15 Explanation:
A) A combinational circuit => Not possible: the firewall needs memory, and a combinational circuit has
none.
B) A finite automaton => Not sufficient: there is no upper limit on the number of open TCP connections,
so unbounded memory is needed.
C) A pushdown automaton with one stack => The stack is unbounded, but suppose we have two
connections whose details have been pushed on the stack: we cannot access the details of the
connection that was pushed first without popping the other off. So no.
D) A pushdown automaton with two stacks => This is equivalent to a Turing machine, which can do
everything a normal computer can do. So yes: the firewall can be built from a Turing machine.

Question 16

How many bytes of data can be sent in 15 seconds over a serial link with baud rate of 9600 in
asynchronous mode with odd parity and two stop bits in the frame?

A 10,000 bytes

B 12,000 bytes

C 15,000 bytes

D 27,000 bytes

Data Link Layer Misc Topics in Computer Networks Gate IT 2008



Question 16 Explanation:
1 sec  -------> 9600 bits
15 sec -------> 9600 * 15 bits
Each frame carries 1 start bit + 8 data bits + 1 parity bit + 2 stop bits
=> 12 bits per frame, one data byte each
=> 9600 * 15 / 12 = 12000 bytes

Question 17

Provide the best matching between the entries in the two columns given in the table below:
A I-a, II-d, III-c, IV-b

B I-b, II-d, III-c, IV-a

C I-a, II-c, III-d, IV-b

D I-b, II-c, III-d, IV-a

Misc Topics in Computer Networks Gate IT 2008



Question 17 Explanation:
DNS - Allows caching of entries at local server.

Question 18

Which protocol will be used to automate the IP configuration mechanism which includes IP address,
subnet mask, default gateway, and DNS information?

A SMTP

B DHCP

C ARP

D TCP/IP
Misc Topics in Computer Networks GATE 2017 Mock

Question 18 Explanation:
DHCP (Dynamic Host Configuration Protocol) is used to provide IP information to the hosts on the
network along with the information regarding IP address, subnet mask, default gateway and DNS
information.

Question 19

In the Go-Back-3 flow control protocol, every 6th transmitted packet is lost. If we have to send 11
packets, how many transmissions will be needed?

A 10

B 17

C 12

D 9

Misc Topics in Computer Networks GATE 2017 Mock


Question 19 Explanation:
In Go-Back-N, if we don't receive an acknowledgement for a packet, the whole window starting at that
packet is sent again; as each packet is acknowledged, the window slides. Here the window size is 3.
Initially the window contains 1, 2, 3; as the acknowledgement for 1 is received, the window slides and 4
is transmitted. When the 4th packet's acknowledgement is received, the 7th packet is sent, and when the
5th packet's acknowledgement is received, the 8th packet is sent. The acknowledgement for 6 never
arrives (the 6th transmission is lost), so the window of 6, i.e. packets 6, 7, 8, is retransmitted. The 6th
transmission from there is packet 9, which is also lost, so 9, 10, 11 are retransmitted. The serial
transmissions of packets are: 1 2 3 4 5 6 7 8 6 7 8 9 10 11 9 10 11. Hence a total of 17 transmissions
is needed. (Packets 6 and 9 were the failed transmissions, and their windows were resent.)

Question 20

What will be the total minimum bandwidth of the channel required for 7 channels of 400 kHz bandwidth
multiplexed together with each guard band of 20 kHz?

A 2800 kHz

B 2600 kHz

C 3600 kHz

D 2920 kHz
Misc Topics in Computer Networks GATE 2017 Mock

Question 20 Explanation:
(6 guard bands: 20 * 6 = 120 kHz) + (7 channels: 400 * 7 = 2800 kHz)
= 120 + 2800 = 2920 kHz

Process Management

Question 1

Consider the following code fragment:
if (fork() == 0)
{ a = a + 5; printf("%d,%d\n", a, &a); }
else { a = a - 5; printf("%d, %d\n", a, &a); }
Let u, v be the values printed by the parent process, and x, y be the values printed by the child process.
Which one of the following is TRUE?
A u = x + 10 and v = y

B u = x + 10 and v != y

C u + 10 = x and v = y

D u + 10 = x and v != y

Process Management

Question 1 Explanation:
fork() returns 0 in the child process and the process ID of the child in the parent process. In the child,
x = a + 5. In the parent, u = a - 5; therefore x = u + 10. The physical addresses of a in parent and child
must be different, but our program accesses virtual addresses (assuming we are running on an OS that
uses virtual memory). The child process gets an exact copy of the parent process, and the virtual address
of a doesn't change in the child. Therefore, we get the same address in both parent and child.

Question 2

The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the
old value of x in y without allowing any intervening access to the memory location x. Consider the
following implementation of P and V functions on a binary semaphore.
void P (binary_semaphore *s) {
    unsigned y;
    unsigned *x = &(s->value);
    do {
        fetch-and-set x, y;
    } while (y);
}

void V (binary_semaphore *s) {
    s->value = 0;
}

Which one of the following is true?


A The implementation may not work if context switching is disabled in P.

B Instead of using fetch-and-set, a pair of normal load/store can be used

C The implementation of V is wrong

D The code does not implement a binary semaphore


Process Management

Question 2 Explanation:
Let us look at the operation P(). It repeatedly fetches the old value of s->value into y and sets s->value
to 1. The while loop of a process will continue forever if no other process executes V() and sets the value
back to 0. If context switching is disabled in P, the while loop will run forever, as no other process will be
able to execute V().

Question 3

Three concurrent processes X, Y, and Z execute three different code segments that access and update
certain shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c;
process Y executes the P operation on semaphores b, c and d; process Z executes the P operation on
semaphores c, d, and a before entering the respective code segments. After completing the execution of
its code segment, each process invokes the V operation (i.e., signal) on its three semaphores. All
semaphores are binary semaphores initialized to one. Which one of the following represents a
deadlock-free order of invoking the P operations by the processes? (GATE CS 2013)
A X: P(a)P(b)P(c) Y:P(b)P(c)P(d) Z:P(c)P(d)P(a)

B X: P(b)P(a)P(c) Y:P(b)P(c)P(d) Z:P(a)P(c)P(d)

C X: P(b)P(a)P(c) Y:P(c)P(b)P(d) Z:P(a)P(c)P(d)

D X: P(a)P(b)P(c) Y:P(c)P(b)P(d) Z:P(c)P(d)P(a)

Process Management Deadlock



Question 3 Explanation:
Option A can cause deadlock: imagine a situation where process X has acquired a, process Y has
acquired b, and process Z has acquired c and d. There is a circular wait now. Option C can also cause
deadlock: imagine X has acquired b, Y has acquired c, and Z has acquired a. There is a circular wait now.
Option D can also cause deadlock: imagine X has acquired a and b, and Y has acquired c; X and Y are
circularly waiting for each other.
See http://www.eee.metu.edu.tr/~halici/courses/442/Ch5%20Deadlocks.pdf. Considering option A, all 3
processes are concurrent, so X will get semaphore a, Y will get b and Z will get c; now X is blocked on b,
Y is blocked on c, and Z gets d but is blocked on a. Thus it leads to deadlock. Similarly one can figure out
that for B) the completion order is Z, X, then Y. This question is a duplicate
of http://geeksquiz.com/gate-gate-cs-2013-question-16/

Question 4

A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows.
Each of the processes W and X reads x from memory, increments by one, stores it to memory, and then
terminates. Each of the processes Y and Z reads x from memory, decrements by two, stores it to memory,
and then terminates. Each process before reading x invokes the P operation (i.e., wait) on a counting
semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x to memory.
Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete
execution? (GATE CS 2013)

A -2

B -1

C 1

D 2
Process Management

Question 4 Explanation:
Processes can run in many ways; below is one of the cases in which x attains its maximum value.
Semaphore S is initialized to 2.

Process W executes P(S) (S=1), reads x=0 and computes x+1=1, but does not yet store it.

Then process Y executes P(S) (S=0), decrements x to -2, stores it, and

signals the semaphore (S=1).

Now process Z executes P(S) (S=0), makes x=-4, and signals the semaphore (S=1).

Now process W stores its stale value, x=1, and signals (S=2).

Then process X runs completely, making x=2.

So the correct option is D.

Question 5

A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows.
Each of the processes W and X reads x from memory, increments by one, stores it to memory, and then
terminates. Each of the processes Y and Z reads x from memory, decrements by two, stores it to memory,
and then terminates. Each process before reading x invokes the P operation (i.e., wait) on a counting
semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x to memory.
Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete
execution? (GATE CS 2013)
A -2

B -1

C 1

D 2
Process Management

Question 5 Explanation:
See http://geeksquiz.com/operating-systems-process-management-question-11/ for explanation.

Question 6

A certain computation generates two arrays a and b such that a[i] = f(i) for 0 <= i < n and b[i] = g(a[i]) for 0 <= i
< n. Suppose this computation is decomposed into two concurrent processes X and Y such that X
computes the array a and Y computes the array b. The processes employ two binary semaphores R and
S, both initialized to zero. The array a is shared by the two processes. The structures of the processes are
shown below.
Process X: Process Y:
private i; private i;
for (i=0; i < n; i++) { for (i=0; i < n; i++) {
a[i] = f(i); EntryY(R, S);
ExitX(R, S); b[i]=g(a[i]);
} }
Which one of the following represents the CORRECT implementations of ExitX and EntryY?
(A)
ExitX(R, S) {
  P(R);
  V(S);
}

EntryY (R, S) {
  P(S);
  V(R);
}

(B)
ExitX(R, S) {
  V(R);
  V(S);
}

EntryY(R, S) {
  P(R);
  P(S);
}

(C)
ExitX(R, S) {
  P(S);
  V(R);
}

EntryY(R, S) {
  V(S);
  P(R);
}

(D)
ExitX(R, S) {
  V(R);
  P(S);
}

EntryY(R, S) {
  V(S);
  P(R);
}

A A

B B

C C

D D

Process Management

Question 6 Explanation:
The requirement is that no deadlock occurs and that the binary semaphores are never
assigned a value greater than one.

A leads to deadlock: X begins with P(R) while R is 0.

B can increase the value of the semaphores between 1 and n.

D may increase the value of semaphores R and S to 2 in some cases.

Hence C is the correct option.

Question 7

Three concurrent processes X, Y, and Z execute three different code segments that access and update
certain shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c;
process Y executes the P operation on semaphores b, c and d; process Z executes the P operation on
semaphores c, d, and a before entering the respective code segments. After completing the execution of
its code segment, each process invokes the V operation (i.e., signal) on its three semaphores. All
semaphores are binary semaphores initialized to one. Which one of the following represents a deadlock-
free order of invoking the P operations by the processes?

A X: P(a)P(b)P(c) Y: P(b)P(c)P(d) Z: P(c)P(d)P(a)

B X: P(b)P(a)P(c) Y: P(b)P(c)P(d) Z: P(a)P(c)P(d)

C X: P(b)P(a)P(c) Y: P(c)P(b)P(d) Z: P(a)P(c)P(d)

D X: P(a)P(b)P(c) Y: P(c)P(b)P(d) Z: P(c)P(d)P(a)

Process Management GATE CS 2013



Question 7 Explanation:
Option A can cause deadlock. Imagine a situation process X has acquired a, process Y has acquired b
and process Z has acquired c and d. There is circular wait now. Option C can also cause deadlock.
Imagine a situation process X has acquired b, process Y has acquired c and process Z has acquired a.
There is circular wait now. Option D can also cause deadlock. Imagine a situation process X has acquired
a and b, process Y has acquired c. X and Y circularly waiting for each other.
See http://www.eee.metu.edu.tr/~halici/courses/442/Ch5%20Deadlocks.pdf Consider option A) for
example here all 3 processes are concurrent so X will get semaphore a, Y will get b and Z will get c, now
X is blocked for b, Y is blocked for c, Z gets d and blocked for a. Thus it will lead to deadlock. Similarly
one can figure out that for B) completion order is Z,X then Y. This question is duplicate
of http://geeksquiz.com/operating-systems-process-management-question-8/

Question 8

A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y, Z as follows.
Each of the processes W and X reads x from memory, increments by one, stores it to memory, and then
terminates. Each of the processes Y and Z reads x from memory, decrements by two, stores it to memory,
and then terminates. Each process before reading x invokes the P operation (i.e., wait) on a counting
semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x to memory.
Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete
execution?

A -2

B -1

C 1

D 2
Process Management GATE CS 2013

Question 8 Explanation:
Background Explanation: A critical section in which the process may be changing common variables,
updating table, writing a file and perform another function. The important problem is that if one process is
executing in its critical section, no other process is to be allowed to execute in its critical section. Each
process much request permission to enter its critical section. A semaphore is a tool for synchronization
and it is used to remove the critical section problem which is that no two processes can run
simultaneously together so to remove this two signal operations are used named as wait and signal which
is used to remove the mutual exclusion of the critical section. as an unsigned one of the most important
synchronization primitives, because you can build many other Decrementing the semaphore is called
acquiring or locking it, incrementing is called releasing or unlocking. Solution : Since initial value of
semaphore is 2, two processes can enter critical section at a time- this is bad and we can see why. Say, X
and Y be the processes.X increments x by 1 and Z decrements x by 2. Now, Z stores back and after this
X stores back. So, final value of x is 1 and not -1 and two Signal operations make the semaphore value 2
again. So, now W and Z can also execute like this and the value of x can be 2 which is the maximum
possible in any order of execution of the processes. (If the semaphore is initialized to 1, processed would
execute correctly and we get the final value of x as -2.) Option (D) is the correct answer. Another
Solution: Processes can run in many ways, below is one of the cases in which x attains max value
Semaphore S is initialized to 2 Process W executes S=1, x=1 but it doesn't update the x variable. Then
process Y executes S=0, it decrements x, now x= -2 and signal semaphore S=1 Now process Z executes
s=0, x=-4, signal semaphore S=1 Now process W updates x=1, S=2 Then process X executes X=2 So
correct option is D Another Solution: S is a counting semaphore initialized to 2 i.e., Two process can go
inside a critical section protected by S. W, X read the variable, increment by 1 and write it back. Y, Z can
read the variable, decrement by 2 and write it back. Whenever Y or Z runs the count gets decreased by 2.
So, to have the maximum sum, we should copy the variable into one of the processes which increases the
count, and at the same time the decrementing processed should run parallel, so that whatever they write
back into memory can be overridden by incrementing process. So, in effect decrement would never
happen.

Related links: http://quiz.geeksforgeeks.org/process-synchronization-set-1/ and
http://geeksquiz.com/operating-systems-process-management-question-11/ for explanation. This
solution is contributed by Nitika Bansal.

Question 9

A certain computation generates two arrays a and b such that a[i] = f(i) for 0 <= i < n and b[i] = g(a[i]) for 0 <= i
< n. Suppose this computation is decomposed into two concurrent processes X and Y such that X
computes the array a and Y computes the array b. The processes employ two binary semaphores R and
S, both initialized to zero. The array a is shared by the two processes. The structures of the processes are
shown below.
Process X: Process Y:
private i; private i;
for (i=0; i < n; i++) { for (i=0; i < n; i++) {
a[i] = f(i); EntryY(R, S);
ExitX(R, S); b[i]=g(a[i]);
} }
Which one of the following represents the CORRECT implementations of ExitX and EntryY?
(A)
ExitX(R, S) {
  P(R);
  V(S);
}

EntryY (R, S) {
  P(S);
  V(R);
}

(B)
ExitX(R, S) {
  V(R);
  V(S);
}

EntryY(R, S) {
  P(R);
  P(S);
}

(C)
ExitX(R, S) {
  P(S);
  V(R);
}

EntryY(R, S) {
  V(S);
  P(R);
}

(D)
ExitX(R, S) {
  V(R);
  P(S);
}

EntryY(R, S) {
  V(S);
  P(R);
}

A A

B B

C C

D D

Process Management GATE CS 2013



Question 9 Explanation:
The purpose here is neither the deadlock should occur

nor the binary semaphores be assigned value greater

than one.

A leads to deadlock

B can increase value of semaphores b/w 1 to n

D may increase the value of semaphore R and S to

2 in some cases

See http://geeksquiz.com/operating-systems-process-management-question-13/

Question 10

A process executes the code
fork();

fork();

fork();

The total number of child processes created is

A 3

B 4
C 7

D 8

GATE CS 2012 Process Management



Question 10 Explanation:
Let us put some label names for the three lines
fork (); // Line 1

fork (); // Line 2

fork (); // Line 3

L1 // There will be 1 child process created by line 1

/ \

L2 L2 // There will be 2 child processes created by line 2

/ \ / \

L3 L3 L3 L3 // There will be 4 child processes created by line 3

We can also use a direct formula to get the number of child processes: with n fork statements, there are
always 2^n - 1 child processes.

Question 11

Fetch_And_Add(X,i) is an atomic Read-Modify-Write instruction that reads the value of memory
location X, increments it by the value i, and returns the old value of X. It is used in the pseudocode
shown below to implement a busy-wait lock. L is an unsigned integer shared variable initialized to
0. The value of 0 corresponds to lock being available, while any non-zero value corresponds to the
lock being not available.
AcquireLock(L){
    while (Fetch_And_Add(L,1))
        L = 1;
}

ReleaseLock(L){
    L = 0;
}

This implementation

A fails as L can overflow

B fails as L can take on a non-zero value when the lock is actually available

C works correctly but may starve some processes

D works correctly without starvation

GATE CS 2012 Process Management



Question 11 Explanation:
Take closer look the below while loop.
while (Fetch_And_Add(L,1))

L = 1; // A waiting process can be here just after

// the lock is released, and can make L = 1.

Consider a situation where a process has just released the lock and made L = 0. Let there be one more
process waiting for the lock, means executing the AcquireLock() function. Just after the L was made 0, let
the waiting processes executed the line L = 1. Now, the lock is available and L = 1. Since L is 1, the
waiting process (and any other future coming processes) can not come out of the while loop. The above
problem can be resolved by changing the AcuireLock() to following.
AcquireLock(L){
    while (Fetch_And_Add(L,1))
    { // Do Nothing }
}

Source : http://www.geeksforgeeks.org/operating-systems-set-17/
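The corrected lock can be exercised with threads. The sketch below is a Python simulation (an assumption: real Fetch_And_Add is a single hardware instruction, emulated here with an internal lock) showing that the fixed AcquireLock, which never writes L = 1 from the waiting side, gives mutual exclusion:

```python
import threading

class FAALock:
    """Busy-wait lock built on a simulated atomic Fetch_And_Add."""
    def __init__(self):
        self._guard = threading.Lock()  # stands in for hardware atomicity
        self.L = 0                      # 0 = available, non-zero = taken

    def fetch_and_add(self, i):
        with self._guard:               # atomic read-modify-write
            old = self.L
            self.L += i
            return old

    def acquire(self):
        # Fixed version: spin, but never assign L = 1 in the loop body.
        while self.fetch_and_add(1) != 0:
            pass

    def release(self):
        with self._guard:
            self.L = 0

counter = 0
lock = FAALock()

def worker():
    global counter
    for _ in range(200):
        lock.acquire()
        counter += 1                    # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 800: no increments lost
```

Only one thread can ever see the old value 0 between two releases, which is exactly why mutual exclusion holds.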

Question 12

WRONG
The time taken to switch between user and kernel modes of execution be t1 while the time taken to switch
between two processes be t2. Which of the following is TRUE?

A t1 > t2

B t1 = t2

C t1 < t2

D nothing can be said about the relation between t1 and t2
Process Management GATE CS 2011
Discuss it

Question 12 Explanation:
Process switches (context switches) can occur only in kernel mode. So for a process switch, we first
have to move from user to kernel mode, then save the PCB of the process from which we are taking off
the CPU, and then load the PCB of the required process. At the end, a switch from kernel back to user
mode is done. Switching from user to kernel mode, on the other hand, is a very fast operation (the OS has
to change just a single bit at the hardware level). Thus t1 < t2. This explanation has been contributed by Abhishek Kumar.
Question 13

WRONG
A thread is usually defined as a "light weight process" because an operating system (OS) maintains
smaller data structures for a thread than for a process. In relation to this, which of the following is TRUE?

A On per-thread basis, the OS maintains only CPU register state

B The OS does not maintain a separate stack for each thread

C On per-thread basis, the OS does not maintain virtual memory state

D On per-thread basis, the OS maintains only scheduling and accounting information

Process Management GATE CS 2011


Discuss it

Question 13 Explanation:
Threads share the address space of their process. Virtual memory is concerned with processes, not with
threads. A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, and a set of
registers, (and a thread ID.) As you can see, for a single thread of control - there is one program counter,
and one sequence of instructions that can be carried out at any given time and for multi-threaded
applications-there are multiple threads within a single process, each having their own program counter,
stack and set of registers, but sharing common code, data, and certain structures such as open files.

Option (A): not only the CPU register state but also a separate stack is maintained per thread, while code,
data, and open files are shared. So option (A) is not correct, as it says the OS maintains only the CPU
register state. Option (B): according to option (B), the OS does not maintain a separate stack for each
thread. But a separate stack is maintained for each thread, so this option is also incorrect. Option (C):
according to option (C), the OS does not maintain virtual memory state per thread. This is correct, as the
OS does not maintain any virtual memory state for an individual thread. Option (D): according to option
(D), the OS maintains only scheduling and accounting information. This is not correct, as other per-thread
information such as CPU registers, stack, and program counter is also maintained.
Reference:https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/4_Threads.html This solution is
contributed by Nitika Bansal

Question 14

WRONG
Consider the methods used by processes P1 and P2 for accessing their critical sections whenever
needed, as given below. The initial values of shared boolean variables S1 and S2 are randomly assigned.
Method Used by P1

while (S1 == S2) ;

Critica1 Section

S1 = S2;

Method Used by P2

while (S1 != S2) ;

Critica1 Section

S2 = not (S1);

Which one of the following statements describes the properties achieved?


A Mutual exclusion but not progress

B Progress but not mutual exclusion

C Neither mutual exclusion nor progress

D Both mutual exclusion and progress

Process Management GATE CS 2010


Discuss it

Question 14 Explanation:
Mutual Exclusion: a way of making sure that if one process is using a shared modifiable data item, the
other processes are excluded from doing the same thing. While one process accesses the shared
variable, all other processes desiring to do so at the same moment are kept waiting; when that process
has finished, one of the waiting processes is allowed to proceed. In this fashion, each process accessing
the shared data excludes all others from doing so simultaneously. Progress requirement: if no process is
executing in its critical section and there exist some processes that wish to enter their critical sections,
then the selection of the process that will enter the critical section next cannot be postponed indefinitely.
Solution: it can be easily observed that the mutual exclusion requirement is satisfied by the above
solution: P1 can enter the critical section only if S1 is not equal to S2, and P2 can enter the critical section
only if S1 is equal to S2. But the progress requirement is not satisfied. Suppose S1 = 1 and S2 = 0,
process P1 is not interested in entering the critical section, but P2 wants to enter. P2 is not able to enter
the critical section, as only when P1 finishes execution can P2 enter (only then is the condition S1 == S2
satisfied). Progress is violated when a process that is not interested in entering the critical section blocks
other interested processes from entering it.
Reference: http://www.personal.kent.edu/~rmuhamma/OpSystems/Myos/mutualExclu.htm
See http://www.geeksforgeeks.org/operating-systems-set-7/ This solution is contributed by Nitika Bansal

Question 15

WRONG
The following program consists of 3 concurrent processes and 3 binary semaphores.The semaphores are
initialized as S0 = 1, S1 = 0, S2 = 0.

How many times will process P0 print '0'?

A At least twice

B Exactly twice

C Exactly thrice

D Exactly once

Process Management GATE CS 2010


Discuss it

Question 15 Explanation:
Initially only P0 can go inside the while loop as S0 = 1, S1 = 0, S2 = 0. P0 first prints '0' then, after
releasing S1 and S2, either P1 or P2 will execute and release S0. So 0 is printed again.

Question 16

WRONG
The enter_CS() and leave_CS() functions to implement critical section of a process are realized using
test-and-set instruction as follows:
void enter_CS(X)
{
    while (test-and-set(X)) ;
}

void leave_CS(X)
{
    X = 0;
}

In the above solution, X is a memory location associated with the CS and is initialized to 0. Now consider
the following statements: I. The above solution to CS problem is deadlock-free II. The solution is
starvation free. III. The processes enter CS in FIFO order. IV More than one process can enter CS at the
same time. Which of the above statements is TRUE?
A I only

B I and II

C II and III

D IV only
Process Management GATE-CS-2009
Discuss it

Question 16 Explanation:
The above solution is a simple test-and-set solution that makes sure that deadlock doesn't occur, but it
doesn't use any queue to avoid starvation or to ensure FIFO order.

Question 17

WRONG
The P and V operations on counting semaphores, where s is a counting semaphore, are defined as
follows:
P(s) : s = s - 1;

if (s < 0) then wait;

V(s) : s = s + 1;

if (s <= 0) then wakeup a process waiting on s;

Assume that Pb and Vb the wait and signal operations on binary semaphores are provided. Two binary
semaphores Xb and Yb are used to implement the semaphore operations P(s) and V(s) as follows:
P(s) : Pb(Xb);
       s = s - 1;
       if (s < 0) {
           Vb(Xb);
           Pb(Yb);
       }
       else Vb(Xb);

V(s) : Pb(Xb);
       s = s + 1;
       if (s <= 0) Vb(Yb);
       Vb(Xb);

The initial values of Xb and Yb are respectively

A 0 and 0

B 0 and 1

C 1 and 0

D 1 and 1
Process Management GATE CS 2008
Discuss it

Question 17 Explanation:
If Xb were 0 initially, the very first Pb(Xb) in P(s) would block and no process could ever proceed, so Xb
must be 1: it acts as a mutex protecting the counter s. Yb is used only to block processes once s has gone
negative, and every Pb(Yb) must be matched by a later Vb(Yb) from some V(s). If Yb started at 1, the first
process to block in P(s) would pass Pb(Yb) without any matching V(s), so one extra process would get
past P(s) and the semaphore count would be broken. Hence Xb = 1 and Yb = 0 initially, which is option C.
See Question 2 of http://www.geeksforgeeks.org/operating-systems-set-10/ This solution is contributed
by Nitika Bansal
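The construction in the question can be made runnable. The sketch below (Python, using threading.Semaphore objects restricted to the values 0 and 1 as the binary semaphores Xb and Yb, an assumption made for illustration) implements P(s)/V(s) and uses the resulting counting semaphore to cap concurrency at 2:

```python
import threading, time

class CountingSem:
    """Counting semaphore built from two binary semaphores, as in the question."""
    def __init__(self, init):
        self.s = init
        self.Xb = threading.Semaphore(1)   # mutex over s, initially 1
        self.Yb = threading.Semaphore(0)   # blocks waiters, initially 0

    def P(self):
        self.Xb.acquire()                  # Pb(Xb)
        self.s -= 1
        if self.s < 0:
            self.Xb.release()              # Vb(Xb)
            self.Yb.acquire()              # Pb(Yb): block until some V wakes us
        else:
            self.Xb.release()              # Vb(Xb)

    def V(self):
        self.Xb.acquire()                  # Pb(Xb)
        self.s += 1
        if self.s <= 0:
            self.Yb.release()              # Vb(Yb): wake one waiter
        self.Xb.release()                  # Vb(Xb)

sem = CountingSem(2)                       # at most 2 threads inside at once
active, peak = 0, 0
guard = threading.Lock()

def worker():
    global active, peak
    sem.P()
    with guard:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)                       # pretend to use the resource
    with guard:
        active -= 1
    sem.V()

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print("peak concurrency:", peak)           # never exceeds 2
```

If Yb were initialized to 1 instead of 0, a third thread would slip past P() immediately, which is the failure case 2 in the explanation above.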

Question 18

WRONG
A process executes the following code
for (i = 0; i < n; i++) fork();

The total number of child processes created is

A n

B 2^n - 1

C 2^n

D 2^(n+1) - 1;
Process Management GATE CS 2008
Discuss it

Question 18 Explanation:
F0 // There will be 1 child process created by first fork

/ \

F1 F1 // There will be 2 child processes created by second fork

/ \ / \

F2 F2 F2 F2 // There will be 4 child processes created by third fork

/\ /\/\ /\

............... // and so on

If we sum all levels of the above tree for i = 0 to n-1, we get 2^n - 1. So there will be 2^n - 1 child
processes. Also see this post for more details.

Question 19

WRONG
Consider the following statements about user level threads and kernel level threads. Which one of the
following statement is FALSE?
A Context switch time is longer for kernel level threads than for user level threads.

B User level threads do not need any hardware support.

C Related kernel level threads can be scheduled on different processors in a multi-processor system.

D Blocking one kernel level thread blocks all related threads.


Process Management GATE-CS-2007
Discuss it

Question 19 Explanation:
Kernel level threads are managed by the OS, therefore, thread operations are implemented in the kernel
code. Kernel level threads can also utilize multiprocessor systems by splitting threads on different
processors. If one thread blocks it does not cause the entire process to block. Kernel level threads have
disadvantages as well. They are slower than user level threads due to the management overhead. Kernel
level context switch involves more steps than just saving some registers. Finally, they are not portable
because the implementation is operating system dependent. option (A): Context switch time is longer for
kernel level threads than for user level threads. True, As User level threads are managed by user and
Kernel level threads are managed by OS. There are many overheads involved in Kernel level thread
management, which are not present in User level thread management. So context switch time is longer
for kernel level threads than for user level threads. Option (B): User level threads do not need any
hardware support True, as User level threads are managed by user and implemented by Libraries, User
level threads do not need any hardware support. Option (C): Related kernel level threads can be
scheduled on different processors in a multiprocessor system. This is true. Option (D): Blocking one
kernel level thread blocks all related threads. false, since kernel level threads are managed by operating
system, if one thread blocks, it does not cause all threads or entire process to block. See Question 4
of http://www.geeksforgeeks.org/operating-systems-set-13/ Reference
:http://www.personal.kent.edu/~rmuhamma/OpSystems/Myos/threads.htm http://quiz.geeksforgeeks.org/o
perating-system-user-level-thread-vs-kernel-level-thread/ This solution is contributed by Nitika Bansal

Question 20

WRONG
Two processes, P1 and P2, need to access a critical section of code. Consider the following
synchronization construct used by the processes. Here, wants1 and wants2 are shared variables, which
are initialized to false. Which one of the following statements is TRUE about the above construct?
/* P1 */
while (true) {
    wants1 = true;
    while (wants2 == true);
    /* Critical Section */
    wants1 = false;
    /* Remainder section */
}

/* P2 */
while (true) {
    wants2 = true;
    while (wants1 == true);
    /* Critical Section */
    wants2 = false;
    /* Remainder section */
}

A It does not ensure mutual exclusion.

B It does not ensure bounded waiting.

C It requires that processes enter the critical section in strict alternation.

D It does not prevent deadlocks, but ensures mutual exclusion.
Process Management GATE-CS-2007
Discuss it

Question 20 Explanation:
Bounded waiting: there exists a bound, or limit, on the number of times other processes are allowed to
enter their critical sections after a process has made a request to enter its critical section and before that
request is granted. Mutual exclusion prevents simultaneous access to a shared resource. This concept is
used in concurrent programming with a critical section, a piece of code in which processes or threads
access a shared resource. Solution: two processes, P1 and P2, need to access a critical section of code.
Here, wants1 and wants2 are shared variables, which are initialized to false. Now, when both wants1 and
wants2 become true, both processes P1 and P2 enter their while loops and wait for each other to finish.
These while loops run indefinitely, which leads to deadlock. Now, assume P1 is in the critical section (it
means wants1 = true; wants2 can be anything, true or false). This ensures that P2 won't enter the critical
section, and vice versa, which satisfies mutual exclusion. Bounded waiting is also satisfied, as there is a
bound on the number of processes that get access to the critical section after a process requests access
to it. See question 3 of http://www.geeksforgeeks.org/operating-systems-set-13/ This solution is
contributed by Nitika Bansal

Question 21

WRONG
Which one of the following is FALSE?

A User level threads are not scheduled by the kernel.

B When a user level thread is blocked, all other threads of its process are blocked.

C Context switching between user level threads is faster than context switching between kernel
level threads.

D Kernel level threads cannot share the code segment
Process Management GATE-CS-2014-(Set-1)
Discuss it

Question 21 Explanation:

USER LEVEL THREAD                                   KERNEL LEVEL THREAD

User threads are implemented by user processes.     Kernel threads are implemented by the OS.
The OS does not recognize user level threads.       Kernel threads are recognized by the OS.
Implementation of user threads is easy.             Implementation of kernel threads is complicated.
Context switch time is less.                        Context switch time is more.
Context switch requires no hardware support.        Hardware support is needed.
If one user level thread performs a blocking        If one kernel thread performs a blocking operation,
operation, the entire process is blocked.           another thread can continue execution.
Example: Java threads, POSIX threads.               Example: Windows, Solaris.
Source: http://geeksquiz.com/operating-system-user-level-thread-vs-kernel-level-thread/

Question 22
WRONG
Consider two processors P1 and P2 executing the same instruction set. Assume that under identical
conditions, for the same input, a program running on P2 takes 25% less time but incurs 20% more CPI
(clock cycles per instruction) as compared to the program running on P1. If the clock frequency of P1 is
1GHz, then the clock frequency of P2 (in GHz) is _________.
A 1.6

B 3.2

C 1.2

D 0.8

Process Management GATE-CS-2014-(Set-1)


Discuss it

Question 22 Explanation:
For P1, clock period = 1 ns, so a program with N instructions and CPI c takes N * c * 1 ns on P1.

Let the clock period of P2 be t. P2 takes 25% less time but incurs 20% more CPI, so:

0.75 * (N * c * 1 ns) = N * (1.2 * c) * t

We get t = 0.625 ns, and the inverse of t gives the clock frequency of P2 as 1.6 GHz.
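The same calculation written out numerically (variable names are illustrative; instruction count and P1's CPI are normalized to 1, which cancels out of the ratio):

```python
# Execution time = instruction count * CPI * clock period.
cpi1 = 1.0
period1 = 1.0            # ns, since P1 runs at 1 GHz
time1 = cpi1 * period1   # normalized per-instruction time on P1

cpi2 = 1.2 * cpi1        # 20% more CPI on P2
time2 = 0.75 * time1     # 25% less time on P2

period2 = time2 / cpi2   # time = CPI * period  =>  period = time / CPI
freq2 = 1.0 / period2    # GHz, since the period is in ns

print(period2, freq2)    # 0.625 ns and 1.6 GHz
```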

Question 23
Consider the procedure below for the Producer-Consumer problem which uses semaphores:

Which one of the following is TRUE?

A The producer will be able to add an item to the buffer, but the consumer can never consume it.
B The consumer will remove no more than one item from the buffer.

C Deadlock occurs if the consumer succeeds in acquiring semaphore s when the buffer is empty.

D The starting value for the semaphore n must be 1 and not 0 for deadlock-free operation.

Process Management GATE-CS-2014-(Set-2)


Discuss it

Question 24

WRONG
The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the
old value of x in y without allowing any intervening access to the memory location x. Consider the following
implementation of P and V functions on a binary semaphore S.
void P (binary_semaphore *s)
{
    unsigned y;
    unsigned *x = &(s->value);
    do {
        fetch-and-set x, y;
    } while (y);
}

void V (binary_semaphore *s)
{
    s->value = 0;
}

Which one of the following is true?


A The implementation may not work if context switching is disabled in P

B Instead of using fetch-and-set, a pair of normal load/store can be used

C The implementation of V is wrong

D The code does not implement a binary semaphore

Process Management GATE-CS-2006


Discuss it

Question 24 Explanation:
If context switching is disabled in P, a process spinning in the do-while loop on a uniprocessor can never
be preempted, so the process holding the semaphore never gets the CPU to release it, and the
implementation may not work. See Question 3 of http://www.geeksforgeeks.org/operating-systems-set-15/

Question 25

WRONG
Barrier is a synchronization construct where a set of processes synchronizes globally i.e. each process in
the set arrives at the barrier and waits for all others to arrive and then all processes leave the barrier. Let
the number of processes in the set be three and S be a binary semaphore with the usual P and V
functions. Consider the following C implementation of a barrier with line numbers shown on left.
void barrier (void) {
1: P(S);
2: process_arrived++;
3. V(S);
4: while (process_arrived !=3);
5: P(S);
6: process_left++;
7: if (process_left==3) {
8: process_arrived = 0;
9: process_left = 0;
10: }
11: V(S);
}
The variables process_arrived and process_left are shared among all processes and are initialized to
zero. In a concurrent program all the three processes call the barrier function when they need to
synchronize globally. The above implementation of barrier is incorrect. Which one of the following is true?

A The barrier implementation is wrong due to the use of binary semaphore S

B The barrier implementation may lead to a deadlock if two barrier invocations are used in
immediate succession.

C Lines 6 to 10 need not be inside a critical section

D The barrier implementation is correct if there are only two processes instead of three.

Process Management GATE-CS-2006


Discuss it

Question 25 Explanation:
If two barrier invocations are used in immediate succession, a fast process can re-enter the barrier and
increment process_arrived beyond 3 before the last process of the previous invocation resets it to zero.
After the reset that increment is lost, so process_arrived can never reach 3 again and all processes spin
forever at line 4, hence deadlock.

Question 26

WRONG
Barrier is a synchronization construct where a set of processes synchronizes globally i.e. each process in
the set arrives at the barrier and waits for all others to arrive and then all processes leave the barrier. Let
the number of processes in the set be three and S be a binary semaphore with the usual P and V
functions. Consider the following C implementation of a barrier with line numbers shown on left.
void barrier (void) {
1: P(S);
2: process_arrived++;
3. V(S);
4: while (process_arrived !=3);
5: P(S);
6: process_left++;
7: if (process_left==3) {
8: process_arrived = 0;
9: process_left = 0;
10: }
11: V(S);
}
The variables process_arrived and process_left are shared among all processes and are initialized to
zero. In a concurrent program all the three processes call the barrier function when they need to
synchronize globally. Which one of the following rectifies the problem in the implementation?
A Lines 6 to 10 are simply replaced by process_arrived--

B At the beginning of the barrier the first process to enter the barrier waits until process_arrived
becomes zero before proceeding to execute P(S).

C Context switch is disabled at the beginning of the barrier and re-enabled at the end.

D The variable process_left is made private instead of shared

Process Management GATE-CS-2006


Discuss it

Question 26 Explanation:

Line 2 should not be executed when a process enters the barrier a second time until the other two
processes have completed line 7 of the previous invocation. This prevents the variable process_arrived
from becoming greater than 3. When both process_arrived and process_left have been reset to zero, the
deadlock problem is resolved.
Thus, at the beginning of the barrier the first process to enter the barrier waits until process_arrived
becomes zero before proceeding to execute P(S).

Thus, option (B) is correct.
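The fix can be exercised with three threads making successive barrier calls. Below is a Python rendering (an assumption: the option-(B) wait is implemented as a spin that keeps a re-entering process out until process_arrived has been reset by the previous invocation):

```python
import threading

NPROC = 3
S = threading.Lock()               # plays the role of the binary semaphore S
process_arrived = 0
process_left = 0

def barrier():
    global process_arrived, process_left
    # Fix from option (B): a process re-entering the barrier waits until
    # the previous invocation has reset process_arrived to 0.
    while process_arrived >= NPROC:
        pass
    with S:                        # P(S) ... V(S)
        process_arrived += 1
    while process_arrived < NPROC: # wait for all three to arrive
        pass
    with S:
        process_left += 1
        if process_left == NPROC:  # last one out resets both counters
            process_arrived = 0
            process_left = 0

done = 0
guard = threading.Lock()

def worker():
    global done
    for _ in range(3):             # three barrier invocations in succession
        barrier()
        with guard:
            done += 1

threads = [threading.Thread(target=worker) for _ in range(NPROC)]
for t in threads: t.start()
for t in threads: t.join()
print(done)  # 9: every round completed, no deadlock
```

The reset can only happen after all three processes have left the arrival spin, so no process is ever stranded there; the entry spin then keeps process_arrived from exceeding 3.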


Question 27

WRONG
Consider two processes P1 and P2 accessing the shared variables X and Y protected by two binary
semaphores SX and SY respectively, both initialized to 1. P and V denote the usual semaphore
operators, where P decrements the semaphore value, and V increments the semaphore value. The
pseudo-code of P1 and P2 is as follows : P1 :
While true do {
   L1 : ................
   L2 : ................
   X = X + 1;
   Y = Y - 1;
   V(SX);
   V(SY);
}

P2 :
While true do {
   L3 : ................
   L4 : ................
   Y = Y + 1;
   X = Y - 1;
   V(SY);
   V(SX);
}

In order to avoid deadlock, the correct operators at L1, L2, L3 and L4 are respectively
A P(SY), P(SX); P(SX), P(SY)

B P(SX), P(SY); P(SY), P(SX)

C P(SX), P(SX); P(SY), P(SY)

D P(SX), P(SY); P(SX), P(SY)


Process Management GATE-CS-2004
Discuss it

Question 27 Explanation:
Option A: at L1 process P1 does P(SY) and at L3 process P2 does P(SX); then P1 waits at L2 for SX,
which P2 holds, while P2 waits at L4 for SY, which P1 holds. A circular wait exists, so deadlock is possible.

Option B: at L1 process P1 does P(SX) and at L3 process P2 does P(SY); then P1 waits at L2 for SY,
which P2 holds, while P2 waits at L4 for SX, which P1 holds. Again a circular wait, so deadlock is possible.

Option C: P1 does P(SX) twice and P2 does P(SY) twice; the second P on an already-taken binary
semaphore blocks forever, and neither SX nor SY can ever be released by its own process. Deadlock again.

Option D: both processes acquire the semaphores in the same order (SX first, then SY), so no circular
wait can arise and deadlock is avoided.

Please read the following to learn more about process synchronization and semaphores: Process
Synchronization Set 1 This explanation has been contributed by Dheerendra Singh.
Question 28

WRONG
Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T.
The code for the processes P and Q is shown below.
Process P:
while (1) {
W:
print '0';
print '0';
X:
}

Process Q:
while (1) {
Y:
print '1';
print '1';
Z:
}
Synchronization statements can be inserted only at points W, X, Y and Z. Which of the following will
always lead to an output staring with '001100110011' ?
A P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S and T initially 1

B P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S initially 1, and T initially 0

C P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S and T initially 1

D P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S initially 1, and T initially 0

Process Management GATE-CS-2003


Discuss it

Question 28 Explanation:
P(S) means wait on semaphore S and V(S) means signal on semaphore S. They are defined as:

Wait(S) {
    while (S <= 0) ;
    S--;
}

Signal(S) {
    S++;
}

Initially, S = 1 and T = 0 to enforce alternation between processes P and Q. Since S = 1, only process P
can proceed, and Wait(S) decrements S to 0. At the same instant, in process Q, the value of T is 0, so
control in Q is stuck in the wait loop until process P prints 00 and increments T by calling V(T). While
control is in process Q, S = 0, so process P is stuck in its wait loop and does not execute until process Q
prints 11 and makes S = 1 by calling V(S). This whole cycle repeats to give the output 00 11 00 11 ....

Thus, B is the correct choice.
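Option (B) can be checked directly. A Python sketch (an assumption, since the question is abstract pseudocode) with semaphores S = 1 and T = 0 and P(S) at W, V(T) at X, P(T) at Y, V(S) at Z:

```python
import threading

S = threading.Semaphore(1)   # S initially 1
T = threading.Semaphore(0)   # T initially 0
out = []

def proc_p():
    for _ in range(3):
        S.acquire()          # P(S) at W
        out.append('00')     # print '0'; print '0'
        T.release()          # V(T) at X

def proc_q():
    for _ in range(3):
        T.acquire()          # P(T) at Y
        out.append('11')     # print '1'; print '1'
        S.release()          # V(S) at Z

tp = threading.Thread(target=proc_p)
tq = threading.Thread(target=proc_q)
tp.start(); tq.start(); tp.join(); tq.join()
print(''.join(out))  # 001100110011
```

The two semaphores force strict alternation, so the output is the same on every run regardless of scheduling.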


Question 29

WRONG
Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T.
The code for the processes P and Q is shown below.
Process P:
while (1) {
W:
print '0';
print '0';
X:
}

Process Q:
while (1) {
Y:
print '1';
print '1';
Z:
}
Synchronization statements can be inserted only at points W, X, Y and Z Which of the following will
ensure that the output string never contains a substring of the form 01n0 or 10n1 where n is odd?
A P(S) at W, V(S) at X, P(T) at Y, V(T) at Z, S and T initially 1

B P(S) at W, V(T) at X, P(T) at Y, V(S) at Z, S and T initially 1

C P(S) at W, V(S) at X, P(S) at Y, V(S) at Z, S initially 1

D V(S) at W, V(T) at X, P(S) at Y, P(T) at Z, S and T initially 1

Process Management GATE-CS-2003


Discuss it

Question 29 Explanation:
P(S) means wait on semaphore S and V(S) means signal on semaphore S. The definition of these
functions are :

Wait(S) {
    while (S <= 0) ;
    S-- ;
}

Signal(S) {
    S++ ;
}

With P(S) at W, V(S) at X, P(S) at Y, V(S) at Z and S initially 1, the single semaphore S acts as a mutex
around each pair of print statements. Each process therefore prints its two characters atomically, so the
output is some interleaving of complete blocks of 00 and 11. Between any two 0s, the 1s (if any) come in
complete pairs, so a substring of the form 01^n0 must have n even; symmetrically, a substring of the form
10^n1 must also have n even. Hence option C ensures the output never contains 01^n0 or 10^n1 with
odd n.


Question 30

WRONG
Which of the following does not interrupt a running process?

A A device

B Timer

C Scheduler process

D Power failure
Process Management GATE-CS-2001
Discuss it

Question 30 Explanation:
The scheduler process doesn't interrupt any process; its job is to select processes for the following three
purposes. Long-term scheduler (or job scheduler): selects which processes should be brought into the
ready queue. Short-term scheduler (or CPU scheduler): selects which process should be executed next
and allocates the CPU. Mid-term scheduler (swapper): present in all systems with virtual memory;
temporarily removes processes from main memory and places them on secondary memory (such as a
disk drive) or vice versa. The mid-term scheduler may decide to swap out a process which has not been
active for some time, or a process which has a low priority, or a process which is page faulting frequently,
or a process which is taking up a large amount of memory, in order to free up main memory for other
processes, swapping the process back in later when more memory is available, or when the process has
been unblocked and is no longer waiting for a resource. Source: http://www.geeksforgeeks.org/operating-
systems-set-3/

Question 31

CORRECT
Which of the following need not necessarily be saved on a context switch between processes?

A General purpose registers

B Translation lookaside buffer

C Program counter

D All of the above

Process Management GATE-CS-2000


Discuss it

Question 31 Explanation:
See question 2 of http://www.geeksforgeeks.org/operating-systems-set-3/

Question 32

CORRECT
The following two functions P1 and P2 that share a variable B with an initial value of 2 execute
concurrently.
P1()
{
    C = B - 1;
    B = 2*C;
}

P2()
{
    D = 2 * B;
    B = D - 1;
}

The number of distinct values that B can possibly take after the execution is
A 3

B 2

C 5

D 4

Process Management GATE-CS-2015 (Set 1)


Discuss it

Question 32 Explanation:
There are following ways that concurrent processes can follow.
C = B - 1; // C = 1

B = 2*C; // B = 2

D = 2 * B; // D = 4

B = D - 1; // B = 3

C = B - 1; // C = 1

D = 2 * B; // D = 4
B = D - 1; // B = 3

B = 2*C; // B = 2

C = B - 1; // C = 1

D = 2 * B; // D = 4

B = 2*C; // B = 2

B = D - 1; // B = 3

D = 2 * B; // D = 4

C = B - 1; // C = 1

B = 2*C; // B = 2

B = D - 1; // B = 3

D = 2 * B; // D = 4

B = D - 1; // B = 3

C = B - 1; // C = 2

B = 2*C; // B = 4

There are 3 different possible values of B: 2, 3 and 4.
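The interleavings can also be enumerated mechanically. The sketch below (Python; it assumes each assignment executes atomically, as the listing above does) replays every order-preserving interleaving of P1's and P2's two statements and collects the final values of B:

```python
def interleavings(a, b):
    """Yield all order-preserving interleavings of sequences a and b."""
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

# Each statement is modeled as an atomic update on the shared state.
p1 = [lambda st: st.update(C=st['B'] - 1),
      lambda st: st.update(B=2 * st['C'])]
p2 = [lambda st: st.update(D=2 * st['B']),
      lambda st: st.update(B=st['D'] - 1)]

finals = set()
for schedule in interleavings(p1, p2):
    st = {'B': 2, 'C': 0, 'D': 0}
    for stmt in schedule:
        stmt(st)
    finals.add(st['B'])

print(sorted(finals))  # [2, 3, 4]: three distinct values
```

There are C(4,2) = 6 interleavings in total, and they yield exactly the three final values listed above.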

Question 33

WRONG
Two processes X and Y need to access a critical section. Consider the following synchronization construct
used by both the processes.

Here, varP and varQ are shared variables and both are initialized to false. Which one of the following
statements is true?
A The proposed solution prevents deadlock but fails to guarantee mutual exclusion

B The proposed solution guarantees mutual exclusion but fails to prevent deadlock
C The proposed solution guarantees mutual exclusion and prevents deadlock

D The proposed solution fails to prevent deadlock and fails to guarantee mutual exclusion
Process Management GATE-CS-2015 (Set 3)
Discuss it

Question 33 Explanation:
When both processes try to enter the critical section simultaneously, both are allowed to do so, since both
shared variables varP and varQ are true. So, clearly, there is no mutual exclusion. Also, deadlock is
prevented, because mutual exclusion is one of the four necessary conditions for deadlock to occur.
Hence, the answer is A.

Question 34

WRONG
In a certain operating system, deadlock prevention is attempted using the following scheme. Each
process is assigned a unique timestamp, and is restarted with the same timestamp if killed. Let Ph be the
process holding a resource R, Pr be a process requesting for the same resource R, and T(Ph) and T(Pr)
be their timestamps respectively. The decision to wait or preempt one of the processes is based on the
following algorithm.
if T(Pr) < T(Ph)

then kill Pr

else wait

Which one of the following is TRUE?


A The scheme is deadlock-free, but not starvation-free

B The scheme is not deadlock-free, but starvation-free

C The scheme is neither deadlock-free nor starvation-free

D The scheme is both deadlock-free and starvation-free

Process Management GATE-IT-2004


Discuss it

Question 34 Explanation:

1. This scheme is making sure that the timestamp of requesting process is always lesser than
holding process
2. The process is restarted with same timestamp if killed and that timestamp can NOT be
greater than the existing time stamp
From 1 and 2,it is clear that any new process coming having LESSER timestamp will be KILLED.So,NO
DEADLOCK possible However, a new process will lower timestamp may have to wait
infinitely because of its LOWER timestamp(as killed process will also have same timestamp ,as it was
killed earlier).STARVATION IS Definitely POSSIBLE So Answer is A
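The decision rule itself is tiny, which makes the starvation argument easy to see in code (a hypothetical sketch; the timestamps are illustrative):

```python
def decide(t_requester, t_holder):
    """The scheme's rule: kill the requester iff its timestamp is smaller."""
    return "kill" if t_requester < t_holder else "wait"

# A requester with a larger timestamp simply waits; every wait-for edge
# points toward a smaller timestamp, so no cycle (deadlock) can form.
print(decide(7, 5))   # wait

# A requester with a smaller timestamp is killed...
print(decide(1, 5))   # kill
# ...and since it restarts with the SAME timestamp, it is killed again
# whenever it meets the same holder: starvation.
print(decide(1, 5))   # kill
```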

Question 35

WRONG
A process executes the following segment of code :
for(i = 1; i < = n; i++)

fork ();

The number of new processes created is


A n

B (n(n + 1))/2

C 2^n - 1

D 3^n - 1

Process Management GATE-IT-2004


Discuss it

Question 35 Explanation:
fork (); // Line 1

fork (); // Line 2

fork (); // Line 3

.....till n

L1 // There will be 1 child process created by line 1

/ \

L2 L2 // There will be 2 child processes created by line 2

/ \ / \

L3 L3 L3 L3 // There will be 4 child processes created by line 3

........

We can also use the direct formula to get the number of child processes: with n fork statements, there are
always 2^n - 1 child processes. Also see this post for more details.

Question 36

WRONG
The semaphore variables full, empty and mutex are initialized to 0, n and 1, respectively. Process
P1 repeatedly adds one item at a time to a buffer of size n, and process P2 repeatedly removes one item at
a time from the same buffer using the programs given below. In the programs, K, L, M and N are
unspecified statements.

P1:
while (1) {
    K;
    P(mutex);
    Add an item to the buffer;
    V(mutex);
    L;
}

P2:
while (1) {
    M;
    P(mutex);
    Remove an item from the buffer;
    V(mutex);
    N;
}

The statements K, L, M and N are respectively
P(full), V(empty), P(full), V(empty)

B P(full), V(empty), P(empty), V(full)

C P(empty), V(full), P(empty), V(full)

P(empty), V(full), P(full), V(empty)


Process Management GATE-IT-2004
Discuss it

Question 36 Explanation:

Process P1 is the producer and process P2 is the consumer.


Semaphore full is initialized to '0'. This means there is no item in the buffer. Semaphore empty is
initialized to 'n'. This means there is space for n items in the buffer.
In process P1, wait on semaphore 'empty' signifies that if there is no space in buffer then P1 can not
produce more items. Signal on semaphore 'full' is to signify that one item has been added to the buffer.
In process P2, wait on semaphore 'full' signifies that if the buffer is empty then the consumer cannot
consume any item. Signal on semaphore 'empty' frees one space in the buffer after consumption of an
item.

Thus, option (D) is correct.

Please comment below if you find anything wrong in the above post.
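With answer (D) substituted for K, L, M and N, the solution can be exercised with Python's threading.Semaphore. A sketch of the question's pseudocode (the finite loop and the recording lists are mine, so the demo terminates):

```python
import threading

def run(n=3, items=10):
    # Semaphores as in the question: full = 0, empty = n, mutex = 1.
    full = threading.Semaphore(0)
    empty = threading.Semaphore(n)
    mutex = threading.Semaphore(1)
    buf, consumed = [], []

    def producer():              # P1
        for i in range(items):
            empty.acquire()      # K = P(empty)
            mutex.acquire()
            buf.append(i)        # add an item to the buffer
            mutex.release()
            full.release()       # L = V(full)

    def consumer():              # P2
        for _ in range(items):
            full.acquire()       # M = P(full)
            mutex.acquire()
            consumed.append(buf.pop(0))  # remove an item
            mutex.release()
            empty.release()      # N = V(empty)

    tp = threading.Thread(target=producer)
    tc = threading.Thread(target=consumer)
    tp.start(); tc.start(); tp.join(); tc.join()
    return consumed

print(run())   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The empty semaphore keeps the buffer from exceeding n items, full keeps the consumer from reading an empty buffer, and mutex protects the buffer itself.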

Question 37

WRONG
Consider the following two-process synchronization solution.

The shared variable turn is initialized to zero. Which one of the following is TRUE?

A This is a correct two-process synchronization solution.

This solution violates mutual exclusion requirement.


This solution violates progress requirement.
D This solution violates bounded wait requirement.

Process Management GATE-CS-2016 (Set 2)


Discuss it

Question 37 Explanation:
It satisfies mutual exclusion:
P0 and P1 cannot both leave their while statements at the same time, since turn can be 0 or 1
but not both at once. Say P0 is spinning in its while loop on the condition turn == 1; it keeps
spinning as long as P1 is executing its critical section. When P1 leaves its critical section, it
sets turn to 0 in its exit section, and only then does P0 fall out of its while loop and enter its
critical section. Therefore only one process can execute its critical section at a time.
It also satisfies bounded waiting:
Bounded waiting is a limit on the number of times other processes may enter their critical
sections after a process has requested entry and before that request is granted. If P0 wishes to
enter its critical section, it will definitely get its chance after at most one entry by P1, since
P1 sets turn to 0 on leaving its critical section — and vice versa (strict alternation).
Progress is not satisfied:
Because of the strict alternation, a process that does not currently wish to enter its critical
section can still prevent the other from entering, even though the critical section is free.
This explanation has been contributed by Dheerendra Singh.
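The mutual-exclusion and bounded-waiting claims can be checked with two Python threads standing in for P0 and P1. A sketch: the shared turn variable is from the question; the counter, the iteration bound, and the switch-interval tweak are mine, so the busy-wait demo terminates quickly:

```python
import sys
import threading

def run(iters=100):
    sys.setswitchinterval(1e-4)       # speed up busy-wait hand-overs
    state = {"turn": 0, "count": 0}   # turn initialized to zero, as stated

    def process(pid):
        for _ in range(iters):
            while state["turn"] != pid:   # entry section: spin until my turn
                pass
            state["count"] += 1           # critical section
            state["turn"] = 1 - pid       # exit section: hand over the turn

    threads = [threading.Thread(target=process, args=(p,)) for p in (0, 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["count"]

print(run())   # 200 -- 2 * iters critical sections, strictly alternating
```

The unprotected `count += 1` is safe here precisely because strict alternation gives mutual exclusion; the same structure also shows the progress failure — if one thread stopped looping, the other would spin forever on its turn.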

Question 38

WRONG
Consider a non-negative counting semaphore S. The operation P(S) decrements S, and V(S) increments
S. During an execution, 20 P(S) operations and 12 V(S) operations are issued in some order. The largest
initial value of S for which at least one P(S) operation will remain blocked is ________.
7

B 8

C 9

10
Process Management GATE-CS-2016 (Set 2)
Discuss it

Question 38 Explanation:
Each completed P(S) decrements S by 1, each V(S) increments it by 1, and a P(S) blocks when S is 0.
With initial value S, at most S + 12 of the 20 P(S) operations can ever complete, regardless of the order
in which the operations are issued. For at least one P(S) to remain blocked we need S + 12 < 20, i.e.
S <= 7. So the largest initial value is 7: then at most 19 P(S) operations complete and exactly one
remains blocked.
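The counting argument can be mechanized in one line (the function name is mine):

```python
def blocked_p_ops(initial_s, p_ops=20, v_ops=12):
    # At most initial_s + v_ops of the P(S) operations can ever
    # complete, whatever the order; the rest remain blocked.
    return max(0, p_ops - (initial_s + v_ops))

print(blocked_p_ops(7))   # 1 -- one P(S) is guaranteed to stay blocked
print(blocked_p_ops(8))   # 0 -- an order exists in which none blocks
```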

Question 39

WRONG
Which of the following DMA transfer modes and interrupt handling mechanisms will enable the highest I/O
bandwidth?

A Transparent DMA and Polling interrupts

B Cycle-stealing and Vectored interrupts

Block transfer and Vectored interrupts


Block transfer and Polling interrupts
Process Management Input Output Systems Computer Organization and Architecture GATE
IT 2006
Discuss it

Question 40

WRONG
In the working-set strategy, which of the following is done by the operating system to prevent thrashing?
1. It initiates another process if there are enough extra frames.
2. It selects a process to suspend if the sum of the sizes of the working-sets exceeds the total
number of available frames.

A I only

B II only

Neither I nor II
Both I and II
Process Management GATE IT 2006
Discuss it

Question 40 Explanation:
According to concept of thrashing,
I is true because to prevent thrashing we must provide processes with as many frames as
they really need "right now".If there are enough extra frames, another process can be
initiated.
II is true because The total demand, D, is the sum of the sizes of the working sets for all
processes. If D exceeds the total number of available frames, then at least one process is
thrashing, because there are not enough frames available to satisfy its minimum working
set. If D is significantly less than the currently available frames, then additional processes
can be launched.

Question 41

WRONG
Processes P1 and P2 use critical_flag in the following routine to achieve mutual exclusion. Assume that
critical_flag is initialized to FALSE in the main program.

get_exclusive_access ( )
{
    if (critical_flag == FALSE) {
        critical_flag = TRUE;
        critical_region();
        critical_flag = FALSE;
    }
}

Consider the following statements.
i. It is possible for both P1 and P2 to access critical_region concurrently.
ii. This may lead to a deadlock.
Which of the following holds?

A (i) is false and (ii) is true

Both (i) and (ii) are false


(i) is true and (ii) is false

D Both (i) and (ii) are true

Process Management Deadlock Gate IT 2007


Discuss it

Question 41 Explanation:

Say P1 starts first and executes the if test; before it can set critical_flag to TRUE, the system context-
switches to P2, which also finds the flag FALSE and enters the if block. Now both processes are inside
the critical region, so (i) is true. (ii) is false: the flag can never stay TRUE with no process inside the
if block, because any process that enters the critical region sets critical_flag back to FALSE on its way
out. So no deadlock is possible.

Question 42

WRONG
The following is a code with two threads, producer and consumer, that can run in parallel. Further, S and
Q are binary semaphores equipped with the standard P and V operations.

semaphore S = 1, Q = 0;
integer x;

producer:                  consumer:
while (true) do            while (true) do
    P(S);                      P(Q);
    x = produce();             consume(x);
    V(Q);                      V(S);
done                       done

Which of the following is TRUE about the program above?

A The process can deadlock

B One of the threads can starve

Some of the items produced by the producer may be lost


Values generated and stored in 'x' by the producer will always be consumed before the producer

can generate a new value


Process Management Gate IT 2008
Discuss it
Question 42 Explanation:
A semaphore is a hardware or software tag variable whose value indicates the status of a common
resource; its purpose is to lock the resource while it is being used. A process that needs the resource
checks the semaphore to decide whether it may proceed. Entry to the critical section is controlled by the
wait operation and exit by the signal operation; these are also called P and V. P(S) decrements the
semaphore value by 1, and if the resulting value is negative the process is blocked until the semaphore
is signalled. V(S) increments the semaphore value by 1.

Solution: the consumer can consume only after the producer has produced an item, and the producer
can produce (except the first time) only after the consumer has consumed the previous item. Producer
and consumer execute in parallel.

Producer: S is 1, so P(S) makes it 0, and item x is produced; V(Q) then makes Q 1. On the next iteration
of the infinite loop S is still 0, so P(S) sends the producer to the blocked list of S.

Consumer: P(Q) makes Q 0 and the item is consumed; V(S) then, rather than simply setting S back to 1,
wakes the producer blocked on S's queue. The producer resumes just after its P(S) and produces the
next item. So the consumer always consumes an item before the producer produces the next one.
The correct option is (D).

On the other choices: (A) deadlock cannot happen, as producer and consumer block on different
semaphores and there is no hold-and-wait; (B) neither thread starves, because producer and consumer
strictly alternate, which also gives bounded waiting; (C) no produced item can be lost, for the same
reason.
Reference: http://www.geeksforgeeks.org/mutex-vs-semaphore/
This solution is contributed by Nitika Bansal
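The forced alternation can be observed with Python's threading.Semaphore standing in for the two binary semaphores. A sketch of the question's pseudocode (the finite loop and the list recording consumed values are mine):

```python
import threading

def run(n=5):
    S = threading.Semaphore(1)   # semaphore S = 1
    Q = threading.Semaphore(0)   # semaphore Q = 0
    x = [None]                   # integer x (shared)
    consumed = []

    def producer():
        for i in range(n):
            S.acquire()              # P(S)
            x[0] = i                 # x = produce()
            Q.release()              # V(Q)

    def consumer():
        for _ in range(n):
            Q.acquire()              # P(Q)
            consumed.append(x[0])    # consume(x)
            S.release()              # V(S)

    tp = threading.Thread(target=producer)
    tc = threading.Thread(target=consumer)
    tp.start(); tc.start(); tp.join(); tc.join()
    return consumed

print(run())   # [0, 1, 2, 3, 4] -- each value consumed before the next is produced
```

No value is ever skipped or repeated, which is exactly the alternation that makes option (D) true.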

Question 43

CORRECT
An operating system implements a policy that requires a process to release all resources before making a
request for another resource. Select the TRUE statement from the following:

A Both starvation and deadlock can occur

Starvation can occur but deadlock cannot occur

C Starvation cannot occur but deadlock can occur

D Neither starvation nor deadlock can occur

Process Management Gate IT 2008


Discuss it

Question 43 Explanation:
Starvation may occur: a process may need several resources at the same time, but under this policy it
must release everything it holds before making each new request, so it may never succeed in collecting
all the resources it needs at once. Deadlock cannot occur, because the hold-and-wait condition can
never arise.
Question 44

CORRECT
If the time-slice used in the round-robin scheduling policy is more than the maximum time required to
execute any process, then the policy will

A degenerate to shortest job first

B degenerate to priority scheduling

degenerate to first come first serve

D none of the above

Process Management Gate IT 2008


Discuss it

Question 44 Explanation:
RR executes processes in FCFS order with a time slice. If the time slice is long enough that every
process finishes within it, RR degenerates to FCFS.

Question 45

WRONG
Consider the following C code for processes P1 and P2. a=4, b=0, c=0 (initialization)

P1:                      P2:
(1) if (a < 0)           (i)  b = 10;
(2)     c = b - a;       (ii) a = -3;
(3) else
(4)     c = b + a;

If the processes P1 and P2 execute concurrently (shared variables a, b and c), which of the following
cannot be the value of c after both processes complete?
4

B 7

10

D 13

Process Management GATE 2017 Mock


Discuss it

Question 45 Explanation:
P1 alone first (a = 4): statements 1, 3, 4 give c = 0 + 4 = 4 {hence option A}.
P2 entirely before P1: b = 10, a = -3, then statements 1, 2 give c = 10 - (-3) = 13 {hence option D}.
P1 executes statement 1 (a = 4, so the else branch is chosen), then P2 executes i and ii, then P1
executes 3, 4: c = 10 + (-3) = 7 {hence option B}.
No interleaving yields c = 10, so 10 cannot be the value of c.
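The claim can be verified by brute force over all interleavings, treating each statement as atomic. A sketch (the step labels t/c/B/A for P1's test and assignment and P2's two statements are mine):

```python
from itertools import permutations

def possible_c_values():
    # Enumerate orderings of P1's steps (t = test, c = assignment) and
    # P2's statements (B: b = 10, A: a = -3), preserving program order.
    results = set()
    for perm in permutations(["t", "c", "B", "A"]):
        if perm.index("t") > perm.index("c"):   # P1's program order
            continue
        if perm.index("B") > perm.index("A"):   # P2's program order
            continue
        a, b, c, neg = 4, 0, 0, False
        for s in perm:
            if s == "t":
                neg = a < 0                     # branch fixed at test time
            elif s == "c":
                c = b - a if neg else b + a
            elif s == "B":
                b = 10
            elif s == "A":
                a = -3
        results.add(c)
    return results

print(sorted(possible_c_values()))   # [4, 7, 13, 14] -- 10 never occurs
```

At this granularity an extra value 14 also appears (P2's b = 10 landing between P1's test and its assignment); it is simply not among the options, and the only listed value that can never occur is 10.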

CPU Scheduling

Question 1

WRONG
Consider three processes (process id 0, 1, 2 respectively) with compute time bursts 2, 4 and 8 time units.
All processes arrive at time zero. Consider the longest remaining time first (LRTF) scheduling algorithm.
In LRTF ties are broken by giving priority to the process with the lowest process id. The average turn
around time is:
13 units
14 units

C 15 units

D 16 units

CPU Scheduling
Discuss it

Question 1 Explanation:
Let the processes be p0, p1 and p2. Under LRTF they execute in the following order:

p2 (0-4), p1 (4-5), p2 (5-6), p1 (6-7), p2 (7-8), p0 (8-9), p1 (9-10),
p2 (10-11), p0 (11-12), p1 (12-13), p2 (13-14)

Turnaround time of a process is the total time between submission of the process and its completion.
Turnaround time of p0 = 12 (12 - 0)
Turnaround time of p1 = 13 (13 - 0)
Turnaround time of p2 = 14 (14 - 0)
Average turnaround time = (12 + 13 + 14)/3 = 13.
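The schedule can be reproduced with a unit-step simulation. A sketch (assumes all processes arrive at time zero, with ties broken toward the lowest process id as stated):

```python
def lrtf_avg_tat(bursts):
    # Unit-by-unit longest-remaining-time-first; ties -> lowest pid.
    rem = list(bursts)
    finish = [0] * len(bursts)
    t = 0
    while any(rem):
        # pick the process with max remaining time (lowest pid on ties)
        pid = max(range(len(rem)), key=lambda i: (rem[i], -i))
        rem[pid] -= 1
        t += 1
        if rem[pid] == 0:
            finish[pid] = t       # arrival is 0, so finish = turnaround
    return sum(finish) / len(finish)

print(lrtf_avg_tat([2, 4, 8]))   # 13.0
```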

Question 2

WRONG
Consider three processes, all arriving at time zero, with total execution time of 10, 20 and 30 units,
respectively. Each process spends the first 20% of execution time doing I/O, the next 70% of time doing
computation, and the last 10% of time doing I/O again. The operating system uses a shortest remaining
compute time first scheduling algorithm and schedules a new process either when the running process
gets blocked on I/O or when the running process finishes its compute burst. Assume that all I/O
operations can be overlapped as much as possible. For what percentage of time does the CPU remain
idle?
0%
10.6%

C 30.0%
D 89.4%

CPU Scheduling
Discuss it

Question 2 Explanation:
Let the three processes be p0, p1 and p2, with execution times 10, 20 and 30 respectively.
p0 spends its first 2 time units in I/O, then 7 units of CPU time, then 1 final unit in I/O.
p1 spends its first 4 units in I/O, then 14 units of CPU time, then 2 final units in I/O.
p2 spends its first 6 units in I/O, then 21 units of CPU time, then 3 final units in I/O.

CPU timeline: idle (0-2), p0 (2-9), p1 (9-23), p2 (23-44), idle (44-47)

Total time spent = 47. Idle time = 2 + 3 = 5. Percentage of idle time = (5/47)*100 = 10.6 %.
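The arithmetic, as a quick sketch (the helper name is mine):

```python
def idle_percentage():
    busy = 7 + 14 + 21    # CPU bursts of p0, p1, p2, run back to back
    total = 2 + busy + 3  # p0's initial 2 ms of I/O + p2's final 3 ms of I/O
    return round((total - busy) / total * 100, 1)

print(idle_percentage())   # 10.6
```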

Question 3

WRONG
Consider three CPU-intensive processes, which require 10, 20 and 30 time units and arrive at times 0, 2
and 6, respectively. How many context switches are needed if the operating system implements a shortest
remaining time first scheduling algorithm? Do not count the context switches at time zero and at the end.

A 1

2
3

D 4

CPU Scheduling
Discuss it

Question 3 Explanation:
Let three process be P0, P1 and P2 with arrival times 0, 2 and 6 respectively and CPU burst times 10, 20
and 30 respectively. At time 0, P0 is the only available process so it runs. At time 2, P1 arrives, but P0 has
the shortest remaining time, so it continues. At time 6, P2 arrives, but P0 has the shortest remaining time,
so it continues. At time 10, P1 is scheduled as it is the shortest remaining time process. At time 30, P2 is
scheduled. Only two context switches are needed. P0 to P1 and P1 to P2.

Question 4

CORRECT
Which of the following process scheduling algorithm may lead to starvation

A FIFO

B Round Robin

Shortest Job Next


D None of the above

CPU Scheduling
Discuss it

Question 4 Explanation:
Shortest job next may lead to process starvation for processes which will require a long time to complete if
short processes are continually added.

Question 5

WRONG
If the quantum time of round robin algorithm is very large, then it is equivalent to:
First in first out

B Shortest Job Next

Lottery scheduling

D None of the above

CPU Scheduling
Discuss it

Question 5 Explanation:
If time quantum is very large, then scheduling happens according to FCFS.

Question 6

CORRECT
A scheduling algorithm assigns priority proportional to the waiting time of a process. Every process starts
with priority zero (the lowest priority). The scheduler re-evaluates the process priorities every T time units
and decides the next process to schedule. Which one of the following is TRUE if the processes have no
I/O operations and all arrive at time zero?

A This algorithm is equivalent to the first-come-first-serve algorithm

This algorithm is equivalent to the round-robin algorithm.

C This algorithm is equivalent to the shortest-job-first algorithm..

D This algorithm is equivalent to the shortest-remaining-time-first algorithm

GATE CS 2013 CPU Scheduling


Discuss it

Question 6 Explanation:
The scheduling algorithm works as round robin with a quantum of T. Once a process's turn comes and it
has executed for T units, its waiting time becomes the least among the ready processes, so its turn
comes again only after every other process has received the CPU for T units.

Question 7

WRONG
Consider the 3 processes, P1, P2 and P3 shown in the table.
Process Arrival time Time Units Required

P1 0 5

P2 1 7

P3 3 4

The completion order of the 3 processes under the policies FCFS and RR2 (round robin scheduling with
CPU quantum of 2 time units) are
A FCFS: P1, P2, P3
  RR2: P1, P2, P3

B FCFS: P1, P3, P2
  RR2: P1, P3, P2

C FCFS: P1, P2, P3
  RR2: P1, P3, P2

D FCFS: P1, P3, P2
  RR2: P1, P2, P3

GATE CS 2012 CPU Scheduling


Discuss it

Question 7 Explanation:
FCFS is clear.

In RR, time slot is of 2 units.

Processes are assigned in following order

p1, p2, p1, p3, p2, p1, p3, p2, p2

This question involves the concept of the ready queue. At t=2, P2 starts and P1 is sent to the ready
queue; at t=3, P3 arrives, so P3 is queued in the ready queue after P1. So at t=4, P1 is executed again,
and P3 is executed for the first time at t=6.
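The ready-queue bookkeeping is easy to get wrong by hand, so here is a small round-robin simulator. A sketch (it uses the convention from the explanation: a process arriving during, or exactly at the end of, a time slice enters the ready queue before the preempted process):

```python
from collections import deque

def rr_order(procs, quantum):
    # procs: list of (name, arrival, burst), sorted by arrival time.
    rem = {name: burst for name, _, burst in procs}
    queue, done, t, i = deque(), [], 0, 0

    def admit(now):
        nonlocal i
        while i < len(procs) and procs[i][1] <= now:
            queue.append(procs[i][0])
            i += 1

    admit(0)
    while queue or i < len(procs):
        if not queue:              # CPU idle until the next arrival
            t = procs[i][1]
            admit(t)
            continue
        name = queue.popleft()
        run = min(quantum, rem[name])
        t += run
        rem[name] -= run
        admit(t)                   # arrivals during/at end of slice first
        if rem[name] == 0:
            done.append(name)      # completion order
        else:
            queue.append(name)     # then the preempted process
    return done

print(rr_order([("P1", 0, 5), ("P2", 1, 7), ("P3", 3, 4)], 2))
# ['P1', 'P3', 'P2']
```

FCFS is just the arrival order P1, P2, P3, so the pair matches option C.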

Question 8
WRONG
Consider the following table of arrival time and burst time for three processes P0, P1 and P2.
Process Arrival time Burst Time

P0 0 ms 9 ms

P1 1 ms 4 ms

P2 2 ms 9 ms

The pre-emptive shortest job first scheduling algorithm is used. Scheduling is carried out only at arrival or
completion of processes. What is the average waiting time for the three processes?
5.0 ms
4.33 ms

C 6.33

D 7.33

GATE CS 2011 CPU Scheduling


Discuss it

Question 8 Explanation:
See Question 4 of http://www.geeksforgeeks.org/operating-systems-set-6/

Question 9

WRONG
Which of the following statements are true?
I. Shortest remaining time first scheduling may cause starvation

II. Preemptive scheduling may cause starvation

III. Round robin is better than FCFS in terms of response time

A I only

B I and III only

II and III only


I, II and III
GATE CS 2010 CPU Scheduling
Discuss it

Question 9 Explanation:
I) Shortest remaining time first scheduling is a pre-emptive version of shortest job first. In SRTF, the job
with the shortest remaining CPU burst is scheduled first. This may cause starvation: shorter processes
may keep arriving, so a process with a long CPU burst may never get the CPU.
II) Pre-emptive scheduling means a process can be stopped before completing its execution so that
another process can run; the stopped process later resumes from where it was stopped. Suppose
process P1 is executing and a higher-priority process P2 arrives in the ready queue; P1 is pre-empted
and P2 is brought onto the CPU. If higher-priority processes keep arriving, P1 is pre-empted again and
again and may starve.
III) Round robin gives better response time than FCFS. In FCFS a running process executes its complete
burst before the next process starts, but in round robin a process runs for at most one time quantum, so
all processes get the CPU within a bounded time.
So I, II and III are all true, and option (D) is the correct answer.
Reference: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/5_CPU_Scheduling.html
http://www.geeksforgeeks.org/operating-systems-set-7/ This solution is contributed by Nitika Bansal

Question 10

CORRECT
In the following process state transition diagram for a uniprocessor system, assume that there are always
some processes in the ready state: Now consider the following statements:

I. If a process makes a transition D, it would result in

another process making transition A immediately.

II. A process P2 in blocked state can make transition E

while another process P1 is in running state.

III. The OS uses preemptive scheduling.

IV. The OS uses non-preemptive scheduling.

Which of the above statements are TRUE?

A I and II

B I and III

II and III

D II and IV

GATE-CS-2009 CPU Scheduling


Discuss it

Question 10 Explanation:
I is false. If a process makes a transition D, it would result in another process making transition B, not A. II
is true. A process can move to ready state when I/O completes irrespective of other process being in
running state or not. III is true because there is a transition from running to ready state. IV is false as the
OS uses preemptive scheduling.

Question 11

CORRECT
Group 1 contains some CPU scheduling algorithms and Group 2 contains some applications. Match
entries in Group 1 to entries in Group 2.
Group I Group II

(P) Gang Scheduling (1) Guaranteed Scheduling

(Q) Rate Monotonic Scheduling (2) Real-time Scheduling

(R) Fair Share Scheduling (3) Thread Scheduling

P3Q2R1

B P1Q2R3

C P2Q3R1

D P1Q3R2

GATE-CS-2007 CPU Scheduling


Discuss it

Question 11 Explanation:
See question 2 of http://www.geeksforgeeks.org/operating-systems-set-12/

Question 12

WRONG
An operating system uses Shortest Remaining Time first (SRT) process scheduling algorithm. Consider
the arrival times and execution times for the following processes:
Process Execution time Arrival time

P1 20 0

P2 25 15

P3 10 30

P4 15 45

What is the total waiting time for process P2?


5
15

C 40

D 55
GATE-CS-2007 CPU Scheduling
Discuss it

Question 12 Explanation:
Shortest remaining time, also known as shortest remaining time first (SRTF), is a pre-emptive version of
shortest job next scheduling: the process with the smallest amount of time remaining until completion is
selected to execute. Since the currently executing process is by definition the one with the shortest
remaining time, and that time only decreases as execution progresses, a process runs until it completes
or a new process arrives that needs less time.

The Gantt chart of execution: P1 (0-20), P2 (20-30), P3 (30-40), P2 (40-55)

At time 0, P1 is the only process; it runs for 15 time units. At time 15, P2 arrives, but P1 has the shortest
remaining time, so P1 continues for 5 more time units. At time 20, P2 is the only ready process, so it runs
for 10 time units. At time 30, P3 arrives and becomes the shortest-remaining-time process, so it runs for
10 time units. At time 40, P2 runs again as it is the only ready process; it runs for 5 time units. At time 45,
P4 arrives, but P2 has the shortest remaining time, so P2 continues for 10 more time units and completes
its execution at time 55.

Turnaround time is the total time between submission of a process and its completion; waiting time is the
time a process spends in the ready queue, i.e. the difference between turnaround time and burst time.
Total turnaround time for P2 = Completion time - Arrival time = 55 - 15 = 40
Total waiting time for P2 = turnaround time - Burst time = 40 - 25 = 15
See question 3 of http://www.geeksforgeeks.org/operating-systems-set-12/ This solution is contributed
by Nitika Bansal
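The schedule can be cross-checked with a unit-step SRTF simulation. A sketch (the helper name is mine; ties go to the earliest-listed process):

```python
def srtf_waiting_times(procs):
    # procs: list of (arrival, burst); returns per-process waiting times.
    n = len(procs)
    rem = [b for _, b in procs]
    finish = [0] * n
    t = 0
    while any(rem):
        ready = [i for i in range(n) if procs[i][0] <= t and rem[i] > 0]
        if not ready:            # CPU idle, advance the clock
            t += 1
            continue
        i = min(ready, key=lambda j: rem[j])   # shortest remaining time
        rem[i] -= 1
        t += 1
        if rem[i] == 0:
            finish[i] = t
    # waiting = turnaround - burst = (finish - arrival) - burst
    return [finish[i] - a - b for i, (a, b) in enumerate(procs)]

# P1..P4 from the question: (arrival time, execution time)
w = srtf_waiting_times([(0, 20), (15, 25), (30, 10), (45, 15)])
print(w[1])   # waiting time of P2 = 15
```

The same helper reproduces Question 15's answer: for (0, 12), (2, 4), (3, 6), (8, 5) the average waiting time comes out to 5.5 ms.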

Question 13

WRONG
Consider three CPU-intensive processes, which require 10, 20 and 30 time units and arrive at times 0, 2
and 6, respectively. How many context switches are needed if the operating system implements a shortest
remaining time first scheduling algorithm? Do not count the context switches at time zero and at the end.
1
2
C 3

D 4

GATE-CS-2006 CPU Scheduling


Discuss it

Question 13 Explanation:
Shortest remaining time, also known as shortest remaining time first (SRTF), is a scheduling method that
is a pre-emptive version of shortest job next scheduling. In this scheduling algorithm, the process with the
smallest amount of time remaining until completion is selected to execute. Since the currently executing
process is the one with the shortest amount of time remaining by definition, and since that time should
only reduce as execution progresses, processes will always run until they complete or a new process is
added that requires a smaller amount of time. Solution: Let the three processes be P0, P1 and P2 with arrival
times 0, 2 and 6 respectively and CPU burst times 10, 20 and 30 respectively. At time 0, P0 is the only
available process so it runs. At time 2, P1 arrives, but P0 has the shortest remaining time, so it continues.
At time 6, P2 also arrives, but P0 still has the shortest remaining time, so it continues. At time 10, P1 is
scheduled as it is the shortest remaining time process. At time 30, P2 is scheduled. Only two context
switches are needed: P0 to P1 and P1 to P2. See question 1 of
http://www.geeksforgeeks.org/operating-systems-set-14/ This solution is contributed by Nitika Bansal

Question 14

WRONG
Three processes A, B and C each execute a loop of 100 iterations. In each iteration of the loop, a process
performs a single computation that requires tc CPU milliseconds and then initiates a single I/O operation
that lasts for tio milliseconds. It is assumed that the computer where the processes execute has sufficient
number of I/O devices and the OS of the computer assigns different I/O devices to each process. Also,
the scheduling overhead of the OS is negligible. The processes have the following characteristics:
Process id tc tio
A 100 ms 500 ms
B 350 ms 500 ms
C 200 ms 500 ms
The processes A, B, and C are started at times 0, 5 and 10 milliseconds respectively, in a pure time
sharing system (round robin scheduling) that uses a time slice of 50 milliseconds. The time in
milliseconds at which process C would complete its first I/O operation is ___________.

A 500

1000
2000

D 10000

GATE-CS-2014-(Set-2) CPU Scheduling


Discuss it
Question 14 Explanation:
There are three processes A, B and C that run in round-robin manner with a time slice of 50 ms,
started at 0, 5 and 10 milliseconds respectively.

The processes are executed in the order A, B, C, A (50 + 50 + 50 + 50; 200 ms have passed).
Now A has completed its 100 ms of computation and goes for I/O.

Then B, C, B, C, B, C (50 + 50 + 50 + 50 + 50 + 50; 300 ms more, so 500 ms have passed).

C has now completed its 200 ms of computation, so it starts its first I/O at 500 ms. The I/O needs
500 ms to finish, so C would complete its first I/O operation at 1000 ms.
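The slice accounting above can be sketched in a few lines (the slice order is taken from the explanation; the function name is mine):

```python
def c_first_io_completion(slice_ms=50, c_burst=200, io_ms=500):
    # Slice order until C finishes its compute burst: A, B, C, A,
    # then B, C repeating (A leaves for I/O after its slice at 200 ms).
    order = ["A", "B", "C", "A", "B", "C", "B", "C", "B", "C"]
    t = c_cpu = 0
    for p in order:
        t += slice_ms
        if p == "C":
            c_cpu += slice_ms
        if c_cpu == c_burst:
            break                 # C's compute burst is done at time t
    return t + io_ms              # its first I/O starts here, lasts io_ms

print(c_first_io_completion())    # 1000
```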

Question 15

WRONG
An operating system uses shortest remaining time first scheduling algorithm for pre-emptive scheduling of
processes. Consider the following set of processes with their arrival times and CPU burst times (in
milliseconds):
Process Arrival Time Burst Time

P1 0 12

P2 2 4

P3 3 6

P4 8 5

The average waiting time (in milliseconds) of the processes is _________.

A 4.5

B 5.0

5.5
6.5
GATE-CS-2014-(Set-3) CPU Scheduling
Discuss it

Question 15 Explanation:
Process Arrival Time Burst Time

P1 0 12

P2 2 4

P3 3 6

P4 8 5

Burst Time - the total CPU time needed by a process for its complete execution.
Waiting Time - how much time a process spends in the ready queue waiting for its turn on the CPU.
Now, the Gantt chart for the above processes is:

P1 - 0 to 2 milliseconds

P2 - 2 to 6 milliseconds

P3 - 6 to 12 milliseconds

P4 - 12 to 17 milliseconds

P1 - 17 to 27 milliseconds

Process P1 arrived at time 0, so the CPU started executing it. After 2 units of time P2 arrived; the burst
time of P2 was 4 units while the remaining time of P1 was 10 units, so the CPU switched to P2, putting
P1 in the waiting state (pre-emptive shortest remaining time first scheduling). Having the largest
remaining time, P1 was executed by the CPU in the end.
Now calculating the waiting time of each process:

P1 -> 17 -2 = 15

P2 -> 0

P3 -> 6 - 3 = 3

P4 -> 12 - 8 = 4

Hence total waiting time of all the processes is

= 15+0+3+4=22

Total no of processes = 4

Average waiting time = 22 / 4 = 5.5


Hence C is the answer.

Question 16

WRONG
Consider the following set of processes, with the arrival times and the CPU-burst times given in
milliseconds
Process Arrival Time Burst Time

P1 0 5

P2 1 3

P3 2 3

P4 4 1

What is the average turnaround time for these processes with the preemptive shortest remaining
processing time first (SRPT) algorithm ?
5.50

B 5.75

C 6.00

6.25
GATE-CS-2004 CPU Scheduling
Discuss it

Question 16 Explanation:
The following is the Gantt chart of execution:

P1 (0-1), P2 (1-4), P4 (4-5), P3 (5-8), P1 (8-12)

Turn Around Time = Completion Time - Arrival Time
Avg Turn Around Time = (12 + 3 + 6 + 1)/4 = 5.50

Question 17
A uni-processor computer system only has two processes, both of which alternate 10ms CPU bursts with
90ms I/O bursts. Both the processes were created at nearly the same time. The I/O of both processes can
proceed in parallel. Which of the following scheduling strategies will result in the least CPU utilization
(over a long period of time) for this system ?

A First come first served scheduling

B Shortest remaining time first scheduling

C Static priority scheduling with different priorities for the two processes

D Round robin scheduling with a time quantum of 5 ms


GATE-CS-2003 CPU Scheduling
Discuss it

Question 18

CORRECT
Which of the following scheduling algorithms is non-preemptive?

A Round Robin

First-In First-Out

C Multilevel Queue Scheduling

D Multilevel Queue Scheduling with Feedback

GATE-CS-2002 CPU Scheduling


Discuss it

Question 18 Explanation:
Round Robin - preemption takes place when the time quantum expires.
First-In First-Out - no preemption; a process, once started, completes before another takes over.
Multilevel Queue Scheduling - preemption takes place when a process of higher priority arrives.
Multilevel Queue Scheduling with Feedback - preemption takes place when a process of higher priority
arrives, or when the quantum of the high-priority queue expires and the process must move to a lower-
priority queue.
So, B is the correct choice.

Question 19

WRONG
Consider a set of n tasks with known runtimes r1, r2, .... rn to be run on a uniprocessor machine. Which of
the following processor scheduling algorithms will result in the maximum throughput?
Round-Robin
Shortest-Job-First

C Highest-Response-Ratio-Next

D First-Come-First-Served

GATE-CS-2001 CPU Scheduling


Discuss it

Question 19 Explanation:

Throughput is the total number of tasks completed per unit time.
Shortest job first scheduling is a scheduling policy that selects the waiting process with the smallest
execution time to execute next.
Thus, under shortest job first, short jobs finish as early as possible, so the number of tasks completed
in any interval is maximized and throughput is the highest.

Thus, option (B) is correct.


Question 20

WRONG
Consider a uniprocessor system executing three tasks T1, T2 and T3, each of which is composed of an
infinite sequence of jobs (or instances) which arrive periodically at intervals of 3, 7 and 20 milliseconds,
respectively. The priority of each task is the inverse of its period and the available tasks are scheduled in
order of priority, with the highest priority task scheduled first. Each instance of T1, T2 and T3 requires an
execution time of 1, 2 and 4 milliseconds, respectively. Given that all tasks initially arrive at the beginning
of the 1st millisecond and task preemptions are allowed, the first instance of T3 completes its execution
at the end of ______________ milliseconds.

A 5

10
12

D 15

GATE-CS-2015 (Set 1) CPU Scheduling


Discuss it

Question 20 Explanation:
Periods of T1, T2 and T3 are 3 ms, 7 ms and 20 ms. Since priority is the inverse of the period, T1 is the
highest-priority task, then T2, and finally T3. Every instance of T1 requires 1 ms of execution, every
instance of T2 requires 2 ms, and every instance of T3 requires 4 ms.

Initially T1, T2 and T3 are all ready, and T1 is preferred. The second instances of T1, T2 and T3 arrive
at 3, 7 and 20 respectively; the third instances of T1 and T2 arrive at 6 and 14 respectively.

Time-Interval   Task
0-1             T1
1-2             T2
2-3             T2
3-4             T1  [second instance of T1 arrives at 3]
4-5             T3
5-6             T3
6-7             T1  [third instance of T1 arrives at 6; T3 is preempted]
7-8             T2  [second instance of T2 arrives at 7]
8-9             T2
9-10            T1  [fourth instance of T1 arrives at 9]
10-11           T3
11-12           T3  [first instance of T3 completes at 12]
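The table can be reproduced with a unit-step fixed-priority simulation. A sketch (the helper name is mine; it assumes jobs are released at every multiple of their period and that priority is strictly by period, as the question states):

```python
def rm_first_completion(periods, costs, task, horizon=100):
    # Rate-monotonic: the shorter the period, the higher the priority.
    n = len(periods)
    rem = [0] * n
    for t in range(horizon):
        for i in range(n):
            if t % periods[i] == 0:      # a new job of task i is released
                rem[i] = costs[i]
        ready = [i for i in range(n) if rem[i] > 0]
        if not ready:
            continue                     # CPU idle this time unit
        i = min(ready, key=lambda j: periods[j])  # highest priority runs
        rem[i] -= 1
        if i == task and rem[i] == 0:
            return t + 1                 # completion time of watched task
    return None

print(rm_first_completion([3, 7, 20], [1, 2, 4], task=2))   # 12
```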

Question 21

CORRECT
The maximum number of processes that can be in Ready state for a computer system with n CPUs is

A n

B n2

C 2n

Independent of n
GATE-CS-2015 (Set 3) CPU Scheduling
Discuss it

Question 21 Explanation:
The number of processes in the Ready state does not depend on the number of CPUs. Even a
single-processor system may have a large number of processes waiting in the ready queue.

Question 22

WRONG
For the processes listed in the following table, which of the following scheduling schemes will give the
lowest average turnaround time?
Process Arrival Time Processing Time
A 0 3

B 1 6

C 4 4

D 6 2

A First Come First Serve

Non-preemptive Shortest Job First


Shortest Remaining Time

D Round Robin with Quantum value two

GATE-CS-2015 (Set 3) CPU Scheduling


Discuss it

Question 22 Explanation:
Turnaround time is the total time between the submission of a process and its completion:
Turnaround Time = Completion Time - Arrival Time. The execution orders are:
FCFS = First Come First Serve: A, B, C, D
SJF = Non-preemptive Shortest Job First: A, B, D, C
SRT = Shortest Remaining Time: A(3), B(1), C(4), D(2), B(5)
RR = Round Robin with Quantum value 2: A(2), B(2), A(1), C(2), B(2), D(2), C(2), B(2)
Pr Arr.Time P.Time FCFS SJF SRT RR

A 0 3 3-0=3 3-0=3 3-0=3 5-0=5

B 1 6 9-1=8 9-1=8 15-1=14 15-1=14

C 4 4 13-4=9 15-4=11 8-4=4 13-4=9

D 6 2 15-6=9 11-6=5 10-6=4 11-6=5

Average 7.25 6.75 6.25 8.25

Shortest Remaining Time produces minimum average turn-around time.
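The averages in the table can be cross-checked with a short simulation. This is a minimal Python sketch using the process data from the question (Round Robin is omitted for brevity, and the SRT loop uses a 1 ms tick; the tie at t = 6 happens to resolve in favour of the already-running process C, matching the schedule above).

```python
# Average turnaround times for A(0,3), B(1,6), C(4,4), D(6,2) under
# FCFS, non-preemptive SJF and preemptive SRT.
procs = {"A": (0, 3), "B": (1, 6), "C": (4, 4), "D": (6, 2)}  # name: (arrival, burst)

def avg_tat(completion):
    return sum(completion[p] - procs[p][0] for p in procs) / len(procs)

def fcfs():
    t, done = 0, {}
    for p, (arr, burst) in sorted(procs.items(), key=lambda kv: kv[1][0]):
        t = max(t, arr) + burst
        done[p] = t
    return avg_tat(done)

def sjf():  # non-preemptive shortest job first
    t, done, pending = 0, {}, dict(procs)
    while pending:
        ready = {p: ab for p, ab in pending.items() if ab[0] <= t}
        if not ready:
            t += 1
            continue
        p = min(ready, key=lambda q: ready[q][1])   # shortest burst wins
        t += pending.pop(p)[1]
        done[p] = t
    return avg_tat(done)

def srt():  # preemptive shortest remaining time, 1 ms tick
    rem = {p: b for p, (a, b) in procs.items()}
    done, t = {}, 0
    while rem:
        ready = [p for p in rem if procs[p][0] <= t]
        if ready:
            p = min(ready, key=lambda q: rem[q])    # least remaining work
            rem[p] -= 1
            if rem[p] == 0:
                del rem[p]
                done[p] = t + 1
        t += 1
    return avg_tat(done)

print(fcfs(), sjf(), srt())  # 7.25 6.75 6.25
```

The output matches the table: SRT gives the lowest average turnaround time.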

Question 23

WRONG
Which of the following is FALSE about SJF (Shortest Job First Scheduling)?
S1: It causes minimum average waiting time

S2: It can cause starvation

A Only S1

B Only S2

Both S1 and S2
Neither S1 nor S2
GATE-CS-2015 (Mock Test) CPU Scheduling
Discuss it

Question 23 Explanation:
1. Both SJF and Shortest Remaining Time First algorithms may cause starvation. Consider a
situation where a long process is in the ready queue and shorter processes keep coming.
2. SJF is optimal in terms of average waiting time for a given set of processes, but the problem
with SJF is how to know/predict the execution time of the next job.
Refer Process Scheduling for more details.

Question 24

WRONG
Two concurrent processes P1 and P2 use four shared resources R1, R2, R3 and R4, as shown below.
P1: Compute; Use R1; Use R2; Use R3; Use R4;
P2: Compute; Use R1; Use R2; Use R3; Use R4;
Both processes are started at the same time, and each resource can be accessed by only one process at
a time. The following scheduling constraints exist between the access of resources by the processes:
P2 must complete use of R1 before P1 gets access to R1
P1 must complete use of R2 before P2 gets access to R2.
P2 must complete use of R3 before P1 gets access to R3.
P1 must complete use of R4 before P2 gets access to R4.
There are no other scheduling constraints between the processes. If only binary semaphores are used to
enforce the above scheduling constraints, what is the minimum number of binary semaphores needed?

A 1

2
3

D 4

CPU Scheduling Gate IT 2005


Discuss it

Question 24 Explanation:

We use two semaphores : A and B. A is initialized to 0 and B is initialized to 1.

P1:

Compute;
Wait(A);
Use R1;
Use R2;
Signal(B);
Wait(A);
Use R3;
Use R4;
Signal(B);

P2:

Compute;
Wait(B);
Use R1;
Signal(A);
Wait(B);
Use R2;
Use R3;
Signal(A);
Wait(B);
Use R4;
Signal(B);

In process P1, control is initially blocked in Wait(A) because A = 0. In process P2, Wait(B)
decrements the value of B to 0. Now, P2 uses the resource R1 and increments the value of A to 1 so that
process P1 can enter its critical section and use resource R1.
Thus, P2 will complete use of R1 before P1 gets access to R1.
Now, in P2 values of B = 0. So, P2 can not use resource R2 till P1 uses R2 and calls function Signal(B) to
increment the value of B to 1. Thus, P1 will complete use of R2 before P2 gets access to R2.
Now, semaphore A = 0. So, P1 can not execute further and gets stuck in while loop of function Wait(A).
Process P2 uses R3 and increments the value of semaphore A to 1.Now, P1 can enter its critical section
to use R3. Thus, P2 will complete use of R3 before P1 gets access to R3.
Now, P1 will use R4 and increment the value of B to 1 so that P2 can enter its critical section to use R4.
Thus, P1 will complete use of R4 before P2 gets access to R4.
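The two-semaphore protocol can also be exercised with Python threads. This is a sketch: the `use` calls stand in for the real resource accesses, the event log is recorded, and the four ordering constraints are asserted afterwards (the protocol fully serializes the accesses, so the log order is deterministic).

```python
import threading

A = threading.Semaphore(0)    # initialized to 0
B = threading.Semaphore(1)    # initialized to 1
log = []                      # (process, resource) in the order used

def use(proc, res):
    log.append((proc, res))

def p1():
    A.acquire(); use("P1", "R1"); use("P1", "R2"); B.release()
    A.acquire(); use("P1", "R3"); use("P1", "R4"); B.release()

def p2():
    B.acquire(); use("P2", "R1"); A.release()
    B.acquire(); use("P2", "R2"); use("P2", "R3"); A.release()
    B.acquire(); use("P2", "R4"); B.release()

t1, t2 = threading.Thread(target=p1), threading.Thread(target=p2)
t1.start(); t2.start(); t1.join(); t2.join()

order = {e: i for i, e in enumerate(log)}
assert order[("P2", "R1")] < order[("P1", "R1")]   # P2 uses R1 before P1
assert order[("P1", "R2")] < order[("P2", "R2")]   # P1 uses R2 before P2
assert order[("P2", "R3")] < order[("P1", "R3")]   # P2 uses R3 before P1
assert order[("P1", "R4")] < order[("P2", "R4")]   # P1 uses R4 before P2
print("all four ordering constraints hold")
```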

Thus, option (B) is correct.

Please comment below if you find anything wrong in the above post.

Question 25

WRONG
We wish to schedule three processes P1, P2 and P3 on a uniprocessor system. The priorities, CPU time
requirements and arrival times of the processes are as shown below.
Process Priority CPU time required Arrival time (hh:mm:ss)
P1 10(highest) 20 sec 00:00:05
P2 9 10 sec 00:00:03
P3 8 (lowest) 15 sec 00:00:00
We have a choice of preemptive or non-preemptive scheduling. In preemptive scheduling, a late-arriving
higher priority process can preempt a currently running process with lower priority. In non-preemptive
scheduling, a late-arriving higher priority process must wait for the currently executing process to
complete before it can be scheduled on the processor. What are the turnaround times (time from arrival till
completion) of P2 using preemptive and non-preemptive scheduling respectively.
30 sec, 30 sec
B 30 sec, 10 sec

C 42 sec, 42 sec

30 sec, 42 sec
CPU Scheduling Gate IT 2005
Discuss it

Question 25 Explanation:
For non-preemptive scheduling, P3 (arrival 0) runs to completion, then the higher-priority P1, then P2:

| P3 | P1 | P2 |
0 15 35 45

Turnaround Time = Completion Time - Arrival Time = 45 - 3 = 42

For preemptive scheduling, P2 preempts P3 at t = 3, and P1 preempts P2 at t = 5:

| P3 | P2 | P1 | P2 | P3 |
0 3 5 25 33 45

Turnaround Time = Completion Time - Arrival Time = 33 - 3 = 30

Question 26

WRONG
Consider an arbitrary set of CPU-bound processes with unequal CPU burst lengths submitted at the same
time to a computer system. Which one of the following process scheduling algorithms would minimize the
average waiting time in the ready queue?
Shortest remaining time first

B Round-robin with time quantum less than the shortest CPU burst

Uniform random

D Highest priority first with priority proportional to CPU burst length

CPU Scheduling GATE-CS-2016 (Set 1)


Discuss it

Question 26 Explanation:
Waiting time is the time for which a process is ready to run but has not yet been executed by the CPU
scheduler. Among CPU scheduling algorithms, shortest job first is optimal, i.e. it gives minimum average
waiting time (and minimum average turnaround time), and shortest remaining time first is the preemptive
version of shortest job first. In general, shortest remaining time first may lead to starvation: if short
processes keep arriving, the currently running long process keeps getting preempted and may never
complete. Here, however, all processes arrive at the same time, so starvation is not an issue. So, the
answer is Shortest remaining time first, which is option (A).
Reference: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/5_CPU_Scheduling.html
http://geeksquiz.com/gate-notes-operating-system-process-scheduling/
This solution is contributed by Nitika Bansal

Question 27

WRONG
Consider the following processes, with the arrival time and the length of the CPU burst given in
milliseconds. The scheduling algorithm used is preemptive shortest remaining-time first.

Process Arrival Time Burst Time
P1 0 10
P2 3 6
P3 7 1
P4 8 3

The average turn around time of these processes is ___________ milliseconds. Note: This question
was asked as Numerical Answer Type.
8.25
10.25

C 6.35

D 4.25

CPU Scheduling GATE-CS-2016 (Set 2)


Discuss it

Question 27 Explanation:
PreEmptive Shortest Remaining time first scheduling, i.e. that processes will be scheduled on the CPU
which will be having least remaining burst time( required time at the CPU). The processes are scheduled
and executed as given in the below Gantt chart. Turn
Around Time(TAT) = Completion Time(CT) - Arrival Time(AT) TAT for P1 = 20 - 0 = 20 TAT for P2 = 10 - 3
= 7 TAT for P3 = 8- 7 = 1 TAT for P4 = 13 - 8 = 5 Hence, Average TAT = Total TAT of all the processes / no
of processes = ( 20 + 7 + 1 + 5 ) / 4 = 33 / 4 = 8.25 Thus, A is the correct choice.

Question 28

CORRECT
Consider n jobs J1, J2,......Jn such that job Ji has execution time ti and a non-negative integer weight wi. The
weighted mean completion time of the jobs is defined to be , where Ti is the completion time of
job Ji. Assuming that there is only one processor available, in what order must the jobs be executed in
order to minimize the weighted mean completion time of the jobs?

A Non-decreasing order of ti

B Non-increasing order of wi

C Non-increasing order of witi

None-increasing order of wi/ti


CPU Scheduling Gate IT 2007
Discuss it

Question 29

CORRECT
Assume every process requires 3 seconds of service time in a system with single processor. If new
processes are arriving at the rate of 10 processes per minute, then estimate the fraction of time CPU is
busy in system?

A 20%

B 30%

50%

D 60%

CPU Scheduling GATE 2017 Mock


Discuss it
Question 29 Explanation:
10 processes arrive per minute, so one process arrives every 60/10 = 6 sec on average (inter-arrival time).
Each process needs 3 sec of service time, so the CPU is busy 3/6 * 100 = 50% of the time.
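The same arithmetic, written out as a trivial Python sketch:

```python
# Each process needs 3 s of CPU; one process arrives every 60/10 = 6 s
# on average, so the CPU is busy half of the time.
interarrival = 60 / 10          # seconds between arrivals
service_time = 3                # seconds of CPU per process
utilization = service_time / interarrival
print(f"{utilization:.0%}")     # 50%
```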


Memory Management

Question 1

WRONG
Which of the following page replacement algorithms suffers from Belady's anomaly?
FIFO
LRU

C Optimal Page Replacement

D Both LRU and FIFO

Memory Management
Discuss it

Question 1 Explanation:
Belady's anomaly proves that it is possible to have more page faults when increasing the number of page
frames while using the First In First Out (FIFO) page replacement algorithm. See the example given
on the Wiki page.
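A short FIFO simulator makes the anomaly easy to see. This is an illustrative Python sketch using the classic reference string for which 4 frames produce more faults than 3:

```python
# FIFO page-fault count for a reference string and a given number of
# frames; the classic string below exhibits Belady's anomaly.
from collections import deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:        # evict the oldest resident page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10
```

With 3 frames the string causes 9 faults; with 4 frames it causes 10.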

Question 2

WRONG
What is the swap space in the disk used for?
Saving temporary html pages
Saving process data

C Storing the super-block

D Storing device drivers

Memory Management
Discuss it

Question 2 Explanation:
Swap space is typically used to store process data. See this for more details.

Question 3
WRONG
Increasing the RAM of a computer typically improves performance because:
Virtual memory increases

B Larger RAMs are faster

Fewer page faults occur

D Fewer segmentation faults occur

Memory Management
Discuss it

Question 3 Explanation:
When there is more RAM, there would be more mapped virtual pages in physical memory, hence
fewer page faults. A page fault causes performance degradation as the page has to be loaded from
secondary device.

Question 4

WRONG
A computer system supports 32-bit virtual addresses as well as 32-bit physical addresses. Since the
virtual address space is of the same size as the physical address space, the operating system designers
decide to get rid of the virtual memory entirely. Which one of the following is true?
Efficient implementation of multi-user support is no longer possible

B The processor cache organization can be made more efficient now

Hardware support for memory management is no longer needed

D CPU scheduling can be made more efficient now

Memory Management
Discuss it

Question 4 Explanation:
For supporting virtual memory, special hardware support is needed from Memory Management Unit.
Since operating system designers decide to get rid of the virtual memory entirely, hardware support for
memory management is no longer needed

Question 5

WRONG
A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a translation look-
aside buffer (TLB) which can hold a total of 128 page table entries and is 4-way set associative. The
minimum size of the TLB tag is:
11 bits

B 13 bits

15 bits
D 20 bits

Memory Management
Discuss it

Question 5 Explanation:
Size of a page = 4 KB = 2^12 bytes, so the page offset takes 12 bits. Number of bits in the virtual page
number = 32 - 12 = 20. If there are n cache lines in a set, the cache placement is called n-way set
associative. Since the TLB is 4-way set associative and can hold 128 (2^7) page table entries in total, the
number of sets = 2^7 / 4 = 2^5. So 5 bits are needed to index a set, and the tag takes the remaining
20 - 5 = 15 bits.

Question 6

WRONG
Virtual memory is
Large secondary memory

B Large main memory

Illusion of large main memory

D None of the above

Memory Management
Discuss it

Question 6 Explanation:
Virtual memory is illusion of large main memory.

Question 7

WRONG
Page fault occurs when
When a requested page is in memory
When a requested page is not in memory

C When a page is corrupted

D When an exception is thrown

Memory Management
Discuss it

Question 7 Explanation:
Page fault occurs when a requested page is mapped in virtual address space but not present in memory.

Question 8

WRONG
Thrashing occurs when
When a page fault occurs
Processes on system frequently access pages not in memory

C Processes on system are in running state

D Processes on system are in waiting state

Memory Management
Discuss it

Question 8 Explanation:
Thrashing occurs when the processes on the system require more memory than is available. If processes
do not have enough pages, the page-fault rate is very high. This leads to low CPU utilization: the
operating system spends most of its time swapping pages to and from disk. This situation is called thrashing.

Question 9

WRONG
A computer uses 46-bit virtual address, 32-bit physical address, and a three-level paged page table
organization. The page table base register stores the base address of the first-level table (T1), which
occupies exactly one page. Each entry of T1 stores the base address of a page of the second-level table
(T2). Each entry of T2 stores the base address of a page of the third-level table (T3). Each entry of T3
stores a page table entry (PTE). The PTE is 32 bits in size. The processor used in the computer has a 1
MB 16-way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes.
What is the size of a page in KB in this computer? (GATE 2013)
A 2

B 4

C 8

D 16

Memory Management
Discuss it

Question 9 Explanation:
Let the page size be 2^x bytes.

Size of T1 = 2^x bytes
(This is because T1 occupies exactly one page.)

Number of entries in T1 = (2^x) / 4
(This is because each page table entry is 32 bits, or 4 bytes, in size.)

Number of entries in T1 = Number of second-level page tables
(Because each first-level entry stores the base address of a page of the second-level table.)

Total size of the second-level page tables = ((2^x) / 4) * (2^x)

Similarly, number of entries in the second-level page tables = Number of third-level
page tables = ((2^x) / 4) * ((2^x) / 4)

Total size of the third-level page tables = ((2^x) / 4) * ((2^x) / 4) * (2^x)

Similarly, total number of entries (pages) in all third-level page tables
= ((2^x) / 4) * ((2^x) / 4) * ((2^x) / 4) = 2^(3x - 6)

Size of virtual memory = 2^46 bytes
Number of pages in virtual memory = (2^46) / (2^x) = 2^(46 - x)

Total number of entries in the third-level page tables = Number of pages in virtual memory:

2^(3x - 6) = 2^(46 - x)
3x - 6 = 46 - x
4x = 52
x = 13

So the page size is 2^13 bytes = 8 KB.
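The final equation can be checked mechanically. A small Python sketch of the solve step:

```python
# The derivation reduces to 3x - 6 = 46 - x, where 2^x is the page size
# in bytes; find x and report the page size in KB.
for x in range(1, 32):
    l3_entries_log2 = 3 * x - 6       # log2 of entries across all T3 tables
    virtual_pages_log2 = 46 - x       # log2 of pages in the 2^46-byte space
    if l3_entries_log2 == virtual_pages_log2:
        print(x, 2 ** x // 1024)      # 13 8  -> page size is 8 KB
        break
```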

Question 10

CORRECT
Consider data given in the above question. What is the minimum number of page colours needed to
guarantee that no two synonyms map to different sets in the processor cache of this computer? (GATE
CS 2013)

A 2

B 4

D 16

Memory Management
Discuss it

Question 10 Explanation:
The cache is 1 MB, 16-way set associative, virtually indexed physically tagged (VIPT), with a 64-byte
cache block size.

Number of blocks = 2^20 / 2^6 = 2^14
Number of sets = 2^14 / 2^4 = 2^10

VA(46)
+-------------------------------+
tag(30) , Set(10) , block offset(6)
+-------------------------------+

In a VIPT cache, if the number of page-offset bits equals the (set index + block offset) bits, then only
one page colour is sufficient. Here the set index and block offset together occupy 10 + 6 = 16 bits, while
the page offset (8 KB pages, from the previous question) is 13 bits. The cache set index therefore
overlaps the physical page number by 16 - 13 = 3 bits, so 2^3 = 8 page colours are required
(option C is the answer).

Question 11

WRONG
Consider the virtual page reference string 1, 2, 3, 2, 4, 1, 3, 2, 4, 1 On a demand paged virtual memory
system running on a computer system that main memory size of 3 pages frames which are initially empty.
Let LRU, FIFO and OPTIMAL denote the number of page faults under the corresponding page
replacements policy. Then

A OPTIMAL < LRU < FIFO

OPTIMAL < FIFO < LRU


OPTIMAL = LRU

D OPTIMAL = FIFO
GATE CS 2012 Memory Management
Discuss it

Question 11 Explanation:
First In First Out (FIFO): this is the simplest page replacement algorithm. The operating system keeps
all pages in memory in a queue, with the oldest page at the front; when a page needs to be replaced, the
page at the front of the queue is selected for removal.
Optimal page replacement: the page that will not be used for the longest duration of time in the future
is replaced.
Least Recently Used (LRU): the page that was least recently used is replaced.
Solution: the virtual page reference string is 1, 2, 3, 2, 4, 1, 3, 2, 4, 1 and main memory has 3 page frames.
For FIFO: total number of page faults is 6.
For Optimal: total number of page faults is 5.
For LRU: total number of page faults is 9.
The Optimal count is 5, FIFO 6 and LRU 9, so OPTIMAL < FIFO < LRU and option (B) is the correct answer.
See http://www.geeksforgeeks.org/operating-systems-set-5/
This solution is contributed by Nitika Bansal
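The three counts can be verified with a small simulator. This Python sketch implements all three policies in one helper; for OPT the victim is the resident page whose next use lies farthest in the future (pages never used again count as farthest of all):

```python
# Page-fault counts for the reference string under FIFO, LRU and OPT
# with 3 frames.
def count_faults(refs, nframes, policy):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            if policy == "LRU":               # refresh recency on a hit
                frames.remove(page)
                frames.append(page)
            continue
        faults += 1
        if len(frames) == nframes:
            if policy in ("FIFO", "LRU"):     # oldest / least recent is first
                victim = frames[0]
            else:                             # OPT: farthest next use
                future = refs[i + 1:]
                victim = max(frames, key=lambda p:
                             future.index(p) if p in future else len(future) + 1)
            frames.remove(victim)
        frames.append(page)
    return faults

refs = [1, 2, 3, 2, 4, 1, 3, 2, 4, 1]
for policy in ("OPT", "FIFO", "LRU"):
    print(policy, count_faults(refs, 3, policy))  # OPT 5, FIFO 6, LRU 9
```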

Question 12

CORRECT
Let the page fault service time be 10ms in a computer with average memory access time being 20ns. If
one page fault is generated for every 10^6 memory accesses, what is the effective access time for the
memory?

A 21ns

30ns

C 23ns

D 35ns

Memory Management GATE CS 2011


Discuss it
Question 12 Explanation:
Let p be the page fault rate.

Effective Memory Access Time
= p * (page fault service time) + (1 - p) * (memory access time)
= (1 / 10^6) * 10 * 10^6 ns + (1 - 1/10^6) * 20 ns
= 30 ns (approx)
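The same computation in a couple of lines of Python (all times converted to nanoseconds):

```python
# Effective access time with page-fault rate p = 1e-6, 10 ms fault
# service time and 20 ns memory access time.
p = 1 / 10**6
fault_service_ns = 10 * 10**6    # 10 ms in ns
mem_access_ns = 20
eat = p * fault_service_ns + (1 - p) * mem_access_ns
print(round(eat))  # 30
```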

Question 13

CORRECT
A system uses FIFO policy for page replacement. It has 4 page frames with no pages loaded to begin
with. The system first accesses 100 distinct pages in some order and then accesses the same 100 pages
but now in the reverse order. How many page faults will occur?
196

B 192

C 197

D 195

Memory Management GATE CS 2010


Discuss it

Question 13 Explanation:
See http://www.geeksforgeeks.org/operating-systems-set-7/
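The count can be reproduced directly: with 4 FIFO frames, the forward pass over 100 distinct pages faults 100 times; on the reverse pass the last 4 pages are still resident (4 hits) and the remaining 96 accesses all fault, giving 196. A Python sketch (`fifo_faults` is a helper written here for illustration):

```python
# FIFO page faults for 100 distinct pages accessed forward, then in
# reverse order, with 4 initially empty frames.
from collections import deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:        # evict the oldest resident page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = list(range(1, 101)) + list(range(100, 0, -1))
print(fifo_faults(refs, 4))  # 196
```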

Question 14

WRONG
In which one of the following page replacement policies, Belady's anomaly may occur?
FIFO

B Optimal

LRU

D MRU

Memory Management GATE-CS-2009


Discuss it

Question 14 Explanation:
Belady's anomaly proves that it is possible to have more page faults when increasing the number of page
frames while using the First In First Out (FIFO) page replacement algorithm. See the Wiki page for an
example of page faults increasing with the number of page frames.
Question 15

WRONG
The essential content(s) in each entry of a page table is / are

A Virtual page number

Page frame number

C Both virtual page number and page frame number

Access right information


Memory Management GATE-CS-2009
Discuss it

Question 15 Explanation:
A page table entry must contain Page frame number. Virtual page number is typically used as index in
page table to get the corresponding page frame number. See this for details.

Question 16

WRONG
A multilevel page table is preferred in comparison to a single level page table for translating virtual
address to physical address because

A It reduces the memory access time to read or write a memory location.

It helps to reduce the size of page table needed to implement the virtual address space of a

process.
It is required by the translation lookaside buffer.

D It helps to reduce the number of page faults in page replacement algorithms.

Memory Management GATE-CS-2009


Discuss it

Question 16 Explanation:
The size of page table may become too big (See this) to fit in contiguous space. That is why page tables
are typically divided in levels.

Question 17

CORRECT
A processor uses 36 bit physical addresses and 32 bit virtual addresses, with a page frame size of 4
Kbytes. Each page table entry is of size 4 bytes. A three level page table is used for virtual to physical
address translation, where the virtual address is used as follows:
Bits 30-31 are used to index into the first level page table,
Bits 21-29 are used to index into the second level page table,
Bits 12-20 are used to index into the third level page table, and
Bits 0-11 are used as offset within the page.
The number of bits required for addressing the next level page table (or page frame) in the page table
entry of the first, second and third level page tables are respectively.
A 20, 20 and 20

24, 24 and 24

C 24, 24 and 20

D 25, 25 and 24

Memory Management GATE CS 2008


Discuss it

Question 17 Explanation:
Virtual address size = 32 bits. Physical address size = 36 bits, so physical memory size = 2^36 bytes.
Page frame size = 4 KB = 2^12 bytes, so the offset (bits for a location within a page frame) is 12 bits, and
the number of bits required to address a physical memory frame = 36 - 12 = 24. So in a third-level page
table entry, 24 bits are required to address a page frame. 9 bits of the virtual address are used to index a
second-level (or third-level) page table, and each entry is 4 bytes, so the size of such a table is
(2^9) * 4 = 2^11 bytes. There are (2^36) / (2^11) = 2^25 possible locations where such a table can be
stored, so a first-level entry needs 25 bits to address a second-level table, and similarly a second-level
entry needs 25 bits to address a third-level table. Hence the answer is 25, 25 and 24.

Question 18
CORRECT
A virtual memory system uses First In First Out (FIFO) page replacement policy and allocates a fixed
number of frames to a process. Consider the following statements:

P: Increasing the number of page frames allocated to a

process sometimes increases the page fault rate.

Q: Some programs do not exhibit locality of reference.

Which one of the following is TRUE?

A Both P and Q are true, and Q is the reason for P

Both P and Q are true, but Q is not the reason for P.

C P is false, but Q is true

D Both P and Q are false

Memory Management GATE-CS-2007


Discuss it

Question 18 Explanation:
First In First Out (FIFO) page replacement: this is the simplest page replacement algorithm. The
operating system keeps all pages in memory in a queue, with the oldest page at the front; when a page
needs to be replaced, the page at the front of the queue is selected for removal. FIFO suffers from
Belady's anomaly: it is possible to have more page faults when increasing the number of page frames.
Statement P: Increasing the number of page frames allocated to a process sometimes increases the
page fault rate. Correct, since FIFO suffers from Belady's anomaly, which states exactly this.
Statement Q: Some programs do not exhibit locality of reference. Correct. Locality often occurs because
code contains loops that reference arrays or other data structures by index, and a program without such
loops may not exhibit locality of reference.
So both P and Q are correct, but Q is not the reason for P, as Belady's anomaly occurs only for some
specific patterns of page references. See Question 1 of http://www.geeksforgeeks.org/operating-systems-set-13/
Reference: http://quiz.geeksforgeeks.org/operating-system-page-replacement-algorithm/
This solution is contributed by Nitika Bansal

Question 19

WRONG
A process has been allocated 3 page frames. Assume that none of the pages of the process are available
in the memory initially. The process makes the following sequence of page references (reference string):
1, 2, 1, 3, 7, 4, 5, 6, 3, 1 If optimal page replacement policy is used, how many page faults occur for the
above reference string?
7
8
C 9

D 10

Memory Management GATE-CS-2007


Discuss it

Question 19 Explanation:
Optimal replacement policy looks forward in time to see which frame to replace on a page fault.
1, 2, 3 -> frames [1, 2, 3] (3 page faults; the repeated 1 is a hit)
7 -> replaces 2, frames [1, 7, 3]
4 -> replaces 7, frames [1, 4, 3]
5 -> replaces 4, frames [1, 5, 3]
6 -> replaces 5, frames [1, 6, 3]
3 and 1 are hits. Total = 7, so the answer is A.

Question 20

WRONG
Consider the data given in above question. Least Recently Used (LRU) page replacement policy is a
practical approximation to optimal page replacement. For the above reference string, how many more
page faults occur with LRU than with the optimal page replacement policy?
A 0

B 1

C 2

D 3

Memory Management GATE-CS-2007


Discuss it

Question 20 Explanation:
LRU replacement policy: the page that is least recently used is replaced.
Given string: 1, 2, 1, 3, 7, 4, 5, 6, 3, 1
1, 2, 3 -> frames [1, 2, 3] (3 page faults; the repeated 1 is a hit)
7 -> replaces 2, frames [1, 7, 3]
4 -> replaces 1, frames [4, 7, 3]
5 -> replaces 3, frames [4, 7, 5]
6 -> replaces 7, frames [4, 6, 5]
3 -> replaces 4, frames [3, 6, 5]
1 -> replaces 5, frames [3, 6, 1]
Total = 9 page faults. In http://geeksquiz.com/gate-gate-cs-2007-question-82/, optimal replacement gives
7 page faults, so LRU causes 2 more. Answer is C.

Question 21

CORRECT
Assume that there are 3 page frames which are initially empty. If the page reference string is 1, 2, 3, 4, 2,
1, 5, 3, 2, 4, 6, the number of page faults using the optimal replacement policy is__________.

A 5

B 6

C 7

D 8
Memory Management GATE-CS-2014-(Set-1)
Discuss it

Question 21 Explanation:
In optimal page replacement replacement policy, we replace the place which is not used for longest
duration in future.
Given three page frames.

Reference string is 1, 2, 3, 4, 2, 1, 5, 3, 2, 4, 6

Initially, there are three page faults and entries are


1 2 3

Page 4 causes a page fault and replaces 3 (3 is the longest


distant in future), entries become
1 2 4
Total page faults = 3+1 = 4

Pages 2 and 1 don't cause any fault.

5 causes a page fault and replaces 1, entries become


5 2 4
Total page faults = 4 + 1 = 5

3 causes a page fault and replaces 5, entries become


3 2 4
Total page faults = 5 + 1 = 6

3, 2 and 4 don't cause any page fault.

6 causes a page fault.


Total page faults = 6 + 1 = 7

Question 22

WRONG
A computer has twenty physical page frames which contain pages numbered 101 through 120. Now a
program accesses the pages numbered 1, 2, ..., 100 in that order, and repeats the access sequence
THRICE. Which one of the following page replacement policies experiences the same number of page
faults as the optimal page replacement policy for this program?

A Least-recently-used

First-in-first-out

C Last-in-first-out

Most-recently-used
Memory Management GATE-CS-2014-(Set-2)
Discuss it

Question 22 Explanation:
The optimal page replacement algorithm swaps out the page whose next use will occur farthest in the
future. In the given question, the computer has 20 page frames and initially page frames are filled with
pages numbered from 101 to 120. Then the program accesses the pages numbered 1, 2, ..., 100 in that
order, and repeats the access sequence THRICE. The first 20 accesses to pages from 1 to 20 would
definitely cause page fault. When 21st is accessed, there is another page fault. The page swapped out
would be 20 because 20 is going to be accessed farthest in future. When 22nd is accessed, 21st is going
to go out as it is going to be the farthest in future. The above optimal page replacement algorithm actually
works as most recently used in this case. As a side note, the first 100 would cause 100 page faults, next
100 would cause 81 page faults (1 to 19 would never be removed), the last 100 would also cause 81 page
faults.

Question 23

WRONG
A system uses 3 page frames for storing process pages in main memory. It uses the Least Recently Used
(LRU) page replacement policy. Assume that all the page frames are initially empty. What is the total
number of page faults that will occur while processing the page reference string given below? 4, 7, 6, 1, 7,
6, 1, 2, 7, 2

A 4

B 5

6
7
Memory Management GATE-CS-2014-(Set-3)
Discuss it

Question 23 Explanation:
What is a Page fault ? An interrupt that occurs when a program requests data that is not currently in real
memory. The interrupt triggers the operating system to fetch the data from a virtual memory and load it
into RAM. Now, 4, 7, 6, 1, 7, 6, 1, 2, 7, 2 is the reference string, you can think of it as data requests made
by a program. Now the system uses 3 page frames for storing process pages in main memory. It uses the
Least Recently Used (LRU) page replacement policy.
[ ] - Initially page frames are empty.i.e. no

process pages in main memory.

[ 4 ] - Now 4 is brought into 1st frame (1st

page fault)

Explanation: Process page 4 was requested by the program, but it was not in the main memory(in form of
page frames),which resulted in a page fault, after that process page 4 was brought in the main memory
by the operating system.

[ 4 7 ] - Now 7 is brought into 2nd frame


(2nd page fault) - Same explanation.

[ 4 7 6 ] - Now 6 is brought into 3rd frame

(3rd page fault)

[ 1 7 6 ] - Now 1 is brought into 1st frame, as 1st

frame was least recently used(4th page fault).

After this, 7, 6 and 1 were already present in the frames, hence no page replacements.

[ 1 2 6 ] - Now 2 is brought into 2nd frame, as 2nd

frame was least recently used(5th page fault).

[ 1 2 7 ] -Now 7 is brought into 3rd frame, as 3rd frame

was least recently used(6th page fault).

Hence, total number of page faults(also called pf) are 6. Therefore, C is the answer.

Question 24

WRONG
Consider a paging hardware with a TLB. Assume that the entire page table and all the pages are in the
physical memory. It takes 10 milliseconds to search the TLB and 80 milliseconds to access the physical
memory. If the TLB hit ratio is 0.6, the effective memory access time (in milliseconds) is _________.

A 120

122
124

D 118

Memory Management GATE-CS-2014-(Set-3)


Discuss it

Question 24 Explanation:
TLB stands for Translation Lookaside Buffer. In virtual memory systems, the CPU generates virtual
addresses, but the data is stored in physical memory, so a physical address must be placed on the
memory bus to fetch the data. The operating system maintains a page table containing the mapping
between virtual and physical addresses, and every virtual address generated by the CPU requires a
page table lookup. To speed this up, hardware support is provided in the form of the TLB: a high-speed
cache of the page table that holds recently used virtual-to-physical translations.
The TLB hit ratio is the fraction of translations found in the TLB, out of the total number of queries,
rather than requiring a walk of the page table in slower physical memory.
If the page is found in the TLB (TLB hit), the total time is the TLB search time plus one memory access:
TLB_hit_time = TLB_search_time + memory_access_time
If the page is not found in the TLB (TLB miss), the total time is the TLB search time (nothing is found,
but it is searched nonetheless), plus one memory access to read the page table and find the frame, plus
one memory access to get the data:
TLB_miss_time = TLB_search_time + 2 * memory_access_time
The Effective Access Time (EAT) is the weighted average of the two:
EAT = TLB_hit_time * hit_ratio + TLB_miss_time * (1 - hit_ratio)
Since both the page table and the pages are in physical memory:
T(eff) = 0.6 * (10 + 80) + (1 - 0.6) * (10 + 2 * 80) = 0.6 * 90 + 0.4 * 170 = 122
This solution is contributed by Nitika Bansal
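The weighted average can be checked directly in Python:

```python
# Effective memory access time with a TLB: on a hit, one memory access
# follows the TLB lookup; on a miss, the page table in memory is read
# first, so two memory accesses follow.
tlb, mem, hit = 10, 80, 0.6
eat = hit * (tlb + mem) + (1 - hit) * (tlb + 2 * mem)
print(round(eat))  # 122
```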

Question 25

WRONG
The memory access time is 1 nanosecond for a read operation with a hit in cache, 5 nanoseconds for a
read operation with a miss in cache, 2 nanoseconds for a write operation with a hit in cache and 10
nanoseconds for a write operation with a miss in cache. Execution of a sequence of instructions involves
100 instruction fetch operations, 60 memory operand read operations and 40 memory operand write
operations. The cache hit-ratio is 0.9. The average memory access time (in nanoseconds) in executing
the sequence of instructions is __________.
1.26
1.68

C 2.46

D 4.52

Memory Management GATE-CS-2014-(Set-3)


Discuss it

Question 25 Explanation:
The question is to find the average time taken for

"100 fetch operations, 60 memory operand read operations and 40 memory

operand write operations", i.e. the total time divided by the total number of operations.

Total number of operations = 100 + 60 + 40 = 200

Time taken for 100 fetch operations(fetch =read)

= 100*((0.9*1)+(0.1*5)) // 1 corresponds to time taken for read

// when there is cache hit


= 140 ns //0.9 is cache hit rate

Time taken for 60 read operations = 60*((0.9*1)+(0.1*5))

= 84ns

Time taken for 40 write operations = 40*((0.9*2)+(0.1*10))

= 112 ns

// Here 2 and 10 are the times taken for a write when there is a cache

// hit and no cache hit respectively

So,the total time taken for 200 operations is = 140+84+112

= 336ns

Average time taken = time taken per operation = 336/200

= 1.68 ns
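The weighted average above can be reproduced with a short Python sketch (function name ours; the hit/miss latencies are those given in the question, and instruction fetches are treated as reads):

```python
def avg_access_time():
    hit = 0.9
    read = hit * 1 + (1 - hit) * 5    # read: 1 ns on hit, 5 ns on miss
    write = hit * 2 + (1 - hit) * 10  # write: 2 ns on hit, 10 ns on miss
    # 100 fetches (reads) + 60 operand reads + 40 operand writes
    total = 100 * read + 60 * read + 40 * write
    return total / 200

print(avg_access_time())  # ≈ 1.68 ns
```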

Question 26

CORRECT
A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a translation look-
aside buffer (TLB) which can hold a total of 128 page table entries and is 4-way set associative. The
minimum size of the TLB tag is:

A 11 bits

B 13 bits

15 bits

D 20 bits

Memory Management GATE-CS-2006


Discuss it

Question 26 Explanation:
Virtual Memory would not be very effective if every memory address had to be translated by looking up
the associated physical page in memory. The solution is to cache the recent translations in a Translation
Lookaside Buffer (TLB). A TLB has a fixed number of slots that contain page table entries, which map
virtual addresses to physical addresses. Solution Size of a page = 4KB = 2^12 means 12 offset bits CPU
generates 32-bit virtual addresses Total number of bits needed to address a page frame = 32 12 = 20 If
there are n cache lines in a set, the cache placement is called n-way set associative. Since TLB is 4 way
set associative and can hold total 128 (2^7) page table entries, number of sets in cache = 2^7/4 = 2^5. So
5 bits are needed to address a set, and 15 (20 5) bits are needed for tag. Option (C) is the correct
answer. See Question 3 of http://www.geeksforgeeks.org/operating-systems-set-14/ This solution is
contributed by Nitika Bansal
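The same arithmetic can be packaged as a small Python helper (the helper name is ours, not from the original solution):

```python
import math

def tlb_tag_bits(va_bits, page_bytes, entries, ways):
    offset = int(math.log2(page_bytes))        # bits within a page (12 here)
    vpn = va_bits - offset                     # virtual page number bits (20)
    set_bits = int(math.log2(entries // ways)) # bits to pick one of the sets (5)
    return vpn - set_bits                      # remaining bits form the tag

print(tlb_tag_bits(32, 4 * 1024, 128, 4))  # 15
```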

Question 27

WRONG
A computer system supports 32-bit virtual addresses as well as 32-bit physical addresses. Since the
virtual address space is of the same size as the physical address space, the operating system designers
decide to get rid of the virtual memory entirely. Which one of the following is true?
Efficient implementation of multi-user support is no longer possible

B The processor cache organization can be made more efficient now

Hardware support for memory management is no longer needed

D CPU scheduling can be made more efficient now

Memory Management GATE-CS-2006


Discuss it

Question 27 Explanation:
Same as http://geeksquiz.com/operating-systems-memory-management-question-4/

Question 28

WRONG
The minimum number of page frames that must be allocated to a running process in a virtual memory
environment is determined by
the instruction set architecture
page size

C physical memory size

D number of processes in memory

Memory Management GATE-CS-2004


Discuss it

Question 28 Explanation:
There are two important tasks in virtual memory management: a page-replacement strategy and a frame-
allocation strategy. The frame-allocation strategy determines the minimum number of frames that should
be allocated. The absolute minimum number of frames that a process must be allocated depends on the
system architecture, and corresponds to the number of pages that could be touched by a single
(machine) instruction. So it is the instruction set architecture, i.e. option (A) is the correct answer. See
Question 3 of http://www.geeksforgeeks.org/operating-systems-set-4/
Reference: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/9_VirtualMemory.html
This solution is contributed by Nitika Bansal
Question 29

WRONG
Consider a system with a two-level paging scheme in which a regular memory access takes 150
nanoseconds, and servicing a page fault takes 8 milliseconds. An average instruction takes 100
nanoseconds of CPU time, and two memory accesses. The TLB hit ratio is 90%, and the page fault rate is
one in every 10,000 instructions. What is the effective average instruction execution time?
645 nanoseconds

B 1050 nanoseconds

C 1215 nanoseconds

1230 nanoseconds
Memory Management GATE-CS-2004
Discuss it

Question 29 Explanation:

Figure: Translation Lookaside Buffer

As shown in the figure, to find the frame number for a given page number, the TLB (Translation
Lookaside Buffer) is first checked to see whether it holds the desired page-number/frame-number pair.
If yes, it is a TLB hit; otherwise it is a TLB miss and the page number is looked up in the page table. In
a two-level paging scheme, memory is referenced twice to obtain the corresponding frame number.

If a virtual address has no valid entry in the page table, then any attempt by the program to access that
virtual address causes a page fault. In that case the required frame is brought into main memory from
secondary memory; the time taken to service the fault is called the page fault service time.

We have to calculate the average instruction execution time EXE. Let the average translation overhead
per instruction be M. Then

EXE = 100 ns + 2*150 ns (two memory references per instruction) + M    ...(1)

Since there is one page fault in every 10,000 instructions,

M = (1 - 1/10^4) * MEM + (1/10^4) * 8 ms    ...(2)

where MEM is the translation overhead when the page is present in memory. With the two-level page
table, a TLB miss costs two extra memory references:

MEM = 0.9 * (TLB access time) + 0.1 * (TLB access time + 2*150 ns)

The TLB access time is not given, so assume it is 0. Then MEM = 0.9*(0) + 0.1*(300 ns) = 30 ns.
Putting this MEM value into equation (2): M = (1 - 1/10^4) * 30 ns + (1/10^4) * 8 ms ≈ 830 ns.

Putting this value of M into equation (1): EXE = 100 ns + 300 ns + 830 ns = 1230 ns, so the answer is
option (4).
This solution is contributed by Nirmal Bhardwaj.
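The model used in this explanation can be checked numerically with a Python sketch (variable names ours; the TLB access time of 0 is the same assumption the explanation makes):

```python
def avg_instruction_time():
    cpu, mem, fault_ms = 100, 150, 8   # ns, ns, ms
    tlb_hit = 0.9
    tlb = 0                            # TLB access time, assumed 0 as in the explanation
    # translation overhead when the page is resident: a TLB miss costs
    # two extra references through the two-level page table
    mem_overhead = tlb_hit * tlb + (1 - tlb_hit) * (tlb + 2 * mem)
    fault_rate = 1 / 10_000
    m = (1 - fault_rate) * mem_overhead + fault_rate * fault_ms * 1e6  # ns
    return cpu + 2 * mem + m

print(round(avg_instruction_time()))  # 1230
```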

Question 30

CORRECT
In a system with 32 bit virtual addresses and 1 KB page size, use of one-level page tables for virtual to
physical address translation is not practical because of

A the large amount of internal fragmentation

B the large amount of external fragmentation

the large memory overhead in maintaining page tables

D the large computation overhead in the translation process

Memory Management GATE-CS-2003


Discuss it

Question 30 Explanation:
See question 4 of http://www.geeksforgeeks.org/operating-systems-set-4/

Question 31

CORRECT
Which of the following is NOT an advantage of using shared, dynamically linked libraries as opposed to
using statically linked libraries ?

A Smaller sizes of executable files


B Lesser overall page fault rate in the system

Faster program startup

D Existing programs need not be re-linked to take advantage of newer versions of libraries

Memory Management GATE-CS-2003


Discuss it

Question 31 Explanation:
Refer to Static and Dynamic Libraries. In non-shared (static) libraries, the library code is linked in at
compile time, so the final executable has no dependency on the library at run time, i.e. no additional
run-time loading cost: you do not need to carry along a copy of the library being used, everything is
under your control, and there is no dependency. Dynamic linking, by contrast, defers loading and symbol
resolution to program startup, so startup is slower, not faster. Hence "faster program startup" is NOT an
advantage of shared, dynamically linked libraries.

Question 32

WRONG
A processor uses 2-level page tables for virtual to physical address translation. Page tables for both levels
are stored in the main memory. Virtual and physical addresses are both 32 bits wide. The memory is byte
addressable. For virtual to physical address translation, the 10 most significant bits of the virtual address
are used as index into the first level page table while the next 10 bits are used as index into the second
level page table. The 12 least significant bits of the virtual address are used as offset within the page.
Assume that the page table entries in both levels of page tables are 4 bytes wide. Further, the processor
has a translation look-aside buffer (TLB), with a hit rate of 96%. The TLB caches recently used virtual
page numbers and the corresponding physical page numbers. The processor also has a physically
addressed cache with a hit rate of 90%. Main memory access time is 10 ns, cache access time is 1 ns,
and TLB access time is also 1 ns. Assuming that no page faults occur, the average time taken to access a
virtual address is approximately (to the nearest 0.5 ns)
1.5 ns

B 2 ns

C 3 ns

4 ns
Memory Management GATE-CS-2003
Discuss it

Question 32 Explanation:
The possibilities are

TLB Hit*Cache Hit +

TLB Hit*Cache Miss +

TLB Miss*Cache Hit +

TLB Miss*Cache Miss


= 0.96*0.9*2 + 0.96*0.1*12 + 0.04*0.9*22 + 0,04*0.1*32

= 3.8

Why 22 and 32? 22 is because when TLB miss occurs it takes 1ns and the for the physical address it has
to go through two level page tables which are in main memory and takes 2 memory access and the that
page is found in cache taking 1 ns which gives a total of 22
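The four weighted cases can be checked with a short Python sketch (function and variable names ours):

```python
def avg_va_access():
    tlb_hit, cache_hit = 0.96, 0.9
    tlb, cache, mem = 1, 1, 10          # ns
    hit_hit   = tlb + cache             # 2 ns
    hit_miss  = tlb + cache + mem       # 12 ns
    miss_hit  = tlb + 2 * mem + cache   # 22 ns: two-level page walk first
    miss_miss = tlb + 2 * mem + cache + mem  # 32 ns
    return (tlb_hit * cache_hit * hit_hit
            + tlb_hit * (1 - cache_hit) * hit_miss
            + (1 - tlb_hit) * cache_hit * miss_hit
            + (1 - tlb_hit) * (1 - cache_hit) * miss_miss)

print(avg_va_access())  # ≈ 3.8 ns, i.e. 4 ns to the nearest 0.5 ns
```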

Question 33

WRONG
A processor uses 2-level page tables for virtual to physical address translation. Page tables for both levels
are stored in the main memory. Virtual and physical addresses are both 32 bits wide. The memory is byte
addressable. For virtual to physical address translation, the 10 most significant bits of the virtual address
are used as index into the first level page table while the next 10 bits are used as index into the second
level page table. The 12 least significant bits of the virtual address are used as offset within the page.
Assume that the page table entries in both levels of page tables are 4 bytes wide. Further, the processor
has a translation look-aside buffer (TLB), with a hit rate of 96%. The TLB caches recently used virtual
page numbers and the corresponding physical page numbers. The processor also has a physically
addressed cache with a hit rate of 90%. Main memory access time is 10 ns, cache access time is 1 ns,
and TLB access time is also 1 ns. Suppose a process has only the following pages in its virtual address
space: two contiguous code pages starting at virtual address 0x00000000, two contiguous data pages
starting at virtual address 0x00400000, and a stack page starting at virtual address 0xFFFFF000. The
amount of memory required for storing the page tables of this process is:

A 8 KB

12 KB
16 KB

D 20 KB

Memory Management GATE-CS-2003


Discuss it

Question 33 Explanation:
Breakup of the given addresses into bit form:

32 bits are broken up as 10 bits (first-level index) | 10 bits (second-level index) | 12 bits (offset)

first code page:

0x00000000 = 0000 0000 00 | 00 0000 0000 | 0000 0000 0000

so the next code page will start from

0x00001000 = 0000 0000 00 | 00 0000 0001 | 0000 0000 0000


first data page:

0x00400000 = 0000 0000 01 | 00 0000 0000 | 0000 0000 0000

so the next data page will start from

0x00401000 = 0000 0000 01 | 00 0000 0001 | 0000 0000 0000

only one stack page:

0xFFFFF000 = 1111 1111 11 | 11 1111 1111 | 0000 0000 0000

The first-level page table is a single page containing 3 distinct used entries: 0000 0000 00,

0000 0000 01 and 1111 1111 11. Each of these distinct entries points to its own second-level

page table, so 3 second-level pages are needed.

Hence, we have 4 page-table pages in total, and each page table holds 2^10 entries of 4 bytes,

i.e. 4 KB = one page (2^12 bytes).

Therefore, the memory required to store the page tables = 4 * 4 KB = 16 KB.

Question 34

WRONG
Which of the following is not a form of memory?

A instruction cache

instruction register
instruction opcode

D translation lookaside buffer

Memory Management GATE-CS-2002


Discuss it

Question 34 Explanation:
Instruction cache: used for storing instructions that are frequently used. Instruction register: the part of
the CPU's control unit that stores the instruction currently being executed. Instruction opcode: the portion
of a machine language instruction that specifies the operation to be performed. Translation lookaside
buffer: a memory cache that stores recent translations of virtual memory to physical addresses for
faster access. So, all of the above except the instruction opcode are memories. Thus, C is the correct choice.
Please comment below if you find anything wrong in the above post.

Question 35

WRONG
The optimal page replacement algorithm will select the page that
Has not been used for the longest time in the past.
Will not be used for the longest time in the future.

C Has been used least number of times.

D Has been used most number of times.

Memory Management GATE-CS-2002


Discuss it

Question 35 Explanation:
The optimal page replacement algorithm will select the page whose next occurrence will be after the
longest time in future. For example, if we need to swap a page and there are two options from which we
can swap, say one would be used after 10s and the other after 5s, then the algorithm will swap out the
page that would be required 10s later. Thus, B is the correct choice. Please comment below if you find
anything wrong in the above post.

Question 36

WRONG
Dynamic linking can cause security concerns because:
Security is dynamic
The path for searching dynamic libraries is not known till runtime

C Linking is insecure

D Crytographic procedures are not available for dynamic linking

Memory Management GATE-CS-2002


Discuss it

Question 36 Explanation:
Static linking and static libraries: the linker makes a copy of all used library functions into the
executable file. Static linking creates larger binary files, and needs more space on disk and in main
memory. Examples of static libraries (libraries which are statically linked) are .a files in Linux and .lib
files in Windows. Dynamic linking and dynamic libraries: dynamic linking does not require the code to be
copied; it is done by just placing the name of the library in the binary file. The actual linking happens when
the program is run, when both the binary file and the library are in memory. Examples of dynamic libraries
(libraries which are linked at run-time) are .so files in Linux and .dll files in Windows. In dynamic linking,
the path for searching dynamic libraries is not known till runtime, which is the security concern.

Question 37

WRONG
Which of the following statements is false?
A Virtual memory implements the translation of a program's address space into physical memory address space
Virtual memory allows each program to exceed the size of the primary memory

C Virtual memory increases the degree of multiprogramming

Virtual memory reduces the context switching overhead


Memory Management GATE-CS-2001
Discuss it

Question 37 Explanation:
See question 4 of http://www.geeksforgeeks.org/operating-systems-set-2/

Question 38
The process of assigning load addresses to the various parts of the program and adjusting the code and
data in the program to reflect the assigned addresses is called

A Assembly

B Parsing

C Relocation

D Symbol resolution

Memory Management GATE-CS-2001


Discuss it

Question 39

WRONG
Where does the swap space reside?
RAM
Disk

C ROM

D On-chip cache

Memory Management GATE-CS-2001


Discuss it

Question 39 Explanation:
Swap space is an area on disk that temporarily holds a process's memory image. When memory is full
and a process needs more memory, inactive parts of processes are put in the swap space on disk.

Question 40

WRONG
Consider a virtual memory system with FIFO page replacement policy. For an arbitrary page access
pattern, increasing the number of page frames in main memory will

A always decrease the number of page faults

always increase the number of page faults


sometimes increase the number of page faults

D never affect the number of page faults

Memory Management GATE-CS-2001


Discuss it

Question 40 Explanation:
See question 4 of http://www.geeksforgeeks.org/operating-systems-set-1/

Question 41

CORRECT
Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size is
4KB, what is the approximate size of the page table?

A 16 MB

B 8 MB

2 MB

D 24 MB

Memory Management GATE-CS-2001


Discuss it

Question 41 Explanation:
See question 1 of http://www.geeksforgeeks.org/operating-systems-set-2/

Question 42

WRONG
Suppose the time to service a page fault is on the average 10 milliseconds, while a memory access takes
1 microsecond. Then a 99.99% hit ratio results in average memory access time of (GATE CS 2000)

A 1.9999 milliseconds

B 1 millisecond

9.999 microseconds
1.9999 microseconds
Memory Management GATE-CS-2000
Discuss it
Question 42 Explanation:
If a page request comes, the page table is searched first; if the page is present, it is fetched directly
from memory, so the time required is only the memory access time. If the required page is not found, it
must first be brought in, and only then can memory be accessed. This extra time is called the page
fault service time. Let the hit ratio be p, the memory access time be t1, and the page fault service time
be t2. Hence, average memory access time = p*t1 + (1-p)*t2

= 0.9999 * 1 μs + 0.0001 * (10 * 1000 μs)

= 1.9999 * 10^-6 sec

This explanation is contributed by Abhishek Kumar. Also, see question 1 of


http://www.geeksforgeeks.org/operating-systems-set-3/

Question 43

WRONG
Consider a system with byte-addressable memory, 32 bit logical addresses, 4 kilobyte page size and
page table entries of 4 bytes each. The size of the page table in the system in megabytes is ___________
2
4

C 8

D 16

Memory Management GATE-CS-2015 (Set 1)


Discuss it

Question 43 Explanation:
Number of entries in the page table = 2^32 / 4 Kbyte = 2^32 / 2^12 = 2^20

Size of the page table = (number of page table entries) * (size of an entry)

= 2^20 * 4 bytes = 4 megabytes

Question 44

WRONG
A computer system implements a 40 bit virtual address, page size of 8 kilobytes, and a 128-entry
translation look-aside buffer (TLB) organized into 32 sets each having four ways. Assume that the TLB tag
does not store any process id. The minimum length of the TLB tag in bits is _________
20

B 10

C 11

22
Memory Management GATE-CS-2015 (Set 2)
Discuss it
Question 44 Explanation:
Total virtual address size = 40 bits

Since there are 32 sets, set index = 5 bits

Since the page size is 8 kilobytes, page offset = 13 bits

Minimum tag size = 40 - 5 - 13 = 22 bits

Question 45

CORRECT
Consider six memory partitions of size 200 KB, 400 KB, 600 KB, 500 KB, 300 KB, and 250 KB, where KB
refers to kilobyte. These partitions need to be allotted to four processes of sizes 357 KB, 210 KB, 468 KB
and 491 KB in that order. If the best fit algorithm is used, which partitions are NOT allotted to any
process?
200 KB and 300 KB

B 200 KB and 250 KB

C 250 KB and 300 KB

D 300 KB and 400 KB

Memory Management GATE-CS-2015 (Set 2)


Discuss it

Question 45 Explanation:
Best fit allocates the smallest block among those that are large enough for the new process. So the
memory blocks are allocated in below order.
357 ---> 400

210 ---> 250

468 ---> 500

491 ---> 600

So the remaining blocks are of 200 KB and 300 KB


Refer http://courses.cs.vt.edu/~csonline/OS/Lessons/MemoryAllocation/index.html for details of all
allocation strategies.
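The best-fit placement above can be simulated with a few lines of Python (function name ours; partition sizes and process order are those given in the question):

```python
def best_fit(partitions, processes):
    free = list(partitions)
    placement = {}
    for p in processes:
        fits = [b for b in free if b >= p]
        if fits:
            best = min(fits)   # smallest block that is still large enough
            free.remove(best)
            placement[p] = best
    return placement, free

alloc, leftover = best_fit([200, 400, 600, 500, 300, 250],
                           [357, 210, 468, 491])
print(alloc)     # {357: 400, 210: 250, 468: 500, 491: 600}
print(leftover)  # [200, 300]
```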

Question 46

CORRECT
A Computer system implements 8 kilobyte pages and a 32-bit physical address space. Each page table
entry contains a valid bit, a dirty bit, three permission bits, and the translation. If the maximum size of the
page table of a process is 24 megabytes, the length of the virtual address supported by the system is
_______________ bits
36

B 32

C 28

D 40

Memory Management GATE-CS-2015 (Set 2)


Discuss it

Question 46 Explanation:
Max size of virtual address can be calculated by

calculating maximum number of page table entries.

Maximum Number of page table entries can be calculated

using given maximum page table size and size of a page

table entry.

Given maximum page table size = 24 MB

Let us calculate size of a page table entry.

A page table entry has following number of bits.

1 (valid bit) +

1 (dirty bit) +

3 (permission bits) +

x bits to store physical address space of a page.

Value of x = (Total bits in physical address) -

(Total bits for addressing within a page)

Since size of a page is 8 kilobytes, total bits needed within

a page is 13.

So value of x = 32 - 13 = 19

Putting in the value of x, we get the size of a page table entry

= 1 + 1 + 3 + 19 = 24 bits.

Number of page table entries

= (page table size) / (entry size)

= (24 megabytes) / (24 bits)

= (24 * 2^20 * 8 bits) / (24 bits)

= 2^23

Virtual address space size

= (number of page table entries) * (page size)

= 2^23 * 8 kilobytes = 2^23 * 2^13 bytes = 2^36 bytes

Therefore, the length of the virtual address is 36 bits.
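The derivation can be checked with a small Python sketch (helper name ours; the page-table-entry layout is the one described in the question):

```python
import math

def virtual_address_bits(page_bytes, pte_bits, table_bytes, phys_bits):
    offset = int(math.log2(page_bytes))                  # 13 for 8 KB pages
    # valid + dirty + 3 permission bits + physical frame number
    assert pte_bits == 1 + 1 + 3 + (phys_bits - offset)
    entries = table_bytes * 8 // pte_bits                # 2**23 entries
    return int(math.log2(entries)) + offset              # VPN bits + offset bits

print(virtual_address_bits(8 * 1024, 24, 24 * 2**20, 32))  # 36
```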

Question 47

WRONG
Which one of the following is NOT shared by the threads of the same process?
Stack

B Address Space

File Descriptor Table

D Message Queue

Memory Management GATE-IT-2004


Discuss it

Question 47 Explanation:
Threads cannot share the stack (used for maintaining function calls), as each thread may have its own
function-call sequence.
Source: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/4_Threads.html

Question 48

WRONG
Consider a fully associative cache with 8 cache blocks (numbered 0-7) and the following sequence of
memory block requests: 4, 3, 25, 8, 19, 6, 25, 8, 16, 35, 45, 22, 8, 3, 16, 25, 7 If LRU replacement policy
is used, which cache block will have memory block 7?

A 4

5

C 6

7
Memory Management GATE-IT-2004
Discuss it

Question 48 Explanation:
The cache is fully associative with 8 blocks. Given the sequence 4, 3, 25, 8, 19, 6, 25, 8, 16, 35, 45, 22, 8, 3, 16, 25, 7, blocks 0 to 7 hold:
4 3 25 8 19 6 16 35 // 25 and 8 are hits, so 16 and 35 fill the last two blocks
45 3 25 8 19 6 16 35 // 45 replaces 4, the least recently used block
45 22 25 8 19 6 16 35 // 22 replaces 3
45 22 25 8 3 6 16 35 // 8 is a hit; 3 replaces 19; 16 and 25 are already there
45 22 25 8 3 7 16 35 // 7 replaces 6 in block 5
Therefore block 5 holds memory block 7, and the answer is B.

Question 49
WRONG
The storage area of a disk has innermost diameter of 10 cm and outermost diameter of 20 cm. The
maximum storage density of the disk is 1400bits/cm. The disk rotates at a speed of 4200 RPM. The main
memory of a computer has 64-bit word length and 1 μs cycle time. If cycle stealing is used for data transfer
from the disk, the percentage of memory cycles stolen for transferring one word is

A 0.5%

B 1%

5%
10%
Memory Management GATE-IT-2004
Discuss it

Question 49 Explanation:

Innermost diameter = 10 cm; storage density = 1400 bits/cm.


Capacity of the innermost track = 3.14 * diameter * density = 3.14 * 10 * 1400 = 43960 bits
Rotation time = 60/4200 = 1/70 seconds, i.e. the disk makes 70 rotations per second.
It is given that the main memory of the computer has a 64-bit word length and a 1 μs cycle time.
Data the memory can transfer in 1 sec = 64 * 10^6 bits. Data read by the disk in 1 sec = 43960 * 70 = 3.08 * 10^6 bits
Fraction of memory cycles stolen = (3.08 * 10^6) / (64 * 10^6) ≈ 5%

Thus, option (C) is correct.

Please comment below if you find anything wrong in the above post.

Question 50

WRONG
A disk has 200 tracks (numbered 0 through 199). At a given time, it was servicing the request of reading
data from track 120, and at the previous request, service was for track 90. The pending requests (in order
of their arrival) are for track numbers. 30 70 115 130 110 80 20 25. How many times will the head change
its direction for the disk scheduling policies SSTF(Shortest Seek Time First) and FCFS (First Come Fist
Serve)
2 and 3

B 3 and 3

3 and 4

D 4 and 4

Memory Management GATE-IT-2004


Discuss it

Question 50 Explanation:
According to Shortest Seek Time First: 90 -> 120 -> 115 -> 110 -> 130 -> 80 -> 70 -> 30 -> 25 -> 20.
Changes of direction (3 in total): 120->115, 110->130, 130->80.
According to First Come First Serve: 90 -> 120 -> 30 -> 70 -> 115 -> 130 -> 110 -> 80 -> 20 -> 25.
Changes of direction (4 in total): 120->30, 30->70, 130->110, 20->25. Therefore, the answer is C.
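The direction-change counts for both service orders can be verified with a short Python sketch (function name ours; the two orders are those listed in the explanation):

```python
def direction_changes(path):
    changes, prev = 0, None
    for a, b in zip(path, path[1:]):
        d = 1 if b > a else -1      # current head direction
        if prev is not None and d != prev:
            changes += 1            # head reversed between moves
        prev = d
    return changes

sstf = [90, 120, 115, 110, 130, 80, 70, 30, 25, 20]
fcfs = [90, 120, 30, 70, 115, 130, 110, 80, 20, 25]
print(direction_changes(sstf), direction_changes(fcfs))  # 3 4
```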

Question 51

WRONG
In a virtual memory system, size of virtual address is 32-bit, size of physical address is 30-bit, page size is
4 Kbyte and size of each page table entry is 32-bit. The main memory is byte addressable. Which one of
the following is the maximum number of bits that can be used for storing protection and other information
in each page table entry?
2

B 10

C 12

14
Memory Management GATE-IT-2004
Discuss it

Question 51 Explanation:

Virtual memory = 2^32 bytes; physical memory = 2^30 bytes.


Page size = frame size = 4 Kbyte = 2^2 * 2^10 bytes = 2^12 bytes
Number of frames = physical memory / frame size = 2^30 / 2^12 = 2^18
Therefore, the number of bits for the frame number = 18 bits.
Page table entry size = frame number bits + other information, so other information = 32 - 18 = 14
bits.

Thus, option (D) is correct.

Please comment below if you find anything wrong in the above post.

Question 52

WRONG
In a particular Unix OS, each data block is of size 1024 bytes, each inode has 10 direct data block
addresses and three additional addresses: one for single indirect block, one for double indirect block and
one for triple indirect block. Also, each block can contain addresses for 128 blocks. Which one of the
following is approximately the maximum size of a file in the file system?

A 512 MB

2GB
8GB
D 16GB

Memory Management GATE-IT-2004


Discuss it

Question 52 Explanation:

The diagram is taken from the Operating System Concepts book.
Maximum size of a file = summation of the sizes of all the data blocks

whose addresses belong to the file.

Given:

Size of 1 data block = 1024 Bytes

No. of addresses which 1 data block can contain = 128

Now, Maximum File Size can be calculated as:

10 direct addresses of data blocks = 10*1024

1 single indirect data block = 128*1024

1 doubly indirect data block = 128*128*1024

1 triple indirect data block = 128*128*128*1024

Hence,

Max File Size = 10*1024 + 128*1024 + 128*128*1024 +

128*128*128*1024 Bytes
= 2113674*1024 Bytes

= 2.0157 GB ~ 2GB
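The block arithmetic above can be reproduced with a tiny Python sketch (function name ours; the parameters are those given in the question):

```python
def max_file_bytes(block=1024, direct=10, addrs_per_block=128):
    single = addrs_per_block            # blocks reachable via single indirect
    double = addrs_per_block ** 2       # via double indirect
    triple = addrs_per_block ** 3       # via triple indirect
    return (direct + single + double + triple) * block

size = max_file_bytes()
print(size, size / 2**30)  # 2164402176 bytes ≈ 2.0157 GB
```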

Question 53

CORRECT
A two-way switch has three terminals a, b and c. In ON position (logic value 1), a is connected to b, and in
OFF position, a is connected to c. Two of these two-way switches S1 and S2 are connected to a bulb as

shown below. Which of the


following expressions, if true, will always result in the lighting of the bulb ?

A S1.S2'

B S1+S2

(S1 ⊕ S2)'

D S1 ⊕ S2

Memory Management Gate IT 2005


Discuss it

Question 53 Explanation:
If we draw the truth table of the above circuit:
S1 S2 Bulb
0  0  On
0  1  Off
1  0  Off
1  1  On
This is (S1 ⊕ S2)', i.e. the bulb lights exactly when both switches are in the same position. Therefore the answer is C.

Question 54

WRONG
Consider a 2-way set associative cache memory with 4 sets and total 8 cache blocks (0-7) and a main
memory with 128 blocks (0-127). What memory blocks will be present in the cache after the following
sequence of memory block references if LRU policy is used for cache block replacement. Assuming that
initially the cache did not have any memory block from the current job? 0 5 3 9 7 0 16 55

A 0 3 5 7 16 55

0 3 5 7 9 16 55
0 5 7 9 16 55

D 3 5 7 9 16 55

Memory Management Gate IT 2005


Discuss it

Question 54 Explanation:
The cache memory is 2-way set associative, i.e. K = 2.

The number of sets is given as 4, i.e. S = 4 (numbered 0 - 3).

The number of blocks in the cache memory is given as 8, i.e. N = 8 (numbered 0 - 7),

so each set in the cache memory contains 2 blocks.

The number of blocks in the main memory is 128, i.e. M = 128 (numbered 0 - 127).

A referred block numbered X of the main memory is placed in the

set numbered ( X mod S ) of the the cache memory. In that set, the

block can be placed at any location, but if the set has already become

full, then the current referred block of the main memory should replace

a block in that set according to some replacement policy. Here


the replacement policy is LRU ( i.e. Least Recently Used block should
be replaced with currently referred block).

X ( Referred block no ) and


the corresponding Set values are as follows:

X-->set no ( X mod 4 )

0--->0 ( block 0 is placed in set 0, set 0 has 2 empty block locations,


block 0 is placed in any one of them )

5--->1 ( block 5 is placed in set 1, set 1 has 2 empty block locations,


block 5 is placed in any one of them )

3--->3 ( block 3 is placed in set 3, set 3 has 2 empty block locations,


block 3 is placed in any one of them )

9--->1 ( block 9 is placed in set 1, set 1 has currently 1 empty block location,
block 9 is placed in that, now set 1 is full, and block 5 is the
least recently used block )

7--->3 ( block 7 is placed in set 3, set 3 has 1 empty block location,


block 7 is placed in that, set 3 is full now,
and block 3 is the least recently used block)

0--->block 0 is referred again, and it is present in the cache memory in set 0,


so no need to put again this block into the cache memory.
16--->0 ( block 16 is placed in set 0, set 0 has 1 empty block location,
block 0 is placed in that, set 0 is full now, and block 0 is the LRU one)

55--->3 ( block 55 should be placed in set 3, but set 3 is full with block 3 and 7,
hence need to replace one block with block 55, as block 3 is the least
recently used block in the set 3, it is replaced with block 55.
Hence the main memory blocks present in the cache memory are : 0, 5, 7, 9, 16, 55 . (Note: block 3 is not
present in the cache memory, it was replaced with block 55 ) Read the following articles to learn more
related to the above question: Cache Memory Cache Organization | Introduction
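The block-by-block trace above can be reproduced with a short Python simulation of a set-associative LRU cache (function name and representation are ours, not from the original solution):

```python
def set_assoc_lru(refs, sets=4, ways=2):
    # each set holds up to `ways` blocks; most recently used is kept last
    cache = {s: [] for s in range(sets)}
    for block in refs:
        line = cache[block % sets]      # block X maps to set X mod S
        if block in line:
            line.remove(block)          # hit: refresh recency
        elif len(line) == ways:
            line.pop(0)                 # set full: evict least recently used
        line.append(block)
    return sorted(b for line in cache.values() for b in line)

print(set_assoc_lru([0, 5, 3, 9, 7, 0, 16, 55]))  # [0, 5, 7, 9, 16, 55]
```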

Question 55

WRONG
Q81 Part_A A disk has 8 equidistant tracks. The diameters of the innermost and outermost tracks are 1
cm and 8 cm respectively. The innermost track has a storage capacity of 10 MB. What is the total amount
of data that can be stored on the disk if it is used with a drive that rotates it with (i) Constant Linear
Velocity (ii) Constant Angular Velocity?
(i) 80 MB (ii) 2040 MB

B (i) 2040 MB (ii) 80 MB

C (i) 80 MB (ii) 360 MB

(i) 360 MB (ii) 80 MB


Memory Management Gate IT 2005
Discuss it

Question 55 Explanation:

Constant linear velocity:


Diameter of the inner track d = 1 cm. Circumference of the inner track = 2 * 3.14 * (d/2) = 3.14 cm,
and it stores 10 MB (given). The 8 equidistant tracks have radii 0.5, 1, 1.5, 2, 2.5, 3, 3.5 and 4 cm, so
the sum of all circumferences = 2 * 3.14 * (0.5 + 1 + 1.5 + 2 + 2.5 + 3 + 3.5 + 4) = 113.14 cm.
Since 3.14 cm holds 10 MB, 1 cm holds 3.18 MB, and 113.14 cm holds 113.14 * 3.18 ≈ 360 MB.
Total amount of data that can be stored on the disk = 360 MB.

Constant angular velocity :


In case of CAV, the disk rotates at a constant angular speed. Same rotation time is taken by all the tracks.
Total amount of data that can be stored on the disk = 8 * 10 = 80 MB

Thus, option (D) is correct.

Please comment below if you find anything wrong in the above post.

Question 56

CORRECT
Consider a computer system with 40-bit virtual addressing and page size of sixteen kilobytes. If the
computer system has a one-level page table per process and each page table entry requires 48 bits, then
the size of the per-process page table is _________megabytes. Note : This question was asked as
Numerical Answer Type.
384

B 48

C 192

D 96

Memory Management GATE-CS-2016 (Set 1)


Discuss it

Question 56 Explanation:
Size of virtual memory = 2^40 bytes. Page size = 16 KB = 2^14 bytes. Number of pages = (size of
memory) / (page size) = 2^40 / 2^14 = 2^26. Size of the page table = 2^26 * 48/8 bytes = 2^26 * 6 bytes
= 384 MB. Thus, A is the correct choice.

Question 57

WRONG
Consider a computer system with ten physical page frames. The system is provided with an access
sequence (a1, a2, ..., a20, a1, a2, ..., a20), where each ai is a distinct virtual page number. The difference
in the number of page faults between the last-in-first-out page replacement policy and the optimal page
replacement policy is __________ [Note that this question was originally a Fill-in-the-Blanks question]

A 0

1
2

D 3

Memory Management GATE-CS-2016 (Set 1)


Discuss it

Question 57 Explanation:
LIFO stands for last in, first out: on a fault, the page brought in most recently is replaced.
LIFO: a1 to a10 cause 10 page faults. Then a11 replaces a10 (the last one in), a12 replaces a11, and
so on, giving 10 faults from a11 to a20; after this a20 is resident and a1 to a9 remain as they were. In
the second pass, a1 to a9 are already in memory, so 0 faults. Then a10 replaces a20, a11 replaces a10,
and so on, giving 11 faults from a10 to a20. Total = 10 + 10 + 11 = 31.
Optimal: a1 to a10 cause 10 page faults. Then a11 replaces a10, because among the resident pages a10
is the one used farthest in the future; a12 replaces a11, and so on, giving 10 faults from a11 to a20;
again a1 to a9 remain resident along with a20. In the second pass, a1 to a9 are already there, so 0
faults. Then a10 replaces a1 (it is not used afterwards), and similarly a10 to a19 cause 10 faults. a20 is
already resident, so no page fault for a20. Total faults = 10 + 10 + 10 = 30. Difference = 31 - 30 = 1
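A minimal simulation of both policies reproduces these counts (a sketch; we model the ai as the distinct pages 1..20, and LIFO as evicting the most recently loaded page):

```python
def lifo_faults(ref, frames):
    mem, load_stack, faults = set(), [], 0
    for p in ref:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.remove(load_stack.pop())   # evict the last page brought in
            mem.add(p)
            load_stack.append(p)
    return faults

def optimal_faults(ref, frames):
    mem, faults = set(), 0
    for i, p in enumerate(ref):
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                # Evict the resident page whose next use is farthest (or never)
                def next_use(q):
                    for j in range(i + 1, len(ref)):
                        if ref[j] == q:
                            return j
                    return float('inf')
                mem.remove(max(mem, key=next_use))
            mem.add(p)
    return faults

ref = list(range(1, 21)) * 2                   # a1..a20, a1..a20
print(lifo_faults(ref, 10), optimal_faults(ref, 10))  # prints 31 30
```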

Question 58

WRONG
In which one of the following page replacement algorithms it is possible for the page fault rate to increase
even when the number of allocated frames increases?
LRU (Least Recently Used)

B OPT (Optimal Page Replacement)

C MRU (Most Recently Used)

FIFO (First In First Out)


Memory Management GATE-CS-2016 (Set 2)
Discuss it

Question 58 Explanation:
In some situations, FIFO page replacement gives more page faults when the number of page frames is
increased. This situation is called Belady's anomaly: it is possible to have more page faults after
increasing the number of page frames while using the First In First Out (FIFO) page replacement
algorithm. For example, for the reference string 3 2 1 0 3 2 4 3 2 1 0 4 with 3 frames we
get 9 total page faults, but if we increase the frames to 4, we get 10 page faults.
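A quick FIFO simulation (a sketch; the helper name is ours) confirms the example:

```python
from collections import deque

def fifo_faults(ref, frames):
    mem, order, faults = set(), deque(), 0
    for p in ref:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.remove(order.popleft())   # evict the oldest resident page
            mem.add(p)
            order.append(p)
    return faults

ref = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(ref, 3), fifo_faults(ref, 4))  # prints 9 10
```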

Question 59

CORRECT
The address sequence generated by tracing a particular program executing in a pure demand paging
system with 100 bytes per page is
0100, 0200, 0430, 0499, 0510, 0530, 0560, 0120, 0220, 0240, 0260, 0320, 0410.
Suppose that the memory can store only one page and if x is the address which causes a page fault then
the bytes from addresses x to x + 99 are loaded on to the memory.
How many page faults will occur ?
A 0

B 4

C 7

D 8

Memory Management Gate IT 2007


Discuss it

Question 59 Explanation:

Address Page faults last byte in memory


0100 page fault, 199
0200 page fault, 299
0430 page fault, 529
0499 no page fault
0510 no page fault
0530 page fault, 629
0560 no page fault
0120 page fault, 219
0220 page fault, 319
0240 no page fault
0260 no page fault
0320 page fault, 419
0410 no page fault
So, the answer is 7, i.e., option (C)

Question 60

WRONG
A paging scheme uses a Translation Look-aside Buffer (TLB). A TLB-access takes 10 ns and a main
memory access takes 50 ns. What is the effective access time(in ns) if the TLB hit ratio is 90% and there
is no page-fault?
54

B 60

65

D 75

Memory Management Gate IT 2008


Discuss it

Question 60 Explanation:
Effective access time = hit ratio * time during hit + miss ratio * time during miss. TLB time = 10 ns,
memory time = 50 ns, hit ratio = 90%. On a TLB hit, an access takes 10 + 50 = 60 ns; on a miss, the page
table must also be consulted, taking 10 + 50 + 50 = 110 ns. E.A.T. = 0.90 * 60 + 0.10 * 110 = 65
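As a sketch (assuming, as the explanation does, that a TLB miss costs one extra memory access for the page-table lookup):

```python
def effective_access_time(tlb_ns, mem_ns, hit_ratio, miss_ratio):
    hit_time = tlb_ns + mem_ns        # TLB hit: TLB lookup + memory access
    miss_time = tlb_ns + 2 * mem_ns   # miss: TLB + page table + memory access
    return hit_ratio * hit_time + miss_ratio * miss_time

print(round(effective_access_time(10, 50, 0.9, 0.1), 2))  # prints 65.0
```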

Question 61

WRONG
Assume that a main memory with only 4 pages, each of 16 bytes, is initially empty. The CPU generates
the following sequence of virtual addresses and uses the Least Recently Used (LRU) page replacement
policy.
0, 4, 8, 20, 24, 36, 44, 12, 68, 72, 80, 84, 28, 32, 88, 92
How many page faults does this sequence cause? What are the page numbers of the pages present in
the main memory at the end of the sequence?
6 and 1, 2, 3, 4
7 and 1, 2, 4, 5

C 8 and 1, 2, 4, 5

D 9 and 1, 2, 3, 5

Memory Management Gate IT 2008


Discuss it

Question 61 Explanation:

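A small LRU simulation (assuming 16-byte pages, so page number = address // 16) reproduces answer (B): 7 faults, with pages 1, 2, 4, 5 resident at the end:

```python
def lru_simulate(addresses, frames, page_size):
    pages = [a // page_size for a in addresses]
    mem, faults = [], 0              # front of the list = least recently used
    for p in pages:
        if p in mem:
            mem.remove(p)            # hit: refresh recency below
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)           # evict the least recently used page
        mem.append(p)                # p is now the most recently used
    return faults, sorted(mem)

addrs = [0, 4, 8, 20, 24, 36, 44, 12, 68, 72, 80, 84, 28, 32, 88, 92]
print(lru_simulate(addrs, 4, 16))  # prints (7, [1, 2, 4, 5])
```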
Question 62

WRONG
Match the following flag bits used in the context of virtual memory management on the left side with the
different purposes on the right side of the table below.

A I-d, II-a, III-b, IV-c


I-b, II-c, III-a, IV-d

C I-c, II-d, III-a, IV-b

I-b, II-c, III-d, IV-a


Memory Management Gate IT 2008
Discuss it

Question 63

WRONG
Consider a computer with a 4-ways set-associative mapped cache of the following characteristics: a total
of 1 MB of main memory, a word size of 1 byte, a block size of 128 words and a cache size of 8 KB. The
number of bits in the TAG, SET and WORD fields, respectively are:

A 7, 6, 7

8, 5, 7

C 8, 6, 6

9, 4, 7
Memory Management Computer Organization and Architecture Gate IT 2008
Discuss it

Question 63 Explanation:
According to the question: word size = 1 byte, words per block = 128, and cache size = 8 KB.
Number of cache blocks = cache size / (words per block * size of a word) = 8 KB / (128 * 1) = 64.
Since the cache is 4-way set associative, number of sets = number of cache blocks / 4 = 64 / 4 = 16,
so the SET field needs 4 bits (16 = 2^4).
A block holds 128 one-byte words, so the WORD field needs 7 bits (128 = 2^7).
Main memory is 1 MB, so a physical address has 20 bits (1 MB = 2^20 bytes); since only physical
memory information is given, we assume the cache is physically tagged.
The TAG field takes the remaining bits: 20 - 4 - 7 = 9 bits.
Hence the answer is (TAG, SET, WORD) = (9, 4, 7). This solution is contributed by Namita Singh.
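The same split can be computed directly (a sketch; the function name is ours):

```python
import math

def cache_fields(mem_bytes, block_bytes, cache_bytes, ways):
    addr_bits = int(math.log2(mem_bytes))           # 1 MB memory -> 20-bit address
    word_bits = int(math.log2(block_bytes))         # 128-byte block -> 7 WORD bits
    num_sets = cache_bytes // block_bytes // ways   # 64 blocks / 4 ways = 16 sets
    set_bits = int(math.log2(num_sets))             # 16 sets -> 4 SET bits
    tag_bits = addr_bits - set_bits - word_bits     # remaining 9 bits are the TAG
    return tag_bits, set_bits, word_bits

print(cache_fields(2 ** 20, 128, 8 * 1024, 4))  # prints (9, 4, 7)
```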
Question 64

CORRECT
Consider a computer with a 4-ways set-associative mapped cache of the following characteristics: a total
of 1 MB of main memory, a word size of 1 byte, a block size of 128 words and a cache size of 8 KB. While
accessing the memory location 0C795H by the CPU, the contents of the TAG field of the corresponding
cache line is
000011000

B 110001111

C 00011000

D 110010101

Memory Management Computer Organization and Architecture Gate IT 2008


Discuss it

Question 64 Explanation:
TAG takes 9 bits, SET takes 4 bits and WORD takes 7 bits of the 20-bit address, using the split
derived in the previous question. The memory location 0C795H can be written as
0000 1100 0111 1001 0101. Thus TAG = 9 bits = 0000 1100 0, SET = 4 bits = 1111, and
WORD = 7 bits = 001 0101. Therefore, the matching option is option A.
This solution is contributed by Namita Singh.
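The field extraction can be checked with a few bit operations (a sketch, using the 9/4/7 split from the previous question):

```python
addr = 0x0C795                        # 20-bit address 0000 1100 0111 1001 0101
TAG_BITS, SET_BITS, WORD_BITS = 9, 4, 7

word = addr & ((1 << WORD_BITS) - 1)                 # lowest 7 bits
set_ = (addr >> WORD_BITS) & ((1 << SET_BITS) - 1)   # next 4 bits
tag = addr >> (WORD_BITS + SET_BITS)                 # top 9 bits

print(format(tag, '09b'), format(set_, '04b'), format(word, '07b'))
# prints 000011000 1111 0010101
```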
Question 65

CORRECT
Linked Questions 65-66
Assume GeeksforGeeks implemented the new page replacement algorithm in virtual memory and given
its name as Geek. Consider the working strategy of Geek as following-
Each page in memory maintains a count which is incremented if the page is referred and no page
fault occurs.
If a page fault occurs, the physical page with zero count or smallest count is replaced by new
page and if more than one page with zero count or smallest count then it uses FIFO strategy to
replace the page.
Find the number of page faults using the Geek algorithm for the following reference string (assume three
physical frames are available, which are initially free)
Reference String : A B C D A B E A B C D E B A D

A 7

B 9

11

D 13

Memory Management GATE 2017 Mock


Discuss it
Question 65 Explanation:

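A direct simulation of the Geek policy as described above (counts incremented on hits, the smallest count evicted, FIFO on ties; a sketch with names of our choosing) reproduces answer (C), 11 page faults:

```python
def geek_faults(ref, frames):
    mem = []                   # [page, count] pairs kept in load (FIFO) order
    faults = 0
    for p in ref:
        entry = next((e for e in mem if e[0] == p), None)
        if entry is not None:
            entry[1] += 1      # hit: increment the page's reference count
        else:
            faults += 1
            if len(mem) == frames:
                # min() returns the earliest-loaded page among equal counts,
                # which is exactly the FIFO tie-break the question describes
                mem.remove(min(mem, key=lambda e: e[1]))
            mem.append([p, 0]) # new page starts with count 0
    return faults

print(geek_faults(list("ABCDABEABCDEBAD"), 3))  # prints 11
```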
Question 66

WRONG
If LRU and Geek page replacement are compared (in terms of page faults) only for above reference string
then find the correct statement from the following:
LRU and Geek are same

B LRU is better than Geek

Geek is better than LRU

D None

Memory Management GATE 2017 Mock


Discuss it

Question 66 Explanation:

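For comparison, a plain LRU simulation on the same reference string (a sketch) incurs 13 page faults, more than the 11 of the Geek algorithm, so Geek performs better here, option (C):

```python
def lru_faults(ref, frames):
    mem, faults = [], 0        # front of the list = least recently used
    for p in ref:
        if p in mem:
            mem.remove(p)      # hit: refresh recency below
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)     # evict the least recently used page
        mem.append(p)
    return faults

print(lru_faults(list("ABCDABEABCDEBAD"), 3))  # prints 13 (Geek gives 11)
```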

Input Output Systems

Question 1

WRONG
Which of the following is the major part of the time taken when accessing data on the disk?
A Settle time

Rotational latency
Seek time

D Waiting time

Input Output Systems


Discuss it

Question 1 Explanation:
Seek time is time taken by the head to travel to the track of the disk where the data to be accessed is
stored.

Question 2

WRONG
We describe a protocol of input device communication below.
a. Each device has a distinct address.
b. The bus controller scans each device in sequence of increasing address value to determine if the entity
wishes to communicate.
c. The device ready to communicate leaves its data in the IO register.
d. The data is picked up and the controller moves to step (a) above.
Identify the form of communication that best describes this IO mode amongst the following: Source: nptel

A Programmed mode of data transfer

DMA

C Interrupt mode

Polling
Input Output Systems
Discuss it

Question 2 Explanation:
See Polling

Question 3
From amongst the following given scenarios determine the right one to justify interrupt mode of data-
transfer: Source: nptel

A Bulk transfer of several kilo-byte

B Moderately large data transfer but more that 1 KB

C Short events like mouse action

D Key board inputs


Input Output Systems
Discuss it

Question 4

WRONG
Normally user programs are prevented from handling I/O directly by I/O instructions in them. For CPUs
having explicit I/O instructions, such I/O protection is ensured by having the I/O instructions privileged. In
a CPU with memory mapped I/O, there is no explicit I/O instruction. Which one of the following is true for
a CPU with memory mapped I/O? (GATE CS 2005)
I/O protection is ensured by operating system routine(s)

B I/O protection is ensured by a hardware trap

I/O protection is ensured during system configuration

D I/O protection is not possible

Input Output Systems


Discuss it

Question 4 Explanation:
See question 1 of http://www.geeksforgeeks.org/operating-systems-set-16/

Question 5

WRONG
Which of the following disk scheduling policies results in the minimum amount of head movement?
FCFS
Circular scan

C Elevator

Input Output Systems


Discuss it

Question 5 Explanation:
First Come -First Serve (FCFS) All incoming requests are placed at the end of the queue. Whatever
number that is next in the queue will be the next number served. Using this algorithm doesn't provide the
best results. Elevator (SCAN): This approach works like an elevator does. It scans down towards the
nearest end and then when it hits the bottom it scans up servicing the requests that it didn't get going
down. If a request comes in after it has been scanned it will not be serviced until the process comes back
down or moves back up. Circular Scan (C-SCAN): Circular scanning works just like the elevator to some
extent. It begins its scan toward the nearest end and works it way all the way to the end of the system.
Once it hits the bottom or top it jumps to the other end and moves in the same direction. Keep in mind that
the huge jump doesn't count as a head movement.
Source: http://www.cs.iit.edu/~cs561/cs450/disksched/disksched.html
Question 6

WRONG
Consider a hard disk with 16 recording surfaces (0-15) having 16384 cylinders (0-16383) and each
cylinder contains 64 sectors (0-63). Data storage capacity in each sector is 512 bytes. Data are organized
cylinder-wise and the addressing format is <cylinder no., surface no., sector no.> . A file of size 42797 KB
is stored in the disk and the starting disk location of the file is <1200, 9, 40>. What is the cylinder number
of the last sector of the file, if it is stored in a contiguous manner?
1281

B 1282

C 1283

1284
Input Output Systems GATE CS 2013
Discuss it

Question 6 Explanation:
42797 KB = 42797 * 1024 / 512 = 85594 sectors.

Each cylinder holds 16 surfaces * 64 sectors = 1024 sectors.

The file starts at <1200, 9, 40>, so cylinder 1200 can still hold

(64 - 40) + (16 - 9 - 1) * 64 = 24 + 384 = 408 sectors of the file.

Cylinders 1201 to 1283 hold another 83 * 1024 = 84992 sectors,

so 408 + 84992 = 85400 sectors fit up to the end of cylinder 1283.

The remaining 85594 - 85400 = 194 sectors spill over into the next

cylinder, so the last sector of the file lies in cylinder 1284.
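The arithmetic above can be sketched as follows (variable names are ours):

```python
SECTORS_PER_TRACK, SURFACES = 64, 16
SECTORS_PER_CYL = SECTORS_PER_TRACK * SURFACES          # 1024 sectors/cylinder

file_sectors = 42797 * 1024 // 512                      # 85594 sectors
# Sectors still free in cylinder 1200 starting from <1200, 9, 40>:
free_in_start = (SECTORS_PER_TRACK - 40) + (SURFACES - 9 - 1) * SECTORS_PER_TRACK
remaining = file_sectors - free_in_start                # sectors beyond cyl 1200
last_cylinder = 1200 + 1 + (remaining - 1) // SECTORS_PER_CYL
print(last_cylinder)  # prints 1284
```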

Question 7

WRONG
A file system with 300 GByte disk uses a file descriptor with 8 direct block addresses, 1 indirect block
address and 1 doubly indirect block address. The size of each disk block is 128 Bytes and the size of
each disk block address is 8 Bytes. The maximum possible file size in this file system is
3 Kbytes
35 Kbytes

C 280 Bytes
D Dependent on the size of the disk

GATE CS 2012 Input Output Systems


Discuss it

Question 7 Explanation:
See http://www.geeksforgeeks.org/operating-systems-set-5/
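A sketch of the calculation behind option (B): with 128-byte blocks and 8-byte block addresses, one block holds 16 addresses, so

```python
BLOCK = 128                     # bytes per disk block
ADDR = 8                        # bytes per disk block address
ptrs = BLOCK // ADDR            # 16 block addresses fit in one block

direct = 8                      # 8 direct block addresses
single = ptrs                   # 1 indirect block -> 16 data blocks
double = ptrs * ptrs            # 1 doubly indirect block -> 256 data blocks
max_bytes = (direct + single + double) * BLOCK
print(max_bytes, max_bytes // 1024)  # prints 35840 35  (i.e., 35 KB)
```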

Question 8

WRONG
A computer handles several interrupt sources of which the following are relevant for this question.

. Interrupt from CPU temperature sensor (raises interrupt if

CPU temperature is too high)

. Interrupt from Mouse(raises interrupt if the mouse is moved

or a button is pressed)

. Interrupt from Keyboard(raises interrupt when a key is

pressed or released)

. Interrupt from Hard Disk(raises interrupt when a disk

read is completed)

Which one of these will be handled at the HIGHEST priority?

A Interrupt from Hard Disk

B Interrupt from Mouse

Interrupt from Keyboard


Interrupt from CPU temperature sensor
Input Output Systems GATE CS 2011
Discuss it

Question 8 Explanation:
Higher priority interrupt levels are assigned to requests which, if delayed or interrupted, could have
serious consequences. Devices with high speed transfer such as magnetic disks are given high priority,
and slow devices such as keyboard receive low priority (Source: Computer System Architecture by Morris
Mano) Interrupt from CPU temperature sensor would have serious consequences if ignored.

Question 9

CORRECT
An application loads 100 libraries at start-up. Loading each library requires exactly one disk access. The
seek time of the disk to a random location is given as 10 ms. Rotational speed of disk is 6000 rpm. If all
100 libraries are loaded from random locations on the disk, how long does it take to load all libraries?
(The time to transfer data from the disk block once the head has been positioned at the start of the block
may be neglected)

A 0.50 s

1.50 s

C 1.25 s

D 1.00 s

Input Output Systems GATE CS 2011


Discuss it

Question 9 Explanation:
See Question 3 of http://www.geeksforgeeks.org/operating-systems-set-6/
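Sketch of the arithmetic: 6000 rpm means 10 ms per rotation, so the average rotational latency is half a rotation (5 ms), and each of the 100 random accesses costs seek + latency:

```python
seek_ms = 10
rotation_ms = 60_000 / 6000      # 10 ms per full rotation at 6000 rpm
avg_latency_ms = rotation_ms / 2 # on average, half a rotation = 5 ms
total_s = 100 * (seek_ms + avg_latency_ms) / 1000
print(total_s)  # prints 1.5
```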

Question 10

CORRECT
A CPU generally handles an interrupt by executing an interrupt service routine

A As soon as an interrupt is raised

B By checking the interrupt register at the end of fetch cycle.

By checking the interrupt register after finishing the execution of the current instruction.

D By checking the interrupt register at fixed time intervals.

Input Output Systems GATE-CS-2009


Discuss it

Question 10 Explanation:
Hardware detects interrupt immediately, but CPU acts only after its current instruction. This is followed to
ensure integrity of instructions.

Question 11

WRONG
A hard disk has 63 sectors per track, 10 platters each with 2 recording surfaces and 1000 cylinders. The
address of a sector is given as a triple (c, h, s), where c is the cylinder number, h is the surface number
and s is the sector number. Thus, the 0th sector is addressed as (0, 0, 0), the 1st sector as (0, 0, 1), and
so on The address <400,16,29> corresponds to sector number:
505035

B 505036

505037
D 505038

Input Output Systems GATE-CS-2009


Discuss it

Question 11 Explanation:

Overview: The smallest division of the data on a hard disk is a sector. Sectors are combined to make a
track, and a cylinder is formed by combining the tracks that lie at the same position on all the platters.
The read/write head has to reach a particular track and then wait for the rotation of the platter so that
the required sector comes under it. Here, each platter has two recording surfaces, i.e., the r/w head can
access the platter from both sides, upper and lower.
So <400, 16, 29> means 400 complete cylinders (0-399) have been passed; each cylinder has 20 surfaces
(10 platters * 2 surfaces each) and each track has 63 sectors. Hence we have passed 400 * 20 * 63 sectors
for cylinders 0-399, plus 16 * 63 sectors for surfaces 0-15 of cylinder 400, plus 29 sectors on surface 16.
So, sector no = 400*20*63 + 16*63 + 29 = 504000 + 1008 + 29 = 505037.
Reference: https://www.ilbe.com/1144674842 This solution is contributed by Shashank Shanker khare.
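Both directions of the (c, h, s) mapping can be sketched in a few lines (function names are ours; the inverse also handles the next question's address 1039):

```python
SECTORS_PER_TRACK = 63
SURFACES = 20                    # 10 platters x 2 recording surfaces

def sector_number(c, h, s):
    # sectors in full cylinders + sectors in full surfaces + sectors on this track
    return (c * SURFACES + h) * SECTORS_PER_TRACK + s

def address_of(n):
    c, rem = divmod(n, SURFACES * SECTORS_PER_TRACK)
    h, s = divmod(rem, SECTORS_PER_TRACK)
    return c, h, s

print(sector_number(400, 16, 29))  # prints 505037
print(address_of(1039))            # prints (0, 16, 31)
```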

Question 12
WRONG
Consider the data given in previous question. The address of the 1039th sector is
(0, 15, 31)

B (0, 16, 30)

(0, 16, 31)

D (0, 17, 31)

Input Output Systems GATE-CS-2009


Discuss it

Question 12 Explanation:
You can also refer to the figure described in the previous question. Each cylinder has 20 surfaces and
each track has 63 sectors. (a) <0,15,31>: 0 cylinders passed (0*20*63) + 15 surfaces passed (0-14),
i.e., 15*63 sectors + 31 sectors on surface 15. Sector no. = 0*20*63 + 15*63 + 31 = 976, which is not
equal to 1039. (b) <0,16,30>: 0*20*63 + 16*63 (surfaces 0-15, each with 63 sectors) + 30 sectors on
surface 16. Sector no. = 0*20*63 + 16*63 + 30 = 1038, which is not equal to 1039. (c) <0,16,31>:
0*20*63 + 16*63 + 31 sectors on surface 16. Sector no. = 0*20*63 + 16*63 + 31 = 1039, which is equal
to 1039. Hence, option (c) is correct. (d) <0,17,31>: 0*20*63 + 17*63 (surfaces 0-16) + 31 sectors on
surface 17. Sector no. = 0*20*63 + 17*63 + 31 = 1102, which is not equal to 1039. This
solution is contributed by Shashank Shanker khare.

Question 13

WRONG
The data blocks of a very large file in the Unix file system are allocated using

A contiguous allocation

B linked allocation

indexed allocation
an extension of indexed allocation
Input Output Systems GATE CS 2008
Discuss it

Question 13 Explanation:
The Unix file system uses an extension of indexed allocation. It uses direct blocks, single indirect blocks,
double indirect blocks and triple indirect blocks. Following diagram shows implementation of Unix file
system. The diagram is taken from Operating System Concept book.

Question 14

WRONG
For a magnetic disk with concentric circular tracks, the seek latency is not linearly proportional to the seek
distance due to

A non-uniform distribution of requests

arm starting and stopping inertia

C higher capacity of tracks on the periphery of the platter

use of unfair arm scheduling policies


Input Output Systems GATE CS 2008
Discuss it

Question 14 Explanation:
Whenever the head moves from one track to another, its speed and direction change, which is nothing but
a change in motion, i.e., the case of inertia. So the answer is (B). This explanation has been contributed
by Abhishek Kumar. See Disk drive performance characteristics_Seek_time

Question 15

WRONG
Which of the following statements about synchronous and asynchronous I/O is NOT true?
An ISR is invoked on completion of I/O in synchronous I/O but not in asynchronous I/O
In both synchronous and asynchronous I/O, an ISR (Interrupt Service Routine) is invoked after
B completion of the I/O
A process making a synchronous I/O call waits until I/O is complete, but a process making an
C asynchronous I/O call does not wait for completion of the I/O
In the case of synchronous I/O, the process waiting for the completion of I/O is woken up by the

ISR that is invoked after the completion of I/O


Input Output Systems GATE CS 2008
Discuss it

Question 15 Explanation:
There are two types of input/output (I/O) synchronization: synchronous I/O and asynchronous I/O.
Asynchronous I/O is also referred to as overlapped I/O. In synchronous file I/O, a thread starts an I/O
operation and immediately enters a wait state until the I/O request has completed. An ISR will be invoked
after the completion of I/O operation and it will place process from block state to ready state. A thread
performing asynchronous file I/O sends an I/O request to the kernel by calling an appropriate function. If
the request is accepted by the kernel, the calling thread continues processing another job until the kernel
signals to the thread that the I/O operation is complete. It then interrupts its current job and processes the
data from the I/O operation as necessary. See Question 3 of http://www.geeksforgeeks.org/operating-
systems-set-10/ Reference:https://msdn.microsoft.com/en-
us/library/windows/desktop/aa365683%28v=vs.85%29.aspx This solution is contributed by Nitika Bansal

Question 16

WRONG
Consider a disk pack with 16 surfaces, 128 tracks per surface and 256 sectors per track. 512 bytes of
data are stored in a bit serial manner in a sector. The capacity of the disk pack and the number of bits
required to specify a particular sector in the disk are respectively:
256 Mbyte, 19 bits

B 256 Mbyte, 28 bits

C 512 Mbyte, 20 bits

64 Gbyte, 28 bit
Input Output Systems GATE-CS-2007
Discuss it

Question 16 Explanation:
See Question 1 of http://www.geeksforgeeks.org/operating-systems-set-12/
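Sketch of the calculation: 16 * 128 * 256 = 2^19 sectors of 512 bytes each, so

```python
import math

surfaces, tracks, sectors, sector_bytes = 16, 128, 256, 512
total_sectors = surfaces * tracks * sectors            # 2^4 * 2^7 * 2^8 = 2^19
capacity_mb = total_sectors * sector_bytes // 2 ** 20  # 2^28 bytes = 256 MB
addr_bits = int(math.log2(total_sectors))              # 19 bits to pick a sector
print(capacity_mb, addr_bits)  # prints 256 19
```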

Question 17

CORRECT
Suppose a disk has 201 cylinders, numbered from 0 to 200. At some time the disk arm is at cylinder 100,
and there is a queue of disk access requests for cylinders 30, 85, 90, 100, 105, 110, 135 and 145. If
Shortest-Seek Time First (SSTF) is being used for scheduling the disk access, the request for cylinder 90
is serviced after servicing ____________ number of requests.
A 1

B 2

D 4

Input Output Systems GATE-CS-2014-(Set-1)


Discuss it

Question 17 Explanation:
In Shortest-Seek-First algorithm, request closest to the current position of the disk arm and head is
handled first. In this question, the arm is currently at cylinder number 100. Now the requests come in the
queue order for cylinder numbers 30, 85, 90, 100, 105, 110, 135 and 145. The disk will service that
request first whose cylinder number is closest to its arm. Hence 1st serviced request is for cylinder no 100
( as the arm is itself pointing to it ), then 105, then 110, and then the arm comes to service request for
cylinder 90. Hence before servicing request for cylinder 90, the disk would had serviced 3 requests.
Hence option C.

Question 18

WRONG
A device with data transfer rate 10 KB/sec is connected to a CPU. Data is transferred byte-wise. Let the
interrupt overhead be 4 microseconds. The byte transfer time between the device interface register and
CPU or memory is negligible. What is the minimum performance gain of operating the device under
interrupt mode over operating it under program controlled mode?

A 15

25

C 35

45
Input Output Systems GATE-CS-2005
Discuss it

Question 18 Explanation:
The device delivers 10 KB/sec byte-wise, i.e., one byte every 100 micro-sec.

In programmed I/O the CPU polls continuously, so it spends the full

100 micro-sec of CPU time on each byte.

In interrupt mode the CPU is interrupted when a byte is ready and

spends only the 4 micro-sec of interrupt overhead on each byte.

Gain = 100 / 4 = 25



Question 19

WRONG
Consider a disk drive with the following specifications: 16 surfaces, 512 tracks/surface, 512 sectors/track,
1 KB/sector, rotation speed 3000 rpm. The disk is operated in cycle stealing mode whereby whenever one
4-byte word is ready it is sent to memory; similarly, for writing, the disk interface reads a 4-byte word from
the memory in each DMA cycle. Memory cycle time is 40 nsec. The maximum percentage of time that the
CPU gets blocked during DMA operation is:
10
25

C 40

D 50

Input Output Systems GATE-CS-2005


Discuss it

Question 19 Explanation:
Time taken for 1 rotation = 60/3000 s = 20 ms. The disk reads one track, i.e., 512 * 1024 bytes, in one
rotation, so the time taken to read 4 bytes = 20 ms * 4 / (512 * 1024) ≈ 153 ns, which is approximately
4 memory cycles (160 ns). In each such period the DMA steals one memory cycle (40 ns), so the maximum
percentage of time the CPU gets blocked = 40*100/160 = 25

Question 20

CORRECT
Consider an operating system capable of loading and executing a single sequential user process at a
time. The disk head scheduling algorithm used is First Come First Served (FCFS). If FCFS is replaced by
Shortest Seek Time First (SSTF), claimed by the vendor to give 50% better benchmark results, what is
the expected improvement in the I/O performance of user programs?

A 50%

B 40%

C 25%

0%
Input Output Systems GATE-CS-2004
Discuss it

Question 20 Explanation:
Since Operating System can execute a single sequential user process at a time, the disk is accessed in
FCFS manner always. The OS never has a choice to pick an IO from multiple IOs as there is always one
IO at a time

Question 21

CORRECT
A Unix-style i-node has 10 direct pointers and one single, one double and one triple indirect pointers. Disk
block size is 1 Kbyte, disk block address is 32 bits, and 48-bit integers are used. What is the maximum
possible file size ?

A 2^24 bytes

B 2^32 bytes

2^34 bytes

D 2^48 bytes

Input Output Systems GATE-CS-2004


Discuss it

Question 21 Explanation:



Size of disk block = 1 KB

Disk block address = 32 bits,

but 48-bit integers are used for addresses,

therefore address size = 6 bytes

No. of addresses per block = 1024/6 = 170.66,

so roughly 2^8 addresses can be stored per block

Maximum file size = 10 direct + 2^8 (single indirect) +

2^8 * 2^8 (double indirect) + 2^8 * 2^8 * 2^8 (triple indirect)

≈ 2^24 blocks

Since each block is of size 2^10 bytes,

maximum file size = 2^24 * 2^10 = 2^34 bytes
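The solution's approximation (170 addresses per block rounded to 2^8) can be replayed as a sketch:

```python
import math

block = 2 ** 10      # 1 KB disk block
ptrs = 2 ** 8        # 1024/6 = 170.6 addresses per block, approximated to 2^8
                     # (the same approximation the solution above uses)
blocks = 10 + ptrs + ptrs ** 2 + ptrs ** 3   # direct + single + double + triple
print(round(math.log2(blocks * block)))      # prints 34, i.e., about 2^34 bytes
```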

Question 22

WRONG
A hard disk with a transfer rate of 10 Mbytes/ second is constantly transferring data to memory using
DMA. The processor runs at 600 MHz, and takes 300 and 900 clock cycles to initiate and complete DMA
transfer respectively. If the size of the transfer is 20 Kbytes, what is the percentage of processor time
consumed for the transfer operation ?

A 5.0%

B 1.0%

0.5%
0.1%
Input Output Systems GATE-CS-2004
Discuss it

Question 22 Explanation:
Transfer rate = 10 MB per second. Data = 20 KB = 20 * 2^10 bytes. So transfer time =
(20 * 2^10)/(10 * 2^20) s = 2 * 10^-3 s = 2 ms. Processor speed = 600 MHz = 600 * 10^6 cycles/sec.
Cycles required by the CPU for the DMA transfer = 300 + 900 = 1200. So CPU time =
1200/(600 * 10^6) s = 0.002 ms. In % = 0.002/2 * 100 = 0.1%. So (D) is the correct option

Question 23

WRONG
Using a larger block size in a fixed block size file system leads to :
better disk throughput but poorer disk space utilization

B better disk throughput and better disk space utilization

C poorer disk throughput but better disk space utilization

poorer disk throughput and poorer disk space utilization


Input Output Systems GATE-CS-2003
Discuss it

Question 23 Explanation:
Using larger block size makes disk utilization poorer as more space would be wasted for small data in a
block. It may make throughput better as the number of blocks would decrease. A larger block size
guarantees that more data from a single file can be written or read at a time into a single block without
having to move the disk's head to another spot on the disk. The less time you spend moving the heads
across the disk, the more continuous reads/writes per second. The smaller the block size, the more
frequently the head is required to move before a read/write can occur. A larger block size means fewer
blocks to fetch and hence better throughput. But larger block size also means space is wasted when only
small size is required and hence poor utilization.
This solution is contributed by Nitika Bansal

Question 24

WRONG
Which of the following requires a device driver?
Register

B Cache

C Main memory

Disk
Input Output Systems GATE-CS-2001
Discuss it

Question 24 Explanation:

A disk driver is software which enables communication between internal hard disk (or drive) and
computer.
It allows a specific disk drive to interact with the remainder of the computer.

Thus, option (D) is the answer.

Please comment below if you find anything wrong in the above post.

Question 25

WRONG
A graphics card has on board memory of 1 MB. Which of the following modes can the card not support?
1600 x 400 resolution with 256 colours on a 17-inch monitor
1600 x 400 resolution with 16 million colours on a 14-inch monitor

C 800 x 400 resolution with 16 million colours on a 17-inch monitor

D 800 x 800 resolution with 256 colours on a 14-inch monitor

Input Output Systems GATE-CS-2000


Discuss it

Question 25 Explanation:
See question 3 of http://www.geeksforgeeks.org/operating-systems-set-1/
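A sketch that checks each mode against the 1 MB of video memory (assuming 256 colours need 1 byte/pixel and 16 million colours need 3 bytes/pixel; the monitor size is irrelevant):

```python
import math

def vram_needed(width, height, colours):
    bytes_per_pixel = math.ceil(math.log2(colours) / 8)
    return width * height * bytes_per_pixel

modes = [(1600, 400, 256), (1600, 400, 2 ** 24),
         (800, 400, 2 ** 24), (800, 800, 256)]
print([vram_needed(*m) <= 2 ** 20 for m in modes])
# prints [True, False, True, True] -> only mode (B) does not fit
```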
Question 26

WRONG
Consider the situation in which the disk read/write head is currently located at track 45 (of tracks 0-255)
and moving in the positive direction. Assume that the following track requests have been made in this
order: 40, 67, 11, 240, 87. What is the order in which optimised C-SCAN would service these requests
and what is the total seek distance?

A 600

B 810

505
550
Input Output Systems GATE-CS-2015 (Mock Test)
Discuss it

Question 26 Explanation:
Circular scanning works just like the elevator to some extent. It begins its scan toward the nearest end
and works it way all the way to the end of the system. Once it hits the bottom or top it jumps to the other
end and moves in the same direction. Keep in mind that the huge jump doesn't count as a head
movement. Solution: Disk queue: 40, 67, 11, 240, 87 and disk is currently located at track 45.The order
in which optimised C-SCAN would service these requests is shown by the following
diagram.

Total seek distance=(67-45)+(87-67)+(240-87)+(255-240)+(255-0)+(11-0)+(40-11)


=22+20+153+15+255+11+29 =505 Option (C) is the correct answer.
Reference: http://www.cs.iit.edu/~cs561/cs450/disksched/disksched.html http://iete-
elan.ac.in/SolQP/soln/DC14_sol.pdf
This solution is contributed by Nitika Bansal

Question 27

WRONG
Suppose the following disk request sequence (track numbers) for a disk with 100 tracks is given: 45, 20,
90, 10, 50, 60, 80, 25, 70. Assume that the initial position of the R/W head is on track 50. The additional
distance that will be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm is
used compared to the SCAN (Elevator) algorithm (assuming that SCAN algorithm moves towards 100
when it starts execution) is _________ tracks
8

B 9

10

D 11

Input Output Systems GATE-CS-2015 (Set 1)


Discuss it

Question 27 Explanation:
In Shortest seek first (SSTF), closest request to the current position of the head, and then services that
request next. In SCAN (or Elevator) algorithm, requests are serviced only in the current direction of arm
movement until the arm reaches the edge of the disk. When this happens, the direction of the arm
reverses, and the requests that were remaining in the opposite direction are serviced, and so on.
Given a disk with 100 tracks

And Sequence 45, 20, 90, 10, 50, 60, 80, 25, 70.

Initial position of the R/W head is on track 50.

In SSTF, requests are served as following

Next Served Distance Traveled


50 0
45 5
60 15
70 10
80 10
90 10
25 65
20 5
10 10
-----------------------------------
Total Dist = 130

If Simple SCAN is used, requests are served as following

Next Served Distance Traveled


50 0
60 10
70 10
80 10
90 10
45 65 [disk arm goes to 100, then to 45]
25 20
20 5
10 10
-----------------------------------
Total Dist = 140

Extra distance traveled by SSTF relative to simple SCAN = 130 - 140 = -10


If SCAN with LOOK is used, requests are served as following

Next Served Distance Traveled

50 0

60 10

70 10

80 10

90 10

45 45 [disk arm comes back from 90]

25 20

20 5

10 10

-----------------------------------

Total Dist = 120

Extra Distance traveled in SSTF = 130 - 120 = 10
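The three head-movement totals can be reproduced with a short simulation (a sketch; function names are ours, and scan_up assumes the arm first moves toward the higher tracks, with at least one request at or above the start):

```python
def sstf(start, requests):
    pos, pending, dist = start, list(requests), 0
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))  # nearest request first
        dist += abs(nxt - pos)
        pos = nxt
        pending.remove(nxt)
    return dist

def scan_up(start, requests, max_track, look=False):
    up = sorted(t for t in requests if t >= start)
    down = sorted(t for t in requests if t < start)
    turn = up[-1] if look else max_track   # LOOK reverses at the last request
    dist = turn - start                    # sweep up to the turning point
    if down:
        dist += turn - down[0]             # come back down to the lowest request
    return dist

reqs = [45, 20, 90, 10, 50, 60, 80, 25, 70]
print(sstf(50, reqs))                     # prints 130
print(scan_up(50, reqs, 100))             # prints 140 (simple SCAN)
print(scan_up(50, reqs, 100, look=True))  # prints 120 (SCAN with LOOK)
```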

Question 28

WRONG
Consider a disk pack with a seek time of 4 milliseconds and rotational speed of 10000 rotations per
minute (RPM). It has 600 sectors per track and each sector can store 512 bytes of data. Consider a file
stored in the disk. The file contains 2000 sectors. Assume that every sector access necessitates a seek,
and the average rotational latency for accessing each sector is half of the time for one complete rotation.
The total time (in milliseconds) needed to read the entire file is _________.
14020

B 14000

25030

D 15000

Input Output Systems GATE-CS-2015 (Set 1)


Discuss it
Question 28 Explanation:
Seek time (given) = 4 ms

RPM = 10000 rotations per minute, i.e., 10000 rotations in 60 s

So, one rotation takes 60/10000 s = 6 ms

Rotational latency = 1/2 * 6 ms = 3 ms

To access a sector, total time = seek time + rotational latency + transfer time

To calculate the transfer time, first find the transfer rate:

Transfer rate = bytes per track / time for one rotation
              = 600 * 512 / 6 ms = 51200 bytes/ms

Transfer time = total bytes to be transferred / transfer rate
              = 2000 * 512 / 51200 = 20 ms

Each sector access requires seek time + rotational latency
= 4 ms + 3 ms = 7 ms

Total for 2000 sectors = 2000 * 7 ms = 14000 ms

Time to read the entire file = 14000 + 20 (transfer time)
                             = 14020 ms

Question 29

CORRECT
Consider a typical disk that rotates at 15000 rotations per minute (RPM) and has a transfer rate of 50 × 10^6 bytes/sec. If the average seek time of the disk is twice the average rotational delay and the controller's transfer time is 10 times the disk transfer time, the average time (in milliseconds) to read or write a 512 byte sector of the disk is _____________
6.1
Input Output Systems GATE-CS-2015 (Set 2)
Discuss it

Question 29 Explanation:
Disk latency = Seek Time + Rotation Time + Transfer Time + Controller Overhead

Seek Time: depends on the number of tracks the arm moves and the seek speed of the disk

Rotation Time: depends on the rotational speed and how far the sector is from the head

Transfer Time: depends on the data rate (bandwidth) of the disk (bit density) and the size of the request

Average Rotational Time = (0.5) / (15000 / 60) = 2 milliseconds

[On average, half a rotation is made]

It is given that the average seek time is twice the average rotational delay.

So Avg. Seek Time = 2 * 2 = 4 milliseconds

Transfer Time = 512 / (50 × 10^6 bytes/sec)
              = 10.24 microseconds

Given that the controller time is 10 times the average transfer time:

Controller Overhead = 10 * 10.24 microseconds
                    = 0.1 milliseconds (approximately)

Disk latency = Seek Time + Rotation Time + Transfer Time + Controller Overhead
             = 4 + 2 + 10.24 × 10^-3 + 0.1 milliseconds
             ≈ 6.1 milliseconds

Refer http://cse.unl.edu/~jiang/cse430/Lecture%20Notes/reference-ppt-slides/Disk_Storage_Systems_2.ppt

Question 30

CORRECT
Consider a disk queue with requests for I/O to blocks on cylinders 47, 38, 121, 191, 87, 11, 92, 10. The C-
LOOK scheduling algorithm is used. The head is initially at cylinder number 63, moving towards larger
cylinder numbers on its servicing pass. The cylinders are numbered from 0 to 199. The total head
movement (in number of cylinders) incurred while servicing these requests is: Note : This question was
asked as Numerical Answer Type.

A 346

165

C 154

D 173

Input Output Systems GATE-CS-2016 (Set 1)


Discuss it

Question 30 Explanation:
The head movement would be :
63 => 87 24 movements
87 => 92 5 movements

92 => 121 29 movements

121 => 191 70 movements

191 => 10 0 movements [in C-LOOK, the jump back to the lowest request is not counted]

10 => 11 1 movement

11 => 38 27 movements

38 => 47 9 movements

Total head movements = 165

Question 31

WRONG
Which of the following DMA transfer modes and interrupt handling mechanisms will enable the highest I/O bandwidth?
Transparent DMA and Polling interrupts

B Cycle-stealing and Vectored interrupts

Block transfer and Vectored interrupts

D Block transfer and Polling interrupts

Process Management Input Output Systems Computer Organization an

Question 31 Explanation:
In block transfer mode, the DMA controller transfers an entire block of data in a single burst, which gives the highest I/O bandwidth among the DMA modes. Vectored interrupts let the CPU jump directly to the appropriate interrupt service routine without polling every device, minimizing interrupt-handling overhead. So block transfer with vectored interrupts enables the highest I/O bandwidth.

Operating Systems | Set 10


Following questions have been asked in GATE 2008 CS exam.

1) The data blocks of a very large file in the Unix file system are allocated using
(A) contiguous allocation
(B) linked allocation
(C) indexed allocation
(D) an extension of indexed allocation

Answer (D)
The Unix file system uses an extension of indexed allocation. It uses direct blocks, single indirect blocks,
double indirect blocks and triple indirect blocks. Following diagram shows implementation of Unix file
system. The diagram is taken from Operating System Concept book.
2) The P and V operations on counting semaphores, where s is a counting semaphore, are defined
as follows:

P(s) : s = s - 1;

if (s < 0) then wait;

V(s) : s = s + 1;

if (s <= 0) then wakeup a process waiting on s;

Assume that Pb and Vb the wait and signal operations on binary semaphores are provided. Two
binary semaphores Xb and Yb are used to implement the semaphore operations P(s) and V(s) as
follows:

P(s) : Pb(Xb);
       s = s - 1;
       if (s < 0) {
           Vb(Xb);
           Pb(Yb);
       }
       else Vb(Xb);

V(s) : Pb(Xb);
       s = s + 1;
       if (s <= 0) Vb(Yb);
       Vb(Xb);

The initial values of Xb and Yb are respectively


(A) 0 and 0
(B) 0 and 1
(C) 1 and 0
(D) 1 and 1

Answer (C)
Both P(s) and V(s) perform Pb(Xb) as their first step. If Xb is 0, then all processes executing these operations will be blocked. Therefore, Xb must be 1.
If Yb were 1, two processes could execute P(s) one after the other (implying 2 processes in the critical section). Consider the case when s = 1, Yb = 1. So Yb must be 0.

3) Which of the following statements about synchronous and asynchronous I/O is NOT true?
(A) An ISR is invoked on completion of I/O in synchronous I/O but not in asynchronous I/O
(B) In both synchronous and asynchronous I/O, an ISR (Interrupt Service Routine) is invoked after
completion of the I/O
(C) A process making a synchronous I/O call waits until I/O is complete, but a process making an
asynchronous I/O call does not wait for completion of the I/O
(D) In the case of synchronous I/O, the process waiting for the completion of I/O is woken up by the ISR
that is invoked after the completion of I/O

Answer (A)
In both Synchronous and Asynchronous, an interrupt is generated on completion of I/O. In Synchronous,
interrupt is generated to wake up the process waiting for I/O. In Asynchronous, interrupt is generated to
inform the process that the I/O is complete and it can process the data from the I/O operation.
See this for more details.
Peterson's Algorithm for Mutual Exclusion |
Set 1 (Basic C implementation)
Problem: Given 2 process i and j, you need to write a program that can guarantee mutual exclusion
between the two without any additional hardware support.

Solution: There can be multiple ways to solve this problem, but most of them require additional hardware support. The simplest and most popular way to do this is by using Peterson's Algorithm for mutual exclusion. It was developed by Peterson in 1981, though the initial work in this direction was done by Theodorus Jozef Dekker, who came up with Dekker's algorithm in 1960; this was later refined by Peterson and came to be known as Peterson's Algorithm.

Basically, Peterson's algorithm provides guaranteed mutual exclusion by using only shared memory. It uses two ideas in the algorithm:

1. Willingness to acquire lock.


2. Turn to acquire lock.

Prerequisite : Multithreading in C

Explanation:

The idea is that a thread first expresses its desire to acquire the lock by setting flag[self] = 1, and then gives the other thread a chance to acquire the lock. If the other thread desires the lock, it gets the lock and then passes the chance back to the first thread. If it does not desire the lock, the while loop breaks and the first thread gets the chance.

Implementation in C language

// Filename: peterson_spinlock.c
// Use below command to compile:
// gcc -pthread peterson_spinlock.c -o peterson_spinlock

#include <stdio.h>
#include <pthread.h>
#include "mythreads.h"

int flag[2];
int turn;
const int MAX = 1e9;
int ans = 0;

void lock_init()
{
    // Initialize lock by resetting the desire of
    // both the threads to acquire the locks.
    // And, giving turn to one of them.
    flag[0] = flag[1] = 0;
    turn = 0;
}

// Executed before entering critical section
void lock(int self)
{
    // Set flag[self] = 1 saying you want to acquire lock
    flag[self] = 1;

    // But, first give the other thread the chance to
    // acquire lock
    turn = 1 - self;

    // Wait until the other thread loses the desire
    // to acquire lock or it is your turn to get the lock.
    while (flag[1 - self] == 1 && turn == 1 - self)
        ;
}

// Executed after leaving critical section
void unlock(int self)
{
    // You do not desire to acquire lock in future.
    // This will allow the other thread to acquire
    // the lock.
    flag[self] = 0;
}

// A sample function run by the two threads created
// in main()
void* func(void *s)
{
    int i = 0;
    int self = (int)(size_t)s;  // thread id passed as (void*)0 or (void*)1
    printf("Thread Entered: %d\n", self);

    lock(self);

    // Critical section (only one thread
    // can enter here at a time)
    for (i = 0; i < MAX; i++)
        ans++;

    unlock(self);
    return NULL;
}

// Driver code
int main()
{
    // Initialize the lock, then fork 2 threads
    pthread_t p1, p2;
    lock_init();

    // Create two threads (both run func)
    Pthread_create(&p1, NULL, func, (void*)0);
    Pthread_create(&p2, NULL, func, (void*)1);

    // Wait for the threads to end.
    Pthread_join(p1, NULL);
    Pthread_join(p2, NULL);

    printf("Actual Count: %d | Expected Count: %d\n",
           ans, MAX * 2);
    return 0;
}

// mythreads.h (A wrapper header file with assert
// statements)

#ifndef __MYTHREADS_h__
#define __MYTHREADS_h__

#include <pthread.h>
#include <assert.h>
#include <sched.h>

void Pthread_mutex_lock(pthread_mutex_t *m)
{
    int rc = pthread_mutex_lock(m);
    assert(rc == 0);
}

void Pthread_mutex_unlock(pthread_mutex_t *m)
{
    int rc = pthread_mutex_unlock(m);
    assert(rc == 0);
}

void Pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                    void *(*start_routine)(void*), void *arg)
{
    int rc = pthread_create(thread, attr, start_routine, arg);
    assert(rc == 0);
}

void Pthread_join(pthread_t thread, void **value_ptr)
{
    int rc = pthread_join(thread, value_ptr);
    assert(rc == 0);
}

#endif // __MYTHREADS_h__

Output:

Thread Entered: 1

Thread Entered: 0

Actual Count: 2000000000 | Expected Count: 2000000000

The produced output is 2 × 10^9, where 10^9 increments are performed by each of the two threads.

Peterson's Algorithm for Mutual Exclusion |


Set 2 (CPU Cycles and Memory Fence)
Problem: Given 2 process i and j, you need to write a program that can guarantee mutual exclusion
between the two without any additional hardware support.

We strongly recommend to refer below basic solution discussed in previous article.


Peterson's Algorithm for Mutual Exclusion | Set 1

We would be resolving 2 issues in the previous algorithm.

Wastage of CPU clock cycles

In layman's terms, when a thread was waiting for its turn, it ended up in a long while loop that tested the condition millions of times per second, thus doing unnecessary computation. There is a better way to wait, and it is known as yield.
To understand what it does, we need to dig deep into how the Process scheduler works in Linux. The idea
mentioned here is a simplified version of the scheduler, the actual implementation has lots of
complications.

Consider the following example,


There are three processes, P1, P2 and P3. Process P3 has a while loop similar to the one in our code, doing not-so-useful computation, and it exits from the loop only when P2 finishes its execution. The scheduler puts all of them in a round robin queue. Now, say the clock speed of the processor is 1000000/sec, and it allocates 100 clocks to each process in each iteration. Then, first P1 will run for 100 clocks (0.0001 seconds), then P2 (0.0001 seconds), followed by P3 (0.0001 seconds). Since there are no more processes, this cycle repeats until P2 ends, followed by P3's execution and eventually its termination.

This is a complete waste of the 100 CPU clock cycles. To avoid this, we mutually give up the CPU time
slice, i.e. yield, which essentially ends this time slice and the scheduler picks up the next process to run.
Now, we test our condition once, then we give up the CPU. Considering our test takes 25 clock cycles, we save 75% of our computation in a time slice.
Considering the processor clock speed of 1 MHz, this is a lot of saving!
Different distributions provide different function to achieve this functionality. Linux provides sched_yield().

void lock(int self)
{
    flag[self] = 1;
    turn = 1 - self;

    while (flag[1 - self] == 1 && turn == 1 - self)
    {
        // Only change is the addition of
        // sched_yield() call
        sched_yield();
    }
}

Memory fence.

The code in the earlier tutorial might have worked on most systems, but it was not 100% correct. The logic was perfect, but most modern CPUs employ performance optimizations that can result in out-of-order execution. This reordering of memory operations (loads and stores) normally goes unnoticed within a single thread of execution, but can cause unpredictable behaviour in concurrent programs.

Consider this example,

while (f == 0);

// Memory fence required here

print x;

In the above example, the compiler (or CPU) may consider the two statements independent of each other and re-order them for efficiency, which can lead to problems for concurrent programs. To avoid this, we place a memory fence to give a hint to the compiler about the possible relationship between the statements across the barrier.

So the order of statements,

flag[self] = 1;
turn = 1-self;
while (turn condition check)
yield();

has to be exactly the same in order for the lock to work, otherwise it will end up in a deadlock condition.

To ensure this, compilers provide an instruction that prevents reordering of statements across this barrier. In the case of gcc, it is __sync_synchronize().

So the modified code becomes,


Full Implementation in C:

// Filename: peterson_yieldlock_memoryfence.c
// Use below command to compile:
// gcc -pthread peterson_yieldlock_memoryfence.c -o peterson_yieldlock_memoryfence

#include <stdio.h>
#include <pthread.h>
#include "mythreads.h"

int flag[2];
int turn;
const int MAX = 1e9;
int ans = 0;

void lock_init()
{
    // Initialize lock by resetting the desire of
    // both the threads to acquire the locks.
    // And, giving turn to one of them.
    flag[0] = flag[1] = 0;
    turn = 0;
}

// Executed before entering critical section
void lock(int self)
{
    // Set flag[self] = 1 saying you want
    // to acquire lock
    flag[self] = 1;

    // But, first give the other thread the
    // chance to acquire lock
    turn = 1 - self;

    // Memory fence to prevent the reordering
    // of instructions beyond this barrier.
    __sync_synchronize();

    // Wait until the other thread loses the
    // desire to acquire lock or it is your
    // turn to get the lock.
    while (flag[1 - self] == 1 && turn == 1 - self)
    {
        // Yield to avoid wastage of resources.
        sched_yield();
    }
}

// Executed after leaving critical section
void unlock(int self)
{
    // You do not desire to acquire lock in future.
    // This will allow the other thread to acquire
    // the lock.
    flag[self] = 0;
}

// A sample function run by the two threads created
// in main()
void* func(void *s)
{
    int i = 0;
    int self = (int)(size_t)s;  // thread id passed as (void*)0 or (void*)1
    printf("Thread Entered: %d\n", self);

    lock(self);

    // Critical section (only one thread
    // can enter here at a time)
    for (i = 0; i < MAX; i++)
        ans++;

    unlock(self);
    return NULL;
}

// Driver code
int main()
{
    pthread_t p1, p2;

    // Initialize the lock
    lock_init();

    // Create two threads (both run func)
    Pthread_create(&p1, NULL, func, (void*)0);
    Pthread_create(&p2, NULL, func, (void*)1);

    // Wait for the threads to end.
    Pthread_join(p1, NULL);
    Pthread_join(p2, NULL);

    printf("Actual Count: %d | Expected Count: %d\n",
           ans, MAX * 2);
    return 0;
}
// mythreads.h (A wrapper header file with assert
// statements)

#ifndef __MYTHREADS_h__
#define __MYTHREADS_h__

#include <pthread.h>
#include <assert.h>
#include <sched.h>

void Pthread_mutex_lock(pthread_mutex_t *m)
{
    int rc = pthread_mutex_lock(m);
    assert(rc == 0);
}

void Pthread_mutex_unlock(pthread_mutex_t *m)
{
    int rc = pthread_mutex_unlock(m);
    assert(rc == 0);
}

void Pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                    void *(*start_routine)(void*), void *arg)
{
    int rc = pthread_create(thread, attr, start_routine, arg);
    assert(rc == 0);
}

void Pthread_join(pthread_t thread, void **value_ptr)
{
    int rc = pthread_join(thread, value_ptr);
    assert(rc == 0);
}

#endif // __MYTHREADS_h__

Output:

Thread Entered: 1

Thread Entered: 0

Actual Count: 2000000000 | Expected Count: 2000000000

Last Minute Notes Operating Systems


Operating Systems: It is the interface between the user and the computer hardware.

Types of OS:
Batch OS: A set of similar jobs are stored in the main memory for execution. A job
gets assigned to the CPU, only when the execution of the previous job completes.
Multiprogramming OS: The main memory consists of jobs waiting for CPU time.
The OS selects one of the processes and assigns it the CPU time. Whenever the
executing process needs to wait for any other operation (like I/O), the OS selects
another process from the job queue and assigns it the CPU. This way, the CPU is
never kept idle and the user gets the flavor of getting multiple tasks done at once.
Multitasking OS: Multitasking OS combines the benefits of Multiprogramming OS
and CPU scheduling to perform quick switches between jobs. The switch is so quick
that the user can interact with each program as it runs
Time Sharing OS: Time sharing systems require interaction with the user to instruct
the OS to perform various tasks. The OS responds with an output. The instructions are
usually given through an input device like the keyboard.
Real Time OS : Real Time OS are usually built for dedicated systems to accomplish
a specific set of tasks within deadlines.
Threads

A thread is a lightweight process and forms the basic unit of CPU utilization. A process can perform more than one task at the same time by including multiple threads.

A thread has its own program counter, register set, and stack
A thread shares with other threads of the same process the code section, the
data section, files and signals.

A child process of a given process can be created by using the fork() system call. A process that makes n fork() system calls generates 2^n - 1 child processes.
There are two types of threads:

User threads
Kernel threads

USER LEVEL THREAD                           KERNEL LEVEL THREAD

User threads are implemented by users.      Kernel threads are implemented by the OS.

User level threads are not recognized       Kernel threads are recognized by the OS.
by the OS.

Implementation of user threads is easy.     Implementation of kernel threads is
                                            complicated.

Context switch time is less.                Context switch time is more.

Context switch requires no hardware         Hardware support is needed.
support.

If one user level thread performs a         If one kernel thread performs a blocking
blocking operation then the entire          operation then another thread can
process is blocked.                         continue execution.

Examples of user threads: Java threads, POSIX threads.
Examples of kernel threads: Windows, Solaris.

Process:

A process is a program under execution. The value of program counter (PC) indicates the
address of the current instruction of the process being executed. Each process is
represented by a Process Control Block (PCB).

Process Scheduling:

Below are different time with respect to a process.


Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time - Arrival Time

Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time - Burst Time

Why do we need scheduling?


A typical process involves both I/O time and CPU time. In a uniprogramming system like
MS-DOS, time spent waiting for I/O is wasted and CPU is free during this time. In
multiprogramming systems, one process can use CPU while another is waiting for I/O. This
is possible only with process scheduling.
Objectives of Process Scheduling Algorithm

Max CPU utilization [Keep CPU as busy as possible]

Fair allocation of CPU.

Max throughput [Number of processes that complete their execution per time unit]

Min turnaround time [Time taken by a process to finish execution]

Min waiting time [Time a process waits in ready queue]

Min response time [Time when a process produces first response]

Different Scheduling Algorithms


First Come First Serve (FCFS): Simplest scheduling algorithm that schedules according to
arrival times of processes.

Shortest Job First (SJF): Processes which have the shortest burst time are scheduled first.

Shortest Remaining Time First (SRTF): It is the preemptive mode of the SJF algorithm in which jobs are scheduled according to the shortest remaining time.

Round Robin Scheduling: Each process is assigned a fixed time in cyclic way.

Priority Based Scheduling (Non-Preemptive): In this scheduling, processes are scheduled according to their priorities, i.e., the highest priority process is scheduled first. If the priorities of two processes match, they are scheduled according to arrival time.

Highest Response Ratio Next (HRRN): In this scheduling, the process with the highest response ratio is scheduled. This algorithm avoids starvation.

Response Ratio = (Waiting Time + Burst time) / Burst time

Multilevel Queue Scheduling: According to the priority of process, processes are placed in
the different queues. Generally high priority process are placed in the top level queue. Only
after completion of processes from top level queue, lower level queued processes are
scheduled.
Multi level Feedback Queue Scheduling: It allows the process to move in between
queues. The idea is to separate processes according to the characteristics of their CPU
bursts. If a process uses too much CPU time, it is moved to a lower-priority queue.

Some useful facts about Scheduling Algorithms:


1) FCFS can cause long waiting times, especially when the first job takes too much CPU
time.

2) Both SJF and Shortest Remaining time first algorithms may cause starvation. Consider a
situation when long process is there in ready queue and shorter processes keep coming.

3) If time quantum for Round Robin scheduling is very large, then it behaves same as FCFS
scheduling.

4) SJF is optimal in terms of average waiting time for a given set of processes. SJF gives
minimum average waiting time, but problems with SJF is how to know/predict time of next
job.

The Critical Section Problem

Critical Section: The portion of the code in the program where shared variables are
accessed and/or updated.

Remainder Section: The remaining portion of the program excluding the Critical Section.

Race around Condition: The final output of the code depends on the order in which the
variables are accessed. This is termed as the race around condition.

A solution for the critical section problem must satisfy the following three conditions:

1. Mutual Exclusion: If a process Pi is executing in its critical section, then no other


process is allowed to enter into the critical section.
2. Progress: If no process is executing in the critical section, then the decision of a
process to enter a critical section cannot be made by any other process that is
executing in its remainder section. The selection of the process cannot be postponed
indefinitely.
3. Bounded Waiting: There exists a bound on the number of times other processes
can enter into the critical section after a process has made request to access the
critical section and before the requested is granted.

Synchronization Tools

Semaphores: A semaphore is an integer variable that is accessed only through two atomic
operations, wait () and signal (). An atomic operation is executed in a single CPU time slice
without any pre-emption.

Semaphores are of two types:

1. Counting Semaphore: A counting semaphore is an integer variable whose value


can range over an unrestricted domain.
2. Mutex: Binary Semaphores are called Mutex. These can have only two values, 0 or
1. The operations wait () and signal () operate on these in a similar fashion.

Deadlock

A situation where a set of processes are blocked because each process is holding a
resource and waiting for another resource acquired by some other process.

Deadlock can arise if following four conditions hold simultaneously (Necessary


Conditions)
Mutual Exclusion: One or more than one resource are non-sharable (Only one process
can use at a time)
Hold and Wait: A process is holding at least one resource and waiting for resources.
No Preemption: A resource cannot be taken from a process unless the process releases
the resource.
Circular Wait: A set of processes are waiting for each other in circular form.

Methods for handling deadlock


There are three ways to handle deadlock
1) Deadlock prevention or avoidance: The idea is to not let the system into deadlock state.

2) Deadlock detection and recovery: Let deadlock occur, then do preemption to handle it
once occurred.

3) Ignore the problem all together: If deadlock is very rare, then let it happen and reboot the
system. This is the approach that both Windows and UNIX take.

Banker's Algorithm:
This algorithm handles multiple instances of the same resource: before granting a request, it checks that the system remains in a safe state, i.e., that some order exists in which every process can obtain its maximum demand and finish.

Memory Management:

These techniques allow the memory to be shared among multiple processes.


Overlays: The memory should contain only those instructions and data that are required at a
given time.

Swapping: In a multiprogramming system, a process that has used up its time slice can be swapped out of memory to bring another process in.

Memory Management Techniques:

1: Single Partition Allocation Schemes: The memory is divided into two parts. One part
is kept for use by the OS and the other for use by the users.

2: Multiple Partition Schemes:


Fixed Partition: The memory is divided into fixed size partitions.
Variable Partition: The memory is divided into variable sized partitions.

Variable partition allocation schemes:


First Fit: The arriving process is allotted the first hole of memory in which it fits completely.
Best Fit: The arriving process is allotted the hole of memory in which it fits the best by
leaving the minimum memory empty.
Worst Fit: The arriving process is allotted the hole of memory in which it leaves the maximum gap.
Note: Best fit does not necessarily give the best results for memory allocation.

1. Paging: The physical memory is divided into equal-sized frames, and the logical memory is divided into fixed-size pages. The size of a physical memory frame is equal to the size of a virtual memory page.

2. Segmentation: Segmentation is implemented to give users view of memory. The logical


address space is a collection of segments. Segmentation can be implemented with or
without the use of paging.

Page Fault
A page fault is a type of interrupt, raised by the hardware when a running program accesses
a memory page that is mapped into the virtual address space, but not loaded in physical
memory.

Page Replacement Algorithms

First In First Out


This is the simplest page replacement algorithm. In this algorithm, operating system keeps
track of all pages in the memory in a queue, oldest page is in the front of the queue. When a
page needs to be replaced page in the front of the queue is selected for removal.

For example, consider the page reference string 1, 3, 0, 3, 5, 6 and 3 page slots.

Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots -> 3 page faults.
When 3 comes, it is already in memory, so -> 0 page faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page, i.e., 1 -> 1 page fault.
Finally 6 comes; it is also not available in memory, so it replaces the oldest page, i.e., 3 -> 1 page fault.

Belady's anomaly
Belady's anomaly proves that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. For example, for the reference string 3 2 1 0 3 2 4 3 2 1 0 4 and 3 slots, we get 9 total page faults, but if we increase the slots to 4, we get 10 page faults.
Optimal Page Replacement
In this algorithm, the page replaced is the one that will not be used for the longest duration of time in the future.

Let us consider the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots.

Initially all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots -> 4 page faults.
0 is already there, so -> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time in the future -> 1 page fault.
0 is already there, so -> 0 page faults.
4 takes the place of 1 -> 1 page fault.

For the rest of the reference string -> 0 page faults, because the pages are already available in memory.

Optimal page replacement is perfect, but not possible in practice as the operating system cannot know future requests. The use of optimal page replacement is to set up a benchmark against which other replacement algorithms can be analyzed.

Least Recently Used

In this algorithm, the page replaced is the one that is least recently used.

Let us consider the page reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 and 4 page slots.
Initially all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots -> 4 page faults.
0 is already there, so -> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is least recently used -> 1 page fault.
0 is already in memory, so -> 0 page faults.
4 takes the place of 1 -> 1 page fault.
For the rest of the reference string -> 0 page faults, because the pages are already available in memory.
