

UNIT 4: Memory Management

Structure

4.0 Objectives
4.1 Introduction
4.2 Requirements of memory management
4.3 Single Process Monitor
4.4 Partitioned memory allocation: static
4.5 Partitioned memory allocation: dynamic
4.6 Paging
4.7 Segmentation
4.8 Virtual Memory
4.9 Comparison of memory management techniques
4.10 OS policies for virtual memory
4.11 Summary
4.12 Exercise
4.13 Suggested Readings

4.0 Objectives

At the end of this chapter you will know:

Static memory partition
Dynamic memory partition
Implementation of Segmentation and Paging in OS
Virtual memory in OS
Operating system policies for virtual memory

4.1 Introduction

The main memory in a uniprogramming system is divided into two parts: one part is for the Operating System and the other part is for the program that is currently being executed. In a multiprogramming system, the user part of memory is further subdivided to hold several processes. The Operating System carries out this subdivision dynamically; this task is called memory management. In a multiprogramming system, effective memory management is very important. If only a few processes are in memory, then for much of the time all of the processes will be waiting for I/O and the processor will be idle. Thus memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time.

In the following sections you will study requirements of memory management, memory management
techniques like fixed partitioning, dynamic partitioning, segmentation, paging and virtual memory.
At the end of this chapter you will find a brief summary of the chapter which is followed by an
exercise and some suggested readings.

4.2 Requirements of memory management

Before studying the memory management techniques, it is important to understand the requirements
of memory management in a system. The requirements have been listed below:

Relocation
Protection
Sharing
Logical organization
Physical organization

Relocation

In a multiprogramming system, a number of processes share the available main memory. The programmer cannot possibly know beforehand which programs the main memory will hold when his own program is executing. The system should allow swapping of processes in and out of main memory, to maximize processor utilization by providing a large pool of ready processes to execute. It would be quite restrictive to insist that a program be placed in the same memory region as before when it is swapped back into main memory following a swap out to disk. The system should therefore allow the process to be relocated to a different memory region.

Figure 4.1 shows how these facts may lead to some technical issues as far as addressing is concerned.
In the figure a process image has been depicted. Suppose that the process image is held in a contiguous region of the main memory. It is obvious that the OS has to know the location of the execution stack, the process control information and the entry point of the program to begin the execution. As the OS is managing memory and is responsible for allocating memory regions to processes, it can easily know these locations. The memory references within a program
however, should be dealt with by the processor. A branch instruction contains the address of the
instruction that is to be executed next. Similarly, a data reference instruction contains the address of
the word or byte that has been referred. The OS software and the processor hardware should be
capable of translating memory references in the program code into actual physical memory addresses.

Figure 4.1: Addressing Requirements for a Process

Protection

Each and every process should be protected from other processes, so that they do not interfere either
intentionally or accidentally. In other words, any program of one process should not be able to refer
to memory locations of any other process without its permission. Because the location of a program in main memory cannot be predicted, it is not possible to check absolute addresses at compile time to assure protection. So, satisfaction of the relocation requirement increases the difficulty of satisfying the protection requirement. Moreover, almost all programming languages allow addresses to be calculated dynamically at run time. Therefore, all the memory locations referred to by a process are checked at run time. This is done to confirm that all those locations belong to that process.

Generally, any portion (either program or data) of the operating system cannot be accessed by a user
process. A program in one process cannot branch to an instruction in another process. Similarly,
without any special arrangement, a program in one process cannot access the data area of another
process. Such instructions should be aborted by the processor at the point of execution. The memory protection requirement must be satisfied by the processor (hardware) rather than by the OS (software).
The reason behind this is that the OS cannot predict all the memory references that a program will
make. Even if this were possible, it would be very time consuming to scan each program in advance
to look for violations in memory-references. So, it is possible to check if a memory reference is
permissible or not, only at the time when the corresponding instruction is executing. The hardware of
the processor should be capable enough to accomplish this.

Sharing

Any system should be flexible enough to allow a number of processes to access the same region of
main memory. For instance, if some processes are executing the same program, then it is beneficial to
permit each process to access the same copy of the program rather than have its own separate copy. If
a number of processes are doing a task together, then they may have to share access to the same data
structure. It is the duty of the memory management system to control the access to shared memory
areas, so that the protection requirements as discussed above are not violated. We will see that the mechanisms used to support relocation also support sharing capabilities.

Logical organization

The main memory in a computer system is usually organized as a one-dimensional or linear address
space that consists of a sequence of words or bytes. At its physical level, the secondary memory of a
computer system is also organized in a similar fashion. Even though this organization represents the
actual hardware of the machine, it does not resemble the typical method in which programs are
constructed. Almost all the programs are organized into modules, some of which contain data that
can be modified, and some of which cannot be modified (read only or execute only). If the OS and
hardware are able to deal with the user programs and data in the form of modules of some kind, then
a number of advantages can be realized:

1. Modules can be written and compiled independently. All references made from one
module to another can be resolved by the system at run time.

2. Different modules can be given different degrees of protection (read only, execute only),
with modest additional overhead.

3. Modules can also be shared among processes. A user views his problem in the form of
modules. Therefore, the advantage of sharing on a module level is that a user can easily
specify the sharing that he desires.

Segmentation is one of the memory management techniques that can most readily satisfy these
requirements. It has been discussed in detail in section 4.7.

Physical organization

We have already discussed that the computer memory is organized into a minimum of two levels: primary memory (main memory) and secondary memory. Main memory provides faster access, but
at a relatively higher cost. Moreover, the main memory is volatile, i.e., it does not provide a
permanent storage facility. On the other hand, the secondary memory is non-volatile. It provides a
slower, but a cheaper memory access as compared to the main memory. In a nutshell, the main
memory is smaller and holds programs and data that are currently being used. The secondary memory
is larger and is used to hold programs and data for a longer duration. A major issue in this two-level memory organization is the flow of information between the main memory and the secondary memory. If the responsibility for this flow is assigned to the individual programmer, then the following problems may be encountered:

1. The main memory available may not be sufficient to hold the program and its data. In such a case, the programmer must perform overlaying, in which the program and data are organized in such a way that various modules can be assigned the same region of memory. A main program is responsible for switching the modules in and out as required. Even with the aid of compiler tools, overlay programming wastes programmer time.

2. In a multiprogramming environment, the programmer cannot predict how much space will be available, or where that space will be, at the time of coding.

Thus, the responsibility of moving the information between main memory and secondary memory
should be assigned to the system. This task is the essence of memory management.

4.3 Single Process Monitor

Single process monitor is the simplest approach to memory management. There are two sections in memory: one section is for the operating system program (monitor) and the other section is for the user program. In this approach, the OS keeps track of just the starting and ending locations that are available for allocation to user programs. To allocate a contiguous free storage space to the user program, the OS is loaded either at the top or at the bottom (one of the extreme ends).

The main factor that affects this decision is the location of the interrupt vector. The interrupt vector is generally located in low memory. Therefore, the OS program (also called the monitor) is also placed in low memory. The OS loads a user program (new process) into memory and passes control to it. Having received control, the program runs till it is completed or it is terminated due to some error or an I/O operation. When the currently executing program completes or terminates, the OS loads another program into memory for execution. Such a memory management technique was generally used in single process operating systems such as CP/M.

While designing any memory management technique, two things should be taken care of: sharing and protection of code. However, in a single process environment, the memory holds only one process at a time. Therefore, sharing of code and data does not make much sense. For the same reason, single process monitor hardly supports protection. But the OS program should be protected from user code; otherwise the system may crash.

In a single process monitor, dedicated registers are used to enforce protection. The process has been
explained below:

Usually, the OS occupies a low memory area. The highest address of the OS code is stored in a register known as the fence register. To access a specific memory location, the user program generates a memory address. This address is compared with the content of the fence register. If the generated address is below the fence, then it is trapped and permission is denied. Only the OS is allowed to modify the fence register, as this is a privileged operation.
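
A minimal sketch of this check in Python is shown below; the fence value and the trap behavior are hypothetical, since the real comparison is wired into the processor hardware:

    # Sketch of fence-register protection; the fence value is hypothetical.
    FENCE_REGISTER = 0x2000          # highest address occupied by the OS monitor

    def check_access(address, privileged=False):
        # Trap any user-mode reference that falls at or below the fence.
        if address <= FENCE_REGISTER and not privileged:
            raise MemoryError(f"trap: address {address:#x} lies in the OS area")
        return address               # legal address; access proceeds

    check_access(0x3A00)             # user reference above the fence: allowed
    # check_access(0x1F00)           # would raise: address is below the fence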

Single process monitor is a very simple memory management technique. However, CPU and memory
capacity is not efficiently used, as there is no support for multiprogramming. Only a single program
resides in the memory at a time. This may lead to underutilization of memory if the program does not
occupy the whole memory. Additionally, if the program waits for some I/O operation, then the CPU
becomes idle for that time duration.

4.4 Partitioned memory allocation: static

Almost all memory management techniques divide the main memory into two portions: one fixed portion is allotted to the OS and the rest of the memory space is available for user processes. The simplest method to manage this available space is to divide it into a number of regions having fixed boundaries.

There are two methods of creating fixed partitions. One method is to divide the available space into equal-sized partitions. Figure 4.2 below shows a scenario where the available memory space has been divided into partitions of equal size. In this case, a process can be loaded into an available partition whose size is equal to or greater than the process size. If all the partitions are occupied and none of the processes is in the ready or running state, then the OS can swap out any process from memory to load another process. This is done to keep the processor busy so that it does not sit idle. But what are the problems with using fixed partitions of equal size?

Figure 4.2: Equal size partitions



A program may be too big to fit into a partition. In such cases, the programmer should design the program with the use of overlays so that at any time only a portion of the program has to be loaded into memory. If a module that is required is not present in memory, then that module is loaded into the program's partition, overlaying everything (program and data) present there.

Another issue is that main memory is not efficiently utilized. If a program of size less than 8 Mbytes is loaded into a partition, then the rest of the space in that partition is wasted. Even if the available space in that partition is sufficient to accommodate another program, it will not be utilized. The wastage of memory space when a block of data loaded into a partition is smaller than the partition is called internal fragmentation.

Both the problems that we discussed can be solved to some extent by dividing the memory into fixed
partitions of unequal size. Figure 4.3 below shows a scenario where the available memory space is
divided into unequal size partitions. Here, we can even load programs of size 16 Mbytes without
using overlays. Similarly, programs of size less than 8 Mbytes can be loaded into smaller partitions,
reducing the internal fragmentation.

Figure 4.3: Unequal-size partitions



In case of equal-size partitions, a process can be loaded into any available partition. As all the partitions are of the same size, it does not matter which partition is selected. Suppose that all the partitions have been utilized and each one holds a process which is not ready to run. In this case any one process can be swapped out of memory to accommodate a new process.

In case of unequal-size partitions, there are two ways to select a partition for a process. The simplest method is to assign a process to the smallest partition that can hold it. Each partition requires a scheduling queue to hold the processes that have been swapped out and are destined for that partition. In figure 4.4, each partition is allotted one process queue. The benefit of this method is that partitions are allocated in such a way that internal fragmentation is minimized. This method, however, will not work efficiently in some scenarios. Suppose, at a certain point in time, there is not even a single process of size between 12 and 16 Mbytes. In this case, the partition of size 16 Mbytes remains vacant, even though it could have been allocated to some smaller process.

Figure 4.4: One process queue per partition



The second method of selecting a partition for a process solves this problem. In this case, a single process queue is maintained for all the partitions, as shown in figure 4.5 below. When a process is to be loaded into memory, the smallest available partition that can hold the process is selected. A swapping decision is made if no partition is available: the smallest partition that can hold the incoming process is given the first preference. Other aspects, like priority and a preference for swapping out ready processes versus blocked processes, can also be considered. Using partitions of unequal size provides more flexibility as compared to partitions of equal size.

Figure 4.5: One process queue for all partitions

Fixed-partitioning techniques are simple and the overhead on processing is also modest. But there are
certain drawbacks which have been described below.

The number of partitions, which is fixed at system generation time, also limits the maximum number of processes that can be active in the system at a given time.

Additionally, the size of each partition is fixed at system generation time. As a result, small jobs won't use the space in their partitions efficiently. If the sizes of jobs are known beforehand, then partitions can be created accordingly and internal fragmentation can be minimized. Otherwise, this may prove to be an inefficient technique.

4.5 Partitioned memory allocation: dynamic

Dynamic partitioning was developed to deal with the issues that were found with fixed partitioning schemes. IBM's mainframe operating system, OS/MVT (Multiprogramming with a Variable Number of Tasks), used this technique. In the dynamic memory partitioning scheme, the available space can be divided into partitions of variable length and number. Each process that is to be loaded into memory is allocated exactly the memory space that it needs.

Figure 4.6: Dynamic partitioning scheme



Figure 4.6 shows how the dynamic partitioning scheme works for a main memory of size 64 Mbytes. As shown in figure 4.6 (a), the OS occupies a portion of the main memory and the rest of the space is empty. Three processes are loaded one after another, with sizes 20 Mbytes, 14 Mbytes and 18 Mbytes respectively. The fourth process, which is of size 8 Mbytes, cannot be loaded into memory as it cannot fit into the hole of size 4 Mbytes seen in figure 4.6 (d). At some point in time, not a single process in the memory is ready. The OS swaps out process 2 and swaps in process 4 in the space created. As process 4 occupies only 8 Mbytes of memory space, we can see in figure 4.6 (f) that another hole of size 6 Mbytes is created. Later, no process in memory is ready except for process 2, which is now in the Ready-Suspend state. But there is not sufficient memory space to load process 2. Therefore, the OS swaps out process 1 and swaps in process 2, as can be seen in figures 4.6 (g) and (h) respectively.

This scheme makes a good start, but eventually leads to the creation of numerous holes in the
memory. As time passes by, the number of fragments increases and the memory utilization drops.
This phenomenon is known as external fragmentation. It indicates that memory that is external to all
partitions gets incrementally fragmented. This is in contrast to internal fragmentation discussed in the
previous section. External fragmentation is overcome by a mechanism called compaction in which
the OS shifts the processes so that they are contiguous and all the holes in memory are combined to
form one single block. In figure 4.6 (h) for example, compaction will lead to the creation of a single free block of memory of size 16 Mbytes. This may be sufficient for another process to be loaded into memory. The demerit of compaction is that it is a time-consuming procedure that can waste a lot of processor time. Moreover, to make compaction possible, the OS should be able to move a program from one region to another without violating the memory references within the program.

Memory compaction is a time consuming process, as we have discussed before. Therefore, it is up to the OS designer to select a suitable method to allocate memory to different processes. Consider a case where a process is to be loaded or swapped into memory and there are many free blocks big enough to hold the process. In such a case, the OS should decide which block should be used. Basically, there are three placement algorithms for this purpose: best-fit, first-fit and next-fit.
Each one, of course, chooses only from among the available blocks that are equal to or greater than the size of the incoming process. Best-fit chooses the smallest block that can hold the process. First-fit scans the memory from the starting location and chooses the first available block that is big enough. Next-fit scans the memory from the last placement location and chooses the next available block that is big enough.

Figure 4.7: Memory Configuration before and after Allocation of 16-Mbyte Block

Let us try to understand first-fit, best-fit and next-fit through the help of figure 4.7 above. In the
memory configuration shown in figure 4.7 (a), the last block that is occupied is a block of size 22
Mbytes. A partition of size 14 Mbytes is created from it, and is used to hold a process of size 14
Mbytes. Now another process of size 16 Mbytes is to be loaded into the memory. We will see how
first-fit, best-fit and next-fit differ from each other in their working procedure.

(i) First-fit scans the memory from the beginning and loads the process in a 16 Mbyte
partition created from a 22 Mbyte block. This results in a 6 Mbyte fragment.

(ii) Best-fit makes use of an 18 Mbyte block after scanning the whole memory for
available blocks. This results in a 2 Mbyte fragment.

(iii) Next-fit chooses the block of size 36 Mbyte, creates a partition of size 16 Mbytes and
loads the process into it. This results in a 20 Mbyte fragment.

It cannot be said which approach is the best, as the decision depends on the sequence of process swappings and the sizes of those processes. Generally, the first-fit algorithm is the simplest and the fastest. It is better than the next-fit algorithm in terms of performance. The next-fit algorithm will usually select a free block at the end of memory for allocation. This leads to the breaking down of the largest available block, which usually appears at the end of memory, into small fragments. Hence, next-fit makes it necessary to use compaction more frequently. The first-fit algorithm, for its part, populates the front end of memory with small free partitions that need to be scanned each time the algorithm runs. The best-fit algorithm performs the worst, despite its name. It looks for the smallest available block that can fit the process, so it also ensures that the fragment left behind is as small as possible. The memory is eventually populated with blocks that are too small to satisfy memory allocation requests. Hence, as compared to the other two algorithms, best-fit requires memory compaction more often.
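
The following Python sketch shows the three placement strategies over a list of free blocks; the block list and the last-placement index are illustrative and only loosely mirror figure 4.7:

    # Sketch of the three placement algorithms over a free-block list.
    # Each block is a (start, size) pair; sizes in Mbytes are illustrative.

    def first_fit(blocks, request):
        # Scan from the start; take the first block that is large enough.
        for i, (start, size) in enumerate(blocks):
            if size >= request:
                return i
        return None

    def best_fit(blocks, request):
        # Scan everything; take the smallest block that still fits.
        candidates = [(size, i) for i, (_, size) in enumerate(blocks) if size >= request]
        return min(candidates)[1] if candidates else None

    def next_fit(blocks, request, last):
        # Start just after the last placement, wrapping around the list.
        n = len(blocks)
        for k in range(1, n + 1):
            i = (last + k) % n
            if blocks[i][1] >= request:
                return i
        return None

    free = [(0, 8), (20, 12), (40, 22), (70, 18), (95, 6), (110, 36)]
    print(first_fit(free, 16))    # 2: the 22-Mbyte block, leaving a 6-Mbyte fragment
    print(best_fit(free, 16))     # 3: the 18-Mbyte block, leaving a 2-Mbyte fragment
    print(next_fit(free, 16, 4))  # 5: the 36-Mbyte block, leaving a 20-Mbyte fragment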

While using dynamic partitioning in a multiprogramming system, a time may come when all the
processes in the memory are in a blocked state, and even after compaction the available memory will
be insufficient to hold an additional process. To avoid wasting processor time waiting for an active
process to become unblocked, the OS will swap out one process from main memory so that a new
process or a process in a Ready-Suspend state can be swapped in. Thus, the OS should choose which
process should be replaced.

4.6 Paging

In this memory management technique, main memory is partitioned into small fixed and equal-sized
chunks, known as frames. Similarly, each process is divided into small fixed-size chunks that are of
the same size, known as pages. Now, each page of the process can be assigned to one available frame
in the memory. In this section, we will see how this scheme reduces the internal fragmentation and
completely eliminates the external fragmentation.

Figure 4.8: Assignment of Process to Free Frames

We will try to understand the usage of pages and frames through figure 4.8 above. The memory is initially divided into fifteen frames. All the frames are available for use. Process A, which is currently stored on the disk, consists of four pages. When it is time to load process A into main memory, the OS looks for four free frames and loads the four pages of process A into those frames. This has been shown in figure 4.8 (b). The OS subsequently loads process B and process C, which consist of three pages and four pages respectively. Process B is then suspended and the OS swaps it out of the main memory. At a point in time both process A and process C are blocked, and the OS needs to load a new process D into the memory. Process D consists of five pages.

But as we can see in figure 4.8 (e), sufficient contiguous frames are not available for this purpose.
Yet, it does not prevent the OS from bringing the new process into memory. Here the concept of
logical address is used. A simple base address register is no longer sufficient. So, the OS maintains a
page table for each process. This page table is referred to by the OS to see which frame the
concerned page of a process occupies. We know that in a program, each logical address consists of a
page number and an offset within the page. In simple partitioning, a logical address denotes the location of a word with respect to the beginning of the program. It is the processor which translates the logical address into a physical address. In paging also, the processor converts the logical address to the physical
address. For this, the processor should know how the page table of a process can be accessed. With a
logical address (page number, offset) as the input, the processor utilizes the page table to generate a
physical address (frame number, offset).

Figure 4.9: Page tables for the processes in figure 4.8 (f)

We consider the memory configuration depicted in figure 4.8 (f). Figure 4.9 above, shows the page
tables of all the processes discussed in figure 4.8. Pages of process A occupy frames 0, 1, 2 and 3.
Pages of process C occupy frames 7, 8, 9 and 10. Pages of process D occupy frames 4, 5, 6, 11 and
12. As process B has been swapped out, its page table does not contain any frame numbers. Apart from the page tables of the different processes, the OS also maintains a free frame list of all the frames that are empty and can be used to hold pages. Hence, simple paging is more or less similar to fixed partitioning. The basic difference is that, in case of simple paging, the partitions are smaller; a process may occupy multiple partitions, and the partitions do not need to be contiguous.
noted that when a process is brought into memory, all its pages are loaded into available frames and
then a page table is constructed.

To make this scheme convenient, the page size and hence the frame size must be a power of 2. When the page size is a power of 2, it is easy to show that the relative address (defined with reference to the origin of the program) and the logical address (a page number and offset) are the same. Figure 4.10 is an example to demonstrate the concept. Here 16-bit addresses are used, and the page size is 1K or 1,024 bytes. The relative address is 1502, which is 0000010111011110 in binary form. When the page size is 1K, an offset field of size 10 bits is required. Thus, 6 bits are left for the page number. So a program can consist of a maximum of 2^6 = 64 pages, each of size 1 Kbyte.

As shown in figure 4.10 (b), relative address 1502 corresponds to an offset of 478 (0111011110) on
page 1 (000001). This generates the same 16-bit number i.e. 0000010111011110. There are two
advantages of using a page size that is a power of 2. Firstly, the logical addressing scheme is transparent to the linker, the assembler and also the programmer. For each program, the logical address (page number, offset) is the same as its relative address. The second advantage is that it is easy to implement a function in hardware to perform dynamic address translation at run time.

Figure 4.10: Paging



In figure 4.10 (b) above, the leftmost n bits (6 bits) represent the page number and the rightmost m
bits (10 bits) represent the offset of the logical address. To translate a logical address into a physical
address the following steps should be executed:

The leftmost n bits that represent the page number are extracted first.
This page number is then used as an index into the process page table to get the frame
number, k.
The frame number is then appended to the offset to get the physical address of the referenced
byte.

In this example, the logical address is 0000010111011110: page number 1 and offset 478. Let us suppose that the page resides in frame number 6 of main memory. Now, the physical address can be calculated just by appending 000110 (the binary representation of 6) to 0111011110 (the binary representation of 478). Thus, the physical address is 0001100111011110.
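
The worked example can be reproduced with a short Python sketch; the single-entry page table (page 1 in frame 6) is taken from the text:

    # Paging translation for the worked example: 16-bit addresses,
    # 1,024-byte pages, hence a 6-bit page number and a 10-bit offset.
    OFFSET_BITS = 10
    PAGE_SIZE = 1 << OFFSET_BITS            # 1,024 bytes

    page_table = {1: 6}                     # from the text: page 1 -> frame 6

    def translate(logical):
        page = logical >> OFFSET_BITS       # extract the leftmost 6 bits
        offset = logical & (PAGE_SIZE - 1)  # extract the rightmost 10 bits
        frame = page_table[page]            # index into the process page table
        return (frame << OFFSET_BITS) | offset   # append frame number to offset

    physical = translate(0b0000010111011110)     # relative address 1502
    print(f"{physical:016b}")                    # prints 0001100111011110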

Figure 4.11: Logical-to-physical address translation in Paging

4.7 Segmentation

In case of segmentation, the user program and its associated data are divided into a number of segments. All the segments need not be of the same length, but a maximum segment length is defined. As in the case of paging, a logical address consists of two parts: a segment number and an offset. Segmentation is like dynamic partitioning, as the segments do not need to be of the same size.
In the absence of an overlay scheme or the use of virtual memory, all the segments of a program
would be required to be brought into memory for execution. The difference between dynamic partitioning and segmentation is that, in the latter, a program may occupy multiple partitions and those partitions are not required to be contiguous. Though segmentation successfully prevents internal fragmentation, it suffers from external fragmentation. But the external fragmentation should be modest, as the process is broken down into smaller pieces.

In contrast to paging, which is not visible to the programmer, segmentation is usually visible, and this makes it convenient to organize programs and data. Generally, the compiler or the programmer keeps programs and data in separate segments. The program or data may be broken down further into multiple segments for purposes of modular programming. However, the programmer should know the maximum segment size that is allowed. Additionally, the relationship between logical addresses and physical addresses does not remain simple because unequal-size segments are allowed. As in the case of paging, a segment table is maintained for each process, and a list of free blocks in main memory is also maintained.

Each entry in the segment table is composed of two parts: one part gives the starting address of the segment in main memory and the other part gives the length of the segment, so that invalid addresses are not used. When a process goes into the Running state, the address of its segment table is loaded into a special register that the memory management hardware uses.

Figure 4.12: Segmentation



In figure 4.12, the leftmost n bits (4 bits) represent the segment number and the rightmost m bits (12 bits) represent the offset. The maximum number of segments is 2^4 = 16. To translate a logical address into a physical address the following steps are executed:

First the segment number which is represented by the leftmost n bits of the logical address is
extracted.
This segment number is then used as an index into the process segment table to get the
starting physical address of the segment.
Next the offset which is represented by the rightmost m bits in the logical address is
compared with the length of the segment.

If the offset is less than the length then the address is valid, otherwise the address is invalid. To get
the physical address, the starting physical address of the segment is added to the offset. One thing
that should be noted is that, when a process is brought into memory all its segments are loaded into
the available regions and then a segment table is created. We now look at an example to understand
how this works.

Figure 4.13: Logical-to-physical address translation in Segmentation

In figure 4.13 above, the 16-bit logical address consists of a segment number, which is represented by the leftmost 4 bits, and an offset, which is represented by the rightmost 12 bits. Here, the segment number is 1 and the offset is 752. Let us suppose that the starting address of the segment in main memory is 0010000000100000. The offset is less than the length of the segment, so the address is valid. To get the physical address corresponding to this logical address, 0010000000100000 is added to 001011110000. Thus the physical address is 0010001100010000.
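
A Python sketch of this translation follows; the segment base comes from the text, while the segment length (2048 bytes) is an assumed value, since the text only states that the offset is within bounds:

    # Segmentation translation for the worked example: 16-bit addresses,
    # a 4-bit segment number and a 12-bit offset.
    OFFSET_BITS = 12

    # segment number -> (base address, length); length 2048 is assumed.
    segment_table = {1: (0b0010000000100000, 2048)}

    def translate(logical):
        seg = logical >> OFFSET_BITS                  # leftmost 4 bits
        offset = logical & ((1 << OFFSET_BITS) - 1)   # rightmost 12 bits
        base, length = segment_table[seg]
        if offset >= length:                          # invalid address: trap
            raise MemoryError("offset exceeds segment length")
        return base + offset                          # add, rather than append

    physical = translate(0b0001001011110000)          # segment 1, offset 752
    print(f"{physical:016b}")                         # prints 0010001100010000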

4.8 Virtual Memory

It is not mandatory for all the segments or pages of a process to be present in the main memory at the
same time. If in the memory, there is a piece (page or segment) which holds the next instruction to be
fetched and a piece that holds the next data location to be accessed, then the execution can proceed
for the time being. Suppose that at some point in time, the processor comes across a logical address that is not present in main memory. The processor indicates a memory access fault by generating an interrupt. The operating system blocks the interrupted process and issues a disk I/O read request to load the process piece that contains the logical address (the address that caused the access fault). Once the I/O request is issued, another process is dispatched by the OS, which runs while the disk I/O operation is performed. As soon as the desired piece is brought into memory, an I/O interrupt is issued. The OS gets back control and changes the state of the affected process to the Ready state.

There are two main advantages of what has been discussed above:

1. As only some of the pieces of a process are loaded into memory instead of the whole process,
there is space available to load even more processes. The processor is utilized more
effectively as the number of processes in ready state will most probably increase with the
number of processes residing in memory at any given time.
2. A program may be too large to be loaded into memory in its entirety. In such a case there is no other option but to divide the program into small pieces so that they can be brought into memory when required, using an overlay strategy. With virtual memory based on paging and segmentation, this task is done by the OS and the hardware.

A process executes in main memory only, and therefore that memory is referred to as real memory. However, from a programmer's point of view, the memory perceived is much larger, with a portion of it allocated on disk. This perceived memory is referred to as virtual memory.

(i) Paging

As in the case of simple paging, each process has its own page table in virtual memory based paging.
However, the entries of the page table are more complex in this case. All the pages of a process need
not be in the memory. Only some pages of a process may be loaded according to the requirement.
Therefore, each page table entry needs a bit which will specify whether the page is present (P) in
memory or not. If the bit specifies that the page is present, then the entry will also include the frame
number of the page. There is another bit known as the modify (M) bit which indicates if the page has
been modified since it was last loaded into memory. If the page has not been modified, then there is no need to write the page out when it has to be replaced with another page in the same frame. Some control bits may be present too. For instance, if protection or sharing is managed at the page level, then it will be required to include bits for that purpose.
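
As a rough sketch, such an entry can be pictured as a packed integer; the bit layout below is assumed for illustration, not taken from any particular processor:

    # Sketch of a virtual-memory page table entry packed into an integer.
    # The layout (P bit, M bit, then the frame number) is assumed.
    P_BIT, M_BIT, FRAME_SHIFT = 1 << 0, 1 << 1, 2

    def make_pte(frame, present, modified):
        return ((frame << FRAME_SHIFT)
                | (M_BIT if modified else 0)
                | (P_BIT if present else 0))

    def decode_pte(pte):
        return {"frame": pte >> FRAME_SHIFT,
                "present": bool(pte & P_BIT),
                "modified": bool(pte & M_BIT)}

    pte = make_pte(frame=6, present=True, modified=False)
    print(decode_pte(pte))    # {'frame': 6, 'present': True, 'modified': False}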

Figure 4.14: Virtual memory based paging

Page Table Structure

To read a word from main memory, the virtual or logical address (page number and offset) should be translated into a physical address (frame number and offset) using a page table. Page tables should be placed in memory and not in registers, as they can be of different lengths, depending on the size of the respective processes. When a specific process is running, a register holds the starting address of the page table of that process. The page number in the virtual address is used as an index into the page table to get the frame number. This frame number is then combined with the offset of the virtual address to get the physical address. Figure 4.15 below shows the complete procedure. Generally, the frame number field is shorter than the page number field.

Usually, a single page table is associated with each process. But a process can occupy a vast amount of virtual space. Hence, a page table can itself consume a lot of memory, depending on the size of the process. To avoid this problem, page tables are generally stored in virtual memory and not in real memory. When a particular process is running, only a part of its page table may be present in main memory. This part must include the page table entry of the page that is currently being executed. Some processors use a two-level scheme to organize large page tables. This scheme has a page directory in which each entry corresponds to a page table. If the length of the page directory is X and the maximum length of a page table is Y, then a process can contain a maximum of X × Y pages. Usually, the maximum length of a page table is restricted to one page.

Figure 4.15: Address Translation in paging system

In figure 4.16 below, a two-level scheme is shown that is generally used with a 32-bit address. If byte-level addressing is assumed and the size of each page is 4 Kbytes (2^12), then the virtual address space of size 4 Gbytes (2^32) contains 2^20 pages. If each page is represented by a 4-byte entry in the page table, then a 4-Mbyte (2^22) user page table is created that is capable of holding 2^20 page-table entries. This massive user page table, which itself occupies 2^10 pages, can be kept in virtual memory and mapped by a root page table with 2^10 page-table entries occupying 4 Kbytes (2^12) of main memory.

Figure 4.16: A Two-Level Hierarchical Page Table

Figure 4.17 below shows the steps involved in converting a logical address to a physical address. The root page table is always present in main memory. The first 10 bits of the virtual address are used as an index into the root page table to get a page-table entry (PTE) for a page of the user page table. If the concerned page is not in memory, then a page fault occurs. If the page is in memory, then the next 10 bits of the virtual address are used as an index into the user page table to find the frame in memory where the page resides. The frame number is then combined with the offset to get the physical address.
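
A compact Python sketch of this two-step lookup is given below; the table contents are hypothetical:

    # Two-level translation with 10 + 10 + 12 bit fields, as in the text.
    ROOT_BITS, TABLE_BITS, OFFSET_BITS = 10, 10, 12

    def translate(vaddr, root_table):
        root_i = vaddr >> (TABLE_BITS + OFFSET_BITS)                # first 10 bits
        table_i = (vaddr >> OFFSET_BITS) & ((1 << TABLE_BITS) - 1)  # next 10 bits
        offset = vaddr & ((1 << OFFSET_BITS) - 1)                   # last 12 bits
        user_table = root_table.get(root_i)
        if user_table is None:
            raise MemoryError("page fault: user page table not resident")
        frame = user_table.get(table_i)
        if frame is None:
            raise MemoryError("page fault: page not resident")
        return (frame << OFFSET_BITS) | offset

    # Hypothetical: root entry 0 points to a user page table whose
    # entry 3 maps to frame 42.
    root = {0: {3: 42}}
    print(hex(translate((3 << 12) | 0x1A4, root)))    # 0x2a1a4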

Figure 4.17: Address Translation in a Two-Level Paging System

Inverted Page Table

An alternative approach to the one-level or multiple-level page table is to use an inverted page table structure. The page number portion of the virtual address is mapped into a hash value using a simple hashing function. The hash value is used as a pointer into the inverted page table, which contains the page table entries. Each entry in the inverted page table denotes a real memory page frame and not a virtual page. Therefore, irrespective of the number of processes or the virtual pages supported, a fixed memory region is required for the table. More than one virtual address may map into a single entry in the hash table, so a chaining technique is used to manage the overflow. Generally, the hashing mechanism results in chains that are short: between one and two entries. This page table is called an inverted page table as it indexes the entries by frame number and not by virtual page number. Figure 4.18 below shows an example of an inverted page table. The page table will have 2^m entries if the size of physical memory is 2^m frames. In other words, the ith entry in the page table refers to frame i in the physical memory.

Figure 4.18: Inverted Page Table Structure



Each page table entry consists of the following fields:

Page number: This denotes the page number portion of the virtual address.
Process identifier: This denotes the process which owns the page. The page number and the process identifier together identify a page within a process's virtual address space.
Control bits: This field contains flags (e.g. valid, referenced and modified) and protection and locking information.
Chain pointer: This field is empty (perhaps indicated by a separate bit) if there are no chained entries for this entry. Otherwise, the field holds the index value (a number between 0 and 2^m - 1) of the next entry in the chain.

In the above example, the virtual address contains a page number of n bits, where n > m. The hash function maps the n-bit page number into an m-bit quantity, which is then used as a pointer into the inverted page table.
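
The sketch below illustrates the hash-and-chain lookup in Python; the table contents, hash function and sizes are all hypothetical:

    # Sketch of an inverted page table with hashing and chaining.
    NUM_FRAMES = 16               # 2^m frames, with m = 4

    table = [None] * NUM_FRAMES   # each slot: (page, pid, chain index) or None

    def hash_page(page):
        return page % NUM_FRAMES  # maps an n-bit page number to m bits

    def lookup(page, pid):
        # Follow the chain from the hashed slot; the slot index that
        # matches is itself the frame number.
        i = hash_page(page)
        while i is not None:
            entry = table[i]
            if entry and entry[0] == page and entry[1] == pid:
                return i
            i = entry[2] if entry else None
        raise MemoryError("page fault")

    # Pages 21 and 37 both hash to slot 5, so they are chained.
    table[5] = (21, 1, 9)         # page 21 of process 1, chain -> slot 9
    table[9] = (37, 2, None)      # page 37 of process 2, end of chain
    print(lookup(37, 2))          # prints 9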

Translation Lookaside Buffer

Every virtual memory reference can lead to two physical memory accesses: one to fetch the
appropriate page table entry and the other to fetch the data that is sought. Hence, a straightforward
virtual memory scheme can double memory access time. To avoid this, a special high speed cache is
used for page table entries by most of the virtual memory schemes. This is called a Translation
Lookaside Buffer (TLB). Figure 4.19 shows the structure of a TLB.

Figure 4.19: Translation Lookaside Buffer



It works in the same way as a memory cache. It contains the page table entries that have been used
recently. For every virtual address, the processor will first look into the TLB to get the desired page
table entry if it is present there. If so, then the frame number is retrieved and the real physical address
is generated. If the page table entry is present, then we call it a TLB hit. But if the desired page table entry is not present (a TLB miss), then the processor uses the page number as an index into the process page table to examine the corresponding page table entry. If the present bit is set, then the page is present in main memory, so the processor retrieves the frame number from the page table entry and generates the real physical address. This new page table entry is also included in the TLB. If the present bit is not set, then the page is not present in main memory. In this case, a memory access fault, known as a page fault, is issued. The operating system is invoked, which then loads the desired page from secondary memory into main memory and updates the page table.
In figure 4.20, a flowchart shows in detail the working of the TLB. When a page fault occurs, then
the page fault handling routine is invoked.
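
A minimal Python sketch of this TLB-first lookup path is shown below; the page table contents reuse the earlier example and are assumed:

    # Sketch of the TLB-first lookup path described above.
    OFFSET_BITS = 10
    tlb = {}                        # page number -> frame number (recent entries)
    page_table = {1: (True, 6)}     # page -> (present bit, frame); assumed

    def translate(page, offset):
        if page in tlb:                              # TLB hit
            frame = tlb[page]
        else:                                        # TLB miss: use page table
            present, frame = page_table.get(page, (False, None))
            if not present:
                raise MemoryError("page fault")      # OS must load the page
            tlb[page] = frame                        # cache the entry
        return (frame << OFFSET_BITS) | offset

    print(translate(1, 478))        # miss, filled from the page table: 6622
    print(translate(1, 478))        # hit on the second reference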

Figure 4.20: Operation of Paging and TLB



The principle of locality states that most virtual memory references are made to locations in recently used pages. Hence, most references will involve page table entries that are in the cache. One important fact is that the TLB contains only some of the entries in a full page table. Therefore, the page number alone cannot be used to index the TLB; each TLB entry must contain the page number as well as the complete page table entry.

A technique called associative mapping can be used to examine multiple TLB entries at the same time to check if there is a match on the page number. This is in contrast with the direct mapping used with page tables. Figure 4.21 shows the difference between associative mapping and direct mapping.

Figure 4.21: Direct versus Associative Lookup for Page Table Entries

(ii) Segmentation
With segmentation, the memory can be viewed as a collection of a number of address spaces or
segments. Segments can vary in size and are dynamic in nature. The virtual address is composed of
the segment number and the offset. The following are the advantages of a segmented address space as
compared to a non-segmented address space:

1. Segmentation makes it easy to handle the growing data structures. If the programmer does
not know the size of a specific data structure beforehand, then segments should be allowed to
be dynamic in size. In case of segmented virtual memory, each data structure is assigned its
own segment. The operating system can shrink or expand the segment as and when required.
If the memory space is not sufficient for a segment to expand in size, then that segment is
moved by the OS to a larger available space in memory. If a larger memory space is not
available, then the concerned segment is swapped out. It can be swapped in later when
sufficient memory space is available.

2. It allows programs to be modified and recompiled independently of each other. The entire set of programs need not be relinked and reloaded.

3. User can place data or a utility program in a segment that can be accessed by different
processes. So, sharing something among different processes becomes easier.

4. Each segment can be created to include a well-defined set of programs or data. So, the
system administrator or the programmer can assign access privileges according to the
requirement.

In case of simple segmentation, we saw that when a process is brought into main memory all its
segments are loaded into memory. Then a segment table is created for that process which is also
loaded into memory. Each entry in the segment table contains the starting address of that segment in
memory as well as the length of the segment. Similarly, in virtual memory based segmentation, each
process is associated with a unique segment table. However, the entries in the segment table are a
little more complex in nature. Figure 4.22 is an example of how virtual memory based segmentation
works.

At any time, only some of the segments of a process may be present in main memory. Therefore, one bit is added to each entry in the segment table to denote whether that particular segment is present in main memory or not. If the bit indicates that the segment is present in main memory, then the starting address and the length of the segment are also mentioned in the entry.

Figure 4.22: Address Translation in a Segmentation System

Another control bit called a modify bit is also included in each entry as shown in figure 4.23. It
indicates whether the contents of that segment have been modified since it was last loaded into
memory. If the segment has not been modified, then there is no need to write the segment out when it
has to be replaced by another segment in the main memory. If protection and sharing is managed at
the segment level, then for this purpose, other control bits may also be included.

Figure 4.23: Virtual memory based segmentation



To read a word from main memory, the virtual or logical address (segment number and offset) should be translated into a physical address, using a segment table. Segment tables should be placed in main memory and not in registers, as they can be of different lengths, depending on the size of the respective processes. When a specific process is running, a register holds the starting address of the segment table of that process. The segment number in the virtual address is used as an index into the segment table to get the starting address of that segment in main memory. This is then combined with the offset part of the virtual address to get the real physical address.

(iii) Combined Paging and Segmentation

Both paging and segmentation have their own advantages. Paging prevents external fragmentation
and thus allows memory to be used efficiently. All the pages that are swapped in and out are of the
same fixed size. Therefore, complex memory management algorithms can be designed to exploit the
behavior of programs. Segmentation on the other hand, facilitates handling of growing data
structures, modularity, and support for sharing and protection. Some systems are provided with suitable processor hardware and OS software to implement both paging and segmentation. In such systems, the programmer divides the user's address space into a number of segments. Each segment is further divided into a number of fixed-size pages, which are of the same length as a memory frame. If the length of a segment is less than that of a page, the segment occupies just one page. From the programmer's perspective, a virtual address still contains a segment number and a segment offset. The system views the segment offset as a page number and page offset for a page within the specified segment.

Figure 4.24: Virtual memory based on paging and segmentation

Figure 4.25 shows an example of how address translation is performed in a segmentation/paging system. One segment table is associated with each process, and each segment is associated with its own page table. When a specific process is running, a register holds the starting address of the segment table of that process. The processor utilizes the segment number portion of the virtual address as an index into the segment table to get the page table for that segment. Next, the processor uses the page number portion of the virtual address as an index into that page table to get the frame number. Finally, the frame number is combined with the offset portion of the virtual address to get the real physical address.

Figure 4.25: Address Translation in a Segmentation/Paging System

In figure 4.24 the formats of the page table entry and the segment table entry have been shown. Each entry in the segment table includes the length of the segment. It also includes a base field, which points to the page table for that segment. The present and modified bits are not included in a segment table entry, as these matters are handled at the page level. But for the purpose of sharing and protection, other control bits may be added. Each page table entry includes the frame number and the present and modified bits. A page table entry may also contain other control bits for the purpose of sharing and protection. If a page is present in main memory, its page number can be mapped to a frame number.

4.9 Comparison of memory management techniques

The following table summarizes the characteristics of each of the memory management techniques
discussed in the previous sections along with their strengths and weaknesses.

Table 4.1: Memory management techniques

Technique: Fixed Partitioning
Description: At system generation time, main memory is divided into a number of fixed (static) partitions. A process is loaded into a partition of equal or greater size.
Strengths: Simple implementation; modest overhead on the operating system.
Weaknesses: Inefficient memory usage due to internal fragmentation; the total number of processes that can be active is fixed.

Technique: Dynamic Partitioning
Description: Partitions are created dynamically. Each process is loaded into a partition of exactly the same size as the process.
Strengths: No internal fragmentation; main memory used more efficiently.
Weaknesses: Processor not used efficiently, as compaction is required to counter external fragmentation.

Technique: Simple Paging
Description: Main memory is divided into equal-sized frames. Each process is divided into pages of the same size as a frame. A process is loaded by loading all its pages into available (not necessarily contiguous) frames.
Strengths: No external fragmentation.
Weaknesses: A small amount of internal fragmentation.

Technique: Simple Segmentation
Description: A process is divided into segments. The process is loaded by loading all its segments into dynamic partitions (which need not be contiguous).
Strengths: No internal fragmentation; better memory utilization and reduced overhead compared to dynamic partitioning.
Weaknesses: External fragmentation present.

Technique: Virtual Memory Paging
Description: Same as simple paging, but all the pages of a process do not need to be loaded; pages can be loaded when they are required.
Strengths: No external fragmentation; large virtual address space; higher degree of multiprogramming.
Weaknesses: Complex memory management.

Technique: Virtual Memory Segmentation
Description: Same as simple segmentation, but all the segments of a process do not need to be loaded; segments can be loaded when they are required.
Strengths: No internal fragmentation; large virtual address space; higher degree of multiprogramming; support for sharing and protection.
Weaknesses: Complex memory management.

4.10 OS policies for Virtual Memory

Operating systems offer a variety of algorithms to manage various aspects of memory management. The key design issues for these algorithms are minimizing page faults, fetching information faster, reducing the I/O operations carried out during page replacements, etc. Moreover, the performance of any specific group of policies is driven by factors like the size of main memory, the execution behavior of different programs, the speed of main memory and secondary memory relative to each other, and the number and size of processes that are competing for resources. Therefore, the decision to choose a specific group of policies depends on various factors. Some of the common memory management policies have been discussed below.

Fetch Policy

This policy decides when a page should be brought into main memory. There are two basic methods for this: demand paging and prepaging. In case of demand paging, a page is brought into main memory only when a reference is made to that page. In case of prepaging, pages other than the one that caused the page fault are also brought into main memory.

(i) Demand paging

A demand-paging system works just like a paging system with swapping as shown in figure 4.26
below. The processes reside in secondary memory. When a process starts running, we do not bring all of its pages into memory; rather, pages are brought in as and when they are demanded. A page fault occurs when the desired page is not present in main memory. The following steps are involved when a page fault occurs in a demand-paging system (a simplified sketch of these steps in code follows the list).

1. The OS is invoked to bring the desired page into the main memory from the secondary
memory.

2. A free frame is selected from the frames available in the free frame list that is maintained by
the OS.

3. A disk operation is scheduled to bring the desired page into the newly allocated frame.

4. With the completion of the disk read operation, the page table entry is also updated to include
the frame number.

5. The instruction which was interrupted due to the page fault is now restarted.

6. The process can now access the page as if it had always been in the memory.
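
The sketch below walks through the listed steps in Python under simplifying assumptions: free frames are always available, so no replacement is needed, and the disk read is simulated by a print statement:

    # Simplified sketch of the demand-paging fault steps listed above.
    free_frames = [3, 7, 11]        # free-frame list kept by the OS
    page_table = {}                 # page -> frame, for a single process

    def read_from_disk(page):
        print(f"disk read scheduled for page {page}")   # stands in for real I/O

    def handle_page_fault(page):
        frame = free_frames.pop(0)  # step 2: pick a free frame
        read_from_disk(page)        # step 3: schedule the disk read
        page_table[page] = frame    # step 4: update the page table entry
        return frame                # steps 5-6: the instruction is restarted

    def access(page):
        if page not in page_table:  # step 1: the OS is invoked on a fault
            handle_page_fault(page)
        return page_table[page]

    print(access(0))                # faults: page 0 is loaded into frame 3
    print(access(0))                # already resident: no fault this time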

Figure 4.26: Transfer of a paged memory to contiguous disk space.

If a process does not have enough frames relative to its number of pages, then the process may spend more time swapping pages in and out than executing. This undesirable condition is called thrashing. One way to resolve thrashing is to provide a process with as many frames as it needs.

(ii) Prepaging

In prepaging, pages other than the one demanded by a page fault are brought into main memory. If the pages of a process are stored in secondary memory in a contiguous manner, then it is better to bring in multiple contiguous pages at a time rather than one page at a time, since the latter takes more time and effort. However, if most of the extra pages that are brought in are never referenced, then prepaging itself proves to be inefficient.

This policy could be employed either when a process starts up for the first time or every time there is a page fault. In the former case, the programmer has to specify the pages which need to be brought in when the process starts running. The latter case, however, is preferable as it is invisible to the programmer. When a process is swapped out and put in a suspended state, all its resident pages are moved out of memory. Similarly, when a process is resumed, all of its pages that were previously in main memory are brought in again.

In some cases, prepaging may prove to be advantageous. The choice depends on whether the cost of prepaging is less than the cost of servicing the corresponding page faults. If most of the extra pages brought in are referenced, then the decision to bring them in would not be regretted. However, the decision would prove to be ineffective if a lot of the pages are never referenced.

Replacement policy
In an OS that uses paging as a memory management technique, a page replacement algorithm is required to decide which page to swap out so that an incoming page can be stored in its place (the incoming page is stored in the memory frame vacated by the swapped-out page). Some of the page replacement algorithms have been discussed below.

(i) First In First Out (FIFO)

FIFO is a very simple page replacement algorithm in which the oldest page in main memory is selected to be swapped out when a new page has to be brought in. The OS maintains a queue to keep track of all the pages in memory, with the oldest page at the front of the queue. When a page needs to be swapped out to make room for an incoming page, the page at the front of the queue is selected.

To understand the FIFO page replacement algorithm, we take the help of an example. Let us suppose
the reference string is 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1. The reference string is
simply the sequence of page numbers referenced by the process. Figure 4.27 shows how the page
frames are filled and vacated according to this reference string.

Figure 4.27: FIFO page replacement



1. Initially, all the frames are empty (not shown in figure). The first three page references 7, 0,
1 cause page faults. These pages are brought in and placed in these empty frames.
2. When the reference to page 2 causes a page fault, it replaces page 7 which is the oldest page
in memory.
3. The next reference does not cause any page fault as page 0 is already present in main
memory.
4. Reference to page 3 causes a page fault as page 3 is not available in memory. Page 3 is then
brought in by replacing page 0 which is currently the oldest page in memory.
5. The rest of the references and replacements in the example can be understood by looking at
figure 4.27.

In some page replacement algorithms, the page fault rate may increase with an increase in the
number of allocated frames. This unexpected result is known as Belady's anomaly. The initial
assumption was that giving more memory to a process would improve its performance, but researchers
showed that this assumption is not always true. FIFO suffers from Belady's anomaly.
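
The anomaly is easy to reproduce. The classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
generates 9 page faults under FIFO with 3 frames but 10 page faults with 4 frames. The short Python
sketch below (an illustration, not part of any OS) simulates FIFO replacement and confirms this.

    from collections import deque

    def fifo_faults(reference_string, num_frames):
        """Count page faults under FIFO replacement."""
        frames = deque()             # oldest resident page at the left end
        faults = 0
        for page in reference_string:
            if page not in frames:   # page fault
                faults += 1
                if len(frames) == num_frames:
                    frames.popleft()         # evict the oldest page
                frames.append(page)
        return faults

    refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    print(fifo_faults(refs, 3))   # 9 page faults
    print(fifo_faults(refs, 4))   # 10 page faults: Belady's anomaly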

(ii) Optimal Page Replacement (OPT)

After the discovery of Belady's anomaly, researchers searched for a page replacement algorithm that
would minimize the number of page faults. An optimal page replacement algorithm has the lowest
page-fault rate of all algorithms. In this algorithm, the page which will not be referenced for the
longest duration is swapped out when a new page needs to be brought into memory. OPT does not
suffer from Belady's anomaly.

To understand the OPT page replacement algorithm, we will take the help of an example. The same
reference string which was used in the previous example is considered here. Figure 4.28 shows how
the page frames are filled and vacated in the OPT page replacement algorithm.

Figure 4.28: OPT page replacement



1. Initially, all the frames are empty (not shown in figure). The first three page references 7, 0,
1 cause page faults. These pages are brought in and placed in these empty frames.

2. When the reference to page 2 causes a page fault, it replaces page 7 as this page will not be
used until reference 18. Pages 0 and 1 are not considered for replacement because page 0 will
be used in reference 5 and page 1 will be used in reference 14.

3. The next reference does not cause any page fault as page 0 is already present in main
memory.

4. Reference to page 3 causes a page fault as page 3 is not available in memory. Page 3 is then
brought in by replacing page 1 as this page will not be used until reference 14. However,
page 0 will be used in reference 7 and page 2 will be used in reference 9.

5. The rest of the references and replacements in the example can be understood by looking at
figure 4.28.

Optimal page replacement is the best of all page replacement algorithms, as its number of page
faults is the minimum. However, for this to work the OS would need to know all future memory
requests, which is not possible in practice. OPT is therefore used only as a benchmark against
which other page replacement algorithms can be compared and their performance analyzed.
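
Because OPT needs the entire future reference string, it can only be simulated offline. A minimal,
illustrative sketch that counts OPT page faults for a known reference string:

    def opt_faults(reference_string, num_frames):
        """Count page faults under optimal (OPT) replacement.

        This works only because the full reference string is known in
        advance, which is exactly why OPT serves as a benchmark rather
        than a practical policy.
        """
        frames = []
        faults = 0
        for i, page in enumerate(reference_string):
            if page in frames:
                continue
            faults += 1
            if len(frames) < num_frames:
                frames.append(page)
                continue

            def next_use(p):
                # Distance to the next reference of p, or infinity if
                # p is never referenced again.
                future = reference_string[i + 1:]
                return future.index(p) if p in future else float("inf")

            # Evict the resident page whose next use is farthest away.
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
        return faults

    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
    print(opt_faults(refs, 3))   # 9 page faults with 3 frames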

(iii) Least Recently Used (LRU)

LRU replaces the page which has not been referenced for the longest period of time. LRU replacement
associates with each page the time of that page's last use. When a new page has to be brought into
memory, the resident page which has not been used for the longest duration is selected to be
replaced. This strategy can be thought of as the OPT algorithm looking backward in time rather than
forward. LRU does not suffer from Belady's anomaly.
We will try to understand LRU page replacement algorithm through an example. The reference string
used is exactly the same that was used in case of FIFO and OPT. Figure 4.29 shows how the page
frames are filled and vacated in the LRU page replacement algorithm.

Figure 4.29: LRU page replacement



1. Initially, all the frames are empty (not shown in figure). The first three page references 7, 0,
1 cause page faults. These pages are brought in and placed in these empty frames.

2. When the reference to page 2 causes a page fault, it replaces page 7, as page 7 is the least
recently used page, i.e. the page that has not been used for the longest duration.

3. The next reference does not cause any page fault as page 0 is already present in main
memory.

4. Reference to page 3 causes a page fault as page 3 is not available in memory. Page 3 is then
brought in by replacing page 1 as this page is the least recently used page currently present in
memory.

5. The rest of the references and replacements in the example can be understood by looking at
figure 4.29.
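
LRU is commonly modelled with an ordered structure in which every reference moves the page to the
"most recently used" end. A minimal, illustrative Python sketch:

    from collections import OrderedDict

    def lru_faults(reference_string, num_frames):
        """Count page faults under LRU replacement."""
        frames = OrderedDict()   # least recently used page sits at the front
        faults = 0
        for page in reference_string:
            if page in frames:
                frames.move_to_end(page)        # refresh: page just used
            else:
                faults += 1
                if len(frames) == num_frames:
                    frames.popitem(last=False)  # evict least recently used
                frames[page] = True
        return faults

    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
    print(lru_faults(refs, 3))   # 12 page faults with 3 frames

For this reference string with three frames, FIFO incurs 15 faults, LRU 12, and OPT 9, so LRU
indeed falls between the two in performance.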

Resident Set Management

(i) Resident Set Size

With paged virtual memory, it is not possible to bring all the pages of every process into main
memory. Hence, the operating system must decide how many pages of a process to bring in. In other
words, the OS must decide how much memory should be allocated to a particular process. This
decision is driven by several factors:

If each process is allocated a smaller amount of memory, then more processes can reside in
main memory at a time. This increases the chance that the OS will find at least one ready
process at any given time, and consequently reduces the time lost to swapping.

If only a relatively small number of a process's pages are present in main memory, then
despite the principle of locality, the page fault rate will be rather high.

Beyond a certain size, allocating additional space to a process has no noticeable effect on
its page fault rate. This, too, is because of the principle of locality.

Based on these factors, modern operating systems implement one of two policies: a fixed-
allocation policy or a variable-allocation policy. In a fixed-allocation policy, each process is
allotted a fixed number of frames in memory. This number is decided at initial load time (process
creation time) and may be determined by the system manager or the programmer, or may be based on
the type of the process. Since the number of frames is fixed, a page fault causes an existing page
of the process to be replaced by the desired page. In a variable-allocation policy, the number of
frames allotted to a process can vary over its lifetime depending on the page fault rate. If the
page fault rate is high, the principle of locality holds only in a weak form for that process, and
additional frames can be allotted to reduce the fault rate. Conversely, if the page fault rate is
exceptionally low, the process is well behaved from a locality point of view, and the number of
frames allotted to it may be reduced.

(ii) Replacement Scope

The scope of a replacement strategy can be local or global. Under a local replacement policy, only
the resident pages of the faulting process are considered for replacement by the incoming page.
Under a global replacement policy, any unlocked page in main memory is a potential candidate,
regardless of which process owns it. (When a frame is locked, the page currently stored in it
cannot be replaced; locking is achieved by associating a lock bit with each frame.) There is a
correlation between resident set size and replacement scope, which table 4.2 below summarizes.

Table 4.2: Resident Set Management

Fixed allocation, local replacement scope: A fixed number of frames is allocated to the process.
When a page fault occurs, one of the frames allocated to that process is chosen for replacement.

Fixed allocation, global replacement scope: Not possible.

Variable allocation, local replacement scope: The number of frames allotted to a process can vary
over the lifetime of the process so that it can maintain its working set. When a page fault occurs,
one of the frames allocated to that process is chosen for replacement.

Variable allocation, global replacement scope: The page to be replaced is chosen from all available
frames in main memory. This causes the sizes of the resident sets of processes to vary.

It should be noted that the resident set is the set of a process's pages held in frames of real
memory at any given time. The working set, on the other hand, is the subset of the resident set
that the process actually needs for execution.
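
One common formalization (the working-set model, stated here as an illustration rather than as part
of the text's definition) takes the working set W(t, D) to be the set of pages referenced in the
window of the last D references up to time t:

    def working_set(reference_string, t, delta):
        """Pages referenced in the last `delta` references up to and
        including time t (0-indexed): the working set W(t, delta)."""
        start = max(0, t - delta + 1)
        return set(reference_string[start:t + 1])

    refs = [1, 2, 1, 3, 2, 4, 2, 1]
    print(working_set(refs, 5, 4))   # references at times 2..5 -> {1, 2, 3, 4}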

(iii) Cleaning Policy

A cleaning policy determines when a modified page should be written out to secondary memory. There
are basically two alternatives: demand cleaning and precleaning. In demand cleaning, a page is
written out to secondary memory only when its frame has been chosen for page replacement. In
precleaning, pages are written out to secondary memory before their frames are needed, which allows
pages to be written out in batches.

Both schemes have their own issues. With precleaning, a page stays in main memory even after it has
been written out to secondary memory, until the page replacement algorithm decides to replace it.
By that time, the page may have been modified again, so writing out hundreds or thousands of pages
from time to time may not prove very useful. Additionally, secondary memory has a limited transfer
capacity, which should not be wasted on cleaning operations that are not needed. With demand
cleaning, the number of page writes is greatly reduced, but following a page fault a process may
have to wait for two page transfers (writing out the victim page, then reading in the desired page)
before it can be unblocked. This may decrease processor utilization.

A better approach involves page buffering. The following policy is adopted: only replaceable pages
are cleaned, and the replacement and cleaning operations are decoupled. Replaced pages are placed
on one of two lists: modified and unmodified. Pages on the modified list can be written out in
batches from time to time; once written, a page is moved to the unmodified list. A page on the
unmodified list is either reclaimed (if it is referenced again) or lost (if its frame is allotted
to another page).
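
The following minimal sketch models this two-list bookkeeping. All names (Page, modified_list,
write_to_disk) are hypothetical illustrations, not taken from any particular operating system.

    class Page:
        def __init__(self, number, dirty=False):
            self.number = number
            self.dirty = dirty       # has the page been modified?

    modified_list = []      # replaced pages that still need writing out
    unmodified_list = []    # replaced pages that are already clean

    def write_to_disk(page):
        """Placeholder for the actual disk write."""
        pass

    def replace_page(page):
        """A replaced page joins one of the two lists instead of being
        written out (or discarded) immediately."""
        if page.dirty:
            modified_list.append(page)
        else:
            unmodified_list.append(page)

    def clean_in_batches():
        """Write out all modified pages in one batch from time to time;
        once written, a page moves to the unmodified list."""
        while modified_list:
            page = modified_list.pop()
            write_to_disk(page)
            page.dirty = False
            unmodified_list.append(page)

    def try_reclaim(page):
        """A page still on the unmodified list is reclaimed cheaply if it
        is referenced again before its frame is given to another page."""
        if page in unmodified_list:
            unmodified_list.remove(page)
            return True
        return False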

(iv) Load Control

Load control determines how many processes can be resident in main memory at any given time. This
is referred to as the multiprogramming level. The load control policy is critical to effective
memory management. Two extreme cases may be considered.

If too few processes are in main memory at a given time, there will be many occasions on
which all of them are blocked, and much time will be spent in swapping.

If too many processes are present in main memory at a given time, each process is allotted
inadequate space. This leads to frequent faulting and ultimately to thrashing.

4.11 Summary

In a multiprogramming system, effective memory management is very important. If only a few
processes are in memory, then for much of the time all of the processes will be waiting for I/O and
the processor will be idle. Thus memory needs to be allocated to ensure a reasonable supply of
ready processes to consume available processor time. The most elementary method of managing
available space is to divide it into a number of regions with fixed boundaries; such fixed
partitions can be created in two ways, as equal-sized partitions or as unequal-sized partitions. In
the dynamic memory partitioning scheme, the available space is divided into partitions of variable
length and number, and each process loaded into memory is allocated exactly as much space as it
needs. In simple paging, each process is divided into a number of fixed-size pages which are loaded
into memory frames of the same size as the pages. In segmentation, the user program and its
associated data are divided into a number of segments that may be of unequal size. A system can
address more memory than is physically installed on it; this extra memory is called virtual memory
and is implemented using a section of the hard disk set up to supplement the computer's RAM.
Operating systems provide a number of fetch and page replacement policies to support the concept of
virtual memory.

4.12 Exercise

1. What is compaction?
A. a technique for overcoming internal fragmentation
B. a paging technique
C. a technique for overcoming external fragmentation
D. a technique for overcoming fatal error
2. A program always deals with:
A. logical address B. absolute address C. physical address D. relative address
3. Operating System maintains the page table for:
A. each process B. each thread C. each instruction D. each address
4. The operating system is:
A. in the low memory B. in the high memory
C. either a or b (depending on the location of interrupt vector)
D. None of these
5. In fixed-size partitioning, the degree of multiprogramming is bounded by ___________.
A. the number of partitions B. the CPU utilization
C. the memory size D. All of these
6. The first fit, best fit and worst fit are strategies to select a ______.
A. process from a queue to put in memory B. processor to run the next process
C. free hole from a set of available holes D. All of these
7. When the memory allocated to a process is slightly larger than the process, then:
A. internal fragmentation occurs B. external fragmentation occurs
C. both a and b D. neither a nor b
8. In LRU algorithm
A. the oldest page in memory is replaced
B. the page which will not be referenced for the longest duration is replaced
C. the page which has not been used for the longest duration is replaced
D. none of the above

9. In .............., it is not necessary to load all of the segments of a process, and non-resident
segments that are needed are brought in later automatically.
A. Fixed partitioning B. Simple Paging
C. Virtual memory segmentation D. Simple segmentation
10. Which of the following page replacement algorithms suffers from Belady's anomaly?
A. FIFO B. LRU C. OPT
D. Both LRU and FIFO
11. Explain in detail how dynamic memory partitioning is better than static memory partitioning.
Give an example to support your answer.
12. What do you mean by the Translation Lookaside Buffer? With the help of a neat and labelled
diagram describe all the steps involved in loading a page into main memory when a page fault
occurs.
13. Consider that there are four frames in main memory and the reference string is 5, 0, 1, 2, 0, 3,
1, 5, 4, 3, 1, 2, 3, 0. Show how these frames are filled and vacated in each of the page
replacement algorithms FIFO, LRU and OPT.
14. Describe how the logical address is translated into a physical address in case of a virtual
memory based paging system. Consider a two level hierarchical paging system in this case.
15. What are the problems faced if we maintain a process queue for each partition, when the
memory is divided into fixed partitions of unequal size? How does maintaining a single queue
for all partitions resolve the problem?
16. What are the advantages and disadvantages of Demand Paging with respect to Prepaging as a
page fetching policy?
17. Compare simple paging with simple segmentation with respect to the amount of memory
required by the address translation structures in order to convert virtual addresses to physical
addresses.
18. Consider a logical address space of 32 pages with 1,024 words per page, mapped onto a physical
memory of 16 frames.
a. How many bits are required in the logical address?
b. How many bits are required in the physical address?
19. Consider a simple paging system with the following parameters: 2^32 bytes of physical
memory; page size of 2^10 bytes; 2^16 pages of logical address space.
a. How many bits are in a logical address?
b. How many bytes in a frame?
c. How many bits in the physical address specify the frame?

d. How many entries in the page table?


e. How many bits in each page table entry? Assume each page table entry contains
a valid/invalid bit.
20. Consider a simple segmentation system that has the following segment table:

For each of the following logical addresses, determine the physical address or indicate if a
segment fault occurs:
a. 0, 198 b. 2, 156 c. 1, 530 d. 3, 444
e. 0, 222
Answer the following questions:
1. Consider a virtual memory based paging system. Assuming a page size of 4 Kbytes and that a
page table entry takes 4 bytes, how many levels of page tables would be required to map a 64-
bit address space, if the top level page table fits into a single page?
2. Consider a page reference string for a process with a working set of M frames, initially all
empty. The page reference string is of length P with N distinct page numbers in it. For any
page replacement algorithm,
a. What is a lower bound on the number of page faults?
b. What is an upper bound on the number of page faults?
3. A process references five pages, A, B, C, D, and E, in the following order:
A; B; C; D; A; B; E; A; B; C; D; E
Assume that the replacement algorithm is first-in-first-out and find the number of page
transfers during this sequence of references starting with an empty main memory with three
page frames. Repeat for four page frames.
4. Assume that a program has just referenced an address in virtual memory. Describe a scenario in
which each of the following can occur. (If no such scenario can occur, explain why.)
a. TLB miss with no page fault b. TLB miss and page fault
c. TLB hit and no page fault d. TLB hit and page fault
5. Consider the following memory configuration at any point in time. The shaded blocks are
occupied, whereas the unshaded blocks are empty. Besides each block the size has been
specified. The partition of size 12 Mbytes was the last partition that was created. A process of

size 5 Mbytes has to be loaded into the memory now. Explain how a new partition is created
for it using the allocation algorithms first-fit, best-fit and next-fit.

4.13 Suggested Readings

1. Operating System Concepts by Abraham Silberschatz, Peter B. Galvin, Greg Gagne, Wiley.

2. Operating Systems: Internals and Design Principles by William Stallings, Pearson.
