
Virtual memory

Virtual memory is a technique that allows the execution of processes that may
not be completely in memory; that is, programs can be larger than physical memory.
Further, it abstracts main memory into an extremely large, uniform array of
storage, separating logical memory as viewed by the user from physical memory.
This technique frees programmers from concern over memory storage
limitations.

Example: suppose the operating system needs 80 MB of memory to hold all
the programs that are running, but only 32 MB of RAM is installed in the
computer. The operating system sets up 80 MB of virtual memory and
employs a virtual memory manager, a program designed to control virtual
memory, to manage those 80 MB.

The virtual memory manager sets up a file on the hard disk that is 48 MB in size
(80 minus 32). The operating system then proceeds to use 80 MB worth of
memory addresses. To the operating system, it appears as if 80 MB of memory
exists. It lets the virtual memory manager worry about how to handle the fact that
we only have 32 MB of real memory.
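
Restated as a quick calculation (a trivial Python sketch; the variable names are
made up for illustration):

virtual_mb = 80        # address space the OS presents to its programs
physical_mb = 32       # RAM actually installed
paging_file_mb = virtual_mb - physical_mb
print(paging_file_mb)  # 48 MB of backing store set up on disk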

[Figure: a segmentation example showing a logical address space with five segments
(subroutine, Sqrt, symbol table, main program, stack) and the segment table that
maps them to physical memory: segment 0 has limit 1000 and base 1400; segment 1
has limit 400 and base 6300; segment 2 has limit 400 and base 4300; segment 3 has
limit 1100 and base 3200; segment 4 has limit 1000 and base 4700.]

Virtual memory is commonly implemented by demand paging. A demand-paging
system is similar to a paging system with swapping. Processes reside on
secondary memory. When we want to execute a process, we swap it into
memory. Rather than swapping the entire process into memory, however, we use
a lazy swapper. A lazy swapper never swaps a page into memory unless that page
will be needed. The ability to execute a program that is only partially in memory
has many benefits:

The program would no longer be constrained by the amount of physical
memory that is available. Users would be able to write programs for an
extremely large virtual address space, simplifying the programming task.
Because each user program could take less physical memory, more
programs could be run at the same time, with a corresponding increase in
CPU utilization and throughput, but with no increase in response time or
turnaround time.
Less I/O would be needed to load or swap each user program into
memory, so each user program would run faster.
Thus, running a program that is not entirely in memory would benefit both the
system and the user.

Now, when a process tries to access a page that is not in memory, a page-fault trap
occurs.

This trap is the result of the operating system's failure to bring the desired page
into memory. To bring the page into memory, the operating system performs the
following steps:

1. It checks an internal table for this process to determine whether the reference
was a valid or an invalid memory access.

2. If the reference was invalid, it terminates the process. If it was valid, but the OS
has not yet brought in that page, it now pages it in.

3. It then finds a free frame.

4. It schedules a disk operation to read the desired page into the newly allocated
frame.

5. When the disk read is complete, it modifies the internal table kept with the
process and the page table to indicate that the page is now in memory.

6. It then restarts the instruction that was interrupted by the illegal address trap.

The process can now access the page as though it had always been in memory.
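
The steps above can be expressed as a tiny simulation. The following is a minimal,
self-contained sketch in Python, not how any real operating system implements it:
the backing store and the frame pool are plain Python structures with made-up
names, and the case where no free frame remains, which would require page
replacement (discussed below), is ignored.

PHYSICAL_FRAMES = 4

backing_store = {n: f"contents of page {n}" for n in range(10)}   # pages 0-9 on disk
page_table = {n: {"valid": False, "frame": None} for n in backing_store}
frames = {}                                    # frame number -> page contents
free_frames = list(range(PHYSICAL_FRAMES))

def access(page):
    entry = page_table.get(page)
    if entry is None:                          # steps 1-2: an invalid reference
        raise MemoryError(f"illegal reference to page {page}")
    if not entry["valid"]:                     # page fault: page not yet in memory
        frame = free_frames.pop()              # step 3: find a free frame
        frames[frame] = backing_store[page]    # step 4: "read" the page from disk
        entry["frame"] = frame                 # step 5: update the page table
        entry["valid"] = True
        # step 6: the faulting instruction would be restarted; here we fall through
    return frames[entry["frame"]]

print(access(3))   # first reference: page fault, page 3 is brought into memory
print(access(3))   # second reference: hit, served from memory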
Demand paging can have a significant effect on the performance of a system.
Memory access time is typically 10 to 200 nanoseconds, but a disk access takes
milliseconds. Whenever a page fault occurs, the page has to be brought from disk
into main memory, so servicing a fault costs far more than an ordinary memory
access. The effective access time is therefore:

Effective access time = (1 - p) x memory access time + p x page-fault service time

where p is the probability that a page fault occurs. If p is large, the effective
access time increases and system performance degrades.
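
As a quick illustration, the sketch below plugs representative numbers into the
formula; the 200-nanosecond memory access and 8-millisecond page-fault service
time are assumed figures, not measurements.

MEMORY_ACCESS_NS = 200
PAGE_FAULT_SERVICE_NS = 8_000_000      # 8 milliseconds expressed in nanoseconds

def effective_access_time(p):
    # EAT = (1 - p) * memory access time + p * page-fault service time
    return (1 - p) * MEMORY_ACCESS_NS + p * PAGE_FAULT_SERVICE_NS

print(effective_access_time(0.0))      # 200.0 ns: no page faults at all
print(effective_access_time(0.001))    # 8199.8 ns: one fault per 1000 accesses
                                       # already slows memory down by a factor of about 40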

Page replacement
Whenever a page fault occurs, a new page has to be loaded into memory. But
since other processes are also running, there may be no free frame available. In
that case a page that is currently in memory must be replaced so that the program
can continue running. If the replaced page is needed again later, some other page
is replaced to bring it back in, and so the cycle continues.

Some of the common page replacement algorithms used for this purpose are:

1. FIFO: this is the simplest page replacement algorithm. Each page is associated
with the time at which it was brought into memory, and the oldest page is chosen
for replacement; the page that is paged in first is also the one that is paged out
first, hence the name. This algorithm suffers from Belady's anomaly: for some page
replacement algorithms, the page-fault rate increases as the number of available
frames increases, whereas normally the page-fault rate decreases when more
frames are added (the sketch after this list demonstrates the anomaly).

2. Optimal page replacement: this algorithm replaces the page that will not be
used for the longest period of time. It is called optimal because it guarantees the
lowest possible page-fault rate for a fixed number of frames. However, the
algorithm is not practical, as it requires advance knowledge of how pages will be
used, which is not always possible.

3. LRU page replacement: as its name suggests, this algorithm replaces the least
recently used page. To do so, it keeps track of the time at which each page was
last used. According to simulation results, LRU performs better than FIFO but
still has a higher page-fault rate than the optimal algorithm.
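
To make the comparison concrete, here is a small, self-contained Python sketch
that counts page faults for the three algorithms on the classic reference string
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5; the function names and the reference string are
chosen purely for illustration.

from collections import deque

def fifo_faults(refs, frame_count):
    # FIFO: evict the page that has been in memory the longest.
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()
            frames.append(page)
    return faults

def lru_faults(refs, frame_count):
    # LRU: evict the page that has gone unused for the longest time.
    frames, faults = [], 0          # most recently used page kept at the end
    for page in refs:
        if page in frames:
            frames.remove(page)     # hit: refresh this page's recency
        else:
            faults += 1
            if len(frames) == frame_count:
                frames.pop(0)       # least recently used page sits at the front
        frames.append(page)
    return faults

def optimal_faults(refs, frame_count):
    # Optimal: evict the page whose next use lies farthest in the future.
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < frame_count:
            frames.append(page)
            continue
        future = refs[i + 1:]
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else len(future))
        frames[frames.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))     # 9 then 10 faults: Belady's anomaly
print(lru_faults(refs, 3), optimal_faults(refs, 3))   # 10 and 7 faults with 3 frames

Note that FIFO causes more faults with 4 frames than with 3 on this string, which is
exactly Belady's anomaly; LRU and the optimal algorithm never exhibit it.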

Thrashing
To make effective use of the CPU and of virtual memory, we increase the degree
of multiprogramming by starting more processes and reducing the number of
frames allocated to each process; whenever a process needs a new page, it is
paged in using one of the page replacement algorithms. If the degree of
multiprogramming becomes so high that more CPU time is spent paging (swapping
pages in and out) than executing the programs, the condition is called thrashing.

[Figure: CPU utilization plotted against the degree of multiprogramming; utilization
rises as multiprogramming increases, then collapses once the region marked
thrashing is reached.]

The graph illustrates the thrashing phenomenon: as the degree of
multiprogramming increases, CPU utilization rises, but beyond a certain point it
falls off sharply. To prevent thrashing, a process must be provided with as many
frames as it needs, and this allocation must be adjusted dynamically. The
working-set strategy does this by looking at how many frames a process is
actually using; it relies on the locality model of process execution.
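
A minimal sketch of the working-set idea, assuming an arbitrary window size DELTA
and a made-up reference string: the pages referenced in the last DELTA references
approximate the frames the process currently needs.

DELTA = 4   # working-set window size, chosen arbitrarily for illustration

def working_set(refs, t, delta=DELTA):
    # Distinct pages referenced in the window of size delta ending at time t.
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 1, 3, 1, 2, 4, 4, 4, 4, 5, 6]
for t in (3, 9, 11):
    print(t, sorted(working_set(refs, t)))
# t=3  -> [1, 2, 3]  the process currently needs three frames
# t=9  -> [4]        the locality has shrunk to a single page
# t=11 -> [4, 5, 6]  a new locality is forming

If the sum of the working-set sizes of all processes exceeds the number of
available frames, thrashing is imminent, and the operating system can respond by
suspending one of the processes.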