
Term Paper
of
BCA-MCA
COMPUTER ORGANISATION AND ARCHITECTURE
on the topic
VIRTUAL MEMORY ARCHITECTURE

SUBMITTED TO:-                SUBMITTED BY:-
Ms. Ketin Sood                Maninder Singh
                              ROLL NO: RE3802B43
                              BCA-MCA (Int.)
ACKNOWLEDGMENTS

First and foremost I thank my teacher, who assigned me this term paper to bring out my creative capabilities. I express my gratitude to my parents for being a continuous source of encouragement and for all the financial aid they have given. I would also like to acknowledge the assistance provided to me by the library staff of Lovely Professional University. My heartfelt gratitude goes to my friends, classmates and roommates for helping me to complete my work in time.


TABLE OF CONTENTS:-
• VIRTUAL MEMORY ARCHITECTURE
• HISTORY
• PAGED VIRTUAL MEMORY
• PAGE TABLES
• DYNAMIC ADDRESS TRANSLATION
• PAGING SUPERVISOR
• PERMANENTLY RESIDENT PAGES
• SEGMENTED VIRTUAL MEMORY
• AVOIDING THRASHING
• ABSTRACT
• BACKGROUND
• BASIC OPERATION
• DETAILS
• PAGING AND VIRTUAL MEMORY
• WINDOWS EXAMPLE
• VIRTUAL MEMORY IN LINUX
• BIBLIOGRAPHY
Virtual memory architecture
Virtual memory is a computer system technique which gives an application
program the impression that it has contiguous working memory (an address
space), while in fact it may be physically fragmented and may even overflow
onto disk storage. Systems that use this technique make programming of
large applications easier and use real physical memory (e.g. RAM) more
efficiently than those without virtual memory. Virtual memory differs
significantly from memory virtualization in that virtual memory allows
resources to be virtualized as memory for a specific system, as opposed to a
large pool of memory being virtualized as smaller pools for many different
systems.

Note that "virtual memory" is more than just "using disk space to extend
physical memory size" - that is merely the extension of the memory
hierarchy to include hard disk drives. Extending memory to disk is a normal
consequence of using virtual memory techniques, but could be done by other
means such as overlays or swapping programs and their data completely out
to disk while they are inactive. The definition of "virtual memory" is based
on redefining the address space as a set of contiguous virtual memory
addresses that "trick" programs into thinking they are using large blocks of
contiguous addresses.

All modern general-purpose computer operating systems use virtual memory
techniques for ordinary applications, such as word processors, spreadsheets,
multimedia players, accounting software, etc. Older operating systems, such as
DOS of the 1980s, or those for the mainframes of the 1960s, generally had no
virtual memory functionality - notable exceptions being the Atlas, the B5000
and Apple Computer's Lisa.

Embedded systems and other special-purpose computer systems which
require very fast and/or very consistent response times may choose not to
use virtual memory due to decreased determinism.

Virtual memory is intended to help the programmer by taking care of some
memory housekeeping duties.

Virtual memory, or virtual memory addressing, is a memory management
technique used by multitasking computer operating systems wherein
non-contiguous memory is presented to a software application (also known as
a process) as contiguous memory. The contiguous memory is referred to as
the virtual address space.

Virtual memory addressing is typically used in paged memory systems. This
in turn is often combined with memory swapping, whereby memory pages
stored in primary storage are written to secondary storage, thus freeing
faster primary storage for other processes to use. A further benefit of this is
that, since secondary storage is typically larger than primary storage, the
combined storage of all virtual address spaces may be larger than the total
primary storage in a system.

The term "virtual memory" is often confused with "memory swapping" (or
"page/swap file" use), probably due in part to the prolific Microsoft
Windows family of operating systems referring to the enabling/disabling of
memory swapping as virtual memory. In fact, Windows uses paged memory
and virtual memory addressing, even if the so-called "virtual memory" is
disabled.

In technical terms, virtual memory allows software to run in a memory
address space whose size and addressing are not necessarily tied to the
computer's physical memory. To properly implement virtual memory,
the CPU (or a device attached to it) must provide a way for the
operating system to map virtual memory to physical memory, and to
detect when an address is referenced that does not currently correspond
to main memory, so that the needed data can be swapped in. While it
would certainly be possible to provide virtual memory without the
CPU's assistance, it would essentially require emulating a CPU that did
provide the needed features.
History

In the 1940s and 1950s, before the development of virtual memory, all
larger programs had to contain logic for managing two-level storage
(primary and secondary, today's analogies being RAM and hard disk), such
as overlaying techniques. Programs were responsible for moving overlays
back and forth between secondary storage and primary storage.

The main reason for introducing virtual memory was therefore not simply to
extend primary memory, but to make such an extension as easy to use for
programmers as possible.

Many systems already had the ability to divide the memory between
multiple programs (required for multiprogramming and multiprocessing),
provided for example by "base and bounds registers" on early models of the
PDP-10, without providing virtual memory. That gave each application a
private address space starting at an address of 0. Each address in the
private address space was checked against a bounds register to make sure
it was within the section of memory allocated to the application and, if it
was, the contents of the corresponding base register were added to it to
give an address in main memory. This is a simple form of segmentation
without virtual memory.

Virtual memory was developed in approximately 1959–1962, at the
University of Manchester for the Atlas Computer, completed in 1962.
However, Fritz-Rudolf Güntsch, one of Germany's pioneering computer
scientists and later the developer of the Telefunken TR 440 mainframe,
claims to have invented the concept in 1957 in his doctoral dissertation
Logischer Entwurf eines digitalen Rechengerätes mit mehreren asynchron
laufenden Trommeln und automatischem Schnellspeicherbetrieb (Logic
Concept of a Digital Computing Device with Multiple Asynchronous Drum
Storage and Automatic Fast Memory Mode).

In 1961, Burroughs released the B5000, the first commercial computer with
virtual memory. It used segmentation rather than paging.

Like many technologies in the history of computing, virtual memory was not
accepted without challenge. Before it could be implemented in mainstream
operating systems, many models, experiments, and theories had to be
developed to overcome the numerous problems. Dynamic address translation
required specialized, expensive, and hard-to-build hardware; moreover, it
initially slowed down access to memory slightly. There were also worries
that new system-wide algorithms for utilizing secondary storage would be
far less effective than the application-specific ones used previously.

By 1969 the debate over virtual memory for commercial computers was
over. An IBM research team led by David Sayre showed that the virtual
memory overlay system consistently worked better than the best manually
controlled systems.

Possibly the first minicomputer to introduce virtual memory was the
Norwegian NORD-1. During the 1970s, other minicomputers implemented
virtual memory, notably VAX models running VMS.

Virtual memory was introduced to the x86 architecture with the protected
mode of the Intel 80286 processor. At first it was done with segment
swapping, which became inefficient with larger segments. The Intel 80386
introduced support for paging underneath the existing segmentation layer.
The page fault exception could be chained with other exceptions without
causing a double fault.


Paged virtual memory

Almost all implementations of virtual memory divide the virtual address
space of an application program into pages; a page is a block of contiguous
virtual memory addresses. Pages are usually at least 4K bytes (4 KiB) in
size, and systems with large virtual address ranges or large amounts of real
memory (e.g. RAM) generally use larger page sizes.
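
To make the relationship between page size, offset bits and the number of pages concrete, here is a minimal C sketch; the 4 KiB page size and the 32-bit virtual address width are assumptions chosen only for this illustration, not values taken from the text above.

    #include <stdio.h>

    int main(void) {
        const unsigned long page_size    = 4096UL;  /* assumed 4 KiB page             */
        const unsigned      address_bits = 32;      /* assumed 32-bit virtual address */

        /* Offset bits within a page: log2(page_size). */
        unsigned offset_bits = 0;
        for (unsigned long s = page_size; s > 1; s >>= 1)
            offset_bits++;

        /* The remaining high-order bits select the virtual page number. */
        unsigned page_number_bits = address_bits - offset_bits;
        unsigned long long pages  = 1ULL << page_number_bits;

        printf("offset bits: %u, page-number bits: %u, pages: %llu\n",
               offset_bits, page_number_bits, pages);
        return 0;
    }

With these assumed values the sketch reports 12 offset bits, 20 page-number bits and 1,048,576 pages.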

Page tables

Almost all implementations use page tables to translate the virtual addresses
seen by the application program into physical addresses (also referred to as
"real addresses") used by the hardware to process instructions. Each entry in
the page table contains a mapping for a virtual page to either the real
memory address at which the page is stored, or an indicator that the page is
currently held in a disk file. (Although most do, some systems may not
support use of a disk file for virtual memory.)

Systems can have one page table for the whole system or a separate page
table for each application. If there is only one, different applications which
are running at the same time share a single virtual address space, i.e. they
use different parts of a single range of virtual addresses. Systems which use
multiple page tables provide multiple virtual address spaces - concurrent
applications think they are using the same range of virtual addresses, but
their separate page tables redirect to different real addresses.
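
As an illustration of what such a mapping might look like, the following minimal C sketch defines a hypothetical single-level page table; the field names, widths and the table size are assumptions made for the example and do not describe any particular system.

    #include <stdint.h>
    #include <stdbool.h>

    /* One entry of a hypothetical single-level page table. */
    typedef struct {
        bool     present;     /* page is currently in real (physical) memory */
        bool     on_disk;     /* page is currently held in a disk file       */
        uint32_t frame;       /* physical frame number, valid if present     */
        uint32_t disk_block;  /* location on disk, valid if on_disk          */
    } PageTableEntry;

    #define NUM_PAGES (1u << 20)   /* assumed: 32-bit addresses, 4 KiB pages */

    /* With one page table per application, each process owns its own array,
     * so the same virtual page number can map to different real addresses
     * in different processes. */
    typedef struct {
        PageTableEntry entries[NUM_PAGES];
    } ProcessPageTable;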

Dynamic address translation


When executing an instruction, if the CPU fetches an instruction located at a
particular virtual address, fetches data from a specific virtual address, or
stores data to a particular virtual address, the virtual address must be
translated to the corresponding physical address. This is done by a hardware
component, sometimes called a memory management unit, which looks up
the real address (from the page table) corresponding to a virtual address and
passes the real address to the parts of the CPU which execute instructions. If
the page tables indicate that the virtual memory page is not currently in real
memory, the hardware raises a page fault exception (special internal signal)
which invokes the paging supervisor component of the operating system (see
below).
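
The lookup just described can be sketched in C as follows; the structure, the constants and the raise_page_fault() helper are hypothetical and stand in for what the memory management unit does in hardware.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12u                          /* assumed 4 KiB pages */
    #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1u)

    typedef struct {
        bool     present;   /* page currently in real memory?   */
        uint32_t frame;     /* physical frame number if present */
    } PageTableEntry;

    /* Hypothetical hook into the paging supervisor (see the next section). */
    extern void raise_page_fault(uint32_t virtual_page);

    /* Conceptual dynamic address translation: split the virtual address,
     * look the page up in the page table, and either form the physical
     * address or raise a page fault exception. */
    uint32_t translate(const PageTableEntry *table, uint32_t vaddr) {
        uint32_t page   = vaddr >> PAGE_SHIFT;
        uint32_t offset = vaddr & PAGE_MASK;

        if (!table[page].present)
            raise_page_fault(page);   /* paging supervisor brings the page in;
                                         the translation is then retried      */

        return (table[page].frame << PAGE_SHIFT) | offset;
    }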

Paging supervisor

This part of the operating system creates and manages the page tables. If the
dynamic address translation hardware raises a page fault exception, the
paging supervisor searches the page space on secondary storage for the page
containing the required virtual address, reads it into real physical memory,
updates the page tables to reflect the new location of the virtual address and
finally tells the dynamic address translation mechanism to start the search
again. Usually all of the real physical memory is already in use and the
paging supervisor must first save an area of real physical memory to disk
and update the page table to say that the associated virtual addresses are no
longer in real physical memory but saved on disk. Paging supervisors
generally save and overwrite areas of real physical memory which have been
least recently used, because these are probably the areas which are used least
often. So every time the dynamic address translation hardware matches a
virtual address with a real physical memory address, it must put a time-
stamp in the page table entry for that virtual address.
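
A rough C sketch of this flow is shown below; the least-recently-used victim selection, the I/O helpers and the data structures are all assumptions invented for the illustration, not part of any real paging supervisor.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        bool     present, dirty;
        uint32_t frame, disk_block;
        uint64_t last_used;   /* time-stamp written on each successful translation */
    } PageTableEntry;

    /* Hypothetical I/O helpers standing in for real disk drivers. */
    extern void read_page_from_disk(uint32_t disk_block, uint32_t frame);
    extern void write_page_to_disk(uint32_t frame, uint32_t disk_block);

    /* Pick the least recently used resident page as the victim. */
    static uint32_t choose_victim(const PageTableEntry *table, uint32_t num_pages) {
        uint32_t victim = 0;
        uint64_t oldest = UINT64_MAX;
        for (uint32_t p = 0; p < num_pages; p++) {
            if (table[p].present && table[p].last_used < oldest) {
                oldest = table[p].last_used;
                victim = p;
            }
        }
        return victim;
    }

    /* Conceptual page fault handling: evict a victim (writing it back to disk
     * if it was modified), read the missing page into the freed frame, and
     * update the page table so the faulting translation can be retried. */
    void handle_page_fault(PageTableEntry *table, uint32_t num_pages,
                           uint32_t faulting_page) {
        uint32_t victim = choose_victim(table, num_pages);
        uint32_t frame  = table[victim].frame;

        if (table[victim].dirty)
            write_page_to_disk(frame, table[victim].disk_block);
        table[victim].present = false;

        read_page_from_disk(table[faulting_page].disk_block, frame);
        table[faulting_page].frame   = frame;
        table[faulting_page].present = true;
    }
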
Permanently resident pages

All virtual memory systems have memory areas that are "pinned down", i.e.
cannot be swapped out to secondary storage, for example:

• Interrupt mechanisms generally rely on an array of pointers to the
handlers for various types of interrupt (I/O completion, timer event,
program error, page fault, etc.). If the pages containing these pointers
or the code that they invoke were pageable, interrupt handling would
become even more complex and time-consuming; and it would be
especially difficult in the case of page fault interrupts.
• The page tables are usually not pageable.
• Data buffers that are accessed outside of the CPU, for example by
peripheral devices that use direct memory access (DMA) or by I/O
channels. Usually such devices and the buses (connection paths) to
which they are attached use physical memory addresses rather than
virtual memory addresses. Even on buses with an IOMMU, which is a
special memory management unit that can translate virtual addresses
used on an I/O bus to physical addresses, the transfer cannot be
stopped if a page fault occurs and then restarted when the page fault
has been processed. So pages containing locations to which or from
which a peripheral device is transferring data are either permanently
pinned down or pinned down while the transfer is in progress (a
user-level sketch of pinning such a buffer follows this list).
• Timing-dependent kernel/application areas cannot tolerate the varying
response time caused by paging.
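
On POSIX systems an application can request this kind of pinning itself with mlock(); the following minimal C sketch pins a buffer that might, for example, be handed to a device for I/O. The buffer size is arbitrary and the example assumes a Unix-like system providing <sys/mman.h>.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>   /* mlock, munlock */

    int main(void) {
        size_t len = 64 * 1024;        /* arbitrary 64 KiB buffer for the example */
        void *buf = malloc(len);
        if (buf == NULL)
            return 1;

        /* Ask the operating system to keep these pages resident in physical
         * memory so they cannot be paged out while they are in use. */
        if (mlock(buf, len) != 0) {
            perror("mlock");
            free(buf);
            return 1;
        }

        /* ... the buffer could now be used for I/O that must not page-fault ... */

        munlock(buf, len);             /* allow the pages to be paged out again */
        free(buf);
        return 0;
    }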

Segmented virtual memory

Some systems, such as the Burroughs large systems, do not use paging to
implement virtual memory. Instead, they use segmentation, so that an
application's virtual address space is divided into variable-length segments.
A virtual address consists of a segment number and an offset within the
segment.

Memory is still physically addressed with a single number (called an absolute
or linear address). To obtain it, the processor looks up the segment number
in a segment table to find a segment descriptor. The segment descriptor
contains a flag indicating whether the segment is present in main memory
and, if it is, the address in main memory of the beginning of the segment
(the segment's base address) and the length of the segment. It checks whether
the offset within the segment is less than the length of the segment and, if it
is not, an interrupt is generated. If a segment is not present in main memory, a
hardware interrupt is raised to the operating system, which may try to read
the segment into main memory (swap it in). The operating system might
have to remove other segments (swap them out) from main memory in order to
make room for the segment to be read in.
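
The descriptor lookup and bounds check just described can be sketched in C as follows; the descriptor layout and the fault helpers are hypothetical and only illustrate the logic, not any particular machine.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        bool     present;   /* segment resident in main memory?        */
        uint32_t base;      /* main-memory address of start of segment */
        uint32_t length;    /* segment length in bytes                 */
    } SegmentDescriptor;

    /* Hypothetical fault hooks; a real machine would interrupt the OS here. */
    extern void segment_fault(uint32_t segment);      /* segment not present  */
    extern void protection_fault(uint32_t segment);   /* offset out of bounds */

    /* Translate a (segment number, offset) virtual address into an absolute
     * (linear) address. */
    uint32_t translate_segmented(const SegmentDescriptor *table,
                                 uint32_t segment, uint32_t offset) {
        const SegmentDescriptor *d = &table[segment];

        if (!d->present)
            segment_fault(segment);      /* OS swaps the segment in           */
        if (offset >= d->length)
            protection_fault(segment);   /* offset exceeds the segment length */

        return d->base + offset;         /* absolute (linear) address         */
    }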

Notably, the Intel 80286 supported a similar segmentation scheme as an
option, but it was unused by most operating systems.

It is possible to combine segmentation and paging, usually dividing each
segment into pages. In systems that combine them, such as Multics and the
IBM System/38 and IBM System i machines, virtual memory is usually
implemented with paging, with segmentation used to provide memory
protection. With the Intel 80386 and later IA-32 processors, the segments
reside in a 32-bit linear paged address space, so segments can be moved into
and out of that linear address space, and pages in that linear address space
can be moved in and out of main memory, providing two levels of virtual
memory; however, few if any operating systems do so. Instead, they only
use paging.

The difference between virtual memory implementations using pages and
using segments is not only about dividing memory into fixed-size versus
variable-size units. In some systems, e.g. Multics, or the later System/38 and
Prime machines, segmentation was actually visible to user processes as part
of the semantics of the memory model. In other words, instead of a process
just having a memory which looked like a single large vector of bytes or
words, it was more structured. This is different from using pages, which does
not change the model visible to the process. This had important consequences.
A segment was not just a "page with a variable length", or a simple way to
lengthen the address space (as in the Intel 80286). In Multics, segmentation
was a very powerful mechanism that was used to provide a single-level
virtual memory model, in which there was no differentiation between
"process memory" and "file system" - a process's active address space
consisted only of a list of segments (files) which were mapped into its
potential address space, both code and data. This is not the same as the later
mmap function in Unix, because inter-file pointers do not work when mapping
files into semi-arbitrary places. Multics had such an addressing mode built
into most instructions. In other words, it could perform relocated
inter-segment references, thus eliminating the need for a linker completely.
This also worked when different processes mapped the same file into
different places in their private address spaces.

Avoiding thrashing

All implementations need to avoid a problem called "thrashing", where the
computer spends too much time shuffling blocks of virtual memory between
real memory and disk, and therefore appears to work more slowly. Better
design of application programs can help, but ultimately the only cure is to
install more real memory.

Abstract

Real-time ray tracing offers a number of interesting benefits over current
rasterization techniques. However, a major drawback has been that ray
tracing requires access to the entire scene database. This is particularly
problematic for hardware implementations that only have a limited amount
of dedicated on-board memory.
In this paper we propose a virtual memory architecture for ray tracing that
efficiently renders scenes many times larger than the available on-board
memory. Instead of wasting large dedicated memory on a graphics card,
scene data is stored in main memory, and on-board memory is used only as a
cache. We show that typical scenes from computer games require less than
8 MB of cache memory, while 64 MB is sufficient even for scenes with
gigabytes of geometry and textures. The caching approach also minimizes
the bandwidth between the graphics subsystem and the host, such that even a
standard PCI connection is sufficient.

Background

Most computers possess four kinds of memory: registers in the CPU, CPU
caches (generally some kind of static RAM) both inside and adjacent to the
CPU, main memory (generally dynamic RAM) which the CPU can read and
write to directly and reasonably quickly, and disk storage, which is much
slower but also much larger. CPU register use is generally handled by the
compiler, and this is not a huge burden as data does not generally stay in
registers very long. The decision of when to use cache and when to use main
memory is generally dealt with by hardware, so both are usually regarded
together by the programmer as simply physical memory.

Many applications require access to more information (code as well as data)
than can be stored in physical memory. This is especially true when the
operating system allows multiple processes/applications to run seemingly in
parallel. The obvious response to the problem of the maximum size of the
physical memory being less than that required for all running programs is for
the application to keep some of its information on the disk, and move it back
and forth to physical memory as needed, but there are a number of ways to
do this.

One option is for the application software itself to be responsible both for
deciding which information is to be kept where, and for moving it back
and forth. The programmer would do this by determining which sections of
the program (and also its data) were mutually exclusive, and then arranging
for loading and unloading the appropriate sections from physical memory as
needed. The disadvantage of this approach is that each application's
programmer must spend time and effort on designing, implementing, and
debugging this mechanism, instead of focusing on his or her application; this
hampers the programmer's efficiency. Also, if any programmer could truly
choose which of their items of data to store in the physical memory at any
one time, they could easily conflict with the decisions made by another
programmer who also wanted to use all the available physical memory at
that point.

Another option is to store some form of handles to data rather than direct
pointers, and let the OS deal with swapping the data associated with those
handles between the swap file and physical memory as needed. This works
but has a couple of problems, namely that it complicates application code,
that it requires applications to play nice (they generally need the power to
lock the data into physical memory to actually work on it), and that it stops
the language's standard library from doing its own suballocations inside large
blocks from the OS to improve performance. The best known example of
this kind of arrangement is probably the 16-bit versions of Windows.

The modern solution is to use virtual memory, in which a combination of
special hardware and operating system software makes use of both kinds of
memory to make it look as if the computer has a much larger main memory
than it actually does, and to lay that space out differently at will. It does this
in a way that is invisible to the rest of the software running on the computer.
It usually provides the ability to simulate a main memory of almost any size,
as limited by the size of the addresses being used by the operating system
and CPU: the total size of the virtual memory can be 2^32 bytes for a 32-bit
system, or approximately 4 gigabytes (though OS design decisions can make
the amount available to applications in practice much less than this), while
newer 64-bit chips and operating systems use 64- or 48-bit addresses and can
index much more virtual memory.

This makes the job of the application programmer much simpler. No matter
how much memory the application needs, it can act as if it has access to a
main memory of that size and can place its data wherever in that virtual
space it likes. The programmer can also completely ignore the need to
manage the moving of data back and forth between the different kinds of
memory.

Basic operation
When virtual memory is in use, and a main memory location is read or
written by the CPU, hardware within the computer translates the address
of the memory location generated by the software (the virtual memory
address) into either:

• the address of a real memory location (the physical memory address)
which is assigned within the computer's physical memory to hold that
memory item, or
• an indication that the desired memory item is not currently resident in
main memory (a so-called virtual memory exception or page fault).

In the former case, the memory reference operation is completed just as if
virtual memory were not involved. In the latter case, the operating
system is invoked to handle the situation, since the actions needed before the
program can continue are usually quite complex.

The effect of this is to swap sections of information between the physical
memory and the disk; the area of the disk which holds the information which
is not currently in physical memory is called the swap file (OS/2, early
versions of Windows, and others), page file (Windows), or swap partition (a
dedicated partition of a hard disk, commonly seen in the Linux operating
system).

Also, on most modern systems virtual address space can be mapped to disk
storage other than the swap file. This allows parts of executables to be paged
in as needed directly from the executable file, saving the need to load the
entire executable at application load time and reducing the demand for swap
space. It also allows the operating system to keep only one copy of an
executable in memory at a time rather than loading it separately for each
running instance, thereby reducing the pressure on physical memory.
Details

The translation from virtual to physical addresses is implemented by an
MMU (Memory Management Unit). This may be either a module of the
CPU, or an auxiliary, closely coupled chip.

The operating system is responsible for deciding which parts of the
program's simulated main memory are kept in physical memory. The
operating system also maintains the translation tables which provide the
mappings between virtual and physical addresses, for use by the MMU.
Finally, when a virtual memory exception occurs, the operating system is
responsible for allocating an area of physical memory to hold the missing
information (possibly in the process pushing something else out to
disk), bringing the relevant information in from the disk, updating the
translation tables, and finally resuming execution of the software that
incurred the virtual memory exception.

In most computers, these translation tables are stored in physical memory.
Therefore, a virtual memory reference might actually involve two or more
physical memory references: one or more to retrieve the needed address
translation from the page tables, and a final one to actually perform the
memory reference.

To minimize the performance penalty of address translation, most modern
CPUs include an on-chip MMU and maintain a table of recently used
virtual-to-physical translations, called a Translation Lookaside Buffer, or
TLB. Addresses with entries in the TLB require no additional memory
references (and therefore time) to translate. However, the TLB can only
maintain a fixed number of mappings between virtual and physical
addresses; when the needed translation is not resident in the TLB, action will
have to be taken to load it in.

On some processors, this is performed entirely in hardware; the MMU has to
make additional memory references to load the required translations from the
translation tables, but no other action is needed. In other processors,
assistance from the operating system is needed; an exception is raised, and
on this exception the operating system replaces one of the entries in the
TLB with an entry from the translation table, and the instruction which made
the original memory reference is restarted.
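
A minimal C sketch of the TLB idea follows; the tiny fully associative table, its round-robin replacement and the walk_page_table() helper are assumptions for illustration only, since a real TLB is a hardware structure.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT  12u          /* assumed 4 KiB pages */
    #define TLB_ENTRIES 16u          /* assumed tiny TLB    */

    typedef struct {
        bool     valid;
        uint32_t virtual_page;
        uint32_t frame;
    } TlbEntry;

    static TlbEntry tlb[TLB_ENTRIES];
    static unsigned next_slot;       /* simple round-robin replacement */

    /* Hypothetical slow path: walk the in-memory translation tables. */
    extern uint32_t walk_page_table(uint32_t virtual_page);

    /* Translate an address, consulting the TLB first, refilling on a miss. */
    uint32_t translate(uint32_t vaddr) {
        uint32_t page   = vaddr >> PAGE_SHIFT;
        uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1u);

        for (unsigned i = 0; i < TLB_ENTRIES; i++)            /* TLB hit: no extra */
            if (tlb[i].valid && tlb[i].virtual_page == page)  /* memory references */
                return (tlb[i].frame << PAGE_SHIFT) | offset;

        uint32_t frame = walk_page_table(page);               /* miss: slow path   */
        tlb[next_slot] = (TlbEntry){ true, page, frame };     /* refill one entry  */
        next_slot = (next_slot + 1) % TLB_ENTRIES;
        return (frame << PAGE_SHIFT) | offset;
    }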

The hardware that supports virtual memory almost always supports memory
protection mechanisms as well. The MMU may have the ability to vary its
operation according to the type of memory reference (for read, write or
execution), as well as the privilege mode of the CPU at the time the memory
reference was made. This allows the operating system to protect its own
code and data (such as the translation tables used for virtual memory) from
corruption by an erroneous application program and to protect application
programs from each other and (to some extent) from themselves (e.g. by
preventing writes to areas of memory which contain code).
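
At the application level this protection is visible through calls such as POSIX mprotect(); the sketch below marks a page read-only so that a later write would cause a protection fault. The anonymous mmap allocation and the 4 KiB size are assumptions for a Linux-like system.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>   /* mmap, mprotect, munmap */

    int main(void) {
        size_t len = 4096;  /* assumed one 4 KiB page */

        /* Obtain one anonymous, writable page from the operating system. */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;

        strcpy(p, "hello");            /* writing is allowed here */

        /* Make the page read-only; a subsequent write through p would now
         * raise a protection fault (e.g. SIGSEGV) instead of corrupting it. */
        if (mprotect(p, len, PROT_READ) != 0) {
            perror("mprotect");
            munmap(p, len);
            return 1;
        }

        printf("%s\n", p);             /* reading is still allowed */
        munmap(p, len);
        return 0;
    }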

One additional advantage of virtual memory is that it allows a computer to
multiplex its CPU and memory between multiple programs without the need
to perform expensive copying of the programs' memory images. If the
combination of virtual memory system and operating system supports
swapping, then the computer may be able to run simultaneous programs
whose total size exceeds the available physical memory. Since most
programs have a small subset (active set) of pages that they reference over
significant periods of their execution, the performance penalty is less than
might be expected. If too many programs are run at once, or if a
single program continuously accesses widely scattered memory locations,
then page swapping becomes excessively frequent and overall system
performance becomes unacceptably slow. This is often called thrashing
(since the disk is being excessively overworked - thrashed) or a paging
storm, which reflects the fact that accessing the swap medium is three orders
of magnitude slower than main memory access.

Note that virtual memory is not a requirement for precompilation of
software, even if the software is to be executed on a multiprogramming
system. Precompiled software loaded by the operating system has the
opportunity to carry out address relocation at load time. This suffers by
comparison with virtual memory in that a program relocated at load
time cannot run at a distinct address once it has started execution.
It is possible to avoid the overhead of address relocation using a process
called rebasing, which uses metadata in the executable image header to
guarantee to the run-time loader that the image will only run within a certain
virtual address space. This technique is used for the system libraries on
Win32 platforms, for example.

Also, many systems run multiple instances of the same program using the
same physical copy of the program in physical memory but separate virtual
address spaces. This is possible because the separate virtual address spaces
can all have the same layout and thus avoid the need to relocate the code at
load time. Some operating systems take this even further, implementing
copy-on-write to allow a process to fork into two copies of itself without a
complete copy of its data being created immediately.
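
Copy-on-write can be observed from user space with fork(); in the sketch below the large buffer is shared between parent and child until one of them writes to it, at which point only the touched page is copied. The buffer size and contents are arbitrary, and the example assumes a Unix-like system.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>      /* fork, _exit */
    #include <sys/types.h>
    #include <sys/wait.h>    /* wait */

    int main(void) {
        size_t len = 16 * 1024 * 1024;   /* arbitrary 16 MiB buffer           */
        char *buf = malloc(len);
        if (buf == NULL)
            return 1;
        memset(buf, 'A', len);           /* parent touches every page once    */

        pid_t pid = fork();              /* child initially shares the pages; */
        if (pid == 0) {                  /* nothing is physically copied yet  */
            buf[0] = 'B';                /* first write copies just this page */
            printf("child sees: %c\n", buf[0]);
            _exit(0);
        }

        wait(NULL);
        printf("parent still sees: %c\n", buf[0]);   /* unchanged: 'A' */
        free(buf);
        return 0;
    }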

Paging and virtual memory

Virtual memory is usually (but not necessarily) implemented using paging.
In paging, the low-order bits of the binary representation of the virtual
address are preserved and used directly as the low-order bits of the actual
physical address; the high-order bits are treated as a key to one or more
address translation tables, which provide the high-order bits of the actual
physical address.

For this reason a range of consecutive addresses in the virtual address space
whose size is a power of two will be translated into a corresponding range of
consecutive physical addresses. The memory referenced by such a range is
called a page. The page size is typically in the range of 512 to 8192 bytes
(with 4K currently being very common), though page sizes of 4 megabytes
or larger may be used for special purposes. (Using the same or a related
mechanism, contiguous regions of virtual memory larger than a page are
often mappable to contiguous physical memory for purposes other than
virtualization, such as setting access and caching control bits.)
The operating system stores the address translation tables, the mappings
from virtual to physical page numbers, in a data structure known as a page
table.
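
The bit manipulation described above can be written out directly; this short C sketch assumes 4 KiB pages and uses made-up example values to show how the low-order bits pass through unchanged while the high-order bits index the translation tables.

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12u                     /* assumed 4 KiB pages: 12 offset bits */
    #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1u)

    int main(void) {
        uint32_t vaddr = 0x12345678u;          /* arbitrary example virtual address    */

        uint32_t offset = vaddr & PAGE_MASK;   /* low-order bits, preserved as-is      */
        uint32_t vpage  = vaddr >> PAGE_SHIFT; /* high-order bits, key into the tables */

        /* Suppose the translation tables map virtual page 0x12345 to physical
         * frame 0x00ABC (a made-up value); the physical address keeps the offset. */
        uint32_t frame = 0x00ABCu;
        uint32_t paddr = (frame << PAGE_SHIFT) | offset;

        printf("vpage=0x%05X offset=0x%03X paddr=0x%08X\n",
               (unsigned)vpage, (unsigned)offset, (unsigned)paddr);
        return 0;
    }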

If a page is marked as unavailable (perhaps because it is not present in
physical memory, but instead is in the swap area), then when the CPU tries to
reference a memory location in that page, the MMU responds by raising an
exception (commonly called a page fault) with the CPU, which then jumps
to a routine in the operating system. If the page is in the swap area, this
routine invokes an operation called a page swap to bring in the required
page.

The page swap operation involves a series of steps. First it selects a page in
memory, for example, a page that has not been recently accessed and
(preferably) has not been modified since it was last read from disk or the
swap area. (See page replacement algorithms for details.) If the page has
been modified, the process writes the modified page to the swap area. The
next step in the process is to read in the information in the needed page (the
page corresponding to the virtual address the original program was trying to
reference when the exception occurred) from the swap file. When the page
has been read in, the tables for translating virtual addresses to physical
addresses are updated to reflect the revised contents of the physical memory.
Once the page swap completes, the routine exits, and the program is restarted
and continues as if nothing had happened, returning to the point in the
program that caused the exception.

It is also possible that a virtual page was marked as unavailable because the
page was never previously allocated. In such cases, a page of physical
memory is allocated and filled with zeros, the page table is modified to
describe it, and the program is restarted as above.

Windows example

Virtual memory has been a feature of Microsoft Windows since Windows
3.1 in 1992. 386SPART.PAR (or WIN386.SWP on Windows 3.11 and
Windows for Workgroups) is a hidden file created by Windows 3.x for use
as a virtual memory swap file. It is generally found in the root directory, but
it may appear elsewhere (typically in the WINDOWS directory). Its size
depends on how much virtual memory the system has set up under Control
Panel - Enhanced under "Virtual Memory." If a user moves or deletes this
file, Windows will show a blue screen (BSoD) the next time it is started, with
the message "The permanent swap file is corrupt", and will ask the user
whether they want to delete the file (it asks whether or not the file actually
exists).

Windows 95 uses a similar file, except it is named WIN386.SWP, and the
controls for it are located under Control Panel - System - Performance tab -
Virtual Memory. Windows automatically sets the page file size to be 1.5
times the physical memory. This page file is located at C:\pagefile.sys on all
NT-based versions of Windows (including Windows 2000 and Windows XP).
If you run memory-intensive applications on a system with low physical
memory, it is preferable to manually set the size to a value higher than the
default. Additionally, fixing the size of the swap file will prevent it from
being dynamically resized by Windows. This resizing causes the swap file to
become fragmented, resulting in reduced performance. The page file cannot
be defragmented with Windows' built-in defragmenting tools, such as
ntfsdefrag.

Virtual Memory in Linux

In the Linux operating system, it is possible to use a whole partition of the
HDD for virtual memory. Though it is still possible to use a file for swapping,
it is recommended to use a separate partition, because this excludes chances
of fragmentation, which reduces the performance of swapping. A swap area is
created using the command 'mkswap filename/device', and may be turned
on and off using the commands 'swapon' and 'swapoff', respectively,
accompanied by the name of the swap file or the swap partition.
Bibliography

• www.wikipedia.com

• www.ask.com

• www.livemint.com

• www.script.com
