
1. Advantages and Disadvantages of Virtual Memory Management Schemes

a. Paged Memory Allocation


Advantages: Allows jobs to be allocated in non-contiguous memory locations, so memory is used more efficiently and more jobs can fit. Disadvantages: Address resolution causes increased overhead. Internal fragmentation still exists, though only in the last page. Requires the entire job to be stored in memory during execution. Size of page is crucial (not too small, not too large).

b. Demand Paging

Advantages: Job no longer constrained by the size of physical memory (concept of virtual memory). Utilizes memory more efficiently than the previous schemes. Disadvantages: Increased overhead caused by the tables and the page interrupts.
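The address resolution described above can be sketched in a few lines. This is a minimal illustration only, with a hypothetical page size and a hypothetical page map table; it also shows where the internal fragmentation in the last page comes from:

```python
PAGE_SIZE = 100  # hypothetical page size in bytes

# Hypothetical Page Map Table: page number -> page frame number
page_map_table = {0: 5, 1: 9, 2: 3}

def translate(logical_address):
    """Resolve a logical address to a physical address via the page map table."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_map_table[page_number]  # this extra lookup is the resolution overhead
    return frame_number * PAGE_SIZE + offset

# A 250-byte job needs 3 whole pages; the last page wastes 300 - 250 = 50 bytes.
job_size = 250
pages_needed = -(-job_size // PAGE_SIZE)                       # ceiling division -> 3
internal_fragmentation = pages_needed * PAGE_SIZE - job_size   # -> 50

print(translate(250))  # page 2, offset 50 -> frame 3 -> physical address 350
```

The lookup itself is cheap, but every memory reference pays for it, which is the overhead the answer above refers to.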

c. Segmented Memory Allocation


Advantages: Internal fragmentation is removed. Disadvantages: Difficulty managing variable-length segments in secondary storage. External fragmentation.
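Segmented address resolution can be sketched the same way. The segment table below is hypothetical; the point is that each segment is exactly job-sized (no internal fragmentation) but needs a bounds check, and the variable sizes leave external holes in physical memory:

```python
# Hypothetical Segment Map Table: segment number -> (base address, size)
segment_map_table = {0: (1000, 300), 1: (2500, 120)}

def translate(segment_number, offset):
    """Resolve (segment, offset) to a physical address, checking the segment bound."""
    base, size = segment_map_table[segment_number]
    if offset >= size:
        raise MemoryError("offset beyond segment bound")
    return base + offset

print(translate(0, 299))  # -> 1299, the last valid byte of segment 0
# translate(0, 300) would raise MemoryError: segments are exactly as large as
# needed, so no space is wasted inside them -- but the gaps between segments
# of different sizes become external fragmentation.
```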

d. Segmented-Demand Paged Memory Allocation


Advantages: Large virtual memory. Segments loaded on demand. Disadvantages: Table handling overhead. Memory needed for page and segment tables.
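The table-handling overhead of the combined scheme comes from resolving every address in two steps: a segment-table lookup, then a lookup in that segment's page table. A sketch with hypothetical tables:

```python
PAGE_SIZE = 100  # hypothetical

# Hypothetical tables: one segment table, plus one page table per segment.
segment_table = {0: "pt0", 1: "pt1"}     # segment number -> page-table id
page_tables = {
    "pt0": {0: 7, 1: 2},                 # page -> frame, for segment 0
    "pt1": {0: 4},                       # page -> frame, for segment 1
}

def translate(segment, offset):
    """Two lookups per access: segment table first, then that segment's page table."""
    page_table = page_tables[segment_table[segment]]
    frame = page_table[offset // PAGE_SIZE]
    return frame * PAGE_SIZE + offset % PAGE_SIZE

print(translate(0, 150))  # segment 0, page 1, offset 50 -> frame 2 -> 250
```

Both tables must be kept in memory, which is the space cost the answer above lists as a disadvantage.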

If given an option, I would implement Segmented-Demand Paged Memory Allocation because it combines all the features of the previous schemes. It uses both the logical benefits of segmentation and the physical benefits of paging.

2. Segmented Memory Allocation solved the problem of internal fragmentation because, through segmentation, each job is divided into segments of different sizes. With this, no space is wasted within a segment, thus removing internal fragmentation.

3. Associative Memory and Cache Memory

The cache is a small amount of high-speed memory, usually with a memory cycle time comparable to the time required by the CPU to fetch one instruction. The cache is usually filled from main memory when instructions or data are fetched into the CPU. Often the main memory will supply a wider data word to the cache than the CPU requires, to fill the cache more rapidly. The amount of information which is replaced at one time in the cache is called the line size for the cache. This is normally the width of the data bus between the cache memory and the main memory. A wide line size for the cache means that several instruction or data words are loaded into the cache at one time, providing a kind of prefetching for instructions or data.

When a cache is used, there must be some way in which the memory controller determines whether the value currently being addressed in memory is available from the cache. There are several ways this can be accomplished. One possibility is to store both the address and the value from main memory in the cache, with the address stored in a type of memory called associative memory or, more descriptively, content addressable memory. An associative memory, or content addressable memory, has the property that when a value is presented to the memory, the address of the value is returned if the value is stored in the memory; otherwise an indication that the value is not in the associative memory is returned. All of the comparisons are done simultaneously, so the search is performed very quickly. This type of memory is very expensive, because each memory location must have both a comparator and a storage element.
A cache memory can be implemented with a block of associative memory, together with a block of "ordinary" memory. The associative memory would hold the address of the data stored in the cache, and the ordinary memory would contain the data at that address.
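The address-plus-value organization described above can be sketched with a dictionary standing in for the associative part (the stored addresses) paired with the data words (the "ordinary" part). All addresses and values below are hypothetical; the line fill is a crude stand-in for the prefetching effect of a wide line size:

```python
main_memory = {0x100: 42, 0x104: 7, 0x108: 99}  # hypothetical word-addressed contents

cache = {}      # pairs each cached address (associative part) with its data word
LINE_SIZE = 2   # words loaded per miss, standing in for a wide cache line

def read(address, step=4):
    """Return the word at `address`, filling a whole line into the cache on a miss."""
    if address in cache:            # the associative search: is this address present?
        return cache[address]
    # Miss: load LINE_SIZE consecutive words from main memory at once.
    for i in range(LINE_SIZE):
        a = address + i * step
        if a in main_memory:
            cache[a] = main_memory[a]
    return cache[address]

print(read(0x100))      # miss: fills 0x100 and 0x104 from main memory -> 42
print(0x104 in cache)   # True: the wide line fill prefetched the next word
```

A real associative memory performs all the address comparisons simultaneously in hardware; the dictionary here only models the behavior, not the parallel search.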
