
Chapter 8
Memory Management Strategy

Contents

Background
Logical vs. Physical Address Space
Swapping
Contiguous Memory Allocation
Paging
Structure of the Page Table
Segmentation
Example: Intel Pentium

8. Memory Management Strategy

Objectives
To provide a detailed description of various ways of organizing memory hardware
To discuss various memory-management techniques, including paging and segmentation
To provide a detailed description of the Intel Pentium, which supports both pure segmentation and segmentation with paging

8.1 Background
A program must be brought (from disk) into memory and placed within a process for it to be executed.
[Diagram: a program on disk is loaded into memory and becomes a process (PCB)]

Main memory and registers are the only storage that the CPU can access directly.

registers are accessible within one CPU clock cycle
a main-memory access can take many CPU cycles → processor stall
a cache is added between the CPU and main memory
memory protection is provided by hardware

Protection of memory is required to ensure correct operation


Base and Limit Registers


Make sure that each process has a separate memory space.
A pair of base and limit registers defines the logical address space.

Hardware address protection with base/limit reg.

legal address range: base ≤ address < base + limit

The base and limit registers can be loaded only by the operating system, which uses a special privileged instruction. The operating system itself can access both OS memory and user memory.
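The base/limit check above can be sketched as a small predicate; the register values used below are illustrative, not from any particular machine.

```python
def is_legal(address: int, base: int, limit: int) -> bool:
    """Hardware check applied to every user-mode memory reference:
    the address must fall in [base, base + limit)."""
    return base <= address < base + limit

# Example values: a process with base=300040 and limit=120900 may touch
# addresses 300040 .. 420939; any other address traps to the OS.
assert is_legal(300040, 300040, 120900)          # first legal byte
assert is_legal(420939, 300040, 120900)          # last legal byte
assert not is_legal(420940, 300040, 120900)      # one past the end -> trap
assert not is_legal(256000, 300040, 120900)      # below base -> trap
```

In real hardware this comparison happens in parallel with the access; a failed check raises a trap rather than returning False.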


Address Binding
Address representation
source program: symbolic address (e.g., count)
→ compiler: relocatable address (e.g., 14 bytes from the start of the module)
→ linkage editor or loader: absolute address (e.g., 74014)

Multistep processing of a user program

Address Binding

bind addresses of instructions and data to memory addresses (logical address → physical address)


Address Binding Schemes

Compile time:
  If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
Load time:
  Relocatable code must be generated if the memory location is not known at compile time; binding is delayed until load time.
Execution time:
  Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. Needs hardware support for address maps (MMU). Most general-purpose operating systems use this method.

Logical vs. Physical Address Space

logical address (virtual address): the address seen by a program
physical address: the address seen by the memory unit
Memory management unit (MMU): performs the run-time mapping from logical to physical addresses; a program deals with logical addresses and never sees the real physical addresses.
compile-time and load-time binding: logical address = physical address
execution-time binding: logical address ≠ physical address
[Diagram: CPU → logical address → MMU → physical address → memory]



Dynamic relocation using a relocation register


relocation register = base register
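Dynamic relocation can be sketched as follows; the register values (relocation = 14000, logical address 346) are illustrative examples only.

```python
def mmu_relocate(logical: int, relocation: int, limit: int) -> int:
    """Sketch of dynamic relocation: the MMU first checks the logical
    address against the limit register, then adds the relocation (base)
    register to it at access time."""
    if not (0 <= logical < limit):
        raise MemoryError("addressing error: trap to OS")
    return logical + relocation

# With relocation register 14000, logical address 346 maps to 14346.
assert mmu_relocate(346, relocation=14000, limit=16384) == 14346
```

Because the mapping is applied on every access, the process can be moved (the relocation register reloaded) without recompiling or relinking it.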

Dynamic Loading
Dynamic loading:

A routine is not loaded until it is called.
Better memory-space utilization: an unused routine is never loaded.
Useful when large amounts of code are needed to handle infrequently occurring cases (e.g., error routines).

Advantages

Dynamic loading does not require special support from the operating system

It is the responsibility of users to design their programs to take advantage of it; the OS may provide library routines to implement dynamic loading.

Intel 80x86 family: 4 relocation registers (CS, DS, ES, SS)


Dynamic Linking and Shared Libraries

Static linking and dynamic linking
  static linking: libraries are linked by the loader into the executable image
  dynamic linking: linking is postponed until execution time
Stub
  a small piece of code that indicates how to locate the appropriate memory-resident library routine, or how to load the library if the routine is not already present
  the stub replaces itself with the address of the routine and executes the routine; the next time, the library routine is executed directly
Shared library: dynamically linked library
  all processes that use a shared library execute only one copy of the library code → easy library update
  the OS needs to check whether the routine is already in the process's memory address space
Dynamic linking generally requires help from the operating system.

8.2 Swapping

Swapping
  A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
  For example, in a multiprogramming environment with RR scheduling: when a quantum expires, start swapping out the process that just finished and swap another process into the freed memory.
Backing store
  a fast disk, large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images
Roll out, roll in
  a swapping variant used for priority-based scheduling algorithms: a lower-priority process is swapped out so a higher-priority process can be loaded and executed


Schematic View of Swapping

Swapping (cont.)

The system maintains a ready queue of ready-to-run processes which have memory images on disk or in memory.
  If the next process in the queue is not in memory, the dispatcher swaps out a process in memory and swaps in the desired process.
  context-switch time > swap-in time + swap-out time
The major part of swap time is transfer time:
  total transfer time is directly proportional to the amount of memory swapped
  need to swap only what is actually used, reducing swap time
A process to be swapped must be completely idle. For a process with pending I/O:
  1. never swap a process with pending I/O, or
  2. execute I/O operations only into OS buffers; transfers between OS buffers and process memory occur only when the process is swapped in (refer to next slide)


Modified Swapping
Modified versions of swapping are found on many systems (UNIX, Linux, Windows)

8.3 Contiguous Memory Allocation


Main memory is usually divided into two partitions:

Swapping is normally disabled.
Swapping starts if many processes are running and are using a threshold amount of memory.
Swapping is halted again when the load is reduced.

Resident operating system: usually held in low memory, together with the interrupt vector.
User processes: then held in high memory.
In contiguous memory allocation, each process is contained in a single contiguous section of memory.
[Diagram: memory from address 0 (OS, interrupt vector) up to N-1 (user processes)]

Relocation register scheme
  protects user processes from each other, and from changes to operating-system code and data
  uses a relocation register and a limit register
    Relocation register: contains the value of the smallest physical address
    Limit register: contains the range of logical addresses
  the MMU maps each logical address dynamically into a physical address


Hardware support for relocation and limit registers (legal range: 0 ≤ logical address < limit)

Multiple Partition Allocation


Hole: a block of available memory;
holes of various sizes are scattered throughout memory.

When a process arrives, it is allocated memory from a hole large enough to accommodate it. The operating system maintains information about: (a) allocated partitions, (b) free partitions (holes).

[Diagram: allocation sequence — OS | process 5 | process 8 | process 2; process 8 terminates, leaving a hole; process 9 is allocated into it; process 10 is allocated from the remaining hole]


Dynamic Storage-Allocation Problem

How to satisfy a request of size n from a list of free holes:
First-fit:
  Allocate the first hole that is big enough.
Best-fit:
  Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.
Worst-fit:
  Allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
[Diagram: a 70K request against holes of 100K, 80K, 50K, 150K — first-fit picks 100K, best-fit picks 80K, worst-fit picks 150K]
Performance
  First-fit and best-fit are better than worst-fit in terms of time and storage utilization.

Fragmentation

External fragmentation
  Total memory space exists to satisfy a request, but it is not contiguous.
  50-percent rule: given N allocated blocks, another 0.5N blocks will be lost to external fragmentation with first-fit → 1/3 of memory may be unusable.
Internal fragmentation
  Break memory into fixed-size blocks and allocate memory in units of blocks; this reduces the overhead of keeping track of holes.
  Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
[Diagram: requested size vs. allocated unit — external fragmentation between partitions, internal fragmentation within a partition]
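The three placement strategies can be sketched as list searches; the hole sizes below follow the 70K-request example from the slide (100K, 80K, 50K, 150K).

```python
def first_fit(holes, req):
    """Return the index of the first hole that is big enough, else None."""
    for i, h in enumerate(holes):
        if h >= req:
            return i
    return None

def best_fit(holes, req):
    """Return the index of the smallest hole that is big enough, else None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= req]
    return min(fits)[1] if fits else None

def worst_fit(holes, req):
    """Return the index of the largest hole, if it is big enough, else None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= req]
    return max(fits)[1] if fits else None

holes = [100, 80, 50, 150]          # hole sizes in KB
assert first_fit(holes, 70) == 0    # first big-enough hole: 100K
assert best_fit(holes, 70) == 1     # smallest big-enough hole: 80K
assert worst_fit(holes, 70) == 3    # largest hole: 150K
```

Note that best-fit and worst-fit scan the whole list, matching the "must search entire list" point above; first-fit stops at the first match.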


Compaction

Solutions to the external-fragmentation problem:
1. compaction
2. permit a noncontiguous physical address space: paging, segmentation

Compaction
  Shuffle memory contents to place all free memory together in one large block.
  Compaction is possible only if relocation is dynamic and is done at execution time.
  Pending I/O problem: latch the job in memory while it is involved in I/O, or do I/O only into OS buffers.

8.4 Paging

Paging permits the physical memory allocated to a process to be noncontiguous.
Page and page frame
  Divide physical memory into fixed-sized blocks called page frames (size is 2^n, usually between 512B and 8KB).
  Divide logical memory into blocks of the same size, called pages.
[Diagram: pages 0-7 of the logical address space mapped to page frames of the physical address space]


Address Translation Scheme

A logical address is divided into a page number and a page offset, and the page number is translated into a page frame number.

  logical address:  | page no. (p) | offset (d) |
  physical address: | page frame no. (f) | offset (d) |
  (page size = 2^n; the offset d is n bits)

Page Table (paging hardware)
  contains the base address of each page in physical memory (the page frame number)
  the page number is used as an index into the page table
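The translation step can be sketched directly; the page size and the page-table contents below are hypothetical.

```python
PAGE_SIZE = 4096  # 2**12, an assumed page size for this sketch

def translate(logical: int, page_table: list) -> int:
    """Split the logical address into (p, d), look up frame f = page_table[p],
    and rebuild the physical address as f * PAGE_SIZE + d."""
    p, d = divmod(logical, PAGE_SIZE)
    f = page_table[p]
    return f * PAGE_SIZE + d

page_table = [5, 6, 1, 2]   # page p -> frame f (hypothetical contents)
assert translate(0, page_table) == 5 * 4096          # page 0, offset 0
assert translate(4100, page_table) == 6 * 4096 + 4   # page 1, offset 4
```

The offset d passes through unchanged; only the page number is replaced, which is why page size must be a power of two for the split to be a cheap bit operation.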


Paging Example

Page Allocation
Page allocation
  Keep track of all free frames → free-frame list.
  To run a program of size n pages, find n free frames and load the program into the allocated frames.
  Set up a page table to translate logical to physical addresses. (See next slides.)
  No external fragmentation; the unit of memory allocation is the page frame.
Internal fragmentation
  averages half a page per process
  smaller page size → less internal fragmentation, but a larger page table
  larger page size → more internal fragmentation, but more efficient disk I/O and a smaller page table
Some CPUs and kernels support multiple page sizes.


Free frames

Frame table and Page table


Frame table

one entry for each physical page frame
indicates whether the frame is free or allocated and, if allocated, to which page of which process or processes

The operating system maintains a copy of the page table for each process; this increases the context-switch time.
[Diagram: CPU (PC, registers) and PCB (state, PC, registers, page table)]
[Diagram: free-frame list before and after allocation]


Structure of Page Table


Hardware implementation of the page table
1. a set of dedicated registers: satisfactory only if the page table is reasonably small (256 entries or fewer)
2. the page table is kept in main memory:
   the page-table base register (PTBR) points to the page table
   the page-table length register (PTLR) indicates the size of the page table
[Diagram: PTBR and PTLR registers pointing to the page table in memory]

Page Table in memory and TLB


Every data/instruction access requires two memory accesses.

one for the page table, one for the data/instruction

this delay would be intolerable

Translation Look-aside buffer(TLB)


A special fast-lookup hardware cache for page-table entries, also called associative memory.
The two-memory-access problem can be solved by the use of TLBs.
Some TLBs store an address-space identifier (ASID) in each TLB entry.
If the TLB does not support separate ASIDs, then every time a new page table is selected (context switch), the TLB must be flushed.
The TLB is loaded/modified by privileged instructions.




Translation Lookaside Buffer(TLB)


TLB

Paging hardware with TLB

Associative memory: parallel search (a fully-associative mapped cache); some TLBs use a set-associative mapped cache.

  TLB entry: | page number | frame number |

Address translation for (p, d):
  If p is in a page-number field of the TLB, take the frame number from the TLB → TLB hit.
  Otherwise, get the frame number from the page table in memory → TLB miss (and copy the frame number from the page table into the TLB, so the next access is a hit).
If the TLB is full of entries, the OS selects one entry for replacement; replacement policies: LRU (least recently used), random.
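The hit/miss/replacement behavior can be modeled in a few lines; the capacity and LRU policy below are assumptions for the sketch (real TLBs also use random replacement, as noted above).

```python
from collections import OrderedDict

class TLB:
    """Tiny TLB model: hits come from the cache; a miss walks the page
    table, caches the result, and evicts the LRU entry when full."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()   # page number -> frame number

    def lookup(self, p, page_table):
        if p in self.entries:                      # TLB hit
            self.entries.move_to_end(p)            # mark as recently used
            return self.entries[p], True
        f = page_table[p]                          # TLB miss: page-table walk
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)       # evict LRU entry
        self.entries[p] = f
        return f, False

tlb = TLB(capacity=2)
page_table = {0: 7, 1: 3, 2: 9}
assert tlb.lookup(0, page_table) == (7, False)  # first access: miss
assert tlb.lookup(0, page_table) == (7, True)   # second access: hit
```

With capacity 2, touching pages 0, 1, then 2 evicts page 0, so a fourth access to page 0 misses again — the same locality effect that makes TLB hit ratio depend on TLB size.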


Effective Memory Access Time

(Example)
  TLB lookup time = 20 ns; memory cycle time = 100 ns
  hit ratio = 80% (percentage of times that a page number is found in the TLB)
  Effective Access Time (EAT):
    80% of accesses: 20 + 100 = 120 ns
    20% of accesses: 20 + 100 + 100 = 220 ns
    EAT = 120 × 0.8 + 220 × 0.2 = 140 ns
  The hit ratio is related to the size of the TLB:
    16 entries: hit ratio = 80%, EAT = 140 ns
    512 entries: hit ratio = 98%, EAT = 120 × 0.98 + 220 × 0.02 = 122 ns

Memory Protection

Protection bits
  Memory protection is implemented by associating protection bits with each frame, providing a finer level of memory protection: a page can be defined as read-write, read-only, or execute-only.
Valid (or valid-invalid) bit
  A valid bit (V) is attached to each entry in the page table.
  V=1 (valid): legal page (the associated page is in the process's logical address space)
  V=0 (invalid): illegal page
  [Page-table sketch: entries 0-4 with protection bits eo, eo, rw, ro and valid bits 1, 1, 1, 1, 0]
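The EAT arithmetic above generalizes to a one-line formula: a hit pays the TLB lookup plus one memory access, a miss pays the TLB lookup plus two.

```python
def eat(tlb_ns: float, mem_ns: float, hit_ratio: float) -> float:
    """Effective access time: hits cost tlb + mem; misses cost
    tlb + 2*mem (one access for the page table, one for the data)."""
    hit_time = tlb_ns + mem_ns
    miss_time = tlb_ns + 2 * mem_ns
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

# Reproduces the slide's numbers: 140 ns at 80% hits, 122 ns at 98% hits.
assert abs(eat(20, 100, 0.80) - 140) < 1e-6
assert abs(eat(20, 100, 0.98) - 122) < 1e-6
```

Going from an 80% to a 98% hit ratio cuts the average slowdown over a bare 100 ns memory access from 40% to 22%, which is why larger TLBs pay off.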


Valid or Invalid Bit in a Page Table

Shared Pages

An advantage of paging is the possibility of sharing common code.
Shared code
  If code is reentrant (non-self-modifying, read-only), it can be shared: two or more processes can execute the same code at the same time.
  Only one copy of the code is kept in physical memory (e.g., text editors, compilers, window systems).
  Each process has its own copy of registers and data storage.
  Shared code must appear in the same location in the logical address space of all processes.
Private code and data
  Each process keeps a separate copy of its private code and data.
  The pages for the private code and data can appear anywhere in the logical address space.


Shared Pages Example

8.5 Structure of the Page Table


Hierarchical Paging Hashed Page Tables Inverted Page Tables


Hierarchical Page table (Multilevel Paging)


Problem of large page table

Two-Level Page-Table Scheme

Most modern computer systems support a large logical address space (2^32 to 2^64) → the page table becomes excessively large, and a contiguous page table would be larger than the page size.
Solution
  divide the page table into smaller pieces → use multilevel paging
  [Diagram: logical address (p1, p2, d); PTBR → outer page table (page directory), indexed by p1 → inner page table, indexed by p2 → frame f; physical address (f, d)]


Two-Level Paging Example

A logical address (on a 32-bit machine with a 4K page size) — Intel 80386:

  | p1 (10-bit) | p2 (10-bit) | d (12-bit) |

Since the page table is paged, the page number is further divided into:
  a 10-bit index into the outer page table (p1)
  a 10-bit displacement within a page of the page table (p2)
  (page size = 2^12 = 4KB; page table size = 2^10 entries × 4B/entry = 4KB)

Multilevel Paging

VAX: | section s (2-bit) | page p (21-bit) | offset d (9-bit) |
  page size = 512B; the page table of each section is 2^21 × 4B = 8MB, larger than the page size
  the VAX pages the user-process page tables to reduce main-memory use
Three-level paging example (64-bit):
  | 2nd outer page (32-bit) | outer page (10-bit) | inner page (10-bit) | offset (12-bit) |
  converting a 64-bit logical address to a 32-bit physical address may take four memory accesses
  For 64-bit architectures, hierarchical page tables are generally considered inappropriate.
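The 10/10/12 split of the 32-bit address is pure bit manipulation; the sample address below is hypothetical.

```python
def split_32bit(addr: int):
    """80386-style split of a 32-bit logical address into a 10-bit
    outer index p1, a 10-bit inner index p2, and a 12-bit offset d."""
    d  = addr & 0xFFF            # low 12 bits: offset within the page
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: index into inner page table
    p1 = (addr >> 22) & 0x3FF    # top 10 bits: index into the page directory
    return p1, p2, d

# 0x00403007 -> p1 = 1, p2 = 3, d = 7
assert split_32bit(0x00403007) == (1, 3, 7)
```

A page-table walk then uses p1 to pick an inner table from the directory and p2 to pick the frame, with d carried through unchanged.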

Hashed Page Tables


Common in address spaces > 32 bits.
Hashed page tables

The virtual page number is hashed into a page table; each entry of this table contains a chain of elements hashing to the same location.
Each element contains: <virtual page number, mapped page frame number, pointer to the next element in the chain>.
Virtual page numbers are compared along this chain, searching for a match; if a match is found, the corresponding physical frame is extracted.
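The hash-and-chain lookup can be sketched with lists as chains; the table size and mappings are assumptions for the example.

```python
TABLE_SIZE = 16  # number of hash buckets (an assumption for this sketch)

# Each bucket holds a chain of (virtual page number, frame number) pairs.
hashed_table = [[] for _ in range(TABLE_SIZE)]

def ht_insert(vpn: int, frame: int):
    hashed_table[vpn % TABLE_SIZE].append((vpn, frame))

def ht_lookup(vpn: int):
    """Hash the VPN, then walk the chain comparing virtual page
    numbers until a match is found."""
    for v, f in hashed_table[vpn % TABLE_SIZE]:
        if v == vpn:
            return f
    return None  # no mapping -> page fault

ht_insert(5, 42)
ht_insert(21, 7)   # 21 % 16 == 5: collides with vpn 5, lands on the same chain
assert ht_lookup(5) == 42
assert ht_lookup(21) == 7
assert ht_lookup(37) is None
```

The collision between VPNs 5 and 21 shows why each element must store the full virtual page number: the bucket index alone cannot distinguish them.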


Inverted Page Table


Problem of page table

Inverted Page Table Architecture


IBM RT

a page table for each process: one entry for each page (one slot for each virtual address); each page table may be large (millions of entries)
solution: inverted page table
  only one page table in the system, with one entry for each frame of physical memory: <process id, page number of the virtual address>
  decreases the memory needed to store page tables
  increases the time needed to search the table when a page reference occurs (sequential search)

Inverted Page Table


Use a hash table to limit the search to one, or at most a few, page-table entries; this requires at least two memory reads (the hash table, then the page table).
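The key inversion — the table index *is* the frame number — can be sketched as a sequential search; the table contents here are hypothetical.

```python
# One entry per physical frame: (process id, virtual page number).
# Values are hypothetical.
inverted_table = [("P1", 0), ("P2", 3), ("P1", 2), ("P2", 0)]

def ipt_lookup(pid: str, vpn: int):
    """Sequential search of the inverted table: the index of the
    matching entry IS the physical frame number."""
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, vpn):
            return frame
    return None  # not resident -> page fault

assert ipt_lookup("P1", 2) == 2   # found at slot 2 -> frame 2
assert ipt_lookup("P2", 3) == 1   # found at slot 1 -> frame 1
assert ipt_lookup("P1", 9) is None
```

The process id must be part of the key because two processes can use the same virtual page number; the hash-table refinement above replaces this linear scan.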


8.6 Segmentation
An important aspect of a memory-management scheme:

Logical View of Segmentation

the separation of the user view of memory from physical memory
  physical memory: a linear array of bytes (physical addresses)
  paging: also presents a linear array of bytes (logical addresses)
  what is a preferable user view of memory?
A segment is a logical unit, such as: main program, procedure, function, local variables, global variables, common block, stack, symbol table, arrays

A program is a collection of segments (e.g., 1: subroutine, 2: stack, 3: main, 4: symbol table).
Segmentation is a memory-management scheme that supports this user view of memory.
[Diagram: numbered segments in user space mapped into physical memory space]



Segmentation Architecture

A logical address consists of a two-tuple:
  <segment-number, offset>

Segment table
  maps two-dimensional logical addresses into one-dimensional physical addresses
  each table entry has:
    base — contains the starting physical address of the segment in memory
    limit — specifies the length of the segment
Segment-table base register (STBR)
  points to the segment table's location in memory
Segment-table length register (STLR)
  indicates the number of segments used by a program; segment number s is legal if s < STLR

Segmentation Hardware
[Diagram: segmentation hardware with STBR and STLR]
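The two checks (s against STLR, offset against the segment limit) and the base addition can be sketched as follows; the segment-table contents are illustrative values only.

```python
# Segment table: entry s -> (base, limit). Values are hypothetical.
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]
STLR = len(segment_table)  # number of segments in use

def seg_translate(s: int, offset: int) -> int:
    """Validate the segment number against STLR, the offset against
    the segment's limit, then add the segment base."""
    if s >= STLR:
        raise MemoryError("trap: invalid segment number")
    base, limit = segment_table[s]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

assert seg_translate(2, 53) == 4353   # segment 2 starts at 4300
assert seg_translate(0, 0) == 1400
```

Unlike paging, the limit differs per entry, which is what lets segments be exactly as long as the logical unit they hold.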



Protection and Sharing


Relocation
  dynamic, by segment table
Protection
  Each entry in the segment table contains a <protection bit, valid bit>:
    valid bit = 0 → illegal segment
    protection bits: read/write/execute privileges
  An instruction segment can be defined as read-only or execute-only.
  By placing an array in its own segment, the segmentation hardware will automatically check that array indexes are legal and do not stray outside the array boundaries.
Allocation
  Since segments vary in length, memory allocation is a dynamic storage-allocation problem.

Example of Segmentation


8.7 Example: Intel Pentium


Supports both pure segmentation and segmentation with paging (in protected addressing mode). Logical-to-physical address translation:

Segmentation and Paging


segmentation:
  two segment tables: LDT (local descriptor table) and GDT (global descriptor table)
  selector: | s (13-bit) | g (1-bit) | p (2-bit) |  (s: segment number, g: GDT/LDT, p: privilege level), followed by a 32-bit offset
  translates a 46-bit logical address into a 32-bit linear address
paging:
  a two-level paging scheme: | dir (10-bit) | page (10-bit) | offset (12-bit) |
  translates the 32-bit linear address into a 32-bit physical address

Three-level Paging in Linux

Linux has adopted three-level paging so that it can run on a variety of hardware platforms.
[Diagram: on the Pentium, the middle level has size 0, collapsing to two-level paging; CR3 holds the page-directory base, GDTR/LDTR point to the segment tables]
