
Introduction (Memory Management)

Utilization is improved by having many processes share the CPU. This requires memory management.

There is a variety of memory management approaches. The approach used by an operating system is limited by the available hardware.

Address Binding (Memory Management)

For a program to execute it must be copied into main memory at a particular location. Many instructions use "fixed" addresses; these must be bound to "fixed" locations in memory.

This binding of instructions and data to memory addresses may occur at:

- compile time,
- load time, or
- execution time.

Dynamic Loading (Memory Management)

Dynamic loading involves loading routines into memory only when required. This is done during execution.

Dynamic loading reduces the memory requirements of large programs. This is especially the case if there is a large set of infrequently used routines.

Help from the operating system is not essential for dynamic loading to take place, although the operating system may provide library routines to aid dynamic loading.

Dynamic Linking (Memory Management)

Dynamic linking is often used for libraries. Only a "stub" of the library is kept in the program's image.

When a program calls one of these routines, the routine is loaded and linked into memory.

All programs share the one copy of the same library routine.

Dynamic linking requires the operating system's intervention, as sharing between processes is required.
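As an illustration of both ideas, POSIX systems expose dynamic loading and linking to user programs through the dl routines. This is a minimal sketch, not part of the original notes; the library name libdemo.so and the routine demo_routine are hypothetical.

    #include <stdio.h>
    #include <dlfcn.h>   /* POSIX dynamic loading interface */

    int main(void)
    {
        /* Load the shared library only when it is actually needed.
           "libdemo.so" and "demo_routine" are invented names. */
        void *handle = dlopen("libdemo.so", RTLD_LAZY);
        if (handle == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Resolve (link) the routine's address inside the loaded library. */
        int (*demo_routine)(int) = (int (*)(int)) dlsym(handle, "demo_routine");
        if (demo_routine == NULL) {
            fprintf(stderr, "dlsym failed: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        printf("demo_routine(5) = %d\n", demo_routine(5));
        dlclose(handle);   /* unload when no longer required */
        return 0;
    }

On Linux this would typically be built with something like cc demo.c -ldl.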
Overlays (Memory Management)

If the program is too large to fit into main memory then the program may be divided into a common section and sections that are used at different times. The common section is loaded into its own portion of memory. The other sections share the same portion and are only loaded when required. This approach is known as overlaying.

Overlaying has mainly been used on micro-computers which only have a small amount of memory.

Overlaying does not require help from the operating system; however, it requires very careful programming in terms of address binding and the way procedures are called.

[Figure: memory layout under overlaying. The kernel, the program's data and the common routines each occupy their own region, while Section 1 and Section 2 are overlaid into the same shared region.]

Logical/Physical Address Space (Memory Management)

Addresses generated by the CPU are referred to as logical addresses. These are the addresses "seen" by the user's programs.

Addresses seen by the main memory are referred to as physical addresses.

In some systems logical and physical addresses are identical. In these cases address binding must occur at compile-time or load-time.

However, it is useful to separate logical and physical addresses; this permits execution-time address-binding schemes. Logical addresses may also be referred to as virtual addresses.

The set of all logical addresses generated by a program is referred to as the logical address space. The set of physical addresses these map to is referred to as the physical address space.

The mapping between the logical and physical addresses is performed by the MMU (Memory-Management Unit). This is done in hardware.
Logical/Physical Address Space (Memory Management)

Some advantages of separating logical and physical addresses include:

- execution-time address-binding,
- simplified swapping of processes in and out of memory,
- protection/security of data between processes, and
- simplified sharing of data.

[Figure: a simple MMU using base and limit registers. The running process has base 500 and limit 150. The logical address 23 generated by the CPU is checked against the limit and then added to the base, giving the physical address 523; an address outside the limit causes a trap (addressing error). The operating system, Process 1 and Process 2 each occupy their own region of memory.]
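The check the MMU performs in this figure can be summarised in a few lines. This is a hedged sketch of the base/limit scheme described above, not the hardware of any particular system; the structure and function names are invented for the example.

    #include <stdio.h>
    #include <stdlib.h>

    /* Relocation with base and limit registers: every logical address is
       checked against the limit, then offset by the base. */
    typedef struct {
        unsigned long base;    /* start of the process's partition */
        unsigned long limit;   /* size of the partition in bytes   */
    } mmu_t;

    unsigned long translate(const mmu_t *mmu, unsigned long logical)
    {
        if (logical >= mmu->limit) {
            /* In hardware this would raise a trap (addressing error). */
            fprintf(stderr, "trap: addressing error (logical %lu)\n", logical);
            exit(EXIT_FAILURE);
        }
        return mmu->base + logical;
    }

    int main(void)
    {
        mmu_t mmu = { .base = 500, .limit = 150 };   /* values from the figure */
        printf("logical 23 -> physical %lu\n", translate(&mmu, 23));   /* 523 */
        return 0;
    }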

Contiguous Allocation (Memory Management)

Memory may be partitioned into the operating system and user sections.

A simple approach is to allocate each process a single fixed-sized partition. Base and limit registers may be used in the MMU. The operating system keeps track of the partitions attached to each process. It also maintains a list of "holes", or available areas of memory.

When new processes arrive they must be assigned a section of physical memory. This places the new process into one of the holes that is big enough to fit the process.

[Figure: a memory map containing processes P1, P4, P5, P6, P3 and P9 with holes between them; a new process must be placed into one of the holes that is large enough to hold it.]
Contiguous Allocation (Memory Management)

Strategies used to determine which hole to place a new process into include (see the sketch after this section):

- first-fit,
- best-fit, or
- worst-fit.

Simulations have shown that first-fit and best-fit are better than worst-fit in terms of improving storage utilization.

External Fragmentation (Memory Management)

With the previous approach a number of small holes will be created as processes are added to and removed from the system. These holes are too small for new processes and hence are wasted memory. This is known as external fragmentation.

[Figure: a memory map containing the OS and processes P5, P9, P2 and P7 separated by several small holes; a new process P10 cannot be placed even though the holes together would be large enough.]
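As a concrete illustration of first-fit and best-fit, here is a hedged sketch of choosing a hole from a free list. The structure and function names are invented for the example; they do not come from the notes.

    #include <stddef.h>

    /* A hole in the free list: start address and size in bytes. */
    typedef struct hole {
        size_t start;
        size_t size;
        struct hole *next;
    } hole_t;

    /* First-fit: take the first hole that is big enough. */
    hole_t *first_fit(hole_t *holes, size_t request)
    {
        for (hole_t *h = holes; h != NULL; h = h->next)
            if (h->size >= request)
                return h;
        return NULL;                       /* no hole is large enough */
    }

    /* Best-fit: take the smallest hole that is still big enough,
       leaving the smallest possible leftover fragment. */
    hole_t *best_fit(hole_t *holes, size_t request)
    {
        hole_t *best = NULL;
        for (hole_t *h = holes; h != NULL; h = h->next)
            if (h->size >= request && (best == NULL || h->size < best->size))
                best = h;
        return best;
    }

Worst-fit is the same loop with the comparison reversed: pick the largest hole that fits.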

External Fragmentation (Memory Management)

One solution to external fragmentation is compaction.

[Figure: the memory map from the previous slide after compaction: the OS and processes P5, P9, P2 and P7 have been moved together so that the small holes merge into one large hole, into which P10 can now be placed.]

Paging (Memory Management)

There is considerable overhead in managing variable-sized memory chunks. Paging overcomes many of these problems and solves the external fragmentation problem. Paging is used in many operating systems.

Paging involves the following (a translation sketch is given after this list):

- Physical memory is partitioned into fixed-sized blocks called frames.
- Logical memory is partitioned into blocks of the same size called pages.
- Logical addresses (produced by the CPU) are divided into two parts: the page number and the offset.
- Each process has a page table. The page number indexes the page table for the running process and looks up the frame number for that page. The frame number is combined with the offset to produce the physical address.
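The translation step in the last bullet can be written out directly. This is a minimal sketch assuming a 12-bit offset (4 KB pages) and a flat per-process page table; the sizes and names are illustrative, not taken from the notes.

    #include <stdint.h>

    #define OFFSET_BITS 12                       /* 4 KB pages/frames (assumed) */
    #define PAGE_SIZE   (1u << OFFSET_BITS)
    #define OFFSET_MASK (PAGE_SIZE - 1)

    /* Translate a logical address using the running process's page table.
       page_table[p] holds the frame number for page p. */
    uint32_t translate(const uint32_t *page_table, uint32_t logical)
    {
        uint32_t page   = logical >> OFFSET_BITS;   /* page number            */
        uint32_t offset = logical &  OFFSET_MASK;   /* offset within the page */
        uint32_t frame  = page_table[page];         /* page-table lookup      */
        return (frame << OFFSET_BITS) | offset;     /* physical address       */
    }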
Paging (Memory Management)

[Figure: the logical address produced by the CPU is split into a page number p and an offset d; p indexes the page table to obtain a frame number f, and (f, d) forms the physical address sent to memory.]

[Figure: an example with two processes, each with four pages (P0 to P3), sharing a 12-frame physical memory.

Page Table A: page 0 -> frame 3, page 1 -> frame 7, page 2 -> frame 0, page 3 -> frame 10.
Page Table B: page 0 -> frame 11, page 1 -> frame 2, page 2 -> frame 1, page 3 -> frame 4.

Resulting memory contents: frame 0 holds P2 of A, frame 1 holds P2 of B, frame 2 holds P1 of B, frame 3 holds P0 of A, frame 4 holds P3 of B, frame 7 holds P1 of A, frame 10 holds P3 of A, frame 11 holds P0 of B; the remaining frames are free.]

Paging (Memory Management)

The size of each frame (and page) is determined by the number of bits in the offset. If the offset consists of n bits then the frame size will be 2^n bytes.

Allocating memory for a new process is simply a matter of finding the available frames. A frame table is used to keep track of the frames in memory.

Internal Fragmentation (Memory Management)

Suppose the offset uses n bits, so the frames are 2^n bytes long. If a process requires 2^(n+1) + 1 bytes (two full frames plus one byte) then 3 frames must be used by the process. Only one byte of the third frame is used! This waste is known as internal fragmentation.

[Figure: a process's pages mapped to frames in physical memory; the unused remainder of the last frame is marked as wasted memory.]
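The amount wasted is easy to compute. A small sketch, assuming 4 KB frames; the constants are illustrative, not from the notes.

    #include <stdio.h>

    #define OFFSET_BITS 12u                       /* assume 4 KB frames */
    #define FRAME_SIZE  (1ul << OFFSET_BITS)

    int main(void)
    {
        unsigned long need   = 2 * FRAME_SIZE + 1;                     /* 2 full frames + 1 byte */
        unsigned long frames = (need + FRAME_SIZE - 1) / FRAME_SIZE;   /* round up: 3 frames     */
        unsigned long waste  = frames * FRAME_SIZE - need;             /* internal fragmentation */
        printf("frames = %lu, wasted bytes = %lu\n", frames, waste);   /* 3, 4095 */
        return 0;
    }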
Paging (Memory Management)

Selecting the frame size is important:

- The larger the frame size, the more internal fragmentation.
- The smaller the frame size, the larger the page-table overhead.

The page tables may be very large, making it infeasible to store the entire page table in registers. So the page table is stored in main memory, and a Page Table Base Register (PTBR) points to the page table.

[Figure: the CPU's logical address (p, d) is translated by indexing the in-memory page table, located via the Page Table Base Register, to obtain the frame number f; (f, d) is the physical address.]

Paging (Memory Management)

This would require two memory accesses for every memory access a process wishes to make! A fast lookup cache may be used to overcome this problem. These specially built caches are called translation look-aside buffers (TLBs).

The TLB is a set of associative registers which contains a set of page-number (the key) and frame-number pairs. All the page-number registers may be checked in parallel. This makes a TLB lookup very fast.

When a context switch occurs the TLB must be flushed.

[Figure: translation with a TLB. The page number p is first looked up in the TLB; on a hit the frame number f comes straight from the TLB, otherwise the in-memory page table (located via the Page Table Base Register) is consulted.]
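A hedged sketch of this lookup path in software terms. A real TLB searches all entries in parallel in hardware; the structures, sizes and refill policy below are illustrative choices only.

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 16
    #define OFFSET_BITS 12u

    typedef struct {
        bool     valid;
        uint32_t page;    /* key   */
        uint32_t frame;   /* value */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Translate one logical address: try the TLB first, then fall back to
       the in-memory page table (one extra memory access). */
    uint32_t translate(const uint32_t *page_table, uint32_t logical)
    {
        uint32_t page   = logical >> OFFSET_BITS;
        uint32_t offset = logical & ((1u << OFFSET_BITS) - 1);
        uint32_t frame;

        for (int i = 0; i < TLB_ENTRIES; i++) {            /* parallel in hardware */
            if (tlb[i].valid && tlb[i].page == page) {
                frame = tlb[i].frame;                       /* TLB hit */
                return (frame << OFFSET_BITS) | offset;
            }
        }

        frame = page_table[page];                           /* TLB miss: read the page table */
        tlb[page % TLB_ENTRIES] = (tlb_entry_t){ true, page, frame };   /* simple refill */
        return (frame << OFFSET_BITS) | offset;
    }

    /* On a context switch every entry is invalidated ("flushed"). */
    void tlb_flush(void)
    {
        for (int i = 0; i < TLB_ENTRIES; i++)
            tlb[i].valid = false;
    }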
Paging (Memory Management)

The hit ratio is the percentage of time the page number is found in the associative registers.

Using the hit ratio and information about the memory access time it is possible to calculate the effective memory access time.

Suppose memory access takes 100 ns and accessing the TLB adds 10 ns. What is the effective access time if the hit ratio is 95%? A hit costs one TLB access plus one memory access; a miss additionally costs the memory access needed to read the page table:

    effective access time = 0.95 × (10 + 100) + 0.05 × (10 + 100 + 100)
                          = 0.95 × 110 + 0.05 × 210
                          = 115 ns

Protection (Memory Management)

Memory protection in a paged environment may be accomplished by associating protection bits with each page. This enables the operating system to permit the process to either just read from a page, or both read from and write to a page. This information may be kept in the page table. A trap will occur if the process attempts to write to a read-only page.

This approach may also be used to make some pages execute-only.

Protection (Memory Management)

Another bit that may be attached to each entry in the page table is a valid-invalid bit. This indicates whether the page is part of the process's logical address space. Accessing an invalid page will cause a memory-violation trap.

[Figure: a four-entry page table carrying protection and valid bits.

page 0 -> frame 5, read-write, valid
page 1 -> frame 1, read-only, valid
page 2 -> frame 8, read-only, valid
page 3 ->          invalid

Each entry records the frame number, a read-only/read-write bit and a valid/invalid bit.]
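A hedged sketch of how such an entry might be checked on every access; the field and function names are invented for the example.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint32_t frame;
        bool     writable;   /* read-write if true, read-only if false */
        bool     valid;      /* part of the logical address space?     */
    } pte_t;

    typedef enum { ACCESS_READ, ACCESS_WRITE } access_t;

    /* Returns true if the access is allowed; otherwise the hardware would trap. */
    bool check_access(const pte_t *entry, access_t kind)
    {
        if (!entry->valid)
            return false;                    /* memory-violation trap     */
        if (kind == ACCESS_WRITE && !entry->writable)
            return false;                    /* write to a read-only page */
        return true;
    }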
PTLR (Memory Management)

Very rarely will a process use its entire address range. Hence it is a waste of the memory resource to create a mostly empty page table. Some systems provide a page-table length register (PTLR), which indicates the size of the page table.

[Figure: the translation path of the earlier TLB diagram, with the page number also compared against the Page Table Length Register; a page number beyond the length raises a trap.]

Multilevel Paging (Memory Management)

The logical address space may be very large (2^32 addresses). In such cases the page table itself would also be extremely large.

Suppose the frame size is 4K (2^12) bytes. This would leave 20 bits for the page number, and if each page-table entry required 4 bytes then the entire page table would be 4M for each process!

Multilevel Paging (Memory Management)

One solution is to divide the page number into smaller pieces and use an outer page table to index a page of the page table. This reduces the memory overhead of maintaining paging. (A translation sketch is given after this section.)

[Figure: the logical address is split into p1, p2 and the page offset d. The Page Table Base Register locates the outer page table; p1 indexes the outer page table to find a page of the page table; p2 indexes that page to find the frame number, which is combined with d to give the physical address.]
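A hedged sketch of the two-level walk, assuming the 10/10/12 split used in the example on the next slide; the table layout and names are illustrative only.

    #include <stdint.h>

    #define P1_BITS     10u     /* indexes the outer page table     */
    #define P2_BITS     10u     /* indexes a page of the page table */
    #define OFFSET_BITS 12u     /* offset within the 4 KB frame     */

    /* outer[p1] points to one page of the page table; that page, indexed
       by p2, gives the frame number. */
    uint32_t translate(uint32_t **outer, uint32_t logical)
    {
        uint32_t d  = logical & ((1u << OFFSET_BITS) - 1);
        uint32_t p2 = (logical >> OFFSET_BITS) & ((1u << P2_BITS) - 1);
        uint32_t p1 = logical >> (OFFSET_BITS + P2_BITS);

        uint32_t *page_of_table = outer[p1];    /* first lookup  */
        uint32_t frame = page_of_table[p2];     /* second lookup */
        return (frame << OFFSET_BITS) | d;
    }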
Multilevel Paging (Memory Management)

Suppose the logical address space uses 32-bit addresses with a 12-bit offset and a 20-bit page number. The page number is itself divided into a 10-bit outer page number (which indexes the outer page table) and a 10-bit inner page number (which indexes the page of the page table). Both the outer page table and a page of the page table require four bytes per entry. What is the memory overhead for a process that only requires 3K?

Also suppose the memory access time is 100 ns and the TLB delays access by 10 ns. What is the effective access time using this approach if the TLB's hit ratio is 80%?

Inverted Page Table (Memory Management)

Each process has its own page table. If there are many processes the memory overhead of these page tables is considerable. One solution to this problem is to invert the page table. This uses one entry for each frame. The entry contains the process ID and the page number. Hence, only one table is needed for all the processes.

[Figure: the CPU produces (pid, p, d); the inverted page table is searched for the entry matching (pid, p), and the index of the matching entry is the frame number f used with d to form the physical address.]

Inverted Page Table (Memory Management)

What is the inverted page table for the following?

[Figure: the two-process paging example from earlier. Page Table A maps pages 0 to 3 to frames 3, 7, 0 and 10; Page Table B maps pages 0 to 3 to frames 11, 2, 1 and 4; physical memory has 12 frames holding the corresponding pages of processes A and B.]

Searching the inverted page table would take far too long. Hence, a hash table may be used to reduce the search to at most a few memory accesses.

Also, associative registers may be used to improve the performance.
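A hedged sketch of the inverted page table itself: one entry per physical frame, with the frame number being the index of the matching entry. The structure and names are invented for the example.

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_FRAMES 4096

    /* Inverted page table: one entry per physical frame, recording which
       process and which page of that process currently occupy the frame. */
    typedef struct {
        bool     used;
        uint32_t pid;
        uint32_t page;
    } ipt_entry_t;

    static ipt_entry_t ipt[NUM_FRAMES];

    /* The frame number is simply the index of the matching entry.
       Returns -1 if (pid, page) is not resident. */
    int ipt_lookup(uint32_t pid, uint32_t page)
    {
        for (uint32_t f = 0; f < NUM_FRAMES; f++)
            if (ipt[f].used && ipt[f].pid == pid && ipt[f].page == page)
                return (int)f;
        return -1;
    }

This linear scan is exactly what makes the lookup slow; in practice a hash table keyed on (pid, page), or associative registers, would locate the entry in a small constant number of accesses.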
Sharing Pages (Memory Management)

Paging allows the possibility of common code being shared. This code must be reentrant (non-self-modifying).

[Figure: two processes A and B each run the same editor. Their page tables map the shared code pages ed1, ed2 and ed3 to the same physical frames, while each process's data page (data 1 and data 2) maps to its own private frame.]

Segmentation (Memory Management)

A user or process would like to view memory as a set of variable-sized segments.

Each segment has a name and there is no necessary ordering among segments.

[Figure: a logical address space viewed as segments such as the main program, a subroutine, the stack and the symbol table.]

Segmentation (Memory Management)

An address within a segment may be referred to by a segment number and an offset.

The segment table consists of base and limit registers for each segment. (A translation sketch is given after this section.)

[Figure: the CPU issues a logical address (s, d); s indexes the segment table, the offset d is checked against that segment's limit, and the segment's base is added to d to give the physical address.]

[Figure: an example logical address space with five segments (main program, subroutines, the stack and the symbol table), its segment table of limit/base pairs, and the segments scattered at their base addresses through physical memory.]
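A hedged sketch of the segment-table lookup described above; the structure and function names are invented for the example.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    /* One base/limit pair per segment. */
    typedef struct {
        uint32_t limit;   /* segment length in bytes  */
        uint32_t base;    /* where the segment starts */
    } seg_entry_t;

    /* Translate (segment number s, offset d) to a physical address. */
    uint32_t translate(const seg_entry_t *seg_table, uint32_t s, uint32_t d)
    {
        if (d >= seg_table[s].limit) {
            fprintf(stderr, "trap: offset %u outside segment %u\n", d, s);
            exit(EXIT_FAILURE);
        }
        return seg_table[s].base + d;
    }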
Segmentation (Memory Management)

Segments are very useful for providing protection, both between processes and within a process.

Segmentation provides a simple mechanism for sharing code and data between processes.

One difficulty that arises when using segmentation to share code is that routines often refer to themselves; this may be a problem if the segments have different numbers in the different processes. This problem may be addressed by insisting that shared routines only refer to themselves indirectly.

Segmentation with Paging (Memory Management)

It is possible to combine both paging and segmentation to improve on both schemes. Paged segmentation was used on the GE 645 with MULTICS.

[Figure: the logical address (s, d) first indexes the segment table, located via the Segment Table Base Register; the offset d is checked against the segment's limit, then split into a page number p and a page offset d'; p indexes that segment's page table to find the frame f, and (f, d') is the physical address.]

Contiguous Frames (Memory Management)

In some cases a process requires a set of contiguous frames. For example, a DMA controller will often sit outside the MMU circuitry. Hence, any transfer done from, say, a disk drive controller to a process's memory must be done to a contiguous set of frames.

Linux uses a buddy algorithm to provide a contiguous set of frames. This also helps address the problem of external fragmentation.
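A hedged sketch of the core idea of a buddy allocator: block sizes are powers of two, a request is satisfied from the smallest free block that fits, and larger blocks are split into two "buddies" as needed. This is a simplified illustration under invented names, not the Linux implementation; freeing and coalescing of buddies are omitted.

    #include <stdio.h>

    #define MAX_ORDER    4                  /* largest block = 2^4 = 16 frames */
    #define TOTAL_FRAMES (1 << MAX_ORDER)

    /* free_list[k] holds starting frame numbers of free blocks of 2^k frames.
       For brevity this sketch uses small fixed arrays instead of linked lists. */
    static int free_list[MAX_ORDER + 1][TOTAL_FRAMES];
    static int free_count[MAX_ORDER + 1];

    static void init(void)
    {
        free_list[MAX_ORDER][free_count[MAX_ORDER]++] = 0;   /* one big free block */
    }

    /* Allocate 2^order contiguous frames; returns the first frame or -1. */
    static int alloc_block(int order)
    {
        int k = order;
        while (k <= MAX_ORDER && free_count[k] == 0)
            k++;                                  /* find a big enough free block */
        if (k > MAX_ORDER)
            return -1;

        int start = free_list[k][--free_count[k]];
        while (k > order) {                       /* split until the right size   */
            k--;
            int buddy = start + (1 << k);         /* second half stays free       */
            free_list[k][free_count[k]++] = buddy;
        }
        return start;
    }

    int main(void)
    {
        init();
        /* A request for 3 frames is rounded up to order 2 (4 frames). */
        printf("order 2 block at frame %d\n", alloc_block(2));
        printf("order 0 block at frame %d\n", alloc_block(0));
        return 0;
    }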
