
Memory Organization

12.1 Memory Hierarchy
12.2 Main Memory
12.3 Auxiliary Memory
12.4 Associative Memory
12.5 Cache Memory
12.6 Virtual Memory
12.7 Memory Management Hardware
Memory Hierarchy

The overall goal of using a memory hierarchy is to obtain the highest possible average access speed while minimizing the total cost of the entire memory system.
Memory Hierarchy
The memory hierarchy system consists of all storage devices employed in a computer system, from slow but high-capacity auxiliary memory, to a relatively faster main memory, to an even smaller and faster cache memory accessible to the high-speed processing logic.

When programs not residing in main memory are needed by the CPU, they are brought in from auxiliary memory.
Cache Memory
A special high-speed memory called cache is sometimes used to increase the speed of processing by making current programs and data available to the CPU at a rapid rate.

The I/O processor manages data transfer between auxiliary memory and main memory.
Memory Hierarchy

Difference between auxiliary memory and cache memory:

The cache memory holds those parts of the program and data that are most heavily used, while the auxiliary memory holds those parts that are not presently used by the CPU.

Multiprogramming: refers to the existence of many programs in different parts of main memory at the same time.

Memory management system:

The part of the computer system that supervises the flow of information between auxiliary memory and main memory is called the memory management system.
Main memory

Main memory
Internal memory, also called "main or primary memory" refers to
memory that stores small amounts of data that can be accessed quickly
while the computer is running.

There are basically two kinds of internal memory: ROM and RAM.

ROM stands for read-only memory. It is non-volatile, which means it can retain data even without power. It is used mainly to start or boot up a computer.
Once the operating system is loaded, the computer uses RAM which
temporarily stores data while the central processing unit (CPU) is
executing tasks. The more RAM the computer has, the less often the CPU must read data from external or secondary memory (storage devices), allowing the computer to run faster. RAM is fast but it is
volatile, which means it will not retain data if there is no power. It is
therefore important to save data to the storage device before the
system is turned off.
Main memory

Types of RAM

There are two main types of RAM: Dynamic RAM (DRAM) and Static
RAM (SRAM).
DRAM (pronounced DEE-RAM) is widely used as a computer’s main memory. Each DRAM memory cell is made up of a transistor and a capacitor within an integrated circuit, and a data bit is stored in the capacitor (binary information is stored in the form of electric charges applied to capacitors). Since transistors always leak a small amount, the capacitors will slowly discharge, causing the information stored in them to drain; hence, DRAM has to be refreshed (given a new electric charge) every few milliseconds to retain data.
SRAM (pronounced ES-RAM) is made up of four to six transistors. It
keeps data in the memory as long as power is supplied to the system
unlike DRAM, which has to be refreshed periodically. As such, SRAM is
faster but also more expensive, making DRAM the more prevalent
memory in computer systems.
Bootstrap Loader
It is a program whose function is to start the computer's software operating when power is turned on.
Main memory
Types of ROM :
PROM : Short for programmable read-only memory, a memory chip on
which data can be written only once. Once a program has been written
onto a PROM, it remains there forever. Unlike RAM, PROMs retain their
contents when the computer is turned off. The difference between a
PROM and a ROM (read-only memory) is that a PROM is manufactured
as blank memory, whereas a ROM is programmed during the
manufacturing process. To write data onto a PROM chip, you need a
special device called a PROM programmer or PROM burner. The process
of programming a PROM is sometimes called burning the PROM.
EPROM : Acronym for erasable programmable read-only memory, and
pronounced ee-prom, EPROM is a special type of memory that retains its
contents until it is exposed to ultraviolet light. The ultraviolet light clears
its contents, making it possible to reprogram the memory. To write to
and erase an EPROM, you need a special device called a PROM
programmer or PROM burner.
EEPROM : Short form of electrically erasable programmable read-only
memory. EEPROM is a special type of PROM that can be erased by
exposing it to an electrical charge. Like other types of PROM, EEPROM
retains its contents even when the power is turned off. Also like other
types of ROM, EEPROM is not as fast as RAM.
RAM CHIP

Three-state logic is a logic used in electronic circuits wherein a third state, the high-impedance state, is added to the original 1 and 0 logic states that a port can be in. This high-impedance state effectively removes the port from the circuit, as if it were not part of it. So in the third state of high impedance, the output from the port is neither 1 nor 0; rather, the port does not appear to exist.
Main memory
ROM Chip
Memory Address Map

The designer of a computer system must calculate the amount of memory required for the particular application and assign it to either RAM or ROM.

The interconnection between memory and processor is then established from knowledge of the size of memory needed and the type of RAM and ROM chips available.

The addressing of memory can be established by means of a table that specifies the memory address assigned to each chip.

The table, called a memory address map, is a pictorial representation of the assigned address space for each chip in the system.

Memory Configuration (case study):

Required: 512 bytes of RAM + 512 bytes of ROM
Available chips: 128 × 8 RAM and 512 × 8 ROM
Memory Address Map
[Figure: Memory connections to the CPU. The low-order address lines and the RD/WR control lines go to four 128 × 8 RAM chips, the higher address lines are decoded to drive the chip-select inputs (CS1, CS2) of the RAM chips and of the 512 × 8 ROM chip, and all chips share the common data bus.]
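As a software illustration of this decoding scheme (a minimal sketch only; the chip placement below, RAM at hex addresses 000-1FF and ROM at 200-3FF, is assumed from the chip sizes rather than stated in the slides):

```c
#include <stdio.h>

/* Sketch: decode a 10-bit CPU address into a chip select plus the
   local address applied to that chip, assuming this placement:
     RAM 1: 000-07F   RAM 2: 080-0FF
     RAM 3: 100-17F   RAM 4: 180-1FF
     ROM  : 200-3FF   (all addresses in hex)                        */
typedef struct {
    const char *chip;    /* which chip is selected                  */
    unsigned    local;   /* address presented on that chip's pins   */
} Select;

static Select decode(unsigned addr)      /* addr uses 10 bits        */
{
    Select s;
    if (addr & 0x200) {                  /* line 10 = 1 -> ROM       */
        s.chip  = "ROM";
        s.local = addr & 0x1FF;          /* 9 address lines          */
    } else {                             /* line 10 = 0 -> a RAM     */
        static const char *ram[] = { "RAM 1", "RAM 2", "RAM 3", "RAM 4" };
        s.chip  = ram[(addr >> 7) & 0x3];  /* lines 8-9 pick the chip */
        s.local = addr & 0x7F;             /* 7 address lines         */
    }
    return s;
}

int main(void)
{
    Select s = decode(0x0A5);            /* expected: RAM 2, local 0x25 */
    printf("%s, local address 0x%02X\n", s.chip, s.local);
    return 0;
}
```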
Associative Memory
Associative Memory

The time required to find an item stored in memory can be reduced considerably if stored data can be identified for access by the content of the data itself rather than by an address.

A memory unit accessed by content is called an associative memory or Content Addressable Memory (CAM). This type of memory is accessed simultaneously and in parallel on the basis of data content rather than a specific address or location.

When a word is written in an associative memory, no address is given. The memory is capable of finding an empty unused location to store the word. When a word is to be read from an associative memory, the content of the word, or part of the word, is specified.

The associative memory is uniquely suited to parallel searches by data association. Moreover, searches can be done on an entire word or on a specific field within a word. Associative memories are used in applications where the search time is very critical and must be very short.
Hardware Organization

[Figure: Block diagram of an associative memory. An argument register (A) and a key register (K) feed the associative memory array and logic of m words, n bits per word; the match register M has one bit per word, and input, read, write, and output lines connect the array to the outside.]
[Figure: Associative memory of m words, n cells per word. Cell C_ij holds bit j of word i; bit column j is compared with A_j under control of K_j, and word i sets its match bit M_i.]


One Cell of Associative Memory

[Figure: One cell of the associative memory. The cell contains a flip-flop F_ij written from the input line under control of the write signal and read onto the output line by the read signal, plus match logic that compares F_ij with A_j (gated by K_j) and contributes to M_i.]
Match logic

First neglect the key bits and compare the argument in A with the bits stored in the cells of the words.

Word i is equal to the argument in A if A_j = F_ij for all j. Two bits are equal if they are both 1 or both 0, so the equality of a pair of bits can be expressed as

x_j = A_j F_ij + A_j' F_ij'

For a word i to be equal to the argument in A we must have all x_j variables equal to 1:

M_i = x_1 x_2 x_3 ... x_n

This is the condition for setting the corresponding match bit M_i to 1.


Now include the key bit K_j in the comparison logic. The requirement is that if K_j = 0, the corresponding bits of A_j and F_ij need no comparison; only when K_j = 1 must they be compared. This requirement is achieved by ORing each term with K_j':

x_j + K_j' = x_j if K_j = 1, and 1 if K_j = 0

The match logic for word i in an associative memory can now be expressed by the following Boolean function:

M_i = (x_1 + K_1')(x_2 + K_2') ... (x_n + K_n')

If we substitute the original definition of x_j, the above Boolean function can be expressed as follows:

M_i = ∏ (A_j F_ij + A_j' F_ij' + K_j')

where ∏ is a product symbol designating the AND operation of all n terms, for j = 1, 2, ..., n.


Match logic circuit

[Figure: Match logic for one word of the associative memory. Each bit position compares K_j and A_j with the stored flip-flop outputs F_ij and F_ij', and the outputs of all n positions are ANDed to produce M_i.]
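The same match function can be sketched in software. Treating the argument register A, the key register K, and a stored word F_i as n-bit integers, the Boolean product above reduces to the test ((A xor F_i) and K) == 0: word i matches when A and F_i agree in every bit position where K holds a 1. (The names below are illustrative, not part of the slides.)

```c
#include <stdio.h>
#include <stdint.h>

/* Match logic for one word of a CAM, expressed in software:
   a = argument register, k = key register (1 = compare this bit),
   f = stored word F_i.  The word matches exactly when
   Mi = product over j of (Aj Fij + Aj' Fij' + Kj') evaluates to 1,
   i.e. when a and f agree on every bit selected by k.              */
static int cam_match(uint32_t a, uint32_t k, uint32_t f)
{
    return ((a ^ f) & k) == 0;
}

int main(void)
{
    uint32_t a = 0x0B5;                      /* argument              */
    uint32_t k = 0x0F0;                      /* compare bits 4-7 only */

    printf("%d\n", cam_match(a, k, 0x0B7));  /* 1: differs only in masked bits  */
    printf("%d\n", cam_match(a, k, 0x045));  /* 0: differs inside the key field */
    return 0;
}
```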
Read Operation

If more than one word in memory matches the unmasked argument field, all the matched words will have 1’s in the corresponding bit positions of the match register.

It is then necessary to scan the bits of the match register one at a time. The matched words are read in sequence by applying a read signal to each word line whose corresponding M_i bit is a 1.

If only one word may match the unmasked argument field, then output M_i can be connected directly to the read line in the same word position. The content of the matched word is then presented automatically at the output lines and no special read command signal is needed.

If words having all-zero content are excluded, an all-zero output indicates that no match occurred and that the searched item is not available in memory.
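A hedged sketch of that read sequence (the memory contents and register values below are made up for illustration): build the match register bit by bit, then scan it and read every word whose bit is set.

```c
#include <stdio.h>
#include <stdint.h>

#define M_WORDS 8                        /* a small illustrative CAM  */

int main(void)
{
    uint32_t mem[M_WORDS] = { 0x0B7, 0x045, 0x1B5, 0x0B5,
                              0x000, 0x2B6, 0x0BF, 0x145 };
    uint32_t a     = 0x0B5;              /* argument register         */
    uint32_t k     = 0x0F0;              /* key: compare bits 4-7     */
    uint32_t match = 0;                  /* match register M          */

    for (int i = 0; i < M_WORDS; i++)    /* set Mi for matching words */
        if (((a ^ mem[i]) & k) == 0)
            match |= 1u << i;

    for (int i = 0; i < M_WORDS; i++)    /* scan M one bit at a time  */
        if (match & (1u << i))
            printf("read word %d = 0x%03X\n", i, mem[i]);

    if (match == 0)                      /* all-zero M: no match      */
        printf("item not in memory\n");
    return 0;
}
```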
Write Operation

If the entire memory is loaded with new information at once, the writing can be done by addressing each location in sequence. The information is loaded prior to a search operation.

If unwanted words have to be deleted and new words inserted one at a time, there is a need for a special register to distinguish between active and inactive words. This register is called the “Tag Register”.

A word is deleted from memory by clearing its tag bit to 0.
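A minimal software sketch of that scheme (the sizes and helper names are assumptions for illustration): the tag register holds one bit per word, a write stores the new word in the first location whose tag bit is 0, and a delete merely clears the tag bit.

```c
#include <stdio.h>
#include <stdint.h>

#define M_WORDS 8

static uint32_t mem[M_WORDS];
static uint32_t tag;                     /* tag register: 1 = active word */

static int cam_write(uint32_t word)      /* store in first inactive slot  */
{
    for (int i = 0; i < M_WORDS; i++)
        if (!(tag & (1u << i))) {
            mem[i] = word;
            tag   |= 1u << i;            /* mark the location active      */
            return i;
        }
    return -1;                           /* memory full                   */
}

static void cam_delete(int i)            /* delete = clear the tag bit    */
{
    tag &= ~(1u << i);
}

int main(void)
{
    int a = cam_write(0x123);            /* goes to location 0            */
    int b = cam_write(0x456);            /* goes to location 1            */
    cam_delete(a);                       /* location 0 becomes reusable   */
    int c = cam_write(0x789);            /* reuses location 0             */
    printf("%d %d %d\n", a, b, c);       /* prints: 0 1 0                 */
    return 0;
}
```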


Cache Memory
Cache Memory

Locality of reference
The references to memory at any given interval of time tend to be contained within a few localized areas in memory.

If the active portions of the program and data are placed in a fast small memory, the average memory access time can be reduced, thus reducing the total execution time of the program. Such a fast small memory is referred to as a “Cache Memory”.

The performance of the cache memory is measured in terms of a quantity called the “Hit Ratio”.

When the CPU refers to memory and finds the word in cache, it produces a hit. If the word is not found in cache, it counts as a miss.

The hit ratio is the number of hits divided by the total number of CPU references to memory (hits + misses). Hit ratios of 0.9 and higher have been reported.
Cache Memory
The average memory access time of a computer system can be improved considerably by use of a cache.

The cache is placed between the CPU and main memory. It is the fastest component in the hierarchy and approaches the speed of the CPU components.

When the CPU needs to access memory, the cache is examined first. If the word is found in the cache, it is read very quickly.

If the word is not found in the cache, the main memory is accessed.

A block of words containing the one just accessed is then transferred from main memory to cache memory.

For example, a computer with a cache access time of 100 ns, a main memory access time of 1000 ns, and a hit ratio of 0.9 produces an average access time of 200 ns. This is a considerable improvement over a similar computer without a cache memory, whose access time is 1000 ns.
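The arithmetic behind this figure, assuming that on a miss the cache is probed first and main memory is read afterwards (so the miss penalty is the sum of the two access times):

average access time = h × t_cache + (1 − h) × (t_cache + t_main)
                    = 0.9 × 100 ns + 0.1 × (100 + 1000) ns
                    = 90 ns + 110 ns = 200 ns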
Cache Memory

The basic characteristic of cache memory is its fast access time. Therefore, very little or no time must be wasted when searching for words in the cache.

The transformation of data from main memory to cache memory is referred to as a “Mapping Process”.

Three types of mapping procedures are available:

· Associative Mapping
· Direct Mapping
· Set-Associative Mapping
Cache Memory

Consider the following memory organization to show the mapping procedures of the cache memory.

· The main memory can store 32K words of 12 bits each.
· The cache is capable of storing 512 of these words at any given time.
· For every word stored in cache, there is a duplicate copy in main memory.
· The CPU communicates with both memories.
· It first sends a 15-bit address to the cache.
· If there is a hit, the CPU accepts the 12-bit data from the cache.
· If there is a miss, the CPU reads the word from main memory and the word is then transferred to the cache.
Associative Mapping
The associative mapping stores both the address and the content (data) of the memory word.

[Figure: associative-mapped cache; the CPU address is placed in the argument register, and the address and data columns are shown in octal.]

A CPU address of 15 bits is placed in the argument register and the associative memory is searched for a matching address.

If the address is found, the corresponding 12-bit data is read and sent to the CPU.

If no match occurs, the main memory is accessed for the word. The address-data pair is then transferred to the associative cache memory.

If the cache is full, an address-data pair must be displaced to make room for the new pair, using a replacement algorithm; FIFO may be used.
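A minimal sketch of such an associative cache in software (the sizes match the case study above; the FIFO pointer and the stand-in main memory are illustrative assumptions):

```c
#include <stdio.h>
#include <stdint.h>

#define CACHE_WORDS 512                  /* 512 address-data pairs    */

/* Each entry stores the full 15-bit address with its 12-bit data.   */
typedef struct {
    int      valid;
    uint16_t addr;                       /* 15-bit CPU address        */
    uint16_t data;                       /* 12-bit data word          */
} Entry;

static Entry    cache[CACHE_WORDS];
static unsigned next_victim;             /* FIFO replacement pointer  */

static uint16_t main_memory_read(uint16_t addr)
{
    return (uint16_t)(addr & 0x0FFF);    /* stand-in for 32K x 12 memory */
}

static uint16_t cache_read(uint16_t addr)
{
    for (int i = 0; i < CACHE_WORDS; i++)           /* search all entries */
        if (cache[i].valid && cache[i].addr == addr)
            return cache[i].data;                   /* hit                */

    uint16_t data = main_memory_read(addr);         /* miss: go to memory */
    cache[next_victim] = (Entry){ 1, addr, data };  /* displace oldest    */
    next_victim = (next_victim + 1) % CACHE_WORDS;
    return data;
}

int main(void)
{
    printf("0x%03X\n", cache_read(0x1234));         /* miss, then cached  */
    printf("0x%03X\n", cache_read(0x1234));         /* hit                */
    return 0;
}
```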
Direct Mapping

The 15-bit CPU address is divided into two fields: the 9 least significant bits constitute the index field and the remaining 6 bits form the tag field.

The main memory needs an address that includes both the tag and the index bits.

The cache memory requires the index field only, i.e., 9 bits.

In general there are 2^k words in the cache memory and 2^n words in the main memory.

e.g., k = 9, n = 15
Direct Mapping

[Figure: direct-mapping example; main memory addresses and data are shown in octal, together with the corresponding cache index, tag, and data fields.]
Direct Mapping

Each word in cache consists of the data word and its associated tag.

When a new word is brought into the cache, the tag bits are stored alongside the data bits.

When the CPU generates a memory request, the index field of the address is used to access the cache.

If the tag field of the CPU address is equal to the tag in the word read from the cache, there is a hit; otherwise there is a miss.

How can we calculate the word size of the cache memory?
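A hedged sketch of a direct-mapped lookup for the same 15-bit address (the stand-in main memory is an assumption). Each cache word here holds a 6-bit tag plus the 12-bit data word, i.e. 6 + 12 = 18 bits, which is one way to answer the word-size question above.

```c
#include <stdio.h>
#include <stdint.h>

#define CACHE_WORDS 512                  /* 2^9 cache words           */

/* One cache word: a 6-bit tag plus the 12-bit data word (18 bits).  */
typedef struct {
    int      valid;
    uint8_t  tag;                        /* high 6 address bits       */
    uint16_t data;                       /* 12-bit data word          */
} Line;

static Line cache[CACHE_WORDS];

static uint16_t main_memory_read(uint16_t addr)
{
    return (uint16_t)(addr & 0x0FFF);    /* stand-in for 32K x 12 memory */
}

static uint16_t cache_read(uint16_t addr)        /* addr is 15 bits   */
{
    unsigned index = addr & 0x1FF;               /* low 9 bits        */
    uint8_t  tag   = (uint8_t)(addr >> 9);       /* high 6 bits       */

    if (cache[index].valid && cache[index].tag == tag)
        return cache[index].data;                /* hit               */

    uint16_t data = main_memory_read(addr);      /* miss              */
    cache[index] = (Line){ 1, tag, data };       /* store tag + data  */
    return data;
}

int main(void)
{
    printf("0x%03X\n", cache_read(0x1234));      /* miss, then cached */
    printf("0x%03X\n", cache_read(0x1234));      /* hit               */
    return 0;
}
```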
Set-Associative Mapping

In set-associative mapping, each word of cache can store two or more words of memory under the same index address.

Each data word is stored together with its tag, and the number of tag-data items in one word of cache is said to form a set.

Each index address refers to two data words and their associated tags.
Set-Associative Mapping

Each tag requires 6 bits and each data word has 12 bits, so the cache word length is 2(6 + 12) = 36 bits.

An index address of 9 bits can accommodate 512 cache words, so the cache can accommodate 1024 words of main memory, since each cache word holds two data words.

When the CPU generates a memory request, the index value of the
address is used to access the cache.

The tag field of the CPU address is compared with both tags in the
cache.

The most common replacement algorithms are:

· Random replacement
· FIFO
· Least Recently Used (LRU)
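A minimal two-way set-associative sketch along the same lines (the stand-in main memory and the simple one-bit-per-set LRU flag are assumptions for illustration):

```c
#include <stdio.h>
#include <stdint.h>

#define SETS 512                         /* 2^9 index values          */
#define WAYS 2                           /* two tag-data pairs / set  */

typedef struct {
    int      valid;
    uint8_t  tag;                        /* 6-bit tag                 */
    uint16_t data;                       /* 12-bit data word          */
} Way;

static Way cache[SETS][WAYS];
static int lru[SETS];                    /* which way to replace next */

static uint16_t main_memory_read(uint16_t addr)
{
    return (uint16_t)(addr & 0x0FFF);    /* stand-in for 32K x 12 memory */
}

static uint16_t cache_read(uint16_t addr)
{
    unsigned index = addr & 0x1FF;
    uint8_t  tag   = (uint8_t)(addr >> 9);

    for (int w = 0; w < WAYS; w++)       /* compare against both tags */
        if (cache[index][w].valid && cache[index][w].tag == tag) {
            lru[index] = 1 - w;          /* the other way is now LRU  */
            return cache[index][w].data; /* hit                       */
        }

    int victim = lru[index];             /* miss: replace the LRU way */
    uint16_t data = main_memory_read(addr);
    cache[index][victim] = (Way){ 1, tag, data };
    lru[index] = 1 - victim;
    return data;
}

int main(void)
{
    cache_read(0x0123);                      /* index 0x123, tag 0    */
    cache_read(0x0123 | 0x0200);             /* same index, tag 1     */
    printf("0x%03X\n", cache_read(0x0123));  /* still a hit           */
    return 0;
}
```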
Writing into cache

When writing into cache, there are two methods that the system can use.

Write-through method (the simplest and most commonly used way): update main memory with every memory write operation, with cache memory being updated in parallel if it contains the word at the specified address.

This method has the advantage that main memory always contains the same data as the cache.

Write-back method
In this method only the cache location is updated during a write operation.

The location is then marked by a flag so that later when the word is
removed from the cache it is copied into main memory.

The reason for the write-back method is that during the time a word resides
in the cache, it may be updated several times.
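A hedged sketch contrasting the two policies on a single direct-mapped line (the layout and helper names are assumptions, not the slides' notation): write-through always updates main memory, while write-back only sets a dirty flag and copies the word out when the line is evicted.

```c
#include <stdio.h>
#include <stdint.h>

typedef struct {
    int      valid;
    int      dirty;                      /* write-back flag           */
    uint8_t  tag;                        /* high 6 address bits       */
    uint16_t data;                       /* 12-bit data word          */
} Line;

static uint16_t main_memory[1u << 15];   /* stand-in 32K x 12 memory  */

/* Write-through: update main memory on every write, and update the
   cache as well if it currently holds the addressed word.           */
static void write_through(Line *line, uint16_t addr, uint16_t value)
{
    main_memory[addr] = value;
    if (line->valid && line->tag == (uint8_t)(addr >> 9))
        line->data = value;
}

/* Write-back: if the word is in the cache, update only the cache and
   mark the line dirty; main memory is brought up to date on eviction. */
static void write_back(Line *line, uint16_t addr, uint16_t value)
{
    if (line->valid && line->tag == (uint8_t)(addr >> 9)) {
        line->data  = value;
        line->dirty = 1;                 /* copy out later            */
    } else {
        main_memory[addr] = value;       /* simple write-no-allocate  */
    }
}

static void evict(Line *line, unsigned index)
{
    if (line->valid && line->dirty)      /* flush the dirty word      */
        main_memory[((uint16_t)line->tag << 9) | index] = line->data;
    line->valid = line->dirty = 0;
}

int main(void)
{
    Line line = { 1, 0, (uint8_t)(0x0123 >> 9), 0x111 };
    write_through(&line, 0x0123, 0x222);             /* memory and cache  */
    write_back(&line, 0x0123, 0x333);                /* cache only, dirty */
    evict(&line, 0x0123 & 0x1FF);                    /* flush to memory   */
    printf("memory[0x0123] = 0x%03X\n", main_memory[0x0123]);  /* 0x333 */
    return 0;
}
```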
