
Unit 3 Memories and memory subsystem

INTRODUCTION
The memory subsystem is the place within an embedded system where instructions and data are stored. A major design concern is execution time, which is addressed by memory management (static/dynamic allocation of memory).

Classification of memory

MEMORY
- RAM: SRAM, DRAM, SDRAM
- ROM: PROM, EPROM, EEPROM, FLASH

Random Access Memory (RAM)


RAM is the main memory in most computers. Data in RAM can be both read and overwritten. It is classified as SRAM and DRAM; both are considered volatile, as their state is lost or reset when power is removed from the system.

In SRAM, a bit of data is stored using the state of a flip-flop. SRAM is more expensive to produce, but is generally faster and requires less power than DRAM; in modern computers it is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor-capacitor pair, which together form a memory cell. The capacitor holds a high or low charge (1 or 0, respectively), and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. Because this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory in modern computers.

Read Only Memory (ROM)


ROM stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. It is classified as: Mask ROM, PROM, EPROM, EEPROM, and FLASH.

- In mask ROM, the data is physically encoded in the circuit, so it can only be programmed during fabrication.
- PROM, invented in 1956, allowed users to program its contents exactly once by physically altering its structure with the application of high-voltage pulses.
- The 1971 invention of EPROM essentially solved this problem, since an EPROM can be repeatedly reset to its unprogrammed state by exposure to strong ultraviolet light (typically for 10 minutes or longer). Repeated exposure to UV light will eventually wear out an EPROM, but the endurance of most EPROM chips exceeds 1000 cycles of erasing and reprogramming.


- EEPROM, invented in 1983, can be programmed in place if the containing device provides a means to receive the program contents from an external source (for example, a personal computer via a serial cable). Writing is a very slow process and again needs a higher voltage (usually around 12 V).

- Flash memory, invented at Toshiba in the mid-1980s and commercialized in the early 1990s, is a form of EEPROM that makes very efficient use of chip area and can be erased and reprogrammed thousands of times without damage.

General memory interface


The first-level model of memory can be viewed as an array. Each location in the array has an index number (its address). On a read access, the binary-encoded address (index value) is decoded and the stored value is returned. On a write access, a new value is stored at the binary-encoded address (index value).

Common memory control signals


CS - enables a memory device for read/write operation. OE - output control of a memory device. R - indicates a READ operation on the memory device. W - indicates a WRITE operation on the memory device. RAS - indicates that the address inputs represent a row address in the memory device. CAS - indicates that the address inputs represent a column address in the memory device. (The overbars in the original table mark these signals as active low.)

CONTROL SIGNAL   MEANING
CS               Chip Select
OE               Output Enable
R                Read
W                Write
RAS              Row Address Strobe
CAS              Column Address Strobe

Chip organization

ROM timing diagram


Only a READ operation is possible. The user supplies the address; when CS is driven low, the data can be read after a certain access time (t_read).

SRAM timing diagram


Both READ and WRITE operations are possible.

SRAM CELL
A typical SRAM cell is made up of six MOSFETs. Each bit is stored on four transistors (M1, M2, M3, M4) that form two cross-coupled inverters. This storage cell has two stable states, which are used to denote 0 and 1. Two additional access transistors control access to the storage cell during read and write operations. Access to the cell is enabled by the word line (WL in the figure), which controls the two access transistors M5 and M6; these in turn control whether the cell is connected to the bit lines, BL and its complement. The bit lines are used to transfer data for both read and write operations.

SRAM cell

DRAM Timing Diagram

DRAM Cell

Terminology
Access time
Time to access a word in memory: the time taken for a read/write operation.

Blocks
Large quantities of data are transferred within a system in blocks; the block size is the number of words in a block.

Cycle time
Time interval from the start of one read/write operation until the start of the next. It is the measure of how quickly the memory can be repeatedly accessed.

Latency
Time required to compute the address of a sequence of words and then locate its first block in the memory.

Memory bandwidth
A measure of the word transmission rate to and from memory over the bus.

Block Access Time


Time required to find the 0th word of the block and then transfer the entire block.

Page
Collection of blocks.

Access time and cycle time

The Primary physical memory map


The figure lays out a representative address space from the top (0xFFFF) down to 0x0; the hex values are the region boundaries given in the figure:

0xFFFF
       Memory-mapped I/O and DMA; Firmware
0xE5FF
       Volatile RAM
0x68FF
       Non-volatile RAM (SRAM/DRAM); Stack space
0x4FF
0x3FF
       System memory
0x0

Memory subsystem architecture


The memory subsystem comprises a number of memory components of different kinds, sizes, and speeds, arranged hierarchically and designed to cooperate with each other. The purpose of building such a memory system is to reduce the memory access time, depending upon the application.

Cache memory: smallest, fastest, most expensive.

Main/primary memory: SRAM/DRAM.

Secondary memory: slowest, largest, least expensive.

Need for Cache memory


High-speed memories are expensive and complex to design, and the support circuitry for a high-speed memory device is likewise complex and expensive. Two main goals of a good embedded-system design are:
- To reduce the number of memory accesses.
- To reduce the latency of each memory access.

The cache is a smaller, faster memory that temporarily stores copies of block data and program instructions from main-memory locations. As long as most memory accesses hit in the cache, the average memory access latency stays low.

Most modern desktop and server CPUs have independent caches:
- an instruction cache (icache) to speed up executable instruction fetch, and
- a data cache (dcache) to speed up data fetch.

Sequential Locality of reference


85% of embedded software is written in Embedded C. IBM analyzed the flow of such programs and identified a phenomenon called sequential locality of reference: actual execution generally happens within a small window that moves forward through the program. The instructions in that window can be kept in a fast memory, so execution speed increases (memory access time is reduced) at a reduced cost, since there is no need for high-end support circuitry for a large high-speed memory. This concept breaks down if there are large loops or repeated branches outside the window.

Cache system architecture
