
I INTRODUCTION

Computer Memory, a mechanism that stores data for use by a computer.

In a computer all
data consist of numbers. A computer stores a number into a specific location in memory
and later fetches the value. Most memories represent data with the binary number system.
In the binary number system, numbers are represented by sequences of the two binary
digits 0 and 1, which are called bits (see Number Systems). In a computer, the two
possible values of a bit correspond to the on and off states of the computer's electronic
circuitry.

In memory, bits are grouped together so they can represent larger values. A group of
eight bits is called a byte and can represent decimal numbers ranging from 0 to 255. The
particular sequence of bits in the byte encodes a unit of information, such as a keyboard
character. One byte typically represents a single character such as a number, letter, or
symbol. Most computers operate by manipulating groups of 2, 4, or 8 bytes called words.
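
To make the grouping of bits into bytes and words concrete, the short Python sketch below
shows how one byte encodes a keyboard character and how several bytes can be combined
into a word; the variable names and the four-byte word size are chosen only for illustration.

    # One byte encodes a character as a number between 0 and 255.
    char = "A"
    value = ord(char)              # 65 for "A"
    bits = format(value, "08b")    # "01000001": the eight bits of the byte
    print(char, value, bits)

    # Grouping four bytes into a 32-bit word (byte order chosen arbitrarily here).
    word_bytes = bytes([0x12, 0x34, 0x56, 0x78])
    word_value = int.from_bytes(word_bytes, byteorder="big")
    print(hex(word_value))         # 0x12345678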

Memory capacity is usually quantified in terms of kilobytes, megabytes, and gigabytes.
Although the prefixes kilo-, mega-, and giga- are taken from the metric system, they
have a slightly different meaning when applied to computer memories. In the metric
system, kilo- means 1 thousand; mega-, 1 million; and giga-, 1 billion. When applied to
computer memory, however, the prefixes are measured as powers of two, with kilo-
meaning 2 raised to the 10th power, or 1,024; mega- meaning 2 raised to the 20th power,
or 1,048,576; and giga- meaning 2 raised to the 30th power, or 1,073,741,824. Thus, a
kilobyte is 1,024 bytes and a megabyte is 1,048,576 bytes. It is easier to remember that a
kilobyte is approximately 1,000 bytes, a megabyte is approximately 1 million bytes, and a
gigabyte is approximately 1 billion bytes.
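
The powers of two behind these prefixes are easy to verify; the following few lines of
Python are only a worked illustration of the figures quoted above.

    # Binary prefixes as powers of two versus their metric approximations.
    kilobyte = 2 ** 10   # 1,024 bytes
    megabyte = 2 ** 20   # 1,048,576 bytes
    gigabyte = 2 ** 30   # 1,073,741,824 bytes
    print(kilobyte, megabyte, gigabyte)
    print(round(megabyte / 1_000_000, 3))   # about 1.049, i.e. roughly 1 million bytes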

II HOW MEMORY WORKS

Computer memory may be divided into two broad categories known as internal memory
and external memory. Internal memory operates at the highest speed and can be accessed
directly by the central processing unit (CPU)—the main electronic circuitry within a
computer that processes information. Internal memory is contained on computer chips
and uses electronic circuits to store information (see Microprocessor). External memory
consists of storage on peripheral devices that are slower than internal memories but offer
lower cost and the ability to hold data after the computer’s power has been turned off.
External memory uses inexpensive mass-storage devices such as magnetic hard drives.
See also Information Storage and Retrieval.

Internal memory comes in two forms: random access memory (RAM) and read-only memory
(ROM). Information stored in RAM can be accessed in any order, and may be erased or
written over. Information stored in ROM may also be random-access, in that it may be
accessed in any order, but the information recorded on ROM is usually permanent and
cannot be erased or written over.

A Internal RAM
Random access memory is also called main memory because it is the primary memory
that the CPU uses when processing information. The electronic circuits used to construct
this main internal RAM can be classified as dynamic RAM (DRAM), synchronous
dynamic RAM (SDRAM), or static RAM (SRAM). DRAM, SDRAM, and SRAM all
involve different ways of using transistors and capacitors to store data. In DRAM or
SDRAM, the circuit for each bit consists of a transistor, which acts as a switch, and a
capacitor, a device that can store a charge. To store the binary value 1 in a bit, DRAM
places an electric charge on the capacitor. To store the binary value 0, DRAM removes
all electric charge from the capacitor. The transistor is used to switch the charge onto the
capacitor. When it is turned on, the transistor acts like a closed switch that allows electric
current to flow into the capacitor and build up a charge. The transistor is then turned off,
meaning that it acts like an open switch, leaving the charge on the capacitor. To store a 0,
the charge is drained from the capacitor while the transistor is on, and then the transistor
is turned off, leaving the capacitor uncharged. To read a value in a DRAM bit location, a
detector circuit determines whether a charge is present or absent on the relevant
capacitor.
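
The transistor-and-capacitor description can be pictured with a deliberately simplified
Python model of a single DRAM cell; the class and method names are invented for
illustration, since a real DRAM cell is an analog electronic circuit rather than software.

    # Toy model of one DRAM bit: a capacitor that holds charge and a transistor
    # that acts as a switch between the bit line and the capacitor.
    class DramCell:
        def __init__(self):
            self.charge = 0.0            # capacitor charge (1.0 = full, 0.0 = empty)
            self.transistor_on = False   # the transistor is an open switch by default

        def write(self, bit):
            self.transistor_on = True           # close the switch
            self.charge = 1.0 if bit else 0.0   # charge or drain the capacitor
            self.transistor_on = False          # open the switch, trapping the charge

        def read(self):
            # A detector circuit decides whether charge is present or absent.
            return 1 if self.charge > 0.5 else 0

    cell = DramCell()
    cell.write(1)
    print(cell.read())   # prints 1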

DRAM is called dynamic because it must be continually refreshed. The memory chips
themselves cannot hold values over long periods of time. Because capacitors are
imperfect, the charge slowly leaks out of them, which results in loss of the stored data.
Thus, a DRAM memory system contains additional circuitry that periodically reads and
rewrites each data value. This replaces the charge on the capacitors, a process known as
refreshing memory. The major difference between SDRAM and DRAM arises from the
way in which refresh circuitry is implemented. DRAM contains separate, independent circuitry
to refresh memory. The refresh circuitry in SDRAM is synchronized to use the same
hardware clock as the CPU. The hardware clock sends a constant stream of pulses
through the CPU’s circuitry. Synchronizing the refresh circuitry with the hardware clock
results in less duplication of electronics and better access coordination between the CPU
and the refresh circuits.
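
The need for refresh can be sketched in the same toy style: charge leaks out of the
capacitor over time, so refresh circuitry periodically reads each cell and rewrites it at
full strength. The leak rate and refresh interval below are made-up values used only to
show the idea.

    # Toy illustration of DRAM refresh: stored charge leaks and is periodically rewritten.
    LEAK_PER_TICK = 0.02      # fraction of charge lost per time step (illustrative value)

    def tick(charge):
        """One time step of leakage from an imperfect capacitor."""
        return max(0.0, charge - LEAK_PER_TICK)

    def refresh(charge):
        """Read the stored value and rewrite it at full strength."""
        return 1.0 if charge > 0.5 else 0.0

    charge = 1.0                       # a stored 1
    for step in range(1, 101):
        charge = tick(charge)
        if step % 20 == 0:             # the refresh circuitry runs periodically
            charge = refresh(charge)
    print(charge > 0.5)                # True: the stored 1 survives because of refresh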

In SRAM, the circuit for a bit consists of multiple transistors that hold the stored value
without the need for refresh. The chief advantage of SRAM lies in its speed. A computer
can access data in SRAM more quickly than it can access data in DRAM or SDRAM.
However, the SRAM circuitry draws more power and generates more heat than DRAM
or SDRAM. The circuitry for an SRAM bit is also larger, which means that an SRAM
memory chip holds fewer bits than a DRAM chip of the same size. Therefore, SRAM is
used when access speed is more important than large memory capacity or low power
consumption.

The time it takes the CPU to transfer data to or from memory is particularly important
because it determines the overall performance of the computer. The time required to read
or write one bit is known as the memory access time. Current DRAM and SDRAM
access times are between 30 and 80 nanoseconds (billionths of a second). SRAM access
times are typically about four times faster than those of DRAM.

The internal RAM on a computer is divided into locations, each of which has a unique
numerical address associated with it. In some computers a memory address refers directly
to a single byte in memory, while in others, an address specifies a group of four bytes
called a word. Computers also exist in which a word consists of two or eight bytes, or in
which a byte consists of six or ten bits.
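
The relationship between a byte address and a word address is a simple calculation; the
sketch below assumes a byte-addressed machine with four-byte words, as described above.

    # Mapping a byte address to a word address plus an offset within the word.
    WORD_SIZE = 4                        # bytes per word (assumption from the text)

    def word_address(byte_address):
        return byte_address // WORD_SIZE, byte_address % WORD_SIZE

    print(word_address(0))      # (0, 0): the first byte of word 0
    print(word_address(7))      # (1, 3): the last byte of word 1
    print(word_address(1026))   # (256, 2)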

When a computer performs an arithmetic operation, such as addition or multiplication,
the numbers used in the operation can be found in memory. The instruction code that tells
the computer which operation to perform also specifies which memory address or
addresses to access. An address is sent from the CPU to the main memory (RAM) over a
set of wires called an address bus. Control circuits in the memory use the address to
select the bits at the specified location in RAM and send a copy of the data back to the
CPU over another set of wires called a data bus. Inside the CPU, the data passes through
circuits called the data path to the circuits that perform the arithmetic operation. The
exact details depend on the model of the CPU. For example, some CPUs use an
intermediate step in which the data is first loaded into a high-speed memory device
within the CPU called a register.
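
The sequence just described can be sketched roughly in Python, with a list standing in for
main memory, an index standing in for the address sent over the address bus, and ordinary
variables standing in for CPU registers; none of these names correspond to real hardware
interfaces.

    # Toy model of a memory read feeding an arithmetic operation.
    ram = [0] * 1024          # main memory: 1,024 addressable locations
    ram[100] = 7              # operands previously stored in memory
    ram[101] = 35

    def memory_read(address):
        # Control circuits select the addressed location and return a copy of its value.
        return ram[address]

    register_a = memory_read(100)    # value returns over the data bus into a register
    register_b = memory_read(101)
    print(register_a + register_b)   # the data path feeds the adder: prints 42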

B Internal ROM

Read-only memory is the other type of internal memory. ROM is used to store
instructions that the computer needs to execute when it is first turned on. For example, the
ROM memory on a PC contains a basic set of instructions, called the basic input-output
system (BIOS). The PC uses BIOS to start up the operating system. BIOS is stored on
computer chips in a way that causes the information to remain even when power is turned
off.

Information in ROM is usually permanent and cannot be erased or written over easily. In a
permanent ROM, once the chip has been manufactured, the stored information can be retrieved
but not changed. Newer technologies allow ROMs to be
semi-permanent—that is, the information can be changed, but it takes several seconds to
make the change. For example, flash memory acts like a ROM because values
remain stored in memory, but the values can be changed.

C External Memory

External memory can generally be classified as either magnetic or optical, or a
combination called magneto-optical. A magnetic storage device, such as a computer's
hard drive, uses a surface coated with material that can be magnetized in two possible
ways. The surface rotates under a small electromagnet that magnetizes each spot on the
surface to record a 0 or 1. To retrieve data, the surface passes under a sensor that
determines whether the magnetism was set for a 0 or 1. Optical storage devices such as a
compact disc (CD) player use lasers to store and retrieve information from a plastic disk.
Magneto-optical memory devices use a combination of optical storage and retrieval
technology coupled with a magnetic medium.

C1 Magnetic Media

Memory stored on external magnetic media includes magnetic tape, hard disks, and
floppy disks. Magnetic tape is a form of external computer memory used primarily for
backup storage. Like the surface on a magnetic disk, the surface of tape is coated with a
material that can be magnetized. As the tape passes over an electromagnet, individual bits
are magnetically encoded. Computer systems using magnetic tape storage devices
employ machinery similar to that used with analog tape: open-reel tapes, cassette tapes,
and helical-scan tapes (similar to video tape).

Another form of magnetic memory uses a spinning disk coated with magnetic material.
As the disk spins, a sensitive electromagnetic sensor, called a read-write head, scans
across the surface of the disk, reading and writing magnetic spots in concentric circles
called tracks.

Magnetic disks are classified as either hard or floppy, depending on the flexibility of the
material from which they are made. A floppy disk is made of flexible plastic with small
pieces of a magnetic material embedded in its surface. The read-write head touches the
surface of the disk as it scans the floppy. A hard disk is made of rigid metal, with the
read-write head flying just above its surface on a cushion of air to prevent wear.

C2 Optical Media

Optical external memory uses a laser to scan a spinning reflective disk in which the
presence or absence of nonreflective pits in the disk indicates 1s or 0s. This is the same
technology employed in the audio CD. Because the disc's contents are permanently stored on it
when it is manufactured, it is known as compact disc read-only memory (CD-ROM). A
variation on the CD, called compact disc-recordable (CD-R), uses a dye that turns dark
when a stronger laser beam strikes it, and can thus have information written permanently
on it by a computer.

C3 Magneto-Optical Media

Magneto-optical (MO) devices write data to a disk with the help of a laser beam and a
magnetic write-head. To write data to the disk, the laser focuses on a spot on the surface
of the disk heating it up slightly. This allows the magnetic write-head to change the
physical orientation of small grains of magnetic material (actually tiny crystals) on the
surface of the disk. These tiny crystals reflect light differently depending on their
orientation. By aligning the crystals in one direction a 0 can be stored, while aligning the
crystals in the opposite direction stores a 1. Another, separate, low-power laser is used to
read data from the disk in a way similar to a standard CD-ROM. The advantage of MO
disks over CD-ROMs is that they can be both read from and written to. They are, however, more
expensive than CD-ROMs and are used mostly in industrial applications. MO devices are
not popular consumer products.

D Cache Memory
CPU speeds continue to increase much more rapidly than memory access times decrease.
The result is a growing gap in performance between the CPU and its main RAM memory.
To compensate for the growing difference in speeds, engineers add layers of cache
memory between the CPU and the main memory. A cache consists of a small, high-speed
memory system that holds recently used values. When the CPU makes a request to fetch
or store a memory value, the CPU sends the request to the cache. If the item is already
present in the cache, the cache can honor the request quickly because the cache operates
at higher speed than main memory. For example, if the CPU needs to add two numbers,
retrieving the values from the cache can take less than one-tenth as long as retrieving the
values from main memory. However, because the cache is smaller than main memory,
not all values can fit in the cache at one time. Therefore, if the requested item is not in the
cache, the cache must fetch the item from main memory.
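
The hit-or-miss behavior of a cache can be sketched in a few lines of Python; the
dictionary-based cache below, with its invented capacity and simplistic eviction rule, is a
conceptual model rather than a description of any particular hardware.

    # Toy lookup cache placed in front of a slower main memory.
    main_memory = {addr: addr * 2 for addr in range(1024)}   # stand-in data
    cache = {}
    CACHE_CAPACITY = 8          # the cache is far smaller than main memory

    def load(address):
        if address in cache:                   # cache hit: fast path
            return cache[address], "hit"
        value = main_memory[address]           # cache miss: fetch from slow memory
        if len(cache) >= CACHE_CAPACITY:       # make room (simplistic eviction)
            cache.pop(next(iter(cache)))
        cache[address] = value
        return value, "miss"

    print(load(5))   # (10, 'miss')  the first access must go to main memory
    print(load(5))   # (10, 'hit')   a repeated access is served by the cache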

Cache cannot replace conventional RAM because cache is much more expensive and
consumes more power. However, research has shown that even a small cache that can
store only 1 percent of the data stored in main memory still provides a significant
speedup for memory access. Therefore, most computers include a small, external memory
cache attached to their RAM. More important, multiple caches can be arranged in a
hierarchy to lower memory access times even further. In addition, most CPUs now have a
cache on the CPU chip itself. The on-chip internal cache is smaller than the external
cache, which is smaller than RAM. The advantage of the on-chip cache is that once a data
item has been fetched from the external cache, the CPU can use the item without having
to wait for an external cache access.
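
The benefit of such a hierarchy can be expressed as an average access time; the latencies
and hit rates below are round illustrative numbers, not measurements of any real system.

    # Average memory access time with an on-chip cache, an external cache, and RAM.
    L1_TIME, L2_TIME, RAM_TIME = 1, 10, 60    # nanoseconds (assumed values)
    L1_HIT, L2_HIT = 0.90, 0.95               # fraction of accesses satisfied at each level

    average = (L1_HIT * L1_TIME
               + (1 - L1_HIT) * L2_HIT * L2_TIME
               + (1 - L1_HIT) * (1 - L2_HIT) * RAM_TIME)
    print(round(average, 2))   # about 2.15 ns, far below the 60 ns RAM latency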

III DEVELOPMENTS AND LIMITATIONS

Since the inception of computer memory, the capacity of both internal and external
memory devices has grown steadily at a rate that leads to a quadrupling in size every
three years. Computer industry analysts expect this rapid rate of growth to continue
unimpeded. Computer engineers consider it possible to make multigigabyte memory
chips and disks capable of storing a terabyte (one trillion bytes) of data.
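
A quadrupling of capacity every three years compounds quickly, as the short calculation
below shows; the starting capacity and time span are arbitrary.

    # Compounding a quadrupling of memory capacity every three years.
    capacity_mb = 64                   # arbitrary starting point, in megabytes
    for years in range(0, 13, 3):
        print(years, "years:", capacity_mb * 4 ** (years // 3), "MB")
    # 0 years: 64 MB ... 12 years: 16384 MB (about 16 GB)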

Some computer engineers are concerned that silicon-based memory chips are
approaching a limit in the amount of data they can hold. However, it is expected that
transistors can be made at least four times smaller before inherent limits of physics make
further reductions difficult. Engineers also expect that the external dimensions of memory
chips will increase by a factor of four, meaning that larger amounts of memory will fit on
a single chip. Current memory chips use only a single layer of circuitry, but researchers
are working on ways to stack multiple layers onto one chip. Once all of these approaches
are exhausted, RAM memory may reach a limit. Researchers, however, are also exploring
more exotic technologies with the potential to provide even more capacity, including the
use of biotechnology to produce memories out of living cells. The memory in a computer
is composed of many memory chips. While current memory chips contain megabytes of
RAM, future chips will likely have gigabytes of RAM on a single chip. To add to RAM,
computer users can purchase memory cards that each contain many memory chips. In
addition, future computers will likely have advanced data transfer capabilities and
additional caches that enable the CPU to access memory faster.

IV HISTORY

Early electronic computers in the late 1940s and early 1950s used cathode ray tubes
(CRTs), similar to computer display screens, to store data. The coating on a CRT remains
lit for a short time after an electron beam strikes it. Thus, a pattern of dots could be
written on the CRT, representing 1s and 0s, and then be read back for a short time before
fading. Like DRAM, CRT storage had to be periodically refreshed to retain its contents.
A typical CRT held 128 bytes, and the entire memory of such a computer was usually 4
kilobytes.

International Business Machines Corporation (IBM) developed magnetic core memory in
the early 1950s. Magnetic core (often just called “core”) memory consisted of tiny rings of
magnetic material woven into meshes of thin wires. When the computer sent a current
through a pair of wires, the ring at their intersection became magnetized either clockwise
or counterclockwise (corresponding to a 0 or a 1), depending on the direction of the
current. Computer manufacturers first used core memory in production computers in the
1960s, at about the same time that they began to replace vacuum tubes with transistors.
Magnetic core memory was used through most of the 1960s and into the 1970s.

The next step in the development of computer memory came with the introduction of
integrated circuits, which enabled multiple transistors to be placed on one chip. Computer
scientists developed the first such memory when they constructed an experimental
supercomputer called Illiac-IV in the late 1960s. Integrated circuit memory quickly
displaced core and has been the dominant technology for internal memory ever since.
