
CACHE MEMORY

Cache memory, also called CPU memory, is random access memory (RAM) that a
computer microprocessor can access more quickly than it can access regular RAM.
This memory is typically integrated directly into the CPU chip or placed on a
separate chip that has a separate bus interconnect with the CPU.
The basic purpose of cache memory is to store program instructions that are
frequently re-referenced by software during operation. Fast access to these
instructions increases the overall speed of the software program.
As the microprocessor processes data, it looks first in the cache memory; if it finds
the instructions there (from a previous reading of data), it does not have to do a more
time-consuming reading of data from larger memory or other data storage devices.
Most programs use very few resources once they have been opened and operated for
a time, mainly because frequently re-referenced instructions tend to be cached. This
explains why measurements of system performance in computers with
slower processors but larger caches tend to be faster than measurements of system
performance in computers with faster processors but more limited cache space.
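The trade-off described above can be illustrated with the standard average memory access time (AMAT) formula, hit time plus miss rate times miss penalty. The numbers below are illustrative assumptions, not measurements:

```python
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """Expected cost of one memory access: the cache hit time, plus the
    fraction of accesses that miss times the cost of going to main memory."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Faster CPU with a small cache: cheap hits, but many misses to slow RAM.
fast_cpu_small_cache = amat(hit_time_ns=1.0, miss_rate=0.10, miss_penalty_ns=100.0)

# Slower CPU with a large cache: each hit costs more, but misses are rarer.
slow_cpu_large_cache = amat(hit_time_ns=2.0, miss_rate=0.02, miss_penalty_ns=100.0)

print(fast_cpu_small_cache)  # about 11 ns per access
print(slow_cpu_large_cache)  # about 4 ns per access
```

Despite the slower per-hit cost, the larger cache wins on average because the expensive trips to RAM are far less frequent.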
Multi-tier or multilevel caching has become popular
in server and desktop architectures, with different levels providing greater efficiency
through managed tiering. Simply put, the less frequently certain data or instructions
are accessed, the lower down the cache hierarchy the data or instructions are
written.
Cache memory levels explained
Cache memory is fast and expensive. Traditionally, it is categorized as "levels" that
describe its closeness and accessibility to the microprocessor:
Level 1 (L1) cache is extremely fast but relatively small, and is usually embedded in
the processor chip (CPU).

Level 2 (L2) cache is often more capacious than L1; it may be located on the CPU or
on a separate chip or coprocessor, with a high-speed alternative system bus
interconnecting the cache and the CPU so that it is not slowed by traffic on the
main system bus.

Level 3 (L3) cache is typically specialized memory that works to improve the
performance of L1 and L2. It can be significantly slower than L1 or L2, but is
usually double the speed of RAM. In the case of multicore processors, each core may
have its own dedicated L1 and L2 cache, but share a common L3 cache. When an
instruction is referenced in the L3 cache, it is typically elevated to a higher
cache tier.
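The lookup order and the promotion step described above can be sketched as follows; each level is modeled as a plain dict from address to data, which is an illustrative simplification, not how real hardware is organized:

```python
def lookup(address, l1, l2, l3, ram):
    """Search the levels nearest the CPU first; on an L3 hit, promote the
    entry to L1 so the next reference is faster (a simplified policy)."""
    if address in l1:
        return l1[address]
    if address in l2:
        return l2[address]
    if address in l3:
        l1[address] = l3[address]   # elevate to a higher tier on reference
        return l3[address]
    # Miss in every level: fetch from main memory and fill L1.
    l1[address] = ram[address]
    return ram[address]

l1, l2, l3 = {}, {}, {0x40: "instruction"}
ram = {0x40: "instruction", 0x80: "data"}
lookup(0x40, l1, l2, l3, ram)   # L3 hit: the value is copied up into L1
print(0x40 in l1)               # True -- the next reference hits L1 directly
```

A real cache would also need eviction when a level fills up; that bookkeeping is omitted here to keep the hierarchy traversal visible.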

Memory cache configurations


Caching configurations continue to evolve, but memory cache traditionally works
under three different configurations:

Direct mapping, in which each block is mapped to exactly one cache location.
Conceptually, this works like rows in a table with three columns: the data block or
cache line that contains the actual data fetched and stored, a tag that contains all
or part of the address of the fetched data, and a flag bit that indicates the
presence of valid data in the row entry.

Fully associative mapping is similar to direct mapping in structure, but allows a
block to be mapped to any cache location rather than to a pre-specified cache
location (as is the case with direct mapping).

Set associative mapping can be viewed as a compromise between direct mapping and
fully associative mapping, in which each block is mapped to a subset of cache
locations. It is sometimes called N-way set associative mapping, which provides for
a location in main memory to be cached to any of "N" locations in the L1 cache.
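The three placement policies can be contrasted with a short sketch; the cache size and associativity below are illustrative assumptions:

```python
NUM_LINES = 8          # total cache lines (illustrative)
WAYS = 2               # associativity for the set-associative case
NUM_SETS = NUM_LINES // WAYS

def direct_mapped_index(block_number):
    # Direct mapping: each block maps to exactly one line.
    return block_number % NUM_LINES

def set_associative_set(block_number):
    # Set associative: each block maps to one set, and may occupy
    # any of the WAYS lines within that set.
    return block_number % NUM_SETS

# Fully associative: a block may be placed in any of the NUM_LINES lines,
# so there is no index function -- the tag is compared against every line.

print(direct_mapped_index(13))   # 5: block 13 can only live in line 5
print(set_associative_set(13))   # 1: block 13 may use either way of set 1
```

Note the trade-off: direct mapping needs only one comparison per access but suffers conflicts when two hot blocks share a line, while higher associativity reduces conflicts at the cost of comparing more tags in parallel.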
Specialized caches
In addition to instruction and data caches, there are other caches designed to provide
specialized functions in a system. By some definitions, the L3 cache is a specialized
cache because of its shared design. Other definitions separate instruction caching
from data caching, referring to each as a specialized cache.
Other specialized memory caches include the translation lookaside buffer (TLB),
whose function is to record virtual address to physical address translations.
Still other caches are not, technically speaking, memory caches at all. Disk caches,
for example, may leverage RAM or flash memory to provide much the same kind of
data caching as memory caches do with CPU instructions. If data is frequently
accessed from disk, it is cached into DRAM or flash-based silicon storage technology
for faster access and response.
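A disk cache of this kind can be sketched in a few lines; the read_from_disk function below is a placeholder standing in for a slow device read, not a real API:

```python
cache = {}   # in-memory (DRAM) store of blocks already read from disk

def read_from_disk(block_id):
    # Placeholder for an expensive device read (assumed, for illustration).
    return f"data-for-{block_id}"

def cached_read(block_id):
    if block_id not in cache:          # miss: go to the slow device once...
        cache[block_id] = read_from_disk(block_id)
    return cache[block_id]             # ...then serve repeat reads from DRAM

cached_read(7)
print(7 in cache)   # True: a second read of block 7 never touches the disk
```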
In the video below, Dennis Martin, founder and president of Demartek LLC, explains
the pros and cons of using solid-state drives as cache and as primary storage.
Specialized caches also exist for such applications as web browsers, databases,
network address binding and client-side Network File System protocol support. These
types of caches might be distributed across multiple networked hosts to provide
greater scalability or performance to an application that uses them.
Increasing cache size
L1, L2 and L3 caches have been implemented in the past using a combination of
processor and motherboard components. Recently, the trend has been toward
consolidating all three levels of memory caching on the CPU itself. For this reason,
the primary means of increasing cache size has begun to shift from acquiring a
specific motherboard with different chipsets and bus architectures to buying the
right CPU with the right amount of integrated L1, L2 and L3 cache.
Contrary to popular belief, implementing flash or greater amounts of DRAM on a
system does not increase cache memory. This can be confusing, since the
term memory caching (hard disk buffering) is often used interchangeably with cache
memory. The former, using DRAM or flash to buffer disk reads, is intended to improve
storage I/O by caching frequently referenced data in a buffer ahead of slower
magnetic disk or tape. Cache memory, by contrast, provides read buffering for the
CPU.

