Cache mapping
Commonly used methods:
- Associative mapped cache
- Direct-mapped cache
- Set-associative mapped cache
Direct-Mapped Cache
Each cache slot corresponds to a specific set of main memory blocks. In our example we have 2²⁷ memory blocks and 2¹⁴ cache slots, so a total of 2²⁷ / 2¹⁴ = 2¹³ main memory blocks
can be mapped onto each cache slot.
The 32-bit main memory address is partitioned into a 13-bit tag field, followed by a 14-bit slot field, followed by a five-bit word field.
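This partitioning is easy to express as shift-and-mask arithmetic. The following sketch (my own illustration, not code from the text) splits a 32-bit address into the three fields:

```python
def split_address(addr):
    """Split a 32-bit address into (tag, slot, word) fields.

    Layout assumed from the text: 13-bit tag | 14-bit slot | 5-bit word.
    """
    word = addr & 0x1F            # low 5 bits select the word within the block
    slot = (addr >> 5) & 0x3FFF   # next 14 bits select one of 2**14 slots
    tag = addr >> 19              # top 13 bits are the tag
    return tag, slot, word

print([hex(x) for x in split_address(0xA035F014)])
# → ['0x1406', '0x2f80', '0x14']
```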
When a reference is made to a main memory address, the slot field identifies in which of the 2¹⁴ slots the block will be found, if it is in the cache. If the valid bit for that slot is 1, then the tag field of the referenced address is compared with the tag field stored in the slot.
Consider how an access to memory location (A035F014)₁₆ is mapped to the cache. If the addressed word is in the cache, it will be found in word (14)₁₆ of slot (2F80)₁₆, which will have a tag of (1406)₁₆.
Advantages
The tag memory is much smaller than in an associative mapped cache. There is no need for an associative search, since the slot field directs the comparison to a single slot's tag.
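The size difference is easy to quantify: in an associative mapped cache each slot's tag must hold the full 27-bit block number, whereas here each tag is only 13 bits. A back-of-the-envelope sketch, assuming the section's 2¹⁴-slot cache:

```python
SLOTS = 2 ** 14

direct_tag_bits = SLOTS * 13       # 13-bit tag per slot (direct-mapped)
associative_tag_bits = SLOTS * 27  # full 27-bit block number per slot

print(direct_tag_bits, associative_tag_bits)  # 212992 442368
```

The direct-mapped tag store is smaller by the ratio 13/27, a bit less than half.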
Disadvantages
Consider what happens when a program references locations that are 2¹⁹ words apart, which is the size of the cache. Every memory reference will result in a miss, which will cause an entire block to be read into the cache even though only a single word is used.
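This worst case can be reproduced with the field layout above: two addresses exactly 2¹⁹ words apart land in the same slot but carry different tags, so each access evicts the other. A sketch of mine (the starting address is an arbitrary example value):

```python
def slot_and_tag(addr):
    # 5-bit word field, 14-bit slot field, 13-bit tag field
    return (addr >> 5) & 0x3FFF, addr >> 19

a = 0x00001000    # arbitrary word address (example value)
b = a + 2 ** 19   # exactly one full cache size away

slot_a, tag_a = slot_and_tag(a)
slot_b, tag_b = slot_and_tag(b)

# Same slot, different tags: alternating accesses to a and b miss every time.
print(slot_a == slot_b, tag_a == tag_b)  # True False
```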
Cache Performance
Cache read and write policies
There is no simple answer as to which cache read or write policies are best. The organization of a cache is optimized for each computer architecture and for the mix of programs that the computer executes.