In modern computing, cache memory is the unsung hero bridging the speed gap between blazing-fast CPUs and comparatively slower RAM. But how does it actually work? And what’s all this about associativity, ways, and hit rates? If you’re ready to geek out, grab a notebook and let’s dive in.
What is Cache Memory?
Cache memory is a small, high-speed storage located close to the CPU. Its job is to store frequently accessed data, reducing the time needed to fetch it from slower main memory (RAM). In technical terms, cache reduces the memory latency and improves system performance.
Key Parameters of Cache Design
- Cache Size (C): Total size of the cache in bytes (or words).
- Block Size (B): The size of a cache block (line), typically measured in bytes.
- Number of Lines (L): Calculated as L = C / B.
- Associativity (k): Determines the organization of blocks in the cache.
  - Direct-mapped (k = 1)
  - Fully associative (k = L)
  - Set-associative (k = n, where n > 1)
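As a quick sanity check, the line and set counts follow directly from these parameters. A minimal Python sketch (the helper name `cache_geometry` is just for illustration):

```python
def cache_geometry(cache_size, block_size, ways):
    """Derive the number of lines (L = C / B) and sets (L / k)."""
    lines = cache_size // block_size
    sets = lines // ways
    return lines, sets

# A 64 KB cache with 64-byte blocks and 4 ways:
print(cache_geometry(64 * 1024, 64, 4))  # (1024, 256)
```

Note that a direct-mapped cache (k = 1) has as many sets as lines, while a fully associative cache has a single set containing every line.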
Address Mapping in Cache
When the CPU generates a memory address, the cache determines where to store or fetch the corresponding data. Let's break down a 32-bit address into its components for a k-way set-associative cache:
- Tag: Identifies if the block in the cache corresponds to the requested address.
- Index: Determines which cache set to look in.
- Block Offset: Specifies the exact byte in the block.
Example Calculation
Let’s assume:
- Cache size (C) = 64 KB
- Block size (B) = 64 bytes
- 4-way set associativity (k = 4)

Number of Cache Lines (L): L = C / B = (64 × 1024) / 64 = 1024 lines.

Number of Sets: Sets = L / k = 1024 / 4 = 256 sets.

Bits for Index, Offset, and Tag:
- Block Offset: log2(B) = log2(64) = 6 bits.
- Index: log2(Sets) = log2(256) = 8 bits.
- Tag: 32 − (Index + Block Offset) = 32 − (8 + 6) = 18 bits.
Thus, a 32-bit address is divided as:
[Tag: 18 bits | Index: 8 bits | Offset: 6 bits]
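With this 18/8/6 split, extracting each field is a matter of shifts and masks. A short Python sketch (the function name `split_address` is made up for illustration):

```python
OFFSET_BITS = 6   # log2(64-byte block)
INDEX_BITS = 8    # log2(256 sets)

def split_address(addr):
    """Split a 32-bit address into (tag, index, offset) fields."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

tag, index, offset = split_address(0x12345678)
print(tag, index, offset)  # 18641 89 56
```

Reassembling the fields (tag << 14 | index << 6 | offset) recovers the original address, which is a handy check that the bit widths are consistent.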
Cache Associativity and Performance
- Direct-Mapped Cache: Each block from main memory maps to exactly one line in the cache. It’s simple but prone to conflict misses.
Example: For k = 1, if blocks 0x1000 and 0x2000 map to the same line, accessing both alternately leads to frequent replacements.
- Fully Associative Cache: Any block can occupy any line. This minimizes conflict misses but increases search time and hardware complexity.
- Set-Associative Cache: A middle ground where blocks are mapped to a specific set but can occupy any line within that set. Common values for k are 2, 4, and 8.
- Hit Time: Higher k increases the complexity of searching within a set.
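The conflict-miss behaviour of a direct-mapped cache is easy to demonstrate with a toy simulator. This sketch assumes a deliberately small 64-line cache so that blocks 0x1000 and 0x2000 collide on the same line (in the 1024-line cache from the earlier example they would not):

```python
BLOCK = 64    # block size in bytes
LINES = 64    # deliberately small so 0x1000 and 0x2000 share a line

cache = [None] * LINES  # tag currently held by each line
hits = misses = 0

def access(addr):
    global hits, misses
    block = addr // BLOCK
    line = block % LINES      # direct-mapped: exactly one candidate line
    tag = block // LINES
    if cache[line] == tag:
        hits += 1
    else:
        misses += 1
        cache[line] = tag     # evict whatever was there before

# Alternate between two blocks that map to the same line.
for _ in range(10):
    access(0x1000)
    access(0x2000)

print(hits, misses)  # 0 20: every access evicts the other block
```

With any associativity of 2 or more, both blocks could live in the same set simultaneously and all accesses after the first two would hit.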
Cache Performance Metrics
- Hit Rate (HR): HR = Cache Hits / Total Accesses. A high hit rate improves performance.
- Average Memory Access Time (AMAT): AMAT = Hit Time + Miss Rate × Miss Penalty, where:
  - Hit time: Time to fetch data from cache.
  - Miss penalty: Time to fetch data from RAM.
Example: If:
- Hit time = 2 ns
- Miss penalty = 50 ns
- Hit rate = 95% (so the miss rate is 5%)

Then AMAT = 2 + 0.05 × 50 = 4.5 ns.
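Plugging these numbers into the AMAT formula is straightforward; a quick check in Python:

```python
hit_time = 2.0       # ns, time to fetch from cache
miss_penalty = 50.0  # ns, time to fetch from RAM
miss_rate = 0.05     # 1 - 0.95 hit rate

# AMAT = Hit Time + Miss Rate x Miss Penalty
amat = hit_time + miss_rate * miss_penalty
print(amat)  # ~4.5 ns
```

Even a 5% miss rate more than doubles the effective access time versus a pure hit, which is why small improvements in hit rate pay off so heavily.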