Cache Performance & Mapping

The lecture on 'Advanced Computer Architecture' discusses the role of cache memory in processing, including concepts like cache hits and misses, and how performance is measured using hit ratio. It explains cache mapping techniques, including direct mapping, fully associative mapping, and K-way set associative mapping, detailing how main memory blocks are allocated to cache lines. The document emphasizes the importance of cache performance optimization through various strategies.


Lecture on "Advanced Computer Architecture – TCS 704"

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
GRAPHIC ERA DEEMED TO BE UNIVERSITY – 248002
When the processor needs to read or write a location in main memory, it first checks for a corresponding entry in the cache.

• If the processor finds that the memory location is in the cache, a cache hit has occurred and the data is read from the cache.

• If the processor does not find the memory location in the cache, a cache miss has occurred. For a cache miss, the cache allocates a new entry and copies in data from main memory, then the request is fulfilled from the contents of the cache.
 The performance of cache memory is frequently measured in terms of a quantity called the hit ratio:

Hit ratio = hits / (hits + misses) = no. of hits / total accesses

 Cache performance can be improved by using a larger cache block size and higher associativity, and by reducing the miss rate, the miss penalty, and the time to hit in the cache.
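As a minimal worked sketch of the hit-ratio formula above (not taken from the lecture), the following C program computes the hit ratio from assumed hit and miss counts and, under the simple assumption that a miss costs one main-memory access followed by a cache access, an effective access time. All numbers are made up for the example.

```c
#include <stdio.h>

int main(void) {
    /* Assumed counts and access times, chosen only for illustration. */
    unsigned long hits = 970, misses = 30;
    double cache_time  = 10.0;   /* assumed cache access time in ns       */
    double memory_time = 100.0;  /* assumed main-memory access time in ns */

    /* Hit ratio = hits / (hits + misses) */
    double hit_ratio = (double)hits / (double)(hits + misses);

    /* Simple model: a hit costs one cache access; a miss costs a
     * main-memory access to fetch the block plus the cache access. */
    double effective_time = hit_ratio * cache_time
                          + (1.0 - hit_ratio) * (memory_time + cache_time);

    printf("Hit ratio             = %.3f\n", hit_ratio);         /* 0.970   */
    printf("Effective access time = %.1f ns\n", effective_time); /* 13.0 ns */
    return 0;
}
```

With these made-up figures, 970 hits out of 1000 accesses give a hit ratio of 0.97, and the effective access time works out to 13 ns, which shows how strongly a small miss rate is amplified by a large miss penalty.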
Cache mapping defines how a block from the main memory is mapped to the cache memory in case of a cache miss.

OR

Cache mapping is a technique by which the contents of main memory are brought into the cache memory.
The mapping process is organized as follows-

 Main memory is divided into equal-size partitions called blocks or frames.

 Cache memory is divided into partitions of the same size as the blocks, called lines.

 During cache mapping, a block of main memory is simply copied to the cache; the block is not moved out of main memory.

Note: Main memory is divided into blocks whose size is equal to the size of a cache line, as the short sketch after this list illustrates.
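A minimal sketch of this division, using assumed sizes (64 KB main memory, 2 KB cache, 16-byte blocks/lines) that are not from the lecture:

```c
#include <stdio.h>

int main(void) {
    /* Assumed sizes, chosen only for illustration. */
    unsigned long main_memory_bytes = 64 * 1024;  /* 64 KB main memory */
    unsigned long cache_bytes       = 2 * 1024;   /* 2 KB cache        */
    unsigned long block_bytes       = 16;         /* block size = line size */

    unsigned long num_blocks = main_memory_bytes / block_bytes;
    unsigned long num_lines  = cache_bytes / block_bytes;

    printf("Main memory blocks: %lu\n", num_blocks); /* 4096 blocks */
    printf("Cache lines:        %lu\n", num_lines);  /* 128 lines   */
    return 0;
}
```

Because main memory holds far more blocks than the cache has lines, many blocks must compete for each line, which is exactly what the mapping techniques below resolve.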
Cache mapping is performed using the following three different techniques-

1. Direct Mapping
2. Fully Associative Mapping
3. K-way Set Associative Mapping
In direct mapping,

 A particular block of main memory can map only to a particular line of the cache.

 The cache line number to which a particular block can map is given by-

Cache line number = (Main memory block address) modulo (Number of lines in cache)
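A minimal sketch of this formula in C; the cache size (8 lines) and the block addresses are assumed values, chosen so that some blocks collide on the same line:

```c
#include <stdio.h>

int main(void) {
    unsigned int num_cache_lines = 8;  /* assumed number of cache lines */
    unsigned int blocks[] = {0, 5, 8, 13, 21};

    for (int i = 0; i < 5; i++) {
        /* Cache line number = (block address) modulo (number of lines) */
        unsigned int line = blocks[i] % num_cache_lines;
        printf("Main memory block %2u -> cache line %u\n", blocks[i], line);
    }
    return 0;
}
```

Note that blocks 5, 13 and 21 all map to line 5, and blocks 0 and 8 both map to line 0, so a newly fetched block may have to displace an earlier one from its line.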
In direct mapping, the physical address is divided into three fields: the tag, the cache line number, and the block/byte offset.
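The following C sketch shows how such an address could be split into those fields; the geometry (16-byte lines, 8 cache lines, 16-bit addresses) and the example address are assumptions made only for illustration:

```c
#include <stdio.h>

int main(void) {
    unsigned int address = 0x1A2C;  /* an arbitrary example physical address */

    /* Assumed geometry: 16-byte lines -> 4 offset bits, 8 lines -> 3 line bits. */
    unsigned int offset_bits = 4;
    unsigned int line_bits   = 3;

    unsigned int offset = address & ((1u << offset_bits) - 1);
    unsigned int line   = (address >> offset_bits) & ((1u << line_bits) - 1);
    unsigned int tag    = address >> (offset_bits + line_bits);

    printf("address 0x%04X -> tag 0x%X, line %u, offset %u\n",
           address, tag, line, offset);  /* tag 0x34, line 2, offset 12 */
    return 0;
}
```

The line-number field selects the single cache line the block can occupy, and the stored tag is what the cache compares against on each access to decide hit or miss.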
In direct mapping,

 There is no need for any replacement algorithm.

 This is because a main memory block can map only to a particular line of the cache.

 Thus, the new incoming block will always replace the existing block (if any) in that particular line, as the sketch below illustrates.
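Here is a small C sketch of this behaviour (the cache size and the block sequence are assumed, not from the lecture): an incoming block has exactly one candidate line, so on a miss it simply overwrites whatever tag is stored there, with no replacement policy to consult.

```c
#include <stdio.h>

#define NUM_LINES 8  /* assumed number of cache lines */

int main(void) {
    unsigned int line_tag[NUM_LINES];
    int line_valid[NUM_LINES] = {0};

    /* Blocks 3, 11 and 19 all map to line 3 (their addresses modulo 8). */
    unsigned int blocks[] = {3, 11, 3, 19};

    for (int i = 0; i < 4; i++) {
        unsigned int line = blocks[i] % NUM_LINES;
        unsigned int tag  = blocks[i] / NUM_LINES;

        if (line_valid[line] && line_tag[line] == tag) {
            printf("block %2u -> line %u: hit\n", blocks[i], line);
        } else {
            printf("block %2u -> line %u: miss, incoming block replaces line contents\n",
                   blocks[i], line);
            line_tag[line]   = tag;   /* overwrite: only one possible line */
            line_valid[line] = 1;
        }
    }
    return 0;
}
```

In this sequence every access misses, because the three blocks keep evicting one another from line 3; this is the conflict behaviour that higher associativity is meant to reduce.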
