Summary of "Computer Organization & Architecture (COA) 02 | Cache Memory | CS & IT | GATE 2025 Crash Course"
Main Ideas and Concepts
Introduction to Cache Memory
- Cache Memory is a crucial topic in Computer Organization & Architecture (COA) and frequently appears in GATE exams.
- The course aims to cover COA basics and Cache Memory comprehensively in a few days.
Memory Hierarchy
- Memory in a computer is organized hierarchically to balance speed, cost, and size.
- Levels include:
- Registers and Cache Memory (small, very fast, inside CPU)
- Main Memory (RAM/ROM) (larger but slower)
- Secondary Storage (HDD, SSD, tape drives for permanent storage)
- Larger memories are slower and cheaper; smaller memories are faster and costlier.
- The CPU is very fast and needs quick data access to avoid waiting, hence the need for cache.
Why Cache Memory?
- The CPU executes instructions sequentially, fetching data from memory.
- Main Memory is slower compared to CPU speed, causing delays.
- Cache Memory acts as a small, fast memory between CPU and Main Memory to reduce access time.
- Only frequently or recently used data/instructions are stored in cache (principle of locality).
Locality of Reference
- Temporal locality: Recently accessed data/instructions are likely to be accessed again soon.
- Spatial locality: Data/instructions near recently accessed locations are likely to be accessed soon.
- Cache uses this principle to anticipate future CPU requests and pre-load data.
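Both kinds of locality show up in even the simplest loop. The sketch below is illustrative (not from the lecture) and just annotates where each kind of locality appears:

```python
# Illustrative sketch: a typical summing loop exhibits both kinds of
# locality that a cache exploits.

data = list(range(16))   # imagine these are consecutive memory words

total = 0
for i in range(len(data)):
    total += data[i]     # spatial locality: data[i+1] sits right next to data[i]
    # temporal locality: 'total' and 'i' are touched on every iteration,
    # so they stay hot in the cache (or in registers)

print(total)  # sum of 0..15 = 120
```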
Cache Memory Operation
- Cache stores copies of data from Main Memory.
- When CPU needs data:
- It first checks cache (cache hit if found).
- If not found (cache miss), it fetches from Main Memory and loads a block of data (including surrounding data) into cache.
- Cache Memory reduces average memory access time.
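The hit/miss flow above can be sketched as a tiny direct-mapped cache simulator. This is a minimal sketch with made-up parameters (`BLOCK_SIZE`, `NUM_LINES` are my own choices, not from the lecture); note how a miss loads the whole surrounding block, which is what makes spatial locality pay off:

```python
# Minimal direct-mapped cache sketch (illustrative parameters).
BLOCK_SIZE = 4   # words per block
NUM_LINES  = 8   # cache lines

cache = {}       # line index -> block number currently stored there
hits = misses = 0

def access(addr):
    global hits, misses
    block = addr // BLOCK_SIZE     # which block the address belongs to
    line  = block % NUM_LINES      # which cache line that block maps to
    if cache.get(line) == block:
        hits += 1                  # cache hit: data already present
    else:
        misses += 1                # cache miss: fetch from Main Memory
        cache[line] = block        # load the surrounding block into cache

for a in [0, 1, 2, 3, 0, 64]:      # addresses 0-3 share one block
    access(a)

print(hits, misses)  # 4 2: one miss loads the block, the next four accesses hit
```

The first access to address 0 misses, but it pulls in the whole block, so the later accesses to 1, 2, 3, and 0 all hit.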
Cache Hit, Miss, and Hit Ratio
- Cache Hit: Requested data found in cache; fast access.
- Cache Miss: Data not in cache; slower access from Main Memory.
- Hit Ratio: Fraction of accesses that result in cache hits.
- Miss Ratio: Fraction of accesses that result in cache misses.
- Formulas:
- Hit Ratio = Number of Hits / Total Memory References
- Miss Ratio = Number of Misses / Total Memory References
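The two formulas above can be checked with made-up numbers (the counts here are assumptions for illustration):

```python
# Sketch of the hit/miss ratio formulas with assumed counts.
total_refs = 1000
hits = 950
misses = total_refs - hits

hit_ratio  = hits / total_refs      # Number of Hits / Total Memory References
miss_ratio = misses / total_refs    # Number of Misses / Total Memory References

# The two ratios always sum to 1, since every reference is a hit or a miss.
print(hit_ratio, miss_ratio)
```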
Average Memory Access Time (AMAT)
- Without cache: AMAT = Main Memory Access Time.
- With cache: AMAT = (Hit Ratio × Cache Access Time) + (Miss Ratio × Miss Penalty).
- Miss penalty includes time to access Main Memory plus any block transfer time.
- Two access models:
- Hierarchical (serial) access: cache is checked first; only on a miss is Main Memory accessed, so a miss costs cache time plus Main Memory time.
- Simultaneous (parallel) access: cache and Main Memory are accessed at the same time; a miss costs only the Main Memory time.
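A worked AMAT comparison of the two models, with assumed times (standard texts call the serial model "hierarchical" and the parallel one "simultaneous"; the numbers below are made up for illustration):

```python
# Worked AMAT sketch for both access models (assumed values, times in ns).
H  = 0.9    # hit ratio
Tc = 10     # cache access time
Tm = 100    # Main Memory access time

# Serial (hierarchical): on a miss you pay the cache lookup AND Main Memory.
amat_serial = H * Tc + (1 - H) * (Tc + Tm)

# Parallel (simultaneous): both are probed at once, so a miss costs only Tm.
amat_parallel = H * Tc + (1 - H) * Tm

print(round(amat_serial, 2), round(amat_parallel, 2))  # 20.0 19.0
```

Exam questions often hinge on which model is intended, since the two formulas differ only in whether the cache time is charged on a miss.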
Cache Memory Write Policies
- Write-Through Cache:
- CPU writes data to both cache and Main Memory simultaneously.
- Ensures data consistency but is slower due to multiple writes.
- Write-Back Cache:
- CPU writes data only to cache initially.
- Modified cache blocks (dirty blocks) are written back to Main Memory only when replaced.
- Faster, but Main Memory may temporarily hold stale data until the dirty block is written back.
- Write Allocate vs. No Write Allocate:
- On write miss, write allocate loads the block into cache before writing.
- No write allocate writes directly to Main Memory without loading cache.
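The practical difference between the two write policies can be seen by counting Main Memory writes for a burst of writes to the same block. This is an illustrative sketch with a made-up scenario, not a simulation from the lecture:

```python
# Sketch contrasting write policies by counting Main Memory writes
# for repeated CPU writes to the same cached block (scenario assumed).

cpu_writes_to_block = 10

# Write-through: every CPU write also goes to Main Memory immediately.
wt_mem_writes = cpu_writes_to_block

# Write-back: the dirty block is written to Main Memory only once,
# when it is eventually evicted from the cache.
wb_mem_writes = 1

print(wt_mem_writes, wb_mem_writes)  # 10 1
```

This is why write-back is faster for write-heavy workloads, at the cost of the temporary inconsistency noted above.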
Cache Block Replacement
- When cache is full, new blocks replace existing ones.
- If replaced block is dirty (modified), it must be written back to Main Memory in write-back policy.
- Replacement policies affect cache efficiency and consistency.
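One common replacement policy is LRU (least recently used). The lecture summary does not single out a specific policy, so treat this as an illustrative choice; `OrderedDict` keeps blocks in recency order, making eviction a one-liner:

```python
# LRU replacement sketch using OrderedDict (illustrative policy choice).
from collections import OrderedDict

CAPACITY = 2
cache = OrderedDict()   # block number -> data, least recently used first

def access(block):
    if block in cache:
        cache.move_to_end(block)              # hit: mark as most recently used
        return "hit"
    if len(cache) >= CAPACITY:
        victim, _ = cache.popitem(last=False) # evict the least recently used
        # in a write-back cache, a dirty victim would be written to
        # Main Memory at this point
    cache[block] = None                       # load the new block
    return "miss"

results = [access(b) for b in [1, 2, 1, 3, 2]]
print(results)  # ['miss', 'miss', 'hit', 'miss', 'miss']
```

Accessing block 1 again before loading block 3 keeps it resident, so block 2 is the one evicted.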
Practical Examples and Analogies
- Tea-making analogy to explain caching: small box of jaggery (cache) vs. big storage box (Main Memory).
- Classroom analogy to explain locality of reference.
Exam Preparation Tips
- Understand key formulas for hit ratio, miss ratio, and average memory access time.
- Recognize question cues for hierarchical vs. simple access in exams.
- Practice problems involving cache hit/miss times and speedup calculations.
- Revise write policies and their implications thoroughly as questions frequently arise from these topics.
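A typical speedup question combines the AMAT formula with the no-cache baseline. The numbers below are assumptions chosen for a clean worked example:

```python
# Worked speedup sketch for a typical exam-style question (values assumed).
H, Tc, Tm = 0.95, 2, 50   # hit ratio, cache time, Main Memory time (ns)

# Serial/hierarchical model: a miss pays cache time plus Main Memory time.
amat_with_cache = H * Tc + (1 - H) * (Tc + Tm)

# Without a cache, every access costs the full Main Memory time.
speedup = Tm / amat_with_cache

print(round(amat_with_cache, 2), round(speedup, 2))  # 4.5 11.11
```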
Methodology / Instructions for Cache Memory Concepts
- Memory Hierarchy Understanding:
- Know the speed, size, and cost trade-offs of Registers, Cache Memory, Main Memory, and secondary storage.
- Cache Operation:
- Always check cache first.
- If miss, fetch the required block from Main Memory and load it into the cache before supplying the data to the CPU.
Category: Educational