Thursday, October 13, 2016

Caches

First of all, let’s understand: what is a cache? In plain computer science terms, a cache is a small buffer of pages maintained in order to avoid more expensive main-memory accesses.

The cache is usually present on the CPU chip itself, while main memory (RAM) sits on the motherboard and is connected to the CPU.
Because the cache is closer to the CPU, it is much faster than RAM: every read from main memory has to travel across the bus to the CPU, while the CPU cache is right there on the chip.

Cache memory is also more expensive per byte than main memory.

Why have another, temporary memory when we already have cheap and large main memory?


Mainly to improve speed. The cache is there to reduce the average memory access time for the CPU.
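The benefit can be quantified with the standard average memory access time formula: AMAT = hit time + miss rate × miss penalty. The sketch below uses illustrative, assumed latencies, not measurements:

```java
public class Amat {
    // Average memory access time (AMAT) = hit time + miss rate * miss penalty.
    static double amat(double hitTimeNs, double missRate, double missPenaltyNs) {
        return hitTimeNs + missRate * missPenaltyNs;
    }

    public static void main(String[] args) {
        // Assumed numbers: 1 ns cache hit, 100 ns main-memory access, 5% miss rate.
        double withCache = amat(1.0, 0.05, 100.0); // 6.0 ns on average
        double noCache = 100.0;                    // every access goes to RAM
        System.out.println(withCache + " ns vs " + noCache + " ns");
    }
}
```

Even with a modest hit rate, the average access time drops by more than an order of magnitude, which is why the extra cost of cache memory pays off.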

When the CPU needs some data from memory, the cache is checked first; if the data is available in the cache, it is fetched from there and no main-memory read needs to be performed.
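This check-cache-first read path can be sketched as follows. The names and the "main memory" stand-in are hypothetical; a `HashMap` plays the role of the cache:

```java
import java.util.HashMap;
import java.util.Map;

public class CacheLookup {
    private final Map<Integer, String> cache = new HashMap<>();
    private int memoryReads = 0; // counts expensive main-memory accesses

    // Hypothetical stand-in for an expensive main-memory read.
    private String readFromMainMemory(int address) {
        memoryReads++;
        return "data@" + address;
    }

    public String read(int address) {
        String value = cache.get(address);
        if (value != null) {
            return value; // cache hit: no main-memory read performed
        }
        value = readFromMainMemory(address); // cache miss: fall back to RAM
        cache.put(address, value);           // keep it for the next access
        return value;
    }

    public int memoryReads() { return memoryReads; }
}
```

Reading the same address twice triggers only one main-memory access; the second read is served entirely from the cache.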

Caches are faster than main memory; however, they are much smaller. Therefore, pages inevitably get swapped in and out of the cache. If a page is not found in the cache and main memory has to be accessed to fetch it, that is a cache miss. The page is then brought into the cache, and the next time it is accessed, it is served from the cache.

What if there is no space left in the cache when a cache miss occurs? The new page has to be swapped with one of the pages already in the cache. How do we decide which page goes out of the cache, so that the increase in cache misses is minimal? There are many approaches (eviction policies) for deciding which page leaves the cache to make space for the new one, such as First In First Out (FIFO), Least Recently Used (LRU), and Least Frequently Used (LFU).
What is a Least Recently Used cache?

In the ‘First In First Out’ approach, the OS selects the page that has been in the cache longest and swaps it out for the new page. In the ‘Least Recently Used’ approach, the OS selects the page that has not been accessed for the longest period of time. In the ‘Least Frequently Used’ approach, the OS selects the page that has been accessed the fewest times up to that point.
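The difference between FIFO and LRU shows up as soon as a page is re-accessed. Consider a hypothetical trace: pages 1, 2, 3 fill the cache, then page 1 is accessed again. The two policies now disagree on the next victim:

```java
import java.util.ArrayDeque;
import java.util.LinkedHashMap;
import java.util.Map;

public class EvictionOrder {

    // FIFO keeps pure arrival order; re-accessing a page changes nothing.
    static int fifoVictim() {
        ArrayDeque<Integer> fifo = new ArrayDeque<>();
        fifo.add(1); fifo.add(2); fifo.add(3);
        // Page 1 is accessed again here, but FIFO ignores accesses.
        return fifo.peek(); // page 1 is still the oldest arrival
    }

    // LRU, modeled with LinkedHashMap in access order: get() moves an entry
    // to the back, so the first key is always the least recently used page.
    static int lruVictim() {
        Map<Integer, Integer> lru = new LinkedHashMap<>(16, 0.75f, true);
        lru.put(1, 1); lru.put(2, 2); lru.put(3, 3);
        lru.get(1); // page 1 becomes the most recently used
        return lru.keySet().iterator().next(); // page 2 is now the LRU page
    }

    public static void main(String[] args) {
        System.out.println("FIFO would evict page " + fifoVictim());
        System.out.println("LRU would evict page " + lruVictim());
    }
}
```

FIFO would still evict page 1 (oldest arrival), while LRU would evict page 2 (least recently accessed).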

In this post, we will concentrate on the Least Recently Used approach and implement it.

An LRU cache is similar to first in first out (FIFO) storage, where the pages that came in first are evicted first. The difference between FIFO storage and an LRU cache is that when a page is accessed again, the LRU cache moves that page to the top.

If a page entered the cache first, it is the first candidate to go out, provided it is not accessed again before the cache fills up and a cache miss occurs.

Here we will describe the implementation of an LRU cache in Java.
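As a compact sketch (not necessarily the implementation this post had in mind), Java's own `LinkedHashMap` can do most of the work: constructing it with `accessOrder = true` moves every accessed entry to the back, and overriding `removeEldestEntry` evicts the front (least recently used) entry when capacity is exceeded:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache built on LinkedHashMap's access-order mode.
public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRUCache(int capacity) {
        super(capacity, 0.75f, true); // true => access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after every put; evict the LRU entry once we exceed capacity.
        return size() > capacity;
    }
}
```

For example, with capacity 2, after `put(1, ...)`, `put(2, ...)`, `get(1)`, `put(3, ...)`, key 2 is the one evicted, because the `get(1)` made key 1 recently used. A hand-rolled implementation typically combines a hash map with a doubly linked list to get the same O(1) behavior.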
