Sunday, June 1, 2014

Caching basics

Caching
Caching is the technique of temporarily storing frequently used data that is expensive to fetch every time. Fetching data from a cache is much faster than fetching it from the original source.

Cache Hit:

Simply put, if the cache contains the data being looked up, it is known as a cache hit.
The percentage of requests that result in cache hits is known as the hit rate (or hit ratio) of the cache.
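As a minimal sketch of the two outcomes, here is a tiny dict-backed cache that counts hits and misses and reports its hit ratio. The class name and the `fetch` callback are illustrative, not part of any real library:

```python
class CountingCache:
    """A toy cache that tracks its own hit ratio."""

    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, key, fetch):
        if key in self.store:        # cache hit
            self.hits += 1
            return self.store[key]
        self.misses += 1             # cache miss: go to the source
        value = fetch(key)
        self.store[key] = value      # keep it for next time
        return value

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0


cache = CountingCache()
cache.get("a", str.upper)   # miss: fetched from the "source"
cache.get("a", str.upper)   # hit: served from the cache
print(cache.hit_ratio())    # 0.5
```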

Cache Miss:
When the data being looked up is not available in the cache, it is known as a cache miss. In case of a cache miss, the data is fetched from primary storage and put into the cache.
A cache miss leads to one of two scenarios:

Free Space available
In this case the object is fetched from primary storage and added to the cache.

No Free Space available
In this case the object is retrieved from primary storage and replaces an existing entry in the cache. Which entry gets replaced is governed by a cache replacement algorithm (LRU, MRU, LRU2, etc., details of which I’ll share in a later post).
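The two miss scenarios above can be sketched with a capacity-bounded cache. This example uses simple FIFO eviction as a stand-in for a real replacement algorithm; the class and method names are illustrative:

```python
from collections import deque


class BoundedCache:
    """A toy fixed-capacity cache with FIFO eviction on miss."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}
        self.order = deque()         # insertion order, used for FIFO eviction

    def get(self, key, fetch):
        if key in self.store:
            return self.store[key]   # cache hit
        value = fetch(key)           # cache miss: fetch from primary storage
        if len(self.store) >= self.capacity:
            victim = self.order.popleft()   # no free space: evict an entry
            del self.store[victim]
        self.store[key] = value      # free space (now) available: add it
        self.order.append(key)
        return value


cache = BoundedCache(2)
cache.get("a", str.upper)
cache.get("b", str.upper)
cache.get("c", str.upper)   # cache is full, so "a" (oldest) is evicted
```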

Storage Cost
When a cache miss occurs, the data is fetched from primary storage, loaded, and placed in the cache. Data can also be prefetched at application startup. What is important here is how much space the cache should use (a space-versus-time tradeoff).
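Prefetching at startup (often called cache warming) can be as simple as looping over known-hot keys before serving requests. The key names and the stand-in fetch below are hypothetical:

```python
# Warm the cache at application startup with keys we expect
# to be requested often. len(key) stands in for an expensive
# fetch from primary storage.
hot_keys = ["config", "homepage", "user_count"]

cache = {}
for key in hot_keys:
    cache[key] = len(key)   # hypothetical expensive fetch

# Later requests for these keys are cache hits from the start.
print(sorted(cache))
```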

Miss Penalty
When we need to load the data, we also need to know how long loading it takes. This is known as the retrieval cost, or miss penalty.

Replacement Policy:
When a cache miss happens and the cache is full, the cache evicts an entry to make room for the data newly fetched from primary storage. The heuristic used to select the entry to evict is known as the replacement policy.
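One common replacement policy is LRU (least recently used), which evicts the entry that has gone unused the longest. A minimal sketch using Python's `OrderedDict` (the class name and `fetch` callback are illustrative):

```python
from collections import OrderedDict


class LRUCache:
    """A toy cache with an LRU replacement policy."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # keeps keys in recency order

    def get(self, key, fetch):
        if key in self.store:
            self.store.move_to_end(key)   # hit: mark as most recently used
            return self.store[key]
        value = fetch(key)                # miss: fetch from primary storage
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)   # evict the least recently used
        self.store[key] = value
        return value


cache = LRUCache(2)
cache.get("a", str.upper)
cache.get("b", str.upper)
cache.get("a", str.upper)   # touch "a", so "b" is now least recently used
cache.get("c", str.upper)   # evicts "b"
```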


Average memory-access time = Hit time + Miss rate × Miss penalty

Hit time -- time to access the cache
Hit ratio -- percentage of lookups in which the data is found in the cache
Miss ratio -- (1 - hit ratio)
Miss penalty -- time to access the primary storage
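Plugging assumed numbers into the formula above makes the tradeoff concrete. Suppose a cache access takes 1 ms, primary storage takes 100 ms, and 5% of lookups miss:

```python
# Worked example of the average memory-access time formula.
# All three numbers are assumed for illustration.
hit_time = 1.0        # ms to access the cache
miss_rate = 0.05      # 5% of lookups miss
miss_penalty = 100.0  # ms to access primary storage

average_access_time = hit_time + miss_rate * miss_penalty
print(average_access_time)   # 6.0 ms
```

Even a 5% miss rate makes the average access six times slower than a pure hit, which is why the hit ratio matters so much.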
