Wednesday, June 4, 2014

Cache Replacement Algorithms

For the last few days, I have been studying cache replacement algorithms. After going through a lot of documents and papers, I thought of summarizing what I have read and understood. Although there are many more algorithms, I am summarizing only a few of the popular ones.

LRU (Least Recently Used)
On a cache hit, the cache returns a reference to the object and moves the object to the first position in the cache (the most recently used position). On a cache miss, the object is read from primary storage (or some other source) and then placed in the first position of the cache. If the cache is full, the last entry (the least recently used one) is removed before the new object is added.
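The behavior above can be sketched in a few lines of Python. This is a minimal illustration, not production code; the class name and the use of `OrderedDict` are my own choices.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: front of the OrderedDict = most recently used."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None  # cache miss: caller fetches from primary storage
        self.store.move_to_end(key, last=False)  # move to first position
        return self.store[key]

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            self.store.popitem(last=True)  # evict least recently used (tail)
        self.store[key] = value
        self.store.move_to_end(key, last=False)  # new object goes to front
```

A hit on an object pulls it back to the front, so only objects that go untouched drift toward the tail and get evicted.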

LRU2 (Least Recently Used 2)
This algorithm is a refinement of LRU. It replaces the object whose penultimate (second-to-last) access is least recent among all penultimate accesses. The objective is not to evict an object just because it was accessed only once, but to measure how frequently an object is accessed over time (its popularity) and to evict the less popular ones.
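One way to sketch this in Python is to record the last two access times per object and pick the victim by the older of the two. The tie-breaking detail (objects accessed only once are treated as infinitely old) is my assumption, consistent with the idea of evicting unpopular objects first.

```python
import itertools

class LRU2Cache:
    """Sketch of LRU-2: evict the object whose second-to-last access is oldest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}
        self.history = {}              # key -> [last, second-to-last] access times
        self.clock = itertools.count()  # logical time

    def _touch(self, key):
        t = next(self.clock)
        last = self.history.get(key, [None, None])[0]
        self.history[key] = [t, last]

    def _penultimate(self, key):
        p = self.history[key][1]
        # objects seen only once sort first, i.e. are evicted first
        return (1, p) if p is not None else (0, 0)

    def get(self, key):
        if key not in self.store:
            return None
        self._touch(key)
        return self.store[key]

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=self._penultimate)
            del self.store[victim]
            del self.history[victim]
        self.store[key] = value
        self._touch(key)
```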

MRU (Most Recently Used)
The MRU algorithm is most useful in situations where the older an item is, the more likely it is to be accessed. When we need to get rid of an object in the cache, we discard the one we most recently accessed. This algorithm has very specific uses, for example, a photo gallery: users who have already viewed a photo are less likely to view it again.
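A minimal Python sketch of this policy only needs to remember the single most recently used key (names below are illustrative):

```python
class MRUCache:
    """Sketch of MRU: on eviction, discard the most recently used object."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}
        self.mru_key = None  # key of the most recently accessed object

    def get(self, key):
        if key not in self.store:
            return None
        self.mru_key = key
        return self.store[key]

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            del self.store[self.mru_key]  # evict the most recently used
        self.store[key] = value
        self.mru_key = key
```

In the photo-gallery scenario, the photo just viewed is exactly the one this sketch throws away when room is needed.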

2Q
2Q is similar to LRU2 but improves upon it, making it a better and more adaptive cache replacement algorithm. When an object is first brought into the cache, 2Q places it in a special buffer, which is managed as a FIFO queue. If the object is referenced again while in the special buffer, it is moved into another list, which is managed as LRU. If an object is not referenced while in the special buffer, it is removed from the buffer.
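A simplified sketch of the two-queue structure in Python might look like this. The sizes of the two queues and the class name are my own choices; the full 2Q paper also tunes these sizes, which this sketch omits.

```python
from collections import OrderedDict

class TwoQCache:
    """Simplified 2Q sketch: a FIFO probation buffer plus an LRU main list."""
    def __init__(self, fifo_size, lru_size):
        self.fifo_size = fifo_size
        self.lru_size = lru_size
        self.fifo = OrderedDict()  # first-time objects, FIFO order
        self.lru = OrderedDict()   # re-referenced objects, LRU order

    def get(self, key):
        if key in self.lru:
            self.lru.move_to_end(key)        # refresh LRU position
            return self.lru[key]
        if key in self.fifo:
            value = self.fifo.pop(key)       # second reference: promote to LRU
            if len(self.lru) >= self.lru_size:
                self.lru.popitem(last=False)  # evict LRU's least recent
            self.lru[key] = value
            return value
        return None

    def put(self, key, value):
        if key in self.lru or key in self.fifo:
            self.get(key)                    # treat as another reference
            return
        if len(self.fifo) >= self.fifo_size:
            self.fifo.popitem(last=False)    # drop unreferenced FIFO head
        self.fifo[key] = value
```

Objects that are touched only once never pollute the LRU list; they simply age out of the FIFO buffer.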

LFU (Least Frequently Used)
In this algorithm, each object in the cache has a counter associated with it. The counter records how many times the object has been accessed. As a replacement policy, the objects that are accessed the least number of times are the first to be replaced. So if an object is accessed many times, it is going to stay in the cache.
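This counter-per-object idea translates directly into a small Python sketch (illustrative only; ties are broken arbitrarily here):

```python
class LFUCache:
    """Minimal LFU sketch: each object carries an access counter."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}   # key -> value
        self.counts = {}  # key -> number of accesses

    def get(self, key):
        if key not in self.store:
            return None
        self.counts[key] += 1
        return self.store[key]

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.counts, key=self.counts.get)  # least frequently used
            del self.store[victim]
            del self.counts[victim]
        self.store[key] = value
        self.counts[key] = self.counts.get(key, 0) + 1
```

Note the known weakness this exposes: an object that was hot long ago keeps a high count and can linger even after it stops being useful.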

FIFO (First-In First-Out)
FIFO is considered the oldest cache replacement algorithm. It simply maintains a list of all objects in the cache such that the head of the list is the oldest arrival and the tail is the most recent arrival. FIFO performs poorly compared to LRU and is seldom used today in its original form.
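The contrast with LRU is easy to see in code: a FIFO sketch never reorders on a hit, so even a heavily used object is evicted once it becomes the oldest arrival.

```python
from collections import OrderedDict

class FIFOCache:
    """Minimal FIFO sketch: always evict the oldest arrival."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order = arrival order

    def get(self, key):
        return self.store.get(key)  # a hit does NOT change the order

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict head (oldest arrival)
        self.store[key] = value
```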

Second Chance
Second Chance (SC) is an enhancement to FIFO. In SC, a reference bit is maintained for each object in the cache, while the objects themselves are maintained as a FIFO queue. When a new object is fetched from primary storage, it is appended at the tail of the queue with its reference bit set to zero. When an object in the cache is accessed, its reference bit is set to 1. The replacement policy checks the object at the head of the FIFO queue and replaces it if its reference bit is zero; otherwise, the object at the head is moved to the tail and its reference bit is reset to zero. The policy keeps checking the objects at the head until it finds one with a zero reference bit, which it then replaces. SC has the overhead of moving objects from the head to the tail of the queue, which makes it somewhat inefficient.
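The head-to-tail rotation described above can be sketched in Python like this (class name and structure are illustrative):

```python
from collections import deque

class SecondChanceCache:
    """Sketch of Second Chance: FIFO queue plus a reference bit per object."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()   # head = oldest arrival
        self.store = {}        # key -> value
        self.refbit = {}       # key -> 0 or 1

    def get(self, key):
        if key not in self.store:
            return None
        self.refbit[key] = 1   # mark as recently referenced
        return self.store[key]

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            while True:
                head = self.queue.popleft()
                if self.refbit[head] == 0:
                    del self.store[head]   # no second chance left: evict
                    del self.refbit[head]
                    break
                self.refbit[head] = 0      # give it a second chance
                self.queue.append(head)    # move head to tail
        self.store[key] = value
        self.refbit[key] = 0
        self.queue.append(key)
```

The `while` loop is exactly the overhead the text mentions: in the worst case every object rotates from head to tail once before a victim is found.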

CLOCK
CLOCK is functionally very similar to SC. The difference is that CLOCK uses a circular queue instead of a FIFO queue, and the replacement algorithm cycles through the queue like the hand of a clock. The clock hand points to an object, and whenever a replacement is needed, the policy starts checking the object pointed to by the hand, in the same way as SC. CLOCK eliminates the need to move an object from head to tail, thus providing better performance.
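A Python sketch makes the difference from SC visible: instead of rotating a queue, only an index (the hand) advances over a fixed array of slots. The slot layout below is my own choice for illustration.

```python
class ClockCache:
    """Sketch of CLOCK: circular buffer of slots, each with a reference bit."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity  # each slot: [key, value, refbit] or None
        self.index = {}                 # key -> slot position
        self.hand = 0

    def get(self, key):
        if key not in self.index:
            return None
        slot = self.slots[self.index[key]]
        slot[2] = 1                     # set reference bit; nothing is moved
        return slot[1]

    def put(self, key, value):
        if key in self.index:
            self.slots[self.index[key]][1] = value
            return
        while True:                     # advance the hand until a victim is found
            slot = self.slots[self.hand]
            if slot is None or slot[2] == 0:
                if slot is not None:
                    del self.index[slot[0]]  # evict the old occupant
                self.slots[self.hand] = [key, value, 0]
                self.index[key] = self.hand
                self.hand = (self.hand + 1) % self.capacity
                return
            slot[2] = 0                 # clear the bit and move on
            self.hand = (self.hand + 1) % self.capacity
```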

ARC (Adaptive Replacement Cache)
ARC is considered to be one of the best cache replacement algorithms. ARC uses two lists, L1 and L2. The first list contains objects that have been seen only once recently, while the second contains objects that have been seen at least twice recently. So the list L1 captures recency while the list L2 captures frequency. The replacement policy is based on constant reshuffling of the L1 and L2 lists. ARC also adapts to the workload; it is fast and self-tuning.
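To make the two-list idea concrete, here is a deliberately simplified Python sketch of ARC's structure. Be aware this omits what makes ARC "adaptive": the ghost lists (B1/B2) and the self-tuning target size for L1. It shows the shape of the algorithm, not the full algorithm.

```python
from collections import OrderedDict

class SimplifiedARC:
    """Two-list sketch: L1 holds objects seen once recently, L2 objects
    seen at least twice. Omits ARC's ghost lists and adaptive parameter."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.l1 = OrderedDict()  # recency: seen once
        self.l2 = OrderedDict()  # frequency: seen twice or more

    def get(self, key):
        if key in self.l2:
            self.l2.move_to_end(key)    # refresh position in L2
            return self.l2[key]
        if key in self.l1:
            value = self.l1.pop(key)    # second hit: promote to L2
            self.l2[key] = value
            return value
        return None

    def put(self, key, value):
        if key in self.l1 or key in self.l2:
            self.get(key)               # treat as another reference
            return
        if len(self.l1) + len(self.l2) >= self.capacity:
            if self.l1:
                self.l1.popitem(last=False)  # prefer evicting seen-once objects
            else:
                self.l2.popitem(last=False)
        self.l1[key] = value
```

Real ARC decides *how much* space L1 versus L2 deserves by watching hits on recently evicted objects, which is exactly the part this sketch leaves out.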

CAR (Clock with Adaptive Replacement)
The CAR algorithm is inspired by the Adaptive Replacement Cache (ARC) algorithm and inherits all the advantages of ARC, including its high performance. CAR maintains four doubly linked lists: T1, T2, B1 and B2. The CLOCKs T1 and T2 contain the objects that are in the cache, while the lists B1 and B2 maintain history information about objects recently evicted from the cache. The CLOCK T1 captures "recency" while the CLOCK T2 captures "frequency." The lists B1 and B2 are simple LRU lists. Objects evicted from T1 are placed on B1, and those evicted from T2 are placed on B2.