Design Least Recently Used (LRU) cache – Algorithms and Me
First of all, let's understand what a cache is. In plain computer-architecture terms, a cache is a small buffer of pages that the OS maintains in order to avoid more expensive main memory accesses.
Caches are usually much faster than main memory. Since a cache is very small compared to main memory, there is a good chance that pages will need to be swapped between the cache and main memory. Whenever a page is not found in the cache and has to be brought in from main memory, it is called a cache miss. There are many approaches to deciding which page goes out of the cache to make room for a new page, such as First In First Out (FIFO), Least Recently Used (LRU), and Least Frequently Used (LFU).
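To illustrate the LRU policy described above, here is a minimal sketch of an LRU cache in Java, built on LinkedHashMap in access-order mode so that the least recently used entry is evicted first. The integer key/value types, the class name LRUCache, and the capacity of 2 in the usage example are assumptions made purely for this sketch, not part of the original article.

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch using LinkedHashMap with access-order iteration.
// Integer keys and values are assumed here purely for illustration.
public class LRUCache {
    private final int capacity;
    private final LinkedHashMap<Integer, Integer> map;

    public LRUCache(int capacity) {
        this.capacity = capacity;
        // accessOrder = true keeps entries ordered from least to most recently used
        this.map = new LinkedHashMap<>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                // Evict the least recently used entry once capacity is exceeded
                return size() > LRUCache.this.capacity;
            }
        };
    }

    public Integer get(int key) {
        // Returns null on a cache miss; a hit also marks the key as most recently used
        return map.get(key);
    }

    public void put(int key, int value) {
        map.put(key, value);
    }

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);
        cache.put(1, 1);
        cache.put(2, 2);
        cache.get(1);                      // key 1 becomes most recently used
        cache.put(3, 3);                   // evicts key 2, the least recently used
        System.out.println(cache.get(2));  // null (cache miss)
        System.out.println(cache.get(1));  // 1 (cache hit)
    }
}

The classic interview design uses a hash map plus a hand-rolled doubly linked list to get O(1) get and put; the LinkedHashMap variant above achieves the same behavior with far less code, since it maintains that linked list internally.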
Read full article from Design Least Recently Used (LRU) cache – Algorithms and Me