Software caches optimize the performance of diverse software systems, including storage systems and databases. Existing work on software caches almost invariably adopts fully associative cache designs. Our work shows that limited-associativity caches are a promising direction for concurrent software caches. Specifically, we demonstrate that limited associativity enables simple yet efficient realizations of multiple cache management schemes that can be trivially parallelized. We show that the obtained hit ratio is usually similar to that of a fully associative cache with the same management policy, while throughput improves by up to 5× compared to production-grade caching libraries, especially in multi-threaded executions.
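The core idea, partitioning the cache into many small, independently managed sets so that each lookup and eviction touches only one set, can be sketched as follows. This is a minimal illustration under assumptions of our own (class and method names are hypothetical, and per-set LRU is just one of the management policies the abstract alludes to), not the paper's implementation:

```python
from collections import OrderedDict

class SetAssociativeCache:
    """Sketch of a limited-associativity cache: each key hashes to one of
    `num_sets` small sets; a set holds at most `ways` entries and is
    managed independently (here with per-set LRU eviction)."""

    def __init__(self, num_sets=8, ways=4):
        self.num_sets = num_sets
        self.ways = ways
        # One small, independently managed set per index. In a concurrent
        # version, each set could carry its own lock, so operations on
        # different sets never contend.
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def _set_for(self, key):
        return self.sets[hash(key) % self.num_sets]

    def get(self, key):
        s = self._set_for(key)
        if key not in s:
            return None
        s.move_to_end(key)  # mark as most recently used within its set
        return s[key]

    def put(self, key, value):
        s = self._set_for(key)
        if key in s:
            s.move_to_end(key)
        elif len(s) >= self.ways:
            # Evict the LRU victim from this set only; no global state
            # is consulted, which is what makes parallelization trivial.
            s.popitem(last=False)
        s[key] = value
```

Because eviction decisions are confined to a single small set, swapping the per-set policy (e.g., FIFO instead of LRU) changes only a few lines, which is one way to read the abstract's claim that multiple management schemes admit simple realizations.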
Original language: English
State: Published - 2021
Published in: arXiv preprint arXiv:2109.03021