Novel Cache Optimization Strategies for Multicore Processor Architectures
Author(s): R. Naveen¹, S. Priyanka²
Affiliation: ¹,²Department of Computer Science and Engineering, R. V. College of Engineering, Bengaluru, India
Page No: 23-26
Volume issue & Publishing Year: Volume 2, Issue 9, Sep 2025
Journal: International Journal of Advanced Engineering Application (IJAEA)
ISSN NO: 3048-6807
DOI: https://doi.org/10.5281/zenodo.17623603
Abstract:
The rapid evolution of multicore processor architectures has intensified the demand for efficient cache management techniques to meet the growing computational and memory requirements of modern applications. Traditional cache optimization approaches often face challenges such as high latency, frequent cache misses, and poor scalability when applied to parallel workloads. This paper proposes novel cache optimization strategies that combine adaptive replacement policies, data prefetching mechanisms, and cooperative caching techniques tailored to multicore environments. Simulation studies on benchmark workloads demonstrate that the proposed strategies reduce cache miss rates by up to 18 percent and reduce execution time by nearly 12 percent compared to conventional policies such as LRU and FIFO. Furthermore, energy consumption is lowered through selective prefetching and intelligent block replacement, making the strategies suitable for power-constrained computing platforms. The findings highlight the potential of innovative cache management frameworks to enhance system performance, scalability, and energy efficiency in next-generation multicore processors.
Keywords: Cache Optimization, Multicore Processors, Adaptive Replacement Policy, Cooperative Caching, Data Prefetching, Energy Efficiency, High-Performance Computing
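To illustrate the baseline policies the abstract compares against, the following minimal Python sketch simulates a fully associative cache under LRU and FIFO replacement. This is not the authors' simulator; the capacity, the synthetic access trace (a hot block interleaved with a cold sweep), and the single-level cache model are illustrative assumptions. On recency-friendly traces like this one, LRU retains the hot block while FIFO periodically evicts it, which is the kind of gap an adaptive replacement policy aims to close.

```python
from collections import OrderedDict, deque

def miss_rate(trace, capacity, policy):
    """Simulate a fully associative cache; return the fraction of misses."""
    misses = 0
    if policy == "LRU":
        cache = OrderedDict()
        for block in trace:
            if block in cache:
                cache.move_to_end(block)       # refresh recency on a hit
            else:
                misses += 1
                if len(cache) >= capacity:
                    cache.popitem(last=False)  # evict least recently used
                cache[block] = True
    elif policy == "FIFO":
        cache, order = set(), deque()
        for block in trace:
            if block not in cache:
                misses += 1
                if len(cache) >= capacity:
                    cache.discard(order.popleft())  # evict oldest insertion
                cache.add(block)
                order.append(block)
    else:
        raise ValueError(f"unknown policy: {policy}")
    return misses / len(trace)

# Hot block 0 interleaved with a cold sweep over blocks 1..8, repeated 50 times:
trace = [b for _ in range(50) for i in range(1, 9) for b in (0, i)]
print("LRU :", miss_rate(trace, capacity=4, policy="LRU"))
print("FIFO:", miss_rate(trace, capacity=4, policy="FIFO"))
```

Here LRU settles into missing only on the cold sweep, while FIFO also re-misses the hot block once per sweep, so its miss rate is strictly higher on this trace. The reverse can hold on purely cyclic traces, which is precisely the motivation for replacement policies that adapt to the observed access pattern.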
References:
- [1] N. Jouppi, “Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers,” ACM SIGARCH Computer Architecture News, vol. 18, no. 2, pp. 364–373, 1990.
- [2] M. K. Qureshi, D. N. Lynch, O. Mutlu, and Y. N. Patt, “A case for MLP-aware cache replacement,” ACM SIGARCH Computer Architecture News, vol. 34, no. 2, pp. 167–178, 2006.
- [3] A. Jaleel, K. B. Theobald, S. C. Steely Jr, and J. Emer, “High performance cache replacement using re-reference interval prediction (RRIP),” ACM SIGARCH Computer Architecture News, vol. 38, no. 3, pp. 60–71, 2010.
- [4] S. Somogyi, T. Wenisch, A. Ailamaki, B. Falsafi, and A. Moshovos, “Spatial memory streaming,” ACM SIGARCH Computer Architecture News, vol. 34, no. 2, pp. 252–263, 2006.
- [5] H. Zhang and Z. Zhu, “Fair cache sharing and partitioning in a chip multiprocessor architecture,” ACM Journal on Emerging Technologies in Computing Systems, vol. 3, no. 1, pp. 1–37, 2007.
- [6] D. Chiou, “Cooperative caching: Using remote client memory to improve file system performance,” Proceedings of the USENIX Symposium on Operating Systems Design and Implementation, pp. 267–280, 1995.
- [7] S. P. Vanderwiel and D. J. Lilja, “Data prefetch mechanisms,” ACM Computing Surveys, vol. 32, no. 2, pp. 174–199, 2000.
- [8] C. Hsu, I. Singh, L. K. John, and A. R. Lebeck, “Exploring energy-performance trade-offs in processors: Cache and memory design considerations,” ACM Transactions on Computer Systems, vol. 22, no. 4, pp. 489–523, 2004.
- [9] Z. Wang, S. Kim, and M. Lipasti, “Predicting conditional branch direction with neural networks,” ACM SIGARCH Computer Architecture News, vol. 29, no. 2, pp. 1–12, 2001.
- [10] K. Sudan, N. Madan, A. Alameldeen, A. Davis, and R. Balasubramonian, “Dynamic partitioning of shared caches: A case for QoS,” Proceedings of the IEEE International Symposium on High-Performance Computer Architecture, pp. 23–34, 2009.
