Request Schedule Oriented Compression Cache Memory

  • Authors

    • G. D. Kesavan
    • P. N. Karthikayan
    2018-04-17
    https://doi.org/10.14419/ijet.v7i2.19.15053
  • Keywords

    Dictionary-based cache compression, cache compression, cache optimization
  • Abstract

    Cache memory reduces the overall time needed to fetch data from memory. Because cache use directly affects a system's performance, the caching process itself should take as little time as possible. Many cache optimization techniques exist to speed it up, such as reducing the miss rate, reducing the miss penalty, and reducing the time to hit in the cache. Recent advances have also enabled compressing data in the cache and exploiting recent data-use patterns. These techniques focus on increasing cache capacity or improving replacement policies to raise the hit ratio, so existing cache compression and optimization schemes address only capacity- and replacement-related issues. This paper instead schedules cache memory requests according to the compressed cache organization, so that cache search and indexing time is reduced considerably and requests are serviced faster. For capacity and replacement improvements, dictionary-sharing-based caching is used. In this scheme, multiple requests are foreseen using a prefetcher and are searched as per the cache organization, simplifying the indexing process. The benefit comes from both compressed storage and easier storage access.
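    The article text here does not include an implementation, so the following Python sketch is only a rough illustration of the two ideas described above: cache blocks within a set sharing one value dictionary (in the spirit of dictionary-sharing compression) and foreseen requests being reordered to match the compressed cache organization. All names (SharedDictionaryCache, schedule_requests) and parameters (block size, number of sets) are hypothetical, not taken from the paper.

    ```python
    # Illustrative sketch only: class and function names and sizes are assumptions,
    # not the authors' implementation.
    from collections import defaultdict

    BLOCK_WORDS = 8   # words per cache block (assumed)
    NUM_SETS = 4      # number of cache sets (assumed)


    class SharedDictionaryCache:
        """Toy compressed cache: blocks in a set store indices into a shared dictionary."""

        def __init__(self):
            # One shared value dictionary per set, plus the compressed blocks of that set.
            self.dicts = [dict() for _ in range(NUM_SETS)]    # value -> dictionary slot
            self.blocks = [dict() for _ in range(NUM_SETS)]   # block number -> slot indices

        def set_index(self, address):
            return (address // BLOCK_WORDS) % NUM_SETS

        def insert(self, address, words):
            s = self.set_index(address)
            d = self.dicts[s]
            encoded = []
            for w in words:
                if w not in d:            # add an unseen value to the shared dictionary
                    d[w] = len(d)
                encoded.append(d[w])      # the block stores a short index, not the value
            self.blocks[s][address // BLOCK_WORDS] = encoded

        def lookup(self, address):
            s = self.set_index(address)
            encoded = self.blocks[s].get(address // BLOCK_WORDS)
            if encoded is None:
                return None               # miss
            reverse = {slot: value for value, slot in self.dicts[s].items()}
            return [reverse[i] for i in encoded]


    def schedule_requests(addresses):
        """Group prefetched (foreseen) requests by set index so each compressed set
        and its shared dictionary are searched once per group of requests."""
        by_set = defaultdict(list)
        for a in addresses:
            by_set[(a // BLOCK_WORDS) % NUM_SETS].append(a)
        ordered = []
        for s in sorted(by_set):
            ordered.extend(by_set[s])
        return ordered


    if __name__ == "__main__":
        cache = SharedDictionaryCache()
        cache.insert(0, [1, 1, 2, 3, 1, 2, 3, 3])     # highly redundant block
        cache.insert(32, [1, 2, 2, 2, 3, 3, 1, 1])    # same set, reuses the dictionary

        requests = [0, 40, 32, 8, 72]                 # arrival order
        print("scheduled order:", schedule_requests(requests))
        print("block at 0:", cache.lookup(0))
    ```

    The scheduler above only groups requests by set index; in the paper's scheme the ordering would follow the actual compressed cache organization, but the grouping step illustrates why indexing becomes easier when requests to the same compressed region are serviced together.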

     

  • References

    [1] A. Shafiee, M. Taassori, R. Balasubramonian, and A. Davis. MemZip: Exploring unconventional benefits from memory compression. In 2014 IEEE 20th International Symposium on High Performance Computer Architecture (HPCA), pages 638-649, Feb 2014.

    [2] David J. Palframan, Nam Sung Kim, and Mikko H. Lipasti. COP: To compress and protect main memory. In Proceedings of the 42nd Annual International Symposium on Computer Architecture, ISCA '15, pages 682-693, New York, NY, USA, 2015. ACM.

    [3] Tri M. Nguyen and David Wentzlaff. MORC: A manycore-oriented compressed cache. In Proceedings of the 48th International Symposium on Microarchitecture, pages 76-88, Dec 2015.

    [4] Angelos Arelakis, Fredrik Dahlgren, and Per Stenstrom. HyComp: A hybrid cache compression method for selection of data-type-specific compression methods. In Proceedings of the 48th International Symposium on Microarchitecture, MICRO-48, pages 38-49, New York, NY, USA, 2015. ACM.

    [5] G. Pekhimenko, T. Huberty, R. Cai, O. Mutlu, P. B. Gibbons, M. A. Kozuch, and T. C. Mowry. Exploiting compressed block size as an indicator of future reuse. In 2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA), pages 51-63, Feb 2015.

    [6] K. Raghavendra, B. Panda, and M. Mutyam. PBC: Prefetched blocks compaction. IEEE Transactions on Computers, 65(8):2534-2547, Aug 2016.

    [7] Somayeh Sardashti, Andre Seznec, and David A. Wood. Yet another compressed cache: A low-cost yet effective compressed cache. ACM Trans. Archit. Code Optim., 13(3):27:1-27:25, September 2016.

    [8] B. Panda and A. Seznec. Dictionary sharing: An efficient cache compression scheme for compressed caches. In 2016 49th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), pages 1-12, Oct 2016.


  • How to Cite

    D. Kesavan, G., & N. Karthikayan, P. (2018). Request Schedule Oriented Compression Cache Memory. International Journal of Engineering & Technology, 7(2.19), 80-83. https://doi.org/10.14419/ijet.v7i2.19.15053

    Received date: 2018-07-04

    Accepted date: 2018-07-04

    Published date: 2018-04-17