Multi-Core Cache Hierarchies

  • Book
  • © 2011

Overview

Part of the book series: Synthesis Lectures on Computer Architecture (SLCA)

Table of contents (6 chapters)

  • Basic Elements of Large Cache Design
  • Organizing Data in CMP Last Level Caches
  • Policies Impacting Cache Hit Rates
  • Interconnection Networks within Large Caches
  • Technology
  • Concluding Remarks

About this book

The cache hierarchy is a key determinant of overall system performance and power dissipation, since an access to off-chip memory costs many more cycles and far more energy than an on-chip access. In addition, multi-core processors are expected to place ever-higher bandwidth demands on the memory system. These pressures make it important to avoid off-chip memory accesses by improving the efficiency of the on-chip cache. Future multi-core processors will have many large cache banks connected by a network and shared by many cores, so several important problems must be solved: cache resources must be allocated across many cores, data must be placed in cache banks near the accessing core, and the most important data must be identified for retention. Finally, difficulties in scaling existing technologies require adapting to and exploiting new technology constraints. The book synthesizes recent cache research focused on innovations for multi-core processors. It is an excellent starting point for early-stage graduate students, researchers, and practitioners who wish to understand the landscape of recent cache research, and it is suitable as a reference for advanced computer architecture classes as well as for experienced researchers and VLSI engineers. The six chapters are listed above.
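
To make the placement problem concrete: the simplest baseline for a banked, shared last-level cache is static address interleaving, where a line's address alone decides which bank holds it. The C sketch below is a minimal illustration under assumed parameters (NUM_BANKS, LINE_BYTES, and the function name are hypothetical, not taken from the book). It spreads consecutive lines across banks for bandwidth, but it ignores which core is accessing the data; overcoming exactly that limitation is the subject of placement research of the kind the "Organizing Data in CMP Last Level Caches" chapter surveys.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical parameters for a banked, shared last-level cache. */
    #define NUM_BANKS  16   /* cache banks on the chip */
    #define LINE_BYTES 64   /* bytes per cache line */
    #define BANK_SHIFT 6    /* log2(LINE_BYTES): drop the line-offset bits */

    /* Static interleaving baseline: the line address alone picks the
     * bank, so consecutive lines round-robin across banks. There is no
     * notion of which core is accessing, hence no locality of placement. */
    static unsigned bank_for_address(uint64_t paddr)
    {
        return (unsigned)((paddr >> BANK_SHIFT) % NUM_BANKS);
    }

    int main(void)
    {
        /* Four consecutive lines land in four different banks. */
        for (uint64_t addr = 0x40000; addr < 0x40000 + 4 * LINE_BYTES;
             addr += LINE_BYTES)
            printf("line at 0x%06llx -> bank %2u\n",
                   (unsigned long long)addr, bank_for_address(addr));
        return 0;
    }

A real design might instead hash upper address bits into the bank index to avoid pathological strides, or migrate frequently accessed lines toward the requesting core; the trade-off is lower average access latency at the cost of a more complex lookup.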

Authors and Affiliations

  • University of Utah, USA

    Rajeev Balasubramonian

  • HP Labs, USA

    Norman P. Jouppi, Naveen Muralimanohar

About the authors

Rajeev Balasubramonian is a Professor at the School of Computing, University of Utah. He received his B.Tech. in Computer Science and Engineering from the Indian Institute of Technology, Bombay in 1998, and his M.S. (2000) and Ph.D. (2003) from the University of Rochester. His primary research interests include memory systems, security, and application-specific architectures. Prof. Balasubramonian is a recipient of an NSF CAREER award, faculty research awards from IBM, Google, and HPE, an Intel Outstanding Research Award, and various teaching awards at the University of Utah. He has co-authored papers that were selected as IEEE Micro Top Picks (2007 and 2010) and that received three best paper awards.

Norman P. Jouppi is an HP Senior Fellow and Director of the Intelligent Infrastructure Lab at HP Labs. He is known for his innovations in computer memory systems, including stream prefetch buffers, victim caching, multi-level exclusive caching, and the development of the CACTI tool for modeling cache timing, area, and power. He has also been the principal architect and lead designer of several microprocessors, contributed to the architecture and design of graphics accelerators, and extensively researched video, audio, and physical telepresence. Jouppi received his Ph.D. in electrical engineering from Stanford University in 1984, where he was one of the principal architects and designers of the MIPS microprocessor, as well as a developer of techniques for CMOS VLSI timing verification. He currently serves as past chair of ACM SIGARCH and is a member of the Computing Research Association (CRA) board. He is on the editorial boards of Communications of the ACM and IEEE Micro. He is a Fellow of the ACM and the IEEE, holds more than 50 U.S. patents, and has published over 100 technical papers, several of which received best paper awards and one of which received an International Symposium on Computer Architecture (ISCA) Influential Paper Award.
Naveen Muralimanohar is a senior researcher in the Intelligent Infrastructure Lab at HP Labs. His research focuses on designing reliable and efficient memory hierarchies and communication fabrics for high-performance systems. He has published several influential papers on on-chip caches, including a best paper award and an IEEE Micro Top Pick for his work on large cache models with CACTI. He received his Ph.D. in computer science from the University of Utah and his B.E. in electrical engineering from the University of Madras.
