Minimizing the Page Close Penalty: Indexing Memory Banks Revisited
Keyword(s): memory controller; page-mode; indexing; addressing; performance; prefetching
Abstract: This paper introduces several new techniques that optimize the use of page-mode and prefetching to improve the performance of a memory system. We start by improving the basic bank indexing scheme, allowing higher hit rates to be attained with a smaller number of banks. Next, we introduce the idea of keeping only a subset of the possible banks open. When this is done, only a small fraction of page-mode misses are to open banks, while the banks that are open capture the great majority of the possible page-mode and sequential-prefetching hits. This both improves the performance of workloads with a high hit rate and significantly decreases the risk of page misses under difficult or random workloads. This idea may also simplify the memory controller. In our workloads, using only eight bank controllers on a memory system supporting 32 banks, 55% of all read references were satisfied by data already in the bus drivers, requiring no DRAM latency at all. We compare different techniques for mapping a smaller number of bank controllers onto a larger number of memory banks, and show that a fully-associative mapping works best. The report finishes with a number of auxiliary investigations. We show that there is a slight advantage to not associating the prefetch buffers with the bank controllers, but rather using a separate LRU prefetch cache. We show that DRAM refresh has a small impact on the results, while the cache line size has a large impact.
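The core mechanism described above, a small pool of bank controllers fully-associatively mapped onto a larger set of DRAM banks, can be sketched as a simple simulation. This is a minimal illustrative model, not the paper's implementation: the class name, the method names, and the use of pure LRU replacement for the controller pool are assumptions made here for clarity.

```python
from collections import OrderedDict

class BankControllerPool:
    """Illustrative sketch of a fully-associative bank-controller pool.

    Each of the n_controllers controllers may hold the open page (row) of
    any of the n_banks DRAM banks; when all controllers are busy, the
    least-recently-used open bank is closed. All names and the LRU policy
    are assumptions for illustration, not taken from the paper.
    """

    def __init__(self, n_controllers=8, n_banks=32):
        self.n_controllers = n_controllers
        self.n_banks = n_banks
        # Maps bank index -> currently open row; insertion order tracks LRU.
        self.open_pages = OrderedDict()

    def access(self, bank, row):
        """Return True on a page-mode hit (the bank is open to this row)."""
        if self.open_pages.get(bank) == row:
            self.open_pages.move_to_end(bank)   # refresh LRU position
            return True
        # Page-mode miss: open the requested page, evicting the LRU
        # controller if the pool is full and this bank is not yet open.
        if bank not in self.open_pages and len(self.open_pages) >= self.n_controllers:
            self.open_pages.popitem(last=False)  # close least-recently-used bank
        self.open_pages[bank] = row
        self.open_pages.move_to_end(bank)
        return False
```

Because the mapping is fully associative, any hot bank can occupy a controller regardless of its index, which is why a handful of controllers can capture most of the page-mode hits that a much larger set of always-open banks would provide.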