Cache optimization for memory-resident decision support commercial workloads
Place of publication: Austin, TX, USA
Source: International Conference on Computer Design (ICCD'99)
Dramatic increases in the main-memory size of computers are allowing some applications to shift their main data storage from disk to main memory and, as a result, to increase their performance. This trend is at work in some databases, resulting in what are called memory-resident databases. However, because of the widening gap between processor and main-memory speeds, effective use of the cache hierarchy is crucial to high performance in these systems. Unfortunately, there has been relatively little work on building cache-friendly database systems. In this paper, we present several cache-oriented optimizations that enable effective exploitation of caches in memory-resident decision support databases. The main optimization is a query optimizer that includes the cost of cache misses in its cost metrics. The other optimizations are sophisticated data blocking and software prefetching. These optimizations require no custom-designed hardware support and are effective for the more complicated TPC-D queries. In a simple database, these queries run about 13% faster with the cache-oriented optimizer and blocking, and a total of 31% faster if, in addition, we add prefetching. The effectiveness of these optimizations is stable across a range of cache sizes, cache line sizes, and miss penalties.