Abstract
An interesting and common class of workloads for shared-memory multiprocessors is multiprogrammed workloads. Because these workloads generally contain more processes than there are processors in the machine, two factors increase the number of cache misses. First, several processes are forced to time-share the same cache, so one process displaces the cache state previously built up by another. Consequently, when the second process runs again, it generates a stream of misses as it rebuilds its cache state. Second, since an idle processor simply selects the highest-priority runnable process, a given process often moves from one CPU to another. This frequent migration forces the process to continuously reload its state into new caches, producing streams of cache misses. To reduce the number of misses in these workloads, processes should reuse their cached state more. One way to encourage this is to schedule each process based on its affinity to individual caches, that is, based on the amount of state that the process has accumulated in an individual cache. This technique is called cache affinity scheduling.
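The scheduling policy described above can be illustrated with a minimal sketch. This is not the paper's implementation; the `Process` structure, the affinity-bonus heuristic, and all names are assumptions chosen to show the idea: a processor picking its next process favors one whose state likely still resides in its cache.

```python
# Illustrative sketch of cache-affinity scheduling (assumed names and
# heuristic, not the paper's implementation). Each process remembers the
# CPU it last ran on; that CPU's cache is where its state accumulated.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Process:
    pid: int
    priority: int
    last_cpu: Optional[int] = None  # cache in which this process built state

def pick_next(run_queue: List[Process], cpu: int,
              affinity_bonus: int = 2) -> Optional[Process]:
    """Select the runnable process with the highest effective priority.

    A process that last ran on this CPU receives a bonus reflecting the
    cache state it has accumulated there, so it tends to be rescheduled
    on the same cache instead of migrating and missing elsewhere.
    """
    if not run_queue:
        return None
    def score(p: Process) -> int:
        return p.priority + (affinity_bonus if p.last_cpu == cpu else 0)
    best = max(run_queue, key=score)
    run_queue.remove(best)
    best.last_cpu = cpu  # this CPU's cache now holds the process's state
    return best
```

With a bonus of 2, a process of priority 5 that last ran on CPU 0 is chosen by CPU 0 over a priority-6 process that last ran elsewhere; a plain priority scheduler would migrate the latter instead. The bonus trades strict priority order for cache reuse, which is exactly the tension affinity scheduling navigates.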
Published in SIGMETRICS '93: Proceedings of the 1993 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems.