ABSTRACT
While iterative optimization has become a popular compiler optimization approach, it is based on a premise that has never been truly evaluated: that it is possible to learn the best compiler optimizations across data sets. Until now, most iterative optimization studies have found the best optimizations through repeated runs on a single data set. Only a handful of studies have attempted to exercise iterative optimization on a few tens of data sets.
In this paper, we truly put iterative compilation to the test for the first time by evaluating its effectiveness across a large number of data sets. To that end, we compose KDataSets, a data set suite with 1000 data sets for 32 programs, which we release to the public. We characterize the diversity of KDataSets, and subsequently use it to evaluate iterative optimization. We demonstrate that it is possible to derive a robust iterative optimization strategy across data sets: for all 32 programs, we find that there exists at least one combination of compiler optimizations that achieves 86% or more of the best possible speedup across all data sets using Intel's ICC (83% for GNU's GCC). This optimal combination is program-specific and yields speedups of up to 1.71x on ICC and 2.23x on GCC over the highest optimization level (-fast and -O3, respectively). This finding makes the task of optimizing programs across data sets much easier than previously anticipated, and it paves the way for the practical and reliable usage of iterative optimization. Finally, we derive pre-shipping and post-shipping optimization strategies for software vendors.
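The selection criterion the abstract describes, i.e. picking, for each program, the single flag combination that achieves the highest fraction of the best possible speedup on *every* data set, can be sketched as follows. This is an illustrative sketch only: the speedup matrix, function name, and toy numbers are assumptions for exposition, not data or code from the paper.

```python
# Sketch of the cross-data-set selection criterion.
# Hypothetical input: speedup[c][d] = speedup of flag combination c
# on data set d, relative to the compiler's highest optimization level.

def best_robust_combination(speedup):
    """Return (combination index, worst-case fraction of the
    per-data-set best speedup) for the most robust combination."""
    n_combos = len(speedup)
    n_sets = len(speedup[0])
    # Best achievable speedup on each data set, over all combinations.
    best_per_set = [max(speedup[c][d] for c in range(n_combos))
                    for d in range(n_sets)]
    scored = []
    for c in range(n_combos):
        # Fraction of the best possible speedup this combination
        # reaches on its worst data set.
        worst = min(speedup[c][d] / best_per_set[d]
                    for d in range(n_sets))
        scored.append((worst, c))
    worst, c = max(scored)  # maximize the worst-case fraction
    return c, worst

# Toy example: 3 flag combinations, 2 data sets.
speedups = [
    [1.10, 1.50],
    [1.40, 1.45],
    [1.60, 1.00],
]
combo, frac = best_robust_combination(speedups)
print(combo, round(frac, 3))  # combination 1 is robust: 0.875
```

A combination like the second one, which is never the single best but close to it everywhere, is exactly the kind of "robust across data sets" winner the paper reports (86% or more of the best possible speedup on ICC).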
Published in PLDI '10: Evaluating iterative optimization across 1000 datasets.