DOI: 10.1145/1806596.1806647

Evaluating iterative optimization across 1000 datasets

Published: 05 June 2010

ABSTRACT

While iterative optimization has become a popular compiler optimization approach, it is based on a premise which has never been truly evaluated: that it is possible to learn the best compiler optimizations across data sets. Up to now, most iterative optimization studies find the best optimizations through repeated runs on the same data set. Only a handful of studies have attempted to exercise iterative optimization on a few tens of data sets.
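At its core, iterative optimization is an empirical search: compile the program with a candidate combination of flags, run the resulting binary on a data set, time it, and keep the best combination found. The following is a minimal sketch of such a loop in Python; the flag pool, benchmark source file, and data-set argument are hypothetical placeholders, not the actual experimental setup of this paper.

```python
import random
import subprocess
import time

# Hypothetical pool of GCC flags; real iterative-compilation studies
# explore dozens of options and their parameters.
FLAG_POOL = ["-funroll-loops", "-ftree-vectorize",
             "-finline-functions", "-fomit-frame-pointer"]

def compile_and_time(source, flags, dataset):
    """Compile `source` with `flags` on top of -O2, then time one run
    of the resulting binary on `dataset` (a command-line argument)."""
    subprocess.run(["gcc", "-O2", *flags, source, "-o", "bench"], check=True)
    start = time.perf_counter()
    subprocess.run(["./bench", dataset], check=True)
    return time.perf_counter() - start

def iterative_search(source, dataset, trials=50):
    """Randomly sample flag combinations and keep the fastest seen."""
    best_flags, best_time = [], compile_and_time(source, [], dataset)
    for _ in range(trials):
        flags = random.sample(FLAG_POOL, random.randint(1, len(FLAG_POOL)))
        t = compile_and_time(source, flags, dataset)
        if t < best_time:
            best_flags, best_time = flags, t
    return best_flags, best_time
```

The premise under test is whether a combination found this way on one data set carries over to other data sets of the same program.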

In this paper, we truly put iterative compilation to the test for the first time by evaluating its effectiveness across a large number of data sets. We therefore compose KDataSets, a suite of 1000 data sets for 32 programs, which we release to the public. We characterize the diversity of KDataSets and subsequently use it to evaluate iterative optimization. We demonstrate that it is possible to derive a robust iterative optimization strategy across data sets: for all 32 programs, we find that there exists at least one combination of compiler optimizations that achieves 86% or more of the best possible speedup across all data sets using Intel's ICC (83% for GNU's GCC). This optimal combination is program-specific and yields speedups of up to 1.71 with ICC and 2.23 with GCC over the highest optimization level (-fast and -O3, respectively). This finding makes the task of optimizing programs across data sets much easier than previously anticipated, and it paves the way for the practical and reliable use of iterative optimization. Finally, we derive pre-shipping and post-shipping optimization strategies for software vendors.
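The headline result can be restated as a simple computation over a matrix of measured runtimes: for each candidate combination, compute its speedup over the baseline on every data set, normalize by the best speedup any combination achieves on that data set, and select the combination whose normalized fraction is highest. One natural reading of "across all data sets" is the worst case over data sets, which is what this hedged sketch computes; the `runtime` and `baseline` tables are hypothetical measurements, not the paper's data.

```python
def most_robust_combination(runtime, baseline):
    """Select the flag combination whose worst-case fraction of the
    per-data-set best speedup is highest.

    runtime:  {combination: {dataset: seconds}}, measured with each
              candidate combination on each data set (hypothetical).
    baseline: {dataset: seconds} at the default optimization level.
    """
    datasets = list(baseline)
    # Speedup of each combination on each data set, relative to baseline.
    speedup = {c: {d: baseline[d] / times[d] for d in datasets}
               for c, times in runtime.items()}
    # Best speedup achieved on each data set by any combination.
    best = {d: max(speedup[c][d] for c in runtime) for d in datasets}
    # Worst-case fraction of the best that each combination attains.
    worst_fraction = {c: min(speedup[c][d] / best[d] for d in datasets)
                      for c in runtime}
    return max(worst_fraction, key=worst_fraction.get)
```

Applied per program to KDataSets-style measurements, a selected combination whose worst-case fraction is at least 0.86 corresponds to the 86% figure quoted above for ICC.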

References

  1. EEMBC: The Embedded Microprocessor Benchmark Consortium. http://www.eembc.org.Google ScholarGoogle Scholar
  2. cBench: Collective Benchmarks. http://www.ctuning.org/ cbench.Google ScholarGoogle Scholar
  3. PAPI: A Portable Interface to Hardware Performance Counters. http: //icl.cs.utk.edu/papi.Google ScholarGoogle Scholar
  4. F. Agakov, E. Bonilla, J. Cavazos, B. Franke, G. Fursin, M. F. P. O'Boyle, J. Thomson, M. Toussaint, and C. K. I. Williams. Using machine learning to focus iterative optimization. In Proceedings of the International Symposium on Code Generation and Optimization (CGO), pages 295--305, March 2006. Google ScholarGoogle ScholarDigital LibraryDigital Library
  5. M. Arnold, A. Welc, and V.T.Rajan. Improving virtual machine performance using a cross-run profile repository. In Proceedings of the ACM Conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA), pages 297--311, October 2005. Google ScholarGoogle ScholarDigital LibraryDigital Library
  6. P. Berube and J. Amaral. Aestimo: a feedback-directed optimization evaluation tool. In Proceedings of the IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), pages 251--260, March 2006.Google ScholarGoogle ScholarCross RefCross Ref
  7. C. Bienia, S. Kumar, J. P. Singh, and K. Li. The PARSEC benchmark suite: characterization and architectural implications. In Proceedings of the 17th International Conference on Parallel Architectures and Compilation Techniques (PACT), pages 72--81, October 2008. Google ScholarGoogle ScholarDigital LibraryDigital Library
  8. J. Cavazos, G. Fursin, F. Agakov, E. Bonilla, M. F. P. O'Boyle, and O. Temam. Rapidly selecting good compiler optimizations using performance counters. In Proceedings of the International Symposium on Code Generation and Optimization (CGO), pages 185--197, March 2007. Google ScholarGoogle ScholarDigital LibraryDigital Library
  9. K. Cooper, P. Schielke, and D. Subramanian. Optimizing for reduced code space using genetic algorithms. In Proceedings of the Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES), pages 1--9, July 1999. Google ScholarGoogle ScholarDigital LibraryDigital Library
  10. K. D. Cooper, A. Grosul, T. J. Harvey, S. Reeves, D. Subramanian, L. Torczon, and T. Waterman. ACME: adaptive compilation made efficient. In Proceedings of the ACM SIGPLAN/SIGBED Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES), pages 69--77, July 2005. Google ScholarGoogle ScholarDigital LibraryDigital Library
  11. L. Eeckhout, H. Vandierendonck, and K. De Bosschere. Quantifying the impact of input data sets on program behavior and its applications. Journal of Instruction-Level Parallelism, 5:1--33, February 2003.Google ScholarGoogle Scholar
  12. B. Franke, M. O'Boyle, J. Thomson, and G. Fursin. Probabilistic source-level optimisation of embedded programs. In Proceedings of the ACM SIGPLAN/SIGBED Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES), pages 78--86, July 2005. Google ScholarGoogle ScholarDigital LibraryDigital Library
  13. G. Fursin and O. Temam. Collective optimization. In Proceedings of the International Conference on High Performance Embedded Architectures & Compilers (HiPEAC), pages 34--49, January 2009. Google ScholarGoogle ScholarDigital LibraryDigital Library
  14. G. Fursin, J. Cavazos, M. O'Boyle, and O. Temam. Midatasets: Creating the conditions for a more realistic evaluation of iterative optimization. In Proceedings of the International Conference on High Performance Embedded Architectures & Compilers (HiPEAC), pages 245--260, January 2007. Google ScholarGoogle ScholarDigital LibraryDigital Library
  15. M. Guthaus, J. Ringenberg, D. Ernst, T. Austin, T. Mudge, and R. Brown. Mibench: A free, commercially representative embedded benchmark suite. In Proceedings of the IEEE Fourth Annual International Workshop on Workload Characterization (WWC), pages 3--14, December 2001. Google ScholarGoogle ScholarDigital LibraryDigital Library
  16. M. Haneda, P. Knijnenburg, and H. Wijshoff. On the impact of data input sets on statistical compiler tuning. In Proceedings of the 20th IEEE International Parallel and Distributed Processing Symposium (IPDPS), April 2006. Google ScholarGoogle ScholarDigital LibraryDigital Library
  17. K. Hoste and L. Eeckhout. Cole: compiler optimization level exploration. In Proceedings of the Sixth Annual IEEE/ACM International Symposium on Code Generation and Optimization (CGO), pages 165--174, April 2008. Google ScholarGoogle ScholarDigital LibraryDigital Library
  18. K. Hoste and L. Eeckhout. Comparing benchmarks using key microarchitecture-independent characteristics. In Proceedings of the IEEE International Symposium on Workload Characterization (IISWC), pages 83--92, October 2006.Google ScholarGoogle ScholarCross RefCross Ref
  19. K. Hoste, A. Georges, and L. Eeckhout. Automated just-in-time compiler tuning. In Proceedings of the Eighth Annual IEEE/ACM International Symposium on Code Generation and Optimization (CGO), April 2010. Google ScholarGoogle ScholarDigital LibraryDigital Library
  20. W. C. Hsu, H. Chen, P. C. Yew, and D.-Y. Chen. On the predictability of program behavior using different input data sets. In Proceedings of the Sixth Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT), pages 45--53, February 2002. Google ScholarGoogle ScholarDigital LibraryDigital Library
  21. Y. Jiang, E. Z. Zhang, K. Tian, F. Mao, M. Gethers, X. Shen, and Y. Gao. Exploiting statistical correlations for proactive prediction of program behaviors. In Proceedings of the International Symposium on Code Generation and Optimization (CGO), April 2010. Google ScholarGoogle ScholarDigital LibraryDigital Library
  22. P. Kulkarni, S. Hines, J. Hiser, D. Whalley, J. Davidson, and D. Jones. Fast searches for effective optimization phase sequences. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), pages 171--182, June 2004. Google ScholarGoogle ScholarDigital LibraryDigital Library
  23. G. Magklis, M. L. Scott, G. Semeraro, D. H. Albonesi, and S. Dropsho. Profile-based dynamic voltage and frequency scaling for a multiple clock domain microprocessor. In Proceedings of the 30th Annual International Symposium on Computer Architecture (ISCA), pages 14-- 27, June 2003. Google ScholarGoogle ScholarDigital LibraryDigital Library
  24. F. Mao, E. Z. Zhang, and X. Shen. Influence of program inputs on the selection of garbage collectors. In Proceedings of the ACM SIGPLAN/ SIGOPS International Conference on Virtual Execution Environments (VEE), pages 91--100, March 2009. Google ScholarGoogle ScholarDigital LibraryDigital Library
  25. F. Matteo and S. Johnson. FFTW: An adaptive software architecture for the FFT. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), volume 3, pages 1381--1384, May 1998.Google ScholarGoogle Scholar
  26. T. Mytkowicz, A. Diwan, M. Hauswirth, and P. F. Sweeney. Producing wrong data without doing anything obviously wrong! In Proceeding of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pages 265--276, February 2009. Google ScholarGoogle ScholarDigital LibraryDigital Library
  27. Z. Pan and R. Eigenmann. Fast and effective orchestration of compiler optimizations for automatic performance tuning. In Proceedings of the International Symposium on Code Generation and Optimization (CGO), pages 319--332, March 2006. Google ScholarGoogle ScholarDigital LibraryDigital Library
  28. K. Sankaranarayanan and K. Skadron. Profile-based adaptation for cache decay. ACM Transactions on Architecture and Code Optimization (TACO), 1:305--322, September 2004. Google ScholarGoogle ScholarDigital LibraryDigital Library
  29. M. Stephenson, M. Martin, and U. O'Reilly. Meta optimization: Improving compiler heuristics with machine learning. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), pages 77--90, June 2003. Google ScholarGoogle ScholarDigital LibraryDigital Library
  30. M. W. Stephenson. Automating the Construction of Compiler Heuristics Using Machine Learning. PhD thesis, MIT, USA, January 2006. Google ScholarGoogle ScholarDigital LibraryDigital Library
  31. R. C. Whaley, A. Petitet, and J. Dongarra. Automated empirical optimization of software and the atlas project. In Parallel Computing, March 2001.Google ScholarGoogle Scholar
  32. Y. Zhong, X. Shen, and C. Ding. Program locality analysis using reuse distance. Transactions on Programming Languages and Systems (TOPLAS), 31(6):1--39, Aug. 2009. Google ScholarGoogle ScholarDigital LibraryDigital Library

Published in:

• PLDI '10: Proceedings of the 31st ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2010, 514 pages. ISBN: 9781450300193. DOI: 10.1145/1806596.
• ACM SIGPLAN Notices, Volume 45, Issue 6 (PLDI '10), June 2010, 496 pages. ISSN: 0362-1340. EISSN: 1558-1160. DOI: 10.1145/1809028.

      Copyright © 2010 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



Qualifiers: research-article

Overall Acceptance Rate: 406 of 2,067 submissions, 20%

