
Evidence-based static branch prediction using machine learning

Published: 01 January 1997

Abstract

Correctly predicting the direction that branches will take is increasingly important in today's wide-issue computer architectures. Program-based branch prediction refers to static branch prediction techniques that base their predictions on a program's structure. In this article, we investigate a new approach to program-based branch prediction that uses a body of existing programs to predict the branch behavior of a new program. We call this approach evidence-based static prediction, or ESP. The main idea of ESP is that the behavior of a corpus of programs can be used to infer the behavior of new programs. We use neural networks and decision trees to map static features associated with each branch to a prediction that the branch will be taken. ESP shows significant advantages over other prediction mechanisms: it is program-based; it is effective across a range of programming languages and programming styles; and it does not rely on expert-defined heuristics. We describe the application of ESP to static branch prediction, compare our results to existing program-based branch predictors, and investigate the applicability of ESP across computer architectures, programming languages, compilers, and run-time systems. We also provide results showing how sensitive ESP is to the number and type of static features and programs included in the ESP training sets, and we compare the efficacy of static branch prediction for subroutine libraries. Averaging over a body of 43 C and Fortran programs, ESP branch prediction yields a miss rate of 20%, compared with the 25% miss rate obtained using the best existing program-based heuristics.
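To make the ESP idea concrete, the following is a minimal sketch of the decision-tree variant: a tiny ID3-style tree is trained on a toy corpus of (static features, mostly-taken label) pairs, then queried for an unseen branch. The feature names, the corpus, and the tree depth are illustrative assumptions for this sketch, not the features or data used in the paper.

```python
import math
from collections import Counter

# Toy training corpus (illustrative, not from the paper). Each entry pairs the
# static features of a branch with a label: 1 if profile runs of the training
# programs showed the branch was mostly taken, 0 otherwise.
CORPUS = [
    ({"direction": "backward", "opcode": "ne", "loop": 1}, 1),
    ({"direction": "backward", "opcode": "eq", "loop": 1}, 1),
    ({"direction": "backward", "opcode": "lt", "loop": 1}, 1),
    ({"direction": "forward",  "opcode": "eq", "loop": 0}, 0),
    ({"direction": "forward",  "opcode": "ne", "loop": 0}, 0),
    ({"direction": "forward",  "opcode": "lt", "loop": 1}, 1),
    ({"direction": "forward",  "opcode": "eq", "loop": 1}, 0),
    ({"direction": "backward", "opcode": "eq", "loop": 0}, 1),
]

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def build_tree(rows, depth=2):
    """Depth-limited ID3: split on the feature with the highest information gain."""
    labels = [y for _, y in rows]
    majority = Counter(labels).most_common(1)[0][0]
    if depth == 0 or len(set(labels)) == 1:
        return majority  # leaf: predict the majority label

    def gain(feature):
        remainder = 0.0
        for value in set(x[feature] for x, _ in rows):
            sub = [y for x, y in rows if x[feature] == value]
            remainder += len(sub) / len(rows) * entropy(sub)
        return entropy(labels) - remainder

    best = max(rows[0][0], key=gain)
    node = {"feature": best, "default": majority, "children": {}}
    for value in set(x[best] for x, _ in rows):
        sub = [(x, y) for x, y in rows if x[best] == value]
        node["children"][value] = build_tree(sub, depth - 1)
    return node

def predict(tree, branch):
    """Walk the tree; fall back to the node's majority label on unseen values."""
    while isinstance(tree, dict):
        tree = tree["children"].get(branch[tree["feature"]], tree["default"])
    return tree

tree = build_tree(CORPUS)
# Backward branches (typically loop closers) come out predicted taken.
print(predict(tree, {"direction": "backward", "opcode": "lt", "loop": 1}))
```

A learned tree of this shape is easy to inspect: its top splits tend to recover rules resembling hand-written heuristics (e.g. "backward branches are taken"), which is one way to see why ESP can match or beat expert-defined heuristics without requiring an expert to write them.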

