
Data Dependence Analysis Techniques for Increased Accuracy and Extracted Parallelism

Published in: International Journal of Parallel Programming

Abstract

Parallelizing compilers rely on data dependence information in order to produce valid parallel code. Traditional data dependence analysis techniques, such as the Banerjee test and the I-Test, can efficiently compute data dependence information for simple instances of the data dependence problem. However, in more complex cases involving triangular or trapezoidal loop regions, symbolic variables, and multidimensional arrays with coupled subscripts, these tests, including the triangular Banerjee test, ignore or simplify many of the constraints and thus introduce approximations, especially when testing for data dependence under direction vector constraints. The Omega test can accurately handle such complex cases, but at a higher computational cost. In this paper we extend the ideas behind the I-Test and present new techniques to handle complex instances of the dependence problem, which are frequently found in actual source code. In particular, we provide polynomial-time techniques that can prove or disprove data dependences, subject to any direction vector, in loops with triangular or trapezoidal bounds, symbolic variables, and multidimensional arrays with coupled subscripts. We also investigate the impact of the proposed data dependence analysis techniques in practice. We perform an extensive experimental evaluation of the data dependence tests, including the I-Test, the Omega test, and the proposed new techniques, and compare them in terms of data dependence accuracy, compilation efficiency, and effectiveness in program parallelization. We run several experiments using the Perfect Club benchmarks and the scientific library Lapack. We analyze the trade-off between accuracy and efficiency and the reasons for any approximation made by each data dependence test, determine the impact of the dependence analysis phase on the total compilation time, and measure the number of loops parallelized by each test. We conclude that polynomial-time techniques can be employed to improve data dependence accuracy and increase program parallelization at a reasonable computation cost.
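To make the classical tests discussed above concrete, the following is a minimal Python sketch of two of them applied to a single subscript pair, i.e., deciding whether a*i + a0 = b*j + b0 can hold for loop indices i and j: the GCD test (an integer solution exists only if gcd(a, b) divides b0 - a0) and a simplified Banerjee-style bounds test (a dependence is possible only if 0 lies between the minimum and maximum of a*i - b*j + a0 - b0 over the loop range). The function names are illustrative, and assuming, for simplicity, that both index variables share the same rectangular bounds [lo, hi]; the paper's contribution concerns precisely the harder cases (triangular/trapezoidal bounds, symbolic variables, coupled subscripts) that this sketch does not cover.

```python
from math import gcd

def gcd_test(a, b, a0, b0):
    """GCD test for a*i + a0 == b*j + b0.

    An integer solution (i, j) exists only if gcd(a, b)
    divides (b0 - a0); returns False when dependence is disproved.
    """
    return (b0 - a0) % gcd(a, b) == 0

def banerjee_test(a, b, a0, b0, lo, hi):
    """Simplified Banerjee-style bounds test over a common range [lo, hi].

    Dependence is possible only if 0 lies between the minimum and
    maximum of f(i, j) = a*i - b*j + a0 - b0 over the loop bounds.
    """
    def extremes(c):
        # Minimum and maximum of c*x for x in [lo, hi].
        vals = (c * lo, c * hi)
        return min(vals), max(vals)

    imin, imax = extremes(a)    # contribution of a*i
    jmin, jmax = extremes(-b)   # contribution of -b*j
    fmin = imin + jmin + a0 - b0
    fmax = imax + jmax + a0 - b0
    return fmin <= 0 <= fmax

# Example: a write to A(2*i) and a read of A(2*i + 1) never overlap,
# since gcd(2, 2) = 2 does not divide 1.
print(gcd_test(2, 2, 0, 1))          # no dependence possible
# Example: A(i) vs. A(j + 100) with 1 <= i, j <= 10 cannot overlap,
# since i - j - 100 ranges over [-109, -91] and never reaches 0.
print(banerjee_test(1, 1, 0, 100, 1, 10))
```

Both tests are conservative in the same sense the abstract describes: a True result means only that a dependence could not be disproved, not that one necessarily exists.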



Cite this article

Kyriakopoulos, K., Psarris, K. Data Dependence Analysis Techniques for Increased Accuracy and Extracted Parallelism. International Journal of Parallel Programming 32, 317–359 (2004). https://doi.org/10.1023/B:IJPP.0000035817.01263.d0
