
Exploiting Parallelism Available in Loops Using Abstract Syntax Tree

  • Conference paper
Emerging Research in Computing, Information, Communication and Applications

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 882)

Abstract

The performance of a program depends on two factors: the hardware of the machine executing it and the parallelism that can be exploited for concurrent execution. Loops with many iterations are a rich source of parallelism in an application and can be exploited to reduce overall execution time and increase performance. The Abstract Syntax Tree (AST) is an effective tool for exploiting this parallelism at the compiler level: it saves time and automates the decomposition of a job into units that can run in a parallel framework. Because an AST can be used to detect loops in source code, this approach can serve as the basis of a new parallel computing framework in which ordinary code written for sequential machines is parallelized by the framework itself.
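
To make the core idea concrete, here is a minimal sketch (not the authors' implementation) using Python's standard ast module: it parses source code and records every for and while loop, the candidates a parallelizing framework would then analyze for independent iterations. The sample SOURCE string and the LoopFinder class are hypothetical illustrations.

    import ast

    SOURCE = """
    total = 0
    for i in range(1000):
        total += i * i

    while total > 0:
        total -= 1
    """

    class LoopFinder(ast.NodeVisitor):
        # Collect (kind, line number) for every loop found in the tree.
        def __init__(self):
            self.loops = []

        def visit_For(self, node):
            self.loops.append(("for", node.lineno))
            self.generic_visit(node)  # keep walking to find nested loops

        def visit_While(self, node):
            self.loops.append(("while", node.lineno))
            self.generic_visit(node)

    finder = LoopFinder()
    finder.visit(ast.parse(SOURCE))
    for kind, line in finder.loops:
        print(f"{kind} loop at line {line}")

A real framework would go further for each detected loop, checking its iterations for data dependences before deciding whether they can be dispatched concurrently; the sketch only shows the detection step that the AST makes straightforward.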

Author information

Correspondence to Anil Kumar.

Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Kumar, A., Singh, H. (2019). Exploiting Parallelism Available in Loops Using Abstract Syntax Tree. In: Shetty, N., Patnaik, L., Nagaraj, H., Hamsavath, P., Nalini, N. (eds) Emerging Research in Computing, Information, Communication and Applications. Advances in Intelligent Systems and Computing, vol 882. Springer, Singapore. https://doi.org/10.1007/978-981-13-5953-8_47

  • DOI: https://doi.org/10.1007/978-981-13-5953-8_47

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-5952-1

  • Online ISBN: 978-981-13-5953-8

  • eBook Packages: Engineering (R0)
