
A Global View Programming Abstraction for Transitioning MPI Codes to PGAS Languages

  • Conference paper
OpenSHMEM and Related Technologies. Experiences, Implementations, and Tools (OpenSHMEM 2014)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 8356)


Abstract

The multicore generation of high performance scientific computing has provided a platform for the realization of exascale computing, and it has also underscored the need for new paradigms for coding parallel applications. The current standard practice for writing parallel applications requires programmers to use languages designed for sequential execution. These languages offer abstractions that allow programmers to operate only on a process-centric, local view of data. To provide languages better suited to parallel execution, many research efforts have designed languages based on the Partitioned Global Address Space (PGAS) programming model. Chapel is one of the more recent languages developed using this model; it supports multithreaded execution with high-level abstractions for parallelism. With Chapel in mind, we have developed a set of directives that serve as intermediate expressions for transitioning scientific applications from languages designed for sequential execution to PGAS languages such as Chapel, which are designed with parallelism in mind.
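The global-view abstraction that the abstract contrasts with MPI's process-centric local view can be illustrated with a short Chapel sketch. This example is not taken from the paper; the array size, distribution choice, and names are illustrative assumptions, and the `dmapped Block` syntax reflects the Chapel of roughly this era:

```chapel
use BlockDist;

config const n = 1000;

// One logical array, distributed block-wise across all locales.
// The programmer sees a single global index space rather than
// per-process local buffers with explicit sends and receives.
const D = {1..n} dmapped Block(boundingBox={1..n});
var A: [D] real;

// A single global-view loop; the runtime partitions iterations
// across locales with no rank arithmetic or message passing.
forall i in D do
  A[i] = i * 2.0;

// A global reduction over the distributed array.
writeln(+ reduce A);
```

In an equivalent MPI program, each rank would allocate only its local slice, translate between global and local indices by hand, and perform the final reduction with an explicit `MPI_Reduce`; hiding that bookkeeping is what the global-view abstraction provides.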




Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Mintz, T.M., Hernandez, O., Bernholdt, D.E. (2014). A Global View Programming Abstraction for Transitioning MPI Codes to PGAS Languages. In: Poole, S., Hernandez, O., Shamis, P. (eds) OpenSHMEM and Related Technologies. Experiences, Implementations, and Tools. OpenSHMEM 2014. Lecture Notes in Computer Science, vol 8356. Springer, Cham. https://doi.org/10.1007/978-3-319-05215-1_9


  • DOI: https://doi.org/10.1007/978-3-319-05215-1_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-05214-4

  • Online ISBN: 978-3-319-05215-1

  • eBook Packages: Computer Science (R0)
