Abstract
A merged request is a handle representing a group of Remote Memory Access (RMA), atomic, or collective operations. A merged request can be created either by combining multiple outstanding merged request handles or by issuing additional operations on the same handle. We show that introducing these simple yet powerful semantics into OpenSHMEM provides many productivity and performance advantages. In this paper, we first introduce the interfaces and semantics for creating and using merged request handles. We then demonstrate that merged requests achieve better performance characteristics in multithreaded OpenSHMEM applications. In particular, we show that one can achieve a higher message rate, higher bandwidth for smaller messages, and better computation-communication overlap. Further, we use merged requests to realize multithreaded collectives, in which multiple threads cooperate to complete a collective operation. Our experimental results show that in a multithreaded OpenSHMEM program, merged-request-based RMA operations achieve over 100 million messages per second (MMPS). In a single-threaded environment, they achieve over 10 MMPS, compared to 4.5 MMPS with the default RMA operations. We also achieve higher bandwidth for smaller message sizes, close to 100% computation-communication overlap, and a 60% reduction in latency.
This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
Acknowledgment
This work is supported by the United States Department of Defense and used resources of the Extreme Scale Systems Center located at the Oak Ridge National Laboratory.
Copyright information
© 2018 Springer International Publishing AG
Cite this paper
Boehm, S., Pophale, S., Baker, M.B., Venkata, M.G. (2018). Merged Requests for Better Performance and Productivity in Multithreaded OpenSHMEM. In: Gorentla Venkata, M., Imam, N., Pophale, S. (eds) OpenSHMEM and Related Technologies. Big Compute and Big Data Convergence. OpenSHMEM 2017. Lecture Notes in Computer Science(), vol 10679. Springer, Cham. https://doi.org/10.1007/978-3-319-73814-7_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-73813-0
Online ISBN: 978-3-319-73814-7