
Analysis of scalable data-privatization threading algorithms for hybrid MPI/OpenMP parallelization of molecular dynamics


Abstract

We propose and analyze threading algorithms for hybrid MPI/OpenMP parallelization of a molecular-dynamics simulation that are scalable on large multicore clusters. We introduce two data-privatization thread-scheduling algorithms based on nucleation-growth allocation: (1) compact-volume allocation scheduling (CVAS) and (2) breadth-first allocation scheduling (BFAS). The algorithms combine fine-grain dynamic load balancing with minimal-memory-footprint data-privatization threading. We show that the computational costs of CVAS and BFAS are bounded by Θ(n^{5/3}p^{-2/3}) and Θ(n), respectively, for p threads working on n particles on a multicore compute node. Memory consumption per node of both algorithms scales as O(n + n^{2/3}p^{1/3}), but CVAS has smaller prefactors due to a geometric effect. Based on these analyses, we derive a criterion for selecting between the two algorithms in terms of the granularity n/p. We observe that memory consumption is reduced by 75% for p = 16 and n = 8,192 compared with naïve data privatization, while thread imbalance remains below 5%. We obtain a strong-scaling speedup of 14.4 with 16-way threading on a node with four quad-core AMD Opteron processors. In addition, our MPI/OpenMP code achieves 2.58× and 2.16× speedups over the MPI-only implementation on 32,768 BlueGene/P cores for 0.84- and 1.68-million-particle systems, respectively.
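The memory bound admits a simple geometric reading. As a back-of-the-envelope sketch (our illustration, not a derivation taken from the paper): suppose each thread is assigned a compact spatial region holding roughly n/p particles, and only the particles near that region's boundary, of which a compact volume has O((n/p)^{2/3}), need privatized copies to avoid write conflicts with neighboring threads. Summing over the p threads then recovers the stated bound:

```latex
\[
  M(n,p) \;=\; \underbrace{O(n)}_{\text{shared arrays}}
  \;+\; \underbrace{p \cdot O\!\big((n/p)^{2/3}\big)}_{\text{privatized boundary copies}}
  \;=\; O\big(n + n^{2/3}\,p^{1/3}\big).
\]
```

For contrast, the naïve data privatization against which the 75% figure is measured gives every thread a full private copy of the force array, costing Θ(np) extra memory. The sketch below illustrates that baseline in C with OpenMP; the pair kernel, function names, and array layout are our own illustrative choices and are not taken from the paper.

```c
#include <math.h>
#include <stdlib.h>
#include <string.h>
#include <omp.h>

/* Hypothetical pair kernel (placeholder physics): soft repulsion. */
static void pair_force(const double ri[3], const double rj[3], double fij[3])
{
    double r[3], r2 = 1e-12;  /* small offset avoids division by zero */
    for (int d = 0; d < 3; d++) { r[d] = ri[d] - rj[d]; r2 += r[d] * r[d]; }
    double inv = 1.0 / (r2 * sqrt(r2));
    for (int d = 0; d < 3; d++) fij[d] = r[d] * inv;
}

/* Naive data-privatization force loop: each thread accumulates into
 * its own full-size private force array (O(n*p) memory in total),
 * then the private copies are reduced into the shared array. CVAS
 * and BFAS shrink the privatized storage to O(n + n^{2/3} p^{1/3}). */
void compute_forces(int n, const double (*pos)[3], double (*force)[3],
                    const int (*pairs)[2], int npairs)
{
    int p = omp_get_max_threads();
    double (*priv)[3] = calloc((size_t)n * p, sizeof *priv);
    if (!priv) return;
    memset(force, 0, (size_t)n * sizeof *force);

    #pragma omp parallel
    {
        /* this thread's private slice of the privatized storage */
        double (*f)[3] = priv + (size_t)omp_get_thread_num() * n;

        /* each pair updates two particles, so without privatization
         * (or atomics) concurrent threads would race on force[] */
        #pragma omp for
        for (int k = 0; k < npairs; k++) {
            int i = pairs[k][0], j = pairs[k][1];
            double fij[3];
            pair_force(pos[i], pos[j], fij);
            for (int d = 0; d < 3; d++) {
                f[i][d] += fij[d];   /* Newton's third law: equal and */
                f[j][d] -= fij[d];   /* opposite forces on i and j    */
            }
        }
        /* implicit barrier here, then reduce the p private copies */
        #pragma omp for
        for (int i = 0; i < n; i++)
            for (int t = 0; t < p; t++)
                for (int d = 0; d < 3; d++)
                    force[i][d] += priv[(size_t)t * n + i][d];
    }
    free(priv);
}
```

With p = 16 threads and n = 8,192 particles, this baseline privatizes 16 full force arrays; that is the regime in which the paper reports a 75% memory reduction from its scheduling algorithms.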



Acknowledgements

This work was performed under the auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-JRNL-528373). The work at USC was partially supported by DOE BES/EFRC/SciDAC/SciDAC-e/INCITE and NSF PetaApps/CDI.


Corresponding author

Correspondence to Manaschai Kunaseth.


Cite this article

Kunaseth, M., Richards, D.F., Glosli, J.N. et al. Analysis of scalable data-privatization threading algorithms for hybrid MPI/OpenMP parallelization of molecular dynamics. J Supercomput 66, 406–430 (2013). https://doi.org/10.1007/s11227-013-0915-x
