
LAMMPS’ PPPM Long-Range Solver for the Second Generation Xeon Phi

  • Conference paper
  • In: High Performance Computing (ISC High Performance 2017)

Abstract

Molecular Dynamics is an important tool for computational biologists, chemists, and materials scientists, consuming a sizable amount of supercomputing resources. Many of the investigated systems contain charged particles, which can only be simulated accurately using a long-range solver, such as PPPM. We extend the popular LAMMPS molecular dynamics code with an implementation of PPPM particularly suitable for the second generation Intel Xeon Phi. Our main target is the optimization of computational kernels by means of vectorization, and we observe speedups in these kernels of up to 12×. These improvements carry over to LAMMPS users as overall speedups of 2–3×, without requiring users to retune input parameters. Furthermore, our optimizations make it easier for users to determine the input parameters that attain top performance.
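The mesh part of PPPM revolves around stencil kernels that interpolate particle charges onto a regular grid (and field values back to particles); these are the kind of computational kernels the abstract refers to. The sketch below is a minimal, hypothetical illustration of particle-to-mesh charge assignment in C++, written only to make the idea concrete: it uses the simplest cloud-in-cell (order-2) weights rather than the higher-order assignment functions LAMMPS' PPPM normally uses, and the Grid and assign_charge names are invented for this example, not taken from the authors' optimized implementation.

// Minimal sketch of PPPM-style charge assignment (particle -> mesh).
// NOT the paper's optimized kernel: cloud-in-cell (order-2) weights only,
// whereas LAMMPS' PPPM typically uses higher-order stencils.

#include <cstdio>
#include <cmath>
#include <vector>

struct Grid {
    int nx, ny, nz;          // mesh points per dimension
    double lo, h;            // box lower bound and mesh spacing (cubic box assumed)
    std::vector<double> rho; // charge density, flattened

    Grid(int n, double lo_, double hi)
        : nx(n), ny(n), nz(n), lo(lo_), h((hi - lo_) / n),
          rho((size_t)n * n * n, 0.0) {}

    double& at(int i, int j, int k) {
        // periodic wrap-around, as on a periodic PPPM mesh
        i = (i % nx + nx) % nx;
        j = (j % ny + ny) % ny;
        k = (k % nz + nz) % nz;
        return rho[((size_t)i * ny + j) * nz + k];
    }
};

// Spread one charge q at (x,y,z) onto the 8 surrounding mesh points
// with trilinear (cloud-in-cell) weights.
void assign_charge(Grid& g, double x, double y, double z, double q) {
    double fx = (x - g.lo) / g.h, fy = (y - g.lo) / g.h, fz = (z - g.lo) / g.h;
    int ix = (int)std::floor(fx), iy = (int)std::floor(fy), iz = (int)std::floor(fz);
    double dx = fx - ix, dy = fy - iy, dz = fz - iz;
    double wx[2] = {1.0 - dx, dx}, wy[2] = {1.0 - dy, dy}, wz[2] = {1.0 - dz, dz};

    // The fixed-size triple stencil loop is the natural vectorization target:
    // for a given interpolation order its trip counts are known at compile time.
    for (int a = 0; a < 2; ++a)
        for (int b = 0; b < 2; ++b)
            for (int c = 0; c < 2; ++c)
                g.at(ix + a, iy + b, iz + c) += q * wx[a] * wy[b] * wz[c];
}

int main() {
    Grid grid(16, 0.0, 10.0);
    // Two opposite charges; the total mesh charge should stay ~0.
    assign_charge(grid, 2.3, 4.7, 5.1, +1.0);
    assign_charge(grid, 7.9, 1.2, 8.8, -1.0);

    double total = 0.0;
    for (double v : grid.rho) total += v;
    std::printf("total mesh charge = %.3e (expect ~0)\n", total);
    return 0;
}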




Acknowledgments

The authors gratefully acknowledge financial support from the Deutsche Forschungsgemeinschaft (German Research Foundation) through grant GSC 111, and from Intel Corporation via the Intel Parallel Computing Center initiative.

Author information

Correspondence to William McDoniel.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

McDoniel, W., Höhnerbach, M., Canales, R., Ismail, A.E., Bientinesi, P. (2017). LAMMPS’ PPPM Long-Range Solver for the Second Generation Xeon Phi. In: Kunkel, J.M., Yokota, R., Balaji, P., Keyes, D. (eds) High Performance Computing. ISC High Performance 2017. Lecture Notes in Computer Science, vol. 10266. Springer, Cham. https://doi.org/10.1007/978-3-319-58667-0_4

  • DOI: https://doi.org/10.1007/978-3-319-58667-0_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-58666-3

  • Online ISBN: 978-3-319-58667-0

