DOI: 10.1145/1188455.1188504

High-performance computing for exact numerical approaches to quantum many-body problems on the earth simulator

Published: 11 November 2006

ABSTRACT

In order to study quantum many-body problems, we develop two matrix diagonalization codes: one solves for the ground state only, and the other for all quantum states. The target model in both codes is the Hubbard model with a confinement potential, which describes an atomic Fermi gas loaded on an optical lattice and, in part, high-Tc cuprate superconductors. With the former code, we obtain 18.692 TFlops (57% of the peak) as the best performance on the Earth Simulator when calculating the ground state of a 100-billion-dimensional matrix. From these large-scale calculations, we find an atomic-scale inhomogeneous superfluid state, which is currently a challenging subject for physicists. With the latter code, we succeed in solving a matrix of dimension 375,000, locally reaching 24.6 TFlops (75% of the peak). The calculations reveal that a change from a Schrödinger's-cat-like state to a classical-like one can be controlled by tuning the interaction, in marked contrast to the general expectation.
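As a toy illustration of the two numerical tasks described above, the following Python/SciPy sketch (not the authors' Earth Simulator code) builds the Hamiltonian of a tiny one-dimensional Hubbard chain with a harmonic confinement potential and solves it both ways: the ground state only, with a sparse iterative eigensolver, and the full spectrum, with a dense diagonalization. All sizes and couplings (L, N_up, N_dn, t, U, V_trap) are hypothetical toy values chosen so the matrix dimension stays at 400.

```python
# Minimal sketch: exact diagonalization of a tiny 1D Hubbard chain with a
# harmonic confinement (trap) potential.  It contrasts the two numerical
# tasks in the abstract: (i) an iterative sparse eigensolve for the ground
# state only, and (ii) a dense diagonalization for the full spectrum.
# All parameters below are hypothetical toy values, not the paper's setup.
import itertools
import numpy as np
from scipy.sparse import csr_matrix, identity, kron, lil_matrix
from scipy.sparse.linalg import eigsh

L, N_up, N_dn = 6, 3, 3           # lattice sites and particle numbers (toy)
t, U, V_trap = 1.0, 4.0, 0.5      # hopping, on-site repulsion, trap strength

def configs(n_sites, n_part):
    """All occupation bit-tuples with n_part fermions on n_sites sites."""
    states = []
    for occ in itertools.combinations(range(n_sites), n_part):
        s = [0] * n_sites
        for site in occ:
            s[site] = 1
        states.append(tuple(s))
    return states

up_states, dn_states = configs(L, N_up), configs(L, N_dn)

def hopping(states):
    """Nearest-neighbour hopping within one spin sector.  With an open
    chain, adjacent-site hops pick up no fermionic sign."""
    index = {s: i for i, s in enumerate(states)}
    h = lil_matrix((len(states), len(states)))
    for a, s in enumerate(states):
        for i in range(L - 1):
            if s[i] != s[i + 1]:              # one site filled, one empty
                new = list(s)
                new[i], new[i + 1] = s[i + 1], s[i]
                h[index[tuple(new)], a] += -t
    return h.tocsr()

h_up, h_dn = hopping(up_states), hopping(dn_states)

# Site-resolved number operators (diagonal in each sector's basis).
n_up = [csr_matrix(np.diag([float(s[i]) for s in up_states])) for i in range(L)]
n_dn = [csr_matrix(np.diag([float(s[i]) for s in dn_states])) for i in range(L)]
I_up, I_dn = identity(len(up_states)), identity(len(dn_states))

# Harmonic confinement centred on the chain.
trap = V_trap * (np.arange(L) - (L - 1) / 2.0) ** 2

# Full Hamiltonian on the tensor-product basis: hopping + Hubbard U + trap.
H = kron(h_up, I_dn) + kron(I_up, h_dn)
for i in range(L):
    H = H + U * kron(n_up[i], n_dn[i])
    H = H + trap[i] * (kron(n_up[i], I_dn) + kron(I_up, n_dn[i]))
H = H.tocsr()

# (i) Ground state only, via a sparse iterative (Lanczos-type) eigensolver --
#     the strategy that scales to very large dimensions.
E0 = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]

# (ii) Full spectrum, via dense diagonalization -- feasible only while the
#      matrix fits in memory as a dense array.
spectrum = np.linalg.eigvalsh(H.toarray())

print("dimension:", H.shape[0])
print("ground-state energy (sparse):", E0)
print("ground-state energy (dense): ", spectrum[0])
```

The contrast roughly mirrors the two codes above: a ground-state-only solver needs nothing beyond matrix-vector products, so the Hamiltonian can remain sparse (or matrix-free) and the dimension can grow enormously, whereas computing every eigenstate requires dense reduction of the matrix and therefore a far smaller dimension.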


Published in

SC '06: Proceedings of the 2006 ACM/IEEE conference on Supercomputing
November 2006, 746 pages
ISBN: 0769527000
DOI: 10.1145/1188455
Copyright © 2006 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

SC '06 paper acceptance rate: 54 of 239 submissions, 23%. Overall acceptance rate: 1,516 of 6,373 submissions, 24%.
