Abstract
Many world-leading high-end computing (HEC) facilities now offer over 100 Teraflop/s of performance, and several initiatives have begun to look forward to Petascale computing (10^15 flop/s). Los Alamos National Laboratory and Oak Ridge National Laboratory (ORNL) already operate Petascale systems, which lead the current (Nov 2008) TOP500 list [1]. Computing at the Petascale raises a number of significant challenges for parallel computational fluid dynamics codes. Most significantly, further improvements to the performance of individual processors will be limited, and Petascale systems are therefore likely to contain 100,000+ processors. A critical aspect of utilising high Terascale and Petascale resources is thus the scalability of the underlying numerical methods: both the scaling of execution time with the number of processors and the scaling of execution time with problem size. In this paper we analyse the performance of several CFD codes for a range of datasets on some of the latest high-performance computing architectures. These include Direct Numerical Simulation (DNS) via the SBLI [2] and SENGA2 [3] codes, and Large Eddy Simulation (LES) using both STREAMS LES [4] and the general-purpose open-source CFD code Code_Saturne [5].
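The two scalability measures named in the abstract are commonly called strong scaling (fixed problem size, growing processor count) and weak scaling (problem size grows with processor count). A minimal sketch below illustrates why they matter at 100,000+ processors; the timings are hypothetical Amdahl's-law estimates, not measurements from SBLI, SENGA2, or Code_Saturne.

```python
# Hypothetical sketch: strong- and weak-scaling efficiency.
# Timings are generated from Amdahl's law with an assumed serial
# fraction; they are illustrative, not data from the paper.

def strong_scaling_efficiency(t1, tp, p):
    """Fixed problem size: ideal time on p processors is t1 / p."""
    return t1 / (p * tp)

def weak_scaling_efficiency(t1, tp):
    """Problem size grows with p: ideal time stays constant at t1."""
    return t1 / tp

serial = 0.001          # assumed serial fraction (0.1%)
t1 = 100.0              # single-processor runtime, seconds (assumed)

for p in (1_000, 100_000):
    # Amdahl's law: parallel part speeds up p-fold, serial part does not.
    tp = t1 * (serial + (1 - serial) / p)
    print(p, round(strong_scaling_efficiency(t1, tp, p), 3))
# Even a 0.1% serial fraction caps efficiency near 50% at 1,000
# processors and near 1% at 100,000 processors.
```

This is why the paper's focus on the scalability of the underlying numerical methods, rather than single-processor performance, is the critical concern at the Petascale.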
References
Top 500 Supercomputer sites, http://www.top500.org/
N.D. Sandham, M. Ashworth and D.R. Emerson, Direct Numerical Simulation of Shock/Boundary Layer Interaction, http://www.cse.clrc.ac.uk/ceg/sbli.shtml
L. Temmerman, M.A. Leschziner, C.P. Mellen, and J. Fröhlich, Investigation of wall-function approximations and subgrid-scale models in large eddy simulation of separated flow in a channel with streamwise periodic constrictions, International Journal of Heat and Fluid Flow, 24(2): 157–180, 2003.
F. Archambeau, N. Méchitoua, and M. Sakiz, Code_Saturne: a finite volume code for the computation of turbulent incompressible flows - Industrial Applications, Int. J. Finite Volumes, February 2004.
MPI: A Message Passing Interface Standard, Message Passing Interface Forum 1995, http://www.netlib.org/mpi/index.html
EDF Research and Development, http://rd.edf.com/107008i/EDF.fr/Research-and-Development/softwares/Code-Saturne.html
HPCx -The UK’s World-Class Service for World-Class Research, http://www.hpcx.ac.uk
STFC’s Computational Science and Engineering Department, http://www.cse.scitech.ac.uk/
HECToR UK National Supercomputing Service, http://www.hector.ac.uk
Supercomputing at Oak Ridge National Laboratory, http://computing.ornl.gov/supercomputing.shtml
The Green Top 500 List, http://www.green500.org/lists/2007/11/green500.php
M. Bull, Single Node Performance Analysis of Applications on HPCx, HPCx Technical Report HPCxTR0703, 2007, http://www.hpcx.ac.uk/research/hpc/technical_reports/HPCxTR0703.pdf
J. Bonelle, Y. Fournier, F. Jusserand, S. Ploix, L. Maas, B. Quach, Numerical methodology for the study of a fluid flow through a mixing grid, Presentation to Club Utilisateurs Code_Saturne, 2007, http://research.edf.com/fichiers/fckeditor/File/EDFRD/Code_Saturne/ClubU/2007/07-mixing_grid_HPC.pdf
Copyright information
© 2010 Springer Berlin Heidelberg
Cite this paper
Sunderland, A.G., Ashworth, M., Moulinec, C., Li, N., Uribe, J., Fournier, Y. (2010). Towards Petascale Computing with Parallel CFD codes. In: Tromeur-Dervout, D., Brenner, G., Emerson, D., Erhel, J. (eds) Parallel Computational Fluid Dynamics 2008. Lecture Notes in Computational Science and Engineering, vol 74. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-14438-7_33
Print ISBN: 978-3-642-14437-0
Online ISBN: 978-3-642-14438-7