ABSTRACT
The complexity of heterogeneous computing architectures, together with the demand for productive and portable parallel application development, has driven parallel programming models to become more comprehensive and complex than before. Making conventional compilation technologies and software infrastructure parallelism-aware has become one of the main goals of recent compiler development. In this work, we propose the design of a unified parallel intermediate representation (UPIR) for multiple parallel programming models, enabling unified compiler transformations across those models. UPIR specifies three commonly used parallelism patterns (SPMD, data, and task parallelism); data attributes, explicit data movement, and memory management; and the synchronization operations used in parallel programming. We demonstrate UPIR via a prototype implementation in the ROSE compiler that unifies the IR for both OpenMP and OpenACC, in both C/C++ and Fortran; unifies the transformation that lowers both OpenMP and OpenACC code to the LLVM runtime; and exports UPIR to an LLVM MLIR dialect. The full paper extending this abstract is available at https://arxiv.org/abs/2209.10643.
Index Terms
- UPIR: Toward the Design of Unified Parallel Intermediate Representation for Parallel Programming Models