ABSTRACT
The ELLPACK (ELL) sparse matrix storage format suffers from high storage consumption and low efficiency of sparse matrix-vector multiplication (SpMV). To address these problems, we propose an efficient Graphics Processing Unit (GPU)-based ELLPACK-Block (ELLB) sparse matrix storage format. Building on the original ELL format, ELLB adaptively divides the matrix into blocks according to the average number of non-zero elements per row, and uses auxiliary matrices to improve the efficiency of solving SpMV. We apply the ELLB storage format to SpMV on a range of matrices. The experimental results show that, compared with the Perfect Compressed Sparse Row (PCSR) format, the ELLB format saves 50% of the memory space and solves SpMV 7 times faster on average; compared with the Effective Compressed Sparse Row (ECSR) format, memory usage increases by 25%, but SpMV is solved 7.65 times faster on average.
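To make the starting point concrete, the following is a minimal sketch of the baseline ELL format that ELLB builds on; it is an illustration of standard ELL only (the paper's blocking scheme and auxiliary matrices are not reproduced here), and the helper names `to_ell` and `ell_spmv` are our own. ELL pads every row to the length of the longest row, which is exactly the storage overhead that adaptive row-blocking aims to reduce.

```python
# Baseline ELLPACK (ELL) sketch: every row is padded to the width of
# the longest row, so matrices with a few dense rows waste storage.

def to_ell(rows, n_cols):
    """Convert a list of rows, each a list of (col, val) pairs, to padded ELL arrays."""
    width = max((len(r) for r in rows), default=0)  # max non-zeros in any row
    cols, vals = [], []
    for r in rows:
        pad = width - len(r)
        # Padding entries use column 0 with value 0.0, so they add nothing to SpMV.
        cols.append([c for c, _ in r] + [0] * pad)
        vals.append([v for _, v in r] + [0.0] * pad)
    return cols, vals, width

def ell_spmv(cols, vals, x):
    """Compute y = A @ x with A in ELL form; padded zeros contribute nothing."""
    return [sum(v * x[c] for c, v in zip(crow, vrow))
            for crow, vrow in zip(cols, vals)]

# 3x3 example: A = [[1, 0, 2], [0, 3, 0], [4, 5, 6]]
rows = [[(0, 1.0), (2, 2.0)], [(1, 3.0)], [(0, 4.0), (1, 5.0), (2, 6.0)]]
cols, vals, width = to_ell(rows, 3)
y = ell_spmv(cols, vals, [1.0, 1.0, 1.0])  # row sums of A
```

On a GPU, one thread per row reads its padded column/value slices; since the densest row here has 3 non-zeros, every row is stored with width 3, and the blocking in ELLB limits this padding to within each block.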