Article ID | Journal ID | Publication year | English article | Full-text version |
---|---|---|---|---|
524656 | 868810 | 2012 | 24-page PDF | Free download |

The Conjugate Gradient (CG) method is a widely used iterative method for solving linear systems described by a (sparse) matrix. The method requires a large number of sparse matrix-vector (SpMV) multiplications, vector reductions, and other vector operations. We present a number of mappings of the SpMV operation onto modern programmable GPUs using the Block Compressed Sparse Row (BCSR) format. Further, we show that reordering matrix blocks substantially improves the performance of the SpMV operation, especially when small blocks are used, so that our method outperforms existing state-of-the-art approaches in most cases. Finally, we perform a thorough analysis of the performance of both the SpMV and CG methods, which allows us to model and estimate the expected maximum performance for a given (unseen) problem.
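To make the BCSR layout concrete, here is a minimal NumPy sketch of an SpMV in that format — a CPU illustration of the data structure only, not the authors' GPU implementation; the function name, array layout (`blocks`, `block_cols`, `row_ptr`), and block size `r` are choices made for this example:

```python
import numpy as np

def bcsr_spmv(blocks, block_cols, row_ptr, x, r):
    """y = A @ x for A stored in Block Compressed Sparse Row format.

    blocks:     (nblocks, r, r) array of dense r-by-r blocks
    block_cols: block-column index of each stored block
    row_ptr:    CSR-style pointers into `blocks`, one entry per block row
                plus a final sentinel
    """
    n_block_rows = len(row_ptr) - 1
    y = np.zeros(n_block_rows * r)
    for bi in range(n_block_rows):
        acc = np.zeros(r)
        # accumulate the contributions of all blocks in this block row
        for k in range(row_ptr[bi], row_ptr[bi + 1]):
            bj = block_cols[k]
            acc += blocks[k] @ x[bj * r:(bj + 1) * r]
        y[bi * r:(bi + 1) * r] = acc
    return y

# Example: a 4x4 block-diagonal matrix stored as two 2x2 blocks.
blocks = np.array([[[1., 2.], [3., 4.]],
                   [[5., 6.], [7., 8.]]])
block_cols = np.array([0, 1])
row_ptr = np.array([0, 1, 2])
y = bcsr_spmv(blocks, block_cols, row_ptr, np.ones(4), r=2)
# y == [3, 7, 11, 15]
```

Storing dense r-by-r blocks rather than individual nonzeros is what makes BCSR attractive on GPUs: each block yields regular, coalesced memory accesses, at the cost of storing explicit zeros inside partially filled blocks.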
► We implemented a fast sparse matrix-vector multiplication routine for GPUs.
► This implementation is used to accelerate the Conjugate Gradient or related methods.
► We developed a framework for estimating the performance of such algorithms.
► The estimated performance agrees with the measured performance.
► This framework also gives proper estimations when two GPUs are used in parallel.
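The abstract notes that CG is dominated by SpMV calls plus dot products and vector updates. A minimal, unpreconditioned NumPy sketch of the CG iteration (a textbook version, not the paper's implementation; `spmv` stands in for any matrix-vector product such as the BCSR routine) makes that operation count visible:

```python
import numpy as np

def conjugate_gradient(spmv, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A, where
    spmv(v) computes A @ v. Each iteration costs one SpMV,
    two dot products (reductions), and three AXPY-style updates."""
    x = np.zeros_like(b, dtype=float)
    r = b.astype(float).copy()   # residual b - A x (x starts at 0)
    p = r.copy()                 # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = spmv(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Example: a small SPD system.
A = np.array([[4., 1.], [1., 3.]])
x = conjugate_gradient(lambda v: A @ v, np.array([1., 2.]))
```

Because the per-iteration cost is one SpMV plus a few vector operations, any speedup of the SpMV kernel translates almost directly into CG throughput, which is why the paper's performance model for SpMV extends to the full solver.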
Journal: Parallel Computing - Volume 38, Issues 10–11, October–November 2012, Pages 552–575