Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
519302 | Journal of Computational Physics | 2010 | 13 | |
A block tridiagonal matrix is factored with minimal fill-in using a cyclic reduction algorithm that is easily parallelized. Storage of the factored blocks allows the application of the inverse to multiple right-hand sides that may not be known at factorization time. Scalability with the number of block rows is achieved with cyclic reduction, while scalability with the block size is achieved using multithreaded routines (OpenMP, GotoBLAS) for block matrix manipulation. This dual scalability is a noteworthy feature of the new solver, as is its ability to efficiently handle arbitrary (non-power-of-2) numbers of block rows and processors. A comparison with a state-of-the-art parallel sparse solver is presented. It is expected that this new solver will allow many physical applications to make optimal use of the parallel resources on current supercomputers. Example usage of the solver in three-dimensional magnetohydrodynamic (MHD) equilibrium solvers for high-temperature fusion plasmas is cited.
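The core idea can be illustrated with a minimal serial sketch of block cyclic reduction in Python/NumPy. This is not the paper's MPI/OpenMP implementation; the function name `block_cyclic_reduction`, the list-of-blocks data layout, and the use of explicit block inverses are choices made here for clarity. Each level eliminates the odd-indexed block rows, leaving a half-sized block tridiagonal system in the even-indexed rows, and the recursion accepts an arbitrary (non-power-of-2) number of block rows.

```python
import numpy as np


def block_cyclic_reduction(A, B, C, F):
    """Solve a block tridiagonal system by cyclic reduction.

    Block row i reads  A[i] x[i-1] + B[i] x[i] + C[i] x[i+1] = F[i],
    with A[0] and C[-1] ignored.  Any number of block rows is accepted,
    not just powers of two.
    """
    n = len(B)
    if n == 1:
        return [np.linalg.solve(B[0], F[0])]

    # Eliminate the odd-indexed unknowns: combining each even row i with its
    # odd neighbours i-1 and i+1 yields a half-sized block tridiagonal system
    # in the even-indexed unknowns only.
    rA, rB, rC, rF = [], [], [], []
    for i in range(0, n, 2):
        Bi, Fi = B[i].copy(), F[i].copy()
        Ai, Ci = np.zeros_like(B[i]), np.zeros_like(B[i])
        if i - 1 >= 0:
            alpha = A[i] @ np.linalg.inv(B[i - 1])
            Bi -= alpha @ C[i - 1]
            Fi -= alpha @ F[i - 1]
            if i - 2 >= 0:
                Ai = -alpha @ A[i - 1]          # coupling to x[i-2]
        if i + 1 <= n - 1:
            gamma = C[i] @ np.linalg.inv(B[i + 1])
            Bi -= gamma @ A[i + 1]
            Fi -= gamma @ F[i + 1]
            if i + 2 <= n - 1:
                Ci = -gamma @ C[i + 1]          # coupling to x[i+2]
        rA.append(Ai); rB.append(Bi); rC.append(Ci); rF.append(Fi)

    # Recursively solve the reduced system, then back-substitute the odd
    # unknowns from the original (unmodified) odd rows.
    x_even = block_cyclic_reduction(rA, rB, rC, rF)
    x = [None] * n
    for k, i in enumerate(range(0, n, 2)):
        x[i] = x_even[k]
    for i in range(1, n, 2):
        r = F[i] - A[i] @ x[i - 1]
        if i + 1 <= n - 1:
            r = r - C[i] @ x[i + 1]
        x[i] = np.linalg.solve(B[i], r)
    return x


if __name__ == "__main__":
    # Check against a dense solve: 7 block rows (not a power of two), 4x4
    # blocks, with a diagonal shift so every pivot block is well conditioned.
    rng = np.random.default_rng(0)
    n, m = 7, 4
    A = [rng.standard_normal((m, m)) for _ in range(n)]
    C = [rng.standard_normal((m, m)) for _ in range(n)]
    B = [rng.standard_normal((m, m)) + 10.0 * np.eye(m) for _ in range(n)]
    F = [rng.standard_normal(m) for _ in range(n)]
    x = block_cyclic_reduction(A, B, C, F)

    M = np.zeros((n * m, n * m))                # assemble the dense matrix
    for i in range(n):
        M[i*m:(i+1)*m, i*m:(i+1)*m] = B[i]
        if i > 0:
            M[i*m:(i+1)*m, (i-1)*m:i*m] = A[i]
        if i < n - 1:
            M[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = C[i]
    # Residual of the cyclic-reduction solution; should be near machine precision.
    print(np.linalg.norm(M @ np.concatenate(x) - np.concatenate(F)))
```

In the solver described in the abstract, the per-level eliminations are independent and can be distributed over processors, while each dense block operation is handled by multithreaded routines (OpenMP, GotoBLAS); that division of labor is the source of the dual scalability claimed above.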
Research highlights

► A block tridiagonal matrix is factored with minimal fill-in using a parallel cyclic reduction algorithm.
► Storage of the factored blocks allows the application of the inverse to multiple right-hand sides.
► Dual scalability of the block solver is achieved with cyclic reduction for the number of block rows and multithreaded routines (OpenMP, GotoBLAS) for block matrix manipulation.
► The block solver efficiently handles arbitrary (non-power-of-2) numbers of block rows and processors.
► Observed to execute at least one order of magnitude faster than general-purpose sparse matrix solvers.
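The second highlight, reuse of the stored factored blocks for right-hand sides that arrive only after factorization, can be sketched as a factor-once / solve-many object. Again this is a serial illustration with hypothetical names such as `BlockCRFactorization`, and explicit block inverses stand in for the LU factors a production code would store: the elimination coefficients and pivot-block inverses computed at each reduction level are kept, so each new right-hand side requires only a reduction sweep and a back-substitution sweep.

```python
import numpy as np


class BlockCRFactorization:
    """Factor-once / solve-many block cyclic reduction (serial sketch).

    Block row i reads  A[i] x[i-1] + B[i] x[i] + C[i] x[i+1] = F[i],
    with A[0] and C[-1] ignored.  Explicit block inverses are stored for
    clarity; a production solver would keep LU factors instead.
    """

    def __init__(self, A, B, C):
        self.levels = []                          # per-level data reused by solve()
        while len(B) > 1:
            n = len(B)
            lvl = {"n": n, "A": A, "C": C,
                   "Binv_odd": {i: np.linalg.inv(B[i]) for i in range(1, n, 2)},
                   "alpha": {}, "gamma": {}}
            rA, rB, rC = [], [], []
            for i in range(0, n, 2):              # eliminate the odd rows
                Bi = B[i].copy()
                Ai, Ci = np.zeros_like(Bi), np.zeros_like(Bi)
                if i - 1 >= 0:
                    al = lvl["alpha"][i] = A[i] @ lvl["Binv_odd"][i - 1]
                    Bi -= al @ C[i - 1]
                    if i - 2 >= 0:
                        Ai = -al @ A[i - 1]
                if i + 1 <= n - 1:
                    ga = lvl["gamma"][i] = C[i] @ lvl["Binv_odd"][i + 1]
                    Bi -= ga @ A[i + 1]
                    if i + 2 <= n - 1:
                        Ci = -ga @ C[i + 1]
                rA.append(Ai); rB.append(Bi); rC.append(Ci)
            self.levels.append(lvl)
            A, B, C = rA, rB, rC                  # recurse on the even rows
        self.coarse_inv = np.linalg.inv(B[0])     # single remaining block

    def solve(self, F):
        """Apply the stored factorization to one right-hand side (a list of
        block vectors) that need not exist at factorization time."""
        Fs = [list(F)]                            # reduction sweep on F only
        for lvl in self.levels:
            cur, rF = Fs[-1], []
            for i in range(0, lvl["n"], 2):
                Fi = cur[i].copy()
                if i in lvl["alpha"]:
                    Fi -= lvl["alpha"][i] @ cur[i - 1]
                if i in lvl["gamma"]:
                    Fi -= lvl["gamma"][i] @ cur[i + 1]
                rF.append(Fi)
            Fs.append(rF)
        x = [self.coarse_inv @ Fs[-1][0]]         # coarsest level: one block solve
        # Back-substitution sweep: recover the odd rows of each level.
        for lvl, cur in zip(reversed(self.levels), reversed(Fs[:-1])):
            xf = [None] * lvl["n"]
            for k, i in enumerate(range(0, lvl["n"], 2)):
                xf[i] = x[k]
            for i in range(1, lvl["n"], 2):
                r = cur[i] - lvl["A"][i] @ xf[i - 1]
                if i + 1 <= lvl["n"] - 1:
                    r = r - lvl["C"][i] @ xf[i + 1]
                xf[i] = lvl["Binv_odd"][i] @ r
            x = xf
        return x


if __name__ == "__main__":
    # Factor once, then solve for right-hand sides supplied only afterwards.
    rng = np.random.default_rng(1)
    n, m = 5, 3
    A = [rng.standard_normal((m, m)) for _ in range(n)]
    C = [rng.standard_normal((m, m)) for _ in range(n)]
    B = [rng.standard_normal((m, m)) + 10.0 * np.eye(m) for _ in range(n)]
    fac = BlockCRFactorization(A, B, C)
    for _ in range(2):
        F = [rng.standard_normal(m) for _ in range(n)]
        x = fac.solve(F)
        for i in range(n):                        # block-row residual check
            r = B[i] @ x[i] - F[i]
            if i > 0:
                r += A[i] @ x[i - 1]
            if i < n - 1:
                r += C[i] @ x[i + 1]
            assert np.linalg.norm(r) < 1e-9
    print("stored factorization reused for two right-hand sides")
```

Because only the right-hand-side sweeps are repeated, the block factorization work is amortized over all subsequent solves, which is what makes reuse for multiple right-hand sides inexpensive.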