Article code: 523869
Journal code: 868511
Publication year: 2013
Full-text version: 20-page PDF, free download
English title of the ISI article
Multi-level parallelism for incompressible flow computations on GPU clusters
Related subjects
Engineering and Basic Sciences, Computer Engineering, Computer Science Software
English abstract

We investigate multi-level parallelism on GPU clusters with MPI-CUDA and hybrid MPI-OpenMP-CUDA parallel implementations, in which all computations are done on the GPU using CUDA. We explore the efficiency and scalability of incompressible flow computations using up to 256 GPUs on a problem with approximately 17.2 billion cells. Our work addresses some of the unique issues faced when merging fine-grain parallelism on the GPU using CUDA with coarse-grain parallelism that uses either MPI or MPI-OpenMP for communications. We present three different strategies to overlap computations with communications, and systematically assess their impact on parallel performance on two different GPU clusters. Our results for strong and weak scaling analysis of incompressible flow computations demonstrate that GPU clusters offer significant benefits for large data sets, and that a dual-level MPI-CUDA implementation with maximum overlapping of computation and communication provides substantial performance benefits. We also find that our tri-level MPI-OpenMP-CUDA parallel implementation offers no significant performance advantage over the dual-level implementation on GPU clusters with two GPUs per node; however, on clusters with higher GPU counts per node, or with different domain decomposition strategies, a tri-level implementation may exhibit higher efficiency than a dual-level implementation and needs to be investigated further.
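The overlap of computation and communication described in the abstract follows a well-known pattern: update the subdomain boundary cells first, then exchange halos with neighbouring MPI ranks while a second CUDA stream updates the interior. The sketch below is a hypothetical illustration of that pattern, not the authors' code; the kernel names (`update_boundary`, `update_interior`), the 1D decomposition along z, and the single up/down neighbour exchange are all assumptions, and kernel bodies are omitted.

```cuda
// Hypothetical sketch of overlapping GPU computation with an MPI halo
// exchange, in the spirit of the strategies assessed in the paper.
// Assumes a 1D domain decomposition along z with one ghost slab on
// each side of the local block; only one exchange direction is shown.
#include <mpi.h>
#include <cuda_runtime.h>

__global__ void update_boundary(double *u, int nx, int ny, int nz);
__global__ void update_interior(double *u, int nx, int ny, int nz);

void time_step(double *d_u, double *h_send, double *h_recv,
               int nx, int ny, int nz, int up, int down,
               cudaStream_t s_bnd, cudaStream_t s_int)
{
    size_t slab = (size_t)nx * ny;            // cells in one z-slab
    dim3 block(256);
    dim3 grid_b((unsigned)((slab + 255) / 256));              // boundary slabs
    dim3 grid_i((unsigned)((slab * (nz - 2) + 255) / 256));   // interior cells

    // 1. Update the boundary slabs first, in their own stream.
    update_boundary<<<grid_b, block, 0, s_bnd>>>(d_u, nx, ny, nz);

    // 2. Launch the interior update concurrently in a second stream;
    //    it keeps the GPU busy while the halo exchange proceeds.
    update_interior<<<grid_i, block, 0, s_int>>>(d_u, nx, ny, nz);

    // 3. Stage the top interior slab into pinned host memory, then
    //    exchange it with the neighbouring ranks over MPI.
    cudaMemcpyAsync(h_send, d_u + slab * (size_t)(nz - 2),
                    slab * sizeof(double), cudaMemcpyDeviceToHost, s_bnd);
    cudaStreamSynchronize(s_bnd);   // copy finished; s_int still running

    MPI_Request req[2];
    MPI_Irecv(h_recv, (int)slab, MPI_DOUBLE, down, 0, MPI_COMM_WORLD, &req[0]);
    MPI_Isend(h_send, (int)slab, MPI_DOUBLE, up,   0, MPI_COMM_WORLD, &req[1]);
    MPI_Waitall(2, req, MPI_STATUSES_IGNORE);

    // 4. Write the received halo into the bottom ghost slab and
    //    synchronize both streams before the next time step.
    cudaMemcpyAsync(d_u, h_recv, slab * sizeof(double),
                    cudaMemcpyHostToDevice, s_bnd);
    cudaDeviceSynchronize();
}
```

Updating the boundary before the interior is what makes maximum overlap possible: the halo data is ready to copy and send as early as possible, so the MPI exchange is hidden behind the (much longer) interior kernel.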


► A flow solver is parallelized with MPI-CUDA and MPI-OpenMP-CUDA implementations.
► Weak and strong scaling analysis performed using up to 256 GPUs.
► Three strategies to overlap computation and communication are assessed.
► MPI-CUDA implementation with maximum overlapping gives the best performance.
► Tri-level parallelism does not show any advantage for the present application.

Publisher
Database: Elsevier - ScienceDirect
Journal: Parallel Computing - Volume 39, Issue 1, January 2013, Pages 1–20
Authors