Article ID: 472791
Journal: Computers & Mathematics with Applications
Published Year: 2011
Pages: 11
File Type: PDF
Abstract

Emerging many-core processors, such as CUDA-capable nVidia GPUs, are promising platforms for regular parallel algorithms like the Lattice Boltzmann Method (LBM). Since global memory on graphics devices has high latency and LBM is data-intensive, the memory access pattern is a key issue for achieving good performance. Whenever possible, global memory loads and stores should be coalesced and aligned, but the propagation phase of LBM can lead to frequent misaligned memory accesses. Most previous CUDA implementations of 3D LBM addressed this problem by using the low-latency on-chip shared memory. Instead, our CUDA implementation of LBM relies on carefully chosen data transfer schemes in global memory. For the 3D lid-driven cavity test case, we obtained up to 86% of the maximal global memory throughput on nVidia's GT200. As a consequence, we show that highly efficient implementations of LBM on GPUs are possible, even for complex models.
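To make the misalignment issue concrete, the following is a minimal CUDA sketch, not taken from the paper, of a "pull"-style LBM propagation step that works entirely in global memory: each thread reads distributions from its upstream neighbours (possibly misaligned loads) and writes to its own site (always aligned, coalesced stores). For brevity it uses a 2D D2Q9 lattice in a structure-of-arrays layout, whereas the paper targets 3D models; all names (propagate_pull, f_src, f_dst, NX) are illustrative assumptions.

// Sketch only: pull-scheme propagation with aligned stores.
#define NX 256  // lattice width; one thread per site along x (assumption)

__global__ void propagate_pull(const float* __restrict__ f_src,
                               float* __restrict__ f_dst, int ny)
{
    // D2Q9 discrete velocities.
    const int cx[9] = {0, 1, 0, -1, 0, 1, -1, -1, 1};
    const int cy[9] = {0, 0, 1, 0, -1, 1, 1, -1, -1};

    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y;
    if (x >= NX || y >= ny) return;

    for (int q = 0; q < 9; ++q) {
        // Upstream site, with periodic wrapping. A shift along x
        // offsets the warp's loads by one element (a misaligned load);
        // shifts along y preserve alignment entirely.
        int xs = (x - cx[q] + NX) % NX;
        int ys = (y - cy[q] + ny) % ny;
        // The store address depends only on (q, y, x), so consecutive
        // threads write consecutive, aligned words.
        f_dst[(q * ny + y) * NX + x] = f_src[(q * ny + ys) * NX + xs];
    }
}

Favouring misaligned loads over misaligned stores is one example of the carefully chosen global-memory transfer schemes the abstract alludes to; the actual schemes used by the authors may differ.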

Related Topics
Physical Sciences and Engineering → Computer Science → Computer Science (General)