Article ID | Journal ID | Publication Year | English Article | Full Text |
---|---|---|---|---|
492633 | 721623 | 2012 | 9-page PDF | Free download |

A three-dimensional Lattice-Boltzmann fluid model with nineteen discrete velocities was implemented using the NVIDIA Graphics Processing Unit (GPU) programming language “Compute Unified Device Architecture” (CUDA). Previous LBM GPU implementations required two steps to maximize memory bandwidth, due to the memory access restrictions of earlier versions of the CUDA toolkit and of the hardware capabilities. In this work, a new approach based on a single-step algorithm with a reversed collision–propagation scheme is developed to maximize GPU memory bandwidth, taking advantage of newer versions of the CUDA programming model and newer NVIDIA graphics cards. The code was tested on the numerical calculation of lid-driven cubic cavity flow at Reynolds numbers 100 and 1000, showing good precision and stability. Simulations running on low-cost GPU cards can calculate 400 million cell updates per second while sustaining more than 65% of the hardware's memory bandwidth.
► A parallel three-dimensional Lattice-Boltzmann fluid model for GPU was implemented.
► Version 3.0 of NVIDIA CUDA GPU programming language was used.
► Single-step and reversed collision–propagation scheme maximizes memory bandwidth.
► Flow simulations at Re = 100 and 1000 were validated against the literature.
► The solution achieves high-performance simulation on low-cost hardware.
Journal: Simulation Modelling Practice and Theory - Volume 25, June 2012, Pages 163–171