Article ID | Journal ID | Year | Article | Full text |
---|---|---|---|---|
507335 | 865116 | 2014 | English, 11-page PDF | Free download |
• We have developed a GPU-accelerated higher-order ice sheet model.
• We have measured the speedup produced by the GPU in two very different experiments.
• The GPU algorithm is 60–180× faster than a similar serial CPU version.
• The speedup depends primarily on grid size and GPU generation.
Studies of glaciers and ice sheets have increased the demand for high-performance numerical ice flow models over the past decades. When exploring the highly non-linear dynamics of fast-flowing glaciers and ice streams, or when coupling multiple flow processes for ice, water, and sediment, researchers are often forced to use supercomputing clusters. As an alternative to conventional high-performance computing hardware, the Graphical Processing Unit (GPU) is capable of massively parallel computing while retaining a compact design and low cost. In this study, we present a strategy for accelerating a higher-order ice flow model using a GPU. On the newest GPU hardware, we achieve up to 180× speedup compared to a similar but serial CPU implementation. Our results suggest that GPU acceleration is a competitive option for ice-flow modelling when compared to CPU-optimised algorithms parallelised with OpenMP or the Message Passing Interface (MPI).
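The abstract's claim that GPUs suit ice-flow modelling rests on the structure of grid-based solvers: each cell's update in a stencil sweep depends only on the previous iterate of its neighbours, so all cells can be computed concurrently. The following minimal sketch (not the authors' code; a generic Jacobi-style averaging stencil chosen purely for illustration) contrasts a serial per-cell loop with the whole-grid data-parallel form that a GPU would execute with one thread per cell.

```python
import numpy as np

def sweep_serial(u):
    """Serial CPU style: visit one interior cell at a time."""
    out = u.copy()
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            # Each cell reads only the neighbouring values of the OLD grid,
            # so no iteration depends on another -- the source of parallelism.
            out[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return out

def sweep_parallel(u):
    """Data-parallel style: one whole-grid update, as a GPU kernel launch
    would assign one thread per interior cell."""
    out = u.copy()
    out[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:])
    return out

# Both formulations produce identical results on a small test grid.
u = np.random.default_rng(0).random((64, 64))
assert np.allclose(sweep_serial(u), sweep_parallel(u))
```

Because every cell update is independent, the same sweep maps directly onto a CUDA kernel or an OpenMP/MPI decomposition; the paper's reported speedup comes from the GPU executing many such independent updates at once.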
Journal: Computers & Geosciences - Volume 72, November 2014, Pages 210–220