| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 10336490 | Computers & Graphics | 2005 | 10 | |
Abstract
The latest graphics processing units (GPUs) are reported to reach up to 200 billion floating point operations per second (200 Gflops; Spode's Abode, "GeForce FX Preview (NV30)", November 2002, http://www.spodesabode.com/content/article/geforcefx, accessed 10/2003) and to offer a price performance of 0.1 cents per Mflop. These facts raise great interest in the plausibility of extending the GPUs' use to non-graphics applications, in particular numerical simulations on structured grids (lattices). In this paper we (1) review previous work on using GPUs for non-graphics applications, (2) implement probability-based simulations on the GPU, namely the Ising and percolation models, (3) implement vector operation benchmarks for the GPU, and finally (4) compare the CPU's and GPU's performance. The original contribution of this work is the implementation of Monte Carlo type simulations on the GPU. Such simulations have a wide range of applications. They are computationally intensive and, as we show in the paper, lend themselves naturally to implementation on GPUs, which allows us to make better use of the GPU's computational power and to speed up the computation. A general conclusion from the results obtained is that moving computations from the CPU to the GPU is feasible for certain lattice computations, yielding good time and price performance. Preliminary results also show that it is feasible to use several GPUs in parallel.
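The Ising model mentioned in the abstract is typically simulated with the Metropolis algorithm. The following is a minimal CPU-side sketch of one Metropolis sweep on a 2D lattice, given only to illustrate the kind of lattice computation being moved to the GPU; the lattice size `L`, coupling `J`, and temperature `T` are assumed parameters and do not come from the paper, and the paper's actual GPU mapping of this update is not shown here.

```c
/*
 * Minimal illustrative sketch of one Metropolis sweep for the 2D Ising model.
 * Lattice size L, coupling J, and temperature T are assumed values, not the
 * paper's; the GPU version in the paper maps this kind of per-site update to
 * the graphics pipeline instead of the loop below.
 */
#include <math.h>
#include <stdlib.h>

#define L 256               /* assumed lattice edge length */
static int spin[L][L];      /* spins take values +1 or -1  */

void metropolis_sweep(double J, double T)
{
    for (int n = 0; n < L * L; ++n) {
        int i = rand() % L, j = rand() % L;
        /* sum of the four nearest neighbours with periodic boundaries */
        int nb = spin[(i + 1) % L][j] + spin[(i + L - 1) % L][j]
               + spin[i][(j + 1) % L] + spin[i][(j + L - 1) % L];
        double dE = 2.0 * J * spin[i][j] * nb;   /* energy cost of flipping */
        /* accept the flip if it lowers the energy or passes the Boltzmann test */
        if (dE <= 0.0 || (double)rand() / RAND_MAX < exp(-dE / T))
            spin[i][j] = -spin[i][j];
    }
}

int main(void)
{
    /* start from an all-up configuration and run a few sweeps */
    for (int i = 0; i < L; ++i)
        for (int j = 0; j < L; ++j)
            spin[i][j] = 1;
    for (int sweep = 0; sweep < 100; ++sweep)
        metropolis_sweep(1.0, 2.27);   /* J = 1, T near the critical point */
    return 0;
}
```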
Keywords
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Graphics and Computer-Aided Design
Authors
Stanimire Tomov, Michael McGuigan, Robert Bennett, Gordon Smith, John Spiletic
