Article code: 432689
Journal code: 689033
Publication year: 2015
Length: 8 pages, full-text PDF
English title of the ISI article
A GEMM interface and implementation on NVIDIA GPUs for multiple small matrices
Related subjects
Engineering and Basic Sciences; Computer Engineering; Computational Theory and Mathematics
English abstract


• A second-leading-dimension-based batched GEMM interface for CUDA.
• Implementation of the GEMM routine for multiple small matrices.
• 30% to 600% faster than the batched cuBLAS routine in CUDA Toolkit 5.0.
• Specialized for matrix sizes under 16 on the NVIDIA Tesla K20c.

We present an interface and an implementation of the General Matrix Multiply (GEMM) routine for multiple small matrices processed simultaneously on NVIDIA graphics processing units (GPUs). We focus on matrix sizes under 16, although the implementation can easily be extended to larger sizes. For single-precision matrices, our implementation is 30% to 600% faster than the batched cuBLAS implementation distributed with the CUDA Toolkit 5.0 on an NVIDIA Tesla K20c. For example, we obtain 104 GFlop/s and 216 GFlop/s when multiplying 100,000 independent matrix pairs of size 10 and 16, respectively. Similar improvements in performance are obtained for other sizes, in single and double precision for real and complex types, and when the number of matrices is smaller. Besides the implementation itself, the proposed function interface also plays an important role in the improved performance. Applications of this software include finite element computations on GPUs.
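The abstract does not reproduce the interface itself. As a point of reference, the batched routine in cuBLAS at the time, cublasSgemmBatched, takes arrays of per-matrix device pointers; a "second leading dimension" interface instead packs the matrices contiguously and addresses each one by a stride. The sketch below is a minimal illustration of that idea, assuming column-major storage, matrices of size at most 16, and one thread block per matrix product. All names and parameters are illustrative assumptions, not the authors' actual API.

```cuda
#include <cuda_runtime.h>

// Hypothetical stride-based batched SGEMM (not the paper's actual interface):
// C[i] = alpha * A[i] * B[i] + beta * C[i] for i = 0..batchCount-1.
// lda/ldb/ldc are the usual leading dimensions inside one matrix;
// strideA/strideB/strideC are the "second leading dimensions", i.e. the
// distance (in elements) between consecutive matrices in the packed arrays.
__global__ void sgemm_batched_strided(int m, int n, int k, float alpha,
                                      const float *A, int lda, int strideA,
                                      const float *B, int ldb, int strideB,
                                      float beta,
                                      float *C, int ldc, int strideC)
{
    int batchId = blockIdx.x;                       // one block per matrix pair
    const float *a = A + (size_t)batchId * strideA;
    const float *b = B + (size_t)batchId * strideB;
    float       *c = C + (size_t)batchId * strideC;

    int row = threadIdx.x;                          // one thread per C element
    int col = threadIdx.y;
    if (row < m && col < n) {
        float acc = 0.0f;
        for (int p = 0; p < k; ++p)                 // column-major indexing
            acc += a[row + p * lda] * b[p + col * ldb];
        c[row + col * ldc] = alpha * acc + beta * c[row + col * ldc];
    }
}

// Host-side launch: for sizes up to 16, an m-by-n thread block per matrix.
void sgemm_batched_strided_launch(int m, int n, int k, float alpha,
                                  const float *dA, int lda, int strideA,
                                  const float *dB, int ldb, int strideB,
                                  float beta,
                                  float *dC, int ldc, int strideC,
                                  int batchCount)
{
    dim3 block(m, n);
    dim3 grid(batchCount);
    sgemm_batched_strided<<<grid, block>>>(m, n, k, alpha,
                                           dA, lda, strideA,
                                           dB, ldb, strideB,
                                           beta, dC, ldc, strideC);
}
```

With such an interface, a caller that stores 100,000 packed 10-by-10 matrices simply passes strideA = strideB = strideC = 100 and batchCount = 100000, without building the arrays of device pointers that the pointer-based cuBLAS batched interface requires. This sketch is deliberately naive; the paper's reported speedups come from its specialized implementation for small sizes, which is not reproduced here.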

Publisher
Database: Elsevier - ScienceDirect
Journal: Journal of Parallel and Distributed Computing - Volume 75, January 2015, Pages 133–140
Authors
, ,