Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
430052 | 687788 | 2013 | 13-page PDF | Free download |

• We present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model.
• We model and predict the performance of OpenMP, MPI and hybrid scientific applications with weak scaling on multicore supercomputers.
• We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications.
• We also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application, the Gyrokinetic Toroidal Code (GTC) from magnetic fusion, to validate the performance model.
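The bandwidth-contention idea in the highlights above can be sketched as a toy model: per-core memory time grows as cores on a node contend for shared bandwidth, plus a parameterized (latency + size/bandwidth) communication term. All names, parameters, and the equal-share contention assumption here are illustrative, not the authors' actual formulation:

```python
def predicted_time(bytes_per_core, cores_per_node, peak_bw_gbs,
                   msg_bytes, latency_s, net_bw_gbs):
    """Toy weak-scaling time estimate: a memory phase under bandwidth
    contention plus a latency/bandwidth communication term."""
    # Simplistic assumption: cores on a node split the peak memory
    # bandwidth equally when all are active.
    effective_bw = peak_bw_gbs / cores_per_node          # GB/s per core
    mem_time = bytes_per_core / (effective_bw * 1e9)     # seconds
    # Classic parameterized communication model: latency + size/bandwidth.
    comm_time = latency_s + msg_bytes / (net_bw_gbs * 1e9)
    return mem_time + comm_time

# Example: 1 GB of memory traffic per core, 4 cores sharing a 12.8 GB/s
# bus, one 1 MB message over a 1 GB/s link with 5 microsecond latency.
t = predicted_time(1e9, 4, 12.8, 1e6, 5e-6, 1.0)
```

Under weak scaling the per-core workload stays fixed, so in this toy model the memory term is constant per step while the communication term varies with message size and core count.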
In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict and analyze the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P. We use the STREAM memory benchmarks and Intel's MPI benchmarks for initial performance analysis and for validating the model on MPI and OpenMP applications, because the measured sustained memory bandwidth indicates the bandwidth a system should sustain on scientific applications with the same per-core workload. Beyond these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application, the Gyrokinetic Toroidal Code (GTC) from magnetic fusion, to validate our performance model of hybrid applications on these supercomputers. The validation results show that our method predicts the performance of the hybrid MPI/OpenMP GTC on up to 512 cores with less than a 7.77% error rate on these multicore supercomputers.
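The role STREAM plays in the abstract, measuring the sustained memory bandwidth a system can deliver, can be illustrated with a toy probe. This is a pure-Python sketch, not the actual STREAM code; the function name and the byte-counting convention (three 8-byte doubles moved per element in the triad kernel) are our assumptions for illustration:

```python
import time

def stream_triad_bw(n=1_000_000, scalar=3.0):
    """STREAM-style triad a[i] = b[i] + scalar * c[i]; returns GB/s.

    Pure-Python sketch for illustration only; the real STREAM benchmark
    is compiled C and reports far higher sustained bandwidth.
    """
    b = [1.0] * n
    c = [2.0] * n
    start = time.perf_counter()
    a = [bi + scalar * ci for bi, ci in zip(b, c)]
    elapsed = time.perf_counter() - start
    # Triad touches three 8-byte doubles per element:
    # read b[i], read c[i], write a[i].
    return 3 * 8 * n / elapsed / 1e9

sustained_gbs = stream_triad_bw()
```

A measurement like this, run with the same per-core workload as the application, is what gives a sustained-bandwidth figure to feed into a contention-based performance model.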
Journal: Journal of Computer and System Sciences - Volume 79, Issue 8, December 2013, Pages 1256-1268