Article ID: 535756
Journal: Pattern Recognition Letters
Published Year: 2013
Pages: 8
File Type: PDF
Abstract

High Performance Computing (HPC) is a field concerned with solving large-scale problems in science and engineering. However, the computational infrastructure of HPC systems can also be misused, as demonstrated by the recent commoditization of cloud computing resources on the black market. As a first step towards addressing this, we introduce a machine learning approach for classifying distributed parallel computations based on communication patterns between compute nodes. We first provide relevant background on message passing and on computational equivalence classes called dwarfs, and describe our exploratory data analysis using self-organizing maps. We then present our classification results across 29 scientific codes using Bayesian networks and compare their performance against Random Forest classifiers. These models, trained on hundreds of gigabytes of communication logs collected at Lawrence Berkeley National Laboratory, perform well without any a priori information and address several shortcomings of previous approaches.
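The abstract describes classifying computations from communication patterns between compute nodes. As an illustration only (the paper's actual features are not given here), a minimal sketch of turning an MPI-style communication log into a fixed-length feature vector might look like the following; the field layout and feature choices are assumptions, not the authors' method:

```python
# Hypothetical featurization of a communication log, where each record is
# (source rank, destination rank, message size in bytes). The six features
# below are illustrative stand-ins for real communication-pattern features.
from collections import Counter
import statistics

def featurize(log):
    """log: list of (src_rank, dst_rank, n_bytes) tuples."""
    sizes = [b for _, _, b in log]
    partners = Counter((s, d) for s, d, _ in log)
    return [
        len(log),                 # total message count
        sum(sizes),               # total bytes moved
        statistics.mean(sizes),   # mean message size
        statistics.pstdev(sizes), # spread of message sizes
        len(partners),            # distinct communicating pairs
        max(partners.values()),   # traffic on the busiest pair
    ]

# A toy nearest-neighbour exchange among 4 ranks (e.g. a 1-D halo swap).
halo = [(0, 1, 4096), (1, 0, 4096), (1, 2, 4096),
        (2, 1, 4096), (2, 3, 4096), (3, 2, 4096)]
print(featurize(halo))
```

Vectors like this could then be fed to any supervised model; the paper itself evaluates Bayesian networks and Random Forests on such communication data.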

► We classify unknown distributed memory computations using communication patterns.
► We apply self-organizing maps to aid model class selection.
► We use sampling to equalize class distributions over 100 GB of data.
► Classifiers achieved 90% F1 scores over 29 classes.
► Our work improves upon previous approaches and has a variety of applications.
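Two of the highlights name concrete techniques: sampling to equalize class distributions, and F1 scoring over many classes. A minimal stdlib-only sketch of both (downsampling each class to the rarest class's size, then macro-averaged F1) is shown below; this is an illustration of the general techniques, not the paper's pipeline:

```python
# Illustrative class balancing and macro-F1 scoring; data is synthetic.
import random
from collections import defaultdict

def equalize(samples):
    """samples: list of (features, label); downsample to the rarest class."""
    by_label = defaultdict(list)
    for x, y in samples:
        by_label[y].append((x, y))
    n = min(len(v) for v in by_label.values())
    rng = random.Random(0)  # fixed seed for reproducibility
    balanced = []
    for v in by_label.values():
        balanced.extend(rng.sample(v, n))
    return balanced

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    labels = set(y_true) | set(y_pred)
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        scores.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(scores) / len(scores)

# 90 "dense" vs 10 "sparse" samples -> both classes cut to 10 each.
data = ([([i], "dense") for i in range(90)]
        + [([i], "sparse") for i in range(10)])
print(len(equalize(data)))                            # 20
print(macro_f1(["a", "a", "b"], ["a", "b", "b"]))     # 2/3
```

With balanced classes, a high macro F1 (such as the 90% the highlights report) cannot be achieved by simply favoring a majority class.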

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Vision and Pattern Recognition