| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 4943024 | Expert Systems with Applications | 2018 | 26 | |
Abstract
In this work, we study the adversarial resilience of detection systems based on supervised machine learning models. We provide a formal definition of adversarial resilience, focusing on multi-sensor fusion systems. We define the model robustness (MRB) score, a metric for evaluating the relative resilience of different models, and propose two novel feature selection algorithms for constructing adversary-aware classifiers. The first algorithm selects only features that cannot realistically be modified by the adversary, while the second allows control over the tradeoff between resilience and accuracy. Finally, we evaluate our approach on a real-life use case of dynamic malware classification using an extensive, up-to-date corpus of benign and malicious executables. We demonstrate the potential of adversary-aware feature selection for building more resilient classifiers and provide empirical evidence supporting the inherent resilience of ensemble algorithms compared to single-model algorithms.
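
The abstract only outlines the two algorithms; the sketch below is an illustrative reading of the first idea (train only on features the adversary cannot realistically modify), not the authors' implementation. The feature names, the `modifiable` annotation, and the synthetic data are hypothetical assumptions introduced for the example.

```python
# Minimal sketch (hypothetical, not the paper's code): adversary-aware feature
# selection by excluding features an adversary could realistically alter,
# then training an ensemble classifier on the remaining features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))               # synthetic feature matrix
y = (X[:, 0] + X[:, 3] > 0).astype(int)     # synthetic labels

feature_names = ["api_call_count", "registry_writes", "packer_flag",
                 "network_bytes", "mutex_created", "file_entropy"]
# Hypothetical annotation: True = the adversary can realistically modify it.
modifiable = {"api_call_count": False, "registry_writes": False,
              "packer_flag": True, "network_bytes": True,
              "mutex_created": False, "file_entropy": True}

# Keep only features outside the adversary's control.
keep_idx = [i for i, name in enumerate(feature_names) if not modifiable[name]]
X_resilient = X[:, keep_idx]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy on adversary-resistant features:",
      cross_val_score(clf, X_resilient, y, cv=5).mean())
```

The second algorithm described in the abstract would generalize this hard filter into a tunable tradeoff, e.g. by weighting each feature's predictive value against its modifiability rather than excluding modifiable features outright.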
Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence
Authors
Ziv Katzir, Yuval Elovici