Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6863918 | Neurocomputing | 2018 | 22 Pages |
Abstract
A popular testbed for deep learning has been multimodal recognition of human activity or gesture involving diverse inputs such as video, audio, skeletal pose, and depth images. Deep learning architectures have excelled on such problems due to their ability to combine modality representations at different levels of nonlinear feature extraction. However, designing an optimal architecture for fusing these learned representations has largely remained a non-trivial human engineering effort. We treat fusion structure optimization as a hyperparameter search and cast it as a discrete optimization problem under the Bayesian optimization framework. We propose two methods to compute structural similarities in the search space of tree-structured multimodal architectures, and demonstrate their effectiveness on two challenging multimodal human activity recognition problems.
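To make the search idea concrete, here is a minimal sketch of surrogate-driven search over tree-structured fusion architectures. The tree encoding (nested tuples over modality leaves), the shared-subtree Jaccard similarity, and the kernel-smoothed score estimate are all illustrative assumptions standing in for the paper's actual similarity measures and Gaussian-process surrogate, not the authors' exact method.

```python
# Illustrative sketch (NOT the paper's exact algorithm): search over
# tree-structured fusion architectures using a structural similarity.
# Architectures are nested tuples; leaves are modality names, e.g.
# (("V", "A"), "P") fuses video and audio first, then pose.

def subtrees(tree):
    """Return the set of all subtrees of a nested-tuple tree."""
    result = {tree}
    if isinstance(tree, tuple):
        for child in tree:
            result |= subtrees(child)
    return result

def similarity(t1, t2):
    """Jaccard similarity between the subtree sets of two architectures.
    A simplified stand-in for the paper's structural similarity measures."""
    s1, s2 = subtrees(t1), subtrees(t2)
    return len(s1 & s2) / len(s1 | s2)

def propose_next(candidates, observed):
    """Pick the unobserved architecture with the highest similarity-weighted
    score estimate (a crude surrogate standing in for a GP posterior mean)."""
    def predicted(c):
        pairs = [(similarity(c, t), score) for t, score in observed]
        total = sum(w for w, _ in pairs)
        return sum(w * s for w, s in pairs) / total if total else 0.0
    unseen = [c for c in candidates if all(c != t for t, _ in observed)]
    return max(unseen, key=predicted)

# Toy search space over video (V), audio (A), pose (P), depth (D).
candidates = [
    ((("V", "A"), "D"), "P"),
    (("A", "P"), ("V", "D")),
    (("V", "A"), ("P", "D")),
    (("V", "D"), ("A", "P")),
]
# Hypothetical validation accuracies for two already-evaluated trees.
observed = [
    (((("V", "A"), "D"), "P"), 0.80),
    ((("A", "P"), ("V", "D")), 0.70),
]
print(propose_next(candidates, observed))
# Favors (("V", "A"), ("P", "D")): it shares the ("V", "A") subtree
# with the better-scoring observed architecture.
```

In a full Bayesian optimization loop, the proposed architecture would be trained and evaluated, its score appended to `observed`, and the proposal step repeated; the kernel-weighted average above would be replaced by a proper acquisition function balancing exploration and exploitation.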
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Dhanesh Ramachandram, Michal Lisicki, Timothy J. Shields, Mohamed R. Amer, Graham W. Taylor