Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
6864190 | Neurocomputing | 2018 | 8 | |
Abstract
In order to move toward efficient autonomous learning, we must have control over our datasets to test and adaptively train systems for complex problems such as Visual Question Answering (VQA). Thus, we created a testing environment around MNIST images with optional cluttering. Although less complex than publicly available VQA datasets, the new environment generates datasets that decouple answers from questions and incorporate abstract ideas (content, context, and arithmetic) that must be learned. In addition, we analyze the performance of merged CNNs and LSTMs using the environment while exploring different ways to incorporate pretrained object classifiers. We demonstrate the usefulness of our environment as well as provide insight on the limitations of simple architectures and the complexities of different questions.
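The abstract describes merging a CNN image branch with an LSTM question branch to predict answers. The sketch below is a minimal illustration of that general merged CNN+LSTM architecture, not the authors' implementation; it assumes Keras/TensorFlow, and the input resolution, layer sizes, and question/answer vocabulary sizes are placeholder assumptions.

```python
# Minimal sketch of a merged CNN + LSTM VQA model (illustrative, not the paper's code).
import tensorflow as tf
from tensorflow.keras import layers, Model

# Image branch: small CNN over grayscale cluttered-MNIST-style scenes (size assumed).
img_in = layers.Input(shape=(64, 64, 1), name="image")
x = layers.Conv2D(32, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
img_feat = layers.Dense(256, activation="relu")(layers.Flatten()(x))

# Question branch: word embedding followed by an LSTM (vocabulary size assumed).
q_in = layers.Input(shape=(None,), dtype="int32", name="question")
q = layers.Embedding(input_dim=100, output_dim=64)(q_in)
q_feat = layers.LSTM(256)(q)

# Merge the two modalities and classify over a fixed answer vocabulary (size assumed).
merged = layers.concatenate([img_feat, q_feat])
merged = layers.Dense(256, activation="relu")(merged)
answer = layers.Dense(20, activation="softmax", name="answer")(merged)

model = Model(inputs=[img_in, q_in], outputs=answer)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

A pretrained object classifier, as explored in the paper, could replace or initialize the convolutional branch before the merge.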
Authors
Mihael Cudic, Ryan Burt, Eder Santana, Jose C. Principe