Article ID: 4033626
Journal: Vision Research
Published Year: 2015
Pages: 11
File Type: PDF
Abstract

• Taxonomies organize models by attributes and scope but not performance.
• Existing taxonomies have few models in common, making direct comparison difficult.
• Operationally defined performance metrics enable direct quantitative comparison.
• Benchmarking protocols for computational models permit objective evaluation.

Scores of visual attention models have been developed over the past several decades of research. Differences in implementation, assumptions, and evaluation have made comparing these models very difficult. Taxonomies have been constructed to organize and classify the models, but they cannot quantify which classes of models best explain the available data. At the same time, a multitude of physiological and behavioral findings measuring various aspects of human and non-human primate visual attention have been published. All of these elements highlight the need to integrate computational models with the data by (1) operationalizing the definitions of visual attention tasks and (2) designing benchmark datasets that measure success on specific tasks under those definitions. In this paper, we provide examples of operationalizing and benchmarking different visual attention tasks, along with the relevant design considerations.
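To make the idea of an operationally defined performance metric concrete, below is a minimal Python sketch of one way a benchmarking protocol might score a fixation-prediction task: a model's saliency map is evaluated against recorded human fixations with an ROC-style AUC, one common choice in the fixation-prediction literature. The function name auc_score and the stand-in saliency map and fixation list are illustrative assumptions, not the paper's actual protocol.

import numpy as np

def auc_score(saliency_map, fixations):
    """ROC AUC: how well saliency values separate fixated from other pixels.

    saliency_map : 2-D numpy array of model output (any scale).
    fixations    : iterable of (row, col) human fixation locations.
    """
    pos = np.array([saliency_map[r, c] for r, c in fixations])  # saliency at fixations
    neg = saliency_map.ravel()  # all pixels serve as the non-fixated baseline
    # Sweep thresholds from the highest fixated saliency value downward.
    thresholds = np.unique(pos)[::-1]
    tpr = np.array([0.0] + [(pos >= t).mean() for t in thresholds] + [1.0])
    fpr = np.array([0.0] + [(neg >= t).mean() for t in thresholds] + [1.0])
    # Trapezoidal integration of the ROC curve.
    return float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smap = rng.random((32, 32))        # stand-in model saliency map
    fix = [(5, 7), (20, 12), (9, 30)]  # stand-in human fixations
    print(f"AUC = {auc_score(smap, fix):.3f}")  # ~0.5 for a random map

Pinning the metric to an explicit definition of success (here, separating fixated from non-fixated pixels) is what allows models from different taxonomic classes to be compared on equal footing.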

Related Topics
Life Sciences, Neuroscience, Sensory Systems