Article ID: 6940096
Journal: Pattern Recognition Letters
Published Year: 2018
Pages: 11
File Type: PDF
Abstract
The recent, exponential increase in online photographs is catalysing the development of artificial intelligence systems that evaluate the aesthetics of images in order to filter photos and provide users with more pleasing content. This paper proposes a new approach, inspired by findings in psychophysics and neuroscience, to build a cross-dataset aesthetic classifier that learns from an efficient set of features extracted from images. Inspired by low-level features present in early human visual processing, the system extracts percentage distributions for orientation, curvature, colour and global reflectional symmetry. Using only people's aesthetic judgements of images as labels, these features are then fed to a deep neural network as only 114 inputs. Once trained, the proposed system classified unseen images by their aesthetics at state-of-the-art level, even on datasets different from the one used for training. Analysis of differences in extracted features between aesthetically good and poor images highlights previously observed human aesthetic preferences in static two-dimensional scenes, such as a preference for the colour blue or for horizontal lines. By learning from brain-inspired features, the approach is intended to allow a transfer of aesthetic expertise from photographs to other types of visual media (paintings, movies, etc.).
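The abstract only outlines the pipeline (percentage-distribution features feeding a deep neural network), so the Python sketch below is a hypothetical illustration rather than the authors' implementation: it computes two of the four feature families (orientation and colour, with assumed bin counts), stubs the curvature and symmetry features with zeros, and passes the resulting 114-value vector to a small feedforward classifier whose layer sizes are also assumed.

```python
# Minimal sketch of the described pipeline: hand-crafted percentage distributions
# as a 114-dimensional input to a feedforward aesthetic classifier.
# Bin counts, the 16/16/82 split, and the network layout are assumptions.
import numpy as np
import torch
import torch.nn as nn
from matplotlib.colors import rgb_to_hsv


def orientation_distribution(gray: np.ndarray, bins: int = 16) -> np.ndarray:
    """Percentage distribution of local edge orientations (0-180 deg), weighted by gradient magnitude."""
    gy, gx = np.gradient(gray.astype(np.float32))
    angles = np.degrees(np.arctan2(gy, gx)) % 180.0
    weights = np.hypot(gx, gy)
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 180.0), weights=weights)
    return hist / (hist.sum() + 1e-8)


def hue_distribution(rgb: np.ndarray, bins: int = 16) -> np.ndarray:
    """Percentage distribution of hue, a coarse stand-in for the colour features."""
    hue = rgb_to_hsv(rgb)[..., 0]  # expects RGB values in [0, 1]
    hist, _ = np.histogram(hue, bins=bins, range=(0.0, 1.0))
    return hist / (hist.sum() + 1e-8)


class AestheticClassifier(nn.Module):
    """Small feedforward network over the 114 inputs (layer sizes are illustrative only)."""

    def __init__(self, n_features: int = 114):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 2),  # aesthetically good vs. poor
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


if __name__ == "__main__":
    rgb = np.random.rand(128, 128, 3)        # stand-in for a photograph in [0, 1]
    gray = rgb.mean(axis=-1)
    features = np.concatenate([
        orientation_distribution(gray),      # 16 bins (assumed)
        hue_distribution(rgb),               # 16 bins (assumed)
        np.zeros(82),                        # placeholder for curvature and symmetry features
    ])                                       # 114 inputs in total
    model = AestheticClassifier(n_features=features.size)
    logits = model(torch.tensor(features, dtype=torch.float32).unsqueeze(0))
    print(logits.shape)                      # torch.Size([1, 2])
```

In practice such a model would be trained with a standard cross-entropy loss on human aesthetic judgements, and the learned weights could then be probed to see which of the brain-inspired features separate good from poor images.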
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Vision and Pattern Recognition
Authors