Learning Visual Biases from Human Imagination

Carl Vondrick, Hamed Pirsiavash, Aude Oliva, Antonio Torralba
Massachusetts Institute of Technology

Download Paper


Although the human visual system can recognize many concepts under challenging conditions, it still has some biases. In this paper, we investigate whether we can extract these biases and transfer them into a machine recognition system. We introduce a novel method that, inspired by well-known tools in human psychophysics, estimates the biases that the human visual system might use for recognition, but in computer vision feature spaces. Our experiments are surprising, and suggest that classifiers from the human visual system can be transferred into a machine with some success. Since these classifiers seem to capture favorable biases in the human visual system, we further present an SVM formulation that constrains the orientation of the SVM hyperplane to agree with the bias from the human visual system. Our results suggest that transferring this human bias into machines may help object recognition systems generalize across datasets and perform better when very little training data is available.
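The abstract only sketches the orientation-constrained SVM. As a rough illustration of the general idea, the toy example below trains a hinge-loss linear classifier whose regularizer pulls the weight vector toward a given "human" direction instead of toward zero. Everything here is a hedged stand-in: the synthetic data, the noisy `w_human` direction, and the regularize-toward-a-prior trick are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a linear concept in a 50-d feature space,
# with very little labeled training data.
dim, n_train, n_test = 50, 10, 500
w_true = rng.normal(size=dim)
X_train = rng.normal(size=(n_train, dim))
y_train = np.sign(X_train @ w_true)
X_test = rng.normal(size=(n_test, dim))
y_test = np.sign(X_test @ w_true)

# Stand-in for the human-derived direction: a noisy copy of w_true.
w_human = w_true + 0.5 * rng.normal(size=dim)
w_human /= np.linalg.norm(w_human)

def train_hinge(X, y, w_prior=None, lam=1.0, epochs=500, lr=0.05):
    """Subgradient descent on hinge loss. If w_prior is given, the
    regularizer pulls w toward that direction rather than toward zero
    (one simple way to encode the bias; not the paper's exact program)."""
    w = np.zeros(X.shape[1])
    target = w_prior if w_prior is not None else np.zeros_like(w)
    for _ in range(epochs):
        mask = y * (X @ w) < 1          # margin violations
        grad = lam * (w - target)
        if mask.any():
            grad -= (y[mask, None] * X[mask]).mean(axis=0)
        w -= lr * grad
    return w

acc = lambda w: (np.sign(X_test @ w) == y_test).mean()
w_plain = train_hinge(X_train, y_train)
w_biased = train_hinge(X_train, y_train, w_prior=w_human)
print(f"plain classifier accuracy:  {acc(w_plain):.2f}")
print(f"biased classifier accuracy: {acc(w_biased):.2f}")
```

With only ten labeled examples, the plain classifier has little to go on, while the prior-biased one inherits whatever signal the human-derived direction carries.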

Figure: (a) Noise in Feature Space, (b) Human Visual System, (c) Classifier for Car

Although all image patches on the left are just noise, when we show thousands of them to online workers and ask them to find ones that look like cars, a car emerges in the average, shown on the right. This noise-driven method is based on well-known tools in human psychophysics that estimate the biases that the human visual system uses for recognition. We explore how to transfer these biases into a machine.
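The averaging step described above can be sketched in a few lines. The simulation below is a hedged stand-in: a hidden template plays the role of the human visual system's bias, and a noisy linear response simulates the workers' yes/no judgments; the dimensions, threshold, and noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a hidden "car" template in a feature space,
# standing in for the bias of the human visual system.
dim = 100
true_template = rng.normal(size=dim)
true_template /= np.linalg.norm(true_template)

# Show many pure-noise feature vectors and keep the ones the simulated
# observer calls "car-like": positive correlation with the hidden
# template, plus decision noise.
noise = rng.normal(size=(20000, dim))
responses = noise @ true_template + 0.5 * rng.normal(size=20000)
selected = noise[responses > 1.0]

# The classification image is simply the average of the selected noise.
estimate = selected.mean(axis=0)
estimate /= np.linalg.norm(estimate)

print(f"kept {len(selected)} noise patches")
print(f"correlation with hidden template: {estimate @ true_template:.2f}")
```

Even though every shown patch is pure noise, the average of the accepted ones aligns closely with the hidden template, which is exactly why the car emerges in the average on the project figure.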