Neural Organization of Objects
Occipitotemporal cortex has strong object-centered responses. However, there is
no widely accepted model of the coding dimensions of objects,
nor how this high-dimensional domain is mapped onto the cortical sheet.
How do you parameterize objects?
Here we showed that the real-world size of objects is a fundamental dimension with a large-scale organization across the cortical surface, one that is interleaved with the dimension of animacy. This work demonstrates that object-responsive cortex is not a heterogeneous bank of features but has a systematic organization at a macro-scale.
This work also highlights that understanding what a region computes will be informed by considering its role at multiple spatial scales of organization.
A Real-World Size Organization of Object Responses in Occipito-Temporal Cortex.
Konkle & Oliva. Neuron, 2012.
Tripartite Organization of the Ventral Stream by Animacy and Object Size.
Konkle & Caramazza. Journal of Neuroscience, 2013.
Parametric Coding of the Size and Clutter of Natural Scenes in the Human Brain.
Park*, Konkle*, & Oliva. Cerebral Cortex, 2014.
Processing multiple visual objects is limited by overlap in neural channels.
Cohen, Konkle, Rhee, Nakayama, & Alvarez. PNAS, 2014.
Cognitive Architecture: Size
One insight into the nature of object representation is to consider that objects
are physical entities in a 3-dimensional world. This geometry places important constraints on how people experience and interact with objects of different sizes.
In a series of behavioral studies, we found that the real-world size of objects is a basic component of object representation. Just as objects have a canonical perspective, we showed they also have a canonical visual size (proportional to the log of their real-world size). Further, size knowledge is automatically activated when an object is recognized.
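The log relationship above can be sketched as a toy model. The slope and intercept here are hypothetical placeholders chosen only to illustrate the shape of the relationship, not fitted values from the studies below:

```python
import math

def canonical_visual_size(real_world_size_cm, slope=0.2, intercept=0.1):
    """Illustrative sketch: canonical visual size modeled as a linear
    function of log real-world size. The slope and intercept are
    hypothetical placeholders, not empirically fitted parameters."""
    return slope * math.log10(real_world_size_cm) + intercept

# Larger objects get larger canonical visual sizes, but the growth is
# logarithmically compressed: equal ratios of real-world size produce
# equal steps in canonical visual size.
for name, cm in [("key", 6), ("chair", 90), ("car", 450)]:
    print(f"{name:6s} {cm:4d} cm -> {canonical_visual_size(cm):.3f}")
```

Under this toy model, the step in canonical size from a key to a chair is larger than the step from a chair to a car, because the second pair differs by a smaller ratio of real-world sizes.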
Canonical visual size for real-world objects.
Konkle & Oliva. Journal of Experimental Psychology: Human Perception and Performance, 2011.
A Familiar Size Stroop Effect: Real-world size is an automatic property of object representation.
Konkle & Oliva. Journal of Experimental Psychology: Human Perception and Performance, 2012.
Normative representation of objects: Evidence for an ecological bias in perception and memory.
Konkle & Oliva. Proceedings of the 29th Annual Cognitive Science Society, 2007.
Representing, Perceiving and Remembering the Shape of Visual Space.
Oliva, Park, & Konkle. Chapter in Vision in 3D Environments, 2010.
Cognitive Architecture: Memory
One area of my research investigates the nature of object representations
by understanding how and what we store about them in memory.
We discovered that people are capable of remembering thousands of visually-presented objects with much more detail than previously believed. This remarkable capacity for retaining highly-detailed memory traces relies on our existing conceptual knowledge: the more we know about the different kinds of objects, the less they interfere in memory.
The thesis emerging from this research is that one cannot fully understand memory capacity or memory processes without also determining the nature of representations over which they operate.
Visual long-term memory has a massive capacity for object details.
Brady, Konkle, Alvarez, & Oliva. PNAS 2008.
Conceptual knowledge supports perceptual detail in visual long-term memory.
Konkle, Brady, Alvarez, & Oliva. Journal of Experimental Psychology: General, 2010.
Scene memory is more detailed than you think: the role of scene categories in visual long-term memory.
Konkle, Brady, Alvarez, & Oliva. Psychological Science, 2010.
Compression in visual short-term memory: using statistical regularities to form more efficient memory representations.
Brady, Konkle, & Alvarez. Journal of Experimental Psychology: General, 2009.
A review of visual memory capacity: Beyond individual items and toward structured representations.
Brady, Konkle, & Alvarez. Journal of Vision, 2011.
Discover Magazine - Top 100 Science Stories of 2008
Scientific American | Ars Technica | Washington Post | MIT press release | demo
Multi-Sensory Representation: Motion
Vision derives from patterns of light in the optic array;
audition encodes the changing pressure patterns of air;
touch reflects the physical deformation of our skin.
However, these highly distinct information sources arise from a common 3-dimensional world; and thus
the information that they each represent is intimately related.
Here we examined whether there are shared representations across these sensory modalities, focusing on the case of motion. We found that motion aftereffects transfer between the modalities of vision and touch. We further explored how different spatial reference frames affect the perception of a tactile motion illusion, and how motion area MT responds to motion stimuli across the modalities in sighted and blind individuals.
Motion Aftereffects Transfer Between Touch and Vision.
Konkle, Wang, Hayward, & Moore. Current Biology 2009.
Sensitive period for a vision-dominated response in human MT/MST.
Bedny, Konkle, Pelphrey, Saxe, Pascual-Leone. Current Biology, 2010.
Tactile Rivalry Demonstrated with an Ambiguous Apparent-Motion Quartet.
Carter, Konkle, Wang, Hayward, & Moore. Current Biology, 2008. press release | demo
What can crossmodal aftereffects reveal about neural representation and dynamics?
Konkle & Moore. Communicative and Integrative Biology, 2009.