PhD 2002, Harvard University
We want to understand how we know what is where by looking. How are objects identified, and how is the ambient optic array sensed by the retinae transformed into a vivid 3D representation of space? We use fMRI, electrophysiology, anatomy, and mathematical modeling.
The What Problem
Tackling the problem of how the brain recognizes visual form is extremely difficult, both because the number of possible forms is effectively infinite and because a huge cortical territory is dedicated to encoding visual form. To make headway on this problem, it would be ideal to have a small piece of brain specialized to encode a single class of visual forms. This situation, surprisingly, exists: functional magnetic resonance imaging (fMRI) reveals six small regions of highly face-selective cortex in the macaque temporal lobe. In at least two of these regions, termed "face patches," single-unit recordings show that almost all visually responsive cells are face-selective. Experiments combining fMRI with microstimulation demonstrate that the face patches are strongly and specifically interconnected. The face patch system thus offers a unique opportunity to dissect the neural mechanisms underlying form perception, because the system is specialized to process one class of complex forms, and because its computational components are spatially segregated. The central challenge to understanding the face patch system, and the goal of our research program, is to understand the functional specialization of each patch. We are approaching this from several directions, including: Representation, Behavior, Connectivity, and Transformation.
The Where Problem
In 1838, Charles Wheatstone reported a remarkable accidental observation (which can be reproduced by looking at a frying pan with a well-turned bottom under a light source, or at a Magic Eye picture):
"An effect of binocular perspective may be remarked in a plate of metal, the surface of which has been made smooth by turning it in a lathe. When a single candle is brought near such a plate, a line of light appears standing out from it, one half being above, and the other half below the surface; the position and inclination of this line changes with the situation of the light and of the observer, but it always passes through the centre of the plate. On closing the left eye the relief disappears, and the luminous line coincides with one of the diameters of the plate; on closing the right eye the line appears equally in the plane of the surface, but coincides with another diameter; on opening both eyes it instantly starts into relief."
What is the neural mechanism by which the brain reconstructs the 3D world? Several lines of evidence point to the critical importance of areas V3, V3A, and CIPS in 3D representation: (1) columns of cells tuned to near, far, and zero disparities have been found in areas V3 and V3A; (2) cells in CIPS are tuned to surface orientation defined by binocular disparity and perspective; and (3) fMRI in alert monkeys shows that V3, V3A, and CIPS are the areas of macaque visual cortex most strongly activated by a disparity-rich stimulus compared to a zero-disparity stimulus. Most physiological studies of 3D perception, however, have focused on areas V1, V2, and MT. Thus our knowledge of the neural mechanisms underlying 3D perception contains a gap precisely where a vital processing module appears to exist. Our research aims to fill this gap by systematically exploring the hypothesis that areas V3, V3A, and CIPS represent the geometry of 3D visual surfaces.
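The geometric relationship that makes disparity a depth cue can be sketched with the standard stereo triangulation formula. This is a textbook illustration of the geometry, not a model of the cortical computation studied here; the numerical values for the interocular baseline and the eye's focal length are rough illustrative assumptions.

```python
# Standard stereo geometry: two eyes separated by baseline B, each with
# focal length f, view a point at depth Z. The point projects to the two
# retinae with a horizontal offset (disparity) d = f * B / Z, so depth
# can be recovered by inverting the relation: Z = f * B / d.

def depth_from_disparity(disparity: float, baseline: float, focal_length: float) -> float:
    """Recover depth Z (meters) from retinal disparity d (meters): Z = f * B / d."""
    if disparity <= 0:
        # Zero disparity corresponds to parallel lines of sight (infinite depth).
        raise ValueError("disparity must be positive and nonzero")
    return focal_length * baseline / disparity

# Illustrative (assumed) values: human interocular distance ~0.065 m,
# approximate optical focal length of the eye ~0.017 m.
B, f = 0.065, 0.017
for d in (1e-3, 1e-4, 1e-5):  # retinal disparity in meters
    print(f"disparity {d:.0e} m -> depth {depth_from_disparity(d, B, f):.2f} m")
```

Note the inverse relation: halving the disparity doubles the recovered depth, which is why disparity becomes an increasingly weak depth cue for distant surfaces.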
Last modified 2010-04-09 01:54 PM