CTVR New Faculty Member


Posted: 2023-07-04

Source: Center for Translational Vision Research
It is only through the senses that life gathers the information about its environment that it needs to survive. As highly visual creatures, humans gather visual information through two eyes that differ in position and in optical imperfections. The result is two two-dimensional images on the left and right retinas, blurred to different extents and viewed from different angles. How the cortex effortlessly merges these two different images into a single, clear, 3-D view of the world is the topic of our lab’s research.

Our studies use a synergy of binocular wavefront technology, human psychophysics, and computational modeling. Wavefront sensing measures the optical imperfections (the wavefront aberrations) of light passing through the anterior segment of the eye. Knowing the aberrations allows us to reconstruct the type of blur a person has in each eye. Psychophysics measures the perceptual outcome: what is seen after visual information has been blurred by the ocular wavefront aberrations and then processed by the cortex. However, psychophysics alone cannot distinguish visual percepts that arise from neural mechanisms from limitations caused by optical blur. We address this conundrum by coupling psychophysics with adaptive optics technology, which minimizes optical limitations by correcting the wavefront aberrations of the eyes. Percepts brought about by neural mechanisms can therefore be measured in isolation.

These measurements serve the eventual purpose of building an image-computable model that explains how the brain selects and balances information from the two eyes. Such a model helps close a longstanding gap in our basic neuroscientific knowledge of the mechanisms underlying binocular integration. It also helps clinicians predict individualized time-course changes in vision should a patient undergo ocular treatment.
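To illustrate the link between measured aberrations and retinal blur, the toy sketch below (not the lab's actual pipeline; the function name, grid size, and 0.5-wave defocus magnitude are all illustrative assumptions) uses standard Fourier optics: the wavefront error map defines a generalized pupil function, and the squared magnitude of its Fourier transform gives the point-spread function (PSF) that blurs the retinal image.

```python
import numpy as np

def psf_from_wavefront(aberration_waves, n=128):
    """Compute a point-spread function from a wavefront error map (in waves).

    Illustrative helper: the pupil is modeled as a unit disk sampled on an
    n x n grid; aberration_waves gives the wavefront error at each sample.
    """
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    pupil = (x**2 + y**2) <= 1.0                      # circular aperture mask
    # Generalized pupil function: aperture amplitude times aberration phase
    p = pupil * np.exp(2j * np.pi * aberration_waves)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(p)))**2  # incoherent PSF
    return psf / psf.sum()                            # normalize to unit energy

n = 128
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
rho2 = x**2 + y**2
# Zernike defocus term Z_2^0 = sqrt(3) * (2*rho^2 - 1), scaled to 0.5 waves RMS
defocus = 0.5 * np.sqrt(3) * (2.0 * rho2 - 1.0)

sharp = psf_from_wavefront(np.zeros((n, n)))  # aberration-free (diffraction-limited) eye
blurred = psf_from_wavefront(defocus)         # defocused eye
# Defocus spreads light over the retina, so the PSF peak drops
assert blurred.max() < sharp.max()
```

Adaptive optics correction corresponds, in this picture, to subtracting the measured aberration map before the image is formed, recovering something close to the diffraction-limited PSF.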