Constructing a three-dimensional world during self-motion: Neural mechanisms of depth perception and moving object detection in macaque monkeys

Summary

Date: April 8, 2015, 1:00 pm
Location: NW 243
Speaker: HyungGoo Kim (Uchida Lab)

When an observer moves through the world, stationary objects in the scene produce retinal image motion whose speed and direction depend on their distance from the observer (motion parallax). Previous work has shown that neurons in area MT can combine ambiguous retinal image motion with smooth pursuit signals to represent depth (Nadler et al. 2008, 2009). What other signals can provide information about eye rotation?
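
To make that ambiguity concrete, here is a minimal geometric sketch, assuming a laterally translating observer, a stationary scene, and small visual angles; the symbols (T for translation speed, f for fixation distance, z for object distance) are illustrative choices, not notation from the talk.

```latex
% Minimal motion-parallax sketch under assumed lateral translation and
% small-angle geometry; symbols are illustrative, not from the talk.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
An observer translates laterally at speed $T$ while fixating a point at
distance $f$. A stationary object at distance $z$ then has retinal image
velocity (relative to the fixated point)
\[
  \dot\theta \;\approx\; T\left(\frac{1}{z}-\frac{1}{f}\right),
\]
which is ambiguous on its own: $T$ is unknown, and the same $\dot\theta$ can
arise from a near or a far object depending on the direction of self-motion.
The compensatory eye rotation (pursuit) needed to hold fixation is
$\dot\alpha \approx T/f$, so the ratio
\[
  \frac{\dot\theta}{\dot\alpha} \;\approx\; \frac{f-z}{z}
\]
depends only on depth relative to the fixation point. An eye-rotation signal
(an efference copy of pursuit, or a visual substitute for it) therefore
disambiguates depth from motion parallax.
\end{document}
```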

As the eye changes orientation relative to a scene (e.g., during pursuit), dynamic changes in perspective introduce a component of “rocking” motion (dynamic perspective) in the optic flow field. In the first part of the talk, we show that neurons in area MT combine this dynamic perspective information with ambiguous retinal image motion to represent depth in the absence of pursuit or binocular cues.
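
For context on what "dynamic perspective" refers to, the standard instantaneous optic-flow equations for a pinhole eye (e.g., Longuet-Higgins & Prazdny, 1980) separate the flow into a depth-dependent translational part and a depth-independent rotational part; the sketch below uses conventional notation and assumptions, not material from the talk.

```latex
% Conventional optic-flow equations for a pinhole eye with unit focal length;
% a sketch of why eye rotation produces depth-independent "perspective"
% distortion. Notation is assumed, not taken from the talk.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For an image point $(x,y)$, eye translation $(t_x,t_y,t_z)$, eye rotation
$(\omega_x,\omega_y,\omega_z)$, and scene depth $Z(x,y)$, the image velocity is
\begin{align*}
  \dot{x} &= \frac{x\,t_z - t_x}{Z} + \omega_x xy - \omega_y(1+x^2) + \omega_z y,\\
  \dot{y} &= \frac{y\,t_z - t_y}{Z} + \omega_x(1+y^2) - \omega_y xy - \omega_z x.
\end{align*}
Only the translational terms depend on depth $Z$; the rotational terms,
including the second-order components in $xy$, $x^2$, and $y^2$ that grow
toward the periphery, are fixed by the eye rotation alone. This
depth-independent, full-field distortion is the ``dynamic perspective''
component that can, in principle, signal eye rotation from the optic flow
itself, without an extraretinal pursuit signal.
\end{document}
```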

In the second part of the talk, I turn to the detection of moving objects during self-motion, which is important for recognizing prey and predators. I show that macaque monkeys can detect a moving object based on the conflict between motion parallax and binocular disparity cues, even when the object's motion is not salient against the background. Responses of MT neurons with mismatched depth preferences for disparity and motion parallax show stronger trial-by-trial correlations with detection of the moving object than responses of neurons with congruent depth preferences. Our results suggest that these MT neurons play a specialized role in detecting moving objects during self-motion.