I use a mirror stereoscope, behavioural techniques, eye tracking, and fMRI to investigate the perceptual and cognitive processes underlying visual perception and attention.
My current research explores the factors that shape how we perceive and represent reality. I am interested in what we can learn from individual differences in both behavioural responses and brain architecture: rather than averaging across subjects, we examine how subjects differ and what might explain those differences. I focus in particular on how (and where) perceived size and depth are represented in the visual cortex.
Objects and Locations
To interact with our environment, we need to identify and locate the objects around us, which requires binding object features to location information. Research has shown that we are biased to assume that two objects appearing in the same location share the same identity. However, it is unknown whether this location bias also holds in 3D space. In the Golomb lab, we are exploring how object identity and 3D spatial information interact.
Attention in Depth
Each time we open our eyes, we are confronted with an overwhelming amount of information. Attention allows us to enhance what is relevant while diminishing what is less so. Yet while a great deal of research has examined how attention operates in 2D, we know very little about how it operates in 3D. My primary interest is in how living in a world rich with depth information shapes our attention.
Is depth information represented in the brain as a coordinate system, as seen for 2D spatial attention, or as a feature of an object, similar to colour and orientation? How does the organisation of objects in depth affect the way we search among them, and how does depth information from our 3D environment shape the way we pay attention?
Stereopsis is the perception of depth arising from the slightly different views of the world falling on each eye. Reproducing this effect digitally has been no mean feat: 3D display technology requires precise knowledge of how binocular vision works. Working with Phil Grove, I investigated how our two eyes fuse these separate images into one.