Changing the real viewing distance reveals the temporal evolution of size constancy in visual cortex

Chen, Juan, Sperandio, Irene, Henry, Molly J. and Goodale, Melvyn A. (2019) Changing the real viewing distance reveals the temporal evolution of size constancy in visual cortex. Current Biology, 29 (13). 2237-2243.e4. ISSN 0960-9822

PDF (Accepted Version). Available under a Creative Commons Attribution NonCommercial NoDerivatives license.



Our visual system provides a distance-invariant percept of object size by integrating retinal image size with viewing distance (size constancy). Single-unit studies with animals have shown that some distance cues, especially oculomotor cues such as vergence and accommodation, can modulate the signals in the thalamus or V1 at the initial processing stage [1, 2, 3, 4, 5, 6, 7]. Accordingly, one might predict that size constancy emerges much earlier in time [8, 9, 10], even as visual signals are being processed in the thalamus. So far, the studies that have looked directly at size coding have either used fMRI (poor temporal resolution [11, 12, 13]) or relied on inadequate stimuli (pictorial illusions presented on a monitor at a fixed distance [11, 12, 14, 15]). Here, we physically moved the monitor to different distances, a more ecologically valid paradigm that emulates what happens in everyday life and is an example of the increasing trend of “bringing the real world into the lab.” Using this paradigm in combination with electroencephalography (EEG), we examined the computation of size constancy in real time with real-world viewing conditions. Our study provides strong evidence that, even though oculomotor distance cues have been shown to modulate the spiking rate of neurons in the thalamus and in V1, the integration of viewing distance cues and retinal image size takes at least 150 ms to unfold, which suggests that the size-constancy-related activation patterns in V1 reported in previous fMRI studies (e.g., [12, 13]) reflect the later processing within V1 and/or top-down input from other high-level visual areas.

Item Type: Article
Faculty \ School: Faculty of Social Sciences > School of Psychology
Depositing User: LivePure Connector
Date Deposited: 02 Jul 2019 12:30
Last Modified: 22 Oct 2022 04:58
DOI: 10.1016/j.cub.2019.05.069


