Size constancy is a well-known perceptual phenomenon in which perceived object size remains stable despite the effect of viewing distance on retinal image size. Although theories involving distance scaling to achieve size constancy have flourished on the basis of psychophysical studies, the underlying neural mechanisms remain unclear. Recently, single-cell recordings have shown that distance-dependent size-tuned cells are common along the ventral stream, from V1, V2, and V4 to IT (Dobbins et al., 1998). In addition, fMRI studies have demonstrated that an object's perceived size, associated with its perceived egocentric distance, modulates its retinotopic representation in V1 (Murray et al., 2006; Sperandio et al., 2012). These results suggest that V1 contributes to size constancy and that its activity may be regulated by feedback of distance information from other brain areas. Here, we propose a neural model based on these findings. A population of gain-modulated MT neurons integrates horizontal disparity (arising in V1) and vergence (arising in FEF) to construct a three-dimensional spatial representation in area LIP. Disparity-selective cells in V1 are gain-modulated and modeled with Gaussian tuning functions, whereas vergence-selective cells in FEF are modeled with sigmoidal functions. Cells in MT integrate the outputs of the V1 and FEF cells by means of a set of basis functions, and the outputs of the MT cells feed forward to cells in LIP to construct a distance map. The LIP neurons send distance information back to MT to obtain a distance-scaling function, and then further back to V1 to modulate the activity of size-tuned cells, resulting in a spread of V1 cortical activity. This process provides V1 with distance-dependent size representations. The model supports the view that size constancy is achieved by scaling retinal image size to compensate for changes in perceived distance, and it suggests a neural circuit capable of implementing this process.
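The feedforward stage of the circuit described above (Gaussian disparity tuning in V1, sigmoidal vergence tuning in FEF, multiplicative basis-function integration in MT, and a weighted LIP readout) can be sketched in a few lines. This is a minimal illustration only; all function names, parameter values, and the choice of a linear readout are assumptions for demonstration, not the fitted components of the model.

```python
import numpy as np

def gaussian_tuning(disparity, preferred, sigma=0.5):
    """V1-like disparity tuning: Gaussian response around a preferred
    disparity (parameters are illustrative, not fitted values)."""
    return np.exp(-((disparity - preferred) ** 2) / (2.0 * sigma ** 2))

def sigmoid_tuning(vergence, threshold, slope=2.0):
    """FEF-like vergence tuning: monotonic sigmoid of vergence angle."""
    return 1.0 / (1.0 + np.exp(-slope * (vergence - threshold)))

def mt_basis_responses(disparity, vergence, preferred_disparities,
                       vergence_thresholds):
    """MT-like basis set: each unit multiplies one disparity tuning curve
    by one vergence gain field (a gain-modulated basis function)."""
    d = gaussian_tuning(disparity, preferred_disparities)   # shape (n_d,)
    v = sigmoid_tuning(vergence, vergence_thresholds)       # shape (n_v,)
    return np.outer(d, v)                                   # shape (n_d, n_v)

def lip_distance_readout(basis, weights):
    """LIP-like readout: a weighted sum over the MT basis responses gives
    a scalar distance signal that could be fed back to scale V1 size
    tuning (the feedback path itself is not modeled here)."""
    return float(np.sum(weights * basis))
```

Under this sketch, reading out distance for a given disparity-vergence pair is a single pass through `mt_basis_responses` followed by `lip_distance_readout`; the multiplicative combination is what gives the MT units the gain-field character the model relies on.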