Color is the brain's estimate of the reflectance of a surface. Reflectance describes how much light a surface reflects at each wavelength. Because the light reflected from a surface depends both on its reflectance and on the spectral power distribution of the incident illumination, surface reflectance cannot be recovered directly from the wavelength composition of the reflected light. Despite this computational problem, the human visual system is remarkably accurate at inferring the reflectance – perceived as color – of surfaces across different illuminants. This ability, referred to as color constancy, is essential for an organism to use color as a cue in object search, recognition, and identification. To study the neural architecture underlying surface color perception, we used physically realistic rendering methods to generate images of two surfaces under three different illuminants. Measuring the patterns of fMRI voxel activity elicited by these images, we tested to what extent responses to surface color in retinotopically mapped visual areas remained stable across illuminants, and which regions encoded illuminant information. We made three key observations. First, patterns of fMRI responses to surface color generalized across illuminants in V1 but not in V2, V3, hV4, or VO1. Second, the accuracy of illuminant decoding was positively correlated with psychophysically measured color constancy, as predicted by the Equivalent Illuminant Model. Third, when fMRI activity was elicited by stimuli that were matched in reflected light but differed in illumination, and therefore also in perceived surface color, there was a gradient from lower to higher visual areas in the tendency to distinguish the two inputs by surface color rather than by illumination.
Our results demonstrate that V1 represents chromatic invariances in the stimulus environment (possibly via feedback), whereas downstream visual areas increasingly link chromatic differences to differences in perceived surface color.