Classical theories of the perception of surface gray shades propose that a key computational goal of human vision is to mathematically invert the physical processes of light reflection from surfaces (e.g. those that create shadows and highlights) and light transmission through atmospheric media (e.g. fog or smoke) in order to recover surface reflectance. Yet the computational goal of recovering surface reflectance is extremely difficult to accomplish, is incompatible with key perceptual data on the distinction between brightness (perceived luminance) and lightness (perceived reflectance), and does not necessarily solve other important computational problems, such as how the visual system computes perceptual layers corresponding to physical surfaces, illumination, and atmospheric media. Here we present a model, based on a recently introduced theory of surface perception, which suggests that the characteristic properties of surface gray-shade perception are better explained in terms of the computational goal of parsing the retinal image into perceptual layers, rather than in terms of the goal of recovering surface reflectance. The model explains some striking demonstrations of surface gray-shade perception through transparent overlays, quantitatively predicts key perceptual data on brightness/lightness perception, and conceptually unifies the prominent anchoring and scission theories of surface perception. The model thus suggests that a detailed understanding of human vision may require a subtle reformulation of the computational problems solved by the visual system.
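The ill-posedness of inverse optics noted above can be made concrete with a minimal sketch. This is not the authors' model; it merely assumes two textbook facts: (1) retinal luminance is, to first approximation, the product of surface reflectance and illumination, so many physically different scenes are retinally identical; and (2) viewing a surface through a partially transmissive layer is classically modeled (following Metelli) as a convex combination of the surface and layer values. All function names here are illustrative.

```python
def luminance(reflectance, illumination):
    """Retinal luminance under a simple multiplicative image model:
    L = R * I. Inverting this to recover R alone is ill-posed,
    since a single L is consistent with infinitely many (R, I) pairs."""
    return reflectance * illumination

def through_layer(surface, layer, alpha):
    """Metelli-style transparency: the value seen through a layer of
    transmittance alpha is a convex combination of the underlying
    surface value and the layer's own value."""
    return alpha * surface + (1.0 - alpha) * layer

# Two physically different scenes that are retinally identical:
# a dark surface in bright light vs. a light surface in dim light.
scene_a = luminance(0.2, 100.0)   # dark surface, bright illumination
scene_b = luminance(0.8, 25.0)    # light surface, dim illumination
assert scene_a == scene_b         # same retinal input, different causes

# Seen through a half-transmissive mid-gray overlay, a mid-gray
# surface yields an intermediate value:
seen = through_layer(0.5, 0.2, 0.5)
```

The first assertion is the crux of the abstract's opening claim: because the mapping from (reflectance, illumination, media) to luminance is many-to-one, "inverting" it is underdetermined, which is one motivation for reframing the goal as decomposing the image into perceptual layers rather than recovering reflectance per se.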