Literature on shape-from-shading (SFS) perception often claims that light and shape perception are intrinsically linked. Indeed, the classic convex/concave ambiguity can be resolved when an observer assumes a (global) light direction. However, most studies only address 3D shape perception while ignoring illumination perception. This makes sense for the classical convex/concave ambiguity, but in the case of the more general bas-relief ambiguity, measuring both light and shape is paramount for making claims about the light-shape relation. We explored whether the formal relation between light and shape described by Belhumeur et al. (1999) can be used to model human vision. To do so, we modeled bumpy spherical shapes that were 3D printed in three versions: compressed (40%, along the viewing direction), normal (100%) and stretched (140%). These flattened, spherical and oblong stimuli were presented in a lab setting where we could accurately manipulate the (collimated) light direction. Observers had to match the light direction of the real, illuminated 3D stimulus on a virtual illuminated sphere shown on a computer screen. Additionally, for each shape the perceived 3D geometry was measured by having observers adjust a virtual cross section of the stimulus. Overall, we found that although observers viewed the stimuli binocularly (from about 1.5 m), the shapes were consistently misperceived as spherical, despite their large physical variations. These erroneous shape percepts could partly be traced back to the illumination settings, especially for the flattened shape. However, we also found an unexpectedly large degree of variability in the illumination settings. To account for this, we performed a control experiment on a smooth sphere, which revealed much smaller variability. Our data suggest that human vision can partly be modeled by the bas-relief ambiguity, and that observers are heavily biased towards globular shape inference.
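
The formal light-shape relation invoked above can be sketched numerically. The following is an illustrative sketch (not the authors' code), assuming the Lambertian imaging model of Belhumeur et al. (1999): under the generalized bas-relief (GBR) transformation z → λz + μx + νy, transforming the albedo-scaled surface normals b by G⁻ᵀ and the light direction s by G leaves every image intensity max(0, b·s) unchanged, so a 40% depth compression paired with a matched light change is invisible in the image. The names `gbr_matrix`, `lam`, `mu`, `nu` and the sample values are our own, for illustration only.

```python
import numpy as np

def gbr_matrix(lam, mu, nu):
    """GBR matrix G for the depth transform z -> lam*z + mu*x + nu*y (lam != 0)."""
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [mu,  nu,  lam]])

rng = np.random.default_rng(0)
b = rng.normal(size=(100, 3))          # albedo-scaled surface normals (rows)
s = np.array([0.2, -0.4, 1.0])         # directional light source

G = gbr_matrix(lam=0.4, mu=0.1, nu=-0.2)  # e.g. a 40% depth compression
b_t = b @ np.linalg.inv(G)             # b' = G^{-T} b (row-vector form)
s_t = G @ s                            # s' = G s

# Lambertian shading with attached shadows: I = max(0, b . s)
I   = np.clip(b @ s,     0.0, None)
I_t = np.clip(b_t @ s_t, 0.0, None)
print(np.allclose(I, I_t))             # True: the two images are identical
```

Because b'·s' = bᵀG⁻¹Gs = b·s term by term, the flattened and unflattened surfaces produce identical images, which is why separately measuring both perceived light and perceived shape is needed to test whether observers exploit this ambiguity.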