Human observers can perceive the shape and material properties of three-dimensional objects, even from a single two-dimensional image. What information in an image do they use to make these judgments? We conducted eye movement studies to pursue this question. Previous work has used the human figure, natural objects, or silhouettes of abstract objects as stimuli in eye tracking setups to study shape perception (van Doorn et al., 2002; Renninger et al., 2007). We are interested in shape and material perception for images of unfamiliar, three-dimensional objects. We constructed shapes by adding randomized spherical harmonics and rendered them using PBRT under different illuminations and viewing conditions (Pharr & Humphreys, 2004). We varied the surface reflectance properties (albedo and gloss) as well as the spherical harmonic coefficients in order to generate different shapes. Based on psychophysical and computational results in shape perception, one might expect some image regions (e.g., occluding boundaries, high-contrast areas, corners) to be more useful than others for shape judgments. Recent work in material perception (Motoyoshi et al., 2007) has shown that luminance contrast and skewness are predictive of albedo and gloss; regions of high contrast and skewness usually contain specular highlights and prominent edges. It is therefore plausible that observers look in different places during shape and material perception tasks. In our data, we found that observers' eye movements were (a) non-random, (b) correlated with each other, and (c) similar for both tasks. There appear to be regions in our objects that elicit eye movements during both shape and material judgment tasks. These regions cannot be predicted by simple, low-level image measurements such as mean luminance, local contrast, local skewness, or local energy.
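As a rough illustration of the low-level image measurements mentioned above, the sketch below computes mean luminance, RMS contrast, and skewness over non-overlapping image patches. This is a minimal assumption-laden sketch, not the analysis pipeline used in the study: the function name, patch size, and patch tiling scheme are our own choices for illustration.

```python
import numpy as np

def local_stats(image, patch=16):
    """Per-patch mean luminance, RMS contrast, and skewness.

    Hypothetical helper for illustration only; `image` is a 2-D
    grayscale array with luminance values, `patch` is the side
    length of the non-overlapping square patches.
    """
    h, w = image.shape
    means, contrasts, skews = [], [], []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = image[i:i + patch, j:j + patch].astype(float)
            mu = p.mean()
            sigma = p.std()
            means.append(mu)
            # RMS contrast: standard deviation normalized by mean luminance
            contrasts.append(sigma / mu if mu > 0 else 0.0)
            # Skewness: normalized third central moment
            skews.append(((p - mu) ** 3).mean() / sigma ** 3 if sigma > 0 else 0.0)
    return np.array(means), np.array(contrasts), np.array(skews)
```

Maps of such patch-wise statistics could be compared against fixation density maps; the abstract's finding is that none of these simple measurements predicts where observers fixate.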