The visual world is full of noise, from smog to fog to static on a television screen. Even when noise obscures our view, we perceive the world and judge visual quality in real time. How do we judge image quality, and which properties of an image cue us to its quality? We set out to determine which image features are used to judge image quality. We presented images (aerial shots, landscapes, and medical images) with added noise at varying levels that obscured either global (“diffuse” noise) or local (“sparse” noise) image features; naïve observers viewed the diffuse-noise and sparse-noise versions side by side and made a binary choice indicating which had the better image quality. We computed the values of various local and global visual features for each image and compared human decisions with the classifications of a binary decision tree built on those features. Features with high information gain ratio that aligned with human judgments were taken to be the features observers used to render their decisions. The psychophysical task provided the data for the classifier. At low noise levels, observers preferred diffuse noise (aerials: 59%, p = 0.136; landscapes: 67%, p = 0.015; medicals: 73%, p = 0.012); at intermediate noise levels, preference varied with image type (aerials: 99% diffuse, p < 0.0001; landscapes: 75% sparse, p < 0.0001; medicals: 53% diffuse, p = 0.24); at high noise levels, preference shifted to sparse noise (aerials: 97%, p < 0.0001; landscapes: 93%, p < 0.0001; medicals: 94%, p < 0.0001). Decision tree analysis of the psychophysical data showed that local image features (e.g., local contrast) rather than global ones (e.g., RMS contrast, spectral power) aligned with perceptual decisions across all image types and noise levels tested. These results stand in stark contrast to the accepted view that perceptual processing proceeds from global structuring toward more fine-grained, local analysis.
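The information gain ratio criterion mentioned above is the C4.5-style split measure: the entropy reduction a feature split achieves on the choice labels, normalized by the entropy of the partition sizes. As a minimal illustrative sketch (not the authors' code; the labels, feature values, and threshold below are hypothetical), it can be computed for a single binary threshold split as:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(labels, feature_values, threshold):
    """Information gain ratio (C4.5-style) for splitting `labels`
    by the rule `feature_value <= threshold`."""
    left = [y for x, y in zip(feature_values, labels) if x <= threshold]
    right = [y for x, y in zip(feature_values, labels) if x > threshold]
    n = len(labels)
    # Information gain: entropy reduction achieved by the split.
    gain = (entropy(labels)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))
    # Split information: entropy of the partition sizes themselves,
    # which penalizes splits that merely fragment the data.
    split_info = entropy(["L"] * len(left) + ["R"] * len(right))
    return gain / split_info if split_info > 0 else 0.0

# Hypothetical example: a local-contrast feature that perfectly
# separates "prefer diffuse" from "prefer sparse" trials.
choices = ["diffuse"] * 4 + ["sparse"] * 4
local_contrast = [0.1, 0.2, 0.15, 0.05, 0.8, 0.9, 0.7, 0.85]
print(gain_ratio(choices, local_contrast, threshold=0.5))  # → 1.0
```

A feature whose splits score a high gain ratio and reproduce the observers' binary choices is, in this analysis, a candidate cue underlying the perceptual judgment.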