Egocentric distance judgment on a surface spanned by different textures is impaired compared to one on a continuous ground surface, suggesting that an accurate representation of the ground surface is required for the visual system to correctly judge distance (Sinai et al., 1998). Here, extending the ideas of Gibson & Cornsweet (1952), we propose that the visual system uses a sequential surface integration process (SSIP) to represent the ground surface. Integral to the SSIP is the idea that the visual system first relies on near depth cues to represent the near surface, and then integrates it with the adjacent, farther surface patch using the texture gradient cue. Step by step, the process continues until the farthest patch of surface is integrated, culminating in a global ground surface representation. Thus, when the ground surface is disrupted by incongruent textures, the inaccuracies in integrating from near to far are compounded, resulting in an erroneous global surface representation. In this study, we tested the notion that surface integration occurs from near to far. Observers judged target distance (4–7 m) while wearing a shade that limited the visual field (13–40 deg vertical, 60 deg horizontal), and responded by walking blindly to the perceived target distance. Four viewing conditions were tested: (i) with fixed head position, (ii) rotating the head downward to scan the ground from far to near, (iii) rotating the head upward to scan the ground from near to far, and (iv) repeatedly rotating the head up and down to scan in both directions. We found that distance judgments were accurate when the vertical visual field was larger than 30 deg, consistent with Loomis & Knapp (in press). With smaller fields, conditions (i) and (ii) led to underestimation, while conditions (iii) and (iv) remained accurate. The resultant difference between conditions (ii) and (iii) underscores the directional requirement of the SSIP (near to far) for forming a veridical ground surface representation.
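The compounding of integration errors described above can be illustrated with a toy sketch. This is not the authors' model: the patch-by-patch loop, the `slant_error_per_boundary` parameter, and all numeric values are hypothetical choices made purely to show how small, repeated errors at texture boundaries accumulate into underestimation when integrating from near to far.

```python
# Toy illustration (hypothetical, not the SSIP model itself):
# integrate the ground surface patch by patch, from the observer outward.
# Each texture boundary contributes a small misestimate of the next
# patch's extent, so errors compound multiplicatively with distance.

def estimate_distance(true_distance, patch_length=1.0,
                      slant_error_per_boundary=0.0):
    """Return the perceived distance after near-to-far integration.

    slant_error_per_boundary: fractional error introduced each time
    the process crosses into the next (possibly incongruent) patch.
    """
    estimated = 0.0
    remaining = true_distance
    gain = 1.0  # cumulative scaling of perceived extent
    while remaining > 0:
        step = min(patch_length, remaining)
        estimated += step * gain
        remaining -= step
        # crossing a patch boundary compounds the error
        gain *= (1.0 - slant_error_per_boundary)
    return estimated

# Continuous, homogeneous ground: no per-boundary error -> veridical.
print(estimate_distance(6.0, slant_error_per_boundary=0.0))   # 6.0
# Incongruent texture every metre: errors compound -> underestimation.
print(estimate_distance(6.0, slant_error_per_boundary=0.05))  # ~5.30
```

In this sketch the farther the target, the more boundaries are crossed, so the underestimation grows with distance, mirroring the abstract's claim that near-to-far inaccuracies are compounded into an erroneous global surface representation.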