Goal: Spatial features of an object can be specified symbolically or motorically, by acting directly upon the object. Is this response dichotomy reflected in a dual representation of the visual world, one for perception and one for action? Here we test whether motoric and symbolic specification of length and orientation rely on a common representation or on dual representations.

Methods: Bars of different lengths and orientations were presented one at a time (47 ms) on a monitor. On each trial, subjects first rapidly placed thumb and index finger at the endpoints of the bar; this motoric specification was recorded with an Optotrak system. Immediately thereafter, subjects used a keyboard to indicate the perceived length and orientation of the bar (symbolic specification).

Results: The probability of making the same motoric and symbolic specification was well above chance for both length and orientation, indicating that a common representation drove the two response types. Nevertheless, alternative explanations are possible.

Discussion: Seeing or feeling the hand making the motoric specification might influence the symbolic specification, producing the high agreement between response types. A control experiment, however, indicated that symbolic specifications of length were unaffected by the motoric specifications. Further, grasp precision for 3D objects does not follow Weber's law, which has been proposed as a marker of dorsal-stream processing (Milner & Goodale, 2010). Motoric specifications of 2D objects might instead be driven by the ventral stream, which is also thought to drive symbolic specifications; if so, well-above-chance agreement between the two response types would be expected. In the present experiment, however, the precision of the motoric specification of the 2D objects also did not follow Weber's law.
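For reference, Weber's law states that discrimination precision scales with stimulus magnitude; applied to length, the just-noticeable difference grows in proportion to bar length:

\[
\frac{\Delta L}{L} = k
\]

where \(\Delta L\) is the just-noticeable difference in length, \(L\) is the bar length, and \(k\) is the Weber fraction. A response measure whose variability does not grow with \(L\) in this way is said to violate Weber's law, the dorsal-stream marker referred to above.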
Conclusion: The well-above-chance agreement between the motoric and symbolic specification of both length and orientation is best explained by assuming that the two response types are driven by a common representation of spatial features.