Standard, linguistic means of specifying the content of mental states do so by expressing the content in question. Such means fail when it comes to capturing non-conceptual aspects of visual experience, since no linguistic expression can adequately express such content. One alternative is to use depictions: images that either evoke (reproduce in the recipient) or refer to the content of the experience. Practical considerations concerning the generation and integration of such depictions argue in favour of a synthetic approach: the generation of depictions through the use of an embodied, perceiving and acting agent, either virtual or real. This paper takes the first steps in an investigation of how one might use a robot to specify the non-conceptual content of the visual experience of a (hypothetical) organism that the robot models.