Not all research in machine consciousness aims to instantiate phenomenal states in artefacts. For example, one can use artefacts that do not themselves have phenomenal states merely to simulate or model organisms that do. Nevertheless, one might refer to all of these pursuits -- instantiating, simulating or modelling phenomenal states in an artefact -- as 'synthetic phenomenality'. But there is another way in which artificial agents (be they simulated or real) may play a crucial role in understanding or creating consciousness: 'synthetic phenomenology'. Explanations involving specific experiential events require a means of specifying the contents of experience, and not all such contents can be specified linguistically. One alternative, at least in the case of visual experience, is to use depictions that either evoke or refer to the content of the experience. Practical considerations concerning the generation and integration of such depictions argue in favour of a synthetic approach: generating depictions through the use of an embodied, perceiving and acting agent, either virtual or real. Synthetic phenomenology, then, is the attempt to use the states, interactions and capacities of an artificial agent to specify the contents of conscious experience. This paper takes the first steps toward seeing how one might use a robot to specify the non-conceptual content of the visual experience of a (hypothetical) organism that the robot models.