Before transmitting it, you could save the image to file (simSaveImage) to make sure that the vision sensor is operational. Remember that you should do this in the sensing section of a child script, otherwise you will retrieve the image from the previous simulation step (and no image at all in the first simulation step), unless you explicitly handle your vision sensor.
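The check above could look something like the following sketch for the sensing section of a non-threaded child script. This is an illustration, not the official example: 'Vision_sensor' and 'debugImage.png' are placeholder names, and the explicit-handling call is shown commented out for the case where the sensor is marked as explicitly handled.

```lua
if (sim_call_type==sim_childscriptcall_sensing) then
    -- 'Vision_sensor' is a hypothetical object name; adjust to your scene
    local sensor=simGetObjectHandle('Vision_sensor')

    -- If the sensor is flagged as explicitly handled, trigger it here so
    -- the image belongs to the current simulation step:
    -- simHandleVisionSensor(sensor)

    -- Retrieve the image as a byte buffer together with its resolution:
    local image,resX,resY=simGetVisionSensorCharImage(sensor)
    if image then
        -- Save to file to verify the sensor works before transmitting:
        simSaveImage(image,{resX,resY},0,'debugImage.png',-1)
    end
end
```

Once the saved image looks correct, the save call can be removed and the same buffer published over ROS or the remote API.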

There is an example of how to publish/subscribe to an image with the new RosInterface: rosInterfaceTopicPublisherAndSubscriber.ttt

When you start V-REP in headless mode, you basically run the exact same binaries as V-REP with a GUI; it is not a truly headless version of V-REP. I currently see two possibilities:

You can try running with different graphics settings and see if that makes a difference. Have a look at the file system/usrset.txt. In there, try out different values for offscreenContextType, fboType, forceFboViaExt and vboOperation.
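For orientation, the relevant entries in system/usrset.txt look roughly like this. The values shown are illustrative only; check the comments in your own usrset.txt for the exact meaning of each value in your V-REP version, and try the combinations one at a time:

```
// graphics-related settings in system/usrset.txt (values are examples)
offscreenContextType = -1   // -1=default, or an alternative offscreen context type
fboType              = -1   // -1=default, or an alternative framebuffer object type
forceFboViaExt       = false // force the FBO via the OpenGL extension
vboOperation         = -1   // -1=default, or force VBO usage on/off
```

After editing the file, restart V-REP for the settings to take effect.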

You can recompile V-REP in true headless mode by compiling it with the makefile_noGui_noGl makefile. In that case, however, vision sensors will only work if they rely on a plugin for image generation (e.g. v_repExtPovRay).

Unfortunately, rezama's solution only seems to work on remote computers with an X server. Even with libgl1-mesa-dri installed, the following does not work:
xvfb-run --auto-servernum --server-num=1 -s "-screen 0 640x480x24" ./vrep.sh -h

Is there any plan to provide a real headless version of V-REP?
If so, what is the timeline for this?
This issue was originally posted in February 2014!

This is currently not a high priority, but the plan is to replace the whole graphics engine with something that is more detachable, scalable and maintainable. We can't give a fixed schedule for that yet.