All the vertex shader does is pass the vertices through, from input to output. Now, where is the difference in having the vertex shader put a vertex at a specific position?

Where is the difference? You just said the difference; it's right there in your post.

Currently, vertex shaders pass vertices "from input to output". This means that they have no knowledge of a "specific position". All they know is that they received a set of inputs, and will be writing a set of outputs. They neither know nor care where those outputs are going. The "position" of a vertex is defined by the rendering pipeline.

So if a VS is going to write to a "specific position", the VS now must have the concept of a position. Which it currently does not.

There is a reason why VS's can't generate vertices, why they can't cull vertices. Why the only thing a VS can do is transform a vertex "from input to output". That's what a VS is for. For other things, we have Geometry Shaders.
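To make that concrete, here is what such a pure pass-through vertex shader looks like in GLSL (the attribute and varying names are made up for the example):

```glsl
#version 150

in vec4 position;  // inputs: whatever attributes were fed in
in vec3 color;

out vec3 vColor;   // outputs: handed off to the next pipeline stage

void main()
{
    // The shader only maps its inputs to its outputs; it neither
    // knows nor chooses where in any buffer those outputs land.
    vColor = color;
    gl_Position = position;
}
```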

If that doesn't make sense, then something like image load/store would also make no sense. It would be exactly the same thing, but doable with transform feedback.

... what?

You know that Image Load/Store is not actually possible on OpenGL 3.x-class hardware, right? That's a feature of 4.x hardware. So you seem to be arguing with yourself, since you want this scattered write TF feature in 3.x hardware... where it generally isn't possible.

Also, his "makes no sense" isn't about the idea of scattered writes itself; it's about the idea of scattered writes using Transform Feedback. Scattered writing is good functionality, when needed. But scattered writes have no place in transform feedback.
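For comparison, on 4.x-class hardware GLSL exposes scattered writes directly via Image Load/Store, no transform feedback involved. A rough sketch (the binding point and the addressing scheme are invented for illustration):

```glsl
#version 420

// Writable image: the shader, not the pipeline, picks the destination.
layout(rgba32f, binding = 0) uniform writeonly image2D dst;

in vec4 position;
in vec4 value;

void main()
{
    // Scattered write: store this vertex's value at a texel of the
    // shader's own choosing, here keyed off the vertex index.
    imageStore(dst, ivec2(gl_VertexID % 64, gl_VertexID / 64), value);
    gl_Position = position;
}
```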

Just because you cannot imagine use cases for it doesn't mean it "makes no sense". That is all I want to say.

Um, no. It makes no sense because it makes no sense. It runs contrary to the basic functioning of the rendering pipeline.

I am going to pour gasoline on this fire... just for fun. There is a way, sort of, to do random-write transform feedback with even GL2-generation hardware, but it kind of feels silly:

Render point sprites to the MRT color buffers, with a size such that exactly one pixel is hit

Specify "where" in the buffer objects one wishes to write by setting the value of gl_Position in the vertex shader

Read back the values from the MRT color buffers into one's buffer objects

Essentially, "render points" to color buffers, each color buffer packing one output of the vertex shader stage. The color buffers should be GL_RGBA32F (or GL_RGB32F, GL_RG32F or GL_R32F). Then do glReadPixels with a buffer object bound to GL_PIXEL_PACK_BUFFER to read the values into a buffer object. You'll have to be very careful about the value you set gl_Position to, so that exactly one pixel is hit and it is exactly where you want it to be read into the buffer object after the "render". But this should work, and I have a memory that this was an actual strategy at one point in time too.
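The fiddly part of that last step, getting gl_Position right so that a size-1 point lands on exactly one chosen pixel, is just the inverse of the viewport transform. A minimal sketch of that mapping as a plain C helper (the function name is mine, not from any GL API):

```c
#include <assert.h>
#include <math.h>

/* Map a target texel (px, py) of a width x height render target to the
 * clip-space x/y to write into gl_Position (with w = 1.0), so that a
 * size-1 point rasterizes onto exactly that texel. The viewport
 * transform maps NDC [-1, 1] onto [0, size], so the texel center
 * (px + 0.5) corresponds to NDC 2 * (px + 0.5) / size - 1. */
static void texel_to_clip(int px, int py, int width, int height,
                          float *clip_x, float *clip_y)
{
    *clip_x = 2.0f * ((float)px + 0.5f) / (float)width  - 1.0f;
    *clip_y = 2.0f * ((float)py + 0.5f) / (float)height - 1.0f;
}
```

So to scatter vertex i into slot i of the readback buffer, lay the outputs out as texels (i % width, i / width) and feed the resulting clip coordinates to gl_Position.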