Then it computes a special effect based on the depth. The screen is split into 8 vertical bands, and the effect is applied 8 times (a shame: on a current GPU it would be basically free to do it once for the full screen).

GSdx limitation 1: blending will clamp, not wrap
=> solution: use sw blending => accurate_colclip
GSdx limitation 2: the depth texture is a 32-bit float on the GPU, so it needs to be converted. And we need to reuse it as an input texture.
=> half solution: add a shader to convert it
=> add various hacks to copy the depth into local memory and disable the texture cache. Temporary, until I find a better solution (and provided it works this way).
GSdx limitation 3: the RT is 24 bits, so the alpha channel mustn't be written
=> solution: not yet implemented, but doable
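To illustrate limitation 1: the GS wraps blending results modulo 256 when color clamping is off, while GPU fixed-function blending saturates at 255. A minimal sketch of the difference that accurate_colclip has to reproduce (my own helper names, not GSdx code):

```cpp
#include <cstdint>

// GPU fixed-function blending: each 8-bit channel saturates at 255.
static uint8_t blend_clamp(int a, int b) {
    int sum = a + b;
    return static_cast<uint8_t>(sum > 255 ? 255 : sum);
}

// GS behavior with clamping disabled: the result wraps, i.e. only the
// low 8 bits are kept. Software blending can emulate this exactly.
static uint8_t blend_wrap(int a, int b) {
    return static_cast<uint8_t>((a + b) & 0xFF);
}
```

For example 200 + 100 gives 255 on the GPU but 44 on the GS; only the wrapped value matches what the game expects.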

Yes, that many issues for a single draw call! Anyway, I managed to get a texture. Initially it was completely dark; now you get this:

GSdx limitation: no support for RT and depth at the same address.
The RT will receive the color of the RT (so no change).
The depth will be updated by the primitives (depth test permitting).
Note: due to a GSdx limitation, the alpha value will be wrongly copied.

The RT will receive the texture with a mask of ~0x3FFF, which means the red channel is left untouched and green is partially masked.

GSdx limitation: the GPU only supports masking full channels. You can't mask half of one.
GSdx limitation: the initial RT was 32 bits, now it is 16 bits! The texture cache won't like it.

Actually, what happens is that the depth information is more or less stored in the alpha channel of the RT. If I get it correctly:
Z[14:8] is copied into A[6:0]
0 is copied into A[7] (due to the AEM stuff)

Current screenshot of the situation; due to the wrong format conversion we draw silly things:

Once the effect has been done 8 times, covering the full screen, there is a final post-processing step that multiplies the alpha channel of the RT (which is reused as 32 bits). In my example the factor is 91 (i.e. about 0.71, since 128 is 1.0 in GS alpha arithmetic).
input texture: 32 bits @ 0x0
output RT: 32 bits @ 0x0
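As a sanity check on that factor (assuming the usual GS convention that 0x80 means 1.0, so the multiply is a shift by 7):

```cpp
#include <cstdint>

// Final post-processing step: scale the alpha channel by a fixed
// factor, where 128 (0x80) represents 1.0. A factor of 91 is ~0.71.
static uint8_t scale_alpha(uint8_t a, uint8_t factor) {
    return static_cast<uint8_t>((a * factor) >> 7); // divide by 128
}
```

So an alpha of 128 comes out as 91, exactly the factor applied to "full" alpha.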

The first pass gives you a 32-bit swizzled format (PSMCT32). Then the GS reinterprets it as PSMCT16 (no conversion).
Let's take the first line from x=8 to x=16: how do I compute the address in the 16-bit format, and therefore which pixels in the 32-bit format are impacted?

A word can contain one 32-bit pixel in RGBA8.
A word can also contain two 16-bit pixels in RGB5A1.
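A small illustration of that aliasing (ignoring the swizzled memory layout, which I leave out here):

```cpp
#include <cstdint>

// One 32-bit GS word, viewable either as a single RGBA8 pixel or as
// two 16-bit pixels. R sits in the low bits (PSMCT32 layout).
struct Word {
    uint32_t raw;
    // RGBA8 view
    uint8_t r8() const { return raw & 0xFF; }
    uint8_t g8() const { return (raw >> 8) & 0xFF; }
    uint8_t b8() const { return (raw >> 16) & 0xFF; }
    uint8_t a8() const { return (raw >> 24) & 0xFF; }
    // 16-bit view: pixel 0 is the low halfword, pixel 1 the high one.
    uint16_t px16(int i) const { return (raw >> (16 * i)) & 0xFFFF; }
};
```

The second 16-bit pixel (`px16(1)`) aliases exactly the B and A bytes of the RGBA8 view, which is what the game exploits below.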

So the game draws lines because it wants to access the 2nd 16-bit pixel of each word (i.e. the upper bits of the RGBA8 pixel, which means BA8). The lower RG bits will remain untouched.
Then the game sets a mask of 0x3FFF, which is logical when we decompose the remaining 16 bits.
Originally you have 8 bits of alpha and 8 bits of blue; now you must consider them as a 16-bit color.

In other words, only the alpha part of the 32-bit color will be updated! Therefore the alpha channel will be a function of the depth. I was wrong: the last post-processing step isn't a format conversion but a way to apply a final multiplication factor to the alpha.
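To summarize the masking trick with a sketch (my reading of the FBMSK-style semantics, where a set bit protects the destination bit; the exact rules are in the GS manual):

```cpp
#include <cstdint>

// GS-style masked write: bits set in 'mask' keep the destination
// value, the others take the source value.
static uint32_t masked_write(uint32_t dst, uint32_t src, uint32_t mask) {
    return (dst & mask) | (src & ~mask);
}
```

Applied to the 16-bit view with a mask of 0x3FFF, the low 14 bits of each halfword are preserved, so writing through the second 16-bit pixel of a word can only touch the top of the RGBA8 alpha byte while the red and green bytes stay intact.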