Hi,
I need to compute the row means of a texture A. The resulting texture D is
then used to demean texture A. My current implementation is really slow but
works: walk over the columns of texture A, adding each column's values into
a destination texture D with additive blending. Finally, when demeaning
texture A, subtract from each value of texture A the corresponding value of
texture D divided by the number of columns in texture A.
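For reference, here is a minimal NumPy sketch of the two passes described above, run on the CPU. The array A and its shape are my own illustration, not the poster's actual texture data; the column loop stands in for one additive-blend pass per column.

```python
import numpy as np

# Hypothetical texture A: rows x columns of float values.
A = np.arange(12, dtype=np.float32).reshape(3, 4)

# Pass 1: "additive blending" of each column into a one-column
# destination texture D -- one blend per column, hence O(columns) passes.
D = np.zeros((A.shape[0], 1), dtype=np.float32)
for c in range(A.shape[1]):
    D += A[:, c:c + 1]

# Pass 2: demean A by subtracting D divided by the column count.
demeaned = A - D / A.shape[1]
```

After pass 2, every row of `demeaned` sums to (approximately) zero.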
Has anybody tried to implement a demeaning algorithm on the GPU before?
Kind regards,
Chris
---
Keep IT Simple Software
P: P.O.Box 548, 7500 AM Enschede, The Netherlands
W: http://www.keepitsimple.nl
E: mailto:info@...
T: +31 53 4356687

Hm, could this be faster: add the right half of the texture to the left
half of the texture (so added[0] = pixel[0] + pixel[128] for a texture
256 pixels wide). Then recurse on the left 128 pixels. Pretty soon you
should have the row sums in one column (it needs to be float or something
to not overflow, but it seems your texture D could overflow with your
algorithm as well, so I guess you've got that covered!).
For a row width of 2^k you need k additive passes, and they are dependent,
i.e. each pass reads from what the previous pass produced. The size of the
added region, however, halves each time.
Just a thought.
Anders Nilsson
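The halving reduction Anders describes can be sketched in NumPy like this (the array contents and shapes are my own illustration; on the GPU each loop iteration would be one render pass over the left half):

```python
import numpy as np

# Hypothetical texture: 3 rows, 256 columns (a power of two, 2^8).
A = np.random.rand(3, 256).astype(np.float32)

buf = A.copy()
width = buf.shape[1]
passes = 0
while width > 1:
    half = width // 2
    # Add the right half onto the left half, then recurse on the left half.
    buf[:, :half] += buf[:, half:width]
    width = half
    passes += 1

# Column 0 now holds the row sums, after k = log2(256) = 8 passes.
row_sums = buf[:, 0]
```

Each pass halves the region being added, so the total work is about 2 * width additions per row, but only log2(width) dependent passes instead of one pass per column.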