Pixel Corruption From Neighboring Pixels with gl_FragCoord

I'm trying to write a color-filter-array demosaicing fragment shader for raw captures from a camera with a Bayer color filter. To decode the image properly, I need access to clean neighboring texels, with no averaging between them. Since I'm not using TEXTURE_RECTANGLE textures, I'm a little nervous. I also need to know the exact x/y offset of the current pixel in the shader.

Here is a partial, simple shader that reconstructs the RGB values for a red pixel of the Bayer color filter array:
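(The shader itself didn't survive in the post; the following is a minimal sketch of what such a shader might look like, assuming an RGGB mosaic and GL_NEAREST filtering. The `greenOffset[]`/`blueOffset[]` names come from the questions below; `rawTex` and `texSize` are hypothetical.)

```glsl
#version 120
#extension GL_EXT_gpu_shader4 : enable

uniform sampler2D rawTex;   // hypothetical: raw Bayer data, one value per texel
uniform vec2 texSize;       // hypothetical: texture dimensions in texels

// Texel offsets from a red site to its green and blue neighbors (RGGB assumed)
const vec2 greenOffset[4] = vec2[4](vec2(-1.0, 0.0), vec2(1.0, 0.0),
                                    vec2(0.0, -1.0), vec2(0.0, 1.0));
const vec2 blueOffset[4]  = vec2[4](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                                    vec2(-1.0, 1.0), vec2(1.0, 1.0));

void main()
{
    // Normalized coordinate of this texel; relies on GL_NEAREST to avoid averaging
    vec2 uv = gl_FragCoord.xy / texSize;
    float r = texture2D(rawTex, uv).r;   // red value at this site

    float g = 0.0;
    float b = 0.0;
    for (int i = 0; i < 4; i++) {
        g += texture2D(rawTex, uv + greenOffset[i] / texSize).r;
        b += texture2D(rawTex, uv + blueOffset[i] / texSize).r;
    }
    gl_FragColor = vec4(r, g * 0.25, b * 0.25, 1.0);
}
```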

1) How do I implement the layout qualifier "pixel_center_integer" with this version of OpenGL? I need the offsets to be the actual integer values, not with 0.5 offsets.

2) Will my greenOffset[] and blueOffset[] texel offsets return me the original buffer pixels without any sub-pixel neighbor averaging? The color demosaicing will not work if the pixel values are in any way mixed.

Often a simpler way to deal with this is to just use gl_FragCoord and texelFetch().
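For example (a sketch, assuming GLSL 1.30 or later; `rawTex` is a hypothetical sampler name):

```glsl
#version 130

uniform sampler2D rawTex;   // hypothetical name
out vec4 fragColor;

void main()
{
    // Integer texel coordinate of this fragment; texelFetch does no
    // filtering, wrapping, or sub-texel interpolation, so neighbors
    // come back exactly as stored in the buffer.
    ivec2 p = ivec2(gl_FragCoord.xy);
    float r = texelFetch(rawTex, p, 0).r;                // this texel
    float g = texelFetch(rawTex, p + ivec2(1, 0), 0).r;  // right neighbor
    fragColor = vec4(r, g, 0.0, 1.0);
}
```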

Do I need to declare a higher GLSL version or an extension to use texelFetch()?

Depends on which version you're declaring now; it may already be supported. Heck, texelFetch() has been in GLSL since 1.30, so anything greater than or equal to #version 130 should pull it in.

Alternatively, you'll probably just get it with the #extension GL_EXT_gpu_shader4 : enable you've got in there now, though you'll likely need to call texelFetch2D() instead of texelFetch() (for 2D textures anyway). GLSL 1.3 did away with all the sampler type suffixes from the names of the sampling functions, eliminating a lot of needless function list bloat.
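For instance, under #version 120 with that extension enabled, a single unfiltered fetch might look like this (a sketch; `rawTex` is a hypothetical sampler name):

```glsl
#version 120
#extension GL_EXT_gpu_shader4 : enable

uniform sampler2D rawTex;   // hypothetical name

void main()
{
    ivec2 p = ivec2(gl_FragCoord.xy);
    // EXT_gpu_shader4 spells the function per sampler type:
    gl_FragColor = vec4(texelFetch2D(rawTex, p, 0).rrr, 1.0);
}
```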

The catch with declaring 1.30 right now, if you're using 1.20 (or the 1.10 default), is that the fixed-function built-in identifiers (the fixed-function shader shims) go away. Which may or may not be more code rework than you want to bite off right now.

Also, since you're still using gl_FragCoord, don't I still have to deal with the 0.5 pixel offset?

I cast to an integer vector (ivec2), which (just as in C/C++) whacks the fractional portion.
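Concretely (a sketch; by default gl_FragCoord.xy for pixel (x, y) is (x + 0.5, y + 0.5)):

```glsl
// The ivec2 cast truncates the fraction, giving back exactly (x, y).
ivec2 p = ivec2(gl_FragCoord.xy);   // e.g. (10.5, 20.5) becomes (10, 20)
```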

p.s. I'm not sure of the origin of your profile photo (it looks like a cross between Darth Vader and a Dalek), but you might be interested to know that I attended the first showing of STAR WARS (1977) in San Jose. It was the greatest experience (female company excluded) of my life!

So, none of the scaling works, and the image isn't drawn at the correct offset into the texture I'm rendering to (the texture is larger), which means the glViewport() call isn't working either.

I suppose you're going to tell me I have to do all of this in a vertex shader now.

HELP!!!!!!!!!!!!!!!!

Living in the past,
Rennie

p.s. to Cy Bo Rg: I met Mel Brooks at the Metrocolor lab in 1986 when he was color timing the negative of SPACEBALLS.