
Perfect screen-aligned quad and GL_NEAREST

Hi,

I am trying to improve my YUV->RGB conversion. Currently I only draw a quad (scaled, rotated, ...) in my scene, with a texture using GL_NEAREST and GL_CLAMP_TO_EDGE and a shader that does the YUV->RGB conversion and the bilinear filtering, but the result is not quite right.

It might be better to first render a screen-aligned quad into an FBO with GL_NEAREST and a basic YUV->RGB shader, then use the FBO's texture in RGB space.

What is the exact, hardware-independent definition of a screen-aligned quad? Do I need to use GL_CLAMP_TO_EDGE (it seems the texture coordinates don't run exactly from 0.0 to 1.0)?

Assuming you're using a reasonably recent version of GL, just use texelFetch and forget about the sampling and filtering details. Just pass in the integer coordinates of your current fragment and be done with it:

Code glsl:

texelFetch(tex, ivec2(gl_FragCoord.xy), 0);

If this is an older OpenGL (pre-texelFetch), then you can still do it just fine with GL_NEAREST and a 0..1 quad (CLAMP_TO_EDGE is fine but not needed). Just remember that for texcoords, 0 and 1 are the coordinates of texel "edges", but lookups should use the texcoords of texel "centers" (cell-centered data). So for instance, if your texture is NxN and you want the texcoord of the center of the I-th texel (I in 0..N-1), it is (I + 0.5) / N.
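As a sketch, the texel-center rule above looks like this in a pre-texelFetch fragment shader. The uniform names (uTex, uTexSize) are placeholders, not anything from your code:

Code glsl:

```glsl
// Sketch: cell-centered lookup on a 1:1 screen-aligned quad.
// uTexSize is the texture size in texels (an illustrative uniform name).
uniform sampler2D uTex;
uniform vec2 uTexSize;

void main()
{
    // Integer texel index of this fragment, assuming a same-size quad.
    vec2 texel = floor(gl_FragCoord.xy);
    // Texel center: (I + 0.5) / N, applied per axis.
    vec2 uv = (texel + 0.5) / uTexSize;
    gl_FragColor = texture2D(uTex, uv);
}
```

With GL_NEAREST this lands exactly on each texel's center, so no neighboring texel can bleed in.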

If I understand correctly: texelFetch bypasses any filtering and works in non-normalized texture coordinates? In that case, do I need to render an aligned quad without bilinear filtering? So I would need to:
- create an FBO of the YUV420 texture size
- glViewport (texture size)
- glOrtho(0, widthTexture, 0, heightTexture, -1, 1)
- draw an OpenGL quad (the size of the texture)
- use texelFetch with a basic YUV->RGB shader and no bilinear filtering?
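Roughly, yes. The conversion pass could look something like the sketch below (GLSL 1.30+). The three-plane sampler layout and the BT.601 coefficients are assumptions about your YUV420 data, not something from your post:

Code glsl:

```glsl
#version 130
// Sketch: YUV420 -> RGB pass with texelFetch, assuming the three planes
// are bound as separate single-channel samplers (an assumption).
uniform sampler2D uY;   // full-resolution luma plane
uniform sampler2D uU;   // half-resolution chroma planes
uniform sampler2D uV;
out vec4 fragColor;

void main()
{
    ivec2 p = ivec2(gl_FragCoord.xy);
    float y = texelFetch(uY, p, 0).r;
    float u = texelFetch(uU, p / 2, 0).r - 0.5;  // chroma subsampled 2x
    float v = texelFetch(uV, p / 2, 0).r - 0.5;

    // BT.601 full-range coefficients; verify against your source material.
    vec3 rgb = vec3(y + 1.402 * v,
                    y - 0.344 * u - 0.714 * v,
                    y + 1.772 * u);
    fragColor = vec4(rgb, 1.0);
}
```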

The OpenGL Shading Language texel fetch functions provide the ability to extract a single texel from a specified texture image. The integer coordinates passed to the texel fetch functions are used as the texel coordinates (i, j, k) into the texture image. This in turn means the texture image is point-sampled (no filtering is performed), but the remaining steps of texture access (described below) are still applied.

For what you want, texelFetch is perfect. Otherwise use Dark Photon's approach. I can't speak to GClement's suggestions, however, without more background on the conversion you're doing.

If I understand correctly: texelFetch bypasses any filtering and works in non-normalized texture coordinates? In that case, do I need to render an aligned quad without bilinear filtering?

texelFetch() doesn't perform any filtering, interpolation, or mipmap selection. You pass in integer texture coordinates and a mipmap level, and it returns the requested texel.

How you use that is up to you. One option is to render a same-size, aligned rectangle to produce a 1:1 copy, but in RGB rather than YUV. Another option is to perform the conversion while rendering the final polygons, implementing the texture filtering logic in the shader. The latter option requires more processing but less memory bandwidth.
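For the second option, manual bilinear filtering on top of texelFetch might be sketched like this. yuvToRgbAt() is a hypothetical helper that fetches one texel and converts it to RGB; edge clamping of the indices is omitted for brevity:

Code glsl:

```glsl
// Sketch: bilinear filtering done in the shader, converting each of the
// four neighboring texels from YUV before blending in RGB space.
vec3 yuvToRgbAt(ivec2 p);  // hypothetical: texelFetch + YUV->RGB for texel p

vec3 bilinearRgb(vec2 uv, ivec2 texSize)
{
    // Shift so that texel centers land on integer positions.
    vec2 pos = uv * vec2(texSize) - 0.5;
    ivec2 p0 = ivec2(floor(pos));
    vec2 f = fract(pos);

    vec3 c00 = yuvToRgbAt(p0);
    vec3 c10 = yuvToRgbAt(p0 + ivec2(1, 0));
    vec3 c01 = yuvToRgbAt(p0 + ivec2(0, 1));
    vec3 c11 = yuvToRgbAt(p0 + ivec2(1, 1));

    // Blend horizontally, then vertically.
    return mix(mix(c00, c10, f.x), mix(c01, c11, f.x), f.y);
}
```

Converting before blending avoids interpolating in YUV space, which is one reason the single-pass approach can look slightly different from a separate RGB pass.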