Display image in multithreaded QGLWidget

Hi All,
I want to display a QImage created in a different thread to be displayed in a QGLWidget which is created in the main GUI thread. My code structure looks like this. I am completely new to OpenGL. Going by the example "2D Painting Example" in Qt I try to display the image in the paintEvent() like this.

If I use update() instead of repaint(), the frames are displayed, though sometimes with a lot of flicker due to the lack of timely refreshes.
I know that using QGLWidget is tricky in a multithreaded environment, but I don't really know the correct way to do this. I have seen examples of rendering other geometric shapes from multiple threads, but I don't know enough OpenGL to understand them. I would therefore like some idea of the direction I should take, since I don't want to dig deep into OpenGL programming at the moment. I have tried using SDL to show video, but I gave up since embedding SDL video in a Qt widget is not easy (SDL_WINDOWID doesn't work).

The images have to come from a different thread since I want to do a lot of processing on them. I would really appreciate it if somebody could advise me on the exact steps to follow.

First of all, can you provide some more info about what sort of processing you need to do on the images, please? Where do the images originate? Disk? Calculated? How often do they change, or does the processing change?

Hi ZapB,
thanks for the reply. My application is for video enhancement. The video files are decoded and converted into RGB frames using FFmpeg's libavcodec and related libraries, so each final frame is available as an AVFrame struct (FFmpeg). I intend to perform everything from simple brightness/contrast adjustments to more complex super-resolution reconstruction on the video. I hope this clarifies my purpose.

I don't have any experience with realtime video processing in OpenGL. However, VLC has an OpenGL output mode and also offers lots of nice video effects, so it might be worth having a dig through the VLC source code to see how they do it, or contacting the authors.

I would imagine that the sequence is something like:

1. Grab a frame from the decoder

2. Bind it to an OpenGL texture

3. Set a suitable vertex and fragment shader

4. Render the texture to a quad (well, a pair of triangles) with suitable texture coordinates

For video, the fragment shader will do most of the work, i.e. increase gamma, invert colours, whatever. For those types of filters, all you will need in terms of geometry is a single quad.
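To make that concrete, here is a hedged sketch of the kind of fragment shader meant here, stored as a C++ string the way you would feed it to QGLShaderProgram (GLSL 1.20-era syntax; the uniform names are my own invention). It samples the video texture and applies brightness/contrast and optional colour inversion entirely on the GPU:

```cpp
#include <cassert>
#include <cstring>

// Hypothetical per-pixel filter: brightness/contrast plus optional invert.
const char *kFragmentShader =
    "uniform sampler2D frame;\n"
    "uniform float brightness;   // e.g. 0.1 lifts, -0.1 darkens\n"
    "uniform float contrast;     // 1.0 = unchanged\n"
    "uniform bool  invert;\n"
    "varying vec2 texCoord;\n"
    "void main()\n"
    "{\n"
    "    vec3 c = texture2D(frame, texCoord).rgb;\n"
    "    c = (c - 0.5) * contrast + 0.5 + brightness;\n"
    "    if (invert) c = 1.0 - c;\n"
    "    gl_FragColor = vec4(clamp(c, 0.0, 1.0), 1.0);\n"
    "}\n";
```

Swapping filters then just means compiling and binding a different fragment shader; the geometry (one quad) never changes.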

For more complex filters, like the puzzle filter or wave effect in VLC, I would imagine you would need several quads in order to jumble the pieces up (puzzle) or displace the vertices (wave).