I have started to code a 2D CAD (Computer Aided Design) project. The objective is (clearly) to design figures in a two-dimensional plane. I already have some experience with OpenGL so, as a first reflex, I started coding directly in OpenGL using an object derived from QGLWidget.

The point is that I have seen the drawing tools of the Qt libraries: they are quite intuitive, and maybe they can spare me some lines of code (and some time). So now I am in doubt whether to use the Qt tools or to continue with OpenGL. I expect that, asking this on a Qt forum, you will answer that I should use the Qt tools - and I would do that. But the problem is that I want the 2D version to be only a first step, and to extend the program in the future into a 3D CAD. From what I read, the big drawback of the Qt paint libraries is precisely that they are intended only for 2D... but I cannot judge that, as I have no experience with them. Can you give me some advice?

Indeed, QPainter, QGraphicsView, and QML 1 and 2 are focused on 2D. For 3D, there are Qt3D and QtQuick3D, and - as you already know - QGLWidget. I'm not experienced in 3D, so I can't say much more than that. Either others will respond better, or you'll have to look into the Qt documentation.

Use OpenGL if you want to do complex per-pixel operations (where the GPU is invaluable).

(Edit: or, obviously, if you want to do 3D; my application is not 3D.)

What you will find is that there is a significant, hard-to-quantify cost to OpenGL: the drivers on different platforms have different flaws.

My focus has been Win32; however, my application is developed on Linux. Let me give you a snippet of the rules I have learned:

ATI drivers support neither threaded upload nor context sharing. Such operations are within spec, and do work on NVidia and Intel, but will blue-screen XP on ATI, and cause other failures on Vista and 7.

Intel HD Graphics drivers (i.e. on Sandy Bridge) leak memory if you do not call glFinish prior to calling SwapBuffers. I also believe there are multithreading and context-sharing issues. There is no easy way to communicate this to Intel's driver developers. G31/33 (i.e. Core 2) and 945GM don't seem to need this work-around.

NVidia have the best drivers. They are for the most part reliable. However, careful crafting of code paths is required to get good performance.

On Linux/Intel 945GM, I have learned that there is a bug in buffer swapping that causes a race-condition exception.

Linux/NVidia is a joy to work with, but it leads you into the trap of thinking that your code works correctly, only to find that on Windows it blows up. (Edit: let me qualify - on other Win32/non-NVidia hardware.)

I do not own a Mac, but am led to believe that the OpenGL implementation is fairly robust on it.

I would strongly suggest that you structure your paint routines in a manner that allows platform abstraction, i.e. build a plugin that encapsulates all of your rendering operations. If you require performance in the future, you can rewrite this plugin using OpenGL, or DirectX on Windows. Otherwise, just stick with Qt's software paint routines.
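To make the abstraction idea concrete, here is a minimal sketch of such a rendering interface. All the names (`Renderer`, `SoftwareRenderer`, the draw methods) are illustrative, not from any real codebase; in the real application, a Qt software-paint backend and an OpenGL backend would each implement the same interface, loaded as a plugin.

```cpp
#include <string>
#include <vector>

// Hypothetical backend-neutral rendering interface. The application
// only ever talks to this class, never to QPainter or OpenGL directly.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual void drawLine(double x1, double y1, double x2, double y2) = 0;
    virtual void drawCircle(double cx, double cy, double r) = 0;
};

// Default backend. In the real program this would wrap Qt's software
// QPainter; here it merely records the calls it receives.
class SoftwareRenderer : public Renderer {
public:
    void drawLine(double, double, double, double) override {
        ops.push_back("line");
    }
    void drawCircle(double, double, double) override {
        ops.push_back("circle");
    }
    std::vector<std::string> ops;  // recorded draw calls, for illustration
};
```

Swapping in an OpenGL (or Direct3D) backend later is then a matter of adding another `Renderer` subclass, with no changes to the drawing code that calls it.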

[quote author="Seba84" date="1324591465"]
But regarding the problems you have found with OpenGL on different platforms and hardware, these should be reflected in the Qt painting tools as well, as the underlying code is done in OpenGL, am I right?
[/quote]

I honestly don't know. I did not use the QPainter/OpenGL routines at all. Software-raster QPainters are used elsewhere (i.e. for painting to offscreen buffers in separate processes, but that is another story).

My application is slightly different from the common video-game-style application. Namely, I push video, web pages, and images up to the GPU as textures at a high rate. Normal video game usage pre-pushes the textures, then uses them in rendering.

Also, gamers probably get tired after six hours and shut down for the night. Our application needs to run continuously; hence the testing pattern is different. Think of walking through the airport and looking at the digital signs - I generally see one Windows error (an exposed dialog, or an application error message) every time I travel; such things are unacceptable. Linux would be a reliability improvement for all of those displays, but politics generally forces otherwise.

My point being that for most use cases (e.g. a 5-minute app run, or a couple of hours of gaming) it probably doesn't matter that these errors exist under the hood.

Essentially I started with the tutorial: http://doc.qt.nokia.com/qq/qq06-glimpsing.html . I implemented a thread which performed the GL drawing and the SwapBuffers call. Then I implemented a texture upload thread using PBOs (which, in retrospect, was silly because I already had a separate thread doing the uploads). The upload thread used a GL context shared with the main widget (a QGLWidget).

This worked fine on NVidia. The synchronization logic was all fairly standard, since I had to communicate completed uploads to the render thread; so for playing ffmpeg videos (one of many supported media types), the upload thread would scream ahead, filling a ring buffer of YUV-format texture triples. The render thread would then consume them at video frame rate, painting them with a fragment shader that performs the YUV-to-RGB conversion on the GPU - all fairly standard stuff, but lots of fun!
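The "fairly standard" synchronization between the upload and render threads can be sketched as a blocking ring buffer. This is a minimal illustration, not the poster's actual code: the frame type, capacity, and method names are all assumptions, and real code would also handle shutdown/wake-up on exit.

```cpp
#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <utility>
#include <vector>

// Hypothetical decoded frame: three planes (Y, U, V) as raw bytes,
// matching the "YUV texture triples" described above.
struct YuvFrame { std::vector<uint8_t> y, u, v; };

// Bounded ring buffer: the upload (producer) thread fills it ahead of
// time, the render (consumer) thread drains it at video frame rate.
class FrameRing {
public:
    explicit FrameRing(std::size_t capacity) : buf_(capacity) {}

    // Producer side: blocks while the ring is full.
    void push(YuvFrame f) {
        std::unique_lock<std::mutex> lk(m_);
        notFull_.wait(lk, [&] { return count_ < buf_.size(); });
        buf_[(head_ + count_) % buf_.size()] = std::move(f);
        ++count_;
        notEmpty_.notify_one();
    }

    // Consumer side: blocks while the ring is empty.
    YuvFrame pop() {
        std::unique_lock<std::mutex> lk(m_);
        notEmpty_.wait(lk, [&] { return count_ > 0; });
        YuvFrame f = std::move(buf_[head_]);
        head_ = (head_ + 1) % buf_.size();
        --count_;
        notFull_.notify_one();
        return f;
    }

    std::size_t size() {
        std::lock_guard<std::mutex> lk(m_);
        return count_;
    }

private:
    std::vector<YuvFrame> buf_;
    std::size_t head_ = 0, count_ = 0;
    std::mutex m_;
    std::condition_variable notEmpty_, notFull_;
};
```

In the real pipeline, `pop()` would hand each frame's planes to `glTexSubImage2D` (one texture per plane), and the fragment shader would do the YUV-to-RGB matrix multiply per pixel.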

Moving from NVidia to ATI/Win32, my program would end up rendering green textures instead of frames, or just garbage memory areas, after a period of about 5 minutes of playback. XP would BSOD; Vista and 7 would limp on. Out went the PBOs; same problem, less frequent. So out went the upload thread - this got things working again, at a cost in latency; all OpenGL operations are now performed in one thread. We suspected that the calls to glGenTextures/glDeleteTextures are not thread-safe on ATI. We have observed similar behavior on ATI's proprietary Linux drivers.

With just a single OpenGL paint thread, all was working again - until along came a Sandy Bridge (Core i3) / Windows 7 computer to test on. This worked great, except for a significant memory leak: about every 30 seconds I'd lose 8 to 10 MB of heap space. Now you may say - ok, make a huge page file and reboot every 12 hours...

After about three weeks of analysis (and ensuring our application did not have any memory leaks of its own), this is what I have concluded - which may or may not be accurate, since I can't see the driver source code and have had to guess. The Intel drivers do some memory manipulation under the hood; I believe they mark texture memory as write-protected. Thus, when a user calls glTexSubImage2D or glTexImage2D, the pages backing the memory pointer are locked in physical RAM and marked as write-protected. If you modify the memory, the ensuing trap copies it to a driver-backed buffer and internally re-associates this new copy with the texture id, allowing the user process to carry on. This makes a lot of sense on a UMA architecture - after all, why copy if the user is just going to leave that memory mapped and not touch it? UMA means you can use your entire system memory as a texture buffer! However, touching it is just what I did; after all, I had to.

It appears that these driver-allocated texture buffers have issues being freed when the texture is no longer needed. The buffer I passed in was from a memory-mapped file (e.g. MapViewOfFile), and this may have complicated things for the underlying drivers - however, I have seen similar leaks when using a stack-allocated QImage and passing img.bits() to glTexImage2D. Forcing a glFinish call in the render loop has removed these leaks; however, I don't believe this is a sane way to address the problem.
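To see why deferred driver-side copies would accumulate until a synchronization point, here is a toy model of the *hypothesized* behaviour described above. Nothing in it is real Intel driver code; `FakeDriver`, `texSubImage`, and `finish` are made-up names that only illustrate the accounting: each upload pins a staging copy, and only a glFinish-like full synchronization releases them.

```cpp
#include <cstddef>
#include <vector>

// Toy model of the hypothesized driver behaviour (NOT real driver code):
// every texture upload defers an internal staging copy of the caller's
// pixels, and those copies are only released at a full-sync point.
class FakeDriver {
public:
    // glTexSubImage2D analogue: instead of consuming the caller's pixels
    // immediately, the driver keeps its own deferred copy.
    void texSubImage(const std::vector<unsigned char>& pixels) {
        deferred_.push_back(pixels);  // driver-side staging copy
    }

    // glFinish analogue: all pending GPU work completes, so the staging
    // copies can finally be freed. Without this, they pile up - the leak.
    void finish() { deferred_.clear(); }

    // Bytes currently pinned by deferred copies.
    std::size_t pendingBytes() const {
        std::size_t n = 0;
        for (const std::vector<unsigned char>& p : deferred_) n += p.size();
        return n;
    }

private:
    std::vector<std::vector<unsigned char>> deferred_;
};
```

Under this model, a render loop that uploads frames continuously but never reaches a full-sync point grows without bound, which matches the observed leak and explains why inserting glFinish made it disappear - at the cost of stalling the pipeline every frame.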

Anyhow - I've drivelled on long enough about this - I hope it helps others out there getting their software reliable.