We have a commercial Photoshop plug-in that uses OpenGL both for high-responsiveness preview display and for final rendering of the user's results. We use GL_RGBA32F for all internal data.
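For context, a float pipeline like ours typically means allocating GL_RGBA32F textures as FBO color attachments. A minimal sketch of that setup (the 1024x1024 size is a placeholder, and this is not our exact code):

```c
/* Sketch: allocate a 32-bit-float RGBA render target and attach it
 * to a framebuffer object. Size is a placeholder. */
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 1024, 1024, 0,
             GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
/* Worth checking glCheckFramebufferStatus(GL_FRAMEBUFFER) here:
 * GL_RGBA32F color rendering requires a capable driver. */
```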

Our last release worked on all kinds of systems all over the world. So well, in fact, that I remember thinking, "Gee, OpenGL has finally arrived as a serious, professional graphics subsystem that just works."

Then a customer from Australia reported that his system, with a decent dual-core Intel(R) Core(TM) i7-3667U CPU @ 2.00GHz (integrated HD Graphics 4000) and running Windows 10 v1703, ran so slowly as to be utterly unusable. OpenGL-based operations were taking tens of seconds to complete, but only for the 64-bit code, not the 32-bit code built from the same source.

Logs didn't show any obvious cause, so we embarked on a quest to clean up our OpenGL implementation and ferret out every little resource overuse and every wrong or redundant command sequence. We turned the debugging output all the way up and set out to eliminate all the warnings - and hopefully find and fix whatever we were doing to irritate the little Intel GPU in the process.

Testing with NVIDIA GPUs and a different Intel GPU showed a clean bill of health and full functionality.

Since we don't have an Intel HD 4000 GPU to test with here, we had to have the customer try our new software builds and send us logs. The customer is on the other side of the world, so the cycle time was typically one day. Fortunately he was very patient and careful.

Long story short, we figured out that the Intel HD 4000 absolutely HATES glFinish(). It triggers long internal timeouts; even just ONE glFinish() call in our rendering loop wreaked havoc. So I recommend:

Use no glFinish() commands to synchronize the GPU and CPU with the Intel(R) HD Graphics 4000.
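If you still need a CPU/GPU sync point, a fence sync object with a bounded wait is a gentler alternative (requires OpenGL 3.2+ or GL_ARB_sync). A sketch of what we mean, not our exact code:

```c
/* Sketch: replace glFinish() with a fence plus a bounded wait,
 * so a slow driver can't stall us indefinitely. */
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

/* GL_SYNC_FLUSH_COMMANDS_BIT ensures the fence is actually submitted;
 * the timeout is in nanoseconds (here ~100 ms, an arbitrary choice). */
GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                 100u * 1000u * 1000u);
if (status == GL_TIMEOUT_EXPIRED) {
    /* GPU still busy: poll again later or proceed, rather than
     * blocking the way glFinish() does. */
}
glDeleteSync(fence);
```

The key difference from glFinish() is that the wait is bounded and under your control, so a driver that is slow to retire work degrades gracefully instead of freezing the application.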

We also learned that the driver will fail glReadPixels calls if you set glReadBuffer and/or glDrawBuffer to GL_NONE at some prior point in operation, even if you have the proper buffers bound at the time of the read.

Don't set glReadBuffer or glDrawBuffer to GL_NONE with the Intel(R) HD Graphics 4000.
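A defensive pattern that sidesteps this is to set the read buffer explicitly right before every readback instead of relying on leftover state. A sketch, where fbo, width, height, and pixels are placeholders for your own render target and destination buffer:

```c
/* Sketch: make the read source explicit immediately before the read,
 * assuming the data lives in color attachment 0 of our FBO. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);   /* never leave this GL_NONE */
glPixelStorei(GL_PACK_ALIGNMENT, 1);  /* tightly packed rows */
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
```

The same goes for glDrawBuffer/glDrawBuffers before rendering: setting them explicitly per pass costs almost nothing and avoids tripping over whatever state an earlier pass left behind.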

Finally, we're still seeing a number of these emitted when we destroy the rendering context (possibly one per shader):