And what this does [at least in my theory] is create another layer over the existing plane. This renders the click-handling code on the plane below useless. Changing the existing code too much is not an option right now. I found that if I draw a texture on the old plane, the code keeps working.
That said, I was only able to load a texture from an image I made beforehand. Since the image is supposed to be calculated and painted at run time, I tried building an image from the data and loading it as a texture.
I used the Qt API for this, but I failed to recreate the same image. Could someone suggest a way to create a texture image from my own data?

It's fine, but just not that useful as these forums don't have a high post frequency.

I think the reason you didn't get any responses is that it wasn't apparent from your post what you're doing, much less what your problem is. The fact that you ended up talking about creating images to load only further obscured your goals.

I currently have some heat-map data in a database. I was able to paint a heat-map from that data [using some vertex shading] onto a plane. Example:

Ok, I'm with you so far.

Now, the problem is that I am currently using something like:
...
And what this does [at least in my theory] is create another layer over the existing plane.

Ok.

This causes the code that handles clicking on the plane below to stop working.

Ok. So I assume this is the problem? Well, it's hard to help because you didn't tell us what "clicking" does. What mechanism does it use, and does this mechanism even involve OpenGL? Depth buffer readback and gluUnProject()? Color buffer readback? Or CPU-side intersection with the polygon geometry? You didn't tell us, so no one can even take a stab at helping you.

Changing the existing code too much is not an option right now. I found that if I draw a texture on the old plane, the code keeps working.

Ok. This is making it sound like you may be doing a depth readback at the click position via OpenGL, and both the first plane and second plane are writing depth, but that's still a complete stab-in-the-dark. There are a few solutions, such as saving off the depth buffer before you rasterize the second plane, and then later read from that saved copy.

That said, I was only able to load a texture from an image I made beforehand. Since the image is supposed to be calculated and painted at run time, I tried building an image from the data and loading it as a texture.
I used the Qt API for this, but I failed to recreate the same image. Could someone suggest a way to create a texture image from my own data?

I'm a bit lost as to why you're talking about making an image to load as a texture, unless perhaps you're asking about how to save off the depth buffer into a texture or renderbuffer on the GPU. If so, you can do that a couple different ways: 1) glCopyTexImage2D, or 2) glBlitFramebuffer from the depth buffer to an FBO with a depth texture or renderbuffer attached to it.
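For concreteness, option 2 looks roughly like the following. This is a sketch only, not code from the thread: it assumes an OpenGL 3.0+ context is current, `width`/`height` are illustrative names for your default framebuffer's dimensions, and all error checking is omitted.

```cpp
// Sketch: save the default framebuffer's depth into a depth texture
// via glBlitFramebuffer. Assumes a current GL 3.0+ context.
GLuint depthTex = 0, fbo = 0;

// A depth texture to hold the saved copy.
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, nullptr);

// An FBO with that texture as its depth attachment.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);

// After drawing the first plane, but before drawing the second one:
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);   // read from default framebuffer
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_DEPTH_BUFFER_BIT, GL_NEAREST);

// Restore the default framebuffer and draw the second plane.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```

Your click handler would then read depth from `depthTex` (or from a CPU-side copy of it) instead of the live depth buffer.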

Buffer textures (aka texture buffers) are just views into 1D arrays of data stored in buffer objects. They don't support 2D access, so they're not that useful to you (if I understand your intended application correctly).

Sorry about not expressing the problem well. In my defence, I just moved to a new team and have been put on updating an old codebase. I have previous experience with Qt, but OpenGL is something I have never used before, so its complexity escapes me.

Well, we currently have some [3D] objects which can be placed onto the plane for viewing. When you click on the plane, the click position is picked up and an object is placed there on the plane.

I had given the framebuffer a try, but realized that, maybe due to the OpenGL version in use [I have no idea which version I am using], the functions do not exist. I included qopenglext.h, but it was not very helpful either.
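As an aside, it is worth finding out which version you actually have before ruling features out. A minimal check, assuming a GL context is already current (this is a sketch, not code from your project):

```cpp
#include <cstdio>

// Query the version of the context you actually got.
// Must be called while the GL context is current.
void printGlVersion() {
    const GLubyte* version  = glGetString(GL_VERSION);
    const GLubyte* renderer = glGetString(GL_RENDERER);
    std::printf("GL_VERSION:  %s\n", version);
    std::printf("GL_RENDERER: %s\n", renderer);
}

// In Qt 5+, QOpenGLContext::currentContext()->format().version()
// returns the same major/minor pair without raw GL calls.
```

If the version turns out to be 3.0 or newer, the framebuffer functions exist in core; if you are on Qt, `QOpenGLFunctions` (or `QOpenGLFramebufferObject`, see below) resolves them for you so that including qopenglext.h by hand is unnecessary.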

Overall, I need to "paint" a texture using OpenGL functions [like GL_QUADS etc.].
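One standard way to do exactly that is render-to-texture: attach a colour texture to an FBO, draw your quads into it with ordinary draw calls, then sample that texture when drawing the plane. A hedged sketch, assuming a current compatibility-profile context that exposes both FBOs and immediate-mode `GL_QUADS` (`texW`/`texH` are illustrative names; error checking omitted):

```cpp
// Sketch: "paint" into a texture by rendering quads into an FBO.
GLuint colorTex = 0, fbo = 0;

// The texture that will receive the painting.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texW, texH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// An FBO with that texture as its colour attachment.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// "Paint" into the texture with ordinary draw calls.
glViewport(0, 0, texW, texH);
glClear(GL_COLOR_BUFFER_BIT);
glBegin(GL_QUADS);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex2f(-0.5f, -0.5f);
glVertex2f( 0.5f, -0.5f);
glVertex2f( 0.5f,  0.5f);
glVertex2f(-0.5f,  0.5f);
glEnd();

// Back to the default framebuffer; colorTex can now be sampled
// when drawing the (possibly curved) plane.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```

Since you are in Qt, `QOpenGLFramebufferObject` wraps this setup and resolves the FBO entry points for you, which may sidestep the missing-function problem you ran into.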

To complicate the problem further, the plane may not always be flat. Because of this, it is indeed better to define a texture rather than painting a new layer.

EDIT:

How the texture (Roughness; the image is loaded into tex1 by loading a QImage and converting it to GL format) is being used: