It appears you're already using alpha blending on the surface you want to "cut" with the "hole" texture. To add the hole itself, you can use multi-texturing. Here's a simple OpenGL tutorial: http://www.clockworkcoders.com/oglsl/tutorial8.htm . In addition to what is explained there, you'll have to apply an offset and a scale to the texture coordinates of the second texture (e.g., (gl_TexCoord[1] + offset) * scale) to control where the hole appears and how big it is, and then clamp the result to [0, 1] (texture coordinates run from 0 to 1 in both OpenGL and D3D; it's normalized device coordinates that run from -1 to 1). Alternatively, use the "clamp" texture-addressing mode - that's the D3D name; the OpenGL equivalent is the GL_CLAMP_TO_EDGE wrap mode, set with glTexParameteri.

You will also have to modify the gl_FragColor output of the fragment shader to something like gl_FragColor = texval1 * texval2; (just an "off the top of my head" example - if that doesn't work, try modulating only the alpha of texval1 by, say, the average of the r, g, and b components of texval2; it depends on the color format of your second texture).
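The two steps above - offsetting/scaling the second set of texture coordinates and modulating the colors - can be sketched as one fragment shader. This is a minimal sketch in GLSL 1.20 style (to match the linked tutorial); the uniform names baseTex, holeTex, holeOffset, and holeScale are my own placeholders, not anything from the question:

```glsl
uniform sampler2D baseTex;   // texture unit 0: the surface being "cut"
uniform sampler2D holeTex;   // texture unit 1: the "hole" (white = opaque, black = hole)
uniform vec2 holeOffset;     // where the hole should appear
uniform vec2 holeScale;      // controls the "size" of the hole

void main()
{
    vec4 texval1 = texture2D(baseTex, gl_TexCoord[0].st);

    // Offset and scale the second set of coordinates, then clamp to [0, 1]
    // (or rely on the GL_CLAMP_TO_EDGE wrap mode instead of clamp()).
    vec2 holeCoord = clamp((gl_TexCoord[1].st + holeOffset) * holeScale, 0.0, 1.0);
    vec4 texval2 = texture2D(holeTex, holeCoord);

    // Where holeTex is black, the surface becomes black/transparent.
    gl_FragColor = texval1 * texval2;
}
```

If multiplying the full colors doesn't give the effect you want, the same shader can instead modulate only texval1.a by the brightness of texval2, as mentioned above.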

Thank you very much! +1 for you. But what if I had a vector of 2D points and wanted all of them to make holes in "the texture", with the number of points stored in a variable, so it can change at any time? Can I add multiple "hole" textures to the multi-texture? Or is it possible to combine all of the holes the points make into one texture, and use that in the multi-texture?

Can I add multiple "hole" textures to the multi-texture? Or is it possible to combine all of the holes the points make into one texture, and use that in the multi-texture?

You can do both.

The first method: send your vector of 2D points into the fragment shader as a 1D texture, then loop over the points, sample the "hole" texture once per point, and blend all of the hole samples together (multiply them, or add them and clamp the final result); use that final value in place of texval2. You read the points from the 1D texture with texelFetch(), and the loop should run from 0 to the number of points, which you pass to the fragment shader in a separate uniform variable (or you could store it in the first texel of the 1D texture...).
You'll have to re-create the 1D texture every time one of your 2D points changes, so it might be slow if the points change often and/or there are many of them. There is also a limit on the width of a 1D texture (you can query it with glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...)).

Also, instead of a single offset and size for the "hole", you now have to send an array of offsets and sizes, one per hole, and use them as before.
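A minimal fragment-shader sketch of this first method (GLSL 1.30+, since texelFetch() requires it; the uniform names and the "one RGBA texel per point" layout are assumptions of mine, not part of the original answer):

```glsl
#version 130

uniform sampler2D baseTex;    // the surface being "cut"
uniform sampler2D holeTex;    // one "hole" (white = opaque, black = hole)
uniform sampler1D pointsTex;  // assumed layout: texel i = (offset.x, offset.y, scale.x, scale.y)
uniform int numPoints;        // size of the 2D points vector

void main()
{
    vec4 texval1 = texture(baseTex, gl_TexCoord[0].st);
    vec4 holes = vec4(1.0);   // start fully opaque

    for (int i = 0; i < numPoints; ++i)
    {
        vec4 p = texelFetch(pointsTex, i, 0);  // offset in p.xy, scale in p.zw
        vec2 c = clamp((gl_TexCoord[1].st + p.xy) * p.zw, 0.0, 1.0);
        holes *= texture(holeTex, c);          // multiply the hole samples together
    }

    gl_FragColor = texval1 * holes;            // "holes" takes the place of texval2
}
```

Packing the offset and size into one RGBA32F texel per point keeps the per-point data in a single fetch; you could equally use two separate 1D textures or a uniform array if the point count is small.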

The second method (combining all of the "hole" textures into one) can be done with render-to-texture, drawing the hole at a different spot for each point. First, clear everything to white (glClearColor, glClear), set up an orthographic projection (glOrtho), then iterate over your vector of 2D points and, for each point, draw a textured quad with the "hole" texture at that point's position. If your points aren't already in the coordinate system of your glOrtho projection, either transform them first or choose the glOrtho parameters to match. The texture you rendered to (let's call it "holes") can then be used for multi-texturing as before.
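The drawing pass could look roughly like this (legacy fixed-function OpenGL, to match the glOrtho/glClear calls above; it assumes an FBO bound to the "holes" texture is already set up, and that points[], numPoints, holeSize, and the surface dimensions are your own data - all names here are placeholders):

```c
/* Clear to white: white = fully opaque, the "hole" texture darkens it. */
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, surfaceWidth, 0.0, surfaceHeight, -1.0, 1.0); /* match your point coords */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, holeTexture);

for (int i = 0; i < numPoints; ++i) {
    float x = points[i].x, y = points[i].y, h = holeSize * 0.5f;
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x - h, y - h);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + h, y - h);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + h, y + h);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x - h, y + h);
    glEnd();
}
```

To make overlapping holes combine instead of overwrite each other, you'd additionally enable a multiplicative blend (glBlendFunc(GL_ZERO, GL_SRC_COLOR) or similar) before the loop.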

For this method, you have to re-draw the "holes" texture every time your 2D points change, and you're sending roughly eight times as much data per point (four vertices with texture coordinates for each quad, instead of a single 2D point), so it will likely be slower than the first method; you are also doing more work on the CPU (iterating over the 2D points and issuing the draw calls).

I think that's called "texture painting". It's probably similar to the second method I described above, but without keeping track of all the mouse positions in a vector, and with some fancy blend effects.