Detecting changing uniforms in shader

Hi All,
I'm considering building the matrices in the vertex shader, but I wonder: is there a built-in way of detecting whether an input uniform (say, the camera direction) has changed since the 'last' execution on the vertex? If I can detect that the direction has not changed, I can skip recalculating the matrices ..
I'm also in doubt as to the 'scope' of local variables in the shader. Do they keep their values between executions?
I would prefer to put the matrix work in the shaders, but if I cannot skip the calculations, it may be better to handle that part in the application .. are there any rules of thumb for picking the right solution?
An obvious solution could be to send a single boolean uniform to control the flow.
I started out from a sample using freeglut. That sample did no more repainting than necessary .. I don't know which of my changes caused the modified sample to do much more 'painting' than strictly necessary (as if render() were placed in the idle function). I could spare the GPU from unnecessary calculations by controlling the repainting.
Any comments?
Yes, I'm brand new to OpenGL, C++ and shaders ;o/

I'm also in doubt as to the 'scope' of local variables in the shader. Do they keep their values between executions?

No, they don't keep their values. Also keep in mind that many vertices are processed in parallel, and each invocation has its own independent copy of the local variables, which is a good thing: you don't have to worry about concurrent writes.

The rule of thumb for deciding whether a calculation should be done on the GPU or the CPU is to look at how often the value changes (per frame, per object, per vertex, per fragment?) and how many times it is used. For example, the model-view-projection matrix is the same for all vertices of an object, so it is wasteful to compute it for each vertex; compute it on the CPU and upload it as a uniform. Of course, rules of thumb have exceptions, so there can be cases where a different approach is better.
I'm not sure what you are referring to when you talk about repainting. The normal mode of operation for many (most?) applications is to completely redraw the entire window for each frame. Many applications also do this as often as they can, i.e. as soon as a frame is finished they start drawing the next. Some only redraw when something in the scene or the camera has moved, but for many applications that is almost always the case, so there is little point in drawing only on demand.

One way to minimize redraw time is to draw into layers and then merge the layers. If a layer has not changed (including no camera movement), it does not have to be redrawn, only merged. This works well for 2D-style applications and for things like HUDs.

Hi Carsten & Tonyo
Thanks for your attention. I've dropped the idea of stuffing the shader with scale, rotation, position and direction values for the models and camera and building the matrices there. As for 'repainting' (the equivalent of 'display' in glutDisplayFunc(display)) .. the initial sample only repainted when the window was clicked (it had glutPostRedisplay() in that handler). As of now, display() executes whenever there is a free moment, and I'm not triggering it with glutPostRedisplay(). Maybe it'll behave properly if I put glutPostRedisplay() in Idle() along with some timing code. ... [I may not have detected all the cycles in the initial sample, though]
Tonyo, I checked if I could use 'Layers', but I cannot. I use OpenGL 3.3 core. It's very tempting functionality, though ;o)

Tonyo, I checked if I could use 'Layers', but I cannot. I use OpenGL 3.3 core.

I don't mean OpenGL layers. You just render to several different frame buffers and manually merge them into the back buffer. You can, for example, clear each frame buffer with transparency set and then render each buffer with a full screen texture blit or use a shader to merge them.

I don't mean OpenGL layers. You just render to several different frame buffers and manually merge them into the back buffer. You can, for example, clear each frame buffer with transparency set and then render each buffer with a full screen texture blit or use a shader to merge them.

Hi Tonio,
If I could do that .. I probably wouldn't have asked the initial question ;o). I've just been reading about framebuffers in the Superbible, 5th ed., and it's great to get a sense of how they can be used.
I'll have to focus on the initial steps, and those still include premake, makefiles and compiling .. I'm not yet 'free' enough to pull in whatever library/dependency I would like. The premake4 hierarchy is an involved topic, but I'll eventually get there, one step at a time.