You calculate your view matrix and you calculate your projection matrix. Instead of telling your shader "here are both of them", you simply multiply them once on the CPU and tell your shader "here's the view*projection matrix". There's no need to work out how to compute both of them in one go.

Since the result of that matrix multiplication is the same for one draw call anyway, you might as well do it just once on the CPU, instead of doing it again and again for every vertex in the shader.

Yes, every time you update your view matrix, you will have to multiply it by the projection matrix again and then feed the result to your shaders.

Well, uniform variables in GLSL remain constant over a draw call, like glDrawArrays() for example. So by a separate call, I meant that you set your uniform matrix for drawing the background tiles, draw them, then set the uniform matrix for your character and draw that one.

If you wanted to use a different shader for drawing your character, one that uses an extra matrix you don't want in the drawing of your background tiles, then you would have to create a separate shader and program for it. You then just bind the right program right before you draw.

Now, my initial suggestion of using an extra matrix might be overkill for simple quads (I assume you draw 2D sprites on quads). If all your sprites need is different positions, you could use a uniform offset/position variable instead: for each character, first set the uniform position variable, then draw that character. This might be very similar to what you are doing now, I guess.

I'd like to add that I'm by no means an expert, I'm just putting my thoughts out there.

Instead of altering a buffer with positions, wouldn't it be a good idea to draw your character in a separate call and alter the matrix that specifies where it will be drawn?

Usually, when working with a 3D mesh, the vertex positions are specified in its own model coordinate system. Using a set of matrices, you can transform those positions from model space to a place in the world, and then apply a perspective projection, and so on. The perspective projection stays mostly the same for all meshes, while the matrix that moves a mesh from model space to world space changes from mesh to mesh.

For your 2D game, you could have an extra matrix that specifies a translation and a rotation for the sprite that you are drawing. It's made up of the position and orientation of your character, for example.

I'm sorry to say that we did not work with the Vrui Toolkit. The CAVE was at a neighbouring college, and I believe the libraries we used were developed in-house. It was called Cavelib3, if I remember correctly. Here's a link to some information about the CAVE: http://www.fontysvr.nl/facilities-and-systems-in-vrlab/virtual-reality-cave. If you go to the wiki, there should be some information about Cavelib3 too. I hope most of it is in English; the site did not seem to be very consistent about language.

That library made things very simple for us, since it behaved much like GLUT, but with extras. One of those extras was trigger volumes, which we used to detect whether someone hit an axe or a hole.