Hot questions for Using Lightweight Java Game Library in camera

I have some trees that are badly lagging the game, so I would like to check whether the trees are in front of the camera or not.

I have had some help from the Mathematics forum, and also had a look at this link to help me convert pitch/yaw to the directional vector needed.

But for some reason, whenever I move the camera to the left, the trees become visible, whereas whenever I move it to the right, they become invisible (so if the camera is pointing at +1 on the Z axis, it seems to render the trees, but at -1 on the Z axis it seems not to render them).
(See http://i.gyazo.com/cdd05dc3f5dbdc07577c6e41fab3a549 for a less-jumpy .mp4)

I am using the following code to check if an object is in front of the camera or not:

It appears to depend on where the camera is looking. For example, if I look towards -Z, nothing happens, but if I look towards +Z, they all render.
The if (dot > 0) check somehow appears to be testing against +Z rather than against the camera's actual rotation.

Answer:

Your camera rotations yaw around Y, implying Y is your up vector. However, float z = (float) Math.sin(Math.toRadians(camera.pitch())); makes Z your up vector. There is an inconsistency. I'd start by swapping y and z here, then print everything out every frame so you can see what happens as you rotate the camera. Also render just one tree and print dot. E.g. you might quickly notice the numbers approach 1.0 only when you look 90 degrees to the left of the tree, which narrows down the problem. As @DWilches notes, swapping cos/sin will change the phase of the rotation, which would produce exactly such an effect.
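As a hedged illustration of the fix (the helper names and the exact yaw/pitch sign conventions here are assumptions, not taken from the question's code): with Y as the up vector, the pitch term goes into y, and the in-front test is a dot product against the vector from camera to tree.

```java
public class FrontCheck {
    // Forward vector for a camera with yaw around Y (up) and pitch around X.
    // Conventions assumed: -Z is forward at yaw 0, positive pitch looks down.
    static float[] forward(float yawDeg, float pitchDeg) {
        double yaw = Math.toRadians(yawDeg);
        double pitch = Math.toRadians(pitchDeg);
        float x = (float) (Math.cos(pitch) * Math.sin(yaw));
        float y = (float) -Math.sin(pitch);                   // up/down goes into Y, not Z
        float z = (float) -(Math.cos(pitch) * Math.cos(yaw)); // -Z forward at yaw 0
        return new float[] { x, y, z };
    }

    // Positive dot product -> the tree lies in the half-space in front of the camera.
    static boolean inFront(float[] camPos, float[] fwd, float[] treePos) {
        float dx = treePos[0] - camPos[0];
        float dy = treePos[1] - camPos[1];
        float dz = treePos[2] - camPos[2];
        return fwd[0] * dx + fwd[1] * dy + fwd[2] * dz > 0;
    }
}
```

Printing forward() and the dot product each frame, as suggested above, makes a phase error (swapped cos/sin) show up immediately.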

You might consider limiting the dot product to the camera's field of view. There are still problems in that trees are not just points. A better way would be to test tree bounding boxes against the camera frustum, as @glampert suggests.

Still, the tree geometry doesn't look that complex. Optimization wise, I'd start trying to draw them faster. Are you using VBOs? Perhaps look at methods to reduce draw calls such as instancing. Perhaps even use a few models for LOD or billboards. Going even further, billboards with multiple trees on them. Occlusion culling methods could be used to ignore trees behind mountains.

[EDIT]
Since your trees are all roughly on a plane, you could limit the problem to the camera's yaw:
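The answer's actual snippet is not shown here; a sketch of the yaw-only idea (assuming yaw around Y and -Z forward at yaw 0; all names are placeholders) might look like:

```java
public class YawCheck {
    // True if the tree lies within halfFovDeg of the camera's yaw,
    // measured in the ground (XZ) plane only.
    static boolean inYawFov(float camX, float camZ, float yawDeg,
                            float treeX, float treeZ, float halfFovDeg) {
        // Angle from camera to tree, projected onto the XZ plane.
        double angToTree = Math.toDegrees(Math.atan2(treeX - camX, -(treeZ - camZ)));
        double diff = angToTree - yawDeg;
        // Wrap to [-180, 180] so 350 and 10 degrees compare as 20 apart, not 340.
        diff = ((diff + 180.0) % 360.0 + 360.0) % 360.0 - 180.0;
        return Math.abs(diff) <= halfFovDeg;
    }
}
```

The angle wrap is the part most often forgotten; without it, trees near the 0/360 seam pop in and out.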

Now I want to calculate the world coordinates for the current position of my camera. What I'm trying is to create a new matrix with glPushMatrix, then transform it the same way that the camera is transformed, and then get the matrix and multiply the given camera coordinate with it:

The problem now is: this works for the x coordinate, but the y coordinate is wrong and always 0. Have I misused the matrix somehow? Is there a "smoother" way of getting the world coordinates from the eye coordinates?

Answer:

The problem is with the way you're calling getFloat(). When you call it with an index on a ByteBuffer, the index is the number of bytes into the buffer at which to start reading the float, not the number of floats. You need to multiply each of your indices by 4:
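The corrected snippet itself isn't reproduced here, but the rule is easy to demonstrate with plain java.nio (the helper name is hypothetical): ByteBuffer.getFloat(index) takes a byte offset, so float number i lives at i * 4 (Float.BYTES).

```java
import java.nio.ByteBuffer;

public class FloatOffset {
    // Reads the floatIndex-th float from a ByteBuffer.
    static float readFloat(ByteBuffer buf, int floatIndex) {
        return buf.getFloat(floatIndex * Float.BYTES); // NOT buf.getFloat(floatIndex)
    }
}
```

With buf.getFloat(1) you would instead read four bytes starting in the middle of the first float, producing garbage values like the always-0 y coordinate described above.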

UPDATE:
I have tried a lot of combinations, but camera.yaw() does not seem to be remotely related to what the trees are doing.
No matter what I multiply or divide it by, or whatever else I do with it, it always seems to be wrong!

Answer:

What you want is an axis aligned billboard. First take the center axis in local coordinates, let's call it a. Second you need the axis from the point of view to some point along that axis (the tree's base will do just fine), let's call it v. Given these two vectors you want to form a "tripod" with one leg being coplanar with the center axis and the direction to viewpoint.

This can be done by orthogonalizing the vector v against a using the Gram-Schmidt process, yielding v'. The third leg of the tripod is the cross product between a and v' yielding r = a × v'. The edges of the axis aligned billboard are parallel to a and r; but this is just another way of saying, that a billboard is rotated into the (a,r) plane, which is exactly what rotation matrices describe. Assume the untransformed billboard geometry is in the XY plane, with a parallel to Y, then the rotation matrix would be
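The matrix itself is not reproduced here, but the tripod construction can be sketched with plain float arrays (a hypothetical helper; under the stated assumption that the billboard lies in the XY plane with a along Y, the rotation's columns are r, a, v'):

```java
public class AxialBasis {
    static float[] sub(float[] p, float[] q) { return new float[]{p[0]-q[0], p[1]-q[1], p[2]-q[2]}; }
    static float dot(float[] p, float[] q) { return p[0]*q[0] + p[1]*q[1] + p[2]*q[2]; }
    static float[] cross(float[] p, float[] q) {
        return new float[]{ p[1]*q[2]-p[2]*q[1], p[2]*q[0]-p[0]*q[2], p[0]*q[1]-p[1]*q[0] };
    }
    static float[] normalize(float[] p) {
        float len = (float) Math.sqrt(dot(p, p));
        return new float[]{ p[0]/len, p[1]/len, p[2]/len };
    }

    // Returns the three basis vectors {r, a, v'} of the billboard rotation:
    // v' is v orthogonalized against a (Gram-Schmidt), r = a x v'.
    static float[][] billboardBasis(float[] a, float[] v) {
        a = normalize(a);
        float k = dot(v, a);
        float[] vPrime = normalize(sub(v, new float[]{ k*a[0], k*a[1], k*a[2] }));
        float[] r = cross(a, vPrime);
        return new float[][]{ r, a, vPrime };
    }
}
```

Loading these three vectors as the columns of a 3x3 (or the upper-left of a 4x4) matrix rotates the XY-plane billboard into the (a, r) plane described above.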

Note that if anything about matrices and vector operations doesn't yet make sense to you, you should stop whatever you are doing with OpenGL right now and first learn these essential basic skills. You will need them.

I am experimenting with LWJGL2 and I want to be able to tell if the camera can see a certain point in 3D space. I was trying on my own to see if I could do it, and ended up with something that kinda works, but only for rotation on the Y axis.

This code works, but not in both axes. I am not sure if this is the correct way to do it either.

I just want to know how I could change this so that it works for both x and y rotation of the camera. For example, it could tell me if a point is visible regardless of the camera's rotation.

NOTE:
I am not worrying about anything obstructing the view of the point.

Thanks for the help in advance.

Answer:

If you have the view and projection matrices of the camera (let's call them V, P), you can just apply the transformations to your point and check whether the result lies within the clip volume of the camera.

The view transform V applies the transformation of the world relative to the camera, based on the camera position and orientation. Then, the projection P deforms the camera's view frustum (i.e., the visible space of the camera) into a unit cube, like this:
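A minimal sketch of that test with plain column-major 4x4 arrays (no LWJGL types; projTimesView stands for the precomputed product P * V, and the names are placeholders): after the transform, the point is visible when each clip coordinate lies within [-w, w].

```java
public class ClipTest {
    // m is column-major (OpenGL style): element (row, col) lives at m[col*4 + row].
    static float[] mulPoint(float[] m, float x, float y, float z) {
        float[] out = new float[4];
        for (int row = 0; row < 4; row++) {
            out[row] = m[row] * x + m[4 + row] * y + m[8 + row] * z + m[12 + row];
        }
        return out;
    }

    // True if the point lies inside the clip volume of P * V.
    static boolean visible(float[] projTimesView, float x, float y, float z) {
        float[] c = mulPoint(projTimesView, x, y, z);
        float w = c[3];
        return w > 0
            && -w <= c[0] && c[0] <= w
            && -w <= c[1] && c[1] <= w
            && -w <= c[2] && c[2] <= w;
    }
}
```

In real LWJGL code you would read P and V from your camera instead of building them by hand; the comparison against w is the whole trick.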

Given the surfaceNormal(gl_NormalMatrix * gl_Normal) and a gl_Vertex how do I rotate the gl_Vertex such that it will adjust to that normal. I want to use this for billboards and general rotation.

2 Questions:

How would you rotate the gl_Vertex using the surfaceNormal (In the .vert shader)?

Should the rotation be done on the GPU (in the shader) or on the CPU? (Please adjust question #1 according to this question given 2 Vector3fs, one for the rotation (normal) the other for the vertex position if it should be done on the CPU)

Thanks!

Answer:

In most cases, the rotation should be done on the CPU, by way of the model matrix (or directly the world matrix).

Even though the CPU is slower than the GPU, keep in mind that a vertex shader is executed for every vertex, whereas a model matrix is linked to a whole mesh, i.e. to a lot of vertices: it needs to be calculated only once per frame if your mesh is dynamic, and only once for your entire program if your mesh never moves.
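As a rough illustration of that cost argument (hypothetical names, plain arrays instead of a math library): the rotation is baked into a model matrix once per frame, and the GPU then merely multiplies each vertex by it.

```java
public class ModelMatrix {
    // Rotation around Y by 'deg' degrees, column-major, ready to upload as a uniform.
    static float[] rotationY(float deg) {
        float c = (float) Math.cos(Math.toRadians(deg));
        float s = (float) Math.sin(Math.toRadians(deg));
        return new float[] {
             c, 0, -s, 0,   // column 0
             0, 1,  0, 0,   // column 1
             s, 0,  c, 0,   // column 2
             0, 0,  0, 1    // column 3
        };
    }
    // Built once per frame; in real LWJGL code you would then upload it with
    // something like glUniformMatrix4fv and let gl_Position = model * vertex do the rest.
}
```

The sin/cos above run once per mesh per frame instead of once per vertex per frame, which is the point the answer is making.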

I'm implementing a camera which responds to change of mouse position. It's a more question of maths than of coding but I'd like to know how to use it as well.

I have a Camera object which rotates along the Y-axis when the mouse changes its X-position. This works as intended and I can rotate around the cube I'm drawing just fine. Now I would like to implement looking up and down triggered by mouse change vertically but the X and Z-axis are relative to the camera object so I can't just rotate along the X-axis but have to combine the X and Z-axis to do this in a fluid motion.

I don't think it's necessary to show you my Window class, as the functions are quite self-explanatory. As you can see, the part at the bottom that I commented out was my approach to solving the problem; at first it seemed to work, but the rotation was slightly off.

I expect fluid up and down motion(that is, relative to the camera) but receive a weird rolling motion.

Any help is greatly appreciated!

Answer:

I fixed my problem. It's strange but I have to multiply the Y-rotation matrix by the X-rotation matrix. That doesn't make sense to me but it works. Thanks for your help!

In my game (a 3D game based on LWJGL) I walk in a voxel (block) world. The character goes up and down blocks quite fast, so I want the camera to follow the character smoothly. I tried interpolation, but the point I have to interpolate to changes all the time, because the character does not take the step at once (it takes about 5-8 frames). This leads to some shaking which doesn't look nice. Is there a way to do this better?

This is the deciding line. The cameraYCorrectionPoint is the point, where the camera started to interpolate, while player.y is the position to interpolate towards (which can obviously change every frame). The other part is to calculate the time passed and scaling it up so it ranges from 0 to 1.

This isn't really working, since the position can change again before the initial interpolation is done, resulting in ugly interpolation. So what can I do for a better approach?

Answer:

You can model the camera a bit differently. Split the position into a current position and a target position. When you want to move the camera, only move the target position. Then, each frame, update the current camera position to get closer to the target position. I have had good experience with the following update:

f is a factor which lets you choose how immediate the reaction of the camera will be. Higher values result in a very loose motion; a value of 0 makes the camera follow its target exactly. t is the time since the last update call. If t is measured in milliseconds, f should have values between 0.95 and 0.99.
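The answer's actual update code is not shown here; one common form matching the description (the camera keeps f^t of its remaining distance to the target each step, so f = 0 snaps instantly) would be:

```java
public class SmoothFollow {
    // One smoothing step: f in [0, 1) controls looseness, t is the elapsed time.
    static float follow(float current, float target, float f, float t) {
        return target + (current - target) * (float) Math.pow(f, t);
    }
}
```

Because the remaining distance shrinks by the same factor every step, a target that changes mid-step simply redirects the motion instead of restarting it, which removes the shaking described in the question.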

I'm currently working on a small FPS project for testing, and I am familiar with OpenGL (LWJGL). My problem is that the rotation of the camera is not very smooth. It "jumps" from pixel to pixel, which is actually very obvious. How can I smooth it out?
[Link to footage:] https://www.youtube.com/watch?v=6Hgt1hXCKKA&feature=youtu.be

Summary of my code:
I'm storing the current mouse position in a Vector2f;

I'm increasing yaw and pitch by the relative movement of the mouse (new position - old position);

I'm moving the mouse to the center of the window

I'm storing the current position (the center of the window) in the old-position Vector2f.

Answer:

One possible way is to treat the (delta) input of your input device (mouse, keyboard, whatever) not as absolute values for your new camera position or rotation angles, but to treat them as impulse or force to move/rotate in a certain direction. You would then simply use integration over some time differentials dt to update the camera position/rotation with some damping/friction factor to reduce the translational or angular momentum of the camera for it to quickly come to a stop. This would be a somewhat physical simulation.
Another possible approach is via parametric interpolation: Whenever you receive a (delta) input of your input device, you calculate a new "desired target position or rotation angle" from that and then interpolate between the current and target state over time to reach that target.
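A sketch of the first, physics-flavoured approach (all names here are assumptions): mouse deltas act as impulses on an angular velocity, which is integrated and damped each update so the camera glides to a stop instead of jumping.

```java
public class SmoothedYaw {
    float yaw = 0f;          // current camera yaw in degrees
    float yawVelocity = 0f;  // angular momentum accumulated from input

    // Mouse input no longer sets the angle directly; it pushes the velocity.
    void addMouseImpulse(float deltaX, float sensitivity) {
        yawVelocity += deltaX * sensitivity;
    }

    // Called once per frame: integrate over dt, then apply friction.
    void update(float dt, float damping) { // damping in (0, 1); smaller stops faster
        yaw += yawVelocity * dt;
        yawVelocity *= (float) Math.pow(damping, dt);
    }
}
```

Pitch would get the same treatment with a second velocity; the exponential damping is what turns the pixel-stepped input into smooth motion.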

(I am using a LibGDX framework which is basically just LWJGL(Java) with OpenGL for rendering)
Hi, I'm trying to render a laser beam; so far I've got this effect.

It's just a rectangle and then the whole effect is done in fragment Shader.

However, as it is a laser beam, I want the rectangle to face the camera, so the player always sees this red transparent "line" every time. And this is driving me crazy. I tried some billboarding stuff, however what I want isn't really billboarding. I just want to rotate it on the Z axis so that the player always sees the whole line, that's all. No X and Y rotations.

As you can see, that's what I want. And it's not billboarding at all.

If it were billboarding, it would look like this:

I also tried drawing a cylinder with the effect based on gl_FragCoord, which was working fine, but the coords were varying (sometimes the UVs were 0 and 1, sometimes 0 and 0.7) and it was not sampling the whole texture, so the effect was broken.

Thus I don't even know what to do now.
I would really appreciate any help. Thanks in advance.

#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord0;
uniform sampler2D tex; //texture I apply the red color onto. It's how I get the smooth(transparent) edges.
void main() {
    vec4 texelColor = texture2D( tex, v_texCoord0 ); // sample the texture
    vec4 color = vec4(10.0, 0.0, 0.0, 1.0); // the red color
    float r = 0.15; // I want the whole texture to be red: where there is less transparency it should be more red, and at the (more transparent) edges less red
    if (texelColor.a > 0.5) r = 0.1;
    gl_FragColor = vec4(mix(color.rgb, texelColor.rgb, texelColor.a * r), texelColor.a); // mix the two colors into one, depending on the alpha value of texelColor and the r float
}

The texture is just a white line, opaque in the middle but transparent at the edges of the texture (a smooth transition).

Answer:

If you use DecalBatch to draw your laser, you can do it this way. It's called axial billboarding or cylindrical billboarding, as opposed to the spherical billboarding you described.

The basic idea is that you calculate the direction the sprite would be oriented for spherical billboarding, and then you do a couple of cross products to get the component of that direction that is perpendicular to the axis.

Let's assume your laser sprite is aligned to point up and down. You would do this series of calculations on every frame that the camera or laser moves.

//reusable calculation vectors
final Vector3 axis = new Vector3();
final Vector3 look = new Vector3();
final Vector3 tmp = new Vector3();
void orientLaserDecal (Decal decal, float beamWidth, Vector3 endA, Vector3 endB, Camera camera) {
axis.set(endB).sub(endA); //the axis direction
decal.setDimensions(beamWidth, axis.len());
axis.scl(0.5f);
tmp.set(endA).add(axis); //the center point of the laser
decal.setPosition(tmp);
look.set(camera.position).sub(tmp); //Laser center to camera. This is
//the look vector you'd use if doing spherical billboarding, so it needs
//to be adjusted.
tmp.set(axis).crs(look); //Axis cross look gives you the
//right vector, the direction the right edge of the sprite should be
//pointing. This is the same for spherical or cylindrical billboarding.
look.set(tmp).crs(axis); //Right cross axis gives you an adjusted
//look vector that is perpendicular to the axis, i.e. cylindrical billboarding.
decal.setRotation(look.nor(), axis); //Note that setRotation method requires
//direction vector to be normalized beforehand.
}

I didn't check to make sure the direction doesn't get flipped, because I draw it with back face culling turned off. So if you have culling on and don't see the sprite, that last cross product step might need to have its order reversed so the look vector points in the opposite direction.

I have this camera that is set up with vecmath.lookatMatrix(eye, center, up).
The movement works fine, forwards, backwards, right, left, these work fine.
What does not seem to work fine is the rotation.

I am not really good at math, so I assume I may be missing some logic here, but I thought the rotation would work like this:
On rotation around the Y-axis I add/sub a value to the X value of the center vector.
On rotation around the X-axis I add/sub a value to the Y value of the center vector.
For example here is rotation to the right: center = center.add(vecmath.vector(turnSpeed, 0, 0))

This actually works, but with some strange behaviour. It looks like the higher the x/y values of the center vector get, the slower the rotation becomes. I guess it's because the addition/subtraction moves the center vector too far away, or something similar; I would really like to know what is actually happening.

Actually while writing this, I just realized this can't work like this, because once I have moved around and rotated a bit, and for example I'm in "mid air", the rotation would be wrong....

I really hope someone can help me here.

Answer:

Rotating a vector for OpenGL should be done using matrices. Linear movement can be executed by simply adding vectors together, but for rotation it is not enough to just change one of the coordinates... if that were the case, how would you get from the (X,0,0) direction to (0,X,0)?
Here is another tutorial, which is C++, but there are Java samples too.
There is a bit of math behind all this - you seem to be familiar with vectors, and probably have a 'feel' of them, which helps.
EDIT - if you are to use matrices in OpenGL properly, you'll need to familiarize yourself with the MVP concepts. You have something to display (the model) which is placed somewhere in your world (view) at which you are looking through a camera (projection).
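To make the contrast concrete, here is a hedged sketch (hypothetical helper, plain arrays) of rotating a direction vector with a Y-axis rotation matrix instead of nudging one coordinate of the center vector:

```java
public class RotateVec {
    // Rotate direction vector v around the Y axis by 'deg' degrees.
    static float[] rotateY(float[] v, float deg) {
        float c = (float) Math.cos(Math.toRadians(deg));
        float s = (float) Math.sin(Math.toRadians(deg));
        return new float[] {
             c * v[0] + s * v[2],
             v[1],
            -s * v[0] + c * v[2]
        };
    }
}
```

Because sin and cos trade the X and Z components off against each other, the rotated vector keeps its length, so the turn speed stays constant no matter how far you have already rotated, unlike adding a constant to center.x.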

I am using a camera that has a yaw, a pitch, and a roll. When yaw == 0 the camera is looking down the -Z axis (yaw == 90 is positive X), when pitch == 270 the camera is looking up (pitch == 0 is looking straight ahead), and when roll == 180 the camera is upside down.

The camera's yaw, pitch, and roll values are never less than zero or greater than 360 (when a value passes 0 or 360, it is automatically wrapped around to the other side).

I have implemented 3DoF and it works quite nicely; however, when I implemented 6DoF, everything appears to work until the roll is around 90 or 270, and then strange things happen to the up and right vectors (forward always seems to work, because roll rotates around that axis?).

The scene I am rendering is just a bunch of blocks(in minecraft-style chunks) and I am always able to move forward/backward and use the forward vector to target a block so I know that the forward vector is done.

I did find a very similar question here, but it uses matrices and quaternions and I don't want to have to do that unless I absolutely have to(and I was careful to try to multiply roll pitch and yaw in the correct order): LWJGL - Problems implementing 'roll' in a 6DOF Camera using quaternions and a translation matrix

Answer:

So I finally got the hang of the meaning of cos and sin(but don't ask me to teach it) and was able to get this working!

does not really make sense. I'll just assume you have camera.position here (that would be wrong, too, but I'm coming to that later).

You actually do translate the camera first, and then rotate (around the origin).

Now, you might think that it is the other way around as you might have learned that the matrix operations are applied in the reverse order - that is so, but only when you view the transformations as moving around the objects. With a classical view matrix, you place a camera in the world - which is exactly the inverse. Moving a camera in a world is exactly the same as having a fixed camera and inversely moving all objects of the world.

With matrix math, (A * B)^-1 is the same as B^-1 * A^-1. So when you want to define the camera that way, you have to use the reverse order (of the reverse order, ending up in the order in which you write things down), but with each transformation inverted. You will need the rotations with a negated angle, followed by a translation with the negated position, to make this work.
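Written out as a derivation (assuming the camera is placed in the world by a translation then a rotation, C = T(p) R(θ)), the view matrix is the inverse of that placement:

```latex
V = C^{-1} = \bigl(T(p)\,R(\theta)\bigr)^{-1} = R(\theta)^{-1}\,T(p)^{-1} = R(-\theta)\,T(-p)
```

which is exactly the "negated angle, then negated position" order described above.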

Unanswered Questions

We will update and show the full solutions if these questions are resolved.

Third Person Camera using JBullet and lwjgl

I am developing a race game in Java using JBullet and LWJGL. I am currently having trouble making my camera follow the vehicle. It follows the car until it rotates. Here is the code....

I'm quite new to OpenGL and started working on a camera system using OpenGL 4.5. I've got an orthogonal camera which should follow the player entity around (bird's-eye view) through a ...

How can I move the camera under the terrain for a reflection?

I am attempting to render the reflection of some water. To create the illusion of reflection, I need the camera to be below the water (pictures not drawn by me). Therefore, I need to move the ...