I don't remember having come across any examples that show it, and I don't have the code at hand. But it should be relatively easy to implement, have a look at the following methods in Object3D:

Quote

rotateX(float w), rotateY(float w) & rotateZ(float w)

Rotates the object's rotation matrix around the respective axis by the given angle w (in radians; counter-clockwise for positive values).

You'd just calculate the X/Y deltas of the mouse-drags and feed those distances into the rotation methods. The other thing you could do is keep the object still and rotate the camera around the object instead (in that case look at the Camera class documentation for rotation).
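A minimal sketch of that delta-to-angle mapping. The sensitivity constant is a made-up tuning value, and the jPCT calls are only shown in comments; this is just the pixel-to-radian conversion:

```java
// Sketch: convert mouse-drag deltas (in pixels) to rotation angles (in radians).
// SENSITIVITY is an assumption; tune it to taste.
public class DragRotation {
    static final float SENSITIVITY = 0.01f; // radians per pixel (assumption)

    // returns {angleX, angleY}: vertical drag rotates around x, horizontal around y
    static float[] dragToAngles(int dx, int dy) {
        // in jPCT these angles would feed into:
        //   obj.rotateX(dy * SENSITIVITY);
        //   obj.rotateY(dx * SENSITIVITY);
        return new float[] { dy * SENSITIVITY, dx * SENSITIVITY };
    }
}
```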

Hi, thanks for the reply. I have a regular camera rotation based on the x/y deltas of the mouse-drags. Unless I've misunderstood you, what you described wouldn't give me the drag point (where the mouse button was first pressed) that Google Earth has. If you grabbed a sphere close to the top and dragged it a little, the rotation would be much more than if you grabbed the sphere at the middle and dragged it an equal distance across the screen.

Where dragDelta is the distance the user dragged, Center is the center of the screen, and mouseOrigin is where the user initially pressed at the start of the drag. The centerDelta method would return a sensible value based on the distance from mouseOrigin.X to Center.X, so the further from the center the user clicks, the more force the rotation gains.
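A rough sketch of that leverage idea. Only the names centerDelta, dragDelta and mouseOrigin come from the description above; the base sensitivity and the scaling formula are assumptions:

```java
// Sketch: the further from the screen center the drag starts,
// the more "force" the rotation gains. Constants are assumptions.
public class DragForce {
    // normalized distance in [0, 1] from the screen center to the grab point
    static float centerDelta(float mouseOriginX, float centerX, float halfWidth) {
        return Math.min(1f, Math.abs(mouseOriginX - centerX) / halfWidth);
    }

    // rotation angle (radians) for a horizontal drag of dragDelta pixels
    static float rotation(float dragDelta, float mouseOriginX,
                          float centerX, float halfWidth) {
        float base = 0.005f; // radians per pixel (assumption)
        return dragDelta * base * (1f + centerDelta(mouseOriginX, centerX, halfWidth));
    }
}
```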

If I'm still misunderstanding you, and the actual question is how you get an acceleration effect — where the rotation slowly decelerates over time — then the answer would be the above, plus decreasing the delta-from-center multiplier over time.

This'd work okay if the sphere is close to you, but might not work so well if the sphere is small on screen. Does this help? Or am I still misunderstanding your question?

EDIT: This, of course, does not depend on the user actually grabbing the sphere. If that's an issue then you could use Interact2D to check if the user is actually clicking on the sphere or not, and then calculating the distance from that point to the center of the sphere (instead of calculating the distance from the click and to the center of the screen).

The sphere is imaginary. Basically I have a 3D graph chart as a scene and I wanted to implement "natural rotations" around the chart (scene). The effect is that if the user grabs a point and drags it in any zig-zag direction, they can return the scene to the original view by simply dragging the mouse back to its original mouse-down position: "zero-hysteresis rotation". With normal rotations you can't really do this. :-S Also, the original mouse-down point cannot be rotated behind the sphere.

The main idea is to think in terms of arcs. If we have two arbitrary points A and B on the surface of a unit sphere, the most natural way to get from A to B is to rotate the sphere so that A follows the shortest path (or geodesic) from A to B. Thus the rotation occurs in the plane of the geodesic. If a = (xa, ya, za) and b = (xb, yb, zb) are the position vectors of the points, the axis of rotation is given by the cross product a × b. The angle of the rotation can be obtained as cos⁻¹(a · b), the arc cosine of the dot product (both vectors being unit length).
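In code, with plain double arrays as unit vectors (no jPCT types assumed), the axis and angle work out as:

```java
// Axis and angle of the rotation carrying unit vector a onto unit vector b:
// axis = a x b (cross product), angle = acos(a . b) (dot product).
public class ArcMath {
    static double[] cross(double[] a, double[] b) {
        return new double[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // angle between two unit vectors; the clamp guards against rounding drift
    static double angle(double[] a, double[] b) {
        return Math.acos(Math.max(-1.0, Math.min(1.0, dot(a, b))));
    }
}
```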

How do we get from 2D mouse coordinates to 3D rotations? We construct a pseudo-3D coordinate space as follows. (This is essentially the method of Shoemake in Graphic Gems IV, p. 176.) Superimpose an imaginary sphere on our 3D object such that the center of the sphere is at the position vector c = (screen_x, screen_y, 0), where screen_x and screen_y are the local screen coordinates of the center of the object. We assume that any mouse-downs will happen on the surface of our imaginary sphere, which has a radius of r pixels.
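Shoemake's mapping can be sketched like this: mouse positions inside the sphere's silhouette are lifted onto the front of the sphere, positions outside are clamped to its rim. The code below uses plain arrays and normalized coordinates; it makes no jPCT API assumptions:

```java
// Map a mouse position to a point on an imaginary unit sphere centered
// at (cx, cy) on screen with radius r pixels (Shoemake-style arcball mapping).
public class SpherePick {
    static double[] toSphere(double mx, double my, double cx, double cy, double r) {
        double x = (mx - cx) / r;
        double y = (my - cy) / r;
        double d2 = x * x + y * y;
        if (d2 <= 1.0) {
            // inside the silhouette: lift onto the front of the sphere
            return new double[] { x, y, Math.sqrt(1.0 - d2) };
        }
        // outside: clamp to the nearest point on the rim (z = 0)
        double d = Math.sqrt(d2);
        return new double[] { x / d, y / d, 0.0 };
    }
}
```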

I'd prefer to rotate the camera rather than the scene ^_^ To be honest, my math is too rusty to achieve this :-(

The tutorial that you've linked to looks quite easy to implement (judging from a quick look). I would go with the vector-based approach. If you want to use the quaternion-based one, look at the skeletal-animation framework in the download section. I think it contains some quaternion classes that may be helpful.

I think my main problem is that I haven't fully sorted out in my head the transition from mouse (screen space) to camera space to world space and back. Are there any examples that show mouse interaction with objects? That might enlighten me more :-)

In the example I posted, it appears everything is done in world space and the object is moving, not a free moving camera that does arbitrary rotations on mouse drags.

There are some examples of mouse/object interaction in the forums IIRC, but the search function still sucks (= it finds threads that are totally unrelated to the search phrase), so they are a bit hard to find. I've found this, not sure if it is helpful: http://www.jpct.net/forum2/index.php/topic,226

Anyway, the transition from screen to camera space is quite easy because Interact2D offers methods for it. Going back from camera to world space requires applying the inverse camera transformation, which is not that difficult either. If you are still stuck, I'll try to write an example that shows how to do it, but my time is a little limited right now..

It's not really what I'm trying to accomplish. I'm trying to get the same effect that Google Earth has. The user clicks on the screen, and the code calculates the closest point on an imaginary sphere as the drag point. As the user drags the cursor around the screen, the camera moves such that the originally grabbed point rotates to where the current mouse position is. If the user drags outside the sphere area and around it, the view would rotate around the Z axis (I believe).

Does that make sense? I think the term is Trackball Rotation, or Virtual Trackball Rotation.

I see... however, I don't have a solution ATM and not much time to invent one. If I were to start coding one, I would try this approach:

see if the mouse hits the globe

if so, determine the intersection point using calcMinDistance() or something

get the rotation matrix from the resulting vector (center of the globe -> intersection point)

get the current rotation matrix and interpolate between the two

set the interpolated matrix as new rotation matrix

repeat the last two steps until both matrices match (almost)
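The last three steps boil down to easing one rotation toward another until they (almost) match. Reduced here to a single angle for clarity; for the real thing one would interpolate between jPCT Matrix objects instead (whether the Matrix class offers interpolation directly is worth checking in the API docs). The step factor and epsilon are assumptions:

```java
// Sketch: ease a current rotation angle toward a target angle, step by
// step, until the difference is negligible.
public class Converge {
    static float easeTo(float current, float target) {
        while (Math.abs(target - current) > 1e-4f) {
            current += (target - current) * 0.2f; // move 20% closer each frame
        }
        return current;
    }
}
```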

That's just an idea, it's untested... it may work, it may produce complete nonsense. But it's the best I can come up with at the moment. If it doesn't help, I'll try to implement that solution myself, but I can't say when...

Whatever can be done with quaternions can be done with matrices, as the concepts are mathematically equivalent (albeit some tutorials may tell you otherwise). However, the conversion between the two can be tricky, so maybe you should check out the skeletal animation package in the download section, which includes some classes for quaternions.
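To illustrate the equivalence: a unit quaternion (w, x, y, z) converts to a 3x3 rotation matrix with the standard formula. Plain arrays, no library types assumed:

```java
// Standard conversion of a unit quaternion (w, x, y, z) to a 3x3 rotation matrix.
public class QuatMatrix {
    static double[][] toMatrix(double w, double x, double y, double z) {
        return new double[][] {
            { 1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)     },
            { 2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)     },
            { 2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y) }
        };
    }
}
```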

Maybe I don't get it, but isn't that basically the same as the code that I had given to you, with the difference that the mouse cursor is visible and the sphere's position below the cursor rotates with the cursor? If so, maybe this slightly modified version is a starting point. I don't think that it is mathematically correct, and it only works for the given sphere and window size, but it should be possible to make the d=w/200f-thing depend on those dimensions so that it works for all sizes. Or am I not getting it again?