// Fullscreen is not necessary... it's up to you.
getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
        WindowManager.LayoutParams.FLAG_FULLSCREEN);

setContentView(R.layout.THE_XML_LAYOUT_CREATED_BEFORE);
// Attach our glSurfaceView to the one in the XML file.
glSurfaceView = (GLSurfaceView) findViewById(R.id.glsurfaceview);

Now let's create the camera and the engine. This is an example from my own code, so it may not fit your needs exactly, but you can take it as inspiration.

The following code is pretty easy to understand: I create a new camera, set a renderer on my glSurfaceView, and of course give it the translucent (8888) pixel format and a depth buffer. (Without that, your glSurfaceView will not support an alpha channel and you will not see the camera layer.)
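As a sketch of that surface configuration (the method names are from the standard Android GLSurfaceView API; the glSurfaceView field and myRenderer are placeholders from this example, and the ordering matters: the config chooser must be set before the renderer):

```java
// Request an RGBA_8888 EGL config with a 16-bit depth buffer, so the GL
// surface supports an alpha channel. Must be called before setRenderer().
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
glSurfaceView.setRenderer(myRenderer);
// Make the surface itself translucent; without this the camera preview
// behind it stays invisible.
glSurfaceView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
```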

So basically:
1) Create the camera view.
2) Set up the glSurfaceView.
3) Set a renderer on the glSurfaceView.
4) Set the correct pixel format on the glSurfaceView holder.

// Install a SurfaceHolder.Callback so we get notified when the
// underlying surface is created and destroyed.
previewHolder = surfaceView.getHolder();
previewHolder.addCallback(this);
previewHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
//previewHolder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);

// Hold the reference of the imageCaptureCallback (null for now; it will
// be changed in surfaceChanged()).
this.imageCaptureCallback = imageCaptureCallback;
}

public void onStop() {
    // Surface will be destroyed when we return, so stop the preview.
    // Because the CameraDevice object is not a shared resource, it's very
    // important to release it when the activity is paused.
    imageCaptureCallback.stopImageProcessing();
    camera.setPreviewCallback(null);
    camera.stopPreview();
    previewRunning = false;
    camera.release();
}

You'll need a specific library for that, as it's quite complex work. If you google around you should be able to find some open source projects for it, though. There's a lot of rapid AR development at the moment, and there are tons of open source projects.

---

Anyone know a good way to sync the camera angle in the code to the real camera's angle on the phone?

I know how to read the sensors and get (rough) angles in the x/y/z from both magnetic and gravitational sensors.

Not sure how to turn this into a SimpleVector for my camera, though. I'm guessing maths is involved.
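One way to approach it (a sketch, not tested on a device): SensorManager.getRotationMatrix() gives you a row-major 3x3 matrix R that maps device coordinates to world coordinates, and the back camera looks along the device's -z axis, so the world-space view direction is R * (0, 0, -1), i.e. the negated third column of R. The Android-independent part of that looks like:

```java
public class SensorDirection {
    // R is a 3x3 row-major rotation matrix, as filled by
    // SensorManager.getRotationMatrix(R, null, gravity, geomagnetic).
    // The camera looks along -z in device coordinates, so the world-space
    // view direction is the negated third column of R.
    public static float[] viewDirection(float[] R) {
        return new float[] { -R[2], -R[5], -R[8] };
    }

    public static void main(String[] args) {
        // With the identity matrix the camera looks straight along -z.
        float[] identity = { 1, 0, 0, 0, 1, 0, 0, 0, 1 };
        float[] dir = viewDirection(identity);
        System.out.println(dir[0] + " " + dir[1] + " " + dir[2]);
    }
}
```

The resulting three floats could then go into a SimpleVector for the camera; whether you still need the axis conversion discussed below depends on your setup.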

I know how to read the sensors and get (rough) angles in the x/y/z from both magnetic and gravitational sensors.

Now if only you could get exact GPS coordinates for the phone as well - with that and the angles, you could, for example, place a secret clue somewhere and create a real-world treasure hunt game that people use their Androids to play.

Or, rather, allowing anyone to place messages tied to real locations and share them with anyone else.

This is why I'm using Wave servers as a back-end; it lets people have a kind of "social" AR. They can share their posts with individuals, groups, or the public at large. I already got the system working on PCs with a Google Maps-style client: http://arwave.org/ (see video). That was more or less to prove the concept. (Though as it's made in Qt, porting it later to Nokia phones shouldn't be too hard.)

If the matrix from Android is similar to what OpenGL uses, it's most likely column-major, while jPCT's matrices are row-major. You have to convert between them by turning rows into columns. The easiest way is to create a float[16] array for Matrix.setDump() and fill it accordingly.
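The fill itself is just index swapping; a minimal plain-Java sketch (the helper name is mine, and whether the result is fed to setDump() as described here is up to your code):

```java
public class MatrixConvert {
    // Transpose a 4x4 column-major matrix (OpenGL/Android layout) into a
    // row-major float[16], the layout jPCT's Matrix.setDump() expects.
    public static float[] toRowMajor(float[] colMajor) {
        float[] rowMajor = new float[16];
        for (int row = 0; row < 4; row++) {
            for (int col = 0; col < 4; col++) {
                // Column-major stores column 'col' contiguously;
                // row-major stores row 'row' contiguously.
                rowMajor[row * 4 + col] = colMajor[col * 4 + row];
            }
        }
        return rowMajor;
    }
}
```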

In addition, you have to convert between the coordinate systems. You can either do this by rotating the matrix 90° around the x-axis or by negating the second and third columns of the matrix (which can be done while filling the array anyway). The next release will include a method that does this conversion.
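Folding the sign flips into the fill might look like this (my reading of "negating the second and third column"; verify the signs against your scene):

```java
public class MatrixAxisConvert {
    // Transpose from column-major (OpenGL/Android) to row-major (jPCT)
    // and, at the same time, negate the second and third columns of the
    // result to convert between the coordinate systems.
    public static float[] toJpctDump(float[] colMajor) {
        float[] dump = new float[16];
        for (int row = 0; row < 4; row++) {
            for (int col = 0; col < 4; col++) {
                float v = colMajor[col * 4 + row];
                // Columns 1 and 2 (0-based) get their sign flipped.
                if (col == 1 || col == 2) {
                    v = -v;
                }
                dump[row * 4 + col] = v;
            }
        }
        return dump;
    }
}
```

Note that on the identity matrix this produces diag(1, -1, -1, 1), which is exactly a 180° rotation around the x-axis, consistent with the correction in the next post.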

Then just try to apply a rotateX((float)Math.PI) to the matrix (the 90° I wrote in my former post is of course wrong; it has to be 180°). Or maybe you have to invert it in addition for it to be useful? Keep in mind that a camera transformation is actually an inverse world transformation. How are you creating the jPCT Matrix? Have you ensured that the result really looks the way it should?
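In plain terms, rotateX((float)Math.PI) multiplies by Rx(pi) = diag(1, -1, -1, 1), which for jPCT's row-vector convention negates the second and third columns of the matrix. A standalone sketch of that multiplication (not using the jPCT API):

```java
public class RotateX180 {
    // Apply a 180-degree rotation around the x-axis to a row-major 4x4
    // matrix m, i.e. compute m * diag(1, -1, -1, 1). Multiplying by this
    // diagonal matrix on the right negates columns 1 and 2 (0-based).
    public static float[] rotateX180(float[] m) {
        float[] out = m.clone();
        for (int row = 0; row < 4; row++) {
            out[row * 4 + 1] = -out[row * 4 + 1]; // second column (y)
            out[row * 4 + 2] = -out[row * 4 + 2]; // third column (z)
        }
        return out;
    }
}
```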
