Getting Started

In this tutorial, we'll create an Android app demonstrating AprilTag detection using the ViSP SDK. We assume that you've already created the SDK using this tutorial: Tutorial: Building ViSP SDK for Android.

This tutorial assumes you have the following software installed and configured:

Create an Android Project

If you're new to app development with Android Studio, we recommend this tutorial for getting acquainted with it. Following that tutorial, create an Android project with an Empty Activity. Make sure to keep minSdkVersion >= 21, since the SDK is not compatible with older versions. Your app's build.gradle file should look like:

```gradle
android {
    compileSdkVersion ...

    defaultConfig {
        applicationId "example.myapplication"
        minSdkVersion 21
        versionCode 1
        versionName "1.0"
        ...
    }
}
```

Importing ViSP SDK

In Android Studio, go to the File -> New -> Import Module option.

Navigate to the directory where you've installed the SDK, select the sdk -> java folder, and name the module.
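Importing the module only adds its sources to the project; Android Studio does not always wire it into the app automatically. If the app module doesn't pick it up, entries like the following are needed (the module name `:visp` here is a placeholder; use whatever name you gave the module when importing):

```gradle
// settings.gradle -- make Gradle aware of the imported module
// (':visp' is a placeholder for the module name you chose above)
include ':app', ':visp'
```

```gradle
// app/build.gradle -- make the app depend on the imported SDK module
dependencies {
    implementation project(':visp')
}
```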

This only imports the Java code; you need to copy the native libraries (the .so files) too. Create a folder named jniLibs in the app/src/main folder. Then, depending on your emulator/device architecture (mostly x86 or x86_64 for emulators, arm64-v8a for devices), create a matching folder inside jniLibs and copy those libraries into your project.

Then, in your MainActivity.java, you need to load the ViSP libraries:

```java
public class MainActivity {
    // Used to load the ViSP native library on application startup.
    static {
        System.loadLibrary("visp_java3");
    }
    ...
}
```

Begin Camera Preview

Before you begin scanning, you need to ask the user for camera permissions. First, in your manifest file, include:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="...">

    <uses-permission android:name="android.permission.CAMERA" />
    <uses-feature
        android:name="android.hardware.camera"
        android:required="true" />

    <application ...>
        ...
    </application>
</manifest>
```

Then, you need to request the camera permission at runtime in MainActivity.java. Note that detection will run only when the user grants camera access:

```java
// Check if the camera permission has been granted
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        == PackageManager.PERMISSION_GRANTED) {
    // Permission is already available, start camera preview
    Intent intent = new Intent(this, CameraPreviewActivity.class);
    startActivity(intent);
} else {
    // Permission is missing and must be requested.
    // PERMISSION_REQUEST_CAMERA is an app-defined request code constant.
    ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA},
            PERMISSION_REQUEST_CAMERA);
}
```

Starting Camera Preview

Now create a new activity, CameraPreviewActivity.java. This will call the camera API. The camera image is received as a byte array, which can easily be manipulated for our purposes. We can render the resulting image as a Java Bitmap in an ImageView element. In brief,

```java
public class CameraPreviewActivity extends AppCompatActivity {
    private Camera mCamera;
    public static ImageView resultImageView;
    ...

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Open an instance of the first camera and retrieve its info.
        mCamera = getCameraInstance(CAMERA_ID);
        Camera.CameraInfo cameraInfo = new Camera.CameraInfo();
        Camera.getCameraInfo(CAMERA_ID, cameraInfo);

        if (mCamera == null) {
            // Camera is not available, display error message
            Toast.makeText(this, "Camera is not available.", Toast.LENGTH_SHORT).show();
            setContentView(R.layout.camera_unavailable);
        } else {
            setContentView(R.layout.activity_camera_preview);
            resultImageView = findViewById(R.id.resultImage);
            ...
            // Get the rotation of the screen to adjust the preview image accordingly.
            final int displayRotation = getWindowManager().getDefaultDisplay()
                    .getRotation();
            ...
        }
    }
}
```

Now that we have access to the camera, we need to create a camera preview class that processes the image for AprilTag detection.

Note that the detector works on grayscale images, while the camera API returns values for all pixels (R, G, B, A). Depending on the image format rendered in Android, we can convert those color values into grayscale. Refer to this page for a complete list of formats. The most commonly used format is NV21: in it, the first bytes are the grayscale values of the image, and the rest are used to compute the color image. So the AprilTag detector processes only the first width*height bytes of the image as a VpImageUChar.
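As a sketch of that extraction step in plain Java (independent of any Android or ViSP classes; in the app, the resulting bytes would be handed to the ViSP Java bindings to build the VpImageUChar):

```java
import java.util.Arrays;

public class Nv21Gray {
    // Extract the grayscale (luminance) plane from an NV21 frame.
    // An NV21 frame is width*height Y bytes followed by width*height/2
    // interleaved VU chroma bytes; the Y plane alone is the grayscale image.
    public static byte[] lumaPlane(byte[] nv21, int width, int height) {
        return Arrays.copyOfRange(nv21, 0, width * height);
    }

    public static void main(String[] args) {
        int w = 4, h = 2;
        byte[] frame = new byte[w * h * 3 / 2]; // full NV21 buffer: 12 bytes
        for (int i = 0; i < frame.length; i++) frame[i] = (byte) i;

        byte[] gray = lumaPlane(frame, w, h);
        System.out.println(gray.length); // 8 == w*h
    }
}
```

In the preview callback, these first width*height bytes are all the detector needs; the chroma bytes can be ignored entirely.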

Also, we're detecting tags every 50 milliseconds. This simple scheme works well: the actual tag detection time varies with the image, so detection should run as an asynchronous task rather than block the preview callback.
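The scheduling idea can be sketched in plain Java with java.util.Timer (on Android, the app would typically use a Handler or background thread instead; the detection step here is a placeholder):

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;

public class PeriodicDetection {
    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch threeTicks = new CountDownLatch(3);

        // Fire the (placeholder) detection step every 50 ms on a background
        // thread, so a slow detection never blocks the UI/preview thread.
        Timer timer = new Timer("apriltag-detector", true);
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                // In the real app this would grab the latest preview frame
                // and run the AprilTag detector on its grayscale plane.
                threeTicks.countDown();
            }
        }, 0, 50);

        // Wait until the task has fired three times, then stop.
        threeTicks.await();
        timer.cancel();
        System.out.println("detection task ran 3 times");
    }
}
```

Because each tick runs off the main thread, a detection that occasionally takes longer than 50 ms only delays the next result, not the camera preview itself.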

We need to change CameraPreviewActivity accordingly,

```java
public class CameraPreviewActivity extends AppCompatActivity {
    ...
    public static ImageView resultImageView;
    static int w, h;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        w = mCamera.getParameters().getPreviewSize().width;
        h = mCamera.getParameters().getPreviewSize().height;

        // Get the rotation of the screen to adjust the preview image accordingly.
        final int displayRotation = getWindowManager().getDefaultDisplay()
                .getRotation();

        // Create the Preview view and set it as the content of this Activity.
        ...
    }
}
```

Note the inversion in the updateResult method: ViSP in C++ accepts the image as a sequence of RGBA values, but a Java Bitmap processes them as ARGB.
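That inversion amounts to a per-pixel byte reordering. A minimal plain-Java sketch (independent of the app's actual updateResult implementation):

```java
public class RgbaToArgb {
    // Repack RGBA bytes (as produced on the native/ViSP side) into the
    // packed ARGB ints that android.graphics.Bitmap expects.
    public static int[] toArgb(byte[] rgba) {
        int[] argb = new int[rgba.length / 4];
        for (int i = 0; i < argb.length; i++) {
            int r = rgba[4 * i]     & 0xFF;
            int g = rgba[4 * i + 1] & 0xFF;
            int b = rgba[4 * i + 2] & 0xFF;
            int a = rgba[4 * i + 3] & 0xFF;
            // Bitmap's packed format is alpha in the top byte, then R, G, B.
            argb[i] = (a << 24) | (r << 16) | (g << 8) | b;
        }
        return argb;
    }

    public static void main(String[] args) {
        // One opaque red pixel: R=255, G=0, B=0, A=255
        byte[] rgba = {(byte) 255, 0, 0, (byte) 255};
        System.out.printf("0x%08X%n", toArgb(rgba)[0]); // 0xFFFF0000
    }
}
```

The resulting int[] can be handed directly to Bitmap.setPixels before displaying the Bitmap in the ImageView.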

Further Image Manipulation

In this tutorial, we've developed a bare-bones tag detection app. We can use OpenGL for Android to manipulate the image (for instance, drawing a 3D arrow on the tags) using the list of VpHomogeneousMatrix objects. You can find the complete source code of the above Android app here.