How ARCore enables you to create brand new types of user interaction (Part III)

In part I and part II of our ARCore blog series, we shared how you can leverage ARCore for Unity features, like motion tracking to create a Jenga AR game or light estimation to trigger object behavior. In this post, we want to share some cool ARCore for Unity experiments that show what you can do with data generated from the camera feed.

A handheld device’s camera is not just for taking photos and videos

ARCore for Unity enhances the utility of the camera by bringing contextual data to the user experience. To showcase some of the things you can do, we asked some of our top engineers to create AR experiments, with breakdowns of their techniques and code snippets, so you can explore them on your own. Here are just a few things you can start testing today!

World Captures

By Dan Miller

Contextual applications of AR (those that live in and interact with the real world) are perhaps the most mainstream use case. With World Captures, you can use the camera feed to capture and record a moment in time and space, so you can share it in its context. Inspired by Zach Lieberman, World Captures spawns a quad in space that uses a screenshot from the camera feed as its texture.

To transform the camera feed into a quad texture, I used the ScreenCapture.CaptureScreenshotAsTexture API. Once the screenshot is captured, I can easily add it as a texture to a material on a quad that is spawned at the moment the user taps the screen. Note that you need to wait until the end of the frame to give the application time to render the full screenshot into the texture.

The code snippet below will help you experiment with World Captures in ARCore for Unity.
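A minimal sketch of the approach described above, assuming Unity 2017.2+ (where `ScreenCapture.CaptureScreenshotAsTexture` is available); the quad prefab field and spawn placement are illustrative, not the original project's code:

```csharp
using System.Collections;
using UnityEngine;

public class WorldCapture : MonoBehaviour
{
    // Illustrative: a quad prefab with an unlit, textured material.
    public GameObject quadPrefab;

    void Update()
    {
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            StartCoroutine(CaptureToQuad());
        }
    }

    IEnumerator CaptureToQuad()
    {
        // Wait so the camera feed has fully rendered this frame
        // before grabbing it as a screenshot.
        yield return new WaitForEndOfFrame();
        Texture2D screenshot = ScreenCapture.CaptureScreenshotAsTexture();

        // Spawn the quad half a meter in front of the device camera,
        // facing the user, and texture it with the screenshot.
        var camTransform = Camera.main.transform;
        var quad = Instantiate(quadPrefab,
            camTransform.position + camTransform.forward * 0.5f,
            camTransform.rotation);
        quad.GetComponent<Renderer>().material.mainTexture = screenshot;
    }
}
```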

AR Camera Lighting

By John Sietsma

Use the camera feed to provide lighting and reflections to virtual objects.

It’s difficult to create the illusion that virtual objects blend with the real world as if they actually exist. A key component of this illusion is lighting 3D digital objects with the real light and reflections around them.

AR Camera Lighting allows you to do that by creating a skybox based on the camera feed. You can then use the skybox in your Unity scene to add lighting to virtual objects, and use a reflection probe to create reflections from the skybox.

Because the image captured from your camera view won’t be enough to cover a sphere, the lighting and reflections won’t be fully accurate. Still, the illusion is very compelling, particularly when the user is moving the camera or the model itself is moving.

To create the sphere, I transform the camera feed to a RenderTexture and access the texture in ARCore using a GLSL shader. You can find more thorough instructions and access all assets used in AR Camera Lighting here.
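In outline, the technique can be sketched as follows. This is a simplified, assumed version of the linked project: the skybox material, probe wiring, and texture size are illustrative, and it assumes ARCore's `Frame.CameraImage.Texture` as the source of the camera feed:

```csharp
using GoogleARCore;
using UnityEngine;
using UnityEngine.Rendering;

public class CameraFeedSkybox : MonoBehaviour
{
    // Illustrative: a skybox material whose shader samples its main texture.
    public Material skyboxMaterial;
    public ReflectionProbe reflectionProbe;

    RenderTexture m_SkyboxTexture;

    void Start()
    {
        m_SkyboxTexture = new RenderTexture(512, 512, 0);
        skyboxMaterial.mainTexture = m_SkyboxTexture;
        RenderSettings.skybox = skyboxMaterial;
        // Drive ambient lighting from the skybox so models pick up its colors.
        RenderSettings.ambientMode = AmbientMode.Skybox;
    }

    void Update()
    {
        // Copy the current ARCore camera image into the skybox texture.
        if (Frame.CameraImage.Texture != null)
        {
            Graphics.Blit(Frame.CameraImage.Texture, m_SkyboxTexture);
            // Re-render the probe so reflective materials track the feed.
            reflectionProbe.RenderProbe();
        }
    }
}
```

Re-rendering a reflection probe every frame is expensive; the real project and your own experiments may update it less often or at lower resolution.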

Feature Point Colors

By Amy DiGiovanni

Use the camera feed to place pixelated cubes at visible feature points. Each cube is colored based on the camera-feed pixel at its feature point’s position.

Feature Point Colors showcases how you can use visual cues to add depth and shape, and to call out distinct elements of the real-world objects in your camera view.

I use GoogleARCore’s TextureReader component to get the raw camera texture from the GPU, and then make a friendlier representation of the pixels from this texture. The cubes are all spawned up-front, based on a pool size, for performance, and they are activated and deactivated as needed.

```csharp
// laid out row by row from bottom to top, and left to right within each row.
var bufferIndex = 0;
for (var y = 0; y < height; ++y)
{
    for (var x = 0; x < width; ++x)
    {
        int r = m_PixelByteBuffer[bufferIndex++];
        int g = m_PixelByteBuffer[bufferIndex++];
        int b = m_PixelByteBuffer[bufferIndex++];
        int a = m_PixelByteBuffer[bufferIndex++];
        var color = new Color(r / 255f, g / 255f, b / 255f, a / 255f);
        int pixelIndex;
        switch (Screen.orientation)
        {
            case ScreenOrientation.LandscapeRight:
                pixelIndex = y * width + width - 1 - x;
                break;
            case ScreenOrientation.Portrait:
                pixelIndex = (width - 1 - x) * height + height - 1 - y;
                break;
            case ScreenOrientation.LandscapeLeft:
                pixelIndex = (height - 1 - y) * width + x;
                break;
            default:
                pixelIndex = x * height + y;
                break;
        }
        m_PixelColors[pixelIndex] = color;
    }
}
FeaturePointCubes();
```

Once I have a friendly representation of the pixel colors, I go through all points in the ARCore point cloud (until the pool size is reached), and then I position cubes at any points that are visible in screen space. Each cube is colored based on the pixel at its feature point’s screen space position.


```csharp
void FeaturePointCubes()
{
    foreach (var pixelObj in m_PixelObjects)
    {
        pixelObj.SetActive(false);
    }
    var index = 0;
    var pointsInViewCount = 0;
    var camera = Camera.main;
    var scaledScreenWidth = Screen.width / k_DimensionsInverseScale;
    while (index < Frame.PointCloud.PointCount && pointsInViewCount < poolSize)
    {
        // If a feature point is visible, use its screen space position to get the correct color for its cube
```
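The snippet breaks off inside the loop. A rough sketch of how the body could continue, based on the description in this section (the visibility check, position math, and color lookup here are assumptions, not the original code):

```csharp
        // Assumed continuation: fetch the point and project it to screen space.
        var point = Frame.PointCloud.GetPoint(index);
        var screenPoint = camera.WorldToScreenPoint(point);
        bool onScreen = screenPoint.z > 0 &&
            screenPoint.x >= 0 && screenPoint.x < camera.pixelWidth &&
            screenPoint.y >= 0 && screenPoint.y < camera.pixelHeight;
        if (onScreen)
        {
            // Activate a pooled cube at the feature point's world position.
            var cube = m_PixelObjects[pointsInViewCount];
            cube.SetActive(true);
            cube.transform.position = point;

            // Map the screen position into the scaled-down pixel color array.
            var scaledX = (int)screenPoint.x / k_DimensionsInverseScale;
            var scaledY = (int)screenPoint.y / k_DimensionsInverseScale;
            cube.GetComponent<Renderer>().material.color =
                m_PixelColors[scaledY * scaledScreenWidth + scaledX];
            pointsInViewCount++;
        }
        index++;
    }
}
```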

Sobel Spaces

By Stella Cannefax

Use the camera feed to draw shapes from one side of the screen to the other, creating interesting spatial effects.

Sobel Spaces is an example of how you can use the camera feed to reveal new layers of information from the real world. Emphasizing edges or creating visually compelling filters that alter the viewport are just two examples of what you can do.

The experiment is based on the Sobel operator, a common method of detecting edges from the camera feed in order to produce an image with the edges emphasized. Sobel Spaces is a modified version of the ComputerVision sample from the ARCore SDK. All that’s really changed is how the Sobel filter works:

```csharp
// offset - width, the halfWidth is part of how we get the offset effect
int a00 = s_ImageBuffer[offset - halfWidth - 1];
int a01 = s_ImageBuffer[offset - halfWidth];
int a02 = s_ImageBuffer[offset - halfWidth + 1];
int a10 = s_ImageBuffer[offset - 1];
int a12 = s_ImageBuffer[offset + 1];
int a20 = s_ImageBuffer[offset + halfWidth - 1];
int a21 = s_ImageBuffer[offset + halfWidth];
int a22 = s_ImageBuffer[offset + halfWidth + 1];

int xSum = -a00 - (2 * a10) - a20 + a02 + (2 * a12) + a22;
int ySum = a00 + (2 * a01) + a02 - a20 - (2 * a21) - a22;

// instead of summing the X & Y sums like a normal sobel,
// here we consider them separately, which enables a tricolor look
if ((xSum * xSum) > 128)
{
    outputImage[offset] = 0x2F;
}
else if ((ySum * ySum) > 128)
{
    outputImage[offset] = 0xDF;
}
else
{
    // the noise is just for looks - achieves a more unstable feel
    byte yPerlinByte = (byte)Mathf.PerlinNoise(j, 0f);
    byte color = (byte)(pixel | yPerlinByte);
    outputImage[offset] = color;
}
```
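For comparison, a conventional Sobel filter combines the two gradients into a single edge magnitude instead of thresholding them separately. Continuing with the `xSum` and `ySum` values from the snippet above, that would look roughly like:

```csharp
// Conventional Sobel: one combined edge strength per pixel, using the
// cheaper |x| + |y| approximation of sqrt(x^2 + y^2).
int magnitude = Mathf.Abs(xSum) + Mathf.Abs(ySum);
outputImage[offset] = (byte)Mathf.Clamp(magnitude, 0, 255);
```

Thresholding `xSum` and `ySum` separately, as the experiment does, discards that single-strength output but lets horizontal and vertical edges take on different colors.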

ARCore resources and how to share your ideas

With significant AR utility now built into handheld cameras, AR will continue to go mainstream, simply because the camera is one of the most-used features on mobile devices. We’d love to learn how you would leverage the camera feed to create engaging AR experiences on Android!

Share your ideas with the community and use ARCore 1.1.0 for Unity to create high-quality AR apps for more than 100 million Android devices on Google Play! Here’s how: