Augmented Toys

Post navigation

The augmented reality SDK Vuforia has a lesser-known feature that can recognize pre-selected three-dimensional objects as-is, without calibration ornaments like QR codes. I built a very simple app in Unity to test this functionality. The app uses the device’s camera (in this case an iPhone 6S Plus) to recognize a Skylander toy in real time. Once the app recognizes a set of pixels resembling the toy, a few 3D objects are rendered over it to very crudely demonstrate tracking accuracy and reliability. There’s nothing sexy or polished here, but you can imagine this technique enabling some more interesting mobile experiences. Here are a few:

Bring the toy to life: render animated facial features on top of the static toy and have it speak.

Workflow:

Export the capture data and upload it to the Vuforia Target Manager. Download the dataset as a Unity package.

Assuming your account license is set up, import the downloaded Unity package into your scene, point the dataset at the file, and parent your virtual objects to the target object. At this stage, if everything is set up correctly, you can hit “Play” and use your PC’s webcam to verify before building to your target device.

Build + Launch!
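To make the “parent virtual objects to the target object” step concrete, here is a minimal sketch of the kind of script Vuforia’s Unity SDK expects on the object target. It follows the pattern of Vuforia’s bundled DefaultTrackableEventHandler; the class name ToyOverlayHandler is my own, and exact API names may differ slightly across SDK versions:

```csharp
using UnityEngine;
using Vuforia;

// Attach to the ObjectTarget GameObject. Toggles every child renderer
// on detection/loss, so the overlay 3D objects only appear while the
// scanned toy is actually being tracked by the camera.
public class ToyOverlayHandler : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour trackable;

    void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        // DETECTED / TRACKED / EXTENDED_TRACKED all mean "we see the toy".
        bool visible =
            newStatus == TrackableBehaviour.Status.DETECTED ||
            newStatus == TrackableBehaviour.Status.TRACKED ||
            newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        // Show or hide everything parented under the target.
        foreach (var r in GetComponentsInChildren<Renderer>(true))
            r.enabled = visible;
    }
}
```

Anything you parent under the target in the hierarchy (my crude demo objects, or animated facial features) inherits the target’s pose, so it sticks to the toy as the camera moves.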

Some takeaway thoughts:

Picture quality matters.

Camera quality matters: before compiling to the phone, I used the MacBook’s built-in webcam to test, and it yielded dramatically worse results. See for yourself: https://youtu.be/8NXWB7EbzZ8

Lighting conditions, obstructions, and backgrounds also matter. Vuforia is interpreting a lot purely from optical data, which is a lot to ask without true depth sensing: https://youtu.be/CqlbRZp7hvA

Transparent/reflective materials confuse the scan.

Future Tech

Stereo cameras appear to be the next big hardware innovation in mobile phones. Sampling a scene from two perspectives should yield much more accurate recognition.