ARKit Image Anchors

This week we wanted to try out some of the new features Apple released in the ARKit 1.5 SDK update, the main one being Image Anchors (or Image Tracking, if you come from a Vuforia background).

What are Image Anchors?

Image Anchors are reference images that ARKit can recognise, providing you with a location in your world where each image was found. In simpler terms, they are triggers. Apple explains the feature with a couple of examples:

"Many AR experiences can be enhanced by using known features of the user’s environment to trigger the appearance of virtual content. For example, a museum app might show a virtual curator when the user points their device at a painting, or a board game might place virtual pieces when the player points their device at a game board. In iOS 11.3 and later, you can add such features to your AR experience by enabling image recognition in ARKit: Your app provides known 2D images, and ARKit tells you when and where those images are detected during an AR session."

Following on from our last R&D experiment with Vuforia Fusion, we were really interested to see how well Apple's implementation would perform against Vuforia, the leading competitor.

Using Image Anchors

Implementing Image Anchors within a Unity project was straightforward. The ARKit Unity SDK includes a simple example of how to add them to an existing scene, and we had something usable within ten minutes. ARKit takes a familiar approach with Reference Images: these are the individual images ARKit will look for when performing its tracking, and each image can be assigned to a "set". ARKit then works from that set, allowing it to search for multiple images at once.
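The pattern looks roughly like the sketch below, modelled on the plugin's GenerateImageAnchor example: subscribe to the image anchor event and spawn content at the reported pose. Treat it as a sketch; the event and helper names are from our memory of the plugin and may differ between versions, and the reference image name is hypothetical.

    using UnityEngine;
    using UnityEngine.XR.iOS; // namespace used by the Unity ARKit plugin

    // Spawns a prefab when ARKit reports that a reference image has been found.
    public class ImageAnchorSpawner : MonoBehaviour
    {
        public string referenceImageName = "MuralTrigger"; // hypothetical image name
        public GameObject contentPrefab;

        private GameObject spawned;

        void Start()
        {
            // The plugin raises this event as native ARImageAnchors are added.
            UnityARSessionNativeInterface.ARImageAnchorAddedEvent += OnImageAnchorAdded;
        }

        void OnDestroy()
        {
            UnityARSessionNativeInterface.ARImageAnchorAddedEvent -= OnImageAnchorAdded;
        }

        void OnImageAnchorAdded(ARImageAnchor anchor)
        {
            if (anchor.referenceImageName != referenceImageName || spawned != null)
                return;

            // Convert the anchor's native matrix into a Unity position/rotation.
            Vector3 position = UnityARMatrixOps.GetPosition(anchor.transform);
            Quaternion rotation = UnityARMatrixOps.GetRotation(anchor.transform);
            spawned = Instantiate(contentPrefab, position, rotation);
        }
    }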

Once we had the images and scene set up, we tested it on a device (iPad Pro 12.9). The results were pretty good! Once it had found an image, the content was locked into position and ARKit handled the rest of the world tracking.

The tracking is blazingly fast; in fact, too fast at times. As soon as ARKit thinks it has found the trigger image, it fires the callback for us to use and locks the content to that location. Most of the time this is fine; however, we found that by asking ARKit to re-evaluate the tracking (either automatically or through user interaction) we could achieve more stable and accurate results.
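One way to ask for that re-evaluation, assuming the session is driven through the plugin's UnityARSessionNativeInterface, is to re-run it with existing anchors removed so the trigger is detected afresh. A rough sketch; the resource group name is hypothetical, and configuration field names may differ between plugin versions:

    using UnityEngine;
    using UnityEngine.XR.iOS;

    // Re-running the session without its existing anchors forces ARKit to
    // detect the trigger image again, e.g. from a "re-scan" button.
    public class TrackingRefresher : MonoBehaviour
    {
        public void RescanForTriggers()
        {
            var config = new ARKitWorldTrackingSessionConfiguration();
            config.alignment = UnityARAlignment.UnityARAlignmentGravity;
            config.planeDetection = UnityARPlaneDetection.Horizontal;
            config.arResourceGroupName = "AR Resources"; // hypothetical asset group name

            UnityARSessionNativeInterface.GetARSessionNativeInterface()
                .RunWithConfigAndOptions(
                    config,
                    UnityARSessionRunOption.ARSessionRunOptionRemoveExistingAnchors);
        }
    }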

Issues

When testing this feature out, we tried moving the trigger to see how well ARKit would handle the tracking. It did not. This was not a total surprise, as the developer documentation made us aware of it, but it's something to bear in mind on certain projects where the trigger image may move (a magazine, for example) and you want the tracking solution to follow the trigger's movement. When thinking about the large-scale mural projects we work on, however, we realised this was not going to be an issue for us (buildings do not move; well, most of the time).
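That said, ARKit 1.5 does occasionally re-detect an image and update its anchor, so content can at least snap to the new pose when that happens. A minimal sketch, assuming the same plugin event names as above; the reference image name is again hypothetical:

    using UnityEngine;
    using UnityEngine.XR.iOS;

    // ARKit 1.5 only re-detects images rather than continuously tracking them,
    // but when an anchor update does arrive we can move content with it.
    public class ImageAnchorFollower : MonoBehaviour
    {
        public string referenceImageName = "MagazineTrigger"; // hypothetical name
        public Transform content;

        void OnEnable()
        {
            UnityARSessionNativeInterface.ARImageAnchorUpdatedEvent += OnImageAnchorUpdated;
        }

        void OnDisable()
        {
            UnityARSessionNativeInterface.ARImageAnchorUpdatedEvent -= OnImageAnchorUpdated;
        }

        void OnImageAnchorUpdated(ARImageAnchor anchor)
        {
            if (anchor.referenceImageName != referenceImageName)
                return;

            // Snap the content to the anchor's latest pose.
            content.SetPositionAndRotation(
                UnityARMatrixOps.GetPosition(anchor.transform),
                UnityARMatrixOps.GetRotation(anchor.transform));
        }
    }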

The second issue was scale. For this to work, you need to define the exact physical size at which your trigger will be viewed in reality. This was not a concern when using Vuforia, as it scaled everything appropriately. With ARKit's solution, however, we will need to set up a few testing environments so we can test triggers at different scales.
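To keep those sizes straight, something like the hypothetical catalogue below helps, assuming the plugin's ARReferenceImage asset stores physical width in metres; each print size of the same artwork effectively needs its own reference image entry:

    using UnityEngine;

    // ARKit needs a trigger's exact real-world size up front, so the same
    // artwork printed at several sizes means several reference image entries.
    // A hypothetical catalogue of the widths we might test against:
    public static class TriggerPrintSizes
    {
        // Widths in metres, as the ARReferenceImage asset expects.
        public const float A4Poster  = 0.210f; // 21 cm wide
        public const float A1Poster  = 0.594f; // 59.4 cm wide
        public const float MuralWall = 6.0f;   // assumed mural width

        // Convert a measured print width from centimetres to metres.
        public static float CmToMetres(float cm) => cm / 100f;
    }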

Final Thoughts

This is a fantastic solution for a first implementation. I can only assume this is partly down to the Metaio acquisition Apple made a few years ago, and that bringing that expertise in house has allowed them to achieve these results. What Metaio were doing prior to the purchase was fantastic work, rethinking what could be achieved within Augmented Reality.

We would like to see true image tracking come into play at some point, so triggers can be moved around and ARKit will update their pose correctly. We would also like to see a solution where scale is not an issue, so we can use a single trigger image at whatever size we want and it will just work out of the box. But these are nice-to-haves right now.

Adam Goodchild

CTO, Heavy Projects