Augmented Reality Interface User Study

To manufacture a product quickly and accurately, you need to consider the work instructions used by your assembly workers.
Are the work instructions understandable? Do the format and software interface add to the worker's cognitive load, causing confusion and forcing them to verify the contents before taking action? These are the types of concerns we addressed with our augmented reality (AR) work instruction user study.
We partnered with an industry sponsor who wanted to see whether cutting-edge AR instructions could improve worker performance, and which interface features are needed to do so.

IR cameras tracked each participant's location and head orientation – allowing us to determine when they looked at the work instructions.
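The core of that inference is simple geometry: if the head's forward vector points close enough to the instruction display, we count the sample as "looking at the instructions." The sketch below illustrates the idea with a made-up angular threshold and coordinates; it is not the study's actual tracking pipeline, which used an IR motion-capture system.

```python
import math

def is_looking_at(head_pos, forward, target_pos, threshold_deg=15.0):
    """Return True if the head's forward vector points within
    threshold_deg of target_pos (e.g. the tablet's location).
    Illustrative geometry only; positions and the 15-degree
    threshold are assumptions, not values from the study."""
    # Unit vector from the head toward the target.
    to_target = [t - h for t, h in zip(target_pos, head_pos)]
    norm = math.sqrt(sum(c * c for c in to_target)) or 1.0
    to_target = [c / norm for c in to_target]
    # Angle between gaze direction and the target direction.
    cos_angle = sum(f * t for f, t in zip(forward, to_target))
    cos_angle = max(-1.0, min(1.0, cos_angle))  # guard rounding error
    return math.degrees(math.acos(cos_angle)) <= threshold_deg
```

Running this per tracking frame yields a looking/not-looking time series, from which dwell time on the instructions can be summed.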

My role in the study

Within a team of 10 graduate students and professors, my job was to organize and manage the actual user study. From start to finish, I worked under the guidance of HCI professors to design the study and all of the necessary surveys, recruitment materials, observation forms, and scripts. I also performed statistical analysis in SPSS and published the results in a journal article (currently in review).

The Study Setup

The participants completed two trials of an assembly task with a given set of work instructions: either AR instructions on a mobile tablet, or static-image instructions similar to a typical PowerPoint slide show, delivered on a mobile tablet or a desktop computer. This between-group design needed enough participants per condition to achieve statistical significance for our findings, so we ultimately tested 45 individual participants.
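Choosing a sample size for a between-group design comes down to statistical power: the probability of detecting a real difference of a given size at a chosen significance level. As a hedged illustration (not the study's actual power analysis, which would have been done in a package like SPSS), a Monte Carlo estimate for a two-group comparison can be sketched with the standard library alone:

```python
import math
import random
import statistics

def power_estimate(n_per_group, effect_size, n_sims=2000, t_crit=2.05, seed=1):
    """Monte Carlo power estimate for a two-group comparison using a
    Welch-style t statistic against a fixed critical value (t_crit is
    roughly the two-tailed 0.05 cutoff for ~28 degrees of freedom at
    n=15 per group). Effect size is in standard-deviation units.
    All numbers here are illustrative assumptions, not study data."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # Simulate two groups: control vs. shifted by effect_size.
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        se = math.sqrt(statistics.variance(a) / n_per_group
                       + statistics.variance(b) / n_per_group)
        t = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(t) > t_crit:
            hits += 1
    return hits / n_sims
```

With 15 participants per condition (45 across three conditions), only fairly large effects are detectable reliably, which is consistent with testing distinct instruction modes rather than subtle UI tweaks.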
The original AR interface tested in Phase 1 was designed by engineers without any user research input. The goal was to assess the feasibility of AR and provide a benchmark for future UI improvements, so we considered only minor UI and functionality changes based on pilot participant feedback. In other words, we addressed only show-stopper issues and left things like UI refinements and visual cues for future study phases.

Phase 1 Results

The results from Phase 1 showed that AR interface users were quicker, more accurate, and reported a much higher Net Promoter Score than users of the other two instruction modes. Why is AR better? Primarily because it reduces cognitive load: users spend less effort remembering what to do and can concentrate on actually doing the work. Our head-tracking data shows that AR users spent less time inspecting the instructions and alternated between the instructions and the work area fewer times. The results have been presented in a conference paper, a journal paper (in review), and a poster presentation.
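For readers unfamiliar with the metric: a Net Promoter Score is computed from 0–10 "how likely are you to recommend this?" ratings as the percentage of promoters (9–10) minus the percentage of detractors (0–6). A minimal sketch, with made-up ratings rather than study data:

```python
def nps(scores):
    """Net Promoter Score: percent promoters (ratings 9-10) minus
    percent detractors (ratings 0-6), on 0-10 likelihood-to-recommend
    ratings. The example ratings in the test are hypothetical."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)
```

The resulting score ranges from -100 (all detractors) to +100 (all promoters), which is why even modest positive scores indicate a favorable response.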

Heuristic review and UI redesign

Moving on to the second study phase, we redesigned the AR software based on what we learned in the first phase. We analyzed what worked and what didn't, and produced a report of problem areas in both the presentation of the AR instructions and the UI itself. Our team of programmers and UX experts brainstormed the UI redesign and selected AR features from the literature, justifying each inclusion with citations: the Nielsen Norman Group heuristics, ISO 9241, Gerhardt-Powals' cognitive engineering principles, and many others.

Moving forward to Phase 2

Phase 2 compares several types of AR interfaces to find which is easiest to use and interpret. We used the same procedure and user task as in Phase 1 so we could compare results across phases; the only differences are the UI and AR feature variations developed from our heuristic review. Preliminary results indicate that with just a few minor changes, we increased user efficiency and accuracy and reduced frustration.
Phase 2 user testing finished in late 2014. An additional iteration on the interface design is planned, and then the study will move on to testing with subject matter experts – actual workers in an industrial work location.