Experimental Overview

Background and Purpose of Study

Robotic systems, including Unmanned Aerial Vehicles (UAVs), have benefited from the capabilities introduced by increasing levels of autonomy. Autonomy allows the system to automatically close lower-level control loops, elevating the human controller to a supervisory role in which they provide higher-level, goal-oriented commands to the system (Cummings 2014). In Cummings’ skills-rules-knowledge-expertise (SRKE) framework, autonomy is best suited to the low cognitive demands of skills and rules, while human controllers are better suited to the more cognitively demanding knowledge and expertise. For UAVs, the rise of autonomy has manifested primarily as a transition from “manual” control, in which joysticks are used to maintain the vehicle’s attitude and altitude, to “supervisory” control, in which the operator provides waypoints that the UAV flies between automatically.

A challenge that arises in such systems is determining how to train operators as the level of autonomy changes. In the US Department of Defense, training of UAV pilots has remained largely unchanged despite these developments in autonomous technology. Pilots typically first attend a generic flight school in which they learn lower-level vehicle control, then complete platform-specific training for their particular vehicle. Such training programs can be expensive in both time and resources. As lower-level vehicle functions are handed over to on-board autonomy, questions arise: Are the “manual” skills learned in traditional training programs useful to pilots operating supervisory control systems? Can pilots of supervisory control systems be trained to equal performance using shorter, supervisory-focused training programs? Do the skills learned in traditional training programs help during off-nominal or emergency situations that require finer vehicle control?

The purpose of this study is to answer these questions by developing UAV training programs and interfaces for both manual and supervisory control modes, and by conducting a human-in-the-loop experiment to determine the effectiveness of these training programs under varying environmental conditions. The training programs are designed to teach participants to fly the UAV under each control mode, in preparation for completing a task in a representative UAV environment (disaster response).

For the human-in-the-loop experiments, we propose three hypotheses:

1) Under nominal conditions, pilots using supervisory control will exhibit superior performance to those using manual control.

2) Under nominal conditions, for pilots using supervisory control, there will be no performance difference between those trained with both the manual and supervisory training programs and those trained with the supervisory program alone.

3) Under off-nominal conditions, those trained with both manual and supervisory programs will exhibit superior performance compared to those trained with only the supervisory program.
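Each hypothesis above amounts to a between-groups comparison of a performance measure. As one way to make this concrete, a minimal sketch of such a comparison follows, using Welch's t statistic on hypothetical mission-completion times; the sample data, sample sizes, and the choice of Welch's test are illustrative assumptions, not the study's specified analysis plan.

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples with
    possibly unequal variances (hypothetical analysis helper)."""
    m1, m2 = mean(a), mean(b)
    v1, v2 = variance(a), variance(b)  # sample variances (n - 1)
    return (m1 - m2) / sqrt(v1 / len(a) + v2 / len(b))

# Hypothetical completion times in seconds, for illustration only
manual_times = [412, 398, 450, 431, 405]
supervisory_times = [355, 362, 340, 371, 349]

# Positive t would favor hypothesis 1 (supervisory group faster)
t = welch_t(manual_times, supervisory_times)
```

A corresponding p-value would come from the t distribution with Welch-Satterthwaite degrees of freedom; that step is omitted here for brevity.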

Participant Selection

The participant pool consists of members of the Duke University community over 18 with normal (20/20) or corrected-to-normal vision (i.e., with glasses or contact lenses), no neurological disorders, seizure disorders, head injuries, or physical impairments that would prevent them from using a conventional computer input device. A total of 36 participants will be recruited for the experiment. Recruitment methods include campus student listservs, email, and flyers posted around campus.

General Procedure

Each participant is trained to fly a UAV using one of two tablet interfaces (also called “App Interfaces”). The Manual interface requires the user to fly via a traditional “joystick” approach, with one joystick controlling the altitude and yaw of the vehicle and the other controlling its lateral motion (roll and pitch). The Supervisory interface utilizes waypoint control, where the user sets waypoints on a map for the vehicle to follow automatically. See App Interfaces for more detail on the interfaces used.
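The two-stick mapping of the Manual interface can be summarized in code. The sketch below is a hypothetical illustration of that mapping; the function and field names, normalization to [-1, 1], and axis conventions are assumptions, not the actual app implementation.

```python
from dataclasses import dataclass

@dataclass
class StickInput:
    """Normalized joystick deflection in [-1, 1] on each axis."""
    x: float  # left/right deflection
    y: float  # up/down deflection

def manual_command(left: StickInput, right: StickInput) -> dict:
    """Map two sticks to the four UAV control axes, mirroring the
    Manual interface described above (hypothetical names/scaling)."""
    return {
        "throttle": left.y,   # left stick up/down -> altitude
        "yaw":      left.x,   # left stick left/right -> yaw
        "pitch":    right.y,  # right stick up/down -> forward/back
        "roll":     right.x,  # right stick left/right -> lateral
    }
```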

After being trained, participants are given a test mission that requires them to complete a task in a representative disaster response environment. Participants are briefed that a nuclear reactor has been partially destroyed by an earthquake. The building is heavily damaged, and it is unclear how compromised the containment of radioactive material has become. Due to the risk of sending humans into this environment, a UAV will be sent to determine the extent of the damage. The participant is asked to fly the UAV through the building to reach a control panel near the reactor and to read key information on the status of the reactor from this panel. The participant is then to fly the UAV safely back to the takeoff location for recovery of the vehicle. The participant is provided a general map of the building layout containing major features such as walls, hallways, and doors, which is also shown in the map in the interface. Using this map, the participant can form a general idea of the path to take to the control panel. See Environment Layout for more detail on the environment used.

Once the participant has reached the control panel and gathered the required information, an auditory explosion noise will be played, and several key features within the environment will be altered. The result is that the participant cannot take the same path back out of the environment that was used to go in. The participant will be responsible for using the interface to gain an understanding of what has changed in the environment, and formulate a new egress plan for the UAV.

Participants are assigned to one of three experimental groups, defining the training content and control interface used in the test mission. These are as follows:

· Group 1 receives training on the manual interface only, and flies the UAV using the manual interface during the test mission

· Group 2 receives training on both the manual and supervisory interface, and flies the UAV using the supervisory interface during the test mission

· Group 3 receives training on the supervisory interface only, and flies the UAV using the supervisory interface during the test mission
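With 36 participants and three groups, assignment would plausibly be balanced at 12 per group. A sketch of one way to do this, via shuffled block randomization, is below; the group labels follow the list above, but the function, seed, and approach are illustrative assumptions rather than the study's actual randomization procedure.

```python
import random

def assign_groups(n_participants: int, groups: list, seed: int = 0) -> list:
    """Balanced random assignment: each group receives exactly
    n_participants / len(groups) participants (hypothetical helper)."""
    assert n_participants % len(groups) == 0
    # Build equal-sized blocks, then shuffle the combined list
    slots = groups * (n_participants // len(groups))
    random.Random(seed).shuffle(slots)
    return slots  # slots[i] is the group for participant i

assignment = assign_groups(36, ["manual", "both", "supervisory"])
```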

Participants’ performance on the test mission is evaluated through several key metrics: the time to reach the control panel, the accuracy of the information read from the control panel, the time for the UAV to return to the start position, and any crashes or accidents that prevent a safe return.
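The per-participant metrics just listed could be captured in a simple record like the following sketch; the field names and units are assumptions introduced for illustration, not the study's actual data schema.

```python
from dataclasses import dataclass

@dataclass
class MissionResult:
    """Performance metrics for one test mission (field names assumed)."""
    time_to_panel_s: float    # time to reach the control panel
    panel_read_correct: bool  # accuracy of the reactor-status reading
    time_to_return_s: float   # time to return to the start position
    crashed: bool             # any crash/accident preventing safe return

    @property
    def total_time_s(self) -> float:
        """Combined in-and-out mission time."""
        return self.time_to_panel_s + self.time_to_return_s
```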

Detailed Procedure

The procedure consists of a series of steps for each experimental participant, outlined below:

Participants are paid with an Amazon gift card worth $25/$40/$50 depending on their group (supervisory control/manual control/both), amounts determined by the increasing time required for training. To incentivize best performance, participants will also be eligible to win a $100 Amazon gift certificate for the best performance, determined by the fastest time into and out of the building while correctly identifying the reactor status information.
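The prize rule stated above (fastest in-and-out time among participants who correctly read the reactor status) can be expressed as a small selection function. The sketch below is purely illustrative; the data layout and participant IDs are assumptions.

```python
def prize_winner(results: dict) -> str:
    """Return the participant ID with the fastest total mission time
    among those who correctly identified the reactor status
    (hypothetical helper; data layout is an assumption)."""
    eligible = {pid: t for pid, (t, correct) in results.items() if correct}
    return min(eligible, key=eligible.get)

# Hypothetical results: id -> (total mission time in seconds, read correct?)
results = {
    "P01": (310.0, True),
    "P02": (295.0, False),  # fastest, but misread the panel
    "P03": (330.0, True),
}
```

Note that P02, though fastest, is excluded for misreading the panel, so P01 would win under this rule.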