Unmanned Forces: Building a Multi-Domain Autonomous Fleet

By Katie Rittoo

Success on the frontline relies largely upon harmonious operations between all sectors of the armed forces, whether that is Army, Navy, Air Force or amphibious units. Naturally, each unit is assigned operations according to the capabilities it brings to the battlefield, and together they combine the best of each domain. While this concept is a clear choice for manned operations, multi-domain collaboration is rarely seen in unmanned operations. Unmanned systems are increasingly used in the military domain across air, ground and sea in their own right, but they are rarely used in tandem.

Key questions inevitably arise regarding the ethics and practicalities of deploying unmanned systems into an active military operational environment. Can you send a system into enemy territory with a list of objectives and trust it to do “the right thing”? Can you program it to react rapidly to changes in a volatile environment? These are just some of the questions facing lawmakers, policy makers and the Executive Branch. As a result, today’s fielded unmanned assets have very limited levels of decision making. They can act as a data gathering tool, but not as a force multiplying and enabling tool.

With an increasing number of unmanned vehicle manufacturers bringing products to market, it is essential to discuss the challenges of assembling an unmanned fleet composed of the best vehicles the market has to offer, regardless of manufacturer or domain. Rather than one specialist manufacturer building a cross-domain fleet from the ground up, the key to this approach is to use existing offerings and tackle the challenges from an after-market software perspective.

There are several hurdles that need to be overcome in order to achieve seamless multi-domain operations where the benefits of unmanned operations are not outweighed by prohibitively heavy implementation costs. Issues such as proprietary standards that are incapable of communicating across platforms, and challenges intrinsic to autonomous unmanned operations, have limited the potential of unmanned vehicles. Another roadblock on the path toward collaborative autonomy is that manufacturers of unmanned systems tend to be specialists in autonomy for their own particular domain, whether that be land, air or maritime.

Each domain comes with its own challenges. To date, autonomous systems have seen the most development and traction in the air. The air domain presents very challenging obstacles, such as creating an aerodynamic system capable of successful landings and take-offs, as well as maintaining flight unaided by a pilot. There are also legal challenges such as airspace restrictions and increasingly tight regulations. However, the value that aerial drones have proven to add to operations, for example in Iraq and Afghanistan, means that there has been substantial investment and R&D despite the challenging environment. Investment has been such that unmanned aerial vehicles have branched out from their early roots in the military domain into commercial markets, where they are commonly used for surveillance, survey and delivery tasks. In the maritime domain, autonomy is progressing to sectors outside of the early-adopter mine countermeasures (MCM) community, such as oceanography and oil and gas. Unmanned systems have cropped up across marine industries, assisting warfighters in MCM operations, marine researchers, and projects in the commercial maritime sector. However, the MCM community is by far the biggest end user of unmanned maritime systems, which are equipped with varying levels of autonomy. While several steps have been taken to develop fully autonomous systems, there is still a long way to go before solutions gain widespread market traction.

While unmanned systems are not currently being used to their full potential, there are two key areas of investment that could take them from auxiliary assets to full-blown multi-domain squads carrying out over-the-horizon autonomous operations: first, a system that allows real-time communication between vehicles to facilitate collaboration; and second, software that enables true autonomous operations, where the fleet is able to adapt, respond and react in real time.

Moving from Waypoints to Autonomy

The term “autonomy” is often applied sweepingly to a broad range of technology spanning widely varying levels of capability.

True autonomy is a leap beyond the automation of basic functions, such as waypoint navigation. Autonomy implies independence and a degree of intelligence – the ability to sense, interpret, decide and act without external control.

There are three main threads to autonomy: true adaptive autonomy, where the vehicle is able to adjust its behavior in response to feedback from the environment; collaborative autonomy which enables unmanned systems to work as part of an adaptive fleet with each asset communicating continuously with the others in the fleet to achieve a common goal; and thirdly over-the-horizon operations where the fleet is able to perform tasks autonomously supported only by a launch and recovery team.

In essence, true adaptive autonomy is goal-based planning: telling robots what to do rather than how to do it. With a goal-based architecture the user provides the vehicle with a set of goals, such as surveying an area or looking for certain objects. The vehicle and its software engine compute what waypoints are necessary to accomplish those goals while staying out of identified hazardous areas. Goal-based autonomy simplifies human-machine interaction in that the human simply states what they want to accomplish. The software engine does the mission planning, leveraging all the expertise programmed into it and learned from previous missions to generate optimal plans.
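To make the goal-to-waypoint idea concrete, the sketch below expands operator goals into a lawnmower-style survey pattern that avoids declared hazard areas. This is a minimal Python illustration, not SeeByte's implementation; the `Goal` and `Hazard` classes and the expansion rule are hypothetical stand-ins for whatever a real engine uses.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    """A user-stated objective, e.g. survey a rectangular region."""
    name: str
    region: tuple  # (x_min, y_min, x_max, y_max) bounding box, meters
    priority: int = 1

@dataclass
class Hazard:
    """A region the vehicle must stay out of."""
    region: tuple

def _inside(point, region):
    x, y = point
    x_min, y_min, x_max, y_max = region
    return x_min <= x <= x_max and y_min <= y <= y_max

def plan_waypoints(goals, hazards, spacing=50.0):
    """Expand goals into a lawnmower waypoint list, dropping any
    waypoint that falls inside a hazard zone. The operator never
    writes waypoints; they fall out of the stated goals."""
    waypoints = []
    for goal in sorted(goals, key=lambda g: -g.priority):
        x_min, y_min, x_max, y_max = goal.region
        y, leg = y_min, 0
        while y <= y_max:
            xs = (x_min, x_max) if leg % 2 == 0 else (x_max, x_min)
            for x in xs:
                if not any(_inside((x, y), h.region) for h in hazards):
                    waypoints.append((x, y))
            y += spacing
            leg += 1
    return waypoints
```

A real engine would also replan around hazards rather than simply skipping points, but the essential inversion is the same: goals in, waypoints out.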

Secondly, to create an autonomous fleet of vehicles, it is essential that they are able to communicate with each other through a shared ‘language’, regardless of vehicle make or model. This can be achieved by integrating the vehicles onto a shared command station which provides a central control capability.

This gives rise to the third and final thread of autonomy: over-the-horizon operations. The crux of this approach relies on goal-based mission planning; the fleet is assigned a task or tasks by the operator pre-deployment, but software decides the optimal approach based on feedback from the vehicle payloads. The technologies developed provide the first major steps toward a paradigm shift: a move away from manned frontline operations toward unmanned over-the-horizon multi-squad operations supported by a shore-side team.

But how do we go from an unmanned system that follows waypoints to a team of vehicles that make and communicate decisions? First and foremost, autonomy relies on the software underpinning the vehicles. Software built to work with only one particular system will inevitably lead to limited autonomy. The most success is found with a goal-based, open, modular, scalable architecture which acts like a central brain for a fleet of unmanned systems – in other words, a software engine upon which industry and government laboratories can collaborate and contribute new robotic capabilities.

Open means the system can easily be extended to work with new systems, new sensors and new programmers and hardware manufacturers – the goal should always be best-of-breed. An improvement in code, sensing or industrial applications is something that we should be able to take advantage of, without building a whole new system. By generating waypoints that dynamically stay ahead of the vehicle, the vehicle can adapt the mission in real-time as it senses new stimuli. Since all autonomous underwater vehicle (AUV) and unmanned surface vehicle (USV) systems to date have been designed to follow waypoints, this simple concept enables dynamic control of the vehicles in the maritime domain. In other words, the autonomy engine acts as a backseat driver that doesn’t interfere with the vehicle manufacturers’ proprietary operating software.
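The “backseat driver” idea above can be sketched as a loop that computes only the next waypoint, a short step ahead of the vehicle, bending the heading when a sensed obstacle comes too close; the vehicle's native waypoint follower is never touched. The geometry and the simple 90-degree avoidance rule here are invented for illustration, not taken from any vendor's software.

```python
import math

def next_waypoint(vehicle_pos, goal_pos, obstacles, step=25.0):
    """Return the next waypoint a fixed step ahead of the vehicle.

    Steers toward the goal; if any sensed obstacle (x, y, radius)
    is within one step of its exclusion radius, the heading is bent
    90 degrees off the obstacle bearing instead.
    """
    vx, vy = vehicle_pos
    gx, gy = goal_pos
    heading = math.atan2(gy - vy, gx - vx)
    for ox, oy, radius in obstacles:
        if math.hypot(ox - vx, oy - vy) < radius + step:
            bearing = math.atan2(oy - vy, ox - vx)
            heading = bearing + math.pi / 2  # steer off the obstacle
    return (vx + step * math.cos(heading), vy + step * math.sin(heading))
```

Because each call emits an ordinary waypoint, any vehicle that can already follow waypoints can be driven this way, which is the point of the backseat-driver approach.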

Modularity is required to help developers to choose best-of-breed capability. As new autonomous behaviors are developed to enable unmanned systems to react to the environment, including the actions of other unmanned systems or threats to the fleet, additional modules can be integrated, or existing ones can be replaced, to provide new capability to the systems. The developer is therefore able to design solutions to evolving requirements without having to rebuild the entire system.

Scalability is required to help the system work with any number of vehicles. The architecture needs to be able to work with a single system but enable multiple vehicles to share the mission goals and updates to enable full collaborative missions where unmanned systems tackle tasks as part of a dynamically cooperating team.
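One way to picture that scalability is a goal-allocation loop that is indifferent to fleet size: the same code runs whether one vehicle or twenty share the mission goals. The greedy nearest-idle-vehicle rule below is a hypothetical sketch for illustration only, not a description of any real allocation scheme.

```python
import math

def assign_goals(vehicles, goals):
    """Greedy allocation: each goal, in priority order, goes to the
    nearest still-idle vehicle. vehicles and goals map names to
    (x, y) positions; returns {goal_name: vehicle_name}."""
    assignments = {}
    idle = dict(vehicles)
    for goal_name, (gx, gy) in goals.items():
        if not idle:
            break  # more goals than vehicles; the rest queue for later
        nearest = min(idle, key=lambda v: math.hypot(idle[v][0] - gx,
                                                     idle[v][1] - gy))
        assignments[goal_name] = nearest
        del idle[nearest]
    return assignments
```

Nothing in the loop depends on the number of vehicles, which is the property the architecture needs: adding an asset just adds an entry to the dictionary.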

SeeByte has made steps toward rising to this challenge with Neptune, an autonomy engine which has been designed and developed to be goal-based, open, modular and scalable.

Pedro Patron, Engineering Manager at SeeByte, describes the scenario: “Originally, when SeeByte looked at the state of the art of autonomous systems in the maritime domain, it was faced with a stark reality: they had been designed to follow predetermined waypoints. Often there wasn’t even the payload processing power on board to do anything beyond following waypoints. The crucial paradigm shift was to move away from waypoints and into autonomy. Autonomy means that the success or failure of the entire fleet relies on the software underpinning it. While this is a daunting task, it is a realistic and achievable method of extending the capabilities of the robots that users already have access to, without resorting to robot fleets redesigned from scratch with autonomous capabilities.”

With Neptune, the vehicles employ user-designed behaviors to accomplish goals. These behaviors provide the vehicles with sets of waypoints that adapt on the fly to suit the sensed environment, balanced against the user-defined goals. The design of Neptune allows third parties to develop their own behaviors and implement them within the Neptune Autonomy Engine, so behaviors can be developed and chosen to suit different mission profiles. The same is true of the on-board sensors that sense the environment and the algorithms that tell the vehicle where it is, both in relation to where it is supposed to be in the world and relative to the other vehicles.

SeeByte’s next challenge was to develop a common framework for all the surface and underwater vehicles to share the mission objectives. This was achieved by enabling each system to keep a model of what the world looks like, how it is changing as new data is gathered, and the progress that has been made against each of the human-defined objectives. It is incumbent upon each system to share its latest information, or worldview, with all the other vehicles in the fleet. The use of high-level metadata to describe the world and the mission’s progress through it makes it possible to share the information over low-bandwidth links, and this can be done opportunistically. Each vehicle therefore keeps an up-to-date view of its own progress and the progress of all the other vehicles in the fleet. This view can be shared with the operator, so they can monitor operations as they happen and confirm that things are going as expected; when they don’t, Neptune describes what did happen and why.
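The metadata-over-low-bandwidth idea can be illustrated with a toy exchange: each vehicle broadcasts a compact summary of its world model rather than raw sensor data, and teammates fold that summary into their own picture. The message format, field names and merge rule below are invented for the example; a real system would use its own schema and an acoustic or radio link.

```python
import json
import zlib

def worldview_message(vehicle_id, progress, contacts):
    """Summarize a vehicle's world model as compact compressed metadata,
    small enough for a low-bandwidth (e.g. acoustic) link."""
    summary = {
        "id": vehicle_id,
        "progress": progress,   # e.g. {"survey_A": 0.6} fraction complete
        "contacts": contacts,   # e.g. [{"type": "mine-like", "pos": [12, 40]}]
    }
    return zlib.compress(json.dumps(summary, separators=(",", ":")).encode())

def merge_worldviews(local_progress, message):
    """Fold a teammate's broadcast into the local picture, keeping the
    highest reported completion per objective."""
    remote = json.loads(zlib.decompress(message))
    for objective, fraction in remote["progress"].items():
        local_progress[objective] = max(local_progress.get(objective, 0.0),
                                        fraction)
    return local_progress
```

Because only objective-level metadata crosses the link, every vehicle (and the operator's display) can hold a fleet-wide view without ever exchanging payload data.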

To date Neptune has been made to work with AUVs, gliders and USVs from different manufacturers and with significantly different payloads. Neptune has been optimized for missions encompassing mine countermeasures, reconnaissance, and oceanography. However, a significant milestone was reached in October of 2016 with Unmanned Warrior.

Unmanned Warrior: Autonomy in Action

Unmanned Warrior was a real test. This demonstration was organized by the U.K.’s Royal Navy to showcase never-seen-before capabilities in the field of autonomy and unmanned systems and gathered over 50 vehicles, sensors and systems from different nations, different vendors and different government sponsored laboratories.

Hell Bay 4 was run in conjunction with The Technical Cooperation Program (TTCP) and became a big part of the Unmanned Warrior demonstration. TTCP is an international organization that collaborates on defense scientific and technical information exchange, program harmonization and alignment, and shared research activities for the governments of the United States, United Kingdom, Canada, Australia and New Zealand.

In Hell Bay 4 SeeByte supported the U.S. Navy labs at Naval Surface Warfare Center Panama City Division (NSWC-PCD) and Space & Naval Warfare Systems Center, Pacific (SPAWAR-SSCPAC), Defence Research and Development Canada (DRDC) and the U.K.’s Defence Science and Technology Laboratory (Dstl). SeeByte’s Neptune software formed the basis of the autonomy engine in the U.K.’s Maritime Architecture Framework (MAF) to facilitate autonomous collaboration between unmanned assets from multiple nations. The U.K. MAF provides advanced autonomous capabilities and allows fleets of unmanned systems, both surface and subsurface, to be managed from a single command station. The main focus was on over-the-horizon multi-squad, collaborative autonomous and automatic operations, allowing subsea and surface autonomous marine assets to communicate and report to shore via an unmanned aerial vehicle communications repeater.

Going forward, there are several hurdles that must be overcome before these advances in technology become viable solutions for modern warfare. Trials and testing are key to building robust systems capable of withstanding unpredictable environments, and taking heed of lessons learned in these early trials will be essential. There are clear benefits to deploying over-the-horizon autonomous fleets, namely putting a layer between the front line and the operators. To reach a stage where these systems can be reliably deployed, there must be not only investment in the core technology itself, but extensive testing of the system in real-world environments.