The OpenFog Reference Architecture: A baseline for interoperability in the IIoT cloud-to-things continuum

Fog computing concepts have been floating in the ether for some time now, but industry has been challenged to put the theoretical models behind the architecture to use in the real world. Recently, however, the OpenFog Consortium released the OpenFog Reference Architecture (RA), a foundational document intended to enable interoperable semiconductors, systems, and software for Industrial Internet of Things (IIoT) stakeholders, industry-wide. In this roundtable interview, Dr. Maria Gorlatova, Associate Research Scholar at Princeton University and Co-Chair of the OpenFog Consortium Communications Working Group; Brett Murphy, Director of Business Development for IIoT at Real-Time Innovations (RTI) and Co-Chair of the OpenFog Consortium Software Infrastructure Group; and Rob Swanson, Principal Engineer at Intel and Technical Chair of the OpenFog Consortium, explain how the OpenFog RA’s eight technical pillars of Security, Scalability, Openness, Autonomy, RAS (Reliability, Availability, and Serviceability), Agility, Hierarchy, and Programmability provide a roadmap for developing fog solutions for the cloud-to-things continuum.

Can you briefly define a fog computing architecture as it would exist in a moderate-scale IIoT deployment?

MURPHY: For the OpenFog Consortium, the IIoT covers more than manufacturing or process automation; it also includes transportation, healthcare, energy, city infrastructure, and more. Across these industries, as IIoT is deployed, we see challenges that make fog computing a necessity. With terabytes of data being generated near the “edge” of these systems, it’s not practical to stream data to the cloud and back, all day, every day. There are latency challenges, network bandwidth costs to consider, the availability of those networks on top of this, and security concerns. Fog computing addresses these challenges in a way to process, protect, and act on this data much closer to where it’s created.

For example, consider an airport that constantly uses video surveillance to monitor the activities of tens of thousands of passengers and employees every day. Huge amounts of video data are created, and that data is best processed and analyzed by fog compute nodes near the parking garage entrance, airport entrance, security line, and airport gates where the video cameras sit. License plates, luggage, and other pertinent data are linked to unique individuals as they enter the airport and proceed through it. People are checked against a no-fly list in an airport data center, and suspicious behavior is flagged to airport security. Data passes across layers of local fog compute nodes, and analytics and other applications run across many different layers in the system. When an aircraft leaves a gate, the destination airport’s system is notified through the cloud so it can track the same people as they leave that airport.
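The layered escalation described above — local analysis at the camera, with only significant events passed up toward the airport data center — can be sketched in a few lines. This is a minimal illustration of the hierarchy concept, not an OpenFog API; the class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class FogNode:
    """One node in a fog hierarchy (names are illustrative, not an OpenFog API)."""
    name: str
    parent: "FogNode | None" = None
    flagged: list = field(default_factory=list)

    def process(self, event: dict) -> None:
        # Analyze locally; only escalate events that need wider context.
        if event.get("suspicious"):
            self.escalate(event)

    def escalate(self, event: dict) -> None:
        if self.parent is not None:
            self.parent.escalate(event)
        else:
            # Top of the hierarchy (e.g., the airport data center):
            # this is where watch-list checks would run.
            self.flagged.append(event)

# Layered topology: a camera-level node feeds a terminal-level node,
# which feeds the airport data center.
data_center = FogNode("data-center")
terminal = FogNode("terminal", parent=data_center)
gate_cam = FogNode("gate-camera", parent=terminal)

gate_cam.process({"person_id": 17, "suspicious": False})  # handled locally
gate_cam.process({"person_id": 42, "suspicious": True})   # escalated upward
```

Note that ordinary events never leave the camera-level node, which is the bandwidth-saving point of the architecture.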

SWANSON: This same mesh-like architecture of networked fog compute nodes will be used across other industries and scenarios, with data being passed peer-to-peer within layers of fog compute nodes and between layers, all the way up to the cloud or data center. Consider a pump analytics system: it will need to use machine learning, with inferencing based on models trained for various pump scenarios (such as audio and vibration signatures). The training of these machine learning models will take place on large servers in a data center or the cloud, while the models/algorithms themselves will be deployed to fog compute nodes on or near the pumps. In addition, since pumps are physically connected to each other through the pipes and fluid running through them in sequence, the behavior of one pump affects all others downstream. So the analytics are connected as well: through fog computing and peer-to-peer data communications, the analytics can react to adjacent pump behavior.
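The pump scenario can be sketched as follows: a cloud-trained model (reduced here to a single trained threshold) is deployed to a fog node at each pump, inference runs locally, and an anomaly at one pump is shared peer-to-peer with downstream pumps. This is a hand-rolled illustration under those assumptions, not code from the OpenFog RA, and all names are hypothetical.

```python
class PumpNode:
    """Fog compute node attached to one pump (illustrative sketch)."""

    def __init__(self, name, vibration_limit):
        self.name = name
        self.vibration_limit = vibration_limit  # stand-in for a model trained in the cloud
        self.downstream = []  # peers reached over peer-to-peer links
        self.alerts = []

    def infer(self, vibration):
        # Local inference against the deployed model; no round trip to the cloud.
        if vibration > self.vibration_limit:
            self.alerts.append(("self", vibration))
            for peer in self.downstream:
                peer.notify(self.name, vibration)

    def notify(self, source, vibration):
        # An upstream anomaly affects this pump too, so record it
        # for the local analytics to react to.
        self.alerts.append((source, vibration))

# Two pumps in sequence: A feeds B through the same pipe.
pump_a = PumpNode("A", vibration_limit=0.8)
pump_b = PumpNode("B", vibration_limit=0.8)
pump_a.downstream.append(pump_b)

pump_a.infer(0.5)  # normal reading: nothing happens
pump_a.infer(0.9)  # anomaly: pump A alerts itself and notifies pump B
```

In a real deployment the threshold would be a trained model pushed out from the data center, and the peer notification would travel over the fog network rather than an in-process call.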

What are the proposed benefits of fog networking over traditional network architectures for IoT?

SWANSON: Fog computing really works around three main vectors: minimization of data backhaul to a cloud (on-premises or off-premises), reduced latency, and reliability of operation.

As IIoT systems grow in complexity and capability, they move beyond monitoring into optimization and, eventually, autonomy of system processes. With processing close to the edge, quick response to events is better assured, and with processing deployed across layers from the edge to the cloud, resilience and security are increased.

MURPHY: The simplest IIoT architectures use IoT gateways to gather data from edge devices and move it to the cloud for analysis and processing. These are typically monitoring use cases. The next step in system capability comes with deploying analytics and processing to the edge devices or gateways, turning them into fog compute nodes. This provides the IIoT system the ability to handle much more data, to increase the depth of the analytics in the system, and to reduce the network bandwidth required back to the cloud.
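The bandwidth reduction Murphy describes comes from aggregating and filtering at the gateway-turned-fog-node, so only summaries and anomalies travel to the cloud. The sketch below shows that idea under simple assumptions (numeric sensor readings, a fixed anomaly threshold); the function and field names are illustrative, not defined by the OpenFog RA.

```python
import statistics

def summarize_window(readings, anomaly_threshold):
    """Reduce a window of raw edge readings to a compact payload for the cloud.

    Only the summary statistics and any anomalous raw samples travel
    upstream, instead of the entire raw stream.
    """
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "anomalies": [r for r in readings if r > anomaly_threshold],
    }

# Raw readings collected at the edge (e.g., temperatures over one window).
window = [21.0, 21.2, 20.9, 35.5, 21.1]
payload = summarize_window(window, anomaly_threshold=30.0)
# Five raw readings shrink to a four-field summary plus one anomalous sample.
```

As the window grows, the payload stays roughly constant in size, which is where the network-bandwidth savings back to the cloud come from.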

GORLATOVA: In addition, low latency and “intelligence at the edge” of a fog network enable fundamentally new capabilities in IoT devices, such as context awareness and adaptive behavior at a level that is impossible with current networking approaches.

What is the OpenFog RA, and how will it help stakeholders overcome the obstacles of fog deployment in the IIoT?

MURPHY: Some open-architecture systems that use fog computing concepts have just recently begun to be deployed in a few industries, mostly in pilot projects. But those are separate efforts with little cross-pollination and considerable duplication of effort.

Traditional systems in most industries are still developed with proprietary vendor platforms, with some including community or ecosystem partners. This walled-garden platform model with vendor lock-in is one inhibitor to adoption. In addition, there is the cost of deployment of fog computing (more compute capability across the system with pervasive networking) over deploying simple IIoT gateways and doing cloud computing. There has to be a business, technical, and/or regulatory reason to move to fog computing. This is why we believe certain scenarios will see movement to fog computing prior to others.

SWANSON: The OpenFog RA is a framework and roadmap to help software developers and system architects create the first generation of open fog computing systems. It creates a common language for fog computing, representing a unified framework for providing computing, networking, and storage in the cloud-to-things continuum. We’ve been working on this since OpenFog was formed in November 2015, and the OpenFog RA document has just been released.

The OpenFog RA intends to align requirements for all of the suppliers in the fog computing environment, from silicon manufacturing to system manufacturing to software. This means that silicon manufacturers need to provide a baseline of technology and system manufacturers need to build on it. This is required to establish a baseline for software running on these systems.

Most setbacks are a result of incomplete requirements or just flatly not following system design requirements. Our goal with the OpenFog RA, and the subsequent work of OpenFog, is to address those requirements for areas of importance in fog computing.

How will advanced technologies such as 5G and AI factor into fog computing, and will the OpenFog RA address integration of these technologies?

MURPHY: The future of IIoT will be driven by the deployment of pervasive networking, pervasive computing, and AI. Fog computing is the architecture that brings those three elements together. AI enables computer systems to think and operate independently on data. This helps IIoT systems better deliver on advanced use cases around optimization and autonomy. It is critical that as AI emerges we have a stable, baseline architecture for computing. 5G promises to connect and allocate bandwidth more efficiently than 4G, but that promise needs to be realized.

GORLATOVA: Fog will bring AI close to the endpoint devices. In developing the OpenFog RA, we made multiple decisions that will allow this to happen.

The IoT has been described as an ecosystem with more standards bodies than actual standards. How will the OpenFog RA integrate with other leading technology standards, if at all?

SWANSON: The OpenFog Consortium is not a standards organization, but much of our work focuses on creating and testing the requirements for the eventual standards that will be created to enable component-level interoperability. We are partnering with standards organizations such as IEEE in this work. We are identifying many of the main SDOs and consortia that we want to work with so that we can leverage their work and not re-invent the wheel as it relates to IoT requirements.

The OpenFog RA is an important first step in establishing interoperability. But we know that even with the most stringent standards, interoperability remains a challenge, so we will also hold fog fests to help address these issues.

We are first addressing the various interfaces between our architectural layers. This will help with overall system composability. One such example is the liaison arrangement we have with the Open Connectivity Foundation (OCF), where we are working to integrate their efforts into ours.

GORLATOVA: We also recently signed a partnership agreement with ETSI-MEC and are actively collaborating with them.

Where and when should we expect to see fog truly take hold?

GORLATOVA: Fog computing will accelerate dramatically over the next 2-5 years. On the academic side, we already see a dramatic increase in interest in fog-related challenges, with many exciting projects currently underway in universities worldwide. We are highly likely to see the academic community solve many important challenges in the different aspects of fog deployment.

SWANSON: We’re starting to see adoption take place, but we expect this to really accelerate as the specifications for standards start to emerge. There will be early market adopters in certain industries where fog is essential and necessary to the use cases. For example, visual analytics will likely be an early market where fog computing takes off. This is really addressing the backhaul minimization aspect where you cannot afford the network costs to process everything in the backend cloud. We also think that transportation, energy, and smart cities will be early adopters. The City of Barcelona, for example, is already using fog computing for waste management, traffic management, and smart lighting, so city administrators can have real-time information to make decisions in a single platform.

MURPHY: We believe fog computing is the fundamental enabler of the more advanced IIoT use cases coming in the future. It brings together the pervasive networking and computing needed for IIoT. Our goal at OpenFog is to define an open architecture for fog computing that ensures a vibrant ecosystem of providers and interoperable solutions that will accelerate the IIoT.