Cutting the cord: How edge intelligence is enabling the IoT to go where cloud can’t

In a world where data’s time to value or irrelevancy may be measured in milliseconds, the latency introduced in transferring data to the cloud threatens to undermine many of the Internet of Things’ (IoT) most compelling use cases.

Think of data as the fuel that powers our new decision-making engines – fail to get the fuel to an engine fast enough and it splutters and dies. Meanwhile, that fuel is constantly decreasing in quality and usefulness. But solving this problem isn’t a case of building bigger pipes to carry the world’s data or increasing the cloud’s compute capability.

The issue is one of balance – or, as we see now, imbalance. In 2016 the IoT generated 1.6 trillion gigabytes of data, and Cisco estimates this will rise to 500 trillion gigabytes by the end of 2019, growing exponentially in the years that follow. As the data grows, so will the friction it encounters along its journey.


Physical problem, physical solution

It’s clear that we need a new path – one that isn’t simply about adding more central cloud compute capability to the data center. While we do need more compute, it needs to be strategically placed so that we solve both the digital and the physical distance challenges created by increasing data congestion. Since distance increases network latency, the obvious answer is to decrease the real-world distance between where sensor data is collected and where it is computed.

This means enabling compute in the most physically appropriate location – whether that is on a device, in network devices such as IoT gateways, or in the traditional cloud.
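
To make the distance argument concrete, here is a back-of-the-envelope sketch of how round-trip propagation delay alone grows at each compute tier. The distances and the fiber propagation speed are illustrative assumptions, not measurements:

```python
# Illustrative only: how physical distance drives round-trip latency.
# All distances below are assumptions chosen for the sake of example.

SPEED_IN_FIBER_KM_S = 200_000  # light travels at roughly 2/3 c in optical fiber

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in ms (ignores queuing and processing)."""
    return (2 * distance_km / SPEED_IN_FIBER_KM_S) * 1000

tiers = {
    "on-device":         0.0,     # no network hop at all
    "local IoT gateway": 0.1,     # ~100 m away
    "regional cloud":    1_000,   # ~1,000 km away
    "distant cloud":     10_000,  # ~10,000 km away
}

for tier, km in tiers.items():
    print(f"{tier:18s} ≈ {round_trip_ms(km):7.3f} ms round trip (propagation only)")
```

Real networks add queuing, routing and processing delays on top of this, which only strengthens the case for shortening the physical path.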

While there are other factors to consider – such as the level of computing required, privacy and security needs, and how much latency can be tolerated – the general principle of enabling a more intelligent ‘edge’ is now a compute priority. By adding more compute power where it counts, we will be able to scale the Internet of Things without exacerbating the latency issues – and the extensive costs of transferring raw data – that would otherwise disrupt value.
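
As a toy illustration of how those factors might steer placement, the sketch below encodes them as a simple heuristic. The thresholds and factor names are invented for this example, not drawn from any real deployment:

```python
# A toy placement heuristic; thresholds and factors are invented for illustration.
def choose_tier(latency_budget_ms: float, data_is_sensitive: bool,
                needs_heavy_compute: bool) -> str:
    if latency_budget_ms < 10 or data_is_sensitive:
        return "on-device"   # hard real-time or private data stays local
    if not needs_heavy_compute:
        return "gateway"     # modest workloads can stop at the network edge
    return "cloud"           # bulk analytics can tolerate the round trip

print(choose_tier(latency_budget_ms=5, data_is_sensitive=False,
                  needs_heavy_compute=True))    # -> on-device
print(choose_tier(latency_budget_ms=200, data_is_sensitive=False,
                  needs_heavy_compute=True))    # -> cloud
```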

A recent study by McKinsey concluded that, whereas the gains of cloud-based computing were felt mostly in the technology sector, edge compute provides a unique opportunity to enable a far broader spread of industries.

Enabling a new wave of use cases

The study identified 107 unique edge computing use cases across 11 industries – from improved retail processes to the tracking of humanitarian inventories in disaster situations. In particular, the study highlighted the hugely positive effect edge compute capability can have on environments such as mines and oil rigs, underwater applications and other adverse environments that may lack the safety net of always-on internet connectivity – or even always-on electricity.

Today, the majority of devices derive their value from the fact that they are connected into a huge network of intelligence. Smart speakers, for example, give the illusion of in-built intelligence yet actually rely on powerful AI algorithms in the cloud to interpret anything other than an activation keyword. In this case, a delay while the device waits for an answer is unlikely to present a life-or-death situation. Yet consider a medical device or app being used in a remote environment: a data bottleneck or poor connectivity might be far more serious. It’s therefore fundamental that a capable edge device should have the power and resources to perform – and to continue to perform – its primary function independently.

True IoT edge intelligence is a complex challenge

Yet ‘adding intelligence’ isn’t as simple as putting a more powerful processor into every IoT device: increase the compute capability and the complexity, cost and power consumption can all skyrocket. Autonomous vehicles, perhaps today’s most complex example of an edge device, must be capable of making sense of the world around them via powerful machine learning (ML) processing of massive amounts of data received from multiple sensors in real time.

To put this requirement into context, the autopilot software in a Boeing 787 Dreamliner comprises around 14 million lines of code; a Level 5 (fully autonomous) self-driving car’s software is likely to approach one billion lines. That means packing supercomputer-level power into a device that doesn’t take up half the car’s trunk, emit more heat than a Death Valley sun or account for most of the vehicle’s cost.


Through programs like Project Trillium, Arm is committed to finding ways to add intelligent compute capability, such as artificial intelligence (AI), to even the smallest IoT devices while retaining the energy efficiency and cost effectiveness our technology is known for – helping to bring the benefits of Arm-powered compute to the IoT’s most challenging environments.
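
One common technique for squeezing ML inference into constrained devices is post-training quantization. The sketch below is a generic illustration of the idea – mapping 32-bit float weights onto 8-bit integers to cut memory four-fold – and is not a description of Project Trillium itself:

```python
# Generic illustration of post-training 8-bit quantization, not Arm's implementation.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto int8 with a single linear scale factor."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.randn(1000).astype(np.float32)  # stand-in for a model layer
q, scale = quantize_int8(weights)

print(f"memory: {weights.nbytes} B (float32) -> {q.nbytes} B (int8), 4x smaller")
print(f"worst-case rounding error: {np.abs(weights - dequantize(q, scale)).max():.4f}")
```

Smaller integer weights also mean cheaper arithmetic, which is part of what makes inference feasible on milliwatt-class devices.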

Security, security, security

Security is another key concern: as the frequency, severity and complexity of malicious attacks against transmitted data increase, protecting it becomes paramount. Processing information at the edge and transmitting only the useful data mitigates much of the risk, yet does not absolve us of security concerns. Even without connectivity to an external system, security is critical for protecting the integrity and confidentiality of the data and firmware stored on the device, as well as for controlling lifecycle management tasks such as over-the-air (OTA) firmware upgrades.
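
To make the OTA point concrete, here is a minimal sketch of a device verifying a vendor-signed firmware image before flashing it, using Ed25519 signatures from the Python cryptography package. Key provisioning, secure storage and rollback protection are deliberately omitted; the flow is illustrative only:

```python
# Illustrative OTA verification flow; not a production update mechanism.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: sign the firmware image. (The key pair is generated on the fly
# here purely for demonstration.)
vendor_key = Ed25519PrivateKey.generate()
firmware = b"\x7fELF...new firmware image bytes..."
signature = vendor_key.sign(firmware)

# Device side: verify against the vendor public key baked into the device
# before flashing anything.
device_trusted_pubkey = vendor_key.public_key()
try:
    device_trusted_pubkey.verify(signature, firmware)
    print("signature valid: safe to apply update")
except InvalidSignature:
    print("signature invalid: reject update")
```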

Facing adversity

Finally, let’s not forget that in enabling powerful compute at the logical extremes of the network there will be adverse physical conditions to deal with. A device hardwired within a temperature-controlled warehouse might enjoy a long, easy life with device provisioning and updates performed locally, while a wave-powered gas sensor placed on the bed of the North Sea faces a far greater challenge – it must keep performing its duties even if network or power connectivity is lost for long periods.

Already, we’re seeing devices capable of incredible edge compute in harsh conditions: take Oxford Nanopore’s MinIT, a portable, Arm-based healthcare IoT device for DNA and RNA sequencing that processes samples locally rather than uploading tens of gigabytes of captured data to a cloud server. Latency aside, the cost of communications alone in remote, harsh environments such as these would make any device incapable of this kind of on-board processing commercially unviable.
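
Some rough arithmetic shows why. All of the figures below – raw output size, link speed and per-megabyte cost – are invented for illustration and are not Oxford Nanopore’s numbers:

```python
# Back-of-the-envelope comparison: upload raw data vs. transmit local results.
# Every figure here is an illustrative assumption.

raw_data_gb = 50      # raw output per run (assumed)
result_mb = 5         # processed results to transmit (assumed)
link_mbps = 1         # remote/satellite uplink speed (assumed)
cost_per_mb = 0.50    # satellite data cost in USD (assumed)

raw_mb = raw_data_gb * 1024
raw_hours = raw_mb * 8 / link_mbps / 3600
result_hours = result_mb * 8 / link_mbps / 3600

print(f"raw upload:        {raw_hours:8.1f} h, ${raw_mb * cost_per_mb:,.0f}")
print(f"processed locally: {result_hours:8.3f} h, ${result_mb * cost_per_mb:.2f}")
```

Even with generous assumptions, shipping raw data over a constrained remote link is orders of magnitude slower and costlier than transmitting processed results.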

The enablement of compute at the edge isn’t just one more step towards a world of a trillion devices; if we ever want to realize the ambitions of the IoT, it’s imperative. The Internet of Things has evolved as its use cases have expanded, and so has the need for these devices to perform independently. IDC predicts that by 2020, IT spend on edge infrastructure will reach up to 18 percent of the total spend on IoT infrastructure. For this to become a reality, we need the entire spectrum of compute – capable, fired up and ready – from the smallest device to the highest-performance data center.