Edge computing is poised to boost the next generation of IoT technology into the mainstream. Here's how it works with the cloud to benefit business operations across all industries.


Cloud computing has dominated IT discussions for the last two decades, particularly since Amazon popularized the term in 2006 with the release of its Elastic Compute Cloud. In its simplest form, cloud computing is the centralization of computing services in shared data center infrastructure, using economies of scale to reduce costs. However, latency, influenced by the number of router hops, packet delays introduced by virtualization, and server placement within a data center, has always been a key issue in cloud migration. It is also one reason edge computing has become a driver of innovation within OpenStack, the open source cloud computing project.

This is where edge computing comes in. Edge computing is essentially the process of decentralizing computer services and moving them closer to the source of data. This can have a significant impact on latency, as it can drastically reduce the volume of data moved and the distance it travels.
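To make the data-volume point concrete, here is a minimal sketch (the sensor, sample rate, and summary fields are illustrative assumptions, not from the article) of an edge node that aggregates high-frequency readings locally and forwards only a compact summary to the cloud:

```python
# Hypothetical sketch: an edge node reduces a batch of raw sensor readings
# to a small summary record, so only the summary crosses the wide-area
# network instead of every raw sample.

def summarize_at_edge(readings):
    """Collapse a batch of raw readings into a compact summary record."""
    if not readings:
        return None
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

# One minute of 10 Hz temperature samples (600 raw values)...
raw = [20.0 + (i % 7) * 0.1 for i in range(600)]
summary = summarize_at_edge(raw)
# ...becomes a single four-field record to send to the central cloud service.
```

Six hundred samples shrink to one record; the cloud backend still receives everything it needs for long-term analysis, but the network carries a fraction of the traffic.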

The term “edge computing” covers a wide range of technologies, including peer-to-peer, grid/mesh computing, fog computing, blockchain, and content delivery networks. It has been popular within the mobile sector and is now branching out into almost every industry.

The relationship between edge and cloud

There is much speculation about edge replacing cloud, and in some cases, it may do so. However, in many situations, the two have a symbiotic relationship. For instance, services such as web hosting and IoT benefit greatly from edge computing when it comes to performance and initial processing of data. These services, however, still require a robust cloud backend for things like centralized storage and data analysis.

Edge computing: a brief history

Edge computing can be traced back to the 1990s, when Akamai launched its content delivery network (CDN), which introduced nodes at locations geographically closer to the end user. These nodes store cached static content such as images and videos. Edge computing takes this concept further by allowing nodes to perform basic computational tasks. In 1997, computer scientist Brian Noble demonstrated how mobile technology could use edge computing for speech recognition. Two years later, this method was also used to extend the battery life of mobile phones. At the time, this process was termed “cyber foraging,” and it is essentially how both Apple’s Siri and Google’s speech recognition services work today.

1999 saw the arrival of peer-to-peer computing. In 2006, cloud computing emerged with the release of Amazon’s EC2 service, and companies have adopted it in huge numbers since then. In 2009, “The Case for VM-Based Cloudlets in Mobile Computing” was published, detailing the end-to-end relationship between latency and cloud computing. The article advocated a “two-level architecture: the first level is today’s unmodified cloud infrastructure” and a second level consisting of “dispersed elements called cloudlets with state cached from the first level.” This is the theoretical basis for many aspects of modern edge computing, and in 2012 Cisco introduced the term “fog computing” for dispersed cloud infrastructure designed to promote IoT scalability.

This brings us to current edge solutions, of which there are many. Whether purely distributed systems such as blockchain and peer-to-peer or mixed systems such as AWS’s Lambda@Edge, Greengrass, and Microsoft Azure IoT Edge, edge computing has become a key factor driving the adoption of technologies such as IoT.

Why proximity matters

Proximity, or low latency, is extremely important in business because data loses value as it ages. This is true across industries, from finance to health and safety to shipping. For instance, the medical industry uses IoT where real-time monitoring and processing are critical to ensure that patients receive the care they require exactly when they need it. Another good example is e-commerce. In 2009, Akamai published a research report titled “Akamai Reveals 2 Seconds as the New Threshold of Acceptability for eCommerce Web Page Response Times,” which details the relationship between website performance and online shopper behavior. It found that 40 percent of consumers will not wait longer than three seconds for a page to load before leaving, as they become distracted or find alternatives.

This research underscores the extreme importance of the speed at which you transfer, process, and return data to the customer, device, or internal user. Edge computing was designed specifically with this “need for speed” in mind.

Scalable and resilient

The distributed nature of edge computing means that along with reducing latency, it also improves resiliency, reduces networking load, and is easier to scale.

Processing of data starts at its source. Once initial processing is complete, only the data that needs further analysis or other services is sent on. This reduces networking requirements and the potential for bottlenecks at centralized services. Furthermore, with other nearby edge locations, or the option of caching data on the device, you can mask outages and improve your system’s resiliency. Because your centralized services handle less traffic, they need to scale less aggressively, which can also reduce costs, architectural complexity, and management overhead.
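The outage-masking idea can be sketched in a few lines. This is a simplified illustration (the location names and health check are hypothetical, not a real routing API): a client is directed to the nearest healthy edge location, falling back to the next-nearest and finally to the central region, so a single edge outage stays invisible to the user.

```python
# Hypothetical sketch: route a request to the nearest healthy edge location.
# Because nearby edge sites can cover for each other, one site going down
# is masked from the client rather than causing an outage.

def pick_location(edges, central, is_healthy):
    """Return the first healthy edge, nearest first, else the central region.

    `edges` is assumed to be pre-sorted by proximity to the client.
    """
    for edge in edges:
        if is_healthy(edge):
            return edge
    return central  # last resort: the centralized cloud backend

edges = ["edge-paris", "edge-frankfurt", "edge-london"]
down = {"edge-paris"}  # simulate an outage at the nearest site
chosen = pick_location(edges, "central-eu", lambda e: e not in down)
# The client lands on the next-nearest site instead of failing.
```

Real systems implement this with DNS, anycast routing, or load balancers rather than application code, but the failover logic follows the same shape.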

The future

Where will edge computing go from here? Over the next few years, we will see an explosion in this technology as more and more end-user devices use it to improve performance, functionality, and battery life. Where once edge devices were limited to smartphones, tablets, laptops, PCs, and game consoles, we now see edge computing employed in virtual reality headsets, autonomous vehicles, drones, wearable tech, augmented reality devices, and more.

The prevalence of IoT devices is skyrocketing, and this expansion seems set to continue for some time as industries such as healthcare, mining, logistics, and smart homes are just starting to incorporate IoT technologies into their business models.

Regarding the technology behind edge computing, we will see a decoupling of many existing cloud technologies from their centralized roots. Services such as AWS Lambda may be overhauled to run functions at the edge location nearest to the request’s origination point rather than being region-locked. We have already seen the first signs of this with AWS Lambda@Edge.
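As a rough illustration of running a function at the edge nearest the request, here is a minimal sketch of a Lambda@Edge-style viewer-request handler. The event shape follows the CloudFront event structure that Lambda@Edge uses, but treat the exact fields and the redirect logic here as illustrative assumptions rather than production code:

```python
# Hypothetical sketch of a Lambda@Edge-style viewer-request handler.
# The function runs at the CloudFront edge location nearest the client,
# so simple decisions (like this redirect) never travel to the origin region.

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    # Redirect legacy paths at the edge, without a round trip to the origin:
    if request["uri"].startswith("/old/"):
        return {
            "status": "302",
            "statusDescription": "Found",
            "headers": {
                "location": [{"key": "Location",
                              "value": request["uri"].replace("/old/", "/new/", 1)}],
            },
        }
    return request  # pass the request through to the origin unchanged

# Local simulation of an edge invocation:
event = {"Records": [{"cf": {"request": {"uri": "/old/index.html", "headers": {}}}}]}
result = handler(event, None)
```

Because the handler returns either a response (short-circuiting at the edge) or the request (continuing to the origin), latency-sensitive logic runs close to the user while heavier work stays centralized.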

We will also see the maturing of emerging edge technologies such as blockchain and fog computing. There is a lot of excitement about blockchain’s potential, as its decentralized system and complex algorithms have applications far beyond Bitcoin. Potential uses include both logistics and voting, where it could help with security and fraud prevention.

Edge computing could potentially eclipse cloud computing in scale and market cap, but it is unlikely to replace cloud or even reduce its market cap. Rather, as edge matures, cloud computing will grow along with it, albeit at a slower pace, providing many back-end and support functions for edge computing and business operations.

OpenDev conference

The OpenStack Foundation is spearheading a new event called OpenDev, which will take place in San Francisco September 7-8. It will focus on edge computing, and Cloudify will be there to talk about all the latest developments on the edge. See you there!


About the author

Nati Shalom, Founder and CTO at GigaSpaces, is a thought leader in cloud computing and big data technologies. Shalom was recently recognized as a Top Cloud Computing Blogger for CIOs by The CIO Magazine, and his blog is listed as an excellent blog by Y Combinator. Shalom is the founder and one of the leaders of the OpenStack Israel group, and he is a frequent presenter at industry conferences. Find him on Twitter: @natishalom.


