Digital transformation is affecting where and how applications and their supporting services (e.g., orchestration, performance optimization, availability, management, security) are deployed. As cloud computing enters the mainstream, companies are deploying greater numbers of applications and services across private and public clouds. According to the F5 “State of the Application Report 2017,” four out of five IT and network professionals surveyed reported that their organizations are adopting hybrid cloud. RightScale’s “State of the Cloud 2017” report showed that the companies it surveyed are running 41% of their workloads in public cloud and 38% in private clouds, with cloud users running applications in as many as four different clouds and experimenting with four more.

If you’re wondering how widespread application state can be coordinated across such a dynamic landscape, you’re asking the right question. Coordination is needed not just across internal business services, but also across disparate services that may or may not be connected to a common messaging platform up and down the application layer stack. As applications and their various service components are distributed across clouds and geographically dispersed edge nodes (traffic exchange and control points), application automation and orchestration become more important to digital business. In addition, coordination and configuration methods and processes must scale to keep up in a rapidly diversifying digital world.

The shift to API-centric application development

DevOps frameworks that leverage application programming interfaces (APIs) are increasingly popular, according to the F5 report, with the number of respondents using at least one DevOps framework more than doubling from 2016 to 2017 to nearly 50%. Some of the forces shaping this trend are complicating distributed application coordination and configuration. They include:

The shift to an API-centric application development model has relaxed the traditional rigidity that mandated which application and service building blocks must be used behind the APIs (i.e., languages, tools, DevOps platforms, etc.). This disparity in implementation can add complexity when trying to coordinate across APIs.

Real-time, integrated business flows are becoming the norm (e.g., retail coupons, digital payments and online credit and fraud checks – all in less than a second).

To gain greater automation and consistency and decrease complexity and cost, application architects now need to design application deployments for widespread integration at scale, across hybrid and multicloud environments.

The challenges behind managing the state of applications and their services

API-centric application development empowers digital businesses to bring applications and related services to market faster and more uniformly. However, challenges arise as each application’s state is coordinated at each layer of the application stack.

There are multiple competing standards for distributed application coordination. In practice, distributed coordination is either designed and implemented separately for each application environment, or not at all.

Even when distributed coordination is taken into account, many designs don’t include end-to-end coordination as a common solution, creating an integration gap. As a result, the following challenges arise:

Implementation disparities exist across applications, application services, languages, databases, clouds, and their management and control functions.

Onboarding processes, documentation and test environments can’t simulate an integrated platform (one that would sit alongside common services for developers) because the required components do not exist.

There’s no defined process or ownership for cross-application integration as a cloud service outside the on-premises data center; both need to be established.

Solution

These challenges can be addressed by adding distributed configuration and coordination (DCC) capabilities (e.g., Apache ZooKeeper, etcd, Consul, Hazelcast) to edge nodes and making them available as a common service through your application platform. Such an implementation can also be extended into partner and hybrid/multicloud environments and made available across segmented networks. While other solutions can coexist with what we are prescribing, this approach provides widespread application state coordination where it is needed most.

DCC is typically implemented via a secure, distributed, in-memory namespace that application affinity groups can use to share values and state information between application stack layers. The implementation’s scope includes distributed run-time application configuration and coordination, which involves little data and fits alongside messaging and API gateways within distributed edge nodes. DCC capabilities can provide stateful connections and optimize multicloud application workflows across application stacks by placing edge nodes close to users and clouds. In addition, you can localize, segment and integrate application traffic flows with direct, secure, low-latency interconnection and real-time, dynamic cloud connection provisioning.
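To make the shared-namespace idea concrete, here is a minimal, in-memory Python sketch of the pattern: components in different stack layers read, write and watch keys in a common namespace. This is purely illustrative; a production DCC service would use ZooKeeper, etcd or Consul, and all key names below are assumptions, not part of any real product API.

```python
import threading


class DCCNamespace:
    """Toy in-memory namespace illustrating the DCC pattern.

    Real deployments would use ZooKeeper, etcd, or Consul; this class
    only demonstrates the get/set/watch interaction between layers.
    """

    def __init__(self):
        self._data = {}
        self._watchers = {}  # key -> list of callbacks
        self._lock = threading.Lock()

    def set(self, key, value):
        # Store a value, then notify any watchers of the change.
        with self._lock:
            self._data[key] = value
            callbacks = list(self._watchers.get(key, []))
        for cb in callbacks:
            cb(key, value)

    def get(self, key, default=None):
        with self._lock:
            return self._data.get(key, default)

    def watch(self, key, callback):
        # Register a callback fired whenever the key changes.
        with self._lock:
            self._watchers.setdefault(key, []).append(callback)


# Example: two application stack layers sharing state via the namespace.
# One layer publishes an endpoint; a watching layer reacts to the change.
ns = DCCNamespace()
events = []
ns.watch("edge/us-west/payments/endpoint",
         lambda k, v: events.append((k, v)))
ns.set("edge/us-west/payments/endpoint", "10.0.4.17:8443")
print(ns.get("edge/us-west/payments/endpoint"))  # -> 10.0.4.17:8443
```

The watch mechanism is what lets geographically dispersed components stay coordinated without polling: each edge node reacts to namespace changes as they propagate.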

Placing a common namespace over distributed control points, data and clouds, in proximity to one another throughout the edge nodes, enables the synchronization and coordination of databases and applications using multiple APIs. Flexible data orchestration, data policies and service levels can be defined through automated workflows. Provisioning and change management are likewise handled via data pipeline orchestration and provenance, both of which leverage DCC capabilities.

Publish a service API for DCC commands (add, change, delete) and connectors.

Use the DCC API to establish secure namespaces for components and to provide for their ongoing configuration management (configuration of network and security services, data mappings, message queues, etc.) starting with edge node services.
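A possible shape for such a service API is sketched below: a facade exposing the three commands named above (add, change, delete) against named namespaces. The class and method names are hypothetical, invented here for illustration; a real deployment would front a DCC backend such as etcd or ZooKeeper rather than a Python dictionary.

```python
class DCCServiceAPI:
    """Hypothetical facade for a DCC service exposing add/change/delete
    commands over named namespaces. In production this would delegate to
    a real backend (etcd, ZooKeeper, Consul); here a dict stands in."""

    def __init__(self):
        self._namespaces = {}

    def create_namespace(self, name):
        # Establish a namespace for a component or affinity group.
        self._namespaces.setdefault(name, {})

    def add(self, namespace, key, value):
        ns = self._namespaces[namespace]
        if key in ns:
            raise KeyError(f"{key} already exists; use change()")
        ns[key] = value

    def change(self, namespace, key, value):
        ns = self._namespaces[namespace]
        if key not in ns:
            raise KeyError(f"{key} not found; use add()")
        ns[key] = value

    def delete(self, namespace, key):
        del self._namespaces[namespace][key]

    def get(self, namespace, key):
        return self._namespaces[namespace][key]


# Example: ongoing configuration management for an edge node service,
# e.g. a network segment mapping that is later updated.
api = DCCServiceAPI()
api.create_namespace("edge-node-nyc")
api.add("edge-node-nyc", "net/vlan", "segment-120")
api.change("edge-node-nyc", "net/vlan", "segment-121")
print(api.get("edge-node-nyc", "net/vlan"))  # -> segment-121
```

Separating `add` from `change` (rather than a single upsert) makes configuration intent explicit, which helps when auditing change management across many edge nodes.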

Integrated DCC coordination and configuration offers the following benefits:

Access to a secure, self-service, distributed configuration and coordination service with mixed local and global groups as needed.

Implementation across a mesh of edge nodes either as local appliances or SaaS augmentation.

The ability to “park” components when provisioned and then assign their functional configuration when they are put into service. You may not want a component that has experienced a fault to immediately return to production. Instead, configure and deploy a fresh replacement.

A higher level of orchestration achieved via administrative tools that directly interact with the namespace.

A more flexible, programmable infrastructure that allows the namespace to be backed up to the distributed repository (See the IOA Data Blueprint).
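The “park” pattern from the benefits above can be sketched as a small lifecycle: a component is provisioned into a parked state, receives its functional configuration only when it enters service, and on a fault is replaced by a fresh parked spare rather than returned to production. The state names and transition rules below are illustrative assumptions, not a prescribed implementation.

```python
from enum import Enum


class ComponentState(Enum):
    PARKED = "parked"          # provisioned but idle, no functional config
    IN_SERVICE = "in_service"
    FAULTED = "faulted"


class Component:
    """Sketch of the 'park then configure' lifecycle. State names and
    transition rules are illustrative assumptions for this example."""

    def __init__(self, name):
        self.name = name
        self.state = ComponentState.PARKED
        self.config = None

    def put_in_service(self, config):
        if self.state is not ComponentState.PARKED:
            raise RuntimeError("only a parked component can enter service")
        self.config = config  # functional configuration assigned late
        self.state = ComponentState.IN_SERVICE

    def fault(self):
        self.state = ComponentState.FAULTED


def replace_faulted(component, spare_pool):
    """On a fault, deploy a fresh parked replacement instead of
    returning the faulted component to production."""
    component.fault()
    fresh = spare_pool.pop()
    fresh.put_in_service(component.config)
    return fresh


# Example: a gateway faults and is swapped for a parked spare.
spares = [Component("gw-spare-1")]
gw = Component("gw-1")
gw.put_in_service({"listen": ":8443"})
gw2 = replace_faulted(gw, spares)
print(gw2.name, gw2.state.value)  # -> gw-spare-1 in_service
```

Keeping configuration out of the parked state is what makes the swap cheap: the replacement inherits the faulted component’s configuration from the namespace at activation time.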

Increasing the performance of distributed applications

Australia-based ASE provides end-to-end, cloud-based solutions and managed services on hybrid IT infrastructures. With a strong focus on the media and entertainment industry and the transition to digital content workflows, the company expanded its operations to Equinix International Business Exchange™ (IBX®) data centers in Los Angeles (LA) and New York City (NY) to service large media markets. The joint solution placed ASE cloud and data services geographically closer to its U.S. customers, empowering the company to deliver a greater quality of experience (QoE) at a significantly lower cost (by as much as 75%). ASE leveraged an IOA strategy to interconnect multiple cloud service providers’ platforms and exchange data and application workloads via the Southern Cross Cable (submarine sea cable network) between the two countries.

Equinix provided ASE with one-to-many, direct and secure remote connections to Amazon Web Services and Microsoft Azure cloud platforms and a distributed data repository. Equinix and ASE partner, NetApp, brought its object storage repository closer to the cloud services inside the IBX data centers, enabling greater collaboration and performance on content workflows and backend data analytics for media content across cloud services and pre- and post-production users. The ASE hybrid IT and multicloud environment is an example of how developers can optimize content workflows and data analytics between multiple cloud services and their users. Also, in this type of scenario, DCC capabilities would be extremely beneficial.

In the next blog article, we’ll discuss complex event processing at the edge.

In the meantime, visit the IOA Knowledge Base for vendor-neutral blueprints that take you step-by-step through the right patterns for your architecture, or if you’re ready to begin architecting for the digital edge now, contact an Equinix Global Solutions Architect.

You also may be interested in reading other blogs in the IOA Application Blueprint Design Pattern series: