Simplify the Cloud

Monday, June 19, 2017

It is generally accepted that declarative orchestration is superior to imperative orchestration because declarative orchestrators let end-users focus on the WHAT, while leaving the complexities of the HOW to the orchestrator. Intent Models provide an excellent framework for declarative orchestration, since they combine both model-driven and policy-based orchestration, and they allow for mapping of high-level business goals into low-level deployment descriptors using recursive decomposition.

What hasn’t been widely explored is the impact of the chosen orchestration paradigm on the architecture of the orchestrator that implements it. As can be expected, declarative orchestrators tend to implement an architecture that is very different from that of imperative orchestrators. What may be a bit surprising, however, is how much simpler and more elegant declarative architectures can be compared to the more traditional imperative architectures. The following table highlights some of the architectural differences between the two, and hopefully illustrates the simplicity of declarative architectures:

| Declarative Architectures | Imperative Architectures |
| --- | --- |
| Organizing construct: recursive decomposition | Organizing construct: static layering |
| Arbitrary number of levels of recursion | Fixed number of layers |
| Flexible resource layer: no architectural distinction between resources and services; resources are accessed as services, and services can be exposed as resources | Inflexible resource layer: distinction between resource layer and service layer is baked into the architecture |
| Identical orchestration functionality at each level in the recursion | Layer-specific orchestration functionality |
| Interface paradigm based on negotiation (request/response) and delegation | Interface paradigm based on management (higher layers control lower layers) |
| Interface implementations based on a Domain-Specific Language (DSL) | Interface implementations based on APIs |
| Identical DSL at each level in the recursion | Layer-specific APIs |
| Federation built in: north-south and east-west interfaces are the same | Federation must be added on: north-south and east-west interfaces are different |

Unfortunately, the reference architectures that are currently most widely adopted in the industry (e.g. the ETSI NFV architecture and the MEF LSO architecture) are largely imperative architectures that impose a "legacy" layered view of the world. Attempting to implement a declarative orchestration paradigm in an imperative architecture is like trying to fit a square peg into a round hole. It can be made to work, but likely with bloated implementations and lots of special-purpose code as a result. If we're serious about adopting declarative orchestration paradigms, we may want to go back to the drawing board when it comes to standard reference architectures.

Thursday, June 15, 2017

The main challenge for intent engines is how to map intent expressions into deployable service topologies. At first glance, this appears to be a daunting task. However, we may have developed just enough of an understanding of intent to create a high-level mapping framework. Let’s review:

Declarative orchestrators can either be model-driven or policy-based. Both focus on the WHAT rather than the HOW, but model-driven orchestrators focus on service topologies whereas policy-based orchestrators focus on service outcomes and behaviors.

We’ve had the notion for a while that policies can be expressed at different levels of abstraction in a policy continuum. Declarative policies expressed in terms of business goals are referred to as intent statements. Intent engines realize services by mapping intent statements into configuration parameters of deployable software components and the resources on which these components are deployed.

We have come to the realization that it is also possible to model services at different levels of abstraction in a model continuum. Model-driven orchestrators can realize abstract service models by translating them into deployment descriptors that define low-level service topologies at the device/resource level.

This summary leads to a couple of important observations. First, there appears to be a natural synergy between model-driven and policy-based approaches. Recall that declarative policies are expressed as constraints on capabilities within a given context (the context-capabilities-constraints pattern). But how does one specify context, and what is the mechanism for identifying capabilities within that context? Well, it seems obvious that at a specific level of abstraction, one could define a service model that serves as context for a declarative policy at that same level of abstraction. Observable (and controllable) parameters of that model could then represent capabilities, and policy expressions would specify constraints for those capabilities within the service model context. This suggests that abstract service models and declarative policies must go hand-in-hand in order to fully specify requested service behaviors at a given level of abstraction. I propose to use the term intent model to represent the combination of an abstract service model with a declarative policy that specifies desired outcomes for the modeled service.
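As a rough illustration of the proposed intent model pattern, here is a minimal Python sketch. All names, capabilities, and thresholds are hypothetical, invented purely to show how a service model (the context), its observable parameters (the capabilities), and a declarative policy (the constraints) fit together:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ServiceModel:
    """Abstract service model: serves as the *context* for a policy."""
    name: str
    # Observable/controllable parameters of the model act as *capabilities*.
    capabilities: Dict[str, float]

@dataclass
class DeclarativePolicy:
    """Constraints on capabilities, expressed as predicates."""
    constraints: Dict[str, Callable[[float], bool]]

@dataclass
class IntentModel:
    """An intent model pairs an abstract service model with a declarative
    policy that specifies desired outcomes for the modeled service."""
    model: ServiceModel
    policy: DeclarativePolicy

    def compliant(self) -> bool:
        # The intent is satisfied when every constrained capability of the
        # context complies with its constraint.
        return all(pred(self.model.capabilities[cap])
                   for cap, pred in self.policy.constraints.items())

video_service = ServiceModel("video-call",
                             {"latency_ms": 35.0, "availability": 0.9995})
policy = DeclarativePolicy({"latency_ms": lambda v: v <= 40.0,
                            "availability": lambda v: v >= 0.999})
intent = IntentModel(video_service, policy)
print(intent.compliant())  # True: both constraints hold
```

The point of the sketch is simply that neither half is useful alone: the policy is meaningless without a model to supply context and capabilities, and the model says nothing about desired outcomes without the policy.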

A second observation should be clear from the first one: the intent model pattern can be used at all levels of abstraction. At each level of abstraction, one could define an abstract service model that represents the service at that level of abstraction, and that service model could then serve as the context for the corresponding declarative policies at that same level of abstraction. This means we shouldn’t just think about a policy continuum or a model continuum, but rather about an intent model continuum that combines the two.

This then leads to the following strategy for intent mapping: rather than mapping high-level intent statements into low-level deployment descriptors in one fell swoop, intent mapping should be performed recursively. Starting with an intent model expressed at the business view level, intent engines should recursively translate intent models at a higher level of abstraction into corresponding intent models at lower levels of abstraction, traversing the intent model continuum until the mapping process results in deployable service topologies at the device view level. This recursive mapping involves:

Mapping abstract service models at one level of abstraction into corresponding service models at the next lower level of abstraction

Mapping constraints on capabilities of a higher-level service model into equivalent constraints on capabilities of the lower-level service model.
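The recursive mapping strategy above can be sketched in a few lines of Python. The `decompose` step and the device-view leaf test are hypothetical placeholders, not a real intent-engine API; a real engine would derive the child models from substitution mappings or an equivalent decomposition mechanism:

```python
def is_deployable(model: dict) -> bool:
    # Hypothetical leaf test: device-view models carry concrete deployment info.
    return model["view"] == "device"

def decompose(model: dict) -> list:
    # Hypothetical mapping step: translate a model (and its constraints)
    # into corresponding models at the next lower level of abstraction.
    next_view = {"business": "system", "system": "administrator",
                 "administrator": "device"}[model["view"]]
    return [{"view": next_view, "name": f"{model['name']}/{i}",
             "constraints": model["constraints"]} for i in range(2)]

def map_intent(model: dict) -> list:
    """Recursively traverse the intent model continuum until every
    node is a deployable (device-view) service topology."""
    if is_deployable(model):
        return [model]
    deployable = []
    for child in decompose(model):
        deployable.extend(map_intent(child))
    return deployable

topology = map_intent({"view": "business", "name": "svc",
                       "constraints": {"latency_ms": 40}})
print(len(topology))  # 8 device-view components (2 children per level, 3 levels)
```

Note that this toy version just copies constraints downward; the constraint-mapping step from the second bullet is where the real work lies.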

This now gives us a much better handle on the problem, since we can borrow from readily-available constructs in software engineering. Software architects are very familiar with the type of model mapping we just described, since they perform such mappings as a matter of course in the context of top-down software design. Proper software design starts with high-level abstractions that are then decomposed recursively into increasingly lower-level concepts. Recursive decomposition is a key design pattern that is supported by many software frameworks and architectures. The ETSI NFV architecture positions VNFs as abstract entities that need to be decomposed into sub-topologies of VDUs. The ONF's most recent Information Modeling work introduces a component pattern that models most entities in the networking domain as components that can be decomposed into topologies of other components. TOSCA has built-in language support for recursive decomposition using a feature called substitution mappings, which allows any node in a service topology to be ‘substituted’ with an entire service topology consisting of other nodes. It seems natural, then, to use recursive decomposition as the primary construct for mapping abstract service models into lower levels of abstraction.

Once service model mappings have been established, the problem of constraint mapping becomes a whole lot simpler. We are already used to ‘rolling-up’ low-level metrics into higher-level summary metrics, and constraint mapping is nothing more than the inverse of this ‘rolling-up’ activity. I have previously described a number of quality metrics for real-time communications that can be associated with service models at different levels of abstraction. We could go through similar exercises for other use cases, and in fact this could result in (reusable) constraint mapping functions that can become part of a ‘toolbox’ for intent mapping engines.
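As a hedged illustration of "constraint mapping as the inverse of rolling up" for a simple chain of components: end-to-end latency rolls up by summation and end-to-end availability by multiplication, so one possible constraint mapping inverts those roll-ups to split an end-to-end budget across components. The even-split heuristic below is just one candidate mapping function, not a prescription:

```python
import math

def roll_up(components):
    """Roll up per-component metrics into end-to-end summary metrics
    for a serial chain of components."""
    return {
        "latency_ms": sum(c["latency_ms"] for c in components),          # sum
        "availability": math.prod(c["availability"] for c in components) # product
    }

def map_constraints(e2e, n):
    """Inverse of roll_up: derive equivalent per-component constraints
    from an end-to-end constraint (naive even split across n components)."""
    return {
        "latency_ms": e2e["latency_ms"] / n,               # equal latency slice
        "availability": e2e["availability"] ** (1.0 / n),  # n-th root of target
    }

per_hop = map_constraints({"latency_ms": 60.0, "availability": 0.999}, 3)
chain = [dict(per_hop) for _ in range(3)]
e2e = roll_up(chain)
print(round(e2e["latency_ms"], 6))    # 60.0
print(round(e2e["availability"], 6))  # 0.999
```

A reusable 'toolbox' for intent engines would collect such pairs of roll-up and inverse-mapping functions, one pair per metric and composition pattern (chains, fan-outs, redundant pairs, and so on).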

I hope to see recursive decomposition of intent models be adopted as a framework to accelerate development of general-purpose intent engines.

Tuesday, June 13, 2017

From the classification introduced earlier, we know that
model-driven and policy-based orchestrators both fall in the category of
declarative orchestrators. Model-driven approaches describe services to be
orchestrated using service models that represent service topologies, whereas
policy-based approaches describe the services to be orchestrated in terms of
expected service behaviors and/or outcomes.

Policy-based orchestrators tend to provide simpler interfaces
than model-driven orchestrators, since they expect less detail from service
designers. Whereas model-driven approaches expect entire service models, policy-based
approaches expect declarative policies that express constraints on observable
parameters (“capabilities”) of the deployed service (the “context”). Context,
capabilities, and constraints are all that is needed for declarative policies.

Policy-based orchestrators appear to have the added advantage that (declarative) policies can be expressed at various levels of
abstraction in a policy continuum. This leads to the concept of intent, which
refers to declarative policies expressed in terms of business goals.

However, we should ask ourselves whether that advantage really applies
only to policies, or whether model-driven orchestration can benefit from
abstraction as well. The answer is obviously yes, since abstraction is at the core
of almost everything we do in software engineering. In fact, a model is in essence
nothing more than an abstract representation of the actual entity that is being
modeled. If models are (almost by definition) abstract, then clearly we should
be able to create different types of models depending on the level of
abstraction at which we’re trying to model. Allowing users to model services at
a high level of abstraction would simplify the task of the service designer,
similar to how intent engines simplify the challenge of creating declarative
policies.

This observation leads to an obvious parallel between models
and policies: if policies can be expressed at different levels of abstraction
in a policy continuum, then it should be equally possible to create models at
different levels of abstraction in a model continuum. The following example
levels of abstraction could be used in a model continuum:

The business view describes services as products that are
available to customers

The system view describes the system architecture of the
service

The administrator view specifies technologies used for each
of the components in the system architecture

The device view lists specific software modules and/or
resource configurations for all of the components of the service

The instance view captures configurations of each instance

Granted, most model-driven orchestration systems today don’t
fully support these levels of abstraction. They expect users to provide service
templates that sit at the device view level and can best be described as
deployment descriptors: low-level representations of the actual software
components to be deployed, coupled with the resource configurations that are required
to host those software components. But if intent engines can be expected to have
the smarts to map high-level business goals into deployable services, wouldn’t
it be reasonable to also expect model-driven orchestrators to be sophisticated
enough to translate abstract service models into low-level deployment
descriptors? I plan to explore later how such translation could be constructed.

Monday, June 12, 2017

We discussed earlier that intent engines are a class of declarative orchestrators that allow customers to use policy expressions to specify the services to be orchestrated. This classification does a nice job of distinguishing intent engines from model-driven orchestrators, but is it sufficient to fully define intent engines? Specifically, can all policy-based orchestrators be classified as intent engines?

To answer that question, we need to take a closer look at classification of policy expressions themselves. Policies can generally be classified according to at least the following two dimensions:

Policies can be imperative or declarative

Policies can be specified at different levels of abstraction

Imperative vs. Declarative Policies

Imperative policies expect policy expressions to specify actions that need to be taken by the policy engine in order to maintain compliance with the policy. Imperative policy statements are typically expressed using an Event-Condition-Action (ECA) pattern: if a certain event occurs, and the specified conditions are met, then take the following actions.

Declarative policies, on the other hand, focus on expected outcome rather than on the actions required to achieve that outcome. Declarative policies are typically expressed using a Context-Capability-Constraint (CCC) pattern: within the specified context, make sure that capabilities exposed by the system comply with the given constraints at all times.
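The two patterns can be contrasted with a toy sketch in Python. The event names, capabilities, and thresholds are invented for illustration only:

```python
# Imperative, Event-Condition-Action (ECA): the policy author prescribes
# what to do when something happens.
def eca_policy(event, state):
    if event == "cpu_alarm" and state["cpu"] > 0.8:  # event + condition
        state["replicas"] += 1                       # action to take
    return state

# Declarative, Context-Capability-Constraint (CCC): the policy author states
# what must hold at all times; the engine decides how to restore compliance.
ccc_policy = {
    "context": "web-tier",                        # service model the policy applies to
    "constraints": {"cpu": lambda v: v <= 0.8},   # capability -> constraint
}

def compliant(capabilities, policy):
    return all(pred(capabilities[cap])
               for cap, pred in policy["constraints"].items())

state = eca_policy("cpu_alarm", {"cpu": 0.9, "replicas": 2})
print(state["replicas"])                    # 3: the ECA rule fired and acted
print(compliant({"cpu": 0.9}, ccc_policy))  # False: constraint violated
```

Notice that the CCC policy never mentions scaling at all; whether to add replicas, migrate load, or throttle traffic is entirely the orchestrator's decision.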

Given that we have previously defined intent engines as a class of declarative orchestrator, it should come as no surprise that intent engines use declarative rather than imperative policies. As with other declarative systems, declarative policies are simpler to construct while leaving the complexities of ensuring compliance with the policy to the orchestrator.

Level of Abstraction

The other dimension along which policies can be classified is level of abstraction. John Strassner’s policy continuum lists the following example levels of abstraction:

The business view expresses services and policies in terms of business goals

The system view describes the architectural components of the service

The administrator view specifies technologies used for each of the architectural components

The device view lists device-specific configurations

The instance view captures configurations of each instance

From the early ONF work on Intent NBIs (as described in their Intent Definition whitepaper), it is clear that the original goal of intent NBIs was to let customers express services using terminology that expresses business goals rather than technology details. This would imply that policies capturing intent should be expressed at the Business View level of abstraction. As specified in the ONF document, this presumes some type of mapping service that translates “intent” expressions into software-consumable constructs. This mapping is one of the primary functions that has to be performed by the intent engine.

Based on this analysis, we now have a better definition for intent engines: intent engines are a class of declarative orchestrators that allow customers to specify desired services using declarative policies that express business goals.

I’ll explore next how this definition can help guide us in the implementation of intent engines.

Thursday, June 8, 2017

Intent is increasingly being positioned as the appropriate paradigm for network and service orchestration, and for good reason since the intent paradigm significantly reduces complexity for service designers. Consumers of intent-based systems can use simple intent statements to express their expectations for the service, while leaving the details (and the associated complexity) of how to meet those expectations up to the service provider. In other words, intent focuses on the WHAT, not on the HOW.

Systems that adopt an intent paradigm are typically referred to as intent engines. Intent engines are orchestrators that map simple intent expressions into the resource allocations and configuration parameters that are required to deliver on the intent.

To understand how intent engines relate to other types of orchestration systems, it is helpful to review the traditional distinction between imperative and declarative orchestrators:

Imperative orchestrators focus on the HOW: users of imperative orchestrators are expected to tell the orchestrator exactly how to get services deployed by prescribing the exact set of actions the orchestrator needs to take. Traditional automation techniques (e.g. “infrastructure as code”) are an example of this approach.

Declarative orchestrators focus on the WHAT: users of declarative orchestrators are expected to describe what it is that they’re trying to get deployed and leave it up to the orchestrator to use the appropriate mechanisms to get a service deployed that delivers “WHAT” is expected.
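The HOW-versus-WHAT distinction can be made concrete with a deliberately tiny sketch (the step names and desired-state items are invented; real orchestrators operate on far richer state):

```python
# Imperative: the user prescribes the exact steps to take (the HOW).
def deploy_imperative(cloud):
    cloud.append("create-network")
    cloud.append("boot-vm")
    cloud.append("install-app")
    return cloud

# Declarative: the user describes the desired end state (the WHAT);
# the orchestrator computes and performs whatever steps are missing.
def deploy_declarative(cloud, desired):
    for item in desired:
        if item not in cloud:  # the orchestrator decides the HOW
            cloud.append(item)
    return cloud

print(deploy_imperative([]))
print(deploy_declarative(["create-network"],
                         ["create-network", "boot-vm", "install-app"]))
```

The practical difference shows up on the second run: the imperative script blindly repeats its steps, while the declarative version converges on the desired state no matter what already exists.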

Based on this classification, intent engines are clearly declarative orchestrators, given their focus on WHAT. But is it fair to assume then that all declarative orchestrators are also intent engines? To answer that question, we need to take a closer look at what exactly can be meant by WHAT:

In one interpretation, WHAT can refer to “what things look like”. This is the STRUCTURAL point of view. Declarative orchestrators that use a structural approach expect service designers to describe the service topology that contains the entities that make up the service and the relationships between those entities.

On the other hand, WHAT can also refer to “what things are supposed to do”. This is the FUNCTIONAL point of view. Declarative orchestrators that use a functional approach expect service designers to describe expected behavior of a service or expected outcomes to be delivered by the service.

These two different categories tend to use entirely different technologies to describe what is expected:

Structure is typically described using service models—information models that describe a service as a set of components and their relationships. Declarative orchestrators that process service models can be referred to as model-driven orchestrators.

Behavior is typically described using policies that describe observable parameters of the service and the constraints with which these parameters have to comply. Declarative orchestrators that process policies can be referred to as policy-based orchestrators.

Using this classification, it should be clear then that intent engines fall in the functional category and are a type of policy-based system. While model-driven orchestrators and intent engines are both types of declarative orchestrators, they approach the problem from very different angles.

One could ask, then, which approach is best. Answering that question requires a bit more analysis of how to construct policies that can capture intent, but it is safe to say that intent-based systems present a higher level of abstraction than model-driven systems and as a result should be simpler to use. That said, I believe there is a clear synergy between intent-based and model-driven systems, which I hope to explore in more detail next.

Thursday, December 29, 2016

In the networking world, YANG has become the de-facto data modeling standard. For cloud applications, TOSCA is getting all the buzz. NFV straddles both worlds and is getting pull from both the YANG and the TOSCA camps. One could ask, then, whether we really need two languages. Shouldn’t we standardize on one, and if so, which one?

While these may sound like fair questions, they assume that both languages serve the same purpose and therefore are interchangeable. However, if we take a closer look at both languages, it becomes clear fairly quickly that this is not the case. TOSCA and YANG are in fact quite different. In a nutshell, YANG is a data modeling language while TOSCA is a service automation language.

YANG is in essence a replacement for SNMP MIBs. YANG models define the schema for configuration and state data of networking devices, and network management tools use the NETCONF protocol to manipulate these data. In the SDN world, YANG has been adopted as a general-purpose modeling language that can be used independent of NETCONF (for example, YANG defines the data models for the model-driven service abstraction layer (MD-SAL) in OpenDaylight) but it is still strictly a data modeling language: it defines data schema without associating any semantics with the data that are being modeled.

TOSCA, on the other hand, was designed as an automation language for deploying and managing cloud services. Unlike many other automation tools, TOSCA uses a declarative approach that starts with a description of what needs to be deployed, rather than with a prescription of the steps that need to be taken for the deployment. In TOSCA, this description consists of a service model that contains the components that make up the service (TOSCA nodes) as well as the relationships between these components. It is this service modeling functionality that invites comparisons with YANG.

However, this is not an apples-to-apples comparison, since unlike YANG models, TOSCA models carry service semantics: the language includes constructs such as nodes, relationships, requirements, and capabilities, and it uses these constructs to build service topologies. While TOSCA also includes general-purpose data modeling capabilities that are similar to YANG, its main contribution is in its focus on service modeling as well as on the associated service management functionality. Specifically:

TOSCA services and their components have life-cycle management interfaces that define operations for creating, configuring, deploying, and decommissioning services. TOSCA has built-in implementations for these operations for a number of “primitive” node and relationship types but service designers can provide their own implementations or create custom life-cycle interfaces.

TOSCA implements a standard workflow for deploying services using the operations of standard life-cycle interfaces, but service designers can also create their own custom workflows.

TOSCA allows service designers to specify policies with which their services have to comply. This policy support can be used as a framework for enabling intent-based interfaces.

TOSCA supports service templates that can be used as blueprints for instantiating new services.

It is these service management features that make TOSCA a service automation language rather than just a modeling language. This doesn’t imply that TOSCA is better than YANG, just that it serves a different purpose. In fact, there is plenty of room for both standards and it makes perfect sense to use YANG in the context of TOSCA. For example, when providing implementations for life-cycle management operations for network nodes, TOSCA could translate node properties and attributes into the corresponding YANG configuration data that are then applied to the physical devices using NETCONF-based tools. I’d love to see these types of ideas implemented so we can start focusing on the synergies between TOSCA and YANG and benefit from both.
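The TOSCA-to-YANG hand-off suggested above might look something like the following speculative sketch. The node properties, the property-to-leaf mapping, and the XML shape are all invented for illustration; a real implementation would render the payload against the device's actual YANG modules and push it with a NETCONF client library such as ncclient:

```python
import xml.etree.ElementTree as ET

# Hypothetical TOSCA node properties for a router node.
tosca_properties = {"hostname": "edge-1", "mtu": 9000}

# Invented mapping from TOSCA property names to YANG leaf names.
yang_leaf_for = {"hostname": "sys-name", "mtu": "interface-mtu"}

def to_netconf_config(properties):
    """Render node properties as a NETCONF-style <config> payload whose
    structure would, in practice, be dictated by the device's YANG model."""
    config = ET.Element("config")
    for prop, value in properties.items():
        leaf = ET.SubElement(config, yang_leaf_for[prop])
        leaf.text = str(value)
    return ET.tostring(config, encoding="unicode")

xml = to_netconf_config(tosca_properties)
print(xml)
```

In this arrangement TOSCA owns the service life-cycle (when and why to configure) while YANG owns the device data schema (what a valid configuration looks like), which is exactly the division of labor that lets the two standards complement rather than compete with each other.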

Tuesday, November 29, 2016

If you’re like me, you probably have a hard time figuring out exactly what is meant by various terms used in the network and cloud automation space. Sometimes it seems like we use the same term for a wide variety of different technologies that are only vaguely related. At other times, we use different terms to talk about similar technologies that differ in only subtle ways.

The best example of the first issue is the term Software Defined Networking (SDN). While the ONF at least provided a definition of this term (“physical separation of the control plane and the forwarding plane”), that definition was obsolete almost as soon as it was conceived. In fact, technology from the first successful SDN company (Nicira—now VMware NSX) arguably was not SDN at all, since it had nothing to do with “separating the control plane from the forwarding plane”. Instead, Nicira was a network virtualization platform for creating and managing virtual network overlays. Even today, pure (OpenFlow-based) SDN implementations are rare, and it seems that any network technology that relies on a “controller” is called SDN independent of whether that controller gets involved in control plane functionality or not.

That brings us to the second issue: what the heck is a controller anyway, and how is it different from other technologies that provide similar functionality? If an SDN controller manages the life-cycle of virtual network overlays, is it really still a controller? In the cloud space, technologies that manage the life-cycle of services and the (virtualized) components from which the service is constructed are called orchestrators. Given that most SDN controllers are used for managing virtual networks, shouldn’t they be called network orchestrators instead? What is it about an SDN controller that makes it a controller and not an orchestrator?

This may seem like intellectual nit-picking. However, finding similarities (as well as differences) between technologies from different domains can help simplify architectures, and in my opinion getting the architecture right is key to creating long-term sustainable technologies. For example, if SDN controllers are really orchestrators, then perhaps we should use common architectural constructs and common technologies for both. Specifically, shouldn’t we use the same declarative model-based approaches to interact with SDN controllers as we do with orchestrators?

I believe the lack of crispness in today’s SDN terminology is the result of some sloppy early SDN architecture work that completely ignored the management plane. The control plane was implemented by a controller, which communicated with multiple network elements that together implemented the forwarding plane. The management plane was nowhere to be found, nor was any integration with NMS and/or OSS systems. As a result, whenever management functionality was needed, implementers just shoved it into the controller since that was the only convenient place in which to put it.

This suggests that to start rationalizing terminology, we should start by more clearly organizing different functions along the three planes traditionally used in the networking world:

The forwarding plane is responsible for forwarding data traffic as directed by the control plane. By the way, it has become practical in recent years to implement this forwarding functionality entirely in software, using virtual switches or software routers. Some people have started referring to such software forwarding as SDN. This is not SDN. This is just “software switching” or “software routing”.

The control plane is responsible for making decisions about where to forward traffic. In an SDN world, some of the control plane functionality can be centralized in a controller, which makes it easier to provide novel forwarding functionality beyond the shortest-path or lowest-cost path algorithms used by traditional routing protocols.

The management plane provides the functionality used to configure and manage the other two planes. Traditional management functions include fault handling, configuration, accounting, performance management, and security. In virtualized environments, the management plane is also responsible for life-cycle management.

Using these three different planes as a guideline, I suggest we should adopt the following rules to help improve communication in the network and cloud automation space:

Use the word “controller’ exclusively to refer to those software modules that actually implement control plane functionality

Use the word “orchestrator” to refer to the specific component of the management plane that is responsible for life-cycle management

Use the word “network orchestrator” (or network services orchestrator) to refer to orchestrators that have responsibility for managing the life-cycle of virtual networks.

If we rationalized terminology in this fashion, we would have an easier time coordinating discussions between cloud and networking people about virtual networks and how they fit into broader cloud-based services. Specifically, we could start harmonizing various model-driven deployment technologies.

Of course, this opens up a whole new can of worms. What exactly is a model-driven approach, and how does it relate to declarative versus imperative? Where does intent fit in? Lots more room for confusion. I hope to provide my thoughts on these terms in a future post.