“The difference in mind between man and the higher animals, great as it is, certainly is one of degree and not of kind” – Charles Darwin (1871) (quoted in Frans de Waal (2016), “Are We Smart Enough to Know How Smart Animals Are?”, W. W. Norton.)

In biological systems, natural intelligence evolved over a long period of time to include managed information-processing methods transcending what genes and neurons do. As history accumulates with experience, it is processed to create new best practices and gain new predictive insight, allowing organisms to configure/reconfigure, monitor and control themselves and their environment harmoniously, assuring safety and survival while executing their intent most efficiently. Genes transmit encoded information from the survivor to the successor. Neurons organize themselves into a nervous system, with or without a brain (as in some organisms), to deal with real-time sensory information and motor coordination. In both cases, an overlay control architecture allows the coordination of integrated information from different subsystems. Together they comprise the body (consisting of sensors and motors), the nervous system (including the brain) providing coordination of sensory and motor activities with 4E (embedded, enacted, extended and embodied) cognition, and an overlay supervisory control architecture (the mind?) that integrates information from various quarters.

The result is a being with a sense of self-identity, intelligence (the ability to acquire and apply knowledge and skills), sentience (the capacity to feel, perceive or experience) and resilience (the capacity to recover quickly from difficulties without requiring a reboot) – all the ingredients for developing various degrees of consciousness.

Current-generation neural-network-based systems, despite all the marketing hype, fall short of mimicking even the lowest degree of consciousness of biological systems, let alone humans. Before we implement consciousness in the digital universe, we must first understand the true nature of cognition and how to implement it, because cognition is a prerequisite for consciousness and culture to evolve.

Both Alan Turing and John von Neumann pointed out in 1948 that symbolic computing and neural networks are two sides of the same coin of information processing. Turing, while discussing how to organize unorganized machinery, says: “If we are trying to produce an intelligent machine, and are following the human model as closely as we can, we should begin with a machine with very little capacity to carry out elaborate operations or to react in a disciplined manner to orders (taking the form of interference). Then by applying appropriate interference, mimicking education, we should hope to modify the machine until it could be relied on to produce definite reactions to certain commands. This would be the beginning of the process. I will not attempt to follow it further now.” He is here sowing the seeds for a process to infuse cognition into machines.

John von Neumann says (Hixon Lecture 1948) “It has often been claimed that the activities and functions of the human nervous system are so complicated that no ordinary mechanism could possibly perform them. It has also been attempted to name specific functions which by their nature exhibit this limitation. It has been attempted to show that such specific functions, logically, completely described, are per se unable of mechanical, neural realization. The McCulloch-Pitts result puts an end to this. It proves that anything that can be exhaustively and unambiguously described, anything that can be completely and unambiguously put into words, is ipso facto realizable by a suitable finite neural network. Since the converse statement is obvious, we can therefore say that there is no difference between the possibility of describing a real or imagined mode of behavior completely and unambiguously in words, and the possibility of realizing it by a finite formal neural network.”

This is an important insight that brings out the closeness between genetic and brain computing modes on the one hand, and Turing-machine-based computing and neural networks on the other. “The two concepts are co-extensive. A difficulty of principle embodying any mode of behavior in such a network can exist only if we are also unable to describe that behavior completely.” He asserts that there is an equivalence between logical principles and their embodiment in a neural network; while in simpler cases the principles might furnish a simplified expression of the network, in cases of extreme complexity the reverse may well be true.

Cognition is the ability to process information, apply knowledge, and change the circumstance. Cognition is associated with intent and its accomplishment through various processes that monitor and control a system and its environment. Cognition is associated with a sense of “self” (the observer) and the systems with which it interacts (the environment or the “observed”). Cognition extensively uses time and history in executing and regulating the tasks that constitute a cognitive process. However, as Cockshott et al. (“Computation and Its Limits”, Oxford University Press, 2012) point out, Turing’s system is limited to single, sequential processes and is not amenable to expressing dynamic concurrent processes, where changes in one process can influence changes in other processes while computation is still in progress; this is an essential requirement for describing cognitive processes. Concurrent task execution and regulation require a systemic view of the context, constraints, communication and control, where the identities, autonomic behaviors and associations of individual components must also be part of the description. However, an important implication of Gödel’s incompleteness theorem is that it is not possible to have a finite description with the description itself as a proper part. In other words, it is not possible to read yourself or process yourself as a process.

The last paragraph of the last chapter of the book “Computation and Its Limits” brings out the limitation of current computing models in infusing cognition that results in intelligence, sentience and resilience with self-identity: “The key property of general-purpose computers is that they are general purpose. We can use them to deterministically model any physical system, of which they are not themselves a part, to an arbitrary degree of accuracy. Their logical limits arise when we try to get them to model a part of the world that includes themselves.”

Autonomic computing, by definition, implies two components in the system: 1) the observer (or the “self”) and 2) the observed (or the environment), with which the observer interacts by monitoring and controlling various aspects that are of importance. It also implies that the observer is aware of systemic goals, in terms of best practices, to measure and control its interaction with the observed. Autonomic computing systems attempt to model system-wide actors and their interactions to monitor and control various domain-specific goals, also in terms of best practices. However, cellular organisms take a more selfish view, defining their models around how they interact with their environment. The autonomic behavior in living organisms is attributed to the “self” and “consciousness”, which contribute to defining one’s multiple tasks to reach specific goals within a dynamic environment and to adapting behavior accordingly.

The autonomy in cellular organisms comes from at least three sources:

1) Genetic knowledge that is transmitted by the survivor to its successor in the form of executable workflows and control structures that describe stable patterns for optimally deploying the available resources to assure the organism’s safekeeping in interacting with its environment.

2) The ability to dynamically monitor and control the organism’s own behavior, along with its interaction with its environment (neural networks), using the genetic descriptions, and

3) Developing a history by memorizing transactions and identifying new associations through analysis.

In short, the genetic computing model allows the formulation of descriptions of workflow components that capture not only the content of how to accomplish a task but also the context, constraints, control and communication needed to assure systemic coordination in accomplishing the overall purpose of the system.
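The idea of a description that carries task content together with context, constraints, control and communication can be sketched as a data structure. The following is a minimal illustration; all names and fields are hypothetical, not drawn from any particular implementation:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a workflow-component description carrying not just
# the task content but also the context, constraints, control and
# communication needed for systemic coordination.
@dataclass
class WorkflowComponent:
    name: str
    task: str                                          # content: how to do the task
    context: dict = field(default_factory=dict)        # where/when it applies
    constraints: dict = field(default_factory=dict)    # resource/QoS limits
    control: list = field(default_factory=list)        # supervisory policies
    communication: list = field(default_factory=list)  # peers it coordinates with

replicate = WorkflowComponent(
    name="replicate",
    task="copy component state to a standby node",
    context={"trigger": "node degradation"},
    constraints={"max_latency_ms": 50},
    control=["pause-on-overload"],
    communication=["supervisor", "standby-node"],
)
print(replicate.name, replicate.constraints["max_latency_ms"])
```

The point of the sketch is only that the "how" (the task) and the "under what conditions, with whom, and within what limits" (the four C's) live in one transmissible description, as in a genetic specification.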

A path to cognitive software is to first integrate the symbolic and neural-network computing models and infuse a sense of self and a composition-and-control scheme that creates autonomous behavior, allowing the system to know its intent and to configure, monitor and control its behavior in the face of the non-deterministic impact of its interactions, both internal and with its environment. Cognition is the first step from computing and communication toward consciousness and culture (the ability of individuals to learn habits from one another, resulting in behavioral diversity between groups), which provide a global information processing system. Infusing cognition into computing and communications software requires us to first understand the true nature of cognition: how 4E cognition evolved from single cells, multi-cellular organisms, plants, and animals to humans.

It would also allow us to design next-generation software systems with various degrees of consciousness appropriate to different tasks.

I am following two avenues that focus on future computing paradigms going beyond Turing machines and neural networks. First,

“We grow in direct proportion to the amount of chaos we can sustain and dissipate” ― Ilya Prigogine, Order out of Chaos: Man’s New Dialogue with Nature

Abstract

According to Gartner “Alpha organizations aggressively focus on disruptive innovation to achieve competitive advantage. Characterized by unknowns, disruptive innovation requires business and IT leaders to go beyond traditional management techniques and implement new ground rules to enable success.”

While there is a lot of buzz about “game changing” technologies and “disruptive innovation”, real “game changers” and “disruptive innovators” are few and far between. Leap-frog innovation is more like a “phase transition” in physics. A system is composed of individual elements, each with a well-defined function, which interact with each other and with the external world through a well-defined structure. The system usually exhibits normal equilibrium behavior that is predictable; when there are small fluctuations, incremental innovation allows the system to adjust itself and maintain the equilibrium with predictability. Only when external forces inflict large or wild unexpected fluctuations on the system is the equilibrium threatened, and the system exhibits an emergent behavior in which an unstable equilibrium introduces unpredictability into the evolution dynamics of the system. A phase transition occurs with a reconfiguration of the structure of the system, which goes through an architectural transformation resulting in order out of chaos.

The difference between “Kaizen” (incremental improvement) and “disruptive innovation” lies in dealing with a stable equilibrium with small fluctuations versus dealing with a meta-stable equilibrium with large-scale fluctuations. The current datacenter is in a similar transition from “being” to “becoming”, driven by both the hyper-scale structure and the fluctuations (which the hardware and software systems delivering business processes are experiencing) caused by rapidly changing business priorities on a global scale, workload fluctuations and latency constraints. Is the current von Neumann stored-program-control implementation of the Turing machine reaching its limit? Is the datacenter poised for a phase transition from current ad-hoc distributed computing practices to a new theory-driven self-* architecture? In this blog we discuss a non-von Neumann managed Turing oracle machine network with a control architecture as an alternative.

The representation of the dynamics of a physical system as a linear, reversible (hence deterministic), temporal order of states requires that, in a deep sense, physical systems never change their identities through time; hence they can never become anything radically new (e.g., they must at most merely rearrange their parts, parts whose being is fixed). However, as elements interact with each other and their environment, the system dynamics can change dramatically when large fluctuations in the interactions induce a structural transformation leading to chaos and the eventual emergence of a new order out of chaos. This is denoted as “becoming”. In short, the dynamics of near-equilibrium states with small-scale fluctuations in a system represent the “being”, while large deviations from equilibrium, the emergence of an unstable equilibrium and the final restoration of order in a new equilibrium state represent the “becoming”. According to Plato, “being” is absolute, independent, and transcendent. It never changes and yet causes the essential nature of things we perceive in the world of “becoming”. The world of becoming is the physical world we perceive through our senses; this world is always in movement, always changing. The two aspects, the static structures and the dynamics of their evolution, are two sides of a coin. Dynamics (becoming) represents time, and the static configuration at any particular instant represents the “being”. Prigogine applied this concept to understand the chemistry of matter, phase transitions and the like. Individual elements represent function, and the groups (constituting a system) represent structure with dynamics. Fluctuations caused by the interactions within the system and between the system and its environment cause the dynamics of the system to induce transitions from being to becoming. Thus, function, structure and fluctuations determine the system and its dynamics, defining the complexity, chaos and order.

Why is it Relevant to Datacenters?

Datacenters are dynamic systems where software, working with hardware, delivers information processing services that allow modeling, interaction, reasoning, analysis and control of the environment external to them. Figure 1 shows the hardware and software, their interactions among themselves and with the external world. There are two distinct systems interacting with each other to deliver the intent of the datacenter, which is to execute specific computational workflows that model, monitor and control external-world processes using the computing resources:

Service workflows modeling the process dynamics of the system, depicting the external world and its interactions. Usually this consists of the functional requirements of the system under consideration, such as business logic and sensor/actuator monitoring and control (the computed). The model consists of various functions captured in a structure (e.g., a directed acyclic graph, or DAG) and its evolution in time. This model does not include the computing resources required to execute the process dynamics; it is assumed that the resources (CPU, memory, time, etc.) will be available for the computation.

The non-functional requirements that address the resources required to execute the functions as a function of time and of fluctuations, both in the interactions in the external world and in the computing resources available to accomplish the intent defined in the functional requirements. Computation as implemented in the von Neumann stored-program-control model of the Turing machine requires time (impacted by CPU speed, network latency, bandwidth, storage IOPS, throughput and capacity) and memory. The computing model assumes unbounded resources, including time, for completing the computation. Today, these resources are provided by a cluster of servers and other devices containing multi-core CPUs and memory, networked with different types of storage. The computations are executed in the server or device by allocating resources through an operating system, itself software that mediates the resources among various computations.

On the right-hand side of Figure 1, we depict the computing resources required to execute the functions in a given structure, whether distributed or not. In the middle, we represent the application workflows composed of various components constituting an application area network (AAN) that is executed in a distributed computing cluster (DCC) made up of hardware resources with specified service levels (CPU, memory, network bandwidth, cluster latency, storage IOPS, throughput and capacity). The left-hand side shows a desired end-to-end process configuration, evolution monitoring and control mechanism. When all is said and done, the process workflows need to execute various functions using the computing resources made available in the form of a distributed cluster providing the required CPU, memory, network bandwidth, latency, storage IOPS, throughput and capacity. The structure is determined by the non-functional requirements such as resource availability, performance, security and cost. Fluctuations evolve the process dynamics and require adjusting the resources to meet the needs of applications coping with the fluctuations.
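The separation of functional and non-functional descriptions above can be illustrated with a toy model: a workflow DAG that carries its functions and, kept separately on each node, the resource needs used for placement. All names and numbers here are invented for illustration:

```python
# Hypothetical sketch: a service workflow as a DAG of functions (the
# functional requirements), with per-node resource needs (the
# non-functional requirements) kept as a separate, decoupled description.
workflow = {
    "ingest":    {"next": ["transform"], "needs": {"cpu": 2, "mem_gb": 4}},
    "transform": {"next": ["store"],     "needs": {"cpu": 4, "mem_gb": 8}},
    "store":     {"next": [],            "needs": {"cpu": 1, "mem_gb": 16}},
}

def total_demand(dag):
    """Aggregate resource demand across the DAG (usable for cluster placement)."""
    demand = {}
    for node in dag.values():
        for resource, amount in node["needs"].items():
            demand[resource] = demand.get(resource, 0) + amount
    return demand

print(total_demand(workflow))  # {'cpu': 7, 'mem_gb': 28}
```

Because the "needs" are a separate annotation rather than part of the business logic, the same DAG can be placed on differently sized clusters without touching the functional description.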

Figure 1: Decoupling service orchestration and infrastructure orchestration to deliver function, structure and dynamic process flow to address the fluctuations both in resource availability and service demand

There are two ways to match the available resources to the computing nodes, connected by links, that execute the business process dynamics. The first approach is the current state of the art; the second is an alternative approach based on extensions to the current von Neumann stored-program implementation of the Turing machine.

Current State of the Art

The infrastructure is infused with intelligence about various applications and their evolving needs, and adjusts the resources (the time of computation, affected by CPU, network bandwidth, latency, storage capacity, throughput and IOPS, and the memory required for the computation). Current IT has evolved from a model where resources are provisioned anticipating peak workloads and the structure of the application network is optimized for coping with deviations from equilibrium. Conventional computing models using physical servers (often referred to as bare metal) cannot cope with wild fluctuations when new-server provisioning times are much larger than the time it takes for fluctuations to set in, and when their magnitude is not predictable enough to pre-plan the provisioning of additional resources. Virtualization of the servers and on-demand provisioning of virtual machines reduce provisioning times substantially, making it possible to institute auto-scaling, auto-failover and live migration across distributed resources using virtual machine image mobility. However, this comes at a price:

The virtual image is still tied to the infrastructure (the network, storage and computing resources supporting the VM), and moving a VM involves manipulating a multitude of distributed resources, often owned or operated by different owners, and touches many infrastructure management systems, thus increasing the complexity and cost of management.

If the distributed infrastructure is homogeneous and supports VM mobility, the solution is simpler, but it forces vendor lock-in and does not allow one to take advantage of commodity infrastructure offered by multiple suppliers.

If the distributed infrastructure is heterogeneous, VM mobility must now depend on myriad management systems, and most often these management systems themselves need other management systems to manage their resources.

VM mobility and management also increase bandwidth and storage requirements and lead to a proliferation of point solutions and tools for moving across heterogeneous distributed infrastructure, which increase operational complexity and add cost.

The current state of the art, based on the mobility of VMs and infrastructure orchestration, is summarized in Figure 2.

Figure 2: The infrastructure orchestration based on second guessing the application quality of service requirements and its dynamic behavior

It clearly shows the futility of orchestrating service availability, performance, compliance, cost and security in a highly distributed and heterogeneous environment where scale and fluctuations dominate. The cost and complexity of navigating multiple infrastructure service offerings often outweigh the benefits of commodity computing. This is one reason why enterprises complain that 70% of their budget is often spent on keeping the service lights on.

Alternative Approach: A Clean Separation of Business Logic Implementation and the Operational Realization of Non-functional Requirements

Another approach is to decouple application and business process workflow management from distributed infrastructure mobility by placing the applications in the right infrastructure with the right resources, monitoring the evolution of the applications, and proactively managing the infrastructure to add or delete resources with predictability based on history. Based on the RPO and RTO, the application structure is adjusted to create active/passive or active/active nodes to manage application QoS and workflow/business-process QoS. This approach requires a top-down method of business process implementation: the specification of the business process intent, followed by a hierarchical and temporal specification of process dynamics with the context, constraints, communication and control of the group and its constituents, and the initial conditions for the equilibrium quality of service (QoS). The details include:

Non-functional requirements that specify availability, performance, security, compliance and cost constraints, with the policies specified as hierarchical and temporal process flows. The intent at the higher level is translated into the downstream intent of the computing nodes contributing to the workflow.

A method to implement autonomic behavior with visibility and control of application components so that they can be managed with the defined policies. When scale and fluctuations demand a change in the structure to transition to a new equilibrium state, the policy implementation processes proactively add or subtract computing nodes, or find existing nodes to replicate, repair, recombine or reconfigure the application components. The structural change implements the transition from being to becoming.
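The policy behavior just described can be sketched as a simple control loop. The thresholds and names below are hypothetical, chosen only to show the shape of the mechanism:

```python
# Hypothetical sketch of the scaling policy: when the observed load per node
# leaves the equilibrium band, the policy adds or subtracts computing nodes
# to restore a new equilibrium; small fluctuations leave the structure alone.
def rebalance(nodes, load_per_node, low=0.3, high=0.8):
    """Return the new node count after applying the scaling policy."""
    if load_per_node > high:               # large fluctuation: scale out
        return nodes + 1
    if load_per_node < low and nodes > 1:  # sustained slack: scale in
        return nodes - 1
    return nodes                           # small fluctuation: stay put

nodes = 4
for load in [0.9, 0.85, 0.5, 0.2]:         # observed load per node over time
    nodes = rebalance(nodes, load)
print(nodes)  # 5
```

A real policy engine would of course also weigh RPO/RTO, placement and cost, but the structural change (adding or removing nodes) is the step that implements the transition from being to becoming.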

A New Architecture to Accommodate Scale and Fluctuations: Toward the Oneness of the Computer and the Computed

There is a fundamental reason why the current Turing/von Neumann stored-program computing model cannot address large-scale distributed computing with fluctuations, both in resources and in computation workloads, without increasing complexity and cost (Mikkilineni et al. 2012). As von Neumann put it, “It is a theorem of Gödel that the description of an object is one class type higher than the object.” An important implication of Gödel’s incompleteness theorem is that it is not possible to have a finite description with the description itself as a proper part. In other words, it is not possible to read yourself or process yourself as a process. In short, Gödel’s theorems prohibit “self-reflection” in Turing machines. According to Alan Turing, “Gödel’s theorems show that every system of logic is in a certain sense incomplete, but at the same time it indicates means whereby from a system L of logic a more complete system L′ may be obtained. By repeating the process we get a sequence L, L1 = L′, L2 = L1′ … each more complete than the preceding. A logic Lω may then be constructed in which the provable theorems are the totality of theorems provable with the help of the logics L, L1, L2, … Proceeding in this way we can associate a system of logic with any constructive ordinal. It may be asked whether such a sequence of logics of this kind is complete in the sense that to any problem A there corresponds an ordinal α such that A is solvable by means of the logic Lα.”

This observation, along with his introduction of the oracle machine, influenced many theoretical advances, including the development of generalized recursion theory that extended the concept of an algorithm. “An o-machine is like a Turing machine (TM) except that the machine is endowed with an additional basic operation of a type that no Turing machine can simulate.” Turing called the new operation the ‘oracle’ and said that it works by ‘some unspecified means’. When the Turing machine is in a certain internal state, it can query the oracle for an answer to a specific question and act accordingly depending on the answer. The o-machine provides a generalization of the Turing machine, a means to address the impact of Gödel’s incompleteness theorems and problems that are not explicitly computable but are limit-computable using relative reducibility and relative computability.
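A toy illustration of the idea follows, with the ‘unspecified means’ stood in for by an ordinary Python function; this cannot, of course, capture a genuinely uncomputable oracle, but it shows the query-and-branch shape of an o-machine:

```python
# Hypothetical sketch of an o-machine: an ordinary step-by-step machine
# that, at a designated point, queries an oracle ("by some unspecified
# means") and branches on the answer.
def run(tape, oracle):
    """Toy machine: sums the tape, but asks the oracle whether to halt early."""
    total = 0
    for i, symbol in enumerate(tape):
        if oracle(tape[i:]):   # query the oracle about the remaining tape
            break              # act on the answer: halt early
        total += symbol
    return total

# A stand-in oracle: "halt if what remains sums to zero"
halting_oracle = lambda rest: sum(rest) == 0
print(run([3, 4, 0, 0], halting_oracle))  # 7
```

The essential point is that the oracle sits outside the machine's own instruction cycle: the machine does not compute the answer, it only consumes it, which is what lets the generalization reach beyond explicit computability.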

According to Mark Burgin, an information processing system (IPS) “has two structures—static and dynamic. The static structure reflects the mechanisms and devices that realize information processing, while the dynamic structure shows how this processing goes on and how these mechanisms and devices function and interact.”

The software contains the algorithms (à la the Turing machine) that specify information-processing tasks, while the hardware provides the resources required to execute the algorithms. The static structure is defined by the association of software and hardware devices, and the dynamic structure is defined by the execution of the algorithms. The meta-knowledge of the intent of the algorithm, the association of a specific algorithm execution with a specific device, and the temporal evolution of information processing and exception handling when the computation deviates from the intent (be it because of software behavior, hardware behavior, or their interaction with the environment) lies outside the software and hardware design and is expressed in the non-functional requirements. Mark Burgin calls this the Infware, which contains the description and specification of this meta-knowledge and can also be implemented using hardware and software to enforce the intent with appropriate actions.

The implementation of the Infware using Turing machines introduces the same dichotomy mentioned by Turing with respect to the manager-of-managers conundrum. This is consistent with the observation of Cockshott et al. (2012): “The key property of general-purpose computers is that they are general purpose. We can use them to deterministically model any physical system, of which they are not themselves a part, to an arbitrary degree of accuracy. Their logical limits arise when we try to get them to model a part of the world that includes themselves.”

The goals of the distributed system determine the resource requirements and computational process definition of individual service components based on their priorities, workload characteristics and latency constraints. The overall system resiliency, efficiency and scalability depend upon the individual service components’ workloads and the latency characteristics of their interconnections, which in turn depend on the placement of these components (configuration) and the available resources. Resiliency (fault, configuration, accounting, performance and security, often denoted by FCAPS) is measured with respect to a service’s tolerance to faults, fluctuations in contention for resources, performance fluctuations, security threats and changing system-wide priorities. Efficiency reflects optimal resource utilization. Scaling addresses end-to-end resource provisioning and management with respect to increasing the number of computing elements required to meet service needs.

A possible solution addressing resiliency with respect to scale and fluctuations is an application network architecture based on increasing the intelligence of computing nodes, presented at the Turing Centenary Conference (2012) for improving the resiliency, efficiency and scaling of information processing systems. In essence, the distributed intelligent managed element (DIME) network architecture (DNA) extends the conventional computational model of information processing networks, allowing improvement of the efficiency and resiliency of computational processes. This approach is based on organizing the process dynamics under the supervision of intelligent agents. The DIME network architecture utilizes the DIME computing model, with a non-von Neumann parallel implementation of a managed Turing machine with a signaling network overlay, and adds cognitive elements to evolve super-recursive information processing. The DIME network architecture introduces three key functional constructs to enable process design, execution, and management, improving both the resiliency and the efficiency of application area networks delivering distributed service transactions using both software and hardware (Burgin and Mikkilineni):

Machines with an Oracle: Executing an algorithm, the DIME basic processor P performs the {read -> compute -> write} instruction cycle or its modified version, the {interact with a network agent -> read -> compute -> interact with a network agent -> write} instruction cycle. This allows different network agents to influence the further evolution of a computation while the computation is still in progress. We consider three types of network agents: (a) a DIME agent, (b) a human agent, and (c) an external computing agent. It is assumed that a DIME agent knows the goal and intent of the algorithm (along with the context, constraints, communications and control of the algorithm) the DIME basic processor is executing, and has visibility of the available resources and of the needs of the basic processor as it executes its tasks. In addition, the DIME agent also has knowledge of the alternate courses of action available to facilitate the evolution of the computation to achieve its goal and realize its intent. Thus, every algorithm is associated with a blueprint (analogous to a genetic specification in biology), which provides the knowledge required by the DIME agent to manage the process evolution. An external computing agent is any computing node in the network with which the DIME unit interacts.

Blueprint- or policy-managed fault, configuration, accounting, performance and security monitoring and control (FCAPS): The DIME agent, which uses the blueprint to configure, instantiate, and manage the DIME basic processor executing the algorithm, uses concurrent DIME basic processors (with their own blueprints specifying their evolution) to monitor the vital signs of the managed DIME basic processor, and implements various policies to assure non-functional requirements such as availability, performance, security and cost management while the managed DIME basic processor is executing its intent. This approach integrates the evolution of the execution of an algorithm with the concurrent management of available resources to assure the progress of the computation.

DIME network management control overlay over the managed Turing oracle machines: In addition to the read/write communication of the DIME basic processor (the data channel), DIME basic processors communicate with each other using a parallel signaling channel. This allows external DIME agents to influence the computation of any managed DIME basic processor in progress, based on the context and constraints. The external DIME agents are DIMEs themselves. As a result, changes in one computing element can influence the evolution of another computing element at run time without halting the Turing machine executing its algorithm. The signaling channel and the network of DIME agents can be programmed to execute a process whose intent can be specified in a blueprint. Each DIME basic processor can have its own oracle managing its intent, and groups of managed DIME basic processors can have their own domain managers implementing the domain’s intent to execute a process. The management DIME agents specify, configure, and manage the sub-network of DIME units by monitoring and executing policies to optimize the resources while delivering the intent.
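Two of the constructs above, the managed instruction cycle and the parallel signaling channel, can be sketched together. This is a minimal illustration with invented names, not the published DIME implementation:

```python
# Hypothetical sketch: the managed instruction cycle
# {interact -> read -> compute -> interact -> write}, with a parallel
# signaling channel that lets an agent influence the computation while it
# is in progress, without halting the underlying machine.
import queue

signal_channel = queue.Queue()   # control signals, parallel to the data path

class DimeAgent:
    """Knows the intent of the algorithm and monitors its vital signs."""
    def __init__(self, blueprint):
        self.blueprint = blueprint          # analogous to a genetic specification
        self.log = []
    def interact(self, phase, value):
        self.log.append((phase, value))     # record the vital sign
        while not signal_channel.empty():   # check the signaling overlay
            if signal_channel.get() == "abort":
                return "abort"              # an external agent vetoes the write
        return "ok"

def managed_cycle(memory, key, compute, agent):
    """{interact -> read -> compute -> interact -> write} instruction cycle."""
    agent.interact("pre-read", memory.get(key))
    value = memory[key]                     # read
    result = compute(value)                 # compute
    if agent.interact("pre-write", result) != "abort":
        memory[key] = result                # write
    return memory

agent = DimeAgent(blueprint={"intent": "double x"})
mem = managed_cycle({"x": 5}, "x", lambda v: v * 2, agent)
print(mem["x"], len(agent.log))  # 10 2
```

The design point being illustrated is only the separation of channels: the data path carries the computation's reads and writes, while the signaling path carries control, so an external agent can redirect or veto a step mid-computation.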

The result is a new computing model, a management model and a programming model that infuse self-awareness, using an intelligent Infware, into a group of software components deployed on a distributed cluster of hardware devices, while enabling the monitoring and control of the dynamics of computation to conform to the intent of the computational process. The DNA-based control architecture appropriately configures the software and hardware components to execute the intent. As the computation evolves, the control agents monitor the evolution and make appropriate adjustments to maintain an equilibrium conforming to the intent. When fluctuations create conditions for unstable equilibrium, the control agents reconfigure the structure, based on policies, to create a new equilibrium state that conforms to the intent.

Figure 3 shows the Infware, hardware and software executing a web service using DNA.

Figure 3: Hardware and software networks with a process control Infware orchestrating the life-cycle evolution of a web service deployed on a Distributed Computing Cluster

The hardware components are managed dynamically to configure an elastic distributed computing cluster (DCC) to provide the required resources to execute the computations. The software components are organized as managed Turing oracle machines with a control architecture to create AANs that can be monitored and controlled to execute the intent using the network management abstractions of replication, repair, recombination and reconfiguration. With DNA, the datacenters are able to evolve from being to becoming.
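The four network-management abstractions named above can be caricatured as operations on a dictionary of nodes. This is a toy model under invented names, not the DNA implementation, but it shows the shape of each abstraction.

```python
def replicate(net, node):
    """Replication: clone a node's configuration under a new name."""
    net[node + "_copy"] = dict(net[node])

def repair(net, node):
    """Repair: restore a failed node to a healthy state."""
    net[node]["healthy"] = True

def recombine(net, a, b, name):
    """Recombination: compose two nodes' configurations into a new one."""
    net[name] = {**net[a], **net[b]}

def reconfigure(net, node, **params):
    """Reconfiguration: change a node's parameters in place."""
    net[node].update(params)

net = {"web": {"healthy": False, "threads": 4}}
repair(net, "web")
replicate(net, "web")
reconfigure(net, "web_copy", threads=8)
```

In DNA these operations are applied to managed DIME units by their domain managers, driven by the blueprint rather than by direct calls.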

It is important to note that DNA has been implemented (Mikkilineni et al. 2012, 2014) to demonstrate two functions that cannot be accomplished today with the current state of the art:

Migrating a workflow being executed in a physical server (a web service transaction including a web server, application server and a database) to another physical server without a reboot or loss of transactions, maintaining recovery time and recovery point objectives. No virtual machines are required, although they can be used just as if they were bare-metal servers.

Providing workflow auto-scaling, auto-failover and live migration with retention of application state across distributed computing clusters with heterogeneous infrastructure (bare-metal servers, private and public clouds, etc.) without infrastructure orchestration (e.g., without moving virtual machine images or LXC container-based images).

The approach using DNA allows the implementation of the above functions without requiring changes to existing applications, OSs or current infrastructure because the architecture non-intrusively extends the current Turing computing model to a managed Turing oracle machine network with control network overlay. It is not a coincidence that similar abstractions are present in how cellular organisms, human organizations and telecommunication networks self-govern and deliver the intent of the system (Mikkilineni 2012).

Only time will tell if the DNA implementation of Infware is an incremental or leap-frog innovation.

Acknowledgements

This work originated from discussions started in IEEE WETICE 2009 to address the complexity, security and compliance issues in Cloud Computing. The work of Dr. Giovanni Morana, the C3DNA Team and the theoretical insights from Professor Eugene Eberbach, Professor Mark Burgin and Pankaj Goyal are behind the current implementation of DNA.

“It’s very likely that on the basis of philosophy that every error has to be caught, explained, and corrected, a system of the complexity of the living organism would not run for a millisecond.”
– von Neumann, Papers of John von Neumann on Computing and Computing Theory, Hixon Symposium, September 20, 1948, Pasadena, CA, The MIT Press, 1987.

Communication, Collaboration and Commerce at the Speed of Light:

With the advent of many-core servers, high-bandwidth network technologies connecting them, and a new class of high-performance storage devices that can be optimized to meet workload needs (IOPs-intensive, throughput-sensitive or capacity-hungry workloads), the Information Technology (IT) industry is looking at a transition from its server-centric, low-bandwidth, client-server origins to geographically distributed, highly scalable and resilient environments for composed service creation, delivery and assurance that meet rapidly changing business priorities, latency constraints, fluctuations in workloads and the availability of required resources. Distributed service composition and delivery brings new challenges with scale and with fluctuations both in demand and in the availability of resources. New approaches are emerging to improve the resiliency and efficiency of distributed system design, deployment, management and control.

The Jazz Metaphor:

The quest for transition is best described by the Jazz metaphor aptly summarized by Holbrook [1] (Holbrook 2003), “Specifically, creativity in all areas seems to follow a sort of dialectic in which some structure (a thesis or configuration) gives way to a departure (an antithesis or deviation) that is followed, in turn, by a reconciliation (a synthesis or integration that becomes the basis for further development of the dialectic). In the case of jazz, the structure would include the melodic contour of a piece, its harmonic pattern, or its meter…. The departure would consist of melodic variations, harmonic substitutions, or rhythmic liberties…. The reconciliation depends on the way that the musical departures or violations of expectations are integrated into an emergent structure that resolves deviation into a new regularity, chaos into a new order, and surprise into a new pattern as the performance progresses.”

The Thesis:

The thesis in the IT evolution is the automation of business processes and service delivery using client-server architectures. It served well as long as the service scale and the fluctuations of service delivery infrastructure resources stayed within bounds that allowed operators to increase or decrease available resources and meet the fluctuating demands. In addition, the resiliency of the service was adjusted by improving the resiliency (availability, performance and security) of the infrastructure through various appliances, processes and tools. This introduced a timescale for meeting the resiliency required by various applications, expressed in terms of recovery time objectives and recovery point objectives. The resulting management “time constant” (defined as the time to recover a service to meet customer satisfaction) has been continuously decreasing with the use of newer technologies, tools and process automation.
However, with the introduction of the high-speed Internet, access to mobile technology and the globalization of e-commerce, the scale and fluctuations in service demand have radically changed, which has placed challenging demands on provisioning resources within shorter and shorter periods of time. Figure 1 summarizes the key drivers that are forcing the drastic reduction of the management time constant.

Figure 1: Global communication, collaboration and commerce at the speed of light is forcing the drastic reduction in IT resource management time constant

The Anti-Thesis:

The result is the anti-thesis (the word is not used pejoratively; in the Jazz metaphor it denotes innovation, creativity and a touch of anti-establishment rebellion): virtualize infrastructure management (compute, storage and network resources) and provide intelligent resource management services that utilize commodity infrastructure connected by fat pipes. The software-defined data center (SDDC) represents the dynamic provisioning of server clusters connected by a network attached to the required storage, all meeting the service levels required by the applications that are composed to create a service transaction. The idea is to monitor the resource utilization of these service components and adjust the resources as required to meet the Quality of Service (QoS) needs of the service transaction (in terms of CPU, memory, network bandwidth, latency, storage throughput, IOPs and capacity). Network function virtualization (NFV) denotes the dynamic provisioning and management of network services such as routing, switching and controlling commodity hardware that is solely devoted to connecting various devices to assure the desired network bandwidth and latency. Storage function virtualization (SFV) similarly denotes the dynamic provisioning and management of commodity storage hardware with the required IOPs, throughput and capacity. ACI denotes an application-centric infrastructure that is sensitive to the needs of a particular application and dynamically adjusts the resources to provide the right CPU, memory, bandwidth, latency, storage IOPs, throughput and capacity. The drive away from proprietary network and storage equipment toward commodity high-performance hardware, made ubiquitous with open interface architectures, is intended to foster competition and innovation in both hardware and software.
The open software is supposed to match the needs of the application by tuning the resources dynamically, using the compute, network and storage management functions made available as open-source software.

Unfortunately, the anti-thesis brings its own issues in transforming the current infrastructure, which has evolved over a few decades, to the new paradigm.

The new approach has to accommodate current infrastructure and applications and allow seamless migration to the new paradigm without vendor lock-in. A fork-lift strategy that involves time, money and service interruption will not work.

Current infrastructure is designed to provide low-latency, high-performance application quality of service with various levels of security. For mission-critical applications to migrate to the new paradigm, these requirements have to be met without compromise.

The new paradigm should not require a new way of developing applications; it must support current development languages and processes without methodology lock-in. An application is defined both by functional requirements that dictate the specific domain functions and logic, and by non-functional requirements that define operational constraints related to service availability, reliability, performance, security and cost, dictated by business priorities, workload fluctuations and resource latency constraints. A non-functional requirement specifies criteria that can be used to judge the operation of a system, rather than specific behaviors. The plan for implementing functional requirements is detailed in the system design. The plan for implementing non-functional requirements is detailed in the system architecture. The architecture for non-functional requirements plays a key role in whether the open systems approach will succeed or fail. An architecture that defines a plug-and-play approach requires a composition scheme, which leads to the next issue.

There must be a way to compose applications developed by different vendors without having to look inside their implementations. In essence, there must be a composition architecture that allows applications to be developed independently yet composed to create new applications without modifying the original components. Even with open-sourced applications, integrating them and creating new workflows and services is a labor-intensive and knowledge-sensitive task. The efficiency gains will be thwarted by the need for service engagements, training and maintenance of integrated workflows.

Current approaches suggested in the anti-thesis movement, embracing virtual machines (VMs), open-sourced applications and cloud computing, fail on all these accounts by increasing complexity or requiring vendor, API and architecture dependency. The result is increased operating cost from integration dependency on ad-hoc software and services.

The increase in complexity with scale and distribution is more an issue of architecture, and it is not addressed by throwing more ad-hoc software at the problem with managers of managers, point solutions and tools. It has more to do with the limitations of the current computing architecture than with a lack of good ad-hoc software approaches.

Server virtualization creates a virtual machine image that can be replicated easily on different physical servers with shared resources. The introduction of the hypervisor to virtualize hardware resources (CPU and memory) allows multiple virtual machine images to share the resources in a physical server. NFV and SFV provide management functions to control the underlying commodity hardware. OpenStack and other infrastructure provisioning mechanisms have evolved through the anti-thesis movement to integrate VM provisioning with NFV and SFV provisioning to create clusters of VMs on which the applications can deliver the service transactions. Figure 2 shows the OpenStack implementation of such a service provisioning process. A cluster of VMs required for a service delivery can be provisioned with the required service level agreements to assure the right CPU, memory, bandwidth, latency, storage IOPs, throughput and capacity. It is also important to note that OpenStack can provision not only a VM cluster but also a physical server cluster or a mixture of both, and it allows adding, deleting or tuning a VM on demand. In addition, OpenStack allows the applications themselves to be part of the image, and snapshots can be reused to replicate the VM on any server. Clusters with appropriate applications and dependencies, with connectivity and firewall rules, can be provisioned and replicated. This allows for the orchestration of VM images to provide auto-failover, auto-scaling, live migration and auto-protection for service delivery.

Figure 2: OpenStack is used to provision infrastructure with required service level agreements to assure cpu, memory, bandwidth, storage IOPs, throughput, storage capacity of individual virtual machine (VM) and the network latency of the VM cluster
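A toy model of the provisioning flow just described: create a cluster of VMs with per-VM SLAs, then add, delete or tune members on demand. This deliberately uses invented Python names rather than the OpenStack API; it only mirrors the operations the text attributes to OpenStack.

```python
class Cluster:
    """Minimal stand-in for a provisioned VM cluster with per-VM SLAs."""

    def __init__(self):
        self.vms = {}

    def provision(self, name, cpu, memory_gb, iops):
        # Create a VM with its service-level parameters.
        self.vms[name] = {"cpu": cpu, "memory_gb": memory_gb, "iops": iops}

    def tune(self, name, **sla):
        # Adjust a member's SLA parameters in place (scale up or down).
        self.vms[name].update(sla)

    def delete(self, name):
        # Remove a member from the cluster on demand.
        del self.vms[name]

c = Cluster()
c.provision("web-1", cpu=2, memory_gb=4, iops=500)
c.provision("db-1", cpu=8, memory_gb=32, iops=5000)
c.tune("web-1", cpu=4)   # scale a member up on demand
```

In a real deployment, each of these calls would translate into compute, network and storage provisioning requests against the infrastructure APIs.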

Unfortunately, the anti-thesis movement depends solely on infrastructure mobility and management through VMs and the associated plumbing, which requires a lock-in on the availability of the same OpenStack in a distributed environment or complex image orchestration add-ons. More recently, instead of moving the whole virtual image containing the OS, run-time environments and applications along with their configurations, a mini-OS image (using a subset of operating system services) is created with the applications and their configurations; LXC containers and Docker containers are examples. The use of VM or container mobility to move applications from one infrastructure to another, in order to manage infrastructure SLAs to meet the QoS needs of an application, has created a plethora of ad-hoc solutions, adding to the complexity. Figure 3 shows the current state of the art.

Figure 3: Current state-of-the-art that provides application QoS through Virtual Machine mobility or container mobility where container is also an image

While this approach meets application scaling and fluctuation needs as long as the infrastructure meets certain requirements, it has shortcomings in distributed heterogeneous infrastructures provided by different vendors:

Multiple Orchestrators are required when different architectures and infrastructure management systems are involved

Figure 4 shows the complexity involved in scaling services across distributed heterogeneous infrastructures with different owners using different infrastructure management systems. Integrating multiple distributed infrastructures with disparate management systems is not a highly scalable solution without increasing complexity and cost.

Obviously, if scale, distribution and fluctuations (both in demand and in resources) are not a requirement, then the thesis will do well. Today, there are still many mainframe systems providing high transaction rates, albeit at a higher cost. The anti-thesis is born out of the need for a high degree of scalability, distribution and tolerance of fluctuations with higher efficiency; big data analysis and large-scale collaboration systems are examples. However, there is a large class of services that would like to leverage commodity infrastructure and resiliency, with security and application QoS management, without vendor lock-in or the high cost of complexity.

There are three stakeholders in an enterprise who want different things from infrastructure to provide QoS assurance:

Ability to “migrate service” or “tune infrastructure SLAs” based on policies and application demand

Ability to burst into the cloud without vendor lock-in

The developers want:

Focus on business-logic coding and specification of run-time resource requirements (application intent, context, communications, control and constraints) without worrying about run-time infrastructure configurations

End-to-end visibility and profiling at run-time across the stack for debugging

In essence, service developers want to focus on fulfilling functional requirements without having to worry about resource availability in a fluctuating environment. Monitoring resource utilization and acting on the non-deterministic impact of scaling and fluctuations should be supported by a common architecture that decouples application execution from the underlying resource management, distributed or not.

Figure 4: Complexity in a distributed infrastructure where scaling and fluctuations are increasing

The Synthesis:

The synthesis depends on addressing the scaling and fluctuation issues without vendor lock-in or architecture lock-in: it must let developers keep their current environments and accommodate current infrastructure while allowing new infrastructure with NFV and SFV to integrate seamlessly. The anti-thesis solutions, by contrast, require certain features in their OSs, and their new middleware must run in distributed environments; this leaves a host of legacy systems out.

A call for the synthesis is emerging from two quarters:

Industry analysts such as Gartner who predict that a service governor will emerge in due time. “A service governor [2] is a runtime execution engine that has several inputs: business priorities, IT service descriptions (and dependency model), service quality and cost policies. In addition, it takes real-time data feeds that assess the performance of user transactions and the end-to-end infrastructure, and uses them to dynamically optimize the consumption of real and virtual IT infrastructure resources to meet the business requirements and service-level agreements (SLAs). It performs optimization through dynamic capacity management (that is, scaling resources up and down) and dynamically tuning the environment for optimum throughput given the demand. The service governor is the culmination of all technologies required to build the real-time infrastructure (RTI), and it’s the runtime execution management tool that pulls everything together.”

From the academic community, which recognizes the limitations of Turing’s formulation of computation in terms of functions that process information using simple read, compute (change state) and write instructions, combined with the program/data duality introduced by von Neumann, which has allowed information technology (IT) to model, monitor, reason about and control any physical system. Prof. Mark Burgin [3], in his 2005 book on super-recursive algorithms, states: “it is important to see how different is functioning of a real computer or network from what any mathematical model in general, and a Turing machine (as an abstract, logical device) in particular, reputedly does when it follows instructions. In comparison with instructions of a Turing machine, programming languages provide a diversity of operations for a programmer. Operations involve various devices of computer and demand their interaction. In addition, there are several types of data. As a result, computer programs have to give more instructions to computer and specify more details than instructions of a Turing machine. The same is true for other models of computation. For example, when a finite automaton represents a computer program, only certain aspects of the program are reflected. That is why computer programs give more specified description of computer functioning, and this description is adapted to the needs of the computer. Consequently, programs demand a specific theory of programs, which is different from the theory of algorithms and automata.”

In short, the programs (or functions) developers write to code business logic do not contain knowledge about how compute, storage and network devices interact with each other (structure), or about how to deal with changing business priorities, workload variations and latency constraints (fluctuations that force changes to structure). This knowledge has to be incorporated in the architecture of the new computing, management and programming model.
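The runtime behavior Gartner attributes to a service governor, scaling resources up and down against policy, can be caricatured as a small control loop. The target utilization, dead band and scaling step here are invented parameters for illustration.

```python
def govern(capacity, demand, target_util=0.7, band=0.1, step=1):
    """One governance decision: return the new capacity.

    Scales up when measured utilization exceeds the policy band (to protect
    the SLA) and down when it falls below the band (to cut cost); otherwise
    leaves the capacity alone.
    """
    utilization = demand / capacity
    if utilization > target_util + band:
        return capacity + step      # scale up to protect the SLA
    if utilization < target_util - band and capacity > step:
        return capacity - step      # scale down to reduce cost
    return capacity                 # inside the band: no change
```

A real governor would feed this loop with live transaction and infrastructure telemetry and weight its decisions by business priorities and cost policies, as described above.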

These non-functional requirements specify criteria that can be used to judge the operation of a system, rather than specific behaviors. They should be contrasted with functional requirements that define specific behavior or functions dealing with algorithms or business logic. The plan for implementing functional requirements is detailed in the system design; the plan for implementing non-functional requirements is detailed in the system architecture. These requirements include availability, reliability, performance, security, scalability and efficiency at run-time. The new architecture must encapsulate the intent of the program and its operational requirements, such as the context, connectivity to other components, constraints and control abstractions required to manage the non-functional requirements. Figure 5 shows an architecture where the service management architecture is decoupled from the infrastructure management systems monitoring and managing distributed resources that may belong to different providers with different incentives.

The infrastructure control plane provides automation, monitoring and management of the infrastructure required for applications to execute their intent. The output of the infrastructure is a cluster of physical or virtual servers, with an operating system in each server, providing well-defined computing resources in terms of total CPU, memory, network bandwidth, latency, storage IOPs, throughput and capacity. The infrastructure control plane can provide the required clusters on demand and elastically scale the nodes, or the individual node resources, on demand. The elastic on-demand resources use automation processes or NFV and SFV resources connected to virtual or physical servers.

As Professor Mark Burgin points out, the intent, and the application monitoring needed to process information, apply knowledge and change the circumstance, must be part of the service management knowledge, independent of distributed infrastructure management systems, to provide true scalability, distribution and resiliency and to avoid vendor, infrastructure, architecture or API lock-in. In addition, the service control plane must support recursive service composition so that end-to-end service visibility and control can avail the best resources wherever they are available, meeting the quality of service dictated by business priorities, latency constraints and workload fluctuations. The application quality of service must not be dictated or limited by infrastructure limitations. Only then can we predictably deploy highly reliable services even on not-so-reliable distributed infrastructure and increase efficiency to meet demand that is not as predictable.

Borrowing from biological and intelligent systems, which specialize in exploiting architectures that provide predictability, we can argue that infusing cognition into service management will provide such an architecture. Cognition [4] is associated with intent and its accomplishment through various processes that monitor and control a system and its environment. Cognition is associated with a sense of “self” (the observer) and the systems with which it interacts (the environment, or the “observed”). Cognition [4] extensively uses time, history and reasoning in executing and regulating the tasks that constitute a cognitive process. There is a fundamental reason why the current Turing/von Neumann stored-program computing model cannot address large-scale distributed computing with fluctuations both in resources and in computation workloads without increasing complexity and cost. As von Neumann [5] put it, “It is a theorem of Gödel that the description of an object is one class type higher than the object.” An important implication of Gödel’s incompleteness theorems is that it is not possible to have a finite description with the description itself as a proper part. In other words, it is not possible to read yourself or process yourself as a process; Gödel’s theorems prohibit “self-reflection” in Turing machines. Turing’s O-machine was designed to provide information that is not available in the computing algorithm executed by the TM. More recently, the super-recursive algorithms proposed by Mark Burgin [3] point a way to model the knowledge about the hardware and software needed to reason and act to self-manage. He proves that super-recursive algorithms are more efficient than plain Turing computations, which assume unbounded resources.

Perhaps we should look for “synthesis” solutions not in the familiar places where we feel comfortable, with more ad-hoc software and services that are labor- and knowledge-intensive. We should look for clues in biology, human organizational networks and even telecommunication networks to transform current datacenters from infrastructure management systems into the services switching centers of the future [6]. This requires a search for new computing, management and programming models that do not disturb current applications, operating systems or infrastructure, while facilitating a smooth migration to a more harmonious melody of orchestrated services on a global scale with high efficiency and resiliency.

The “gap between the hardware and the software of a concrete computer and even greater gap between pure functioning of the computer and its utilization by a user, demands description of many other operations that lie beyond the scope of a computer program, but might be represented by a technology of computer functioning and utilization”

Introduction

According to Holbrook (Holbrook 2003), “Specifically, creativity in all areas seems to follow a sort of dialectic in which some structure (a thesis or configuration) gives way to a departure (an antithesis or deviation) that is followed, in turn, by a reconciliation (a synthesis or integration that becomes the basis for further development of the dialectic). In the case of jazz, the structure would include the melodic contour of a piece, its harmonic pattern, or its meter…. The departure would consist of melodic variations, harmonic substitutions, or rhythmic liberties…. The reconciliation depends on the way that the musical departures or violations of expectations are integrated into an emergent structure that resolves deviation into a new regularity, chaos into a new order, surprise into a new pattern as the performance progresses.”

In this Jazz metaphor, current IT evolved from a thesis, is currently experiencing an anti-thesis, and is ripe for a synthesis that would blend the old and the new into a harmonious melody, creating a new generation of highly scalable, distributed, secure services with the desired availability, cost and performance characteristics to meet changing business priorities, highly fluctuating workloads and latency constraints.

The Hardware Upheaval and the Software Shortfall

There are three major factors driving datacenter traffic and its patterns:
1. A multi-tier architecture, which determines the availability, reliability, performance, security and cost of initiating a user transaction to an end-point and delivering that service transaction to the user. The composition and management of the service transaction involve both the north-south traffic from end-user to end-point (most often over the Internet) and the east-west traffic that flows through various service components such as DMZ servers, web servers, application servers and databases. Most often these components exist within the datacenter or are connected through a WAN to other datacenters. Figure 1 shows a typical configuration.

Service Transaction Delivery Network

The transformation from client-server architectures to the “composed service” model, along with the virtualization of servers allowing the mobility of virtual machines at run-time, is introducing new traffic patterns in which east-west traffic inside the datacenter grows by orders of magnitude compared to the north-south traffic between the end-user and the service end-point. Traditional applications that evolved from client-server architectures use TCP/IP for all traffic that goes across servers. While some optimizations attempt to improve performance for cross-server communication using high-speed network technologies such as InfiniBand and Ethernet, TCP/IP and socket communications still dominate, even among virtual servers within the same physical server.

2. The advent of many-core servers, with tens and even hundreds of computing cores and high-bandwidth communication among them, drastically alters the traffic patterns. When two applications use two cores within a processor, communication between them is inefficient if it uses socket communication and the TCP/IP protocol instead of shared memory. When the two applications run in two processors within the same server, it is more efficient to use PCI Express or other high-speed bus protocols instead of socket communication using TCP/IP. If the two applications run in two servers within the same datacenter, it is more efficient to use Ethernet or InfiniBand. With the mobility of applications using containers or even virtual machines, it is more efficient to switch the communication mechanism based on the context of where they are running. This context-sensitive switching is a better alternative to replicating current VLAN and socket communications inside the many-core server. It is important to recognize that the many-core servers and processors constitute a network where each node is itself a sub-network with different bandwidths and protocols (socket-based low-bandwidth communication between servers, InfiniBand or PCI Express bus-based communication across processors in the same server, and shared-memory-based low-latency communication across the cores inside a processor). Figure 2 shows the network of networks using many-core processors.

A Network of Networks with Multiple Protocols
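The context-sensitive switching described in point 2 can be sketched as a placement-aware transport selector; the placement tuples and transport labels are illustrative assumptions, not a real API.

```python
def pick_transport(a, b):
    """Choose the most efficient transport for two communicating applications.

    a and b are (datacenter, server, processor) placement tuples describing
    where each application currently runs; mobility means this can change,
    so the selection is re-evaluated from the current context.
    """
    if a == b:
        return "shared-memory"          # same processor: lowest latency
    if a[:2] == b[:2]:
        return "pci-express"            # same server, different processor
    if a[0] == b[0]:
        return "infiniband-or-ethernet" # same datacenter, different server
    return "tcp-ip"                     # across datacenters / WAN
```

When a container or VM migrates, only its placement tuple changes; the selector then yields the appropriate mechanism without the applications replicating VLAN and socket plumbing inside the many-core server.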

3. The many-core servers, with a new class of flash memory and high-bandwidth networks, offer a new architecture for services creation, delivery and assurance going far beyond the current infrastructure-centric service management systems that evolved from single-CPU and low-bandwidth origins. Figure 3 shows a potential architecture where many-core servers are connected with high-bandwidth networks that obviate the need for the current complex web of infrastructure technologies and their management systems. The many-core servers, each with huge solid-state drives, SAS-attached inexpensive disks, and optical switching interfaces connected to WAN routers, offer a new class of services architecture, if only the current software shortfall is plugged to match the hardware advances in server, network and storage devices.

If Server is the Cloud, What is the Service Delivery Network?

This would eliminate the current complexity, which mainly stems from dealing with TCP/IP across east-west traffic and from infrastructure-based service delivery and management systems used to assure availability, reliability, performance, cost and security. For example, current security mechanisms that have evolved from TCP/IP communications do not make sense for east-west traffic, and emerging container-based architectures with layer 7 switching and routing, independent of server and network security, offer new efficiencies and security compliance.

While the current evolution of commodity clouds and distributed virtual datacenters provides on-demand resource provisioning, auto-failover, auto-scaling and live-migration of virtual machines, these systems are still tied to the IP address and the associated complexity of dealing with infrastructure management in distributed environments to assure end-to-end service transaction quality of service (QoS).

Figure 4: The QoS Gap

This introduces either vendor lock-in that precludes the advantages of commodity hardware, or complexity in dealing with a multitude of distributed infrastructures and their management to tune the service transaction QoS. Figure 4 shows the current state of the art. One can quibble whether it includes every product available or whether each is depicted correctly to represent its functionality, but the general picture captures the complexity and/or vendor lock-in dilemma. The important point to recognize is that service transaction QoS depends on tuning the SLAs of distributed resources at run-time across multiple infrastructure owners with disparate management systems and incentives. QoS tuning of service transactions is not scalable without increasing cost and complexity if it depends on tuning the distributed infrastructure with a multitude of point solutions and myriad infrastructure management systems.

What the Enterprise IT Wants:

There are three business drivers that are at the heart of the Enterprise Quest for an IT framework:

Compression of Time-to-Market: The proliferation of mobile applications, social networking, and web-based communication, collaboration and commerce is increasing the pressure on enterprise IT to support rapid service development, deployment and management processes. Consumer-facing services demand quick response to rapidly changing workloads, and the large-scale computing, network and storage infrastructure supporting service delivery requires rapid reconfiguration to meet fluctuations in workloads and in infrastructure reliability, availability, performance and security.

Compression of Time-to-Fix: With consumers demanding “always-on” services supporting choice, mobility and collaboration, the availability, performance and security of the end-to-end service transaction is at a premium, and IT is under great pressure to respond by compressing the time to fix the “service” regardless of which infrastructure is at fault. In essence, the business is demanding the deployment of reliable services on not-so-reliable distributed infrastructure.

Cost Reduction of IT Operation and Management, which consumes about 60% to 70% of the IT budget just to keep the “service lights” on: The current service administration and management paradigm, which originated with server-centric and low-bandwidth network architecture, is resource-centric and assumes that the resources (CPU, memory, network bandwidth, latency, storage capacity, throughput and IOPs) allocated to an application at install time can be changed to meet rapidly changing workloads and business priorities in real-time. The current state of the art uses virtual servers, networks and storage that can be dynamically provisioned using software APIs. Thus the application and service (a group of applications providing a service transaction) QoS (quality of service, defining availability, performance, security and cost) can be tuned by dynamically reconfiguring the infrastructure. There are three major issues with this approach:

With a heterogeneous, distributed and multi-vendor infrastructure, tuning the infrastructure requires myriad point solutions, tools and integration packages to monitor current utilization of the resources by the service components, correlate and reason to define the actions required, and coordinate many distributed infrastructure management systems to reconfigure the resources.

The introduction of public clouds and the availability of software as a service have worked well for new application development, for non-mission-critical applications, and for applications that can be re-architected to optimize for the cloud APIs and leverage available application/service components. However, they also add cost for IT to migrate the many existing mission-critical applications that demand high security, performance and low latency. The suggested hybrid solutions require adopting new cloud architecture in the datacenters or using myriad orchestration packages that add additional complexity and tool fatigue.

In order to address the need to compress time-to-market and time-to-fix and to reduce complexity, enterprises small and big are desperately looking for solutions.

The lines of business owners want:

End-to-end visibility and control of service QoS independent of who provides the infrastructure

Availability, performance and security governance based on policies

Accounting of resource utilization and dynamic resolution of contention for resources

Application architecture decoupled from infrastructure by separating functional and non-functional requirements so that the application developers focus on business functions while deployment and operations are adjusted at run-time based on business priorities, latency constraints and workload fluctuations

Cloud-like services (on-demand provisioning of applications, self-repair, auto-scaling, live-migration and end-to-end security) implemented at the service level instead of at the infrastructure level, so that they can leverage their own datacenter resources or the commodity resources abundant in public clouds without depending on cloud architectures, vendor APIs and cloud management systems

A suite of applications as a service (databases, queues, web servers, etc.)

Service composition schemes that allow developers to reuse components

Ability to provide end-to-end service-level security independent of the server and network security deployed to manage distributed resources

Ability to provide end-to-end service QoS visibility and control (on-demand service provisioning, auto-failover, auto-scaling, live migration and end-to-end security) across distributed physical or virtual servers in private or public infrastructure

Ability to reduce complexity and eliminate point solutions and myriad tools to manage distributed private and public infrastructure

Application Developers want:

To focus on developing service components, test them in their own environments and publish them in a service catalogue for reuse

Ability to compose services, test and deploy them in their own environments, and publish them in the service catalogue ready to deploy anywhere

Ability to specify the intent, context, constraints, communication, and control aspects of the service at run-time for managing non-functional requirements

An infrastructure that uses the specification to manage run-time QoS with on-demand service provisioning on appropriate infrastructure (a physical or virtual server with an appropriate service level agreement, SLA), and that manages run-time policies for fail-over, auto-scaling, live-migration and end-to-end security to meet run-time changes in business priorities, workloads and latency constraints

Separation of the run-time safety and survival of the service from sectionalizing, isolating, diagnosing and fixing it at leisure

Run-time history of service component behavior and the ability to conduct correlated analysis to identify problems when they occur
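The intent, context, constraints, communication and control aspects that developers want to specify could be captured in a declarative blueprint. The sketch below is a minimal illustration in Python; the field names and example values are assumptions, not an actual schema from any product.

```python
# Minimal sketch of a service blueprint capturing the intent, context,
# constraints, communication and control aspects described above.
# All field names and values here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ServiceBlueprint:
    intent: str                                        # what the service achieves
    context: dict = field(default_factory=dict)        # business context
    constraints: dict = field(default_factory=dict)    # latency, resource floors
    communication: list = field(default_factory=list)  # peer components / affinities
    control: dict = field(default_factory=dict)        # run-time policies

blueprint = ServiceBlueprint(
    intent="order-processing",
    context={"priority": "business-critical"},
    constraints={"max_latency_ms": 20, "min_cpu_cores": 4},
    communication=["inventory-service", "payment-service"],
    control={"auto_scale": True, "failover": "warm-standby"},
)
```

A run-time infrastructure could then read such a blueprint to provision the service on an appropriate server and enforce the fail-over, auto-scaling and security policies without the developer embedding any resource management calls in the business logic.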

We need to discover a path that bridges current IT to the new IT without changing the applications, the OSs or the current infrastructure, while providing a way to migrate to a new IT where service transaction QoS management is truly decoupled from myriad distributed infrastructure management systems. This is not going to happen with current ad-hoc programming approaches. We need a new, or at least an improved, theory of computing.

As Cockshott et al. (2012) point out, current computing, management and programming models fall short when you try to include the computers and the computed in the same model.

“the key property of general-purpose computer is that they are general purpose. We can use them to deterministically model any physical system, of which they are not themselves a part, to an arbitrary degree of accuracy. Their logical limits arise when we try to get them to model a part of the world that includes themselves.”

There are emerging technologies that might just provide the synthesis (reconciliation depends on the way that departures from the architecture, or violations of expectations, are integrated into an emergent structure that resolves deviation into a new regularity, chaos into a new order, and surprise into a new pattern as the transformation progresses) required to build this harmony by infusing cognition into computing. Only the future will tell whether this architecture is expressive and efficient enough, as Mark Burgin claims in his elegant book on “Super Recursive Algorithms” quoted above.

Is Information Technology poised for a renaissance (with a synthesis), since the great masters (Turing, von Neumann, Shannon etc.) developed the original thesis, that takes us beyond the current distributed-cloud-management antithesis?

The IEEE WETICE2015 International conference track on “the Convergence of Distributed Clouds, GRIDs and their Management” to be held in Cyprus next June (15 – 18) will address some of these emerging trends and attempt to bridge the old and the new.

Abstract: Turing’s formulation of computation in terms of functions to process information using simple read, compute (change state) and write instructions, combined with von Neumann’s introduction of program/data duality, has allowed information technology (IT) to model, monitor, reason about and control any physical system. Many-core processors and virtualization technologies with on-demand resource provisioning, application agility and high-bandwidth communications have made web-scale services available anywhere at any time. However, as the scale of distributed systems increases, so do the limitations of the current computing model:

Fluctuations play a major role (for example, Google is experiencing “emergent behaviour” with its scheduling algorithms as the number of components increases).

As computing workloads fluctuate wildly, attempts to improve the resiliency of services result in complexity; 70% of the IT budget is consumed in assuring availability, performance and security.

More importantly, the current models of computation, Computationalism (based on the Turing machine) and Connectionism (modelled after neural networks), are both inadequate to model cognitive processes involving dynamic coupling between various elements, where each change in one element continually influences every other element’s direction of change.

In this talk, we discuss a new autonomic computing approach that demonstrates self-management and the separation of resources and services without disturbing the current Turing machine implementations. The DIME network architecture (DNA) uses a non-von Neumann parallel implementation of a managed Turing machine with a signalling network overlay to address some of the limitations of both Computationalism and Connectionism. The architecture provides a mechanism for injecting sensors and actuators into a Turing machine and allows implementing autonomic distributed computing where the computers and the programs they execute are orchestrated to achieve the overall intent while optimizing available computing resources. We present an implementation of cognitive cloud services that provides on-demand provisioning, self-repair, auto-scaling, live-migration and end-to-end service transaction security using a popular web services stack deployed across distributed servers (virtualized or not). The network of networks using the new computing, management and programming models allows modelling dynamic processes with intent and has profound implications for large-scale distributed structures, where the computer and the computed have to work in harmony to address inherent large-scale fluctuations.

Biography: Dr. Rao Mikkilineni received his PhD from the University of California, San Diego in 1972, working under the guidance of Prof. Walter Kohn (Nobel Laureate, 1998). He later worked as a research associate at the University of Paris, Orsay, the Courant Institute of Mathematical Sciences, New York, and Columbia University, New York. He is currently the Founder and Chief Scientist at C3 DNA Inc., California, a Silicon Valley start-up. His past experience includes working at AT&T Bell Labs, Bellcore, US West, several start-ups and, more recently, Hitachi Data Systems. He currently chairs the IEEE conference track on “Convergence of Distributed Clouds, Grids, and their Management” in WETICE2014. He has published more than fifty papers on topics ranging from Green’s Function Monte Carlo to POTS (Plain Old Telephone Service), PANS (Pretty Amazing New Services) using the Internet, and SANs (Storage Area Networks). His book “Designing a New Class of Distributed Systems” was published in November 2011 by Springer Verlag, New York. It explores the viability of self-optimizing, self-monitoring autonomous non-von Neumann software systems. His recent paper on a new computing model for creating a new class of distributed computing services with the architectural resiliency of cellular organisms was published in the Turing centenary conference proceedings.

Trouble in IT Paradise with Darkening Clouds:

If you ask an enterprise CIO over a couple of drinks what his or her biggest hurdle is today in delivering the business the right resources at the right time at the right price, the answer would be that “the IT is too darn complex.” Over a long period of time, the infrastructure vendors have hijacked information technology with their complex silos, and expediency has given way to myriad tools and point solutions that overlay a management web. In addition, venture capitalists looking for quick “insertion points” with no overarching architectural framework have proliferated tools and appliances that have contributed to the current complexity and tool fatigue.

After a couple more drinks, if you press the CIO on why his or her mission-critical applications are not migrating to the cloud, which claims lower complexity, the CIO laments that no cloud provider is willing to sign a warranty that assures the service levels for mission-critical applications, guaranteeing application availability, performance and security. “Every cloud provider talks about infrastructure service levels but is not willing to step up to assure application availability, performance and security. There are myriad off-the-main-street providers that claim to offer orchestration to provide the service levels, but no one yet is signing on the dotted line.” The situation is more complicated when the resources span multiple infrastructure providers.

Decoupling the strong binding between application management and infrastructure management is key for the CIO as more applications are developed with shorter time to market. CIOs’ top five priorities are transnational applications demanding distributed resources, security, cost, compliance and uptime. A Gartner report claims that CIOs spend 74% of the IT budget on keeping the application “lights on” and another 18% on “changing the bulbs” and other maintenance activities. (It is interesting to recall that before Strowger’s switch eliminated the many operators sitting in long rows plugging countless jacks into countless plugs, the cost of adding and managing new subscribers was rising in geometric proportion. According to the Bell System chronicles, one large-city general manager of a telephone company at that time wrote that he could see the day coming soon when he would go broke merely by adding a few more subscribers, because the cost of adding and managing a subscriber was far greater than the corresponding revenue generated. The only difference between today’s IT datacenter and the central office before Strowger’s switch is that “very expensive consultants, countless hardware appliances, and countless software systems that manage them” have replaced “many operators, countless plugs and countless jacks”.)

In order to utilize commodity infrastructure while maintaining high security, and mobility for performance and availability, CIOs are looking for solutions that let them focus on application quality of service (QoS); they are willing to outsource infrastructure management to providers who can assure application mobility, availability and security, albeit with end-to-end service visibility and control at their disposal.

While the public clouds seem to offer a way to leverage commodity infrastructure with on-demand virtual machine provisioning, there are four hurdles preventing CIOs from embracing the clouds for mission-critical applications:

Current mission-critical, and even non-mission-critical, applications and services (groups of applications) are used to highly secure and low-latency infrastructures that have been hardened and managed, and CIOs are loath to spend more money to bring the same level of SLAs to public clouds.

Dependence on a particular service provider’s infrastructure APIs, virtual machine image management (nested or not) infrastructure dependencies, and the added cost and complexity of self-healing, auto-scaling and live-migration services create service provider lock-in on that provider’s infrastructure and management services. This defeats the intent to leverage the commodity infrastructure offered by different service providers.

The increasing scope creep from infrastructure providers “up the stack” to provide application awareness and insert their APIs into application development, in the name of satisfying non-functional requirements (availability, security, performance optimization) at run-time, has started to increase the complexity and cost of application and service development. The resulting proliferation of tools and point solutions, without a global architectural framework to use resources from multiple service providers, has increased integration and troubleshooting costs.

Global communications, collaboration and commerce at the speed of light have increased the scale of computing, and distributed computing resource management has fallen short in meeting this scale and the fluctuations caused both by demand and by variations in resource availability, performance and security.

The Inadequacy of Ad-hoc Programming to Solve Distributed Computing Complexity:

Unfortunately, the complexity is more a structural issue than an operational or infrastructure technology issue, and it cannot be resolved with ad-hoc programming techniques to manage the resources. Cockshott et al. conclude their book “Computation and its Limits” with the paragraph: “The key property of general-purpose computer is that they are general purpose. We can use them to deterministically model any physical system, of which they are not themselves a part, to an arbitrary degree of accuracy. Their logical limits arise when we try to get them to model a part of the world that includes themselves.” While the success of IT in modeling and executing business processes has evolved into today’s distributed datacenters and cloud computing infrastructures that provide on-demand computing resources to model and execute business processes, the structure and fluctuations that dictate the evolution of computation have introduced complexity in dealing with real-time changes in the interaction between the infrastructure and the computations it performs. The complexity manifests in the following ways:

In a distributed computing environment, ensuring that the right computing resources (CPU, memory, network bandwidth, latency, storage capacity, throughput and IOPs) are available to the right software component contributing to the service transaction requires orchestration and management of myriad computing infrastructures, often owned by different providers with different profit motives and incentives. The resulting complexity in resource management to assure the availability, performance and security of service transactions adds to the cost of computing. For example, it is estimated that up to 70% of the current IT budget is consumed in assuring service availability, performance and security. The complexity is compounded in distributed computing environments supported by heterogeneous infrastructures with disparate management systems.

In a large-scale dynamic distributed computation supported by myriad infrastructure components, the increased component failure probabilities introduce a non-determinism (for example, Google is observing emergent behavior in its scheduling of distributed computing resources when dealing with large numbers of resources) that must be addressed by a service control architecture that decouples the functional and non-functional aspects of computing.

Fluctuations in computing resource requirements, dictated by changing business priorities, workload variations that depend on service consumption profiles, and real-time latency constraints dictated by the affinity of service components, all demand a run-time response to dynamically adjust the computing resources. The current dependence on myriad orchestrators and management systems cannot scale in a distributed infrastructure without either vendor lock-in on infrastructure access methods or a universal standard that often stifles the innovation and competition needed to meet fast-changing business needs.

Thus the function, structure and fluctuations involved in the dynamic processes delivering service transactions are driving the need to search for new computation, management and programming models that address the unification of the computer and the computed and decouple service management from infrastructure management at run-time.

It is the Architecture, Stupid:

A business process is defined both by functional requirements, which dictate the business domain functions and logic, and by non-functional requirements, which define operational constraints related to service availability, reliability, performance, security and cost dictated by business priorities, workload fluctuations and resource latency constraints. A non-functional requirement specifies criteria that can be used to judge the operation of a system, rather than specific behaviors. The plan for implementing functional requirements is detailed in the system design. The plan for implementing non-functional requirements is detailed in the system architecture. While much progress has been made in system design and development, the architecture of distributed systems falls short in addressing the non-functional requirements, for two reasons:

Current distributed systems architecture, from its server-centric and low-bandwidth origins, has created layers of resource-management-centric ad-hoc software to address the various uncertainties that arise in a distributed environment. The lack of support for concurrency, synchronization, parallelism and application mobility in the serial von Neumann stored-program-control model has given rise to ad-hoc software layers that monitor and manage distributed resources. While this approach may have been adequate when distributed resources were owned by a single provider and controlled by a framework providing architectural support for implementing non-functional requirements, the proliferation of commodity distributed resource clouds offered by different service providers with different management infrastructures adds scaling and complexity issues. The current OpenStack and AWS API discussions are a clear example: they force a choice of one or the other, or increased complexity to use both.

The resource-centric view of IT currently demotes application and service management to second-class citizenship, where the QoS of an application or service is monitored and managed by myriad resource management systems, overlaid with multiple correlation and analysis layers, used to manipulate the distributed resources and adjust the CPU, memory, bandwidth, latency, storage IOPs, throughput and capacity, which are all that is required to keep the application or service meeting its quality of service. Obviously, this approach cannot scale unless a single set of standards evolves or a single-vendor lock-in occurs.

Unless an architectural framework evolves to decouple application/service management from myriad infrastructure management systems owned and operated by different service providers with different profit motives, the complexity and cost of management will only increase.

A Not So Cool Metaphor to Deliver Very Cool Services Anywhere, Anytime and On-demand:

A lesson on an architectural framework that addresses non-functional requirements while connecting billions of users anywhere, anytime, on demand is found in the Plain Old Telephone System (POTS). From the beginnings of AT&T to today’s remaking of at&t, much has changed, but two things that remain constant are the universal service (on a global scale) and the telecom-grade “trust” that are taken for granted. Very recently, Mark Zuckerberg proclaimed at the largest mobile technology conference in Barcelona that his very cool service Facebook wants to be the dial tone for the Internet. Originally, the dial tone was introduced to assure the telephone user that the exchange was functioning when the telephone was taken off-hook, by breaking the silence (before an operator responded) with an audible tone. Later on, the automated exchanges provided a benchmark for telecom-grade trust that assures managed resources on demand with high availability, performance and security. Today, as soon as the user goes off-hook, the network recognizes the profile based on the dialing telephone number. As soon as the destination number is dialed, the network recognizes the destination profile and provisions all the network resources required to make the desired connection, commence billing, and monitor and assure the connection until one of the parties initiates a disconnect. During the call, if the connection experiences any changes that impact the non-functional requirements, the network intelligence takes appropriate action based on policies. The resulting resiliency (availability, performance, and security), efficiency and ability to scale to connect billions of users on demand have come to be known as “telecom-grade trust”. An architectural flaw in the original service design (exploited by Steve Jobs by building a blue box) was fixed by introducing an architectural change to separate the data path and the control path. The resulting 800-service call model provided a new class of services such as call forwarding, call waiting and conference calls.

The Internet, on the other hand, evolved to connect billions of computers anywhere, anytime, from the prophetic statement made by J. C. R. Licklider: “A network of such (computers), connected to one another by wide-band communication lines [which provided] the functions of present-day libraries together with anticipated advances in information storage and retrieval and [other] symbiotic functions.” The convergence of voice over IP, data and video networks has given rise to a new generation of services enabling communication, collaboration and commerce at the speed of light. The result is that the datacenter has replaced the central office as the hub from which myriad voice, video and data services are created and delivered on a global scale. However, the management of these services, which determines their resiliency, efficiency and scaling, is another matter. In order to provide on-demand services, anywhere, anytime, with prescribed quality of service in an environment of wildly fluctuating workloads, changing business priorities and latency constraints dictated by the proximity of service consumers and suppliers, resources have to be managed in real-time across distributed pools to match the service QoS to resource SLAs. The telephone network is designed to share resources on a global scale and to connect them as required in real-time to meet non-functional service requirements; current datacenters (whether privately owned or publicly provided as cloud services) are not. There are three structural deficiencies in the current distributed datacenter architecture that prevent it from matching telecom-grade resiliency, efficiency and scaling:

The data path and the service control path are not decoupled, giving rise to the same class of problems that Steve Jobs exploited, which forced a re-architecting of the telephone network.

Service management is strongly coupled with the resource management systems and does not scale as the resources become distributed and multiple service providers provide those resources with different profit motives and incentives. Since the resources are becoming a commodity, every service provider wants to go up the stack to provide lock-in.

The current trend to infuse resource management APIs into service logic to provide resource management at run-time, and application-aware architectures that want to establish intimacy with applications, only increase complexity and make service composition with reusable service components all the more difficult because of their increased lock-in with resource management systems.

Resource-management-based datacenter operations miss an important feature of service/application management: all services are not created equal. They have different latency and throughput requirements. They have different business priorities and different workload characteristics and fluctuations. What works for the goose does not work for the gander. In addition to the current complexity and cost of resource management to assure service availability, reliability, performance and security, there is an even more fundamental issue that plagues the current distributed systems architecture. A distributed transaction that spans multiple servers, networks and storage devices in multiple geographies uses resources that span multiple datacenters. The fault, configuration, accounting, performance and security (FCAPS) management of a distributed transaction requires end-to-end connection management, much like a telecommunications service spanning distributed resources. Therefore, focusing only on resource management in a datacenter, without visibility and control of all the resources participating in the transaction, will not assure service availability, reliability, performance and security at run-time.

New Dial Tones for Application/Service Development, Deployment and Operation:

Current web-scale applications are distributed transactions that span multiple resources widely scattered across multiple locations, owned and managed by different providers. In addition, the transactions are transient, making connections with various components to fulfill an intent and closing them, only to reconnect when they are needed again. This is in sharp contrast to the always-on distributed computing paradigm of yesterday.

In creating, deploying and operating these services, there are three key stakeholders and associated processes:

Resource providers deliver the vital resources required to create, deploy and operate these services on demand, anywhere, anytime (the resource dial tone). The vital resources are just the CPU, memory, network latency and bandwidth, and storage capacity, throughput and IOPs required to execute the application or service that has been compiled to “1”s and “0”s (the Turing machine). The resource consumers care less about how you provide these, as long as you maintain the service levels the resource providers agreed to when the application or service requested the resources at provisioning time (matching the QoS request with an SLA and maintaining it during the application/service lifetime). The resource dial tone that assures QoS with a resource SLA is offered to two different types of consumers. First, the application developers, who use these resources to develop service components and compose them to create more complex services with their own QoS requirements. Second, the service operators, who use the SLAs to manage QoS at run-time to deliver the services to end users.

Application developers like to use their own tools and best practices without constraints from resource providers, and the sourcing of the run-time vital resources required to execute their services should be transparent: it should not matter where or by whom they are provided. The resources must support the QoS specified by the developer or service composer based on the context, communication, control and constraint needs. Developers do not care how they get the CPU, memory, bandwidth, or storage capacity, throughput and IOPS, or how the latency constraints are met. This model is a major departure from the current SDN route of giving applications direct control of resources, which is not scalable and does not decouple resource management from service management.

The service operators provide run-time QoS assurance by brokering the QoS demands to match the best available resource pool that meets the cost and quality constraints (the management dial tone that assures non-functional requirements). The brokering function is a network service, à la service switching, that matches applications/services to the right resources.

The brokering service must then manage the non-functional requirements at run-time, just as in the plain old telephone service (POTS).

The New Service Operations Center (SOC) with End-to-end Service Visibility and Control Independent of Distributed Infrastructure Management Centers Owned by Different Infrastructure Providers:

The new Telco model that the broker facilitates allows enterprises and other infrastructure users to focus on services architecture and management, and to use infrastructure as a commodity from different infrastructure providers, just as Telcos provide shared resources with network services.

Figure 1: The Telco-grade services architecture that decouples end-to-end service transaction management from infrastructure management systems at run-time

The service broker matches the QoS of a service and its components with the service levels offered by different infrastructure providers, based on the service blueprint, which defines the context, constraints, communications and control abstractions of the service at hand. The service components are provided with the desired CPU, memory, bandwidth, latency, and storage IOPS, throughput and capacity. Decoupling service management from distributed infrastructure management systems puts the safety and survival of services first and allows sectionalization, isolation, diagnosis and fixing of infrastructure at leisure, as is the case today with POTS.
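The brokering step can be sketched as matching each component of a blueprint to the cheapest provider whose offered service levels cover its QoS. This is an illustrative sketch only; the dictionary shapes, signal names and cost model are assumptions, not the architecture in Figure 1.

```python
# Hypothetical broker: place each blueprint component on the cheapest
# provider whose offer meets its QoS. Names and numbers are assumptions.
def meets(offer, qos):
    """True if a provider's offered service levels cover a QoS demand.
    'latency_ms' is an upper bound; every other signal is a lower bound."""
    for key, demand in qos.items():
        if key == "latency_ms":
            if offer.get(key, float("inf")) > demand:
                return False
        elif offer.get(key, 0) < demand:
            return False
    return True

def broker(blueprint, providers):
    """blueprint: component -> QoS demand; providers: name -> cost/offer."""
    placement = {}
    for component, qos in blueprint.items():
        candidates = [(p["cost"], name) for name, p in providers.items()
                      if meets(p["offer"], qos)]
        if not candidates:
            raise RuntimeError(f"no provider satisfies QoS for {component}")
        placement[component] = min(candidates)[1]   # cheapest adequate pool
    return placement
```

A real broker would also re-run this match at run-time when a provider's measured service levels drift away from its offer.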

It is important to note that the service dial tone Zuckerberg is talking about is not the resource dial tone or the management dial tone required for providing service connections and management at run-time; he is talking about the application end user receiving the content. Facebook application developers do not care how the computing resources are provided as long as their service QoS is maintained to meet the business priorities, workloads and latency constraints of delivering their service on a global scale. Facebook's CIO would rather spend time maintaining service QoS by getting the resources wherever they are available, at reasonable cost, to meet the service needs. In fact, most CIOs would gladly shed the infrastructure management burden if they had QoS assurance and end-to-end service visibility and control (they could not care less about access to resources or their management systems) to manage the non-functional requirements at run-time. After all, Facebook's Open Compute Project is a side effect of trying to fill a gap left by infrastructure providers, not their main line of business. The crash that followed Zuckerberg's announcement of the WhatsApp acquisition was not the “cool” application's fault. They probably could have used a service broker/switch providing the old-fashioned resource dial tone so that they could provide the service dial tone to their users.

This is similar to a telephone company assuring appropriate resources to connect different users based on their profiles, or the Internet connecting devices based on their QoS needs, at run-time. The broker acts as a service switch that connects various service components at run-time and matches their QoS demands with appropriate resources.

With the right technology, the service broker/switch may yet bring the required service-level warranties to enterprise CEOs from well-established carriers with money and muscle.

Will AT&T and other Telcos have the last laugh by incorporating this brokering service switch in the network, making current distributed datacenters (cloud or otherwise, with physical or virtual infrastructure) a true commodity?

“Computer science is concerned with information in much the same sense that physics is concerned with energy… The computer scientist is interested in discovering the pragmatic means by which information can be transformed.”

Introduction

There are four major trends shaping the future of information technologies:

The advent of multi-core and many-core processors has caused an upheaval in computing device design, giving rise to orders-of-magnitude price/performance improvements. According to Intel, by 2015 a typical processor chip will likely consist of dozens to hundreds of cores; parts of each core will be dedicated to specific purposes like network management, graphics, encryption and decryption, and the majority of cores will be available for application programs.

As the number of processors deployed to meet global demand increases (Google alone expects to deploy 10 million servers in the future), the cost of the electricity needed to run a company's servers will soon be a lot greater than the actual purchase price of the servers. In addition, the scale of computing elements involved and the resulting component failure probabilities demand new ways to address the resiliency and efficiency of the services that provide information processing.

Management spending has steadily increased over the last decade to consume around 70% of the total IT budget in an organization; it is estimated that this 70% goes simply to keeping the lights on. For every dollar spent on developing software, another $1.31 is spent on its operation and maintenance. To be sure, this spending has improved the reliability, availability, performance and security of deployed services through automation, but it has also increased complexity, requiring high-maintenance software/hardware appliances along with high-cost, highly skilled professional-services personnel to assure distributed transaction execution. More automation is resulting in point solutions and tool fatigue.

The demand for large-scale, web-based multi-media services with high availability and security is putting pressure on reducing the time it takes to move a system from development into production. The demand for resiliency, efficiency and scaling of these services is driving the need to share distributed computing resources among service developers, service operators and service users in real-time, at run-time, to meet changing business priorities, workload fluctuations and latency constraints.

What has not improved is our understanding of large-scale distributed computing systems and their evolution to meet ever-increasing global communication, collaboration and commerce at a faster and faster pace. There are three major issues that must be addressed to improve the resiliency, efficiency and scaling of next-generation large-scale distributed computing systems, enabling real-time information processing at the scale and scope of global communication, collaboration and commerce:

The current trend of designing application-aware infrastructure cannot scale to provide end-to-end service visibility and control at run-time when the infrastructure is distributed, heterogeneous, owned by different operators and designed by different vendors with conflicting profit motives. Any solution that embeds application awareness in the infrastructure and requires the infrastructure to be manipulated at run-time to meet changing business priorities, workload fluctuations and latency constraints will only increase complexity, reduce transparency and fail to scale across distributed environments.

The current trend of embedding infrastructure awareness in applications to control resources at run-time suffers the same fate: it is not scalable across different distributed infrastructures. Either developers must embed knowledge about the infrastructure in their applications, or a myriad of orchestrators must integrate the distributed, heterogeneous infrastructure.

In a large-scale dynamic distributed computation supported by myriad infrastructure components, the increased component failure probabilities introduce a non-determinism (for example, Google observes emergent behavior in its scheduling of distributed computing resources when dealing with large numbers of resources) that must be addressed by a service control architecture that decouples the functional and non-functional aspects of computing.

In essence, current datacenter and cloud computing paradigms with their server-centric and narrow-bandwidth origins are focused on embedding intelligence (mostly automation through ad-hoc programming) in the resource managers. However, the opportunity exists for discovering new post-Hypervisor computing models, which decouple service management from infrastructure management systems at run-time to assure end-to-end distributed service transaction safety and survival, while avoiding the current complexity cliff and tool fatigue. As Cockshott et al. observed “the key property of general-purpose computer is that they are general purpose. We can use them to deterministically model any physical system, of which they are not themselves a part, to an arbitrary degree of accuracy. Their logical limits arise when we try to get them to model a part of the world that includes themselves.” (Cockshott P., MacKenzie L. M., and Michaelson, G, (2012) Computation and its Limits, Oxford University Press, Oxford.). Any new computing models, management models and programming models that integrate the computers and the computations they perform must support concurrency, mobility and synchronization in order to execute distributed processes with global policy based management and local control. We must find ways to cross the Turing barrier to include the computer and the computed to deal with dynamic distributed processes (concurrent and asynchronous) at large scale. This also should address the need to bring together development and operations (DevOps) with a new approach to integrate functional and non-functional requirements of dynamic process management.

As Gordana Dodig-Crnkovic points out in her paper “Alan Turing's Legacy: Info-Computational Philosophy of Nature” (in Gordana Dodig-Crnkovic and Raffaela Giovagnoli (eds.), “Natural Computing/Unconventional Computing and its Philosophical Significance”, AISB/IACAP World Congress 2012, Birmingham, UK, 2-6 July 2012), information and computation are two complementary concepts representing structure and process, being and becoming. New ideas in information technology must integrate both structure and process to deal with the interactions of concurrent, asynchronous computational processes in the real world, which are the most general representation of information dynamics and go beyond the current Turing machine computing model.


The CDCGM track has a proven track record of addressing some of these issues and is looking for new ideas to be presented in Parma, Italy during June 23-25, 2014.

Call For Papers

Cloud computing is becoming the reference model in the field of distributed service computing. The wide adoption of virtualization technology and service-oriented architecture (SOA) within powerful and widely distributed datacenters has allowed developers and consumers to access a wide range of services and computational resources through a pay-per-use model.

There is an ever-increasing demand to share computing resources among cloud service developers, users and service providers (operators) to create, consume and assure services at larger and larger scale. In order to meet web-scale demand, current computing, management and programming models are evolving to address the complexity and resulting tool fatigue in current distributed datacenters and clouds. New unified computing theories and implementations are required to address the resiliency, efficiency and scale of global web-scale services creation, delivery and assurance. While the current generation of virtualization technologies that focus on infrastructure management and on infusing application awareness into the infrastructure have served us well, they cannot scale in a distributed environment where multiple owners provide infrastructure that is heterogeneous and evolving rapidly. New architectures must evolve that focus on intelligent services deployed using dumb infrastructure on fat and stupid pipes.

The goal of CDCGM 2014 is to attract young researchers, Ph.D. students, practitioners, and business leaders to bring contributions in the area of distributed clouds, grids and their management, especially in the development of computing, management and programming models, technologies, frameworks, and middleware.

You are invited to submit research papers to the following areas:

Discovering new application scenarios, proposing new operating systems, programming abstractions and tools with particular reference to distributed Grids and Clouds and their integration.

Identifying the challenging problems that still need to be solved such as parallel programming, scaling and management of distributed computing elements.

“The key property of general-purpose computer is that they are general purpose. We can use them to deterministically model any physical system, of which they are not themselves a part, to an arbitrary degree of accuracy. Their logical limits arise when we try to get them to model a part of the world that includes themselves.”

Summary

The “Convergence of Clouds, Grids and their Management” conference track is devoted to discussing current and emerging trends in virtualization, cloud computing, high-performance computing, Grid computing and cognitive computing. The tradition that started in WETICE2009 “to analyze current trends in Cloud Computing and identify long-term research themes and facilitate collaboration in future research in the field that will ultimately enable global advancements in the field that are not dictated or driven by the prototypical short term profit driven motives of a particular corporate entity” has resulted in a new computing model that was included in the Turing Centenary Conference proceedings in 2012. More recently, a product based on these ideas was discussed in the 2013 Open Server Summit (www.serverdesignsummit.com), where many new ideas and technologies were presented to exploit the new generation of many-core servers, high-bandwidth networks and high-performance storage. We present here some thoughts on current trends which we hope will stimulate further research to be discussed in the WETICE 2014 conference track in Parma, Italy (http://wetice.org).

Introduction

Current IT datacenters have evolved from their server-centric, low-bandwidth origins to distributed and high-bandwidth environments where resources can be dynamically allocated to applications using computing, network and storage resource virtualization. While Virtual machines improve resiliency and provide live migration to reduce the recovery time objectives in case of service failures, the increased complexity of hypervisors, their orchestration, Virtual Machine images and their movement and management adds an additional burden in the datacenter.

Further automation trends continue to move toward static applications (locked in a virtual machine, often one application per virtual machine) in a dynamic infrastructure (virtual servers, virtual networks, virtual storage, virtual image managers, etc.). The safety and survival of applications, and of the end-to-end service transactions delivered by groups of applications, are managed by dynamically monitoring and controlling the resources at run-time in real-time. As services migrate to distributed environments where the applications contributing to a service transaction are deployed in different datacenters and public or private clouds, often owned by different providers, resource management across the distributed resources is provided using myriad point solutions and tools that monitor, orchestrate and control them. A new call for application-centric infrastructure proposes that the infrastructure provide (http://blogs.cisco.com/news/application-centric-infrastructure-a-new-era-in-the-data-center/):

Application Velocity (Any workload, anywhere): Reducing application deployment time through a fully automated and programmatic infrastructure for provisioning and placement. Customers will be able to define the infrastructure requirements of the application, and then have those requirements applied automatically throughout the infrastructure.

A common platform for managing physical, virtual and cloud infrastructure: The complete integration across physical and virtual, normalizing endpoint access while delivering the flexibility of software and the performance, scale and visibility of hardware across multi-vendor, virtualized, bare metal, distributed scale out and cloud applications

Systems Architecture: A holistic approach with the integration of infrastructure, services and security along with the ability to deliver simplification of the infrastructure, integration of existing and future services with real time telemetry system wide.

Common Policy, Management and Operations for Network, Security, Applications: A common policy management framework and operational model driving automation across Network, Security and Application IT teams that is extensible to compute and storage in the future.

Open APIs, Open Source and Multivendor: A broad ecosystem of partners who will be empowered by a comprehensive published set of APIs and innovations contributed to open source.

The best of Custom and Merchant Silicon: To provide highly scalable, programmatic performance, low-power platforms and optics innovations that protect investments in existing cabling plants, and optimize capital and operational expenditures.

Perhaps this approach will work in a utopian IT landscape where either the infrastructure is provided by a single vendor or universal standards force all infrastructures to support a common API. Unfortunately, the real world evolves in a diverse, heterogeneous and competitive environment, and what we are left with is a strategy that cannot scale and lacks end-to-end service visibility and control. End-to-end security becomes difficult to assure because of the myriad security management systems that control distributed resources. The result is open-source systems that attempt to fill this niche. Unfortunately, in a highly networked world where multiple infrastructure providers offer a plethora of diverse technologies that evolve rapidly to absorb high-paced innovation, orchestrating the infrastructure to meet the changing workload requirements that applications must deliver is a losing battle. The complexity and tool fatigue resulting from layers of virtualization and orchestration of orchestrators are crippling the operation and management of datacenters (virtualized or not), with 70% of current IT budgets going toward keeping the lights on. An explosion of tools, special-purpose appliances (for disaster recovery, IP security, performance optimization, etc.) and administrative controls has escalated operation and management costs. A Gartner report estimates that for every $1 spent on development of an application, another $1.31 is spent on assuring its safety and survival. While all vendors agree on open source, open APIs and multi-vendor support, reality is far from it. An example is the recent debate about whether OpenStack should include Amazon AWS API support while the leading cloud provider conveniently ignores the competing API.

The Strategy of Dynamic Virtual Infrastructure

The following picture, presented at the Open Server Summit, shows a vision of a future datacenter with a virtual-switch network overlaid on the physical network.

In addition to the physical network connecting physical servers, an overlay virtual network inside each physical server connects the virtual machines it hosts. Further, a plethora of virtual machines is being introduced to replace the physical routers and switches that control the physical network. The quest to dynamically reconfigure the network at run-time to meet changing application workloads, business priorities and latency constraints has introduced layers of additional network infrastructure, albeit software-defined. While applications are locked in a virtual server, the infrastructure is evolving to dynamically reconfigure itself to meet changing application needs. Unfortunately, this strategy cannot scale in a distributed environment where different infrastructure providers deploy myriad heterogeneous technologies and management strategies, and it results in orchestrators of orchestrators, contributing to complexity and tool fatigue in both datacenters and cloud environments (private or public).

Figure 2 shows a new storage management architecture also presented in the Open Server Summit.

The PCIe switch allows a converged physical storage fabric at half the cost and half the power of the current infrastructure. In order to leverage these benefits, the management infrastructure has to accommodate it, which adds to the complexity.

In addition, it is estimated that the data traffic inside a datacenter is about 1000 times the data sent to and received from users outside it. This completely changes the role of TCP/IP traffic inside the datacenter and, consequently, the communication architecture between applications within it. It no longer makes sense for virtual machines running inside a many-core server to use TCP/IP as long as they are within the datacenter. In fact, it makes more sense for them to communicate via shared memory when they execute on different cores within a processor, via a high-speed bus when they execute on different processors in the same server, and via a high-speed network when they execute on different servers in the same datacenter. TCP/IP is only needed when communicating with users outside the datacenter, who can only be reached via the Internet.
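The locality-based transport choice above can be sketched as a simple decision rule. The location tuples and transport labels below are illustrative assumptions, not an actual datacenter API.

```python
# Hypothetical sketch of transport selection by locality, mirroring the
# shared-memory / bus / network / TCP-IP hierarchy described in the text.
def pick_transport(src, dst):
    """src and dst are (datacenter, server, processor) tuples identifying
    where each of the two communicating components runs."""
    if src[0] != dst[0]:
        return "tcp/ip"             # crosses the datacenter boundary
    if src[1] != dst[1]:
        return "datacenter fabric"  # different servers, same datacenter
    if src[2] != dst[2]:
        return "high-speed bus"     # different processors, same server
    return "shared memory"          # cores on the same processor
```

The point of the rule is that the expensive, general-purpose protocol is reserved for the only hop that actually needs it: the Internet-facing one.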

Figure 3 shows the server evolution.

Figure 3: Servers for the New Style of IT – presented at Open Server Summit 2013 by Dwight Barron, HP Fellow and Chief Technologist, Hyperscale Server Business Segment, HP Servers Global Business Unit, Hewlett-Packard

As the following picture presents, the current evolution of the datacenter is designed to provide dynamic control of resources to address workload fluctuations, changing business priorities and real-time latency constraints at run-time. The applications are static in a virtual or physical server, and the software-defined infrastructure dynamically adjusts to changing application needs.

With the advent of many-core servers, high-bandwidth technologies connecting them, and a new class of high-performance storage devices that can be optimized to meet workload needs (IOPS-intensive, throughput-sensitive or capacity-hungry), is it time to look at a static infrastructure with dynamic application/service management to reduce IT complexity in both datacenters and clouds (public or private)? This is possible if we can virtualize the applications inside a server (physical or virtual) and decouple the safety and survival of the applications, and of the groups of applications that contribute to a distributed transaction, from the myriad resource management systems that provision and control the plethora of distributed resources supporting them.

The Cognitive Container discussed at the Open Server Summit (http://lnkd.in/b7-rfuK) provides the required decoupling between application/service management and the underlying distributed resource management systems. It is specifically designed to decouple the management of an application, and of the service transactions a group of distributed applications executes, from the infrastructure management systems controlling their resources at run-time, which are often owned or operated by different providers. The safety and survival of the application at run-time is put first by infusing knowledge about the application (its intent, non-functional attributes, run-time constraints, connections and communication behaviors) into the container and using this information to monitor and manage the application at run-time. The Cognitive Container is instantiated and managed by a Distributed Cognitive Transaction Platform (DCTP) that sits between the applications and the OS, facilitating the run-time management of Cognitive Containers. The DCTP requires no changes to the application, the OS or the infrastructure, and uses the local OS in a physical or virtual server. A network of Cognitive Containers, infused with similar knowledge about the service transaction they execute, is likewise managed at run-time to assure safety and survival based on policies dictated by business priorities, run-time workload fluctuations and real-time latency constraints. Using replication, repair, recombination and reconfiguration, the Cognitive Container network provides dynamic service management independent of infrastructure management systems at run-time. The Cognitive Containers use the local operating system to monitor the application's vital signs (CPU, memory, bandwidth, latency, storage capacity, IOPS and throughput) and run-time behavior, and manage the application to conform to the policies.
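One monitoring cycle of the vital-sign check described above can be sketched as comparing measured signals against policy ceilings. This is a minimal illustration under assumed names (cpu_pct, latency_ms, and the threshold values); it is not the DCTP implementation.

```python
# Hypothetical single cycle of a Cognitive Container's vital-sign check.
# Signal names and thresholds are illustrative assumptions.
def check_vitals(vitals, policy):
    """Return the policy violations found in one monitoring cycle.
    vitals: signal name -> value measured via the local OS.
    policy: signal name -> ceiling allowed by the management policy."""
    return [name for name, ceiling in policy.items()
            if vitals.get(name, 0) > ceiling]

policy = {"cpu_pct": 80, "memory_mb": 512, "latency_ms": 50}
vitals = {"cpu_pct": 91, "memory_mb": 300, "latency_ms": 72}
violations = check_vitals(vitals, policy)  # -> ['cpu_pct', 'latency_ms']
```

A violation would then trigger one of the container network's correctives (replication, repair, recombination or reconfiguration) per the governing policy.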

The Cognitive Container can be deployed in a physical or virtual server and does not require any changes to the applications, OSs or infrastructure; only the knowledge about the functional and non-functional requirements has to be infused into it. The following figure shows a cognitive network deployed in a distributed infrastructure. The Cognitive Container and the service management are designed to provide auto-scaling, self-repair, live migration and end-to-end service transaction security independent of the infrastructure management systems.

Using the Cognitive Container network, it is possible to create federated service creation, delivery and assurance platforms that transcend physical and virtual server boundaries and geographical locations, as shown in the figure below.

This architecture provides an opportunity to simplify the infrastructure: a static, hardwired, tiered server, storage and network infrastructure provides various servers (physical or virtual) with the specified service levels (CPU, memory, network bandwidth, latency, storage capacity and throughput) the Cognitive Containers are looking for, based on their QoS requirements. It does not matter what technology is used to provision these servers with the required service levels. The Cognitive Containers monitor these vital signs using the local OS and, if they are not adequate, migrate to servers where they are, based on policies determined by business priorities, run-time workload fluctuations and real-time latency constraints.

Infrastructure provisioning then becomes a simple matter of matching a Cognitive Container to a server based on its QoS requirements. Thus the Cognitive Container services network provides a mechanism to deploy intelligent (self-aware, self-reasoning and self-controlling) services using dumb infrastructure with limited intelligence about services and applications (matching the application profile to the server profile) on stupid pipes that are designed to provide appropriate performance based on different technologies, as discussed at the Open Server Summit.
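The profile-matching and migration behavior described above can be sketched as follows. The server profiles, signal names and "first adequate server" selection rule are illustrative assumptions; a real policy would weigh cost, priority and latency as the text describes.

```python
# Hypothetical sketch of the "dumb infrastructure" matching step: a
# container stays where it is while the server meets its QoS profile
# and migrates when it no longer does. All names are assumptions.
def adequate(server_profile, qos):
    """True if the server's advertised vital signs cover the QoS profile."""
    return all(server_profile.get(k, 0) >= v for k, v in qos.items())

def place_container(qos, current, servers):
    """Return the server a container should run on: the current one if
    still adequate, otherwise any server whose profile covers the QoS."""
    if adequate(servers[current], qos):
        return current
    for name, profile in servers.items():
        if adequate(profile, qos):
            return name                       # migrate here
    raise RuntimeError("no adequate server; escalate per policy")
```

Note that the infrastructure itself stays static; only the container's placement changes, which is the inversion of the dynamic-infrastructure strategy criticized earlier.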

The managing and safekeeping of applications required to cope with the non-deterministic impact on workloads of changing demands, business priorities, latency constraints, limited resources and security threats is very similar to how cellular organisms manage life in a changing environment. The managing and safekeeping of life, efficiently, at the lowest level of biological architecture that provides resiliency was on von Neumann's mind when he delivered his Hixon lecture (Von Neumann, J. (1987) Papers of John von Neumann on Computing and Computing Theory, Hixon Symposium, September 20, 1948, Pasadena, CA, The MIT Press, Massachusetts, p. 474): ‘‘The basic principle of dealing with malfunctions in nature is to make their effect as unimportant as possible and to apply correctives, if they are necessary at all, at leisure. In our dealings with artificial automata, on the other hand, we require an immediate diagnosis. Therefore, we are trying to arrange the automata in such a manner that errors will become as conspicuous as possible, and intervention and correction follow immediately.’’ Comparing computing machines and living organisms, he points out that computing machines are not as fault tolerant as living organisms. He goes on to say ‘‘It’s very likely that on the basis of philosophy that every error has to be caught, explained, and corrected, a system of the complexity of the living organism would not run for a millisecond.’’ Perhaps the Cognitive Container bridges this gap by infusing self-management into computing machines that manage the external world while also managing themselves, with self-awareness, reasoning and control based on policies and best practices.

Cognitive Containers or not, the question is how do we address the problem of ever increasing complexity and cost in current datacenter and cloud offerings? This will be a major theme in the 4th conference track on the Convergence of Distributed Clouds, Grids and their management at WETICE2014 in Parma, Italy.

WETICE is an annual IEEE International conference on state-of-the-art research in enabling technologies for collaboration, consisting of a number of cognate conference tracks. The “Convergence of Clouds, Grids and their Management” conference track is devoted to discussing current and emerging trends in virtualization, cloud computing, high performance computing, Grid computing and Cognitive Computing. The tradition that started in WETICE2009 “to analyze current trends in Cloud Computing and identify long-term research themes and facilitate collaboration in future research in the field that will ultimately enable global advancements in the field that are not dictated or driven by the prototypical short term profit driven motives of a particular corporate entity” has resulted in a new computing model that was included in the Turing Centenary Conference proceedings in 2012. The 2013 conference track discussed Virtualization, Cloud Computing and the Emerging Datacenter Complexity Cliff in addition to conventional cloud and grid computing solutions.

The WETICE 2014 conference, to be held in Parma, Italy during June 23-25, 2014, will continue the tradition of discussing the convergence of clouds, grids and their management. In addition, it will also solicit papers on new computing models, cognitive computing platforms and strong AI resulting from recent efforts to inject cognition into computing (Turing machines).

All papers are refereed by the Scientific Review Committee of each conference track. All accepted papers will be published in the electronic proceedings by the IEEE Computer Society and submitted to the IEEE digital library. The proceedings will be submitted for indexing through INSPEC, Compendex, Thomson Reuters, DBLP, Google Scholar and EI Index.

Here is an excerpt from the WETICE2013 Track #3 - Convergence of Distributed Clouds, Grids and Their Management:

Convergence of Distributed Clouds, Grids and their Management – CDCGM2013

WETICE2013 – Hammamet, June 17 – 20, 2013

Track Chair’s Report

Dr. Rao Mikkilineni, IEEE Member, and Dr. Giovanni Morana

Abstract

The Convergence of Distributed Clouds, Grids and their Management conference track focuses on virtualization and cloud computing as they enjoy wider acceptance. A recent IDC report predicts that by 2016, $1 of every $5 will be spent on cloud-based software and infrastructure. Three papers address key issues in cloud computing such as resource optimization, scaling to address changing workloads, and energy management. In addition, the DIME network architecture proposed in WETICE2010 is discussed in two papers in this conference, both showing its usefulness in addressing fault, configuration, accounting, performance and security of service transactions within a service-oriented architecture implementation and spanning multiple clouds.

While virtualization has brought resource elasticity and application agility to services infrastructure management, the resulting layers of orchestration and the lack of end-to-end service visibility and control spanning multiple service providers' infrastructures have added an alarming degree of complexity. Hopefully, reducing the complexity in next generation datacenters will be a major research topic in this conference.

Introduction

While virtualization and cloud computing have brought elasticity to computing resources and agility to applications in a distributed environment, they have also increased the complexity of managing the various distributed applications that contribute to a distributed service transaction delivery, by adding layers of orchestration and management systems. Three major factors contribute to this complexity:

Current IT datacenters have evolved from their server-centric, low-bandwidth origins to distributed, high-bandwidth environments where resources can be dynamically allocated to applications using computing, network and storage resource virtualization. While virtual machines improve resiliency and provide live migration to reduce recovery time objectives in case of service failures, the increased complexity of hypervisors, their orchestration, and virtual machine images and their movement and management adds an additional burden in the datacenter.

A recent global survey commissioned by Symantec Corporation, involving 2,453 IT professionals at organizations in 32 countries, concludes [1] that the complexity introduced by virtualization, cloud computing and the proliferation of mobile devices is a major problem. The survey asked respondents to rate the level of complexity in each of five areas on a scale of 0 to 10, and the results show that datacenter complexity affects all aspects of computing, including security and infrastructure, disaster recovery, storage and compliance. Respondents on average rated every area 6.56 or higher on the complexity scale, with security topping the list at 7.06; the average level of complexity across all areas for companies around the world was 6.69. Organizations in the Americas on average rated complexity highest, at 7.81, and those in Asia-Pacific/Japan lowest, at 6.15.

As the complexity increases, the response is to introduce more automation of resource administration and operational controls. However, the increased complexity of management of services may be more a fundamental architectural issue related to Gödel’s prohibition of self-reflection in Turing machines [2] than a software design or an operational execution issue. Cockshott et al. [3] conclude their book “Computation and its limits” with the paragraph “The key property of general-purpose computer is that they are general purpose. We can use them to deterministically model any physical system, of which they are not themselves a part, to an arbitrary degree of accuracy. Their logical limits arise when we try to get them to model a part of the world that includes themselves.” Automation of dynamic resource administration at run-time makes the computer itself a part of the model and also a part of the problem.

As services increasingly span multiple datacenters, often owned and operated by different service providers and operators, it is unrealistic to expect that adding more software to coordinate the myriad resource management systems belonging to different owners will reduce complexity. A new approach is in order: one that decouples service management from the underlying distributed resource management systems, which are often non-communicative and cumbersome.

The current course becomes even more untenable with the advent of many-core servers with tens and even hundreds of computing cores and high-bandwidth communication among them. It is hard to imagine replicating current TCP/IP-based socket communication, "isolate and fix" diagnostic procedures, and multiple operating systems (which do not have end-to-end visibility or control of business transactions that span multiple cores, multiple chips, multiple servers and multiple geographies) inside the next generation of many-core servers without addressing their shortcomings. Many-core servers and processors constitute a network in which each node is itself a sub-network with different bandwidths and protocols: socket-based, lower-bandwidth communication between servers; InfiniBand or PCI Express bus-based communication across processors in the same server; and shared-memory-based, low-latency communication across the cores inside a processor.
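The gap between these communication tiers can be made concrete with a micro-benchmark. The following is an illustrative Python sketch (not from the original text, and the exact numbers are machine-dependent): it compares the per-message cost of a socket-style round trip, the kind of path used between servers, against a plain shared-buffer copy, the kind of path available between cores sharing memory.

```python
# Illustrative micro-benchmark: socket-style messaging vs. a shared-memory
# style buffer copy, for a small 64-byte message. Absolute timings vary by
# machine; the point is the orders-of-magnitude gap between the two paths.
import socket
import time

N = 1000
payload = b"x" * 64  # one small 64-byte message

# Path 1: send/receive each message over a connected socket pair
# (analogous to TCP/IP socket communication between servers).
a, b = socket.socketpair()
t0 = time.perf_counter()
for _ in range(N):
    a.sendall(payload)
    b.recv(64)
socket_us = (time.perf_counter() - t0) / N * 1e6  # microseconds per message
a.close()
b.close()

# Path 2: copy each message into a preallocated shared buffer
# (analogous to shared-memory communication between cores).
buf = bytearray(64)
t0 = time.perf_counter()
for _ in range(N):
    buf[:] = payload
shm_us = (time.perf_counter() - t0) / N * 1e6  # microseconds per message

print(f"socket path: {socket_us:.2f} us/msg, shared buffer: {shm_us:.2f} us/msg")
```

On typical hardware the shared-buffer path is orders of magnitude cheaper per message than the socket round trip, which is why treating every hop inside a many-core server as if it were a socket hop wastes the bandwidth hierarchy the paragraph above describes.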

The tradition that started in WETICE2009 "to analyze current trends in Cloud Computing and identify long-term research themes and facilitate collaboration in future research in the field that will ultimately enable global advancements in the field that are not dictated or driven by the prototypical short term profit driven motives of a particular corporate entity" has resulted in a new computing model that was included in the Turing Centenary Conference proceedings in 2012 [3, 4]. Two papers in this conference continue the investigation of its usefulness. Hopefully, this tradition will result in other novel approaches to addressing the datacenter complexity issue, while incremental improvements continue, as is evident from another three papers.