Visibility 2.0 for hybrid cloud environments

If implemented correctly, a hybrid environment can offer the control of on-premises infrastructure with the elasticity of the public cloud.


As virtualization and private cloud become the norm rather than the exception in the enterprise, and as public cloud adoption becomes a serious consideration for a hybrid mix, managing applications and workloads across the fence becomes a major challenge. A shift that makes enterprise IT and security teams nervous is that current Infrastructure-as-a-Service (IaaS) models offered by cloud providers such as Amazon, Microsoft, Google, IBM and Oracle no longer provide the full visibility inside the infrastructure that one always has in a private data centre.

The Challenges of Public Cloud Visibility

Crossing into the public cloud domain, most of the infrastructure, such as compute, storage and network, is hidden. For IT and data centre professionals, this restricts the visibility tools and techniques they are trained to use to manage and assure a standard of workload performance. In other words, the blind spots increase rather than decrease for enterprise IT. While the cloud provider is responsible for ensuring the availability of services such as compute, storage and data transfer, it is the tenant (enterprise IT) that must safeguard workload performance and secure application data, which requires a full understanding of the data patterns among applications and between an application and its users. As an analogy, it is like moving from owning a car to using Uber.

The challenges described above become more complex when enterprises, usually the larger ones, choose a ‘multi-cloud’ strategy. In this scenario, IT personnel must manage multiple heterogeneous environments: their own data centres as well as several public clouds, such as a combination of AWS and Azure. There can be many reasons behind a multi-cloud strategy, including application types, the particular strengths of each cloud, subscription costs, or regional presence. The IT team now has to deal with an even more complex set of challenges in terms of ‘cloud piping’ as well as monitoring and managing multiple service level agreements (SLAs). The problem is further complicated when business groups in those large organizations bypass the IT organization and go directly to the cloud to request compute or storage capacity on demand. Whether the motivation is stealth-mode development, testing or simply bypassing slow IT processes, it ultimately circles back to the IT team as their problem to manage.
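Tracking several SLAs at once is, at its core, a bookkeeping problem: compare achieved uptime against each contract's target. The following is a minimal sketch of that idea; the provider names, targets and downtime figures are illustrative assumptions, not real contract terms.

```python
from dataclasses import dataclass

@dataclass
class SlaRecord:
    provider: str            # illustrative label, e.g. a VPC or VNet
    sla_target: float        # e.g. 0.9995 = 99.95% monthly uptime
    downtime_minutes: float  # observed downtime this month

MINUTES_PER_MONTH = 30 * 24 * 60

def sla_breaches(records):
    """Return providers whose achieved uptime fell below the SLA target."""
    breaches = []
    for r in records:
        achieved = 1 - r.downtime_minutes / MINUTES_PER_MONTH
        if achieved < r.sla_target:
            breaches.append((r.provider, round(achieved, 5)))
    return breaches

records = [
    SlaRecord("aws-vpc", 0.9995, 12.0),     # 12 min downtime: within SLA
    SlaRecord("azure-vnet", 0.9995, 40.0),  # 40 min downtime: breach
]
print(sla_breaches(records))
```

In practice the downtime figures would come from each cloud's monitoring APIs, but the comparison logic stays this simple.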

Therefore, as applications move to the public cloud, we’re witnessing an era of new challenges brought about by opaque infrastructure, including the complexity of providing adequate visibility into application data and infrastructure for application/workload performance management. This responsibility falls into the gap between the cloud provider’s SLA and the tenant’s IT team. For example, AWS provides a certain degree of visibility-as-a-service (VaaS) to its clients via CloudWatch, which monitors the infrastructure related to a client’s virtual private cloud (VPC). However, CloudWatch does not provide visibility or intelligence related to the tenant’s application data or content. Unlike on-premises or private cloud infrastructure, no virtual or physical tapping is allowed, which calls for alternatives such as embedding a sensor network into the application code (containers) and hypervisor layers, so that application data and messages can be fed to analysis tools for proactive monitoring before application performance suffers or customers start calling.

Assuring proactive application and workload performance becomes extremely important for business use cases such as capacity-on-demand for cloud bursting, application development, staging, security sandboxing, cyber range training, and disaster recovery. As new instances spin up under high demand, a visibility mechanism that scales horizontally with them becomes a vital part of any strategy to ensure application performance and a good customer experience.
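To make the embedded-sensor idea concrete, here is a minimal, hypothetical sketch of an in-process sensor that tracks recent request latencies and flags degradation before users complain. The window size and p95 threshold are illustrative assumptions; a real sensor would export its signal to an external analysis tool rather than just return a boolean.

```python
from collections import deque
from statistics import quantiles

class LatencySensor:
    """Toy application-level sensor: rolling p95 latency check."""

    def __init__(self, window=100, p95_threshold_ms=250.0):
        self.samples = deque(maxlen=window)  # keep only recent requests
        self.p95_threshold_ms = p95_threshold_ms

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def degraded(self):
        """True when the rolling p95 latency exceeds the threshold."""
        if len(self.samples) < 20:           # too few samples to judge
            return False
        p95 = quantiles(self.samples, n=20)[-1]  # ~95th percentile
        return p95 > self.p95_threshold_ms

sensor = LatencySensor()
for ms in [50] * 30:       # healthy traffic
    sensor.record(ms)
print(sensor.degraded())   # False

for ms in [400] * 30:      # sustained slowdown
    sensor.record(ms)
print(sensor.degraded())   # True
```

The same pattern scales horizontally in the way the article calls for: each new instance carries its own sensor, and a central collector aggregates their signals.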

Ensuring Visibility in Hybrid Environments

Given those challenges, the hybrid cloud approach has several advantages as long as appropriate expectations are established from day one. It is important to understand that moving workloads into the public cloud does not mean that the cloud provider is taking full ownership of everything. The cloud provider’s responsibility is typically limited to provisioning and managing the infrastructure resources, such as adding new servers, storage capacity or network bandwidth, as well as associated tasks such as rack capacity, cabling, power, and cooling. They are also on the hook for securing those resources. The tenants, however, are still responsible for ensuring that their applications are secure and performing as expected, implying that they need to closely monitor and manage them. Hence, setting adequate expectations for this shared responsibility is important. It is also important to understand that migrating to the cloud is not necessarily going to reduce operational costs. While it may reduce direct costs for compute and networking, and indirect costs in terms of training, rack and stack, power and cooling, it may result in higher direct costs for storage and data transfer. Therefore, the cloud model usually makes sense for companies that are either too small to maintain their own IT, or big enough to exploit its full potential, especially when the business is seasonal or faces unpredictable demand spikes and most of the data stays within the cloud.

Once an enterprise has the right expectations and is ready to move to the cloud, how does it ensure proper visibility across the fence? While the public cloud environment restricts visibility in certain ways, it increases visibility in other dimensions. There is much that can be done with the application data and metadata collected about tenant instances. This is a hotbed of innovation, with many companies finding ways to extract valuable insights from the data, such as determining which applications and databases are connected to other applications in the path of a workload. Building dependency maps and gathering performance measurements can greatly help optimization, as well as the migration of application clusters together from one place to another, eliminating blind spots in the process. The data or analysis can also be exported from the public cloud to the on-premises side or vice versa, or to other clouds, to provide correlation and single-pane-of-glass (SPOG) visibility.

The application and user-related data can be combined with big data and ecosystem data inside the cloud to generate insights that are not otherwise possible. Those analyses can be fed into machine learning algorithms to automate repetitive tasks without IT supervision or intervention, which is particularly useful for dynamic container-based microservice environments. Going one step further, the data can be fed into artificial intelligence (AI) engines to predict workload or user behaviours and intents in certain circumstances. Using policies in this way to trigger automatic actions without human intervention ultimately helps to secure business interests through Intent-Based Visibility (IBV). The potential of building an intelligent visibility ecosystem in the cloud that is ‘autonomously actionable’ is far greater and much less restricted than anything possible within private infrastructure.
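The dependency-mapping idea above can be sketched in a few lines: given (source, destination) flow records collected from tenant instances, group applications into connected clusters that should be migrated together. The application names here are illustrative assumptions.

```python
from collections import defaultdict

def dependency_clusters(flows):
    """Group applications into clusters via observed connections."""
    graph = defaultdict(set)
    for src, dst in flows:       # treat each flow as an undirected edge
        graph[src].add(dst)
        graph[dst].add(src)
    seen, clusters = set(), []
    for app in graph:
        if app in seen:
            continue
        stack, cluster = [app], set()
        while stack:             # depth-first walk of the dependency graph
            node = stack.pop()
            if node in cluster:
                continue
            cluster.add(node)
            stack.extend(graph[node])
        seen |= cluster
        clusters.append(cluster)
    return clusters

flows = [("web", "api"), ("api", "orders-db"),
         ("batch", "reports-db")]
print(dependency_clusters(flows))
```

Running this shows two migration units: the web/api/orders-db tier must move together, while the batch reporting pair can move independently, which is exactly the blind spot that flow metadata eliminates.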
In particular, ‘Service-Oriented Visibility (SOV)’ would enable ‘service chaining’ through multiple tiers of applications hosted in the same cloud or across multiple clouds. Offering advanced digital services through embedded visibility has huge upside.

Together, the combination of Intent-Based Visibility (IBV) and Service-Oriented Visibility (SOV) will form the foundation of a Visibility 2.0 framework that will support hybrid environments for the next decade and beyond.

Conclusion

To summarize, while the public cloud has certain blind spots, it opens up an entirely new set of opportunities and possibilities at a higher and broader level, which rely on new rules for provisioning visibility across a broader spectrum. The shift from traditional application and network performance monitoring (APM/NPM) to more sophisticated cloud-native application performance and service delivery assurance is all geared towards making the end-user experience simple, seamless and as expected, based on an on-demand consumption model. Visibility can no longer be treated as an afterthought. Visibility 2.0 is about proactive and actionable visibility that drives timely business decisions in an increasingly competitive economy. Only once this is achieved ‘by design’ can the true potential of a hybrid environment, offering the control of on-premises infrastructure with the elasticity of the public cloud, be realized.