Disclaimer: I am an IT guy and my knowledge of the human body is limited to my daughter's high school biology textbook and information obtained from search engines. So excuse me if any of the information below is not represented accurately!

The human body is the most complex machine ever created. With a complex network of interconnected organs, trillions of cells, and the most advanced processor, the human body is the most automated system on this planet. In this article, we will draw comparisons between the workings of the human body and those of a data center. We will learn how the self-defense and self-healing capabilities of the human body are similar to the firewalls and intelligent monitoring capabilities in our data centers. We will draw parallels between human body automation and data center automation, and explain the different levels of automation we need to drive in data centers. This article is divided into four parts, each covering one of the body's main functions and drawing parallels to automation.

Have you ever felt sick? How do you figure out that you are going to get sick and need to call it a day? Can you control how fast your heart beats, or control your breath as you wish? The human body is the most automated system we have in the entire universe. It's the most advanced machine, with the fastest microprocessor and a lightning-fast network, and it powers us every day. There is a lot to learn from how the architect of our body designed it, and how, using the same design principles, we should automate the data center of the future.

The fundamental principle of automation is to use data to do intelligent analytics that enable us to take action. When we are about to fall sick, our body gives us indicators (alerts) which tell us that things are not going according to plan and that we need to take action. Such indicators can come in the form of a fever or chills, feeling cold, or pain. Once we get these alerts, either we take action, i.e., take medication, or we let our body self-heal if the alert is nothing to worry about, e.g., a small cut.
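This alert-then-act loop maps directly to monitoring rules in a data center: read the indicators, classify the severity, and either self-heal or escalate. The sketch below is a hypothetical illustration; the metric names, thresholds, and actions are made up, not taken from any real monitoring product:

```python
# Minimal sketch of an alert-then-act loop: read health indicators,
# classify severity, and either self-heal or escalate.
# All metric names and thresholds are illustrative assumptions.

def evaluate(metrics):
    """Return (severity, action) for a dict of health indicators."""
    if metrics.get("cpu_temp_c", 0) > 90:
        # Critical: beyond self-healing, call for outside help (see a doctor)
        return "critical", "escalate to on-call"
    if metrics.get("error_rate", 0) > 0.05:
        # Warning: the system can self-heal, like the body fighting a cold
        return "warning", "restart service (self-heal)"
    return "ok", "no action"

severity, action = evaluate({"cpu_temp_c": 72, "error_rate": 0.08})
print(severity, "->", action)  # warning -> restart service (self-heal)
```

The point of the sketch is the shape of the loop, not the specific rules: indicators in, a decision out, and escalation only when self-healing is not enough.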

Our body, like our systems (compute, network, etc.), has a way to read these alerts and take appropriate action. In addition, our body has a tremendous, most advanced security system that is always working to defend us from various malicious attacks! For example, when a virus strikes the human body, it attacks the body's cellular structure and begins to destroy it. Our body's defense mechanism immediately sends white blood cells to attack the invading virus and destroy it. All this happens 24x7, and without us telling our body to do so! If the body fails to defend itself, it gives signals asking for help, and that is when we either go to a doctor to get some medicine or take other external remedies to help our body. Now imagine if we could develop a similarly advanced security system to defend our data centers from all the attacks. There are several things we can learn from how our body works and incorporate into creating the highly automated data center of the future. Let's examine each of the body's systems and how we can leverage them for our benefit. While this is not a biology lesson, it is time to go back to your school days.

The Immune System

This is perhaps the most intelligent and automated system in our body, and the most relevant to the way we should automate our data center security. Our immune (security) system is a collection of structures and processes whose job is to protect the body against disease and other potentially damaging foreign bodies. These diseases and foreign bodies are the equivalent of viruses, malware, and other types of security threats we see in our data center. Our immune system consists of various parts (hardware) and systems (software) which allow our body to self-defend and self-heal against attacks, 24x7.


There are six main components of our immune system.

Lymph Nodes: Small bean-shaped structures that produce and store cells that fight infection and disease. Lymph nodes contain lymph, a clear fluid that carries those cells to various parts of the body.

Spleen: Located on the left-hand side of your body, under your ribs and above your stomach. The spleen contains white blood cells that fight infection.

Bone Marrow: The yellow tissue in the center of bones that produces white blood cells.

Lymphocytes: These small white blood cells play a large role in defending the body against disease. The two types of lymphocytes are B-cells, which make antibodies that attack bacteria and toxins, and T-cells, which help destroy infected or cancerous cells.

Thymus: Responsible for triggering and maintaining the production of antibodies.

Leukocytes: Disease-fighting white blood cells that identify and eliminate pathogens.

Together, all the above components make up our immune system. Think of these as the various security devices we deploy in our data center: physical access card readers, firewalls, anti-virus software, anti-spam, and other security mechanisms. The immune system can be further divided into two systems.

The Innate Immune System

The innate immune response is the first step in protecting our bodies from foreign particles. It is an immediate response that's "hard-wired" into our immune system. It's a generalized system which protects against any type of attack and is not tied to a specific immunity. For example, general barriers to infection include:

Physical (skin, mucus, tears, saliva, and stomach acid)

Chemical (specific proteins found in tears or saliva that attack foreign particles)

Biological (microbiota or good bacteria in the gut that prevents overgrowth of bad bacteria)

The innate immune system is general, i.e., anything that is identified as foreign or non-self becomes a target for the innate immune system.

The Adaptive Immune Response

The innate immune response leads to the pathogen-specific adaptive immune response. While this response is more effective, it takes time to develop, generally about a week after the infection has occurred. This system is called adaptive because it's a self-learning system which adapts itself to new threats and creates a self-defense mechanism to neutralize such threats much faster in the future. A good example we all know is vaccination. We are injected with a weakened or dead virus to enable our body to learn how to defend against a particular type of virus. Our body then remembers this all its life and protects us 24x7 from this particular virus.

Thus, the immune system is both reactive and adaptive. It reacts when a pathogen enters our body, to neutralize it, and it is constantly learning and adapting to new threats. It is also intelligent enough to distinguish self (anything naturally in the body, e.g., our own cells) from non-self (anything not naturally present in the body). The system also reacts quickly and has a built-in messaging system which passes signals from one cell to another, at lightning speed, to act on an incoming threat. In addition, it is a layered security system, with multiple types of cells each playing a particular role in defense. While some cells are located at the entry points of our body, like the mouth, nose, and ears, and act as security guards, others are located in our circulatory system or in our bone marrow and are released as and when required.

Enough biology. Let's get back to our IT world. Imagine our data center having similar innate and adaptive capabilities. The innate or generalized security systems are our firewalls, email scanners, etc., which can neutralize generalized threats in our data center. They are not tied to specific threats like a DoS attack or a Dirty COW-type OS vulnerability. These systems continuously watch for threats and neutralize the known, familiar ones they find, e.g., email spam filters, anti-virus software, etc. Much like our body has physical, chemical, and biological defense layers, our data center needs different security layers to protect us from various types of attacks. At a minimum, we need four levels of security in our DC: physical security (access card readers, security guards), network security (DNS, DMZ/internal zones, firewalls), component-level security (compute, storage), and application-level security (email, OS, Java, Oracle, etc.). There are a lot of technologies available today that provide these various layers of security, including those from industry leaders like Cisco.
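A layered defense like this can be modeled as a chain of checks, where a request must pass every layer before it is admitted, exactly as a pathogen must get past skin, then chemical, then biological barriers. The layer names below mirror the four DC levels described above, but the individual rules are toy assumptions, not real security policy:

```python
# Sketch of defense in depth: each layer can veto a request.
# Layer names mirror the four data center security levels; the
# rules themselves are illustrative assumptions only.

def physical(req):    return req.get("badge_ok", False)
def network(req):     return req.get("src_zone") != "untrusted"
def component(req):   return req.get("host_patched", False)
def application(req): return req.get("payload", "") != "malware"

LAYERS = [("physical", physical), ("network", network),
          ("component", component), ("application", application)]

def admit(req):
    """Return (allowed, blocking_layer_or_None)."""
    for name, check in LAYERS:
        if not check(req):
            return False, name   # first failing layer blocks the request
    return True, None

print(admit({"badge_ok": True, "src_zone": "dmz",
             "host_patched": True, "payload": "report.pdf"}))  # (True, None)
```

The design point is that the layers are independent: a threat that slips past one barrier (a forged badge, say) still has to survive the network, component, and application checks.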

While we have innate defense capabilities, what we need to protect us against the increasing sophistication of attacks is adaptive self-defense capabilities. The system should self-learn various signatures and patterns from past attacks and automatically create self-healing code (white blood cells) to defend against new threats. In other words, systems should be able to heal themselves. Such a system will create new defense signatures based on previous attacks and adapt to new types of attacks.
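One way to picture this adaptive behavior is a signature store that also matches near-variants of known attacks, so an "enhanced" version of an old threat is still caught. The sketch below uses simple token overlap as a stand-in for real signature matching; it is a toy illustration, not a real intrusion detection technique:

```python
# Toy adaptive defense: remember attack payloads and flag close variants.
# Jaccard similarity over tokens stands in for real signature matching;
# the payloads and the 0.6 threshold are illustrative assumptions.

known_attacks = set()

def learn(payload):
    """Remember an observed attack pattern (like an acquired antibody)."""
    known_attacks.add(frozenset(payload.split()))

def is_threat(payload, threshold=0.6):
    """Flag payloads sufficiently similar to any remembered attack."""
    tokens = set(payload.split())
    for sig in known_attacks:
        overlap = len(tokens & sig) / len(tokens | sig)
        if overlap >= threshold:
            return True
    return False

learn("GET /etc/passwd via path traversal")
print(is_threat("GET /etc/shadow via path traversal"))  # variant  -> True
print(is_threat("GET /index.html"))                     # benign   -> False
```

The analogy to the adaptive immune response: the first attack has to be learned, but every later variant that shares most of its structure is neutralized immediately.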

Humans should intervene only when the system fails to do its job. Let's take an example. Assume a new type of virus is released; it's an enhanced version of a previously known virus, so its signature is different. If the virus pattern is not known, humans have to develop anti-virus signatures and then update the anti-virus software to fix the exposure. This is like taking an external dose of antibiotics to heal your body. It can take days if not weeks to get the updated software from the vendor and apply it across all vulnerable systems. Now what if we had systems that could create the required antibiotics on their own and fix the exposure themselves? Such systems, much like our body, would learn from previous attacks, modify their current software to adapt to the new threat, and try to defend themselves, all without human intervention! Seems unreal. Our body is capable of doing this with a 75% or better success rate. Can we aim for 80%?

Another capability we need in our data center is self-healing. Much like a human body detects abnormalities and attacks the problem without asking for your permission, data center security mechanisms as well as fault detection systems should work the same way. Imagine your body waiting for your instruction to defend against an invading virus! What if you were sleeping? When an abnormality is detected in the data center, we need to act immediately. Today, while many data center security products are designed to detect malicious attacks and take appropriate action without human intervention, we need to extend this to every component (compute/storage/network) in the data center. We should have intelligence at every layer to protect against increasing forms of attack, and everything needs to be connected together. An endpoint device which detects a threat can alert the security components at all layers about the incoming threat. Each system notifies the other systems of the status of the threat, and there is constant communication between firewalls, compute, and storage systems based on the type and level of attack.
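This constant communication between firewalls, compute, and storage is essentially a publish/subscribe bus: one component detects, every subscriber hears. A minimal in-process sketch, with hypothetical component names, might look like this:

```python
# Minimal pub/sub threat bus: one detection, every subscriber notified,
# mirroring how immune cells signal each other. Component names are
# hypothetical placeholders.

class ThreatBus:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, threat):
        # Fan the alert out to every registered component at once
        for handler in self.subscribers:
            handler(threat)

bus = ThreatBus()
log = []
for component in ("firewall", "compute", "storage"):
    bus.subscribe(lambda threat, c=component: log.append(f"{c} blocking {threat}"))

# An endpoint device detects a threat and alerts every layer at once.
bus.publish("worm-variant-42")
print(log)
```

In a real deployment the bus would be a networked messaging system rather than an in-process list, but the topology is the same: detection at any one point propagates to defenses at every layer.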

As an example, imagine we discover a new super-critical vulnerability in our operating system which allows an authorized user to gain root privileges. Today, in most enterprises, it takes days if not weeks to detect and remediate the vulnerability. In tomorrow's world, the system should be smart enough to detect such gaps and apply the fix immediately. Why wait, when we know waiting can have an adverse impact on our business? And yes, did I mention it should be done without downtime to the business? After all, your body does not need downtime to fix YOU.
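The detect-and-fix-immediately flow could be sketched as: scan the fleet for the vulnerable version and patch without waiting for a human. All host names, versions, and the "patch" step below are fabricated for illustration; a real remediation pipeline would gate this on testing and rollback safeguards:

```python
# Sketch of automated vulnerability remediation: find vulnerable hosts,
# patch them in place. Hosts, versions, and the patch step are all
# fabricated assumptions for illustration.

VULNERABLE_BELOW = (4, 8, 3)   # hypothetical: kernels older than 4.8.3 exposed

fleet = {"web-01": (4, 4, 0), "db-01": (4, 9, 1), "app-01": (3, 10, 0)}

def needs_patch(version):
    # Tuple comparison gives natural version ordering: (4,4,0) < (4,8,3)
    return version < VULNERABLE_BELOW

def remediate(hosts):
    """Patch vulnerable hosts in place; return the list of hosts fixed."""
    fixed = []
    for host, version in hosts.items():
        if needs_patch(version):
            hosts[host] = VULNERABLE_BELOW  # stand-in for a live kernel patch
            fixed.append(host)
    return fixed

print(remediate(fleet))  # ['web-01', 'app-01']
```

The "without downtime" requirement is the hard part in practice; the sketch only shows the decision logic, not live patching.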

To summarize, we need the following capabilities for our data center security:

A multi-layered, interconnected security system, with a common messaging bus between different infrastructure components to detect threats and notify each other of their status

Defenses that are both innate and adaptive, to react to different types of threats

Self-learning with self-healing capabilities, continuously learning and adapting to new threats

The ability to react at the speed of light

In the next article, we will focus on the body's nervous system, the most complex but also the most intelligent sensor system on the planet.

Ashish Nanjiani is a Senior IT Manager within Cisco IT, managing Cisco's worldwide IT data centers as an operations manager. With 20 years of IT experience, he is an expert in data center operations and automation. He has spoken at many inter-company events on data center automation and helps IT professionals digitize their IT operations. He is also an entrepreneur and has been successfully running a website business for 10+ years.

Ashish holds a Bachelor of Science degree in Electrical and Electronics and a Master's in Business Administration. He is a certified PMP and Scrum Master. He is married and has two lovely daughters. He enjoys playing with technology during his free time. [email protected]
