The Essence of Uptime

Uptime is a key performance indicator (KPI). Some would say it is the key performance indicator, the sine qua non, of productive computing. If you can’t keep your system operational, you have nothing. None of the many functionalities – the bells and whistles – matter one whit if your customers can’t access your site or service. The expectation in the industry is for near 100% uptime.

So how do you get there? Every company’s IT environment is different. But the principles of maintaining uptime are common to all, and none of them are the result of luck.

Measurement

The standard for network uptime is 99.999% availability, often referred to as Five 9s. Of course, the only way to determine that figure is to measure it. And that all depends on what you are measuring.
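Five 9s is easier to grasp as a downtime budget. A quick back-of-the-envelope calculation (a sketch, not tied to any vendor's SLA math) shows how little downtime each availability tier actually allows per year:

```python
# Convert an availability target into an annual downtime budget.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # ignoring leap years

def downtime_budget_seconds(availability_pct: float) -> float:
    """Seconds of allowed downtime per year at a given availability."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for nines in (99.9, 99.99, 99.999):
    budget = downtime_budget_seconds(nines)
    print(f"{nines}% -> {budget / 60:.1f} minutes of downtime per year")
```

At five nines, the entire year's budget is roughly five minutes and fifteen seconds of downtime.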

Identifying how long a server has been operational is fairly easy. On UNIX-like systems, for instance, the command is simply [uptime], and the results are straightforward: the elapsed time is expressed in days, hours, and minutes. It shows how long the machine has been powered on with the operating system running. What it doesn’t tell you is how long a particular service has been running, or whether the server has been reachable online.
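On Linux, the raw number behind that command lives in /proc/uptime as elapsed seconds. A small sketch (the format_uptime helper is ours, not part of any standard tool) converts that value into the familiar days-hours-minutes form:

```python
def format_uptime(total_seconds: float) -> str:
    """Render elapsed seconds roughly the way `uptime` does."""
    seconds = int(total_seconds)
    days, seconds = divmod(seconds, 86400)   # whole days
    hours, seconds = divmod(seconds, 3600)   # remaining hours
    minutes, _ = divmod(seconds, 60)         # remaining minutes
    return f"{days} days, {hours}:{minutes:02d}"

# /proc/uptime holds "<uptime_seconds> <idle_seconds>" on Linux.
try:
    with open("/proc/uptime") as f:
        elapsed = float(f.read().split()[0])
    print(format_uptime(elapsed))
except FileNotFoundError:
    pass  # not a Linux system; the metric comes from elsewhere
```

Note that this number says nothing about the services the machine is supposed to be providing, which is exactly the gap described above.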

But what if the server is up and running on the internet while its essential services have crashed? What if only some of the users can access the server? Some people use the term availability for this metric. It makes no difference if the administrator can ping the server from a workstation when no one else can reach it: as far as the customers are concerned, the server has 0% availability.

Anyone trying to measure uptime should first clarify terms and define the measurement appropriately. Uptime on a server is not necessarily the same as uptime for an application on the server.
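One concrete way to define the service-level measurement is to probe the application at regular intervals and compute the fraction of successful probes. A minimal sketch (the probe results here are hard-coded; in practice they would come from real HTTP or port checks against the service):

```python
def availability_pct(probe_results: list) -> float:
    """Percentage of probes that found the service reachable."""
    if not probe_results:
        raise ValueError("no probes recorded")
    return 100 * sum(bool(r) for r in probe_results) / len(probe_results)

# One day of one-minute probes (1,440 checks); two failed.
probes = [True] * 1438 + [False] * 2
print(f"{availability_pct(probes):.3f}%")
```

The server in this example might report 100% uptime at the OS level while the measured service availability comes out lower, which is precisely why the terms need to be defined up front.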

Monitoring

In the traditional data center, the focus was on ensuring that servers, routers, and switches continued running, that they were available on the network to the customer, and that the performance of these components was satisfactory. Network operation centers used SNMP-based tools to monitor managed objects.

In the early days, much of the work was reactive. A switch turns from green to red, and a NOC technician opens a ticket. In time, proactive automated tickets took over much of that work. Eventually, self-healing networks became a reality, and many of the fixes became automatic.
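The shift from reactive to automated ticketing can be sketched as a poll loop: instead of waiting for a technician to notice a red status light, the monitor opens the ticket itself. Everything here (the device names, the health-check callables, the ticket strings) is illustrative, not any particular NOC product:

```python
from typing import Callable

def poll(devices: "dict[str, Callable[[], bool]]") -> list:
    """Run each device's health check; return auto-opened tickets."""
    tickets = []
    for name, is_healthy in devices.items():
        if not is_healthy():
            tickets.append(f"AUTO-TICKET: {name} failed health check")
    return tickets

devices = {
    "core-switch-1": lambda: True,
    "edge-router-2": lambda: False,  # simulate a failed device
}
for ticket in poll(devices):
    print(ticket)
```

A self-healing step would slot in naturally before the ticket is cut: attempt a scripted remediation first, and only open the ticket if it fails.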

In today’s world of cloud computing, virtualization, analytics, and artificial intelligence, our monitoring systems have become much smarter. In fact, there is a movement toward autonomous networks with automatic resource allocation. It’s all getting better.

Redundancy

Of course, there is not going to be 99.999% (let alone 100%) uptime if the resources are not available. Automatic failovers need devices to fail over to. And replacement parts should be readily available, if not already onsite. When a card in a switch fails, even if traffic has been moved to another switch, a technician will still need to be onsite to physically replace it.

But now that so much of our infrastructure is going virtual, the footprint for actual hardware is continually shrinking. Even so, any environment for virtual machines, software-defined networking (SDN), or network functions virtualization (NFV) should have the resources and capacity for seamless failovers or redirects. Many of these issues are being addressed by current advancements in orchestration and automation.
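At its core, a seamless failover is simple: keep an ordered list of redundant targets and route to the first healthy one. A minimal sketch of that selection step (health status is passed in; a real system would probe continuously and update it):

```python
def pick_backend(backends: list, healthy: set) -> str:
    """Return the first healthy backend; raise if none remain."""
    for backend in backends:
        if backend in healthy:
            return backend
    raise RuntimeError("no healthy backend available: total outage")

backends = ["dc1-app01", "dc1-app02", "dc2-app01"]
print(pick_backend(backends, healthy={"dc1-app02", "dc2-app01"}))
# dc1-app01 is down, so traffic fails over to dc1-app02
```

The point of the redundancy discussion above is the precondition: if the healthy set is empty, no failover logic in the world can help.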

Reliability

There is a society within the IEEE devoted to the concept of reliability. It is called the Reliability Society. Its webpage states its purpose: “We want to assure that a system will perform its intended function for the required duration within a given environment, including the ability to test and support it throughout its total life cycle.”

Reliability is a quality that is essential in our friends as well as our computer systems. Without it, maintaining uptime becomes much more difficult. Better to have a machine or application that keeps on going and does what it’s supposed to do than to deal with frequent repairs. There are stories on the internet about a Novell NetWare 3 server whose uptime reached 16 years before it was finally shut down.

Repair

Thankfully, many of our computing resources are now virtual. That was not the case with ENIAC, the 1940s computer. Its repair crews eventually were able to locate and replace one of its 18,000 vacuum tubes in just 15 minutes. Dealing with today’s IT environment can be much easier.

That depends, however, on how qualified the people are who are dealing with the problems. An experienced engineer may be able to resolve an issue in five minutes that would take a newbie several hours. It helps when there are clear processes in place, good documentation, and a robust knowledge base.
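The effect of faster repair on uptime is captured by the classic steady-state availability formula: availability = MTBF / (MTBF + MTTR), where MTBF is mean time between failures and MTTR is mean time to repair. A quick sketch comparing the five-minute engineer with the several-hour newbie (the MTBF figure is purely illustrative):

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

MTBF = 5000.0  # hypothetical hours between failures
for label, mttr in [("experienced (5 min)", 5 / 60), ("newbie (4 hrs)", 4.0)]:
    print(f"{label}: {availability(MTBF, mttr) * 100:.4f}%")
```

With failures equally frequent in both cases, shaving MTTR from hours to minutes is what pushes the figure from three nines territory toward five.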

Conclusion

The IBM model for assessing the adequacy of a system is called RAS. It stands for reliability, availability, and serviceability. The uptime of any system is dependent on a variety of factors. The chief element is the desire for high quality. A combination of good design, manufacturing, operation, and maintenance will give a system a better chance. Quality is the key ingredient.
