

Intelligent Power Management Shines A Light During Blackouts

Disaster preparedness is driving a surge in intelligent power management innovation.

Whether they are caused by natural disaster, aging infrastructure, or inadequate supply, blackouts have long been one of our industry’s deepest fears. If your planning and infrastructure are inadequate, a sustained blackout will leave your people scrambling, your servers quiet, and your company frozen.

As “Superstorm Sandy” savagely demonstrated in the fall of 2012, a major disaster can happen at any time. Human tragedy aside, the storm was a wake-up call to the region’s data centers, which started to fall one by one as secondary power quickly ran dry. By the end, the lesson was clear: we need new tools and fresh thinking when it comes to protecting our uptime during power shortages, blackouts, and disasters.

Our industry is in the middle of a major rethink when it comes to how we use energy, thanks in large part to rising energy costs and the push to become more sustainable. For those planning for the next big power outage, this is great news, since it is driving a surge of innovation around intelligent power management.

CONVENTIONAL DISASTER PLANNING

Following Hurricane Sandy, affected data centers found themselves cut off from the power grid and relying on emergency secondary power. In most cases, that meant the conventional choice: diesel generators.

With the blackout dragging on for days, data centers started to go down and several weak links became obvious. These included flooded generator rooms, inadequate on-site fuel storage, and disruption to delivery trucks, bridges, and roads. In a few cases, volunteer bucket brigades were even called on to carry fuel up high-rise buildings in a last-ditch effort to keep servers running.

Naturally, these obvious weak links are what people tend to zero in on when preparing for the next disaster. They have also been the focus of conventional disaster recovery planning: making on-site backup power as robust as possible. Some typical measures include:

• Improved contingency planning. It’s essential to plan for both short-term and long-term power outages, including additional, redundant plans in case of major disasters and supply chain interruptions. All of these plans should include regular check-ups and testing to ensure that they are always up to date and ready to go.

• Bolstering diesel back-up. Diesel gensets are generally reliable — when they aren’t situated in vulnerable building locations — but the fuel itself does have a troubling tendency to run out. Conventionally, improved blackout planning has included keeping a larger supply of fuel on hand and, when possible, replacing old gensets with models that waste less fuel.

• Alternative back-up power. Increasingly, data centers are exploring alternative sources of emergency power, either to augment or replace diesel generators. These include renewables that are less vulnerable to supply-chain interruptions, such as wind and solar.

As far as protecting uptime, these measures all have an important role to play, especially when designing new facilities. But if this is our only response to disasters like Hurricane Sandy, then we are guilty of only looking at the final link in the chain — a chain that has multiple weak links.

To explore all of the potential points of failure, we need to start asking questions. Why does backup power run out so quickly? Are there ways to get more from our existing backup infrastructure? How can we accomplish the same work while using less electricity and less diesel?

DOING MORE WITH LESS

Usually, when we talk about energy efficiency, the conversation is all about reducing operating costs. That’s a critical conversation for any competitive business to have, but efficiency can also enhance competitiveness in a less direct manner.

The fact is that diesel and other forms of backup power quickly run out; if you’re not using that power effectively and efficiently, then it runs dry even faster. When your data center is already energy efficient, and you’re already accomplishing more with fewer resources, you’re automatically in a more competitive position when it comes to riding out a blackout.

This is one area where our industry has been making steady gains in recent years, particularly at the facility level, where a number of advancements in design and procurement are having a big impact:

• Smarter waste heat management. This includes more efficient airflow design, passive cooling measures that take advantage of ambient temperatures, and even allowing equipment to run at significantly higher temperatures than we have in the past.

• Efficient IT equipment. Servers directly use up to half of the power drawn by a data center. On top of that, one watt of power saved at the server level can generate as much as 2.84 watts of savings along the entire data center power chain, according to research conducted by Emerson Network Power. So choosing new, more efficient technologies can deliver a clear advantage.

• Efficient power distribution. A surprising portion of energy is lost just in the process of delivering power to IT equipment. You can reduce waste dramatically by eliminating power conversion steps and choosing new, high-efficiency transformers, power distribution units (PDUs), and uninterruptible power supply (UPS) units.

• Other facility improvements. Data centers are workplaces that have a lot in common with other commercial and industrial facilities. Significant efficiency gains can be made just by looking at more mundane systems like facility lighting and workspace HVAC.

You’re probably noticing something unavoidable about that list — namely, that these are mostly capital-intensive measures that put newly built facilities at a distinct advantage. While retrofits and upgrades are inevitable at most data centers, operators are naturally more interested in measures that can help them make efficiency gains sooner without radically affecting their existing facility and IT infrastructure. This is where software comes in.

SOFTWARE AND EFFICIENCY

Data center operators are accustomed to thinking of software as the component most vulnerable during a blackout — and definitely not as a key part of a robust blackout-readiness strategy.

Until recently, much of the innovation around energy has been focused on facility-level and hardware-level components, but software-based solutions are emerging that give operators more insight and control, massively improving power management and energy efficiency.

How exactly do they accomplish that? Well, there’s such a wide range of solutions out there — from data center infrastructure management (DCIM) solutions to application-aware power management (AAPM) software — that it can be difficult to generalize. But there are some obvious needs that the software providers have stepped in to address:

• Collecting data. The reality today is that most data centers don’t have the tools in place to measure how and where they are using energy, beyond the most basic consumption and performance measurements. Software solutions shine a much-needed light by collecting data all along the power chain, from the facility as a whole down to individual servers and applications.

• Creating actionable insights. One step above pure data collection, software can also play a critical role in creating metrics that allow operators to fully monitor efficiency performance, set future targets, and plan better. This goes much deeper than familiar metrics like power usage effectiveness (PUE), answering questions we’ve never even been able to ask before. For example, do you know your energy consumption per transaction? Per application? Do you know how much it’s costing to power idle servers? Or just your most critical applications?

• Providing administrative control and automation. Individual infrastructure and IT components often work independently of one another. Software can get all of the pieces sharing data and talking to one another. Ultimately, this means more granular and more flexible management of your resources in direct response to fluctuating conditions, including user demand, power availability, electricity rates, and backup fuel levels.
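As a minimal illustration of the metrics described above, here is a sketch of two calculations an operator could make once per-server and per-application measurements are available. All function names and figures here are hypothetical, not drawn from any particular DCIM or AAPM product:

```python
# Hypothetical examples of metrics that go beyond PUE: energy consumed per
# transaction, and the cost of powering servers that do no useful work.

def energy_per_transaction(kwh_consumed: float, transactions: int) -> float:
    """kWh of energy used per completed transaction."""
    return kwh_consumed / transactions

def idle_server_cost(idle_servers: int, watts_per_server: float,
                     hours: float, rate_per_kwh: float) -> float:
    """Cost of powering servers that are doing no useful work."""
    kwh = idle_servers * watts_per_server * hours / 1000
    return kwh * rate_per_kwh

# Hypothetical day: 1,200 kWh consumed serving 3M transactions, plus
# 40 idle servers drawing 250 W each, at $0.12/kWh
print(energy_per_transaction(1200.0, 3_000_000))  # kWh per transaction
print(idle_server_cost(40, 250, 24, 0.12))        # ~ $28.80 per day
```

Even simple figures like these let an operator put a dollar amount on idle capacity, which is exactly the kind of insight the raw utility bill cannot provide.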

In a non-emergency situation, this enhanced insight and control simply adds up to significant savings on operating costs. That’s obviously worthwhile in and of itself — but it’s not until a serious outage hits that the real uptime benefits of intelligent power management software will come to the forefront.

BLACKOUTS IN AN APPLICATION-AWARE SETTING

Let’s look at one example of how tools like AAPM can help protect uptime in an emergency.

AAPM allows for a server-level and application-level understanding of power consumption. With an AAPM solution in place, a data center knows exactly how much power they need to support individual applications and different service-level agreements. This information can feed directly into their disaster recovery planning.

When an outage does hit and the data center has to resort to backup power, they are already at an advantage. Since their planning is based on detailed data rather than on rough estimates, they’re equipped with a much more accurate picture of how long they can hold out at full operational capacity before their fuel runs out.
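The advantage described here comes down to simple arithmetic once the load is measured rather than estimated. The following is a minimal sketch; the fuel inventory and burn rate are illustrative numbers, not from the article:

```python
# Remaining generator runtime at a measured load. With real consumption data
# (rather than rough estimates), this figure can be kept continuously accurate.

def runtime_hours(fuel_litres: float, burn_rate_l_per_hour: float) -> float:
    """Hours of generator runtime left at the current measured burn rate."""
    return fuel_litres / burn_rate_l_per_hour

# Hypothetical: 10,000 L of diesel on site, gensets burning 200 L/h
full_load = runtime_hours(10_000, 200)  # 50.0 hours
# Halving the measured load (and hence the burn rate) doubles the runtime
half_load = runtime_hours(10_000, 100)  # 100.0 hours
```

The same relationship explains why efficiency gains translate directly into blackout endurance: every watt shed stretches the same fuel inventory further.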

They’re also making much more efficient use of that fuel. On a day-to-day basis, AAPM has allowed them to “cut the fat” by dynamically adjusting the power state of servers in direct response to application demand. That results in much less energy being wasted on powering and cooling idle servers. (How much less? According to the global consultancy McKinsey and Company, the average data center uses just 6% to 12% of the power pulled from the grid to do real computational work.) In other words, if our hypothetical data center is already using 25% to 50% less power than a comparable facility, their fuel can last up to twice as long without impacting their performance at all.

But what if things get really bad, as they did with Hurricane Sandy? With no grid power and no fuel deliveries on the way, the data center has the tools they need to make flexible, real-time decisions about their power consumption as circumstances change.

Using an AAPM solution with fully granular administrative control, they are able to allocate less power to secondary applications and preferentially protect their critical applications above everything else. Looking at costs, they’ll also be able to limit the amount of expensive secondary power — often two to ten times more expensive than grid power — that isn’t being used to generate revenue or provide vital services.
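One way to picture that kind of granular control is a simple power-budget allocator that keeps critical applications powered first and sheds secondary ones as backup capacity shrinks. This is a hypothetical sketch, not any vendor’s actual API; all application names and wattages are invented:

```python
# Priority-based power shedding: given a power budget derived from remaining
# backup capacity, keep critical applications fully powered and drop the rest.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    watts: float
    critical: bool

def allocate(apps: list[App], budget_watts: float) -> list[str]:
    """Return the applications kept powered, critical ones first."""
    powered = []
    remaining = budget_watts
    # Sort so critical applications are considered first (stable sort
    # preserves the original order within each priority tier)
    for app in sorted(apps, key=lambda a: not a.critical):
        if app.watts <= remaining:
            powered.append(app.name)
            remaining -= app.watts
    return powered

apps = [
    App("billing", 400, critical=True),
    App("analytics", 600, critical=False),
    App("checkout", 500, critical=True),
    App("reporting", 300, critical=False),
]
print(allocate(apps, 1200))  # ['billing', 'checkout', 'reporting']
print(allocate(apps, 500))   # ['billing'] — only one critical app fits
```

In a real deployment the same decision would be driven by live telemetry and SLA data rather than static numbers, and re-evaluated as fuel levels and demand change.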

Of course, software-based power management is useful for a lot more than safeguarding your uptime during a crisis. In truth, most of the data centers now adopting these kinds of solutions are more motivated by the ability to dynamically manage operating costs. But when a major blackout or natural disaster does inevitably hit, those same data centers are going to be thanking themselves for paying attention to the connection between smart power management, energy efficiency, and uptime.

Aaron Rallo is the founder and CEO of TSO Logic. Aaron has spent the last 15 years building and managing large-scale transactional solutions for online retailers, with both hands-on and C-level responsibility in data centers around the world. He can be reached at arallo@tsologic.com. www.tsologic.com
