Industry Trends

If you’re like me, you’ve had to deal with a few batteries in your career. Batteries are often one of the weak links in mission-critical environments, and consequently we find ourselves obsessing over battery condition, life expectancy, and failure rates. We can literally spend millions of dollars each year on failed cells and end-of-life replacements. Wouldn’t it be nice not to have to deal with that, or even to worry about it? I have recently (with some excitement, I might add) been investigating vanadium redox flow batteries. These systems use fluids, or more precisely electrolytes, to store energy. I’m not going to get technical in this post, but if you want to know more today, here’s a Wikipedia link.

How the battery works

Basically, there are two tanks of electrolyte with a membrane, held in a frame, between the two. The two electrolytes set up a potential across the membrane that allows for the flow of electrons....

A number of articles discuss raising the temperature of data center spaces in order to save cost. In many situations, very significant savings can be had by doing this, but raising the temperature is only part of the picture. Raising the temperature of any space simply increases the enthalpy, or total energy, of that space; by itself, it does nothing for the thermodynamics of the system. From a data-center-space perspective, regardless of the temperature, the heat energy generated by the servers and equipment must still be transported or expelled from the area. It is in this process that you make the system efficient or not. Consider the following diagram: In this conventional design, no matter what temperature the data center is held at, 100 kW of heat still needs to be removed. Moving this heat energy costs energy. A centrifugal chiller takes about 0.6 kW per ton to remove the heat. So if we use this example:...
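The arithmetic behind the example above can be sketched as follows. This is a minimal illustration, assuming the standard conversion of 1 ton of refrigeration = 3.517 kW and the post's figure of roughly 0.6 kW of chiller input power per ton:

```python
# Sketch: electrical power a centrifugal chiller draws to remove a given
# heat load. Values are assumptions taken from the post (0.6 kW/ton) and
# the standard definition of a ton of refrigeration (3.517 kW).

KW_PER_TON = 3.517          # 1 ton of refrigeration expressed in kW
CHILLER_KW_PER_TON = 0.6    # chiller input power per ton (from the post)

def chiller_power_kw(heat_load_kw: float) -> float:
    """Electrical power (kW) needed to remove heat_load_kw of heat."""
    tons = heat_load_kw / KW_PER_TON
    return tons * CHILLER_KW_PER_TON

load = 100.0  # the 100 kW load from the example
print(f"{load:.0f} kW load is {load / KW_PER_TON:.1f} tons; "
      f"chiller draws about {chiller_power_kw(load):.1f} kW")
```

So removing the 100 kW in the example costs on the order of 17 kW of chiller power, which is the energy the rest of the post is concerned with reducing.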

While everyone is trying to save money by controlling energy usage at data centers (the topic has even hit the pages of the New York Times in recent days), what do we do with all the older data centers? How can we increase efficiency at these sites? One answer is to control the cold air. While controlling the warm air is also important, warm air does what it does naturally: it rises. Cold air, on the other hand, needs to be delivered to the proper location to do its job. Many older data centers were designed to deliver cold air under a raised floor. The rooms are lined with CRAC units that draw warm air from the overhead area and deliver cold air under the floor, where it is distributed through perforated tiles. In my experience, you contain a vital resource to the maximum extent possible in order to minimize losses and conserve it. So it has always seemed strange to me that designers and engineers have historically chosen to contain the warm air when the vital resource is the cold air. I would allow the warm air either to escape or to have the maximum opportunity to lose some of its energy to the environment, so as to work with physics and not against it....

I enjoyed watching one of the old “Top Gear” episodes recently that discussed the lifetime cost of ownership of the Land Rover as compared to the Prius. The episode discussed the mining of the materials necessary for the batteries and how that impacted the real cost of each car. The “Top Gear” hosts said something to the effect that when you factored in what it costs to manufacture and properly dispose of the lithium batteries, the Land Rover was actually “greener” than the Prius. While there is debate about whether what was said on “Top Gear” was true, it did get me thinking about some of the new innovations I’m seeing in data centers. When I look at factors beyond the obvious operating costs, I find that some of these innovations have hidden costs that affect total cost of ownership. Let’s look at some designs and explore the possible hidden costs....

One of the latest developments in the data center world is operating in a “lights-out” fashion. This operational model simply means running the data center with no on-site personnel. The model relies on automated backup systems, redundant data-center assets, and/or software to maintain the appearance of 100 percent uptime. The building itself is protected by its secret, unmarked location, by physical security, and by remote monitoring. So what do lights-out data centers mean for facilities? How do they affect operations and methods? What do they mean for our staffing models, spare-parts inventories, maintenance programs, and training initiatives?

Two Competing Philosophies

Since the data center operates without human presence (hence “lights out”), one of two strategies will emerge as the method for delivering services: (1) either the data center’s capability must be redundant within the overall content delivery/processing network (spare data centers), or (2) the data center itself must be constructed and engineered to be robust enough to maintain services and operations through nearly all foreseeable events and situations....

Everything Evolves

I still remember playing with my Commodore VIC 20 and thinking that 3K of memory was plenty. But of course, within a couple of months, 3K of memory wasn’t enough, and I was already entertaining the idea of getting a Commodore 64. While both the VIC 20 and the Commodore 64 hooked up to your TV set, the Commodore 64 had color, and it had 64K of usable memory: more memory than I could use in a lifetime, or so I thought. Nowadays, my watch has more computing power. Data centers have evolved too. At first, computers were housed in “computer rooms”; these computers were so large that an entire room was dedicated to a single machine. As computers evolved, “computer rooms” became “data processing rooms.” Then entire floors were devoted to data processing. Gradually, entire facilities became “data centers.” Now we build server farms, as this Facebook data center picture shows.

Predictable Patterns...