Over the years that I have been at Mestex I have marveled at how well we have been able to control our warranty expenses. Having come from one of the large HVAC product manufacturers, where warranty expense of 3 to 4 percent of sales was not uncommon, our average of less than 1 percent of sales is extraordinary.

Of course, this level of product quality does not happen by accident. Selecting components that are designed for long life, using material gauges that are one grade heavier than most competitors', constructing many products with welded steel frameworks... all contribute to products that are designed to last. But a great design can fall down at the production level, so we have also implemented laser alignment systems, rotating component balancing systems, multi-point functional testing of every product that leaves the building, and compliance with all industry standards for safety. We are confident that when a product leaves our building it has been built to a high standard for longevity, service, and efficiency.

But there are standards outside of our normal industry standards that can improve our quality beyond even the product itself. One of those internationally recognized standards is, of course, ISO 9001. Attaining ISO certification means reaching into all elements of our business and documenting how we do things, in an effort to make all phases of the business better. Mestex is proud to announce that we have now received such a certification and are now an ISO 9001:2008 certificate holder. (Certificate No.: TRC 00937, issued to Mestex, Dallas)

We don't intend to stop there, however, and we have already started work on attaining ISO 14001 certification. That process is targeted at our environmental practices and policies. Although Mestex has taken major steps over the last few years to reduce our environmental impact, we believe there is more we can do, and that is our next target.

The other day I was conducting a training class, and we were
discussing evaporative cooling. Someone said they didn’t think evaporative
cooling would work very well in their area because the summer temperatures were
90°F plus with 90% RH. If you were to look at many psychrometric charts, you’d
see this point is, dare I say it, “off the chart”. To get a feel for this,
consider that a typical steam room operates at about 104°F and 100% RH. At 90°F
with 90% RH the heat index is 122°F. It’s doubtful the temperature and humidity
in their area are ever that bad at the same time.
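Heat index figures like the one above come from a published approximation, so they are easy to check. Here is a quick sketch using the NWS Rothfusz regression (the standard simplified formula; treat results as approximate, and note it is only intended for roughly 80°F+ and 40%+ RH):

```python
def heat_index_f(t_f, rh_pct):
    """Approximate heat index (deg F) from air temperature (deg F) and
    relative humidity (%) using the NWS Rothfusz regression."""
    T, R = t_f, rh_pct
    return (-42.379 + 2.04901523*T + 10.14333127*R
            - 0.22475541*T*R - 6.83783e-3*T*T - 5.481717e-2*R*R
            + 1.22874e-3*T*T*R + 8.5282e-4*T*R*R - 1.99e-6*T*T*R*R)

# The 90 F / 90% RH combination from the anecdote:
print(round(heat_index_f(90, 90)))  # about 122 F
```

Running it for 90°F and 90% RH lands right at the 122°F figure quoted above.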

People generally associate high temperatures with high
humidity percentages. It’s more likely that high temperatures will be
associated with lower humidity percentages. At 80°F and 41% RH the heat index is
80°F; 80 degrees feels like 80 degrees. At this point there is approximately
0.009 pounds of moisture per pound of dry air in the atmosphere. If the
moisture content remained constant and the air warmed to, say, 90°F, the relative
humidity would actually drop to about 30%. Conversely, if the moisture content
remained constant and the temperature dropped to 70°F, the relative humidity
would increase to about 57%. This is because cooler air can hold less moisture
than warmer air, and relative humidity is the ratio of the moisture in the air
compared to the amount of moisture the air (at a specific temperature) can hold
expressed as a percentage.
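That relationship is easy to see in a few lines of code. This is a rough sketch, assuming sea-level pressure and using the Magnus approximation for saturation vapor pressure, so the results land within a point or so of the chart values quoted above:

```python
import math

P_ATM = 101.325  # sea-level atmospheric pressure, kPa (assumption)

def sat_vapor_pressure_kpa(t_c):
    """Magnus approximation for saturation vapor pressure over water (kPa)."""
    return 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))

def rel_humidity(w, t_f):
    """Relative humidity (%) for humidity ratio w (lb moisture per lb dry air)
    at dry-bulb temperature t_f (deg F)."""
    t_c = (t_f - 32) * 5.0 / 9.0
    p_v = w * P_ATM / (0.622 + w)   # partial pressure of the water vapor
    return 100.0 * p_v / sat_vapor_pressure_kpa(t_c)

# The same 0.009 lb/lb of moisture at three temperatures:
for t in (70, 80, 90):
    print(t, round(rel_humidity(0.009, t)))
```

Same moisture, three very different relative humidities: roughly 58%, 41%, and 30% as the air warms from 70°F to 90°F.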

People usually think of their air conditioner as providing
cool dry air in the summer, and it does because it does both sensible and
latent cooling. Sensible cooling lowers the temperature we sense, and latent
cooling removes the moisture. The air entering the coil may be 78°F and have
0.0101 lb of moisture per pound of dry air. The coil surface may be 45°F, and
thus the leaving air may be 60°F (it won’t be 45°F because the air doesn’t give
up all of its heat in a single pass across the coil). At this point the leaving
air may have a moisture content of 0.0062 lb per pound of dry air. This is a
significant reduction in moisture, and it is evidenced by the water dripping
from the evaporator coil. The leaving air is much drier than the entering air.
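To put that moisture reduction in tangible terms, here is a rough sketch of the condensate removal it implies. The 1000 CFM airflow is a hypothetical figure chosen for illustration, not part of the example above:

```python
# Hypothetical airflow and standard-air assumptions for illustration
CFM = 1000            # airflow, cubic feet per minute (assumed)
AIR_DENSITY = 0.075   # lb of dry air per cubic foot at standard conditions
LB_PER_GALLON = 8.34  # weight of a gallon of water

w_in, w_out = 0.0101, 0.0062   # humidity ratios from the coil example

# Dry-air mass flow times the drop in humidity ratio gives water removed
lb_water_per_hr = CFM * AIR_DENSITY * (w_in - w_out) * 60
print(round(lb_water_per_hr, 1), "lb/hr,",
      round(lb_water_per_hr / LB_PER_GALLON, 1), "gal/hr")
```

At those assumed conditions the coil wrings roughly 17.5 lb (about 2 gallons) of water out of the air every hour, which is why the drain pan stays busy.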

However, in relative terms the air coming off the evaporator
coil in the air handler has a relative humidity of 100% or close to it. Remember,
cool air can’t hold as much water as warm air. When the air entering the coil
contacts the cold fins, it cools rapidly. Condensation occurs when air can’t
hold the moisture it contains. At that point the air is fully saturated, meaning
its relative humidity is 100%.

The point of this brief essay is that relative humidity is all relative - to
the moisture in the air and to the air temperature. Warm air isn’t necessarily
humid; cool air isn’t necessarily dry, relatively speaking.

Mestek is an extremely broad and diversified HVAC/architectural products company. While that makes our products some of the most widely used in the construction industry, we face an interesting problem: very few people have heard of us. Since the company goes to market with the individual brand names manufactured by our various divisions, the name "Mestek" is pretty much unknown.

Many of the brands that make up the Mestek family have long histories in the HVAC business. One of those brands is produced in Dallas and is known as "LJ Wing". We recently did a little digging in our archives and came across a brief history of the brand that I want to share.

The LJ Wing company actually started with the inventions of Mr. Levi Wing back in 1875. Yes, that date is correct...1875! As early as 1878 Wing was starting to receive awards for designs of products such as a ventilation fan. In 1902 the company was officially incorporated as the LJ Wing Manufacturing Company and that company name stayed unchanged for the next 63 years. Through a series of sales and acquisitions LJ Wing finally became a part of Mestek in 1987.

Looking back through the history of Wing highlighted how innovative the company has been over the past 100 plus years.

Wing is believed to be the originator of gas engine power for marine applications. Your current outboard motor was spawned from that idea decades ago.

Wing received a number of awards from 1878 through 1893 for design and development of a ventilation fan known at the time as a "disc fan"... basically a precursor to today's "propeller" fans.

One of the other significant innovations that was part of the development of the disc fan was the integration of an electric, steam, or gas-powered motor to drive the fan.

Following this line of integrating drive systems that were unique for the time, in 1907 Wing introduced a turbine-driven forced draft blower for the power plant industry.

In 1917 Wing introduced marine versions of its propeller fans to the US Navy. This design ended up installed on over 200 US Navy destroyers over the next couple of years and, eventually, on almost 80% of the fleet.

1920 saw the first product that would eventually become part of today's product line...a lightweight, overhead mounted, unit heater with hydronic coil.

In the late '20s Wing introduced what we know today as "door heaters".

In the late '30s a core product of today's company was introduced... the integral face and bypass coil. Originally offered in a vertical configuration, the same basic coil design is available today in both vertical and horizontal configurations.

Also in the late '30s Wing introduced an axial flow blower system to industry and the navy.

Another unique product, developed in 1935 and still produced today, was the revolving discharge, which allows unit heaters to spread heat over broad areas by slowly spinning the discharge openings through 360 degrees of rotation.

Over the next 90 years Wing products were refined and modified for better performance and reliability. Then, in 2014, Wing products that have integrated fans and housings were upgraded with optional DDC controls that not only control unit operation but allow the LJ Wing products to be integrated into a building automation system or accessed remotely via the Internet.

LJ Wing clearly has a history of innovation and a history that has survived all of the major economic downturns. Engineers, contractors, and owners who specify or purchase LJ Wing products can obviously have confidence that the company will be there to support them.

Wing is just one of the many brands produced in Dallas by the Mestex division of Mestek and the common theme for all of those brands is innovation, stability, and longevity.

Over the last couple of months since my last posting I have been very busy managing our movement into new markets and grasping at new opportunities. One of the benefits of taking the deep dive into these markets is getting to look at some of the details of product design and application to the specific problem to be solved.

This has raised a question in my mind.

Why does the mission critical industry design "thermos bottles" and then fret over the cost of and methods of getting rid of the heat that all those servers generate?

There is something that strikes me as illogical about creating buildings or modular data centers with super insulated walls and ceilings that are guaranteed to trap the heat that is dumped into the hot aisle (assuming they have aisle separation). Then the mechanical system is tasked with rejecting all of the pent up energy without costing the owner a fortune. Is it any wonder that data centers are one of the largest consumers of electrical energy in the world?

Centuries ago architects and designers figured out that it is more efficient to cool a space if you simply dump the heat out to the atmosphere. Buildings used to be designed to take advantage of stratification and stack effect to cause the hot air generated in the space to rise and leave the building. No need to cool the air back down to a reasonable temperature and put it back into the space so that you can heat it all up again. Lofted ceilings and roof lines came into the design world for a reason.

So, why is the data center different? Frankly, I don't know. Why not take the hot aisle air and vent it out to the atmosphere? Sure, you have to replace that exhausted air with new air from the outside, but unless the data center is located in Death Valley, the odds are that the incoming air is at a lower temperature than the air that would be recycled from the hot aisle of a data center designed to operate under the latest ASHRAE TC 9.9 guidelines for best practices.

My best guess why we continue to do what is intuitively illogical is inertia. "We have always done it that way". I think it is time to rethink the old ways and come up with creative solutions in the design of data centers.

The consortium is currently working on fourteen research projects, and Mestex serves as an advisor ("mentor") on three of those projects. Two of the projects that Mestex is mentoring cover evaporative and fresh air cooling of data centers and contaminants in data centers that use fresh air cooling. As you might guess, the project on evaporative and fresh air cooling offers the greatest opportunity for the consortium to reach its stated goals. In order to support that research, Mestex has installed a small data pod at its facility in Dallas and is cooling that data pod with a commercially available Aztec ASC-5 unit. The ASC-5 has built-in DDC controls that facilitate the use of multiple temperature and humidity sensors for control without any special modifications. The controls also include a provision for pressure-sensing control, and that is also implemented in this case.

In addition to the data that is presented by the standard Aztec DDC controls there are additional thermocouples and sensors installed that are streaming data to researchers at the University of Texas at Arlington.

One of the most critical considerations that prevents many data center operators from reducing their energy consumption by huge amounts is the reluctance to introduce outside air to the facility. The second Mestex project is focused on that research and we were fortunate to have the input of one of the world's experts on contamination control provide test coupons and laboratory analysis of the results. Dr. Prabjit "PJ" Singh, of IBM, provides guidance and analysis to companies around the world and is a major source of information for the ASHRAE TC 9.9 committee on data center cooling. Dr. Singh, Dr. Dereje Agonafer from the University of Texas at Arlington, and several members of the NSF IAB toured the Mestex facility at the conclusion of the meetings this week.

Drs. Singh and Agonafer are shown here learning about the technology behind the patented "Digital High Turndown Burner" that was developed at Mestex. Jim Jagers, Mestex Sales Manager, conducted the tour and provided a "deep dive" into how this unique technology works before the group proceeded to the research data pod for additional discussions.

The Mestex "Open Access Project" continues to move forward so I thought I would provide a brief update on the current research activity and the plans for the next few months.

The installation at the Mestex facilities in Dallas has been brought up to the expected final configuration with a total of 120 servers, intelligent PDUs, and switches distributed over 4 cabinets. We have separated the hot and cold aisles with a combination of a hard wall and flexible "curtains"...this has turned out to be one of the more important features of the installation. The indirect/direct evaporative cooling system is fully functional although we have also found the need to increase the hot aisle exhaust pressure relief in order to reduce the "back pressure" in the hot aisle.

In addition to the combination temperature and humidity sensors that are part of the standard Aztec control system, and used by the DDC control system to manage the operation of the Aztec unit, we have also installed 32 10K thermistors. These sensors feed information to our data acquisition system, which runs in the background collecting more granular detail about system performance. The sensors are located on the fronts and backs of the cabinets.
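For readers unfamiliar with thermistor instrumentation, converting a resistance reading into a temperature is typically done with the simple beta model. This sketch assumes a generic 10K NTC part with B = 3950; the actual sensors and coefficients in the pod may differ:

```python
import math

def thermistor_temp_f(r_ohms, r0=10_000.0, t0_k=298.15, beta=3950.0):
    """Convert NTC thermistor resistance (ohms) to temperature (deg F)
    using the beta model.  r0 is the resistance at reference temperature
    t0_k (25 C); beta=3950 is a typical value for a 10K part (assumption)."""
    t_k = 1.0 / (1.0 / t0_k + math.log(r_ohms / r0) / beta)
    return (t_k - 273.15) * 9.0 / 5.0 + 32.0

# At exactly 10K ohms the part is at its 25 C reference point:
print(round(thermistor_temp_f(10_000)))  # 77 F
```

A data acquisition system applies a conversion like this to each of the 32 channels before logging.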

As I mentioned, we have spent some time resolving hot aisle/cold aisle separation issues. Although the Aztec unit is monitoring cold aisle pressure and operating the supply fan to maintain a target positive pressure in the cold aisle we found that we still had hot aisle air migrating back into the cold aisle. Over the last few days we have spent time filling small gaps and sealing around the cabinets more carefully and the results were immediately noticeable. The cold aisle temperature was reduced by 5 to 6 degrees F.

The other factor contributing to better separation was the reduction of the "back pressure" in the hot aisle. We had addressed some of this earlier by removing the standard room exhaust grille and replacing it with a screen that had much greater free area. While that made a measurable difference in server temperature rise, we had simply moved the pressure issue from inside the data pod to the return air ductwork on the Aztec unit. That has now been resolved by doubling the size of the pressure relief openings in the return ductwork. Supply fan operation is now improved, server temperature rise is on target, and supply fan motor power consumption has been reduced. We monitor and report real-time PUE for the pod, and these changes have lowered it to between 1.08 and 1.35, depending upon the system operating mode.
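PUE itself is just a ratio, so it is worth spelling out what numbers like 1.08 mean. A minimal sketch, with hypothetical power figures chosen to land at the low end of the range reported above:

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power divided by the
    power delivered to the IT equipment.  1.0 would mean zero overhead."""
    return total_facility_kw / it_kw

# Hypothetical example: 20 kW of servers plus 1.6 kW of fans and pumps
print(round(pue(21.6, 20.0), 2))  # 1.08
```

In other words, a PUE of 1.08 means only 8% of the facility's power goes to anything other than the IT load, compared with roughly 2.0 for a legacy compressor-cooled data center.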

Now that we are beginning to see the kind of stable operation that we were anticipating we have started to plan the next phases of the research.

The Aztec unit is designed to operate in three modes, or some mixture of those modes, depending upon the sensor inputs. The unit can operate in 100% fresh air cooling mode, in an indirect evaporative cooling mode, or in an indirect/direct evaporative cooling mode. Each of those modes introduces characteristics that the data center industry wants to research.

The next round of research will focus on two aspects of fresh air/evaporative cooling:

We will be installing coupons in the space to collect data on contaminants and their potential impact on the circuits in the servers. This phase is expected to run for at least one month, with support provided by IBM.

Following the collection of this data (and possibly overlapping) we will be installing particle count measuring devices. These devices will be installed upstream of the filters in the Aztec unit, downstream of the filters, within the cold aisle, and within the hot aisle. The filter racks in the Aztec unit will allow us to evaluate filters of different MERV ratings and see how well they perform in a typical HVAC unit installation versus the controlled lab environment.

As you can tell, this site offers a unique opportunity for researchers to take their lab research findings and compare them to a real world application with real world equipment. Mestex is pleased to be a part of this NSF sponsored research into data center cooling technologies. We will be hosting a tour for the industry advisory board of the NSF-I/UCRC during their upcoming meeting at the University of Texas at Arlington.

Since I have been traveling extensively over the last few weeks I have not been able to give much thought to our blog. However, the travels have also provided a little fuel for some comments.

First, I continue to be surprised/pleased to hear more and more presentations and discussions about evaporative cooling of data centers. It seems that "the big guys" get it...cooling data centers costs a fortune using compressors/chillers and the servers can handle much higher temperatures than people realize. If you run down the roster of large international web service or cloud service providers you will find that most of them have already implemented evaporative cooling or they have it in the construction plans.

As great as this is there are still market forces that are conspiring against this highly efficient cooling solution. One is the concern over humidity levels in the data center. This concern is compounded by the common use of relative humidity as the conversation point when it is actually absolute humidity that should be considered. This topic will likely be a point of debate for a long time to come since some of the larger companies have concluded that absolute humidity doesn't matter in their facilities...especially with 2 or 3 year server refresh rates...and other members of this progressive group are not sure and choose the "safe path" of limiting absolute humidity or dewpoint in their spaces.
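Since the relative-versus-absolute humidity distinction keeps coming up, it helps to see how dew point (a proxy for absolute humidity) is computed from the numbers people actually quote. This sketch uses the Magnus approximation; the example conditions are hypothetical, chosen to show two very different-feeling rooms sharing the same absolute moisture level:

```python
import math

A, B = 17.62, 243.12  # Magnus coefficients, valid roughly -45 to 60 C

def dew_point_f(t_f, rh_pct):
    """Dew point (deg F) from dry-bulb temperature (deg F) and relative
    humidity (%), via the Magnus approximation."""
    t_c = (t_f - 32) * 5.0 / 9.0
    gamma = math.log(rh_pct / 100.0) + A * t_c / (B + t_c)
    dp_c = B * gamma / (A - gamma)
    return dp_c * 9.0 / 5.0 + 32.0

# Two hypothetical rooms with the same ~55 F dew point:
print(round(dew_point_f(90, 31)))   # warm and "dry"  -> 55 F dew point
print(round(dew_point_f(60, 84)))   # cool and "humid" -> 55 F dew point
```

Both rooms hold the same amount of water per pound of air, which is why limits written in terms of dew point rather than relative humidity are the ones that actually constrain a fresh-air-cooled facility.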

The one area where it seems that all of the large players agree is with regard to temperature. It is virtually universal that ASHRAE 9.9 recommended guidelines are acceptable and, for many of these users, ASHRAE 9.9 allowable temperatures are OK.

The challenge for the industry is still finding a way to filter this information and confidence down to smaller operators and owners. I have heard it described as an education issue but is that truly the case? It is hard to find a computer or data center related design publication these days that does not promote higher temperatures as a feasible solution for cutting operating costs. Are we just too busy to read these articles or do we not believe the wealth of research and experience that backs up the statements?

At a recent conference on data center design I sat at a lunch table with a group of design engineers and a manager of 13 data centers. When asked how he learned about managing those centers, he responded that he was self-taught, by attending conferences and talking to "experienced" data center managers. So the work by ASHRAE and others was not a major factor in his decisions on appropriate operating temperatures. What he was learning was what those other managers had been doing over the last decade... going back to the "old days" when electric costs were low, low data center temperatures were the norm, and research had not yet shown them to be unnecessary.

So, if education is the issue then how do we go about it? What mechanism will get the message through the daily clutter of information and time demands? I don't have the answer...if I did I would implement it immediately....but it seems to be a key to moving the industry forward.

DALLAS, April 28, 2014–
The digital revolution is sapping the power grid, but a new approach to data
center construction may help reverse the trend of ever-increasing energy
consumption for powering and cooling these facilities. To help data center
operators better understand their options, Mestex, the industry leader in evaporative
cooling systems, is providing a free tool to demonstrate how infrastructure can
be better deployed to manage competing demands for more capacity and greater
energy efficiency.

“Data centers are the enablers of this digital
revolution,” said Mike Kaler, president of Mestex. “The increase in global
digital demand and cloud computing is exponential. As demand rises, data
centers that house digital information consume more electricity, half of it being
used to cool the facility. We wanted to help people see how energy is being
consumed and ways for managing infrastructure and costs.”

The
company believes intelligent technology combined with a flexible, scalable and
energy-saving approach is the best way to “build as you grow.” Adding plug-and-play
cooling units – such as Mestex’s own Aztec
Evaporative Cooling Units – as capacity increases is the most economical strategy
for data centers to manage expansion or new construction while reducing total
cost of ownership. Aztec systems are proven to lower power usage by 70% when
compared to traditional air conditioning; the system’s digital controls, when
integrated with other building automation systems, can extend that savings even
further.

To
help data center operators get a realistic picture of how their own expansion
might play out, the company recently launched the Mestex Open
Access Project to provide information technologists, facility managers and
financial executives the ability to evaluate energy-saving concepts in a real-world
environment.

“We’ve
opened access to our equipment, controls and data, because we want to encourage
energy savings and demonstrate to data center decision makers that there are
smart, effective ways to increase efficiency and optimize operations,” Kaler
said.

The
web-based interface offers visibility into the physical plant and air
conditioning system of an operating data center being tested as a part of a
project spearheaded by the National Science Foundation. The “open access” gives
anyone with Internet access an unembellished look at how a data center is operating,
in real time, 24/7.

The
Open Access Project harnesses the power of Mestex’s direct digital control
(DDC) system, which comes standard on all of its HVAC products and can be
easily integrated with other HVAC vendors’ products and building automation systems
to create an intelligent network that controls cooling for optimal efficiency,
performance and longevity, as well as provides web-based system monitoring and
management.

Note:

Mestex
President Mike Kaler will be hosting a presentation on mission-critical cooling
systems on Wednesday, April 30, at 10:30 a.m. at AFCOM Data Center World at the
Mirage Casino-Hotel, Las Vegas, Nevada. The company is exhibiting (booth #1227)
at the conference April 28 – May 2.

Mestex (www.Mestex.com),
a division of Mestek, Inc., is a group of HVAC manufacturers with a focus on
air handling and a passion for innovation. Mestex is the only HVAC manufacturer
offering industry-standard direct digital controls on virtually all of its
products, which include Applied Air, Alton, Aztec, Koldwave, Temprite and LJ
Wing HVAC systems.

I recently attended an interesting conference on the impact of E-commerce on the world of logistics and warehousing. Of course, virtually everyone knows about Amazon and their E-commerce model. "The Amazon Effect" is used to describe E-commerce in general and the economic impact on communities of having an Amazon facility in their area.

But I think that the part of the story that is not discussed enough is the changing nature of the facilities and the people in the facilities.

Years ago a warehouse was a warehouse. These big, uninsulated, boxes were filled with metal racks stretching to the ceiling and covering the floor from wall to wall. Those racks were at least partially filled with finished goods that would eventually be located manually and pulled from the racks with a human-driven forklift. From there the finished product would be taken to a truck at the loading dock and sent on its way. I actually spent a summer during my school years, eons ago, locating products and driving the forklift to the truck. I had a clipboard (that was like a tablet computer but held actual paper and used something called a "pen" to mark things off on the paper) and the process was tedious. The building itself was hot in the summer and cold in the winter but we just dressed for it.

An E-commerce facility today is only similar in that the box is still big and there are still metal racks but that is about it. In the first place, those racks may not be used to store finished goods but components that can be used to create a finished good. Maybe it is a shirt and tie that are packaged into a set. Maybe it is a cell phone, battery, charging cable, etc packaged into a retail package.

This approach is designed to give the end customer flexibility. Order the color, size, accessories that you want to make the purchase unique to you and the E-commerce company "fulfills" that order to your specification. So, now, you have "fulfillment" centers instead of warehouses and they are occupied by dozens of workers using computers to configure your package to your needs. In many cases the components that these workers put together are not retrieved by a human but by a robotic retrieval system. The robots, and the human workers, all receive their instructions from on-site servers processing thousands of orders a day.

So, the old warehouses of my school days are now air conditioned, filtered, well-lit, high-tech "factories" with their own small data center. The challenge for the HVAC systems is how to handle three different requirements in those buildings.

The area used by workers to fulfill the orders needs comfortable temperature conditions but only in the lowest 7 or 8 feet of the building height. The rack area where the robots run around might need temperature control for the entire 35 or 40 feet of the building height depending upon the product storage requirements. And the on-site data center needs filtered fresh air, or evaporative cooling, to keep the servers running at an affordable operating cost.

And to compound the problem the E-commerce company is probably growing so fast that the system configuration today will be obsolete in 2 or 3 years.

Satisfying all of those requirements, and providing future flexibility, requires the kind of analysis that a tool like CFD can provide. Being able to create the building and experiment with equipment locations and sizes before the space is built, or reconfigured, has tremendous value. Mistakes can be avoided and performance can be optimized to the requirements of the area being served. Mestex has invested heavily over the last 15 years in CFD software, computers, and training so that we can perform that kind of analysis. We use this tool almost daily to help designers and owners make the best equipment selection for their project.

If you are involved in the E-commerce world and need to know the best type and location of the HVAC equipment for your project please feel free to contact us at www.mestex.com.

Students from the University of Texas at Arlington will be presenting an update on the progress of a research consortium, partially funded by the National Science Foundation, that is focused on improving the efficiency of data center cooling. This presentation will be made during the SEMI-THERM conference in San Jose, California from March 9-13.

The exhibit presents updates on this project since the last industrial advisory board (IAB) meeting at Villanova University in September 2013. The updates include completion of construction of an Aztec ASC-15 cooling unit, attachment of the cooling unit to an IT pod, construction of the internal details of the IT pod, construction of a duct for testing various cooling pads, and creation of a computational fluid dynamics (CFD) model for the IT pod and the ASC-15 unit.

The cooling unit, the ASC-15, which is capable of operating in pure air-side economization, in direct evaporative cooling, in indirect evaporative cooling, and/or in hybrid modes, contains two blowers that can deliver up to 7000 CFM to the IT pod. Various parameters of the cooling unit, such as blower rotational speed, inlet air temperature, supply air temperature, outside air humidity, etc., are available through an online portal. The ASC-15 is connected to the IT pod at the Mestex facility, which provides power and water to the modular research data center. Inside the IT pod, four cabinets, each containing thirty HP SE1102 servers, are placed in a hot/cold aisle configuration.

One of the HP SE1102 servers was tested in the UT-Arlington lab to determine its maximum power consumption. The maximum measured power consumption is used to calculate the total dissipated heat per rack in the CFD model of the modular research data center. This CFD model will continue to be updated as changes are made to the IT pod or the cooling unit. For example, updates to the cooling pad model will be applied based on results from the various wet cooling pad tests that will be performed at UT-Arlington.