Energy consumption levels in data centers continue to be a major concern for both facility operators and regulators. According to Business Solutions, in February 2017 the U.S. House of Representatives passed a bipartisan bill requiring a handful of federal agencies, including the Office of Management and Budget and the Department of Energy, to coordinate how they purchase and implement energy-efficient equipment.

Why did Congress act so quickly on this issue? For starters, data centers already account for one-tenth of the federal government's total energy usage. Accordingly, enactment and adequate enforcement of new energy efficiency policies could save the public sector billions of dollars a year over the rest of this decade. It would also put the thousands of federal data centers on better footing than many private facilities, which still rely on relatively inefficient and less eco-friendly energy mixes.

"Energy efficiency policies could save the public sector billions of dollars a year."

Reducing energy consumption further: What are the most feasible options?

Upgrading essential data center equipment (servers, storage, power supplies and the like) to newer and more efficient models is a common approach to saving electricity. But what other options are out there?

Environmental monitoring solutions are a good place to start. Not only do these tools enable data center operators to keep tabs on temperature, humidity, voltage and other key metrics, but they also lessen the likelihood of an outage. Downtime is extremely expensive: The Ponemon Institute has estimated that the average cost of an outage rose from $505,502 in 2010 to over $740,000 in 2016.

Such incidents lead to wasted power and fewer available funds for necessary technical improvements in the data center. With data center temperature and voltage monitors in place, it is possible to spot issues right away:

Temperature sensors detect whether an appliance is overheating. This information helps in making decisions about whether and where to shift workloads, and how to allocate cooling resources.

Voltage detectors keep tabs on the electricity flowing to your equipment. Power fluctuations can degrade performance and waste energy, so it is essential to stay on top of power quality and stability.

Video surveillance systems help protect facilities from unwanted intruders who may interfere with the normal functioning of infrastructures, including power supplies and cooling equipment.

The market for data center power management has expanded over the last few years as facility operators have sought to increase their energy efficiency, reduce total cost of ownership and scale their activities to keep up with rising demand for IT and cloud services. According to a report from RnR Market Research, this sector could expand at a 9 percent compound annual growth rate between 2017 and 2021, eventually reaching $21.73 billion in value.

Understanding how data center power is utilized by different facilities

Why are data center providers adopting power management technologies? One big reason is the desire to more closely monitor how much electricity they are consuming. The biggest power consumers in the typical data center are IT equipment, uninterruptible power supplies and chillers. How much electricity goes toward cooling alone depends heavily on the age and design of the data center.

More specifically, in cutting-edge facilities, cooling uses much less electricity than standard IT equipment, since energy-efficient mechanisms such as free-air cooling and containment aisles play a central role. However, in older buildings, there is greater reliance on air conditioning alone, which drives up the share of power consumption that goes toward cooling.

Data center cooling is becoming more efficient.

Minimizing electricity consumption wherever possible is vital, since data centers are projected to require a lot more power in the years ahead. The Natural Resources Defense Council estimated that by 2020, data centers will consume 140 billion kilowatt-hours of electricity per year, resulting in $13 billion worth of power bills.

"In cutting-edge facilities, cooling uses much less electricity than standard IT equipment."

The changing face of the data center cooling market

The good news is that less efficient facilities are gradually being replaced by newer ones, meaning that the growth curve for overall data center power consumption may be kept in check even as the absolute amount of electricity consumed keeps increasing. Research and Markets has predicted that the data center cooling market will double in size between 2016 and 2021, to over $14 billion.

Two of the major drivers of this growth will be the shift to eco-friendly and energy-efficient cooling solutions, as well as the desire to cut operational costs. These alternatives to indiscriminate air conditioning can lower electricity bills while still providing effective heat removal. The upfront investment in new infrastructure may be relatively costly, but the long-term benefit of lowering operating expenses should ensure that the upgrade pays for itself.

For data center operators, ITWatchDogs provides a broad array of environmental monitoring solutions that can keep tabs on many different critical metrics. Temperature, humidity, airflow and even door openings can be carefully tracked, so that you have all the information you need to make smart decisions about your facility, save energy and expand your operations. Find out more today about your options on our main page.

Network outages are exorbitantly expensive, in part because reliable telecommunications networks are the backbone of modern business. Critical applications such as VoIP telephony, video conferencing and email all depend heavily on the corporate wide area network, meaning that any downtime is hugely disruptive and costly.

A 2016 survey of IT professionals, conducted by Talari Networks, revealed that 89 percent of respondents had experienced at least one outage that year, and 69 percent had gone through at least two such incidents. These events take a toll: The monetary losses range from $1 million to $60 million per year, according to an IHS study.

Understanding network downtime and how to prevent it

Given that downtime is so expensive, what can be done to stop it? There are several critical junctures that must be secured:

The WAN itself: Many companies have shifted to versatile software-defined WANs for better adaptability under rapidly changing business requirements and conditions.

The backend telecom infrastructure: Transceiver stations, cellular towers and control rooms are all subject to a variety of environmental and electrical threats that can contribute to outages.

Let's focus on the latter issue for now. In a research paper about the telecommunications industry in Bangladesh, the authors noted that its growth had been hampered by a "power crisis" under which grid electricity was not being supplied to new BTS (base transceiver station) cellular facilities. As a result, service providers had to set up and use their own voltage and temperature monitors to regulate their generators and battery-based backups.

"A variety of environmental and electrical threats can contribute to outages."

These components were part of larger systems for guarding against problems such as voltage fluctuations, load shedding and complete outages. The lesson from this particular episode is that telcos often must be proactive in how they protect their assets from a wide range of potential challenges. Environmental monitoring solutions provide the comprehensive protections that service providers need.

Temperature monitors and more: How the telecommunications sector benefits from these tools

ITWatchDogs offers many environmental monitoring solutions that make administrators' lives easier as they attempt to keep tabs on their equipment. It is straightforward to monitor temperature, humidity, analog inputs, BTS power supplies and room entries.

Data center energy consumption continues to rise, even as various efficiency measures – such as the use of free air cooling and renewable power sources – become more widely implemented. Speaking to Network World in January 2015, Salesforce director of sustainability Patrick Flynn noted that the rise of the Internet of Things, virtual/augmented reality, artificial intelligence and other innovations has driven up demand for electricity in data centers.

Surging IP traffic, cybersecurity concerns among the major challenges to IT uptime

In other words, data center operators cannot be complacent about the major energy efficiency gains their facilities have made over the past decade and more, since the biggest challenges may still lie ahead of them. Indeed, we should expect data centers to come under greater strain in the near future as they process more requests for applications such as video streaming and embedded sensor readings:

Cisco has predicted that annual data center IP traffic will more than triple between 2015 and 2020, from 4.7 zettabytes to 15.3 zettabytes. It expected a massive transition from traditional to cloud/virtualized data centers over this period.

The stakes will also rise for cybersecurity. Attacks against programmable logic controllers and other critical pieces of infrastructure can often lead to chillers going offline, resulting in overheating that damages equipment and causes outages.

What should data center and IT providers do in order to save power and minimize their exposure to downtime? Environmental monitoring solutions should be a cornerstone of any strategy for maintaining continuous data center operations.

Viruses and distributed denial-of-service attacks receive a lot of attention when it comes to data center outages, but they are just two of the many possible causes of such events. Green House Data noted that many cooling systems are not optimized for handling the high density of modern data centers, which pack numerous servers close together to save space.

"An effective data center temperature monitor can save a company thousands or even millions of dollars."

When cooling falls short, computer room air conditioning units can fail and prolonged outages can follow. Data center personnel have accordingly implemented solutions such as hot and cold containment aisles, as well as tools for identifying hot spots and being notified about environmental anomalies.

An effective data center temperature monitor can save a company thousands or even millions of dollars that might otherwise be lost to unplanned incidents. ITWatchDogs offers fully scalable environmental monitoring solutions for keeping tabs on your server rooms and network closets. When something goes wrong or a temperature reading deviates from the norm, a technician can promptly get an email, text or Simple Network Management Protocol (SNMP) notification about the issue.
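
As a rough illustration of how that kind of alerting flow works, here is a minimal sketch of a polling loop that emails an on-call address when a reading crosses a threshold. The threshold, addresses, mail server and sensor-reading function are hypothetical placeholders, not ITWatchDogs interfaces.

    import smtplib
    import time
    from email.message import EmailMessage

    TEMP_LIMIT_F = 85.0   # hypothetical alert threshold
    POLL_SECONDS = 60

    def read_temperature_f():
        """Placeholder: replace with a real sensor query (SNMP GET, vendor API, etc.)."""
        raise NotImplementedError

    def send_alert(reading_f):
        # Build and send a plain-text notification email
        msg = EmailMessage()
        msg["Subject"] = f"Server room temperature alert: {reading_f:.1f} F"
        msg["From"] = "monitor@example.com"
        msg["To"] = "oncall@example.com"
        msg.set_content(f"Reading of {reading_f:.1f} F exceeds the {TEMP_LIMIT_F} F limit.")
        with smtplib.SMTP("mail.example.com") as smtp:
            smtp.send_message(msg)

    while True:
        reading = read_temperature_f()
        if reading > TEMP_LIMIT_F:
            send_alert(reading)
        time.sleep(POLL_SECONDS)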

Tools such as the Watchdog 15-PoE offer humidity and temperature readings along with built-in Power over Ethernet that allows power and data to be carried to the device over the same cable. Ultimately, they can alert you early and often to any possible issues in your data center and help you protect your investments in electricity and infrastructure. Find out more about this solution and others on our data center/IT page.

AliCloud, the cloud computing division of China's Alibaba Group, has built a data center with a unique twist: It's cooled by water from a nearby lake. The approach deserves some notice because using water to cool a data center isn't yet considered an industry standard. Most data centers rely on mechanical air conditioning systems, which are expensive to run and not very energy efficient, as they consume quite a bit of electricity.

The data center, which is located near Qiandao Lake in Zhejiang Province, is extraordinarily efficient with this cooling method. The lake water keeps data center temperature down while also cutting the building's energy costs by nearly 80 percent compared to mechanical cooling methods.

Not the first, but still something to watch
Although this was quite an impressive feat, AliCloud was not the first to think of cooling a data center with water. As with many technological advances, Google was among the first. The search giant acquired a paper mill in Hamina, Finland, back in 2009 with plans to turn it into a data center. Using the mill's pre-existing tunnel system, Google engineers pumped in seawater and created one of the first data centers to be cooled with water.

Google may have been one of the first to cool server equipment with water, but AliCloud has it beat when it comes to what happens to the water once the data center has been cooled. Returning hot water to a cool lake or sea is environmentally irresponsible, and with this in mind Google made sure to mix in non-heated seawater before returning it all to the ocean.

This certainly solves the environmental problem, but AliCloud found a way to make this process even more efficient. When the Chinese data center is finished with its hot water, it simply uses the excess heat to warm nearby buildings. By cutting cooling and heating costs while being environmentally friendly, AliCloud has found a way to make sure everyone wins.

And while keeping data center temperature down with cold water is certainly a huge leap forward for the industry, no system yet designed is guaranteed never to fail. Cooling structures can fail, and when they do, data center equipment can be severely damaged.

For this reason, even data centers with the most technologically advanced cooling systems need a temperature sensor in the server room. Temperature monitors, such as the many devices available through ITWatchDogs, have high temperature alarms built in. This means that anytime the server room reaches an undesirable temperature, key employees are notified via text so that precautionary steps can be taken.

There are few destructive forces as powerful and unpredictable as a raging fire. A National Fire Protection Association study found that in 2013 alone there were approximately 1.2 million fires in the United States, racking up $11.5 billion in property damage. With so much on the line, both in terms of human lives and financial assets, it's no surprise fire suppression is such an important topic in modern data center construction.

And while it's easy to say that fires in the data center need to be suppressed in order to maintain server uptime, facility managers still have a lot of questions to answer in this area. To that end, this list of tips has been put together to give a deeper understanding of the dangers of a lax fire suppression system:

Remind yourself of your objective: Many people might jump to the conclusion that a fire suppression system in the data center is only meant to put out fires. While this is obviously true, it doesn't tell the whole story. Yes, your first objective in creating a system like this is to put out fires.

However, the final goal is to put out these fires so your company can go back to doing business as quickly and efficiently as possible. This is extremely important to keep in mind, and it will steer you away from decisions that might lead to quicker fire suppression but would also decrease server uptime.

Figure out what kind of fire suppressant you need: This is where thinking about both disaster recovery and business continuity before installing a firefighting system begins to play an important role. Sure, you could install a water sprinkler system that would put the fire out, but dumping gallon after gallon of water on sensitive IT equipment is only going to lead to other performance issues in both the near and long term.

Because you need to worry about these sensitive systems, one of the best fire suppressant options is an Inert Gas Fire Suppression System (IGFSS). An IGFSS consists of argon, nitrogen or a blend of the two. These gases are non-reactive and can flood a server room, diluting the oxygen level to 13 to 15 percent. Combustion requires at least 16 percent oxygen, so these systems essentially starve a fire while leaving enough oxygen for personnel to evacuate safely.

Keep a portable fire extinguisher handy: This tip has less to do with the safety of the equipment and more to do with that of employees within the data center or server room. Being able to leave a building quickly keeps workers safer, but evacuation isn't possible if fire is blocking the exits. This is where a portable fire extinguisher comes into play. Having one of these devices near a server room's exit will give any employees trapped inside a better chance of getting out of the building safely.

Proper temperature monitoring is key: Now that you have systems in place that protect both the employees within your data center as well as server uptime, you need to make sure important personnel are always immediately notified in the event of a fire. This is where a server room temperature sensor comes into play, and it's important not to skimp in this area, either. A good sensor could save you from wasting time cleaning up a fire suppression system that went off due to faulty temperature data. Thankfully, ITWatchDogs has you covered. With a host of top-of-the-line temperature sensor equipment, ITWatchDogs can help to make sure your data center server room temperature is where it needs to be.

With all of the "going green" initiatives that have gained so much popularity recently, it is interesting that more people are not pushing for greater energy efficiency in America's data centers. U.S. data centers account for about 2 percent of all electricity used in America, consuming more than 100 billion kilowatt-hours of electricity.

Although these numbers seem high, it isn't as though data centers are actively trying to have high energy needs. That's just the nature of the beast. GreenBiz reported that just one rack of blade servers can generate as much heat in an hour of uptime as four Weber gas grills. This goes to show that servers require a lot of electricity to run, as well as a hefty amount of it to keep server room temperature at an optimal level.

So with all of this in mind, it is easy to see the ecological impact of keeping connected in the modern world. And while thinking of the environment and limited resources is certainly a good thing, there is more to being energy efficient than saving the world.

Energy lost is money lost
Data centers, on average, spend a huge amount of money each year on the energy they consume. While much of this is necessary, a good amount qualifies as wasted energy. The Department of Energy and others use wasted energy to calculate power usage effectiveness (PUE). PUE is calculated by dividing the total energy consumed by the data center by the amount of energy consumed just by its IT resources, which gives a sense of how much energy is wasted. The current average PUE is about 2.0 for most data centers.

Per Brashers, founder of the big data infrastructure consultancy group Yttibrium, stated that "a 1 megawatt facility with a PUE of 1.90 spends more than $1 million on waste energy, whereas a facility with a PUE of 1.07 spends $148,000 on waste energy." It's quite clear through this example that energy efficiency is a great way to be cost effective on top of being good for the environment.
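
To make that arithmetic concrete, here is a minimal sketch of the PUE calculation and the cost of the non-IT overhead it implies. The electricity price and IT load are illustrative assumptions, which is why the figures differ from the quote above; plug in your own rates and loads to see your facility's numbers.

    def pue(total_facility_kwh, it_equipment_kwh):
        """Power usage effectiveness: total energy divided by IT energy."""
        return total_facility_kwh / it_equipment_kwh

    def annual_overhead_cost(it_load_kw, pue_value, price_per_kwh=0.10):
        """Yearly cost of the non-IT ('wasted') energy at an assumed power price."""
        it_kwh = it_load_kw * 8760              # hours in a year
        overhead_kwh = it_kwh * (pue_value - 1.0)
        return overhead_kwh * price_per_kwh

    # Illustrative 1,000 kW IT load at an assumed $0.10 per kWh
    print(round(annual_overhead_cost(1000, 1.90)))  # roughly 788,400
    print(round(annual_overhead_cost(1000, 1.07)))  # roughly 61,320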

So how can data centers cut down on costs while improving energy efficiency?

Well, a good place to start is the cooling systems that keep servers from overheating. Based on the Weber grill example above, it's obvious that servers get extremely hot, and because these machines are so sensitive to environmental changes, temperature monitoring and proper cooling are a must for any data center.

That being said, many data centers are raising the temperature in the server room to cut costs. Data Center Knowledge stated that some facilities are pushing their server room temperature up to 80 degrees Fahrenheit. It used to be an industry standard to keep the server room as close to 72 degrees as possible, and while that is still the most commonly recommended temperature, certain data centers are pushing the limits to see decreases in cooling costs.

Data centers that attempt to raise their server room temperature to save costs run the risk of overheating their equipment. Although there is money to be saved in even a few degrees above the normal operating temperature, those savings are dwarfed by the cost of repairing or completely replacing an overheated server. It is therefore extremely important to invest in a state-of-the-art temperature sensor. The Watchdog 1000 has a built-in high temperature alarm that triggers a text message whenever there is an overheating situation that must be dealt with.

As with many modern electronics, servers and other pieces of data center equipment are very specific in terms of their environmental needs. This is especially true when talking about humidity.

As many IT professionals know, having an incorrect humidity level in the server room can seriously affect server uptime and company productivity. Although people might think any humidity in the server room is a bad thing, some water in the air is actually necessary for the proper functioning of IT equipment. While too much leads to condensation and eventual corrosion of the electrical components, too little humidity causes electrostatic buildup and can damage servers just as much as high humidity levels.

It is for this reason that the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) created a guide for optimally storing IT equipment, stating that equipment like servers should be kept at a relative humidity between 40 and 55 percent. That being said, quite a lot of people within the data center industry are questioning whether server room humidity should be based on relative or absolute humidity.

While most people understand what humidity is, few actually know the difference between relative and absolute humidity. This can lead to disastrous results, as their definitions are by no means interchangeable. Knowing the difference between these two terms should provide a deeper understanding of what goes into the complex nature of server surveillance.

Relative and absolute: The difference
Basically, the difference between these two terms boils down to the amount of water that is actually in the air versus the amount that could be. Absolute humidity describes the amount of water vapor currently present in a particular sample of air, typically measured in grams of water vapor per cubic meter of air.

Closely tied to absolute humidity is the dew point. If the dew point for a particular day is 40 degrees Fahrenheit, then water vapor in the air will begin to condense into dew once the temperature drops to 40 degrees.

On the other side of this is relative humidity, which actually uses absolute humidity in its calculation. Relative humidity is the ratio of the amount of water vapor actually in a sample of air to the amount needed to fully saturate that air at its current temperature. So, if a particular sample of air has an RH of 40 percent, it holds 40 percent of the water vapor it could hold at that temperature.

Why does it matter?
So with all of this in mind, the question arises as to why any of this matters in terms of server surveillance. Well, as it turns out, the IT storage industry seems to have been neglecting absolute humidity ranges for a long time now.

As stated above, ASHRAE has long published a relative humidity range for IT equipment, but it wasn't until 2008 that it decided to include absolute humidity ranges as well. The current accepted level for absolute humidity is a dew point of around 41.9 degrees Fahrenheit, as long as the facility is following proper data center temperature protocols.

The reasoning for this inclusion is that relative humidity can fluctuate drastically with the temperature. During a cold winter month, the outside temperature might be a low 15 degrees F, but if the dew point is a relatively high 10 degrees, the RH works out to 80.2 percent. That percentage sounds extremely high, yet the average person would not necessarily call this kind of a day "humid." Having a dew point as part of these server surveillance guidelines fills in the entire picture of what is happening to IT equipment.
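
For readers who want to check that figure, here is a minimal sketch of the relationship, using the common Magnus approximation for saturation vapor pressure. The coefficients shown are one widely used set, not values taken from any ITWatchDogs product.

    import math

    def relative_humidity(temp_c, dew_point_c):
        """Approximate RH (percent) from air temperature and dew point, both in Celsius."""
        a, b = 17.625, 243.04   # Magnus coefficients (one common choice)
        gamma_dew = (a * dew_point_c) / (b + dew_point_c)
        gamma_air = (a * temp_c) / (b + temp_c)
        return 100.0 * math.exp(gamma_dew - gamma_air)

    # The winter example above: 15 F air temperature with a 10 F dew point
    temp_c = (15 - 32) * 5 / 9
    dew_c = (10 - 32) * 5 / 9
    print(round(relative_humidity(temp_c, dew_c), 1))  # 80.2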

And while all of this seems extremely complicated, thankfully companies like ITWatchDogs have server surveillance equipment specially designed to monitor all of these data points. The Watchdog 100 has a built-in sensor that lets facility managers track both relative humidity and dew point, giving them better information and more server uptime.

Like many of life's delights, fine cheeses have an extremely specific creation process. Making different kinds of cheese requires different levels of mastery, but one thing all fine cheeses require is proper aging.

To understand why cheeses need this, it is important to note what is happening during aging. Basically, mold or bacteria in the cheese break down fats and proteins using enzymes, substances produced by living organisms to bring about specific biochemical reactions. Different kinds of mold and bacteria use different enzymes to break down the cheese, and those different enzymes give each cheese its specific smell and taste.

So, basically, bacteria or mold is doing all the hard work in terms of aging the cheese. That being said, there are very specific guidelines that need to be followed in order to correctly age cheese.

How to set up a 'cave'
Even though the enzyme-producing mold or bacterium is doing all the heavy lifting here, there is still a good amount of work to be done in order to set up the right environment for aging cheese. The cheese aging area, known in the business as a "cave," is pretty simple to set up.

Although a refrigerator could be used as a cave, refrigerators are generally too cold and tend to suck the moisture out of a cheese. A chilly cellar might be a better option, with a damp paper towel tucked into the lid of an airtight container. This keeps the air in the container damp, but make sure the paper towel never touches the cheese.

Another factor to consider is what kind of cheese is being aged, as different types of cheese require different temperature monitoring. Rindless cheeses like fresh mozzarella and feta need to be stored between 35 and 39 degrees Fahrenheit, natural rind cheeses like Parmesan and aged Provolone need to be kept between 40 and 45 degrees, and washed rind cheeses like Gruyere need to be kept between 40 and 50 degrees F. Making sure these cheeses stay at the right temperature is key, and a temperature sensor is an absolute must for anyone who wishes to age cheese correctly.
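
As a toy illustration, those ranges can be expressed as a simple lookup against a sensor reading. The category names and bounds below merely mirror the figures above; a real aging setup would substitute its own values.

    CHEESE_RANGES_F = {
        "rindless": (35, 39),      # fresh mozzarella, feta
        "natural rind": (40, 45),  # Parmesan, aged Provolone
        "washed rind": (40, 50),   # Gruyere
    }

    def cave_temp_ok(cheese_type, reading_f):
        """Return True if a temperature reading falls inside the range for this cheese."""
        low, high = CHEESE_RANGES_F[cheese_type]
        return low <= reading_f <= high

    print(cave_temp_ok("natural rind", 43))  # True
    print(cave_temp_ok("rindless", 43))      # False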

Freezer burn: What it is and how to prevent it

Being able to store food for long stretches of time is one of the many perks of living in such a technologically advanced age. It allows people the option to buy large quantities at discount prices, with the added benefit of always having food on hand no matter what.

That being said, improper food storage can lead to a nasty little effect called freezer burn. And while most people have probably run into freezer burn at one point or another, many simply do not know what it is or how to prevent it.

To be an efficient consumer of food, it's important to know what this little food waster actually is and what causes it. Freezer burn occurs when the moisture in a piece of food evaporates into the cold air of the freezer, leaving dry, lightly colored patches that are unappealing to the eye. Although the food is perfectly safe to eat, the freezer-burned pieces will very likely be dry and tasteless, and many people throw them out for that reason. The following list of proper food storage procedures has been compiled to help avoid that wasteful habit.

Don't blame the freezer: Many people who don't know what freezer burn is are quick to blame the temperature at which they keep their freezer, and some turn up the freezer's temperature after encountering a freezer-burned piece of food. Not only does the freezer's temperature setting have nothing to do with freezer burn, but raising it can also be extremely dangerous. The FDA advises people to always keep their freezer at 0 degrees Fahrenheit. Those who want extra security in their temperature monitoring should visit ITWatchDogs' products page.

Wrap the food correctly: When preparing food to be stored in the freezer, it's vital to wrap the product in an airtight container. This can be done with foil, plastic wrap or even a plastic zip-lock bag. The important thing to remember is that the food can't have any areas exposed to the air in the freezer; that exposure is the cause of freezer burn, and it's what needs to be avoided at all costs.

Remove excess air from the finished product: This tip is mostly for those who choose to use a zip-lock bag for their freezer storage needs. As with the tip above, the key to good freezer storage is eliminating air contact with the food. A zip-lock bag may be protecting the food from the rest of the air in the freezer, but if there's air left inside the bag, the entire effort was for naught.

Always be on the lookout for broken wrapping: A lot of jostling and moving around occurs in a freezer between a food's entrance into and exit from cold storage. In that time, there is a possibility that a food's wrapping could become damaged. If this happens, simply take the item out of the freezer, re-wrap the food and put it back. As long as the freezer stays at 0 degrees, the food should have a very long life in cold storage.

Data room cooling is a huge part of ensuring server uptime. Every data center professional will attest that an overheated piece of equipment is something to be avoided at all costs. As such, data centers spend quite a lot of time and money making sure that their equipment is at the optimal temperature.

That being said, there are countless factors that could bring down a data center's cooling system. A perfect example is the recent explosion in Los Angeles that switched off a CoreSite data center's cooling system for a brief period. Thankfully no one was severely hurt in this still-unexplained explosion, but the point is that a data room cooling system is vulnerable to many outside factors. CoreSite had a temperature guarantee that the explosion forced it to violate, which certainly puts the company in a negative light with its clients. As such, companies need efficient cooling plans in place to bring data center temperatures back down to manageable levels.

How to cool a data center the right way
Until relatively recently, data room cooling was often done with a method referred to as "chaos" air distribution, in which computer room air conditioning (CRAC) units pushed out huge gusts of cold air onto the equipment in the room. This had the dual effect of cooling the equipment while pushing hot exhaust air away from the servers and into return air ducts. While this works well enough at lowering data center temperature, it is also an extremely inefficient way to cool IT equipment. Chaotic air distribution, as its name implies, is very unpredictable: problems like recirculation - where insufficient cold air causes hot exhaust to re-enter a server - or air moving too quickly for the server's fans to draw it in happen quite frequently.

The answer to chaos air distribution is quite simple: make the distribution less chaotic. One way to do this is by enclosing server racks in a contained structure. This structure captures the hot exhaust air leaving the servers, allows it to be carried away by the CRAC unit and brings in cool air. By containing the airflow in this way, data centers can keep a much firmer grip on temperature control.

Of course, with a contained unit comes the ever-present possibility of a containment breach. Therefore, it is advisable that any data center intent on optimal server uptime keep a temperature sensor within these containment units. These sensors, such as those provided by ITWatchDogs, give data center professionals the most up-to-date temperature data. If the temperature of one of these containment units were to begin to slip, a sensor placed within the unit would be able to alert employees to a breach. Not only would this allow for more server uptime, but it would also ensure that no energy was wasted cooling a containment unit with a crack or hole in it.

Making sure research specimens are stored correctly is a key responsibility of any research facility. Improper handling of these materials can have disastrous results, chief among them the halting of further research. A perfect example of what can go wrong in a research environment is the freezer fiasco a Harvard-affiliated hospital ran into back in 2012. Due to a malfunctioning high temperature alarm, 54 brain samples that were meant to be kept at -80 degrees Celsius were discovered to have actually been at 7 degrees. Thankfully these samples could still be used for certain kinds of studies; however, it remained unclear whether they could be used for all neurological research.

Although this particular story has a relatively happy ending, the message here is clear: If an institution as prestigious as Harvard can make a mistake like this, anyone can. As such, proper humidity and temperature monitoring protocols must be followed for research specimens.

What kind of protocols are there?
To begin, it's important to note that there is a specific temperature range for room temperature. What many consider a colloquial phrase has a very exact definition. A paper put out by the World Health Organization and PATH stated that controlled room storage temperature should be in the range of 15 to 25 degrees C. Making sure a research room is within this range is extremely important, as any fluctuation outside of it might be disastrous for certain specimens. A room temperature sensor may be needed in order to keep a room at optimal temperature.

Another important factor in specimen care is the humidity of the room in which samples are being stored or studied. The American Museum of Natural History has specific guidelines when it comes to the humidity at which specimens are kept. It states that for the majority of subjects, the relative humidity of the room needs to stay as close to 50 percent as possible. Swings between 40 and 60 percent are acceptable, though not advisable. That being said, other objects may have more specific needs: the AMNH also stated that samples with metal components should be kept at RH levels as low as the facility can manage.

With all of this in mind, any facility with specimens that have specific environmental needs should absolutely invest in a high temperature alarm. Products like the Watchdog 100 can keep a close eye on both the humidity and temperature data of a room. And, if the worst is to happen and the specimen is exposed to a temperature or humidity outside its range, the Watchdog 100 will alert personnel via a built-in text messaging system.

How to set up a server room

It's a tale as old as time. A small startup founded in someone's basement or garage has finally made it big. They've hooked their first big client and now have the money to move into a real office space. This is quite literally at the heart of what the American Dream promises. Work hard, pay your dues, and business will come.

While this is perhaps the most exciting time in a new company's existence, it comes with quite a few hurdles, one of which is the proper setup of a server room. Going from a small in-house setup to an actual office server room can be quite a challenge; however, it doesn't have to be a hair-pulling affair. Below is a short list of requirements for a small-to-medium computer server room that will not only make the setup process easier, but will also ensure continued server uptime down the line.

1. Understand your space requirements: While many company heads may want to jump right into finding space in the new office for the server, the first thing anyone setting up a server room should know is what kind of space they will need. This is something every company must decide for itself, but a key point to keep in mind is intended growth. TechRepublic recommends that a company give serious consideration to future growth when planning server room needs. A company may only need a closet right now, but 10 years down the line that same company may need a space three times as big.

2. Find the right space: Once space requirements are known, a room within the new office can be selected as the server room. This is a pretty basic step, although TechRepublic also recommends that the server room be centrally located so that it is easily accessible to employees yet still out of the way.

3. Temperature monitoring is key: Servers are extremely sensitive pieces of equipment. As such, having a humidity and temperature monitor within the server room is absolutely necessary to ensure the safety of both the servers and the employees in the office. A standard small-to-medium server room should be kept at a steady 72 degrees Fahrenheit with a relative humidity (RH) of 45 percent. While every company has specific price and temperature monitoring requirements, the list of products sold by ITWatchDogs will fulfill every server surveillance need.

4. Server room safety precautions: Although no one wants to think about it, server room fires are a potential reality for any and every company. Therefore, every company should have a high temperature alarm as well as a fire suppression unit operating within the server room. The high temperature alarm is important not only because it alerts employees to possible server damage, but also because it makes them aware of danger within the workplace. The fire suppression system is also absolutely vital; however, installing any old water sprinkler will result in a huge amount of server damage and downtime. It is recommended that every server room be outfitted with a gas-based or clean agent fire suppression system, which will put out a fire without damaging the sensitive server equipment.

5. Keep food and drink out: This step may seem basic, but making sure that food and drink are kept out of the server room is of the utmost importance in terms of server uptime. People tend to think they are the exception to this rule and that never having spilled anything in the server room entitles them to special treatment. That is nowhere near the reality of the situation. Thankfully, the solution is simple: a sign posted outside the server room with a strict warning should be more than enough to keep snacks away from the sensitive equipment.

Staying connected has become the cornerstone of modern times. Although such a level of connection is relatively recent, it's hard to think of a world without it. And while many people might point to social networking as the main form of communication, it is the widespread use of the cellphone that has allowed people to stay connected to one another unlike at any other time in human history. The Communicator estimates that around 1.12 billion people currently have a mobile phone, meaning that a large number of people around the world are affected daily by telecommunications.

With that in mind, it's easy to see why telecom equipment is so vitally important to maintain properly. And in this constant battle to keep equipment functional, perhaps the most dangerous element working against telecommunications is heat. Temperature monitoring of telecom equipment is absolutely essential to the proper operation of these devices. As mentioned in an article put out by Hartford Steam Boiler (HSB), today's telecom equipment is really not much different from complex computer systems. And, just like any other computer system, telecom equipment is extremely sensitive to heat and other environmental variables.

Telecom equipment needs special care
The reason for this is simple: Modern telecom equipment puts off a lot of heat when working. HSB noted that equipment that consumes about 10 kilowatts of electrical energy will release that energy as heat, which in this example would equal about 34,000 BTU per hour. That's quite a bit of thermal energy to be compensated for.

And it certainly does need to be compensated for, along with a temperature monitor on the equipment, because heat is extremely bad for electronics. Many kinds of material go into modern computing equipment, and none of them can tolerate physical or chemical degradation. The sad fact about heat is that a temperature rise speeds up chemical reactions: HSB specifically said that for every 10 degrees Celsius of increased temperature, chemical degradation doubles.
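
For a rough sense of the numbers involved, here is a minimal sketch of both figures. The 3,412 BTU-per-hour-per-kilowatt conversion is standard; the doubling rule is the rough guideline HSB describes, not an exact model of any particular component.

    def kw_to_btu_per_hour(kilowatts):
        """Convert electrical power dissipated as heat into BTU per hour."""
        return kilowatts * 3412

    def relative_degradation_rate(temp_rise_c):
        """Rough multiplier on chemical degradation for a given temperature rise."""
        return 2 ** (temp_rise_c / 10)

    print(kw_to_btu_per_hour(10))          # 34120, roughly the 34,000 BTU/hr cited
    print(relative_degradation_rate(20))   # 4.0: a 20 C rise quadruples degradation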

For this reason it is imperative that telecom companies keep a detailed log of temperature data. Without knowing how much the temperature of components is changing, it is impossible to know what kind of degradation the equipment is going through. Thankfully, ITWatchDogs has a simple solution to this problem. Its vast assortment of handy temperature monitors can make temperature data logging a breeze, and its text-alert system allows even the busiest telecom professionals to receive up-to-date information about the conditions their equipment is facing.

Anyone who's had food poisoning knows the horrific effects of improper food handling. The nausea, vomiting and high fever are bad enough in their own right, but perhaps the most annoying aspect of food poisoning is how it puts a stop to any and all near-future plans. The scariest part about the whole ordeal is that there isn't really a set time between consumption of the tainted food and when symptoms may show up. The Mayo Clinic noted that symptoms can appear almost immediately, or that they may not manifest until days or weeks after consuming the contaminated food. The idea of being violently ill weeks after eating spoiled food is extremely alarming, to say the least. Thankfully, there are a few things every food service firm should be doing to avoid causing food poisoning.

1. Refrigerate or freeze foods as soon as possible: Mayo Clinic instructs consumers to move food to cold storage within two hours of purchasing or preparing it. This may seem like common sense to most, but it is listed first because it should literally be the first thing people do when thinking about reducing the risk of food poisoning. A temperature sensor in every area where food might be kept is also a great idea, because once the temperature of the room goes above 90 degrees Fahrenheit, perishable food needs to be moved to the fridge within an hour.

2. Temperature monitoring of the refrigerator and freezer is key: Making sure the refrigerator and freezer are in the right temperature range is an extremely important part of cold food storage. The FDA recommends that the refrigerator be kept at around 40 degrees F, while the freezer should be kept at 0 degrees. Anyone so much as thinking about food storage needs to be aware of the temperatures at which their food is being kept, because they need to...

3. Stay out of the Danger Zone: The Danger Zone is basically the reason food needs to be refrigerated. Home Food Safety states that the Danger Zone for food lies between 40 and 140 degrees F. This temperature range is considered dangerous because it is below the temperature necessary to destroy bacteria in the food yet above the temperature needed to keep it safely chilled. Home Food Safety also states that within this range, "a single bacterium can multiply to trillions in just twenty-four hours."

4. If there's even the slightest chance the food is tainted, throw it out: This might be the hardest step for some, especially with the recent push toward resource conservation. That being said, keeping food that might be spoiled just because there is a lot of it is downright reckless. According to the Mayo Clinic, perishable food that is left at or above room temperature may contain toxins that can't be destroyed by cooking. Businesses that wish to conserve food should simply buy with more forethought and plan how much they will need for the week.

With the sophistication of modern technology, it's not surprising that products such as vaccines and blood require a great amount of temperature monitoring. It's also not surprising, considering its size and level of high-quality medical care, that America uses a lot of these products on an annual basis. According to the National Center for Biotechnology Information, in the United States alone, 15 million bags of blood are used each year to treat patients.

And although this massive amount of bodily fluid is one of the most useful commodities in the healthcare industry, if it isn't stored properly it runs a serious risk of spoiling.

In fact, Blood Center of the Pacific stated that blood can only be stored between 1 and 6 degrees Celsius or 33.8 and 42.8 degrees Fahrenheit. Any variation from these temperatures can be extremely dangerous, and may result in the blood sample having to be thrown out. The center also said that "storage units must be equipped with a continuous monitoring system that records temperatures at least once every four hours and an alarm that sounds if temperature limits are reached."

Blood isn't the only thing that needs a temperature sensor
Of course, there are many other medical products that need a constant level of monitoring. Perhaps the best known of these, and also the most important, are vaccines. Vaccines need just as much care and attention as blood to keep from spoiling, and as such there are very strict ways to store them. The Centers for Disease Control and Prevention has noted that the ideal temperature of the average vaccine is 40 degrees F, with an acceptable range between 35 and 46 degrees.

The CDC also outlined that a vaccine temperature monitor must be checked at least twice a day, once in the morning and once in the evening. And while this is a fine strategy, it simply isn't the best a healthcare professional can do considering the technology the average person has on hand these days. What happens if the temperature varies greatly between these two checkups? Should medical storage professionals be required to personally check these temperatures every hour? Every minute?

How ITWatchDogs can help
Vaccine temperatures should obviously still be recorded; however, there is a much easier way to get up-to-date temperature data. Climate monitors like the Watchdog 1400 are the perfect solution to this problem. The Watchdog 1400 has a temperature range of -22 to 185 degrees F, which comfortably covers the storage ranges of both blood and vaccines. The built-in data logger gives a consistent view of what temperatures these products have been exposed to, letting storage professionals see temperature fluctuations over long periods of time.
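
As a minimal sketch of what such logging can look like, the snippet below appends timestamped readings to a file and flags excursions. The 35 to 46 degree window mirrors the CDC range cited above; the file name and the way readings arrive are illustrative placeholders, not Watchdog 1400 behavior.

    import csv
    from datetime import datetime

    LOW_F, HIGH_F = 35.0, 46.0   # acceptable vaccine storage range cited above

    def log_reading(reading_f, path="vaccine_fridge_log.csv"):
        """Append a timestamped reading and return whether it was in range."""
        in_range = LOW_F <= reading_f <= HIGH_F
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([datetime.now().isoformat(), reading_f, in_range])
        return in_range

    if not log_reading(47.2):
        print("Excursion recorded: notify storage staff")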

And while the Watchdog 1400 can text or email storage professionals if temperatures exceed certain parameters, perhaps the most exciting feature of this product is its ability to trigger an external relay-controlled auto-dialer. Once this auto-dialer is purchased and set up, the Watchdog 1400 can call up to nine phone numbers until someone answers. Within seconds of the temperature reaching a dangerous level, multiple medical professionals can be notified and can start working on a solution.

A courthouse in Columbia County, Oregon, took a blow to its server uptime last month when a call was made to the local fire department about an odd smell in the server room. According to the Portland Tribune, on July 26 Columbia River Fire and Rescue Division Chief Ron Youngberg was called to the courthouse to investigate a suspicious smell. When he arrived, he concluded that the smell must have been coming from a fire and pulled the fire alarm. Unbeknownst to Youngberg, pulling the alarm released a chemical fire retardant all over the courthouse's servers. The smell turned out to be a minor battery malfunction, and the cleanup left the county with a $317,000 bill.

While CRF&R Chief Jay Tappan stated that Youngberg's action "was the right thing to do," the fact remains that the Columbia County Courthouse is now out a large sum of money due to a completely avoidable event. A device such as the Watchdog 100, produced by ITWatchDogs, would have allowed a courthouse employee to see that while the temperature in the server room may have risen due to the overheating battery, it was nowhere near the level of a full-blown fire. And although many might assume a temperature sensor is a device suited solely to the IT industry, the reality is that computer equipment such as servers is such a vital part of any modern workplace that its physical security is of the utmost importance.

The real cost of server downtime
While Columbia County's situation is certainly a costly one, perhaps the greatest fear of any company's IT department is losing server uptime. When a company's server is down, any and all online activity grinds to a halt, and in this Internet-fueled world, being offline means lost revenue. A perfect example of this is Amazon's 2013 fiasco, where on April 13 the company's website was down for a grand total of 30 minutes. The Forbes article covering the incident, written by Kelly Clay, calculates just how much the company lost in this time.

So how much damage can 30 minutes of going dark really do? Well, to a company based on the Web such as Amazon, 30 measly minutes of being offline translates to about $2 million in lost revenue. That's a lot of money lost in less time than it takes to bake a cake.

While it's impossible to say whether some sort of server surveillance device could have saved Amazon this $2 million headache, the point is that losing a server can have devastating effects on a company's revenue. The IT department of any company that runs its own server should absolutely consider beefing up the physical security of that server.

Are data center fires becoming increasingly common?
Of the myriad threats facing data center operations today, perhaps none is as awe-inspiring and damaging as a fire. A blaze can spread quickly and totally annihilate operations in very little time. As recent headlines show, data center fires are still a major concern, which is why all facilities should ensure they are protected and adopt next-gen data center temperature monitoring solutions.

Data center downtime, while generally quite rare across the industry, is usually caused by human error, ComputerWeekly reported. According to Networks Asia contributor Stefano Valdrighi, about 6 percent of all data center infrastructure issues that lead to downtime are caused by fires. As some recent events show, fires are among the worst types of issues a data center may ever deal with:

A data center operated by the Maryland State Police caught fire last December, shutting down many critical operations for these law enforcement officials for multiple hours.

While many of these events did not last very long, it does not take much time for a downtime incident to really add up. According to statistics cited by Gartner analyst Andrew Lerner last July, a minute of network downtime costs businesses about $5,600 on average. This means that an hour of downtime often sets a company back around $300,000. This is just an average too, as costs for downtime can be as high as $540,000 per hour or more in some instances.

"It is therefore necessary to provide a reliable infrastructure for data center operations in order to minimize any chance of disruption," Valdrighi wrote.

How to effectively prevent data center fires from ever happening
In regard to data center downtime, it is always better to be vigilant about preventing the issue before it ever crops up. In the case of fires, one way to do this is by installing a data center temperature monitoring solution like the Watchdog 15 from ITWatchDogs. That way, should the temperature of a server room ever rise above normal levels, facility managers can be immediately notified. Armed with this information, they can take prompt action to remedy the situation before it becomes a catastrophe.

"Simply put, the central task of a fire safety system is to keep the business functioning, even in the event of a fire," Valdrighi noted.

Data center fires may not be the most common cause of data center downtime, but they still happen too frequently and cause major damage when they do crop up. A great way to protect facilities from blazes is by installing temperature monitoring equipment in server rooms. To learn more about what humidity and temperature monitoring solutions are right for you and your facility, be sure to contact ITWatchDogs today.

How to make smaller data centers more energy efficient
Data center cooling efficiency depends on a lot of factors. Server room size, workload requirements and heat generated by the computing equipment itself are all variables that have to be taken into account when working out a cooling strategy tailored for the specific needs of a data center.

Larger computing facilities run by top-level companies are more likely to employ cooling system options that provide the most energy-efficient solutions. For companies that run smaller data centers, however, managers may struggle to promote energy efficiency. According to Data Center Knowledge, some facility managers may believe there is nothing they can do to improve energy use. However, options exist for smaller data centers to achieve energy savings that translate to reduced operating costs in the long run.

"Many of the operators of smaller data centers have not optimized airflow in years," said Daniel Kennedy, the director sales engineering for cooling systems provider Tate. "But it turns out they can take advantage of many of the same advances that have been made by larger data centers in recent years."

Different types of cooling systems contribute to how much power is used. Here are some tips for maintaining energy-efficient cooling practices in a smaller data center:

Optimize airflow
According to Data Center Knowledge contributor Michiel de Jong, the design of the data center can contribute to air circulation issues like local pressure differences and hotspots. Airflow management is achieved through the use of fans, and pressure controls can help mitigate these air circulation problems.

What more can smaller facilities do to optimize airflow in their server rooms? Maintaining proper rack hygiene is integral to having an energy efficient data center, according to Electronics Protection Magazine contributor Ed Eacueo.

"Regardless of rack dimensions, it is important that data center professionals employ solutions that provide an impenetrable barrier around the front plane of the rack in a front-to-back dominated airflow environment," he wrote. "The tighter the seal provided around the front of the rack, exclusive of a thorough blanking panel strategy, the closer one can come to achieving increased energy efficiency and reduced cooling loads."

This means that even smaller facilities can achieve proper rack hygiene and thus optimize airflow by creating this kind of seal over the front of the rack.

New innovations
Data center construction is a booming market, and companies are always looking for ways to make their facilities more efficient and powerful. For instance, tech giant IBM announced at the end of July that it was working on a special kind of cooling system that would use the heat generated by servers to create cool air flow within the data center. This would cut down on electricity costs, because the cooling system would be utilizing a resource already there, promoting energy efficiency and better cooling management in the long run.

Monitor temperature and humidity for the best result
Until new technologies and cooling systems are used on a widespread basis, however, data center managers need a way to improve efficiency in the short term. The first step in implementing any energy-saving solution is knowing what will most benefit the specific facility in question. Environmental monitoring allows data center managers to have an eye on their equipment at all times.

Environmental monitors like the Watchdog 1250 give managers the data they need to make crucial decisions about where to invest in the data center, and their built-in alarm systems can draw attention to changes in temperature or humidity. Kennedy mentioned that it's crucial for facility operators to understand the requirements of the workloads that servers are currently running; only then can organizations achieve the most efficient use of energy.
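As a rough illustration of the threshold-based alerting described above, here is a minimal Python sketch. The metric names, alarm bands and output are hypothetical placeholders chosen for illustration, not the Watchdog 1250's actual interface or factory settings.

```python
# Minimal sketch of threshold-based alerting, in the spirit of an environmental
# monitor with built-in alarms. All names and bands here are illustrative.

ALARM_BANDS = {
    "temperature_f": (64.0, 81.0),  # hypothetical acceptable band, degrees F
    "humidity_pct": (40.0, 60.0),   # hypothetical acceptable band, percent RH
}

def check_reading(metric, value):
    """Return 'OK' if the value sits inside its configured band, else 'ALARM'."""
    low, high = ALARM_BANDS[metric]
    return "OK" if low <= value <= high else "ALARM"

def evaluate(readings):
    """Collect alarm messages for every metric outside its band."""
    return [
        f"{metric} out of range: {value}"
        for metric, value in readings.items()
        if check_reading(metric, value) == "ALARM"
    ]

# Example: a hot, dry aisle trips both alarms.
print(evaluate({"temperature_f": 92.3, "humidity_pct": 31.0}))
```

In a real deployment, the readings would come from the monitoring hardware and an alarm would trigger a notification to staff rather than a print statement.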

How does data center uptime impact business revenue?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/how-does-data-center-uptime-impact-business-revenue-40076091
Thu, 30 Jul 2015 11:46:44 GMT - ITWatchDogs

Arguably the most important aspect of data center operation is, well, operation. Maintaining optimum server room uptime is of the utmost importance. After all, the servers are the ones doing the computing and data storage that clients pay for; if they weren't on, the data center would just be a building of empty boxes. To this end, uptime has a critical effect on business revenue for infrastructure providers and companies alike.

A 2013 study conducted by the Ponemon Institute found that one minute of data center downtime can cost companies up to $7,900 - that's about $474,000 per hour, and a 41 percent increase from the $5,600 per minute recorded in 2010. Those numbers from 2013 portend even higher costs for today's data center operators.
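To see how quickly the per-minute figure compounds, here is a back-of-the-envelope Python sketch using only the Ponemon numbers cited above; the 14-hour outage at the end is simply an illustrative scenario.

```python
# Downtime cost arithmetic using the Ponemon figures cited above.
COST_PER_MINUTE_2013 = 7_900  # USD per minute of downtime (2013)
COST_PER_MINUTE_2010 = 5_600  # USD per minute of downtime (2010)

hourly_cost = COST_PER_MINUTE_2013 * 60
increase = (COST_PER_MINUTE_2013 - COST_PER_MINUTE_2010) / COST_PER_MINUTE_2010

print(f"Hourly cost: ${hourly_cost:,}")        # $474,000
print(f"Increase since 2010: {increase:.0%}")  # 41%

def outage_cost(minutes, cost_per_minute=COST_PER_MINUTE_2013):
    """Estimated cost of an outage of the given length, in USD."""
    return minutes * cost_per_minute

print(f"14-hour outage: ${outage_cost(14 * 60):,.0f}")  # roughly $6.6 million
```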

"Given the fact that today's data centers support more critical, interdependent devices and IT systems than ever before, most would expect a rise in the cost of an unplanned data center outage compared to 2010," Larry Ponemon, the CEO of the Ponemon Institute, said when the report was released. "However, the 41 percent increase was higher than expected."

This report was released two years ago - how much has the cost of data center downtime risen since then? The 41 percent jump from 2010 to 2013 was startling enough, and with more companies moving data and applications to off-premises environments - whether cloud servers, virtual environments or traditional hosted servers - the business value of data center uptime is greater in 2015 than it has ever been.

Demand is going up
Demand for reliable computing facilities keeps increasing at an almost exponential rate, meaning more data centers are being built and more servers being rented out. In fact, according to a study from IDC, there will be 8.6 million data centers in the world in 2017, at which point the rate of new facility construction will start to decline due to more companies investing in hosted managed services for existing IT assets.

That doesn't mean demand for computing infrastructure is going to decrease, however. According to a recent study from Allied Market Research, the global colocation market is expected to be worth $51.8 billion by 2020, growing at a compound annual rate of 12.4 percent from 2015 to 2020. This means that uptime will become even more important for those companies investing in data center real estate, and colocation providers will want to make sure their customers experience no outages.
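Those projections can be sanity-checked with a few lines of Python. The implied 2015 market size below is derived for illustration from the cited $51.8 billion figure and 12.4 percent compound annual growth rate; it is not a number taken from the Allied Market Research report.

```python
# Working backward from the cited 2020 value and CAGR to an implied 2015 base.
cagr = 0.124        # compound annual growth rate from the report
value_2020 = 51.8   # projected market size in billions of USD
years = 5           # 2015 through 2020

implied_2015 = value_2020 / (1 + cagr) ** years
print(f"Implied 2015 market size: ${implied_2015:.1f}B")     # about $28.9B

# Growing the implied base forward reproduces the 2020 projection.
print(f"Check: ${implied_2015 * (1 + cagr) ** years:.1f}B")  # $51.8B
```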

How are companies affected by outages?
Businesses rely on the computing power they rent at a data center. Whether it's used as storage or to run business-critical applications, server equipment is used by these companies to make sure their operations continue to run smoothly and efficiently. The burden of maintaining operations then falls to those data center operators, because it's up to them to ensure the servers remain online. Sometimes, however, data centers experience downtime caused by a number of factors, and revenue can be impacted negatively.

Prevent downtime with monitoring
Investing in temperature, humidity and power monitoring equipment can make a difference when it comes to maintaining uptime. Keeping the facility running at an optimum level builds a stronger front against potential outages, and monitoring equipment provides an overall view of the data center so that decisions about cooling or power equipment can be made with the long term in mind. Equipment monitors like those offered by ITWatchDogs can help data center operators maintain a bird's-eye view of their facilities.

Energy conservation efforts extend to new building construction in California
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/energy-conservation-efforts-extend-to-new-building-construction-in-california-40075668
Wed, 29 Jul 2015 15:55:44 GMT - ITWatchDogs

The drought in California continues to affect businesses, consumers and technology. People and businesses from every echelon of society are affected by the severe water shortage gripping the western half of the U.S.

Examples abound of businesses and consumers cutting back on water use. According to NPR, farmers are planting crops that require less water - substituting persimmons for water-hungry citrus, for instance. Businesses and citizens alike are observing the water conservation measures put in place by Governor Jerry Brown, which CNN reported include a mandatory 25 percent reduction in water use. According to the State of California's website, the California Water Commission adopted a new ordinance on July 15, 2015 that will restrict water use in residential and commercial landscaping projects. After December 1, yards and commercial landscaping projects will be required to use 30 percent less water.

What does this serious lack of water in California mean for data center operators? Regulations aimed at improving energy efficiency come into even greater focus as water grows more scarce. Data center managers need to be aware of every new regulation that could potentially affect their facilities, including recent changes to Title 24 standards regarding how new buildings should be constructed.

Title 24 restrictions
Recently, changes to California's Title 24 rules for how companies build facilities have called into question whether an enterprise can expand its data center presence. The main concern the new restrictions strive to address is the energy efficiency of newly constructed buildings. According to Data Center Journal contributor Ty Colwell, the new requirements apply only if a data center space exceeds certain cooling-capacity thresholds. For instance, new computer room loads that require over five tons of cooling, which translates to a 17.5 kilowatt IT load, would fall under these restrictions; expansions fall under them once they add more than 50 tons of cooling, or a 175 kilowatt IT load.
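The 5-ton/17.5 kW pairing cited above implies a conversion factor of 3.5 kW of IT load per ton of cooling. The Python sketch below applies that factor and checks a load against the cited thresholds; the function names and the simplified pass/fail logic are assumptions for illustration, not the actual Title 24 compliance test.

```python
# Rough conversion implied by the article: 5 tons of cooling ~= 17.5 kW IT load.
KW_PER_TON = 3.5  # implied factor; the engineering constant is closer to 3.517

NEW_BUILD_THRESHOLD_TONS = 5    # new computer room loads
EXPANSION_THRESHOLD_TONS = 50   # expansions of existing rooms

def cooling_tons_to_kw(tons):
    """Convert tons of cooling to an approximate IT load in kilowatts."""
    return tons * KW_PER_TON

def crosses_title24_threshold(load_tons, is_expansion=False):
    """Hypothetical check of whether a load exceeds the cited thresholds."""
    threshold = EXPANSION_THRESHOLD_TONS if is_expansion else NEW_BUILD_THRESHOLD_TONS
    return load_tons > threshold

print(cooling_tons_to_kw(5))                             # 17.5
print(crosses_title24_threshold(6))                      # True: new room over 5 tons
print(crosses_title24_threshold(30, is_expansion=True))  # False: expansion under 50 tons
```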

These new restrictions are in place in order to support energy efficiency efforts across the country, but especially in California as businesses seek ways to keep their heads above nonexistent water. New data centers will have to be built with these energy-efficient specifications in mind, and facility operators will need to make sure to install cooling systems in their data rooms that promote the best use of energy - whether they be forced-air distribution units or hydrocooling systems.

How do managers prevent a cooling disaster?
When cooling systems malfunction, data rooms can get hot. All that server equipment generates a lot of heat - and heat isn't healthy for a computing environment. Power outages and overheating can take place if temperature isn't properly maintained. Data Center Knowledge contributor Bill Kleyman reported that backup air and cooling units can be a good way for enterprises to maintain a cool data room in case some of their cooling equipment goes offline. The only caveat is that companies will have to make sure these backup units also adhere to the Title 24 restrictions.

Monitor for the best results
As more data centers are constructed with the Title 24 specifications in mind, facility managers will need to make sure they are maintaining energy efficiency standards. One of the best things data center managers can do to prevent cooling- and heating-related disasters is to invest in a monitoring system that provides real-time data on the temperature and humidity in their facilities. Environmental monitoring systems from ITWatchDogs, such as the Watchdog 100-PoE, can provide critical data to facilities operators, helping them make decisions about investments in different cooling systems or more energy-efficient practices.

3 tips for maximizing power use in the data center
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/3-tips-for-maximizing-power-use-in-the-data-center-40073980
Fri, 24 Jul 2015 10:17:04 GMT - ITWatchDogs

One of the most important parts of the data center is power. When facilities are working effectively and efficiently, server room ecosystems stay healthy and are less likely to cause operators revenue loss from power outages or other similar data center disasters. According to a recent report from IHS Technology, global revenue from data center power distribution technologies is forecast to reach $505.9 million by the end of 2016. As more data centers are built, the demand for effective power distribution increases. Colocation providers realize the importance of investing in the best power distribution systems in order to get the most out of their facilities.

What does an effective power distribution strategy look like? Here are three tips:

Achieve better PUE with more effective technology
Power usage effectiveness is a metric that captures how well the data center is utilizing the energy coming into the facility, according to Data Center Knowledge contributor Jerry Gentry. It is the ratio of total power entering the facility to the power consumed by the IT equipment, and it always comes out greater than 1. The bigger the number, the less efficient the facility's power use. For instance, Facebook's Prineville, Oregon, data center is its first energy-efficient facility, with a PUE of 1.07. Facebook invested in power distribution units and air distribution systems to achieve this low number.
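Because PUE is just a ratio, it can be illustrated in a couple of lines of Python. The power figures below are invented round numbers chosen to reproduce the cited 1.07, not Facebook's actual measurements.

```python
# PUE: total facility power divided by the power consumed by IT equipment.
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

# An illustrative facility drawing 1,070 kW overall for 1,000 kW of IT load
# lands at the 1.07 figure cited for Prineville.
print(round(pue(1_070, 1_000), 2))   # 1.07

# The same IT load with far more overhead yields a worse (higher) PUE.
print(round(pue(1_800, 1_000), 2))   # 1.8
```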

Making an investment in more effective servers can make a difference in the long run, and so can decommissioning inactive equipment. So-called "comatose" servers, ones that aren't in use, drain energy even when not doing any computing. Think of all the wasted energy that generates - and all the money going down the drain. This can severely impact a facility's bottom line.

Even without the added worry of comatose servers, computing facilities are power hogs. According to the Natural Resources Defense Council, U.S. data centers are on track to use 140 billion kilowatt-hours per year by 2020, an incredible number considering the amount of natural resources it takes to produce that much energy. It follows that data center operators want to manage their electricity use so as to reduce the effect on the environment and their pocketbooks - the less energy used, the less money spent.

Balance the data center environment
Power isn't the only important thing in the data center. Temperature is also a crucial metric to monitor, as is humidity. In the data center ecosystem, everything is connected. Hard-working cooling systems use power, and if they aren't efficient enough, more energy will be wasted. Having an environment wherein all systems work in tandem with one another can improve PUE.

Data center infrastructure management tools can help keep this balance. Monitoring nodes placed throughout the data center can indicate where cooling systems are being the most effective and where the most energy is being used. DCIM helps IT managers analyze the data generated from these devices in order to make decisions in real time about the status of their server rooms.

Monitor your resources for the best results
How can monitoring your power equipment help increase server uptime and maintain the health of the data center? IT managers need to know what their power needs are in the data center and leverage that to create effective power distribution. Gentry noted that when companies prioritize spending, measuring things like PUE can help them know where to allocate their resources. With power monitoring technologies like those offered by ITWatchDogs, data center operators can have a better view of what's going on in their facilities and know which power distribution systems to invest in to achieve the greatest efficiency.

How can monitoring help companies take advantage of wind energy?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/how-can-monitoring-help-companies-take-advantage-of-wind-energy-40072369
Tue, 21 Jul 2015 10:45:10 GMT - ITWatchDogs

Powering the data center is probably the most important part of managing a large computing facility. It goes without saying that high-powered servers use a lot of electricity, and the subsequent cooling needs are nothing to scoff at, either. When high-powered servers are all placed in one space and set to maximum, the generated heat increases demand for better cooling systems. Data center managers need to ensure proper power levels in order for equipment to continue functioning. There are different ways to bring power into the data center, but one that seems to be gaining ground lately is the implementation of wind power at the facilities of bigger companies.

Who is using wind?
Companies are utilizing clean wind energy to power their data centers. Facebook announced last week that its new data center in Fort Worth, Texas, will be powered entirely by a nearby wind energy farm that is also under construction. According to Fortune contributor Katie Fehrenbacher, Facebook worked closely with the companies currently managing the building of the 200-megawatt wind farm - Citigroup Energy, Alterra Power Corporation and Starwood Energy Group - to make sure the farm would be built to the proper specifications for powering a Facebook data center.

Not long after Facebook made the announcement that its Fort Worth data center would be powered completely by clean energy, online retail giant Amazon said it would also be investing in wind for its data centers in North Carolina. Data Center Knowledge contributor Yevgeniy Sverdlik reported in mid-July that the company was investing in a 208-megawatt wind farm that will come online sometime in 2016. The farm will generate 670,000 megawatt-hours per year, and this investment brings Amazon one step closer to achieving its goal of completely powering its operations with renewable energy. According to Jerry Hunter, vice president of infrastructure at Amazon Web Services, this wind farm will bring the total wind-generated power to 40 percent across all its operations.

Why do companies invest in wind energy?
Wind energy provides a less expensive option for companies that want to keep their data centers powered and cooled without breaking the bank. According to a report published in June 2015 by the European Commission Joint Research Centre, about 370 gigawatts of wind turbine capacity had been installed across the world by the end of 2014, and wind is expected to supply 12 percent of all electricity used in Europe by 2020. The world appears to be headed in the direction of wind power, especially as resources like coal continue a steady decline in use.

Here in the U.S., the wind industries in Texas and Iowa are growing, according to Fehrenbacher, due to the regions' naturally windy locales and increasing demand for renewable energy. Pressure from organizations to move away from using coal and other unsustainable sources of power has created even more demand, as well.

Maintain sustainability with monitoring
Managers of facilities that run on wind power need to know how that energy is performing in their server rooms. After all, what is the use of investing in a power source if you don't know how it's affecting your energy use? Server room temperature readings are crucial to telling whether your facility is using power effectively. The monitoring solutions offered by ITWatchDogs can be helpful tools in the long term for IT managers hoping to save money in their facilities and contribute to the sustainable energy movement as well. Climate and power monitoring can help operators create concrete strategies for power use.

Data centers should prepare for hurricane season
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-should-prepare-for-hurricane-season-40071469
Tue, 14 Jul 2015 12:05:50 GMT - ITWatchDogs

It's crucial for data center operators to have contingency plans in place for when the unthinkable occurs. Power outages and server room fires could spell disaster for the data center - something that not all companies are able to bounce back from. In 2013, the Ponemon Institute estimated that every minute of data center downtime costs an astounding $7,900. That number in itself was a 41 percent increase over figures from 2010, so it's only right to speculate that in today's economy, an outage would be even more expensive.

What causes data center downtime?
When it comes to power outages in the data center, equipment malfunction and data center temperature changes leading to overheating can both be culprits. However, nature doesn't often follow strict guidelines when it comes to leaving our infrastructure alone. Along with equipment-related woes, operators on the East Coast of the U.S. have an added worry when it comes to their facilities: hurricanes. In the Atlantic Ocean, hurricane season for 2015 began on June 1, bringing with it memories of past disasters.

In 2012, when Hurricane Sandy devastated vital portions of the East Coast, some facilities became submerged in the deluge, causing systems to short-circuit. InformationWeek reported on a data center in lower Manhattan that had emergency generators set high above the water level, but the emergency generator fuel pumping system was in the basement and short-circuited when the lower floors flooded. It required quick thinking and a lot of hands to move fuel up 18 floors to ensure the facility stayed online.

What can data center companies do to prepare for the worst of hurricane season?

Build it to last
Some companies are designing and building their facilities to be prepared for disasters like hurricanes. For instance, IT infrastructure provider Peak 10 is opening a Tampa, Florida, data center in July that the company claims will be hurricane-proof. The building is constructed with a hardened precast concrete shell that can withstand a Category 5 hurricane. The facility is also designed to meet Tier 3 standards, which means it will provide 99.982 percent server uptime.
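Uptime percentages translate directly into an allowance of downtime minutes, which is worth seeing concretely. The short sketch below does that arithmetic for the 99.982 percent Tier 3 figure cited above; the 99.9 percent line is included only for contrast and is not a figure from the article.

```python
# How many minutes of downtime a given uptime percentage allows per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # about 525,960

def allowed_downtime_minutes(uptime_pct):
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

print(round(allowed_downtime_minutes(99.982), 1))  # roughly 94.7 minutes a year
print(round(allowed_downtime_minutes(99.9), 1))    # about 526 minutes, for contrast
```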

"Commissioning is critical to ensure that our data centers deliver the reliability and availability that our customers contract for," said David Kidd, vice president of governance, risk and compliance at Peak 10. "It is a rigorous process that validates the overall facility design from a mechanical, electrical and availability perspective, and attests that all design elements and product specifications were adhered to during the construction process."

Back up your data
This time of year should serve as a reminder for companies to make sure they have a disaster recovery plan in place. Peak 10's strategy was to build a hurricane-proof data center from the ground up, but what can operators do in their current facilities to make sure they are prepared for a hurricane? The first step in any disaster recovery plan should be backing up important data and functions to a separate location. Data Center Knowledge contributor Dave LeClair stressed that data and application backups are crucial in making sure a facility experiences the least amount of downtime possible in the event of a disaster.

"Backing up and replicating data to a secondary site provides an added layer of redundancy," LeClair said. "In the event your primary data center goes down, critical information and systems are still available via that secondary site, and business operations can proceed unaffected."

Recovery time and recovery point objectives should be defined in order to minimize downtime experienced by critical applications.

Make sure employees are prepared
LeClair also mentioned that data center workers are the most important asset of any facility. Managers should determine key personnel and give them access to work from a remote location in the event of a disaster. There should also be strict guidelines for what employees should do during a hurricane. Automatic notifications are a good idea as well, so that employees can remain informed of what's happening in the data center.

Monitor the server room
Data center managers need to make sure they know what's happening on their server room floors at all times. When it comes to hurricane-related power outages, the best defense is preparation: You can't prevent a hurricane, but you can prepare the data center for the flooding it may bring. Even a small amount of excess water can cause sparking and, in the worst case, outages due to damaged equipment. During a hurricane, it's important to be able to see if leaks occur. Monitoring equipment like that offered by ITWatchDogs could mean the difference between maximum server uptime and expensive power outages.

Achieve better PUE with server room monitoring
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/achieve-better-pue-with-server-room-monitoring-40071753
Fri, 10 Jul 2015 13:01:58 GMT - ITWatchDogs

It takes a lot of energy to power a data center. Cooling systems, lighting and the servers themselves contribute to the massive amounts of electricity it takes to make the computing world turn. A 2014 study by Anthesis Group and the Natural Resources Defense Council found that in 2013, U.S. data centers used 91 billion kilowatt-hours of electricity. That number is only projected to go up as more facilities are built and more computing power becomes necessary. By 2020, data centers are projected to use an estimated 140 billion kilowatt-hours annually.

All of this energy costs money. The projected 140 billion kilowatt-hours will cost American businesses $13 billion in electricity bills, and the environmental expense these facilities create is just as significant. To that end, providers are constantly looking for ways to use electricity more efficiently in their data centers and cut down on costs.
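Dividing those two projections gives the average electricity rate they imply, which is handy for rough cost estimates. The sketch below derives it and applies it to a hypothetical 1 MW facility; both the derived rate and the example facility are illustrations, not figures from the study.

```python
# Implied average electricity rate behind the cited projections.
projected_kwh = 140e9       # 140 billion kWh per year by 2020
projected_cost_usd = 13e9   # $13 billion in electricity bills

implied_rate = projected_cost_usd / projected_kwh
print(f"Implied rate: ${implied_rate:.3f} per kWh")  # about $0.093

def annual_energy_cost(kwh, rate_usd_per_kwh=implied_rate):
    """Estimate an annual electricity bill from consumption and a unit rate."""
    return kwh * rate_usd_per_kwh

# A hypothetical facility drawing a steady 1 MW uses about 8.76 million kWh a year.
print(f"1 MW facility: ${annual_energy_cost(1_000 * 8_760):,.0f}")  # roughly $813,000
```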

What is PUE?
Power usage effectiveness is the metric most data centers use to gauge the sustainability of their practices and make sure they're using energy as efficiently as possible. PUE is calculated by dividing the total power entering a data center by the power its IT equipment actually uses. Efficiency improves as the ratio moves toward 1; a 1.2 rating, for instance, is better than a 1.8 rating. In 2014, an Uptime Institute survey found that the average PUE across companies' largest data centers around the world was 1.7, a slight improvement in efficiency over 2011's measurement of 1.89.

Companies are finding ways to increase their efficiency. For instance, Google boasts a trailing twelve-month PUE of 1.12 across all its data centers. The computing giant has made critical improvements to its facilities in order to achieve such a low PUE, including investments in more efficient cooling and power infrastructure. According to Computer Weekly contributor Archana Venkatraman, data centers could cut server room energy waste by up to 40 percent through efficiency improvements, saving money in the process.

Where do data centers lose energy?
Beyond cooling and power infrastructure and server needs, computing facilities lose a decent amount of energy to so-called zombie servers that aren't in use. With nearly 30 percent of the world's servers sitting unused in data rooms, the amount of wasted energy is enormous. In fact, according to a more recent study by Anthesis Group, these comatose servers represent $30 billion of unused IT infrastructure. The potential savings from eliminating them are astronomical.

How can data centers increase energy efficiency?
Server room monitoring is crucial for companies that want to bring their PUE ratio down and cut electricity costs. Managers need to be able to monitor power in their facilities. Being able to locate where energy is being wasted lets IT staff take steps to eliminate those problems. For instance, if a temperature sensor shows that one part of the facility is running colder than industry guidelines require, managers can safely scale back cooling and allow the temperature to rise in that area, leading to real energy savings.

Data center monitoring can also identify those zombie servers and save companies money by eliminating the wasted energy. Monitoring solutions like those offered by ITWatchDogs can be integral for the overall health of the data center, because temperature and power sensors provide real-time, actionable data to IT managers. They can then implement much-needed solutions and save companies money in the long run.

Cooling in the data center: What are my options?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/cooling-in-the-data-center-what-are-my-options-40071017
Tue, 07 Jul 2015 17:28:16 GMT - ITWatchDogs

The data center facility is a well-oiled machine, and the cooling system provides the oil. Without an effective server room cooling system, the data center could overheat, causing equipment malfunction or, in the worst cases, destruction. As a whole, the data center cooling market is set for expansion, with a projected compound annual growth rate of 6.67 percent from 2015 to 2019, according to a study from Research and Markets.

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) is the group that releases guidelines for the temperature range that should be observed at data center facilities. Whichever cooling solution a data center employs, it should make sure to remain within the proper range. As of November 2014, the ASHRAE guidelines dictated that the temperature within a data center, in order to maintain optimal functionality and sustainability, should remain between 59 degrees Fahrenheit and 89.6 degrees F. However, ranges for temperature and humidity are under constant revision, so managers should make sure to check them regularly.
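Since those figures are just a numeric band, a monitoring script can flag readings that drift outside it. The minimal sketch below uses the 59 F to 89.6 F range exactly as cited in the article; because the guidance is revised periodically, the constants should be confirmed against current ASHRAE documentation before real use.

```python
# Flag readings that fall outside the allowable range cited above.
ASHRAE_LOW_F, ASHRAE_HIGH_F = 59.0, 89.6  # range as cited; verify against current guidance

def within_ashrae_range(temp_f):
    return ASHRAE_LOW_F <= temp_f <= ASHRAE_HIGH_F

for reading in (55.0, 72.5, 91.2):
    status = "within range" if within_ashrae_range(reading) else "OUT OF RANGE"
    print(f"{reading} F: {status}")
```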

It's important to have a cooling strategy in place, but what are the options, and how do data center managers decide?

Use water
Liquid cooling is a popular method of maintaining data room temperature. According to IT Business Edge contributor Arthur Cole, liquid cooling is becoming steadily more popular as IT managers look for ways to stay on budget and maintain a minimal footprint while computing densities continue to climb. It may appear dangerous to have water and electricity inhabiting the same space, but Cole points out that "as infrastructure becomes more modular, and therefore insulated from either the water-filled pipes or air fans that populate server racks at the moment, the need to bring the heat exchange directly to the component is growing."

Indeed, a 2014 report from 451 Research indicates that liquid cooling is becoming more popular as more data centers have high-performance cooling requirements, despite the current relatively low adoption rate.

"In the [high performance computing] world, everything will move to liquid cooling," Paul Arts, technical director of Eurotech, told Data Center Knowledge. "In our vision, this is the only way to get to exascale. We think this is the start of a new generation of HPC, with enormous power. We are just at the beginning of the revolution."

Contain that cool air
As the drought in the western half of the U.S. rages on, however, liquid cooling may not be the best bet for companies located in California or the surrounding areas. Containment cooling strategies were developed to offset chaotic air distribution in the data center, which can be ineffective and costly if not properly managed. In this type of cooling, chilled air is delivered directly to server air intakes through enclosures that keep hot exhaust air from mixing back in. According to Data Center Dynamics contributor John Collins, the benefits of containment cooling include improved efficiency and increased equipment reliability, because the containment prevents the re-entry of warm air.

The importance of monitoring
No matter which strategy data center managers choose to regulate the temperature in their server rooms, they should make sure they are getting proper climate readings from their equipment. Server room monitoring can help IT managers determine whether their current cooling strategies are working or whether they need an upgrade. Monitoring can also allow them to troubleshoot before any real disaster occurs. Equipment like the Watchdog 100, which comes with on-board temperature sensors, allows managers to keep track of temperature and humidity while making the necessary changes on the data room floor.

Hot days could spell disaster for data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/hot-days-could-spell-disaster-for-data-centers-40070415
Mon, 06 Jul 2015 10:32:46 GMT - ITWatchDogs

Things are heating up for companies in the Northern Hemisphere, bringing with them a range of heat-related problems. In the U.K., the hottest July day since 2006 was recorded July 2, with temperatures of 36.7 degrees Celsius - that's 98.06 degrees Fahrenheit - registered at Heathrow Airport and even higher readings in the London Underground, according to The Guardian. The results were dramatic: The U.N. urged countries to develop better warning systems for hotter days in the future, and speed restrictions were placed on Network Rail lines over fears that the metal tracks would buckle.
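The Celsius-to-Fahrenheit conversion quoted in that paragraph is simple enough to verify directly; the one-function Python sketch below is included only to show the arithmetic.

```python
# The standard Celsius-to-Fahrenheit conversion behind the figures above.
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

print(round(celsius_to_fahrenheit(36.7), 2))  # 98.06
```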

Citizens and travelers weren't the only ones suffering. Data centers are especially feeling the pain of elevated temperatures. According to Computer Business Review, some businesses in the U.K. have had to stop services due to overheated data center equipment. Ironically, the Guardian's own live blog about the heat wave was stalled due to overheated servers.

"There may be a pause in live blogging after 4 p.m., amusingly because we're having to switch to backup servers because our main ones have overheated," the newspaper reported.

Heat: the killer of servers
Temperature is one of the most important metrics in the data center. When server rooms get too hot, equipment overheats and can cause costly downtime. Cooler is better - but cooler can also mean more expensive if the data room cooling systems in use aren't energy efficient. Effective temperature monitoring makes it easier to verify that cooling systems are keeping up.

Running equipment generates plenty of heat of its own. Cooling systems sized to hold the data center at an average temperature, however, may not be prepared for intense heat spikes like the one the U.K. experienced. The temperature recorded at Heathrow was nearly 13 degrees Celsius hotter than the average temperature maintained by U.K. data centers, according to Computer Business Review. Server room equipment simply couldn't keep up with the heat.

A lesson can also be learned from the Microsoft data center that overheated in March 2013. According to The Verge contributor Tom Warren, Microsoft Outlook experienced a 16-hour outage, and the company blamed the incident on an overheated data center. An outage of that length can hurt a company's numbers in the long run because of disgruntled customers and damaged equipment.

Prepare for the worst
When summer hits, it's only a matter of time before cooling equipment becomes overtaxed keeping servers functioning at optimum efficiency. It's important for data center managers to have a plan in place for dealing with rising temperatures. This could include investing in newer cooling systems before the hot months begin. Air containment strategies are gaining popularity among colocation companies: According to Data Center Journal contributor John Collins, air containment systems improve cooling efficiency, increase equipment reliability and reduce energy spending compared with legacy cooling systems.

Investing in new technology isn't the only thing IT managers can do to prepare for the dreaded heat wave. During the hot summer months, it's important to maintain a cool environment within the data center, but improper monitoring could result in increased energy costs and inefficient cooling. Monitoring systems like the Watchdog 100-PoE provide real-time data to IT managers looking to keep the cost of cooling low while at the same time ensuring server uptime and beating the summer heat. With this data, managers can make decisions about cooling systems and be better prepared for the hottest days of the year.

Best practices for the most important parts of the data center
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/best-practices-for-the-most-important-parts-of-the-data-center-40069387
Mon, 29 Jun 2015 14:39:39 GMT - ITWatchDogs

For data center services providers, learning about the latest trends in data center management and design is important. Incorporating best practices into their operations is doubly so. Future implementation of certain solutions could positively impact the facility and lead to better operating conditions and more customers in the long term. As the colocation services market in particular expands, it's especially crucial to properly maintain the infrastructure a provider has so that when it comes time to upgrade, the company knows what its assets are and how to properly strengthen them.

Here are four best practices for data center management for some crucial parts of the facility:

1) Cooling
Keeping the servers cool is integral to maintaining maximum functionality and optimal computing power. The data center temperature can be maintained through different types of cooling, including (but not limited to) hydrocooling, open-air cooling, server room air conditioning units or some combination of these. Providers should be aware of their options and the applicable regulations when it comes to keeping their units cool. The American Society of Heating, Refrigerating and Air-Conditioning Engineers sets temperature standards for data centers in order to maintain efficient power usage. The current standards dictate that server rooms should maintain a temperature of anywhere from 59 degrees Fahrenheit to 89.6 degrees F.

2) Power distribution
Even power distribution in a facility can impact the power usage effectiveness. A lower PUE means the data center is utilizing power more efficiently. According to a survey conducted by the Uptime Institute, the average PUE in 2014 was 1.7. However, that number hasn't really changed since 2011 even though equipment is becoming more power efficient.

In an effort to improve PUE, one solution would be for data center providers to cut down on the "zombie servers" that are technically not in use but are still drawing power. A June 2015 study by the Anthesis Group indicates that 30 percent of servers worldwide are not in full use.

3) Disaster recovery
What happens when a data center goes offline? Data and applications can potentially be lost, not to mention the revenue forfeited by data center companies. Disgruntled customers could mean lost business, and the facility's standing could come under fire as well.

The recent Aliyun power outage serves as a reminder to data center managers of the problems that come with not having a proper disaster recovery plan in place. According to Caixin Media, on June 21 the cloud services provider of popular e-commerce site Alibaba experienced a 14-hour outage at its data center in Hong Kong, severing customers' ties to their infrastructure and causing general distress.

The facility was in the second-highest tier of the data center classification system. This means, according to the Uptime Institute, that downtime required for maintenance is negligible and that "every component needed to support the IT processing environment can be shut down and maintained without impact on the IT operation." With this outage, Aliyun may suffer a hit to its reputation or face reclassification under the tier system.

4) Implement monitoring solutions in your server room
Monitoring helps all parts of the data center coexist and function at the optimal level. Managers need to know what is happening in their server rooms in order to, among other things, prevent issues before they occur. Power output and temperature monitors allow for the swift detection of abnormalities on the data room floor. Climate monitors like the ones provided by ITWatchDogs also help infrastructure providers remain compliant with ASHRAE's guidelines. Power, cooling and disaster readiness can all become easier and more efficient when monitoring solutions are utilized.

Fire in the data center: How can it be prevented?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/fire-in-the-data-center-how-can-it-be-prevented-40068890
Thu, 25 Jun 2015 17:52:45 GMT - ITWatchDogs

Fire in the data center is one of the most damaging circumstances imaginable and can wreak havoc on companies' wallets and reputations. Though fire is highly preventable, data center blazes have been known to completely destroy server rooms or entire facilities. For example, in 2008, a small data center operated by Camera Corner Connecting Point in Green Bay, Wisconsin, was wiped out. The fire destroyed 75 servers, routers and switches, and it took the company 10 days to get customer websites back online, according to Data Center Knowledge. This is obviously a nightmare situation that data center providers don't want to face.

Data center fires happen more often than most people realize. For instance, a June fire at one of the Belfast data centers run by U.K.-based communications company BT Group disrupted service, according to The Register. A BT spokesman said that though a number of customers were affected by the event, no customer equipment was damaged.

"Today's incident disrupted the power supply to the data center only," the spokesman said.

Though the fire was described by the company as "small," one customer, Internet service provider Tibus, experienced downtime and had two of its power distribution units damaged during the return of power to servers. This fire, which was suppressed by several fire crews on the morning of June 24, will no doubt continue to cause BT issues as questions arise as to why it occurred in the first place.

Why do data center fires occur?
The data center environment is a highly sensitive habitat, and minute changes can affect equipment. Static electricity in particular is a tricky thing. When the air in a computing environment is too dry, it can cause static, which can start fires. Those blazes may be fed by the mountains of dust surrounding the servers - not a recipe for success.

Equipment failures are another culprit behind server room fires. Surge suppressors can fail dramatically and start fires within the units themselves. Overheated systems can also catch fire, like the September 2014 blaze in a Des Moines data room that started when a malfunctioning coil in an air conditioning system became too hot, according to Fireline.

Effective fire suppression is the result of careful planning and good sense on the part of employees and managers. It's crucial to minimize the effect of such a blaze and make sure equipment remains as safe as possible. According to TechRepublic contributor Tom Olzak, suppression systems meant for use in areas not populated by humans can be hooked up to alarms and triggered once adequate time has elapsed for employees to evacuate the room. Emergency shutdown switches are also an important tool - but managers should have a backup of data and applications, because hard shutdowns are not good for the servers.

How can fires be prevented?
Prevention of data center fires starts and ends with server room monitoring. It's integral that data center managers have the necessary information on temperature and humidity in order to make informed decisions about servers and other mission-critical equipment. Power monitoring is also a crucial aspect of preventing fires, because surges in electricity can cause dangerous sparks and damage equipment.

ITWatchDogs' climate monitoring systems can be connected directly to smoke alarms that trigger a power shutdown in case of data center fire. Server room temperature is an important metric to keep tabs on as well; overheated systems are prime candidates for sparking and static. The Watchdog 15-PoE, for instance, offers careful climate monitoring so that managers can breathe easier in their data rooms.

Downtime at Aliyun data center demonstrates need to monitor power
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/downtime-at-aliyun-data-center-demonstrates-need-to-monitor-power-40068591
Wed, 24 Jun 2015 18:11:53 GMT - ITWatchDogs

When disaster strikes the data center, no one wins. According to a 2013 study from the Ponemon Institute, data center downtime can cost companies upward of $7,900 per minute, an incredible 41 percent increase from 2010, and the numbers have certainly only risen since the study was completed. The bottom line: Companies cannot afford a data center disaster wherein the facility is taken offline for hours or even minutes at a time.

Aliyun, the cloud computing unit of Chinese e-commerce site Alibaba, learned that the hard way on June 21 when its Hong Kong data center, which serves more than 1.4 million corporate customers operating in Southeast Asia, suffered a 14-hour loss of power, according to Caixin Media. Fourteen hours of data center downtime at 2013 rates would mean a net loss of over $6.5 million, and the real figure is even higher by today's standards.

Why did the downtime occur?
There are conflicting stories from within the company as to the cause of the disruption. Officials from Aliyun told some concerned companies unable to access their data that the disruption was the result of a severed Internet cable, but others were told that the issue was due to a power outage. Regardless, the situation could have been avoided with proper monitoring and disaster recovery systems in place. An IT worker told Caixin Media that data centers should have backup generators in case of power-related emergencies such as these, and said the disruption could have resulted from a failure in the electricity distribution system. No matter what happened, any amount of downtime is unacceptable and can breed frustration among clients that house data at the facility.

The Hong Kong facility, which was launched in May 2014, is Aliyun's fourth data center, with the other three in the Chinese mainland cities of Hangzhou, Qingdao and Beijing, according to TechCrunch. The company also opened a data center in Silicon Valley in March 2015, hoping to offer its cloud computing services to customers in the U.S. However, with this power outage disaster, the company's reputation is on the line; in an extreme case, Aliyun could see expansion setbacks.

What can data center providers do to prevent downtime?
The loss of power at the Hong Kong data center demonstrates the critical need for server room monitoring in order to ensure maximum server uptime. In situations like the one Aliyun experienced, issue detection and troubleshooting are crucial for catching problems with the facility before they escalate into a full-scale loss of power. If the disruption was indeed caused by a failure in the electricity distribution system, monitoring devices that measure power flow could help save the data center money in the long term - and the short term. Server room monitoring is key, and ITWatchDogs offers products and sensors that allow data center managers to monitor power and keep an eye on servers at all times, making sure readings are normal.

3 data center trends for the second half of 2015
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/3-data-center-trends-for-the-second-half-of-2015-40068241
Tue, 23 Jun 2015 15:59:27 GMT - ITWatchDogs

Trends in data center design and cooling are important to keep an eye on through the rest of 2015. As the colocation services market grows, data center companies should know where they should improve their facilities in the coming months. Here are some important ways data centers are growing as the year wears on:

Moving toward sustainability
Data centers use a lot of natural resources, and there is a growing trend for facilities to move toward more sustainable practices and thus reduce their carbon footprints, especially in enterprise-class data centers. According to Data Center Knowledge contributor Kevin Leahy, at the beginning of 2015, it appeared as though enterprise infrastructure providers would be motivated by the growing use of the cloud to cut back on data center space.

"What portions of their environments can they move [to the cloud]?" Leahy said. "How quickly can they transition workloads? As organizations answer these questions, they're re-evaluating the amount of physical space they need for their data centers, often reducing footprints by up to 50 percent."

Large data centers, on the other hand, are continuing to grow, according to Leahy. The reason for their growth is a combination of market demand and already-substantial investments in efficient power and cooling. As a result, their PUE is better than that of many smaller operations, so they can afford to expand their physical reach.

Drought factor
The drought conditions affecting much of the western half of the U.S. continue to be a serious issue for data centers, especially those that use hydrocooling in their data rooms. At the beginning of April, California Governor Edmund G. Brown Jr. announced statewide mandates for water rationing due to the persistent drought conditions, with a goal of reducing water use by 25 percent. Water required for human health and safety, growing food and fighting fires will still be readily available. However, the lack of resources might leave data centers that depend on hydrocooling up an increasingly dry creek without a paddle.

Drought persistence means data center design will more than likely shift to incorporating air or containerized cooling processes in lieu of hydrocooling, according to Data Center Journal. Along with reducing the use of fresh or salt water, this method also contributes to creating a more sustainable data center.

Data room monitoring still paramount
Facilities looking to cut their carbon footprint or change their cooling strategy to accommodate the drought should consider investing in data center monitoring equipment. The real-time data provided by temperature and humidity sensors allows companies to optimize their data center cooling strategy and, in the long run, reduce their impact on the planet.

According to a recent report published by TechNavio, the global data center market is expected to grow at a compound annual rate of almost 14 percent from 2015 to 2019, an indication that the market for cooling systems will show similar growth as more facilities are built. An effective cooling strategy relies on temperature monitoring systems. A monitor like the Watchdog 100-PoE indicates when the air in the data room is too warm so that center managers can know when to make adjustments to cooling systems or facility design.

When data center companies incorporate new data room cooling methods into their facilities, an effective monitoring solution is necessary to make sure these systems are functioning at maximum capacity. Data room cooling is one of the most important aspects of data center operation, and if IT managers know what the temperature is in the server room, they can detect issues and troubleshoot before those issues escalate to overheating or, in the worst case, downtime.

Data center sustainability improved with monitoring
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-sustainability-improved-with-monitoring-40067381
Mon, 22 Jun 2015 12:52:23 GMT - ITWatchDogs

Green initiatives are changing how technology businesses operate, and industry leaders are paving the way for sustainable data centers. According to TechTarget, a green data center is a facility wherein the electrical, mechanical, lighting and computer systems are designed with the environment in mind - in order to leave as small an environmental footprint as possible. Along with initial facility design and implementation of green practices, server surveillance and data room monitoring can go a long way toward keeping the data center as sustainable as possible.

The push for greener computing infrastructure is causing some organizations to rethink how they're designing the data center space. Companies like Facebook and Google are leading the way with their sustainability efforts, according to a 2014 report from Greenpeace. Google, for example, has previously bought wind energy for its data center facilities in Sweden, Oklahoma and Texas. From the building itself to issues with data room cooling, there are ways to cut back energy use and minimize environmental footprints, along with cutting costs.

Renewable energy
When designing a green data center, an important part of the process is making sure renewable energy is present in the overall plan. Ron Vokoun, the mission critical market leader for the western region at JE Dunn Construction, stressed in an article for Data Center Knowledge that choosing the proper location for your data center can be crucial. He cited Facebook as one company leading the way with this strategy.

"When selecting a site, considering the availability of renewable energy, as Facebook has agreed to do, can lead to a greener profile," Vokoun said. "Although the grid doesn't know the difference between renewable and coal fired electrons, different geographic areas are certainly known for their production of renewable energy."

When companies don't have an option of locating their facilities in an area where renewable resources are easily acquired, Vokoun recommended implementing solutions anyway.

"An alternative is to site your data center in an area where you can install your own renewable energy whether it be solar, wind or geothermal," he said.

Cooling systems
Renewable energy doesn't just power the servers themselves. Data room cooling systems that use green power and as little water as possible are also good investments for companies that want to decrease energy waste.

"The potential for free cooling should be considered if you can be flexible in selecting the geographic location of your data center," Vokoun said.

Cooling systems that don't require water are even more important in regions affected by drought, like the western half of the U.S. Temperature monitoring is essential to making sure systems are working correctly and computing equipment isn't overheating when water-based cooling options are not available or advisable.

The power of monitoring
Even data centers that are built to use less energy and have a minimal carbon footprint run into issues in the power room from time to time. Being able to monitor equipment on the server room floor and make sure power levels are appropriate for a LEED or Energy Star-certified facility is essential to good business practices for data center operators. To assist in these endeavors, the Watchdog 15 comes with temperature monitoring capabilities that will help IT managers determine if cooling energy is being utilized correctly within the facility.

Prepare the data center for hot summer months
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/prepare-the-data-center-for-hot-summer-months-40067261
Fri, 19 Jun 2015 12:31:13 GMT - ITWatchDogs

Heat is a killer for the data center. A lot of heavy processing is taking place in a small space, and heat generated by the servers themselves can cause slowing and even downtime if not managed properly. Therefore, it's important to keep the room cool in order to keep servers functioning at maximum capacity. For the coming summer months in the Northern Hemisphere, however, data center managers need to prepare their facilities for increases in temperature and humidity so that downtime can be minimized or, hopefully, eliminated altogether.

Assess the situation
Data room cooling strategies are going to become even more integral to data room health as we move through summer and into fall, no matter where a facility is located. According to BizTech, keeping the data room cool constitutes the biggest use of energy in data center environments, and when equipment overheats, it can be an even bigger expense. IT managers need to ask themselves and their teams specific questions in order to ensure they're prepared for the hot, humid summer months. Here are a few:

Is the current equipment sufficient for the heat generated by the machines themselves?

Will the current cooling equipment be able to deal with sudden spikes in temperature?

In the event that a cooling unit fails, is there a backup? For instance, is there a possibility that portable air conditioning units can be used?

The answers to these questions will allow IT managers to gauge where they need to strengthen their cooling systems and whether they are fully prepared for more strain. This is the crucial first step toward fully assessing what solutions the facility will need.

Be proactive
Being prepared for increased temperatures and humidity will pay off in the long run. The expense of data center downtime is too great to take a chance on. Making sure airways are unobstructed is vital to maintaining proper airflow to the equipment. Cleaning and maintaining computer room air conditioners and computer room air handlers is an important step in preparing for summer. IT managers should also check and make sure there are no possible air leaks, such as in broken ceiling tiles or open windows.

"[P]reparation will also provide an opportunity to raise awareness of the potential dangers of an overheated computer room or data center," Kevin Ayling, business development director of Migration Solutions, told DatacenterDynamics. "All too often, a company's computer room or data center has evolved from a couple of PCs in a cupboard into [a] business critical system, and it is worth making the business think about the consequences of losing an application on a server for any length of time."

Rethink cooling strategy
Chaotic air distribution may not be the best option when it comes to server room cooling. According to DatacenterDynamics contributor John Collins, these types of cooling systems can result in dangerous re-circulation of hot air through the server, which can cause overheating. Containment solutions may be the better way to keep the data center from overheating, because the containment systems deliver cool air directly into server intakes, eliminating the problem of re-circulation.

The importance of monitoring
Keeping an eye on beleaguered equipment is crucial for data center health. Temperature sensors that connect to a central monitoring system are essential tools for keeping server rooms functioning properly. The Watchdog 1250 temperature monitoring system offered by ITWatchDogs, along with other temperature and climate sensors, comes equipped with alarms that alert data center staff to issues. With fast detection of server room problems, staff is better able to assess and deal with cooling issues - especially beneficial with summer coming up.

What will data centers need in a growing Singapore market?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/what-will-data-centers-need-in-a-growing-singapore-market-40067118
Thu, 18 Jun 2015 09:49:01 GMT | ITWatchDogs

The global forecast for the data center construction market is looking strong. According to an April 2015 report from TechNavio, the market is expected to grow at a compound annual growth rate of 10.93 percent from 2014 to 2019, which means demand for IT infrastructure and server room space remains high. Companies are moving more data to the cloud and trying to accommodate increasing computing needs. With global market growth, colocation services providers based in the U.S. are looking outside North America to build data center facilities, in places like the U.K. and Asia.

One of the biggest hot spots for data center real estate right now is Singapore, according to Data Center Knowledge. Companies like Google and Equinix have built facilities in Singapore, along with many other businesses seeking to capitalize on increased demand for services in Asia-Pacific. George Slessman, founder and CEO of Phoenix-based colocation services provider IO, even moved to Singapore in 2013 with the opening of IO's first global data center - he thought it was that important to be present in order to take advantage of the rapidly growing market. Slessman believes that Singapore will be the biggest market for his company in the next 10 years.

What does this boom mean for data center infrastructure?
Colocation services providers looking to position their facilities in areas like Singapore will need to account for many variables, not the least of which are climate and temperature. Singapore has a hot and humid climate - one that produces a lot of rainfall. Too much or too little humidity could produce spark-inducing static electricity, corrosion of electrical parts or even standing water that leads to short circuits. The Data Center Journal reported an average relative humidity of 45 to 55 percent as the ideal range, and efficient cooling systems are an essential part of keeping humidity down.

Server room monitoring is a crucial investment for companies that want to make sure their data centers remain dry and cool, especially in hot, wet climates like Singapore's. Monitoring technologies like the Watchdog100-PoE offer built-in power over Ethernet along with temperature and humidity detection, giving companies early warning so server room issues can be dealt with as quickly as possible. The kinds of systems provided by ITWatchDogs give companies a reliable eye on their equipment.

Data centers in danger from flooding
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-in-danger-from-flooding-40065313
Tue, 09 Jun 2015 17:20:12 GMT | ITWatchDogs

As far as data center disasters go, flooding is probably one of the worst to experience. Even a small amount of water can destroy machines by short-circuiting processing units, potentially leading to server room downtime and a need for swift maintenance. Full-scale floods are worse still and can cripple a system for hours, days or even weeks. The Ponemon Institute estimated in 2013 that every minute of data center downtime costs a company $7,900, a 41 percent increase over data gathered in 2010.

For instance, InformationWeek reported on the innovation and quick thinking needed when Hurricane Sandy nearly took a New York data center offline. The company, Peer1, experienced flooding in the bottom floors of its data center facility, but backup generators on the 18th floor should have been immune to the lower-level floodwaters. However, the fuel pumping system for the emergency generators, which was on the lower floors, was knocked offline as soon as it went underwater.

Flood planning is not only necessary for data centers in places susceptible to hurricanes. According to the Proceedings of the National Academy of Sciences, sea level has been rising steadily over the last century as a result of climate change and will continue on the same path, leaving many cities partially or entirely underwater within the next 100 years. This isn't a good projection for many data centers across the U.S. Developing strategies for the event of flooding, therefore, is integral to disaster planning.

Flood preparedness
In order to prepare data center facilities for the worst, companies need to be aware of all possible outcomes and design their systems so that they can keep data centers running - or at least reduce the server room downtime that often results from such a disaster.

"If you are thinking about the vulnerability of a data center, you have to think not only about the location of the servers, but also about the location of everything else — the cooling equipment, backup generators, et cetera," Benjamin Strauss, vice president for climate impacts at Climate Central, told Government Technology. "If you want to make a facility flood-proof, you have to make all its critical components flood-proof as well."

Strauss, who is also the director of the Program on Sea Level Rise at Climate Central, noted that backup generators for many data centers are still underground and could be affected by flooding.

In the meantime, processors are still highly susceptible to damage caused by leaks and excess water within the server room. Server surveillance and monitoring are essential aspects of data center disaster preparedness. Monitoring solutions like the Watchdog 1000 from ITWatchDogs, coupled with analog water sensors, can keep an eye on equipment and check for leaks and floods. Another advantage of installing analog water sensors is that they won't give false readings for humidity or condensation concerns - they will only provide companies with real, actionable information in the event of a flood or other water leak.

Apple data center incidents highlight importance of server room monitoring
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/apple-data-center-incidents-highlight-importance-of-server-room-monitoring-40063979
Wed, 03 Jun 2015 16:58:48 GMT | ITWatchDogs

Data center disasters, despite being preventable, are fairly frequent. A server room fire, for instance, is one of the most dangerous situations a company operating a data center can face, and a blaze can be caused by any number of things related to server room temperature or electrical shorts. A 2012 report released by the National Fire Protection Association stated that 78 percent of server room fires started with electronic equipment, and 16 percent involved heating, ventilating or air conditioning equipment. In these cases, it's possible that high levels of moisture built up and caused circuits to short, or that the environment was too dry and created static electricity. Whatever the case, it's important that companies be aware of the risks and protect themselves where possible.

Apple woes
Tech giant Apple has suffered two major data center incidents in the last two weeks, including one fire. The incidents draw attention to safety concerns facing data centers around the country and highlight the importance of making sure organizations are taking the right precautions in keeping their data centers safe.

On May 26, fire crews, including about 100 firefighters, put out a fire on the rooftop of an Apple facility in Mesa, Arizona. According to Data Center Knowledge, that fire may have been caused by a fault in some of the solar panels on the roof. The building is the future site of an Apple data center - the company announced in February that it would be investing $2 billion to convert the 1.3 million-square-foot former manufacturing facility into a data center. No one was injured in the incident.

Most recently, Apple announced June 1 that there had been a chlorine leak at the company's Maiden, North Carolina, data center. According to USA Today, five people who were exposed to fumes from the chlorine were taken to a local hospital after reporting dizziness and lightheadedness. The employees were treated at Catawba Valley Medical Center and were able to return to work after Hazmat crews declared the building safe.

Apple's North Carolina data center is its largest at 183 acres, and the company is continuing to expand it and incorporate sustainable energy options into the infrastructure. In fact, according to USA Today, the facility runs entirely on renewable energy using biogas fuel cells and two 20-megawatt solar arrays. However, it's important to understand that even with all the innovation associated with a structure like this, companies like Apple shouldn't neglect to monitor their systems.

Apple uses chlorine to clean the cooling systems of its data centers. Data room cooling is a crucial part of data center maintenance, and it's important to clean the systems that do the cooling, as well. But when leaks like this occur, they can cause injury and can stain the reputation of the company involved.

Keep your data center safe
Companies big and small can avoid safety hazards like the ones faced by Apple. Too much heat in a server room could lead to overheating of important equipment, and when the room is too humid, moisture could build up and cause other electrical problems. Organizations can benefit from investments in monitoring equipment designed to inform personnel of changes in the data center environment. For instance, the Watchdog 100 provides a climate monitor that checks data center environments so that companies can be aware of changes in server room temperature or humidity and can take action in situations that require quick thinking.

Intel buys Altera: How this impacts the server room
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/intel-buys-altera-how-this-impacts-the-server-room-40063599
Mon, 01 Jun 2015 16:50:08 GMT | ITWatchDogs

The first day of June 2015 may be looked back on as the moment Intel boldly declared its presence in the data center market. The California tech company recently announced the purchase of Altera Corporation for $16.7 billion. Speculation is rampant about Altera's higher-margin chips, with Reuters saying the chips will help speed up Web searches.

However, many seem to point to the future of data centers when talking about the impact of this purchase. According to Wired magazine, this deal represents the future: Wired's Cade Metz wrote that Altera produces field-programmable gate arrays (FPGAs), chips that can be programmed to perform very specific tasks. In a statement, Intel said it believes its vast resources and already well-regarded chips will strengthen Altera's offerings.

Intel's move does back up some growing trends. Cisco's May 2015 VNI Global IP Traffic Forecast for 2014 to 2019 revealed many eye-opening statistics about the future of computing. According to the report, by 2019 there will be approximately 3.9 billion Internet users around the world, accounting for a little more than 51 percent of the global population. Those billions of connected people will get online through one of a projected 24 billion networked devices. Everything with some type of connection is part of the IoT, and all of that traffic has to go somewhere - data centers.

Essentially, these chips will allow enterprises to customize algorithms used in various workloads, according to ZDNet. During a conference call, Intel CEO Brian Krzanich said FPGAs improve performance while also reducing costs. As it stands, data centers equipped with these chips should be able to handle an ever-growing IoT with little issue.

Why is performance a worry?
The IoT is taking a toll on data centers. Businesses in particular rely on constant connectivity in today's global economy. If a company's cloud infrastructure were to fail, the results would not be pleasant.

However, data center energy consumption is extremely high because of the need to keep the equipment on and maintain server room temperature and humidity, among other variables. These facilities are caught in a bind because their business model does not allow for certain areas to be shut off to save power, and a majority of data centers also do not have the resources to heavily invest in new infrastructure built with the latest cooling techniques and state-of-the-art equipment. With the IoT putting greater pressure on data center uptime, the need to avoid downtime at all costs becomes even greater.

Cost-effective and proven solution
For current data centers as well as future facilities built around Intel's chips, two pieces of equipment should be used: power monitoring and climate monitoring devices. Both units allow data center operators to monitor conditions remotely, freeing staff for other work. A power monitoring device helps keep tabs on critical variables, such as voltage, whereas a climate unit, such as the WatchDog-15 PoE, monitors factors like temperature and humidity.

Facilities can fine-tune their cooling because both devices collect data for later analysis, making it possible to spot when cooling is needed and when it is not. As a result, energy bills can be lowered.
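
As a rough illustration of that kind of trend spotting, the short Python sketch below averages logged readings by hour of day; the CSV file name and its columns are assumptions for illustration, not a format produced by any particular device.

```python
# Back-of-envelope trend analysis on logged readings (hypothetical CSV log
# with "timestamp,temperature_f" rows) to spot hours when less cooling is
# needed. The log format is an assumption for illustration.
import csv
from collections import defaultdict
from datetime import datetime

totals = defaultdict(float)
counts = defaultdict(int)

with open("server_room_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        ts = datetime.fromisoformat(row["timestamp"])
        totals[ts.hour] += float(row["temperature_f"])
        counts[ts.hour] += 1

for hour in sorted(totals):
    avg = totals[hour] / counts[hour]
    print(f"{hour:02d}:00  average {avg:.1f} F")
```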

Intel's deal stands to benefit data centers because if servers can be programmed extensively, they can be instructed to use as little power as possible while completing intensive tasks. Combined, all three pieces - programmable servers, power monitoring and climate monitoring - can hypothetically increase current and future facilities' energy efficiency.

Is it time to plan for the cloud data center of the future?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/is-it-time-to-plan-for-the-cloud-data-center-of-the-future-40063469
Mon, 01 Jun 2015 13:13:06 GMT | ITWatchDogs

Cloud computing is here to stay, and has been for quite some time. Cisco projects the cloud will grow even more, according to a 2014 white paper from the company. The effects of that growth will most likely be seen inside existing data centers and in the plans for future facilities.

The white paper said that from 2013 to 2018, data center traffic is expected to triple. This is a direct result of the numerous devices, organizations, people and services using the cloud. By 2018, 76 percent of global data center traffic will come from cloud applications and devices. Predictions like these are eye-opening in terms of how much traffic future cloud growth will account for.

In 2013, 1.6 zettabytes of data center traffic came from the cloud, and 1.5 ZB from traditional sources. By 2018, those numbers are projected to increase to 6.5 ZB and 2.1 ZB, respectively. To put that in perspective, 1 ZB is equal to 1 trillion gigabytes. By today's standards, 8.6 ZB of data center traffic is the equivalent of roughly 9 trillion hours of high-definition streaming video. Not even Netflix has that much content.
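
As a quick sanity check on those figures, the back-of-envelope calculation below assumes roughly 1 GB per hour of high-definition video; that per-hour figure is an assumption for illustration, not a number from the Cisco paper.

```python
# Back-of-envelope check of the traffic figures above, assuming roughly
# 1 GB per hour of high-definition video (an assumption, not a Cisco figure).
total_zb = 6.5 + 2.1            # projected 2018 cloud + traditional traffic
gb_per_zb = 1e12                # 1 ZB = 1 trillion GB
gb_per_hd_hour = 1.0            # assumed

hours = total_zb * gb_per_zb / gb_per_hd_hour
print(f"{hours:.1e} hours of HD video")   # ~8.6e12, on the order of 9 trillion
```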

The cloud is dominant
More organizations are utilizing cloud services for a variety of reasons, from delivering better services to customers to increasing collaboration and efficiency inside the company. As a result, the increase in traffic and bandwidth must be handled somewhere, and that is the cloud data center. The next three to five years will see continuous growth and development of data centers.

Most of the increase in data center usage will fall to enterprise centers. These are the sprawling facilities being built by tech and Internet giants. Millions, and lately, billions, of dollars are being invested to handle this rise in services. Colocation centers are also becoming a more frequently used option for cloud workloads.

Such massive infrastructure must also be built with energy efficiency in mind. Already, data centers consume vast amounts of energy to power the equipment needed for these rising workloads and manage server room temperature and humidity levels. As a result, these facilities are being built with advanced cooling methods and new design ideas, such as free-air cooling.

Yet, in an interview with Data Center Knowledge, Hector Diaz, president of the Denver chapter of the Association of Data Center Management Professionals, laid out future data center trends. Specifically, he believes a greater focus will be placed on energy efficiency and on pushing past the current server room temperature guidelines from the American Society of Heating, Refrigerating, and Air-Conditioning Engineers. For example, future server rooms may set temperatures above the current maximum guidelines.

Monitoring will always be needed
No matter who owns and operates future data centers or how large the buildings become, monitoring power and atmospheric variables will always be needed. Monitoring electrical current is important because it can help prevent data center downtime, and any time offline is detrimental to the data center and those who rely on its services. For this, a current monitoring device is needed to provide real-time updates.

These units will help keep tabs on the energy consumed to maintain and power servers. To help keep conditions favorable, server rooms will also need to be equipped with a device like the WatchDog 15, which is built with temperature and humidity sensors. This helps a facility rely less on its HVAC cooling units, thereby decreasing energy consumption. Both pieces of equipment are designed to send out alerts if serious situations arise, such as a temperature drop or circuit damage.

Due to cloud computing growth and always-changing mindsets, data centers will benefit from planning for the future rather than staying complacent. No matter what future designs and layouts may look like, energy and atmospheric conditions must still continue to be controlled.

How to avoid data center downtime related to power
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/how-to-avoid-data-center-downtime-related-to-power-40063108
Fri, 29 May 2015 15:16:14 GMT | ITWatchDogs

More than 45,000 Massachusetts residents were left without electricity after an early morning power outage May 29, The Boston Globe reported. Outages affected those living in the Malden, Melrose, Medford and Everett areas of the state.

Officials said they were still unsure of what caused the outage, although they did comment that it originated from a substation located in Everett. National Grid spokeswoman Mary-Leah Assad was also unsure if the region's severe weather Thursday night played a part.

However, the Massachusetts Emergency Management Agency did say a fault on a transmission line appeared to play a role. The state is also home to numerous data centers, though none are located in the affected areas.

Downtime impact on data centers
According to Data Center Map, there are 30 colocation centers and five data centers in Massachusetts. Those reliant on the Massachusetts facilities were most likely relieved, because data center downtime can be damaging.

Disaster often strikes at inopportune times. A 2013 study from the Ponemon Institute revealed that 91 percent of data centers suffered an unplanned outage in the 24 months preceding the survey. Since 2010, the financial ramifications have also steadily increased. In 2010, the average cost of one minute of downtime was $5,617. By 2013, costs had increased 41 percent to $7,908. Needless to say, data center downtime can have a negative financial impact while also straining relationships between customers and providers.

Proper planning and equipment needed
To guard against downtime, or minimize the damage of an incident, data centers need to implement a strong prevention and response plan. The Ponemon study revealed only 36 percent of data centers "believe they utilize all best practices in data center design and redundancy to maximize availability."

A data center will be better protected against outages by monitoring power usage inside the facility. First, power line monitoring equipment should be employed. These devices measure readings such as real power, apparent power, voltage and the current drawn through power distribution equipment, among other variables, to help ensure maximum server uptime. Current monitoring equipment is also essential to track usage and prevent overloaded circuits. These devices are built to trigger alarms if problems occur at the circuit, aggregate, breaker or outlet level.
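
For readers unfamiliar with how those readings relate, the standard single-phase relationships are apparent power S = V x I, real power P = S x cos(phi), and power factor P / S. The snippet below works through an illustrative example; the voltage, current and phase-angle values are made up for demonstration, not readings from any particular device.

```python
# Standard single-phase relationships behind the power readings mentioned
# above: apparent power S = V * I (volt-amperes), real power P = S * cos(phi)
# (watts), and power factor = P / S. The values below are illustrative only.
import math

volts = 208.0           # measured RMS voltage (example value)
amps = 12.0             # measured RMS current (example value)
phase_angle_deg = 18.0  # example phase angle between voltage and current

apparent_va = volts * amps
real_w = apparent_va * math.cos(math.radians(phase_angle_deg))
power_factor = real_w / apparent_va

print(f"Apparent power: {apparent_va:.0f} VA")
print(f"Real power:     {real_w:.0f} W")
print(f"Power factor:   {power_factor:.2f}")
```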

Together, these units will help keep downtime to a minimum, if not wholly prevent it. Data centers looking to reduce downtime should also consider these units because of their ability to remotely monitor the conditions through a Web browser. If potential problems start to form, employees can instantly take action to prevent downtime and ensure revenue is not lost.

Internet of Things poised to increase data center workloads
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/internet-of-things-poised-to-increase-data-center-workloads-40062644
Thu, 28 May 2015 13:07:41 GMT | ITWatchDogs

The Internet of Things is the next big area for technology and Internet-based companies. It has evolved so much that Google is predicted to make a formal entry into the field during the company's annual developer conference. As companies continue to develop and release smart devices, the IoT will continue to see serious growth. As a result, data centers will most likely be relied upon even more heavily to handle the increased traffic.

What is IoT?
First, one should understand what this relatively new term is all about. According to The Guardian, British visionary Kevin Ashton coined the term in 1999, just before the dotcom bubble burst. According to TechTarget, the IoT is when people, objects and even animals are given unique identifiers so that data can be transferred over a network without requiring human-to-human interaction. As for what the "thing" is, it depends: these days, almost any electronic device can be connected to the IoT. The home automation company Nest is one of the best examples, connecting cooling and heating systems to the Internet and letting homeowners control settings from a smartphone.

Devices and "things" started out as a way to help consumers stay connected and improve their lives. Over time, however, gimmicks began to appear, such as the Wi-Fi-enabled Nabaztag. This device did little more than to read emails, but interest was so strong a dedicated developer community eventually formed around it.

Just because manufacturers can connect devices to the Internet does not mean they should, however. For one, the load on servers is increased for tasks that can be completed by a smartphone or computer.

"If you create products that are useful and that change lives, however big or small, people will buy them," Mark Lee, commercial director at the IoT company Intamac Systems told The Guardian.

Stress on data centers
The IoT affects data centers in ways beyond placing a heavier load on server rooms. According to Bill Kosik of HP data center facilities consulting, the IoT also has an environmental impact on these facilities. These effects stem from distributing more workloads to servers, since energy is needed to power and maintain the additional equipment. Thousands of these devices, and sometimes more, need a constant connection to servers, and this is only the beginning of a larger growth curve.

"Equal, or even greater, investments in the IoT platform services residing in the datacenter will be instrumental in delivering the IoT promise of anytime, anywhere, anyhow connectivity and context," senior IDC research member Rick Villars said.

Environmental impact
The need for energy-efficient facilities is more important than ever. According to GreenTech Efficiency, "energy costs are the fastest-rising expense for today's data centers." High costs stem from facilities running virtually all day, every day. Energy is needed not only to power and keep equipment running, but also to maintain it. Typically, server room temperature and humidity levels are the most important conditions that must be constantly monitored. Even the slightest buildup of condensation can lead to a server failing, for example.

To help keep an eye on conditions and increase energy efficiency, a climate monitor is needed. A device such as the WatchDog 100 will allow server room operators to manage temperature and humidity remotely at all times. A Class A1 data center's temperature is recommended not to dip below 64.4 degrees Fahrenheit, and the WatchDog unit can be programmed to automatically distribute alerts if the temperature reaches 60 degrees F, for instance. The unit also helps ensure HVAC units are not always running, because the data the WatchDog collects can later be analyzed for trends and used to plan cooling.

As the IoT continues to grow, climate monitoring will gain even greater importance to help increase energy efficiency and maintain equipment as growing workloads continue to demand greater energy consumption.

Phoenix-area attracting more data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/phoenix-area-attracting-more-data-centers-40061942
Tue, 26 May 2015 14:49:25 GMT | ITWatchDogs

Recent developments in Arizona are positioning Metro Phoenix as the next hot spot for data centers. In an interview with GlobeSt, Michael Ortiz, associate vice president at Collier International, said data center growth in one of the fastest growing technology markets can be attributed to state legislation known as S.B. 2009.

Governor Jan Brewer signed the bill June 19, 2013. According to Ortiz, the legislation took a diversified approach in the hopes of attracting small to medium data center developers, and S.B. 2009 has appealed to both large-scale organizations and colocation providers. In fact, data center operators are offered sales and use tax incentives for approximately 10 to 20 years if their facilities consume as little as 500 kilowatts of energy, in addition to benefiting from the area's lower average utility rates. Ortiz said the average utility rate in Arizona is $0.07 per kilowatt-hour, compared to $0.13 per kWh in California. Over a 10-year period, a single-megawatt user can expect $7.5 million to $8 million in savings while also lowering a data center's power usage effectiveness rating.

There are a few factors working in the Phoenix area's favor. The region provides a viable alternative for companies outside of key data center regions, such as the Pacific Northwest, California, Texas, northern Virginia and Colorado. Redevelopment opportunities are also plentiful, according to Ortiz. Developers are repurposing existing infrastructure into data centers. For organizations based in Phoenix, these are promising signs. Much of the existing infrastructure already has suitable access to fiber connections and power, allowing potential developers to set up shop in a short time frame.

Other important factors
Existing infrastructure is not the only reason why data centers are sprouting up throughout Arizona. The state also has a friendly climate facilities can take advantage of. Server rooms require large amounts of energy to power and maintain the vast amounts of equipment. It is not feasible to simply "turn off" equipment when it isn't being used because of the demands of today's global economy. Even a short window of downtime can cause organizational disaster and lead to monetary losses.

To combat rising costs, infrastructure is being built with an increasing reliance on the surrounding climate to help maintain server room conditions. Server room temperature can be maintained through the use of free-air cooling in some instances. A misperception may exist about how the Arizona air can cool equipment because of the state's average high temperatures, but the air is actually helpful.

Recommendations say server rooms should not exceed 82 degrees Fahrenheit, with optimal temperatures between 65 and 80 degrees F, according to the American Society of Heating, Refrigerating and Air-Conditioning Engineers. The hot, desert climate of Phoenix presents a suitable environment. The average temperature in 2014, according to the National Weather Service, was 77.1 degrees F, the warmest on record. Data centers in the area also face little risk from damaging weather: earthquakes and tornadoes rarely strike, and it does not rain enough to pose a serious flood threat.

Supervision of conditions still needed
However, data centers must find the right balance between free-air cooling and an HVAC system to ensure equipment does not get too hot or fall victim to condensation buildup. Server room temperature should never exceed 90 degrees F or fall below 59 degrees F. Likewise, the relative humidity should never be greater than 80 percent.

To help track and supervise these conditions, data centers need to invest in a climate monitor such as the WatchDog 15. The unit serves multiple purposes, from monitoring temperature and humidity to letting employees remotely keep an eye on conditions. As facilities become even more massive, the ability to get remote updates is essential because it frees up server room operators for other tasks. Data center operators looking to move to Arizona will need such solutions to really thrive in the state.

The device is also built to immediately detect atmospheric anomalies. One of its biggest appeals is that the unit can distribute up to 50 alerts to multiple people. Here's how it works: If the server room temperature is set to 65 degrees F and begins to rise, notifications will be sent out. An employee might be alerted when the temperature reaches 70 degrees F, and more alerts will go to more people if the problem is not dealt with, as in the sketch below.
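
Here is a minimal sketch of that escalation logic in Python; the thresholds and recipient addresses are illustrative assumptions, not the unit's actual configuration.

```python
# Escalation sketch for the example above: a reading is checked against
# rising thresholds, and each breached threshold notifies a wider group.
# Thresholds and recipients are illustrative assumptions.
ESCALATION_STEPS = [
    (70.0, ["oncall@example.com"]),
    (75.0, ["oncall@example.com", "facilities@example.com"]),
    (80.0, ["oncall@example.com", "facilities@example.com", "manager@example.com"]),
]

def recipients_for(temp_f):
    """Return everyone who should be notified at this temperature."""
    notified = []
    for threshold, people in ESCALATION_STEPS:
        if temp_f >= threshold:
            notified = people
    return notified

print(recipients_for(68.0))   # [] - still inside the normal range
print(recipients_for(72.0))   # first group only
print(recipients_for(81.0))   # full escalation list
```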

Equipment in Arizona data centers will be properly maintained when a climate monitoring device is used. When working with free-air cooling methods, a data center can vastly reduce energy consumption and increase energy efficiency.

Climate control devices can help detect fires
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/climate-control-devices-can-help-detect-fires-40061522
Thu, 21 May 2015 18:04:24 GMT | ITWatchDogs

Anytime a fire starts in a data center, disaster will follow, and fires can start in a facility of any size. For instance, an Amazon data center in suburban Virginia was the victim of a massive fire earlier this year. No one was injured, nor were company operations disrupted, because the facility was still under construction. But the fire's intensity led to damage that undoubtedly pushed back the timeline of the facility's completion.

However, some data centers are not so fortunate when fires occur. As a result, services go offline for hours, days or even sometimes weeks or months. How severe the outage is depends on the severity of the fire.

Recent fire
The Louisiana Office of Motor Vehicles halted operations on May 21 due to a fire at the state police data center in Baton Rouge. The outage meant all of the OMV locations could not process transactions and had to turn people away, according to an Associated Press report. Other services affected by the fire included department-wide email and Internet access, administrative processes and the ability of state police to process fingerprints or handgun permits. If a state trooper were to pull over a driver, the officer would also not be able to look up the vehicle's information because of the fire.

Spokesman Maj. Doug Cain said the outage contributed to all systems within the Department of Public Safety shutting down for approximately four hours.

"It appears to be some sort of short that took place because the electrical panel is burnt up," Crain said in an interview.

Fire prevention
According to its most recent study on server rooms, the National Fire Protection Association found that the number of data center fires has declined since the '80s and '90s. The study revealed that 78 percent of fires resulted from electronic equipment, while heating, ventilating and air conditioning equipment accounted for another 16 percent. The NFPA also said 77 percent of locations where a fire was reported had detectors installed.

Server rooms are delicate areas, and care must be taken to reduce the risk of fires there. One of the most important precautions employees can take is to handle flammable materials with extreme care. Objects sometimes taken for granted, such as packing and boxing materials, pose a risk, so it is recommended that staging areas be set up outside the data center for unboxing and unwrapping equipment.

There should also be regular inspections of fire prevention and safety equipment. Plans need to also be created in the event a fire does break out. These plans should include communicating with fire departments to let them know the layout of the facility and that some firefighting equipment may not be suitable for a data center.

Causes of fire
Server room fires can start in a variety of ways, and data centers should pay particular attention to how environmental conditions can potentially lead to disaster. Server room equipment must be maintained within certain temperature and humidity levels; recommendations on these fronts are provided by the American Society of Heating, Refrigerating and Air-Conditioning Engineers.

If temperatures are too high, server equipment will start to suffer and eventually stop working. Humidity must be accounted for because too high moisture levels will cause condensation buildup, which may lead to electrical shorts. But if there is too little moisture, electrostatic charges will start to escalate. Any static electricity discharges can damage equipment and potentially cause a fire.

It may seem like data centers are in a bind when it comes to monitoring atmospheric conditions, but that is not the case.

Monitoring devices
Every server room needs to be installed with a device such as the WatchDog 100. This unit is built with onboard temperature and humidity sensors. Data centers will benefit because these devices will trigger alarms and send out alerts if the server room temperature or humidity levels change outside the desired parameters. The unit can help detect fires before the disaster worsens.

The unit also allows for remote monitoring of conditions through a Web interface. With a WatchDog climate control device installed, server room temperature can be kept within ideal conditions, reducing reliance on HVAC cooling units and, with it, the risk of fires.

Biogas usage on the rise
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/biogas-usage-on-the-rise-40061135
Wed, 20 May 2015 17:12:04 GMT | ITWatchDogs

In its pursuit of 100 percent renewable energy, data center giant Equinix installed biogas fuel cells at its Silicon Valley facility. According to Data Center Knowledge, Equinix utilizes a variety of clean energy solutions, from solar power to power purchase plans. However, at this point, 30 percent of energy used is generated from renewable sources.

The company's website details the resources used to generate power for its U.S. data centers. Coal was the most used, at 27 percent, followed by nuclear power at 22 percent and natural gas at 19 percent. Renewable resources were the least used, at 17 percent.

The company said in a statement that the one-megawatt Bloom Energy cell "will provide an estimated 8.3 million kilowatt-hours" of clean energy per year. The data center may also become more attractive as a result, especially for companies wishing to engage in sustainable energy practices.

What is biogas?
It must be noted that biogas differs from natural gas. While it is characterized as clean energy, natural gas is still a fossil fuel. Biogas is different: it is the "gaseous product of anaerobic digestion of organic matter," according to the U.S. Department of Energy. In other words, decomposing organic matter produces a renewable natural gas. When it is cleaned up, biogas can be used to generate heat and electricity.

Biogas and data centers
Equinix is not the first, nor will it be the last, data center operator to implement clean energy alternatives. In light of the growing strain on data centers, energy consumption is increasing. An April 2015 study by IDC revealed that just over 25 percent of data centers reported a power usage effectiveness rating between 2.4 and 2.7. Equinix is looking to lower its PUE ratings to the 1.3 to 1.4 range. Currently, fewer than 5 percent of data centers achieve a rating that low.
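
PUE is generally defined as total facility energy divided by the energy delivered to IT equipment, so a lower number means less overhead. The short sketch below works through illustrative numbers in those two ranges; they are examples, not Equinix or IDC figures.

```python
# PUE (power usage effectiveness) as generally defined: total facility
# energy divided by the energy delivered to IT equipment. The numbers
# below are illustrative, not figures from the companies cited above.
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

print(pue(2400, 1000))   # 2.4  - the upper end of the range in the IDC study
print(pue(1350, 1000))   # 1.35 - within the range Equinix is targeting
```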

Also, an April 2014 study by Greenpeace gave the colocation company a clean energy index rating of 16 percent. The rating highlights the difficulties colocation centers face because of the immense energy needed to suit the needs of all their tenants.

Other companies are utilizing biogas as well. Microsoft's Cheyenne, Wyoming, data center produces biogas for its fuel cells from human waste. The company will eventually donate the facility to the state, but powering a facility with waste indicates companies are serious about using renewable energy.

Why is consumption so high?
The same IDC report indicated data center energy consumption is so high because of the need to power and maintain all the equipment. Server rooms must be kept at certain temperatures and humidity levels, according to guidelines provided by the American Society of Heating, Refrigerating and Air-Conditioning Engineers. A server room going even slightly over the maximum temperature can potentially experience problems. Keeping servers and HVAC units running all the time requires a lot of electricity, hence the high industry-wide PUE ratings.

To maintain ideal conditions and lower energy consumption, server rooms need to be installed with monitoring devices such as the WatchDog 100 POE. This unit is built with onboard temperature and humidity sensors. If any of those atmospheric variables were to fluctuate outside set parameters, the unit will trigger alarms and send out alerts. Using this unit will help data centers rely less on HVAC units to cool server rooms and maintain ideal conditions.

A monitoring unit and a biogas fuel cell will help a facility become much more energy efficient. The fuel cell can provide the clean energy needed to power a data center while the WatchDog will ensure server room temperature and conditions are maintained. The technology also relieves pressure off server room operators because they do not have to constantly worry about temperature or humidity due to the ability to remotely monitor those conditions through a Web interface.

Wind energy just got a big boost
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/wind-energy-just-got-a-big-boost-40060977
Tue, 19 May 2015 17:48:27 GMT | ITWatchDogs

As new data centers continue to be built, large Internet and technology companies are seeking ways to properly implement renewable energy to power these facilities. According to the Sun & Wind Energy magazine, Amazon is having 65 wind turbines and 67 transformers delivered to its wind farm in Benton County, Indiana, for example. The energy generated from these turbines will be used to power an Amazon Web Services data center.

Just recently, General Electric unveiled a new development with regard to wind energy. It might be a while until it is widely adopted, but GE's advances prove promising for data centers wanting to operate on 100 percent renewable energy.

According to the Times Union, the company's new wind farm technology has led to an output increase of 20 percent. The Digital Wind Farm has spent the last 18 months in development. Essentially, turbines and cloud computing technologies work together to collect and analyze certain data measurements. The system then analyzes the data to improve a turbine's performance over time. Hypothetically, a data center getting energy from a wind farm will become even more energy efficient over time while reducing future turbine maintenance costs.

Energy report cards
Why are data centers so eager to go green? According to an April 2014 study by Greenpeace, only one company, Apple, has a 100 percent clean energy index rating. Other companies are getting there - Yahoo scored a 59 percent, while Facebook and Google scored 49 percent and 48 percent, respectively. The embrace of green initiatives is important because of the growing number of online users. The report indicated that in 2012, online users totaled 2.3 billion. Come 2017, that number will increase to 3.6 billion. Data centers should expect workloads to gradually increase in the coming years as a result. With rising workloads comes a need for more electricity to keep servers and ancillary data center solutions running at all times.

Causes of high energy consumption
High energy consumption stems from data centers needing to power and maintain equipment. Server room temperatures and humidity levels are especially important because those variables pose the greatest risk. If the temperature is too low or high, servers may malfunction, whereas too much humidity may cause electrical shorts.

Server room monitoring
A data center's energy efficiency is often measured by its power usage effectiveness score. The lower the rating, the more efficient and environmentally friendly a facility is. Data centers looking to lower PUE ratings should implement server room monitoring devices, if they have not already. A unit such as the WatchDog 100 is built with onboard temperature and humidity sensors. Server room operators can remotely monitor the atmospheric conditions through a Web interface. In the event server room temperature or humidity were to fall or rise outside the desired parameters, alarms will be triggered and alerts sent out. Cooling costs are therefore lowered because data centers are not relying as heavily on HVAC cooling units to maintain ideal conditions.

Data centers are arguably some of the most important facilities in the world. Many services, business operations and government functions are possible because of data centers. Energy produced by wind turbines can provide the power needed to ensure data centers do not go offline.

Wind energy, in conjunction with a server room monitor, will lead to improved energy efficiency.

Consolidation can help increase energy efficiency
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/consolidation-can-help-increase-energy-efficiency-40060445
Mon, 18 May 2015 17:35:26 GMT | ITWatchDogs

By 2017, the International Data Corporation estimates there will be 8.6 million data centers worldwide. IDC's November 2014 forecast also estimated worldwide data center space will increase to 1.94 billion square feet in 2018, up from 1.58 billion square feet in 2013.

However, the forecast also found the number of data centers will decline after peaking in 2017. This is not a bad sign because data centers will still be heavily relied upon. Organizations are instead developing new mindsets.

"Over the next five years, a majority of organizations will stop managing their own infrastructure," IDC data center and cloud researcher Richard L. Villars said in a statement.

This forecast indicates a growing trend: companies, such as Zynga, are turning towards data center providers to handle infrastructure, rather than the companies themselves investing in new facilities. This makes sense because data center space is expected to increase within three years as more expansive infrastructure is built.

IDC called these shared facilities mega data centers. These large facilities accounted for 19.3 percent of new data center space worldwide in 2013. When 2018 rolls around, that number is expected to jump to 44.6 percent of new, high-end data center space worldwide, according to IDC.

Smaller companies that do not have the resources to build Facebook-esque infrastructure may find it easier to rent out space, or consolidate existing hardware. As more tenants occupy a data center, the need for an energy efficient building increases. If a mega data center shuts down, hundreds, maybe thousands, of organizations will be negatively impacted.

What is consolidation?
Back in February 2010, the federal government created the Federal Data Center Consolidation Initiative. Launched by the country's first federal chief information officer, the initiative aimed to reduce hardware costs, promote green IT and shrink the energy footprint of federal data centers, among other goals. While it was a plan for federal agencies, businesses can also learn from it.

The federal government arguably started the consolidation trend. Organizations typically engage in consolidation when they virtualize servers and storage and implement cloud computing to replace large, on-site mainframe computers.

To see how organizations can save money and increase efficiency through consolidation, one need look no further than a fall 2014 report from the federal government. According to ComputerWorld, consolidation led to $1.1 billion in savings, in addition to a reduced carbon footprint. Of the government's 9,658 data centers, 10 percent were closed. Senior editor Patrick Thibodeau said 44 percent of federal data centers will be shut down by the end of 2015.

Increased energy efficiency
It is no secret data centers consume large amounts of energy and as a result are expensive to run. A data center's efficiency is commonly measured by its power usage effectiveness. According to an April survey by IDC, a little more than 20 percent of data centers reported a PUE rating between 2 and 2.3. Of that group, approximately 45 percent of facilities wish to reduce energy consumption and improve cooling methods.

However, building state-of-the-art infrastructure that utilizes free-air cooling methods is not always feasible. That's where consolidation can help. For example, the Lawrence Livermore National Laboratory received a Sustainability Award from the U.S. Department of Energy in early 2015 for its efforts to reduce costs and save energy through consolidation. LLNL's consolidation undertakings included moving 126 servers into its enterprise data center, creating 140 virtualized servers and closing 26 data centers representing 26,000 square feet. Less building maintenance led to a reduction in costs.

Consolidation may take time - LLNL started the process in 2011 - but the effort has saved the lab $305,000 in energy bills. According to a statement, the goal of reaching an average sitewide PUE rating of 1.4 will be met during fiscal year 2015. The lab was able to achieve a low PUE rating by centralizing data centers that were previously scattered. For comparison, IDC's survey revealed that less than 5 percent of data centers reported a PUE rating between 1.4 and 1.6.

Monitoring devices
Data centers should already be utilizing devices to monitor server room temperature and atmospheric conditions. Yet, if IDC's prediction of more mega data centers comes to fruition, the need for monitoring equipment will become even greater. That's because the facilities that will remain will be handling more workloads than ever, making them even more mission critical than they are now.

A powerful device, such as the WatchDog 1250, is built with onboard sensors that can measure temperature, humidity, airflow, sound and light. This unit will trigger alarms if temperature or humidity were to fall or rise outside the desired parameters while also sending out multiple alerts. These devices will help current, and future, data centers and mega facilities to rely less on HVAC units to cool equipment. Ideal conditions can be maintained while lowering energy bills.

Organizations looking to lower their carbon footprint and save money may want to look to data center consolidation. No matter the size of the infrastructure, climate monitors will always be needed to protect against unexpected data center temperature and humidity changes. Even the slightest downtime can prove costly.

Cold storage increase signals multiple trends
http://www.itwatchdogs.com/environmental-monitoring-news/cold-storage/cold-storage-increase-signals-multiple-trends-40060191
Fri, 15 May 2015 17:03:56 GMT | ITWatchDogs

The latest statistics from the National Restaurant Association indicate there are 1 million restaurants throughout the U.S. That eye-opening number of establishments has played a part in the food and agricultural industry's growing demand for cold storage.

Need for cold storage
The growth of cold storage needs is significant for a few reasons. First, it reflects a growing number of food establishments across the U.S. and other countries. This demand poses a challenge in terms of logistical support - restaurants and grocery stores have to find a way to transport fresh foods quickly and effectively. A poor supply chain can result in food arriving at an establishment after it has already expired.

Changing mindsets present another challenge. A late 2014 study by the NPD Group highlighted a growing trend: fresh food consumption will continue to grow over the next five years, mainly because of the younger generation's interest in healthy foods and organic labels. The increasing demand comes after fresh food consumption had already grown by 20 percent from 2003 to 2013.

Logistics and cooling come into play because vendors, grocers and restaurants must all find the quickest way to transport fresh food in ideal conditions. Spoiled food will most likely be written off as a loss for companies.

Technology can help
According to a 2015 forecast by the NRA, 44 percent of fast casual restaurants said food costs presented an operational challenge. This number correlates with consumers' growing demand for fresh food: for food establishments, obtaining fresher foods typically costs more.

Technology can help with logistical planning and shelf life of foods, fruits and vegetables. A unit such as the WatchDog 15 can be incorporated into a cold storage facility, for example. The unit is built with onboard temperature and humidity sensors and allows for remote monitoring through a Web interface. If conditions were to fall or rise outside of the desired cold storage parameters, the unit will trigger multiple alarms and send out alerts. Restaurant workers can then act quickly to fix the problem.

The unit can be of great help to food establishments because it monitors conditions once food is on-site, helping a cold storage space maintain the ideal environment so food stays fresh rather than spoiling prematurely.

As the number of cold storage spaces and restaurants increase, a temperature monitoring unit will help establishments and companies ensure quality fresh foods are maintained and later served to customers.

Can wearables help data centers in the future?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/can-wearables-help-data-centers-in-the-future-40059986
Thu, 14 May 2015 18:30:57 GMT | ITWatchDogs

Technology is constantly evolving and changing how daily interactions and processes are completed. Computers are able to handle increasingly large workloads, while smartphones have allowed employees to adopt a mobile mindset. Work can be completed from anywhere as devices can connect workers no matter the location or the time.

These new developments may change the way data centers are monitored. In particular, wearable technology may make it easier to complete certain tasks and projects within facilities.

Wearable pilot program
According to Data Center Knowledge, Compass Data Centers, in collaboration with American Electric Power and ICARUS Ops, began to outfit employees with wearables recently. The pilot is separated into three phases and Compass believes the wearables will be ready for live use at the end of 2015.

Compass is looking to make tasks easier for its facilities management teams. For example, paper documents are being digitized, and software is being developed to integrate digital, interactive checklists into the employee's wearable. By embracing digital checklists, companies and data centers will have a record of the work completed - who did it, when, what was done and whether the task was completed correctly. Such information may help reduce the number of issues within a data center.

Potentially monitor server rooms with wearables
With an early ability to track certain tasks, it will be interesting to see whether companies and data centers develop wearable software further. Specifically, will there be a day when employees can glance at their wrist to check the temperature of a server room? It is not that far-fetched, because some of the technology exists already.

Currently, data centers consume large amounts of energy. According to an April IDC Enterprise Data Center report, a little more than 25 percent of data centers have an operating budget between $1 million and $1.9 million, and 24 percent of that budget is dedicated to power and cooling.

Nearly a quarter of the budget goes towards maintenance because server rooms must be kept under certain conditions. Temperature and humidity can cause equipment malfunctions if not properly monitored. For example, if humidity levels are too high, the internal components of a server can collect moisture and break down.

Combining the equipment
To keep an eye on atmospheric conditions, data centers need to utilize units such as the WatchDog-15 PoE. This device is built with onboard temperature and humidity sensors and will trigger alarms if either of the conditions were to fall or rise outside the desired parameters. Data centers can lower energy bills by relying less on HVAC units and more on environmental monitoring solutions to manage ideal facility conditions.

How can a WatchDog-15 PoE work with a wearable? First, the unit already provides a Web interface for remote monitoring and data collection. Hypothetically, this could be adapted to a wearable's screen, as sketched below. Not only would employees be able to monitor conditions at all times, they would also receive any alarms or notifications.
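
As a purely illustrative sketch of that idea, the Python snippet below polls a hypothetical monitoring endpoint and forwards out-of-range readings to a hypothetical wearable notification webhook; both URLs and the JSON field name are assumptions, not part of the WatchDog interface.

```python
# Illustrative sketch: poll a (hypothetical) monitoring endpoint and forward
# out-of-range readings to a (hypothetical) wearable notification webhook.
# Both URLs and the JSON field are assumptions for illustration only.
import json
import time
from urllib.request import Request, urlopen

MONITOR_URL = "http://192.168.1.50/readings.json"     # hypothetical
WEARABLE_WEBHOOK = "https://notify.example.com/push"  # hypothetical
MAX_TEMP_F = 80.0

def push_to_wearable(text):
    body = json.dumps({"message": text}).encode()
    req = Request(WEARABLE_WEBHOOK, data=body,
                  headers={"Content-Type": "application/json"})
    urlopen(req, timeout=5)

while True:
    with urlopen(MONITOR_URL, timeout=5) as resp:
        reading = json.load(resp)
    temp = float(reading["temperature_f"])            # assumed field name
    if temp > MAX_TEMP_F:
        push_to_wearable(f"Server room at {temp:.1f} F")
    time.sleep(60)
```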

Wearable technology is currently not heavily relied upon in data centers. However, if Compass's pilot program proves successful, the history of technology indicates wearables will be built to handle more tasks, including monitoring server room conditions.

Hydroelectricity gains momentum
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/hydroelectricity-gains-momentum-40059630
Wed, 13 May 2015 16:46:33 GMT | ITWatchDogs

Internet service provider CenturyLink recently opened a data center in the state of Washington, and the new infrastructure will support the company's hybrid IT portfolio. CenturyLink placed a big emphasis on using clean energy, and the data center may signal the start of a new trend among colocation providers.

Abundant hydroelectric power
The building, located in Moses Lake, Washington, utilizes the Columbia River for power. This type of green energy, known as hydroelectric power, is becoming common in the Northwest as more companies flock to the area due to the ideal surroundings.

Hydroelectricity is the production of electrical power from the gravitational force of flowing or falling water. The U.S. Geological Survey said hydroelectricity is one of the oldest forms of energy production. Since 1990, hydroelectric production has provided about 20 percent of the world's electricity, according to the World Water Assessment Program.

Colocation centers struggle for efficiency
CenturyLink's embrace of renewable energy is important because the company is a colocation provider, and its adoption of clean energy may signal a changing mindset. Like any data center operator, colocation providers consume a lot of energy to power equipment and maintain server room conditions. However, colocation centers face challenges in achieving energy efficiency because of space and flexibility constraints. Colocation providers have to build infrastructure where the market demands it, unlike other companies that can choose a facility's location based on other factors. Interxion's director of product management, Bob Landstorm, told Data Center Knowledge that customer demands often result in colocation centers being built wherever connectivity is richest.

"This often means cities tight on space," added Landstorm.

In previous years, tenants of colocation centers cared little about energy efficiency. However, those mindsets may be shifting, as evidenced by Etsy's emphasis on energy efficiency for data center operations. CenturyLink believes its new facility will further entice customers to use an energy efficient provider.

Efficiency in atmospheric monitoring
According to a recent IDC Enterprise Data Center survey, 10.5 percent of a data center's operating budget is dedicated to cooling, and power eats up another 13.8 percent. These numbers are the result of data and colocation centers needing to keep server rooms within certain conditions. Equipment may fail if humidity is too high or too low, for instance. To monitor these variables, data centers need to rely on devices such as the WatchDog 1200. This powerful unit is built with onboard temperature, humidity, airflow, light and sound sensors and provides remote monitoring through a Web interface. Most importantly, if any of the conditions fall or rise outside the desired parameters, the unit automatically triggers alarms while sending out alerts.

Working with hydropower
Current estimates suggest hydroelectricity supplies more than 85 percent of CenturyLink's utility power. Colocation centers that can't relocate to a region with abundant green energy can still improve energy efficiency by using a WatchDog unit, since better monitoring means relying less heavily on HVAC units to maintain server room temperature. The electricity saved can then be put toward other facility needs.

Can Tesla's battery power data centers?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/can-teslas-battery-power-data-centers-40058955
Mon, 11 May 2015 17:18:36 GMT | ITWatchDogs

Earlier in May, Tesla unveiled the Powerwall battery, a device for homes and businesses that stores electricity generated by solar panels. The company hopes the battery, which measures 33.9 inches in width and 51.2 inches in height, will offer homeowners independence from electricity grids, provide energy efficiency and serve as an emergency backup. According to ZDNet, the battery is designed to store excess energy generation for later use.

However, one can argue the Powerwall battery has bigger implications, especially after Amazon partnered with Tesla. Amazon Web Services, the company's growing cloud computing platform, announced a 4.8 megawatt hour pilot program to supplement the data center power at its U.S. West region site. According to SlashGear, in late 2014, AWS committed to running 100 percent on renewable energy, and Tesla's battery can help achieve that.

How the battery helps
In a statement, AWS engineer James Hamilton said the two companies had been working together for some time to drive innovation of high-capacity battery technology in data centers.

""Batteries are important for both data center reliability and as enablers for the efficient application of renewable power," said Hamilton.

He went on to explain how batteries are important enablers of renewable energy. AWS hopes the Powerwall battery will "help bridge the gap between intermittent production, from sources like wind, and the data center's constant power demands."

Data center's energy needs
At the April IDC Developer Conference, a survey looked at current issues facing data centers. Respondents cited a drain on operating expenses and power utilization efficiency as two of the top concerns. Results showed data centers spent approximately 24 percent of operating budgets on power and cooling, with around 25 percent of data centers registering a 2.4 to 2.7 PUE rating. To offer a comparison, Rackspace's new data center reportedly has a 1.15 PUE rating.
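
Power usage effectiveness is simply total facility power divided by IT equipment power, so those ratings translate directly into overhead. A minimal worked example, using an assumed 1,000 kW IT load (the load figure is an assumption, only the PUE ratings come from the article), shows why the gap between 2.5 and 1.15 matters:

# PUE = total facility energy / IT equipment energy, so a PUE of 2.5 means
# 1.5 kW of overhead (cooling, power conversion, lighting) per kW of IT load.
def pue(total_kw, it_kw):
    return total_kw / it_kw

it_load_kw = 1000.0                  # assumed IT load
legacy_total = it_load_kw * 2.5      # facility running at PUE 2.5
efficient_total = it_load_kw * 1.15  # facility running at PUE 1.15

print(f"Legacy facility draw:    {legacy_total:.0f} kW (PUE {pue(legacy_total, it_load_kw):.2f})")
print(f"Efficient facility draw: {efficient_total:.0f} kW (PUE {pue(efficient_total, it_load_kw):.2f})")
print(f"Overhead saved:          {legacy_total - efficient_total:.0f} kW")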

Data centers are increasingly looking for methods to increase energy efficiency because of rising costs associated with the need to maintain data center temperature. AWS and Tesla hope this battery becomes a tool to better power and monitor server rooms, supplying data centers with renewable energy to keep everything going.

Tools for monitoring
No server room is complete without some type of atmospheric monitoring device. Without one, equipment may fail due to hotter temperatures or high humidity levels. Data centers looking for a solution may want to consider a unit such as the WatchDog 1250. This powerful unit is built with onboard temperature, humidity, dew point, airflow, light and sound monitors. A Web interface also enables remote monitoring of server rooms, and if any of the variables fall outside the desired levels, escalating alarms are triggered. Annual energy bills will be lower because data centers can rely less on HVAC units to maintain ideal conditions inside facilities.

Tesla and AWS hope data centers achieve efficiency by relying on the battery's storage capability and tapping into that energy during peak times. Those peak times can be discovered from the onboard sensors of the WatchDog 1250.
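
One way those peak times might be identified is sketched below: the example scans a day of hourly readings (temperature here, standing in for whatever metric is trended) for hours well above the daily average. The hourly figures and the threshold are illustrative assumptions, not AWS or WatchDog data.

# Find "peak" hours in a day of logged readings; hours that run hottest are a
# reasonable proxy for when load and cooling demand are highest.
hourly_temp_f = [72.1, 71.8, 71.5, 71.4, 71.9, 72.5, 73.4, 74.8, 76.5, 78.2, 79.6, 80.4,
                 80.9, 81.2, 80.8, 80.1, 78.9, 77.4, 76.0, 74.6, 73.5, 72.9, 72.5, 72.2]

average = sum(hourly_temp_f) / len(hourly_temp_f)
peak_hours = [hour for hour, temp in enumerate(hourly_temp_f) if temp > average + 2.0]

print(f"Average reading: {average:.1f} F")
print(f"Hours more than 2 F above average (battery discharge candidates): {peak_hours}")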

How to utilize cold storage data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/how-to-utilize-cold-storage-data-centers-40058572
Fri, 08 May 2015 16:36:01 GMT | ITWatchDogs

Facebook is making gains when it comes to cold storage data centers. Instead of just utilizing conventional data centers, the social media company built new infrastructure in Prineville, Oregon, and Forest City, North Carolina. This progress could change current mindsets about how data is stored.

Approximately 2 billion photographs are shared daily on the website, and according to an official blog post from the company, this poses a problem. The infrastructure team faced the challenge of keeping photos instantly available at an ever-growing scale. Essentially, engineers and software specialists wanted to ensure old pictures used less storage and power, but could still be as easily accessible as newer photos.

Facebook decided to build two data centers exclusively for storing photos and videos. Because these data centers were built for a very specific function, the company was able to reduce energy consumption and use less equipment for storage there, according to Data Center Knowledge.

Hot or cold storage
Temperature is one of the most important atmospheric variables data centers must monitor. The right temperature depends on the type of data center. Newer and more frequently used files tend to go through Facebook's regular data centers. Temperatures vary, but most buildings abide by the American Society of Heating, Refrigerating and Air-Conditioning Engineers guidelines. The organization recommends temperatures between 65 and 80 degrees Fahrenheit.

Cold storage data facilities are built differently. These facilities tend to store replicated files and as a result, these data centers are often built without backup generators or redundant electrical infrastructure. Cold storage data centers therefore consume less energy than their hotter counterparts.

The server equipment Facebook used was also optimized to increase efficiency. Facebook engineers created custom software to power up storage only when it is needed. This custom software led to the number of fans in each storage node decreasing from six to four, according to VentureBeat. Also, the number of power supplies per shelf decreased from seven to five.

Why is temperature important?
Data centers consume large amounts of energy, and that energy is expensive. A company may explore any option to reduce energy costs and increase energy efficiency. One such method is the installation of an environmental monitoring device. A unit such as the WatchDog 1200 is built with onboard temperature, humidity, dew point, airflow and sound monitoring capabilities. The unit, known as the WeatherGoose II, also provides Web-based remote monitoring and collection of data for trend analysis. In the event a server room gets too hot or cold, the unit will trigger escalating alarms and send out multiple alerts. This way, data center operators can rely less on HVAC units to keep facilities under ideal conditions at all times, thus lowering annual energy bills. The unit's importance cannot be overstated for data centers looking to increase energy efficiency.

The WeatherGoose II can help monitor cold storage temperatures by ensuring the temperature does not drop so low that server equipment is damaged.

Cold storage yet to catch on
Time will tell if Facebook's cold storage data center model will catch on. But the company hopes its Open Compute Project will lay the groundwork for more data centers to follow. As it stands, Data Center Knowledge said data from a hot data center on the West Coast is backed up to a cold storage facility on the East Coast.

Facebook says each cold storage data center holds hundreds of petabytes of data. If the company can demonstrate minimal downtime and better energy efficiency, there is little doubt more companies will build cold-storage infrastructure.

Data center emissions can be cut by multiple methods
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-emissions-can-be-cut-by-multiple-methods-40058214
Thu, 07 May 2015 16:34:16 GMT | ITWatchDogs

Microsoft has long been an innovator when it comes to data centers. During April's Data Center World Conference, Paul Slater, Microsoft's director of the Applied Incubation Team, told audience members how the company approaches data center design, from construction in locations that can utilize free air cooling to building for future flexibility as new technologies emerge.

The company's data center strategy is part of a larger effort to reduce its carbon footprint. In 2012, the company committed to 100 percent carbon neutral operations, according to Data Center Knowledge. It hoped to achieve this lofty goal by charging internal departments fees. A department's fee was dependent on the amount of greenhouse-gas emissions it was responsible for.

As of early May 2015, Microsoft can say the effort has been successful thus far. But how, and can other data centers emulate these green initiatives?

Identify high energy usage
It is no secret data centers are some of the world's largest energy users. Vast amounts of energy are required to power equipment as well as keep it at specific temperature and humidity levels. IT-related carbon emissions currently account for 2 percent of the world's emissions, according to the Global e-Sustainability Initiative SMARTer 2020 report. Data centers represent the fastest-growing portion of IT's energy footprint, and the report states demand is expected to rise by 81 percent by 2020, which will further contribute to high energy usage.

High energy consumption can also result from technological limitations. Serial links, for example, are idle 50 percent to 70 percent of the time, according to TechRepublic, yet they still consume up to 20 percent of a microprocessor's power, which in the grand scheme of things can amount to 7 percent of a data center's power budget.

Set a goal and stick to it
If going green and eliminating carbon emissions is a goal for data centers, companies need to implement policies to achieve it, starting with investing the necessary funds to achieve the vision. Microsoft stuck by its goal of eliminating carbon emissions, and as a result, the company reduced carbon emissions by 7.5 million metric tons and saved $10 million.

Utilize current cooling methods
As research into better energy efficiency goes on, server rooms will always need to be equipped for the monitoring of certain atmospheric variables. Devices such as the WatchDog 1250 are built with onboard sensors to monitor temperature, humidity, airflow, light and even sound. The variables can be monitored remotely through a Web interface or through a company's intranet. In the event temperature or humidity moves outside predetermined parameters, the device will trigger multiple escalating alarms and send out alerts. This way, facilities can rely less on energy-intensive HVAC units without worrying about equipment overheating or otherwise malfunctioning.

Fund new projects
Companies need to actively find ways to cut carbon emissions, from building new infrastructure in colder climates to rethinking existing beliefs. CIO Australia said some managers have decided to run servers hotter than normal, "after realizing the impact on equipment is not as harsh as previously thought."

Elsewhere, researchers from the University of Illinois may have discovered a method to have serial links power off when in idle mode and quickly power back on. According to the university's Department of Electrical and Computer Engineering, this new approach can save data centers in the U.S. $870 million annually and drastically improve data center energy efficiency.

The money Microsoft saved by cutting emissions is subsequently being used to fund new innovations the company hopes will lead to further reduction in a data center's energy consumption.

Poor oversight can decrease efficiency
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/poor-oversight-can-decrease-efficiency-40057883
Wed, 06 May 2015 16:58:28 GMT | ITWatchDogs

Not every data center is created equal. In terms of efficiency and design, older infrastructure simply cannot keep up with the multibillion-dollar facilities constructed by the world's largest technology and Internet companies. Newer data centers have the advantage of using homogeneous IT equipment, and as a result, standardization and efficiency are maximized. Older data centers can expect a further decrease in efficiency if the proper management and standards are not in place.

Oak Ridge National Laboratory's data center has been in use since 2001. Researchers remarked how inefficient the 20,000-square-foot building had become, according to GCN. For instance, scientists were placing orders, setting up and labeling equipment without much guidance. As a result, electrical circuits were not updated, systems were renamed without documentation and cables were reused without relabeling.

Proper management is a necessity
Data centers facing efficiency issues such as Oak Ridge's need to analyze the situation and develop proper standards. This can range from deciding who purchases and sets up new equipment to changing mindsets about how to handle various issues. Such unorthodox ways were not sustainable, according to Data Center Knowledge.

For example, Oak Ridge's computer facility manager, Scott Milliken, told GCN the data center's biggest issues were electrical and cooling. According to Milliken, downtime had to be scheduled due to a single point of dependency. Data center downtime varies by business and environment, but according to Gartner, one minute of downtime costs $5,600. One hour of downtime can cost approximately $300,000.
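
The hourly figure follows directly from the per-minute estimate; a quick back-of-the-envelope check (the one-hour outage length is simply the case cited above):

# Quick arithmetic behind the downtime figures cited above.
cost_per_minute = 5_600   # Gartner's per-minute estimate, USD
outage_minutes = 60       # the one-hour case

hourly_cost = cost_per_minute * outage_minutes
print(f"One hour of downtime: ${hourly_cost:,}")  # $336,000, in line with the rough $300,000 figure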

Proper cooling methods are needed
The lack of oversight at Oak Ridge led researchers to bring in their own equipment and purchase power distribution equipment and airflow management solutions. However, they were not experienced with data centers and often set up equipment in less than ideal ways. The center's occupants then became territorial about space. Combined, these factors led to a further decrease in efficiency.

Not everyone is an expert in data center cooling technology. However, there are devices available to help monitor atmospheric variables in a simple manner so researchers can focus on other matters. Units such as the WatchDog 100-PoE are built with onboard temperature and humidity sensors. If either of these variables rises or falls outside the desired parameters, the unit triggers multiple escalating alarms while also sending out alerts through various mediums until the situation is resolved. A Web interface also allows users to remotely monitor conditions and collect data for trend analysis.
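
The "escalating" behavior described above can be thought of as retrying an alert on progressively wider channels until someone resolves the condition. The Python sketch below only illustrates that pattern; the channel list, interval and notify() stub are assumptions, not the unit's firmware.

import time

ESCALATION_CHANNELS = ["email", "sms", "phone-call"]

def notify(channel, message):
    # Stand-in for a real notification integration.
    print(f"[{channel}] {message}")

def escalate_until_resolved(is_resolved, message, interval_s=300):
    """Send the alert on each channel in turn, pausing between steps,
    and stop as soon as is_resolved() reports the condition has cleared."""
    for channel in ESCALATION_CHANNELS:
        if is_resolved():
            return True
        notify(channel, message)
        time.sleep(interval_s)
    return is_resolved()

if __name__ == "__main__":
    # Demo with a condition that never clears, using a short interval for illustration.
    escalate_until_resolved(lambda: False, "Server room over 80 F", interval_s=1)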

Change will not happen immediately
Once all the issues are identified, data center managers need to begin the standardization process, but it will take time. There are a multitude of public standards data centers must abide by, from code requirements to cabling and airflow. A dedicated staff will ensure equipment is properly configured and maintained. Such lessons can also be applied as new facilities are built.

Slowly but surely, a chaotic data center will become more organized, reduce downtime and increase efficiency in every aspect, including energy.

Data centers should pay attention to the Atlantic hurricane season
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-should-pay-attention-to-the-atlantic-hurricane-season-40057549
Tue, 05 May 2015 15:35:47 GMT | ITWatchDogs

Data centers located along the East Coast will want to pay special attention to a recent analysis from The Weather Channel. The weather service recently said a subtropical depression or storm may hit the area relatively soon, potentially indicating hurricane season will start earlier than normal. If the storm materializes and makes landfall, it will mark the beginning of the Atlantic hurricane season, which typically begins in June and lasts until November.

The storm is predicted to bring heavy rainfall to Florida and then move north, hitting the Carolinas. North Carolina in particular is home to many data centers, including facilities owned by Apple, IBM, Google and Facebook. While data centers are built to withstand the heaviest of storms from the outside, rainfall and strong winds can affect them in other ways on the inside, both beneficial and harmful.

Heavy rainfalls can flood data centers
The state of North Carolina itself does not see many storms make direct landfall. Instead, the state is most affected by storms moving north from Florida. The State Climate Office of North Carolina reported 73 storms making direct landfall from 1851 to 2014, compared to 289 non-landing storms affecting the state in the same period.

A major hurricane - one classified as Category 3 or above - has not struck the state since 2011, but tropical storms have. Floods typically result from a tropical storm's heavy rainfall. According to the North Carolina Flood Risk Management System, floods are "among the most frequent and costly natural disasters in terms of human hardships and economic loss."

This year, the state averaged 4.48 inches of rainfall in April, according to research conducted by North Carolina State University, and the total is expected to rise during the summer months.

Data centers benefit from water
During July, North Carolina reaches average temperatures around 90 degrees Fahrenheit. Some data centers require water to cool their server rooms, and North Carolina has become an attractive home for many of them due to the state's abundance of water.

Data centers have taken advantage of rainfalls in unique ways. Facebook's data center, for example, uses evaporative cooling. The building utilizes a misting system attached to water pipes to keep the air cool.

Dangers of water
While having some water readily available is good, data centers must be careful to monitor water and potential flooding. High levels of humidity will lead to condensation, which in turn can lead to electrical shorts. However, too little humidity can lead to a buildup of electrostatic charge and static electricity. These problems can lead to the destruction of delicate equipment. Balancing humidity levels is clearly a must in a state where it can rain for two weeks straight and where hurricanes pose flood risks.

Monitoring is important
Devices exist to help data centers track and manage humidity levels and data center temperature. Units such as the WatchDog 15-POE are built with multiple onboard temperature and humidity sensors. A Web interface allows for remote monitoring and collection of data to analyze trends. In the event one of the variables rises or falls outside the desired parameters, the unit will trigger escalating alarms and send out alerts through various mediums until the situation is resolved.

Recent trends may indicate the Atlantic hurricane season is starting earlier than normal. Since 1851, 39 Atlantic tropical storms or subtropical cyclones have formed before June 1. The Weather Channel said to expect pre-June 1 systems roughly every four years.

Data centers in particular need to pay attention to weather patterns and any unusual weather trends, especially after Hurricane Arthur made landfall early in July 2014. That hurricane became the earliest to ever make landfall in North Carolina.

Liquid cooling's potential to change data center designs and lower costs
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/liquid-coolings-potential-to-change-data-center-designs-and-lower-costs-40057119
Tue, 05 May 2015 10:02:09 GMT | ITWatchDogs

As companies look to decrease data center energy consumption, these buildings are being designed in unique ways. Large companies, such as Facebook and Microsoft, are utilizing free cooling opportunities to lower energy consumption. The idea is built around letting outside air filter into the server rooms to cool them, instead of relying on fans and other cooling mechanisms. Companies are looking to lower costs because maintaining equipment accounts for nearly half of a data center's energy consumption.

Over 25 percent of data centers have an operating budget within the $1 million to $2 million range, according to the latest IDC Enterprise Datacenter Survey Results. Further breakdown of budgets reveals that 24 percent of a data center's annual budget is allocated towards power and cooling. These high energy consumption levels are reflected by the average power usage effectiveness: 25 percent of data centers surveyed reported a PUE between 2.40 and 2.79.

Herb Zien, CEO of LiquidCool Solutions, believes the legacy design is not necessary because data centers end up losing potential energy savings as they balance equipment monitoring with controlling certain atmospheric variables.

Energy consumption
Energy consumption is so high because data centers must power equipment while also maintaining ideal room conditions for variables such as temperature and humidity. If conditions are not properly maintained, server room equipment can be ruined. Certain devices are available for better data center temperature monitoring, such as the WatchDog 100. This monitor contains temperature and humidity sensors, and if either of those conditions falls or rises outside the desired parameters, the device triggers escalating alarms and sends out multiple alerts. Often, these sensors work in conjunction with other cooling mechanisms, such as HVAC units. Even the floors are designed to better provide cold air distribution.

New design
If liquid cooling becomes available for data center usage, however, building designs will be unlike anything today. Liquid cooling, Zien says, will lead to a reduction in the equipment needed to keep data centers under ideal conditions at all times. Free air cooling and liquid cooling are two of the numerous techniques data centers utilize to lower energy costs.

Microsoft's Paul Slater echoed these ideas during April's Data Center World Conference. He believed data centers that will be used for over 10 years should be designed for the future and whatever new technologies emerge in that time period.

Even if data centers begin to embrace new energy-efficient design philosophies, there will still be a need for data center temperature monitoring, as nothing is flawless.

Is the future of data center design liquid?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/is-the-future-of-data-center-design-liquid-40056675
Thu, 30 Apr 2015 17:49:18 GMT | ITWatchDogs

Utilizing liquid cooling technology is something that many data centers have been thinking about over the past couple of years. However, is now the time for groups to move to it?

Herb Zien, CEO of LiquidCool Solutions, thinks so. His recent article in Data Center Knowledge points out some interesting facts about liquid cooling, including that it can:

Erase the need for high ceilings and raised floors.

Lower the need for white space by 60 percent.

Reduce power consumption by 40 percent.

Eliminate the aisle system altogether.

The argument presented by Zien is that all modern data center design is fundamentally based on outdated models of how a center should operate. Air cooling has never been an effective solution, and companies that continue to utilize it are simply perpetuating a trend.

No matter what kind of cooling system a company uses, it should be sure to utilize a data center temperature sensor. Without a sensor system in place, it's impossible to tell how efficiently any given part of a system is being cooled. Companies that have these set up can receive instant updates on the condition of their facility at any time, and can therefore make more informed choices about their construction.

To be ready for the IoT you need a data center
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/to-be-ready-for-the-iot-you-need-a-data-center-40056474
Thu, 30 Apr 2015 17:46:40 GMT | ITWatchDogs

The Internet of Things, like family visiting from out of town, is simultaneously too close and too far away. Have you been cleaning up your business? Are your things put away? Have you dusted and mopped like you feel you should, or will the IoT catch you unaware? Being able to say, solidly and without hesitation, that you are 100 percent ready to embrace new change is a powerful, secure feeling. You just need to make the investment first. The IoT is rapidly driving up the demand for data center space. Is your center capable of handling the load, or will you need to apply some quick fixes to make it all okay? Read through this guide to see whether your center will survive the storm, or whether you'll need to invest in another facility to keep up with the oncoming data deluge.

The changing role of the data center
Some people believe that how data centers are used will change as the IoT becomes more important. One idea is that the data center will become more like a factory than a processing engine, according to Gartner's Rakesh Kumar. Obviously, it would keep the same kind of scalability and still use top-notch, current-generation technology, but it will be more about driving innovation than running daily workloads. He sees it as an analytics machine first and foremost. Organizations would be able to use their large computing systems to crunch the numbers on massive amounts of stored information in order to make useful decisions about the future of their business. Whether that will happen slowly or quickly is anyone's guess, but the sheer number of people pushing for business analytics suggests it will definitely happen. The major question is whether or not organizations will be ready for it when it does come around.

Can you grow 750 percent?
Data centers will need to drastically increase the amount of storage they have in their networks. Data center capacity used by the IoT will increase by 750 percent, according to a recent report by International Data Corporation. Investments in data centers will be overwhelmingly necessary for those who want to grow in the IoT. Further, data centers that want to court IoT services should be ready for their servers to expand continually in all directions in order to have room for that information. Perhaps the best model for a center that wants to hold IoT information would be a modular one that can be expanded as needed. The biggest question facing those operating centers today is not how much traffic to expect, but how much information they will be able to hold in five years.
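
To put that 750 percent figure in concrete terms, here is a short back-of-the-envelope calculation. The starting capacity below is an arbitrary assumption; only the growth factor comes from the report.

# A 750 percent increase means the new requirement is 8.5x the starting point
# (the original 100 percent plus 750 percent more).
current_capacity_pb = 10.0   # assumed current IoT-related capacity, in petabytes
growth_factor = 1 + 7.5      # +750 percent

future_capacity_pb = current_capacity_pb * growth_factor
print(f"Capacity needed after 750% growth: {future_capacity_pb:.0f} PB")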

For all of this information, it is important that centers are able to protect themselves with data center temperature sensors. Those offered by organizations like ITWatchdogs will be able to protect servers from fires and other disasters that could otherwise wipe out or destroy important information. A temperature sensor can let emergency crews know what is happening within a data center, and can be programmed to send out alarms when fires or other disasters threaten to wreak havoc on workloads. They can even automatically set off emergency cooling systems as a backup measure or trigger the system to power down in order to avoid losing important information. What is most important, however, is that an organization is able to protect itself and the companies it is holding information for. With sensors, data centers will be able to comfortably move into the era of the Internet of Things.

Is your data center volcano-proof?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/is-your-data-center-volcano-proof-40055335
Thu, 30 Apr 2015 14:58:27 GMT | ITWatchDogs

For the first time since 1962, Chile's Calbuco volcano has erupted. The volcano erupted twice within a 24-hour span beginning April 22, 2015. The images were both spectacular and terrifying, as bright colors clashed with the night sky. Ash spewed from the volcano reached just over six miles into the air, and reports of flowing lava served as reminders of just how dangerous nature can be.

The volcanic eruption brings up important questions for data centers. Namely, is your data center prepared if a volcano were to erupt? Yes, that is a serious question.

Most data centers are not located near volcanoes, and that is by design. The land and conditions are not properly suited to running a data center. Most companies choose to construct large buildings on open, flat land for a variety of reasons: climate, wind and space. Data centers constructed near volcanoes would have to be situated on often-hilly land and in the vicinity of, well, a volcano.

But in the event a data center must be built near a volcano, there is an important measure to take to ensure your data center is volcano-proof.

Go underground
Underground data centers do exist. According to Data Center Knowledge, there are 14 known data center facilities housed underground in former military bases, mines or limestone caves. These facilities are said to be "nuke-proof." Companies typically turn to old military bunkers to house servers as a means of security. For instance, the privately owned U.S. Secure Hosting Center is located underground near the Iowa tech corridor. No one knows specifically where, but according to the official website, the structure is located miles away from population centers to avoid potential terrorist threats. Further, the data center can survive multiple contingency situations.

Energy pros
Building a data center underground is not as unusual as it sounds. Underground structures are secure and energy-efficient. Larger data centers located above ground are increasingly using open-air cooling techniques to lower energy costs. Underground, temperatures are already cool, which cuts down energy consumption. After all, energy consumption is one of the biggest costs of maintaining a data center. Temperatures are usually around 60 degrees Fahrenheit or lower, according to Data Center Knowledge. However, extra ventilation will most likely need to be drilled due to the large amounts of heat generated.

A notable example is the data center in Lincolnshire, U.K. It resides in a former NATO compound and is known as SmartBunker. The entire 30,000-square-foot compound's power is generated by wind energy, earning it the distinction as the first U.K. data center with zero carbon emissions.

No matter where a data center is built, temperature and humidity monitoring is a priority. The two atmospheric conditions can have a huge impact on server rooms if not properly maintained. Devices such as the Watchdog 100 are built with onboard temperature and humidity sensors. Data center employees can keep track of conditions through a Web interface, while also collecting and analyzing trends. If the temperature and humidity were to fall or rise outside of the desired parameters, the Watchdog 100 will send out multiple alerts through various mediums and also trigger escalating alarms.

As data centers are continually built, there may actually come a time when land near a volcano becomes valuable. In that case, it is best for data centers to be constructed underground.

Recommended humidity levels for data centers may be changing in the near future
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/recommended-humidity-levels-for-data-center-may-be-changing-in-the-near-future-40054200
Wed, 29 Apr 2015 10:09:58 GMT | ITWatchDogs

Data centers around the globe consume vast amounts of energy, and half that consumption goes toward regulating equipment temperature and room conditions. Data centers have to be strategically designed with regard to cooling because poorly implemented cooling systems will lead to greater power usage. The energy required to cool and maintain equipment is typically greater than the energy required to run the equipment. Yet the rooms cannot just be set to a "really cold" setting.

Temperature
A white paper study by Cisco offered some temperature guidelines for rooms within data centers. It recommended rooms be kept between 64.4 degrees Fahrenheit on the low end and 80.6 F on the high end. While temperature is important to monitor, humidity could arguably have a larger impact on data centers if it is not properly kept track of.
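
For reference, that Fahrenheit band corresponds to the roughly 18 to 27 degrees Celsius range often quoted from the ASHRAE recommended envelope; a quick conversion check:

# Convert the cited Fahrenheit limits to Celsius.
def f_to_c(f):
    return (f - 32) * 5 / 9

for f in (64.4, 80.6):
    print(f"{f} F = {f_to_c(f):.1f} C")   # 18.0 C and 27.0 C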

Humidity and data centers
Humidity is the amount of water vapor in the air; even though it is water, it is in a gaseous and invisible state. Without humidity, there would be no rain or snow. Rain and snow are best kept outside, but inside a data center some humidity is actually needed. According to The Data Center Journal, too much humidity leads to condensation, which can then lead to corrosion or electrical shorts. Yet if there is too little humidity, there may be a buildup of electrostatic charge, causing static electricity. Equipment may be damaged or destroyed as a result.

To lower energy costs, data centers are being designed to allow natural, cool air to flow within rooms, a method known as free cooling according to Data Center Knowledge. However, it is not possible to use natural air all the time because it tends to be drier.

No matter how data centers cool their equipment, they have to rely on a temperature monitor such as the WatchDog 100. This monitor is built with onboard sensors to track temperature and humidity levels, and if they fall or rise outside the desired parameters, escalating alarms will be triggered and multiple alerts sent out. The longer a server room stays outside the desired parameters, the higher the chance for some type of failure.

New study on humidity forthcoming
The American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. Technical Committee 9.9 focuses on data centers. It has been researching ways to prolong free cooling. The committee believes that if data centers use higher-than-customary temperatures, the number of free cooling opportunities worldwide would expand and energy costs would fall.

ASHRAE TC 9.9 recently partnered with the University of Missouri to study the impact of low humidity on electrostatic discharge. The results are expected to lead the committee to modify its relative-humidity guidelines for data centers sometime in 2015.

The implications of this study could mean more free cooling opportunities for data centers in every part of the world.

Missouri looks to entice data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/missouri-looks-to-entice-data-centers-40055217
Tue, 28 Apr 2015 09:58:53 GMT | ITWatchDogs

The Midwest state of Missouri recently passed legislation, hoping to become another hot spot for data center construction. Missouri Governor Jay Nixon signed the legislation after earlier vetoing a similar act because of some key differences.

The incentives aimed at enticing data center construction require a $25 million investment and a guarantee of 10 new, high-paying jobs, with wages of at least 150 percent of the country's average wage. If existing data centers were to undergo expansion, there would have to be a $5 million investment and the creation of at least five new jobs, according to the Kansas City Business Journal.

Brad Hokamp, CEO of Cosentry, which operates two data centers in Missouri, believes the bill will help drive business and growth in the state. Missouri, along with its neighbors, is part of an ever-expanding data center region. The Midwest states are dubbed "The Silicon Prairie," a take on Silicon Valley.

Large companies, such as Google, Facebook and Microsoft have already constructed large data centers in the neighboring state of Iowa. Missouri hopes to see this type of construction within its state.

Ideal conditions
Other than tax incentives, Missouri is an ideal location for data centers because of climate conditions, which will help companies looking to lower high energy costs. According to Data Center Knowledge, Missouri's electricity costs are approximately 10 percent lower than the rest of the U.S. The state's Renewable Energy Standard will require 15 percent of energy to be renewable by 2020.

The wind and open land are arguably the most enticing. A recent trend in data center design has been to incorporate open-air designs to lower the power required to cool and maintain server equipment. This free cooling is available for about half the year, Data Center Knowledge said, citing the National Oceanic and Atmospheric Administration. The state also experiences an abundance of rainfall. If data centers are designed properly, rain can be utilized in the building's cooling system.

Open air cooling
Robert McFarlane is a principal in charge of data center design at the international firm Shea Milsom and Wilke LLC. According to McFarlane, there are actually three types of free cooling data centers should consider: air-side free cooling, adiabatic cooling and water-side cooling.

Air-side
The most common type of free cooling large companies use throughout data centers. Outside air is brought inside through filters and the air makes its way to server rooms.

Adiabatic
This type of cooling is a variation of the air-side method. A chamber brings in the air, and with water evaporation, cools the server rooms.

Water-side cooling
A method in which water circulates through cooling towers. McFarlane says many data centers already use this type of cooling, and it is the easiest to implement.

McFarlane notes each method has positives and negatives. If a data center decides to use air-side cooling, the air quality must be taken into account and filters have to be added to costs. Adiabatic cooling methods are only used in low-humidity environments, and if rain is hard to come by, water-side cooling is not a smart investment.

No matter the cooling system, there needs to be constant monitoring of data center temperature. Compact devices are available that monitor temperature and humidity, two of the most important conditions to track. The WatchDog 100-POE is built with onboard temperature and humidity sensors. This data is available for tracking and analysis through a Web interface. If either of the two conditions falls or rises outside the desired parameters, the device will trigger alarms and send out notifications through multiple mediums.

It will take some time for data centers to take advantage of Missouri's new incentives, but as data centers experience rapid growth, the state will no doubt be an attractive option.

Long-lasting data centers need new drives and backup energy
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/long-lasting-data-centers-need-new-drives-and-backup-energy-40055467
Mon, 27 Apr 2015 17:59:34 GMT | ITWatchDogs

How can a data center outlast its competition? Better, more reliable storage and more resilient power solutions can make it easier for groups to get work done. By making information archiving a top priority and utilizing a wide array of backup power features to make sure that clients can access their data at any time, a data center can offer a superior service. Ultimately, it is the speed with which information is stored and accessed that makes data centers useful. Any individual provider that is able to rise above the crowd due to advancements in how it offers its service will gradually beat others competing within the same area. Keeping data center temperatures low while accomplishing this goal is not easy, but the rewards are well worth it. Thankfully, two major technologies have emerged that may make the process of choosing how to design the data center of the future a little easier. Fuel cells are rapidly being deployed as a powerful way to keep emergency energy stored on-site at a center, and solid state drives are becoming more popular for data storage thanks to their long lifespans and rapid recovery rates.

Fuel cells are very new, but they are very useful for data centers that want to keep their information online no matter what the outside circumstances are. While each fuel cell individually produces only roughly 25 watts, about a low-energy light bulb's worth, cells can be connected into a much larger energy server that provides far more power to a company's servers. These cells were originally used by NASA to help power spacecraft in the 1960s. Modern fuel cells have between 52 percent and 60 percent efficiency, which is very high for power generators. One of the only methods more efficient is hydroelectric power, which is usually not available to just any data center.
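
Some rough sizing arithmetic shows why individual cells are aggregated: at roughly 25 watts apiece, even a modest rack load requires hundreds of cells. The 5 kW rack load below is an assumption for illustration; only the per-cell output comes from the article.

# How many ~25 W fuel cells would a single rack need?
watts_per_cell = 25
rack_load_w = 5_000   # assumed 5 kW rack

cells_needed = rack_load_w / watts_per_cell
print(f"Cells needed for a {rack_load_w / 1000:.0f} kW rack: {cells_needed:.0f}")  # 200 cells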

Storage systems and SSDs
Solid state drives are useful for organizations that are modernizing their current array of servers because they are faster and longer-lasting than previous varieties. Data center SSDs have the capability to provide high-powered information storage while eliminating a common bottleneck in performance. These new types of drives are distinct from their consumer-end variations in a couple of ways. First off, consumer SSDs are typically flash-based. They can survive a certain amount of wear and tear, but aren't built for the long-term, always-on workloads that can cause most computers to slow to a crawl and eventually cease functioning. Flash data degrades whenever it is rewritten, so it isn't suitable for the data center. However, new types of SSDs designed for data centers use alternate systems of storage, and support performance of up to 500/450 MB/s read/write while enduring roughly 950 TB of writes, according to Computerworld. Further, these types of data storage devices also come with built-in 256-bit encryption and power loss data protection, making them useful for all cases that a data center engineer would need.
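
A quick endurance estimate shows what that 950 TB write budget means in practice. The sustained write rate assumed below is illustrative; only the endurance figure comes from the article.

# Translate a total-write endurance figure into an estimated lifespan.
endurance_tb = 950
avg_write_mb_per_s = 10   # assumed sustained average, well below the 450 MB/s peak

tb_written_per_day = avg_write_mb_per_s * 86_400 / 1_000_000   # MB/day -> TB/day
days_of_life = endurance_tb / tb_written_per_day

print(f"Writes per day: {tb_written_per_day:.2f} TB")
print(f"Estimated endurance at that rate: {days_of_life:.0f} days (~{days_of_life / 365:.1f} years)")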

The upside of fuel cells and SSD storage is that they allow data centers to run more efficiently. This could result in higher allowable data center temperature ranges. Of course, if a company were to allow this kind of change within its center, it would need to utilize a temperature sensor like the WatchDog 100-PoE, which is capable of detecting a variety of variables within a center in order to provide the best possible coverage and service to an organization. Developing long-term strategies for maximum efficiency allows companies to try out these new services in order to make sure that they will provide the best service to their clients, no matter what.

Reliability and security are the true tests of data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/reliability-and-security-are-the-true-tests-of-data-centers-40055525
Mon, 27 Apr 2015 17:58:57 GMT | ITWatchDogs

PC sales are falling while more organizations start opting for dedicated data centers. But should groups be building their own mega computing hubs, or simply leasing from experts? The total cost of ownership for these facilities is continuing to rise as security and legal costs increase. In order to get powerful computing on demand, many organizations are beginning to look at leased centers in order to get the best possible deal. Modern technology continues to surge along, but corresponding increases in security and reliability have become a little bit more difficult to keep up with. How should groups thinking about moving into the data center market consider their options?

One of the important things to consider is that major computer manufacturers may begin focusing more heavily on the data center market. Many who would have in the past needed a computer now simply use an array of electronic devices like tablets, notepads and smartphones. Instead of saving up for a $1,000 personal computer, individuals are using the cloud to store their information and making do with whatever amount of information they can store on their machine during the day. Businesses, too, are foregoing buying the biggest and most powerful computers for their employees. Instead, they are utilizing smaller computers that connect to data centers. This has led to fewer possible buyers for companies like Intel, according to Fortune. Therefore, Intel may push to make data centers more affordable for businesses as it struggles to develop a market of buyers for its current products. In this case, organizations will want to invest in data center security elements like temperature sensors in order to protect their considerable investments. This would allow them to get powerful computing while also ensuring that they are able to protect themselves from fires and other events that can damage a system's ability to perform.

Data center total cost of ownership
Beyond just temperature sensors, however, organizations will also need a fairly competent legal and cybersecurity team on call. Data center operators who sell their services to organizations are quickly beginning to find that there are some difficulties involved with ensuring the protection of data. The liability of a third-party provider for another company's data is zero. This means that if something terrible happens to the assets stored at an outside data center, it may be very difficult - if possible at all - to protect valuable organizational resources bound up with its information. This may make owning a data center and hosting the organization's data there a little bit more appealing to those who were previously thinking about leasing. However, this doesn't take away from the fact that a company must then absorb the costs of security and legal fees, which an outside provider would presumably otherwise be paying.

Ultimately, it is up to every organization to make its own decision about how it will allocate its valuable resources to data center maintenance. Leasing and owning both come with their own attendant risks. Groups that do make the decision to own their own means of data production and analysis, however, should be sure to install data center temperature monitors in order to keep their information cool. This way, at least, an organization can make sure that it is able to protect its own confidential data. Those that absolutely require a certain level of security will likely wind up purchasing their own data centers due to the amount of money they can spend securing their data. Organizations that don't have as much information to worry about may, on the other hand, lease their space from data center providers.

Data centers are serving large sectors of the business community
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-are-serving-large-sectors-of-the-business-community-40055448
Mon, 27 Apr 2015 17:56:12 GMT | ITWatchDogs

Running all of a company's IT systems on in-house networks is becoming increasingly difficult. The amount of computing power that most organizations need is rapidly outstripping their own ability to manage that level of complexity. And it should! As business analytics, big data and other fruits of the modern age of computing become more useful, it is no surprise that groups have trouble meeting the computing requirements those systems impose. They are too massive to be run on desktop computers or on an insufficiently equipped central server at even a mid-size organization. Instead, companies should examine ways to contract out the difficult and arcane art of information processing. With specialists dedicated to helping groups mine their own data stores, they can get something better and more interesting for their businesses without paying the difficult, often complex costs of IT, including data center cooling.

This may be why many groups are relying on Amazon's cloud services. They let companies get access to a large amount of power that scales incredibly easily. This lets organizations process large chunks of data about themselves, the future of their industry or even their rivals relatively quickly. Further, it allows them to make sure they are well-protected from systemic rot and poor decisions that could cripple them in the long term. With access to all parts of a group's transactions at once, it is possible for a company utilizing the cloud to receive constant monitoring of its current status. This lets a group, for example, be sure about how much it is selling at any given time, which is ultimately how a company can know whether or not it is succeeding. Further, big data analytics can let an organization know how various choices it is currently making are impacting its business performance, leading it to make better decisions in the future.

Data centers and mobility
Further, companies have much to gain from the use of data centers as a way to improve their workers' flexibility. Organizations with a strongly developed mobile computing program can have employees message and call each other no matter where they are. They can even work out of coffee shops or their homes. This lets groups stay in touch when their workers may be flying out to other meetings or otherwise unable to come in to the office. In fact, utilizing this kind of technology can help groups stay more efficient overall. Cloud computing and its applications must be made to work with a variety of user phones, so they sidestep one of the most common problems that businesses have when rolling out proprietary software to employee computers, which is compatibility.

Ultimately, organizations that want to take advantage of the new technologies that have resulted from big data will need to work with data centers. Not every group has the technical skills required to maintain a low server room temperature for their technological solution, and they don't need to. Simply by using a well-connected, strongly enabled, well-overseen data center, a company can gain a lot. The key then is for data center operators to offer their high level of service with the best possible uptime and safety. This means using technological advances like temperature sensors in a bid to reduce the number of possible emergencies or incidents that could occur within a center. The future of business revolves around the use of data. As long as companies can find ways to make this an essential part of their operating process, they will be able to energize their core functions.

Cloud computing demands new data center designs
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/cloud-computing-demands-new-data-center-designs-40055076
Mon, 27 Apr 2015 10:22:09 GMT | ITWatchDogs

IBM recently opened up its second cloud data center in the Netherlands, built to house nearly 8,000 servers. This data center seeks to offer greater redundancy and disaster recovery for European customers, according to ZDNet. This is part of the company's plan to build 15 data centers throughout the world.

IBM's recent moves are in line with the rest of the industry. Cloud computing has changed the way companies look at data centers, simply because the cloud is such an integral part of the world. According to research firm Gartner, cloud computing will become the bulk of new IT spending in 2016. "Overall, there are very real trends toward cloud platforms, and also toward massively scalable processing," said Chris Howard, research vice president at Gartner. A similar study of the cloud computing trend by Cisco also revealed 78 percent of workloads will be handled by cloud data centers.

Amazon recently unveiled the financials of its Web Services platform. AWS's revenue for the year's first quarter reached $1.57 billion. Management plans to match increasing demand with more capital spending. According to Data Center Knowledge, Amazon's property and equipment were valued at $7 billion, reinforcing just how expensive it is to run a cloud data center. Google and Microsoft are also spending billions to grow their cloud offerings and build massive data centers to meet the demand.

Data centers are expensive investments for a few reasons. The buildings need to be large enough to house thousands of servers. The cost to power data centers is also enormous, as power is needed to run and maintain the equipment. ComputerWorld called data centers "the new polluters" because it takes the equivalent of approximately 34 power plants generating 500 megawatts each to power the world's data centers. Part of the high cost results from the constant regulation of temperatures. Data centers need to be kept within certain temperature and humidity levels, often recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers.

Companies running data centers can help reduce these high energy costs by installing special equipment to monitor server room temperature and humidity. Units such as the WatchDog 100-POE are built with onboard temperature and humidity sensors. This device sends live updates, and should conditions fall below or exceed the desired parameters, it can trigger escalating alarms and send out notifications so the problem is dealt with. A Web interface provides an overview of environmental measurements with real-time data.

In all, cloud computing has led to new thinking about data center design.

The architect
No job will be more important than that of the person in charge of designing a data center. He or she must take into account every factor to maximize efficiency, including location, cost, power, pricing and cooling. Data Center Knowledge says cloud computing is changing old assumptions about data centers because of new applications and demands from clients.

Design
Facebook opened a state-of-the-art data center in Iowa last year, with the second phase currently under construction. The company boasted that the facility is 100 percent powered by renewable energy and cooled entirely by outside air. The company's open-flow design is just the latest trend to help lower energy costs.

Power and cooling
Combined with temperature and humidity monitoring and open air designs, data center architects must also look at new powering options, such as hydroelectricity. For instance, architects may want to look at building new data centers in Iceland, as three-quarters of the country's power comes from hydroelectricity. New powering options, with an emphasis on using renewable resources, will help lower costs in the long run.

Data centers of the future are only going to grow in size as cloud computing continues to demand more resources. Already, Amazon CEO Jeff Bezos said in a statement, "Amazon Web Services is a $5 billion business and still growing fast-in fact it's accelerating."

But to meet that demand, data centers must turn to new design ideas to build energy efficient buildings.

The safest place for your data is 220 feet underground
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/the-safest-place-for-your-data-is-220-feet-underground-40054864
Fri, 24 Apr 2015 18:22:21 GMT | ITWatchDogs

Data center security is a problem that the entire business community is facing together. Every major organization is realizing the benefits of collecting data on its internal processes. Yet it is becoming more difficult to keep that information out of the hands of those who could do serious damage with it. In order to make use of the rapidly expanding array of services designed to utilize vast data supplies, companies must simultaneously collect and protect their information. However, this is proving to be more difficult than it sounds. Even discounting the possibility of hackers stealing, subverting or destroying information, companies must also recognize that fires can sometimes damage data storage units. A natural disaster wiping out a given data center is one of the worst possible things that could happen to an organization, as it might lose millions upon millions of dollars in unrecoverable data.

There have been a couple of advances in data storage dedicated to finding ways to fight against data loss. One way that many groups have tried is utilizing naturally defensive locations. The most dramatic of these may be EMC's recent decision to create a data backup storage facility 220 feet underground, which will protect sensitive business information for years to come. This strategy is considered to be a smart choice by many, as underground facilities are naturally cooler than those above ground. Further, these centers will be resistant to many natural disasters, including hurricanes. Because the site is built in Pittsburgh, Pennsylvania, it is far enough away from any fault lines that earthquakes should not matter to the center. This level of physical protection may be necessary to keep thieves and vandals away from extremely sensitive data.

Protection against virtual threats alongside physical ones
Of course, not every problem that a data center operator faces is a physical one. Virtual attacks from hackers and other ne'er-do-wells can leave a company stuck bailing themselves out of a deep PR hole. Further, any information that is actually lost or destroyed is irretrievable, and could be incredibly damaging for a group's business continuity. Companies have far fewer ways to naturally resist this problem. Agencies have struggled for years to keep users' data secure, but there are still major security weaknesses in a variety of states, including Oregon. It is likely that the problems with Internet and data center security will go on at a global level for some time. One major issue is that public critique of data center security may in and of itself be a security hazard. Once hackers know the vulnerabilities of a certain company or server, they will undoubtedly launch into attacking that one with new fervor. It is simply easier for them to go after targets they already know about.

Ultimately, organizations will have to dig themselves out of attacks by defending themselves competently across multiple vectors. It is not enough to be strong at defending against only physical or only digital attacks. Instead, groups should have a multi-pronged defense strategy that lets them use a variety of methods to protect themselves and their data. Whatever form this takes, it will need to happen quickly. Digital threats are becoming more and more common online, and only those with the foresight to defend against them now will be able to keep themselves safe in the long term. Defending against unknown threats is the most difficult part of data center operation. Companies will need to use everything in their power to protect themselves from a multitude of unseen attacks.

Rackspace's new data center should be an industry blueprint
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/rackspaces-new-data-center-should-be-an-industry-blueprint-40055127
Fri, 24 Apr 2015 14:55:45 GMT - ITWatchDogs

A new U.K. data center has opened in the London suburb of Crawley and is said to be one of the world's most energy efficient. Rackspace, a U.S. cloud computing company based in Texas, chose the site because of how easy it is to deliver power there, the company told The Inquirer.

Construction lasted approximately 15 months on the 15-acre campus. The 130,000-square-foot data center will meet the needs of customers all over the world; however, Rackspace expects demand to be particularly high from U.K. and European customers, according to ZDNet.

Indirect air cooling
Rackspace utilized a new technology known as indirect air cooling, a first in the U.K. This technology reduces the energy and water required to keep the facility at optimal temperatures. The data center was designed to use little water, relying instead on the U.K.'s rainy conditions.

"We are using indirect air cooling technology, so essentially it's an 'all-air' system which takes the outside air and uses that to cool the recirculating data-cooling air, but it does it indirectly so you never get outside air coming into the data center," the company told The Inquirer.

Cooling and maintaining temperature and humidity levels are some of the largest costs associated with running a data center. The Natural Resources Defense Council says the city of New York can be powered twice over with the amount of electricity data centers consume. By 2020, the organization predicts data centers will consume 140 billion kilowatt-hours.

To help keep energy costs in check, devices exist to monitor conditions. The WatchDog 100, for example, is built with onboard sensors to monitor temperature and humidity levels in highly critical areas. No software is required for the compact unit, and a Web interface is available for remotely monitoring and collecting data for analysis. The device can also trigger escalating alarms and send out multiple alerts over various channels if server room conditions rise or fall outside the desired parameters.
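The threshold-and-escalation pattern described above is straightforward to reason about in code. The sketch below is a minimal Python illustration of that logic, not the WatchDog 100's actual firmware or API; the thresholds, contact addresses and polling interval are hypothetical placeholders.

```python
import time

# Hypothetical thresholds; real setpoints depend on the facility and its equipment.
TEMP_LIMIT_F = 80.0
HUMIDITY_RANGE = (40.0, 60.0)

# Escalation order: the first alert goes to the on-call tech, later ones go wider.
ESCALATION_CONTACTS = ["oncall@example.com", "facilities@example.com", "manager@example.com"]


def check_reading(temp_f, humidity_pct):
    """Return a list of human-readable violations for one sensor reading."""
    problems = []
    if temp_f > TEMP_LIMIT_F:
        problems.append(f"temperature {temp_f:.1f}F exceeds {TEMP_LIMIT_F}F")
    low, high = HUMIDITY_RANGE
    if not low <= humidity_pct <= high:
        problems.append(f"humidity {humidity_pct:.0f}% outside {low:.0f}-{high:.0f}%")
    return problems


def send_alert(problems, escalation_level):
    """Notify progressively more contacts the longer a violation persists."""
    for contact in ESCALATION_CONTACTS[: escalation_level + 1]:
        # A real unit would send email, SMS or SNMP traps; printing stands in for that here.
        print(f"ALERT to {contact}: " + "; ".join(problems))


def monitor(read_sensor, poll_seconds=60):
    """Poll a sensor-reading callable and escalate while violations persist."""
    level = 0
    while True:
        temp_f, humidity_pct = read_sensor()
        problems = check_reading(temp_f, humidity_pct)
        if problems:
            send_alert(problems, level)
            level = min(level + 1, len(ESCALATION_CONTACTS) - 1)
        else:
            level = 0  # conditions back in range; reset the escalation ladder
        time.sleep(poll_seconds)
```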

Interior design
Rackspace's new data center can house up to 50,000 servers. The server rooms are kept at 75 degrees Fahrenheit, whereas the hot aisles are kept between 90 and 93 degrees Fahrenheit. The company believes it has achieved maximum energy efficiency in part because of the layout: to allow for better air circulation, structural columns are minimized and the floors are made of solid concrete. The company also opted for transparent-pane LED lighting and believes the investment will pay for itself after one year.

Outside design
The roof is not a typical roof: it was pitched to collect rainwater, which is then used in sprays within the indirect cooling system. The data center's main spire is lit naturally through multiple sun tubes, again showing Rackspace's commitment to a greener building.

Remember that indirect cooling system? It is mounted on the roof. According to The Inquirer, chimneys run down to the server floor. Air passes over the cooling system, then is collected and pumped into the room. The only moving part in this process is the fan.

Energy savings
The designs and ideas Rackspace has utilized should be a blueprint for future data centers. The building's power usage effectiveness rating is 1.1, compared to an industry average PUE of 1.7. Nearly $60 per kW is saved each month, and ZDNet estimates that over the course of the 20-year lease, Rackspace will save millions of dollars.
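Power usage effectiveness is simply the ratio of total facility power to IT power, so the gap between a 1.1 and a 1.7 rating translates directly into avoided overhead. The snippet below works through that arithmetic; the 5 MW IT load is a purely hypothetical figure chosen to make the math concrete, and the $60-per-kW monthly figure is taken at face value from the article.

```python
def pue(total_facility_kw, it_load_kw):
    """Power usage effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_load_kw

# Hypothetical 5 MW IT load, used only to illustrate the arithmetic.
it_kw = 5000.0

# Facility draw at the reported PUE of 1.1 versus the 1.7 average cited above.
efficient_facility_kw = it_kw * 1.1
average_facility_kw = it_kw * 1.7

print(pue(efficient_facility_kw, it_kw))            # 1.1
print(average_facility_kw - efficient_facility_kw)  # 3000 kW less overhead for the same IT load

# Applying the article's figure of roughly $60 saved per kW each month:
monthly_savings = 60 * it_kw
print(monthly_savings * 12 * 20)  # cumulative savings over a 20-year lease
```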

Companies that adopt energy-efficient designs and supporting tools, such as temperature monitoring devices, will see energy costs decrease drastically along with their environmental impact.

Plan ahead and stay agile, so your data center will last
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/plan-ahead-and-stay-agile,-so-your-data-center-will-last-40054822
Thu, 23 Apr 2015 18:07:36 GMT - ITWatchDogs

Data center construction is becoming an important consideration not just for colocation providers and large tech companies, but also for any business that processes a considerable amount of information. Those that are trying to find ways to keep track of their large volumes of data are interested in data centers for their sheer processing power. And those that aren't worried about how their company will handle information should be.

The confluence of factors around storing and processing massive quantities of information will soon make data a critical asset. Data centers will soon operate as both factories and laboratories, in terms of how they manage and produce data for consumption and how they can study it to further company ends. The growth of modern industry is about the intersection of business and data science, and those that have data centers will be able to make the most of it.

But how can companies make sure that their data processing facilities are able to fulfill their needs? One important way is to have a plan for power. Operating costs are the most likely thing to eat into a data center's margins, so it is essential to develop a plan that ensures reliable access to power.

Power conditioning systems and back-up generators on-site are good ways to ensure continued operations in an emergency, but groups should also leverage this to get better deals on their day-to-day power supply. Don't just take a price. Instead, negotiate with local power companies and other energy providers to get the best deal you can. Organizations that are able to do this will be able to ensure better operations year-round, according to Data Center Knowledge.

Agility and cloud usage typify the new cloud generation
Once you've secured your power supply, keep tabs on how your data center can make use of other opportunities. Protecting and securing information in a data center is difficult, according to Gartner, but finding methods of keeping information safe is vital. Similarly, it is important to build a cloud that is agile and able to respond to many different possible futures for how data is processed and used. Many companies are beginning to switch to solid-state drives for archival purposes, for example, which could necessitate switching to that format over the next few years.

Modular design concepts for data centers have been praised recently for giving operators the ability to adopt changes gradually as they arrive. This lets a company stay on the cutting edge and continue operations as usual while it switches out data center hardware. With the rapid pace of technological change facing organizations, investing in a solution that makes transitions easier may be a good idea. Further, the cloud makes it relatively easy to control how processes run off-site.

No matter how a company decides to run its data center, it needs to keep in mind that the cost of power will be very important. Utilizing something like the WatchDog 1000-PoE to monitor how power-hungry equipment is being used throughout the day may allow the company to better protect its investment. Reducing the amount of money that must be put into a cost like cooling can drastically improve the efficiency of your center. Any money spent on unnecessary cooling is wasted. By taking charge of your center, you can quickly see returns on the investments you make.

Key takeaways from Data Center World Conference
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/key-takeaways-from-data-center-world-conference-40053968
Wed, 22 Apr 2015 17:38:48 GMT - ITWatchDogs

As data center energy consumption becomes an increasing worry, companies are investing large sums of money into energy-efficient buildings. The space that equipment occupies depends on the size of the company running the data center. Smaller companies may use one to two floors of space, whereas large Internet corporations have multiple data center buildings scattered around the U.S. and world.

In 2013, data centers consumed some 91 billion kilowatt-hours. According to the Natural Resources Defense Council, this amount of energy could power every house in New York City twice. NRDC said a large portion of the consumption was due to small, medium-sized and corporate data centers, as some may run older equipment. Data center equipment such as the WatchDog 100 can help lower energy costs. With onboard temperature and humidity sensors, the unit allows users to track environmental data, and it can send out multiple alerts if the temperature, humidity or a combination of the two changes. By keeping both at recommended levels, energy consumption can be kept in check.

Energy efficiency is becoming an important design factor for data centers. Paul Slater, director of Microsoft's Applied Incubation Team, plays a big role in the company's data center strategy. Slater has an initiative to share Microsoft's data center efficiency strategy because he believes the information will help smaller data centers throughout the world become more efficient. At the Data Center World Conference held April 20, 2015, in Las Vegas, Slater recommended that if companies plan to use a data center for more than 10 years, design should be an instrumental factor. He expanded on that thought and shared more points during his presentation:

Location, location
Microsoft heavily factors in the environment when looking to build a new data center, and other companies should as well. It helps explain why Microsoft is building a second data center in Iowa, where large sprawls of land are cheaply available. Renewable energy sources also need to be considered, another reason Microsoft is increasing its data center presence in the Midwestern state.

Flexibility
Slater and Dave Leonard, chief data officer at ViaWest, both emphasized the need to design data centers with flexibility in mind. Slater said that because technology is rapidly changing, so too is the equipment housed in data centers. Data centers need to be able to adapt to industry trends, such as server virtualization, and the addition of new, future technologies also needs to be considered. Overall, companies may not know exactly what changes are on the horizon, but they have to be aware that some type of change will happen.

Leonard, meanwhile, recommended building data centers so that quality increases as completion of the building nears. For Leonard, data center design means tradeoffs among cost, quality and energy efficiency.

Data centers designed with location and flexibility in mind will have a greater chance of adapting to whatever challenges the future may bring.

Efficiency and novel engineering works for data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/efficiency-and-novel-engineering-works-for-data-centers-40054474
Wed, 22 Apr 2015 17:12:35 GMT - ITWatchDogs

Data center engineers continually face problems relating to the design of their facilities. How can they create something that is sufficiently efficient for long-term use, especially with current concerns about eco-friendly facilities? Keeping data center temperatures low without breaking the bank is a key goal. Luckily, modern technology has made this easier. As data center design becomes more focused on maximizing the value a company gets out of its cooling dollars, it will become easier for centers to cool themselves. Below are a few ways that organizations can save time and money through better data center construction.

Design for the environment
One major issue is that many centers' designs are more or less prefabricated. It can be easy to think of a plot of land as just a space to put a building on. However, many centers have seen success utilizing local lakes or the ocean as their cooling mechanism. Further, finding plots of land where space and power are cheap can allow a center to be efficient by making it large: with lower rack density, a facility can let heat dissipate naturally instead of having to be fanned away. Alternatively, a company could build a data center with solar panels in a sunny part of the world to take advantage of renewable energy. Or it could put a center in the Arctic and simply not worry about cooling costs at all - the northern winter could take care of that.

Combine heat and power
One reliable way to reduce energy costs is to use a combined heat and power plant in a data center, according to Terence Waldron, president of Waldron Engineering and Construction. These CHP plants convert natural gas to energy more efficiently than most power plants, which is already a bonus. Their main appeal, however, is that they can be used in tandem with absorption chillers, allowing a data center to power and cool its operations at the same time. By reducing the cost of both tasks at once, data centers can see rapid returns on their investment in a better cooling system.

Use data center temperature sensors
A facility that uses a sensor like the WatchDog 100 PoE can see changes in its daily temperature cycle. This can be crucial for finding the most energy-intensive times of day and for averting crises related to malfunctioning or broken cooling systems. Collecting data on system temperatures throughout an operating cycle can tell a company whether it is overcooling or undercooling its systems, which can lead it to adjust its cooling strategy to achieve better results.
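As a concrete illustration of that kind of review, the short sketch below summarizes one day of hourly readings and counts how many hours fell below or above a target band. The band and the sample readings are assumptions made for illustration; real setpoints depend on the facility and its equipment vendors' guidance.

```python
from statistics import mean

# Assumed target band for monitored air temperature; real setpoints vary by facility.
TARGET_LOW_F, TARGET_HIGH_F = 65.0, 80.0

def review_daily_cycle(hourly_temps_f):
    """Summarize one day of logged temperatures and flag over- and undercooling."""
    overcooled = [t for t in hourly_temps_f if t < TARGET_LOW_F]
    too_warm = [t for t in hourly_temps_f if t > TARGET_HIGH_F]
    return {
        "average_f": round(mean(hourly_temps_f), 1),
        "hours_overcooled": len(overcooled),  # cooling dollars likely being wasted
        "hours_too_warm": len(too_warm),      # equipment at risk; cooling may be short
    }

# Illustrative day that runs cold overnight and warm in mid-afternoon.
sample_day = [62, 61, 61, 63, 64, 66, 68, 70, 72, 74, 76, 78,
              79, 81, 82, 81, 79, 77, 75, 72, 69, 66, 64, 63]
print(review_daily_cycle(sample_day))
```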

Be ready for change
Above all else, your data center should be flexible. The technology upon which you build your current servers may be obsolete within the next couple of years. Because data centers process so much data so quickly, they always need to be near the cutting edge in order to maintain a competitive advantage. Already, the rise of SSDs for storage may render many centers' servers obsolete, or at least force operators to work hard to stay current, according to Data Center Knowledge.

By staying informed, flexible and resourceful, data center companies can keep the operating costs of their centers very low. Reducing the amount of energy wasted on cooling can improve efficiency and total cost of ownership very quickly. Just remember: be constantly vigilant. It is important for data center operators to continuously re-evaluate their operating strategy to ensure the lowest costs for their facilities.

Don't let the dust settle
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/dont-let-the-dust-settle-40054049
Tue, 21 Apr 2015 16:37:15 GMT - ITWatchDogs

An English metropolitan city council has recently awarded a contract to a third-party vendor to help with the move of the council's data center, according to PublicTechnology.net. Solihull Metropolitan City Council said the move will help reduce energy consumption and lead to lower costs in the future. Its migration will begin sometime in April and continue for approximately one year.

The move was also approved because the building currently housing the equipment has had issues with floods, and a water pipe alongside the building poses a risk to the sensitive equipment. The flooding risks are expected to increase significantly as the building housing the data center is set to undergo construction next year. As if flooding did not pose a big enough risk, the construction will add to it by introducing dust.

To ordinary computer users, dust does not seem too worrisome. Yet data centers are always on the lookout for dust and other potential contamination sources because of the risks small particles pose. Proper contamination control policies need to be in place to ensure equipment does not fail from preventable causes.

Establish a protocol
According to the maintenance company Data Clean, steps should be taken to keep data centers clean. It recommends some common-sense don'ts: workers should not bring food or drink near sensitive equipment; cardboard and other paper products should not be stored inside the room, as these materials shed large amounts of contamination; and doors should not be left propped open.

Dust sources
Dust can come from just about any source, but the biggest worry is bringing outside dust particles into indoor settings. This occurs when doors are opened or when soil particles are embedded in shoe soles, according to NPR. Dust can even come from skin flakes and pet hair. Data Clean recommends placing contamination control mats at the entrances of computer rooms. These mats help to ensure dust particles are not brought inside on shoes or carts, for instance.

Dust effects
Dust buildup needs to be a worry for data centers because of its effects on equipment. The National Fire Protection Association says dust can contribute to electrical fires due to its flammable nature. Another cleaning service, Sterile Environment Technologies, said many data centers have dust trapped in floor tiles, wire cable crevices and equipment cabinets. Further, dust can lead to static electricity, and just 250 volts of static electricity may lead to data loss in memory and other problems.

Staying one step ahead
Data centers need to maintain an almost perfectly clean environment. However, if dust does manage to go undetected for some time, there are signs it may be causing problems. Internal components covered in dust start to warm up, as dust acts like a blanket of sorts. A temperature sensor can alert employees in real time if temperatures reach unusual levels.

By always keeping an eye out for dust, data centers can better maintain their equipment to protect against failure while also improving employee health. Dust can be just as harmful to people as it is to technology.

Leading companies place emphasis on green data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/leading-companies-place-emphasis-on-green-data-centers-40053880
Tue, 21 Apr 2015 13:38:18 GMT - ITWatchDogs

Apple, the world's leading consumer electronics brand, has released its 2015 Environmental Sustainability Report. The company has made significant strides in recent years to reduce its carbon footprint; however, that footprint rose from 2013 to 2014 because the company sold more products.

Even so, one area in particular demonstrates the company's pledge to save the environment. Since 2012, 100 percent of Apple's data centers have been powered by 100 percent renewable energy. The report stated, "That means no matter how much data they handle, there is a zero greenhouse gas impact on the environment from their energy use." Apple's data centers are powered through renewable sources such as wind, solar, biogas fuel cells, geothermal power and micro-hydro power. Important companies within the technology industry are investing billions in greener data centers as energy consumption rises.

Importance of green buildings
Data centers are an enormous source of energy consumption. By 2020, the Natural Resources Defense Council estimates, U.S. data centers will consume 140 billion kilowatt-hours of electricity, up from about 91 billion kilowatt-hours in 2013.

This much energy is used due to the power required to run the hardware, and the power to monitor and control data center temperature. Data centers are increasingly investing in power-efficient equipment to help with the first energy usage problem. Larger companies have the flexibility to build data centers in areas where nature can help solve the second problem.

However, the NRDC says the vast majority of data center energy is consumed by much smaller facilities that are not equipped to reduce energy use. Equipment is available to help lower cooling costs in these data centers, which in the long run will equate to lower energy consumption and greener operations.

Following suit
Other large corporations are placing an emphasis on green data centers, specifically in the U.S. Google announced April 17, 2015, it was investing $1 billion into its Council Bluffs, Iowa, data center, on top of its original $1.5 billion investment.

The state of Iowa is an attractive option for U.S.-based data centers because of its access to renewable wind energy and its cool climate. Facebook is in the process of completing construction on its two-phase mega data center there, set to be powered by 100 percent renewable energy, a goal the company hopes to achieve through the building's air-cooled design. Another tech giant, Microsoft, is in the midst of building its second data center in the state.

Data centers are the backbone of just about everything today. As operating costs and energy consumption continue to rise, so does the importance of constructing green data centers.

Data centers and machine-facilitated cooling
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-and-machine-facilitated-cooling-40052502
Wed, 15 Apr 2015 11:58:19 GMT - ITWatchDogs

One of the most difficult parts of modern data center design is maximizing cooling efficiency. Increasingly, the future of energy management software lies in its connection to energy management hardware. This allows for on-the-fly adjustments to cooling intensity throughout a system, greatly reducing the complexity of monitoring. Sensors equipped to track cooling throughout a system, like the WatchDog 100 PoE, can instantly monitor a variety of important variables. Then, machine-learning systems can crunch through the aggregated data and tell operators what their recommended parameters for the next day are.

Many such services are cloud based, including the new Amazon Machine Learning platform, which is hosted on Amazon data centers. This is designed to be used by any data center operators that want to get a clearer picture of how information and temperature in their data center work together. All an operator has to do is give the cloud software access to the compiled information and run it through the machine learning algorithm. The process itself does not require much machine-learning experience on the part of the engineer. Soon, even those hosting cloud computer systems themselves will be using cloud computing services run by other, larger companies.

Let the cloud shade your systems
Your data center is likely not optimized for tomorrow. However, this doesn't mean that it isn't generally optimized. It's likely that you and your staff have put a lot of time into making sure your facility works well under most conditions. But your center probably isn't running the best way it possibly could for exactly tomorrow's weather, usage patterns and other variables. The difference between data center optimization now and in the future is general versus specific optimization: not only will we be able to get our centers performing well on a normal day, but also under abnormal conditions.

Weather is one of the most difficult things to contend with in the data center. And since days are only likely to get hotter on average as time goes on, there will continue to be moments when most people wish computers were a little more heat resistant. Keep your temperatures low with software and hardware combined: using a few sensors to log and store readings, then running that data through a cloud-based analytics process, could make it easy to automate cooling year-round.
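A minimal sketch of that log-then-analyze loop appears below, assuming only standard-library Python: it fits a simple least-squares line relating logged outside temperatures to measured inlet temperatures, then applies it to a forecast. The historical readings, the forecast value and the 78-degree action threshold are illustrative assumptions, and a production system (or a hosted service such as Amazon Machine Learning) would use far richer models and data.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    covariance = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    variance = sum((x - mean_x) ** 2 for x in xs)
    slope = covariance / variance
    return slope, mean_y - slope * mean_x

# Logged history: outside temperature (F) and measured server-inlet temperature (F).
outside_log = [55, 60, 65, 70, 75, 80, 85]
inlet_log = [68, 69, 71, 73, 75, 78, 81]

slope, intercept = fit_line(outside_log, inlet_log)

# Apply the fitted relationship to tomorrow's forecast (an assumed value).
forecast_outside_f = 88.0
predicted_inlet_f = slope * forecast_outside_f + intercept
print(f"Predicted inlet temperature tomorrow: {predicted_inlet_f:.1f}F")

if predicted_inlet_f > 78.0:  # assumed action threshold
    print("Plan for extra cooling capacity tomorrow.")
```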

Data center cooling systems see continued growth
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-cooling-systems-see-continued-growth-40051861
Mon, 13 Apr 2015 17:54:02 GMT - ITWatchDogs

The data center cooling systems market has continually developed over the years. More companies are entering the race to provide the cheapest and most efficient cooling mechanisms, and those already there are finding ways to improve their offerings. Getting data center temperature down to the right level has long been a problem for many in the world of IT, but some of the new possibilities brought up by these offerings make it difficult for groups to get it right. The varieties of containment and types of cooling drastically change the ways that systems cool, but they are still ultimately reliant on the systems working in order to prevent disaster.

Why is the market expanding?
The U.S. data center cooling market will grow at a CAGR of 6.67 percent from 2014 to 2019, according to Market Research Reports. Why? It has a lot to do with the increasing demand for data. More consumers and businesses are using cloud-based software, and that software is directly powered by data centers. As more data centers continue to arise, so too will the demand for cooling systems. As long as people keep using the cloud - and the rise of even more mobile devices like smartwatches suggests that this is likely - there will be increasing demand for data centers and cooling products.

Increased innovation among those making cooling products is one way the industry is trying to meet the rising demand. The evolution of data center design and its efficiency will help larger groups produce powerful, useful methods of cooling their data centers. Immersion cooling, for example, is a new type of cooling designed to facilitate the use of very high-volume servers. Of course, there are always worries when it comes to coating internal processors with oil - a leak could lead to large messes.

No matter what new experimental technology is released over the next few years, it will still require a data center temperature sensor. Using something like the WatchDog to make sure that your choice of cooling system is working is a good way to prevent on-site accidents. No matter what happens, reliability will remain a key selling point in data center leasing, and monitoring can help data center operators secure their lead.

Green data centers are coming, but are their storage methods efficient?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/green-data-centers-are-coming,-but-are-their-storage-methods-efficient-40050894
Tue, 07 Apr 2015 17:59:14 GMT - ITWatchDogs

Data centers using solar energy have faced one major problem for years: they do not store their energy well. Although it has long been the dream of data center designers to limit their facilities' insatiable appetite for energy, there have been many roadblocks in the way. A central problem is that solar energy collection occurs during the day, but data centers run around the clock. How can organizations make sure that their energy is accessible whenever they need it? Some of the main solutions to this problem, such as batteries, have drawbacks of their own: it is difficult to get batteries to store energy very efficiently, and much of it is lost to the inefficiency of modern battery design. However, there are ways for companies to achieve gains in this field. Creating a stronger, more profitable data center is about getting rid of inefficiencies wherever they arise.

There is clearly demand for green data centers. Just recently, Green House Data, a Wyoming-based data center provider, bought cloud and colocation vendor FiberCloud, raising its total number of data centers by three. Although Green House is a relatively small provider, it uses wind, solar and hydro power for all of its data center systems, and many groups that want to process information without raising their carbon emissions turn to it for that purpose. This does not mean the company is perfect at dealing with the realities of energy consumption in the modern age: it does not elaborate on the fuel mix that powers some of its generators, which means it is possible that it is still somewhat reliant on fossil fuels. The company is making considerable gains in certain realms, however. It has data centers in New Jersey and New York as well as Oregon, and it has grown considerably with its latest acquisition, proving that it is capable of surviving as a business even with the relatively immature technology of today's green power generators.

A solution to the problem
What may solve this issue is the use of new types of water reservoirs to store power during the day and release it at night. This type of storage, known as solar pump storage, involves setting up two artificial lakes. During the day, part of the electricity generated is used to pump water into the upper reservoir, raising it above the level of the other lake. At night, once the sun goes down, water flows back from the raised lake into the lower one, driving a generator. This natural energy storage lets the company control the way it takes in and spends energy. Although it seems a little backward that moving water back and forth would store energy more efficiently than a battery, this is actually the case.

This is one way that green data centers can catch up to their carbon-burning competitors. While it is likely that polluting energy sources will remain easier to use for some time, there will hopefully be a global shift toward maintaining data center temperatures through less intrusive means. No matter what happens with larger legislation surrounding green power, companies that choose this path stand to gain a kind of consistency that those relying on the grid do not have: by using green power, a company can stay running even when a rolling blackout knocks out other local data centers. Building in this kind of resilience is a necessity for all data centers, but those that pursue green options have some built-in advantages that others don't.

More efficient data centers allow for better processing
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/more-efficient-data-centers-allow-for-better-processing-40049934
Fri, 03 Apr 2015 18:27:49 GMT - ITWatchDogs

How important is efficiency in data centers? Moving to more efficient models of operation could drastically improve a company's ability to process information. The key here is that most organizations approach data center optimization from the wrong end - they see it as a cost, rather than an investment. However, putting time and effort into making sure that a given data center is reliable actually saves money and improves performance in the long run. By working to reduce the amount of energy used by a data center, a company can increase their processing capacity and grow their profits.

One unexpected outcome of lowering power demand within data centers may be the ability to put more of them in one location. Oregon is set to see further data center expansion due to a strong conservation philosophy among existing facilities there, according to Oregon Live. By lowering the total power consumption of a given data center, it is easier for companies to make sure they get what they need from the local grid. Similarly, understanding how to reduce overall power consumption allows for better cost reduction.

Modular designs and lowered costs
Many organizations are moving toward modular designs as a way to get more for less out of their data centers. These kinds of designs allow for rapid expansion and typically handle cooling over smaller, localized areas, which can reduce the amount of power needed for cooling overall while still letting the facility do the job it needs to do. For this reason, the move toward modular designs is gaining momentum, according to Data Center Knowledge.

No matter what kind of system a company uses, it can always benefit from stronger power monitors. These appliances allow data center engineers to carefully track how their center uses its allotted power, and the natural duplication of modular systems makes it easier to find inefficiencies: with several identical iterations of a data center module to compare power usage against, it is easier to spot anomalous blips that could indicate certain types of hardware failure, as sketched below. While this may seem like time-consuming work, the end result is worth it. Lowering costs by making a business more efficient is always useful.
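The comparison described above can be as simple as checking each module's draw against the median of its peers. The sketch below is a hypothetical illustration of that idea; the module names, readings and 10 percent tolerance are all assumed values, and a real monitoring setup would track deviations over time rather than from a single sample.

```python
from statistics import median

# Hypothetical hourly power readings (kW) from four supposedly identical modules.
module_power_kw = {
    "module-A": 410.0,
    "module-B": 415.0,
    "module-C": 408.0,
    "module-D": 470.0,  # drawing noticeably more than its peers
}

DEVIATION_THRESHOLD = 0.10  # assumed tolerance: flag anything more than 10% from the median

baseline_kw = median(module_power_kw.values())
for name, kw in module_power_kw.items():
    deviation = (kw - baseline_kw) / baseline_kw
    if abs(deviation) > DEVIATION_THRESHOLD:
        print(f"{name}: {kw:.0f} kW is {deviation:+.0%} versus the {baseline_kw:.0f} kW median - investigate")
```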

Report: Data center cooling market heating up
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/report-data-center-cooling-market-heating-up-40049100
Tue, 31 Mar 2015 13:28:42 GMT - ITWatchDogs

Data center temperature remains a paramount concern for facilities everywhere, as the cooling market in this space looks to grow to new heights over the next five years, according to a new report.

A market research study released earlier this month by Allied Market Research found that the market for chillers and air conditioners in the data center space will likely see major gains between now and 2020. Over that period, this market will probably post a compound annual growth rate of more than 13 percent. Should this prediction hold true over the next five years, the data center cooling market will be worth approximately $11.65 billion by 2020.

While certain industries - primarily retail, telecom, IT and health care - and certain regions - North America and Asia-Pacific - are leading this market, Allied Market Research noted that growth in the data center cooling sector is happening across the board. As IT trends like cloud computing, the Internet of Things and big data come to the fore en masse across just about all sectors, organizations will further leverage and rely on data center services. Indeed, a March 2015 TechNavio report predicted that the worldwide data center construction market will grow at a compound annual rate of around 7 percent between this year and 2019, again highlighting the overall rise in demand for data center services globally.

"The demand for cloud-based services is increasing among enterprises worldwide including many Fortune 500 companies," the TechNavio report noted. "Several industries around the globe are making use of advanced technologies to gain a competitive advantage. The complexities associated with business applications have increased because of the enormous growth in data volumes. This has triggered a greater need for construction and renovation of data centers around the globe."

However, as data centers become more central to the objectives and strategies of companies everywhere, the problems posed by even minutes of unplanned downtime grow accordingly. To make sure their data center investments remain online, more companies and facility operators are investing in chillers and other data center temperature cooling solutions to keep mission-critical equipment operating under ideal conditions at all times.

"For a data center to run successfully, it is necessary to keep the environment cool as the excessive heat generated from the processors can damage the systems which might lead to data loss or discontinued flow of information," the Allied Market Research report noted. "In order to prevent such damages and keep the environment at a regulated temperature, cooling systems are deployed in data centers."

The role of temperature monitoring in data center cooling
While the reports paint a bullish picture of the data center cooling market, and indeed of the data center market as a whole, both Allied Market Research and TechNavio noted that energy efficiency and bringing down electricity costs remain paramount concerns for those in the industry. In particular, the AMR report stated that worries over the resources needed to keep chillers and air conditioning units running around the clock in a data center environment could inhibit the cooling market in the future.

Not only are data center operators always concerned about the bottom line and reducing operating costs, but a number of reports released over the past few years have cast a negative spotlight on the space and energy efficiency overall in many data centers. For example, last year the Natural Resources Defense Council reported that data centers in the United States used around 91 billion kilowatt-hours of electricity in 2013. Not only is this the same amount of electricity that 34 large power plants can produce, but it equates to about $13 billion spent by the industry on just energy. These numbers are likely only going to rise further, with the NRDC predicting that the data center industry will need 140 billion kilowatt-hours by 2020.

While a significant percentage of this electricity goes to servers and other pieces of compute equipment, a lot of it is for chillers, air conditioners and other ancillary appliances needed to keep a data center running at optimal conditions at all times. In order to make sure facilities are kept at the right temperature, Energy Star noted that some data center operators will keep server room temperatures at around 55 degrees Fahrenheit, even though some servers available today can operate at temperatures of 90 degrees Fahrenheit or even warmer.

By overcooling data centers, many facilities are using far more electricity than they actually need. Indeed, for every 1 degree Fahrenheit rise in data center temperature above 55 degrees, a data center operator can expect to lower electricity costs by up to 5 percent, according to Energy Star. These kinds of small tweaks and improved practices can help the industry overall cut its electricity usage by up to 40 percent, NRDC estimated, leading to yearly savings of more than $3 billion.
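Those two figures can be combined into a rough back-of-the-envelope estimate, as sketched below. The $1 million baseline cooling bill is a hypothetical number, and treating the per-degree saving as compounding at 4 percent (the conservative end of "up to 5 percent") is a modeling assumption rather than an Energy Star formula.

```python
# Assumed baseline cooling bill, used only to show scale; actual figures vary widely.
baseline_annual_cooling_cost = 1_000_000

# 4 percent saved per degree F raised, the conservative end of Energy Star's "up to 5 percent."
savings_per_degree = 0.04

for new_setpoint_f in (60, 65, 70, 75):
    degrees_raised = new_setpoint_f - 55
    remaining_cost = baseline_annual_cooling_cost * (1 - savings_per_degree) ** degrees_raised
    saved = baseline_annual_cooling_cost - remaining_cost
    print(f"{new_setpoint_f}F setpoint: roughly ${saved:,.0f} saved per year")
```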

"New practices and policies are needed to accelerate the pace and scale of the adoption of energy efficiency best-practices throughout the industry," said Pierre Delforge, NRDC director of high-tech energy efficiency.

But, for data centers concerned with uptime first and foremost, altering the data center temperature too much can be a slippery slope. Raising temperatures is fine, but what happens if such a move causes servers to overheat and break down? To help prevent this scenario from happening, data center operators should adopt temperature monitoring solutions like the Watchdog 15 from ITWatchDogs. With such a solution in place, facilities managers can rest assured knowing that the temperature in a data center is always being tracked and that teams will be immediately notified of any significant change in environmental conditions.

Best places for data centers (Part 2)
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/best-places-for-data-centers-(part-2)-40048520
Mon, 30 Mar 2015 14:50:48 GMT - ITWatchDogs

In part one of this piece we talked about two very different regions that are great places for data centers to be built. The first, Singapore, is appealing because it's both a relatively new data center market - meaning there's a lot of space for development - and also it provides a vital access point into the Chinese market. The second area, Oregon, we included due to recent legislation that will limit the taxes centers in the state have to pay. But those two areas aren't the only optimal ground for colocation facilities. Here are some more:

Hong Kong: In the Asia-Pacific region, there's no destination more desirable for data centers than Hong Kong, as IDC points out. A look at the data centers currently in the city reveals an enormous range of businesses, which is a testament to the popularity of the city as a colocation hotspot. One of the key reasons why the city is so appealing is due to the pre-existing presence of centers there, which creates an environment that is conducive to the growth of new facilities. But more broadly, as IDC explains, the Asian market is an increasingly attractive place for businesses from around the world to build data centers, and Hong Kong represents a part of the Asian market that already has a data center presence to begin with.

Michigan: There are a lot of elements that companies consider when deciding upon a data center location, but one of the first that comes to mind is climate. For instance, building a data center in an area that's oppressively hot year-round is probably not something most businesses want to do. But there's another dimension to climate: the probability of natural disasters. If you put up a center in a cool area that's relatively close to an ocean, for instance, you run the risk - however remote - of a catastrophic weather event like a hurricane. Therefore, the hunt for an optimal climate boils down to finding a region that's cool enough but at low risk of being hit by natural disasters.

As far as checking these boxes goes, Michigan is a true leader. The state, as Online Tech points out, is known for its consistently cool climate, particularly in summer, when temperatures don't tend to get extremely hot. This places Michigan in stark contrast to states in the southern and southwestern United States, which often experience incredibly hot summers that can send data center cooling costs through the roof. The energy associated with running a data center is significant, and that figure only climbs when centers are located in areas where major heat is an issue. With its location in the northern U.S. - as well as its proximity to the Great Lakes - Michigan is dependably mild as far as climate is concerned, which can help keep data center cooling costs down. Colorado is another state like Michigan in this respect, with a cool climate that allows for lower utility bills.

The Netherlands: Gigaom points to the Netherlands as a prime location for data center construction, which is no surprise at all considering how consistently cool the region is.

"The Netherlands provides a strong, clean power infrastructure," the article stated. "For North Americans, it's also the only mainland European country where you'll find that everyone speaks English. The Netherlands' northern European location also provides excellent climate for natural air cooling."

Beyond that, the Netherlands offers a great point of entry to the rest of Europe, which is important for businesses looking to acquire a more global focus.

The locations we covered in this piece and part one are all great places to launch a data center. But wherever a data center is located, it will need monitoring equipment to ensure that facility temperatures are optimal at all times. That's where technology from ITWatchDogs such as the Watchdog 15-PoE can prove invaluable. The 15-PoE is a cost-effective and self-contained unit that enables users to remotely keep tabs on conditions of a data center including temperature and humidity. Monitoring technology like this is an indispensable part of data center design - regardless of where the center is located.

Best places for data centers (Part 1)
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/best-places-for-data-centers-(part-1)-40048502
Mon, 30 Mar 2015 14:15:52 GMT - ITWatchDogs

There are a lot of questions that arise if you're looking to get a new data center up and running. First of all, there will be the issue of cost: How much money are you willing to put into it? Of course, another factor will be size - what is the scale of operation that you're looking to run? But perhaps the most significant choice will boil down to location: Where do you put the center?

It's not an easy question to answer. After all, not every area is conducive to accommodating a data center. And it's not just weather and regional climate-related issues that can make a location unappealing for data center growth. Other factors, such as a sub-par renewable energy presence or a dearth of local talent that might be able to staff the center, can make certain regions less-than-ideal as far as launching a new center goes. Because location is such a pivotal concern, we decided we'd put together a two-part list of the best locations for data centers:

Singapore: Oftentimes, businesses in the United States won't look beyond the country in terms of where to locate their data centers. But that means they're missing out on a truly great location - namely, Singapore. As a Data Center Knowledge article points out, Singapore is quickly becoming a major player in the data center market. Here are a few of the reasons for that:

It's a new thing. Unlike North America and large parts of Europe, Singapore hasn't really had a big presence in the past as far as data center construction is concerned. Its newness to the colocation game makes it a lucrative option for prospective developers, since there's a lot of open space to build centers.

An entry point into China. Singapore itself is a small city-state in southeast Asia. The entire region measures only 276.5 square miles. But as the Data Center Knowledge piece explained, this small area presents a vital access point to a much larger one - China.

"The primary reason the small island nation has such an active data center market is that it has become an Internet gateway between China and the rest of the world," the article explained. This means that Singapore will likely prove an ideal location for Chinese companies looking to reach the North American demographic, and vice versa.

Provides a new market: What does a new market do that a long-established one rarely does? Grow quickly. And that's one of the key drawing points for businesses to Singapore. The burgeoning data center market in Singapore means new opportunities for growth - ones that couldn't exist in well-established data center markets.

Oregon: One of the things companies need to look for when selecting a data center location is whether the chosen area has monetary incentives for colocation facilities. This is something Oregon is right on the cusp of ensuring. According to a separate Data Center Knowledge article, a bill limiting the property taxes data centers will have to pay is currently headed to the state's governor, Kate Brown, for signing. It has been reported that the governor will sign the bill as soon as it lands on her desk.

Unlike Singapore, Oregon is a region that's known for having a lot of data centers, including those belonging to companies like Facebook, Amazon and Apple. What the bill will eliminate is the possibility of central assessment property tax being applied to data centers. This type of tax was initially introduced back in the 1800s as a way to collect taxes from railroads, but some legislators have recently suggested applying this old and outmoded tax to data centers. Without the bill, data centers would face extra taxes. But the bill will stop this from happening, and since it is likely to pass, Oregon is set to remain a data center powerhouse state.

That's it for part 1, but tune in soon for the second part. However, regardless of where you choose to locate a data center, you'll need to make sure it's well-monitored. Otherwise, temperatures could spiral out of control and risk significant damage to equipment and information. Fortunately, products like the Watchdog 15 PoE from ITWatchDogs prevent this from happening.

Study: Older blood, when stored properly, just as good as fresh blood
http://www.itwatchdogs.com/environmental-monitoring-news/research-labs/study-older-blood,-when-stored-properly,-just-as-good-as-fresh-blood-40048636
Mon, 30 Mar 2015 12:36:43 GMT - ITWatchDogs

According to the American Red Cross and other leading bodies in the medical industry, blood can be safely stored and used for up to 42 days after it has been drawn from a donor - so long as it is kept in a room with high-quality humidity and temperature monitoring equipment, of course. But, despite this established best practice, some doctors believe fresher supplies are superior to samples drawn potentially weeks earlier. However, a new study from researchers in Canada and Europe has shown this preference to be false.

Recently, a team of doctors associated with the Ottawa Hospital Research Institute tested to see if blood stored for up to three weeks was more or less effective than freshly-drawn blood. Often doctors prefer fresher samples to administer to patients in need of a transfusion, for the prevailing notion was that red blood cells break down and toxins begin to accumulate in samples over time, even when blood is in an optimized cold storage facility with humidity and temperature monitoring equipment, according to United Press International.

What the study found
The study looked at 2,430 adults who were administered blood via a transfusion over a 90-day period, with about half being given older samples and the other half administered fresh blood. Not only did fewer patients who received older blood die within 90 days of a transfusion, but the researchers overall saw no material difference between the positive outcomes observed among patients in both groups.

"There was no difference in mortality or organ dysfunction between the two groups, which means that fresh blood is not better than older blood," said Dr. Dean Fergusson, a senior scientist at the Ottawa Hospital Research Institute and the University of Ottawa.

This news is significant, as the researchers' findings should help assuage lingering legacy concerns some doctors have about the blood used in a transfusion. Many hospitals and other urgent care facilities find themselves facing blood shortages. On an average day in the United States, approximately 32,000 pints of blood are required for a wide range of procedures, according to Blood Centers of the Pacific. With the American Red Cross reporting that a significant proportion of U.S. residents who are able to donate do not donate blood, hospitals are sometimes faced with a shortage of available supplies. With this research supporting the efficacy of older blood supplies, however, medical care facilities will hopefully find themselves facing critical shortages far less often than before, the researchers noted.

"Not only did blood transfusions help save my life, they also helped keep my mother alive, as she required many blood transfusions over the years, due to a blood disorder," said Ottawa resident Tony Brett, who participated in the research study. "I have also donated blood many times, so it is great to see that people are doing rigorous research to make sure that our blood supply is as safe and effective as possible."

How to properly store blood supplies
While the researchers involved in this most recent study found older blood samples to be just as effective as newer supplies in treating patients, these results can only be repeated elsewhere if the blood is stored under the right conditions. In particular, if a cold storage unit is too warm, the ability of the blood kept there to carry oxygen once it is inside a patient can be greatly diminished, thus severely limiting its effectiveness, according to the World Health Organization. That is why they recommend blood be kept between 2 and 6 degrees Celsius (35.6 and 42.8 degrees Fahrenheit) at all times in storage.

Temperature is not the only environmental variable to consider when storing blood, as other factors like air moisture levels can degrade supplies. In addition to keeping the storage room well insulated and between 2 and 6 degrees C at all times, that location should have a relative humidity level of 60 percent, according to WHO.

In order to make sure the room holding blood storage is always kept at the right temperature and humidity level at all times, medical care professionals should equip these storage units with high-quality, dependable environmental monitoring solutions like the Watchdog 15 from ITWatchDogs. Armed with this technology, doctors, health care facility technicians and others can keep tabs on the conditions inside a blood storage room even when located hundreds of miles away from the spot. That way, teams can take action to save samples as soon as an abnormality is detected.
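A small sketch of that storage-room check appears below, using the WHO figures cited in this article (2 to 6 degrees Celsius and roughly 60 percent relative humidity). The 5-point humidity tolerance is an assumed value, and the function is only an illustration of the logic a monitoring unit automates, not a clinical standard.

```python
BLOOD_TEMP_RANGE_C = (2.0, 6.0)   # WHO storage range cited above
TARGET_HUMIDITY_PCT = 60.0        # WHO relative humidity target cited above
HUMIDITY_TOLERANCE = 5.0          # assumed acceptable drift around the target

def storage_room_issues(temp_c, humidity_pct):
    """Return a list of out-of-range conditions for one storage-room reading."""
    issues = []
    low, high = BLOOD_TEMP_RANGE_C
    if not low <= temp_c <= high:
        issues.append(f"temperature {temp_c:.1f}C outside {low}-{high}C")
    if abs(humidity_pct - TARGET_HUMIDITY_PCT) > HUMIDITY_TOLERANCE:
        issues.append(f"humidity {humidity_pct:.0f}% far from the {TARGET_HUMIDITY_PCT:.0f}% target")
    return issues

print(storage_room_issues(4.5, 58))  # [] - reading is within range
print(storage_room_issues(7.2, 66))  # both conditions flagged for follow-up
```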

]]>Study: Older blood, when stored properly, just as good as fresh bloodhttp://www.itwatchdogs.com/environmental-monitoring-news/research-labs/study-older-blood,-when-stored-properly,-just-as-good-as-fresh-blood-40048636
Mon, 30 Mar 2015 12:36:43 GMTITWatchDogsAccording to the American Red Cross and other leading bodies in the medical industry, blood can be safely stored and used for up to 42 days after it has been drawn from a donor - so long as it is kept in a room with high-quality humidity and temperature monitoring equipment, of course. But, despite this established best practice, some doctors believe fresher supplies are superior to samples drawn potentially weeks earlier. However, a new study from researchers in Canada and Europe has shown this preference to be false.

Recently, a team of doctors associated with the Ottawa Hospital Research Institute tested to see if blood stored for up to three weeks was more or less effective than freshly-drawn blood. Often doctors prefer fresher samples to administer to patients in need of a transfusion, for the prevailing notion was that red blood cells break down and toxins begin to accumulate in samples over time, even when blood is in an optimized cold storage facility with humidity and temperature monitoring equipment, according to United Press International.

What the study found
The study looked at 2,430 adults who were administered blood via a transfusion over a 90-day period, with about half being given older samples and the other half administered fresh blood. Not only did fewer patients who received older blood die within 90 days of a transfusion, but the researchers overall saw no material difference between the positive outcomes observed among patients in both groups.

"There was no difference in mortality or organ dysfunction between the two groups, which means that fresh blood is not better than older blood," said Dr. Dean Fergusson, a senior scientist at the Ottawa Hospital Research Institute and the University of Ottawa.

This news is significant, as the researcher's findings should hopefully assuage lingering legacy concerns some doctors have about the blood used in a transfusion. Many hospitals and other urgent care facilities find themselves facing blood shortages. On an average day in the United States, approximately 32,000 pints of blood are required for a wide range of procedures, according to Blood Centers of the Pacific. With the American Red Cross reporting that a significant proportion of U.S. residents that are able to donate do not donate blood, hospitals are sometimes faced with a shortage of available supplies. With this research supporting the efficacy of older blood supplies, however, hopefully medical care facilities will find themselves faced with critical shortages far less often than before, the researchers noted.

"Not only did blood transfusions help save my life, they also helped keep my mother alive, as she required many blood transfusions over the years, due to a blood disorder," said Ottawa resident Tony Brett, who participated in the research study. "I have also donated blood many times, so it is great to see that people are doing rigorous research to make sure that our blood supply is as safe and effective as possible."

How to properly store blood supplies
While the researchers involved in this most recent study found older blood samples to be just as effective as newer supplies in treating patients, these results can only be repeated elsewhere if the blood is stored under the right conditions. In particular, if a cold storage unit is too warm, the blood kept there can lose much of its ability to carry oxygen once it is inside a patient, severely limiting its effectiveness, according to the World Health Organization. That is why the organization recommends blood be kept between 2 and 6 degrees Celsius (35.6 and 42.8 degrees Fahrenheit) at all times in storage.

Temperature is not the only environmental variable to consider when storing blood, as other factors like air moisture levels can degrade supplies. In addition to keeping the storage room well insulated and between 2 and 6 degrees C at all times, that location should have a relative humidity level of 60 percent, according to WHO.
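As a rough illustration of how those two thresholds could be checked programmatically, the sketch below compares a pair of readings against the WHO figures cited above. The tolerance band, sample values and alert handling are assumptions for the example, not WHO guidance or ITWatchDogs functionality.

```python
# Minimal sketch of a threshold check for a blood storage room, using the
# WHO ranges cited above (2-6 degrees C, roughly 60 percent relative humidity).
# The sample readings and the tolerance band are hypothetical placeholders.

TEMP_RANGE_C = (2.0, 6.0)   # acceptable storage temperature, degrees Celsius
TARGET_RH_PCT = 60.0        # target relative humidity, percent
RH_TOLERANCE = 5.0          # assumed acceptable band around the target

def check_storage_conditions(temp_c, rh_pct):
    """Return a list of human-readable alerts for out-of-range readings."""
    alerts = []
    if not (TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]):
        alerts.append(f"Temperature {temp_c:.1f} C outside {TEMP_RANGE_C[0]}-{TEMP_RANGE_C[1]} C range")
    if abs(rh_pct - TARGET_RH_PCT) > RH_TOLERANCE:
        alerts.append(f"Relative humidity {rh_pct:.0f}% away from {TARGET_RH_PCT:.0f}% target")
    return alerts

if __name__ == "__main__":
    # Example readings; a real deployment would pull these from the monitor.
    for alert in check_storage_conditions(temp_c=7.2, rh_pct=58.0):
        print("ALERT:", alert)
```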

To make sure a blood storage room is kept at the right temperature and humidity level at all times, medical care professionals should equip these storage areas with high-quality, dependable environmental monitoring solutions like the Watchdog 15 from ITWatchDogs. Armed with this technology, doctors, health care facility technicians and others can keep tabs on the conditions inside a blood storage room even when they are hundreds of miles away. That way, teams can take action to save samples as soon as an abnormality is detected.

]]>Green data center initiatives are growing strong in the East and Westhttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/green-data-center-initiatives-are-growing-strong-in-the-east-and-west-40047823
Thu, 26 Mar 2015 16:46:22 GMTITWatchDogsGoing green can help data centers reduce the number of issues they face in their day-to-day operations. Not only does it lower their overall carbon footprint, but it also cuts the cost of running a data center. In fact, data center operators across different continents are currently competing to research better ways of crunching information. Cloud computing, data analytics and the increasing use of databases to handle extremely large amounts of information have made many organizations reliant on large data processing centers to get business done. Green data centers can prevent this information processing from becoming prohibitively expensive.

The increasing need for data centers has led China's Ministry of Industry and Information Technology, along with a few other Chinese agencies, to release a document called the Guideline for Pilot Projects of Green Data Centers. This set of rules is designed to help companies follow energy-saving advice put forth by the State Council. One major reason China is pushing for better efficiency is that the country is lagging internationally: The U.S. has an average power usage effectiveness (PUE) of 1.9, whereas China's average is 2.2.
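For readers unfamiliar with the metric, power usage effectiveness is simply total facility energy divided by the energy delivered to IT equipment, so a PUE of 1.9 means nearly a watt of overhead for every watt of IT load. The short sketch below illustrates the calculation with made-up meter readings.

```python
# Illustrative PUE (power usage effectiveness) calculation. PUE is total
# facility energy divided by the energy delivered to IT equipment, so the
# 1.9 vs. 2.2 averages above imply roughly 0.9 W of overhead per watt of IT
# load in the U.S. versus 1.2 W in China. The meter readings are made up.

def pue(total_facility_kwh, it_equipment_kwh):
    """Compute power usage effectiveness; 1.0 is the theoretical ideal."""
    return total_facility_kwh / it_equipment_kwh

it_load_kwh = 100_000  # hypothetical monthly IT energy use
print(f"US-average-style facility:    PUE = {pue(it_load_kwh * 1.9, it_load_kwh):.2f}")
print(f"China-average-style facility: PUE = {pue(it_load_kwh * 2.2, it_load_kwh):.2f}")
```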

Staying efficient with power consumption
One way that many organizations may try to lower their centers' consumption is through the use of remote temperature sensors. Some of these, like the WatchDog 15, are able to monitor a variety of different data points. For example, the WatchDog 15 can monitor both temperature and humidity, allowing a company to keep a close eye on exactly how hot its systems are running and whether or not ambient humidity is something to be concerned about.

Through the use of temperature sensors, companies can achieve extremely efficient data center designs. With greener centers, a company's operations can be carried out more quickly and, hopefully, with fewer byproducts than before. Keeping carbon footprints low while processing large amounts of information is quickly becoming not only an environmentally friendly but also a wallet-friendly way of doing business. By investing in a green design for their data centers, companies showcase their standing as good corporate citizens of the world while also demonstrating business savvy.

]]>How temperature monitoring makes chocolate even betterhttp://www.itwatchdogs.com/environmental-monitoring-news/research-labs/how-temperature-monitoring-makes-chocolate-even-better-40047795
Thu, 26 Mar 2015 09:09:47 GMTITWatchDogsBy monitoring the temperature at and processes by which cacao beans are roasted and stored, the final product has a higher percentage of healthful antioxidants and a richer flavor, according to new research.

Americans love chocolate - according to Leatherhead Food Research, the average person in the U.S. eats more than 12 pounds of chocolate in a given year. Considering that a typical chocolate bar weighs around 1.5 ounces, that works out to roughly 128 chocolate bars annually.

And while a chocolate bar more often than not contains a lot of sugar and fat - two things that are not very good to eat in large volumes - chocolate does have some health properties. According to the Cleveland Clinic, flavonol, a type of antioxidant found in chocolate, can help improve the flow of blood to the brain and vital organs while also helping to reduce blood pressure.

Flavonol levels are at their highest in raw unprocessed cacao, and they drop as the cacao bean goes through the fermentation and roasting processes necessary to turn the raw materials into chocolate. But, according to new research from scientists at the University of Ghana and the University of Ghent, flavonol levels do not drop off as much when cacao pods are allowed to settle prior to fermentation and when the beans are roasted at a lower temperature, the American Chemical Society reported.

Using conventional cacao processing methods, pods are set aside to ferment, then they are dried. After that, the beans are extracted from the pods and roasted at between 248 and 266 degrees Fahrenheit for 10 to 20 minutes, the ACS reported.

However, a team led by the University of Ghana's Emmanuel Ohene Afoakwa found that when pods are set aside for seven days prior to fermentation - a process called pulp preconditioning - and the beans are then roasted at 242 degrees F for approximately 45 minutes, the resulting flavonoid content is much higher than in beans processed using more traditional methods.

"This aided the fermentation processes and enhanced antioxidant capacity of the beans, as well as the flavor," Afoakwa said about pulp preconditioning, according to the ACS.

Afoakwa also noted that by setting pods aside for a while and roasting beans at a lower temperature, the flavor of the resulting chocolate is richer and more intense. For regions currently known to produce less flavorful chocolate, this subjective finding could be huge, as it means a change in processing could yield a more desirable final product likely to fetch a higher price.

Temperature monitoring's role in boosting chocolate's antioxidant levels
In order for cacao growers and chocolate producers to best use the findings of this new research, they will need high-quality humidity and temperature monitoring solutions like the Watchdog 100 from ITWatchDogs in place. Without proper oversight, the cacao beans may spoil and yield an inferior product.

Any cacao pods that are set aside prior to fermentation must be kept under ideal conditions for the seven-day period. According to the German Insurance Association's Transport Information Service, cacao pods should be stored in locations kept below 77 degrees F, and ideally at 59 degrees F or cooler. In addition, these locations should have a relative humidity of around 70 percent. If the storage room is too warm or too moist, the pods could go rancid, prematurely ferment or over-ferment.
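As a rough sketch of how those storage figures could be turned into an automated check, the example below grades hypothetical readings against the cited limits. The tolerance band and sample values are assumptions for illustration, not guidance from the Transport Information Service.

```python
# Sketch of a storage-condition check for cacao pods during the seven-day
# preconditioning window, using the figures cited above (below 77 F, ideally
# 59 F or cooler, roughly 70 percent relative humidity). The humidity
# tolerance and the sample readings are assumptions.

MAX_TEMP_F = 77.0
IDEAL_TEMP_F = 59.0
TARGET_RH_PCT = 70.0
RH_TOLERANCE = 10.0  # assumed acceptable band around the humidity target

def grade_pod_storage(temp_f, rh_pct):
    """Classify a reading as 'ok', 'warning' or 'out of range'."""
    if temp_f > MAX_TEMP_F or abs(rh_pct - TARGET_RH_PCT) > RH_TOLERANCE:
        return "out of range"
    if temp_f > IDEAL_TEMP_F:
        return "warning"  # acceptable, but warmer than ideal
    return "ok"

for reading in [(58.0, 68.0), (65.0, 72.0), (80.0, 85.0)]:
    print(reading, "->", grade_pod_storage(*reading))
```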

Furthermore, since even a variation of 6 degrees F in the roasting temperature can affect the product's final antioxidant levels, it's important to make sure the roasting apparatus is operating at a consistent, trackable temperature. By having humidity and temperature monitoring solutions throughout the locations where cacao pods and beans are stored and processed, farmers, harvesters and chocolate producers can be sure that the final chocolate product has a maximized level of antioxidants in it.

]]>Iceland and Sweden: Data center hot (or rather, cold) spotshttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/iceland-and-sweden-data-center-hot-(or-rather,-cold)-spots-40047742
Wed, 25 Mar 2015 16:58:16 GMTITWatchDogsWhen a company needs a data center, there are a lot of factors that go into deciding where to put it. For starters, there's the issue of cost - after all, putting up such a facility is not cheap. And then there's location: Does the business want a center that's close? Can it afford to maintain a center located in a remote area? Finally, there's the consideration of overall efficacy, which centers on questions like, "Will we be able to maintain optimal server room temperature at the location?"

As a Computerworld article pointed out, questions like these often prompt businesses to broaden their horizons when deciding where to launch a data center. Sometimes, the best option for a company's data center isn't in the neighboring town. Sometimes it's across the country - or even the world. That's what Iceland and Sweden are banking on.

Iceland could soon become a new data center hotspot
Landsvirkjun is Iceland's national power company. In early March, some of its high-ranking executives made the long journey to Boston to talk energy strategy with experts at the Massachusetts Institute of Technology. The conversation also went beyond that, examining the idea of Iceland as a place with enormous potential for data center growth.

It's not hard to see why this is the case: The country is cool all year long, which would mean that data centers would have to spend significantly less on HVAC systems and other tools to drive facility temperature down.

"Our power generation in Iceland is predominantly based on hydropower, but we are increasingly building out into geothermal turbines and now wind farms as well," stated Landsvirkjun EVP of business development Bjorgvin Sigurdsson, according to Computerworld. "Because all of these options are renewable, Iceland is able to make long-term agreements at fixed prices. And we're not influenced by the changes in commodity markets -- oil or gas or coal."

But while the optimal climate and renewable resources of the region are a definite advantage for Iceland, there is one drawback: its remote location. Will this be enough to keep companies away? Probably not, considering the highly mobile and connected nature of the world we live in. With the clear benefits it offers in terms of cost savings and leaving a small footprint, it is highly probable that Iceland will emerge as a key region in terms of data center locations in the near future. After all, the growth of data within enterprises demands more cost-effective and higher-capacity storage options, and this seems like something Iceland is more than prepared to offer.

Sweden is another notable data center location
Iceland isn't the only cool-climate country making data center headlines. As Gizmag reported, Sweden has been in the data center press lately due to the construction of EcoDataCenter, which, according to the article, "will be the world's first climate-positive data center, utilizing various techniques and technologies to ensure a positive impact on the wider world." That's a great thing to hear, but what exactly does it mean? According to the article, the upcoming center will:

Give back energy: Data centers are massive energy-consuming operations, but that doesn't mean they can't give back. Sweden, like Iceland, is a naturally cold place, with the temperature usually hovering at around 41 degrees Fahrenheit. But buildings in the region will get a little warmer with EcoDataCenter, which will employ a system wherein excess heat that the center generates will be routed into keeping area buildings warm.

Use renewable energy: As for the energy that the data center itself will use, it will come from renewable sources. Water, wind and solar energy will all be funneled into the center, meaning that its energy consumption will leave a small footprint.

Actually make a negative carbon footprint: Believe it or not, when all is said and done, EcoDataCenter will actually be giving back more than it takes in the energy department. Its projected negative carbon footprint stems not only from the fact that it uses renewable energy and supplies heat to local buildings, but also from plans to use flowering plants on the roof to help with cooling in the hot months and to run its equipment partly on excess steam from a nearby electric plant.

Construction on the first part of the center is set to be completed by the first fiscal quarter of 2016. The emergence of both Sweden and Iceland as notable data center destinations is clear proof that the data center market is truly a global one. But if you take advantage of Sweden's or Iceland's optimal climate and choose to locate your data center there, don't forget about the importance of temperature monitoring. By deploying a real-time monitoring solution like the Watchdog 100, you can ensure your facility runs at its best.

]]>New research finds major breakthrough for nanotechnology, temperature monitoringhttp://www.itwatchdogs.com/environmental-monitoring-news/research-labs/new-research-finds-major-breakthrough-for-nanotechnology,-temperature-monitoring-40047713
Wed, 25 Mar 2015 15:23:24 GMTITWatchDogsA team of researchers based in the United States and Japan recently discovered why a metal compound goes from being magnetic to non-magnetic at a specific temperature, and their findings could be critical for next-generation nanotechnology and even everyday temperature monitoring apparatuses like air conditioning units.

At around minus 384 degrees Fahrenheit, the material compound ytterbium-indium-copper-four (YbInCu4) goes from being highly magnetic to not magnetic at all. This change is not unique to YbInCu4 - it happens in some other materials and since the 1960s has been called the Kondo Effect, according to Physics World contributors Leo Kouwenhoven and Leonid Glazman. However, physicists did not know precisely how this change happened, nor did they have any mechanism for altering it - until now.
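For readers more used to metric units, the short conversion below shows that minus 384 degrees Fahrenheit corresponds to roughly minus 231 degrees Celsius, or about 42 kelvin; the code is purely an arithmetic illustration.

```python
# Quick unit conversion for the figure above: minus 384 degrees Fahrenheit
# expressed in Celsius and kelvin, to put the transition temperature in more
# familiar laboratory units.

def f_to_c(temp_f):
    return (temp_f - 32.0) * 5.0 / 9.0

def c_to_k(temp_c):
    return temp_c + 273.15

t_f = -384.0
t_c = f_to_c(t_f)
print(f"{t_f} F = {t_c:.1f} C = {c_to_k(t_c):.1f} K")  # roughly -231 C, about 42 K
```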

Recent research from University of Connecticut professor Jason Hancock, Brookhaven National Laboratory physicist Ignace Jarrige and teams from Japan and the Argonne National Laboratory has further illuminated how the Kondo Effect happens with YbInCu4 and how it can be more effectively controlled. According to R&D Magazine, they found an electronic spectrum gap that explains how the Kondo Effect occurs in this material.

"Our discovery goes to show that tailored semiconductor gaps can be used as a convenient knob to finely control the Kondo Effect and hence magnetism in technological materials," said Jarrige.

This is potentially a major discovery as it illuminates how materials conduct electrons and how nanotechnology can be further fine-tuned. Nanotechnology is one of the fastest growing fields out there, especially with the rise of smartphones and wearables. The nanotech market was worth around $26 billion in 2014, but its valuation is expected to grow at a compound annual rate of close to 20 percent between last year and 2019 to likely reach more than $64 billion by the end of the decade, according to BCC Research. Armed with this knowledge, those in the nanotech space can have greater control over the materials they use, R&D Magazine reported, meaning that the market's outlook is brighter than ever.

"In fact, interest in the Kondo Effect has recently peaked thanks to new experimental techniques from the rapidly developing field of nanotechnology, which have given us unprecedented control over Kondo systems," Glazman and Kouwenhoven wrote.

How this research affects temperature monitoring technology
As far as temperature monitoring is concerned, this research is relevant for two primary reasons. For one, it could provide the framework for more energy-efficient air conditioning units: According to R&D Magazine, the findings could allow scientists to use magnetic fields instead of electricity to change the surrounding temperature. Under such a system, the amount of energy needed to, for example, keep a refrigerator or a cold aisle at a specific temperature would drop dramatically.

In addition, these recent experiments provide yet another use case for temperature monitoring equipment like the Watchdog 15. Since the outcome of this work is so temperature-dependent - a dramatic shift in room temperature would affect the magnetism of YbInCu4 that is central to the researchers' results - scientists need to ensure that the rooms in which their work takes place remain under consistent environmental conditions. Armed with tools like the Watchdog 15 from ITWatchDogs, researchers can make sure that external variables do not negatively impact their critical work.

]]>Recent international incidents highlight benefits of remote monitoring solutionshttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/recent-international-incidents-highlight-benefits-of-remote-monitoring-solutions-40046662
Fri, 20 Mar 2015 16:34:10 GMTITWatchDogsFor data center providers and facility managers, high-quality remote monitoring solutions tailor-made to keep tabs on everything from temperature and humidity to power and light are paramount. While all data centers and similar compute facilities rely on these tools, the need for them is especially high internationally. Maintaining ideal conditions is hard enough when a data center is just down the road or the next state over from company headquarters; keeping servers housed in appropriate setups overseas is sometimes exceedingly difficult.

The cost of downtime is high no matter where a facility is located, which is why companies with a global data center footprint need remote monitoring solutions to effectively keep tabs on their offerings. As two recent incidents show, the stakes are high for organizations with widely distributed infrastructure to guarantee uptime, as a failure to do so can lead to a catastrophic fallout.

Fire breaks out in Ghana data center
In early February, a data center operated by 4G LTE network services provider Surfline in the West African nation of Ghana was brought down for more than four hours by a fire, according to The Stack. At around 4 p.m. local time, the blaze was first spotted. Crews were reportedly able to quickly arrive to put out the fire, although IT News Africa reported that Surfline was only at 90 percent capacity by Feb. 16, which is approximately two weeks after the fire occurred.

To make up for the outage caused by the blaze, Surfline offered free data to customers in its aftermath, with plans to allow continued free network usage while the company worked to bring everything back to 100 percent capacity and service, according to IT News Africa. The affected data center is located in Tema, which is approximately 32 kilometers east of Accra, Ghana's capital.

Extreme conditions take down Perth data centers
Fujitsu, the international IT solutions and services provider, suffered an embarrassment in early February when a bad thunderstorm caused its data center in the Western Australian city of Perth to lose power. According to Data Center Knowledge, that facility first lost power for about 90 minutes, with a longer outage immediately following related to a separate failure of an unspecified control mechanism. The source of the issues has not yet been identified, but IT News reported that customers affected by this outage included Bankwest and Western Australia's Department of Health.

This incident is not the only example of the elements taking down a data center in the Perth area earlier this year. In January, a combination of events that included an extreme heat wave and an HVAC failure led to unplanned downtime in a data center owned by local Internet service provider iiNet, which is the second-largest provider of DSL services in Australia, Data Center Knowledge reported separately. On that day, outside temperatures reached a record high of 112 degrees Fahrenheit, making it an unfortunate day for the data center's air conditioning system to fail. All told, the facility was affected for between six and seven hours, and while iiNet said 98 percent of its customer broadband services were not negatively impacted by this incident, email and Web hosting services were affected.

How to prevent similar outages
For data center operators and providers, these three stories provide yet more proof of the necessity of remote monitoring solutions. No matter how much planning is done, unexpected disaster can strike at any time. Considering that network downtime is now measured in seconds and minutes instead of hours and days, even the smallest issues can have major ramifications down the road. Not only is it expensive to repair after significant damages, but these kinds of incidents linger in the minds of customers and cause them to seek out alternative providers.

By employing remote monitoring systems, facility managers can keep a closer eye on mission-critical equipment at all times to make sure everything is running smoothly and no unexpected hiccups arise. Whether a company has just one facility a town away or a widely distributed IT infrastructure footprint with data centers across the globe, environmental monitoring systems built specifically with the needs of data centers in mind can mean the difference between perpetually happy customers and a failed IT investment that causes irreparable harm to reputations and businesses.

]]>Factors to consider when building a new data centerhttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/factors-to-consider-when-building-a-new-data-center-40043156
Fri, 13 Mar 2015 12:15:13 GMTITWatchDogsWhen building a new data center, companies need to take several important factors into consideration. Building these facilities is no small endeavor, requiring a large investment of both time and capital for researching, constructing and maintaining them. Data centers consume huge quantities of energy not only to keep the servers operational, but also to maintain the low temperatures these servers need to safely and effectively continue running.

The facilities used to house the tech giants' servers are massive and oftentimes require coordination with local officials to ensure that utilities and public services are capable of handling these behemoth structures. With all the factors involved in building and operating a data center, the location of the facility should not be overlooked.

Avoiding and embracing nature
While one of the biggest dangers to a data center is a self-created fire from overheated servers, several external threats also exist. Mother Nature's wrath can be a beauty to behold, but it can be devastating for a data center. Tornadoes, earthquakes and hurricanes have the potential to wipe out entire facilities, laying waste to hundreds of millions of dollars in equipment.

Thankfully, Pingdom created a handy map that illustrates the hot spots where these natural disasters are most likely to strike. Collectively, the stretch of the U.S. where tornadoes, earthquakes and hurricanes are likely covers a majority of the country. However, several locations - notably in Montana, Minnesota, Wisconsin and Michigan - sit outside of these hazard areas. The map does not claim that these spots are entirely disaster-free, but rather that they have the lowest probability of being hit by one of these three destructive forces.

On the other hand, engineers can use nature to their advantage. According to the Wall Street Journal, data centers in Washington and Oregon take advantage of abundant hydroelectric power sources to keep electricity rates low, while Iowa provides abundant wind power that also brings down energy costs.

Some companies are even going so far as to build data centers in cold-weather countries to minimize power and cooling costs. Iceland, Sweden and Finland have all become enticing locations for businesses due to their chilly climates.

Follow the money
Tax incentives are another big lure for data centers. Many lawmakers have been passing legislation offering tax breaks to companies that build data centers in their states. Since data centers are such massive undertakings that can involve billions of dollars, companies naturally welcome any additional incentives that cut costs and avoid roadblocks.

Many of these tax breaks involve exemptions on paying property taxes for a fixed number of years, or garnering tax refunds. Critics of these laws contend that the companies' contributions to local governments are negligible and that these are essentially giveaways. But states are not listening to the critics, as is evidenced by the 19 states that have jumped on the tax break bandwagon.

Iowa was an early pioneer of introducing tax credits aimed at drawing data centers to the state, having passed legislation back in 2007. Google, Facebook and Microsoft have used these tax breaks to build immense warehouse-sized structures.

Arizona passed a bill in 2013 offering incentives, and then amended it in early 2015 to include additional renewable energy tax credits. Already Apple and Microsoft have both taken advantage of the breaks and plan on spending millions of dollars to build sprawling facilities there.

Earlier this month the Oregon state Senate approved a new tax bill in the hopes of encouraging additional data center investment, while legislators in Nevada, North Dakota and North Carolina are all scrambling to pass bills they hope will attract data centers of their own.

Regardless of whether a data center has been built in a cold location or if it receives numerous tax breaks, it still needs to be properly monitored and kept at a precise temperature. Without constant up-to-the-minute climate readings, servers are liable to overheat and burst into flames, which can damage not only the server and the data stored on it, but also the entire facility. The WatchDog 100-P is a self-contained unit with built-in PoE and on-board temperature and humidity/dew-point sensors. This device provides facility managers with the precision and accuracy necessary to ensure the safety and well-being of even the biggest data centers.
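As an aside on the dew-point reading such a unit reports, the figure can be approximated from temperature and relative humidity alone using the well-known Magnus formula. The sketch below is purely illustrative and is not how the WatchDog 100-P computes its values.

```python
import math

# Illustrative dew-point estimate from temperature and relative humidity,
# using the common Magnus approximation (coefficients a = 17.62, b = 243.12 C,
# reasonable roughly from 0 to 50 C). A monitoring appliance reports dew
# point directly; this is only to show where the number comes from.

def dew_point_c(temp_c, rh_pct):
    """Approximate dew point in degrees Celsius from temperature and RH."""
    a, b = 17.62, 243.12
    gamma = (a * temp_c) / (b + temp_c) + math.log(rh_pct / 100.0)
    return (b * gamma) / (a - gamma)

# Example: a 24 C cold aisle at 45 percent relative humidity has a dew point
# near 11 C, comfortably below typical chilled-surface temperatures.
print(f"Dew point: {dew_point_c(24.0, 45.0):.1f} C")
```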

Improvements in technology have created the opportunity for data centers to synchronize these processes. This means redundant processes can now overlap and facility managers can better track and analyze the mountains of data these machines are constantly producing, which leads to greater efficiency and increased savings.

What kinds of technology are paving this pathway to greater efficiency?

Network virtualization
Previously, data center processes happened on segmented devices despite being in a shared, networked environment. With the advent of network virtualization, data centers can ditch significant amounts of hardware as virtual devices now perform these same tasks. This frees up funds that would have otherwise been allocated to powering, cooling and racking equipment.

Network virtualization should theoretically make data centers more agile while also simplifying device provisioning and centralizing network configurations into virtual devices. This should reduce troubleshooting time and costs as well as lessen the management involvement these matters require. Operators no longer need to physically wire up new domain connections, nor do they need to alter work they've already done.

Software Defined Networking
Various data center components, such as the server, the network, the firewall and the load balancer, create and maintain logs of their own activities. Traditionally, these logs are stored in different locations around the data center and kept siloed off from each other. Reaching greater efficiency involves correlating all of this information so that operators can access and analyze it.

Software defined networking takes aim at this segmentation and seeks to combine all this data into one streamlined function. With this sort of functionality, data center operators can program the network by separating the control plane from the data plane.
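To make the correlation idea concrete, the sketch below merges hypothetical log entries from a firewall, a load balancer and a server into one timeline and groups events that occur within a few seconds of each other. The log formats, field names and time window are invented for the example and do not correspond to any particular SDN product.

```python
from datetime import datetime

# Hypothetical log entries from three different data center components.
logs = [
    {"source": "firewall",      "time": "2015-03-10T15:02:11", "event": "blocked port scan"},
    {"source": "load-balancer", "time": "2015-03-10T15:02:13", "event": "backend pool degraded"},
    {"source": "server",        "time": "2015-03-10T15:02:14", "event": "high CPU on web-03"},
]

def correlate(entries, window_seconds=5):
    """Group consecutive entries whose timestamps fall within window_seconds."""
    if not entries:
        return []
    ordered = sorted(entries, key=lambda e: e["time"])
    groups, current = [], [ordered[0]]
    for prev, nxt in zip(ordered, ordered[1:]):
        gap = (datetime.fromisoformat(nxt["time"]) -
               datetime.fromisoformat(prev["time"])).total_seconds()
        if gap <= window_seconds:
            current.append(nxt)
        else:
            groups.append(current)
            current = [nxt]
    groups.append(current)
    return [group for group in groups if len(group) > 1]

for group in correlate(logs):
    print("Correlated events:")
    for entry in group:
        print(f"  [{entry['time']}] {entry['source']}: {entry['event']}")
```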

Even with this surge in new technology, the customization of legacy equipment means operators will most likely end up cobbling together a hybrid system that mixes various types of network virtualization and software-defined networking, among other features.

No matter how operators configure their systems, servers need to be stored in climate-controlled environments, and data centers can ensure that temperatures stay optimal with the proper equipment. Using the Watchdog 15, which has temperature, humidity and dew point sensors, or the Watchdog-1400 POE, which monitors those same indicators plus the data center's airflow, light and sound via external sensors, keeps the facility working at its optimal capacity - one of the keys to better efficiency.

]]>Comparing copper cables and fiber optic for data centershttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/comparing-copper-cables-and-fiber-optic-for-data-centers-40043042
Tue, 10 Mar 2015 15:14:12 GMTITWatchDogsAs a company seeks ways to shed excess waste, it will oftentimes go right for IT, figuring that surely the tech department is using superfluous gadgets or overpriced hardware. In other instances, a CEO with little to no tech experience will read an article and demand that the IT department make unnecessary upgrades or ineffective retrofitting.

The emergence of fiber optic cables helped usher in a digital age of increased distances and decreased electromagnetic interference. Due to these unique features, many have been quick to jump on the optic bandwagon without carefully considering the options available.

Since data centers must handle the needs of companies to access and transfer thousands of data transmissions, it's important that they are working as efficiently as possible. These transmissions happen around the clock and are essential for relaying and storing sensitive business information. Data centers need to instantaneously compute these transmissions with minimal interference.

While it might still be some time before a company can allocate the appropriate resources for the IT staff to replace its cables, it is still good to figure out early which option is better suited to the unique needs of a data center.

Benefits of copper
In some instances, next generation data centers can utilize copper wires instead of fiber optic lines and still see many cost-saving and performance benefits.

Since connections inside a data center often only need to extend a few meters, copper cables are a good fit: They can link up short distances without sacrificing effectiveness or reliability. At a cost two to five times less than fiber optics, these cables can offer considerable savings.

From the PUE perspective, copper cabling itself consumes no power and requires less cooling, which translates into greater savings on temperature control. But even with copper wires, monitoring and maintaining a steady climate in a data center is crucial to its long-term success and sustainability. Using the Supergoose II/Watchdog 1250 Climate Monitor is a great way to keep tabs on the data center's temperature.

Benefits of fiber optics
Compared with copper cabling, fiber optics offer a much larger range and less interference during the data transfer.

Some volume-intensive data centers require high-performance cables capable of supporting the sheer number of hourly transmissions that come through them. Fiber optic cables have shown a better ability to handle the higher bandwidth involved in more complex applications such as cloud computing, e-commerce and data backup.

Perhaps the most promising aspect of fiber optic cables is their ability to handle connection speeds ranging from 40Gbps all the way up to 100Gbps, which allows for a higher frequency of superior quality transmissions.

While the jury is still out on which option is better overall, both have pros and cons that make a big impact on businesses. For now, the best bet is to utilize a hybrid system of both copper and fiber cables, since the cost of retrofitting the data center entirely might be prohibitive for some businesses. Ultimately, the choice belongs to whoever is responsible for the network's infrastructure building plan.

]]>Determining a data center's lifespanhttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/determining-a-data-centers-lifespan-40042593
Tue, 10 Mar 2015 10:01:40 GMTITWatchDogsAccording to experts, a data center has an average "design" life of a mere 10 to 15 years, a lifespan that will include numerous upgrades that come with additional power usage. And because data centers are no small investment - many require millions of dollars and a couple of years to complete - companies might choose to hold on to a facility for longer than necessary.

But what should a company do when the data center either runs the course of its life or no longer serves the specific needs it was originally intended to solve? By determining what to do with the data center as it nears its end of life, companies can more adequately adjust future costs.

Retrofitting
If a data center merely needs upgrades for efficiency or capacity, retrofitting might be the best choice. Cooling, power and IT systems nearing the end of their life can be upgraded to allow for maximized capacity.

Data centers need constant cooling, as overheated servers can lead to fried hard drives and even fires - both of which can devastate a facility. This makes constant temperature monitoring paramount for any data center. By effectively using the Relay Goose II/Watchdog 1400 Climate Monitor to maintain the desired environment, managers can keep their data centers operating longer and more effectively.

Replacing
Sometimes the expense of retrofitting an entire facility ends up approaching the price of building an entirely new one. Despite this higher cost, building a replacement center can be the best option. For instance, if new, more efficient equipment is unusable at the old facility, or if there is no more room to expand, then building a new data center becomes a viable option. Other times, a company might have several data centers in different locations, and consolidating all the servers in a single location can be much more efficient and cost-effective.

Selling
If a company's data center has reached storage capacity, or if the company simply wants to trim costs and neither retrofitting nor replacing is the best route, it could be time to sell the facility. This option really only works if the facility is not nearing its end of life. It is wise to consider the data center's long-term lifespan to decide the best time to contemplate selling the space.

The old financial maxim of "Buy low, sell high" has a universal application in the world of business, and data centers are no exception. Right now, the time is right to sell. According to Data Center Knowledge, Silicon Valley, for instance, has seen a surge in demand for data center space, with the demand currently three to four times higher than the supply. Many private equity firms are looking into purchasing data centers. These facilities represent a stable asset since tenants are more likely to renew their leases after spending large sums for a center's equipment. By selling the unit, owners can recoup the initial investment and cease any additional revenue loss due to depreciation.

]]>Growth in cloud computing means more data centers for Microsofthttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/growth-in-cloud-computing-means-more-data-centers-for-microsoft-40042308
Wed, 04 Mar 2015 15:57:30 GMTITWatchDogsAs cloud computing continues to grow, data centers are moving from the background into the foreground as tech companies buy, build and plan to build even more space for these facilities. A recent Gartner report concluded that cloud computing will account for the bulk of new IT spending by 2016.

"In India, cloud services revenue is projected to have a five-year projected compound annual growth rate (CAGR) of 33.2 percent from 2012 through 2017 across all segments of the cloud computing market," reported Ed Anderson, Research Director at Gartner. "Growth in cloud services is being driven by new IT computing scenarios being deployed using cloud models, as well as the migration of traditional IT services to cloud service alternatives."

In an effort to keep pace with this need to handle users' computing workloads, Microsoft will be building data centers in several locations over the coming months and years. The company will be building so many new centers that it is almost difficult to keep track of all the proposed plans - and these come in addition to the data centers the company already operates in Chicago, San Antonio and Quincy, Washington.

Microsoft's newest data centers
With so much activity happening around cloud computing, many of the tech giants need to grow their data centers to service the rising demand. Microsoft is no different from its competitors, having spent $5.3 billion over the past four quarters on infrastructure - costs which include real estate purchases, production equipment and data center construction.

The Redmond, Washington-based company is spending approximately $200 million to expand one of its existing data centers located in Cheyenne, Wyoming. This would bring the total investment into the site up to $750 million, which includes a $5 million grant from the Wyoming Business Council.

Construction workers have already begun work on a Microsoft facility in West Des Moines, Iowa. The company and the city are both investing millions of dollars in the space and the surrounding infrastructure, and officials hope this will spur future development. With two of the four buildings beginning to take shape, the sheer size of this project cannot be overstated.

"This is the first type of project like this that Microsoft has ever done," announced Clyde Evans, West Des Moines Economic Development director. "It's probably one of the largest data centers that anybody has ever done in the country."

Meanwhile, Microsoft has announced that it will transform a former Honeywell International plant into a new data center. The facility, located in north Phoenix, has been sitting empty for a few years, but it will be the future home to 575,000 square feet of servers. This announcement comes on the heels of Apple recently stating it would be opening a data center in Mesa as well. Both companies are likely to take advantage of a recent state bill that grants tax incentives to companies that invest $1.25 billion in the state over a 10-year period.

No matter how large or small a data center is, constant temperature checks and climate control are necessary to ensure that servers do not overheat and no fires start on the premises. A device like the Room Air Controller (RAC10) transfers hot air with dual temperature-regulated fans, keeping a server rack or an entire IT environment at the right temperature.

]]>Monitoring and efficiency go hand in handhttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/monitoring-and-efficiency-go-hand-in-hand-40041885
Tue, 03 Mar 2015 18:05:14 GMTITWatchDogsPerformance management is often neglected by data centers. If there is a single major problem that data center operators have, it's viewing every part of their operations within its own silo. Data center temperature monitors exist by themselves, separate from power monitoring tools, apart from network trackers, and independent of larger views of the data center network's activity.

This isolated view of how centers work simply isn't as effective as approaches that use all of these different metrics together. For centers to reach peak efficiency, they should learn to utilize different sensors holistically in order to get a sense of how their network works on both a physical and digital level.

One method that many companies are using to track their information flow is cloud-based infrastructure. This proposition actually works quite well: Data centers are free to outsource the number crunching of how their own processors are doing, which gives them the information without leaving an extra footprint. Many data center temperature monitoring solutions, like the WeatherGoose II, are compatible with this kind of setup. They can simply send emails or text messages directly to a cloud server, allowing that program to record the temperatures the sensors report as they arrive.
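The reporting pattern is simple enough to sketch: a reading that crosses a threshold is emailed to a collector address, where a cloud-side process can log and analyze it. The addresses, SMTP host and threshold below are placeholders, not settings from the WeatherGoose II or any other product.

```python
import smtplib
from email.message import EmailMessage

# Minimal sketch of email-based sensor reporting. A reading over the threshold
# is sent to a collector mailbox for cloud-side logging and analysis. All
# hosts, addresses and values here are hypothetical placeholders.

ALERT_THRESHOLD_C = 27.0

def send_reading(sensor_id, temp_c):
    msg = EmailMessage()
    msg["From"] = "sensor@example-datacenter.local"
    msg["To"] = "readings@example-cloud-collector.com"
    msg["Subject"] = f"{sensor_id}: {temp_c:.1f} C"
    msg.set_content(f"sensor={sensor_id} temperature_c={temp_c:.1f}")
    with smtplib.SMTP("smtp.example-datacenter.local") as smtp:
        smtp.send_message(msg)

reading = 28.4  # hypothetical cold-aisle reading
if reading > ALERT_THRESHOLD_C:
    send_reading("rack-07-top", reading)
```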

This kind of reporting can give an otherwise all-digital view of the center a physical dimension, which is useful for administrators when it comes to understanding how a storm might affect a center based on how similar storms have in the past. Analytics of this kind are already being performed in data centers for other industries. It only makes sense that centers would begin to run analytics on themselves - they are one of the only fields that already have the computing power necessary to do so.

Data centers need to become more efficient as they consolidate
Government data center consolidation and network analytics are not always thought of as two concepts that naturally lead into each other, but they share an important thread. As the federal government continues to merge its existing data centers in the name of efficiency, it will need its facilities to become increasingly efficient even at very large scales. Yet it is always more difficult to account for heat fluctuations and network issues in large configurations than in smaller ones. This simply comes down to variables - two things are more difficult to keep track of than one, and a million things are harder to follow than one hundred. The United States wants to have fewer than 1,000 data centers soon, according to Federal Times, so it is examining the use of pod data centers in order to reduce PUE.

This example of moving forward with the science of data center technology is exactly why many other companies need to begin working with current configurations. Iterative design and the ability to quickly move from project to project as companies need is one of the things that can help groups that want to stay ahead of the curve in terms of efficiency. Data center temperature sensors and power monitors allow organizations to reduce their monthly costs, which is always important. By utilizing all the tools available to make serious changes in the way that current data centers are used and operated, companies can find superior ways to construct their networks. The physical and digital sides of a center don't exist independently, so utilizing sensors that monitor network performance alongside other variables is a valuable choice.

]]>Know how your hardware lives and dies with monitoring toolshttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/know-how-your-hardware-lives-and-dies-with-monitoring-tools-40041873
Tue, 03 Mar 2015 18:04:54 GMTITWatchDogsMost articles on monitoring hardware focus on its ability to save a company money by keeping track of cooling and power use. In fact, you're likely to see a wide variety of articles that bring up sensors like the WatchDog 100 PoE as a way of going green. These articles aren't wrong. We've written quite a few on the subject ourselves. However, it's important to consider all the angles on devices like these. They can not only help you go green, but they can actually reduce the amount of e-waste you go through on a yearly or even monthly basis.

Sensors aren't just for tracking how efficiently a company uses its power. They can also give you signals as to how hardware within the various parts of an organization is working. By keeping records of temperatures throughout the day with a Watchdog sensor, or power usage with a Geist power monitor, a company can also gain insight into how its servers function. Do certain processors continually run hot? Do other parts of a center run cold for seemingly no reason? Are there power spikes at any point during the day? Getting just a picture of network diagnostics isn't always enough.

With virtualized servers, the physical effects of a malfunctioning piece of electronic equipment may be obfuscated. But temperature and power don't lie. If a processor is beginning to habitually run hot, you likely have an airflow problem. If a server continually draws more power than it needs to, it may be suffering the beginnings of a motherboard or hard drive failure.
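One simple way to turn that intuition into an automated check is to compare a server's recent readings against its own longer-term baseline and flag sustained drift, as in the sketch below. The readings and the 10 percent threshold are illustrative assumptions, not vendor guidance.

```python
from statistics import mean

# Sketch of a drift check: compare recent temperature readings for one server
# against its longer-term baseline and flag a sustained rise. History length,
# window sizes and the drift threshold are assumptions for the example.

def drifting_hot(readings_c, baseline_window=20, recent_window=5, drift_pct=10.0):
    """Return True if the recent average runs drift_pct above the baseline."""
    if len(readings_c) < baseline_window + recent_window:
        return False  # not enough history yet
    baseline = mean(readings_c[-(baseline_window + recent_window):-recent_window])
    recent = mean(readings_c[-recent_window:])
    return recent > baseline * (1 + drift_pct / 100.0)

history = [24.0] * 20 + [27.5, 27.8, 28.1, 28.0, 28.3]  # hypothetical inlet temps
if drifting_hot(history):
    print("Server inlet temperature trending hot - check airflow or hardware.")
```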

Creating better centers through sensors
Understanding how and why different parts of a data center stop working is hugely important for operators. Because it takes so much time and money to replace equipment when it goes bad, companies should be able to track how long it lasts inside their facilities.

In February, Backblaze released a database of how the hardware powering their servers worked and how long it tended to last. This allowed them to make note of what kinds of drives tended to work for them under what circumstances, which helped them to overall make better investments into the physical upkeep of their business. This same type of savings can be applied at the level of server racks. Real-time data on how different servers function and how they accumulate heat can bring a data center the opportunity to be more selective with their purchasing decisions.

Data center operators can also bring iterative designs into their facilities through the use of these sensors. By allowing environmental variables to be involved in the analytics process, the sensors can become a valued part of a center's design. Temperature and power sensors do not have to be relegated to merely saving on cooling bills; they can provide valuable infrastructure insight as well. It is the use of all the tools available to a data center that separates those that truly thrive under a variety of circumstances from those that are left behind.

Choosing not to use the tools available to an engineer means running the risk of failing to capitalize on new design opportunities. Thankfully, many facilities may already have data center temperature sensors in place, so they could easily repurpose these as part of their design space. In fact, paying closer attention to power usage and temperature in server racks may help them realize the savings discussed earlier by helping to cool data centers more efficiently. This may be why so many companies are beginning to utilize sensors throughout their server racks.

]]>Cybercrime rates not expected to slow anytime soonhttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/cybercrime-rates-not-expected-to-slow-anytime-soon-40042110
Tue, 03 Mar 2015 17:42:50 GMTITWatchDogsThe purpose of cloud computing is to allow businesses to take advantage of the available applications and run at a higher efficiency. This, so the theory goes, will therefore increase profits. While it is effective in most instances, there is still an illegal revenue drain on the cloud's overall economy.

A recent study from McAfee found that database breaches and cybercrime annually extract over $400 billion from the global economy. With various reports estimating that the Internet's economy generates between $2 trillion and $3 trillion a year, this means roughly 13 to 20 percent of the value created goes to the black market.

Not only is the dollar amount pilfered by cybercriminals enough to make jaws drop, but the more personal ramifications are just as terrible. The identity theft that stems from these crimes costs victims countless hours spent rectifying credit reports and dealing with whatever financial knots cybercriminals leave in their wake.

In addition to identity theft, cybercrime has repercussions that ripple out far enough to cost people their jobs. The McAfee study indicated that the subtle shifts in GDP caused by cybercrime can cause the loss of up to 200,000 American jobs, or an almost 0.33 percent increase in unemployment.

Increasing risks
These problems are only expected to get worse in the coming years.

With data increasingly moving between corporate networks, mobile devices and the cloud, there are more opportunities for data center breaches and more chances for cybercriminals to take advantage of this substantial uptick in transfers. The number of people using these services is expected to grow exponentially. For instance, ThreatMetrix, a context-based security and advanced fraud prevention solution, currently analyzes and protects over 1 billion mobile and Web-based transactions a month. It expects to service more than 15 billion transactions by the end of 2015.

Even with all those transmissions flying around and crisscrossing the globe, Data Center Knowledge recently reported on a Gartner study that revealed around 83 percent of all traffic in data centers travels "east/west" and goes undetected by traditional perimeter security. This leaves these transmissions highly susceptible to infiltration and extraction.

Cybercrime is just as tangible as it is digital and not all breaches come via an external hacker. It can be wise for companies to take protective actions to safeguard against an intrusive presence by successfully utilizing a DCS-6112 Dome IP Surveillance Camera that can warn owners of a breach or record any nefarious actions.

]]>Apple to transform Arizona facility into data centerhttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/apple-to-transform-arizona-facility-into-data-center-40041786
Tue, 03 Mar 2015 10:30:21 GMTITWatchDogsAfter receiving significant assistance from Arizona's legislature, Apple Inc. will progress with opening its 1.3-million-square-foot data center in Mesa, a suburb of Phoenix. According to the Tucson Chamber of Commerce, the Arizona House recently approved a bill expanding a $5 million sales tax credit to include international operations centers that invest $100 million in new capital assets over ten consecutive taxable years, with a minimum total capital injection of $1.25 billion into the state after ten years. The Washington Post reported that Apple is going to take advantage of that tax credit to open its newest data center.

The Silicon Valley tech giant will be utilizing a facility that up until recently housed GT Advanced, a sapphire production company that partnered with Apple in 2013. The company originally planned to produce scratch-resistant, transparent sapphire screens for tech gadgets. After struggling to get production up and running, GT ended up filing for bankruptcy about one year later.

In a commitment to keep jobs in the state, Apple will transform the former sapphire production facility into a brand-new data center that will control the tech giant's global networks.

Renewable energy usage
The House bill providing the tax credit also requires Apple to invest at least $100 million in new renewable energy facilities and ensure that some portion of that energy is used to power the data center. But with Apple recently releasing a report that states that all of the facilities they own and operate are powered by 100 percent renewable energy, the company has shown an unwavering dedication to using renewable energies.

"Apple's rapid shift to renewable energy over the past 24 months has made it clear why it's one of the world's most innovative and popular companies," said Gary Cook, Greenpeace Senior IT Analyst.

Apple expects the facility to employ about 150 workers and run on 100 percent renewable energy.

Whether a facility uses nothing but renewable energies or it plugs into the grid, maintaining a constant climate inside a data center using a device like the Weathergoose II/Watchdog 1200 Climate Monitor is paramount for both safety and success.

Why you should consider fueling your data center with eco-friendly solutions
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/why-you-should-consider-fueling-your-data-center-with-eco-friendly-solutions-40041816
Mon, 02 Mar 2015 20:55:59 GMT | ITWatchDogs

Many data center operators are trying to follow the lead of major technology companies and build more climate-friendly data centers. This can be very difficult, however. Information-processing facilities not only have to deal with the costs of keeping servers up, but also with redundant power sources and cooling costs. These elements add up to make running a data center one of the most time- and cost-intensive undertakings in the IT world. Luckily, there are always solutions to be found by those willing to invest time in finding greener ways of running data centers. In fact, some data centers actually come out giving back to the environment.

One major new project is the EcoDataCenter, which is currently being built in Sweden. The project is designed to be carbon-negative: it actually does more to help the environment than it hurts it. This is achievable through a couple of different means. First, the center exclusively uses wind, solar and hydro power. It also uses the cold climate of the area (the average temperature is 41 degrees Fahrenheit) to naturally cool servers. Excess steam from a nearby electricity plant and flowering plants on the roof of the data center complete the project, used to power and to cool the buildings and servers, respectively. This center is truly a modern marvel of engineering, and it takes an incredible amount of work to put something like this together.

However, organizations don't have to move their processing to Sweden in order to start being a little better about their carbon usage. Some companies are already beginning to see better results by monitoring their own usage and cutting down on unnecessary energy expenditures. A large amount of the time and money spent on power and cooling in many centers simply isn't necessary. For example, many organizations overcool their server rooms in an effort to really make sure that their servers don't suffer meltdowns. However, most modern processors have a much higher tolerance for temperature variance, and a lot of overcooling is done by habit.

Monitors and sensors enable efficient use
One set of implements that can be very useful to organizations interested in controlling costs is temperature and power sensors. These devices help a company understand exactly how much it is cooling its servers and figure out whether or not it is overcooling. This can be a way to easily and quickly save on recurring monthly costs, which can improve the financial outlook of a data center drastically. Overcooling amounts to overspending without any real benefit. Companies can avoid this by utilizing solutions like the WatchDog 100 PoE, a temperature sensor that also keeps track of humidity, noise and other variables. By giving data center operators another set of eyes, this sensor makes it far easier for them to track how their center functions as a whole.
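As a rough illustration of how such readings feed an overcooling decision, the sketch below compares logged supply-air temperatures against an assumed recommended inlet band (roughly the 18-27 degrees Celsius range commonly cited for modern IT equipment); the readings, thresholds and cutoff are hypothetical, not values taken from any particular sensor.

```python
# Rough sketch: flag likely overcooling from logged inlet temperatures.
# The readings are made up; the 18-27 C band is an assumed recommended range.

RECOMMENDED_LOW_C = 18.0   # below this, the room is probably overcooled
RECOMMENDED_HIGH_C = 27.0  # above this, more cooling is genuinely needed

hourly_inlet_temps_c = [14.5, 15.0, 15.2, 16.1, 15.8, 16.4, 17.0, 16.2]

too_cold = [t for t in hourly_inlet_temps_c if t < RECOMMENDED_LOW_C]
share_overcooled = len(too_cold) / len(hourly_inlet_temps_c)

print(f"{share_overcooled:.0%} of logged hours sit below the recommended band")
if share_overcooled > 0.5:
    print("Consider raising set points gradually and re-checking hot spots.")
```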

Along a similar line, Geist power monitors let a company directly measure how much power its servers use, which may help it make greener decisions down the road. Instead of continually guessing how much power the organization is using at any given time, those in charge of making decisions at data centers can simply examine an array of statistics about past use. This is much more useful for coming to terms with how the organization uses its power overall. Companies that want a true picture of how their organization runs should utilize these services in order to get the best value out of their servers.

Sweden to open world's first carbon-negative data center
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/sweden-to-open-worlds-first-carbon-negative-data-center-40041604
Mon, 02 Mar 2015 17:46:18 GMT | ITWatchDogs

Data centers are growing every day - both in number and in size - and with this growth come increased resource drains in the form of water coolants and energy consumption. With an estimated 3 million-plus centers scattered across the world, even minimal energy savings could have dramatic results.

Surveying sustainability goals
Although many big data centers' end users invest significant sums in renewable energy projects, most service providers care little about making the centers environmentally friendly. A recent poll by Green House Data found that fewer than 10 percent of IT professionals treat sustainability as a primary focus when evaluating a service provider. At the same time, two-thirds of those surveyed said that overall environmental impact is either a "minor consideration" or "low priority but important."

Sweden's new data center
Fortunately for those who consider environmental impact very important, a new data center in Sweden called EcoDataCenter hopes to counter the ever-increasing amount of energy needed to run a data center.

The facility is located in the city of Falun, and Falu Energi & Vatten - the Swedish utility behind the project - hopes that by utilizing the local energy grid as well as a central district heating and cooling system, the new data center will ultimately be CO2-negative. Falun's local energy grid is powered exclusively by renewable sources, including wind, solar, hydro and a nearby cogeneration plant, which converts wood chips, sawdust and discarded wooden furniture into energy.

"We are connecting the data center to an already sustainable energy system and can make use of all the energy," Falu Energi Ceo Bengt Gustafsson said in a statement.

With three buildings covering about 250,000 square feet and a respectable 18 megawatts of total power capacity, EcoDataCenter will also gain a buffer against overheating from an open-air cooling design that takes advantage of the surrounding climate, which boasts an average temperature of 41 degrees Fahrenheit. In addition, the company will be planting flowering plants on green roofs to keep the buildings cool, while the local electricity plant will provide excess steam to power the cooling machines.

How to keep temperatures and costs down in the data center
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/how-to-keep-temperatures-and-costs-down-in-the-data-center-40040913
Thu, 26 Feb 2015 18:32:55 GMT | ITWatchDogs

Data centers are becoming increasingly competitive in their pricing structures. Even in strong data center markets, it is difficult to predict how much the services of a given data center will go for, or how much one built to be sold will command. Thanks to the growing number of data center companies in the market, those interested in selling a center will likely wind up having to deal with nearby competitors. However, there are a few ways for organizations to cut their costs.

There has been a 7.5 percent drop in data center pricing over the past four to five years, according to statistics cited by Data Center Knowledge, although prices are beginning to level out, with less of a drop in recent years. The price ceiling, however, is more difficult to understand. Different centers offering wholesale service, retail colocation and other options have made it incredibly difficult for organizations to set prices. Often, there's little competition in a given area to give centers a reasonable estimate of what their clients are willing to pay. Many providers adopt a strategy of pricing low on larger strategic deals, and then making up the margins on hybrid customers.

Reducing costs and gaining efficiency
By far the most important step a data center can take when deciding to make its operations more efficient is to track how it uses its inputs. There are several ways to do this, but the most important variables outside of the actual data transmissions on servers are power and cooling. By tracking cooling with a WeatherGoose II, or power with a Geist power monitor, a data center can get a more complete picture of how it is using its inputs in order to process data. By understanding exactly how resources are being used, a company can make better decisions on how to price.

Finding the most efficient way of utilizing cooling and power systems is a way to stand out in a market that is becoming increasingly crowded. As more data centers pop up, it will be more important that operators not only run a good data center, but one of the best. This can be done, provided operators make a solid effort to use the tools they have to monitor their operations effectively. Without knowing how things are running through a center, it is easy to fall into traps that lower the efficiency of the operation.

Options for cooling efficiency: Take a bite out of your bottom line
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/options-for-cooling-efficiency-take-a-bite-out-of-your-bottom-line-40039845
Tue, 24 Feb 2015 18:52:10 GMT | ITWatchDogs

Keeping data center temperatures low can be difficult for many companies. The sheer amount of money required to power fans and other coolant systems can rapidly take its toll on an organization's ability to work. Thankfully, there are options out there that enable data center operators to save on their cooling bills. Green computing technology is continually advancing and may be able to meet energy demands soon. Cooling a data center isn't just about a company's bottom line, after all - it makes organizations better corporate citizens and allows the tech industry to escape some of the bad press that other major manufacturing industries have received. Taking the step to invest in low-cost energy is just another way to keep a company's image strong.

Asian data centers are moving to a new Certified Energy Efficient Data Center Award (CEEDA) framework for green energy. One major tenet of this new philosophy is that most data center temperatures can afford to be higher than organizations keep them. While this is widely known, the new CEEDA framework rewards operators for actually carrying the practice out. Organizations that switch to this format can take advantage of the full range of their servers' operational conditions. This places more emphasis on proper cooling techniques, however - when companies don't leave a very large buffer zone between a server's operational state and its potential point of failure, a more reliable cooling system is required. Companies are beginning to embrace the need for cleaner, greener data centers, which ultimately means investing in better data center temperature monitoring hardware.

East meets West in the quest for energy efficient data centers
Microsoft, not to be outdone by other centers attempting to clean up their energy consumption, is trying to create an entirely off-the-grid data center, according to E&E Publishing. The push is always toward cheaper, smaller and cooler centers. Microsoft's approach involves fuel cells wired directly to computer centers, using natural gas as fuel so the facility can keep running without relying on the grid. This kind of power can also lead to cheaper cooling by adopting fuel cells that don't run quite as hot as current-generation ones do.

Of course, there are always drawbacks. Some organizations may not be able to pay for the electrical infrastructure that would be required to run their entire network off of on-premises fuel cells. Still, investment in safer and cleaner energy usually winds up being less expensive for an organization in the long run. The best way to be sure of this is through the use of many remote temperature sensors and power monitors. The WatchDog 100 PoE, designed to check a variety of variables including heat and humidity, is a good tool for this purpose. Many of Geist's power tracking tools can provide a similar function for companies that need to keep track of the long-term use of power and cooling within their organization.

Keeping cooling costs down and keeping power costs down are functionally the same thing for data centers. Investing in the best technology and being unafraid to implement changes naturally leads to lower power and cooling bills. By reducing the amount it spends on either one of these, a data center lowers its monthly operating costs and saves money.

Everyone knows about PUE, but who is using the resource allocation index?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/everyone-knows-about-pue,-but-who-is-using-the-resource-allocation-index-40039111
Fri, 20 Feb 2015 18:09:36 GMT | ITWatchDogs

Most data center operators are already familiar with the power usage effectiveness (PUE) metric - total facility power divided by the power delivered to IT equipment. It's an extremely simple way to measure the overall efficiency at which a data center operates, which makes it ideal for organizations that want a quick way to estimate how well they've constructed their center. It is intended primarily as a yardstick by which organizations can calculate how close to the ideal they are. No center will reach a PUE of 1, after all, if it has anything like lights or electric locks in the facility. However, the most efficient ones do manage to run at a PUE of 1.2 to 1.5, while the average is between 2.5 and 3.5, according to SearchDataCenter.

A newer metric, the resource allocation index (RAI), is the ratio of normalized resource supply to normalized resource demand. In other words, it compares the amount of electricity normally used with the maximum amount of electricity available, according to Data Center Knowledge. This way of measuring isn't used so much to determine the efficiency of a data center as to understand how a company continually responds to incoming demand. Organizations utilizing tools like Geist's power monitoring equipment can easily calculate RAI in order to make important decisions about how their center operates. RAI is superior to PUE when a center wants to understand not only how much power it is using, but how it is using it.
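For readers who want to see the arithmetic behind the two metrics, here is a minimal sketch. The PUE formula is the standard total-facility-power-over-IT-power ratio; the RAI line simply follows the supply-over-demand reading described above, and every number is illustrative rather than taken from a real facility.

```python
# Minimal sketch of the two metrics discussed above. All readings are
# illustrative; a real calculation would pull them from power monitors.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT power."""
    return total_facility_kw / it_equipment_kw

def rai(supply_kw: float, demand_kw: float) -> float:
    """Resource allocation index, read here as normalized supply over demand."""
    return supply_kw / demand_kw

facility_kw = 900.0       # everything: IT load, cooling, lights, losses
it_kw = 600.0             # servers, storage and network gear only
provisioned_kw = 1200.0   # maximum power the facility can deliver

print(f"PUE: {pue(facility_kw, it_kw):.2f}")            # 1.50
print(f"RAI: {rai(provisioned_kw, facility_kw):.2f}")   # 1.33
```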

Efficiency and economy
By understanding both the PUE and RAI of a center, operators can see how to become more efficient as they continue to expand their operations. A center that is close to maximum use at all times may not be able to sustain a period of peak activity, which could lead to unscheduled downtime down the road. Similarly, a center with too high a PUE rating is likely losing money due to its inability to control how it uses electricity.

Ultimately, data centers rely on their ability to create efficient processing systems. By investing in solutions that allow companies to reduce the amount of unnecessary costs they incur through the operation of a data center, engineers can improve the profitability of an organization. Server temperature monitoring and power monitoring tools are invaluable in this effort, as they provide the raw data necessary to understand exactly what is going on inside these facilities.

New processors may help more effectively lower server room temperatures
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/new-processors-may-help-more-effectively-lower-server-room-temperatures-40038686
Thu, 19 Feb 2015 18:44:10 GMT | ITWatchDogs

As increased power density is leading to rising data center temperatures, organizations are looking for ways to be more energy efficient. With data processing demand growing, operators are often forced to increase the density of their server racks. Facilities have to find ways to reduce the amount spent on cooling to keep profit margins stable. Thankfully, new cooling technology can help data centers keep up with the growth in demand.

Thanks to the variety of energy-efficient processors, servers and rack configurations now available, it is likely that denser server arrangements will be possible and easy to maintain over the next couple of years. Utilizing these alongside real-time temperature monitoring tools will give centers the equipment they need to maintain their footing.

A recent study by the Data Center Users' Group found that average density will increase from between 2 and 8 kW per rack to between 4 and 16 kW over the next two years. Processing twice as much information as before can potentially increase the amount of money a data center stands to make, but it could also lead to problems with heat management. Recent advances may help, however. Lenovo's new energy-efficient servers, for example, are designed to help data centers and run off of ARM chips. ARM, a small, independent chip design company, has nonetheless become the designer of choice for many, and companies are able to get workload-specific servers up and running by utilizing these chips. Because the new processors are packed more compactly and ARM chips can be customized to fit specific needs, they are more efficient than other designs.

Designing data centers for higher capacity
Of course, servers need more than just specialized chips to remain efficient while dealing with larger data loads. The main factors necessary to plan out a server load effectively include understanding current and future rack density, power, cooling, business continuity arrangements, sustainability plans and budgeting, according to a recent white paper by Server Technology. Ultimately, this all falls under the heading of capacity planning. Knowing what all of the key resource and output factors are allows an organization to more effectively engineer its servers to operate continuously.
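To make the capacity-planning idea concrete, the sketch below checks whether a planned rack count and per-rack density still fit inside a room's power and cooling budgets; every figure in it is hypothetical and simply mirrors the density range cited above.

```python
# Hypothetical capacity-planning check: does a planned density increase
# still fit within the room's power and cooling budgets?

racks = 40
planned_kw_per_rack = 16.0      # upper end of the density range cited above
power_capacity_kw = 700.0       # assumed usable electrical capacity
cooling_capacity_kw = 550.0     # assumed total heat-rejection capacity
headroom = 0.85                 # keep 15% in reserve for growth and failover

it_load_kw = racks * planned_kw_per_rack          # 640 kW of IT load
power_ok = it_load_kw <= power_capacity_kw * headroom
cooling_ok = it_load_kw <= cooling_capacity_kw    # nearly all IT load becomes heat

print(f"Planned IT load: {it_load_kw:.0f} kW")
print("Power budget OK" if power_ok else "Power budget exceeded")
print("Cooling budget OK" if cooling_ok else "Cooling budget exceeded")
```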

Integral to capacity planning is the use of real-time monitoring solutions like the WeatherGoose II. Server room temperature monitors allow companies to reduce the amount of energy they waste overcooling their servers. By tracking exactly how much energy is used at a given time, organizations better understand what their current capacity is and how their cooling systems may need to change. To avoid heat-related problems, data center designers should follow a long-term strategy of monitoring heat levels to determine precisely how much cooling will be necessary whenever they add density to their server racks.

Long-term use of real-time monitoring hardware is the most effective way for a company to understand how to handle server room temperatures. Real-time data points gathered over time are uniquely effective here because they allow a company to get precise answers. This kind of customized, specific data is important to much of data center work. Getting precise, reliable answers isn't just a good idea; it's the only way to guarantee success.

Green is the color of money for data center professionals
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/green-is-the-color-of-money-for-data-center-professionals-40038266
Wed, 18 Feb 2015 17:55:47 GMT | ITWatchDogs

Those who work within data centers often pride themselves on how green their servers are. Because of the large amounts of power that servers use to run the computational operations people require from their data centers, the industry has a reputation for being an energy hog. Do data center clients care about how much energy is being used, however? Is there any merit to striving for better power use efficiency? Many organizations offer free energy credits to those using their center, but is that doing anything for their bottom line? Recent studies suggest that going green in and of itself isn't what those using center services care about, but that they do appreciate servers that are run more efficiently.

A recent survey conducted by Green House Data found that less than 20 percent of companies measure their PUE in order to track efficiency. The two methods with the most widespread adoption were electronics recycling and office recycling, at roughly 57 and 64 percent, respectively. In contrast, only 16 percent of those polled thought that data center PUE was important when choosing a new service provider. On the other hand, 38 percent thought that cooling design efficiency was important. Why is there this disconnect? It comes down to one simple explanation: Cooling design efficiency makes using a data center more cost-effective. Ultimately, the thing that matters for clients is how much they have to pay per byte processed. Keeping data center temperatures as low as possible without paying too much is what counts.

Therefore, many clients will probably not care about having matching carbon credits given to them by their data center. It is unlikely, for example, that Digital Realty's new Renewable Energy Credits program will garner the company much actual business. It works as a marketing campaign because green energy looks good, but it doesn't change the nature of the business; it's just another way a company can differentiate its product as more wholesome than the competition's. Instead, organizations that want to gain real benefits from the pursuit of more ecologically friendly goals should work on their cooling design efficiency and total energy consumption, as these were given the most consideration in Green House Data's study.

How to make an efficient data center
Utilizing temperature monitoring tools like the WatchDog 15-PoE allows companies to track variables such as the temperature and humidity around their servers. Because this piece of hardware can both email and text center staff with information on the servers, it allows for an instant response to any dangerous changes in the environment. Three account access levels let the WatchDog 15-PoE display different information to workers with different responsibilities, giving administrators customizable control over exactly what each employee sees.
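At its core, the alerting behavior described above comes down to comparing each reading against a threshold and notifying staff when it is crossed. The sketch below shows that logic in generic form; the thresholds and the notify() stub are hypothetical placeholders, not the device's actual firmware or API.

```python
# Generic threshold-alert logic of the kind such monitors automate.
# Thresholds and the notify() stub are hypothetical placeholders.

THRESHOLDS = {
    "temperature_c": (15.0, 30.0),   # (low, high) alarm limits
    "humidity_pct": (20.0, 80.0),
}

def notify(message: str) -> None:
    # Stand-in for the device's email/SMS delivery.
    print(f"ALERT: {message}")

def check_reading(metric: str, value: float) -> None:
    low, high = THRESHOLDS[metric]
    if value < low or value > high:
        notify(f"{metric} reading {value} is outside {low}-{high}")

check_reading("temperature_c", 33.5)   # triggers an alert
check_reading("humidity_pct", 45.0)    # within limits, stays quiet
```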

While data center temperature control can help servers become more efficient, power monitoring is just as useful. Geist power monitoring products offer outlet-level power readings accurate to within one percent, along with frequency and temperature readings. With that data, data center managers can make better choices about how they use different servers, appliances and other equipment. By continually trimming away excess use, organizations can find ways to run their servers that cost less over the long term. In the end, clients care about the total efficiency of their data centers, so finding ways to reduce power consumption at any level is important.

By going green, data centers benefit not only the planet but also their own bottom line. Cooling efficiency and lowering total power consumption are important variables to consider.

Data center temperatures can be cooled without shifting geography
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-temperatures-can-be-cooled-without-shifting-geography-40037157
Tue, 17 Feb 2015 17:56:42 GMT | ITWatchDogs

As more data centers pop up, companies continue to find their own interesting takes on how to properly cool them. It is easy to notice the facilities that have been built into the sides of mountains or into lakeside areas in an effort to reduce cooling costs, but not every building can be placed in locations like that. For the majority of data center operators, the only way to get better cooling is to use solutions that can be built inside an already existing building. Luckily, there are still many options to choose from for those that find their server rooms are too warm or too cold. Data center temperature problems are not unsolvable - the real trick is knowing how far temperatures can go, and what combination of approaches yields the lowest price point. Utilizing solutions like increased airflow, alternate cooling methods and digital sensors is especially helpful for organizations.

Companies have taken extreme measures in order to find efficient cooling. From building into mountainsides, to repurposing old paper mills on the Finnish coastline, to going deep underground inside old mines, the limits of human ingenuity have been tested of late in data center design, according to InformationWeek. However, this is not where the quest for efficiency needs to stop. There are plenty of reasons why organizations interested in expanding their cooling options could utilize both a strong location and more portable cooling options. Not everyone needs to be deep in the heart of a mountain or in the Arctic in order to get the temperatures they need. Instead of focusing on real estate-based solutions to these problems, simply using remote temperature sensors to detect how efficiently a company's cooling solutions are currently operating can greatly improve performance. Monitoring the results of hardware allows workers to achieve higher standards of efficiency.

Local cooling modules give companies options
One example of a cooling accessory organizations can use to save themselves money is the outlet-level power monitoring tool created by Geist. This data center power monitoring tool allows a company to track critical climate variables while also understanding exactly how much power is being used by the machines plugged into it. The monitoring information can be accessed through a web browser, allowing for mobile reading of the information in real time. These tools are designed to provide only the amount of monitoring that a center needs. Keeping servers within safe thresholds so that a power spike or other disaster doesn't overheat them is important, but so is avoiding the long-term effects of wasting resources. By utilizing this power strip, organizations can save money in the long run.

One way to resolve this problem is through the use of data center temperature sensors like the WeatherGoose II. Because the WeatherGoose provides data center admins with updates via SMS and email, it is possible for organizations to keep tabs on their servers no matter where their engineers are. This allows these groups to respond instantly to any sort of major malfunction or other disaster within moments. Allowing the servers themselves to call the cavalry when they encounter problems or unexpected temperatures is incredibly useful because it allows people to respond instantly, not just the next time there's a temperature check-up. When even seconds of unplanned downtime can have potentially catastrophic effects on the bottom line of data center clients, having a technician in the facility as soon as possible is an enormous gain.

Energy and power efficiency priorities among top data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/energy-and-power-efficiency-priorities-among-top-data-centers-40036780
Thu, 12 Feb 2015 18:03:04 GMT | ITWatchDogs

The largest tech companies set the rules and procedures for their data centers that the rest of the technology world later follows. What thought leaders in data center construction know is that it is important to control as many aspects of data center design as possible. This means not only understanding how to design a center and its servers on a technical level, but also finding the best possible deals on energy efficiency.

Many high-profile data centers are going green, and it's not just because it's better for the planet. Energy-efficient data centers are also able to provide a better bottom line for their operators, as they are far more resistant to price fluctuations in the energy market. While it is common for the price of gas and oil to rise and fall, renewable sources like solar energy will only cost less as they become easier to harness.

This emphasis on controlling every variable of a business is why Apple recently signed a 25-year agreement for the output of a solar project in California. Because it is building a new data center in Arizona, it needs the additional energy, according to Data Center Knowledge. From an economic standpoint as well as an environmental one, this business plan makes perfect sense. It is a good idea to push energy where it needs to be in order to do more with less.

At the same time, Apple can also carefully monitor its power consumption through the use of remote sensors that let it track exactly where the power it is using is going. This would allow the company to understand exactly what its needs are and will be over time, letting it make better decisions about what kind of power to purchase. Plus, Apple will have no trouble with data room cooling thanks to its environmentally responsible purchasing choices.

Efficiency in cooling brings savings everywhere
While getting power at a cheaper cost can greatly help an organization's bottom line, it is only half the struggle. Finding ways to use energy more efficiently is just as important. This is why many groups are investing in liquid cooling.

These types of systems can provide temperatures equal to or lower than conventional fan-based coolant systems without the same expense, allowing an organization to save as it cools its servers. This is why the California Energy Commission recently started buying liquid cooling for data centers, according to Datacenter Dynamics. The commission will also, in accordance with the best standards in the industry, purchase additional data center temperature monitoring equipment, like the WatchDog 100 PoE, in order to report on the savings. The bottom line is that centers can make investments in their cooling technology, but until they actually measure the results, it's hard to know whether those investments have worked.

Using new methods of cooling is an important part of keeping energy costs down. Similarly, it is important to keep the cost of power low to begin with by investing heavily in ways of powering servers that will not fluctuate with the market. But the keystone of this process is using monitoring equipment to make sure that the steps being taken to reduce costs are actually saving the data center money. Improperly installed equipment, faulty servers, leaking coolant lines and other problems can have a slow, pernicious effect on the overall operations of a data center. By double-checking the engineers' work through the use of remote sensors, a data center operation can be sure that it is making the right decisions in the long run.

Green engineering is the next step for data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/green-engineering-is-the-next-step-for-data-centers-40036458
Wed, 11 Feb 2015 17:54:42 GMT | ITWatchDogs

Data centers, even more than the Internet and personal computers, are a sign that the fully matured information revolution is in full force. What this means for businesses and researchers is a heavier use of data to make informed decisions. Analytics, data center use and big data are now entirely changing the way we use computers. Instead of using our devices to store information, we are using them to analyze it. This subtle but important change means one very important thing - the demand for data storage is going to rise rapidly. Data center temperatures and efficient energy use must become a priority for information processing facilities, because a small investment now will mean enormous savings down the road. Many organizations have begun leasing data center capacity, and there is reason to believe that this field will continue to grow, as most are aware. The maturation of the data center market should be led by companies investing in more efficient ways of doing business. Getting power costs down while simultaneously taking on more data will let groups see an explosion in profit and finance the changes the industry is likely to go through.

An increase in the amount of data to be analyzed means a similar increase in cooling and energy costs. The best way for companies to keep the oncoming explosion of new data from overwhelming them is to invest in methods of controlling their power and coolant use. Reducing all non-computing power costs throughout a facility will be crucial for those looking to scale up their data processing. Major players like Google are already able to post very strong power usage effectiveness scores thanks to the scale of their servers, but data centers that aren't as massive as the biggest players in tech still have opportunities to realize major savings. Facilities are beginning to be designed to facilitate the use of advanced computing elements within research, according to Data Center Knowledge.

Data centers and efficient design
Getting energy costs down in the wake of major changes to the data center industry is important. As stated previously, the oncoming demand driving greater data center traffic necessarily implies rising energy costs, unless these are prevented through the use of more efficient systems. Luckily, they can be. There are many ways for centers to reduce the amount of energy they use, both by adopting more efficient cooling systems and by paying attention to where their energy goes. Utilizing power management monitors as well as remote data center temperature sensors is a good way for organizations to take control of their PUE. Recently, the Chinese government issued a mandate for its data centers to reduce their power consumption by 8 percent in 2015, according to Datacenter Dynamics. This is being done most commonly by designing centers in locations that allow for easier cooling, and also through the use of high-voltage direct current and more efficient UPS systems.

What's good for the company is good for everyone else as well - we are all better off when data centers are able to reduce their power costs. This allows them to work with more information while simultaneously reducing their energy footprint. As the need for data processing continues to grow, data centers could come to consume an increasingly large portion of the earth's energy. However, modern technology and insight into both economics and ecology can allow us to design data centers that fit gently into the landscape. By doing this, data center engineers can position themselves as leaders in the realm of conservation as well as computer science.

Data center power use must be monitored to be improved
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-power-use-must-be-monitored-to-be-improved-40035755
Mon, 09 Feb 2015 18:18:22 GMT | ITWatchDogs

Better power use efficiency is forever on the horizon for most data centers. While there is always research and interest within centers to find better ways of allocating their power, increasing efficiency is often a difficult process. Cutting elements out of a system is always more difficult than adding new ones. Reducing energy spent on maintaining data center temperatures is one potential way for a company to keep its PUE rating low, but that isn't always enough. As data centers continue to rise - and the market for them is growing - the issue of how much electricity these massive server warehouses use will become larger. Not every server room can be built in the frozen north - some have to cool their instruments the old-fashioned way.

Looking at how some of the larger players in the market operate may provide answers. Efficiency tends to be an easier task for major centers that house truly massive numbers of servers. Economies of scale dictate that money spent cooling a large amount of hardware generally goes further than money spent cooling a single server. However, this doesn't mean that these centers get off scot-free in terms of energy consumption. If they don't own many of the servers they run, they can't exceed the limitations set by the companies they host for. This means that they can't exceed certain temperatures that are technically required for servers to run, but in practice are suggestions rather than hard and fast rules.

Data center power use can be monitored
Some organizations even publish live PUE feeds, like Custodian Data Centers in Kent, UK, according to Datacenter Dynamics. This allows the clients of this colocation provider to monitor exactly how well the provider is maintaining their servers while also seeing how efficiently the center is running. Data center engineers can get the same effect on a personal scale by installing power monitoring equipment on or near their servers, which lets them track how their power use changes on a moment-to-moment basis. This is useful for improving quality of service as a whole, because it lets the organization understand when and why it consumes large amounts of power, allowing its processes to be streamlined more easily.
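A live efficiency feed of this kind is, at bottom, a rolling calculation over recent meter readings. The sketch below shows one simple way to keep such a moment-to-moment view; the window size and the sample readings are made up for illustration.

```python
# Sketch of a moment-to-moment efficiency view built from recent readings.
# The samples are made up; a real feed would poll the facility's meters.

from collections import deque

WINDOW = 5  # keep the last five (facility_kw, it_kw) samples
recent = deque(maxlen=WINDOW)

def record(facility_kw: float, it_kw: float) -> float:
    """Store one sample and return the rolling PUE over the window."""
    recent.append((facility_kw, it_kw))
    total_facility = sum(f for f, _ in recent)
    total_it = sum(i for _, i in recent)
    return total_facility / total_it

for sample in [(880, 600), (905, 610), (990, 620), (940, 615), (910, 605)]:
    print(f"rolling PUE: {record(*sample):.2f}")
```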

Keeping energy use down is important even to those working with the most massive workloads to date. For example, the ASUS-built supercomputer at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany was named the most efficient at 5.27 billion operations per second per watt. More impressive still, the computer was also among the top 200 most powerful supercomputers on earth. The incredible speed and versatility of the computer, combined with its relatively low power usage, proves that it is possible for all data centers to be energy efficient if the necessary engineering resources are devoted to them. It is difficult, but not impossible, for companies to get the processing they need at a lower price point.

Organizations that are interested in reducing their PUE rating should do two things first: monitor their power, and monitor their cooling. Methods of lowering data center temperatures vary wildly in terms of cost-effectiveness. Newer cooling designs allow less money to be spent on cooling, while better server placement can help with airflow problems. Getting information on how a data center uses power and cools itself is the first step toward radically reducing the amount of money it spends on cooling and electricity. Ultimately, it is those centers that invest in themselves that will be more profitable in the long run.

Redundant power supplies are key to maintaining strong uptime
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/redundant-power-supplies-are-key-to-maintaining-strong-uptime-40035447
Fri, 06 Feb 2015 18:30:12 GMT | ITWatchDogs

One of the hardest elements of data center design to get right is resiliency. No matter how a center is built, it is always dependent on outside elements and internal preparation in order to function. Centers cannot function without power to run their servers, or without the surrounding roads that get personnel to and from the facility. The world is chaotic, and these functions can be cut. Even after losing access to some of these provisions, data centers still need to be able to provide the power and connectivity their clients ask of them. Utilizing monitoring solutions to take the guesswork out of using internal resources is necessary. To do this, data center designers need to build with an eye toward long-term resiliency, which means utilizing redundant on-site resources in order to keep mission-critical assets up.

However, it is impossible for groups to understand what kind of backup generators to have on-site if they aren't able to accurately measure how much power they use. While external sensors can tell a company roughly how much power it uses, this often isn't specific enough to size a backup that powers a few critical servers. Utilizing power monitoring solutions can be an excellent way to remedy this. A modular data center is another solution along this line. With strong remote power monitoring tools and a modular design, a company can keep up exactly the servers it cannot afford to have go down. This use of additional information to take the guesswork out of running a data center is an important part of modern design. Ultimately, a data center can persevere through events that the engineers building it saw coming, and that requires using modern tools to understand what events could befall the facility. By having core servers set up to maintain 90 percent of a load, a company can make the concession to shut off auxiliary servers to weather the storm. Utilizing greenfield sourcing options to add capacity quickly is an important part of this process, according to Data Center Knowledge.
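The idea of carrying the critical share of the load through an outage reduces to a simple sizing exercise: measure the critical load and confirm the on-site backup covers it with margin. The sketch below walks through that arithmetic with hypothetical figures; the 90 percent share echoes the plan described above, while the generator capacity and derating are assumptions.

```python
# Hypothetical backup-sizing check: can on-site generation carry the
# mission-critical share of the load if utility power is lost?

total_it_load_kw = 500.0
critical_share = 0.90            # share that must stay up, per the plan above
generator_capacity_kw = 480.0    # assumed nameplate backup capacity
derating = 0.80                  # run generators at 80% of nameplate

critical_load_kw = total_it_load_kw * critical_share    # 450 kW
usable_backup_kw = generator_capacity_kw * derating     # 384 kW

if usable_backup_kw >= critical_load_kw:
    print("Backup covers the critical load.")
else:
    shortfall = critical_load_kw - usable_backup_kw
    print(f"Shortfall of {shortfall:.0f} kW: shed more auxiliary load "
          "or add generator capacity.")
```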

Challenges to data center designs can come from unexpected places
Even without natural disasters, it is still possible for a project to fail. The Utah state legislature threatening to cut off water pipelines to the NSA's data center is one example of this. Another is a scenario that may be a recurring nightmare for many IT professionals - someone dropping a cup of water on a server. While it is always possible to build to face the inevitable, Murphy's law dictates that someone will eventually drop a plateful of brownies directly into an open server rack. While this type of challenge to a data center is a little out of the ordinary, it is nonetheless a good example of the kinds of things that those creating data centers should worry about. There are many variables when designing these information processing facilities. Reducing all unknowns is a way to make sure that no matter what happens, a center is still able to function.

Ultimately, the use of remote power monitors is a way for a company to get more information about the inner workings of its servers. Information directly translates into control in this case, because data center workers have the tools to take advantage of newfound information. Working with remote power monitors, in all cases, gives companies a better sense of how their data center works on a basic level. By getting highly accurate data, it is possible to reduce power consumption during high stress periods and optimize a center for peak performance. It is this kind of performance that separates the best data centers from the rest of the business.

Cooling costs can be slashed with immersion systems
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/cooling-costs-can-be-slashed-with-immersion-systems-40034600
Thu, 05 Feb 2015 18:33:13 GMT | ITWatchDogs

Keeping the cost of cooling down is a pressing issue for most data centers. The possible gains to be realized by reducing one of the major upkeep costs of running a data center are known to everyone within the industry. The problem is that it's hard to find technologies secure and reliable enough to base a data center on. Up until the recent popularization of modular data center design, most commercial data centers simply didn't have the ability to transfer any of their existing workloads over to new systems without it being cost-prohibitive. Most of the data centers that have made the transition to liquid cooling have done so out of economic necessity. Luckily, they can still serve as examples to the rest of the industry in how they implemented these changes.

BitFury, a Bitcoin mining hardware design specialist, has had to delve into experimental methods of cooling its servers in order to deal with the drastically falling price of Bitcoin. The price crash has driven some mining operations into bankruptcy, but BitFury, as a seller to those groups, is only tangentially affected, according to Data Center Knowledge. The new cooling system is designed to make data center temperature a non-factor in the design and stress-testing of its chips by allowing them to operate in a highly efficient cooling chamber. The server racks are filled with Novec, a liquid cooling solution. Chips generate heat and, as they do so, the Novec changes from a liquid to a gas. A client has reported a power usage effectiveness rating of 1.02, one of the best possible scores. This movement is part of an overall trend toward greater interest in the data center cooling sector.

Cooling system market grows
A recent report by Allied Market Research has shown that the data center cooling system market is set to grow dramatically. This has to do with the rise of cloud computing and other remote servers engaged in the business of processing data. As more groups outsource their processor-intensive tasks, it becomes more important for data centers to develop better ways of dealing with the massive demand for information processing. Data center temperature maintenance is more important than ever for organizations holding this amount of data, because it allows them to scale up better during periods of peak stress without having to deal with blown-out servers or other meltdowns. Failing to prepare for server outages can ruin a business's chances with many clients, thanks to the overwhelmingly reputation-centered nature of the business.

Those that are interested in keeping a closer eye on how their cooling systems are doing should use solutions like the WatchDog 100 PoE. This device has all of the features necessary to track temperature, humidity, light, sound and other variables. By using an array of these sensors, a company can keep track of many servers at once. Preventing server meltdowns and other major data center catastrophes is all about relying on strong equipment, and the WatchDog 100 PoE is reliable enough to let an organization do that. With its ability to send out automated text alerts at user-defined changes in variables, it can let a center get the help it needs at a moment's notice. While still no substitute for routine manual checks, this device is nonetheless an essential part of long-term data center maintenance. How else can a company make sure that its servers are reliable, other than checking in on them on a consistent basis?

Data center temperature and power consumption agreement tips
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-temperature-and-power-consumption-agreement-tips-40034232
Mon, 02 Feb 2015 19:23:44 GMT | ITWatchDogs

Managing data center power use over the winter months can be difficult. Thanks to dramatic rises in local power consumption by consumers and other businesses, rates can spike at the same time that more energy is needed to keep facilities warm enough for personnel. Planning in terms of both logistics and business agreements can be necessary for a center to weather these months without spending far more than it needs to on power. Data centers never catch a break - they are either forced to increase the amount of cooling they do in the spring, or have to contend with even higher energy rates in the winter. Below, however, are a few tips that companies can use to keep their centers processing information without breaking the bank.

Use fixed price agreements
One of the most common ways (and one that doesn't require any technical work) that organizations prevent their bills from rising as power becomes more expensive is to use a fixed-price agreement. This contract allows a company to negotiate with a power supplier to always pay one price for electricity. For major power consumers like most data centers, this is a way to massively simplify accounting for energy costs, according to Data Center Knowledge.

Get rid of outdated splines
New offerings in the realm of data center hardware can reduce the number of splines located within server racks. Splines, the lengths of plastic with a star cross-section that run through cables, are useful for allowing many small cables to be collected within a data center without interference between them. However, using many of these can quickly eat up a lot of space behind data center servers, which can reduce airflow. There have been recent attempts to modernize the design of these cables, however. One is the new Mini-6a structure, which allows wires to remain separated but in a less compressed format, detailed Datacenter Dynamics.

Power down unused servers
Thanks to virtualization, data center operators can cut costs by simply turning off servers when they aren't in use. Operators used to believe that doing this would lower the life expectancy of servers, but this is a myth. Servers can power cycle as often as they need to and won't take a performance hit. By utilizing warm-start hardware, it is possible to bring servers up fairly quickly, allowing a company to get data moving across them much faster.

Channel heat
Using the heat produced by your data center to warm office spaces is a good way to cut into the heating bills that companies face during the winter months. As winter settles in, this can allow an office to actually reduce the amount of energy it uses, which is welcome when dealing with the higher costs associated with this time of year. Thanks to regulations regarding what different server parts can be made of, the components don't contain toxic contaminants, according to InfoWorld.

Use a data center temperature logger
Utilizing something like the MiniGoose XP II to monitor the temperature around data centers can allow an organization to more finely tune the amount of heat it uses. By getting accurate readings on a moment-to-moment basis, a company can regulate its cooling systems a little more closely, allowing for lower power bills overall.
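In spirit, a temperature logger simply appends timestamped readings to a durable record that can be reviewed later when tuning heating and cooling. The sketch below shows that pattern with a simulated sensor standing in for real hardware; the file name, sample count and reading range are all hypothetical.

```python
# Minimal temperature-logging pattern: timestamped readings appended to a CSV.
# read_sensor() is a stand-in for a real probe.

import csv
import random
import time
from datetime import datetime, timezone

def read_sensor() -> float:
    # Placeholder for actual hardware; returns a plausible room temp in C.
    return round(random.uniform(19.0, 24.0), 1)

with open("room_temps.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(3):                       # three samples for this demo
        writer.writerow([datetime.now(timezone.utc).isoformat(), read_sensor()])
        time.sleep(1)                        # real loggers sample on a schedule
```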

These tips should help organizations deal with the rise in power costs over the cold winter months. Using just a few of them can make a big impact on the overall profitability of a data center.

Future-focused efforts preserve data center temperatures and wallets
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/future-focused-efforts-preserve-data-center-temperatures-and-wallets-40033055
Fri, 30 Jan 2015 19:13:59 GMT | ITWatchDogs

Keeping data center temperatures from soaring to hazardous heights is an issue for many organizations. Because of the amount of preparation involved in keeping information facilities moving along without succumbing to damage, it can be difficult to maintain uptime. Service providers may be able to soften the blow, but will they be able to help a given server when it is about to have a meltdown? Ultimately, it will be those that invest in strong solutions that will be able to satisfy clients with the best uptime in the long run.

Reliable service providers
In order to secure a maintenance partner that is able to give your organization the help it needs, it is important to understand what to expect. Data Center Knowledge recommends that an organization go with partners that use tablets in the field, saying that those agencies are keeping up with the best technology. They are more likely to have trained, specialized technicians that are able to handle repairs on any equipment they come across, rather than simply having to wait for help from the manufacturer in order to fix a problem. Tablets also allow an organization to stay in touch much better, which can let them respond more quickly to emergency situations.

Technology of the future
Investing in new elements of the data center is what will allow a company to more easily deal with long-term issues. Virtualization, for example, allows for better testing, easier backups and faster redeployments, according to TechRepublic. It can also reduce wear and tear on certain parts of data center infrastructure by better allocating loads across different processors. Similarly, solid-state drives will allow organizations to preserve their drives for much longer, leading to fewer long-term costs related to replacing these expensive pieces of equipment. Alongside data center temperature sensors like the WatchDog 100 PoE, a company can virtually eliminate the chance of a meltdown.

Ultimately, the future of data centers lies in those that are able to sort through all of the modern tools and find the ones that will help them the most in the long run. Because there is so much innovation in the realm of data processing, it is only those with a future-oriented outlook that will succeed. Thankfully, that skill is not uncommon within the tech industry. Utilizing services aimed at creating more reliable uptime is a major portion of any group's game plan.

Longevity requires vigilance for data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/longevity-requires-vigilance-for-data-centers-40031657
Fri, 30 Jan 2015 19:05:02 GMT | ITWatchDogs

Data centers are used to roughing it through a lot of different conditions. Because the need for uptime is so strict, it is imperative that data centers are able to guarantee that they will have access to all the inputs they require to operate. This can be difficult when there is so much that can go wrong. Inclement weather can destroy roads and power lines and disrupt services. Blackouts and brownouts by energy companies can make it hard for an organization to continue functioning. Servers inside a data center can overheat if not cooled properly, and cooling them uses up a large amount of water. Without any of these elements, data center temperatures can rapidly rise, and data processing can suddenly shut off.

This isn't even taking into account the human factor. There can always be errors made by people that get a plant shut down. The wrong allocation of power by an engineer, a mistake made in reading the data center temperature, or other issues can suddenly cause an information processing plant to have problems. Even architects who take all of these factors into account, however, don't necessarily plan for legislation to be introduced that shuts down their operations. Yet politics can be just as dangerous a force as the weather when it comes to slowing down the processing of data. A recent bill introduced in the Utah state house would shut off the water supplies sent to an NSA data center operating within that state. This means that the data center in question may be forced to shut down if the bill goes through and is signed. While the actual likelihood of this specific bill being passed is slim, it showcases the kind of long-term emergency planning that may be necessary for data centers if they operate in a hostile environment.

Former NSA technical chief William Binney has referred to the legislation as the most "threatening situation since the civil war" in the region, thanks to the potential ramifications of banning the sale of services to a federal facility. However, it is unlikely to get the support it needs, as the governor of Utah, Gary Herbert, has said he won't sign it.

"I know people have had some frustration with the NSA," Herbert said on a conference call with reporters in January, but according to him, the state's agreement with the agency is, "something I think we need to continue to honor."

Long-term protection for data centers
Those that want to secure the longevity of their information processing plants should focus on the ways that they can keep their information moving. This means investing in a variety of methods to protect the security of their servers. Among these should be data center temperature sensors like the WeatherGoose II. This sensor allows an organization to automatically keep track of the temperature across many servers at once. With instant messaging capabilities built into the hardware, staff can be automatically alerted to any sudden increase in heat, humidity, light or sound. This lets support staff know when they need to get to the center right away, greatly reducing the chance of a meltdown or other dangerous situation.

It's impossible to know what's coming around the corner - that's why we put so much time and effort into preparing for whatever comes. By using the best tools available and employing top-tier tactics for avoiding unscheduled downtime, workers can find the best possible way to keep their data center going. Ultimately, it is this attention to quality that separates the good from the average in the realm of data center design.

Spending on data centers increases - will reliable profits continue?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/spending-on-data-centers-increases---will-reliable-profits-continue-40032636
Ultimately, data processing needs to be seen as a utility. Organizations and users need their data stored, and data centers hold it. Right now, providers can vary the services they offer widely because the market is still young. Some centers pride themselves on security, some offer frameworks that work better with different cloud platforms, and others are positioned closer to major network switches. However, most of these features will begin to bleed together over time. There will always be premium data centers for certain services - security and connectivity to major networks spring to mind as major points of price discrimination - but nothing is as useful as uptime. There will be many ways a server can be set up to perform better in some areas, but only those groups that can guarantee highly reliable uptime will succeed in holding on to customers over the long haul.

Businesses need their data centers to work in order to function. The reason groups outsource to data centers is that they believe doing so will give them better connectivity than they could otherwise get. Because it is so important that organizations remain connected to their data, and that they can get the information essentials they need from providers, they depend on data centers to be reliable. More than anything else, businesses rely on the accessibility of their servers. Nearly all business is e-commerce now - everything runs through, or is facilitated by, omnipresent access to networks. This may be why organizations like Microsoft continue to invest in data centers. As the cloud becomes more popular, it seems increasingly likely that the future of computing will continue to rely upon Internet connectivity and continual access to large amounts of data.

Keeping uptime possible in difficult circumstances
What must be kept in mind is that data centers can't simply plan to run undisturbed. An unfortunate fact of life is that bad things happen to good data centers. These facilities occasionally suffer from natural disasters, industrial accidents, and criminal attacks. Utilizing sensor equipment like a data logger on different servers is a good way to mitigate this kind of damage. Unfortunately, it is difficult for centers to deal with the large variety of things that can come their way. Even the most basic attack, the distributed denial-of-service attack, can be very effective against servers. Such attacks are typically used as a screen to cover data theft and other attacks that cause longer-term damage, according to Datacenter Dynamics.

What this means for organizations is that they should utilize every tool in their power to keep their servers running. There is already a precedent for hacking tools being used to commit industrial sabotage. The Stuxnet program used to disable an Iranian nuclear research facility is an example of malicious software destroying equipment. There's no reason to believe that this kind of targeted attack couldn't happen to another type of business. Using data loggers and data center temperature sensors like the WatchDog 100 PoE can prevent an organization from being dealt a crippling blow by malicious hackers. Staying safe in the digital era is difficult for many people, but it is necessary as part of business practice. The continued move toward faster computers and always-on connectivity exacerbates this problem, but all that the business community can do is adapt.
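
For the data-logger side of that defense, a minimal sketch might look like the following: timestamped readings appended to a CSV file so there is a record to consult after an incident. The read_readings() call and the field names are hypothetical placeholders for whatever a real appliance exposes.

```python
# Minimal data-logger sketch: append timestamped readings to a CSV file.
import csv
import time
from pathlib import Path

LOG_FILE = Path("sensor_log.csv")

def read_readings() -> dict:
    """Hypothetical placeholder for pulling readings from a sensor appliance."""
    return {"temperature_f": 77.5, "humidity_pct": 43.0, "airflow_ok": 1}

def log_once() -> None:
    readings = read_readings()
    row = {"timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"), **readings}
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row))
        if new_file:
            writer.writeheader()   # write the header only the first time
        writer.writerow(row)

if __name__ == "__main__":
    log_once()   # run from cron, or wrap in a loop, at whatever interval suits the facility
```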

]]>Data centers deal in the virtual but must contend with the physicalhttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-deal-in-the-virtual-but-must-contend-with-the-physical-40030842
The most popular visions of data centers, as portrayed in film, present them as constructs of pure light. Movies like "TRON" visualize the inside of a computer network as something made of two parts neon and one part ethereal fairy dust. However, these beautiful images don't describe the reality of data centers. While they constantly utilize the inherently impermanent medium of electronic storage, data centers must also work with the simple physical realities of existence. Factors like data center temperature can influence how servers deal with the information they are charged with processing. Center engineers must deal with supplying energy and trust that the ground they are building on is stable enough to keep their gigantic machines running, or face the dire consequences of unplanned downtime. In order to protect themselves from what can happen when the facts of physical existence impinge upon the virtual dreams of the data center, organizations should make use of remote temperature sensors.

An example of the problems that can arise when operators ignore the risks of building wherever they want can be seen in Istanbul. Data center operators there have recently been ignoring warnings and constructing centers in areas with high risks of flooding and earthquakes, according to a recent report by Unisonius. While the appeal of building centers closer to Istanbul is apparent - there are many customers that will want nearby processing plants, which makes connectivity options more lucrative - it may not outweigh the cost of building in a place that is highly likely to be hit by an earthquake before 2030. The city's proximity to the North Anatolian Fault, one of the most active in the world, could leave facilities in the region facing near-catastrophic damage.

Utilizing elements around data centers can help
At the same time that physical circumstances can hurt a data center, they can also provide cheaper electricity. A case in point is Amazon Web Services signing a power purchase agreement with a wind farm in Benton County, Indiana. According to Data Center Knowledge, this is part of the company's push to power its operations entirely with renewable energy. Power costs always fluctuate with the economic circumstances surrounding a given organization. Signing long-term contracts like this one, then, allows an organization to make a commitment to clean energy while locking down a renewable energy source for itself. In an era when many companies face protests over their energy choices, choosing to work with green energy makes a lot of sense.

Organizations want to be sure that, no matter if they are buffeted by gusts of wind or hit by cataclysmic earthquakes, they are aware of what is happening within their data centers. In order to protect investments, companies should make use of products like the WeatherGoose II. This type of remote temperature sensor can allow an organization to know exactly the temperature, humidity, and other important elements surrounding a server. If something begins blocking the air pathways between racks, the ground starts to split apart, or a bird manages to find its way inside of a server room, this sensor will pick it up. Sensors of this sort should be one of the first steps forward for companies that want to make sure that they can save time and money by lowering their risks of unplanned downtime, thereby saving themselves and their clients from a lot of headaches in the future.
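
One way a sensor's readings can be turned into that kind of early warning is to compare each new sample against a rolling baseline rather than a fixed limit, so a blocked air pathway shows up as a deviation from recent history before any hard threshold is crossed. The sketch below assumes a hypothetical get_temperature() call and made-up window and sigma settings.

```python
# Rolling-baseline sketch: flag a reading that drifts well away from its recent history.
from collections import deque
from statistics import mean, stdev

WINDOW = 60        # number of recent samples to keep (assumed: one per minute)
SIGMA_LIMIT = 3.0  # how many standard deviations counts as "unusual"

history: deque[float] = deque(maxlen=WINDOW)

def get_temperature() -> float:
    """Hypothetical placeholder for one temperature sample from a rack sensor."""
    return 76.4

def check_sample(value: float) -> bool:
    """Return True if the new sample is anomalous relative to the rolling window."""
    anomalous = False
    if len(history) >= 10:  # need some history before judging
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(value - mu) > SIGMA_LIMIT * sigma:
            anomalous = True
    history.append(value)
    return anomalous

if __name__ == "__main__":
    if check_sample(get_temperature()):
        print("Reading deviates sharply from recent baseline - investigate airflow.")
```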

]]>Physical data center security is often overlookedhttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/physical-data-center-security-is-often-overlooked-40031219
Most people build their data centers to withstand cyberattacks. When there is a clear threat that someone may breach a server and steal the information on it to sell, companies tend to make the right investments. Few organizations would willingly let someone walk into their store and rob them blind. However, it is possible for a designer to become so focused on one type of attack that he or she forgets to defend against the rest. Unfortunately, many data center engineers have fallen into this trap. In their haste to develop the best defenses against a digital invasion, many operators have forgotten to protect the physical security of their facilities. While this threat may seem less alarming because it is not as pervasive, a physical breach of a data center can cause just as much damage and lead to as much stolen information as a purely online strike.

Possibly the most humiliating way to be informed that a given piece of security is weak is to have it breached by accident. This fate fell upon Canada's electronic surveillance agency, which recently had its data centers breached by firefighters responding to an emergency call, according to Data Center Knowledge. Inoperative security cameras, an easily cut padlock, and a leftover visitor's pass allowed the emergency response team, who had wound up at the wrong side of the building, to get far further inside than they were ever meant to go. This kind of mistake - lax security around the perimeter of a center - can cost companies a fortune if it compromises data.

Good intentions aren't enough for good security
By far the most common cause of these breaches is people on the inside who are unaware of the role they may play in letting intruders into the building, according to a 2014 SolarWinds survey. Training employees in proper protocols for handling visitors is a must for any organization that wants to have good security. It is not enough to merely invest in equipment - strong safety protocols require that an organization's employees be trained as well.

A good way to shore up defenses is to use data center temperature monitoring sensors. The WatchDog 100 PoE, for example, can detect more than just heat and humidity - it can report on light, sound, and other environmental variables as well. Because it has a built-in capacity to respond to user-defined changes in these ambient readings, the WatchDog 100 PoE can function as an effective back-up detector of intruders within a data center. If lights turn on when they shouldn't, if there is unexpected noise, or even if the airflow of a server room is disrupted by someone standing in the way of some of the fan streams, this device can pick that up.
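
To make that idea concrete, here is a small sketch of the sort of logic involved: compare the latest light and sound levels against a known quiet baseline and flag sudden jumps. The Ambient class, the delta values, and read_ambient() are illustrative assumptions, not the device's actual interface.

```python
# Back-up intrusion-detection sketch based on ambient light and sound jumps.
from dataclasses import dataclass

@dataclass
class Ambient:
    light_lux: float
    sound_db: float

# user-defined deltas that count as suspicious (assumed values)
LIGHT_JUMP = 50.0
SOUND_JUMP = 15.0

def read_ambient() -> Ambient:
    """Hypothetical placeholder for the unit's light/sound readings."""
    return Ambient(light_lux=3.0, sound_db=35.0)

def looks_like_intrusion(previous: Ambient, current: Ambient) -> bool:
    return (current.light_lux - previous.light_lux > LIGHT_JUMP
            or current.sound_db - previous.sound_db > SOUND_JUMP)

if __name__ == "__main__":
    baseline = read_ambient()                          # reading taken while the room is unoccupied
    latest = Ambient(light_lux=180.0, sound_db=58.0)   # example of lights coming on unexpectedly
    if looks_like_intrusion(baseline, latest):
        print("Lights or noise changed unexpectedly - check the room.")
```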

Data Center Knowledge listed protecting data center infrastructure and connectivity as one of the most important tips for securing a data center environment. By working with sensors as well as a trained security force, it is possible to prevent these kinds of breaches from happening. It takes the concerted effort of many team members working together, but the physical safety of a center is worth it. As important as the digital pathways of a center are, the building and server racks themselves are even more important. Nothing can hold the information stored on those servers, after all, if they are destroyed or compromised. In order to offer the security needed in today's tech world, companies should take care of the digital and the physical.

]]>Data center operators should expect the unexpectedhttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-operators-should-expect-the-unexpected-40032083
In recent months, we've covered a variety of issues afflicting data centers that have come out of the blue. From the fire in one of Amazon's data centers to the possible shutdown of an NSA data center in Utah due to legal wrangling by the state legislature, there have been many surprising events. This underscores one of the more important parts of data center design - it pays to prepare for everything. Companies that believe nothing will go wrong will be blindsided when they have to deal with something out of the ordinary. Data center temperature solutions are useful in this type of endeavor, as they can help a company track how it uses cooling systems during normal operations.

Making decisions that will allow an organization to prosper even in the face of calamitous events is necessary in data center construction because of the sheer number of variables that can affect the uptime of one of these facilities. Without this attention to detail, it is impossible to give clients the kind of service they need. As more groups begin utilizing colocation and cloud computing services from data centers, it will be important for those facilities to be up as often as possible, which means not being reliant on routine external services in order to function. Even routine data center temperature check-ups can help ensure that a facility maintains a reliable temperature range.

One example of the way that ingenuity can thwart even the most worrisome of external threats is in Ukraine. The country's power industry is under threat, and, in response, the Parkovaya Data Centre in Ukraine has signed a deal with UPS dealer Madek. This power management dealer has supplied the center with a standby power solution that will allow it to keep operating even when the power grid goes down. This is extremely important, because rolling brownouts and blackouts may become common throughout Ukraine as the country's outdated power plants begin to fail. Compounding this is the fact that investors have been put off from placing money into the system, according to Data Center Dynamics, due to corruption and political instability.

Be Prepared
Ultimately, the quality of a data center is directly tied to its reliability. Being able to keep servers up and running and the information on them safe is hugely important for the long-term stability of their clients. The three major elements to consider are physical security, digital security, and disaster planning. By investing in hardware that reassures technology organizations their information is safe on a center's servers, a company also commits itself to using the most reliable hardware it can. By focusing on long-term sustainability, data centers can promise their clients that they can store information affordably, safely, and reliably. Like a bank, the primary objective of any center is to keep what is inside safely guarded from the outside, even when circumstances make that difficult.

Utilizing hardware that helps a company keep track of its data center temperatures, like the WatchDog 100 PoE, is critical for long-term growth and stability. By utilizing this kind of device, organizations make it easy for their clients to trust them. Working to prevent surprises from getting the upper hand is a strategy that makes for a successful data center. Without it, a company can easily wind up displeasing too many clients that need constant access to their information.

Saving money, time and the planet involves limiting power consumption
Data center energy use is something that everyone is concerned about. Because these massive facilities use so much power to keep their processors working, energy is the natural first place engineers look to cut costs. It is simpler to go after than any other type of waste, and it usually has the biggest immediate payoff. By lowering the energy use of a plant, a company can not only limit the amount of money it spends per month, but also drive down its carbon footprint by a similar amount. However, not every problem can be solved by reducing energy consumption. The technology industry makes use of several inherently toxic materials, usually in the construction of its servers and other essential elements of computer equipment, according to Datacenter Dynamics. In order to keep these from eventually clogging up landfills and leaching toxic elements into the nearby ground, organizations should learn how to preserve their servers as long as possible.

But how can a company keep its servers going? One way is to carefully monitor data center temperature in an effort to reduce the number of servers lost to overheating and the undue wear and tear it causes. Something like the WatchDog 100 PoE can be very useful in this arena as a way to keep track of the airflow between all of the servers on various racks, making sure that those in charge of the center automatically find out whenever there is danger of overheating or a similar problem. By bringing attention to these problems before they have the opportunity to blossom into larger issues, companies can drastically reduce their waste while also allowing their center to run more efficiently. Keeping things moving without having to change out parts is a delicate balancing act of making sure that servers stay stable even when dealing with unusually large loads, and it is part of the reason that data centers tend to invest so heavily in redundant devices.

Uptime and disaster recovery are crucial to green data centers
Nobody wants to deal with a full-scale meltdown or fire in their facilities, whether or not they work in a data center. It is no surprise, therefore, that so many different IT disaster recovery seminars are available to help companies deal with these dangerous events when they come up. Data Center Knowledge, for example, posted links to two different ones in disparate regions of the U.S. simply because of the demand that IT professionals have for this kind of training. Data center temperature and other factors within a facility can easily get out of hand and potentially cost a company thousands of dollars while also leaving a severe ecological footprint in the surrounding area. Utilizing remote temperature sensors to keep track of the insides of a center before this kind of thing happens is an important strategy for reducing the likelihood of these events.

The ecological problems that data centers currently have demonstrate again that society at large is in the midst of a new type of industrial revolution, this time driven by information rather than manufactured goods. The mechanical plants of yesteryear are the data center information processing facilities of today, and they will only continue to grow and face tightening restrictions. Luckily, the data center industry has the foreknowledge to begin imposing limitations on itself and its consumption, rather than having to be overseen by government officials, but this may change unless all companies involved can agree to act as good corporate citizens. Keeping track of the many elements of modern data center design requires paying attention to the ecological footprint of information processing.

]]>Times are changing for data centershttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/times-are-changing-for-data-centers-40029989
The way businesses use technology is changing dramatically. This, in turn, will affect data centers, which may have to adapt in order to provide the services businesses now need. Thanks to the evolving circumstances around how companies interface with technology, it is hard to predict what the future of data centers will be. However, by dedicating themselves to being both reliable and flexible, the best data center vendors can stay ahead of the competition and court bids in an industry rapidly filling with talent. Very large customers are now demanding premium servers that offer the best of modern encryption and security technology in order to make sure that their information is safe. To deal with these circumstances, companies need to be ready to invest in the best data center temperature monitoring equipment and similar safeguards.

The U.S. Department of Defense has recently upped its security requirements for any centers it uses, according to Data Center Knowledge. Any company that sells cloud computing services to other organizations should be ready to deal with the increased eye for security that many clients currently have. Some of the groups now moving to the cloud are doing so because of the security advantages providers offer, which means that data center providers should look toward keeping their information as safe as possible. Strong encryption policies, as well as strict parameters for dealing with people coming on-site, are important for guaranteeing a very high level of security. At the same time, organizing the way data centers are cared for with regard to their physical safety, including monitoring heat levels and humidity, is important for detecting blockages in server racks and the possibility of an incident like a fire.

Analytics and the cloud change how organizations utilize IT
One of the major ways that a company can guarantee its long-term growth is by providing services that are currently growing in demand, according to Datacenter Dynamics. Many organizations are interested in analytics for their information, and getting that from the places that they store their data already is a huge win for them. By being able to provide software that can help a company make use of the information they have stored, a data center can provide added value that separates them from their competitors. Similarly, vendors that specialize in the cloud are in a good position as they have the ability to offer powerful, flexible services that are highly in demand and growing. These organizations have positioned themselves at the head of their current market, which makes it far easier for them to react to the changes that can easily happen in this kind of industry.

By far, though, the organizations that are in the best position will be the ones that have already adopted the best safeguards against damage to their servers. This includes not only state-of-the-art cooling equipment, but also data center temperature remote monitoring tools like the WeatherGoose II. This can let an organization keep pace with the speed of the current economy by establishing itself as a trustworthy group. While different types of technology will come and go, users will always be looking for a data center provider that is reliable and can keep their information secure. Being able to rely on their data being safe and sound is more important than anything else for most companies, and the data centers that specialize in keeping servers safe will be the ones that do well regardless of what other trends happen.

]]>Changes in data center markets spark new growthhttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/changes-in-data-center-markets-spark-new-growth-40029659
Like any other industry, data centers are subject to the whims of economic fate. As other parts of the economy grow or shrink, data centers adapt to changing circumstances. Data center processing is used by a variety of industries for many different tasks, and any of those industries can experience unexpected ups and downs that can wildly impact the data centers they use. Thankfully, those in the information processing industry have been able to roll with the punches and continue to find ways to make their services profitable. Some of these changing strategies may become the new default for dealing with consumers as a whole. From diversifying away from risk-prone areas of industry to moving toward smaller, localized data centers, there are many interesting changes in the market. Data center temperature planning will have to evolve with this as well, making use of localized remote temperature sensors to deal with many small data centers as opposed to one large facility.

The first element to take into consideration is the complete shuttering of Bitcoin processing firms. This topic was covered recently, but the ongoing problems may have even more of an effect on data center operations than previously imagined. A recent lawsuit by C7 Data Centers against Bitcoin miner Cointerra for non-payment may just be the beginning of the legal troubles between these two types of institutions, according to GigaOm. The end result of this vicious cycle may be that centers stop doing business with Bitcoin miners altogether, or at least diversify away from relying on them. While some mining operators have been able to weather the storm, it remains to be seen how many will make it through these times, and that could cause many data centers to suddenly have openings for other types of customers. Those that purchased hardware specifically suited for mining, however, may wind up having to replace that infrastructure.

Splintering data centers create more data real estate
A new trend in data center construction is the creation of several small data centers that are closer to the companies that want to use them. Instead of a central data center in some remote place, there are many small ones closer to businesses in the city. This is useful because it cuts down on the latency that exists whenever a company has to send data through a pipeline to be processed. For organizations that rely on data centers for analytics or other major data-processing workloads, the reduction in latency can be an extremely important goal. Beyond being just a performance gain, lower latency also makes a data center's services more pleasant to use.

These centers that continue to spring up around town may now be run by the same company and even share staff. This is why it can be important for those invested in keeping these various servers up and running to make use of data center temperature tools like the WatchDog 100 PoE. With the WatchDog, an organization can keep track of a variety of data centers from one central location and get updates delivered to email or phone systems automatically. By automating the most difficult parts of these processes, these tools make it easier for everyone involved to get important information about the state of servers. In the end, the most important thing any data center architect can have is information, so finding ways to get more insight into how their data centers are working on a moment-to-moment basis is vital. Working with data centers is all about creating more efficient information pathways.
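
A minimal sketch of that kind of central roll-up is shown below: poll each site, flag any that are over a limit, and produce one summary that could then be handed to an email or SMS routine. The site names, readings, and fetch_site_readings() helper are illustrative assumptions rather than a real API.

```python
# Multi-site roll-up sketch: summarize readings from several small facilities.
SITES = ["downtown-1", "eastside-2", "airport-3"]

def fetch_site_readings(site: str) -> dict:
    """Hypothetical placeholder: return max rack temperature and humidity per site."""
    sample = {"downtown-1": (79.1, 42.0), "eastside-2": (83.4, 47.0), "airport-3": (75.0, 39.0)}
    temp, humidity = sample[site]
    return {"max_temp_f": temp, "humidity_pct": humidity}

def summarize(limit_f: float = 82.0) -> str:
    lines = []
    for site in SITES:
        data = fetch_site_readings(site)
        flag = "  <-- over limit" if data["max_temp_f"] > limit_f else ""
        lines.append(f"{site}: {data['max_temp_f']:.1f} F, {data['humidity_pct']:.0f}% RH{flag}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(summarize())   # in practice, hand this string to an email/SMS alerting routine
```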

Economic pressures emphasize importance of reliable data centers
Dealing with the pressures of different economic factors changing the circumstances around data centers is always difficult for engineers. Working with the fundamentals of things like data center temperature and information processing can be even more taxing when engineers and designers are faced with shifting priorities based on what is economically feasible. No matter how unstable the economic situation around the flow of data is, there will still be customers who need their information processed. Turbulent conditions don't have to find their way inside the data center. Thanks to remote temperature sensors and other tools of the modern age, it is possible to maintain very high uptime in a data center even though the world around those servers is constantly changing. A strong, proven track record of keeping a data center up and running can be critical when dealing with the continually shifting sands of business strategy.

A major development that could impact how many data centers make money is the continued slump of Bitcoin, the online cryptocurrency. Although Bitcoin was at one point valued at $1,000 per BTC, it has now fallen to $220 per BTC and may yet fall lower, according to Data Center Knowledge. This has had unfortunate implications for organizations that hired data centers to "mine" Bitcoin for them, which involves solving computationally intensive hash puzzles as a way to reward the processing of Bitcoin transactions through a distributed, ad hoc network. In more concrete terms, it's as though companies had been hired to mine for gold while, simultaneously, the value of gold kept falling. Because the contracts were drafted while the price of gold was higher, the falling price means the people who were paying others to dig gold for them can no longer afford to pay what they owe. The Bitcoin currency has been marked by several scandals, highs, and lows, and there may be yet more surprises in store for organizations that keep processing Bitcoin.

Oil and fuel prices may drive the creation of new data centers
As the market prices for oil and other fuel sources continue to fluctuate, more data centers may be built in locations with access to cheaper energy. Because energy consumption is such a major cost of running servers, organizations are constantly looking for ways to reduce it. The use of fracking in U.S. territories has greatly increased domestic oil production, bringing the United States close to self-sufficiency in oil consumption for the first time in decades, according to Datacenter Dynamics. This may result in even more data centers being built in the U.S. now that energy is cheaper.

When energy is cheaper, it's easier for organizations to process more information. However, this means they have to be extra careful to ensure that they still run within safe parameters. Utilizing data center temperature monitoring solutions like the WeatherGoose II can be an excellent way for a company to make sure that its servers remain below critical temperature levels. By using one of these to keep up real-time analysis of different processors throughout a server rack, a data center can build the most accurate possible model of how heat flows through the facility. With this knowledge it is much easier for the company to get the most out of its servers. Cheaper energy means a company can afford to spend more, but it should still make every effort to reduce inefficiency.
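
As a toy illustration of turning per-position rack readings into that kind of heat-flow picture, the sketch below finds the hottest sensor and the bottom-to-top temperature rise. The readings are made-up numbers; a real deployment would pull them from the monitoring unit.

```python
# Rack heat-map sketch: one reading per rack position, bottom to top.
def analyze_rack(readings_f: list[float]) -> dict:
    """readings_f[0] is the lowest sensor in the rack, readings_f[-1] the highest."""
    hottest_index = max(range(len(readings_f)), key=lambda i: readings_f[i])
    return {
        "hottest_position": hottest_index,
        "hottest_temp_f": readings_f[hottest_index],
        "bottom_to_top_rise_f": readings_f[-1] - readings_f[0],
    }

if __name__ == "__main__":
    rack = [72.0, 74.5, 78.0, 83.5, 86.0]   # typical pattern: warmer toward the top
    report = analyze_rack(rack)
    print(f"Hottest sensor at position {report['hottest_position']} "
          f"({report['hottest_temp_f']:.1f} F), "
          f"rise of {report['bottom_to_top_rise_f']:.1f} F from bottom to top")
```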

]]>As data centers process more information, further redundancy is requiredhttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/as-data-centers-process-more-information,-further-redundancy-is-required-40028824
Running data centers is a demanding engineering challenge thanks to the many factors involved in keeping them going. The tangle of elements including energy, data, and cooling presents a series of important challenges that all must be met in order for the center to be useful to anyone. Making sure that a given facility is operating to the best of its capabilities requires not just an understanding of how computer systems work, but of how they can be used in tandem with several other interlocking factors to achieve the best possible results. This requires not only a mind that can puzzle through these different avenues of thought, but one that can keep several variables in the air at once. It doesn't just require someone who understands how computers work, but someone who knows how to get an advantage by looking at how other people have solved the same problem.

The leaders of data processing have some important insight to pass on to those working to keep up their own IT departments. A recent video tour of Google's data centers provides some insight into how the Internet search giant manages to keep track of all of the data coursing through its servers. One interesting insight is that Google runs its data centers fairly warm, at 80 degrees Fahrenheit, because that helps keep the overall facility more efficient, according to Network World. This is just one example of the kind of operational knowledge major companies have that may slip by those who lack their expertise or money. Similarly, Google's security measures are very exacting. Only a very small percentage of its employees are allowed to enter its data centers, and its security systems include badges, iris scanners, and under-floor intrusion detection with laser beams.

Defending the data center
One thing that is always important when dealing with data center temperature is the possibility of abrupt overheating caused by a malfunctioning fan. Considering how much of the future of data use will involve large loads, according to IT Business Edge, it becomes even more important that companies are able to deal with progressively larger loads while still maintaining their servers' stability. Because processors output heat in proportion to the work they are doing, heat levels can rise dramatically when servers begin to process a large amount of work at once. Utilizing something like the WeatherGoose II can help a company keep critical servers away from dangerously high temperatures while still getting the work done. The value of these kinds of devices is that they do the work of monitoring and alerting staff whenever a given server is overheating, and because they can track a variety of other factors as well, they can give service personnel a more complete picture of the circumstances in the data center when an alert goes out. In this way, these devices can greatly assist workers in solving problems quickly and efficiently.
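
Taking the paragraph's premise literally - heat output roughly proportional to load - a very rough capacity check might look like the sketch below. The per-load coefficient, the critical limit, and the example numbers are assumptions for illustration; in practice they would be fitted from a facility's own logged load and temperature data.

```python
# Rough capacity check, assuming temperature rise roughly proportional to added load.
DEG_F_PER_LOAD_UNIT = 0.15   # assumed: degrees of rise per percentage point of extra load
CRITICAL_TEMP_F = 95.0

def safe_to_add_load(current_temp_f: float, extra_load_pct: float) -> bool:
    projected = current_temp_f + DEG_F_PER_LOAD_UNIT * extra_load_pct
    return projected < CRITICAL_TEMP_F

if __name__ == "__main__":
    # Example: server sitting at 84 F, batch job would add roughly 60% more load.
    if safe_to_add_load(84.0, 60.0):
        print("Projected temperature stays under the critical limit.")
    else:
        print("Projected temperature exceeds the limit - stage the work or add cooling.")
```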

As time goes on and the market fills with more data center providers and colocation hosts, it becomes more critical that organizations are able to keep every aspect of their operation within their control. This means tending to data center temperature as much as data use and power efficiency. By keeping an iron grip over a large variety of circumstances, it becomes easier for employees to do the work they need to do, when they need to do it.

]]>Data center disasters can strike any organizationhttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-disasters-can-strike-any-organization-40028435
Threats to data center stability come in many forms. Because fires, cyberattacks, and general degradation of parts over time all can cause a server to shut down, organizations have to be prepared to deal with any of these factors. The complexity of managing the flow of data and air as well as coolant throughout a facility can be the root cause of many accidents and mishaps within an information processing plant. By attending to these smaller details, companies can increase their reputation for reliability and corner the market on reliable, safe information processing. Fears about cybersecurity and disaster affecting a company's ability to do business can lead many to be very careful about where they store their data. For data centers, this means that proving that their facilities are the most reliable will be an important part of their business strategy.

Even very large organizations can suffer the negative effects of poorly regulated data center temperature. No less a company than Amazon suffered a fire in one of its data centers recently, according to CNN. Thankfully, there were no casualties and the fire was put out safely. According to the company, this was not a production center, so the fire had no impact on its ability to carry out routine business. Further, company spokesperson Stephanie Krewson-Kelly said that there was "little to no impact to the development of the facility." However, that doesn't necessarily mean there was no impact on the business as a whole. The outbreak of a fire at a data center never looks great for an organization. While it may be true that Amazon didn't suffer much in terms of daily operations, it still suffered a loss of face. Further, Amazon's stock price dipped 1.2 percent, according to Zacks.

Regulating data center temperature
Dealing with data center temperature is always important, because it creates a safe environment for data to be processed in. While some elements of data center design can help companies save money and keep their servers cooler without any additional investment, organizations need to invest in safety equipment in order to guarantee fire prevention. It is important to set up consistent airflow through a data center or to invest in alternate methods of cooling, like liquid coolant options. Even more important is utilizing something like the WatchDog 100 PoE, a temperature sensor system designed to let workers know whenever something needs their attention. Thanks to a set of user-defined variables that can output alerts to email and SMS, emergency teams can be automatically notified the moment something begins to go wrong with a server, preventing disasters that could otherwise cause an organization to suffer the same fate Amazon recently did.
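
A small sketch of that kind of email-and-SMS alert routing is shown below, using severity to decide who gets paged. The addresses, the email-to-SMS gateway domain, the local SMTP relay, and the cutoff temperatures are all assumptions for illustration rather than the device's own configuration.

```python
# Alert-routing sketch: warnings go to email, critical breaches also go to an SMS gateway.
import smtplib
from email.message import EmailMessage

EMAIL_ONCALL = "oncall@example.com"
SMS_GATEWAY = "5551234567@sms-gateway.example.com"  # hypothetical email-to-SMS address
WARN_TEMP_F, CRITICAL_TEMP_F = 85.0, 95.0

def send(to: str, subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, "monitor@example.com", to
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:   # assumes a local SMTP relay
        smtp.send_message(msg)

def route_alert(temp_f: float) -> None:
    if temp_f >= CRITICAL_TEMP_F:
        send(EMAIL_ONCALL, "CRITICAL: server temperature", f"Reading: {temp_f} F")
        send(SMS_GATEWAY, "CRITICAL temp", f"{temp_f} F")
    elif temp_f >= WARN_TEMP_F:
        send(EMAIL_ONCALL, "Warning: server temperature", f"Reading: {temp_f} F")

if __name__ == "__main__":
    route_alert(88.5)   # warning level: email only
```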

Companies must do a great deal in the name of reliability. The increasing pace of business means that every minute spent with servers down can dramatically impact a company's profits. Data centers that build a reputation for reliability will be able to outbid those that seem risky to the organizations that need data processed. By becoming the most secure and reliable company on the market, those that invest in the best data center temperature regulating equipment can win. Temperature monitoring equipment should be considered the first line of defense against major physical calamities at data centers. By using these tools, companies can stay ahead in the race for clients and guarantee that they not only get information processed quickly, but also keep it stable over the long term.

]]>Data requirements for companies go up along with need for uptimehttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-requirements-for-companies-go-up-along-with-need-for-uptime-40027527
There may be no better way to get a pulse on how companies are trying to grow and diversify their product lines for the upcoming year than the International Consumer Electronics Show. Anyone who was in attendance at CES 2015 can say that this year will be marked by an even larger expansion of data center construction and design. Because of the incredible number of servers needed to operate many of the devices featured at this year's CES, it is likely that this year will see data centers pushed to the limits of what they can achieve, which will lead to equal experimentation with making sure that those centers run safely. The public's demand for information has not been sated, and there will likely continue to be growth along these lines for many years to come.

The sheer number of connected devices emerging these days means data centers will need to keep up the pace, according to Data Center Knowledge. These gadgets rest on the idea that there is a server out there dedicated to processing their information. By building systems that lean on the power of data centers, organizations are creating products that naturally demand even more processing and more centers to fuel their use. The Internet of Things naturally leads to this phenomenon, with all of the potential it has for things to be tracked within a given piece of tech. Data needs will continue to rise for the foreseeable future in the realm of consumer electronics because organizations are finding more ways to better people's lives by tracking their information. As long as there is more incentive for people to track information, there will be an even greater need for data centers.

Large organizations are also making use of higher quality data
It's not just on the consumer end that information use is expanding. A recent initiative by the U.S. Weather Service will exponentially increase the amount of processing power available to its supercomputers. This is being done in order to try to expand the accuracy with which the agency can track weather patterns and make predictions about the weather to come. What this shows is that it is not just consumers who are expanding their needs for data, but large institutions as well. There may not be a single part of daily life that is not touched in some way by the rise of analytics and data, which means that eventually everything may need a data center. Whether a person is going to the grocery store, staying at home, or even deciding whether to have a picnic, data will be involved.

This is going to cause increased demand for reliability and uptime in data centers, which is why companies should consider the use of data center temperature sensors. With temperature monitoring components like the WatchDog 100 PoE, an organization can easily track the ways in which its servers are heating up and being cooled. This allows for tighter control over the internal process of server regulation, which can lead to better uptime in the long run. By analyzing their own use of cooling within the center, organizations can make sure they are able to keep their servers up and running over very long stretches of time. And it is those organizations that invest in this type of technology that will be able to win bids on building the next portions of the national information infrastructure.
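
As an illustration of what "analyzing their own use of cooling" can mean in practice, the sketch below takes a day of logged samples - intake temperature plus whether cooling was running - and reports the duty cycle and the average temperature while cooling. The log format and numbers are illustrative assumptions.

```python
# Cooling-use analysis sketch over a list of (intake_temp_f, cooling_on) samples.
def duty_cycle(samples: list[tuple[float, bool]]) -> float:
    """Fraction of samples during which cooling was running."""
    if not samples:
        return 0.0
    return sum(1 for _, on in samples if on) / len(samples)

def avg_temp_while_cooling(samples: list[tuple[float, bool]]) -> float:
    temps = [t for t, on in samples if on]
    return sum(temps) / len(temps) if temps else float("nan")

if __name__ == "__main__":
    log = [(74.0, False), (76.5, True), (78.2, True), (75.1, True), (73.0, False)]
    print(f"Cooling duty cycle: {duty_cycle(log):.0%}")
    print(f"Average intake temperature while cooling: {avg_temp_while_cooling(log):.1f} F")
```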

]]>Data center cooling is essential and will grow toward 2020http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-cooling-is-essential-and-will-grow-toward-2020-40027130
Data centers are frequently tasked with the unenviable challenge of balancing the priorities of speed and safety in a competitive market. In order to deal with this problem, they are investing in technologies that allow them to make sure that they are able to offer the maximum amount of service in the safest possible way. Data center temperature remote monitors are a good way for these companies to keep track of their servers without needing to keep a surplus of staff on-site 24/7, effectively guaranteeing a certain minimum level of safety.

This pays off far more often than most would expect. A recent cooling failure brought down a data center in Perth, Australia, according to Data Center Knowledge. The facility went down when many of its cooling fans failed at once, bringing the internal temperatures of some of its processors into the triple-digit range. They had to be shut down and fixed over the next couple of days. Thanks to a set of redundancy plans and remote temperature sensors, the plant was able to quickly shut down the afflicted servers before they were permanently damaged, and it managed to keep up 98 percent of its services during the heat wave.

This was a perfect storm for iiNet. Not only did several of its fans fail at once, but the company was also struggling with a local heatwave. Temperatures outside the facility were roughly 112 degrees Fahrenheit. This could have been far worse without the remote temperature monitoring hardware that let system administrators know to shut down servers in order to avoid a meltdown. With the proper tools, iiNet was able to prevent a bad situation from becoming terrible. In the future, as more data centers arise, there may be even more potential for networks to go down.
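
In the spirit of that response, here is a minimal sketch of the last-resort logic: if a server's inlet temperature passes a critical limit, trigger a graceful shutdown before hardware is damaged. Both helper functions are hypothetical stand-ins for real tooling such as IPMI, SSH, or an orchestration API, and the temperatures are example values.

```python
# Emergency shutdown sketch: power down any server whose inlet temperature is critical.
CRITICAL_TEMP_F = 100.0

def get_server_temps() -> dict[str, float]:
    """Hypothetical placeholder mapping server names to inlet temperatures."""
    return {"web-01": 88.0, "db-02": 104.5, "cache-03": 91.2}

def shutdown_server(name: str) -> None:
    """Hypothetical placeholder that would issue a graceful shutdown command."""
    print(f"Issuing graceful shutdown for {name}")

def protect_servers() -> None:
    for name, temp_f in get_server_temps().items():
        if temp_f >= CRITICAL_TEMP_F:
            shutdown_server(name)

if __name__ == "__main__":
    protect_servers()   # with the example readings, db-02 would be shut down
```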

An ounce of prevention is worth a pound of cure
Even now, the data center cooling market continues to grow. Thanks to the higher volumes of data being processed by companies and an overall increase in the use of Internet services in everyday life, there is much more stress on servers and data centers in general. What this means for those who provide cooling solutions is that there is an ever-expanding market of data centers that need their servers sufficiently cooled in order to handle the heavy loads placed on them. With the rise of the Internet of Things and other modern digital information developments, it is likely that many organizations will be processing exponentially larger amounts of data as time goes on.

The use of hardware that tracks how cooling systems perform throughout a facility, like the WatchDog 15, is important for keeping a stable and consistent level of cooling. Self-contained units like the WatchDog 15, with their ability to send messages through SNMP, email, and text, can keep an entire staff informed as to the goings-on of their cooling systems. By investing in the long-run well-being of the data center, a company can expect to stay profitable for years after its competitors. Nothing is as important to clients as reliable uptime for their data center applications. Because so much rides on being able to access their software exactly when they need to, the services that provide the most regular access will be the ones with the most loyal client base.
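
Because these units speak SNMP, readings can be pulled with nothing more exotic than the standard Net-SNMP command-line tools. The sketch below wraps snmpget from a script; the host address, community string, and OID are placeholders - the real temperature OID comes from the unit's MIB.

```python
# SNMP polling sketch using the Net-SNMP snmpget command-line tool.
import subprocess

HOST = "192.0.2.10"                     # placeholder sensor address
COMMUNITY = "public"                    # placeholder community string
TEMP_OID = "1.3.6.1.4.1.99999.1.1.0"    # placeholder OID, not a real assignment

def snmp_get(host: str, community: str, oid: str) -> str:
    """Run snmpget (from Net-SNMP) and return its raw output value."""
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print("Sensor value:", snmp_get(HOST, COMMUNITY, TEMP_OID))
```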

]]>Data centers now living in virtual realitieshttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-now-living-in-virtual-realities-40026753
Server room temperature is always going to be important for data centers because of how it keeps the physical components of centers safe and sound. There may be increasing movement, however, away from the idea that a data center is defined by its physical assets in the long-term. What this means, paradoxically, is that centers will need to focus more on guarding their physical assets while being ready to replace them at a moment's notice. This is because the future of data center engineering is about how to manage the virtual networks while continually upgrading and changing out physical servers after using them for maximum value. In this proposed model of data center networking, it is even more important that centers are reliable and are able to have adaptable cooling systems in order to carry the burden of computing while other servers are being replaced or switched out.

The new model of data center construction is all about cloud connectivity and the data center interconnect, according to Data Center Knowledge. Centers built with these goals in mind will be able to connect to the cloud and carry larger loads between themselves and other centers, resulting in more information being processed more quickly, which ultimately helps their clients. Knowing how to make use of the cloud to deploy applications in near-real-time, as well as extracting more value from commoditized network services, can help a data center outperform its rivals. The direction of the Internet and of information use is toward increasingly large volumes of data, which means that offering high-bandwidth applications and services will be necessary for data centers in the future. By working with the cloud and with DCI options, a company can keep firing on all cylinders even as it continues to change out its internal hardware. Maintaining server room temperature through all of this change would be almost impossible without a remote temperature sensor.

The ever-evolving data center
New data centers may find that they must keep their waste in mind as they transition to new hardware components. The new pace of rapidly switching out hardware and servers may lead to an unexpected rise in the volume of tech waste generated by data centers, with possible negative effects for the industry as a whole. Finding industry-specific guidelines for the proper disposal of waste and recycling of old servers may be a crucial way for data centers to keep some of these changes from becoming too damaging. The year 2015 may become, as Datacenter Dynamics has put it, the year of the disposable data center. Rack sizes may have to be altered to incorporate higher density and airflow, but this may also force older generations of hardware to be thrown out. Large amounts of waste may accumulate as companies continuously cycle through improvements to their data infrastructure.

One way that companies can get extra mileage out of current servers and keep their other hardware going strong while they upgrade parts of their complex is to invest in remote data center temperature sensors. These tools can help a company to track and scale their cooling processes appropriately over time, resulting in an overall more efficient use of cooling and heating and letting them save money through better use of their resources. Anything that gives a company more information about its internal processes can be used to make them more profitable, and something like the WeatherGoose II can greatly help a company understand exactly what is going on with their cooling systems for their servers.

]]>Safety and efficiency have the same roots in data centershttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/safety-and-efficiency-have-the-same-roots-in-data-centers-40026368
Keeping data centers efficient and safe is all about the regulation of power. In general, all of a data center's activities can be termed as "regulation." The servers regulate information by processing it and getting the desired outcomes, the cooling systems regulate the heat affecting the processors, and the temperature monitoring systems regulate the cooling applied to the processors. Even further, power monitors regulate the power going to all of these systems, and the systems themselves are governed by market forces that allow a given organization to charge a specific amount for the services they provide. From top to bottom, the entire axis of operation for data center management relies on careful tending to a variety of factors to make sure nothing spins out of alignment. Without paying attention to data center temperature monitoring from the beginning to end of a process, it is possible for disasters to occur.

Occasionally, data centers fail. This happens for a variety of reasons. Fail-safes don't always work, and even very well-designed systems must occasionally contend with the chaotic laws of the physical universe. One such minor incident was a fire that brought down a cooling system in the Maryland Police data center, according to the Baltimore Sun. A small spark in a generator started a fire, which triggered the emergency cooling system and shut down the generator. This left the Maryland Police without access to their data center. Thanks to built-in fail-safes, no data was lost, but unplanned downtime can still be a somewhat disastrous consequence of an event like this. Because server uptime is critical to modern-day business, unplanned outages like this can have very damaging effects. Finding out how to prevent these kinds of incidents from happening is crucial to the growth of any organization, whether it is a police station or a sales firm.

Data center temperature monitoring stops fires and starts savings
Security and remote temperature monitoring, as well as power monitoring, tend to be overlooked by engineers as non-foundational to the design of centers. This is because these services are all removed from the core functionality of actually processing data and cooling the servers themselves. However, without these kinds of systems in place a server may simply be unable to function. This could result in an inability to deliver the kinds of results a company needs to be able to send to its clients. Ultimately, the use of these types of monitoring software is not only good for disaster prevention, but also for keeping costs down, according to Information Week. This has to do with using the data center cooling systems as a way of measuring the efficacy of other tasks a data center performs. A company may over-cool certain processors without knowing it, essentially forcing it to spend extra money for nothing.
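
A sketch of an over-cooling check appears below: flag any zone running well under the low end of the commonly cited recommended intake band (roughly 64-81 degrees Fahrenheit), since those are the spots where raising setpoints may save energy. The zone readings and the margin are illustrative assumptions.

```python
# Over-cooling check sketch: find zones running well below the recommended intake range.
RECOMMENDED_LOW_F = 64.4
MARGIN_F = 3.0   # how far under the low end before we call it over-cooled (assumed)

def overcooled_zones(zone_temps_f: dict[str, float]) -> list[str]:
    return [zone for zone, temp in zone_temps_f.items()
            if temp < RECOMMENDED_LOW_F - MARGIN_F]

if __name__ == "__main__":
    zones = {"row-A": 59.0, "row-B": 68.5, "row-C": 61.2, "row-D": 72.0}
    wasteful = overcooled_zones(zones)
    if wasteful:
        print("Zones likely over-cooled (raising setpoints may save energy):", wasteful)
```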

Utilizing something like the WatchDog 100 PoE can be a lifesaver for organizations that have yet to implement their own temperature monitoring systems. These hardware appliances can track numerous variables, including humidity, temperature, noise and light, letting those in charge of the data center respond to automated alert messages that give them a snapshot of the conditions around their servers. While the Maryland fire was put out quickly by an automated cooling system, there is always the potential for a fire to get out of hand. Developing layers of security around the servers inside a given facility is important for maintaining a high degree of professional service to contractors, and it is this kind of safety that allows organizations to rely on data centers for their information processing.

]]>Simplification becomes a goal for data centers as information grows more complexhttp://www.itwatchdogs.com/environmental-monitoring-news/data-center/simplification-becomes-a-goal-for-data-centers-as-information-grows-more-complex-40025946
Data center technology invariably involves the connection of highly complicated systems to end users. Technical elements are important for keeping the information entrusted to centers stored so it can be used properly. All of this complexity, however, is paradoxically being used in the pursuit of simplicity. The end user needs to be able to quickly understand how their information is being used, and how they can access it in order to make the best decisions possible. This means that no matter how abstract networks become, there is always the need for data centers to stay simple in other ways. Reducing outputs of unwanted materials and maintaining the avenues through which companies interface with their data center vendors are important ways to reduce unnecessary complexity.

Some of the leaders within the world of data centers are currently working to reduce the burden of legal paperwork in how their businesses are run. Notable is IBM's new initiative to lower the page count of their cloud contracts, according to Data Center Knowledge. This is interesting because it lifts the burden of reading a book's worth of contract pages with IBM for their cloud hosting services. Instead of many pages of complex legal wrangling about server uptime, those that are ready to partner with IBM for cloud hosting can simply read through two pages and understand how far their partnership with IBM will get them and how they can work with them. By understanding the length and limits of their partnerships with clients more easily, IBM is solving one of the more commonly repeated laments of those purchasing processing from data centers, which is that their user agreement terms are difficult to understand.

Simplicity is golden for information processing
Cutting away clutter isn't just good for deals with partners. It can also help the planet if reduction allows a data center to become more green. Already, the U.S. federal government is working on an efficiency bill, according to Datacenter Dynamics, which would allow it to cut back on energy use. It can do this because data centers are now more powerful than they have ever been, and it is easy for them to get a lot of information processed with relatively smaller facilities. This means that processes that used to take a long time can now be done more quickly, which ultimately reduces the number of servers necessary for the government to operate. While the current bill is stalling, this piece of legislation is still a sign that the government would like its information processed more cheaply.

Server architecture moving toward more portable formats with the advent of software-defined networks allows for information processing networks to continue to reduce their burdens by finding ways of doing more with less. Cutting down on outdated hardware and investing in freely-allocated high-quality servers that can be used for a variety of clients at once is a way for data center operators to reduce the expense of working with information while also doing their part to make their jobs a little bit greener.

Those working to make their operations more efficient could also be interested in remote data center temperature monitoring options like the WatchDog 100 PoE. This device can allow a company to effortlessly examine all of its servers and reduce its use of cooling elements to just the moments they are needed. Maintaining temperature can be a difficult project for data centers, especially those concerned with balancing energy efficiency and server uptime. Because of the many steps involved in creating the most efficient center possible, all of these avenues should be considered in order to provide the most powerful and least expensive service.
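
One simple way to cut cooling to "just the moments it is needed" is hysteresis control: run supplemental cooling only between an upper and lower bound rather than continuously. The sketch below shows the idea; the setpoints, the temperature trace, and the set_cooling() call are assumptions standing in for whatever actually controls a facility's CRAC units.

```python
# Hysteresis-control sketch: cooling turns on above one bound and off below another.
TURN_ON_ABOVE_F = 80.0
TURN_OFF_BELOW_F = 74.0

def set_cooling(enabled: bool) -> None:
    """Hypothetical placeholder for switching supplemental cooling on or off."""
    print("Cooling", "ON" if enabled else "OFF")

def control_step(temp_f: float, cooling_on: bool) -> bool:
    """Return the new cooling state given the current temperature."""
    if not cooling_on and temp_f > TURN_ON_ABOVE_F:
        set_cooling(True)
        return True
    if cooling_on and temp_f < TURN_OFF_BELOW_F:
        set_cooling(False)
        return False
    return cooling_on

if __name__ == "__main__":
    state = False
    for reading in [76.0, 79.5, 81.2, 78.0, 73.5]:   # example temperature trace
        state = control_step(reading, state)
```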

Humans can't do everything - automation helps
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/humans-cant-do-everything---automation-helps-40022300
Fri, 02 Jan 2015 09:26:24 GMT | ITWatchDogs

Data center technology is about reliability, efficiency, and scalability as data processing demand continues to rise. In light of this, it makes sense that many centers use automation to keep running smoothly. But are they using enough? There are elements of technical oversight that make sense for humans to handle, but the time when people had to be involved in anything beyond very high-level, day-to-day operations of a data center may have passed. Administrator oversight of these kinds of operations has always been a necessary evil of computing: No one wants to sit around and watch data numbers tick up and down. People with the expertise to build new centers are being employed simply to maintain them, which is a waste of human resources.

Instead, it may be time for companies to embrace fuller automation within their centers. According to Data Center Knowledge, Microsoft recently suffered an outage to its Azure servers due to human error. A policy that was supposed to be followed when deploying changes across servers wasn't, and as a result a multi-billion dollar company let thousands of servers go down for almost ten hours. Had Microsoft automated the upgrade process and policy enforcement, this might not have happened. And even if that particular process could not have been automated, recovery would have been easier if other elements of server maintenance had been. When administrators are given the job of watching over a set of computers, they wind up dealing with a lot of variables at once. It can be difficult to nail all of these down when they are tasked with so many other jobs as well. By eliminating the need for their oversight in all but the areas that require the most creative thinking, it may be possible to let them perform better at their jobs.

Skill shortages demand better use of resources
As it becomes harder to find employees with the skills necessary to work in data centers, there will be more pressure on companies to automate. According to a survey by Dynamic Markets Limited, less than a third of organizations have measures in place to ensure they get the staff they need to operate their facilities. Automation could be exactly the solution that companies facing this problem are looking for. By creating ways for their workers to be less involved in the daily operation of a facility and more focused on long-term expansion, they can reduce the number of employees they need.

There are many pieces of hardware available today that can be used to make a data center run more efficiently. Something like the WatchDog 100 PoE can be very useful for making sure that an organization is able to keep an eye on the physical status of its servers while still keeping personnel primarily involved with planning, not upkeep. By reporting automatically on the temperature, humidity, and other variables associated with servers, this temperature monitoring piece of hardware can let a facility understand exactly how its servers perform on a daily basis. Automated alerts can keep people informed when and if their oversight becomes necessary.
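To make the idea of automated alerting concrete, here is a minimal sketch, in Python, of the kind of threshold check a monitoring setup might perform. The metric names, bounds, and notify() stub are assumptions made for illustration; they do not describe the WatchDog 100 PoE's actual interface.

```python
# Illustrative sketch of threshold-based environmental alerting.
# Readings, thresholds, and the notification stub are hypothetical.

THRESHOLDS = {
    "temperature_f": (50.0, 85.0),   # acceptable low/high bounds
    "humidity_pct":  (20.0, 60.0),
}

def check_reading(metric, value):
    """Return an alert string if the value falls outside its bounds."""
    low, high = THRESHOLDS[metric]
    if value < low or value > high:
        return f"ALERT: {metric} = {value} (allowed {low}-{high})"
    return None

def notify(message):
    # Stand-in for an SMS or email dispatch; a real deployment would
    # call a mail or SMS gateway here.
    print(message)

if __name__ == "__main__":
    sample = {"temperature_f": 92.3, "humidity_pct": 41.0}
    for metric, value in sample.items():
        alert = check_reading(metric, value)
        if alert:
            notify(alert)
```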

Because even data center temperature monitoring options can be automated, there is truly no need for people to be as involved as they used to be. Using software-defined data centers with increasingly complex APIs can lead to a golden age of computing where it really isn't necessary to put man-hours into solving certain kinds of problems. Thanks to the need for increasing amounts of data processing, this may come sooner rather than later.

Proper testing and efficient choices can make data centers cleaner
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/proper-testing-and-efficient-choices-can-make-data-centers-cleaner-40022753
Wed, 31 Dec 2014 11:55:27 GMT | ITWatchDogs

Creating a data center with enough processor power to deal with modern data demands is easy. Keeping all of those processors running at the same time for days and months on end without any of them overheating, failing, or leaking secure information to hackers is the tricky part. As most organizations in the business of moving information know, keeping a bunch of plates spinning in the air is easier than dealing with the inevitable fallout of even one of those plates slipping and cracking on the floor. And as hard as it is to keep the plates spinning, it is even harder to protect the rest once one starts to fall. In order to ensure servers run reliably, organizations need to invest in long-term maintenance procedures and in safety nets above and beyond the typical.

What many organizations don't know is that in addition to proper cooling infrastructure, well-designed testing needs to be implemented as well. Understanding what a cooling system can do in isolation is not the same as knowing how it will work within an actual server rack. Many server racks have a large amount of cabling running through open ducts, which may inhibit airflow. Some equipment in a given room may not be rack mounted, and may have more or less air passing over or through it than a given cooling fan is designed for. Verifying that a cooling system can cool one specific heat emulator does not reveal what might happen if several emulators were to become hotter at once, which could lead to spillover effects that damage the other fans. According to Datacenter Dynamics, overlooking these kinds of concerns can cost companies millions of dollars per year due to a faulty understanding of how their own cooling systems work in real-world scenarios.

Safety in numbers
An important factor in testing data center cooling systems is making use of multiple pieces of remote monitoring hardware. Being aware of the heat and air passing through different ports and points on servers can let an organization understand where airflow is being blocked in its server configurations. In doing so, it becomes easier to optimize rack placement and cut down on unplanned downtime and the other disasters that can happen without proper temperature planning. Making use of the best automated sensors on the market, like the WatchDog 100 PoE, is a great way for an organization to ensure that its testing and real-time cooling systems can keep up with any heat problems that may arise. By understanding how to use these devices, companies can cut their cooling bills by cooling their processors exactly as much as needed and no further.
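As a rough illustration of how multiple probes can point to airflow trouble, the sketch below compares several hypothetical rack readings against the room average and flags outliers. The probe names, temperatures, and margin are invented for the example.

```python
# Hypothetical sketch: compare readings from several rack-mounted probes
# to flag positions running hotter than average, which can hint at blocked
# airflow. All probe names and values are made up for illustration.

from statistics import mean

readings = {
    "rack1-top": 74.0, "rack1-mid": 78.5, "rack1-bottom": 71.0,
    "rack2-top": 73.5, "rack2-mid": 88.0, "rack2-bottom": 72.0,
}

avg = mean(readings.values())
MARGIN_F = 8.0  # how far above average counts as a hot spot (assumed)

hot_spots = {probe: temp for probe, temp in readings.items()
             if temp > avg + MARGIN_F}
for probe, temp in hot_spots.items():
    print(f"{probe}: {temp}F is {temp - avg:.1f}F above average; check airflow")
```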

Groups are doing everything they can to reduce their spending on power. According to Clean Technica, IBM India recently cut its emissions by 40 percent by utilizing solar panels. These kinds of solutions are excellent for organizations with the capital to invest, but for those without direct access to consistent sunlight, or those that need to stretch their dollars, better testing protocols are the way to go. Over time, organizations that can handle more information for less money will win out in the bidding wars for customer attention. Finding any way forward is an important part of keeping pace in the marketplace.

Data center cloud providers still going strong despite security concerns
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-cloud-providers-still-going-strong-despite-security-concerns-40023179
Mon, 29 Dec 2014 12:50:56 GMT | ITWatchDogs

It is always easier to fear something new than it is to embrace the change. It doesn't take long for any one instance of spying or hacking or cybercrime to send commentators screaming about the end of all things cloud. However, data centers that are expanding to the cloud now are doing so in the company of more people than ever before. The demand for online data storage and other cloud services has not slowed down, and information processors are still trying to find ways to handle the new load of data while keeping their data center temperatures low.

According to IDC, a third of electronic hardware money last quarter was spent on cloud infrastructure. This means that for every two dollars spent buying a mouse or installing a monitor, another dollar was being put into the cloud. Data centers that are expanding into the cloud now should be happy to note that they have a bright future ahead of them as providers of one of the most in-demand business platforms on earth. Cloud usage is expanding, and there is no reason for it to slow down. It is simply too useful to ignore.

Data centers still need to be reliable
While the boom of massive data needs across different industries continues to buoy companies, there is still a need for reliability. According to 3DPrint.com, the major 3D printing website Shapeways was brought down recently because its hosting data center in New York suffered a major outage. As more people begin to adopt technologies like 3D printing for general use, it will become increasingly important that those services deliver dependable uptime. This means that providers will need to be careful to keep their websites constantly up and ready.

Being sure that a given server is able to handle the load it is receiving is a matter of attending to the details. The WeatherGoose II can help a company make sure that its servers are always operating within set temperatures by alerting staff whenever they begin to overheat. With sensors like these available for businesses, it can be easier than ever for companies to make sure they will be able to meet their goals. Providing stable computing power is often tough, but the current market demands nothing but the most reliable.

Security and efficiency gain importance together in data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/security-and-efficiency-gain-importance-together-in-data-centers-40021870
Wed, 24 Dec 2014 12:59:00 GMT | ITWatchDogs

Security and data center temperature have more in common than most data center operators realize. These seemingly disparate elements of data center construction are both secondary to normal information processing operations, and both offer powerful but often overlooked benefits when heavily invested in. Cooling and security both require a lot of maintenance to get right, but both are also foundational to the success of a useful, powerful data center - especially one that is rented out to clients or otherwise needs consistent uptime. With strong cooling systems for regulating data center temperature and a heavily vetted security system, a data center can be prepared for anything it faces.

Currently, the biggest obstacle to data center safety is the lack of attention given to high-end security. While most centers are able to provide normal levels of encryption, transmission protection, and firewalls for their servers, this isn't always enough in a modern world filled with hackers. Instead, companies should look for ways to differentiate themselves from the pack by being hard to target and difficult to unseat. There are many ways an organization can be attacked, but tools exist to make all of those attacks less likely. According to Data Center Knowledge, having cryptographers on staff can be important for making sure that security is actually used to drive business. The kind of expert knowledge that these security professionals have can make a business far more attractive to industries that must maintain heavy confidentiality over their data. Understanding how to leverage this knowledge to offer a superior product is a central way that an organization can save itself money and time in the long run. By competing only for the best clients, a center can offer premium services at a premium price.

Data center temperature as cost-cutting
Similarly, utilizing the best in temperature reduction hardware can save a center money. According to Datacenter Dynamics, the recently unveiled Petagen cooling system offers a lot of cooling power for high-end data centers. A major problem with most liquid cooling solutions is that they haven't offered enough cooling potential to lower the data center temperatures of facilities working with the newest chips. These processors tend to emit enough heat that only the best cooling systems will do, and in many cases the most effective have also been relatively inefficient. However, with the release of this new system, there is now a liquid cooling hardware solution that can work with high-end processors. Utilizing this solution along with a strong data center temperature monitoring piece of equipment like the WeatherGoose II could be an excellent way for a center to protect its investments.

By leveraging strong encryption, cooling, and temperature monitoring solutions, a data center can reduce its costs and market its services at a premium. Everyone wants their data analyzed, but those that can afford it want it done well, quickly, and securely. Cutting-edge cryptography techniques, super-fast processing times, and highly reliable servers can all push a center to the forefront of its industry. Thanks to the speed with which liquid-cooled high-end chips can sort information, even very large amounts of data can be processed quickly. By demonstrating a superior grasp of the technology, companies offering the best services can attract high-level clients. Resisting the commodification of data storage and services may be a winning strategy for firms desperate to separate themselves from the pack in this boom of information analysis.

Technical innovations for every data center
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/technical-innovations-for-every-data-center-40021018
Tue, 23 Dec 2014 13:58:30 GMT | ITWatchDogs

Typically, the trends adopted by new data centers are described in sweeping terms rather than concrete ones. It is usually more helpful to talk about how the world of information processing is changing in the abstract than to worry about addressing every detail of how that is happening. However, some recent breakthroughs and patents may help organizations everywhere to deal with their data more effectively in ways that don't have to do with software. Thanks to continued interest in cheap and efficient processing power, companies may see a rise in the usage of economically oriented tools that can help them get more information handled swiftly.

Solar energy
One brand-new development is the recently patented solar energy panel from Foxconn, according to Data Center Knowledge. This major hardware manufacturer is known primarily for its production of the Apple iPhone, Microsoft Xbox One and HP servers. The patent covers a solar energy panel that converts the direct current it receives into alternating current, which is then sent to servers. The circuits would automatically draw power from another source when no solar energy is available. This would allow an organization to use solar energy effectively without restructuring its operations, while still having the security of being on the grid.
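The toy logic below illustrates the general idea of preferring solar output and falling back to the grid when sunlight is insufficient. It is a simplified sketch, not a description of the circuitry in Foxconn's patent.

```python
# Illustrative power-source selection: draw on solar when available and
# fall back to the grid otherwise. A toy sketch, not the patented circuit.

def choose_source(solar_output_kw, load_kw):
    """Return how the load should be split between solar and grid power."""
    usable_solar = max(solar_output_kw, 0.0)
    if usable_solar >= load_kw:
        return {"solar_kw": load_kw, "grid_kw": 0.0}
    # Not enough sun: take what solar can give, draw the rest from the grid.
    return {"solar_kw": usable_solar, "grid_kw": load_kw - usable_solar}

print(choose_source(solar_output_kw=120.0, load_kw=90.0))  # all solar
print(choose_source(solar_output_kw=30.0, load_kw=90.0))   # mixed
print(choose_source(solar_output_kw=0.0, load_kw=90.0))    # grid only
```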

Security
Recently featured on Cloud Wedge is a new security offering from Sophos. Having recently acquired Mojave Networks, Sophos now offers cloud security services to a variety of networks, including data center operations. This kind of encryption power can be useful for any center that works with clients demanding reliably secure transmission, according to Cloud Wedge. Many organizations face legal strictures, such as HIPAA, governing exactly how their information can be stored. As more organizations with heavy legal requirements tied to their data storage move to the cloud, data centers that offer full security protection will be able to cater to them in a way that few others can. Seizing this kind of opportunity to win a larger share of a market can be an important long-term objective.

Temperature monitoring
ITWatchDogs' WeatherGoose II offers superior temperature and condition detection for server racks that need careful upkeep. Because this solution makes it possible to remotely monitor data center temperature and have updates sent via SMS and email, everyone who needs to know can be kept continuously aware of a variety of factors in the center. This allows an organization to quickly fix any issues that may cause a given server to overheat, and lets it respond proactively to these conditions to prevent potential problems. Detecting a potential issue ahead of time is always preferable to being stuck cleaning up the pieces after a server goes down, both literally and metaphorically.

While it can sometimes be difficult to see what the future holds for data center construction, it is easy to make sure that a given company saves itself and its clients time through modern technology. Making use of the best new hardware and software can let a group process information faster and save time and money in the long run. As long as a center is dedicated to remaining near the head of the pack, it will keep investing in the powerful advances coming from the rest of the data center industry. Data needs continue to expand, and the power of processing continues to grow.

The limits of human oversight plague data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/the-limits-of-human-oversight-plague-data-centers-40020623
Mon, 22 Dec 2014 10:42:40 GMT | ITWatchDogs

As different data centers are developed, there are continuing trends toward increasing automation. Rising demand for data requires companies to find more efficient ways of processing information. Understanding how these trends will shape data center growth and development will allow organizations to improve their presence in the marketplace. Manual oversight and physically bound servers are on the way out as more efficient technologies take their place. This is due to natural competition in the market to build systems that can get more done with less, but also to the physical necessity for operators to find ways of managing the information they deal with on a daily basis. Utilizing modern tools that can interface with these kinds of systems is important for developing standards that can be reasonably applied to many different types of networks.

One new trend would eliminate much of the human oversight within data centers. This new way of thinking about information flow comes from the sheer amount of information pouring in through servers: there is reason enough for engineers to look toward creating machines to watch their computers for them. Automation provides the necessary amount of attention to the internal processes of servers so that engineers can focus on long-term goals. This doesn't necessarily mean that everything can be automated, but it opens the door for everything that can be streamlined to be computer-run. According to GigaOm, the importance of automation lies in letting developers think beyond individual elements of their centers like servers and partitions. Creating and managing partitions, deciding how individual workloads should be distributed, and managing servers are all things that should not be taking up the time of those in charge of a data center. These types of tasks should be automated so that administrators can instead focus on important long-term objectives.

Cloud computing technology creates new opportunities
Telecommunication providers are making use of data centers in order to provide better service to their clients. This transformation of the digital landscape into one increasingly run by servers is a natural byproduct of increasing automation within the data center industry. As systems become more reliable and require less highly-skilled human resources to monitor them, they become increasingly useful in a variety of different industries. As time goes on, there will be more types of organizations that can make use of the innate power of data centers. According to Datacenter Dynamics, multi-vendor networks will be the typical layout for upcoming server administration.

Those that are worried about the upkeep of their servers while they are automated can utilize tools like the WatchDog 100 PoE. Because these types of automated temperature sensors can make sure that a data center temperature stays within reasonable levels without bothering human overseers, they help keep one of the most essential elements of the data center - its physical stability - in check. These tools can even be configured to report their data to a central controlling program that decides whether to raise or lower the temperature in a given room, allowing for an automated, highly efficient cooling system that doesn't rely on outside human oversight to function. Technology continually marches forward in terms of automation in order to let workers use their expertise to solve the problems of the future, while computers and other machines take care of temporary predicaments as they happen. Moving forward means being able to focus human potential on planning for the future while machines take up the grunt work involved in caring for the present.
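A simple hysteresis rule is one way such a controlling program could decide when to run cooling. The sketch below is a generic illustration with assumed setpoints, not any vendor's control logic.

```python
# Sketch of a hysteresis rule a central controller might apply when sensors
# report room temperature automatically. Setpoints are assumed values.

SETPOINT_F = 77.0   # target room temperature
DEADBAND_F = 2.0    # tolerance before acting, to avoid rapid cycling

def cooling_action(current_temp_f, cooling_on):
    """Decide whether cooling should run, given the last known state."""
    if current_temp_f > SETPOINT_F + DEADBAND_F:
        return True                 # too warm: turn or keep cooling on
    if current_temp_f < SETPOINT_F - DEADBAND_F:
        return False                # comfortably cool: turn cooling off
    return cooling_on               # inside the deadband: keep prior state

state = False
for reading in [76.0, 79.5, 78.0, 74.5]:
    state = cooling_action(reading, state)
    print(f"{reading}F -> cooling {'ON' if state else 'OFF'}")
```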

Keeping data center servers cool may become easier with liquid systems
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/keeping-data-center-servers-cool-may-become-easier-with-liquid-systems-40021434
Fri, 19 Dec 2014 17:20:41 GMT | ITWatchDogs

The problem all data centers share has to do with how they allocate cooling power to their processors. Nothing drives up costs more unnecessarily than inefficient methods of bringing a data center temperature down. Although the speed and strength of modern processors continues to climb, it does so in lock-step with the continually rising cost of keeping those chips cool. To combat this threat and reduce this cost, centers are investigating new territory. Two of the more likely trends to reduce companies' power bills are combined heat and power and liquid cooling. Both options have the potential to greatly reduce the amount of money needed to keep a center cool, as well as the overall amount of energy that needs to be pumped into a given center for it to perform its job. Taken together, these could solve many organizations' issues with cooling.

Lowering data center temperature has never really been about ambient levels of heat. While many still keep their server rooms cool today, this has more to do with habit than technical expertise. Few servers currently running need refrigerated rooms. Instead, understanding how and why to cool them to an ideal temperature matters far more than simply blanketing a given facility with as much coolant as possible. According to Data Center Knowledge, there will be an inevitable push for liquid cooling as time goes on. This will not only allow for better management of compute density, but will also allow components to last longer. High up-front costs have traditionally kept data centers from going in on these new developments, but that may change permanently as higher data volumes drive up cooling costs. Data center temperatures can easily be lowered through liquid cooling far less expensively than with the traditional fan cooling used in the past.

Combined heat and power with liquid cooling
While liquid cooling becomes more popular, "district heating" is also growing. District heating refers to using waste heat in order to warm up businesses and residential housing, according to Datacenter Dynamics. An emerging idea is for data centers to make use of this too: they could pump out waste heat to nearby buildings and residential areas over colder months as a way of subsidizing their power usage. Combined with liquid cooling, this could drastically reduce the amount of money these organizations have to spend on heat. If consumers pay for this waste heat at current or even slightly reduced rates, it could offset many of the costs involved with keeping servers running. This kind of waste-reduction strategy will become more common as energy costs climb.
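A back-of-the-envelope calculation shows how reselling waste heat might offset running costs. Every figure in the sketch below is a hypothetical assumption chosen only to demonstrate the arithmetic.

```python
# Rough sketch of how reselling waste heat could offset a power bill.
# All figures are hypothetical assumptions, not measured data.

it_load_kw = 500.0            # continuous IT load
recoverable_fraction = 0.6    # share of that energy captured as usable heat
hours_heating_season = 4000   # roughly the colder months
heat_price_per_kwh = 0.03     # assumed rate paid by nearby buildings

recovered_kwh = it_load_kw * recoverable_fraction * hours_heating_season
offset = recovered_kwh * heat_price_per_kwh
print(f"Recovered heat: {recovered_kwh:,.0f} kWh -> ${offset:,.0f} offset per season")
```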

Finally, the most important element in guaranteeing efficient heating and cooling of servers is making sure that the servers themselves are being cooled to the right temperature. The use of remote temperature sensors like the WatchDog 100 PoE is important for understanding exactly how cool or warm a given server is. By using these devices, employees at a data center do not even need to be near a given server configuration to know if anything has gone wrong. Thanks to automated messaging systems embedded within the monitor, staff can be notified immediately if a variable it measures exceeds user-specified parameters. This provides peace of mind and lets data center operators focus on planning for the long term, not managing in the short term. Combining this solution with the two approaches mentioned above is a good way for data centers to move into the future.

Safety and reliability are number one priorities before a storm
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/safety-and-reliability-are-number-one-priorities-before-a-storm-40019906
Tue, 16 Dec 2014 18:49:14 GMT | ITWatchDogs

Organizations interested in weathering the storms coming their way while still maintaining respectably high uptime should be ready to invest in data center cooling solutions that work regardless of outside circumstances. This is so important because there are many ways a storm can damage or bring down elements of a data center's operations, from knocking down electrical grids, to sending debris in through ceiling panels, to upsetting the flow of personnel through a building. Making sure that a company is adequately prepared, with lines of communication that will remain in place through a disruption, is important for maintaining a solid reputation as a reliable data center.

Data Center Knowledge recently ran an article on the top ten tips a company can use to prepare for inclement weather. The advice includes safety preparations, securing loose items, stocking emergency food and bedding, and keeping a list of nearby staff who may be prepared to man the center in case of emergency. Poor driving conditions can hamper anyone who has to use a car to get to and from the center, so maintaining a list of nearby hotels and other places to rest can give employees peace of mind.

For servers, there are many other considerations that should be given thought. Implementing a data center temperature monitoring hardware solution like the WeatherGoose II can be essential for maintaining control over how servers fare during severe storms. Because the WeatherGoose II can send out alerts through SMS, email, and other services, workers can always know how the servers are doing physically, even when other communication lines may be down. Because it doesn't rely on a single medium to communicate, it can much more easily help an organization maintain a workload through even dangerous storms, effectively preventing the problem of losing visibility into how hardware is faring during difficult times.

Cooling solutions continue to grow
Utilizing modern cooling solutions that involve less guesswork than fans and room-cooling techniques may be another way to prevent servers from overheating during storms. According to a recent report by Research and Markets, the global data center cooling market will grow at a CAGR of 13.21 percent between 2014 and 2019, which points to strong continued interest in creating better cooling systems. Organizations need reliable cooling systems, which means they must move beyond old standards and toward more efficient methods of cooling. Recent advances in liquid cooling, for example, have helped many data centers meet the demands of different sectors of their industries. Building a center that works under all circumstances means finding the most efficient way to cool it, given the necessity of cutting costs at every turn.
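For a sense of what a 13.21 percent CAGR implies, the sketch below compounds an arbitrary index value over the 2014-2019 period; the starting value is a placeholder, since the report's base market size is not given here.

```python
# Compound-annual-growth sketch for the 13.21 percent CAGR cited above.
# The starting value is a placeholder index, not an actual market figure.

start_value = 100.0   # hypothetical index value in 2014
cagr = 0.1321

for year in range(2014, 2020):
    projected = start_value * (1 + cagr) ** (year - 2014)
    print(f"{year}: {projected:.1f}")
# After five years of 13.21% compounding, the index grows by roughly 86%.
```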

As data centers continue to grow, the industry will eventually reach a saturation point as demand slows down. If and when this happens, it is likely to make organizations begin to value companies that can provide more data processing for less. Those that have invested in more efficient methods of cooling will be in a better position to save their clients money through lower operating costs than those who have not made those decisions. Ultimately, those that are more efficient will be able to take care of themselves and their clients through the use of highly stable servers. Stability, reliability, and safety are watchwords both for dealing with the weather and the future of an industry.

More money goes into cloud hosting and fighting waste
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/more-money-goes-into-cloud-hosting-and-fighting-waste-40019035
Fri, 12 Dec 2014 17:40:00 GMT | ITWatchDogs

The science behind making the best data center possible continues to improve. Over the past couple of years there has been an explosion in the number of centers opening up, primarily because there has been a massive rise in the supply of data created by people and institutions. Now that there is so much information out there being utilized by different groups and services, they need places to hold and process it. This continued rise has led to a rapid expansion of the data center market, but it also means there has been less emphasis on efficiency and more on expansion when it comes to meeting the demand for greater data processing. In the long run, this has left data centers across the world less reliable than they need to be, which could damage the industry's reputation as a whole unless resolved.

Laying the groundwork for this kind of revitalization of the industry are the studies that bring these problems to light. A study by Veeam Software found that organizations lost $2 million annually due to the loss of revenue and productivity that unplanned interruptions bring. Meanwhile, a recent inquiry into data center waste performed by TeamQuest found that only 22 percent of IT managers said they would be able to predict either the timing or consequences of a disaster scenario. Issues frequently noted by the TeamQuest study include equipment failures, network slowdowns, and availability issues. In many ways, this is typical of a burgeoning industry. The rise of high-volume data centers can be compared to the spread of railways across the continental United States in the 1800s: there is a focus on connection and getting people what they need as quickly as possible, but also a lack of oversight with regard to reliability and (digital) safety.

Keeping the trains running on time
In order to graduate to the next phase of this business model, companies should examine how they can increase the reliability of their servers. This can take many forms, but one of the easiest is to implement better temperature monitoring equipment. Data center temperature issues can lead to long periods of unplanned downtime, which can be very costly for an organization. Using something like the WatchDog 100 PoE can be essential to maintaining a reliable data center amid the chaos of the current information processing industry. Utilizing this kind of safeguard can mark an organization as one that cares deeply about continuous service to its customers. Plus, by monitoring the heat at which servers operate, this kind of device can help a company reduce the chances of losing money on otherwise unneeded equipment replacements.

Understanding the necessity of a better approach to data center reliability is just half of the equation. Making sure that a given organization can take care of its servers in the long run requires maintenance and expertise in the field of data center engineering. With the use of sensors to monitor data center temperature and other security elements, a company can pull ahead of its competitors. Not everyone has the ability to throw around $50 million in venture capital to outbid opposing organizations. As long as the need for data continues to rise, there will be organizations continually looking for centers to help them run their analytics. Becoming the most trusted center out there through reliability and high uptime will be the way to get the best contracts possible.

Better uptime and application deployment are goals for data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/better-uptime-and-application-deployment-are-goals-for-data-centers-40018193
Wed, 10 Dec 2014 01:28:27 GMT | ITWatchDogs

Due to new developments in software trends, many of those working within the data center industry have found that they need new ideas to become leaders in the field. This stems from two fundamental shifts in the way organizations make use of these information processing facilities: software is in constant use and growing in scope, and this requires servers that are both more reliable and able to work together on larger tasks. Just as software-defined servers helped centers abstract their servers into processes running across several physical machines, so too will the next wave of abstraction give operators better control over how their processors are used together to solve large problems. The complexity and rigor of new data sets have grown over time, and the utility of creating a center that can handle the scope of those projects is becoming apparent to everyone in the business.

New standards are being set forth to test uptime, and an increasing number of organizations are functioning as watchdogs in this area. This matters because independent fact-checkers are needed to verify how well given companies live up to their promises of several nines of uptime. One such organization is the Uptime Institute, which recently said that "operations-related failures are the primary cause of data center outages," according to Datacenter Dynamics. This is important to note, because many failures of this type can be prevented with proper monitoring hardware, including the WatchDog 100 PoE, an excellent data center temperature monitoring tool that can help a company keep its servers running regularly. Through the use of this type of equipment, an organization can easily and safely run many servers at once in order to please a variety of customers.
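As a worked example of what "several nines" actually allows, the snippet below converts availability percentages into the downtime each promise permits over a year.

```python
# Translating uptime promises ("nines") into allowed downtime per year.

HOURS_PER_YEAR = 24 * 365

for label, availability in [("two nines", 0.99),
                            ("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    allowed_minutes = HOURS_PER_YEAR * (1 - availability) * 60
    print(f"{label}: about {allowed_minutes:.0f} minutes of downtime per year")
```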

Reliability and scalability continue to be challenges
Even once a center is able to live up to a promise of highly reliable architecture, it must then use that hardware to support the types of programs being run by today's businesses. Thanks to the rise of business analytics and big data, there is increasing demand for powerful machines that can deliver a lot of performance very quickly. These kinds of jobs require that an organization be able to use many servers together to handle its processing needs. To meet this kind of demand, it is imperative that data centers deliver the flexibility and scalability that companies need. Radar's recent article about operating systems for data centers speaks at length on this issue, recommending that operators think of these new APIs as operating systems for the data center. This frames these new ways of controlling servers as a user-defined, powerful and modular system that lets a center engineer easily allocate resources and physical space to a program.
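To give a rough feel for the "operating system for the data center" idea, here is a hypothetical resource-allocation sketch; the class and method names are invented for illustration and do not reference any real product's API.

```python
# Hypothetical sketch of an allocation call a data-center "operating system"
# style API might expose. Names and capacities are invented for illustration.

class DataCenterScheduler:
    def __init__(self, total_cpus, total_memory_gb):
        self.free_cpus = total_cpus
        self.free_memory_gb = total_memory_gb

    def allocate(self, job_name, cpus, memory_gb):
        """Reserve capacity for a job if enough is free, else refuse."""
        if cpus <= self.free_cpus and memory_gb <= self.free_memory_gb:
            self.free_cpus -= cpus
            self.free_memory_gb -= memory_gb
            return f"{job_name}: allocated {cpus} CPUs, {memory_gb} GB"
        return f"{job_name}: rejected, insufficient capacity"

scheduler = DataCenterScheduler(total_cpus=256, total_memory_gb=1024)
print(scheduler.allocate("analytics-batch", cpus=64, memory_gb=256))
print(scheduler.allocate("web-frontend", cpus=300, memory_gb=64))
```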

These types of innovations are helping to bring data center servers into the future of modern business. Thanks to scalable, powerful solutions, companies can keep their data center temperatures low while continuing to invest in superior hardware that enables customers to get their jobs done. Through the utilization of increasingly abstracted server architecture, data centers deliver better data control. Understanding how to make this transition will be an important element of leading centers into the future. As long as an organization understands how its physical servers are handled in the context of an increasingly abstracted environment, it will be prepared for the upcoming changes.

Innovative techniques in data center cooling may come from cars
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/innovative-techniques-in-data-center-cooling-may-come-from-cars-40017703
Fri, 05 Dec 2014 18:26:32 GMT | ITWatchDogs

Data center engineers are continually faced with some of the most puzzling dilemmas in design today. How can a company continue to deliver high-powered data processing while lowering costs all around? This difficult issue is a consistent problem that winds up bearing much fruit. If there was ever any doubt as to the truth of the saying, "Necessity is the mother of invention," it was cast away by the incredible amount of innovation that has taken place in data center design over the past couple of years. Organizations continue to find ways to lower their power usage effectiveness (PUE) and maintain a high degree of service to clients. The next step is finding additional ways of reducing cooling costs, so that data centers can continue to take on the amounts of information they are being asked to process by businesses.

As organizations have continued to look for ways to save money on cooling, they have tried many different approaches. From using cold water at the bottom of lakes, to stationing centers in the middle of freezing northern fields, to setting up solar panels and getting their energy off the grid, the grand experiment to lower costs has taken many forms. Recently, a new development has come not from space or the ground, but from cars. According to Network World, some data centers are looking at using the mirrored surfaces that repel heat from the tops of cars in data center construction. This design would allow a center to repel heat it would otherwise absorb from the sun by day. It is an excellent example of design that cuts out a cost most would overlook, but that can nevertheless go a long way toward reducing the expenses associated with a data center.

Efficient data centers are the future
It's not just in terms of heat-resistant exteriors that data centers are becoming stronger. These highly useful facilities are also seeing a lot of growth, both in the raw number of centers and in the amount of information each one can process. Many new technologies, including IBM's new VersaStack, reported on by MarketWatch, allow organizations to watch and track their data as it goes through their systems. This is important because it enables those working within a center to develop a better feel for how to allocate data, and gives them the tools to understand exactly how each server is being used by the center as a whole.

Other technologies that can help an organization save money in the long run include data center temperature monitors. These allow an organization to fight the overcooling problems many centers have, and to spend exactly as much as needed on reducing ambient heat. Combined with the heat-reflective rooftops some organizations are adopting, this can wind up saving a company a lot of money in the long run. Together, these two design techniques can let an organization process far more data than it used to, while still maintaining a high level of output.

Understanding the future of design is a crucial part of this business for data center operators, and a commitment to saving as much money as possible while still processing as much information as they can is a good way to move forward. With temperature monitoring hardware and weather-resistant materials, these centers can continue to bring the Internet to everyone while reducing their ecological footprint. That's good both for people interested in the green health of the planet and for those watching their green piles of dollars.

Keep downtime at bay with careful attention
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/keep-downtime-at-bay-with-careful-attention-40017059
Thu, 04 Dec 2014 18:35:45 GMT | ITWatchDogs

Reliability is the foremost priority of any data center operator. Without a plan for managing downtime, it is possible to quickly run out of clients who are willing to work with a data center. While some types of downtime are very difficult to plan for - it is hard, for instance, to deal with the grid going down unless an organization is willing to move off of supplied power entirely - there are other elements that can easily be monitored in order to let a company save itself from unplanned downtime. While some organizations like Microsoft have the funds and power to move completely to self-run data centers, not everyone has that kind of capital. Instead, center operators should focus on how to create a system that is resistant to most forms of self-imposed downtime. Using temperature monitors to constantly verify that the data center temperature is within optimal levels and utilizing back-up systems are critical strategies.

Microsoft's recent data center deployment runs entirely on biogas and was designed to be a carbon-neutral center, according to TreeHugger. This kind of development is inspiring and may point to the future of information processing design, but not everyone can immediately jump on board with this trend. Instead of trying to build the next center entirely off the grid, it might be more useful to look at how a company can lower its carbon footprint and save in the long run simply by reducing costs. This is possible in any type of center through the use of remote temperature monitoring software, because it can be used to keep tight control over exactly how much cooling is used. Understanding the physical limits of servers, as well as the methods for keeping them at exactly efficient points without overcooling, is an excellent way to reduce costs without having to invest in other types of services.

Downtime and data centers
In order to reduce downtime as much as possible, it is a good idea to use tools like the WatchDog 100 PoE. This remote temperature monitoring hardware can send out SMS and email messages to staff whenever the temperature of a server exceeds certain user-defined thresholds, allowing an organization to know exactly what temperature its servers are operating at. This lets staff focus on other elements of data center design while still maintaining a watchful eye over the facility's physical condition. By allowing servers to run at a hotter, but still highly controlled, temperature, a center can save money and lower its overall carbon footprint while reducing downtime.

Downtime is so detrimental because, as the Veeam Data Center Availability Report of 2014 says, availability gaps due to downtime have cost enterprises $2 million a year in lost revenue. Organizations that partner with data centers need to know that their information is being protected and kept on servers that won't go down. Protecting this investment and keeping uptime levels as high as possible are critical parts of working with clients. Data center temperature monitoring hardware can be highly useful here because it lets a company keep an eye on servers, allowing it to alleviate any crisis before it spills over and causes downtime within a server.

Understanding the vital necessity of consistent uptime, and the market's demand for it, is a crucial part of working with modern information processing. Clients need and expect high levels of uptime, and centers can only provide it by working with all of the tools they have available. Finding the best way to deliver high-quality, consistent information processing is what will allow successful centers to increase their market presence.

Data center assets should always be protected
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-assets-should-always-be-protected-40014770
Mon, 01 Dec 2014 19:53:45 GMT | ITWatchDogs

Understanding the necessity of data center maintenance is an important part of modern-day information processing. Although many organizations rely upon centers to lift the heavy loads of processor power that most large-scale businesses need, not all of them understand the toll that this data maintenance can take on a facility. In fact, these information plants tend to generate data of their own - information on how their servers are working, how the different elements around those servers are fitting together, and how the overall set-up will work. There comes a point, then, when the major problem for companies to solve is not how to put their data servers together, but how to keep them running over a long period of time. Data center asset management is critical for ensuring the longevity of servers, especially those that deliver long-term loads or experience significant stress periods, according to a recent report by MarketsandMarkets.

Delivering data efficiently isn't just a good idea for companies that process information - it is a critical part of their business infrastructure. Companies should always be aware of threats to the long-term stability of their servers and of their plants in general. There are always risk factors when deploying in any market, and understanding how a company is best able to deal with those risk factors is an important part of working with information in the first place. Even organizations that feel as though they are in a safe area, like Utah, can face unexpected, mission-critical threats if their servers aren't able to deal with stress. According to Wired, a recent bill introduced in Utah's state legislature proposes cutting off the water to an NSA data processing facility, potentially keeping that federal building from functioning.

Tracking temperature in the data center
Understanding the varied nuances of how to keep a server up and running at maximum capacity can always be tricky. One of the best ways to keep a center from running into those kinds of problems is to utilize a data center temperature monitoring tool like the WatchDog 100 PoE. This kind of hardware is able to collect vital information on the running of a center and send out emergency messages whenever those vitals start to look bad. With foreknowledge and the ability to accurately measure the heat of an individual server, organizations can have more control over their servers.

How the Internet of Things may affect data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/how-the-internet-of-things-may-effect-data-centers-40012882
Tue, 25 Nov 2014 10:47:44 GMT | ITWatchDogs

The Internet of Things is coming, and it may accelerate current trends toward larger data centers. Although the Internet of Things market is nascent, it may yet have big consequences for the people holding and storing the information located on the Internet. Many companies are currently developing smart products that can automatically upload information to cloud servers and share it throughout the Internet, either for the consumer's use or to help companies better track how to make their products stronger in the future. The result is a boom in the amount of information that needs to be stored, and a corresponding increase in the number of data centers overall. One of the most important things to think about in data center design over the next couple of years will be how to accommodate all of this information in such a way that it is useful and usable for everyone. While some information may be relatively trivial, other data - like the kind accumulated from health-focused wearables - may be of utmost importance.

Luckily for the data center industry, software-defined servers have become the norm. In the future, software-defined storage may also become an important part of the way organizations keep track of data. This will allow those building new facilities to choose basic, inexpensive commodity servers and rely on efficient software to let those servers punch far above their typical weight class. The real advantage of this kind of design is that it is much cheaper than previous types of information storage, letting a center keep as much information as it needs without increasing its operating costs by much, according to Data Center Knowledge. Because of the sheer amount of information expected to pour in through the Internet of Things, organizations need to prepare now to keep their servers ready for the deluge.

Power costs may become a concern
The Internet of Things may also impact the amount of power that data centers already use. This seems like an obvious problem - more data to process means more energy used processing it. However, the sheer scale involved is what makes this a concern. It is such an issue that Google just bought the contract to the entire output of a wind energy project in the Netherlands. The $750 million deal, reported on by ABC News, is not the first time Google has made an arrangement like this - it has made two others, in Finland and Sweden - but the speed with which Google is making them suggests that the company believes it will be continually expanding its data centers across the world for the foreseeable future.

Companies that want to maintain a close eye on their power costs as they prepare for the coming surge in data center traffic from organizations making use of the Internet of Things may want to invest in power monitoring devices like the Power Manager X2. Devices like this can track the way power is being used through a server rack, giving specialists the information they need to make strong decisions about how to allocate resources in the future. By being aware of power consumption, operators can take control of their data centers and be prepared for the oncoming traffic brought on by the Internet of Things. Through this use, it is possible for many organizations to stay ahead.
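As an illustration of the kind of per-outlet accounting a rack power monitor enables, the sketch below totals assumed outlet wattages into an estimated monthly cost. The outlet names, wattages, and electricity rate are hypothetical, not readings from the Power Manager X2.

```python
# Sketch of per-outlet power accounting for one rack.
# Outlet wattages and the electricity rate are assumed values.

outlet_watts = {"outlet-1": 310.0, "outlet-2": 275.0, "outlet-3": 420.0}
rate_per_kwh = 0.10  # assumed utility rate in dollars

total_kw = sum(outlet_watts.values()) / 1000.0
monthly_kwh = total_kw * 24 * 30
print(f"Rack draw: {total_kw:.2f} kW, ~{monthly_kwh:,.0f} kWh/month, "
      f"~${monthly_kwh * rate_per_kwh:,.0f}/month at ${rate_per_kwh}/kWh")
```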
Data center temperature cooling costs vary by region
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-temperature-cooling-costs-vary-by-region-40013165

Fri, 21 Nov 2014 11:01:03 GMT | ITWatchDogs

As North America enters the winter, many companies are now saving money on the cost of cooling their data centers. Or are they? Do organizations hosting information processing plants actually save money when they head up north, or do other costs price them out? What is the cheapest way to ensure a low data center temperature? Many companies believe that taking their center to where it is cooler year-round can help them reduce their power usage effectiveness (PUE) number, but this is not always the case when factoring in state taxes and electricity costs. Understanding exactly what is costing a company money, and why, is important for organizations that are always trying to run more efficiently than their rivals.

There are a few different costs that an organization faces when choosing where to build. First of all, there are state taxes, which include property taxes. Then, companies have to factor in the cost of renting and the cost of power. Organizations can also check to see whether a state government has created any tax breaks for their industry. Because many states want tech sector jobs, there are more than a few tax breaks available for data centers willing to be built in a variety of locations. Recently, a study published by CBRE showed that Atlanta was the least expensive market in which to lease a data center, with Colorado Springs right behind it. This shows that it isn't all about climate. No one could realistically call Atlanta's weather chilly, snowy, or even "on the cold side," but lowered rental costs have made it the number one place in America to lease a data center.
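Putting those factors together, the toy comparison below adds assumed power, rent, and tax figures for two hypothetical regions; all numbers are invented to show the arithmetic, not actual market data.

```python
# Toy comparison of annual operating cost across candidate regions,
# combining electricity, rent, and taxes. All figures are invented.

it_load_kw = 400.0
pue = 1.5                      # assumed facility overhead factor
annual_kwh = it_load_kw * pue * 24 * 365

regions = {
    #                $/kWh        annual rent     annual taxes
    "Region A": dict(power=0.06, rent=900_000, taxes=150_000),
    "Region B": dict(power=0.11, rent=650_000, taxes=40_000),
}

for name, costs in regions.items():
    total = annual_kwh * costs["power"] + costs["rent"] + costs["taxes"]
    print(f"{name}: ${total:,.0f} per year")
```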

Data center temperatures at a lower cost
If a company cannot achieve a meaningful reduction in cost through the use of smaller data centers than before, then there are other ways to deal with these problems. One of them is to make its cooling processes work more efficiently, and nothing can help an organization do that better than a remote temperature sensor. Equipment like the WatchDog 100 PoE can be a major benefit to those who want to save money and time by cutting cooling costs. Through the use of a remote temperature sensor, a data center can take stock of its servers and their heat at regular intervals, and have emergency notifications sent to staff if the servers ever begin to run dangerously hot, which can be important for saving the equipment in a crisis. The recent fire at a Chinese Bitcoin mine shows that an organization can never be too careful when it comes to watching its servers. Beyond disaster preparedness, though, data center temperature sensors help by allowing companies to fine-tune their cooling equipment so that it cools just to where a server needs to be.

By deploying cost-cutting measures like data center temperature sensors, organizations can cut cooling costs while still keeping their computers safe. Being able to simultaneously keep costs down while making the facility safer is a win-win proposal. Through the use of temperature monitors, a company can deal with many of the problems associated with heating costs, while gaining more control over their servers. Organizations that have more information about how their data is processed will be able to make better decisions down the road about how to expand.

The cloud and the data center can coexist
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/the-cloud-and-the-data-center-can-coexist-40012320
Wed, 19 Nov 2014 17:55:35 GMT | ITWatchDogs

Emerging technologies frequently change each other as much as they change the rest of the landscape, and cloud computing and data centers are no exception. While data centers have obviously been around since before cloud computing - one can't have cloud computing without the data center - both are still in rapid states of development. In many ways, these systems rely on each other, especially with the number of data centers currently being built to function as cloud computing systems for other organizations. One way these trends intersect is the building of larger and larger data centers, which in turn creates less demand for non-tech groups to build their own. Smaller centers are turning into much larger centers, which are receiving more business because of the volume they are able to handle.

Emerging economies are the ones that are developing many of the new data centers, according to Enterprise Tech. There are many reasons for organizations to build and remodel data centers to meet the market and its demand for tall, strong "mega" data centers that are powerful enough to handle many clients and use several servers together in order to solve problems. But there are other ways that the cloud and data centers are moving together, and one of them is that the cloud itself is being used by centers. Because data centers are all about providing processing speed, even they can be greatly assisted through the use of a cloud server to coordinate communication between the servers. As long as none of the actual processing has to route through the cloud, those individual servers can operate at highly efficient speed through virtualized machines and then communicate through a cloud deployment in order to bring the power of that server to a mobile location.

Cloud services and the data center can offer strong monitoring solutions
Part of the reason that the mega data center is becoming such a fixture of the information processing scene is that organizations are realizing they have much to gain from outsourcing their server needs, according to ZDNet. In-house data centers must be constantly upgraded, maintained, supported, and run. Those costs are all shifted to vendors when an organization chooses to use cloud computing instead of running its own large computing system. This change in how organizations use data will have dramatic effects over the next couple of years, as more companies rely on larger data centers that serve a variety of business clients.

Undertaking major operations within the field of information processing is always a difficult challenge, which is why so many data centers take such pains to safeguard their equipment. Remote data center temperature monitoring equipment like the WatchDog 100 PoE can be exceptionally important for guaranteeing long-lasting, reliable service. By using these kinds of solutions to make sure that servers are always running, data centers can afford to take on clients that demand high amounts of uptime. Being prepared for any variance in temperature can save an organization from many problems that might otherwise arise from unexpected hardware failure. Proper use of these monitoring tools can be extremely important for organizations that want to protect an investment.

As data centers and the cloud continue to grow, it is likely that these two services will rely on each other more, not less. The continued rise of many different kinds of servers will keep benefiting business.

Physical security, planning and data center temperature
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/physical-security,-planning-and-data-center-temperature-40011821
Mon, 17 Nov 2014 17:24:52 GMT | ITWatchDogs

Understanding the difficulties inherent in physical security is a large part of guaranteeing uptime as a data center. While most information processing facilities contain many safeguards against digital threats, and enough physical protection to withstand most ordinary environmental damage, some incidents can still threaten a center's uptime. Earthquakes, tornadoes, hurricanes, fires, and other natural disasters can cause serious damage and result in long stretches of downtime, which can have a lasting negative effect on the facility's reputation. As part of planning for larger loads and increasing client demands, companies should be sure to disaster-proof their structures. By building to expand while designing server layouts that can withstand the disruption a natural disaster might cause, a data center can position itself as a leader in the industry.

Backup supplies of water and gasoline and reinforced outer walls can help to ensure a center's uptime even in the midst of disasters. These two elements are the most useful all-around considerations when designing a data center to survive dangerous scenarios. For medium to large data centers, IT Business Edge recommends 60,000 gallons of gasoline stored onsite and a minimum of 300,000 gallons of backup water in order to maintain cooling temperatures for servers. These preparations are essential when dealing with a natural disaster of any sort. Whether a facility is facing a hurricane or a snowstorm, reinforced outer walls and backup supplies will help it weather the storm. Because the response to emergency scenarios is instrumental in solidifying patron trust, it is important to take care of these considerations as a company plans to expand.

Expansion and growth within the data center
Once an organization has taken care of its security needs, it should look at how it can expand efficiently. Some organizational elements are essential to properly planning an expansion, including inventories, charts of current usage and planned growth, and a timetable for those growth projections. Especially when concerned with disaster planning, it is important to take into account what Computer World terms "headroom," which is the spare capacity available for dealing with spikes in usage. Typically, this is expressed as a percentage of current capacity, as in the sketch below. Skimping on headroom can lead to disaster when working with sudden surges in user pools.
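As a rough illustration of how headroom might be expressed, here is one possible reading of the definition above: spare capacity as a share of total provisioned capacity. The figures are made up, and other operators may compute headroom relative to current load instead.

```python
# Illustrative headroom calculation; figures are invented for the example.
provisioned_capacity_kw = 200.0   # total power/cooling the room is built for
current_load_kw = 140.0           # typical draw today

spare_kw = provisioned_capacity_kw - current_load_kw
headroom_pct = spare_kw / provisioned_capacity_kw * 100
print(f"Headroom: {headroom_pct:.0f}% of capacity")  # 30% left for spikes in usage
```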

Alongside server expansions and the protection of physical elements within a data center, companies should be sure to invest in temperature sensors like the WeatherGoose II. These devices can save an organization from a lot of unplanned downtime by providing reliable information about the conditions surrounding different servers, including temperature, noise, and other elements that can quickly spiral out of control unless watched. With a data center temperature monitor, it is easy to quickly account for and adjust variables within a facility so that nothing overheats, even during very difficult events like an earthquake.

Data center protection is probably the most logistically difficult part of the architect's job, but it is important for ensuring long-lasting uptime. Without these kinds of safeguards, a business can quickly lose customers as they flock toward other providers able to carry them through storms. By investing in these elements, data center engineers guard the future of their business and keep themselves profitable year after year. Understanding their use and place within the context of the information processing business is critical.

Methane-powered data center signals new direction for industry
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/methane-powered-data-center-signals-to-new-direction-for-industry-40009875
Fri, 14 Nov 2014 18:08:22 GMT | ITWatchDogs

Running a data center that lowers its emissions is a solid achievement for most engineers, but it may not be enough as the amount of information online proliferates. In order to deal with skyrocketing demand for data processing, organizations will have to find solutions that manage data more efficiently than before. While some of this will come from streamlining the cooling of data center temperatures, there are other ways to reduce fuel use. One of these is to find alternate sources of energy that don't depend on a connection to the grid, which can improve a building's efficiency while also reducing its dependence on outside power. That kind of reliability can be a major selling point for companies that want to know their information will always be accessible, even during major disasters.

While organizations have in the past constructed centers in the Arctic and solar collectors in the desert, that merely scratches the surface of alternate power production. For example, Microsoft is opening a new prototype data center that is entirely powered by methane biogas created by the area's municipal wastewater treatment, according to Datacenter Dynamics. Normally this gas is incinerated, because methane is such a powerful greenhouse gas, but for the data center it can instead be captured and used as a source of fuel. This kind of experiment is important because it broadens the horizons of what the public can call a data center, and it can serve as proof that information processing facilities can be built without taking from the environment. Because the facility uses no energy from the grid and produces no waste, the old measures of power-use efficiency don't even make sense as a metric of study.

Getting off the grid
This is only the first center that Microsoft is considering putting in the state of Wyoming. According to Data Center Knowledge, the company is pursuing another $274 million expansion announced in April 2014. Because the just-built plant functions as a laboratory for biogas and fuel cell research, it is a way of growing the state's technology sector, according to Governor Matt Mead. The use of this kind of out-of-the-box solution for data processing will continue to grow over the next few years as environmental lobbies put increasing pressure on information processing facilities to clean up their operations.

Many groups using this kind of technology would do well to utilize state-of-the-art environmental monitors like the SuperGoose II. These monitors can collect a wide range of data, including temperature, humidity, airflow, light, and sound, and send that information along to any parties that need to know about it. They can be programmed for SNMP, email, and SMS alerts to personnel, with customizable settings and up to 200 alarms per unit. The monitor also supports a networked IP camera interface that allows you to view remote conditions, and it doesn't require additional software to set up.

The ways that centers can be made more efficient and less damaging to the environment continue to multiply, but the technologies used to track that efficiency remain important. Using sensors to understand exactly how power flows to the various elements of heating and cooling can help organizations make informed decisions about their energy use and backup power operations, both early in their operations and later on.

The centralized data center could bring savings to business
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/the-centralized-data-center-could-bring-savings-to-business-40009509
Mon, 10 Nov 2014 18:01:51 GMT | ITWatchDogs

Understanding the current trends in data center construction is as simple as knowing that all other priorities are secondary to reliable processing of massive amounts of data. This means that whatever is built around, through or adjacent to the servers is designed to make the processing of information cheaper or more efficient. While many companies have looked at increasing the efficiency of their servers by locating them in climates where cooling is less necessary, such as the Arctic or far northern countries like Greenland, others are now taking aim at reducing inefficiency through smart building design. By creating facilities that make cooling and energy use easier to manage, companies can drastically reduce the amount of energy they spend on non-server architecture and see greater returns on their investment. This will be more important than ever: according to IBM, end users create over 2.5 quintillion bytes of data every day.

Designing buildings that are made to work with data center temperature sensors is an important part of the business, and it may well expand over the next couple of years. According to Ted Maulucci, CIO of Tridel Group, a network to connect sensors is an important part of building the next generation of architecture. An essential element of this will be the construction of fiber-connected smart buildings that cost the same as or less than conventional construction. They should be able to hook into a data center or server in order to access information about the heating and cooling of all parts of the building at once, so that resources can be allocated more efficiently, reducing heating costs and improving how heat is generated and moved through the building. This is not a fantasy about the future of construction, but a sustainable reality that designers are already pursuing.

Data center room temperature and architecture
Data center room temperature readings could be used within a center to drive down cooling costs. One of the major ways this could be facilitated is through the use of the WatchDog 100 PoE, a device that can sense heat, dew point, airflow, sound and light, so that everyone connected to the server can understand the environmental conditions around every WatchDog monitoring sensor. This is important because it can both send messages within the system to direct attention to a specific element of the server framework, and send texts or emails to employees who can immediately arrive in person to deal with any external changes or blockages in the routing of airflow throughout a data center. Using the structure of a building to achieve maximum airflow for servers, and thus the most efficient cooling process possible, could cut operating energy costs for information processing facilities while still allowing them to take on more data.

Because there will be a continual push for more processing power as more devices connect to the Internet, servers will keep advancing toward higher levels of efficiency and capacity. Paying attention to the intricacies of construction when developing new data centers will be important for the continual sustainability and growth of any new facility. By saving money in the long term through the best possible layouts for efficient cooling and networking, companies can allow themselves the luxury of more servers and the power to fuel those machines.

Lowering energy costs may come from efficient storage
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/lowering-energy-costs-may-come-from-efficient-storage-40009115
Fri, 07 Nov 2014 17:59:17 GMT | ITWatchDogs

The projections for the long-term sustainability of current data centers are mixed. Although data centers are poised to handle an increasingly large share of Internet traffic, there is still a necessary push for lower energy consumption. This is incentivizing the creation of many different technologies aimed at reducing the overall cost of data center operation at every level. Organizations that currently run data centers need to pay attention to these advancing areas in order to stay current and not fall behind the competition. As client demands continue to escalate because of the influx of data from customers and from objects hooked into the Internet of Things, energy-saving software and hardware capable of storing ever more information will become increasingly important. Using these along with a variety of other options to lower costs will help companies keep their data centers competitive over the next few business cycles.

The reason that storage and efficiency are so important for data centers is that the amount of traffic they handle is set to increase over the next couple of years. According to Cisco's 2014 Global Cloud Index, traffic will grow by 23 percent through 2018, which may push public cloud revenue up to $127 billion. This influx of money into data centers will result in many more being created, along with higher expectations for their size. Using highly efficient methods will be one of the only ways that those working within the industry can protect their bottom line. The amount of data most organizations will be expected to process will grow to the point that greater efficiency will simply be required. Luckily, there are a few methods for keeping costs down and for dealing with the data center temperature problems that come with high heat.

Distributed storage and temperature monitoring
There are essentially two ways to make centers cheaper - reducing the cost of secondary elements like managing data center temperature, and making the computing processes themselves less power-hungry. Software-defined storage is beginning to pick up the slack in terms of making the computing operations themselves cheaper, according to Datacenter Dynamics. It allows multiple servers to store and integrate information at once, lessening the load on any individual processor and resulting in more efficient use of power. Because most centers have moved or are moving to software-defined storage, it's likely that further advances in how compactly data can be stored will continue to save space.

The other side of the coin is reducing power use through something like the WeatherGoose II Climate Monitor. This kind of data center temperature monitoring tool can allow an organization to drastically reduce excess cooling in a facility. Because it can detect airflow, temperature, humidity, light and sound, it is well suited to monitoring exactly how a given organization's cooling system is operating. This matters because cooling is a major source of efficiency loss in many facilities. Overcooling becomes a problem when organizations don't know how the temperature of each processor compares with those around it, or how airflow is moving through the center. With a WeatherGoose II, a company is far more likely to make an informed decision about the state of its cooling strategy.

Data room cooling may experience revolution with immersion, Bitcoin
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-room-cooling-may-experience-revolution-with-immersion,-bitcoin-40008084
Wed, 05 Nov 2014 17:51:17 GMT | ITWatchDogs

As data center construction becomes an increasingly complex task, architects are always looking for a way to get the upper hand. Some methods of cooling and operating information processing facilities may lead to new answers for organizations that need to keep track of them. Because both energy costs and data processing needs are rising, there is a continual push to reduce the energy required to work with a given amount of data. Investigating the competing solutions is practically a second job for working data center engineers. Staying on top of current and upcoming solutions is an important part of the construction and design process, and by understanding how information processing is changing, companies can save money in the long run through more compact server configurations and tighter cooling efficiencies.

Other variants of data room cooling, including liquid immersion, are becoming more popular in colocation and hosting. This process involves submerging processors in a bath of non-conductive liquid, which is then cooled. The efficiency of this process greatly outpaces traditional air-based cooling, and it is not especially expensive aside from the upfront investment in the cooling materials themselves. In a recent Q&A with Data Center Knowledge, Chris Laguna, data center manager for CH3, a colocation center in Austin, Texas, said his business was able to maintain a PUE of 1.09 despite an operating room temperature of over 95 degrees Fahrenheit. Efficiency like this is extremely valuable for organizations used to closely watching their margins in order to protect their investments. With a center that can maintain such a strong efficiency rating even on a hot day, organizations can drastically lower the amount of energy needed to cool their data and therefore see far better ROI. Anything that directly impacts the bottom line like this is a strong choice for information processing firms.
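For readers unfamiliar with the metric, PUE (power usage effectiveness) is total facility power divided by the power delivered to IT equipment, so a value near 1.0 means almost all energy goes to the servers themselves. The figures below are invented simply to reproduce the 1.09 ratio cited above.

```python
# PUE = total facility power / IT equipment power; figures are illustrative.
it_load_kw = 1000.0            # power consumed by the servers themselves
facility_overhead_kw = 90.0    # cooling, lighting, power distribution losses

pue = (it_load_kw + facility_overhead_kw) / it_load_kw
print(f"PUE = {pue:.2f}")      # 1.09 -> only about 9% of total power is non-IT overhead
```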

Other types of savings still coming
There are other types of centers opening up that may have similarly low cooling requirements. One of these is the recent boom in services for mining Bitcoin, the popular cryptocurrency long associated with the Internet hacker community. Organizations built around mining the currency for end users, including the newly formed Bitcoin Brothers, recently interviewed by Datacenter Dynamics, are interested in supplying processing services for those who want to mine but lack the hardware. In Bitcoin parlance, "mining" refers to a specific kind of hash-based math problem that has become steadily more difficult as more Bitcoins have entered circulation. Users mine by solving these puzzles in order to gain a certain amount of Bitcoin, and the details of each coin are contained in an extremely long string of numbers and letters, making each digital coin unique.
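To give a feel for the hash puzzle involved, here is a toy sketch of the idea: search for a nonce whose SHA-256 digest starts with a required number of zero hex digits. Real Bitcoin mining uses double SHA-256 over a binary block header and a much harder difficulty target; this only illustrates the concept.

```python
# Toy proof-of-work sketch; not the actual Bitcoin algorithm.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Return the first nonce whose hash begins with `difficulty` zero hex digits."""
    target_prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce
        nonce += 1  # each extra zero digit multiplies the expected work by 16

print("found nonce:", mine("example block contents"))
```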

These kinds of servers are easier to run because they are built around FinFET ASIC chips, which are designed to be used with many others of their kind to create a very powerful processing system over a distributed network. This means each chip does not require as much cooling - and the Bitcoin Brothers are housing their system in Berlin, far enough north that they will not run into as many cooling problems as they would have further south.

Understanding the utility of these cost-reduction methods is an important part of creating powerful data center systems. As long as data room cooling costs can be kept low, it is likely that an ever-increasing number of servers will be built with less ecological impact than the ones before them.

Data centers of the future may go in any direction
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-of-the-future-may-go-in-any-direction-40007213
Fri, 31 Oct 2014 16:37:24 GMT | ITWatchDogs

The beautiful thing about the data center industry is its explosive growth and innovation. Because the companies involved in creating these centers are, in general, so new, there is a lot of room for innovation to break down barriers within the industry and let new ideas in. As the speed of information processing necessarily increases thanks to the rise of cloud computing and the eventual arrival of the Internet of Things, there will be ever-increasing demand for a variety of different types of data centers. As these information-processing plants begin to jockey for space in a rapidly filling industry, it is likely that organizations will further specialize into different types of data center services so that each company can find its niche. This will ultimately be productive for end users, as the increasing specialization of information processing will result in better options for a variety of uses.

As different technologies and ways of processing information become more prominent in the data center space, some things will stay constant. One is the need for data center temperature sensors like the WatchDog 15 PoE. Remotely accessible data center temperature readouts are extremely important, because some physical elements will never be automated by a center (at least not in the near future). There will always be edge cases involving hardware failures that require the intervention of a human hand and eye to keep a facility running. Without automated sensors like this, the people in a data center cannot keep track of the temperature at which its processors are running, which can lead to unplanned downtime and potential damage to the hardware that is the heart of the data center.

Understanding the future of information processing
Some of the possible ways that data centers could grow include one envisioned by the new startup NIMBOXX, according to Tech Republic. This company wants to bring highly scalable computing resources to companies of all sizes, from large enterprises to small, local businesses. This kind of market diversification is an excellent way for newcomers to the industry to attract a solid base of clients without having to compete head-on with established brands. Even organizations already selling aggressively to their typical clients could benefit from releasing platforms or services slightly outside their usual market in order to diversify their client base. Because the way data is used may change rapidly over the next couple of years as organizations find their own ways of applying techniques like analytics and cloud computing to a variety of industries, clients' expectations of a data center, and of how they receive their information, will shift considerably.

As computers and businesses move into the future of data storage and processing, there will be some bumps and scrapes along the way. While the incredible visions of data centers of the future portrayed in Hollywood movies are compelling, most systems will not be operating at that level for some time. In the meantime, data centers must focus on maintaining uptime and working with information in ways that directly benefit their end users, whether they are larger businesses or individuals. The creation of the powerful centers of the future will have to wait until the technology catches up, but for today's purposes the simple data center techniques used now will suffice.

Efficient data centers still need emergency systems
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/efficient-data-centers-still-need-emergency-systems-40005743
Wed, 29 Oct 2014 18:39:05 GMT | ITWatchDogs

As information processing continues its economic journey toward becoming a commodity, demands for efficiency and reliability in data centers will continue to escalate. It is a natural feature of markets that commodity products invariably go to those who have streamlined the process to make it cheaper and faster than anyone else. Information processing will probably never be a true commodity market - there are simply too many concerns about expertise and security for every data center to operate the same way - but the margins centers can expect to live off will keep narrowing unless they find ways to differentiate themselves. Ultimately, centers need to become well known for something special in order to survive in today's market.

One important way to make sure a given processing plant stands out in the crowd is to find something unique about the service it offers. High degrees of reliability are an important part of this. High levels of eco-friendly efficiency might be a way to appeal to organizations that are trying to limit their carbon footprint. This could include the use of new, experimental metrics for understanding how air flows throughout a data center, like Future Facilities' new computational fluid dynamics modeling, as reported in Datacenter Dynamics. This kind of experimental technology can be very important for creating a service that allows customers to connect with a brand.

Working with data reliably
Other features like backup generators and data center temperature sensors are important for making sure that a given center is able to work at all times. Using something like the WatchDog 100 PoE can be incredibly important for crafting a service that is able to respond to a variety of conditions. Sensors matter not only because they keep staff alert to constant changes in conditions within a center, but also because they can massively reduce downtime by keeping it from happening in the first place. This, along with a strong backup generator like the kind detailed in Tech Republic, can keep an organization from suffering many different kinds of problems in the long run.

The use of data to create a more powerful data center is not a new concept, but in fact a rather old one. Modern-day sensors are essentially super-powered thermostats, and scientists have always tried to control processes so that they stay within certain temperature thresholds. It is just that servers respond so poorly to heat that these old practices take on new challenges when tasked with keeping an entire facility full of servers cool. Luckily, keeping computers from overheating is easy with modern equipment like the WatchDog 100 PoE. The beauty of these sensors is that they naturally respond to different levels of heat and cold and can transmit that signal to anyone, anywhere. In doing so, they remove the need for regular walk-throughs to check on servers.

These sensors should still be used alongside traditional temperature readouts. Finding out through a meltdown that a temperature recording device has broken is not a fun way to learn. Instead, treating these sensors as the first line of defense and other thermometers as a secondary line of defense is the best bet for securing the future of a data center.

Fundamentally, data center design is about efficiency, reliability, and ROI. Creating a platform that allows engineers to deliver strong data processing power through the creative use of hardware should be the main goal of many organizations.

Managing data center temperature with sensors and fans
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/managing-data-center-temperature-with-sensors-and-fans-40003242
Mon, 27 Oct 2014 11:05:21 GMT | ITWatchDogs

Understanding how to build a data center so that it doesn't store excess heat can be tricky. The sheer volume of heat generated by strong processors means that any data center engineer trying to build an efficient system is going to contend with a large amount of heat during most of the servers' uptime. To address this, data centers need to be planned in advance to effectively channel heat away from core systems so that nothing is damaged by the heat coming out of them. Below are some tips on managing data center temperatures so that a facility can run at full power.

According to Information Week, utilizing best cooling practices is important. Make sure that cooling capacity and airflow are matched correctly to IT loads. Understanding which servers will be used during peak loads will help an engineer make the right choices about where to place certain fans and coolers. Typically, fans are the biggest non-IT consumers of electricity as far as cooling is concerned. Using a variable-speed fan can reduce the power bill by letting the fan be automatically controlled according to load, so that a server never runs hot enough to be a problem while it is being cooled. If a fan can scale up and down along with the servers, it will go a long way toward helping the center adaptively manage the power it needs to do its job, as in the sketch below.
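Here is a minimal sketch of that load-proportional fan idea: map the hottest inlet temperature in a zone to a fan duty cycle between a floor and 100 percent. The setpoints and the duty-cycle mapping are assumptions for illustration, not a vendor-specified control law.

```python
# Illustrative variable-speed fan ramp; setpoints are invented examples.
MIN_DUTY, MAX_DUTY = 30, 100       # percent of full fan speed
SETPOINT_F, MAX_F = 75.0, 95.0     # start ramping at 75 F, full speed at 95 F

def duty_cycle_for(temp_f: float) -> int:
    """Linear ramp from MIN_DUTY at the setpoint to MAX_DUTY at MAX_F."""
    if temp_f <= SETPOINT_F:
        return MIN_DUTY
    if temp_f >= MAX_F:
        return MAX_DUTY
    frac = (temp_f - SETPOINT_F) / (MAX_F - SETPOINT_F)
    return round(MIN_DUTY + frac * (MAX_DUTY - MIN_DUTY))

print(duty_cycle_for(70.0))   # 30  -> idle at the floor speed during light load
print(duty_cycle_for(85.0))   # 65  -> scale up with the servers
print(duty_cycle_for(98.0))   # 100 -> full speed at peak load
```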

Anticipate future trends
Knowing that, according to Midsize Insider, the future of data centers involves working with big data and demand-shaping techniques is important to understanding how to deploy servers. If demand-shaping is going to be a key component of the next phase of data center construction, architects should already have plans for laying out their servers to take advantage of it. For example, loads can be placed on specific servers that run hot at specific times, so that a facility's cooling fans can be concentrated around a couple of racks of high-demand servers, with fewer fans around the majority of the others. These kinds of engineering decisions are extremely important to the long-term health of the machines and of the facility.

One of the best things a data center engineer can do to lower the temperature of his or her facility is to use a climate sensor like the WatchDog 100 PoE. This piece of equipment is highly useful for sensing not only heat but also humidity, light and sound. Without a sensor, a fan arrangement could go awry if a single fan fails. The WatchDog has two different modes for sending out distress signals when it detects a temperature outside a user-definable set of parameters, including SMS messages and email. This kind of lightning-fast response to changes within a center can make a huge difference in terms of preparedness.
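As an illustration of what "user-definable parameters" with multiple notification modes might look like, here is a small sketch of per-metric alarm rules that fan out to email and SMS. The rule values, contact channels and the send_email/send_sms helpers are hypothetical; they are not the WatchDog's actual configuration format.

```python
# Sketch of user-defined alarm rules with two notification channels.
ALARM_RULES = [
    {"metric": "temperature_f", "min": 60.0, "max": 85.0, "notify": ["email"]},
    {"metric": "humidity_pct",  "min": 20.0, "max": 60.0, "notify": ["email", "sms"]},
]

def check_reading(metric: str, value: float, send_email, send_sms) -> None:
    """Compare one reading against its rule and fan out notifications."""
    for rule in ALARM_RULES:
        if rule["metric"] != metric:
            continue
        if rule["min"] <= value <= rule["max"]:
            return  # within the user-defined parameters, nothing to do
        message = f"{metric} out of range: {value}"
        if "email" in rule["notify"]:
            send_email(message)
        if "sms" in rule["notify"]:
            send_sms(message)

# Example: a 92 F reading trips the temperature rule and alerts on-call staff.
check_reading("temperature_f", 92.0, print, print)
```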

No matter what choices a center makes to prepare its servers, it is important to plan for the server loads to come. Demand for storage is increasing as devices are plugged into the Internet of Things, which means there will always be a need for more space and more servers to keep up. As this happens, it will become more common for companies to host large numbers of servers, which means that cooling costs will climb as high as the towers of servers eventually stack.

Temperature and Dental Health
http://www.itwatchdogs.com/environmental-monitoring-news/healthcare/temperature-and-dental-health-40002799
Thu, 23 Oct 2014 18:13:25 GMT | ITWatchDogs

For many hospitals and health offices, regulations about room temperature can be tricky to understand. The specific temperature requirements can vary by the kind of office involved. Understanding how to create an environment that is conducive to the comfort and health of patients is important, and using the best equipment to do so is a key element of the healthcare industry.

According to a question answered on OSHA's official website, office temperature and humidity are primarily matters of human comfort. However, OSHA recommends temperatures in the range of 68 to 76 degrees Fahrenheit. This range is designed to accommodate most people's sense of comfort. There is no specific guideline regarding dental offices and bacteria. Instead, the guidelines in place cover general office situations, like the comfort of employees and the general expectation that workers should have a relatively pleasant climate while they are working.

Dental X-Rays
X-rays, however, are a little different because of the film-development process involved in radiography. The Delaware Department of Health and Social Services specifies that darkroom and water temperatures should be between 68 and 72 degrees Fahrenheit when developing X-ray film. These temperatures can be critical for obtaining an accurate picture that clearly shows the skeletal structure of the patient's jaw. If the room is too cold or too warm, it can adversely affect the chemical reactions that create the images on the film, which can ruin the samples the dentist needs for an accurate picture of what is going on with the patient.

Because the speed of many chemical reactions in a darkroom is temperature dependent, using something like the WeatherGoose II to detect changes can be appropriate. Without strong temperature monitors, variances can produce poor results in lab equipment. Careful test procedures and careful recording of data are necessary for delivering the best possible product for clients. Standardized guidelines for many different types of equipment require a room temperature between 68 and 72 degrees, so making sure temperatures remain within those bounds can be critical.

5 tips on assuring food freshness and safety for your customers
http://www.itwatchdogs.com/environmental-monitoring-news/healthcare/5-tips-on-assuring-food-freshness-and-safety-for-your-customers-40001339
Tue, 21 Oct 2014 10:14:51 GMT | ITWatchDogs

Making sure that your customers can get the freshest food possible from your grocery store is an important part of modern-day shopping. A single case of food spoilage on the shelves can have disastrous effects for consumers and a chilling effect on potential sales. To be sure that your food meets the toughest requirements and that your customers don't complain of spoilage, follow the tips below.

Stock brands with the most helpful packaging
Customers need help picking out which food will hurt them and which won't. It is important to do whatever is possible to help shoppers who might be unaware of the differences in texture, taste, or smell that signal food has gone past its sell-by date. Recently, Solveiga Pakstaite won a James Dyson award in the UK for a bio-reactive label called the Bump Mark, which causes packaging to become bumpy when the food inside reaches its sell-by date. This means that consumers will have a much easier time finding food that meets their freshness requirements.

Introduce freshness guides
Letting customers know what to look for in their produce is an important way to establish trust. You can do this by posting hints about how patrons can tell that given vegetables and other foods are ripe. Sample hints, like the fact that grapes should be firm or that avocados should give slightly when pressed, are helpful. Informing consumers about these kinds of cues can help them make informed decisions about what they buy. While effort should obviously still go into ensuring that anything that isn't fresh leaves the produce aisle, it is still good to help consumers.

Use monitoring systems
A variety of sensors, including the WatchDog 100 PoE, are important for understanding the temperatures at which certain foods are being stored. You should never take it for granted that your refrigeration systems are working - they need to be monitored and checked, and employees should be able to respond immediately if a refrigeration unit goes down. Using the WatchDog's SMS and email notifications whenever it detects a variation in temperature can be very useful for making sure that, as soon as cooling begins to waver, a system can be checked or replaced before it causes anyone to suffer from food poisoning.

Regular checks
Employees and managers should regularly check the quality of food on the shelves. Visual inspection of all items on display is important for showing customers that the company cares about what it sells people to eat. In this way, the freshness guides can pull double duty, helping both consumers and staff determine what should be on the shelf and what is in danger of spoiling. By helping consumers eat only fresh food, a company lifts itself in their eyes and earns their trust over the long term.

Rank foods
Whole Foods has recently created a list of ratings that will inform shoppers as to where their food is coming from and why they can feel good (or not so good) about what they're eating. These go over different things like soil conservation practices and similar issues. In order to execute this same concept, but with an eye for freshness, let customers know where certain food comes from and how long they can expect it to last at home. This can allow them to make better decisions about what food to buy the day they want to use it and what they can buy several of and use as needed.

By educating customers, organizations create a healthier environment for them and for their staff. By checking on food consistently, they create healthy options and can become their clients' favorite place to shop. These are the kinds of tips that will allow a store to become known for quality in an area, and allow it to rise above competitors by helping its customers buy the best food they can. For a grocery store, being a center of wellness is just a matter of education and vigilance.

Monitoring for the VM Environment
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/monitoring-for-the-vm-environment-40000773
Thu, 16 Oct 2014 17:35:44 GMT | ITWatchDogs

There is no doubt that virtualized networks are the way to go for the modern data center. Because they reduce the cost of doing business so dramatically, many organizations have switched over to these types of systems in order to get the job done more cheaply than before. However, there are always challenges when adopting new technology, and one of them is understanding how to effectively monitor a virtualized system. Creating a set of monitors that can easily pinpoint fluctuations in temperature and other conditions at a given moment can be difficult without tying specific networks to certain servers that have a physical presence.

The benefits of the software-defined data center are many - it is more agile, it lowers costs, it provides stronger security, and it leads to higher uptime for the organizations that use it. More than any other type of center, it can keep moving forward even in the face of many types of physical disruption. It does this by abstracting provisioning, messaging, processing, storage and other basic services into a software-based layer, which allows tasks to be completed on any processor within a large system. Functionally, this means that more complex services can be broken up across many servers, and that a center is free to use much more of its resources at any given time without having to re-provision, according to Insurance Networking.

Efficiency and lower costs drove the rush to virtualized networks
Software-defined data centers simultaneously lower costs and make network technology more efficient. As time goes on, this will become even more important as many centers are expected to work with big data. The new feature every center engineer should be looking for, according to Mike Wronski, VP of Systems Engineering, is distributed in-memory analytics. This means a center should be able to understand how it handles its own data in real time, enabling speedier analytics programs that help a center manage its data most effectively. This kind of processing power is also what makes it practical to act on the output of environmental sensors.

Use sensors to prepare for the worst
When working with a virtualized server array, it is important to remember that the machines themselves are still physical. This means they need strong monitoring solutions to track their well-being with regard to heat, moisture, and other ambient factors that can eventually lead to breakdown if left unwatched. The WatchDog 100 PoE is a strong candidate for this kind of sensor, as it can track a variety of different data points. It can also send alerts via email and SMS to staff so that they can respond immediately whenever readings fall outside accepted conditions. Because of these features, the WatchDog 100 PoE is an important part of any facility's monitoring system.

Getting the most out of virtualized networks necessarily involves creating a system set up to ensure their longevity. Strong sensors are a vital component of the increasing virtualization of networks. The efficiency of these new systems makes them worthwhile, but strong attention to security and performance on the part of the data center engineers who operate them is what will make them last. Keeping equipment in top shape should always be a high-level goal for system administrators, and utilizing data center temperature monitoring tools to do this is a good idea.

5 steps a data center manager should take to protect mission critical assets from environmental threats
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/5-steps-a-data-center-manager-should-take-to-protect-mission-critical-assets-from-environmental-thre-676345
Fri, 10 Oct 2014 18:05:30 GMT | ITWatchDogs

There are a variety of environmental factors that can keep a data center from fulfilling its objective. High-priority servers need constant protection if they are going to carry the loads they must bear, which means engineers need to do their best to make sure they are properly cared for. Not only should architects build to protect the center from the elements, they should devote particular attention to making the server rooms as robust as possible. To create the kind of protected servers that engineers rely on to get their jobs done, they need to explore several different avenues of protection.

1. Use new cooling mechanisms
Utilizing the newest technology in terms of cooling can help to make sure that servers stay at very low temperatures while also providing consistency. Many data centers utilize liquid cooling in some way to provide heavy-duty cooling to servers so that they can function fine even on the hottest days. While this isn't always necessarily classified as emergency planning, an imperviousness to hot days is just as important as any other metric when counting on the reliability of a given center. In addition, the savings gained by how efficient these devices are can be spent on other areas of center construction, providing more funds for security in other areas.

2. Develop sustainably
Sustainable practices can also help a center protect mission-critical assets, according to Automated Trader. Using free cooling from nearby environmental fixtures, like cold water from a lake, can be an excellent way to tie a center into its environment and allow it to keep working even when the grid is down. Redundancies allow a center to keep going, but all power systems need fuel. Water pumped in from a nearby lake doesn't need to be carried in on trucks, and that makes a difference when it comes to long-term protection of data centers.

3. Use sensor technology
Using the best sensors in order to react immediately to emergencies is an important part of developing a reliable center. Making sure that your temperatures fall within the Energy Star guidelines is an important way to confirm that your servers are working well without using excess energy. The WatchDog 100 PoE is an excellent temperature, humidity, light and sound sensor that can send SMS and email messages to all types of staff as soon as a variable crosses a threshold, allowing people on site to tend to their servers immediately.

4. Go underground
Many of the environmental threats that can befall a data center can be addressed if the architect decides to go underground. It is incredibly difficult for a hurricane, tsunami, or other major environmental event to damage a data center located inside a mountain. This kind of long-term thinking can create a nearly impregnable data center, able to withstand disasters and climatic effects that might cause serious damage to other types of buildings.

5. Use redundant systems
Having a redundant set of servers and extra power supplies, batteries, routers, cables, and other types of equipment around can be extremely important in the event of any sort of major break in any of a center's devices. It is always better to have these things and not need them, than need them and not have them. By having spares on-hand, an operator can deal with any emergency that pops up as long as they can quickly swap out systems. Using these and practicing exactly how to switch loads between servers should be a regular drill.

By using these tips, centers should achieve high degrees of reliability.

5 Common Data Center Cooling Mistakes
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/5-common-data-center-cooling-mistakes-675838
Thu, 09 Oct 2014 17:33:15 GMT | ITWatchDogs

The problem with data center cooling is that there is so much to know and very few places to learn it. The ways most organizations keep their centers cool vary greatly, as they find different ways of adapting to their environments and using specialized equipment. However, there are many angles and many strategies that may be used. Listed below are five common oversights that happen when data center engineers don't look for a creative enough solution, or when those in charge of running a data center think too conservatively. Following best practices for center construction means knowing the full range of options a business has for cooling its servers.

Cooling too much
According to IT Business Edge, a new server released by EMC Corp is rated for operating temperatures of 204 degrees Fahrenheit. This is hotter than most people will want to deal with, but the server itself can chug right along. Computers are becoming more resilient to the temperature of the air in the room, and an over-reliance on super-cold rooms could cost an organization potentially hundreds of thousands of dollars over the course of a year as it keeps enormous rooms at refrigerator-like temperatures. While this used to be a necessity, modern equipment doesn't need to be frigid to function.

Underestimating cost
Simply not knowing what an organization is getting into with some climate control measures can be just as bad as overcooling. According to Data Center Knowledge, eliminating uncertainty is a hugely important part of keeping a data center profitable. Operators need to set aside budget for dealing with any technical foul-ups or other mishaps that can cause a center to require more cooling than originally thought. To be sure that a given cooling plan is sound, think about the last major incident that pushed cooling costs over budget, and assume it will happen again sometime in the next year. Money allocated to dealing with accidents doesn't go to waste - it is an investment in making sure business can always proceed as usual.

Failing to optimize
Using the best infrastructure is the most important part of dealing with the costs associated with cooling. Integrated infrastructure, according to a white paper produced by IDG Enterprise, is an extremely important way of keeping costs low across the board. This can bring excellent returns to anyone focused on lowering the costs of running a data center (which is ultimately everyone's goal). Understanding exactly how to get the best return on a given expense is an excellent way to become the most cost-efficient and, therefore, most popular center in the area.

Not utilizing sensors
By investing in strong environmental monitors like the WatchDog 100 PoE, an organization can constantly keep track of its servers and the heat and humidity of the surrounding area. By cross-referencing this with an index of how much work the servers are doing at a given time, a group can understand exactly how hard its center is working at any moment and cool it to an efficient point, as in the sketch below. This can also help a company understand exactly when it needs to worry about the health of its servers, which keeps it from calling in false emergencies while also giving it confidence about when it absolutely needs to call in the cavalry and fix a problem immediately.
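As a rough sketch of that cross-referencing, the snippet below pairs hourly temperature readings with a workload index and flags hours that look overcooled or that warrant investigation. The sample readings and the flagging rules are fabricated purely for illustration.

```python
# Cross-referencing sensor temperatures with a workload index; data is invented.
samples = [
    # (hour, avg_inlet_temp_f, workload_index_0_to_1)
    (0, 68.0, 0.20),
    (6, 69.5, 0.35),
    (12, 74.0, 0.90),
    (18, 71.0, 0.60),
]

for hour, temp_f, load in samples:
    # Low temperature at low workload suggests the room is being overcooled;
    # high temperature regardless of load is the point to watch for server health.
    status = "possible overcooling" if temp_f < 70 and load < 0.4 else "nominal"
    if temp_f > 80:
        status = "investigate cooling"
    print(f"{hour:02d}:00  {temp_f:5.1f} F  load={load:.2f}  {status}")
```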

Failing to deduplicate
Getting rid of excess information on a server seems like it might not directly affect cooling, but every bit of information that is processed takes up a cycle that doesn't need to be used, according to Data Center Knowledge. By getting rid of excess information quickly, a center can be much more efficient and therefore lower its cooling and power bills simultaneously. Efficiency in every aspect of production is important. Even though redundant power systems are still required, it is useful for a data center to keep the information it handles as lean as possible. Without a system that makes the organization highly efficient in data storage and processing, administrators are giving up money that is wasted on cooling inefficient processes.

Utilizing all of these tips can help to make a center stronger, faster, and more reliable. Cooling is deeply interconnected with every other act a center performs, and finding ways to make any part of a server's design more efficient ultimately saves on cooling costs over time. Centers that have found ways of cooling their servers more efficiently will inevitably outperform those who haven't, as they will be working with a system that is simply better at producing more for less.

Deduplication and sensors together make work efficient and safe in centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/deduplication-and-sensors-together-make-work-efficient-and-safe-in-centers-674839
Tue, 07 Oct 2014 10:45:00 GMT | ITWatchDogs

Creating strong solutions for data centers means keeping pace with new trends as they develop. Because computer technology moves so swiftly, engineers need to make sure they are taking advantage of the newest advances. Otherwise, they risk falling behind in the race to handle information the fastest and most safely. As virtualization and other tactics for data management have become increasingly popular, new ideas for data security and processing take virtualization as a given and look for ways to let networks cope with worst-case scenarios. What does a center do when the network goes down, now that such a premium has been placed on efficiency? How can data be managed more effectively while still providing a solid platform for recovery if the worst happens?

Some companies are gaining efficiency by using data deduplication, according to Data Center Knowledge. Deduplication is a process that removes "duplicate" data - files that simply repeat what another file already contains - from servers. Duplicate files are created constantly by programs and by users transferring files, but the duplicates aren't always dealt with as effectively as they could be. To keep this from spiraling out of control, deduplication is used to remove these excess files and reduce the load on the center as a whole. This improves efficiency, especially for virtualized networks, where the sheer bulk of information means doubled data can circulate without any reference back to its original copies.
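To make the idea concrete, here is a minimal sketch of content-based deduplication: hash each file's contents and keep only one copy per unique digest. Production deduplication systems typically work at the block level and preserve references rather than discarding data; this only shows the core idea, and the directory name is hypothetical.

```python
# Minimal file-level deduplication sketch using content hashes.
import hashlib
import os

def deduplicate(paths):
    """Return (unique, duplicates) lists based on SHA-256 of each file's contents."""
    seen = {}
    unique, duplicates = [], []
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest in seen:
            duplicates.append((path, seen[digest]))  # this path repeats an earlier file
        else:
            seen[digest] = path
            unique.append(path)
    return unique, duplicates

# Example usage over a hypothetical directory of exported reports:
# files = [os.path.join("exports", name) for name in os.listdir("exports")]
# unique, dupes = deduplicate(files)
```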

A lack of copies can create problems when looking for backups
But what is to be done if data deduplication is so successful that it becomes difficult to recover in the event of a crash? One possible answer, brought up by Processor, is that companies can use out-of-band networks to monitor the health of a network and perform archival work, keeping snapshots of the data in a given center in order to avoid the loss of information over time. This is a very powerful approach that can even reduce downtime in the event of a crash, greatly reducing the chance of lost sales or other issues that could result from such an event. Out-of-band management is also useful because it provides a set of eyes and ears on the center that is, essentially, free. It does not use the network's bandwidth, nor does it cost extra for employees to run compared to many other security features.

Of course, out-of-band management systems must be equipped with strong sensors to know if a given network is running too hot. An out-of-band system with ITWatchDogs sensors like the WatchDog 100 PoE installed across a data center could report a huge amount of data about variables including heat, light, sound, and humidity to data center engineers, allowing them to pinpoint disturbances within the physical center as well as any possibility of an overload due to excess heat. The best way to prevent downtime from becoming a problem is to keep it from happening at all, and strong sensors are an essential component for that purpose.

Utilizing the best equipment and keeping track of the physical state of servers is an excellent way to deal with many issues that can cause unexpected downtime within a center. With these goals in mind, it can be very easy for an organization to deal with the unexpected troubles that might otherwise trip up a less-prepared organization.

Data centers need efficiency to survive
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-need-efficiency-to-survive-673663
Mon, 06 Oct 2014 10:09:53 GMT | ITWatchDogs

Creating more efficient data centers is one of the best ways for engineers to lower the costs associated with churning through vast amounts of information. The problem is that keeping data center temperatures low can be very expensive unless some creative problem solving is used. Because of the potential rewards of lowering cooling costs, organizations have tried many different approaches to creating more energy-efficient centers. Working with a variety of methods and applying a patchwork of different solutions to reduce the cost of maintaining data center temperature will be key for architects moving forward.

Recently, large groups have become involved in the effort to lower data center temperatures thanks to the White House Better Buildings Challenge, an initiative that challenges organizations to design more efficient, less-polluting facilities for their businesses. Two of the participants are CoreSite Realty Corp. and Digital Realty, according to Data Center Knowledge. These two firms will be working with properties in their U.S. portfolios and will aim to reduce energy consumption by 20 percent within the next 10 years. The goal is to reduce energy use on the facilities side - not on the IT side - so they will have to focus on creating and sustaining highly efficient methods of cooling and other climate control measures for their sites. This kind of focused effort to reduce power consumption will only become more popular in the years to come, as the cost of processing data falls while energy prices stay flat or even escalate over time. Data center efficiency is all about reducing non-IT expenses, and this means finding ways to keep servers functional without breaking the bank.

Virtualization and other methods of reducing costs
The use of virtualization may be a way to reduce cooling costs without significantly investing in new hardware, according to The Data Center Journal. This way of increasing efficiency involves virtualizing servers within a data center as a means of more evenly allocating load. CPUs generate much more heat when they are under high stress than when they are performing lower-intensity work, so a data center can save money by distributing the workload more evenly and ensuring that no individual processing tower winds up taking on the bulk of the load. Sharing the weight of heavier transactions in this way can save a lot of money in the long run, as the sketch below illustrates.
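A minimal sketch of the scheduling idea, assuming each host can report a current utilization figure; the host names and numbers here are illustrative and are not taken from any specific virtualization platform.

```python
def place_workload(hosts, workload_cost):
    """Assign a new workload to the least-loaded host so that no single
    machine ends up carrying the bulk of the heat-generating work.
    `hosts` maps host name -> current CPU utilization (0.0 to 1.0)."""
    target = min(hosts, key=hosts.get)
    hosts[target] += workload_cost
    return target

# Hypothetical utilization readings for three virtualization hosts.
hosts = {"rack1-node1": 0.72, "rack1-node2": 0.35, "rack2-node1": 0.55}
print(place_workload(hosts, 0.10))  # -> rack1-node2, the coolest-running host
```

Real hypervisor schedulers weigh memory, I/O and affinity rules as well, but the underlying goal is the same: spread the hot work around instead of piling it onto one tower.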

In order to make sure that there are no inefficiencies when lowering data center temperature, it is vital to use strong monitoring hardware like the WatchDog 100 PoE. This allows an engineer to monitor heat, light, sound and humidity very easily, letting all staff know whenever anything exceeds a user-defined limit on any of those variables. This kind of 24/7 monitoring is important when dealing with the data loads that many centers have to work with on a daily basis.
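The alerting logic behind such monitors can be pictured as a simple threshold check. The readings, limits and metric names below are invented for illustration and are not the device's actual firmware or configuration.

```python
# Hypothetical user-defined limits for a server room; values are examples only.
LIMITS = {
    "temperature_c": (18.0, 27.0),   # acceptable range, low to high
    "humidity_pct": (40.0, 60.0),
    "sound_db": (0.0, 85.0),
}

def check_readings(readings, limits=LIMITS):
    """Return a list of alert messages for any reading outside its limits."""
    alerts = []
    for metric, value in readings.items():
        low, high = limits[metric]
        if not (low <= value <= high):
            alerts.append(f"{metric} reading {value} is outside range {low}-{high}")
    return alerts

# Example: one overheating reading produces an alert that could then be
# relayed to staff by email or SMS.
print(check_readings({"temperature_c": 31.5, "humidity_pct": 45.0, "sound_db": 60.0}))
```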

As the market for information processing matures, the centers with more efficient cooling systems will be the ones that survive. By passing savings along to their customers, data center engineers will be able to capture market share through the power of highly efficient, well-regulated servers. Information processing is heading toward being a commodity, so finding the cheapest way to deliver it to clients is a must for anyone involved in data management. By making the decision today to invest in efficiency, operators are saving their companies money in the long run.

Data centers found even amongst Swiss Bunkers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-found-even-amongst-swiss-bunkers-672216
Fri, 03 Oct 2014 10:08:59 GMT | ITWatchDogs

A recent Wired article showcases old Swiss bunkers that have been turned into high-tech data centers. These centers are designed to house very specific, highly classified material that does not need to be used - just stored. Because of this, physical protection may be the only security feature these centers need, as they can forcibly disconnect themselves from the Internet and therefore do not have to worry about digital attack. Unlike many other kinds of centers, these are designed to stand motionless for long periods of time and have no uptime concerns.

In stark contrast to these are Oracle's new deployments in Germany, according to Datacenter Dynamics. These data centers will be used to "complement Oracle's facilities in the UK and the Netherlands," according to Oracle. Despite the major differences between these two types of facilities, they are similar in one extremely important way: they need to be resilient. Both types of centers need to reliably hold a large amount of data and store it safely for long periods of time.

Climate monitors are necessary for these kinds of projects
Using the WeatherGoose II - Climate Monitor, a company can deal with many of the issues that could come up in either one of these centers, whether it stores information for the long term or processes information to send it out again later. Both of these kinds of facilities require constant monitoring of the environment to ensure that there is no data loss. No matter what the situation is, a company can benefit from a climate sensor that helps ensure it does not lose information. Data loss is one of the worst disasters that can befall a center, and mitigating its effects through preventative measures is one of any data center engineer's chief responsibilities.

When it comes to the security of information in any center, it is important for data center architects to plan ahead and invest in systems that can give their support teams advance notice of variations in temperature, humidity, airflow, light and sound. For those situations, they need the WeatherGoose II. This kind of sensor can help any organization defend against a myriad of circumstances without fear of being unable to react to events as they happen, because staff will always be alerted promptly by the SMS and email alert features built into the system.

ITWatchDogs products not vulnerable to Shellshock exploit
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/itwatchdogs-products-not-vulnerable-to-shellshock-exploit-672635
Wed, 01 Oct 2014 10:46:31 GMT | ITWatchDogs

ITWatchdogs has received a number of inquiries about whether any of our monitoring units are vulnerable to the 'Shellshock' exploit recently discovered in the Linux bash shell. We would like to assure our customers that the units do not use Linux as their core OS, nor do they have a 'bash' shell, so they are not vulnerable to the Shellshock exploit.

Our products are designed to be used to prevent emergencies, not cause them. When you are looking for a climate sensor that can detect variations in heat, humidity, light and sound, you can safely look toward our diverse range of products. Our WatchDog 100 PoE is a highly refined sensor system that can alert system administrators and support staff to variations in a diverse array of ambient conditions through SMS and email, so that there is never a time when you are unable to respond to an urgent event. Our Remote Power Manager X2 is designed to provide the same function, but for the power supply. With such solutions in place, you know if there are any drop-offs or sudden rises in power consumption by the servers in your data center, lab or other controlled environment.

Large scale data centers require sensors
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/large-scale-data-centers-require-sensors-671824
Mon, 29 Sep 2014 10:34:49 GMT | ITWatchDogs

Data consolidation is a difficult art in the realm of data center engineering. Because the bulk of large-scale data center work is about creating an efficient workspace, many companies that deal with extremely large amounts of information find themselves dealing with complexities layered upon complexities. Architects working at this level frequently have to invent their own solutions, since these kinds of issues are only beginning to be addressed by recent advances in computer science. In order to create a stronger future for themselves, data centers are learning from each other in order to make facilities run more smoothly.

Heading this push into the realm of massive data retention is Facebook, which is working with other technology leaders to create a clearinghouse organization, according to Data Center Knowledge. This group will work to create best practices and publicly support open source projects that make large-scale data movement easier. In this way, companies can be hopeful for a future where many of the current problems plaguing high-load centers are dealt with more effectively, or at least more widely understood. As many groups work together on the problems plaguing the efficient structuring of information, it is likely that productivity will grow and there will be even more incentive for companies to utilize large data centers.

Current technologies already achieving results
One sector helping to spearhead efforts around data retention is the U.S. government. Federal use of consolidated data centers has already resulted in a fairly massive increase in productivity, according to Data Center Knowledge. In fact, the U.S. government recently turned out to have underreported its gains from using consolidated data centers by $6 billion, meaning the initiative has been far more of a success than was initially known. This is a great way for large-scale data centers to prove that they can make life better for everyone, not just for companies organized around allowing people to like each other's photos.

These kinds of data centers, although they are extremely useful, come with their own logistical challenges beyond the data itself. How can an engineer design these systems in such a way that any problems that arise can be dealt with effectively and efficiently? How does someone deal with the increased chance of a meltdown with so many processors firing next to each other?

One of the best ways to address equipment failure is through the use of sensors. Designing a system to be monitored by a collection of WatchDog 100 PoEs can help an organization keep track of the temperatures on all of its server racks, with each unit designated by number to point to a different location. These units can send out SMS and email messages to support staff around the clock, so that there are always ready hands and eyes on any element of the data center. With this kind of resiliency built into systems, it will be much easier for companies to rely on their data being stored in larger server farms, and to reap the benefits of a large data center more easily.

As time goes on, there may be more groups utilizing the power of very large data centers, or possibly more spread-out, distributed networks. No matter the situation, it will be very important for organizations that are working in this manner to monitor closely the health of their systems by tracking different ambient variables like heat and humidity. Without checks on what is physically happening in a data center, companies are flying blind. Finding ways to make sure that there are constant checks on these centers will create a more reliable information system.

Data centers experiment with new forms to reduce operating costs
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-experiment-with-new-forms-to-reduce-operating-costs-671702
Fri, 26 Sep 2014 19:27:30 GMT | ITWatchDogs

Dealing with everything that goes into running a data center is difficult. There are the continuous processes of managing recurring monthly costs and the different types of emergencies that can arise. Luckily, the industry is changing the way that centers operate and eliminating some of the monthly charges they deal with. Because it is imperative that companies receive the uptime they need from their hosts, centers are always trying to find ways to make it easier to work with their clients. In some cases this means creating new physical locations right next to a client's place of business; other times it means finding ways around the traditional center-supplier relationship in order to create a solution that works for both the client and the center.

StarHub, one of the top three telecommunications providers in Singapore according to Datacenter Dynamics, is now offering modular, software-defined data centers to enterprise businesses, for example. These new types of data centers will be deployable to any location and can handle anything from performance metrics to support infrastructure. They have been developed to provide a non-traditional option for companies that need certain services from a data center but do not need to lease an entire facility to achieve those goals. Enterprises that want to keep their staff on site can also deploy the modules within their own buildings, giving them on-premises access to their information at all times. This development could help create a new, industry-wide class of modular data centers deployed to help customers deal with data in more specialized ways. Because many enterprise clients want to keep their data on site, this kind of option is a natural fit for them.

Creating stability
Interoute, a U.K.-based cloud services platform that is the largest in Europe, has decided to stop charging monthly fees for data center cabling, according to Datacenter Dynamics. The company has decried this industry-wide practice as an illegitimate expense. It remains to be seen how this reduction in charges will affect other centers, but it does remove a cost that has traditionally burdened data center customers. It could also signal a change in direction for centers in terms of how they deal with customers and clients.

An excellent way for centers to reinvest their newly freed income would be in climate monitoring solutions. Monitoring data center temperature can be difficult without strong products that can track several different variables at once, but luckily the WatchDog 100 PoE fits the bill nicely. This piece of equipment can do many things to ensure the overall reliability of the center, including autonomously sending out email and SMS alerts to support personnel whenever the reading for temperature or any other metric exceeds a certain threshold. This allows a center to automatically bring in support staff to deal with problems before they escalate.

Working in data centers means always dealing with a certain amount of risk - unplanned downtime happens, and it can cost companies a lot of money. The best way to deal with this is to reinvest profits into stronger standards of security and reliability in the data center. By doing this, data center architects make it easier to assure their clients that the center will be able to deliver optimum performance at all times. This kind of business plan can earn a center a strong reputation for reliability in the years to come. Focusing on safety and reliability will pay off in the long run.

Staff turnover highlights importance of monitoring solutions
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/staff-turnover-highlights-importance-of-monitoring-solutions-670634
Wed, 24 Sep 2014 18:03:55 GMT | ITWatchDogs

Recent staff turnover at major organizations that run data centers highlights the importance of monitoring solutions. As data centers mature, there will be more instances of the original architect or engineer of a data center not being the person responsible for its daily operations. The roles of data center supervisor and architect will separate more fully, and it is likely that some architects will be hired just to implement systems that others will then take over. As this happens, it will become more important for those designing data centers to put them together in such a way that the inner workings are easy to understand from a casual observer's point of view. For this kind of handover to work, there also needs to be an easy-to-understand system in place for dealing with emergencies.

Because data centers are such complex constructions, making sure that they are easy to fix when something goes wrong is very important and not easily done. When centers do not do their jobs, whether through malice or negligence, hundreds of thousands of dollars can wind up being lost. For example, a recent Bitcoin server scam in Missouri was shut down, according to Data Center Knowledge, after a company that developed servers meant to aid in Bitcoin mining never produced the vast majority of the merchandise it was supposed to sell. For most customers this represented a severe loss of income, as the servers were supposed to speed up their mining operations. Unless there is a steady head at the helm of any major technical operation, things can get out of hand quickly.

What does this have to do with data centers?
In order for a data center to be successful, it needs to be tightly monitored and controlled, like any business operation. Cisco's recent loss of staff at the head of some of its operations, reported by Network World, illustrates that the people in charge of a given facility may have to leave abruptly for their own reasons, whether or not they find anything wrong with the company itself. In order to guarantee a long lifespan for data centers, it is important that automated systems be set up to detect anything that happens during these periods of turnover and to make clear why a response from specific people may be necessary.

Utilizing strong data center temperature monitoring hardware like the WatchDog 100 PoE can be the difference between a successful and an unsuccessful response to an emergency. The reason this particular piece of equipment is so useful is that it can automatically send notifications to a variety of staff members whenever one of the variables it tracks reaches a threshold, allowing those who receive the alert to quickly understand the problem and respond accordingly. Without this kind of technology, a problem could smolder in a data center overnight with no one watching, damaging equipment and efficiency before anyone notices. Because the WatchDog 100 PoE is fitted in the server rack itself, it has a much better perspective on exactly what is happening within a center and how that affects the functioning of the building as a whole. Utilizing automation and the technology of the future to protect servers today is a noble goal, and companies should make it part of their disaster preparedness designs.

Improving efficiency in electrical consumption
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/improving-efficiency-in-electrical-consumption-670256
Tue, 23 Sep 2014 16:51:36 GMT | ITWatchDogs

When it comes to creating a more efficient data center, being green saves green. By focusing efforts on building a highly efficient data center, architects can not only help the planet but also save money in the long run. Keeping a data center running over a long period of time is expensive, and few things chew up more of a data center's budget than the cost of powering it. To fight this constant expenditure, instituting rules that keep power use down can be a great help. This can be done in a variety of ways, but what matters most is that data center engineers know how to calculate the efficiency of their center.

According to Data Center Knowledge contributor David Wright, the most common way to measure energy efficiency is the Power Usage Effectiveness (PUE) metric, which is calculated as Total Facility Power divided by IT Equipment Power. In essence, the larger the share of a facility's total power that goes to the IT equipment itself, as opposed to extraneous overhead, the more efficient the data center is. This principle, promoted by The Green Grid, is used throughout the industry as a way for operators to understand how they are doing on efficiency. Since total facility power can never be less than IT power, the ideal PUE is 1.0, and a center aims to get as close to that figure as possible in order to cut down on extraneous costs. Elements like cooling and air handling are counted as non-IT loads even though they support the server systems, because they are not strictly essential for the equipment to run - if the climate were consistently cold enough, a given center might not need a cooling system at all. A short worked example of the calculation follows.
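As a quick worked illustration of the arithmetic - the power figures here are invented for the example, not drawn from any real facility:

```python
def power_usage_effectiveness(total_facility_kw, it_equipment_kw):
    """PUE = total facility power / IT equipment power.
    A value of 1.0 would mean every watt goes to IT gear; real facilities
    are always above 1.0 because of cooling, lighting and other overhead."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility drawing 1,500 kW overall, with 1,000 kW going to IT load.
print(power_usage_effectiveness(1500.0, 1000.0))  # -> 1.5
```

In this example, half a kilowatt of overhead is spent for every kilowatt of useful IT work; driving the ratio down toward 1.0 is what the efficiency measures described in this article are chasing.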

How to become more efficient
Some experts believe that efficiency in data centers will skyrocket as they virtualize more fully and become defined by software rather than hardware, according to Network World contributor Pete Bartolik. In the meantime, however, there are plenty of data centers currently running that already employ virtualization widely and still need to be made more efficient. There are many options for companies that need to deal with inefficient cooling systems in their data centers. While some of these involve choosing different kinds of cooling systems to reduce the cost of cooling as a whole, an often overlooked strategy is better monitoring.

Regulating data center temperature can be difficult without knowing exactly what temperatures work well for a given set of servers and how they currently run in a given configuration. To address this, it can be useful to deploy a monitor such as the WatchDog 100 PoE. This piece of equipment can monitor heat, light, sound and humidity within the data center environment, giving admins access to all of the different variables that might change as a server heats up. By tracking exactly how hard cooling units have to work to maintain an optimal temperature each day, a data center can end up running far cooler and cheaper than it used to.

Reducing the PUE of a center is difficult, but not impossible as long as the user has the right attitude. Data center administrators have their work cut out for them to economize their centers, but the process is entirely possible, plausible and can save them money in the long run as well. There's no reason not to make a center a little more Earth-friendly, so engineers should gear up to go green.

More secure data centers utilizing diverse locations and predictive technology
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/more-secure-data-centers-utilizing-diverse-locations-and-predictive-technology-669861
Mon, 22 Sep 2014 23:51:38 GMT | ITWatchDogs

Many data center engineers are working with out-of-the-box solutions in order to deliver more uptime and better reliability for their servers. While not every new development will be worth adopting, there are still a number of ways that those who are currently running or otherwise responsible for building data centers can learn by keeping up with industry trends. Reliability and uptime are the watchwords of data center maintenance, and it is important to understand the advances being made on both fronts. Two major trends to watch are data center hosts operating geographically diverse facilities in an effort to provide 100 percent uptime for clients, and providers using the newest predictive technology to understand the strong and weak points of their current architecture.

Data center businesses that work with international clients can wind up in a very strong position quickly, said Berwick Partners consultant Callum Wallace in an article for Data Center Knowledge. He described the major risks behind global data center networks as logistical - questions of who makes which decisions - and recommended a series of options that would reduce confusion about whom employees and staff should report to under specific circumstances. This kind of strategy is very sound for serving any multinational client that requires equal computing power wherever it operates. It can also allow a data center to prevent local fluctuations in weather, temperature or even the power grid from interfering with workflow. This kind of scaling up may become more common in the future, as Internet connection speeds increase and the network itself gets the chance to act as businesses' cabling.

Utilizing predictive power
Beyond building out geographically, using data analysis to understand exactly when and how a data center is consuming power is very impressive to clients. New developments in network automation, according to Tech Radar, enable administrators to keep track of signal strength and predict laser failure in optical networks. This can help a data center know ahead of time when it is likely to need to replace certain parts or swap out cables in a network, allowing it to take care of the problems it might face much more easily. By paying attention to what is going to happen before it happens, administrators can prevent many problems within the network.

These kinds of techniques for predicting network equipment behavior are important, but a reliable guard to catch any mistakes that might fall through the cracks is equally necessary. The WatchDog 100 PoE, for example, allows a data center operator to notice immediately if there is any major variance in temperature, light, sound or humidity within a server room, allowing staff to take precautions right away to prevent a center from shutting down or a fire from breaking out. By using strong tools, administrators can deliver better uptime for the company and convey a strong sense of care to the client base.

Whether a data center is using the newest methods of network prediction, a diverse geographical base or the newest and most efficient environmental sensors, the goal of them all is to provide reliable, consistent uptime. Without reliable servers, centers quickly find themselves desperate to grab clients. In order for the world of IT to be profitable and usable for a majority of workers, centers need to maintain high degrees of professional excellence. By keeping abreast of industry trends, those who work with data can consistently outperform those who do not track the way the wind is blowing.

Data centers have nothing to fear from automation
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-have-nothing-to-fear-from-automation-669093
Fri, 19 Sep 2014 17:31:05 GMT | ITWatchDogs

Although the specter of automation sends chills down many workers' spines, for most IT-level jobs automation will wind up as a job creator, not a destroyer. And for data centers, automation presents some unique opportunities. Data centers themselves require fairly little human time once most of the groundwork has been done. There are technicians who need to be on call in case of an emergency, and the administrator may keep an eye on a center in order to monitor its efficiency, but most of the time the human hands that built a data center are content to ignore it once it is fully operational. Part of the reason is that data center engineering has reached a point where it is very easy to keep a center working without a lot of direct intervention unless something breaks, which is (luckily) easy to monitor for.

Wired's article from August on automation talked about it not as an end in itself, but as something that may eventually free people up to do other things while robots continue producing goods. For data centers, this future might be much closer than for other industries. While there will always be human oversight involved in running anything as complicated as a data center, that oversight is mostly needed when there is an emergency. The rest of the time, most current data centers' automated systems do a good job of running themselves. The question is not whether data centers will be able to automate, but how automated they can get. It is likely that they will be able to do quite a lot of self-maintenance, but at what point is a pair of human eyes absolutely necessary to spot something out of the ordinary?

Lights out for data centers
Data Center Knowledge's recent interview with data center architect Jamie Fogal offered some interesting insight into the nature of a fully automated, "lights out" data center. Fogal outlined two goals for centers moving to a lights-out system: a reduction in power consumption and an improved client experience. He went on to say that some data centers already leave the afternoon and midnight shifts to a lone security guard who patrols the building and keeps physical intruders out, and that the rest of the center can run fine without any specialists on site until they are needed to fix a specific problem.

Remote surveillance of the physical state of a data center is fairly easy thanks to advances in monitoring equipment. The WatchDog 100 PoE can monitor a range of climate variables, including humidity, temperature, light and noise, and remotely trigger alarms to a suite of personnel depending on which thresholds have been crossed. Automation techniques need to be built with this kind of human-driven oversight in mind in case anything goes awry. While systems can deal with routine problems by themselves, they also need to be constructed in such a way that they notify human overseers when things get out of hand.

As time goes on, automation will become the trend for most IT advances, but there will always be the need for emergency on-call support and other critical services, as well as guards who protect the integrity of the physical location of the data center. The design of new advances in server development will necessarily involve data center architects monitoring the way that current servers operate in order to make improvements, so some elements of non-critical oversight involved with design will still be necessary as well.

The sky's the limit for data center construction
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/the-skys-the-limit-for-data-center-construction-668159
Thu, 18 Sep 2014 10:04:47 GMT | ITWatchDogs

The factors involved in constructing a new data center make it a complex process. What kind of servers should the data center run? From what vendor or developer should server racks be purchased? Where should the data center be located? All of these questions and many more fill out a complicated inventory of items that need concrete answers before a company can even begin to design the center. For a center to serve an organization well, it needs to be well maintained and to function reliably. Many big-name companies have placed data centers all over the map, including Twitter, which recently announced that it will build one in Atlanta. This kind of news is common, but what might be less commonly known is the number of data centers being built in highly unusual locations.

Underground data centers
A recent article by Manufacturing.net provided the names of three large underground data centers and pointed out that there are hundreds of them in the United States. A facility owned by Iron Mountain in Pennsylvania runs in what used to be a mine, for example. These data centers are so well hidden because that makes them secure against natural disasters and physical security breaches, not to mention the ease of controlling the data center temperature in an entirely artificial environment. Drawing on cold cave air, and being able to precisely control the quality of the air that flows into and out of an underground facility, makes these sites highly efficient from a climate control standpoint alone.

Even the redundancy equipment for these underground labyrinths of data can be located underground. This means that these data centers will be well protected against virtually any physical threat, whether natural or man-made. Typically, these complexes are co-owned by a few different organizations that work together in the interest of shared security, but some are owned by independent groups. These subterranean information basins still need strong climate monitoring components like the WeatherGoose II, though, just in case something goes wrong with the climate control systems. Although it is easy to keep these centers cool enough for servers to run optimally at all times, it is still an exercise in engineering craft to make them hold up against the pressures of being an underground structure and to keep out moisture that could otherwise damage the installations.

Other locations for data centers
Of course, not every organization needs or wants the heavy security and secrecy that an underground data center provides. Some engineers find useful building blocks for new centers in unlikely places that are not underground. Lifeline Data Centers, for example, recently purchased what used to be a Target store in the Fort Wayne, Indiana, area in order to build a data center inside the shell of a former big box retailer, according to Indiana News Center. The Tier IV data center will be well served by the interior of a large retail store, as this gives it the opportunity to expand in a large space that already has a central cooling system installed. The layout of a large retail store and that of a data center are surprisingly similar, in that both consist of an open floor plan with plenty of rack space. For data centers, of course, it tends to be servers on the racks as opposed to groceries.

There is plenty of room for innovation and ingenuity in setting up new data centers, and engineers out there who are ready to construct their next center should be prepared to think outside the box.

Data center reliability means planning for the worst
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-reliability-means-planning-for-the-worst-667637
Wed, 17 Sep 2014 10:32:58 GMT | ITWatchDogs

The most important part of any data center's description is its uptime percentage. Nothing - not the best servers, the smartest architecture or the fastest connection - can make up for a center that spends more than a fraction of a percent of its time down. This is because businesses and clients expect reliability from data centers above all things. While there are certain types of centers that specialize in low-reliability, high-volume processing, like Bitcoin mining operations, they are rare and function in almost entirely different markets compared to the traditional data center. For a center to provide the best uptime, it needs to understand what there is to guard against and how it can most effectively do so. Unlike other industries, data center engineering is still fairly new, and understanding which risks are real and which are not requires ingenuity and planning ahead, as well as a little bit of imagination.

Understanding the circumstances under which very high-volume clients might need data processed led to the creation of a data center in Boyers, Pennsylvania. According to Computerworld, this new data center was "purpose-built" to weather solar storms and electromagnetic pulses (EMPs). The materials used to make the center invulnerable to these frequently misunderstood events have been kept secret, but Kris Domich, president of Cyber Innovation Labs - Professional Services, said EMP protection can be built into a data center at very little additional cost.

Because an EMP can destroy data, many institutions might want EMP protection not because they consider an EMP event likely, but because of the terrifying cost it would carry if it were to happen. This kind of protection would be necessary to keep data stored magnetically on servers from being wiped out by a blast and permanently destroyed, unable to be recovered. That kind of threat represents an absolute disaster for many organizations that make their living online or store critical information in digital formats, and could be catastrophic if it were to happen to, for example, a bank.

Reliability done right
Recently, Facebook made headlines at Data Center Knowledge for performing a test when it shut down one of its data centers to see if the company could sustain the load after losing an entire wing of its infrastructure. According to Jay Parikh, the global head of engineering at Facebook, the test went extremely well, and, "it was actually pretty boring."

Data center engineers know that there is no better word to describe how a test went than "boring." The kind of strong data center engineering involved in making sure that an enormous company like Facebook is still able to function with one of its data centers down shows how much thought they have put into their data center. Obtaining that kind of uptime is crucial for organizations that want to be recognized for their strong engineering prowess.

The common denominator that unites all data centers that understand how to keep their servers up is that they are prepared for the worst. Centers that have strong climate monitoring tools like the WatchDog 100 PoE are able to track their servers and organize cooling more efficiently. Tools like climate sensors can also help to delay or stop a shutdown in the first place by making sure that those who are in charge of keeping the center running get updates when temperatures fall out of acceptable levels. Knowing how to prepare for any failure is a key objective of those working within data centers, and this kind of knowledge will lead to a much more stable data center.

Data centers need to be prepared for change
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-need-to-be-prepared-for-change-666822
Mon, 15 Sep 2014 12:26:06 GMT | ITWatchDogs

Being a successful business is always about adapting to change. In recent years, tech analysts have seen a variety of companies rise and fall as they were able to take advantage of - and fall victim to - change. There is potentially no arena of business more filled with change than tech, and in tech those who have to invest in hardware and other expensive, physical realities of business tend to have the hardest time moving forward, as their older servers, out-of-date processors and tarnished cases slow them down. To cope with a business world that is constantly changing, data center engineers and those who use data centers have to be prepared for seismic shocks through the industry and through the technological economy as a whole.

Moving forward means moving faster
Because data centers spend so much of their time helping other people sort through their data, they are usually thought of as masters of information, carefully storing it away and taking it out when appropriate. And while it is true that security tends to be paramount within data centers and that they are extremely tech-forward organizations, there may yet be a series of disruptions in store for them, reported Datacenter Dynamics. These shocks will involve the rise of software-defined storage, extremely low-energy processors and webscale-integrated infrastructure. Because there are now so many processors that deliver substantial computing power while generating little heat and tolerating high operating temperatures, it is possible for a data center to be built from clusters of small, cellphone-sized processors combined into one large, powerful facility. Not only is it possible for data centers to begin to take advantage of this modular design, but it is likely.

Disruption in technology and what data centers should do
The good news is that there are already services that will help data centers out by buying their old servers and other pieces of hardware to use in other formats, allowing a lot of computing power to later go for cheap while still letting those who run data centers make a profit on the equipment they sell off. One program of this kind is headed by MarkITx, according to Campus Technology. Beyond these garage sales, there are other ways for companies to leverage their current systems as they prepare to make room for new servers. For one, older systems can be used to conduct experiments on how to allocate data more efficiently. Secondly, older systems whose average running temperature a data center engineer already knows can be used to independently confirm data center temperature measuring devices like the WeatherGoose II, helping to make sure that climate monitoring devices are working correctly.

As data center temperatures run hotter with the addition of heat-resistant chips, and as the industry splits up into different types of centers serving different needs, those that adapt most slowly will be unable to keep up with those that have made preparations. For data center architects to secure the survival of their data center as a business, they need to future-proof, disaster-proof and change-proof their centers by making them as fully functional and forward-looking as possible. By doing this they will have the stability and security to succeed where other centers fail.

Data centers need to keep up with emerging trends
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-need-to-keep-up-with-emerging-trends-666375
Fri, 12 Sep 2014 15:00:10 GMT | ITWatchDogs

Customized designs with software-defined infrastructure, liquid cooling and colocation may be the future of data centers, especially as these trends mature. Data room cooling could be made easier by new cooling systems that help larger data centers running operations for many different companies at once become more efficient, and this could in turn lead to a proliferation of heavy-duty data centers that run infrastructure leased out exclusively to other companies. Many large companies already choose not to own their own servers because it is cheaper for them to outsource. As data centers mature, there will be more examples of centers being used by other companies to run important programs and analysis via colocation, rather than being owned by businesses whose core focus is not the data center.

Data centers as a service
Netflix, the well-known provider of entertainment through a variety of digital platforms, doesn't own any of its own infrastructure, reported CRN. Organizations don't need to have physical investments in order to provide strong services that can be impactful in customers' lives if they are primarily online. This should drive up demand for outsourcing to data centers, as more innovators and new technology companies emerge in a sector that has a low capital barrier to entry. This will mean more business not only for those organizations that host servers, but also for those that design and customize architecture as different data centers begin to find what clients they work best with and tailor their machines to them.

The business side of hosting data centers will not be the only thing to change. Keeping data center temperatures down is difficult right now, but it will become easier over time as cooling becomes more efficient. Liquid cooling may quickly become a way for very high-intensity calculations to be carried out without an attendant rise in temperature or running costs for the data center. As liquid cooling technology is perfected, data centers may be able to fight their current image problem as wasteful or needless polluters. As companies rely more heavily on them, they will see more use and yet be able to drive their prices down, making the data center business even more lucrative. Climate sensors like the RelayGoose II will see use as mechanisms to verify that the cooling process is still working.

Understanding virtualization
Server virtualization is important for many data centers that already provide colocation services because it allows for them to dynamically scale up or down the amount of space they give to different clients' projects. This allows them to be much more flexible in terms of the information they allow within their different data centers. According to Data Center Knowledge, different standards will emerge for centers as the science behind building a server that works perfectly all the time becomes more fully understood. Even servers with 100 percent uptime will no longer be thought of as mythical constructs, but as actual realities that can be used in business to achieve the results that their owners need. This change in perspective on what centers are capable of will tighten competition in the industry, but ultimately leave it stronger for its ability to fulfill client needs more fully.

For data center operators, it is important to understand data room cooling, server architecture and the way that centers will function in the world of business. Without continuing to investigate where servers will go next, it is possible to be left behind, unable to compete with rivals who saw more clearly what was likely to happen. To avoid this, data center architects need to understand the way that things are changing and plan to incorporate technologies as they mature.

Sensors and modules help ensure efficient cooling
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/sensors-and-modules-help-ensure-efficient-cooling-665737
Thu, 11 Sep 2014 11:45:39 GMT | ITWatchDogs

Sensors and modular data center construction can help administrators create energy-efficient facilities. Because virtualization is now standard industry practice, more data centers are getting the opportunity to work with a variety of different infrastructures for their environments. One model that has grown increasingly popular is the modular data center, which can have sections added and removed, allowing for a continuous, iterative design process. Beyond its benefits for innovation, however, modular construction can also help reduce energy costs when paired with strong sensor support.

Modular design invites increasing efficiency
According to Data Center Journal, the major bonus of modular data centers over traditional designs is that they are 100 percent customizable. In terms of efficiency, this means they can be designed to maximize the utility gained from the hardware they run. These more customizable centers can have energy cost-cutting measures put in place because they are designed according to what a particular data center needs, not to a one-size-fits-all model. Of course, it is difficult to know exactly how much hardware a data center will need without testing and experimenting, which is another point where the modular data center's ability to facilitate iterative design becomes important. Sensors affixed throughout the facility can tell the engineers who work on it when conditions exceed, fall below or stay within the sweet spot of ideal temperatures where servers are being properly cooled but not overcooled.

Sensors make sense
Using strong server room temperature sensors to keep a center from losing money to overcooling is very important. According to Data Center Knowledge, the most efficient setup for climate sensors is to space them evenly at the top, middle and bottom of server racks so that all of the incoming and outgoing ambient air is monitored effectively. A web of sensors can help technical teams pinpoint areas that may need attention due to hardware or software failure if they begin to heat up, as sketched below. Strong climate sensors like the WeatherGoose II - Climate Monitor can sense light, sound, temperature, humidity and airflow in an all-in-one kit, alerting support teams to a variety of environmental effects inside a data center. Layering several systems of protection together is also important if a center is overcooled and then develops a problem such as a moisture leak through a crack in a window, since incoming air that is cold and damp enough could damage circuitry inside the facility.
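One way to picture how such a web of sensors gets used is to compare readings across rack positions and flag racks that are running hot or being overcooled. The rack names, readings and thresholds below are purely illustrative.

```python
# Hypothetical readings in degrees Celsius, keyed by rack and sensor position.
readings = {
    "rack-07": {"top": 29.5, "middle": 26.0, "bottom": 22.0},
    "rack-08": {"top": 24.0, "middle": 21.5, "bottom": 14.5},
}

HOT_LIMIT_C = 28.0       # anything above this suggests a hotspot
OVERCOOL_LIMIT_C = 16.0  # anything below this suggests wasted cooling

for rack, sensors in readings.items():
    for position, temp in sensors.items():
        if temp > HOT_LIMIT_C:
            print(f"{rack} {position}: {temp} C - possible hotspot, check airflow")
        elif temp < OVERCOOL_LIMIT_C:
            print(f"{rack} {position}: {temp} C - overcooled, cooling may be wasted")
```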

Data center temperature lowering efforts are commendable not only as a solid business practice, but as ecologically sound policy. Knowing how to get the most out of a cooling system without overcooling a network is part of maintaining a top-tier data center - whether it's modular or not - in terms of efficiency and reliability. Additionally, sensors can detect possible disasters that might directly impact server performance, so in that way they provide additional security against unplanned server downtime. By carefully monitoring server activity and then using the results of that data to make continuous improvements, data center architects could find increasing benefits from a modular data center design and climate sensors. Even in terms of immediate benefit, modular data center design provides the option to shut down certain modules when they are not necessary, potentially decreasing cooling costs by only using as much computing power as is necessary.

Data centers and the cloud
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-and-the-cloud-665152
Wed, 10 Sep 2014 11:11:51 GMT | ITWatchDogs

Data centers are increasingly beginning to provide cloud computing services themselves. As this trend continues, it is vital for data centers to understand the priorities they should expect from other operators and from themselves as they design the architecture necessary to run cloud platforms. The promise of the cloud is built on efficient, reliable computing platforms that don't compromise on security. It is important to vet any vendor that a cloud service is purchased from in order to make sure that the software being bought is worth the investment.

Netflix enters the cloud
There are many different companies that take advantage of cloud computing, and some of them are very large. For example, Netflix plans to be a completely cloud-driven company by 2015, according to Smart Data Collective, getting rid of all of its on-site software to run entirely on the cloud by the end of this year. This kind of adoption showcases the trust that some organizations are putting in the cloud. Netflix has always been a technologically progressive company, but it is now taking a major step by going completely cloud-driven. This kind of behavior can be expected from more large businesses as the cloud continues to prove reliable.

According to Data Center Knowledge contributor Scott Azzolina, it is important to get the best possible service out of a cloud hosting company, which means carefully picking a provider that can give a data center the kind of power and reliability it needs. Data center operators are no strangers to the demands of uptime and reliability, and so should be able to make good decisions regarding the integrity of any cloud service they license and use.

Cloud services also give data centers another way to improve their own reliability. By using a cloud server to run analytics on the way resources are used within a data center, operators can reconfigure architecture in more efficient ways and iterate on existing network designs. For example, many kinds of climate and power monitoring sensors, like the WatchDog 100 PoE, are capable of sending information to a cloud server that can then be remotely accessed by administrators and security personnel, so that everyone is on the same page about the health and well-being of the center itself. This kind of reporting can be what separates a temporary flare-up in a center from a full-scale server meltdown.

Some data centers could transition into cloud hosting
Data centers that are considering using their expertise in delivering secure, reliable uptime to function as cloud hosts will find that they are well suited to the task. The expectations that most high-level clients have of data centers are roughly equivalent to what the public wants from the public cloud: efficiency, transparency, security and reliability. Just as laboratories want to be sure that their data isn't being combed over by cyber criminals, people putting pictures in the cloud want to be sure that no one is going to paw through them without express permission.

Data center administrators understand the risks of attack from cybercrime, and centers that are considering transitioning from dealing with big business to the public will do well to keep the same reliability they had in their primary operations. Engineers who are used to dealing with large scale problems will find that engineering in the cloud is more of the same. The fundamental problems of dealing with large quantities of information do not change - just the way that the data is being used shifts.

Data center styles may differ, but demand for cooling is constant
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-styles-may-differ,-but-demand-for-cooling-is-constant-664698
Tue, 09 Sep 2014 11:14:47 GMT | ITWatchDogs

As data is used in an increasing variety of ways, data centers are changing their infrastructure. The proliferation of models for how information can be used, not only in individual businesses but as the backbone of the economy itself, is leading toward a diversification of data center models and increasingly heterogeneous types of centers being developed. It is no longer good enough to hire or build a data center and simply trust that it will be able to host a given company's data. It is now of paramount importance that clients and data architects clearly visualize what function they expect the data center to perform, and focus on candidates that will be able to deliver on that function specifically. Now that some centers have arisen just to deal with Bitcoin assets, along with micro-scale data centers that simply supplement telecom tower lines, there are more ways for data to be processed than ever.

More devices defining data
The Internet of Things is ramping up, so data centers that are used to processing cloud information for large-scale organizations will have to adopt a model that allows them to keep up with the amount of data generated by IoT-enabled devices. This will mean a further embrace of software-defined storage, according to Data Center Journal. In the context of IoT devices, relying on the old style of traditional hardware, where servers are tied to individual machines rather than implemented as software spread across many machines, will leave some data centers trapped in very expensive vertical growth.

The use of software-defined storage will allow an easy progression to higher levels of server capacity for companies that switch their architectural models over to it. It can also help reduce a data center's cooling costs by increasing the efficiency with which it operates. That reduction has the potential to be an enormous boon for engineers faced with the struggle of finding ways to reduce power bills month after month.

Bitcoin and hashing centers
Even for data centers that are not preparing for the IoT, there are other new models to consider. For example, the high-power, low-reliability model of Bitcoin processing centers is an entirely different approach for new data center builders to explore as such facilities become increasingly popular. In addition, hashing centers, which experiment with alternative methods of cooling and with high volumes of fast, smaller servers, are undergoing a lot of experimentation with regard to their pricing models.

Bitcoin hashing centers differ from traditional data centers in that they are much less focused on reliability, reported Data Center Knowledge. These smaller operations keep overhead low by reducing the amount of redundancy and backup in their operations, because these are not seen as valuable by the clients who pay them to mine Bitcoins. Another major difference from many others in the field is the use of specialized chips called Application Specific Integrated Circuits (ASICs), which are customized for their ability to mine Bitcoin.

The common thread among these different data centers is their need for reliable cooling. A strong climate monitor like the WatchDog 100 PoE can make the difference between an efficient cooling system and a drain on resources by showing operators how well their cooling is keeping up with their servers. When it comes to data, the best policy is to keep a close eye on how it is being processed and to keep the servers cool. As different types of centers proliferate, the cost of maintaining low temperatures will always be there, so it is in everyone's best interest to understand how to cool their servers as efficiently as possible.

Mineral oil and other innovative methods to cool data centers

The easiest way for a data center to reduce its operating costs is to find a more cost-effective way to cool its servers. In a world where power is rapidly becoming the main constraint on profit for technology companies, any organization able to find low-cost cooling methods gains a major advantage by adopting them as soon as possible. Keeping data center temperatures low doesn't have to mean pouring money into the local electric utility - much of that need can be eliminated by using less expensive cooling systems. Beyond methods as varied as deep lake water and solar panels, there are other approaches that may be useful for a data center that needs to reduce its footprint quickly.

Mineral oil for cooling processes
Data Center Knowledge recently reported on the NSA's use of mineral oil to cool its servers. The technology, an immersion cooling system from Green Revolution Cooling, submerges servers in liquid-filled enclosures to cool high-density installations at a fraction of the cost of other approaches.

Solar and other technologies may still be useful for those with the capital to invest. Apple's recent use of solar panels to help power its data centers and meet its temperature-lowering goals is still widely considered a strong choice. As the amount of information available to consumers and businesses rapidly expands, data center operators will face growing pressure to cool their servers while keeping up with demand. Processors will get faster, but the volume of information on the Internet is growing exponentially, and keeping costs from ballooning will mean making frugal choices about how to spend on cooling.

Monitoring may solve problems
Tools that make a data center more efficient without changing how it uses power are a smart choice for organizations still in the planning phase of unconventional cooling projects. Devices like the Watchdog 15 or the Remote Power Monitor X2 are good candidates because they track temperature and power, respectively, letting an organization know exactly what it is spending on power and what effect that spending is having.

Sometimes, data centers wind up operating at a lower efficiency than they mean to simply because they are not properly insulated. Although this is the most basic of all cost-cutting concerns, being sure that seals in the facility keep air out is important to maintaining a low monthly power bill.

While mineral oil doesn't have quite the heat capacity of water - about 40 percent, according to the NSA - it still holds over 1,000 times more heat than air, making it a far more efficient medium for heat exchange. Mineral oil cooling also doesn't rely on fans, which means a center using it can expect to save roughly 10 percent of its cooling bill - the share typically spent running fans in the first place.
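
To put those figures in rough perspective, here is a minimal back-of-the-envelope sketch in Python. The volumetric heat capacity values are textbook approximations rather than numbers from the article, and the fan-savings line simply applies the 10 percent figure cited above to a hypothetical cooling bill.

    # Rough volumetric heat capacities (joules per cubic metre per kelvin).
    # Approximate textbook values; exact figures vary by fluid and temperature.
    WATER = 4.18e6
    MINERAL_OIL = 1.67e6   # roughly 40 percent of water, as noted above
    AIR = 1.2e3

    def relative_capacity(fluid: float, reference: float) -> float:
        """How much heat one cubic metre of `fluid` absorbs per degree, versus `reference`."""
        return fluid / reference

    def fan_savings(monthly_cooling_bill: float, fan_share: float = 0.10) -> float:
        """Estimated saving from removing fans, assuming fans are ~10% of the cooling bill."""
        return monthly_cooling_bill * fan_share

    print(f"Oil vs water: {relative_capacity(MINERAL_OIL, WATER):.0%}")      # ~40%
    print(f"Oil vs air:   {relative_capacity(MINERAL_OIL, AIR):,.0f}x")      # well over 1,000x
    print(f"Fan savings on a $10,000 cooling bill: ${fan_savings(10_000):,.0f}")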

No matter how a given data center's temperature is lowered, reducing costs, monitoring temperature and making the most efficient choices possible remain essential.

How can data center managers easily monitor assets?

Making data centers more reliable and adaptable to new trends is the foremost design problem for any data center engineer. The difference between a center that can support innovative methods of data processing and one that cannot is the difference between a center that is here tomorrow and one that isn't. As innovation continues in server architecture and processing, engineers will find more opportunities for strong solutions that solve problems and create more efficient systems - and more opportunities for failure. The consequences of failing to adapt are severe, but there is a way forward.

Managed risk
The fundamental problem at most data centers is an inability to properly manage the risk of testing new types of architecture, according to Compass Data Centers. Most centers continue to run the typical three-tier design, consisting of core, aggregation and access layers. This design yields scalable networks that are easy to build, but it does not support easy modularity. These drawbacks, while not damaging now, could prove disastrous in the long run.

An alternative approach described by InfoWorld is the core-and-pod design, or "leaf and spine" network. In this model, a spine of servers supports a number of pods that report to it. The beauty of the design is that pods can be built at different stages, and each individual pod can run upgraded or different hardware, allowing innovation within a single data center through experimentation.

By focusing on core-and-pod designs, data center engineers can embrace elements of risk that allow them to build stronger centers. Each pod, however, will need its own set of monitors so that engineers understand the capacity at which it can operate and any weaknesses it may have with regard to variance in temperature or humidity. To keep track of the conditions each pod can tolerate, it is best to install strong environment monitoring hardware like the WatchDog 100 PoE, which can sense a multitude of variables and provide exact data on when and why a cluster of servers might have gone down. Devices like this contribute to the resiliency of the facility as a whole by broadcasting alerts through email and SMS, so engineers are constantly aware of changes in the data center's climate.
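
As a rough illustration of that per-pod alerting idea, the sketch below checks one pod's readings against its own thresholds and emails the on-call engineer when a limit is exceeded. The pod names, limits, addresses and SMTP host are hypothetical placeholders for illustration, not the WatchDog's actual interface.

    import smtplib
    from email.message import EmailMessage

    # Hypothetical per-pod thresholds; real limits depend on the hardware in each pod.
    POD_LIMITS = {
        "pod-a": {"temp_c": 27.0, "humidity_pct": 60.0},
        "pod-b": {"temp_c": 30.0, "humidity_pct": 55.0},
    }

    def check_pod(pod: str, reading: dict) -> list[str]:
        """Return a list of human-readable violations for one pod's sensor reading."""
        limits = POD_LIMITS[pod]
        problems = []
        if reading["temp_c"] > limits["temp_c"]:
            problems.append(f"{pod}: temperature {reading['temp_c']} C exceeds {limits['temp_c']} C")
        if reading["humidity_pct"] > limits["humidity_pct"]:
            problems.append(f"{pod}: humidity {reading['humidity_pct']}% exceeds {limits['humidity_pct']}%")
        return problems

    def alert(problems: list[str], smtp_host: str = "mail.example.com") -> None:
        """Email the on-call engineer; an SMS gateway address can be used the same way."""
        if not problems:
            return
        msg = EmailMessage()
        msg["Subject"] = "Pod environment alert"
        msg["From"] = "monitor@example.com"
        msg["To"] = "oncall@example.com"
        msg.set_content("\n".join(problems))
        with smtplib.SMTP(smtp_host) as server:
            server.send_message(msg)

    # Example: alert(check_pod("pod-a", {"temp_c": 29.5, "humidity_pct": 48.0}))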

Understanding change
By crafting systems that can be continuously tested, centers achieve not only strong innovation, but also better uptime. This is because a pod can be individually tested as to its electrical, mechanical and environmental systems in different states without bringing the rest of the facility down, creating the opportunity for rigorous testing of older and newer pods while the rest continue to function for clients and consumers. In this way the resiliency of different pods can be more formally ascertained even while the rest of the business continues to fulfill operations as normal.

As centers continue to develop, finding ways to continue to innovate while still keeping uptime as a high priority will be the primary task of most data architects. The systems behind creating strong informational infrastructures will increase in complexity and utility as this goes on, and being able to stay current and use strong monitoring hardware will be a great benefit.

Large data centers and virtualization have led to changes

As more companies make fuller use of server virtualization as the cornerstone of their data centers, changes are being felt throughout the industry. Given the still-rapid growth of the data center business as a whole, it is no surprise that it is going through many twists and turns as more profitable operating models are found. Virtualization - with its ability to increase flexibility, improve security and enhance efficiency - has opened the door to massively scaled facilities known as hyperscale data centers. As the benefits of virtualization become more pronounced through wider adoption across the industry, other practices are likely to become standard as a result.

Data Center Knowledge reported that virtualization is the most effective, most flexible network infrastructure. Because processes can run across a network as opposed to being bound to a given piece of hardware, the overall efficiency of the data center improves dramatically. Information, freed from its ties to any specific area of the center, can be processed at whatever point in the facility has the most spare power. This creates a more efficient process that can help to manage everything from storage to data center temperature.

Hyperscale changes infrastructure at hyperspeed
Virtualization has enabled a monumental shift toward hyperscale data centers, and Datacenter Dynamics's report on analysis by the Gartner Group said that these hyperscale facilities prefer to do business with original design manufacturers (ODMs) rather than original equipment manufacturers. Because ODMs understand their products at such a deep level, it is easier for them to give businesses exactly what they want in terms of hardware and scale, so hyperscale operators can get precisely the servers they need.

Because hardware has become a commodity as hyperscale data centers adopt very high server loads, it is more important than ever for data center engineers to emphasize security and reliability in their physical locations. With strong, state-of-the-art monitoring technology like the Remote Power Manager X2, centers can run much larger configurations of servers without worrying about power outages or spikes the way they used to. Monitoring technology that can tell personnel where a power spike occurred while sending SMS or email alerts to the technicians responsible for maintenance is important for ensuring longevity and continuity as center operations mature.
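
A minimal sketch of that kind of spike detection is shown below: it tracks recent voltage samples for a named circuit and flags anything that strays too far from nominal, so the alert can say where the problem is. The nominal voltage, tolerance and circuit names are illustrative assumptions, not the Remote Power Manager X2's real data model.

    from collections import deque

    NOMINAL_VOLTS = 208.0      # assumed nominal feed; use your facility's actual value
    TOLERANCE = 0.10           # flag anything more than 10 percent off nominal

    class CircuitWatch:
        """Track recent voltage samples for one circuit and flag spikes or sags."""

        def __init__(self, name: str, window: int = 60):
            self.name = name
            self.samples = deque(maxlen=window)

        def add_sample(self, volts: float):
            self.samples.append(volts)
            deviation = abs(volts - NOMINAL_VOLTS) / NOMINAL_VOLTS
            if deviation > TOLERANCE:
                kind = "spike" if volts > NOMINAL_VOLTS else "sag"
                return f"{self.name}: {kind} of {volts:.1f} V ({deviation:.0%} off nominal)"
            return None

    rack_feed = CircuitWatch("rack-12-feed-A")
    for v in (207.8, 208.4, 236.1, 208.0):
        event = rack_feed.add_sample(v)
        if event:
            print(event)   # in practice this would be handed to the email/SMS notifier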

Business continues onward
Because virtualization allows fewer servers to do more work, the efficiency of data centers continues to climb. As data centers find more ways to use virtualization, they will increase the efficiency of their servers as a whole while reducing the number of physical components needed to do processing work. As this happens, each individual server will matter more to routine operations, and the use of remote monitors like the Remote Power Manager X2 and the WatchDog 100 PoE will increase. Keeping every server up by rigorously attending to its physical integrity is more important than ever as virtualization grows, because each server now contributes to many tasks. Strong monitoring equipment, combined with attention to detail in how servers are set up, can yield efficiently cooled servers that allow data centers to operate even more profitably than before.

As data centers continue to develop, there will be more opportunities for increased efficiency and growth over time. The virtualization of data centers and the shift to software defined centers is merely one step forward in the overall growth of network infrastructure.

Physical and digital security are equally important

Understanding the dynamics at play when major security breaches occur is one of the foremost responsibilities of data center operators, and so is using the technology at their disposal to make sure that disastrous events don't befall those who use their services. Modern security and encryption techniques are essential to keeping data out of the grip of threats, and consumer confidence that operators are doing their jobs matters enormously. As more sensitive data is stored by businesses and home users on cloud services that rely upon data centers, the obligation to safeguard that data grows.

Celebrities, security, and data centers
The now-famous celebrity photo leaks from Apple's iCloud service should serve as a warning that hosting service providers can be responsible for leaks themselves, according to Data Center Knowledge. While much of the responsibility for hosting falls on those who manage a data center's physical needs, a similar burden rests on the people in charge of IT services and the processes executed internally. As cloud services continue to grow, privacy breaches like this will draw ever more scrutiny.

For intangible threats that come down the pipeline through the Internet, it can be hard to identify what variables to track. But, there may be cause to examine the security of data centers in another manner entirely - as an entity that can be physically destroyed in order to shut down the internet. Another recent article from Data Center Knowledge discussed the ways that the global Internet might be shut down by a terrorist group. If keeping the information stored on a data center's servers safe is priority number two, making sure that the right users have access to their information is priority number one. While a center that has leaked information is not a strong center, one that doesn't allow information to be accessed at all doesn't even qualify as a center.

Physical threats are real, too
There are a variety of ways a dangerous hacker might take down the Internet, and one of them might be attacking the physical location of a data center, according to Data Center Knowledge. While the typical picture of data center hacking involves a wire being cut, it could theoretically be possible to remotely overclock enough of the processors in a center to cause the facility to catch fire. Monitoring data center temperature with something like the WatchDog 100 PoE is important for making sure that nothing is damaged through negligent - or malicious - acts.

The best way to be prepared is to understand that threats can come at any time; equipment like the WatchDog - which provides instant notifications if temperature or other variables fall out of a specified range - can be invaluable in such a situation. Technicians need a variety of techniques to keep data center temperatures low, even in an emergency. That means not only carefully monitoring the physical space of a center, but also having cooling systems and other options on standby in case of attack. As information becomes analogous to the bloodstream of modern business and enterprise, the economic impact of a shutdown or even a brief outage could be catastrophic.
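
One practical wrinkle in "out of range" notifications is avoiding a flood of alerts when a reading hovers right at the limit. The sketch below is a generic range alarm with a small clearing margin so an alert fires once and clears only after the reading settles back inside the band; the band and margin values are illustrative assumptions rather than any device's defaults.

    class RangeAlarm:
        """Alert when a reading leaves its allowed band; clear only after it
        returns comfortably inside, so a value hovering at the limit does not
        generate a flood of notifications."""

        def __init__(self, low: float, high: float, margin: float = 1.0):
            self.low, self.high, self.margin = low, high, margin
            self.active = False

        def update(self, value: float):
            if not self.active and (value < self.low or value > self.high):
                self.active = True
                return f"ALERT: reading {value} outside {self.low}-{self.high}"
            if self.active and self.low + self.margin <= value <= self.high - self.margin:
                self.active = False
                return f"CLEARED: reading {value} back in range"
            return None

    temp_alarm = RangeAlarm(low=18.0, high=27.0)   # example band in degrees Celsius
    for reading in (24.0, 27.5, 27.2, 25.5):
        message = temp_alarm.update(reading)
        if message:
            print(message)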

Even setting aside an individual data center's role in the global Internet, adopting these security practices will help guarantee uptime. Making sure that service-level agreements can be fulfilled is more important than preparing for an Internet-less apocalypse. The recent iCloud celebrity photo leak has put cloud security in the limelight, but that does not make physical security and oversight any less important.

Data centers must find more efficient ways to operate

Data centers are facing heavy criticism for their role in pollution, and the industry should use this moment as an opportunity to adopt new standards that reduce energy consumption. For many data centers, the best way forward will be reducing how much power they draw. As the industry comes under fire from environmental activist groups for its energy use, leaders will face growing pressure to find ways of powering their servers with less energy.

Because power is also a major operating cost, there is a natural intersection of public and private interest in most data centers finding ways to limit or at least decrease their power consumption as they expand into larger and larger spaces. This means addressing the many different ways a center can be configured to use power, and it may also require innovative changes in how operators treat and manage data center temperature.

Innovative solutions and problems
In order to economize energy, data centers need to find innovative solutions to their heating and cooling problems. Although data centers have found many different ways of cooling their servers more cheaply - including relocating to the Arctic, using solar panels and building underground - not enough centers have adopted these practices. In many cases, the issue may not simply be how cheaply or efficiently a server is being cooled, but whether the processors themselves are being used efficiently.

According to a recent report by the Natural Resources Defense Council, there are many ways for centers to cut their electricity use and end their dependence on fossil fuels. The report claims there are energy efficiency practices that a large number of centers could implement to cut energy waste by 40 percent, including retiring ghost servers, using more efficient server models and prioritizing the purchase of energy-efficient equipment. Inattention in the industry is widespread enough that most data centers could begin saving money now if they put in the effort.

Accountability and energy
Data Center Knowledge reported that the division of accountability is the worst culprit in multi-tenant facilities, where the data center owner pays the electricity bill and tenants pay for power blocks, regardless of how much power they use. This creates a system where tenants are not motivated to save energy, so they do not, and a lot of inefficiency is created. In order for organizations to deal with the amount of power they are consuming, a shift in priorities may be necessary. Utilizing products like the Remote Power Manager X2 will be useful to centers trying to monitor their habits.

Many IT managers hesitate to adopt new consumption-reducing strategies because they worry about uptime. That is a real concern - data center operators are always under pressure to provide as much uptime as possible to their customers - but there are situations where it makes more sense to chase the gains that come from reliably consuming less power. Techniques like virtualization end up making the whole center more efficient while actually making it more reliable, so some of the solutions data centers need simply haven't been adopted because of the difficulty of moving to new styles of servers. As is often the case with technology, improvements surge ahead of professionals' ability to manage them, and virtualization should be the top priority for any data center operator concerned with the reliability, efficiency or temperature of a facility.

Lowering data center temperatures may get a little easier

The race for more efficient and cheaper cooling strategies to lower data center temperatures continues unabated in the tech industry. Apart from the actual processing of data itself, cooling costs seem to be the topic at the top of any data center operator's mind, and for good reason. Most data centers spend a third of their considerable energy bills on cooling, and since energy bills make up the majority of a data center's operating cost, it makes perfect sense to cut those expenses wherever possible. Recently, data center operators have been doing everything in their power to reduce their reliance on grid electricity, whether by putting up solar panels in the desert, building their centers inside mountains or even stationing them in the Arctic.

Data centers need cold air like fish need water, and it's unsurprising that many recent innovations aim to fill this need. A recent startup out of Cambridge, Massachusetts, known as TeraCool has found a way to reduce cooling costs by using liquefied natural gas, according to IEEE Spectrum. The process would run heat exchange loops between the data center and a liquefied natural gas terminal: the gas moves through a pump as a liquid, transfers its cold to a water-and-glycol loop that is sent to the data center, and then returns to the terminal for another trip. The company also proposes letting the warmed gas power a turbine to supplement the data center's electricity budget, though that would make the setup much more expensive.

Testing and innovating
Before data centers can trust new methods, they have to test them. Testing all manner of data center operations should be an ongoing concern, argues Edward Sharp of Entrepreneur. The need for companies to build better disaster-proof data centers and to consume less energy can only be met when equipment is tested. By verifying under controlled circumstances that a given project will fulfill its obligations, a data center can make great gains over competitors simply by being willing to try new things. Few organizations have the courage to test experimental procedures, but those that do are the ones that succeed where others fail.

Using a remote temperature sensor like the WatchDog 100 PoE can help to alleviate some of the risk that managers so keenly feel when they are conducting tests inside their centers. A piece of equipment that measures light, sound, temperature and humidity can be a great way to let server personnel know if the data center temperature is approaching a critical point during a test without running the risk of losing any equipment. By taking just enough of a chance to see if something can work in practice, data centers can push themselves to the head of the pack in terms of efficiency.

Although there are a variety of ways to cool a center, there is only one way to stay in the lead. Continuous innovation allows organizations to perpetually be one step ahead of the competition, and whether that means adopting the use of natural gas, moving to the Arctic, or finding some as-yet unproven and new cooling technology, the ultimate end-result will be more data at a lower power cost for everybody. A data center that has lower operating costs can afford to charge lower prices than the other ones out there, resulting in an ability to command prices and bids that others cannot hope to match.

Controlling data center temperatures doesn't have to be expensive

Although the costs of keeping data center temperatures low can be considerable, there are ways of reducing expenses without giving up efficiency. Using the environment around a data center to the company's advantage is a strategy many centers are considering as they try to keep cooling expenses low. For some organizations this means using nearby lake water, while others are making sure that their interior cooling systems are arranged as effectively as possible, minimizing waste and keeping cold water and air flowing directly onto their servers' processors. In recent weeks, more groups have been experimenting with different types of cooling, and many of these experiments have proven highly profitable for the data centers involved.

The Utah-based company C7 Data Centers is keeping its data center temperatures low by increasing its storage density, according to Data Center Dynamics. The business is virtualizing more heavily than most centers and actually decoupling its enterprise data management software from the enterprise infrastructure. It is using Actifio's copy data virtualization techniques to give data more freedom to roam around the center, which makes the facility more efficient by intelligently allocating tasks to whichever processors aren't currently in use. The net result is more servers running efficiently and fewer running at high temperatures, which adds up to an overall cooler center.

Free cooling from the environment
Other recommendations include using water-side economizers. This means finding naturally cool water near the physical location of the data center - ponds, rivers and lakes - and routing it directly into the facility's water cooling systems to cool servers without relying on additional power sources, reducing the plant's overall electrical consumption. A recent article by Data Center Knowledge details many other types of data center cooling, including the strainer cycle, plate-and-frame heat exchangers and refrigerant migration, as well as several kinds of free cooling that can be used in different seasons - a veritable palette of cost-cutting measures for organizations to draw on. Using a few of these tips could go a long way toward improving a company's power usage effectiveness (PUE) ratio.
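
PUE itself is a simple ratio - total facility power divided by the power that actually reaches IT equipment - which makes it easy to see how an economizer changes the picture. The numbers in this sketch are purely illustrative, not measurements from any facility mentioned here.

    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Power usage effectiveness: total facility power divided by IT power.
        1.0 is the theoretical ideal; legacy facilities typically run well above it."""
        return total_facility_kw / it_equipment_kw

    # Illustrative only: the same 800 kW IT load with and without free cooling
    # from a nearby water source offsetting part of the chiller load.
    print(pue(total_facility_kw=1_400, it_equipment_kw=800))   # 1.75, chiller-heavy
    print(pue(total_facility_kw=1_000, it_equipment_kw=800))   # 1.25, economizer-assisted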

New data centers use innovative strategies and expand

Data centers are becoming larger and are finding new ways to fit into their local environments as they seek to become more efficient. As data centers struggle to make the costs of cooling and running their servers more affordable, many are experimenting with methods for putting all of the excess heat from their servers to use, while others are expanding to the point where economies of scale help mitigate their losses. Out-of-the-box solutions for energy and equipment costs are becoming standard in the industry as top-level engineers, architects and designers work together to solve the problem of processing the Internet's data needs. As time goes on, there will be more innovative and creative ways for organizations to use data centers to solve two problems at once.

Green innovation
Perhaps one of the most intriguing concepts in data centers today comes out of California, reported Data Center Knowledge. A plant in Moss Landing, California, will combine a water desalination plant with a data center to mitigate problems that plague both kinds of businesses. For desalination plants, source water that can be drawn without damaging local wildlife comes from very deep underwater, which makes it very cold and expensive to heat to the temperatures required for desalination. Data centers, meanwhile, naturally carry a bevy of excess heat, so the cold salt water will be used to cool the center's servers while simultaneously warming the water to make it more easily rendered potable. In this way, the combined desalination plant and data center has found an innovative way to go green and stay cheap.

Expansion
In West Chester, Ohio, the data center operator Peak 10 Inc. just built a 5,000-square-foot expansion, reported Journal News. The new area will increase the amount of storage available to the company, which is already planning another expansion into an adjacent building in the coming years. These stories of expansion speak to the considerable growth going on in data centers today, and to the continual struggle to keep data center temperatures low. None of these operations would be possible without data center temperature measuring instruments like the WatchDog 100 PoE, which can track a variety of ambient data points, including light, sound and temperature.

Recoverability is a key goal for data centers

Reliability is the keyword for data centers today. If there is anything most data centers want to be able to promise their customers, it's that today will be the same as tomorrow, and that anything stored this minute will still be there the next. To fulfill that promise, data centers spend a lot of money each year regulating every element of their business, from power to data center temperature to walkways and entrances. But is the drive toward infinite reliability actually worth it? Although employees and end users value knowing that something is always accessible, absolute availability may not be economically feasible for most centers. Understanding how to find the middle ground between reliability and economy is something most data centers need to come to terms with.

A recent survey of federal IT workers found that 42 percent had experienced downtime that prevented them from doing their jobs, reported FCW. On the other hand, real-time information access saves them about 17 hours per week - roughly $32.5 billion in productivity savings - so the overall effect data centers are having is rather positive. In spite of this, the always-on dream is still the goal, and understandably so. As workplaces increasingly go digital and embrace the paperless office, they will be more susceptible to disruptions in power and other technological infrastructure. This means more attention has to go into providing power and other resources to the data centers that run and hold most of the information used by federal employees.

Another way through the grid
A recent development, notable amid the prevailing focus on uptime above all else, is one company's idea that relying on grid power may be the best thing data centers could do, reported Datacenter Dynamics. By running on street power, the data center operator IO is trying to drastically reduce electricity bills, routing around the usually expensive uninterruptible power supplies and generators. This should have the effect of drastically improving its power usage effectiveness.

There will still be a need for data center temperature monitoring in a facility like this, and there may actually be more use for items like the Watchdog II PoE climate sensor in the future as more data centers run directly on the grid. Some companies are deciding that the cost of engineering a data center for maximum uptime simply isn't worth it compared with tolerating a slightly lower - but still very high - level of uptime. Because it is never possible to completely predict the future and account for every eventuality, it is better to invest in a data center that can come back up quickly than in one that attempts to stay up forever.

Money freed up from the power configurations of old-style data centers can be used to add servers and provide more processing power, and keeping costs down increases a center's overall profitability. For small blips that happen outside of business hours, the downtime may be unnoticeable, so long as those blips are easily recovered from. Servers need to be managed in such a way that they stay up and can be brought back up at a moment's notice. Clients notice downtime, but they also notice an immediate recovery from downtime - and that is an excellent chance for a data center to prove itself to its clients in the midst of a difficult period.

Finding the right configuration for reliability is the new direction for data centers

The times are changing for data centers as more big players get into the game of running their own server megaplexes. With Facebook building its own center and joining the ranks of Google and Apple in the quest for meltdown-proof servers hosting their own data, the question every data center operator should be asking is, "How will data centers look in five years?" Recent trends have split the way data centers are built: from high-processing-power, low-security Bitcoin mining operations to high-security, high-power blockbusters like the new U.S. nuclear testing data center. There have even been small tower-hosted servers designed to speed up information processing for cell phone carriers - yet another example of the many ways servers can be put together. Finding what will work and stay reliable over time will be the only way to stay profitable in the long run.

Reliability is key
In order to examine where priorities are, it may be helpful to look at what happens to a data center when it goes down. Although, theoretically, turning a data center back on again and connecting it to its multitude of partners should be as easy as flipping a switch, the reality is that machines are complex and slow and take time to get running again. Even if they do run quickly, their clients may have delays or problems on their end. Building data centers so that they can quickly come back online and lose as little data as possible in any sort of crash is vitally important for the reputation of the center and of the client using the server. Even more important is to make sure that all avoidable crashes are actually avoided through the use of climate monitors like the SuperGoose II to measure temperature, humidity, airflow, light, and sound. This kind of sensor can help to create a data center that is fundamentally more secure from calamitous events like fire, hurricanes, earthquakes, or even simple overheating events that may happen when the cooling system fails.

Examining the strategy behind Facebook's data center is important when looking at potential ways to make servers disaster-proof. By taking care to minutely fine-tune every element of its servers' energy use, the company has made it much easier to spot blips in power usage through detection equipment: the smaller the range of typical conditions, the more likely administrators are to notice problems before they grow. Data center temperature can be controlled very tightly when equipment is deployed in small, scattered units the way cell phone companies are doing, but it can also be regulated by sheer mass in an operation organized like a Bitcoin mining rig. Each of these strategies has its own unique set of benefits, and maximizing those benefits will be the way to make the most of each strategy, reported InformationWeek.

Damage of outages
Outage costs consist of detection, containment, recovery, post-recovery, replacement hardware and software, IT staff time, user productivity loss and third-party expenses, according to No Jitter. The Ponemon Institute estimates the average cost of an outage at $690,204. These costs can very quickly lead to a data center losing clients due to a perceived lack of reliability; in fact, many of them stem from the consumer confidence that is lost when a data center goes down. To combat this, it is important for organizations to structure their servers so that they can be brought back up when they get knocked down.
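
A simple way to reason about that figure is to tally the cost categories above for a specific incident. In the sketch below the category names follow the breakdown just quoted, but every dollar amount is an invented placeholder for illustration - only the overall average cited above comes from the Ponemon estimate.

    # Cost categories named in the breakdown above; the dollar figures are
    # placeholders for illustration, not published per-category averages.
    outage_costs = {
        "detection": 50_000,
        "containment": 60_000,
        "recovery": 120_000,
        "post_recovery": 70_000,
        "replacement_hardware_software": 90_000,
        "it_staff_time": 80_000,
        "user_productivity_loss": 180_000,
        "third_party_expenses": 40_000,
    }

    total = sum(outage_costs.values())
    print(f"Estimated cost of this outage: ${total:,}")
    for category, cost in sorted(outage_costs.items(), key=lambda kv: -kv[1]):
        print(f"  {category:<32} ${cost:>9,}  ({cost / total:.0%})")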

Layout and monitoring software are the keys to a lower data center temperature

Finding ways to keep data center temperatures lower without adding power costs can be difficult, but there are many options most operators don't consider when assessing their center. Different kinds of facilities can raise their efficiency simply by adding one or two small solutions that make a data center more efficient without costing much. Using the available tools to drive down costs is the name of the game for data center architects, so it can be surprising how the smallest details of server orientation and planning can put excess strain on cooling systems and ultimately cost a lot of money. Outlined below are some tips for making a center run cheaply and easily.

Layout
Attending to server racks is the most basic element of server room management. To be certain that all of the computers are being cooled at maximum efficiency, it is important to verify that the rack has been sealed properly to keep airflow moving appropriately. Blanking panels should go into any empty rack spaces to keep cold air from leaking out of where it's supposed to be, and any other server opening should be appropriately sealed. According to Data Center Knowledge, a good place to check for gaps that aren't immediately obvious is the opening underneath the server cabinet.

Designing the layout in a long hot-aisle/cold-aisle configuration is a good way to make sure that any airflow that does wind up heading into the servers is pointed in one direction and allows all of the machines to breathe easily without swallowing anyone's exhaust. Using the machines in this way requires a keen eye for detail.

Keeping server racks filled with servers is actually very important for keeping airflow properly balanced. When there are a lot of empty cabinet spaces, the overall balance of airflow is thrown off-kilter, which allows exhaust air back into the server racks. This, in turn, causes the cooling system to supply more air than it needs to, ultimately resulting in a greater expenditure of resources than is necessary.

Power
Investing in a temperature sensor like the WatchDog 100 PoE is important for maintaining ideal conditions in any data center. It is recommended that technicians recalibrate their sensors every six months to ensure that the Watchdog II PoE or any climate control system gives accurate readouts. A poorly calibrated sensor can have a devastating effect on the overall efficiency of a data center by causing it to expend too much energy cooling the servers. Like all pieces of equipment, power and climate monitors require occasional oversight to verify that they are performing adequately. Used well, the Remote Power Manager X2 and the Watchdog 100 PoE can work together to ensure that the entire center is running smoothly.
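
A basic spot check for drift is to place a freshly calibrated reference probe next to the installed sensor and compare readings over a few minutes. The sketch below computes the mean offset and flags the sensor for recalibration; the tolerance value is an assumption for illustration, so substitute the vendor's published accuracy specification where one is available.

    def drift(sensor_readings: list, reference_readings: list) -> float:
        """Mean offset between a field sensor and a calibrated reference probe
        placed next to it during a spot check."""
        pairs = zip(sensor_readings, reference_readings)
        return sum(s - r for s, r in pairs) / len(sensor_readings)

    MAX_ACCEPTABLE_DRIFT_C = 0.5   # example tolerance; use the vendor's spec if available

    offset = drift([22.4, 22.6, 22.5], [21.8, 22.0, 21.9])
    if abs(offset) > MAX_ACCEPTABLE_DRIFT_C:
        print(f"Recalibrate: sensor reads {offset:+.1f} C relative to the reference")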

A recent Atlantic.Net colocation data center was able to greatly boost its efficiency rating by installing a power monitor in its server room, reported Consulting-Specifying Engineer. Most system administrators can learn from this by implementing their own safeguards and power monitors in their data center networks, then tracking usage over several days to make effective use of their space and resources.

Keeping a close eye on data center temperature is how many engineers end up becoming so skilled in other technical fields. Finding ways to manage airflow and keep the data center temperature low is one of the hardest jobs in technology, but it is a rewarding architectural accomplishment when it pays off.

Data center experts needed in healthcare, security

The level of logistical and technical planning that goes into data center design makes the engineers behind these backbones of the Internet some of the finest around. This isn't merely the idle postulation of any one source, but the result of an industry-wide trend that has seen many fields trying to grab data center engineers as they leave the industry, according to Datacenter Dynamics. These engineers are among the best because of the complexity and scope of their daily operations, and it is no surprise that many of them wind up employed in a variety of other industries in similar capacities. A major question is: Why is planning a data center so hard, and what can make it easier?

Back-ups and minimum costs
A difficult point for many data centers is dealing with the incentive to always push toward longer uptime. Part of the physical reinforcement of centers involves making sure that back-up power is available in case something happens to the grid the center operates on, which means there are usually one or two redundant subsystems in place at all times to allow operations to continue. This careful layering of redundancy is useful in many settings, including security systems and hospitals that depend on power to keep patients alive. These industries could learn from IT personnel in data centers, who understand that the best way to keep a data center's temperature from spiraling out of control is to invest in a climate-sensing device like the Weathergoose II, which has on-board sensors for temperature, humidity, airflow, light and sound. These features allow it to check for a variety of potentially catastrophic events.

In light of recent hacking attempts, there has never been a better time to trust in the security knowledge of data center operators, who are used to dealing with threats both physical and digital. According to Reuters, Community Health Systems Inc. was the victim of a break-in by hackers located in China, who looted Social Security numbers and other data belonging to 4.5 million people. This kind of cyberattack is unfortunately common. Data center engineers are used to dealing with major attacks like this, and creating systems to snare those who grab data - as well as protecting the data itself - is now a high priority in many fields. Many groups will turn toward people with data center experience because they are the most practiced at handling this kind of security issue. Data center operators hold the equivalent of a black belt in this area, having been responsible for keeping safe one of the most targeted and least hit types of servers on the Internet today. They understand the tools and training that equipment and employees need to stay two steps ahead of cybercriminals.

For most businesses, digital security is becoming as important as physical security. Trusting experts who can weave those two elements together - keeping data center temperatures low while also keeping security walls high - is a highly effective way to build informational infrastructure that doesn't give up its secrets. Hospitals should invest in strong security officers who have the power to set policy around limiting attack vectors while still keeping the all-important lines of communication open between doctors and staff. Although it can be hard to convince those whose expertise lies outside of tech to follow rigorous security practices, the confidentiality of patients' information is at stake.

Innovative educational program helps next generation of data center operators deal with new set of challenges

Southern Methodist University is running a brand-new program offering a Master of Science in Datacenter Systems Engineering, advised by data center industry thought leaders, reported Datacenter Dynamics. The goal is to make those involved with data centers more aware of major developments in processing technology - useful at a time when more data and more processing demands are coming from all directions.

One new development that those attending this program will have to deal with is the rise of the data center as a Bitcoin Service Provider. More money is flowing into Bitcoin mining, which is a term that refers to servers that process Bitcoin transactions in exchange for more bitcoins. Because of the large volume of transactions these servers have to process, most of those who wish to rent these servers are interested in using a very large array of machines, which means that the servers have to be packed very densely in order to maximize profits.

Bitcoin hashing and large-scale systems
For many data centers that run Bitcoin mining configurations, the work started out as a hobby, reported Data Center Knowledge. Many of those former hobbyists now pump out machines for people interested in large-scale mining. Drivers and software installation for these systems can be difficult to manage, as there are many different kinds of Bitcoin management software, each of which has been forked as different versions were adopted or kicked to the curb by Bitcoin's technically discerning community. These hobbyists-turned-gatekeepers have made building Bitcoin rigs more difficult, but also more fiscally rewarding: without their efforts, there would be no community built around the encryption-based currency.

Because Bitcoin rigs tend to prioritize mass over safety and security, the servers of organizations that provide Bitcoin services make fewer concessions to cooling or heat reduction. For this reason, data center operators should invest in a Watchdog II PoE system if they want to be notified when something goes seriously wrong with their servers. Data center temperature monitors like this one are indispensable for any facility that runs less than the average amount of cooling hardware. Without this kind of backup, it is possible to suffer thousands of dollars' worth of damage in freak accidents or meltdowns.

How to secure a data center

Data centers need to guard not only against cyberattacks but against physical attacks as well, according to InformationWeek. Keeping data center temperatures low while also keeping the facility secure seems difficult at first, but it becomes manageable once broken down into its component parts. As data regulations tighten, it will be more and more important for organizations to protect their information not just from hackers, but also from fires and other physical disasters.

The Switch SuperNAP 8 data storage facility in Las Vegas may be the most secure server room currently in existence, according to Vegas Inc. It has all of the elements expected of security-conscious construction, including backup generators, fuel and batteries, along with rotating key security systems and an extremely durable roof. Every server room should also have climate monitors like the WatchDog 100 PoE installed, which instantly notify administrators of any drop or increase across a variety of readings, including heat, humidity and noise. With features like that, it is easy to alert staff the moment something goes wrong, even in a facility as impregnable as the SuperNAP 8.

Reliability and security
Keeping servers away from other elements of the data center is important given the recent proliferation of wireless bugs and other tools capable of sinisterly subtle hacking feats. Making sure that loading areas are separate from the rest of the facility matters because this is where many attackers can use exploits to get their hands on parts of the interior server architecture. The way around this is simply to keep the facility's entrance and the servers themselves far enough apart that a planted bug won't be able to pick up signals from the servers.

Another major element of the physical security of data centers is power. Using a remote power monitor is an important way to keep data center temperature from getting out of hand. If, for example, the air conditioning goes out while the servers stay on, that can be a recipe for a meltdown in a matter of moments. Dealing with this kind of issue is as simple as installing strong monitoring tools: by making sure a capable administrator is always aware of fluctuations in data center temperature, a company can keep its valuable investment from suffering catastrophic damage.

New cooling technologies and chips will alter datacenter price structures

New liquid cooling systems may disrupt the data center industry by ending its reliance on conventional expenses like air conditioning and space management, according to Liquid Cooled IT, a report by The 451 Group. The new liquid cooling technologies will decrease power costs, noise and vibration, and allow for greater flexibility in the production and deployment of servers, according to DatacenterDynamics. These new cooling systems may be the wave of the future; when they become widespread, many server form factors will become much cheaper to manufacture thanks to the massive reduction in cooling needs, one of the most expensive operating costs for most data centers.

These advances in liquid cooling come on the heels of news about Intel's new processors, which use a 14-nanometer process technology that allows them to generate just half the heat of the previous 22-nanometer Haswell chips. This means data centers can expect the average heat generated by their racks to drop by about 50 percent, according to Data Center Knowledge. That reduction will call for less cooling than before, and combined with the rising efficiency of cooling technology as liquid cooling becomes more popular, it will make data centers far more efficient.
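
The rack-level arithmetic behind that claim is straightforward: if per-server heat output is roughly halved and everything else stays equal, the heat the cooling plant must remove per rack halves too. The server counts and wattages in this sketch are illustrative assumptions, not figures from the report.

    def rack_heat_kw(servers_per_rack: int, watts_per_server: float) -> float:
        """Approximate heat load of one rack, assuming essentially all input power
        becomes heat that the cooling plant must remove."""
        return servers_per_rack * watts_per_server / 1_000

    old = rack_heat_kw(servers_per_rack=40, watts_per_server=400)          # illustrative
    new = rack_heat_kw(servers_per_rack=40, watts_per_server=400 * 0.5)    # ~half the chip heat
    print(f"Per-rack heat load: {old:.1f} kW -> {new:.1f} kW")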

For most consumers, this change will be a very good thing. When data centers can build without worrying about how much power they will consume, they can afford to offer their services much more cheaply than before. And if data processing organizations can license out their equipment more reliably, there could be a major uptick in the amount of information that can be stored on the Internet.

These new Core M chips are important for the future of Internet-based work because they are being introduced into personal computers, Internet of Things devices and other smaller components like smartphones. Between these and new liquid cooling technologies, many current aspects of data center design will no longer be necessary. The need for an absolutely particle-free atmosphere will disappear if processors are kept submerged in liquid, for example, which would greatly cut down on certain kinds of operating expenses. Because liquid dampens sound and the new processors are much quieter than their predecessors, new centers may not need the same level of sound- and vibration-proofing that frequently goes into data center construction.

Temperature and data
The only thing that won't change when these new technologies are introduced is the need for careful monitoring of a center's atmosphere. While it is excellent that chip heat will decrease and overall costs will go down, a certain amount of monitoring is still essential to provide strong data security. Without data center temperature monitors like the WatchDog PoE, it will be very difficult for those who manage servers to know when and why their servers are overheating. As experimental new technology is used to cool servers, the need for monitors will go up. New technology is still new technology, and vulnerable to the kinds of accidents that don't necessarily get tested for in labs. To make sure a given server is stable and functional, those in charge of data centers should take care of their equipment by making sure they can monitor data center temperature with powerful climate monitoring tools.

The future for data center architects looks promising, but rapid adoption of new technologies will be required by those who want to stay cost-effective as technology races ahead. By providing organizations with new tools, Intel and liquid cooling manufacturers have created powerful incentives for data businesses to stay vigilant.

Innovative ways companies are keeping data center temperatures low
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/innovative-ways-companies-are-keeping-data-center-temperatures-low-654267
Wed, 13 Aug 2014 13:59:00 GMT | ITWatchDogs

Finding ways to keep data center temperature low can be difficult, and many companies are resorting to innovative measures to ensure the success of their operations. Facebook has recently taken a more conventional route, keeping costs down by using new software management tools to run its servers more efficiently, according to Data Center Knowledge. The new tool, called Autoscale, assigns tasks across the servers in each cluster to keep them operating at roughly the same speed, reducing overall energy costs through a more even distribution of work.
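The description above is of a scheduler that spreads work so the servers in a cluster stay at similar utilization. The sketch below shows a generic least-loaded dispatcher built with a heap; it is only an illustration of that idea, not Facebook's actual Autoscale implementation, and the server names and task costs are made up.

```python
import heapq

def assign_tasks(server_names, task_costs):
    """Send each task to the currently least-loaded server so load stays roughly even."""
    heap = [(0.0, name) for name in server_names]  # (current load, server)
    heapq.heapify(heap)
    assignments = {name: [] for name in server_names}
    for task_id, cost in enumerate(task_costs):
        load, name = heapq.heappop(heap)
        assignments[name].append(task_id)
        heapq.heappush(heap, (load + cost, name))
    return assignments

# Illustrative usage: ten tasks of varying cost spread across three servers.
print(assign_tasks(["web-1", "web-2", "web-3"], [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]))
```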

Other companies are finding more unorthodox ways of cutting costs, as reported by CloudTweaks. While businesses have previously cooled data centers with solar panels in the desert and frigid ambient temperatures in the Arctic, eBay's new center uses simple fuel cells. These fuel cells turn natural gas into electricity, meaning that eBay only plugs into the local grid to power this particular data center when it runs out of gas. Instead of drawing on the grid first, eBay uses its own system, which helps to greatly reduce the chance of blackouts and keeps monthly operating costs low.

By far the most striking data center operation using new energy sources is Apple's new North Carolina data center, which uses biogas captured from landfills. This renewable energy comes from the simple decomposition of trash dumped in a landfill, which, Apple hopes, will help reduce the greenhouse gas emissions that tend to radiate from area landfills and will give Apple a better environmental reputation in North Carolina. These kinds of cost-cutting measures are not uncommon among larger technology companies, which are always looking for ways to reduce the cost of processing the enormous volumes of data they deal with on a regular basis.

Move where it's icy out
For anyone who is trying to keep a data center temperature low, the modus operandi should center around the question "What if?" Allowing the mind to pursue new opportunities and pick out new locations is the number one way to find the new ideas that the larger enterprise clients are using to keep their servers running coolly and smoothly. When centers open in the Arctic, underground, in sunny North Carolina and the blazing heat of the desert, it is clear that there is no single way to reap natural benefits for optimal data center functionality.

The best procedure for understanding how to cool efficiently is to look at all of the different ways that organizations are cutting costs, use what makes the most sense for the given situation, and then aggressively monitor the data center temperature with a strong piece of climate-measuring equipment like the Watchdog 100 PoE. Without knowing what kinds of cost-saving options are out there, it is easy to make uninspired decisions that harm the group in the long run by never taking a chance on an experimental approach. Using yesterday's best practices is not good enough for dealing with the challenges of tomorrow.

The important part of dealing with data center temperature is not just knowing when or how the data center is cooled, but understanding why and where cooling could be done best. Except for certain high-demand services that absolutely cannot tolerate latency, data centers have a theoretically global reach. Companies should keep their eyes on the future as they build new data centers, re-evaluating the data and being bold in their decision making.

Porting data to a new data center doesn't have to be scary
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/porting-data-to-a-new-data-center-doesnt-have-to-be-scary-653761
Tue, 12 Aug 2014 10:35:06 GMT | ITWatchDogs

Building a large data center is a scary move for many organizations, but there are ways to make the process cheap and safe, reported DatacenterDynamics. High-visibility IT projects like this are nerve-wracking for many administrators because the potential for embarrassment and losses is high, yet these kinds of mass data migrations need to happen when upgrading from an older data center or relocating a company's base of operations. To keep from panicking during the move, anyone involved in overseeing this kind of event should know a few important things about relocation.

Data centers need to be created with an eye for storage and security. This means having backups in place to ensure access to files at all times, which usually means a system of backup generators and other mechanisms that keep servers running. The old standard is heavy cooling to offset the heat produced by constantly whirring fans and other mechanical parts, but this doesn't address every problem that might befall a data center. A good idea here is to invest in a data center temperature logger. To keep data center temperature down, the best bet is to find ways to reduce the heat generated in the first place, since cooling is one of the largest operational expenses of the data center - next to power, that is.

These kinds of data centers are important because they can host much of the customer and business performance data that allows an organization to make strong decisions, reported Cloud Tweaks. This data is so useful because it is uniquely tailored to the operations the given organization conducts, yet it also contains enough information on how customers behave to be broadly applicable to marketing in other contexts. It is much easier for advertising teams to draft strategies when they have concrete knowledge of the various audiences they are speaking to, rather than just a general idea.

Once a company has found a way to make sure that a given data center is well powered, it is important to use data loggers that can track exactly how everything is performing at any given time. The Watchdog 100 PoE is very useful for knowing when heat spikes hit a data center that could potentially damage computers. Other types of monitors, like the Remote Power Manager X2, are good for making sure that small spikes in electrical input don't leave a server fried.

Making sure data is safe
The major issue in getting a server relocated or built is making sure that the applications it runs are ported over cleanly. Be sure to know whether the current software will properly interface with the new hardware; not all interfaces are created equal, and some systems have a very hard time working with others. Virtual migrations involve moving servers and data to the new data center by dragging and dropping files as one would in a network environment, but other methods, including physically shipping data, shutting down servers and restoring them, or application failover, can be used to get information from one place to another quickly.

No matter what kind of system is used to set up a new data center, the important part is that it is secure. Monitoring the data center temperature is a good way to make this happen.

When should companies invest in data centers and how should they be built?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/when-should-companies-invest-in-data-centers-and-how-should-they-be-built-653320
Fri, 08 Aug 2014 10:14:05 GMT | ITWatchDogs

Everyone wants to use big data to make their businesses operate more effectively, and the bigger companies out there want to be able to use it through private cloud systems to preserve trade secrets and reduce overall IT overhead. But when should a business invest in a data center? What kind of software should it run? Are data centers scalable enough for companies that may experience heavy growth over the next couple of years? To answer these questions, organizations need to understand exactly what big data is, wrote Nicole Laskowski of SearchCIO.

The philosophy behind big data is that companies, especially large-scale businesses, currently have access to more data than they have the ability to use for meaningful decisions. If that information can instead be used to make decisions more accurately and more quickly, the gains could be enormous. Big data has already produced large efficiency gains for many companies by reducing redundancies and providing better customer and client support in industries from telecom management to health care, so the challenge for most companies now is finding ways to make this process work for them. Business intelligence is just one facet of big data, but it is the one that most groups can use most easily.

For organizations that are ready to take the plunge and explore data-driven solutions to their problems, the best way to do so is to work with the newest possible server architecture. Data Center Knowledge contributor Hemant Gaidhani wrote that solid state drives are the best way to make sure data put on servers can be easily accessed from anywhere, and that software-defined storage is the next logical step of this solution and the natural follow-on to server virtualization, allowing for better flexibility. Because enterprise SSDs are so much faster than traditional hard drives, the bottleneck that would otherwise appear when writing and rewriting data from one server to another largely disappears, which makes for a much higher-performance server than would otherwise be available. Leveraging this into better overall resiliency and cost savings through software-defined storage makes for a more efficient data center on the whole, which further reduces the cost of operation over time.

Tools for tech
Leveraging the best equipment available to keep an investment in SSDs profitable involves making sure the best possible security measures are running on that equipment. That means encryption protocols, strict security parameters for anyone with physical access to the servers, and state-of-the-art temperature monitoring and environment detection mechanisms. The WatchDog 100 PoE is an excellent example of the kind of equipment that can keep data flowing through a server without worry. Its features ensure that, even if a major hazardous event were to occur within a data center, all relevant staff could be notified immediately through email and SMS services. This means that even if the technology team responsible for maintaining the data center is hundreds of miles away, it will be able to respond in real time to whatever happens.
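As an illustration of that kind of notification fan-out, the sketch below sends a single alert to a list of on-call addresses using Python's standard smtplib; the SMTP relay, sender, and recipient addresses are placeholder values and not part of any specific product.

```python
import smtplib
from email.message import EmailMessage

# Placeholder values: substitute the real relay and on-call addresses.
SMTP_HOST = "smtp.example.com"
SENDER = "datacenter-alerts@example.com"
ON_CALL = ["ops-team@example.com", "5551234567@sms-gateway.example.com"]

def notify_staff(subject, body):
    """Fan one alert out to every on-call address; SMS gateways can be reached by email."""
    msg = EmailMessage()
    msg["From"] = SENDER
    msg["To"] = ", ".join(ON_CALL)
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP(SMTP_HOST) as server:
        server.send_message(msg)

notify_staff("Humidity threshold exceeded in rack row B",
             "Reading of 72% RH at 14:02 local time; please investigate.")
```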

By using equipment that keeps data servers safe, companies are able to more fully leverage big data analytics to make their businesses strong and profitable. Through the use of technologies like these, companies gain an advantage in outmaneuvering the competition by providing better service faster. A well-maintained server is the first step in utilizing everything big data has to offer, and server room temperature monitors like the WatchDog and other devices are an important component of that.

The way data is being used is changing: Are data centers ready?
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/the-way-data-is-being-used-is-changing-are-data-centers-ready-652747
Thu, 07 Aug 2014 09:53:51 GMT | ITWatchDogs

Industry and education are continuing to find exciting new uses for big data, according to Data Center Knowledge. The use of analytics to make life easier for the business community is nothing new, but how it is being used is. Beyond simply restructuring telecom expenses in order to find gains, businesses are using data to improve safety and comfort and to monitor customer behavior. When everything can be broken down into discrete informational units, there are many more ways that groups can use these points of reference to build strong solutions for problems both old and new. New styles of data loggers are beginning to bring in a flood of information that will be available to people at a moment's notice like never before.

There are two sides to the saying, "when you have a hammer, everything looks like a nail." While it is true that it is possible to over-apply solutions to problems that don't need them or won't benefit, big data is more like a Swiss army knife than a hammer. When a person has a multi-tool, everything looks like it needs fixing. What industry is learning with the advent of analytically-based solutions is that there are many things that stand to benefit from a good tightening of the nuts and bolts. An example of this is the way that U.S. high schools are trying to use face and motion detection software to identify students. This system would allow for children and teenagers at these high schools to be able to walk unencumbered by identification lanyards and similar methods that have low adoption rates, and would still automatically alert staff to the presence of non-permitted individuals on campus.

Many states are already using facial recognition software to prevent licenses from being duplicated fraudulently, reported Natasha Singer of the New York Times. While there is concern over the potential exploitation of this technology, its use in these types of situations - for security - is considered ethically sound. Applying this type of technology as a data logger could be significant for companies trying to limit breaches and keep employees' identities and accounts secure. For data centers, these data loggers could be a major step forward in keeping the physical location of servers safe. These new types of information will require servers capable of managing the load as the technology becomes more popular.

Practical data applications
For data centers themselves, using power management systems built around a power monitor or temperature sensor will be important for maximizing the utility of servers and floor space. By understanding when different components of the center heat up, server administrators can make better decisions about usage and allocation during periods of stress. Investing in a backup power generator can be very important for a center that hasn't done so yet. Using products such as the Remote Power Manager X2 as a power monitor can be critical for identifying key usage points that may not be reflected in data processing statistics.

Finding ways to use data will be the ongoing theme of the next couple of years for all industries. Understanding how to properly structure timelines and goals to achieve the best results is something modern businesses are fortunate to be able to do. There is no sector more ready to achieve results with big data analysis than the modern data center, due to its unique position as a data receiver, deliverer and analyzer.

Mobile and gaming markets will define new data center deployments
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/mobile-and-gaming-markets-will-define-new-data-center-deployments-652275
Wed, 06 Aug 2014 10:51:36 GMT | ITWatchDogs

Technology currently used to power cellphones will fuel data centers in the near future, according to Wired contributor Peter Levine. The processor improvements that let mobile devices pull the weight of systems more than 10 times their size will soon be used to create new, more resilient servers. Data center temperature variance will be wider with these chips, but it will still require monitoring to understand the security and resilience of the devices. This comes at an opportune time, as the demand for processing power and data continues to grow because more industries are contracting data centers to store, analyze and protect their data.

A major new customer that now contracts out much of its information processing to data centers is the electronic entertainment industry. Gaming could account for 30 percent of technology industry revenue in the U.K. alone by 2020, according to Datacenter Dynamics. Because more video and computer game organizations are switching to digital content distribution and streaming services, there is more demand for data centers with the capacity to serve all of the different files that may be accessed at once. As gaming becomes increasingly mobile and multiplayer based, more servers are required to keep players connected to each other. While different sizes of entertainment companies approach data storage in different ways, they all want to work with data centers to build facilities and find processes that best help them manage the user and player data they need to keep track of.

Because new processors are so lightweight, low-cost and low-power, the transition to data centers that run a much larger number of small machines working together to process information may be the ultimate cost-cutting technique available to data centers, Levine wrote. Data center temperature could become less of a cost item if these small processors are used, because they are much more heat-resistant, although temperature should still be monitored. The hardware, too, could become less reliant on proprietary designs and more of a simple commodity. Money could translate directly into a certain amount of capacity that would automatically support whatever server architecture a given data center intended to run. While most data centers (especially those that host public cloud services) already use server virtualization, the as-yet largely unexplored benefits will come from being able to use many small processors together to churn very large data loads at a very low cost compared to modern setups.

Virtualization is key
Hardware that is structured to accept whatever server software is used on it will become more important as different companies require different things from their information processing providers. The needs of electronic entertainment companies, while similar, are different in many ways from e-commerce retailers due to the scale and consistency needed by both. While an online retailer doesn't want to lose a sale by accidentally dropping an order from a customer and places a premium on consistency, hosts of electronic games can't afford to delay information from customers to the servers by even a second if they want to provide an enjoyable experience to players of fast-paced games.

Games point the way toward what end users expect their online experiences to be like: interactive, fun and instant. What is common for games now will gradually become expected from online shops and other digital forms of interaction between users and providers. When that happens, the important question will be whether data centers can meet steadily increasing demand. The best thing that can happen for the data center is the adoption of cellphone processor technology.

International data requires international thinking
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/international-data-requires-international-thinking-651931
Tue, 05 Aug 2014 10:56:45 GMT | ITWatchDogs

Because the Internet is an inherently global medium, more and more customers of data centers are international businesses, reported Data Center Journal. As foreign businesses cater to American clients, they may contract with data centers that have a physical presence in America for cloud servers, so that ping times to U.S. users do not introduce too much latency. Working with international clients and storing data on different servers requires that those storing the information keep abreast of current issues to make sure they are doing all they can to protect the integrity of their data. It's important to monitor power and other key external variables, both in the center and in the surrounding community, to understand how a data center is expected to behave.

For example, Microsoft was recently ordered to turn over customer emails that it stored in Dublin, Ireland, according to Data Center Dynamics. While navigating legal gray areas with regard to discovery and data storage, it is important that data centers keep scrupulous records and understand where they might be called on to give data and why. Without knowing what might be expected of them, it is easy to wind up in a situation where a company lands in legal trouble.

The current atmosphere regarding international data can be confusing, so the key point to remember is that the job of a data center is to store data, not to lose it. To this end it is important to be able to monitor power to servers and keep track of redundant systems so that nothing gets compromised in the wake of a crash or other disaster. Keeping data securely on servers is the number one priority of data centers no matter where they are located, so it is always better to be sure that something is stored properly. Devices like climate monitors are very important for this task.

Global business
Finding the right balance between international clientele and U.S.-based clients can be difficult for some data centers, but in general the good thing about data is that once it is on a server, it doesn't really matter from what nation it originated. Most fears of data being collected by entities other than the clients themselves are overblown; the more important priority for data centers should be protecting data from physical damage like a blown circuit rather than from something like a cyberattack. To this end, it is more important to be ready for the damage that can follow from failing to secure backup power to preserve the uptime of data on their servers than anything else.

As data becomes more internationally driven, it behooves all data centers to start thinking about where their next branch should open up, or whether there are opportunities they had not considered that might matter if they want to operate in a different area. A data center opening near the Arctic could benefit from lower power costs due to reduced need for air conditioning. A data center in the desert could benefit from solar panels. Even data centers in mountainous regions could be built into the mountain to ward off the threat of a natural disaster raining on their parade. As far as data centers go, the possibilities are endless if the guiding question is: how can I keep my own and my clients' data safe and secure? Elements like cameras can be very useful here. The point is not so much whether any given location is good, but what is good about any given location.

Bitcoin mining becoming popular new direction for data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/bitcoin-mining-becoming-popular-new-direction-for-data-centers-651701
Mon, 04 Aug 2014 11:18:39 GMT | ITWatchDogs

A rapidly expanding market for data centers is processing Bitcoin transactions, reports Data Center Knowledge. As the Bitcoin market expands, more data centers are expected to get into the business of running these transactions on their machines. The reason tenants want these transactions processed is that every batch offers the host a chance to earn newly minted bitcoins. It is like playing a free lottery with no buy-in cost a hundred times a second. Although the vast majority of processing attempts offer the host no return, the few that pay off can amount to a hefty surplus of digital currency after a couple of months of processing. Frequently the clients hosting transactions in these data centers are cloud service providers that license out a certain percentage of their rigs to customers who benefit in the long run.

Due to the nature of Bitcoin processing, a lot of fundamental assumptions about how a data center should function change. While it is still important to keep track of the server room temperature, it doesn't matter if there are brief, periodic outages while maintenance is done. Uptime is not a problem, because the international Bitcoin network naturally routes around unavailable servers when processing transactions. Typically, servers used to host Bitcoin operations have strong processors and are easily deployed. There is no real demand for reliability, and frequently no service level agreement whatsoever for customers. This is not as bad as it might seem, because hosting Bitcoin processing tends to pay for itself as long as the minimum requirements are met.

Bitfury, a Bitcoin hardware vendor, has already entered the hosting business. Its data centers have been set up internationally, with locations in Georgia, Finland and Iceland. Its server rooms follow an architecture central to how many Bitcoin mining operations are set up: high density, low redundancy. Because the winning ticket that lets a hosting provider earn bitcoins for processing is randomly allocated across all of the mining rigs currently running, it makes sense for miners to set up as many servers as possible so the odds are in their favor. Although as recently as a few years ago it was possible for someone with just a few computers to put together a reliable mining operation, the number of mass-hosting sites has increased to the point that anyone without a remote hosting deal will struggle to compete with the larger operations now mining coins.
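The "odds" point can be made concrete with a simple expected-value calculation: on average, a miner's share of the blocks found is proportional to its share of total network hash power. The sketch below uses that simplification, with made-up hash rate figures and the usual assumption of roughly 144 blocks per day (one every ten minutes); it ignores difficulty changes, pools, and fees.

```python
def expected_blocks_per_day(my_hashrate_ths, network_hashrate_ths, blocks_per_day=144):
    """Expected blocks won per day: your share of total hash power times the
    ~144 blocks the network finds daily. A deliberate simplification."""
    return blocks_per_day * (my_hashrate_ths / network_hashrate_ths)

# Illustrative, made-up numbers: doubling the rigs doubles the expected payout.
for rigs in (100, 200, 400):
    my_hashrate = rigs * 1.0  # assume 1 TH/s per rig
    print(rigs, "rigs ->", round(expected_blocks_per_day(my_hashrate, 200_000.0), 4), "blocks/day")
```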

Mining operations are expected to spend at least $600 million deploying these large-scale facilities in the second half of 2014. That Bitfury was able to raise money in May will do it a lot of good in the long run, since the more computers enter the game, the harder it becomes for any individual to make as much money from it.

Server room temperature is the hidden factor
By far the most important element these large-scale Bitcoin rigs have in common is their use of aggressive server room cooling equipment so their processors can be overclocked as much as possible. When it comes to Bitcoin, processor speed directly equals money, so high-end cooling techniques like liquid cooling aren't just possible, they are required. The advances these operations are making in using ambient server room temperature and other elements of cooling may end up pioneering techniques used by more traditional hosts in the years to come.

Next generation data centers are building smarter and smaller
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/next-generation-data-centers-are-building-smarter-and-smaller-648485
Tue, 29 Jul 2014 10:56:37 GMT | ITWatchDogs

Vodafone, a UK mobile carrier, is conducting a grand experiment by hosting small-scale data centers directly at its cell tower sites, reports Android Authority. These data centers are designed to reduce congestion during the periods of peak demand that are becoming more common as cell phones are increasingly used to stream live entertainment. It is not hard to imagine that this plan was developed in response to the recent World Cup, which broke online streaming records worldwide, according to FIFA. For cell phone streaming services to remain usable, something that can monitor power usage in these data centers will be necessary.

But Vodafone isn't the only trendsetter in terms of data center production. BizTech reports that nonblocking, lower latency, layer 2 flattening and high availability are major requirements that all data centers are working with in order to edge out the competition. The reduction of latency and high availability requirements are being specifically addressed by Vodafone in its recent incorporation of data centers into their cell towers, but nonblocking will need to be a requirement for all data centers down the line in order to make sure that connectivity for cloud hosting applications remains swift.

New technical requirements for data centers
Due to the increasing demands of availability, there has never been a more important time for data centers to monitor power in their server configurations. Uptime is usually thought of as the time a server can remain up without succumbing to technical problems like a denial-of-service attack, but a server's resiliency against potential physical disasters is also a major factor. Sensors that make data centers aware of the physical conditions of their equipment are highly important in maintaining the security and accessibility of their information.

Because so many server configurations are now high-availability, stronger safeguards against outages are becoming an industry standard. The ramifications of increased data flow and constant access can be felt throughout every end of the data center industry, whether in the oncoming changes in cooling, the redistribution of physical space in the data center, or the heavy reliance on virtualization as a response to variable user needs. This is a period of exciting developments in the industry, and by the end of it data centers will be stronger, more reliable and most likely quite a bit more widely dispersed. If data centers are being built into telephone towers today, it's hard to predict where they will be in five years.

New trends in data center locations help to control server room temperature
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/new-trends-in-data-center-locations-help-to-control-server-room-temperature-648951
Mon, 28 Jul 2014 10:12:35 GMT | ITWatchDogs

Getting server room temperature under control is difficult for many data centers, but three new ideas highlight the ways different locations can naturally facilitate data center heat control. A major new development is underground data centers, one of which is being constructed by Iron Mountain in Pennsylvania, according to Smart Data Collective. There are other trends as well, including the movement of data centers to the Arctic Circle and to open plains where solar collectors can provide much of the energy needed to cool and run the center. With so many natural means of cooling and supplying energy to centers, it's an exciting time to develop a data center.

When deciding how to control server room temperature, utilizing the cooler air of an underground facility is an attractive proposition. These complexes are also highly resistant to any sort of weather disaster, and have been called "nuke proof" in certain instances, making them useful for governments that may need data to keep flowing in wartime. Physical security becomes married to data security here, because these complexes make it extremely unlikely that anyone could walk into a server room and take information without being spotted.

Arctic landscapes and open fields
There are other ways for servers to keep cool without going underground, and one of those ways involves heading to the Arctic Circle. The air of the frozen North naturally keeps servers cool without having to invest much in cooling systems, but also can introduce the problem of latency when transactions between the data centers and users worldwide are common. The Irish Times notes that many Asian technology companies are putting their centers in Nordic countries such as Finland in order to maintain a reasonable distance while still benefiting from the cold.

Apple's recent idea regarding maintaining cool data centers has been to put them in hot, humid climates like North Carolina, but to then use solar panels to offset the costs and provide free, green energy to power the server room temperature controls. The Guardian reports that much of the incentive behind this move is Apple trying to keep its carbon footprint low, but the combination of solar and geothermal it uses at plants in NC and Arizona makes good fiscal sense as well. As the race for cheaper data storage continues, it is likely that increasingly more innovative approaches will be used to keep costs low.

Five stages of lifecycle development for data centers outlined in new Nlyte white paper
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/five-stages-of-lifecycle-development-for-data-centers-outlined-in-new-nlyte-white-paper-648018
Fri, 25 Jul 2014 12:40:49 GMT | ITWatchDogs

The five stages of data center lifecycle development are cataloged in a new white paper by Nlyte. The stages are service strategy, service design, service transition, service operation, and continual service improvement. Each stage works in a different way to ensure the data center as a whole operates as effectively as possible and delivers the largest return on investment. This includes elements like risk protection through the use of a server room temperature monitor. Anyone looking to make sure a given data center is operating at its highest potential should give this paper a read.

The stages of data center operation
The first step, service strategy, involves making sure that the objectives for the data center are clearly laid out. Deciding on what kinds of equipment will be needed in order to make sure that a given complex can perform its tasks is important, and different data centers will have different requirements.

Service design follows as a way to make sure that the company itself is using the data center to its maximum potential. This means that practices, IT services, and different compliance policies must be developed to ensure prudent use of the center.

The next step, service transition, is about testing and deploying new services, while service operation deals with the overall use and execution of services while the facility is in use.

The final step as laid out by the paper is continual service improvement, which is about making sure the effectiveness of the data center remains high.

To keep a data center operation working, it is important that companies take the proper safety precautions. The use of a server room temperature monitor, as well as proper backup equipment, is necessary to safeguard against the threat of sudden data loss. Nlyte's suggestion is to build a reliable foundation for how a data center should be designed and used in day-to-day practice, but holding that together will require safeguards that prevent the network from crashing. Data Center Knowledge's overview of the white paper is useful for a brief rundown of the specifics.

Safety in monitoring
Using a series of monitors that can log server room temperature, moisture, humidity and other factors will help keep a server room running efficiently and safely through a variety of conditions. Keeping data secured and using backup tapes are also strong methods of preventing a server crash from deleting vast amounts of data.
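A minimal sketch of that kind of logging is shown below: it appends timestamped temperature and humidity readings to a CSV file at a fixed interval. The read_sensors() function and the five-minute interval are assumptions for illustration; real readings would come from the monitoring hardware.

```python
import csv
import time
from datetime import datetime, timezone

def read_sensors():
    """Placeholder: real readings would come from the environmental monitoring hardware."""
    return {"temperature_c": 22.5, "humidity_pct": 45.0}

def log_readings(path="server_room_log.csv", interval_seconds=300):
    """Append a timestamped row of readings at every interval."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            reading = read_sensors()
            writer.writerow([datetime.now(timezone.utc).isoformat(),
                             reading["temperature_c"],
                             reading["humidity_pct"]])
            f.flush()
            time.sleep(interval_seconds)
```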

The Internet of Things and wearables lay groundwork for more data all the time
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/the-internet-of-things-and-wearables-lay-groundwork-for-more-data-all-the-time-647203
Thu, 24 Jul 2014 14:01:10 GMT | ITWatchDogs

New developments in wearables that act as data loggers and the Internet of Things will lead to greater demand for data storage, reported Suresh Sathyamurthy of Data Center Knowledge. These trends in data accessibility will result in a new type of data collection by consumers - one Sathyamurthy labels as the difference between conscious and unconscious data accumulation. Sathyamurthy is not the only one working toward developments that will boost the need for data storage. Ed Healy, new CEO of RF Code, wants to jumpstart his company's software compatibility as an RFID provider and manufacturer of objects for the Internet of Things, reported Jason Verge of Data Center Knowledge. The combination of these two factors means that the Internet is about to get a lot bigger and a lot busier.

The major difference between data as it is now and as it will be when wearables and the Internet of Things become mainstream is the incredible increase in the constant accumulation of information that data centers will be responsible for, due to the consistent collection of data from wearables and the IoT. The possibility of people recording a near-constant stream of video or audio means that the demand for high-quality, high-intensity data transmission and storage will reach new heights. The newly lowered barriers to storing data mean that smaller events could produce larger peaks in the stream of data as multiple people turn on their recording devices.

More capacity and frequency
Sathyamurthy suggested that multi-tenant storage infrastructure will be very important for data centers as data loggers running on different hardware become more connected to cloud-hosted software. The major companies producing software for mobile apps, including Google, Apple and Yahoo, will invariably want to be able to keep data on their own servers, but clients and customers may also want to keep copies in private data centers, especially if the technology is being used for security purposes.

The mass adoption of data logger software that takes physical or other types of information and transmits it to host servers means that clients will have a multitude of statistics to sort through. Whereas most stored data used to be text, Sathyamurthy expects much of it now to be video content. Data loggers could be used to collect footage from surveillance cameras at teller windows, in corporate training, and on sight-seeing tours. Centers will need to understand how the information stored on their servers is being used in order to give it the priority it needs to remain safe for its purpose.

Computer room temperatures set to rise as liquid cooling and high-density racks become industry standards
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/computer-room-temperatures-set-to-rise-as-liquid-cooling-and-high-density-racks-become-industry-stan-646707
Wed, 23 Jul 2014 11:19:04 GMT | ITWatchDogs

Data centers should be kept cool, but the era of computer room temperatures matching a freezer may be a fast-fading trend. IT World reports that liquid cooling is slowly becoming more popular as a way to keep processors from overheating. The benefit of this type of cooling system is that the room doesn't have to be an icebox - the costs of cooling can be cut by half. This is achieved primarily by placing the coolant as close to the heat source as possible, allowing it to cool the chip directly. Liquid cooling circumvents the older problem of chilling all the air in a room when only some of it actually reaches the servers themselves.

IT World's article points out that this form of cooling doesn't yet have a large share of the market, but Datacenter Dynamics suggests that server farms are set to be stacked higher and more compactly. The main reason liquid cooling isn't adopted as widely as it could be right now is that it requires a greater density of servers to be cost-effective. As the data center model trends toward deeper stacks of servers and higher performance requirements, the natural next step will be a solution that benefits from closely arranged processors and keeps computer room temperature from becoming an issue.

Piled higher and deeper
Today's ultra-dense systems are tomorrow's standard systems. The Global Data Center Rack Market 2014 report showed that rack units have "nearly doubled" in the last ten years, driven primarily by cost and power constraints. Higher roofs will make air cooling an even less efficient practice than it already is, due to the natural tendency for hot air to rise in a room. This means that servers located near the top of a facility may suffer from overheating even if the temperature for processors near the bottom of the data center is fine.

The combined effect of faster computers and more densely packed space means that energy costs will surge even higher unless businesses take pains to reduce them through the adoption of new technologies. Computer room temperature solutions like sensors and liquid cooling systems could greatly help data centers that need to control their power bill while still providing high-quality service. Staying on top of user demand is the best way for data center operators to ensure their service remains relevant.

Madigan calls for federal investigation of data breaches amidst many leaks
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/madigan-calls-for-federal-investigation-of-data-breaches-amidst-many-leaks-645870
Tue, 22 Jul 2014 09:44:18 GMT | ITWatchDogs

Illinois Attorney General Lisa Madigan has called for the creation of a new federal agency to analyze data breaches, reports CBS Chicago. This makes sense in light of the volume of leaks, which GCN has referred to as an epidemic. Businesses that have fallen victim to leaks recently include LinkedIn, Adobe, eBay and other major tech companies. Many of the faults that caused these security leaks have been due to end-user error, including a user tendency toward easily cracked passwords like "password."

If a new regulatory agency does wind up being in charge of data breach investigation, it may lead to more server surveillance being necessary on the part of data centers. The use of server surveillance to keep data from being stolen isn't new, but the pressure from governmental authorities may wind up causing real-time monitoring of servers to be a new kind of job for IT personnel.

Because end users tend toward poor data habits, those who host their data wind up picking up the slack. Smart cards present one option for security officials to protect user data. Other solutions have been advocated by security watchdog groups like the Fast Identity Online Alliance, which wants online authentication to be less password-reliant. Even Google, which uses passwords for its email service, offers Google Authentication as an option for third parties who want to use the reliability of Google's security network to make their own products and services safer from outside tampering.

Security solutions
Entertainment giant Blizzard uses two-factor authentication, with authenticator keys tied to specific users. This adds a second layer of passwords on top of the traditional one, which makes accounts much harder for attackers to take over. It doesn't make them immune to attack, however, since two-factor verification is susceptible to compromise by those who can obtain the underlying second factor, which is typically generated by an algorithm that can be reverse-engineered given enough time. Wired suggests that the codes produced by authenticator tokens are only as secure as the tokens themselves.
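To make the "generated according to an algorithm" point concrete, the sketch below implements the standard time-based one-time password scheme (TOTP, RFC 6238) that many consumer authenticators follow: an HMAC of the current 30-second time step, truncated to six digits. This is a generic illustration only, not a description of Blizzard's specific authenticator, and the example secret is made up.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    """Standard TOTP: HMAC-SHA1 over the current time step, dynamically truncated."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() // interval)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Example with a made-up base32 secret; anyone holding the secret can derive the same codes,
# which is exactly why the tokens are only as secure as the secrets inside them.
print(totp("JBSWY3DPEHPK3PXP"))
```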

Server surveillance is an excellent way for data centers to keep an eye on their data so that, even if it is breached, countermeasures can be put in place immediately. The problem with hacking efforts is doubled when they become hard to detect, and strong data surveillance measures can help to make them easier to track.

Data centers compete to increase efficiency through innovative server configurations
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-centers-compete-to-increase-efficiency-through-innovative-server-configurations-645338
Mon, 21 Jul 2014 10:08:13 GMT | ITWatchDogs

Data centers have been going through a period of transformation recently as major players redevelop their data center strategies, according to InformationWeek. Companies like Google have cut power overhead down to a power usage effectiveness (PUE) of 1.1, and Facebook's Open Compute Project has lowered data center costs by up to 80 percent. Meanwhile, the Data Center Interconnect market is exploding as businesses with newly dispersed data centers try to keep them directly connected. The proliferation of data centers logically leads to an increase in DCIs, because they allow companies to treat all of their interconnected data centers as one global network. Many of the companies leading this shift use devices such as a data logger to monitor their traffic.
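For reference, power usage effectiveness is simply total facility power divided by the power delivered to IT equipment, so a PUE of 1.1 means only about 10 percent of the facility's draw goes to overhead such as cooling and power conversion. The numbers in the sketch below are made up for illustration.

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power divided by IT equipment power.
    1.0 would mean every watt reaches the IT gear; older facilities often run well above 1.5."""
    return total_facility_kw / it_equipment_kw

# Illustrative, made-up figures: a 10 MW facility delivering 9 MW to IT equipment.
print(round(pue(10_000, 9_000), 2))  # -> 1.11
```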

These DCIs are, in turn, being helped along greatly by Internet Exchanges. These IXs help move content to local users, acting as a hub for an area's Internet data. This reduces the stress on other servers, allowing for less latency and loss and greater bandwidth for area end users. IXs are only beginning to open in America, Data Center Knowledge points out, but they have had a great influence in Europe and are rapidly gaining ground in the U.S.

Although most data center administrators believe that demand will rise greatly this fiscal year, few have budgeted enough for the equipment investment that will require, reports InformationWeek. The new levels of DCI and virtualization may help data centers weather the storm of outages that could result, but this is hard to predict at present. A data logger between connected centers could help businesses monitor how much one server accesses another, which could inform logistical decisions about how and why each of their centers accesses the others.

A new Web
This new trend for DCI between data centers hosted by companies could be the beginning of a new Dark Web. The use of interconnected servers mostly partitioned off from the rest of the internet is not new for government agencies and other groups that need to keep information hidden, but hasn't been deployed by most tech companies until now. With this and the rise of co-leased data centers shared between different vendors, there may be entire business communities with the ability to talk to each other through the net without encountering connections from outside of their new private Internet. The security implications of this are interesting, although the networks would not be infallible, due to the presumed ability of most business users' computers to connect to the public Internet as well. A data logger could theoretically alert IT professionals to any uninvited guests who show up on the companies' server, but by that point it could be too late.

The new connections between data centers are multiplying in a plethora of interesting ways. It is hard to say exactly how the next stage of cloud and data center storage will shake out, given the sheer amount of innovation happening in the field. With everything from "roommate" companies sharing data centers to entire businesses selling their servers to relocate permanently to the cloud, there are a variety of network options for enterprise-level users to choose from. Those questioning their next step in this field should install a data logger in their data center and let the information it provides help them make their decision.

Cray to develop new supercomputer for the Los Alamos National Laboratory
http://www.itwatchdogs.com/environmental-monitoring-news/research-labs/cray-to-develop-new-supercomputer-for-the-los-alamos-national-laboratory-644083
Fri, 18 Jul 2014 10:46:39 GMT | ITWatchDogs

The Los Alamos National Laboratory in New Mexico is being supplied with a new supercomputer by Cray Inc., reports the Washington Post. The new computer, which will have a storage capacity of 82 petabytes and a 1.7 terabyte processing speed, is being named "Trinity" in honor of the first nuclear test. The computer will run simulations that verify the safety, effectiveness and security of the U.S. nuclear arsenal. It is unknown whether the computer will use environmental monitoring technologies to ascertain the details of the weapons' composition.

This major new supercomputer is currently the fastest supercomputer projected to be built in the world. The data entered into the machine will likely be used to conduct simulations as to the continued efficacy of the U.S. nuclear arsenal as the weapons age. News Maine has reported that the Los Alamos National Laboratory and Sandia National Laboratories made combined efforts to design the machine.

Chip power
The device will leverage the new "Knights Landing" Xeon Phi processors, reports PC World. These chips use Micron's Hybrid Memory Cube technology and have "five times more bandwidth than the emerging DDR4 memory, which is not yet used in computers," PC World reports. The level of power involved will allow the computer to conduct a dazzling array of high-accuracy simulations involving trajectory, composition, environmental monitoring, and other potential risk factors that may go into the production, maintenance and launch of a nuclear device.

In many ways, this supercomputer is not merely a monitoring tool for the U.S., but a security measure in its own right. Trinity will be tasked with simulating the destruction of the nuclear stockpile reserves as well as hosting many classified national security applications, according to HPCwire. No details have been released regarding security systems built around the machine itself, such as environmental monitoring, but a project of this size should have many. The sheer size of the deal - $174 million - marks this supercomputer as one for the history books, as it is one of the largest sums paid to Cray for its services as a supercomputer manufacturer. There isn't a single computer that runs as fast as this one yet, and there may not be one even in 2016, when Trinity is set to launch.

Cost reductions in data centers require innovative thinking, data analysis
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/cost-reductions-in-data-centers-require-innovative-thinking,-data-analysis-644477
Thu, 17 Jul 2014 10:38:14 GMT | ITWatchDogs

Lowering operational costs is a priority for any data center administrator, but it is a hard goal to achieve, given the amount of redundancy and security required just to keep a data center running. Cooling costs are frequently the first target for those looking to reduce expenses, but other techniques can help ensure that the servers already in place are being used to their fullest. Power costs can be reduced by finding more efficient ways to allocate resources within the system, and efficiency can be increased by using power more thoughtfully. As businesses continue to rely on data centers for colocation and cloud storage, demand will only grow. The following tips will help keep data throughput high and server room temperature low.

Use the land
Technology is, to most people, synonymous with ingenuity. The ability to come up with creative solutions is one of the hallmarks of engineering intelligence, which is why data centers should take whatever benefits they can glean from the world around them. Datacenter Dynamics reports that a new South African data center is using solar panels to power the cooling of its servers. This inventive design offers a new way to cool servers, and proves that data centers don't need to be planted in the Arctic Circle to get cooling essentially for free.

Using new types of cooling can also greatly reduce costs. Green Data Center News reports on liquid-based solutions that carry away large quantities of heat with small amounts of fluid. The fluid used by these centers can funnel heat away from servers at a lower power cost than air cooling systems. And although they have been explored by industry leaders, polar regions aren't necessarily the best option for keeping server room temperature low, as the cost of building there offsets most of the benefit of the free cold air.

Rely heavily on servers
Ensuring maximum server efficiency is one of the major ways data centers can keep server room temperature cool. For example, Dataquest points out that server virtualization can obviate the need for a separate server dedicated to maintaining elements inside the data center. The more fully each server can be used, the less overall heat will be produced. Virtualization allows multiple virtual servers to run at the same time on the same physical host. This is highly important for any data center that deals with a lot of incoming connections, as server virtualization is the only way to make sure that each client can get the proper allocation of resources needed to fulfill its service level agreements.

Using better data storage software can also reduce strain on servers and processors over time, notes Dataquest. Automatically tiered storage puts high-priority information on fast, easy-to-access servers and puts low-priority information in the digital equivalent of an attic. The process can be automated, and its adoption has grown in recent years as more data center workers note how effective it is. Combining tiering with deduplication technology reduces the number of redundant files accidentally stored on servers, which keeps the center's own records tidy and spares clients from sorting through multiple copies of the same file when they search for their data. This type of back-end support can also be marketed as a service to clients who need help storing their files properly. All of these measures contribute to an easy-to-maintain server room temperature.
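
Deduplication like this is commonly built on content fingerprinting. The sketch below is a minimal illustration of that idea, not any particular vendor's product; the directory path and function names are hypothetical.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def fingerprint(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return a SHA-256 hash of a file's contents, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        while chunk := handle.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def find_duplicates(root: Path) -> dict:
    """Group files under `root` by content hash; groups with more than one
    member are redundant copies a deduplicating store would collapse into
    a single physical object."""
    groups = defaultdict(list)
    for path in root.rglob("*"):
        if path.is_file():
            groups[fingerprint(path)].append(path)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    # Hypothetical client-data directory used purely for illustration.
    for content_hash, copies in find_duplicates(Path("/srv/client-data")).items():
        print(f"{content_hash[:12]}...  {len(copies)} identical copies")
```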

New data center trends include liquid cooling and hybrid optical-circuit networks
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/new-data-center-trends-include-liquid-cooling-and-hybrid-optical-circuit-networks.-643476
Wed, 16 Jul 2014 09:56:42 GMT - ITWatchDogs

The bandwidth requirements for data centers are growing at a rate of 25 to 35 percent a year, reports Lightwave Online. This growth is occurring primarily not in connections to the outside world through the internet, but within the data center itself, which means that processing speed needs attention as much as connectivity does. Fortunately, optical circuit switching and liquid data room cooling are providing solutions for data centers that require stronger connections and hardware. These technologies offer a more efficient design for the infrastructure of modern data centers while also being cheaper in the long run thanks to lower cooling costs.

The major benefit that optical circuit switches offer data centers is the ability to separate what are known as elephant and mouse flows. Elephant flows are big, slow-moving transfers of data, such as database backups and other long-running processes; mouse flows are smaller queries that are more sensitive to latency, such as downloading a single file from a database. The problem is that the two do not coexist well: elephants slow down mouse flows, and mouse flows "scare" elephants by forcing them to start and stop repeatedly. Hybrid packet-optical circuit networks address this by keeping mouse flows on a packet-based network, which has the low latency they need, and tagging elephant flows to be sent over the OCS network, which has the throughput necessary to carry them.
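
A hybrid network needs a rule for deciding, per flow, which fabric carries it. The sketch below is a minimal illustration of that classification step, assuming a simple byte-count cutoff separates elephants from mice; the threshold and names are hypothetical, and real schedulers detect elephants more cleverly.

```python
from dataclasses import dataclass

# Hypothetical cutoff: flows expected to move more than this many bytes are
# treated as "elephants" and steered to the optical circuit switch.
ELEPHANT_THRESHOLD_BYTES = 100 * 1024 * 1024  # 100 MB

@dataclass
class Flow:
    src: str
    dst: str
    expected_bytes: int

def route(flow: Flow) -> str:
    """Return which fabric should carry the flow.

    Mouse flows (small, latency-sensitive) stay on the packet-switched
    network; elephant flows (large, throughput-bound) are tagged for the
    optical circuit switch so they don't queue together with the mice.
    """
    if flow.expected_bytes >= ELEPHANT_THRESHOLD_BYTES:
        return "optical-circuit"
    return "packet-switched"

if __name__ == "__main__":
    backup = Flow("db-01", "backup-01", expected_bytes=500 * 1024**3)  # database backup
    lookup = Flow("web-01", "db-01", expected_bytes=64 * 1024)         # single-file query
    print(route(backup))  # optical-circuit
    print(route(lookup))  # packet-switched
```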

Using liquids for data room cooling is a solution primarily geared toward high-performance computing workloads such as Bitcoin mining and scientific research, Data Center Journal reports. Liquid cooling generally relies on nonconductive, noncorrosive fluids to avoid the pitfalls normally associated with mixing electricity and water. It is best suited to high-density setups, where ambient heat can become too much for fans and other traditional solutions to handle. Immersion cooling, which submerges the hardware in the dielectric fluid, and direct-to-chip cooling, which runs coolant through a heat exchanger next to the chip, are seen as roughly equal in market share.

These data room cooling and network processing technologies are both designed for high-efficiency data centers that cater to processing-heavy clients. For most data centers, the question "Is it prudent to adopt this technology?" can largely be answered by another: "How many kilowatts per rack does this data center use?"
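
That question can be answered directly from metered power data. The sketch below divides measured IT load by rack count and flags densities where liquid cooling is often evaluated; the 10 kW-per-rack cutoff is an assumption for illustration, not a figure from the article.

```python
# Hypothetical density above which liquid cooling is commonly evaluated.
LIQUID_COOLING_CANDIDATE_KW_PER_RACK = 10.0

def kw_per_rack(total_it_load_kw: float, rack_count: int) -> float:
    """Average power density: measured IT load divided by rack count."""
    return total_it_load_kw / rack_count

def worth_evaluating_liquid_cooling(total_it_load_kw: float, rack_count: int) -> bool:
    """True when the facility is dense enough to consider liquid cooling."""
    return kw_per_rack(total_it_load_kw, rack_count) >= LIQUID_COOLING_CANDIDATE_KW_PER_RACK

if __name__ == "__main__":
    density = kw_per_rack(total_it_load_kw=480.0, rack_count=40)
    print(f"{density:.1f} kW per rack")                # 12.0 kW per rack
    print(worth_evaluating_liquid_cooling(480.0, 40))  # True
```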

Trends in bandwidth
The future of data centers will inevitably trend toward fuller racks and more virtualization to allow for flexible, fast processing of data. Data room cooling will only become more important as efficiency becomes the number one concern. New developments in 3D technology and 4K resolution are beginning to trickle toward the consumer economy, which means bandwidth needs may soon come under sudden strain from these intensive applications. Disruption is a constant in the tech industry, and the next major disruption for data centers will likely be the extremely high data loads these trends create.

As bandwidth needs increase, so does the need for data room cooling solutions and temperature monitors. The Watchdog 15 provides overviews of environmental measurements and real-time sensor data with secure access through the web, and offers flexible access rights and security settings, allowing for three customized account levels with varying amounts of access. Systems like this one can track the all-important temperature of racks during peak hours and let data center operators make informed decisions about how their current data room cooling products are working for them. Without the knowledge provided by accurate, up-to-date sensors, making a sound decision on data center hardware is extremely difficult.
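
As an illustration of how such readings might be consumed programmatically, the sketch below polls a hypothetical HTTP endpoint and flags racks that run hot during assumed peak hours; the URL, field names and thresholds are placeholders, not the Watchdog 15's actual interface.

```python
import json
import urllib.request
from datetime import datetime

# Placeholder endpoint, field names and thresholds; a real deployment would
# use the monitoring device's documented interface and site-specific limits.
SENSOR_URL = "http://monitor.example.local/api/readings"
PEAK_HOURS = range(9, 18)     # assumed peak window, 09:00-17:59
RACK_TEMP_LIMIT_C = 27.0      # assumed alert threshold

def fetch_readings(url=SENSOR_URL):
    """Pull the latest sensor readings as JSON from the monitoring device."""
    with urllib.request.urlopen(url, timeout=5) as response:
        return json.load(response)

def peak_hour_hotspots(readings, now=None):
    """Return the racks exceeding the temperature limit during peak hours."""
    now = now or datetime.now()
    if now.hour not in PEAK_HOURS:
        return []
    return [name for name, temp_c in readings.get("rack_temps_c", {}).items()
            if temp_c > RACK_TEMP_LIMIT_C]

if __name__ == "__main__":
    sample = {"rack_temps_c": {"rack-01": 24.5, "rack-07": 29.1}}
    print(peak_hour_hotspots(sample, datetime(2014, 7, 16, 14, 0)))  # ['rack-07']
```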

Companies value uptime and recovery in data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/companies-value-uptime-and-recovery-in-data-centers-643204
Tue, 15 Jul 2014 10:45:21 GMT - ITWatchDogs

New research from IDC shows that data center clients around the world use the cloud for disaster recovery, as reported by Fuseworks Media. This pattern suggests that these companies are in the cloud but haven't given it their full trust yet. Most of the organizations involved are backing up virtual, physical and cloud-based servers, and most keep duplicate copies of cloud-hosted files on company servers. The central lesson is that clients are using the cloud primarily not as a way to make technology more mobile, but as a way to safeguard their information.

Most clients listed uptime as a major concern for cloud servers, and 20 percent reported average earnings losses of $100,000 per hour whenever servers are down. That makes it critical for data centers to hold up their end. Investment in backup generators, security and redundancy are the obvious choices for keeping servers up, but they aren't enough on their own; maintenance is just as important for keeping data centers running at full operation. Data Center Knowledge contributor Jeff O'Brien asks, "Can you afford to have one of your critical power distribution assets fail unexpectedly?" For almost all data centers, the answer is no.

Backups and maintenance
To keep uptime as high as possible, data centers need to test their backup equipment on a regular basis. Nothing is more likely to turn clients away than a prolonged outage caused by a failing backup generator. O'Brien recommends that data centers use a computerized maintenance management system, a software tool that manages and tracks maintenance activities. However, this is only one part of keeping the cloud up.

Using power and environmental monitoring software is key to keeping mechanical failures in cooling systems from compromising operations. A consistent feed of information about the facility helps maintain good client relationships when something does go wrong and parts of the complex need repair. The Watchdog 100 PoE can notify support staff immediately when it detects abnormal ambient conditions, so technicians can be on-site quickly if disaster strikes. This kind of always-on monitoring keeps clients informed and lets technicians rush to the scene of the problem, allowing for smarter, more efficient reactions to critical events as they happen.

Bitcoin rigs bloom while older data centers fall by wayside
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/bitcoin-rigs-bloom-while-older-data-centers-fall-by-wayside-642907
Mon, 14 Jul 2014 15:51:00 GMT - ITWatchDogs

Bitcoin mining rigs are a new trend in helping computing facilities minimize cost and maximize productivity, reports Data Center Knowledge. These new facilities don't offer much in the way of reliability and are housed on shelving racks from hardware stores. They don't outsource anything but the processing of Bitcoin payments, for which they (or customers purchasing mining rights) are rewarded at random intervals by the network with a cache of Bitcoins. The only major expense for these servers, other than the equipment itself, is data room cooling. Because of the work the servers are doing, reliability hardly matters: any Bitcoin transaction not solved by one computer is simply shunted over to another.

It's a good thing that data centers are finding ways to be profitable in the midst of the Bitcoin rush, as many older data centers are falling by the wayside, according to Datacenter Dynamics. Those reports have angered the data storage community, which claims that demand for storage space will not "drop dramatically as a result of new techniques." Still, Datacenter Dynamics reports that many MESS, or massively expensive storage system, vendors will wind up going out of business as their hardware degrades prematurely in the face of new standards such as software-defined storage.

The essential question is what direction servers need to go in order to remain profitable. The complex math behind Bitcoin ensures that the more computers are mining, the less money each makes on average, possibly not even enough to cover the data room cooling such projects need. When 51 percent of Bitcoin's mining power belonged to a single collective, the price of bitcoins temporarily plummeted as holders feared that group would dictate prices or otherwise manipulate the market. The collective split up of its own accord, but only out of the goodwill of its members, not because there was a compelling economic reason to do so. In another circumstance, a bad-faith actor could deliberately crash the market.

Data and storage
Meanwhile, data room cooling remains one of the major costs for larger-scale, cloud-centered data centers. Companies whose hardware is too old to support software-defined storage may struggle with the cost of replacing equipment without the guaranteed return on capital that an industry with more slowly evolving standards might offer. Hardware is becoming less and less expensive, and storage space is being used more fully thanks to industry innovations. Many businesses are still working out which information should go on the public cloud, which on a hybrid cloud, and which should stay secure on in-house servers. That uncertainty has the industry in a state of flux.

For data centers to be profitable in the long run, they need to develop a reputation for stability, security and reliability. Investing in data room cooling technologies, power monitoring solutions and secure server space is what distinguishes the established players in this industry from the new crop of Bitcoin mining operations running out of old industrial buildings. Up-to-date temperature and power loggers can make data room cooling more efficient than ever before and push businesses to the next level in terms of profitability and power consumption. Even Bitcoin rigs need to be cooled: whether data centers are old-style information-holding facilities, new cloud-based software-defined storage facilities, or mining facilities, they all need to make sure their processors don't overheat.

New law means stricter client notification rules for data centers
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/new-law-means-stricter-client-notification-rules-for-data-centers-642643
Fri, 11 Jul 2014 15:42:00 GMT - ITWatchDogs

The new Florida Information Protection Act of 2014 sets a legislative tone that could lead to tighter regulation of how data centers treat client data and a greater need for strong disaster management protocols. Business Insurance reports that Nathan D. Taylor, of counsel with the law firm Morrison & Foerster L.L.P. in Washington, said the act contains provisions not found in typical state breach laws, and that cyber breach laws are "only getting broader, and Florida is not likely to be the last to introduce and pass" a law of this scale. Under the new legislation, entire categories of data previously not treated as personal must now be treated as personal in Florida, including email addresses, usernames and passwords.

The new requirements include prescriptions for describing the circumstances around a data breach and how many people in Florida were affected. Companies found not to have taken "reasonable measures" to protect personal information can be fined up to $500,000, reports the Tampa Tribune. This is part of a national trend of state breach laws that have popped up in response to the lack of a federal breach law stipulating disaster management procedures. While most supporters agree that a federal breach law would be the ideal way to provide a standard of accountability that every business operating in the U.S. would have to meet, these state breach laws go a long way toward regulating the murkier corners of the internet.

Florida's deadlines give companies 30 days after discovering a breach to notify their client base, which means data centers, in order to avoid heavy fines, must be ready to implement disaster management responses as soon as breaches are discovered. The law also allows the state's attorney general to require a copy of the incident or forensic report, notes Business Insurance. This is a remarkable step forward in oversight that will make data centers far more accountable to Florida's government in the case of a breach.

National effects of state law
The lack of strong oversight previously allowed some companies to cover up breaches in the hope that their security failures would never be uncovered, but those days are over. Any breach affecting more than 500 individuals will have to be reported in accordance with the state's breach laws. In this way, state laws shape the national discourse by forcing companies to build disaster management protocols proactively. The ripple effect may prompt other states to adopt laws of their own in response to this bill, hopefully bringing a wave of change that delivers better protection for clients' confidential information across the country.
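
As a rough illustration of tracking the two requirements described above, the sketch below computes the 30-day notification deadline and checks the 500-individual reporting threshold; it simplifies the statute considerably and is no substitute for legal counsel.

```python
from datetime import date, timedelta

NOTIFICATION_WINDOW_DAYS = 30    # notify the client base within 30 days of discovery
STATE_REPORTING_THRESHOLD = 500  # breaches affecting more than 500 individuals must be reported

def notification_deadline(discovered_on: date) -> date:
    """Last day to notify affected clients after a breach is discovered."""
    return discovered_on + timedelta(days=NOTIFICATION_WINDOW_DAYS)

def must_report_to_state(individuals_affected: int) -> bool:
    """Whether the breach crosses the state reporting threshold."""
    return individuals_affected > STATE_REPORTING_THRESHOLD

if __name__ == "__main__":
    print(notification_deadline(date(2014, 7, 11)))  # 2014-08-10
    print(must_report_to_state(1200))                # True
```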

Data centers should take the time now to tighten their disaster management responses. Breaches can come from a variety of places, so it is advisable to install remote monitoring equipment to confirm that everything happening on the servers is within normal operating parameters, and to verify that only authorized staff are on the premises with a camera such as the D-Link DCS 6112 Indoor Dome Surveillance Camera. Up-to-date surveillance equipment helps neutralize the threat of a breach carried out by someone physically in the building, while following encryption protocols, continuously scanning servers for threats with anti-malware software and hiring security-minded IT personnel are ways data centers can keep their servers from leaking data. As time goes on, cybersecurity will become something more citizens are knowledgeable about, which means they will have more pointed and tougher questions for data centers. It's important that data centers be ready to answer those questions when they come.

New advances in data center temperature cooling and monitoring promise cheaper, greener future
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/new-advances-in-data-center-temperature-cooling-and-monitoring-promise-cheaper,-greener-future-641683
Thu, 10 Jul 2014 16:28:06 GMT - ITWatchDogs

Real-time temperature monitoring is the way for datacenters to save money in the long run, since the tracking data produced by monitoring lets facilities stay within defined energy and temperature parameters, says Jeff Klaus of Data Center Knowledge. Intelligent tweaking of CPU clock speeds, a push for more temperature-resistant boards and other advances are making data centers cheaper to power and greener than ever, but one of the most important advances may come from Oregon.

IT Aire, a start-up operating out of Oregon, has reduced energy consumption in a hometown datacenter by 78.5 percent, Upstart Business Journal reported. The leap in efficiency comes from a radical new cooling apparatus that uses no compressors, relying instead on evaporative (swamp) cooling for a much more economical approach. A reduction in cooling costs of this size could dramatically improve the bottom line of many data centers. Careful moderation of data center temperature is a top priority for any facility, and combining this new cooling system with temperature monitoring may make that job much less difficult.

Data Center Knowledge points out that for every degree that ambient temperature is raised, cooling costs drop by roughly 4 percent. Ambient temperature is frequently lowered to compensate for cooling systems that fail to meet requirements during peak hours, and this is a very cost-inefficient practice. Data centers should instead monitor temperature throughout the day so that processor speed and internal cooling mechanisms can be synchronized with heavy loads to provide consistent, even performance. The Watchdog 100 PoE Climate Monitor collects data center temperature information and allows real-time viewing of temperature, humidity, airflow, smoke and more through an online graphical user interface. The Watchdog can be configured to send alarms via email, SMS and voice call whenever emergencies happen, so managers can rest easy knowing they will be notified of important fluctuations in the data.
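
As a back-of-the-envelope illustration of that rule of thumb, the sketch below compounds the roughly 4-percent-per-degree saving against a made-up monthly cooling bill.

```python
SAVINGS_PER_DEGREE = 0.04  # ~4 percent cooling-cost reduction per degree raised

def estimated_cooling_cost(baseline_cost: float, degrees_raised: float) -> float:
    """Apply the ~4%-per-degree rule of thumb, compounded per degree raised."""
    return baseline_cost * (1 - SAVINGS_PER_DEGREE) ** degrees_raised

if __name__ == "__main__":
    baseline = 20_000.0  # hypothetical monthly cooling bill in dollars
    for delta in (1, 2, 4):
        new_cost = estimated_cooling_cost(baseline, delta)
        print(f"+{delta} degrees: ${new_cost:,.0f} (saves ${baseline - new_cost:,.0f}/month)")
```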

Green is Good
For most data centers, power is the biggest monthly operating cost, so reducing the power footprint is one of the best long-term ways to limit expenses. With its bid for a third solar farm in North Carolina, Apple is leading the rest of the industry by example. The math works out: when a company owns the means by which its data centers are powered, those facilities are much cheaper to operate in the long run. Not every company can purchase 100 acres in North Carolina, though, so smaller data centers may have to make do with solar panels on the roof, but even that is an important step toward long-term management of power costs.

Some data center temperature solutions are more technical than solar panels. Simply finding the inefficiencies in a server fleet can do wonders for the power bill at the end of every month. Minimizing idle servers can reduce cooling requirements by 10 to 15 percent, according to Data Center Knowledge. While clients deserve the speed they pay for, many data centers habitually over-provision clusters that don't need the power, which costs them in the long run as inefficiencies get baked into standard operating procedure. The best way to track data center temperature is through temperature monitoring software, so data centers should be sure to invest in the tools they need to regulate their performance.

Data center business paradigms change as information mobility becomes key concern
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/data-center-business-paradigms-change-as-information-mobility-becomes-key-concern-641997
Wed, 09 Jul 2014 14:01:41 GMT - ITWatchDogs

Remote monitoring may be the wave of the future in the new paradigm of data center leasing. ZDNet reports that the new model treats data centers less like office buildings, where space is leased for a specific long-term purpose, and more like hospitals, where capacity is occupied by jobs only for brief periods. The ever-increasing use of virtualization and cloud technology in office suites has made it inexpensive for businesses with sporadic data processing needs to rent data center capacity. At the same time, management systems face stricter requirements than ever before, with companies needing quick access to information and effortless shareability between employees and other data centers.

Part of this shift is leading to multi-tenant data centers, as reported by Datacenter Dynamics. These are typically formed between corporations that already have pre-existing business deals with each other, allowing them to create private connections between their companies while saving money on the cost of leasing data centers. The approach offers faster connections and better communication between business partners and the employees of all clients, and software solutions are being rolled out to help enterprise partners connect safely and securely on these private cloud networks. Part of the reason this works is remote monitoring of the facilities themselves, which gives collaborating businesses visibility into the workings of the data center and alleviates fears of corporate espionage and accidental data loss.

Navigating the shift
In the midst of this sea change, many data centers are using advanced tools such as temperature monitoring equipment and remote monitoring to keep track of facilities leased out to several clients at once. In situations like these, automated notification software like that on the Watchdog 100 PoE can help keep all partners happy and informed. These monitoring devices can automatically send texts, emails and voice messages to interested parties as soon as the sensors detect problems, cutting out the difficult task of sorting through different companies' hierarchies when emergencies do happen.
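
A minimal sketch of that fan-out logic, assuming each tenant registers its own contact list; the tenant names and the delivery stub are placeholders, since the appliance's own notification engine would handle actual delivery.

```python
from dataclasses import dataclass, field

@dataclass
class Tenant:
    name: str
    emails: list = field(default_factory=list)
    sms_numbers: list = field(default_factory=list)

def deliver(channel: str, address: str, message: str) -> None:
    # Placeholder for the monitoring appliance's email/SMS/voice delivery.
    print(f"[{channel}] -> {address}: {message}")

def notify_all_tenants(tenants, alert: str) -> None:
    """Fan an alert out to every tenant sharing the facility, so no one has
    to be tracked down through a partner company's org chart."""
    for tenant in tenants:
        for email in tenant.emails:
            deliver("email", email, alert)
        for number in tenant.sms_numbers:
            deliver("sms", number, alert)

if __name__ == "__main__":
    tenants = [
        Tenant("Acme Logistics", emails=["noc@acme.example"], sms_numbers=["+15550100"]),
        Tenant("Globex Retail", emails=["it-ops@globex.example"]),
    ]
    notify_all_tenants(tenants, "Rack row B humidity above threshold")
```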

With remote monitoring software that is up to the challenge, this new style of leasing out data centers to a coalition of interested parties has the potential to be very lucrative. The shift from private storage to cloud storage was merely the first step. The second is multi-tenant leasing.

Security through obscurity no security at all, experts warn
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/security-through-obscurity-no-security-at-all,-experts-warn-641484
Tue, 08 Jul 2014 16:11:01 GMT - ITWatchDogs

Although it is commonly believed that real security risks are something only multimillion-dollar corporations need to watch out for, small businesses are also a lucrative target for hackers and should be guarded accordingly. "We are absolutely facing an epidemic of attacks on our nation's infrastructure and attempts to gain access to information," Jason Oxman told the Los Angeles Times this week, commenting on the wave of hacking efforts that have hit organizations recently. Unfortunately, small businesses have been among those hit hardest by cybercriminals, because they tend to lack the security expertise and funding of their big-business counterparts. Server surveillance software, while useful, notifies its users when data is accessed or changed; it does not prevent fraudsters from gaining access to insecurely stored data.

Small businesses have a tendency to believe that data theft isn't their problem, according to the North Bay Business Journal. This leads to under-investment in security software and security protocols, even though many breaches come from a company's own information leaking out accidentally, through no fault of any external agent. The same cavalier attitude means that when these smaller companies are finally breached, they lack the financial backing to recover quickly from the attack. Data centers need to pay the most attention to the security needs of smaller clients, who may be unaware of the risks of operating without a security plan or server surveillance systems.

Especially at a small business, security is the job of every employee. Larger companies can have standards set in place that guide employees toward behaving in a security-conscious manner, but smaller ones need to have all team members on board to understand the potential fallout of negligence.

The cost of low security
Simple actions that don't seem like they could lead to security problems are potentially troublesome, security experts told the Times. Using social media, managing personal files or messaging friends from computers that are also used to track customer records can lead to breaches. Even allowing employees to log in to company networks remotely can become an avenue of attack, since it is far easier for a malicious intruder to exploit an employee's personal computing device than one that is physically at the company. Even accessing server surveillance software from a computer that hasn't been properly protected can be harmful.

It can be very hard to regain the trust of customers once a breach has occurred. The Times reported on a small California wine bar that lost many of its best customers in the wake of one incident. Customers expect and demand that their data be protected, and they tend not to expose themselves to risk by using cards or giving out personal information to establishments with a poor security track record. Even one breach can sour the reputation of a small business in the local community for a long time afterward. In these instances, it is best to contact customers directly as quickly as possible so they know they can still trust the business. Server surveillance software can help small businesses and datacenters get information to each other as fast as possible to limit this loss of customer trust.

Using the proper suite of tools, with a high priority on employee security training, antivirus and anti-malware software, server surveillance software, and other industry standards such as encryption, is the only way to keep valuable customer data from falling into the wrong hands.

New cloud research lab uses environmental monitoring for remote experimentation
http://www.itwatchdogs.com/environmental-monitoring-news/research-labs/new-cloud-research-lab-uses-environmental-monitoring-for-remote-experimentation-640796
Mon, 07 Jul 2014 13:58:22 GMT - ITWatchDogs

The publicity around Emerald Cloud Lab's new initiative to democratize the life sciences marks a major victory for environmental monitoring equipment. As Emerald co-founder and co-CEO D.J. Kleinbaum told The Wall Street Journal, "Anyone with a credit card and an internet connection will be able to go and run experiments." This new way of doing experiments, and the potential for fast-paced results, could produce a new wave of garage scientists similar to the IT boom of the '90s.

The speed with which the lab can conduct experiments is dizzying. The Journal reported that, as of last December, the team had processed over 50,000 microscope images and performed cell counts on them in the space of two years; by older methods, this would have taken a decade. The natural complement, reported by Digital Journal, is Emerald Cloud Lab's persistent online database, which is designed to let clients browse through experiments and sort through data with precision, including data gained from environmental monitoring. This breakthrough in processing could rapidly increase the speed with which biotechnicians get results and develop new ways of battling HIV and HPV, which is Emerald Therapeutics' current goal.

Always-on, accessible scientific experimentation for anyone with an internet connection and a credit card has never existed before, and the possibility of cheaply performing mass experiments is breathtaking. Highly sophisticated environmental monitors mean that data that would otherwise go unnoticed can now be leveraged to make experiments highly reproducible. Scientists are thorough, but there are always variables that can get left out; recording them through environmental monitoring sensors means the information is reclaimable and viewable even if it seems irrelevant at first glance. Because so much of reviewing data is finding patterns, this kind of record-keeping allows for more accurate predictions and a quicker pace at which life-saving cures can be found.

The role of environmental monitoring
Modern environmental monitoring equipment can collect a variety of data simultaneously with a high degree of accuracy. Dew point, relative humidity, temperature, airflow, light and sound can all be tracked in minute detail. These are variables that need to be controlled when performing experiments, and research centers are now seeing the value in sensors of this kind. Because many scientific experiments are highly sensitive, a spike in any of these categories can cause unintended reactions and potentially invalidate an experiment. One of the breakthroughs of this new research paradigm is its ability to finely control many variables at once, allowing for more reproducible experiments.
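
One plausible way to put such readings to work is to check each run's logged conditions against the tolerance band it was supposed to stay within. The sketch below is a simplified illustration with made-up bands, not Emerald's actual pipeline.

```python
# Hypothetical tolerance bands (low, high) for a run; a real protocol would
# define these per experiment.
TOLERANCES = {
    "temperature_c": (20.0, 24.0),
    "relative_humidity_pct": (35.0, 55.0),
    "dew_point_c": (5.0, 12.0),
}

def out_of_band(logged_readings: dict) -> dict:
    """Return the variables whose logged values fell outside tolerance,
    flagging the run for review before its results are trusted."""
    violations = {}
    for variable, (low, high) in TOLERANCES.items():
        value = logged_readings.get(variable)
        if value is not None and not (low <= value <= high):
            violations[variable] = value
    return violations

if __name__ == "__main__":
    run = {"temperature_c": 25.3, "relative_humidity_pct": 41.0, "dew_point_c": 9.8}
    print(out_of_band(run))  # {'temperature_c': 25.3}
```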

Environmental monitoring also serves as a safeguard in these labs, warning staff if any experiment is generating excess heat. In a staffed facility like Emerald Cloud Lab, it is as much a safety precaution as a scientific necessity. Janitors, IT personnel and security guards need to know they can trust their environment, which is why the RelayGoose II can send messages to a variety of recipients when temperature or any other variable it tracks crosses specific values. With this feature, staff and management can be notified immediately if temperature, light or sound readings move outside set thresholds, so everyone is on the same page about what is happening in the lab. If cloud computing does wind up being used in more research labs, environmental monitoring will be front and center as a tool for ensuring the accuracy of results and the safety of lab workers.

Energy politics affect datacenters and motivate environmental monitoring
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/energy-politics-affect-datacenters-and-motivate-environmental-monitoring-640500
Thu, 03 Jul 2014 12:56:31 GMT - ITWatchDogs

According to Data Center Knowledge, UK datacenters are expected to get tax breaks under the UK's cap-and-trade carbon legislation, as long as they can prove they have a plan to reduce their emissions. Meanwhile, Greenpeace is campaigning against US datacenters' substantial use of coal-powered electricity. However these individual stories play out, the lesson is simple: datacenters are a global business, and lowering power bills is the best way for them to improve their bottom line.

Datacenters have some of the clearest incentives of any business to go green. They run primarily off electricity from the grid and have relatively few staffing and other expenses compared with most other brick-and-mortar operations of their size. They are also intensively energy-consuming, with little to no opportunity to switch off circuits to save on costs. This is why it is a good idea for datacenters to invest, as Apple has in the past, in alternative methods of producing energy for their servers, as well as in environmental monitoring. The money that data centers can save with these solutions lends new meaning to the phrase "going green."

Technology-based solutions
Environmental monitoring can help datacenters adjust HVAC use throughout the day: raising fan speeds in dynamic response to the datacenter's by-the-minute temperature can reduce cooling costs year-round. This kind of monitoring can also track and prevent even slight changes in temperature and humidity, sparing hardware the swelling and shifting it would otherwise undergo as conditions fluctuate.
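
A minimal sketch of that kind of adjustment, assuming the monitor supplies a current temperature and the HVAC accepts a discrete fan-speed setting (both are placeholders); the deadband keeps the fans from oscillating as readings hover near the setpoint.

```python
SETPOINT_C = 24.0  # target cold-aisle temperature (assumed)
DEADBAND_C = 1.0   # hysteresis band to avoid constant switching

def next_fan_speed(current_temp_c: float, current_speed: str) -> str:
    """Step fan speed up when the room runs warm, down when it runs cool,
    and hold steady inside the deadband."""
    if current_temp_c > SETPOINT_C + DEADBAND_C:
        return "high"
    if current_temp_c < SETPOINT_C - DEADBAND_C:
        return "low"
    return current_speed  # inside the deadband: leave the fans alone

if __name__ == "__main__":
    speed = "low"
    for reading in (23.2, 24.4, 25.6, 24.8, 22.7):  # simulated by-the-minute readings
        speed = next_fan_speed(reading, speed)
        print(f"{reading:.1f} C -> fan {speed}")
```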

Other strong ways to help datacenters reduce their power bill include designing the building with insulation in mind. Wet, damp environments, while detectable through environmental monitoring, can severely damage and degrade computer equipment over time. Datacenters should build in efficient ways that allow air to be both insulated and circulated through the building easily. Although many lauded Apple for its decision to build a 100 percent solar-powered datacenter, it is primarily a business move: energy that doesn't cost money in the long run is a good investment for any company that expects to use a lot of power. Likewise, cutting costs through environmental monitoring is as financially wise a choice as it is environmentally sound.

More thorough IT disaster planning needed for consumer confidence in the cloud
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/more-thorough-it-disaster-planning-needed-for-consumer-confidence-in-the-cloud-640251
Wed, 02 Jul 2014 14:47:22 GMT - ITWatchDogs

According to a recent survey by EMA (Enterprise Management Associates), 80 percent of respondents said that current cloud providers' policies lack the IT disaster planning their businesses would need to be comfortable in the cloud. Businesses are still adopting cloud technology, but many are doing so in separate deployments, with departments using several different cloud storage services out of fear of data loss. That fear isn't unfounded: 88 percent of respondents said they had faced unexpected challenges while trying to wrangle cloud technology, and roughly half said they viewed disaster recovery as a "key advantage" of hosting in the cloud. For many businesses, then, IT disaster planning is vital in any datacenter hoping to host their information.

Good IT disaster planning means that datacenters must not only keep client data safe from security breaches but also develop strategies for dealing effectively with extreme situations. Part of the original pitch behind cloud storage was its promise of constant uptime, and when datacenters fail to keep data secure and available, they lose sight of that goal. Information Management reported that companies moving to the cloud have begun diversifying their cloud computing providers. The fact that businesses are dispersing their information among different clouds to mitigate risk suggests that datacenters, along with other services that leverage the cloud, need to firm up security and disaster preparedness in order to retain client confidence.

Preventative planning
There is no better way to plan for disasters than to make sure they don't happen in the first place. Extreme weather can damage servers, make data inaccessible and leave major portions of data missing in its wake unless a datacenter prepares before the storm arrives. The SuperGoose II can help prevent major damage by monitoring temperature, humidity, airflow, light and sound. Servers that aren't protected when extreme weather hits can suffer serious damage unless precautions are taken to cool the room, run backup generators and, if necessary, perform emergency shutdowns. Without IT disaster planning, these problems can sink a datacenter for good.

Datacenters also benefit from a plan for what to do after a major incident like a storm. The plan should include a system of backups and a formalized timeline for how to proceed after a disaster, so the datacenter can maintain positive relations with its clients when one hits. Telling a client that service will be restored in a day, a week or even two weeks, rather than admitting that no one knows, is often the difference between keeping that client and losing them. Thorough IT disaster planning includes good preventative care as well as business practices that ensure clients feel cared for while their data is recovered.

With the features present in the WatchDog 100, it is possible to customize alerts so that clients are aware of changes affecting their servers as they happen. Its customizable alarms can send email, SMS and voice call notifications, giving clients instant notice when serious action is required on the part of the datacenter. Emergency maneuvers can then be performed immediately, without the added burden of manually notifying clients during power failures, inclement weather or other IT disaster scenarios. Just like physical security, digital security is not primarily about preventing disasters from happening, but about addressing them and moving forward through them with clients so that the worst effects are avoided.

Datacenter monitoring may be only hope against cybercriminals
http://www.itwatchdogs.com/environmental-monitoring-news/data-center/datacenter-monitoring-may-be-only-hope-against-cybercriminals-639909
Tue, 01 Jul 2014 18:13:38 GMT - ITWatchDogs

Proactive use of datacenter monitoring equipment may be the only way to keep a server room from falling victim to new waves of state-sponsored cybercrime. According to Symantec, these attacks are coming from Eastern Europe and are primarily targeting power companies. So far the attacks have only exfiltrated information, but rootkits installed by the hackers open the door to sabotage and destabilization of the power grid serving the affected companies.

Without datacenter monitoring keeping an eye on servers while they are in use, a blackout could take HVAC systems offline or even knock out a server altogether, threatening extended downtime. Backup generators are useful, but because the electronic crime group known as Dragonfly appears to operate on UTC+4 working hours, attacks could easily arrive outside normal business hours, making monitoring systems necessary for quick detection. Many businesses keep servers running on emergency power but fail to account for the strain that hardware faces while on it: over extended periods, servers running off emergency generators can go without proper HVAC support, because backup power for the servers doesn't guarantee backup power for the AC.

Monitoring power and heat
Using datacenter monitoring hardware to notify IT personnel of fluctuations in power and heat can be vital to keeping data safe and servers stable. The best-selling Remote Power Manager X2 is an excellent choice for companies that need to be sure the power reaching their servers doesn't falter. Its ability to send alerts whenever power readings exceed specified thresholds keeps servers running and alerts IT teams to the need to switch to backup power, or even to shut systems down in an emergency.

Datacenter monitoring can also warn staff ahead of time when HVAC systems shut off due to brownouts and power outages. The Watchdog 100 PoE uses its on-board temperature, humidity and dew-point sensors to detect even subtle changes in conditions, allowing datacenter operators to act decisively and keep systems from frying when artificial cooling is lost.

Power and data
Dragonfly's targeting of power centers goes to show how much modern business runs on electricity, and if this is true for most businesses, it is doubly true for datacenters. Keeping an eye on power through datacenter monitoring can make the difference between happy clients and unsatisfied clients. Although heat, moisture, and dust all can cause problems for servers' daily operations, nothing stops a server from functioning like an absence of power. This, above all else, should be a major concern in the new era of the always-connected servers and organized hacking outfits.

Beyond that, there is always the lurking threat of cybercrime: if hackers are targeting power companies, who is to say that the next weak link in the business infrastructure won't be datacenters themselves? Malware that finds its way onto server hardware could be used for sabotage in much the same way that malware installed in power companies' systems might be. The trick to keeping servers cool is the same trick to keeping bodies warm in winter: layers. Installing the proper safeguards and maintaining strict security guidelines is one layer, and keeping datacenter monitoring eyes on the servers themselves is another key line of defense between data and saboteurs.

As cellular network demands grow, network monitoring becomes more important
http://www.itwatchdogs.com/environmental-monitoring-news/telecom/as-cellular-network-demands-grow,-network-monitoring-becomes-more-important-637053
Mon, 30 Jun 2014 10:32:02 GMT - ITWatchDogs

The number of people across North America and around the world who own mobile devices and use them to access the web is growing at a massive clip, and telecommunications companies need to invest further in infrastructure and network monitoring in order to keep up.

Before BlackBerrys and then iPhones hit the market, mobile internet access was largely limited to scant Wi-Fi hot spots, if it existed at all. But over the past 10 years or so, handheld computing has come to the fore and redefined how people get online. For example, according to the Pew Research Center