Editor's Note: This is part two in a series of articles on common mistakes made when
choosing a colocation data centre provider. Check out part one here.

After I made the decision to hire rack space from a commercial provider, the first company I
went with turned out to be a misstep in a number of ways. I have named that provider
"Colocation-R-US" and from that bad experience, I've developed a checklist to share with others so
they don't make the same colocation blunders when choosing a data centre provider.

Beware of dark sites
While scoping out a new supplier after firing Colocation-R-US, it became clear why the company
had such strict rules about site visits and charged so punitively when I wanted to arrange
off-hours maintenance visits -- the ideal time for performing intrusive work: The owners of
Colocation-R-US were running a "dark site". That's a term I hadn't heard until I subsequently began
working with a quality colocation data centre provider, Node4 Ltd.

"Dark site" is the term used to describe a location where rack space is provided, but the
site is not staffed 24/7. Instead, the support staff clock off at normal office hours and the site
is operated with either very few or no members on the premises. Colocation data centre providers
that operate in this way often pay their staff a nominal extra amount of money a month in exchange
for being "on call".

Since my colocation data centre hosts a non-business-critical lab configuration, I would very
rarely raise a ticket at 3 a.m. to ask for "remote hands" assistance. On the one occasion I did,
it was because I was in the US and the time difference meant it was very early in the morning in
the UK.

Learning that Colocation-R-US was operating as a dark site explained why the person on the
other end of the phone sounded like I had woken him from his comfortable bed -- because I had. It
also explained the punitive costs I had to pay to carry out maintenance during the weekend: My
colocation data centre provider had to cover the unscheduled overtime of having a staff member
present when I arrived.

So how do you know if your colocation data centre is
operating as a dark site? The first step is to ask them bluntly and see if they squirm. It was my
mistake never to have asked this question of Colocation-R-US.

Another way to tell is during your site visit. Generally, professional colocation data centres
will have a dedicated space to house their staff -- who work on a shift basis at the data centre
-- with an impressive array of screens, seating and refreshments. Look for this type of setup when
visiting potential colocation provider sites. Such an investment is rarely made for dark sites.

Beware of half-rack packages
If your initial requirements are quite modest, you might be tempted to lease half of a conventional
42U rack. This is especially true of folks who want to relocate a small number of noisy 1U or 2U
servers from a home lab. A half rack is not unlike a house that has been split into two apartments.
You might not currently have anyone living above you, but you could in a few weeks or months.
Whilst most colocation businesses like to have some "slack" in their halls, any unallocated space
is a cost to the business and a lost chance to make some profit, so more often than not they are
keen to fill these spaces.

This raises issues of security and availability. Some racks are fitted with a single door from
U1 to U42, which makes it impossible to secure your equipment from access by another tenant. Not
only could someone use this physical access to breach your equipment from a security perspective,
but, even more importantly, someone else's stupidity could be the cause of an outage: Another
business could accidentally uncouple one of your systems from the network.

The other issue -- of availability -- affected me personally. Initially, when I moved into
"Colocation-R-US" I had just four 2U servers and one 4U SAN unit, together with the usual
prerequisites such as an Ethernet switch and a firewall. Later on I was lucky enough to receive
enterprise storage from both EMC and NetApp on a long-term loan. Sadly, by then the space above me
in the rack was already occupied. The only short-term solution (apart from leaving Colocation-R-US
altogether) was to completely re-rack my kit and move to another rack a couple of feet away. As you
well know, once a system is cabled up and functioning, the last thing you want to do is relocate
it.
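To see how this availability trap plays out, here is a minimal sketch (the rack layout and U positions are hypothetical, loosely modelled on my own kit list) that computes the largest contiguous run of free rack units left once a neighbour moves in above you:

```python
def largest_free_run(occupied: set[int], rack_units: int = 42) -> int:
    """Return the largest contiguous run of free U positions in the rack."""
    best = run = 0
    for u in range(1, rack_units + 1):
        run = 0 if u in occupied else run + 1
        best = max(best, run)
    return best

# Hypothetical layout: my half (U1-U21) holds four 2U servers, a 4U SAN,
# a 1U switch and a 1U firewall -- 14U in total, occupying U1-U14.
mine = set(range(1, 15))
# The space above (U22-U42) is now fully let to another tenant.
neighbour = set(range(22, 43))

print(largest_free_run(mine | neighbour))  # 7 -- no room for a large array
```

With only seven contiguous units free, a new multi-U storage loan simply cannot go in without re-racking everything, which is exactly the situation I found myself in.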

Not all racks are the same
I'm hoping this is a very basic issue that most IT professionals are aware of. You would think a
rack is a rack is a rack, but in the world of colocation data centres, you might find that
equipment mounted in your own environment doesn't fit into the colocation data centre's
infrastructure. Although there's a standard (EIA-310) defining the 19-inch rack, there is great
variance in how that standard is implemented.

Most colocation sites I have visited use square holes for mounting equipment, but occasionally
I've found that OEM rail kits use round pegs for their mounting screws. For example, Dell's
RapidRails only work with square holes, whereas Dell's VersaRails work in round unthreaded holes.
All the EMC hardware I've ever had the pleasure of borrowing comes with its own rails attached, and
in some cases ships in a rack of its own. EMC storage comes with round screw heads that don't fit
the square holes at the back of most racks.

Whilst this isn't the end of the world, you might find yourself having to unrack and uncable
equipment to fit it into the racks provided by the colocation provider, and in the process of
moving kit from one rack to another, a screw or mounting post can disappear down inside the rack or
under floor cooling ducts. Most racks in colocation data centre sites are adjustable -- and at my
new colocation at Node4, the staff were very helpful in lugging my kit into the rack and making the
necessary adjustments.

Note: These two photographs show the front and rear of the rack containing the new EMC
NS-120 that I have on loan. Node4 helped me unrack it and re-rack it into its location in the data
centre.

It's storage, not a server!
Going back to my experience at Colocation-R-US, one thing about the sales and technical staff there
often used to drive me nuts: They were unable to distinguish servers from storage. Yes, it might
seem an obvious distinction to you, but at many colocation sites you will find rack after rack
filled with 1U and 2U commodity servers. I think this gives the game away to some degree with the
less quality-oriented environments. Their staff are often geared up to fill their space with
servers, not storage arrays, and this can sometimes be a pain point from a power perspective if
your rack is skewed more towards storage than servers, as it was in my case.

This distinction is a minor one, but it irked me that even after taking the plastic cover off a
storage array and pointing at the vertically mounted disks, the folks there would still insist that
my storage was a server.

Calculate your colocation power requirements
In the world of corporate environments, most people measure their power consumption by the
kilowatt-hour (kWh). There are a couple of reasons for this: Most OEM equipment is rated this way,
and you are billed in this format by the power company that supplies your location.

However, in the world of colocation data centres, where infrastructure is sold by the rack
and the half rack, the measurement of choice is the amp. The main reason is that colocation
data centres use the amp meter built into most power distribution units (PDUs) to measure your
actual usage.

Additionally, they must take care that one customer doesn't draw more power at the expense of
another. So on top of PDUs, circuit breakers are there to ensure your kit doesn't draw more power
than expected. As a consequence, you will see most half-rack packages sold with an 8 amp
allocation, and most 42U racks sold with a 16 amp allocation. The ratio of amps to rack is based
on the assumption that you will be racking up more servers and network equipment than storage
equipment.

Storage is more power-hungry because its disk spindles are constantly turning, night and day.
So, if you are storage top-heavy, as I am, be careful when calculating your amp rating. You may
find that a 16 amp allocation to your rack is insufficient, which means you will have to buy amps
in increments decided by your provider. In some cases, providers sell in small increments in the
range of 0.5 amp to 1 amp; in other cases, they sell in blocks of 2 amps to 4 amps.
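As a rough sketch of the arithmetic, here is how you might estimate your draw in amps and round any shortfall up to the provider's sales increment. The supply voltage (230 V, typical for the UK), the nameplate wattages and the 2 amp increment are all illustrative assumptions; substitute your own figures:

```python
import math

MAINS_VOLTAGE = 230  # assumed UK single-phase supply; check your site

# Hypothetical nameplate ratings in watts for a storage-heavy rack
kit_watts = {
    "2U servers (x4)": 4 * 350,
    "4U SAN": 900,
    "Ethernet switch": 60,
    "firewall": 45,
}

# Amps = watts / volts (for a simple steady-state estimate)
total_amps = sum(kit_watts.values()) / MAINS_VOLTAGE

included_amps = 16  # typical full-rack allocation
increment = 2       # assumed: provider sells extra power in 2 amp blocks

shortfall = max(0.0, total_amps - included_amps)
extra_blocks = math.ceil(shortfall / increment)

print(f"Estimated draw: {total_amps:.1f} A")
print(f"Extra {increment} A blocks to buy: {extra_blocks}")
```

With these example figures the kit draws roughly 10.5 amps and fits comfortably inside a 16 amp allocation; swap in a couple of fully populated disk shelves and the same sum can easily push you over.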

This can be an unexpected cost in your colocation data centre plans. If you are lucky, you may
be able to measure the number of amps your kit pulls from an existing infrastructure, as when
relocating existing kit to colocation. If it's your first time, however, and your hardware has been
running off a domestic supply or in a small office, you might be totally unaware of its amp
rating.

Don't assume that if you need more power you can simply demand it. Asking for more amps can be
difficult. For a start, the number of power sockets available on the PDU may be limited; the amp
rating of your PDU or circuit breaker -- or even just the total amount of free power on your
particular side of the location -- might make turning up the amp dial more difficult. So you might
find that you need the PDU or the circuit breaker upgraded, or you need an additional PDU because
you have run out of sockets to plug equipment into.

Also be wary of the amp rating: It varies depending on the load. In my case, my lab environment
is not in production use, so the amp measurements are relatively steady and linear, but the same
cannot be said of production environments. What are the dangers of not calculating your amp ratings
correctly? Well, you could have problems charging the onboard UPS systems that some storage arrays
ship with. You then run the risk of blowing fuses or exceeding the rating of your circuit
breaker.
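One way to sanity-check this is to apply a margin for disk spin-up and UPS battery charging before comparing your steady draw against the breaker. This is only a sketch: the 1.3x peak factor below is an illustrative assumption, not a rule, so check your hardware's actual inrush figures.

```python
def fits_breaker(steady_amps: float, breaker_amps: float,
                 peak_factor: float = 1.3) -> bool:
    """Return True if the estimated peak draw stays within the breaker rating.

    peak_factor is an illustrative allowance for disk spin-up and UPS
    battery charging; it is an assumption, not a vendor figure.
    """
    return steady_amps * peak_factor <= breaker_amps

# A steady 13 A draw would peak near 16.9 A -- over a 16 A breaker
print(fits_breaker(13.0, 16.0))  # False
# A steady 11 A draw peaks near 14.3 A -- still within the breaker
print(fits_breaker(11.0, 16.0))  # True
```

The point of the margin is that a rack which looks fine on its steady-state meter reading can still trip the breaker the first time every spindle spins up at once after a power event.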

If you're shopping for a colocation data centre, and you think you might have power requirements
that exceed the normal expectations of the provider, make your sales rep and technical staff aware
of this at the earliest opportunity.

To find out more common mistakes made when choosing a colocation data centre provider, click
here for part
three of this series.

MIKE LAVERICK'S BIO: Mike Laverick is a professional instructor with 15 years of experience with technologies such as
Novell, Windows and Citrix and has been involved with the VMware community since 2003. Laverick is
a VMware forum moderator and member of the London VMware User Group Steering Committee. In addition
to teaching, Laverick is the owner and author of the virtualisation website and blog RTFM
Education, where he publishes free guides and utilities aimed at VMware ESX/VirtualCenter users. In
2009, Laverick received the VMware vExpert award and helped found the Irish and Scottish user
groups. Laverick has had books published on VMware Virtual Infrastructure 3, VMware vSphere 4 and
VMware Site Recovery Manager.
