Author: adonoghue

My new role at Vertiv has kept me busy – in an exciting way – over the last couple of months so I’ve been a bit remiss in keeping this blog up to date.

However, events last week deserve a special mention.

Vertiv held its first Innovation Summit in Zagreb, Croatia on 16th and 17th April. The event had a regional focus, with more than 300 delegates from across Central and Southern Europe, from Croatia to Israel.

The central theme of the event was edge compute.

If that term sends a small shudder down your spine – it shouldn’t. Edge is an important trend, but it has also attracted a lot of hype, which has at times outpaced technical clarity about what edge actually means in practice.

As Vertiv EMEA president Giordano Albertazzi puts it succinctly in this LinkedIn post, there is something of an edge knowledge gap out there between how some suppliers are using the term and how end-users understand ‘the edge’.

“…one of the questions raised during a round table with journalists was about whether edge is really a new phenomenon or simply a re-branding exercise for existing branch office computing or content distribution networks?

I can understand that view, but I think edge as it is being defined now is most definitely something new and on a different scale to anything that we have seen before.

True, we have infrastructure – such as our range of prefabricated modular data centres manufactured outside of Zagreb – which predate the current focus on edge. But we are also already seeing demand for those systems in a range of new edge deployments.

So while there is certainly a ‘legacy edge’, there will also be a large number of clearly distinct and disruptive use cases which we believe will proliferate well beyond any pre-existing notions of edge.”

Vertiv is doing its bit to bridge the knowledge gap with an ongoing research project to put more meat on the bones of edge, including defining a series of edge use cases and archetypes.

Coincidentally, I recently returned from my second trip to Norway, where I witnessed first-hand some of the things it has to offer data center operators.

A chilly boat trip on the fjords just outside Bergen

First the positives: One that springs immediately to mind is the temperature. After visiting in February, I can report that Norway is indeed a great place for free cooling (the only thing that is free in Norway it seems). The temperature where I was, in Bergen, barely rose above 5°C (40°F), and it’s one of the warmer parts of the country, thanks to the Gulf Stream.

Mock-up of Lefdal Mine Datacentre

Norway already has some established data center operators, such as Green Mountain, Digiplex, and Basefarm. One of the most recent projects is also one of the most interesting: the Lefdal Mine Datacentre (which we have written about before) has ambitions to be the largest facility in Europe and, as the name suggests, it is completely underground.

So given all that, why hasn’t Norway been able to attract a hyper-scale operator to date? Head over to Data Center Knowledge to read the full column.

NOTE: As mentioned above, this was my last column for Data Center Knowledge as I’ve got an exciting new role which I will be discussing soon.

Big thanks to Yevgeniy Sverdlik and the team for allowing me to contribute to the great editorial over at Data Center Knowledge. I look forward to continuing to work with them in my new role.

My latest column over at Data Center Knowledge is a timely riff on the potential for space-based data centers off the back of the jaw-dropping SpaceX Falcon Heavy launch this week.

The commercialization of space is nothing new, nor obviously is the use of satellites for telephony, internet connectivity, navigation, or broadcasting. However, the idea of a network of data centers orbiting the Earth – powered by the Sun and cooled by the icy vacuum – still seemed more science fiction than fact until very recently.

Elon Musk is not a man who seems overly concerned with orthodox thinking. This week, his company SpaceX fired yet another rocket – specifically Falcon Heavy, the most powerful rocket in operation today – right through the space exploration rulebook. To emphasize his point, the payload was a cherry-red Tesla roadster that is now headed on a trajectory that will (contrary to the original plan) take it beyond the orbit of Mars.

There are already a few different space-based data and networking start-ups out there worth checking out.

This week’s Critical Thinking column for Data Center Knowledge is based on a recent interview with Michael Dongieux, founder and chief executive of Fulcrum Collaborations.

Fulcrum has developed a cloud-based platform for facilities management called MCIM. It can be used to automate many of the day-to-day management tasks that were previously done using spreadsheets or manual checklists.

The main benefit of MCIM, according to Dongieux, is the insight it can give into the cost of maintaining specific pieces of equipment, and into how that equipment performs not just at one site but across multiple data centers.

“When someone logs an incident report, they are able to associate every asset or assets that were involved in the incident and then say what the source of that failure was. That information is crowdsourced and clustered automatically. That enables us to correlate not only what the asset condition index, or ACI, score is of a particular piece of equipment, but we can also say for example that at 85 percent of their useful life, centrifugal chillers typically start to see an increasing occurrence of a specific kind of failure,” said Dongieux.
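To make that idea concrete, here is a minimal sketch of the kind of analysis Dongieux describes: incident records tagged with an asset and a failure mode, grouped by how far through its useful life the asset is. All field names, data, and the 85 percent threshold are illustrative assumptions for this sketch, not Fulcrum's actual schema.

```python
# Hypothetical sketch of correlating equipment life-stage with failure modes
# across sites. Field names and thresholds are assumptions, not MCIM's schema.
from collections import defaultdict

# Each record links an asset type to a reported failure mode, crowdsourced
# from incident reports as described in the interview.
incidents = [
    {"asset_type": "centrifugal_chiller", "life_used_pct": 87, "failure": "bearing_wear"},
    {"asset_type": "centrifugal_chiller", "life_used_pct": 90, "failure": "bearing_wear"},
    {"asset_type": "centrifugal_chiller", "life_used_pct": 40, "failure": "sensor_fault"},
    {"asset_type": "ups", "life_used_pct": 88, "failure": "battery_degradation"},
]

def failure_rates_by_life_stage(records, threshold_pct=85):
    """Count failure modes per asset type, split at a useful-life threshold."""
    counts = defaultdict(lambda: defaultdict(int))
    for r in records:
        stage = "late_life" if r["life_used_pct"] >= threshold_pct else "early_life"
        counts[(r["asset_type"], stage)][r["failure"]] += 1
    return {key: dict(modes) for key, modes in counts.items()}

print(failure_rates_by_life_stage(incidents))
```

With enough incident volume across a customer base, this sort of grouping is what lets a platform say that a given failure mode starts clustering late in a chiller's life.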

This week Vertiv, previously Emerson Network Power, made its second acquisition in as many weeks.

Even without going into the specifics, the deals are important in terms of proving that Vertiv’s new owner Platinum Equity is serious about investing in the equipment supplier’s future growth.

Drilling in deeper, the latest acquisition of PDU specialist Geist was important for a number of specific reasons:

There is increasing demand from hyperscalers and large colos for integrated racks with power distribution, monitoring, and other capabilities built in. The Geist purchase adds to Vertiv’s capabilities in this important and growing area.

Geist also has an innovative approach to manufacturing, with production times cut to less than a week for custom equipment. That fast turnaround of custom kit is also important for large operators.

Geist also has existing customers, including large hyperscalers and colos, to which Vertiv now has access and into which it should be able to sell additional products and services.

All of these factors are important in the short term.

But it’s also interesting to think about how Vertiv will use, or enable its customers to use, data from PDUs and other equipment it has acquired and developed internally.

To understand more about how flash storage has gone from a relative outlier to an accepted and core part of the data center infrastructure stack, we spoke with Alex McMullan, CTO EMEA, of flash specialist Pure Storage.

As well as explaining how flash can help improve overall data center efficiency, he also discussed how it supports and enables other disruptive technologies, such as machine learning (ML).

McMullan estimates that up to 20 percent of Pure’s customer base are investing significantly into machine learning and deep learning right now, including what he says are some of the biggest AI projects in the world.

My latest Critical Thinking column over at Data Center Knowledge is part of the site’s focus on all things AI in the data center industry this month.

The hope is that AI-driven management software (likely cloud-based) will monitor and control IT and facilities infrastructure, as well as applications, seamlessly and holistically – potentially across multiple sites. Cooling, power, compute, workloads, storage, and networking will flex dynamically to achieve maximum efficiency, productivity, and availability.

While it’s easy to get caught up in the exciting and disruptive potential of AI, it’s also important to reflect on the reality of how most data centers continue to be designed, built, and operated. The fact is that a lot of the processes – especially on the facilities side – are still firmly rooted in the mundane and manual.

And as Google nearly found to its cost, the answers and actions delivered by AI systems may not always be what was originally anticipated.

Just as Skynet in the film The Terminator took a dispassionate, logical view of preventing conflict and concluded that mankind itself was the problem, Google’s algorithm reached a very simple and accurate conclusion about improving the efficiency of its sites:

The model’s first recommendation for achieving maximum energy conservation was to shut down the entire facility, which, strictly speaking, wasn’t inaccurate, but wasn’t particularly helpful either.
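The anecdote illustrates a general point about optimization: an objective with no constraints attached will happily pick a degenerate answer. A toy sketch, with entirely made-up numbers and nothing to do with Google's actual system, shows how adding an availability constraint changes the recommendation:

```python
# Toy illustration: an optimizer told only to minimize energy picks full
# shutdown; requiring a minimum availability rules that answer out.
# All plans and numbers below are invented for this sketch.

def best_plan(plans, min_availability=None):
    """Pick the lowest-energy plan, optionally above an availability floor."""
    if min_availability is not None:
        plans = [p for p in plans if p["availability"] >= min_availability]
    return min(plans, key=lambda p: p["energy_kw"])

plans = [
    {"name": "full shutdown", "energy_kw": 0,    "availability": 0.0},
    {"name": "half capacity", "energy_kw": 600,  "availability": 0.95},
    {"name": "full capacity", "energy_kw": 1100, "availability": 0.999},
]

print(best_plan(plans)["name"])                         # -> "full shutdown"
print(best_plan(plans, min_availability=0.99)["name"])  # -> "full capacity"
```

The unconstrained answer is the model's "shut down the entire facility" recommendation: strictly correct, entirely useless until the objective also encodes what the facility is for.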