My latest Critical Thinking column over at Data Center Knowledge looks at edge colocation start-up EdgeMicro, which wants to build out a network of container data centres on mobile towers to serve content.

EdgeMicro co-founder Greg Pettine explained to me that the company’s business model is similar in some respects to edge colocation specialist EdgeConneX but involves building out capacity in much smaller chunks.

“If you look at the EdgeConneX model, they were funded by the cable guys, by Akamai and so forth, who said we need 500kW of capacity in Tier 2 cities,” says Pettine. “Our focus is to just spread that out. As they expand – Netflix, Amazon, any of them – they will need to put more capacity in multiple markets in the next 12 months. What if we spread out and put that in the wireless edge, where most of the content is being consumed anyway?”

EdgeMicro's secret sauce is an appliance called Tower Traffic Xchange, or TTX. The TTX essentially creates an intelligent interface between wireless devices, wireless networks, and content — something that Pettine says hasn't existed until now. The current process for serving content to a mobile device can be extremely convoluted and expensive.

In a session entitled ‘The race to Exascale – meeting the IT infrastructure needs of HPC’, a panel of experts discussed the benefits of achieving the next big breakthrough in supercomputing.

Peter Hopton, founder of UK-based liquid cooling specialist Iceotope, argued that China is focused on being the first to reach exascale but is less interested in the benefits the breakthrough could bring.

“They will be able to achieve the required number of flops to say they have done it. But it’s a bit like Neil Armstrong landing on the moon and then taking off again without stepping out of the door because they didn’t build a door on the capsule,” he said.

However, speaking at the London event, Brown said the technology will be more widely adopted as rack power densities increase due to the expected growth in GPUs and other factors.

Immersive DLC – where servers are submerged in dielectric fluid – can enable operators to lower cooling costs by more than 15% compared to conventional air-based cooling, according to Schneider. The company is expected to release more details of its TCO analysis in the near future.
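As a rough illustration of how that kind of cooling saving might be estimated, here is a back-of-envelope sketch. All of the figures below – IT load, electricity price, and the cooling overheads for air and immersion – are illustrative assumptions, not Schneider's actual TCO model:

```python
# Back-of-envelope comparison of annual cooling energy cost for
# conventional air cooling vs immersive direct liquid cooling (DLC).
# All figures are illustrative assumptions, not Schneider Electric's
# actual TCO numbers.

IT_LOAD_KW = 1000          # assumed IT load
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10       # assumed electricity price, USD

# Assumed cooling overhead as a fraction of IT load
# (cooling power = overhead * IT power).
AIR_COOLING_OVERHEAD = 0.30
IMMERSION_COOLING_OVERHEAD = 0.25

def annual_cooling_cost(overhead):
    cooling_kw = IT_LOAD_KW * overhead
    return cooling_kw * HOURS_PER_YEAR * PRICE_PER_KWH

air = annual_cooling_cost(AIR_COOLING_OVERHEAD)
immersion = annual_cooling_cost(IMMERSION_COOLING_OVERHEAD)
saving = (air - immersion) / air

print(f"Air cooling:    ${air:,.0f}/yr")
print(f"Immersion DLC:  ${immersion:,.0f}/yr")
print(f"Cooling saving: {saving:.0%}")
```

Under these assumed overheads the sketch lands at a saving of around 17%, in the same ballpark as the "more than 15%" figure – but the real analysis would also need to account for capex, pumps, fluid, and maintenance, which is presumably what Schneider's full TCO model covers.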

The European Union-funded RenewIT tool is a finalist in the DCD Awards.

I was part of the team that helped to develop the tool, which was recognised by the EU as one of the most successful datacenter projects it has ever funded.

The free online tool enables datacenter locations across Europe to be compared in terms of energy efficiency and availability of renewables. The tool took three years to develop and involved organisations from across the continent.
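In spirit, that kind of comparison amounts to ranking candidate sites on a handful of per-location metrics. A toy sketch of the idea follows – the site names, PUE values, and renewable shares are all invented for illustration and bear no relation to RenewIT's actual models or data:

```python
# Toy sketch of comparing candidate datacenter locations on energy
# efficiency (PUE, lower is better) and renewable availability
# (share of grid energy from renewables, higher is better).
# All site names and figures are invented for illustration; the real
# RenewIT tool uses far more detailed energy and climate models.

sites = {
    "Site A": {"pue": 1.6, "renewable_share": 0.30},
    "Site B": {"pue": 1.2, "renewable_share": 0.80},
    "Site C": {"pue": 1.4, "renewable_share": 0.55},
}

def score(metrics):
    # Simple composite: weight efficiency and renewables equally.
    efficiency = 1.0 / metrics["pue"]  # invert so higher is better
    return 0.5 * efficiency + 0.5 * metrics["renewable_share"]

ranked = sorted(sites, key=lambda name: score(sites[name]), reverse=True)
for name in ranked:
    print(name, round(score(sites[name]), 3))
```

A real comparison would fold in electricity prices, climate data for free cooling, and grid carbon intensity, but the ranking structure is the same.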

My latest feature for Datacentre Dynamics has been published. I have to say the layout is excellent – just hope the content lives up to it. Big thanks to James Wilman over at Future-tech and Steve Carlini from Schneider Electric for contributing to the article.

The article looks at the process of investigating the causes of unplanned downtime. The fault might be in the mechanical and electrical systems but the detective work often begins at the server level and tracks back up the power chain. You can access a digital version of the article via the DCD website.

This week’s column for Data Center Knowledge looks at AMD’s plan to get back into contention in the data center. I spoke with the chipmaker’s de facto head of data center strategy, Forrest Norrod. The interview was obviously heavily skewed towards silicon roadmaps, but Norrod’s previous roles at Dell included a more holistic view of the data center, so we were able to talk cooling technologies, data center software management tools and rising power densities.

The idea that everyone has a perfect job, or niche, based on their personality and skill set could be seen as either positive or potentially constraining. My latest blog over at Verne Global applies that idea to computing, looking at the concept of a Best Execution Venue for workloads and whether there is really an ideal venue in which every workload can thrive.

HPC is obviously a nebulous term that covers a whole range of systems, from mid-level servers right up to supercomputers. The disruption from cloud is probably being felt most at the lower end, but it’s still notable that a supercomputer supplier like Cray is now plugged into Azure.

The large scientific supercomputing institutions are unlikely to farm out workloads to the cloud, but there is likely to be appetite from industry as demand for deep learning in fields such as automotive and aerospace ramps up.

My latest column for Data Center Knowledge is on direct liquid cooling (DLC). I’ve been researching and writing about DLC for several years now, but it finally feels like there is demand-side momentum building to match the fervour from suppliers. The pros and cons of the various flavours of DLC still stand, but with support from all of the main server OEMs, adoption by some hyperscalers, and even interest from colos, liquid cooling may finally be reaching a tipping point beyond traditional HPC.