Could containerised server protection be coming too? It's likely

Backgrounder Commvault’s Chief Communications Officer Bill Wohl was in London yesterday, wanting to emphasise how well Commvault has recovered from the dip in its fortunes caused by customers moving to the cloud faster than anticipated.

He said Commvault was, in our words, a bridge between production systems and their data on one side, and a protected and recoverable data resource, which Commvault indexes and makes searchable, on the other. As its customers' application environments change, Commvault changes too, so as to provide the same data protection environment.

The firm has to keep in step with its customers so that, as their data centre systems change, from physical servers to virtualised ones, and to having a cloud component, Commvault’s software can work on the new systems and continue servicing the customers' data protection needs.

This is shown by the rise of hyper-converged systems and their entry into customers' data centres. Commvault added Nutanix support in March this year.

Protection partner Fujitsu has just launched its ETERNUS CS200c S3 integrated backup appliance, which uses Commvault software. It delivers, Fujitsu says, an all-in-one data protection concept for converged and hyper-converged systems, with SLA-keeping capabilities. The company thinks it provides data protection facilities that are lacking in hyper-converged systems. VP Uwe Neumeier says these often store backup copies on the same platform as the production data, so the data is not safe from corruption or deletion. You need a separate target store.

Hadoop servers and protection

El Reg asked Wohl about Hadoop and Big Data analytics, and containerisation, to which Wohl replied that analytics needed access to data, lots of data, and Commvault could provide that. It seems logical to El Reg that Commvault would be able to protect data running on a group, a large group even, of servers involved in Hadoop jobs, with each server processing its own stored part of the overall data set.

We might envisage a Commvault agent running on each server and sending its data to a single central target, on-premises or in the cloud perhaps, with the ability to restore it to a group of servers as separate jobs if needed. Wohl didn’t say that this is how Commvault would implement such a distributed Big Data analytics environment, but did affirm Commvault was committed to protecting such environments.
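To make that envisaged pattern concrete, here's a minimal sketch of the agent-per-node idea: each server holds its own shard of the overall data set, a local agent ships that shard to one central repository, and a restore hands each shard back out as a separate per-server job. All the class and variable names here are illustrative assumptions of ours; Wohl gave no implementation details, and this is emphatically not Commvault's actual design.

```python
class CentralRepository:
    """A single backup target (on-premises or cloud) for all agents."""
    def __init__(self):
        self._store = {}  # server_id -> that server's shard

    def backup(self, server_id, shard):
        self._store[server_id] = shard

    def restore_jobs(self):
        # One restore job per server, so shards can be replayed
        # to a (possibly different) group of servers.
        return list(self._store.items())


class ServerAgent:
    """Hypothetical agent running on one server, protecting its local shard."""
    def __init__(self, server_id, local_shard):
        self.server_id = server_id
        self.local_shard = local_shard

    def send_to(self, repo):
        repo.backup(self.server_id, self.local_shard)


# Simulate three servers, each holding its own part of the data set.
repo = CentralRepository()
agents = [ServerAgent(f"node-{i}", f"shard-{i}".encode()) for i in range(3)]
for agent in agents:
    agent.send_to(repo)

restored = dict(repo.restore_jobs())
print(restored["node-1"])  # b'shard-1'
```

The point of the sketch is only the shape of the thing: many small local agents, one indexed central store, restores fan back out as independent jobs.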

Protecting containerised servers

With containerised servers we have the virtual machine (VM) protection environment only more so, as a server can be expected to run more containers than VMs, and an application could be formed from more micro-services running in containers than from component VMs. Wohl affirmed that Commvault was thinking about how to protect containerised servers.

Our back-of-the-envelope brain processes quickly dreamed up the notion of Commvault agents running in containers in such servers and sucking container files and their data to its big central repository, from which would come app-aware recoverability and searchability. But it’s too early to say how implementation might happen, and Wohl kept wisely away from such minutiae.
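Purely to illustrate our back-of-the-envelope notion, here's a toy sketch: per-container agents copy files into one central repository, which indexes their contents so that recovered data is searchable across containers. Every name here is a made-up assumption of ours, not Commvault's design; Wohl offered no such details.

```python
from collections import defaultdict

class IndexedRepository:
    """Toy central store: keeps container files and a word index over them."""
    def __init__(self):
        self._files = {}                # (container, path) -> contents
        self._index = defaultdict(set)  # word -> {(container, path), ...}

    def ingest(self, container_id, path, contents):
        # An agent in a container would call this to ship a file's data.
        self._files[(container_id, path)] = contents
        for word in contents.split():
            self._index[word].add((container_id, path))

    def search(self, word):
        # Search across everything backed up, from every container.
        return sorted(self._index.get(word, set()))

    def restore(self, container_id, path):
        return self._files[(container_id, path)]


repo = IndexedRepository()
# Two hypothetical container agents shipping files to the central store.
repo.ingest("web-1", "/etc/app.conf", "listen port 8080")
repo.ingest("db-1", "/etc/db.conf", "port 5432 replication on")

print(repo.search("port"))                 # hits in both containers
print(repo.restore("web-1", "/etc/app.conf"))
```

The indexing step is what would deliver the searchability mentioned above; recoverability is just the restore path back out of the same store.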

Could Commvault provide analytic processing itself, we asked. It has no such plans, focusing at the moment on being a data protector for, and data feed to, analytics routines. Our impression is that Commvault has no wish to step outside its data protection environment and start competing with analytics suppliers. Maybe we should remember to never say never, so, okay, never say never, but the chances look rather small.

Fujitsu’s CS200c S3 is said to be a simple-to-use, out-of-the-box appliance, and is available from August 1, 2016 in Europe, Middle East, India and Africa. It can be ordered direct from Fujitsu or its distribution partners, and pricing and specifications vary by configuration and country. ®