Sure – you can build the dedicated server of your dreams (with a price tag and a commitment to match) – and in that case you can be as fast as you like. But with the current trend towards small, self-contained deployments – almost containerised, or appliance-like – a VM suddenly makes a great deal of sense.

The key to our new VM platform is the ability to distribute copies of the file system across multiple locations. While processing will ideally be local to one of the copies, it does not have to be.

The interconnects making this possible all run at 10 Gigabit Ethernet – outstripping the combined write performance of the drives behind them, and keeping replication latency low.
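A quick back-of-envelope check shows why a 10 GbE link can outpace a set of drives. The drive figures below are assumptions for illustration (four drives at ~250 MB/s sequential writes each), not measurements from our platform:

```python
GIGABIT = 1_000_000_000  # bits

def link_throughput_mb_s(gbit_per_s: float) -> float:
    """Raw link speed in Gbit/s converted to MB/s (ignoring protocol overhead)."""
    return gbit_per_s * GIGABIT / 8 / 1_000_000

# Assumed drive set: four drives, ~250 MB/s sustained sequential write each.
drives = 4
per_drive_write_mb_s = 250

link = link_throughput_mb_s(10)                    # 10 GbE -> 1250 MB/s raw
aggregate_writes = drives * per_drive_write_mb_s   # 1000 MB/s

print(f"link: {link:.0f} MB/s, drives combined: {aggregate_writes} MB/s")
```

Under these assumed figures the link (~1250 MB/s raw) exceeds the drives' combined ~1000 MB/s, so the network is not the bottleneck when replicating writes.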

File system and processing can (usually) be hot-migrated, meaning that you have not only resilience of data* – but also of processing power.

* In the same way that RAID is not a replacement for a good backup strategy – neither is a remotely replicated drive. We offer a daily snapshot (where possible) for the VM platform, allowing complete point-in-time recovery of the entire instance. However, should you require more granular recovery, or have compliance needs – we would recommend deploying an R1 solution.

With today's hugely capable processors and blazing-fast RAM, our benchmarks and experience have shown these to be very, VERY competent performers – compared against both our earlier VM offerings and our entry-level budget dedicated servers.

In a nutshell: current trends have come full circle, back to small, agile, fast server deployments (without the world of all the different SCSI cabling!). If you are thinking of smaller deployments in the 50 to 150 GB storage range – unless a very specific architecture is a driver – I would without question suggest our SSD-based Cloud platform and a Windows or Linux VM.