Lenovo has also tag-teamed with Scale’s rivals such as Maxta, Pivot3 and DataCore.

So we took the opportunity to ask Jeff Ready, Scale Computing CEO, how he differentiates his company’s hyperconverged systems from the competition. We publish his reply in full below.

But first, let’s remind ourselves that Scale’s HyperCore software installs on bare metal and includes an operating system. By contrast, its competitors layer software onto an operating system.

Are you Ready for this?

Scale Computing CEO Jeff Ready

First (says Jeff Ready) Scale is a full infrastructure stack. HCI is just one component of that stack. Scale manages everything that sits beneath the applications themselves, so our customers need not manage storage, virtualization, backups, disaster recovery, hardware BIOS, CPU microcode and so on. All of that is handled by HyperCore.

Second (and tied to the first) is the self-healing: detecting, mitigating, and fixing problems anywhere in that stack, automatically, so that applications keep running. Per the first point, this means anything in that full stack. The “secret sauce” of Scale lies in this self-healing framework, where we monitor thousands of different conditions in the infrastructure, from the underlying hardware, through the intermediary software stacks, to the VMs themselves.

Finally, the biggest components of our infrastructure are homegrown and designed with this self-healing framework in mind. Our SDS stack, orchestration layer, management layer, and the self-healing architecture all work together.

“On one hand, we have built the most efficient, high-performance stack out there. … we’re able to deliver native device speeds (such as 20-microsecond latency on Optane drives) all the way through our stack into the applications themselves.

While quite powerful, this also results in a very lightweight framework that itself consumes few resources. This is very helpful in edge and retail environments, where small systems are the norm. While Scale HC3 can and does support very large data centre deployments, we also support very small ones, where a server may have only 16GB of RAM, for example.

We will consume only 2GB of RAM and a fraction of one CPU core, leaving the rest of the resources available for the applications themselves. Such efficiency is critical in large-scale deployments. Needless to say, deploying tiny servers with 16GB of RAM at 3,000 locations is a whole lot less expensive than deploying large servers with 256GB of RAM at 3,000 locations.
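The fleet arithmetic behind that claim is easy to sanity-check. Here is a minimal back-of-the-envelope sketch; the 2GB overhead figure comes from Ready's statement, while the per-node sizes and site count echo his 16GB/256GB/3,000-location example (all illustrative, not measured):

```python
SITES = 3_000                # retail locations, per Ready's example
HYPERCORE_OVERHEAD_GB = 2    # claimed RAM footprint of the HyperCore stack

def usable_ram_gb(node_ram_gb: int, overhead_gb: int = HYPERCORE_OVERHEAD_GB) -> int:
    """RAM left for applications on one node after infrastructure overhead."""
    return node_ram_gb - overhead_gb

# A 16GB edge node keeps 14GB for workloads; a 256GB server keeps 254GB.
small_node = usable_ram_gb(16)    # 14
large_node = usable_ram_gb(256)   # 254

# Fleet-wide application RAM across all sites.
small_fleet_total = small_node * SITES  # 42,000 GB
large_fleet_total = large_node * SITES  # 762,000 GB
```

In other words, on the small node the stack's overhead is 12.5 per cent of the box, which is why a fixed 2GB footprint (rather than one that scales with node size) matters at the edge.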

Taking all of the above into account, the Scale HC3 solution provides a platform for running applications that is self-healing, lightweight, and extremely efficient. It takes care of itself. For deployments with perhaps zero, one, or two IT personnel on-site, this is the ideal solution.”