"We didn't know how much of our SQL performance was being dampened by the nasty 'I/O blender' effect…"

As it turned out, it was HALF.

That's right. Their systems were processing HALF as many MB/sec as they should have due to the noise of all their VM workloads meeting and mixing at the point of the hypervisor. The first thing the "I/O blender" effect does is tax throughput, so your application performance becomes far more dependent on storage IOPS than it needs to be.

So what is the "I/O blender" effect and how is it taxing application performance?

The "I/O blender" effect is a phenomenon specific to virtual server environments: the I/O streams from disparate VMs are "funneled" together at the point of the hypervisor, which then sends out to storage a highly random I/O stream that penalizes overall application performance.
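To make the funneling concrete, here is a minimal, purely illustrative Python sketch. The VM count, stream lengths, and round-robin interleaving are assumptions for illustration, not a model of any real hypervisor scheduler. Each VM issues perfectly sequential block reads, yet the blended stream that reaches storage never touches adjacent blocks back to back:

```python
# Three hypothetical VMs each read blocks sequentially from their own region.
vm_streams = [list(range(base, base + 4)) for base in (0, 1000, 2000)]

# The hypervisor funnels the streams together (round-robin, for illustration).
blended = [block for group in zip(*vm_streams) for block in group]

# Within each VM, the access pattern was 100% sequential...
per_vm_sequential = all(b == a + 1 for s in vm_streams for a, b in zip(s, s[1:]))

# ...but the blended stream that storage actually sees has zero sequential steps.
blended_sequential = sum(1 for a, b in zip(blended, blended[1:]) if b == a + 1)
```

Storage tuned for sequential access now sees jumps of roughly a thousand blocks between consecutive requests, which is why throughput suffers even though every VM behaved well on its own.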

Every organization that has virtualized has experienced this pain. They virtualized their applications only to discover mounting I/O pressure on the backend storage infrastructure. This was the unintended consequence of virtualization. Organizations save costs on the compute layer via virtualization only to trade those savings to backend storage where a forklift upgrade is necessary to handle the new random I/O demand.

In the case of I.B.I.S., Inc., their IT Director wanted to look into this problem a little further to see what could be done before reactively buying more storage hardware for improved performance.

"We wanted to try V-locity® I/O reduction software first to see if it could tackle the root cause problem as advertised at the VM level where I/O originates," said Kevin Schmidt, IT Director.

While IT departments typically lack monitoring tools that show exactly how much performance is dampened by the "I/O blender" effect, V-locity comes with an embedded benchmark that gives a before/after picture of I/O reduction and demonstrates how much performance improves by combating this problem at the Windows operating system layer.

As it turned out, I.B.I.S., Inc.'s heaviest SQL workloads saw a 120% improvement in data throughput. Before V-locity, it took 82,000 I/Os to process 1GB of data. After V-locity, that number was cut to 29,000 I/Os per GB. Due to the increase in I/O density, instead of taking 0.78 minutes to process 1GB, it now takes only 0.36 minutes.
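A quick back-of-the-envelope check of those figures (using the numbers quoted above; the byte math assumes 1GB means 2^30 bytes):

```python
GB = 1024 ** 3  # assuming binary gigabytes for this estimate

# I/O density: how much data moves per I/O operation.
ios_before, ios_after = 82_000, 29_000
kb_per_io_before = GB / ios_before / 1024   # ~12.8 KB moved per I/O
kb_per_io_after = GB / ios_after / 1024     # ~36.2 KB moved per I/O

# Throughput: minutes to process 1GB, before and after.
minutes_before, minutes_after = 0.78, 0.36
speedup = minutes_before / minutes_after    # ~2.17x
improvement_pct = (speedup - 1) * 100       # ~117%, in line with the ~120% reported
```

Fewer, larger I/Os carrying the same data is exactly what "increase in I/O density" means here: the per-I/O payload nearly tripled, so the same workload finishes in under half the time.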

"Since we're no longer dealing with so many small split I/Os and random I/O streams, V-locity has enabled our CRM and ERP systems to process twice the amount of data in the same amount of time. The best part is that we didn't have to spend a single dime on expensive new hardware to get that performance," said Schmidt.

According to our recent survey of Meditech hospitals, half receive staff or customer complaints regarding EHR performance while the other half do not. Since virtualizing, 62% purchased a new SAN, 10% added SAS spindles, 20% added server-side SSDs or PCIe flash, 24% added storage-side SSDs, 38% added additional servers, and 24% have not seen I/O performance issues. In the coming year, 29% will purchase a new SAN, 5% will add SAS spindles, 15% will add server-side SSDs, and 24% will add storage-side SSDs.

Most notably, only one-third were aware of the Meditech service bulletins alerting Meditech 5.x/6.x users to the FAL size growth issue caused by severe fragmentation, which can result in unscheduled downtime if left unchecked.

To date, 65 Meditech hospitals have turned to Condusiv’s V-locity® I/O reduction software for automatic FAL remediation and improved EHR performance; they no longer have to worry about unscheduled downtime, and they achieve 50-300% more performance from existing systems.

Since a lot of Meditech hospitals don’t fully understand the FAL size growth issue and how that affects their systems, here is a brief explanation:

When someone mentions heavy fragmentation on a Windows NTFS Volume, the first thing that usually comes to mind is performance degradation. While performance degradation is certainly bad, what’s worse is application failure. That is exactly what happens in severely fragmented environments when no more data can be added to files or no more files can be inserted under a folder file. This Windows limitation has a direct impact on the availability of the Meditech application, as several Meditech hospitals know too well. It is a show-stopper that can stop any hospital in its tracks until the problem is remediated.

The File Attribute List (FAL), the NTFS metadata structure that maps where a file's attributes and data reside, has an upper size limit of 256KB. When that limit is reached, no more mapping pointers can be added, which means NO more data can be added to the file. And if it is a folder file, which keeps track of all the files that reside under that folder, NO more files can be added under that folder.
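For a sense of scale, here is a rough, hypothetical estimate of how many mapping entries the limit allows. The 32-byte entry size is an assumption chosen for illustration only; real NTFS attribute list entries are variable-length:

```python
# Rough estimate only: how many mapping entries fit before the FAL cap is hit.
FAL_LIMIT_BYTES = 256 * 1024    # NTFS upper limit on FAL size
ASSUMED_ENTRY_BYTES = 32        # hypothetical average size per mapping entry

max_entries = FAL_LIMIT_BYTES // ASSUMED_ENTRY_BYTES
print(f"~{max_entries:,} mapping entries before the FAL is full")
```

On a severely fragmented volume, a single heavily written file or folder file can chew through a budget on that order, at which point it simply stops accepting data.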

So what can be done about it?

The logical solution would seem obvious: why not just defragment the volume? The problem is that traditional defragmentation utilities can decrease the number of mapping pointers, but will not decrease the FAL size. Worse, due to limitations within the file system, traditional methods of defragmenting files cause the FAL size to grow even larger, aggravating the very problem you are attempting to remediate.

For ASL Marketing, a regular import of 150 million records into their SQL database would take 27 hours to complete.

ASL’s account team and clients needed access to the most current data immediately, but the 27-hour batch job meant that access would slip a full production day or even two. That wasn’t acceptable, as some clients would hold back business while waiting on new data to come online.

“Typically, IT professionals respond to application performance issues by reactively buying more hardware. Without the luxury of a padded budget, we needed to find a way to improve performance on the hardware infrastructure we already have,” said Ralph Ortiz, IT Manager, ASL Marketing.

ASL upgraded their network to 10GbE and was looking at either a heavy investment in SSD or a full rip-and-replace of the SAN architecture before the end of its lifecycle. Since that kind of hardware investment wasn’t in the budget, they decided to take a look at V-locity® I/O reduction software.

“I was very doubtful that V-locity could improve my I/O performance through a software-only solution. But with nothing to lose, we evaluated V-locity on our SQL servers and were amazed to see that, literally overnight, we doubled throughput from server to storage and cut our SQL batch job times in half,” said Ortiz.

After deploying V-locity, SQL batch jobs that used to take 27 hours to complete now take 12–14 hours to complete. The weekly college database import that used to take 17 hours to complete is now down to 7 hours.

Just before CHRISTUS Health pulled the trigger on a $2 million storage purchase to improve the performance of their electronic health records application (MEDITECH®), they evaluated V-locity® I/O reduction software.

We actually heard the story firsthand from the NetApp® reseller in the deal at a UBM Xchange conference. He thought he had closed the $2 million deal, only to find out that CHRISTUS was doing some testing with V-locity. After getting the news that the storage order would not be placed, he met us at Xchange to find out more about V-locity, since "this V-locity stuff is for real."

After an initial conversation with anyone about V-locity, the first response is generally the same – skepticism. Can software alone really accelerate the applications in my virtual environment? Since we are conditioned to think only new hardware upgrades can solve performance bottlenecks, organizations end up with spiraling data center costs without any other option except to throw more hardware at the problem.

CHRISTUS Health, like many others, approached us with the same skepticism. But after virtualizing 70+ servers for their EHR application, they noticed a severe performance hit from the “I/O blender” effect. They needed a solution to solve the problem, not just more hardware to medicate the problem on the backend.

Since V-locity comes with an embedded performance benchmark that provides the I/O profile of any VM workload, it makes it easy to see a before/after comparison in real-world environments.

After the evaluation, not only did CHRISTUS realize they were able to double their medical records performance, but after trying V-locity on their batch billing job, they dropped a painful 20-hour job down to 12 hours.

In addition to performance gains, V-locity also provides a special benefit to MEDITECH users by eliminating excessive file fragmentation that can cause the File Attribute List (FAL) to reach its size limit and degrade performance further or even threaten availability.

Tom Swearingen, the manager of Infrastructure Services at CHRISTUS Health said it best. "We are constantly scrutinizing our budget, so anything that helps us avoid buying more storage hardware for performance or host-related infrastructure is a huge benefit."

As you know, we just released V-locity version 5. Here’s the director’s cut.

We committed a slew of engineers to several months of development to build an enterprise-class management console for V-locity. In a world where a couple developers with a few pizzas can create a robust app from scratch in 6 weeks, that represents a lot of apps!

Our previous management console didn’t scale beyond 500 nodes and didn’t play well with modern environments that span geographic locations with a hybrid of virtual and physical servers while provisioning some workloads to the cloud.

That meant building a console that can auto-detect the most complex environments and batch-deploy V-locity in seconds: a management console aware of the new world order of hybrid environments (virtual, physical, cloud) that can deploy to and manage all of them from a single point.

Customers asked for flexible pricing models, whether volume perpetual licenses, site licenses, or even subscription, and we listened. They asked for I/O performance management that delivers insight into the anatomy of I/O behavior on all their workloads, from virtual (or physical) server to storage, to help take the guesswork out of performance troubleshooting. Customers wanted to be able to set up alerts based on workload thresholds. They asked for a console that could validate V-locity before/after performance across workloads and provide ongoing performance validation for continued ROI transparency.

So we built it. The whole enchilada.

Typically, when the baton is handed to marketing, the first two questions are almost always the same – “What do we call it?” and “What do we charge for it?”

When you commit engineering resources the size of a small island, the very first temptation is to productize, to monetize, to ROI-ize what you put in, because there is a cost to building products.

Then again, this wasn’t really a stand-alone product, but rather a big enhancement to an existing product.

A lot of companies charge for that kind of enhancement. Many of you have purchased hardware or software products, only to find a separate line item and SKU for the management software needed to manage the product you just purchased: the never-ending high-tech rabbit hole of monetization, where you buy a car but the battery, steering wheel, and tires are not included.

As my daughter tells me, “Dad, everyone does it.” So, in our initial brainstorming session, we kicked around the idea of doing it too. But when it came down to it, we agreed that charging extra would run counter to the core tenet of our business model: to disrupt.

V-locity provides performance at 1/10th the cost of the hardware alternative. That’s disruption. And in that spirit of disruption, we decided against productizing and charging for the management console.

It’s bundled with V-locity and available for free to every V-locity customer under maintenance. No extra charge required. No extra hardware required.