Our team in the Solutions Lab recently conducted a series of single server scalability (SSS) tests comparing performance across storage devices. Here we focus on local solid state drives (SSDs) for write cache storage.

Citrix XenDesktop hosted shared desktop allows the creation of sites that support users with multiple hosted shared desktop servers, providing resiliency. These servers can be virtualized and streamed through Citrix Provisioning Services, which manages the server image. Provisioning Services requires a file cache, referred to as the write cache, which is often stored on SAN storage. The write cache contains temporary data and is deleted and re-created on every hosted shared desktop virtual machine during boot. We decided to examine this scenario using local solid state drives on a single Cisco UCS blade to store the write cache files and take advantage of the Citrix XenDesktop site design for resiliency. In addition, we validated that the local solid state drives satisfy the write cache I/O requirements.
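
As a rough illustration of how that kind of validation could be scripted, the following Python sketch runs a short random-write test against the local SSD with fio and compares the measured IOPS against an assumed per-VM write cache requirement. The file path, block size, queue depth, per-VM IOPS figure, and VM count are placeholder assumptions for illustration, not values from our tests.

    import json
    import subprocess

    # Placeholder path on the local SSD volume that would hold the PVS write cache files.
    WRITE_CACHE_PATH = "/var/cache/pvs-write-cache/fio-testfile"

    # Run a short random-write test with fio and parse its JSON output.
    # Block size and queue depth are assumptions meant to approximate
    # write cache traffic, not measured PVS values.
    result = subprocess.run(
        [
            "fio",
            "--name=pvs-write-cache",
            f"--filename={WRITE_CACHE_PATH}",
            "--rw=randwrite",
            "--bs=4k",
            "--iodepth=16",
            "--size=1G",
            "--runtime=60",
            "--time_based",
            "--direct=1",
            "--output-format=json",
        ],
        capture_output=True,
        text=True,
        check=True,
    )

    stats = json.loads(result.stdout)
    write_iops = stats["jobs"][0]["write"]["iops"]
    print(f"Sustained random-write IOPS: {write_iops:.0f}")

    # Assumed requirement: 10 write IOPS per hosted shared desktop VM and 8 VMs
    # per blade; substitute the figures measured in your own environment.
    REQUIRED_IOPS_PER_VM = 10
    VM_COUNT = 8
    if write_iops >= REQUIRED_IOPS_PER_VM * VM_COUNT:
        print("Local SSD meets the assumed write cache I/O requirement.")
    else:
        print("Local SSD falls short of the assumed write cache I/O requirement.")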

Hosted shared desktop configuration

Hosted shared desktops delivered as virtualized VMs streamed through Provisioning Services are a good fit for local solid state drive storage. The space required to host the write cache files is minimal, while user density is higher than with hosted virtual desktops.

Test results

Hosted shared desktop test results show similar session density across storage types. This confirms that we can run the load typically hosted on enterprise storage on local solid state drives on the Cisco blades, provided the write cache receives suitable I/O performance.

Each test was executed and validated multiple times to ensure consistency.

The main advantage of using local solid state drive storage over remote storage (SAN) for the write cache is cost savings while still providing the required I/O performance. In addition, we found that session density is very similar between local solid state drives and remote storage.

To improve redundancy when using local solid state drive storage, we suggest using RAID mirroring with a hot spare, along with XenServer Storage XenMotion.
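
As a minimal sketch of the Storage XenMotion piece, the following Python wrapper around the XenServer xe CLI lists the disks on a local SSD storage repository and moves them to another SR ahead of planned maintenance. The SR UUIDs are placeholders, and the vdi-pool-migrate call assumes the Storage XenMotion support introduced in XenServer 6.1; since the write cache is recreated at boot, this mainly matters for running virtual machines.

    import subprocess

    # Placeholder UUIDs: the local SSD SR being drained and the destination SR.
    # Substitute real values from `xe sr-list`.
    SOURCE_SR_UUID = "local-ssd-sr-uuid"
    DEST_SR_UUID = "target-sr-uuid"

    def xe(*args: str) -> str:
        """Run an xe CLI command on the XenServer host and return its output."""
        completed = subprocess.run(
            ["xe", *args], capture_output=True, text=True, check=True
        )
        return completed.stdout.strip()

    # List the virtual disks (write cache VDIs) currently on the local SSD SR.
    vdi_uuids = xe(
        "vdi-list", f"sr-uuid={SOURCE_SR_UUID}", "params=uuid", "--minimal"
    )

    # Move each VDI to the destination SR with Storage XenMotion
    # (vdi-pool-migrate is assumed to be available in XenServer 6.1 or later).
    for vdi_uuid in filter(None, vdi_uuids.split(",")):
        print(f"Migrating VDI {vdi_uuid} ...")
        xe("vdi-pool-migrate", f"uuid={vdi_uuid}", f"sr-uuid={DEST_SR_UUID}")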

Yes, the system is CPU bound when reaching VSImax in this test. The purpose of the test is to validate that the lower cost SSD solution can perform similarly to a SAN solution under the same workload. In addition, maxing out the CPU rules out storage as the bottleneck.

It would be nice to see more clearly labeled axes on the graphs…
Also, this is a pure PVS implementation, so it does nothing for MCS customers. There are plenty of software and hybrid storage solutions that can handle I/O acceleration at least as well. The big question is scaling on XenServer: with too much I/O, things cannot keep up. In that regard, the new 64-bit XenServer architecture should help significantly.

The XenServer hypervisor is already 64-bit. You are probably referring to the Dom-0 64-bit update, which should help with I/O, though with 100-200 VMs on XenServer 6.2 the hypervisor is usually not an issue, especially with the small VM count found with XenApp workloads.

Regarding MCS: local storage can be leveraged with MCS as per the best practices discussed at Synergy 2014, including my presentation SYN256, "Storage I/O and capacity analysis for affordable VDI".

Well, yes, of course I mean the dom0 component. Reports, including one by Felipe Franziosi (Project Karcygwins), pointed out the limitations of the architecture in handling I/O with fast storage (e.g., saturation), and addressing this with improvements in dom0 and the I/O drivers should definitely help. I am in the process of testing the XenServer Creedence alpha release, which should yield some improvements.