Virtualization often focuses on core computing elements like processors and memory, but I/O and disk storage also play important roles in the overall performance and responsiveness of virtual machines. Disk storage and I/O might be even more critical because disk-related functions are so much slower than processing tasks. Virtualization administrators who are eager to enhance virtual machine performance should invest the time needed to optimize disk operations.

Disk options that enhance VM performance

In virtualization, a hypervisor abstracts the workloads from the physical hardware that runs underneath, which allows easy allocation and sharing of computing resources, convenient migration of workloads and other features. Although modern hypervisors and virtualization-compliant processors impose very little overhead, there is a performance penalty introduced by the virtualization layer.

When disk performance is critical to a workload, some administrators may opt to configure the associated logical unit number (LUN) in pass-through mode, which allows the virtual machine's (VM's) operating system to bypass the hypervisor and communicate with the LUN directly. For example, a Windows Server VM may use pass-through mode to bypass Hyper-V and achieve a small performance boost for applications like SQL Server. However, since the guest OS (in pass-through mode) and the hypervisor could both try accessing the disk at the same time, the hypervisor must be configured to ignore the pass-through LUN.

The problem with pass-through mode is that some important virtualization features, such as VM snapshots or clustering, aren't available. As a consequence, the VM may actually benefit more from those virtualization features than it would from the marginal performance improvement of pass-through mode. Administrators will need to evaluate the needs of each VM and determine the suitability of pass-through mode.

In addition to pass-through mode, hypervisors like Hyper-V also offer other disk storage options. For example, fixed-size disks allocate all blocks for data and overhead in the .VHD file at the time that file is created, and the disk's size cannot change afterward. By contrast, dynamically expanding disks create an initial .VHD file with few allocated blocks, and space is provided as data is written to the .VHD -- up to the specified maximum size of the disk. This is similar to the notion of thin provisioning: even though a sizable disk may exist logically, actual disk space is only consumed when there is data to write.
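The allocate-on-write idea behind dynamically expanding disks can be sketched in a few lines. This is a hypothetical illustration, not any hypervisor's actual .VHD format; the class name, block size and layout are our own.

```python
# Sketch of allocate-on-write: physical space is consumed only for
# blocks that have actually been written, up to a fixed logical size.
class DynamicDisk:
    def __init__(self, max_blocks, block_size=4096):
        self.max_blocks = max_blocks
        self.block_size = block_size
        self.blocks = {}  # physical space exists only for written blocks

    def write(self, block_no, data):
        if block_no >= self.max_blocks:
            raise ValueError("write beyond the disk's logical size")
        self.blocks[block_no] = data  # space allocated on first write

    def read(self, block_no):
        # unwritten blocks read back as zeros, as on a fresh disk
        return self.blocks.get(block_no, b"\x00" * self.block_size)

    def logical_size(self):
        return self.max_blocks * self.block_size

    def physical_size(self):
        return len(self.blocks) * self.block_size

disk = DynamicDisk(max_blocks=1024)  # 4 MB logical disk
disk.write(0, b"boot".ljust(4096, b"\x00"))
print(disk.logical_size())   # 4194304 -- what the guest OS sees
print(disk.physical_size())  # 4096 -- what is actually stored
```

The gap between the logical and physical sizes is exactly the savings thin provisioning offers until the disk fills up.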

A differencing disk is a special type of dynamically expanding disk. The idea is that a parent disk holds a fixed image and a differencing disk is associated with that parent, so any writes that change disk content are written to the differencing disk instead of the parent .VHD file. Reads are first checked against the differencing disk's .VHD file, and if no changes are present there, the parent .VHD file is read. Differencing is a good choice when standardized disk images are needed and rollback capabilities are important, but maintaining parent and child disk configurations can be challenging for administrators.
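The parent/child semantics described above can be condensed into a small sketch. This is a simplified model under assumed names, not Hyper-V's actual differencing-disk implementation.

```python
# Differencing-disk semantics: writes land in the child disk, reads
# fall back to the read-only parent when the child holds no change.
class DifferencingDisk:
    def __init__(self, parent_blocks):
        self.parent = parent_blocks   # fixed parent image (never modified)
        self.child = {}               # only changed blocks live here

    def write(self, block_no, data):
        self.child[block_no] = data   # parent stays untouched

    def read(self, block_no):
        if block_no in self.child:    # check the differencing disk first
            return self.child[block_no]
        return self.parent.get(block_no, b"")

    def rollback(self):
        self.child.clear()            # discard all changes, back to parent

parent = {0: b"base image"}
disk = DifferencingDisk(parent)
disk.write(0, b"modified")
assert disk.read(0) == b"modified"    # change visible through the child
disk.rollback()
assert disk.read(0) == b"base image"  # parent image restored
```

Rollback is cheap precisely because the parent is never touched; discarding the child returns every VM built on that image to the standardized baseline.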

Allocating the right amount of disk space

There is no single right amount of disk space, because many variables can affect the allocation of computing resources. Ideally, a workload running in a VM should need the same amount of computing resources that it would demand if deployed on a physical server. However, virtualization relies on a software hypervisor, and the added computing needed to operate a hypervisor will add some overhead to most virtualized workloads. As an example, Microsoft suggests that a virtualized workload should receive 105% to 110% of the disk resources needed by the same workload in a physical environment.
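The 105% to 110% guideline translates into simple back-of-the-envelope sizing. The helper below is our own illustration; only the factor range comes from the article.

```python
# Quick sizing estimate for a virtualized workload, applying the
# 105%-110% rule of thumb to its physical-server disk footprint.
def virtual_disk_estimate(physical_gb, low=1.05, high=1.10):
    """Return the (low, high) allocation range in GB."""
    return physical_gb * low, physical_gb * high

low, high = virtual_disk_estimate(500)  # workload needs 500 GB physically
print(f"Provision between {low:.0f} GB and {high:.0f} GB")
# Provision between 525 GB and 550 GB
```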

Still, it's important to note that this is only a guideline to be applied loosely, because every application has unique resource requirements, performance needs, user traffic patterns and workload growth expectations. Administrators should weigh each of these factors carefully before provisioning disks, testing and benchmarking the workload in a development environment before rolling it out to production.

In addition, storage can be an expensive commodity, and over-allocating storage can be costly for an enterprise. Administrators can often employ technologies like dynamically expanding disks or other thin provisioning tactics to conserve storage space until it is needed, or use technologies like data deduplication to remove duplicate content and reduce storage demands.
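The deduplication idea mentioned above boils down to storing identical chunks of data once and referencing them by content hash. The sketch below is a minimal, assumed illustration; real deduplication engines handle chunk boundaries, metadata and garbage collection far more carefully.

```python
# Content-based deduplication: identical chunks are stored once,
# keyed by their SHA-256 digest; files are lists of chunk references.
import hashlib

class DedupStore:
    def __init__(self):
        self.chunks = {}   # digest -> chunk data (stored once)
        self.files = {}    # filename -> ordered list of digests

    def put(self, name, data, chunk_size=4096):
        digests = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicates skipped
            digests.append(digest)
        self.files[name] = digests

    def get(self, name):
        return b"".join(self.chunks[d] for d in self.files[name])

store = DedupStore()
store.put("vm1.vhd", b"A" * 8192)   # two identical 4 KB chunks
store.put("vm2.vhd", b"A" * 8192)   # same content, second "file"
assert store.get("vm1.vhd") == b"A" * 8192
print(len(store.chunks))            # 1 -- four logical chunks, one copy
```

Duplicate OS images across many VMs are the classic win for this technique, which is why it pairs so naturally with virtualized storage.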

Disk hardware choices can also improve performance. For example, select smaller 2.5-inch form factor disks rather than larger 3.5-inch disks. The smaller platters have a correspondingly smaller circumference, which can support faster rotational speeds for lower latency as well as faster track seek times, so smaller disks can generally locate data faster than larger ones. As an added bonus, the smaller platters take less energy to rotate, lowering energy costs for data center storage.
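The latency benefit of faster spindles is easy to quantify: on average a request waits half a revolution, or 30,000 / RPM milliseconds, for data to rotate under the heads.

```python
# Average rotational latency = half a revolution = 30,000 / RPM ms.
# Faster spindle speeds, often enabled by smaller platters, cut the
# average wait before data passes under the read-write heads.
def avg_rotational_latency_ms(rpm):
    return 30_000 / rpm

for rpm in (7_200, 10_000, 15_000):
    print(f"{rpm} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")
# 7200 RPM -> 4.17 ms
# 10000 RPM -> 3.00 ms
# 15000 RPM -> 2.00 ms
```

Moving from a 7,200 RPM disk to a 15,000 RPM disk roughly halves the rotational component of every random I/O.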

Another factor to consider is the composition of disk groups. A single disk, or a small number of disks, generally does not perform as well as a disk group, because spreading data across multiple disks allows several spindles to seek at the same time, which improves performance. Instead of consolidating storage onto fewer disks, rely on disk groups such as RAID 5 or RAID 6, which stripe data across multiple spindles and add parity-based data protection within a storage array or server.
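The protection RAID 5 offers rests on XOR parity: the parity block lets the array rebuild any single lost disk from the survivors. The sketch below shows that core idea only; stripe rotation and real block layouts are simplified away.

```python
# XOR parity as used conceptually by RAID 5: parity = d0 ^ d1 ^ d2,
# and any one missing block can be rebuilt from the rest plus parity.
def xor_blocks(blocks):
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

data = [b"disk0", b"disk1", b"disk2"]  # one stripe across three data disks
parity = xor_blocks(data)              # written alongside the stripe

# Simulate losing disk 1, then rebuild it from the survivors + parity
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == b"disk1"
```

RAID 6 extends the same principle with a second, independent parity calculation, so the group survives two simultaneous disk failures.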

If possible, adopt dynamic data layout schemes that automatically place the most important or frequently accessed data on the outermost disk tracks. Remember that the entire disk platter spins at the same angular speed, so the outer tracks pass under the read-write heads faster than the inner tracks. This gets data onto and off of the platters faster, though overall disk performance also depends on factors such as cache size and interface bandwidth.
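The outer-track advantage follows directly from geometry: at a constant angular speed, linear velocity under the heads scales with track radius, and with roughly constant bit density so does the data rate. The radii below are illustrative figures assumed for a 2.5-inch platter, not a specific product's specification.

```python
# Linear track speed: v = 2 * pi * r * (RPM / 60). Data rate scales
# with radius, so outer tracks move data faster than inner tracks.
import math

def track_speed_mm_s(radius_mm, rpm):
    return 2 * math.pi * radius_mm * (rpm / 60)

rpm = 10_000
outer, inner = 31.0, 14.0   # mm -- assumed outer/inner track radii
ratio = track_speed_mm_s(outer, rpm) / track_speed_mm_s(inner, rpm)
print(f"Outer track moves {ratio:.1f}x faster than the inner track")
# Outer track moves 2.2x faster than the inner track
```

This is why layout schemes that pin hot data to the outer tracks can lift sequential throughput without any hardware change.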

VMs rely on storage, but limitations and bottlenecks in storage systems can noticeably impair VM performance. Using pass-through disks can offer a marginal improvement, but the loss of virtualization-related functionality is rarely worth it. VMs may require some additional storage compared to physical deployments, but the exact amount is best determined with hands-on testing, combined with established technologies designed to mitigate storage demands.

Which disk option has worked best to enhance your VM's performance?
