With the storage overcommit feature, you can reduce storage costs by placing more linked-clone desktops on a datastore than is possible with full virtual-machine desktops. The linked clones can use a logical storage space several times greater than the physical capacity of the datastore.

With this feature, you choose a storage overcommit level that determines how far View Manager can overcommit the datastore's capacity and that caps the number of linked clones View Manager creates. This lets you avoid wasting storage by provisioning too conservatively, without risking that the linked clones run out of disk space and cause their desktop applications to fail.

For example, if each virtual machine requires 10GB of disk space, you can create at most ten full virtual machines on a 100GB datastore. When you create linked clones from a 10GB parent virtual machine, however, each clone occupies only a fraction of that space.

If you set a conservative overcommit level, View Manager allows the clones to use four times the physical size of the datastore, measuring each clone as if it were the size of the parent virtual machine. On a 100GB datastore, with a 10GB parent, View Manager provisions approximately 40 linked clones. View Manager does not provision more clones, even if the datastore has free space. This limit keeps a growth buffer for the existing clones.
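The capacity math above can be sketched in a few lines of Python. The text confirms only the conservative (4x) multiplier; the other values shown are assumptions based on commonly cited View defaults, included purely for illustration:

```python
# Sketch of the capacity math View Manager applies per overcommit level.
# Only the conservative 4x multiplier is confirmed by the text above;
# the other multipliers are illustrative assumptions, not an API contract.
OVERCOMMIT_MULTIPLIERS = {
    "none": 1,
    "conservative": 4,
    "moderate": 7,
    "aggressive": 15,
}

def max_linked_clones(datastore_gb: float, parent_gb: float, level: str) -> int:
    """Each clone is counted as if it were the full size of the parent VM."""
    logical_capacity = datastore_gb * OVERCOMMIT_MULTIPLIERS[level]
    return int(logical_capacity // parent_gb)

# The example from the text: a 100GB datastore and a 10GB parent.
print(max_linked_clones(100, 10, "none"))          # 10, same as full VMs
print(max_linked_clones(100, 10, "conservative"))  # 40
```

Counting each clone at the full parent size is what preserves the growth buffer: the 40-clone cap is reached long before the datastore's physical space is consumed.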

Storage overcommit levels provide a high-level guide for determining storage capacity. To determine the best level, monitor the growth of linked clones in your environment.

Set an aggressive level only if you are certain that the OS disks will never grow to their maximum possible size. An aggressive overcommit level demands attention: to make sure that the linked clones do not run out of disk space, periodically refresh or rebalance the desktop pool, which reduces the linked clones' OS disks to their original size.

For example, it would make sense to set an aggressive overcommit level for a floating-assignment desktop pool in which the desktops are set to delete or refresh after logoff.

You can vary storage overcommit levels among different types of datastores to address the different levels of throughput in each datastore. For example, a NAS datastore can have a different setting than a SAN datastore.