1 Introduction

The intended audience for this document is technical IT architects, system administrators, and managers who are interested in server-based desktop virtualization and server-based computing (terminal services or application virtualization) that uses VMware Horizon (with View). In this document, the term client virtualization is used to refer to all of these variations. Compare this term to server virtualization, which refers to the virtualization of server-based business logic and databases.

This document describes the reference architecture for VMware Horizon 6.x and also supports the previous versions of VMware Horizon 5.x. This document should be read with the Lenovo Client Virtualization (LCV) base reference architecture document that is available at this website: lenovopress.com/tips1275. The business problem, business value, requirements, and hardware details are described in the LCV base reference architecture document and are not repeated here for brevity.

This document gives an architecture overview and logical component model of VMware Horizon. The document also provides the operational model of VMware Horizon by combining Lenovo hardware platforms such as Flex System, System x, NeXtScale System, and RackSwitch networking with OEM hardware and software such as IBM Storwize and FlashSystem storage, VMware Virtual SAN, and Atlantis Computing software. The operational model presents performance benchmark measurements and discussion, sizing guidance, and some example deployment models. The last section contains detailed bill of material configurations for each piece of hardware.

2 Architectural overview

Figure 1 shows all of the main features of the Lenovo Client Virtualization reference architecture with VMware Horizon on the VMware ESXi hypervisor. It also shows remote access, authorization, and traffic monitoring. This reference architecture does not address the general issues of multi-site deployment and network management.

3 Component model

View Connection Server
The VMware Horizon Connection Server is the point of contact for client devices that are requesting virtual desktops. It authenticates users and directs the virtual desktop request to the appropriate virtual machine (VM) or desktop, which ensures that only valid users are allowed access. After the authentication is complete, users are directed to their assigned VM or desktop. If a virtual desktop is unavailable, the View Connection Server works with the management and the provisioning layer to have the VM ready and available.

View Composer
In a VMware vCenter Server instance, View Composer is installed. View Composer is required when linked clones are created from a parent VM.

vCenter Server
By using a single console, vCenter Server provides centralized management of the virtual machines (VMs) for the VMware ESXi hypervisor. VMware vCenter can be used to perform live migration (called VMware vMotion), which allows a running VM to be moved from one physical server to another without downtime. Redundancy for vCenter Server is achieved through VMware high availability (HA). The vCenter Server also contains a licensing server for VMware ESXi.

View Event database
VMware Horizon can be configured to record events and their details into a Microsoft SQL Server or Oracle database. Business intelligence (BI) reporting engines can be used to analyze this database.

Clients
VMware Horizon supports a broad set of devices and all major device operating platforms, including Apple iOS, Google Android, and Google ChromeOS. Each client device has a VMware View Client, which acts as the agent to communicate with the virtual desktop.

RDP, PCoIP
The virtual desktop image is streamed to the user access device by using the display protocol. Depending on the solution, the choice of protocols available are Remote Desktop Protocol (RDP) and PC over IP (PCoIP).

Hypervisor ESXi
ESXi is a bare-metal hypervisor for the compute servers. ESXi also contains support for VSAN storage. For more information, see VMware Virtual SAN on page 6.

Accelerator VM
The optional accelerator VM in this case is Atlantis Computing. For more information, see Atlantis Computing on page 6.

Shared storage
Shared storage is used to store user profiles and user data files. Depending on the provisioning model that is used, different data is stored for VM images. For more information, see Storage model.

For more information, see the Lenovo Client Virtualization base reference architecture document that is available at this website: lenovopress.com/tips1275.

3.1 VMware Horizon provisioning

VMware Horizon supports stateless and dedicated models. Provisioning for VMware Horizon is a function of vCenter Server and View Composer for linked clones.

vCenter Server allows for manually created pools and automatic pools. It allows for provisioning full clones and linked clones of a parent image for dedicated and stateless virtual desktops.

Because dedicated virtual desktops use large amounts of storage, linked clones can be used to reduce the storage requirements. Linked clones are created from a snapshot (replica) that is taken from a golden master image. The golden master image and replica should be on shared storage area network (SAN) storage. One pool can contain up to 1000 linked clones.

This document describes the use of automated pools (with linked clones) for dedicated and stateless virtual desktops. The deployment requirements for full clones are beyond the scope of this document.
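As a rough planning aid, the 1000-clone limit per pool translates directly into a minimum number of automated pools for a deployment. The following is a minimal sketch of that arithmetic; the function name and constant are illustrative and the per-pool limit should be confirmed against the Horizon version in use.

```python
import math

MAX_LINKED_CLONES_PER_POOL = 1000  # per-pool limit cited in this document

def pools_needed(total_desktops: int) -> int:
    """Minimum number of automated pools for a given desktop count."""
    return math.ceil(total_desktops / MAX_LINKED_CLONES_PER_POOL)

# Example: a 4500-user deployment needs at least 5 automated pools.
print(pools_needed(4500))
```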

3.2 Storage model

This section describes the different types of shared data stored for stateless and dedicated desktops. Stateless and dedicated virtual desktops should have the following common shared storage items:

The paging file (or vSwap) is transient data that can be redirected to Network File System (NFS) storage. In general, it is recommended to disable swapping, which reduces storage use (shared or local). The desktop memory size should be chosen to match the user workload rather than depending on a smaller image and swapping, which reduces overall desktop performance.

3.3 Atlantis Computing

Atlantis Computing provides a software-defined storage solution, which can deliver better performance than a physical PC and reduce storage requirements by up to 95% in virtual desktop environments of all types. The key is Atlantis HyperDup content-aware data services, which fundamentally change the way VMs use storage. This change reduces the storage footprint by up to 95% while minimizing (and in some cases, entirely eliminating) I/O to external storage. The net effect is a reduced CAPEX and a marked increase in performance to start, log in, start applications, search, and use virtual desktops or hosted desktops and applications. Atlantis software uses random access memory (RAM) for write-back caching of data blocks, real-time inline de-duplication of data, coalescing of blocks, and compression, which significantly reduces the data that is cached and persistently stored in addition to greatly reducing network traffic.

Atlantis software works with any type of heterogeneous storage, including server RAM, direct-attached storage (DAS), SAN, or network-attached storage (NAS). It is provided as a VMware ESXi compatible VM that presents the virtualized storage to the hypervisor as a native data store, which makes deployment and integration straightforward. Atlantis Computing also provides other utilities for managing VMs and backing up and recovering data stores.

Atlantis provides a number of volume types suitable for virtual desktops and shared desktops. Different volume types support different application requirements and deployment models. Table 1 compares the Atlantis volume types.

Table 1: Atlantis Volume Types
Volume

Atlantis ILIO Diskless VDI

3.3.1 Atlantis Hyper-converged Volume

Atlantis hyper-converged volumes are a hybrid between memory or local flash for accelerating performance and direct-attached storage (DAS) for capacity, and they provide a good balance between the performance and capacity needed for virtual desktops.

As shown in Figure 3, hyper-converged volumes are clustered across three or more servers and have built-in resiliency in which the volume can be migrated to other servers in the cluster in case of server failure or entering maintenance mode. Hyper-converged volumes are supported for ESXi.

Figure 3: Atlantis USX Hyper-converged Cluster

3.3.2 Atlantis Simple Hybrid Volume (ILIO Persistent VDI)

Atlantis simple hybrid volumes are targeted at dedicated virtual desktop environments. This volume type provides the optimal solution for desktop virtualization customers that are using traditional or existing storage technologies that are optimized by Atlantis software with server RAM. In this scenario, Atlantis employs memory as a tier and uses a small amount of server RAM for all I/O processing while using the existing SAN, NAS, or all-flash array storage as the primary storage. Atlantis storage optimizations increase the number of desktops that the storage can support by up to 20 times while improving performance. Disk-backed configurations can use various storage types, including host-based flash memory cards, external all-flash arrays, and conventional spinning disk arrays.

A variation of the simple hybrid volume type is the simple all-flash volume that uses fast, low-latency shared flash storage whereby very little RAM is used and all I/O requests are sent to the flash storage after the inline de-duplication and compression are performed.

This reference architecture concentrates on the simple hybrid volume type for dedicated desktops, stateless desktops that use local SSDs, and hosted shared desktops and applications. To cover the widest variety of shared storage, the simple all-flash volume type is not considered.

3.3.3 Atlantis Simple In-Memory Volume (ILIO Diskless VDI)

Atlantis simple in-memory volumes eliminate storage from stateless VDI deployments by using local server RAM and the ILIO in-memory storage optimization technology. Server RAM is used as the primary storage for stateless virtual desktops, which ensures that read and write I/O occurs at memory speeds and eliminates network traffic. An option allows for in-line compression and decompression to reduce the RAM usage. The ILIO SnapClone technology is used to persist the ILIO data store in case of ILIO VM reboots, power outages, or other failures.

3.4 VMware Virtual SAN

VMware Virtual SAN (VSAN) is a Software Defined Storage (SDS) solution embedded in the ESXi hypervisor. Virtual SAN pools flash caching devices and magnetic disks across three or more 10 GbE connected servers into a single shared datastore that is resilient and simple to manage.

Virtual SAN can be scaled to 64 servers, with each server supporting up to 5 disk groups and each disk group consisting of a single flash caching device (SSD) and up to 7 HDDs. Performance and capacity can easily be increased simply by adding more components: disks, flash, or servers.

The flash cache is used to accelerate both reads and writes. Frequently read data is kept in the read cache; writes are coalesced in the cache and destaged to disk efficiently, greatly improving application performance.

VSAN manages data in the form of flexible data containers that are called objects. The following types of objects for VMs are available:

VM Home
VM swap (.vswp)

VMDK (.vmdk)

Snapshots (.vmsn)

Internally, VM objects are split into multiple components that are based on performance and availability requirements that are defined in the VM storage profile. These components are distributed across multiple hosts in a cluster to tolerate simultaneous failures and meet performance requirements. VSAN uses a distributed RAID architecture to distribute data across the cluster. Components are distributed with the use of the following main techniques:

Striping (RAID 0): Number of stripes per object

Mirroring (RAID 1): Number of failures to tolerate

For more information about VMware Horizon virtual desktop types, objects, and components, see the VMware Virtual SAN Design and Sizing Guide for Horizon Virtual Desktop Infrastructures, which is available at this website: vmware.com/files/pdf/products/vsan/VMW-TMD-Virt-SAN-Dsn-Szing-Guid-Horizon-View.pdf

3.4.1 Virtual SAN Storage Policies

Virtual SAN uses the Storage Policy-based Management (SPBM) function in vSphere to enable policy-driven virtual machine provisioning, and uses vSphere APIs for Storage Awareness (VASA) to expose VSAN's storage capabilities to vCenter. This approach means that storage resources are dynamically provisioned based on the requested policy, and not pre-allocated as with many traditional storage solutions. Storage services are precisely aligned to VM boundaries; change the policy, and VSAN implements the changes for the selected VMs.

VMware Horizon has predefined storage policies and default values for linked clones and full clones. Table 2 lists the VMware Horizon default storage policies for linked clones.
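The number of failures to tolerate (FTT) in a storage policy has a direct capacity cost: with RAID 1 mirroring, each object consumes FTT + 1 copies of its data. The following is a minimal sketch of that arithmetic, assuming a mirroring-only policy; the function and the example values are illustrative and are not taken from Table 2.

```python
def vsan_raw_capacity_needed(logical_gb: float, failures_to_tolerate: int) -> float:
    """Raw capacity consumed by mirrored (RAID 1) VSAN objects.

    With RAID 1 mirroring, VSAN keeps FTT + 1 replicas of each object,
    so raw consumption is the logical size multiplied by that factor.
    """
    return logical_gb * (failures_to_tolerate + 1)

# Example: 1000 linked clones at roughly 4 GB of unique data each, FTT = 1
logical = 1000 * 4                               # 4000 GB of logical data (illustrative)
print(vsan_raw_capacity_needed(logical, 1))      # 8000 GB of raw capacity
```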

4 Operational model

This section describes the options for mapping the logical components of a client virtualization solution onto hardware and software. The Operational model scenarios section gives an overview of the available mappings and has pointers into the other sections for the related hardware. Each subsection contains performance data, recommendations on how to size for that particular hardware, and a pointer to the BOM configurations that are described in section 5 on page 45. The last part of this section contains some deployment models for example customer scenarios.

4.1 Operational model scenarios

Figure 4: Operational model scenarios

The vertical axis is split into two halves: greater than 600 users is termed Enterprise and less than 600 is termed SMB. The 600 user split is not exact and provides rough guidance between Enterprise and SMB. The last column in Figure 4 (labelled hyper-converged) spans both halves because a hyper-converged solution can be deployed in a linear fashion from a small number of users (100) up to a large number of users (>4000).

The horizontal axis is split into three columns. The left-most column represents traditional rack-based systems with top-of-rack (TOR) switches and shared storage. The middle column represents converged systems where the compute, networking, and sometimes storage are converged into a chassis, such as the Flex System. The right-most column represents hyper-converged systems and the software that is used in these systems. For the purposes of this reference architecture, the traditional and converged columns are merged for enterprise solutions; the only significant differences are the networking, form factor, and capabilities of the compute servers.

Converged systems are not generally recommended for the SMB space because the converged hardware chassis can add too much overhead when only a few compute nodes are needed. Other compute nodes in the converged chassis can be used for other workloads to make this hardware architecture more cost-effective.

The VMware ESXi 6.0 hypervisor is recommended for all operational models. Similar performance results were also achieved with the ESXi 5.5 U2 hypervisor. The ESXi hypervisor is convenient because it can boot from a USB flash drive or boot from SAN and does not require any extra local storage.

4.1.1 Enterprise operational model

For the enterprise operational model, see the following sections for more information about each component, its performance, and sizing guidance:

4.2 Compute servers for virtual desktops

4.3 Compute servers for hosted desktops

4.5 Graphics Acceleration

4.6 Management servers

4.7 Shared storage

4.8 Networking

4.9 Racks

4.10 Proxy server

To show the enterprise operational model for different sized customer environments, four different sizing models are provided for supporting 600, 1500, 4500, and 10000 users.

4.1.2 SMB operational model

Currently, the SMB model is the same as the Enterprise model for traditional systems.

4.1.3 Hyper-converged operational model

For the hyper-converged operational model, see the following sections for more information about each component, its performance, and sizing guidance:

4.4 Compute servers for hyper-converged

4.5 Graphics Acceleration

4.6 Management servers

4.8.1 10 GbE networking

4.9 Racks

4.10 Proxy server

To show the hyper-converged operational model for different sized customer environments, four different sizing models are provided for supporting 300, 600, 1500, and 3000 users. The management server VMs for a hyper-converged cluster can either be in a separate hyper-converged cluster or on traditional shared storage.

4.2 Compute servers for virtual desktops

This section describes stateless and dedicated virtual desktop models. Stateless desktops that allow live migration of a VM from one physical server to another are considered the same as dedicated desktops because they both require shared storage. In some customer environments, both stateless and dedicated desktop models might be required, which requires a hybrid implementation.

Compute servers are servers that run a hypervisor and host virtual desktops. There are several considerations for the performance of the compute server, including the processor family and clock speed, the number of processors, the speed and size of main memory, and local storage options.

The use of the Aero theme in Microsoft Windows 7 or other intensive workloads has an effect on the maximum number of virtual desktops that can be supported on each compute server. Windows 8 also requires more processor resources than Windows 7, whereas little difference was observed between 32-bit and 64-bit Windows 7. Although a slower processor can be used and still not exhaust the processor power, it is a good policy to have excess capacity.

Another important consideration for compute servers is system memory. For stateless users, the typical range of memory that is required for each desktop is 2 GB - 4 GB. For dedicated users, the range of memory for each desktop is 2 GB - 6 GB. Designers and engineers that require graphics acceleration might need 8 GB - 16 GB of RAM per desktop. In general, power users that require larger memory sizes also require more virtual processors. This reference architecture standardizes on 2 GB per desktop as the minimum requirement of a Windows 7 desktop. The virtual desktop memory should be large enough so that swapping is not needed and vSwap can be disabled.

For more information, see the BOM for enterprise and SMB compute servers section on page 45.

4.2.1 Intel Xeon E5-2600 v3 processor family servers

Table 3 lists the Login VSI performance of E5-2600 v3 processors from Intel that use the Login VSI 4.1 office worker workload with ESXi 6.0. Similar performance results were also achieved with ESXi 5.5 U2.

Table 3: Performance with office worker workload
Processor with office worker workload
Two E5-2680 v3 2.50 GHz, 12C 120W
Two E5-2690 v3 2.60 GHz, 12C 135W

These results indicate the comparative processor performance. The following conclusions can be drawn:

The performance for stateless and dedicated virtual desktops is similar.

The Xeon E5-2650v3 processor has performance that is similar to the previously recommended Xeon E5-2690v2 processor (IvyBridge), but uses less power and is less expensive.

The Xeon E5-2690v3 processor does not have significantly better performance than the Xeon E5-2680v3 processor; therefore, the E5-2680v3 is preferred because of the lower cost.

Between the Xeon E5-2650v3 (2.30 GHz, 10C 105W) and the Xeon E5-2680v3 (2.50 GHz, 12C 120W) series processors are the Xeon E5-2660v3 (2.6 GHz, 10C 105W) and the Xeon E5-2670v3 (2.3 GHz, 12C 120W) series processors. The cost per user increases with each processor but with a corresponding increase in user density. The Xeon E5-2680v3 processor has good user density, but the significant increase in cost might outweigh this advantage. Also, many configurations are bound by memory; therefore, a faster processor might not provide any added value. Some users require the fastest processor and for those users, the Xeon E5-2680v3 processor is the best choice. However, the Xeon E5-2650v3 processor is recommended for an average configuration.

Previous reference architectures used Login VSI 3.7 medium and heavy workloads. Table 5 gives a comparison with the newer Login VSI 4.1 office worker and knowledge worker workloads. The table shows that Login VSI 3.7 is on average 20% to 30% higher than Login VSI 4.1.

Table 5: Comparison of Login VSI 3.7 and 4.1 Workloads
Processor                            Workload              Stateless    Dedicated
Two E5-2650 v3 2.30 GHz, 10C 105W    4.1 Office worker     188 users    197 users
Two E5-2650 v3 2.30 GHz, 10C 105W    3.7 Medium            254 users    260 users
Two E5-2690 v3 2.60 GHz, 12C 135W    4.1 Office worker     243 users    246 users
Two E5-2690 v3 2.60 GHz, 12C 135W    3.7 Medium            316 users    313 users
Two E5-2690 v3 2.60 GHz, 12C 135W    4.1 Knowledge worker  191 users    200 users
Two E5-2690 v3 2.60 GHz, 12C 135W    3.7 Heavy             275 users    277 users

Table 6 compares the E5-2600 v3 processors with the previous generation E5-2600 v2 processors by using the Login VSI 3.7 workloads to show the relative performance improvement. On average, the E5-2600 v3 processors are 25% - 30% faster than the previous generation with the equivalent processor names.

Processor                            Workload      Stateless    Dedicated
Two E5-2650 v2 2.60 GHz, 8C 85W
Two E5-2650 v3 2.30 GHz, 10C 105W
Two E5-2690 v2 3.0 GHz, 10C 130W
Two E5-2690 v3 2.60 GHz, 12C 135W    3.7 Medium    316 users    313 users
Two E5-2690 v2 3.0 GHz, 10C 130W     3.7 Heavy     208 users    220 users
Two E5-2690 v3 2.60 GHz, 12C 135W    3.7 Heavy     275 users    277 users

The default recommendation for this processor family is the Xeon E5-2650v3 processor and 512 GB of system memory because this configuration provides the best coverage for a range of users. For users who need VMs that are larger than 3 GB, Lenovo recommends the use of 768 GB and the Xeon E5-2680v3 processor.

Lenovo testing shows that 150 users per server is a good baseline and has an average of 76% usage of the processors in the server. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 180 users per server have an average of 89% usage of the processor. It is important to keep this 25% headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1.

Table 7 lists the processor usage with ESXi for the recommended user counts for normal mode and failover mode.

Table 7: Processor usage
Processor         Workload            Users per Server     Stateless Utilization    Dedicated Utilization
Two E5-2650 v3    Office worker       150 normal mode      79%                      78%
Two E5-2650 v3    Office worker       180 failover mode    86%                      86%
Two E5-2680 v3    Knowledge worker    150 normal mode      76%                      74%
Two E5-2680 v3    Knowledge worker    180 failover mode    92%                      90%
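The failover figures above follow from the 5:1 ratio: in a group of six servers, each normally hosts 150 users; if one fails, its users are spread over the remaining five, raising each to 180. A minimal sketch of that relationship, assuming the simple even-redistribution model described here; the function name is illustrative:

```python
import math

def failover_density(normal_users_per_server: int, failover_ratio: int) -> int:
    """Users per server after one failure, given an N:1 failover ratio.

    With N active servers backed by one spare, a failed server's users are
    spread over the remaining N servers, so density grows by (N + 1) / N.
    """
    return math.ceil(normal_users_per_server * (failover_ratio + 1) / failover_ratio)

print(failover_density(150, 5))   # 180, matching Table 7
print(failover_density(170, 5))   # 204, matching the hosted-desktop sizing later
```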

Table 8 lists the recommended number of virtual desktops per server for different VM memory sizes. The number of users is reduced in some cases to fit within the available memory and still maintain a reasonably balanced system of compute and memory.

Table 8: Recommended number of virtual desktops per server
Processor                              E5-2650v3         E5-2650v3    E5-2680v3
VM memory size                         2 GB (default)    3 GB         4 GB
System memory                          384 GB            512 GB       768 GB
Desktops per server (normal mode)      150               140          150
Desktops per server (failover mode)    180               168          180
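The user counts in Table 8 reflect whichever limit is hit first: the processor-derived density (150 users) or the number of VMs that fit in system memory. A minimal sketch of that check, assuming a nominal hypervisor overhead (the overhead value is illustrative, not a figure from this document):

```python
def desktops_per_server(system_memory_gb: int, vm_memory_gb: int,
                        cpu_limit: int = 150, hypervisor_overhead_gb: int = 4) -> int:
    """Desktops per server: the smaller of the CPU-bound and memory-bound limits."""
    memory_limit = (system_memory_gb - hypervisor_overhead_gb) // vm_memory_gb
    return min(cpu_limit, memory_limit)

print(desktops_per_server(384, 2))   # 150: 2 GB desktops on 384 GB are CPU bound
print(desktops_per_server(768, 4))   # 150: 4 GB desktops on 768 GB are also CPU bound
# The 3 GB column in Table 8 is further reduced to 140 desktops to keep memory headroom.
```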

Table 9 lists the approximate number of compute servers that are needed for different numbers of users and VM sizes.

Table 9: Compute servers needed for different numbers of users and VM sizes
Desktop memory size (2 GB or 4 GB)       600 users    1500 users    4500 users    10000 users
Compute servers @150 users (normal)                   10            30            68
Compute servers @180 users (failover)                               25            56
Failover ratio                           4:1          4:1           5:1           5:1

Desktop memory size (3 GB)               600 users    1500 users    4500 users    10000 users
Compute servers @140 users (normal)                   11            33            72
Compute servers @168 users (failover)                               27            60
Failover ratio                           4:1          4.5:1         4.5:1         5:1
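The server counts in Table 9 are essentially a ceiling division of the user count by the per-server density for normal and failover operation. A minimal sketch of that calculation; the helper name is illustrative:

```python
import math

def servers_needed(users: int, users_per_server_normal: int,
                   users_per_server_failover: int) -> dict:
    """Servers required to host a user population in normal and failover modes."""
    return {
        "normal": math.ceil(users / users_per_server_normal),
        "failover": math.ceil(users / users_per_server_failover),
    }

# 4500 users at 150 users per server (normal) and 180 users per server (failover)
print(servers_needed(4500, 150, 180))   # {'normal': 30, 'failover': 25}, as in Table 9
```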

For stateless desktops, local SSDs can be used to store the VMware replicas and linked clones for improved performance. Two replicas must be stored for each master image. Each stateless virtual desktop requires a linked clone, which tends to grow over time until it is refreshed at log out. Two enterprise high speed 200 GB SSDs in a RAID 0 configuration should be sufficient for most user scenarios; however, 400 GB or even 800 GB SSDs might be needed. Because of the stateless nature of the architecture, there is little added value in configuring reliable SSDs in more redundant configurations.

4.2.2 Intel Xeon E5-2600 v2 processor family servers with Atlantis ILIO

Atlantis ILIO provides storage optimization by using a 100% software solution. There is a cost for processor and memory usage while offering decreased storage usage and increased input/output operations per second (IOPS). This section contains performance measurements for processor and memory utilization of ILIO technology and gives an indication of the storage usage and performance. Dedicated and stateless virtual desktops have different performance measurements and recommendations.

VMs under ILIO are deployed on a per server basis. It is also recommended to use a separate storage logical unit number (LUN) for each ILIO VM to support failover. Therefore, the performance measurements and recommendations in this section are on a per server basis. Note that these measurements are currently for the E5-2600 v2 processor using Login VSI 3.7.

Dedicated virtual desktops

For environments that are not using Atlantis ILIO, it is recommended to use linked clones to conserve shared storage space. However, with Atlantis ILIO, it is recommended to use full clones for persistent desktops because they de-duplicate more efficiently than the linked clones and can support more desktops per server. ILIO Persistent VDI with disk-backed mode (USX simple hybrid volume) is used for dedicated virtual desktops. The memory that is required for in-memory mode is high and is not examined further in this version of the reference architecture. Table 10 shows the Login VSI performance with and without the ILIO Persistent VDI disk-backed solution on ESXi 5.5.

Table 10: Performance of persistent desktops with ILIO Persistent VDI
Processor                    Workload    Dedicated    Dedicated with ILIO Persistent VDI
Two E5-2650v2 8C 2.7 GHz     Medium      205 users    189 users
Two E5-2690v2 10C 3.0 GHz    Medium      260 users    232 users
Two E5-2690v2 10C 3.0 GHz    Heavy       220 users    198 users

On average, there is a difference of 20% - 30% that can be attributed to the work that is done by the two vCPUs of the Atlantis ILIO VM. It is recommended that higher-end processors (such as the E5-2690v2) are used to maximize density.

The ILIO Persistent VDI VM uses 5 GB of RAM. In addition, the ILIO RAM cache requires more RAM and Atlantis Computing provides a calculator for this RAM. Lenovo testing found that 275 VMs used 35 GB out of the 50 GB RAM. In practice, most servers host fewer VMs, but each VM is much larger. Proof of concept (POC) testing can help determine the amount of RAM, but for most situations 50 GB of RAM should be sufficient. Assuming 4 GB for the hypervisor, 59 GB (50 + 5 + 4) of system memory should be reserved. It is recommended that at least 384 GB of server memory is used for ILIO Persistent VDI deployments.

Table 11 lists the recommended number of virtual desktops per server for different VM memory sizes for a medium workload. This configuration can be a more cost-effective, higher-density route for larger VMs that balance RAM and processor utilization.

Table 11: Recommended number of virtual desktops per server with ILIO Persistent VDI
Processor                              E5-2690v2         E5-2690v2    E5-2690v2
VM memory size                         2 GB (default)    3 GB         4 GB
Total system memory                    384 GB            512 GB       768 GB
Reserved system memory                 59 GB             59 GB        59 GB
System memory for desktop VMs          325 GB            452 GB       709 GB
Desktops per server (normal mode)      125               125          125
Desktops per server (failover mode)    150               150          150
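The reserved-memory row in Table 11 is the sum of the ILIO VM itself (5 GB), its RAM cache (50 GB in this sizing), and the hypervisor (4 GB); the remainder is what is available for desktop VMs. A minimal sketch of that arithmetic; the function and defaults simply restate the figures above and should be replaced by the Atlantis calculator results for a real design:

```python
def ilio_persistent_vdi_memory(system_memory_gb: int, vm_memory_gb: int,
                               ilio_vm_gb: int = 5, ilio_ram_cache_gb: int = 50,
                               hypervisor_gb: int = 4) -> dict:
    """Memory left for desktops after the ILIO Persistent VDI reservation."""
    reserved = ilio_vm_gb + ilio_ram_cache_gb + hypervisor_gb        # 59 GB in this sizing
    for_desktops = system_memory_gb - reserved
    return {"reserved_gb": reserved,
            "desktop_memory_gb": for_desktops,
            "memory_bound_desktops": for_desktops // vm_memory_gb}

print(ilio_persistent_vdi_memory(384, 2))   # 59 GB reserved, 325 GB left for desktop VMs
```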

Table 12 lists the number of compute servers that are needed for different numbers of users and VM sizes. A server with 384 GB system memory is used for 2 GB VMs, 512 GB system memory is used for 3 GB VMs, and 768 GB system memory is used for 4 GB VMs.

Table 12: Compute servers needed for different numbers of users with ILIO Persistent VDI
                                            600 users    1500 users    4500 users    10000 users
Compute servers for 125 users (normal)                   12            36            80
Compute servers for 150 users (failover)                 10            30            67
Failover ratio                              4:1          5:1           5:1           4:1

The amount of disk storage that is used depends on several factors, including the size of the original image, the amount of user unique storage, and the de-duplication and compression ratios that can be achieved.

Here is a best case example: A Windows 7 image uses 21 GB out of an allocated 30 GB. For 160 VMs that are using full clones, the actual storage space that is needed is 3360 GB. For ILIO, the storage space that is used is 60 GB out of an allocated datastore of 250 GB. This configuration is a saving of 98% and is the best case, even if you add the 50 GB of disk space that is needed by the ILIO VM.

It is still a best practice to separate the user folder and any other shared folders into separate storage. That leaves all of the other possible changes that might occur in a full clone to be stored in the ILIO data store. This configuration is highly dependent on the environment. Testing by Atlantis Computing suggests that 3.5 GB of unique data per persistent VM is sufficient. Comparing against the 4800 GB that is needed for 160 full clone VMs, this configuration still represents a saving of 88%. It is recommended to reserve 10% - 20% of the total storage that is required for the ILIO data store.

As a result of the use of ILIO Persistent VDI, the only read operations are to fill the cache for the first time. For all practical purposes, the remaining reads are few and at most 1 IOPS per VM. Writes to persistent storage are still needed for starting, logging in, remaining in steady state, and logging off, but the overall IOPS count is substantially reduced.

Assuming the use of a fast, low-latency shared storage device, such as the IBM FlashSystem 840 system, a single VM boot can take 20 - 25 seconds to get past the display of the logon window and get all of the other services fully loaded. This process takes this time because boot operations are mainly read operations, although the actual boot time can vary depending on the VM.

Login time for a single desktop varies depending on the VM image but can be extremely quick. In some cases, the login will take less than 6 seconds. Scale-out testing across a cluster of servers shows that one new login every 6 seconds can be supported over a long period. Therefore, at any one instant, there can be multiple logins underway and the main bottleneck is the processor.
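The percentages in this example follow directly from the sizes quoted above. A minimal sketch of the arithmetic, using the figures from this section (the 88% comparison is against the 30 GB allocated size, that is, 160 x 30 GB = 4800 GB):

```python
def storage_saving(full_clone_gb: float, ilio_gb: float) -> float:
    """Percentage saved by the ILIO data store versus full-clone storage."""
    return 100.0 * (1 - ilio_gb / full_clone_gb)

# Best case: 160 full clones at 21 GB used = 3360 GB, versus 60 GB in the ILIO data store
print(round(storage_saving(160 * 21, 60 + 50)))        # ~97%, even with the 50 GB ILIO VM disk
# Conservative case: 3.5 GB of unique data per VM plus the ILIO VM, versus 160 x 30 GB allocated
print(round(storage_saving(160 * 30, 160 * 3.5 + 50))) # ~87%, close to the 88% quoted above
```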

Stateless virtual desktops

Two different options were tested for stateless virtual desktops: one is ILIO Persistent VDI with disk-backed mode to local SSDs, and the other is ILIO Diskless VDI (USX simple in-memory volume) to server memory without compression. For ILIO Persistent VDI, the difference data is stored on the local SSDs as before. For ILIO Diskless VDI, it is important to issue a SnapClone to a backing store so that the diskless VMs do not need to be re-created each time the ILIO Diskless VM is started. Table 13 lists the Login VSI performance with and without the ILIO VM on ESXi 5.5.

Table 13: Performance of stateless desktops
Processor                    Workload    Stateless    Stateless with ILIO Persistent VDI with local SSD    Stateless with ILIO Diskless VDI
Two E5-2650v2 8C 2.7 GHz     Medium      202 users    181 users                                            159 users
Two E5-2690v2 10C 3.0 GHz    Medium      240 users    227 users                                            224 users
Two E5-2690v2 10C 3.0 GHz    Heavy       208 users    196 users                                            196 users

On average, there is a difference of 20% - 35% that can be attributed to the work that is done by the two vCPUs of the Atlantis ILIO VM. It is recommended that higher-end processors (such as the E5-2690v2) are used to maximize density. The maximum number of users that is supported is slightly higher for ILIO Diskless VDI, but the RAM requirement is also much higher.

For the ILIO Persistent VDI that uses local SSDs, the memory calculation is similar to that for persistent virtual desktops. It is recommended that at least 384 GB of server memory is used for ILIO Persistent VDI deployments. For more information about recommendations for ILIO Persistent VDI that uses local SSDs for stateless virtual desktops, see Table 11 and Table 12. The same configuration can also be used for stateless desktops with shared storage; however, the performance of the write operations likely becomes much worse.

The ILIO Diskless VDI VM uses 5 GB of RAM. In addition, the ILIO RAM cache and RAM data store require extra RAM. Atlantis Computing provides a calculator for this RAM. Lenovo testing found that 230 VMs used 69 GB of RAM. In practice, most servers host fewer VMs and each VM has more differences. POC testing can help determine the amount of RAM, but 128 GB should be sufficient for most situations. Assuming 4 GB for the hypervisor, 137 GB (128 + 5 + 4) of system memory should be reserved. In general, it is recommended that a minimum of 512 GB of server memory is used for ILIO Diskless VDI deployments.

Table 14 lists the recommended number of stateless virtual desktops per server for different VM memory sizes for a medium workload.

Table 14: Recommended number of virtual desktops per server with ILIO Diskless VDI
Processor                              E5-2690v2         E5-2690v2    E5-2690v2
VM memory size                         2 GB (default)    3 GB         4 GB
Total system memory                    512 GB            512 GB       768 GB
Reserved system memory                 137 GB            137 GB       137 GB
System memory for desktop VMs          375 GB            375 GB       631 GB
Desktops per server (normal mode)      125               100          125
Desktops per server (failover mode)    150               125          150

Table 15 shows the number of compute servers that are needed for different numbers of users and VM sizes. A server with 512 GB system memory is used for 2 GB and 3 GB VMs, and 768 GB system memory is used for 4 GB VMs.

Table 15: Compute servers needed for different numbers of users with ILIO Diskless VDI
Desktop memory size (2 GB or 4 GB)          600 users    1500 users    4500 users    10000 users
Compute servers for 125 users (normal)                   11            30            67
Compute servers for 150 users (failover)                               25            56
Failover ratio                              4:1          4.5:1         5:1           5:1

Desktop memory size (3 GB)                  600 users    1500 users    4500 users    10000 users
Compute servers for 100 users (normal)                   12            36            80
Compute servers for 125 users (failover)

Disk storage is needed for the master images and each SnapClone data store for ILIO Diskless VDI VMs. This storage does not need to be fast because it is used only to initially load the master image or to recover an ILIO Diskless VDI VM that was rebooted.

As with persistent virtual desktops, the addition of the ILIO technology reduces the IOPS that are needed for boot, login, remaining in steady state, and logoff. This reduces the time to bring a VM online and reduces user response time.

4.3 Compute servers for hosted desktops

This section describes compute servers for hosted desktops, which is a new feature in VMware Horizon 6.x. Hosted desktops are more suited to task workers that require little in the way of desktop customization.

As the name implies, multiple desktops share a single VM; however, because of this sharing, the compute resources often are exhausted before memory. Lenovo testing showed that 128 GB of memory is sufficient for servers with two processors. Other testing showed that the performance difference between four, six, or eight VMs is minimal; therefore, four VMs are recommended to reduce the license costs for Windows Server 2012 R2.

For more information, see the BOM for hosted desktops section on page 50.
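With four Windows Server VMs per host and a per-host session target, the per-VM session count and memory allocation follow directly. A minimal sketch, assuming the 128 GB host memory and the 170-user baseline described in this section; the small hypervisor allowance is an illustrative assumption:

```python
def hosted_desktop_layout(users_per_host: int, vms_per_host: int = 4,
                          host_memory_gb: int = 128, hypervisor_gb: int = 4) -> dict:
    """Sessions and memory per RDSH VM for a hosted-desktop compute server."""
    return {
        "sessions_per_vm": -(-users_per_host // vms_per_host),        # ceiling division
        "memory_per_vm_gb": (host_memory_gb - hypervisor_gb) // vms_per_host,
    }

print(hosted_desktop_layout(170))   # ~43 sessions and ~31 GB per VM (normal mode)
print(hosted_desktop_layout(204))   # ~51 sessions per VM (failover mode)
```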

4.3.1 Intel Xeon E5-2600 v3 processor family servers

Table 16 lists the processor performance results for different size workloads that use four Windows Server 2012 R2 VMs with the Xeon E5-2600 v3 series processors and the ESXi 6.0 hypervisor.

Table 16: Performance results for hosted desktops using the E5-2600 v3 processors
Processor                            Workload            Hosted Desktops
Two E5-2650 v3 2.30 GHz, 10C 105W    Office Worker       222 users
Two E5-2690 v3 2.60 GHz, 12C 135W    Office Worker       298 users
Two E5-2690 v3 2.60 GHz, 12C 135W    Knowledge Worker    244 users

Lenovo testing shows that 170 hosted desktops per server is a good baseline. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo recommends 204 hosted desktops per server. It is important to keep a 25% headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1.

Table 17 lists the processor usage for the recommended number of users.

Table 17: Processor usage
Processor         Workload            Users per Server     Utilization
Two E5-2650 v3    Office worker       170 normal mode      73%
Two E5-2650 v3    Office worker       204 failover mode    81%
Two E5-2690 v3    Knowledge worker    170 normal mode      64%
Two E5-2690 v3    Knowledge worker    204 failover mode

Table 18 lists the number of compute servers that are needed for different numbers of users. Each compute server has 128 GB of system memory for the four VMs.

Table 18: Compute servers needed for different numbers of users and VM sizes
                                            600 users    1500 users    4500 users    10000 users
Compute servers for 170 users (normal)                   10            27            59
Compute servers for 204 users (failover)                               22            49
Failover ratio                              3:1          4:1           4.5:1         5:1

4.3.2 Intel Xeon E5-2600 v2 processor family servers with Atlantis ILIO

Atlantis ILIO provides in-memory storage optimization by using a 100% software solution. There is an effect on processor and memory usage while offering decreased storage usage and increased IOPS. This section contains performance measurements for processor and memory utilization of ILIO technology and describes the storage usage and performance.

VMs under ILIO are deployed on a per server basis. It is recommended to use a separate storage LUN for each ILIO VM to support failover. The performance measurements and recommendations in this section are on a per server basis. Note that these measurements are currently for the E5-2600 v2 processor using Login VSI 3.7.

The performance measurements and recommendations are for the use of ILIO Persistent VDI with hosted desktops. Table 19 lists the processor performance results for the Xeon E5-2600 v2 series of processors.

Table 19: Performance results for hosted desktops using the E5-2600 v2 processors
Workload    Processor        RDP desktops    RDP desktops with ILIO Persistent VM    PCoIP desktops    PCoIP desktops with ILIO Persistent VM
Medium      Two E5-2690v2    266 users       243 users                               220 users         205 users
Heavy       Two E5-2690v2    213 users       197 users                               173 users         163 users

On average, there is a difference of 20% - 30% that can be attributed to work that is done by the two vCPUs of the Atlantis ILIO VM. It is recommended that higher-end processors, such as the E5-2690v2, are used to maximize density.

The ILIO Persistent VDI VM uses 5 GB of RAM. In addition, the ILIO RAM cache requires more RAM. Atlantis Computing provides a calculator for this RAM. Lenovo testing found that the four VMs used 32 GB. In practice, most servers host fewer VMs and each VM is much larger. POC testing can help determine the amount of RAM. However, for most circumstances, 60 GB should be enough. It is recommended that at least 192 GB of server memory is used for ILIO Persistent VDI deployments of hosted desktops.

Table 20 shows the recommended number of shared hosted desktops per compute server that uses two Xeon E5-2690v2 series processors, which allows for some processor headroom for the hypervisor and a 5:1 failover ratio in the compute servers.

Table 20: Recommended number of hosted desktops per server
Workload    Normal case    Normal utilization    Failover case    Failover utilization
Medium      150            73%                   180              88%
Heavy       160            73%                   192              87%

Table 21 shows the number of compute servers that is needed for different numbers of users. Each compute server has 256 GB of system memory for the four VMs and the ILIO Persistent VDI VM.

Table 21: Compute servers needed for different numbers of users and VM sizes
                                            600 users    1500 users    4500 users    10000 users
Compute servers for 150 users (normal)                   10            30            67
Compute servers for 180 users (failover)                               25            56
Failover ratio                              4:1          4:1           5:1           5:1

The amount of disk storage that is used depends on several factors, including the size of the original Windows Server image, the amount of unique storage, and the de-duplication and compression ratios that can be achieved. A Windows 2008 R2 image uses 19 GB. For four VMs, the actual storage space that is needed is 76 GB. For ILIO, the storage space that is used is 25 GB, which is a saving of 67%.

As a result of the use of ILIO Persistent VDI, the only read I/O operations that are needed are those to fill the cache for the first time. For all practical purposes, the remaining reads are few and at most 1 IOPS per VM. Writes to persistent storage are still needed for booting, logging in, remaining in steady state, and logging off, but the overall IOPS count is substantially reduced.

4.4 Compute servers for hyper-converged systems

This section presents the compute servers for different hyper-converged systems, including VMware VSAN and Atlantis USX. Additional processing and memory are often required to support storing data locally, as well as additional SSDs or HDDs. Typically, HDDs are used for data capacity, and SSDs and memory are used to provide performance. As the price per GB for flash memory continues to fall, there is a trend to also use all SSDs to provide the overall best performance for hyper-converged systems.

For more information, see the BOM for hyper-converged compute servers on page 54.

4.4.1 Intel Xeon E5-2600 v3 processor family servers with VMware VSAN

VMware VSAN is tested by using the office worker and knowledge worker workloads of Login VSI 4.1. Four Lenovo x3650 M5 servers with E5-2680v3 processors were networked together by using a 10 GbE TOR switch with two 10 GbE connections per server.

Server Performance and Sizing Recommendations

Each server was configured with two disk groups of different sizes. A single disk group does not provide the necessary resiliency. The two disk groups per server had one 400 GB SSD and three or six HDDs. Both disk groups provided more than enough capacity for the linked clone VMs.

Table 22 lists the Login VSI results for stateless desktops by using linked clones and the VMware default storage policy of number of failures to tolerate (FTT) of 0 and stripe of 1.

Table 22: Performance results for stateless desktops
Storage Policy
Workload

VSI max for 2 disk groups

Dedicated

Used Capacity

1 SSD and

1 SSD and

6 HDDs

3 HDDs

1000

10.86 TB

905

895

800

9.28 TB

700

668

These results show that there is no significant performance difference between disk groups with three and six HDDs and there are enough IOPS for the disk writes. Persistent desktops might need more hard drive capacity or larger SSDs to improve performance for full clones or to provide space for growth of linked clones.

The Lenovo M5210 RAID controller for the x3650 M5 can be used in two modes: integrated MegaRAID (iMR) mode without the flash cache module or MegaRAID (MR) mode with the flash cache module and at least 1 GB of battery-backed flash memory. In both modes, RAID 0 virtual drives were configured for use by the disk groups. For more information, see this website: lenovopress.com/tips1069.html.

Table 24 lists the measured queue depth for the M5210 RAID controller in iMR and MR modes. Lenovo recommends using the M5210 RAID controller with the flash cache module because it has a much better queue depth and better IOPS performance.

Table 24: RAID Controller Queue Depth
RAID Controller    iMR mode queue depth    MR mode queue depth    Drive queue depth
M5210              234                     895                    128

Lenovo testing shows that 125 users per server is a good baseline and has an average of 77% usage of the processors in the server. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 150 users per server have an average usage rate of 89%. It is important to keep 25% headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1.

Table 25 lists the processor usage for the recommended number of users.

Table 25: Processor usage
Processor         Workload            Users per Server     Stateless Utilization    Dedicated Utilization
Two E5-2680 v3    Office worker       125 normal mode      51%                      50%
Two E5-2680 v3    Office worker       150 failover mode    62%                      59%
Two E5-2680 v3    Knowledge worker    125 normal mode      62%                      61%
Two E5-2680 v3    Knowledge worker    150 failover mode    81%                      78%

Table 26 lists the recommended number of virtual desktops per server for different VM memory sizes. The number of users is reduced in some cases to fit within the available memory and still maintain a reasonably balanced system of compute and memory.

Table 26: Recommended number of virtual desktops per server for VSAN
Processor                              E5-2680 v3        E5-2680 v3    E5-2680 v3
VM memory size                         2 GB (default)    3 GB          4 GB
System memory                          384 GB            512 GB        768 GB
Desktops per server (normal mode)      125               125           125
Desktops per server (failover mode)    150               150           150

Table 27 shows the number of servers that is needed for different numbers of users. By using the target of 125 users per server, the maximum number of users is 4000. The minimum number of servers that is required for VSAN is 3, and this requirement is reflected in the extra capacity for the 300 user case because the configuration can actually support up to 450 users.

Table 27: Compute servers needed for different numbers of users for VSAN
                                            300 users    600 users    1500 users    3000 users
Compute servers for 125 users (normal)                                12            24
Compute servers for 150 users (failover)                              10            20
Failover ratio                              3:1          4:1          5:1           5:1

The processor and I/O usage graphs from esxtop are helpful to understand the performance characteristics of VSAN. The graphs are for 150 users per server and three servers to show the worst-case load in a failover scenario.

Figure 5 shows the processor usage with three curves (one for each server). The Y axis is percentage usage 0% - 100%. The curves have the classic Login VSI shape with a gradual increase of processor usage that then stays flat during the steady state period. The curves then go close to 0 as the logoffs are completed.

Stateless 450 users on 3 servers

Dedicated 450 users on 3 servers

Figure 5: VSAN processor usage for 450 virtual desktops

Figure 6 shows the SSD reads and writes with six curves, one for each SSD. The Y axis is 0 - 10,000 IOPS for the reads and 0 - 3,000 IOPS for the writes. The read curves generally show a gradual increase of reads until the steady state and then drop off again for the logoff phase. This pattern is more well-defined for the second set of curves for the SSD writes.

Stateless 450 users on 3 servers

Dedicated 450 users on 3 servers

Figure 6: VSAN SSD reads and writes for 450 virtual desktops

Figure 7 shows the HDD reads and writes with 36 curves, one for each HDD. The Y axis is 0 - 2,000 IOPS for the reads and 0 - 1,000 IOPS for the writes. The number of read IOPS has an average peak of 200 IOPS and many of the drives are idle much of the time. The number of write IOPS has a peak of 500 IOPS; however, as can be seen, the writes occur in batches as data is destaged from the SSD cache onto the greater capacity HDDs. The first group of write peaks is during the logon period and the last group of write peaks corresponds to the logoff period.

Stateless 450 users on 3 servers

Dedicated 450 users on 3 servers

Figure 7: VSAN HDD reads and writes for 450 virtual desktops

VSAN Resiliency Tests

An important part of a hyper-converged system is the resiliency to failures when a compute server is unavailable. System performance was measured for the following use cases that featured 450 users:

Enter maintenance mode by using the VSAN migration mode of no data migration

Server power off by using the VSAN migration mode of ensure accessibility (VSAN default)

For each use case, Login VSI was run and then the compute server was removed. This process was done during the login phase as new virtual desktops are logged in and during the steady state phase. For the steady state phase, 114 - 120 VMs must be migrated from the failed server to the other three servers with each server gaining 38 - 40 VMs.

Table 28 lists the completion time or downtime for the VSAN system with the three different use cases.

Table 28: VSAN Resiliency Testing and Recovery Time
Use Case            vSAN Migration Mode     Login Phase    Steady State Phase
Maintenance Mode    Ensure Accessibility    605            598
Maintenance Mode    No Data Migration       408            430
Server power off    Ensure Accessibility    N/A, 226       N/A, 316

For the two maintenance mode cases, all of the VMs migrated smoothly to the other servers and there was no significant interruption in the Login VSI test.

For the power off use case, there is a significant period for the system to readjust. During the login phase for a Login VSI test, the following process was observed:
1. All logged in users were logged out from the failed node.
2. Login failed for all new users logging in to desktops running on the failed node.
3. Desktop status changed to "Agent Unreachable" for all desktops on the failed node.
4. All desktops were migrated to other nodes.
5. Desktop status changed to "Available".
6. Logins continued successfully for all new users.

In a production system, users with persistent desktops that are running on the failed server must log in again after their VM is successfully migrated to another server. Stateless users can continue working almost immediately, assuming that the system is not at full capacity and other stateless VMs are ready to be used.

Figure 8 shows the processor usage for the four servers during the login phase and steady state phase by using Login VSI with the knowledge worker workload when one of the servers is powered off. The processor spike for the three remaining servers is apparent.

Login Phase

Steady State Phase

Figure 8: VSAN processor utilization: Server power off

There is an impact on performance and a time lag if a hyper-converged server suffers a catastrophic failure, yet VSAN can recover quite quickly. However, this situation is best avoided and it is important to build in redundancy at multiple levels for all mission critical systems.

4.4.2 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX

Atlantis USX is tested by using the knowledge worker workload of Login VSI 4.1. Four Lenovo x3650 M5 servers with E5-2680v3 processors were networked together by using a 10 GbE TOR switch. Atlantis USX was installed and four 400 GB SSDs per server were used to create an all-flash hyper-converged volume across the four servers that were running ESXi 5.5 U2.

This configuration was tested with 500 dedicated virtual desktops on four servers and then three servers to see the difference if one server is unavailable. Table 29 lists the processor usage for the recommended number of users.

Table 29: Processor usage for Atlantis USX
Processor         Workload            Servers    Users per Server     Utilization
Two E5-2680 v3    Knowledge worker    4          125 normal mode      66%
Two E5-2680 v3    Knowledge worker    3          167 failover mode    89%

From these measurements, Lenovo recommends 125 users per server in normal mode and 150 users per server in failover mode. Lenovo recommends a general failover ratio of 5:1.

Table 30 lists the recommended number of virtual desktops per server for different VM memory sizes.

Table 30: Recommended number of virtual desktops per server for Atlantis USX
Processor                              E5-2680 v3        E5-2680 v3    E5-2680 v3
VM memory size                         2 GB (default)    3 GB          4 GB
System memory                          384 GB            512 GB        768 GB
Memory for ESXi and Atlantis USX       63 GB             63 GB         63 GB
Memory for virtual machines            321 GB            449 GB        705 GB
Desktops per server (normal mode)      125               125           125
Desktops per server (failover mode)    150               150           150

Table 31 lists the approximate number of compute servers that are needed for different numbers of users and VM sizes.

Table 31: Compute servers needed for different numbers of users for Atlantis USX
Desktop memory size                         300 users    600 users    1500 users    3000 users
Compute servers for 125 users (normal)                                12            24
Compute servers for 150 users (failover)

An important part of a hyper-converged system is the resiliency to failures when a compute server is unavailable. Login VSI was run and then the compute server was powered off. This process was done during the steady state phase. For the steady state phase, 114 - 120 VMs were migrated from the failed server to the other three servers with each server gaining 38 - 40 VMs.

Figure 9 shows the processor usage for the four servers during the steady state phase and when one of the servers is powered off. The processor spike for the three remaining servers is noticeable.

Figure 9: Atlantis USX processor usage: server power off

There is an impact on performance and a time lag if a hyper-converged server suffers a catastrophic failure, yet Atlantis USX can recover quite quickly. However, this situation is best avoided as it is important to build in redundancy at multiple levels for all mission critical systems.

4.5 Graphics Acceleration

The VMware ESXi 6.0 hypervisor supports the following options for graphics acceleration:

Dedicated GPU with one GPU per user, which is called virtual dedicated graphics acceleration (vDGA) mode.

Shared GPU with users sharing a GPU, which is called virtual shared graphics acceleration (vSGA) mode and is not recommended because of user contention for shared use of the GPU.

GPU hardware virtualization (vGPU) that partitions each GPU for 1 - 8 users. This option requires Horizon 6.1 and is not considered in this release of the Reference Architecture.

VMware also provides software emulation of a GPU, which can be processor-intensive and disruptive to other users, who get a choppy experience because of the reduced processor performance. Software emulation is not recommended for any user who requires graphics acceleration.

The performance of graphics acceleration was tested on the NVIDIA GRID K1 and GRID K2 adapters by using the Lenovo System x3650 M5 server and the Lenovo NeXtScale nx360 M5 server. Each of these servers supports up to two GRID adapters. No significant performance differences were found between these two servers when used for graphics acceleration, and the results apply to both.

Because the vDGA option offers a low user density (8 for GRID K1 and 4 for GRID K2), it is recommended that this configuration is used only for power users, designers, engineers, or scientists that require powerful graphics acceleration. Horizon 6.1 is needed to support higher user densities of up to 64 users per server with two GRID K1 adapters by using the hardware virtualization of the GPU (vGPU mode).

Lenovo recommends that a high powered CPU, such as the E5-2680v3, is used for vDGA and vGPU because accelerated graphics tends to put an extra load on the processor. For the vDGA option, with only four or eight users per server, 128 GB of server memory should be sufficient even for the high end GRID K2 users who might need 16 GB or even 24 GB per VM.

The Heaven benchmark is used to measure the per user frame rate for different GPUs, resolutions, and image quality. This benchmark is graphics-heavy and is fairly realistic for designers and engineers. Power users or knowledge workers usually have less intense graphics workloads and can achieve higher frame rates.

Table 32 lists the results of the Heaven benchmark as frames per second (FPS) that are available to each user with the GRID K1 adapter by using vDGA mode with DirectX 11.

Table 32: Performance of GRID K1 vDGA mode by using DirectX 11
Quality    Tessellation    Anti-Aliasing    Resolution    FPS
High       Normal                           1024x768      15.8
High       Normal                           1280x768      13.1
High       Normal                           1280x1024     11.1

Table 33 lists the results of the Heaven benchmark as FPS that is available to each user with the GRID K2 adapter by using vDGA mode with DirectX 11.

Table 33: Performance of GRID K2 vDGA mode by using DirectX 11
Quality    Tessellation    Anti-Aliasing    Resolution    FPS
Ultra      Extreme                          1680x1050     28.4
Ultra      Extreme                          1920x1080     24.9
Ultra      Extreme                          1920x1200     22.9
Ultra      Extreme                          2560x1600     13.8

The GRID K2 GPU has more than twice the performance of the GRID K1 GPU, even with the higher quality, tessellation, and anti-aliasing options. This result is expected because of the relative performance characteristics of the GRID K1 and GRID K2 GPUs. The frame rate decreases as the display resolution increases.

Because there are many variables when graphics acceleration is used, Lenovo recommends that testing is done in the customer environment to verify the performance for the required user workloads.

For more information about the bill of materials (BOM) for GRID K1 and K2 GPUs for Lenovo System x3650 M5 and NeXtScale nx360 M5 servers, see the following corresponding BOMs:

BOM for enterprise and SMB compute servers section on page 45.

BOM for hyper-converged compute servers on page 54.

4.6 Management servers

Management servers should have the same hardware specification as compute servers so that they can be used interchangeably in a worst-case scenario. The VMware Horizon management servers also use the same ESXi hypervisor, but have management VMs instead of user desktops. Table 34 lists the VM requirements and performance characteristics of each management service.

Table 34: Characteristics of VMware Horizon management services

Management        Virtual      System   Storage   Windows   HA       Performance
service VM        processors   memory             OS        needed   characteristic
vCenter Server                 4 GB     15 GB     2008 R2   Yes      Up to 2000 VMs.
vCenter SQL                    4 GB     15 GB     2008 R2   Yes      Double the virtual processors and
Server                                                               memory for more than 2500 users.
View Connection                10 GB    40 GB     2008 R2   Yes      Up to 2000 connections.
Server

Table 35 lists the number of management VMs for each size of users following the high-availability and performance characteristics. The number of vCenter servers is half of the number of vCenter clusters because each vCenter server can handle two clusters of up to 1000 desktops.

Table 35: Management VMs needed

Horizon management service VM   600 users   1500 users   4500 users   10000 users
vCenter servers
vCenter SQL servers             2 (1+1)     2 (1+1)      2 (1+1)      2 (1+1)
View Connection Server          2 (1+1)     2 (1+1)      4 (3+1)      7 (5+2)
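The sizing rules above can be summarized in a small sketch: each vCenter server handles two clusters of up to 1000 desktops, each View Connection Server handles up to 2000 connections, and one or two spare Connection Servers are added for high availability. The spare counts and the resulting vCenter numbers are illustrative assumptions; Table 35 remains the authoritative guide.

import math

DESKTOPS_PER_CLUSTER = 1000          # vCenter cluster size used in this architecture
CLUSTERS_PER_VCENTER = 2             # each vCenter server handles two clusters
USERS_PER_CONNECTION_SERVER = 2000   # View Connection Server limit from Table 34

def vcenter_servers(users: int) -> int:
    clusters = math.ceil(users / DESKTOPS_PER_CLUSTER)
    return math.ceil(clusters / CLUSTERS_PER_VCENTER)

def view_connection_servers(users: int, spares: int = 1) -> int:
    # Active servers sized by connection count, plus spares for HA (spares is an assumption).
    return math.ceil(users / USERS_PER_CONNECTION_SERVER) + spares

for users in (600, 1500, 4500, 10000):
    spares = 2 if users >= 10000 else 1
    print(users, "users:", vcenter_servers(users), "vCenter server(s),",
          view_connection_servers(users, spares), "View Connection Servers")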

Each management VM requires a certain amount of virtual processors, memory, and disk. There is enough capacity in the management servers for all of these VMs. Table 36 lists an example mapping of the management VMs to the four physical management servers for 4500 users.

Table 36: Management server VM mapping (4500 users)

Management service for         Management   Management   Management   Management
4500 stateless users           server 1     server 2     server 3     server 4
vCenter servers (3)
vCenter database (2)
View Connection Server (4)

It is assumed that common services, such as Microsoft Active Directory, Dynamic Host Configuration Protocol (DHCP), domain name server (DNS), and Microsoft licensing servers exist in the customer environment.

For shared storage systems that support block data transfers only, it is also necessary to provide some file I/O servers that support CIFS or NFS shares and translate file requests to the block storage system. For high availability, two or more Windows storage servers are clustered.

Based on the number and type of desktops, Table 37 lists the recommended number of physical management servers. In all cases, there is redundancy in the physical management servers and the management VMs.

Table 37: Management servers needed

Management servers             600 users   1500 users   4500 users   10000 users
Stateless desktop model
Dedicated desktop model
Windows Storage Server 2012

For more information, see BOM for enterprise and SMB management servers on page 57.

4.7 Shared storage

VDI workloads, such as virtual desktop provisioning, VM loading across the network, and access to user profiles and data files, place huge demands on network shared storage.

Experimentation with VDI infrastructures shows that the input/output operations per second (IOPS) performance takes precedence over storage capacity. This precedence means that more lower-speed drives are needed to match the performance of fewer higher-speed drives. Even with the fastest HDDs available today (15k rpm), there can still be excess capacity in the storage system because extra spindles are needed to provide the IOPS performance. From experience, this extra storage is more than sufficient for the other types of data such as SQL databases and transaction logs.

The large rate of IOPS, and therefore the large number of drives needed for dedicated virtual desktops, can be ameliorated to some extent by caching data in flash memory or SSD drives. The storage configurations are based on the peak performance requirement, which usually occurs during the so-called logon storm. This is when all workers at a company arrive in the morning and try to start their virtual desktops, all at the same time.

It is always recommended that user data files (shared folders) and user profile data are stored separately from the user image. By default, this has to be done for stateless virtual desktops and should also be done for dedicated virtual desktops. It is assumed that 100% of the users at peak load times require concurrent access to user data and profiles.

In View 5.1, VMware introduced the View Storage Accelerator (VSA) feature that is based on the ESXi Content-Based Read Cache (CBRC). VSA provides a per-host RAM-based solution for VMs, which considerably reduces the read I/O requests that are issued to the shared storage. Performance measurements by Lenovo show that VSA has a negligible effect on the number of virtual desktops that can be used on a compute server while it reduces the read requests to storage by one-fifth.

Stateless virtual desktops can use SSDs for the linked clones and the replicas. Table 38 lists the peak IOPS and disk space requirements for stateless virtual desktops on a per-user basis.

Table 38: Stateless virtual desktop shared storage performance requirements

Stateless virtual desktops           Protocol       Size     IOPS   Write %
vSwap (recommended to be disabled)   NFS or Block
User data files                      CIFS/NFS       5 GB            75%
User profile (through MSRP)          CIFS           100 MB   0.8    75%

Table 39 summarizes the peak IOPS and disk space requirements for dedicated or shared stateless virtual desktops on a per-user basis. Persistent virtual desktops require a high number of IOPS and a large amount of disk space for the VMware linked clones. Note that the linked clones also can grow in size over time. Stateless users that require mobility and have no local SSDs also fall into this category. The last three rows of Table 39 are the same as in Table 38 for stateless desktops.

Table 39: Dedicated or shared stateless virtual desktop shared storage performance requirements

Dedicated virtual desktops           Protocol       Size     IOPS   Write %
Replica                              Block/NFS      30 GB
Linked clones                        Block/NFS      10 GB    18     85%
vSwap (recommended to be disabled)   NFS or Block
User data files                      CIFS/NFS       5 GB            75%
User profile (through MSRP)          CIFS           100 MB   0.8    75%
User AppData folder
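To illustrate how the per-user figures translate into spindle counts, the following sketch estimates the back-end IOPS for the dedicated-desktop linked clones in Table 39 (18 IOPS per user, 85% writes) and divides by a per-drive figure. The RAID 10 write penalty of 2 is standard; the roughly 175 IOPS assumed for a 15k rpm drive is a rule of thumb rather than a measured value, and the estimate ignores the Storwize cache and Easy Tier, which absorb much of this load. It simply shows why IOPS, rather than capacity, drives the number of drives.

import math

def raid10_drives(users: int, iops_per_user: float, write_fraction: float,
                  iops_per_drive: float = 175.0) -> int:
    # Back-end IOPS: reads pass through once, writes are doubled by RAID 10 mirroring.
    front_end = users * iops_per_user
    back_end = front_end * (1 - write_fraction) + front_end * write_fraction * 2
    return math.ceil(back_end / iops_per_drive)

# Linked clones for dedicated desktops (Table 39): 18 IOPS per user, 85% writes.
for users in (600, 1500):
    print(users, "users need roughly", raid10_drives(users, 18, 0.85),
          "x 15k rpm drives before caching or tiering is considered")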

The sizes and IOPS for user data files and user profiles that are listed in Table 38 and Table 39 can vary depending on the customer environment. For example, power users might require 10 GB and five IOPS for user files because of the applications they use. It is assumed that 100% of the users at peak load times require concurrent access to user data files and profiles.

Many customers need a hybrid environment of stateless and dedicated desktops for their users. The IOPS for dedicated users outweigh those for stateless users; therefore, it is best to bias towards dedicated users in any storage controller configuration.

The storage configurations that are presented in this section include conservative assumptions about the VM size, changes to the VM, and user data sizes to ensure that the configurations can cope with the most demanding user scenarios.

This reference architecture describes the following different shared storage solutions:

4.7.1 IBM Storwize V7000 and IBM Storwize V3700 storage

The IBM Storwize V7000 generation 2 storage system supports up to 504 drives by using up to 20 expansion enclosures. Up to four controller enclosures can be clustered for a maximum of 1056 drives (44 expansion enclosures). The Storwize V7000 generation 2 storage system also has a 64 GB cache, which is expandable to 128 GB.

The IBM Storwize V3700 storage system is somewhat similar to the Storwize V7000 storage, but is restricted to a maximum of five expansion enclosures for a total of 120 drives. The maximum size of the cache for the Storwize V3700 is 8 GB.

The Storwize cache acts as a read cache and a write-through cache and is useful to cache commonly used data for VDI workloads. The read and write cache are managed separately. The write cache is divided up across the storage pools that are defined for the Storwize storage system.

In addition, Storwize storage offers the IBM Easy Tier function, which allows commonly used data blocks to be transparently stored on SSDs. There is a noticeable improvement in performance when Easy Tier is used, which tends to tail off as SSDs are added. It is recommended that approximately 10% of the storage space is used for SSDs to give the best balance between price and performance.

The tiered storage support of Storwize storage also allows a mixture of different disk drives. Slower drives can be used for shared folders and profiles; faster drives and SSDs can be used for persistent virtual desktops and desktop images.

To support file I/O (CIFS and NFS) into Storwize storage, Windows storage servers must be added, as described in Management servers on page 30.

The fastest HDDs that are available for Storwize storage are 15k rpm drives in a RAID 10 array. Storage performance can be significantly improved with the use of Easy Tier. If this performance is insufficient, SSDs or alternatives (such as a flash storage system) are required.

For this reference architecture, it is assumed that each user has 5 GB for shared folders and profile data and uses an average of 2 IOPS to access those files. Investigation into the performance shows that 600 GB 10k rpm drives in a RAID 10 array give the best ratio of input/output operation performance to disk space. If users need more than 5 GB for shared folders and profile data, then 900 GB (or even 1.2 TB) 10k rpm drives can be used instead of 600 GB. If less capacity is needed, the 300 GB 15k rpm drives can be used for shared folders and profile data.

Persistent virtual desktops require both a high number of IOPS and a large amount of disk space for the linked clones. The linked clones can grow in size over time as well. For persistent desktops, 300 GB 15k rpm drives configured as RAID 10 were not sufficient and extra drives were required to achieve the necessary performance. Therefore, it is recommended to use a mixture of both speeds of drives for persistent desktops and shared folders and profile data.

Depending on the number of master images, one or more RAID 1 arrays of SSDs can be used to store the VM master images. This configuration helps with the performance of provisioning virtual desktops; that is, a boot storm. Each master image requires at least double its space. The actual number of SSDs in the array depends on the number and size of images. In general, more users require more images.

Table 40: VM images and SSDs

Number of master images         2            4            8            16
Required disk space (doubled)   120 GB       240 GB       480 GB       960 GB
400 GB SSD configuration        RAID 1 (2)   RAID 1 (2)   Two RAID 1   Four RAID 1
                                                          arrays (4)   arrays (8)
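A minimal sketch of the sizing rule behind Table 40, assuming a 30 GB master image (the same size as the replica in Table 39): the required SSD space is the number of images multiplied by double the image size, and the RAID 1 SSD arrays in Table 40 are then chosen to provide at least that much usable capacity.

IMAGE_SIZE_GB = 30   # assumed master image size; matches the 30 GB replica in Table 39

def required_ssd_space_gb(master_images: int) -> int:
    # Each master image requires at least double its own size on the SSD tier.
    return master_images * IMAGE_SIZE_GB * 2

for images in (2, 4, 8, 16):
    print(images, "master images ->", required_ssd_space_gb(images), "GB of SSD space")
# 2 -> 120 GB, 4 -> 240 GB, 8 -> 480 GB, 16 -> 960 GB, matching Table 40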

Table 41 lists the Storwize storage configuration that is needed for each of the stateless user counts. Only one Storwize control enclosure is needed for a range of user counts. Based on the assumptions in Table 41, the IBM Storwize V3700 storage system can support up to 7000 users only.

Table 41: Storwize storage configuration for stateless users

Stateless storage                          600 users   1500 users   4500 users   10000 users
400 GB SSDs in RAID 1 for master images
Hot spare SSDs
600 GB 10k rpm in RAID 10 for users        12          28           80           168
Hot spare 600 GB drives                                                          12
Storwize control enclosures
Storwize expansion enclosures

Table 42 lists the Storwize storage configuration that is needed for each of the dedicated or shared stateless user counts. The top four rows of Table 42 are the same as for stateless desktops. Lenovo recommends clustering the IBM Storwize V7000 storage system and the use of a separate control enclosure for every 2500 or so dedicated virtual desktops. For the 4500 and 10000 user solutions, the drives are divided equally across all of the controllers. Based on the assumptions in Table 42, the IBM Storwize V3700 storage system can support up to 1200 users.

Table 42: Storwize storage configuration for dedicated or shared stateless users

Dedicated or shared stateless storage               600 users   1500 users   4500 users   10000 users
400 GB SSDs in RAID 1 for master images
Hot spare SSDs
600 GB 10k rpm in RAID 10 for users
Hot spare 600 GB 10k rpm drives
300 GB 15k rpm in RAID 10 for persistent desktops
Hot spare 300 GB 15k rpm drives
400 GB SSDs for Easy Tier
Storwize control enclosures
Storwize expansion enclosures                                                16 (2 x 8)   36 (4 x 9)

Refer to the BOM for shared storage on page 61 for more details.

4.7.2 IBM FlashSystem 840 with Atlantis ILIO storage acceleration

The IBM FlashSystem 840 storage has low latencies and supports high IOPS. It can be used for VDI solutions; however, on its own it is not cost effective. However, if the Atlantis ILIO VM is used to provide storage optimization (capacity reduction and IOPS reduction), it becomes a much more cost-effective solution.

Each FlashSystem 840 storage device supports up to 20 TB or 40 TB of storage, depending on the size of the flash modules. To maintain the integrity and redundancy of the storage, it is recommended to use RAID 5. It is not recommended to use this device for smaller user counts because it is not cost-efficient.

Persistent virtual desktops require the most storage space and are the best candidate for this storage device. The device also can be used for user folders, snap clones, and image management, although these items can be placed on other, slower shared storage.

The amount of required storage for persistent virtual desktops varies and depends on the environment. Table 43 is provided for guidance purposes only.

Table 43: FlashSystem 840 storage configuration for dedicated users with Atlantis ILIO VM

Dedicated storage             1000 users   3000 users   5000 users   10000 users
IBM FlashSystem 840 storage
2 TB flash module                                       12
4 TB flash module                                                    12
Capacity                      4 TB         12 TB        20 TB        40 TB

Refer to the BOM for OEM storage hardware on page 65 for more details.

4.8 Networking

The main driver for the type of networking that is needed for VDI is the connection to shared storage. If the shared storage is block-based (such as the IBM Storwize V7000), it is likely that a SAN that is based on 8 or 16 Gbps FC, 10 GbE FCoE, or 10 GbE iSCSI connection is needed. Other types of storage can be network attached by using 1 Gb or 10 Gb Ethernet.

Also, there are user and management virtual local area networks (VLANs) available that require 1 Gb or 10 Gb Ethernet, as described in the Lenovo Client Virtualization reference architecture, which is available at this website: lenovopress.com/tips1275.

Automated failover and redundancy of the entire network infrastructure and shared storage is important. This failover and redundancy is achieved by having at least two of everything and ensuring that there are dual paths between the compute servers, management servers, and shared storage.

If only a single Flex System Enterprise Chassis is used, the chassis switches are sufficient and no other TOR switch is needed. For rack servers or more than one Flex System Enterprise Chassis, TOR switches are required.

For more information, see BOM for networking on page 63.

4.8.1 10 GbE networking

For 10 GbE networking with CIFS, NFS, or iSCSI, the Lenovo RackSwitch G8124E and G8264R TOR switches are recommended because they support VLANs by using Virtual Fabric. Redundancy and automated failover are available by using link aggregation, such as Link Aggregation Control Protocol (LACP), and two of everything. For the Flex System chassis, pairs of the EN4093R switch should be used and connected to a G8124 or G8264 TOR switch. The TOR 10 GbE switches are needed for multiple Flex chassis or external connectivity. iSCSI also requires converged network adapters (CNAs) that have the LOM extension.

Table 44 lists the TOR 10 GbE network switches for each user size.

Table 44: TOR 10 GbE network switches needed

10 GbE TOR network switch   600 users   1500 users   4500 users   10000 users
G8124E 24-port switch
G8264R 64-port switch

4.8.2 10 GbE FCoE networking

FCoE on a 10 GbE network requires converged networking switches such as pairs of the CN4093 switch for the Flex System chassis and the G8264CS TOR converged switch. The TOR converged switches are needed for multiple Flex System chassis or for clustering of multiple IBM Storwize V7000 storage systems. FCoE also requires CNAs that have the LOM extension. Table 45 summarizes the TOR converged network switches for each user size.

4.8.3 Fibre Channel networking

Fibre Channel TOR SAN switch     600 users   1500 users   4500 users   10000 users
Lenovo 3873 AR2 24-port switch
Lenovo 3873 BR1 48-port switch

4.8.4 1 GbE administration networking

A 1 GbE network should be used to administer all of the other devices in the system. Separate 1 GbE switches are used for the IT administration network. Lenovo recommends that redundancy is also built into this network at the switch level. At minimum, a second switch should be available in case a switch goes down. Table 47 lists the number of 1 GbE switches for each user size.

Table 47: TOR 1 GbE network switches needed

1 GbE TOR network switch   600 users   1500 users   4500 users   10000 users
G8052 48-port switch

Table 48 shows the number of 1 GbE connections that are needed for the administration network and switches for each type of device. The total number of connections is the sum, over all device types, of the number of devices multiplied by the connections for each device.

Table 48: 1 GbE connections needed

Device                                     Number of 1 GbE connections for administration
System x rack server
Flex System Enterprise Chassis CMM
Flex System Enterprise Chassis switches    1 per switch (optional)
IBM Storwize V7000 storage controller
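The calculation that Table 48 supports can be written as a one-line sum. The device counts and per-device connection counts below are placeholders for illustration only and are not taken from Table 48; substitute the values for the environment being sized.

# Illustrative only: per-device connection counts are placeholders, not Table 48 values.
connections_per_device = {
    "System x rack server": 1,
    "Flex System Enterprise Chassis CMM": 1,
    "Flex System Enterprise Chassis switch": 1,   # optional, 1 per switch
    "IBM Storwize V7000 storage controller": 1,
}
device_counts = {
    "System x rack server": 24,
    "Flex System Enterprise Chassis CMM": 3,
    "Flex System Enterprise Chassis switch": 6,
    "IBM Storwize V7000 storage controller": 1,
}
total = sum(device_counts[d] * connections_per_device[d] for d in device_counts)
print("1 GbE administration connections needed:", total)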

4.9 Racks

The number of racks and chassis for Flex System compute nodes depends upon the precise configuration that is supported and the total height of all of the component parts: servers, storage, networking switches, and Flex System Enterprise Chassis (if applicable). The number of racks for System x servers is also dependent on the total height of all of the components. For more information, see the BOM for racks section on page 64.

4.10 Proxy server

As shown in Figure 1 on page 2, there is a proxy server behind the firewall. This proxy server performs several important tasks, including user authorization and secure access, traffic management, and providing high availability to the VMware connection servers. An example is the BIG-IP system from F5.

The F5 BIG-IP Access Policy Manager (APM) provides user authorization and secure access. Other options, including more advanced traffic management options, a single namespace, and username persistence, are available when BIG-IP Local Traffic Manager (LTM) is added to APM. APM and LTM also provide various logging and reporting facilities for the system administrator and a web-based configuration utility that is called iApp.

Figure 10 shows the BIG-IP APM in the demilitarized zone (DMZ) to protect access to the rest of the VDI infrastructure, including the Active Directory servers. An Internet user presents security credentials by using a secure HTTP connection (TCP 443), which is verified by APM against Active Directory.

(Figure 10 shows external clients connecting through the BIG-IP APM in the DMZ over TCP 443, TCP 80, and TCP/UDP 4172; APM performs SSL decryption, authentication, high availability, and PCoIP proxying, and passes traffic to the View Connection Servers, vCenter pools, Active Directory, and the hypervisors that host stateless virtual desktops, dedicated virtual desktops, and hosted desktops and apps. Internal clients connect directly.)

Figure 10: Traffic Flow for BIG-IP Access Policy Manager

The PCoIP connection (UDP 4172) is then natively proxied by APM in a reliable and secure manner, passing it internally to any available VMware connection server within the View pod, which then interprets the connection as a normal internal PCoIP session. This process provides the scalability benefits of a BIG-IP appliance and gives APM and LTM visibility into the PCoIP traffic, which enables more advanced access management decisions. This process also removes the need for VMware secure connection servers. Untrusted internal users can also be secured by directing all traffic through APM. Alternatively, trusted internal users can directly use VMware connection servers.

Various deployment models are described in the F5 BIG-IP deployment guide. For more information, see Deploying F5 with VMware View and Horizon, which is available from this website: f5.com/pdf/deployment-guides/vmware-view5-iapp-dg.pdf

For this reference architecture, the BIG-IP APM was tested to determine if it introduced any performance degradation because of the added functionality of authentication, high availability, and proxy serving. The BIG-IP APM also includes facilities to improve the performance of the PCoIP protocol.

External clients often are connected over a relatively slow wide area network (WAN). To reduce the effects of a slow network connection, the external clients were connected by using a 10 GbE local area network (LAN). Table 49 shows the results with and without the F5 BIG-IP APM by using Login VSI against a single compute server. The results show that APM can slightly increase the throughput. Testing was not done to determine the performance with many thousands of simultaneous users because this scenario is highly dependent on a customer's environment and network configuration.

Table 49: Performance comparison of using F5 BIG-IP APM

Processor with medium workload (Dedicated)   Without F5 BIG-IP   With F5 BIG-IP
                                             208 users           218 users

4.11 Deployment models

This section describes the following examples of different deployment models:

Flex Solution with single Flex System chassis

Flex System with 4500 stateless users

System x server with Storwize V7000 and FCoE

4.11.1 Deployment example 1: Flex Solution with single Flex System chassis

As shown in Table 50, this example is for 1250 stateless users that are using a single Flex System chassis. There are 10 compute nodes supporting 125 users in normal mode and 156 users in the failover case of up to two nodes not being available. The IBM Storwize V7000 storage is connected by using FC directly to the Flex System chassis.

Table 50: Deployment configuration for 1250 stateless users

Stateless virtual desktops                1250 users
Compute nodes
Management nodes
Windows storage server (WSS) nodes
Flex System Enterprise Chassis
Flex System EN4093R switches
Flex System FC5022 switches
Storwize V7000 controller enclosure
Storwize V7000 expansion enclosures
Total height                              14U
Number of Flex System racks

4.11.2 Deployment example 2: Flex System with 4500 stateless users

As shown in Table 51, this example is for 4500 stateless users who are using a Flex System based chassis with each of the 36 compute nodes supporting 125 users in normal mode and 150 in the failover case.

Table 51: Deployment configuration for 4500 stateless users

Stateless virtual desktop

Figure 11: Deployment diagram for 4500 stateless users using Storwize V7000 shared storage

Figure 12 shows the 10 GbE and Fibre Channel networking that is required to connect the three Flex System Enterprise Chassis to the Storwize V7000 shared storage. The detail is shown for one chassis in the middle and abbreviated for the other two chassis. The 1 GbE management infrastructure network is not shown for the purpose of clarity.

Redundant 10 GbE networking is provided at the chassis level with two EN4093R switches and at the rack level by using two G8264R TOR switches. Redundant SAN networking is also used with two FC3171 switches and two top-of-rack SAN24B-5 switches. The two controllers in the Storwize V7000 are redundantly connected to each of the SAN24B-5 switches.

4.11.3 Deployment example 3: System x server with Storwize V7000 and FCoE

This deployment example is derived from an actual customer deployment with 3000 users, 90% of which are stateless and need a 2 GB VM. The remaining 10% (300 users) need a dedicated VM of 3 GB. Therefore, the average VM size is 2.1 GB.

Assuming 125 users per server in the normal case and 150 users in the failover case, then 3000 users need 24 compute servers. A maximum of four compute servers can be down for a 5:1 failover ratio. Each compute server needs at least 315 GB of RAM (150 x 2.1), not including the hypervisor. This figure is rounded up to 384 GB, which should be more than enough and can cope with up to 125 users, all with 3 GB VMs.

Each compute server is a System x3550 server with two Xeon E5-2650v2 series processors, 24x 16 GB of 1866 MHz RAM, an embedded dual port 10 GbE virtual fabric adapter (A4MC), and a license for FCoE/iSCSI (A2TE). For interchangeability between the servers, all of them have a RAID controller with 1 GB flash upgrade and two S3700 400 GB MLC enterprise SSDs that are configured as RAID 0 for the stateless VMs.

In addition, there are three management servers. For interchangeability in case of server failure, these extra servers are configured in the same way as the compute servers. All the servers have a USB key with ESXi 5.5. There also are two Windows storage servers that are configured differently with HDDs in a RAID 1 array for the operating system. Some spare, preloaded drives are kept to quickly deploy a replacement Windows storage server if one should fail. The replacement server can be one of the compute servers. The idea is to quickly get a replacement online if the second one fails. Although there is a low likelihood of this situation occurring, it reduces the window of failure for the two critical Windows storage servers.

All of the servers communicate with the Storwize V7000 shared storage by using FCoE through two TOR RackSwitch G8264CS 10 GbE converged switches. All 10 GbE and FC connections are configured to be fully redundant. As an alternative, iSCSI with G8264 10 GbE switches can be used.

For 300 persistent users and 2700 stateless users, a mixture of disk configurations is needed. All of the users require space for user folders and profile data. Stateless users need space for master images and persistent users need space for the virtual clones. Stateless users have local SSDs to cache everything else, which substantially decreases the amount of shared storage. For stateless servers with SSDs, a server must be taken offline and have maintenance performed on it only after all of the users are logged off, rather than being able to use vMotion. If a server crashes, this issue is immaterial.

It is estimated that this configuration requires the following IBM Storwize V7000 drives:

Two 400 GB SSDs in RAID 1 for master images
Thirty 300 GB 15k rpm drives in RAID 10 for persistent images
Four 400 GB SSDs for Easy Tier for persistent images
Sixty 600 GB 10k rpm drives in RAID 10 for user folders

This configuration requires 96 drives, which fit into one Storwize V7000 control enclosure and three expansion enclosures.

Figure 13 shows the deployment configuration for this example in a single rack. Because the rack has 36 items, it should have the capability for six power distribution units for 1+1 power redundancy, where each PDU has 12
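The arithmetic behind this deployment example can be checked with a short sketch; using only the figures given above, it reproduces the 24 compute servers, the 4-server failover allowance, the 315 GB memory requirement, and the 96-drive total.

import math

users = 3000
normal_density = 125                 # users per compute server in normal operation
failover_density = 150               # users per compute server after failover
average_vm_gb = 0.9 * 2 + 0.1 * 3    # 90% stateless at 2 GB, 10% dedicated at 3 GB

compute_servers = math.ceil(users / normal_density)
servers_allowed_down = compute_servers - math.ceil(users / failover_density)
ram_needed_gb = failover_density * average_vm_gb          # excludes the hypervisor
ram_configured_gb = 24 * 16                               # 24x 16 GB DIMMs = 384 GB

storwize_drives = {
    "400 GB SSD, RAID 1, master images": 2,
    "300 GB 15k rpm, RAID 10, persistent images": 30,
    "400 GB SSD, Easy Tier": 4,
    "600 GB 10k rpm, RAID 10, user folders": 60,
}

print(compute_servers, "compute servers;", servers_allowed_down, "can be down;",
      ram_needed_gb, "GB RAM needed per server ->", ram_configured_gb, "GB configured;",
      sum(storwize_drives.values()), "Storwize V7000 drives")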

5 Appendix: Bill of materials

This appendix contains the bill of materials (BOMs) for different configurations of hardware for VMware Horizon deployments. There are sections for user servers, management servers, storage, networking switches, chassis, and racks that are orderable from Lenovo. The last section is for hardware orderable from an OEM.

The BOM lists in this appendix are not meant to be exhaustive and must always be double-checked with the configuration tools. Any discussion of pricing, support, and maintenance options is outside the scope of this document.

For connections between TOR switches and devices (servers, storage, and chassis), the connector cables are configured with the device. The TOR switch configuration includes only transceivers or other cabling that is needed for failover or redundancy.

5.1 BOM for enterprise and SMB compute servers

This section contains the bill of materials for enterprise and SMB compute servers.

5.4 BOM for enterprise and SMB management servers

Table 37 on page 31 lists the number of management servers that are needed for the different numbers of users. To help with redundancy, the bill of materials for management servers must be the same as compute servers. For more information, see BOM for enterprise and SMB compute servers on page 45.

Because the Windows storage servers use a bare-metal operating system (OS) installation, they require much less memory and can have a reduced configuration as listed below.

Added Atlantis USX hyper-converged solution.

Added graphics acceleration performance measurements for

Trademarks and special notices

Copyright Lenovo 2015.

References in this document to Lenovo products or services do not imply that Lenovo intends to make them available in every country.

Lenovo, the Lenovo logo, ThinkCentre, ThinkVision, ThinkVantage, ThinkPlus and Rescue and Recovery are trademarks of Lenovo.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

All customer examples described are presented as illustrations of how those customers have used Lenovo products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-Lenovo products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by Lenovo. Sources for non-Lenovo list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. Lenovo has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-Lenovo products. Questions on the capability of non-Lenovo products should be addressed to the supplier of those products.

All statements regarding Lenovo future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local Lenovo office or Lenovo authorized reseller for the full text of the specific Statement of Direction.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function, or delivery schedules with respect to any future products. Such commitments are only made in Lenovo product announcements. The information is presented here to communicate Lenovo's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard Lenovo benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-Lenovo websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this Lenovo product and use of those websites is at your own risk.