Network Architecture

The future of storage may not be in storage itself, but in the intelligence to manage it.

Major storage vendors and startups alike are now pushing software-defined systems spanning anything from a set of arrays to a whole enterprise. On Tuesday, IBM placed a big bet on this trend, announcing the first product in a portfolio called IBM Spectrum Storage and saying it will invest $1 billion in storage software over the next five years.

The strategy will see IBM offer its traditional storage systems in software form so customers can choose to buy them as an appliance, as software, or as a service. The first Spectrum Storage product out of the gate is IBM Spectrum Accelerate, software based on the company’s own XIV high-end storage appliance.

IBM envisions Spectrum Storage as a layer of software on top of arrays and other systems, including platforms from third-party vendors. It will span in-house data centers and cloud resources including IBM’s SoftLayer cloud service, moving bits around all that infrastructure to the best location for performance and cost, the company says.

Spectrum Accelerate, like the XIV platform on which it’s based, is designed for disk-based storage but can take advantage of flash as high-speed cache. Users can install the software on any Intel-based storage platform, giving systems they already bought the management intelligence and interface of XIV. The software also can run on IBM Power-based systems.

Among other things, Spectrum Accelerate lets enterprises pool their storage resources and add capacity in minutes, according to IBM. Pooling can cut down on unused capacity trapped in silos, saving space and hardware investments. Administrators can run Accelerate from a graphical user interface that runs in browsers on desktops and on iOS and Android mobile devices. The management software can also be integrated with IBM Spectrum Control. Spectrum Accelerate is scheduled to ship next month.
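The pooling benefit is easiest to see with a little arithmetic. The sketch below is illustrative only; the array names, sizes, and usage figures are hypothetical, not IBM data. It shows how a workload that fits in no single silo can still fit in a combined pool.

```python
# Hypothetical figures: each entry maps an array to (total TB, used TB).
silos = {"array_a": (100, 62), "array_b": (100, 70), "array_c": (100, 88)}

# Free capacity stranded in each silo.
free_per_silo = {name: total - used for name, (total, used) in silos.items()}

# With separate silos, a 40TB workload fits only if one silo has 40TB free.
fits_in_a_silo = any(free >= 40 for free in free_per_silo.values())

# With a single pooled namespace, it fits if the combined free space suffices.
fits_in_pool = sum(free_per_silo.values()) >= 40

print(free_per_silo)                 # {'array_a': 38, 'array_b': 30, 'array_c': 12}
print(fits_in_a_silo, fits_in_pool)  # False True
```

Here 80TB of total free space is unusable for the 40TB workload as long as it stays split across three silos, which is exactly the stranded capacity that pooling reclaims.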

IBM Spectrum Storage is also heading for the clouds. With cloud gateway software that’s coming out later this year, users will be able to migrate data to SoftLayer and other cloud services as tiers within their overall storage environment, said Jamie Thomas, general manager for storage and software-defined systems at IBM. This should help organizations deal with geographic and regulatory requirements as well as the changing needs of business.

In addition, users will be able to create a “cloud of clouds” in which one cloud can serve as a bulwark against possible service outages and data loss on another. The gateway will work first with SoftLayer and third-party cloud storage services based on IBM technology, but as customers demand it, IBM will be able to bring other clouds into that fold, Thomas said.

IBM is smart to point its storage strategy toward software, because hardware is no longer what distinguishes storage platforms, IDC analyst Ashish Nadkarni said. Though it’s made moves in that direction before, the new plan and a reorganization show the company really believes it now, he said. Its very visible commitment to the concept through IBM Spectrum Storage may push another big storage player, EMC, to place a bigger bet on software-defined storage, too, Nadkarni said.

With welcome improvements to key features, as well as the bundling of backup and recovery, the leading virtualization platform doesn’t disappoint

In the not so distant past, VMware held a long and commanding lead in the server virtualization space, offering core features that were simply unmatched by the competition. In the past few years, however, competition in virtualization has been fierce, the competitors have drawn near, and VMware has been left with fewer ways to distinguish itself.

The competition may have grown over the years, and VMware may not enjoy quite as large a lead as it once did — but it still enjoys a lead. With useful improvements to a number of key features, as well as the bundling of functions such as backup and recovery that were previously available separately, vSphere 6 is a worthy addition to the vSphere line. That said, some of the major advances in this version, such as long-distance vMotion, will matter most to larger vSphere shops.

Big changes in vSphere 6

The big changes in vSphere 6 revolve around expanded resource limits, enhanced vMotion capabilities, a more complete version of the Linux-based vCenter Server Appliance, storage offloading, and enhancements to the Web client. In addition, VMware has bundled extra technologies into vSphere 6, such as the vCloud Director content library that is used to store ISO images, templates, scripts, OVF files, and other elements, and to automatically distribute them across multiple vCenter servers. The Data Protection Advanced backup and recovery tools are now included as well.

VMware vSphere 6 advances the previously existing Fault Tolerance feature. Fault Tolerance is the technology by which a single VM maintains a presence on multiple physical servers simultaneously. Should the physical server running the active instance fail, the secondary instance takes over immediately. Without Fault Tolerance, the VM could be automatically restarted on another host, but that recovery would require time to detect the failure and boot the VM on the new host. With Fault Tolerance, that delay is avoided.

In previous versions of vSphere, Fault Tolerance supported only a single vCPU per VM and up to four fault-tolerant VMs per host. In vSphere 6, the limits rise to four vCPUs per VM and, per host, either eight fault-tolerant vCPUs or four fault-tolerant VMs, whichever is reached first.
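Those per-host limits interact, so a placement check has to test all of them. The following is a minimal sketch, not VMware tooling; the helper name is hypothetical, and the constants simply encode the limits quoted above.

```python
# Limits as described in the text (vSphere 6 Fault Tolerance).
MAX_VCPUS_PER_FT_VM = 4
MAX_FT_VCPUS_PER_HOST = 8
MAX_FT_VMS_PER_HOST = 4

def can_protect(vm_vcpus: int, host_ft_vms: list) -> bool:
    """Can another VM be FT-protected on this host?

    host_ft_vms lists the vCPU counts of VMs already FT-protected there.
    """
    if vm_vcpus > MAX_VCPUS_PER_FT_VM:
        return False  # VM itself is too large for FT
    if len(host_ft_vms) + 1 > MAX_FT_VMS_PER_HOST:
        return False  # host already carries four FT VMs
    if sum(host_ft_vms) + vm_vcpus > MAX_FT_VCPUS_PER_HOST:
        return False  # host would exceed eight FT vCPUs
    return True

print(can_protect(4, [4]))           # True: two VMs, eight FT vCPUs total
print(can_protect(2, [4, 4]))        # False: would exceed eight FT vCPUs
print(can_protect(1, [1, 1, 1, 1]))  # False: already four FT VMs
```

Note that the vCPU ceiling usually bites first: two four-vCPU FT VMs exhaust a host even though the VM-count limit would allow four.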

The main screen of the new vSphere Web Client looks much more like the Windows client than in previous versions. Note the Recent Tasks window at the bottom.

Long-distance vMotion

The vMotion improvements will be more germane to those with multiple data centers spread over wide geographic areas. Prior to vSphere 6, live-migrating VMs over large distances was problematic and required high-bandwidth, low-latency connections to succeed. In vSphere 6, the network tolerances have been extended: vMotion migrations can now complete over links with 100ms latency or less, given 250Mbps of bandwidth per concurrent vMotion operation.

In addition, VMs can be vMotioned between vCenter servers, and with a proper underlying infrastructure, vMotions can be completed without common shared storage. These expanded capabilities come with restrictions, mostly requiring suitable network layouts at each site so that migrated VMs remain reachable on each network.
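The network tolerances above lend themselves to a simple pre-check. This is a hypothetical sketch, not a VMware utility; it only encodes the two figures cited in the text, 100ms of latency and 250Mbps per concurrent vMotion.

```python
# Tolerances as quoted in the text for long-distance vMotion.
MAX_LATENCY_MS = 100
MBPS_PER_VMOTION = 250

def link_supports_vmotion(latency_ms: float, link_mbps: float,
                          concurrent: int = 1) -> bool:
    """Does a WAN link meet the vSphere 6 long-distance vMotion tolerances?"""
    within_latency = latency_ms <= MAX_LATENCY_MS
    enough_bandwidth = link_mbps >= MBPS_PER_VMOTION * concurrent
    return within_latency and enough_bandwidth

print(link_supports_vmotion(80, 1000, concurrent=4))  # True: 1Gbps covers four migrations
print(link_supports_vmotion(120, 1000))               # False: latency too high
print(link_supports_vmotion(80, 200))                 # False: under 250Mbps
```

In practice the bandwidth requirement scales with concurrency, so a 1Gbps intersite link tops out at four simultaneous migrations under these tolerances.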

The ESXi 6.0 hypervisor in vSphere 6 can handle up to 64 physical hosts per cluster, up from 32 hosts, and each instance can now support up to 480 CPUs, 12TB of RAM, and 1,000 VMs. Each VM can now be run with up to 128 vCPUs and 4TB of RAM, with vNUMA hot-add memory capabilities.
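The per-VM maximums above are hard ceilings, so a configuration request can be validated against them up front. The sketch below is illustrative only; the function name is hypothetical, and the constants simply restate the limits in the text (128 vCPUs and 4TB of RAM per VM).

```python
# vSphere 6 per-VM maximums as quoted in the text.
MAX_VCPUS_PER_VM = 128
MAX_RAM_TB_PER_VM = 4

def validate_vm_request(vcpus: int, ram_tb: float) -> list:
    """Return a list of limit violations for a requested VM (empty if valid)."""
    problems = []
    if vcpus > MAX_VCPUS_PER_VM:
        problems.append(f"{vcpus} vCPUs exceeds the {MAX_VCPUS_PER_VM}-vCPU limit")
    if ram_tb > MAX_RAM_TB_PER_VM:
        problems.append(f"{ram_tb}TB RAM exceeds the {MAX_RAM_TB_PER_VM}TB limit")
    return problems

print(validate_vm_request(64, 2))   # []: within limits
print(validate_vm_request(256, 8))  # two violations reported
```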

VMware vCenter Server improvements

On the management side, the vCenter Server Appliance is now feature-complete and on par with its Windows counterpart. Previously, you could run the Linux-based vCenter Server Appliance and manage ESXi hosts, but some of the more advanced features of the Windows-based vCenter Server (notably Update Manager) were not available. As of vSphere 6, the appliance can handle all the tasks that a Windows installation can. That's significant news for those who prefer not to manage a Windows server just to run vCenter.

Those who run vCenter Server on Windows will notice that the installation procedure is simplified, though it takes quite a while to complete. All of the moving parts that make up vCenter Server are installed in a single installer action now, including the new Platform Services Controller, which handles SSO, licensing, and certificate management. vCenter Server can be deployed with all components on a single system, or it can be split across multiple systems with the Platform Services Controller and vCenter Server installed separately.

Both vCenter Server for Windows and the vCenter Server Appliance now use a local PostgreSQL database by default, though external Microsoft SQL Server and Oracle databases are also supported on Windows, and Oracle databases on the appliance. The switch to PostgreSQL matters to those running local databases on earlier versions of vSphere: the limitations of the previously embedded Microsoft database no longer apply, so local databases can now support up to 1,000 hosts and 10,000 VMs.

The vSphere Web Client’s right-click context menus are still sluggish at times, but overall faster than before.

A better Web UI

The first version of the vSphere Web Client was slow, incomplete, and not nearly as fluid as the Windows client, and many users simply refused to work with it. In vSphere 5.5, we saw improvements to the Web client, but it still wasn’t quite to the level of the stand-alone client. In vSphere 6, further usability and speed improvements make the Web client more palatable, as does the addition of support for a broader range of client browsers and operating systems. The client integration tools that allow for important features like VM console access are now available for more platforms, including Mac OS X.

Users of the Web UI will note that it bears a stronger resemblance to the stand-alone client, including the recent tasks pane at the bottom that displays what actions have been taken within the infrastructure. Further, the context menus available via right-click are better laid out, and the overall navigation in the Web client is better than the previous iterations.

The success of the Web client is crucial to VMware. The company has been warning about the impending demise of the stand-alone client for several releases and currently stresses that using the stand-alone client will limit the functionality of vSphere to vSphere 5.0 levels. Features and enhancements from vSphere 5.5 onward are simply not available in the Windows client.

VMware Virtual Volumes

VMware introduces a new storage integration concept with vSphere 6 called Virtual Volumes. This is essentially tighter integration with SAN and NAS devices to manage storage operations at the virtual disk level. Virtual Volumes are designed to eliminate the need to carve out large numbers of LUNs or volumes for virtualization hosts and to offload storage-related operations to compatible arrays, with granularity at the virtual disk level.

This integration includes vSphere Storage Policy Based Management, which uses VMware’s storage API to communicate with storage arrays and connects the administration of VMs and storage through to the vSphere UI. Thus, policies can be created and applied to VMs through vCenter while related functions are performed natively by the arrays.
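Conceptually, policy-based management is a matching problem: a VM's storage policy names required capabilities, and arrays advertise what they can do. The toy model below is not the VMware API; the policy and array names and capability strings are invented purely to illustrate the matching step.

```python
# Hypothetical policies: each names the capabilities a VM's storage must provide.
policies = {
    "gold": {"replication", "snapshot"},
    "bronze": {"snapshot"},
}

# Hypothetical arrays advertising their native capabilities.
arrays = {
    "array_1": {"replication", "snapshot", "dedup"},
    "array_2": {"snapshot"},
}

def compatible_arrays(policy: str) -> list:
    """Arrays whose advertised capabilities satisfy every requirement of a policy."""
    required = policies[policy]
    return [name for name, caps in arrays.items() if required <= caps]

print(compatible_arrays("gold"))    # ['array_1']
print(compatible_arrays("bronze"))  # ['array_1', 'array_2']
```

The point of the real feature is that once a compatible array is chosen, functions like replication and snapshots run natively on the array rather than in the hypervisor.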

VMware now includes vSphere Data Protection with vSphere Essentials Plus and higher editions of vSphere 6. This is a VM backup and recovery tool that was previously known as vSphere Data Protection Advanced, a separate option. This tool can be used to provide application-aware VM backup and restoration, including support for Microsoft SQL Server, Microsoft Exchange down to the mailbox level, and other popular databases and applications.

Up from vSphere 5.5

With vSphere 6, VMware offers a collection of welcome features that are now bundled in rather than separate products, advances a number of pre-existing features, and streamlines the installation process. The Web client may still cause more than a few grumbles from those who have been using the stand-alone client from the beginning, but it’s significantly better than in previous iterations.

The advances in vMotion and other cross-site features are of limited use to shops not running multiple interconnected data centers with sufficient dedicated bandwidth to support those features. But as VMware increases the tolerances to lower bandwidth and higher latency, the viability of introducing such features grows.

There’s no mistaking the fact that VMware continues to hold the leadership role in server virtualization, but as the feature sets of the top vendors continue to converge and competing solutions continue to get more robust, we may see more of this feature bundling and simplified licensing in the future. For now, vSphere 6 maintains its place as the cream of the crop.

There is no question that the use of cloud-based resources affects IT organizations. But how much should your IT organization change to best leverage cloud computing?

I hear that question a lot, and it’s often grounded not so much in process concerns but fear of job loss or devaluation of individuals’ current skills or roles. Such fears are most acute among those who have resisted the cloud for years; they see the writing on the wall, and panic sets in.

The reality is that IT orgs have always changed around the use of technology. This need to adapt is hardly unique to cloud usage, so I’m always taken aback when such change comes as a surprise.

But there is a big difference in how the cloud affects IT compared to previous technology changes: The use of public cloud resources means a shift to resources that the IT org does not control. That change is more profound than individual jobs changing or disappearing — it’s giving up ownership of the actual technology systems, yet still being responsible for them from a business viewpoint.

Despite the control concerns, the allure of the cloud is too strong to resist. Don’t forget the positive changes it brings to IT. Provisioning, testing, and deployment are easier, for example. Databases can be stood up in a day, rather than the weeks or months of older methods. Thousands of server instances can be provisioned in seconds, and any amount of storage is just a few clicks away.

How will IT need to change due to the cloud? For the most part, cloud computing won’t chainsaw through existing IT orgs. Smart people will be needed to design and build these cloud-based systems and to figure out the synergy between cloud-based resources and existing legacy systems. Now’s the time to ask yourself what kind of structure and people you will need to support the use of the cloud.

The changes are actually easy to predict. Security and governance become more important, as do management and monitoring. Development skills will shift some to cloud-based platforms and devops approaches. IT pros currently managing storage and compute services will have to serve double duty with new cloud-based resources to manage.

You should not be concerned if things change. You should be concerned if they don’t — that means you’re in a cocoon the world will pass by.
