In the final tip in his five-part series on maximizing efficiency in complex data storage environments, Jon William Toigo looks at storage performance efficiency and why it shouldn't be considered in isolation from the other dimensions of storage efficiency.


When you say "storage efficiency" to many data storage administrators, they hear "performance acceleration." That's because they attend seminars and conferences with "efficient data storage" in the session title hoping to get the latest tips for speeding up the IOPS of disk arrays -- both to meet the needs of transaction-intensive applications and, more recently, in an almost desperate attempt to address the sluggish performance of applications running as guests within virtual servers.

Performance is important because it's what users see and, in the worst case, what they complain about to their bosses. Poor application performance casts a pall on IT performance generally and on storage mavens in particular, since serving up data from disk is what they do for a living. Performance is where the proverbial rubber hits the road.

Of course, storage performance efficiency may have little to do with your back-end storage infrastructure. The work that VMware has done recently to offload certain storage functions to “smart” storage arrays -- like inserting nine unapproved commands into the SCSI language in an effort to shift 20% of I/O workload from its server hypervisor’s direct administration -- is an acknowledgement that the log jam in application performance resides with the hypervisor and not with the storage infrastructure.

But just try to explain that to disgruntled users. What they see is slow system response times and the "World Wide Wait." And they conclude that slow storage is the culprit.

So, like jet pilots in the old Tom Cruise flick, many storage administrators feel the need for speed. To them, efficient data storage, like an efficient jet fighter, is storage that can deliver data at Mach speed then turn on a dime to engage the next I/O target request.

The industry plays into this meme, sporting IOPS test results from independent test labs or organizations like Standard Performance Evaluation Corp. (SPEC.org for network-attached storage) and the Storage Performance Council (SPC) to make the case that their latest rig has the hot hand. Listening, post-test, to vendor advocates of fast storage arrays is often like listening to drag racing enthusiasts bragging about the tweaks they made to their engines to break land speed records.

At the end of the day, storage performance issues need to be addressed sensibly and strategically. Optimizing IOPS is part of the equation, but so is ensuring that I/O traffic is balanced across the myriad pathways, usually a mix of Fibre Channel cabling and switching, iSCSI networking and direct-attach configurations. And, of course, implementing some means of spotting and resolving performance problems is key.

Managing and monitoring your storage system

In an ideal world, realizing optimal storage performance efficiency would be a simple matter of monitoring I/O traffic across the plumbing of the infrastructure, tracking round-trip times and spotting overcommitted links, ports and disks. This information would then be used to tune the infrastructure to accommodate changing workload requirements.
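The monitoring loop described above can be sketched in a few lines: collect round-trip latency samples per link, then flag any link whose average exceeds a budget. This is an illustrative sketch only; the link names and the 5 ms budget are assumptions, not figures from any vendor or standard.

```python
# Sketch: spot overcommitted links from round-trip time (RTT) samples.
from statistics import mean

LATENCY_BUDGET_MS = 5.0  # illustrative threshold, not a vendor figure


def overcommitted_links(samples):
    """samples: dict mapping link name -> list of RTT samples in ms.

    Returns the sorted names of links whose average RTT exceeds the budget.
    """
    return sorted(
        link for link, rtts in samples.items()
        if rtts and mean(rtts) > LATENCY_BUDGET_MS
    )


samples = {
    "fc-switch-port-3": [1.2, 1.4, 1.1],   # healthy
    "iscsi-uplink-1": [7.8, 9.1, 8.4],     # consistently over budget
}
print(overcommitted_links(samples))         # ['iscsi-uplink-1']
```

A real tool would stream these samples continuously and correlate them across the fabric, but the tuning decision boils down to exactly this kind of comparison.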

The problem is that coherent management and monitoring isn't usually baked into the data storage infrastructure. Most products (arrays, switches, host bus adapters and so on) in the infrastructure have their own management utilities (so-called element management software) that don’t integrate with each other to provide an end-to-end view of I/O traffic. Moreover, the tendency of the industry to “add value” to storage by joining proprietary software to array controllers has raised significant obstacles to unified management.

Storage resource management (SRM) software from CA Technologies, IBM, Symantec and others can help, but it's only as effective as its support for the kits you've deployed. The more heterogeneous your hardware, the less likely it is that SRM can map all its nuances with any granularity.

Supplementing SRM with physical and software “taps” that collect and correlate information about I/O traffic provides a better view of the situation. Virtual Instruments (formerly part of Finisar) has some good technology in this space. However, not many storage administrators have deployed these tools.

Storage virtualization provides another approach, and perhaps the least disruptive of the bunch. Products like DataCore Software’s SANsymphony-V and some of the other storage virtualization products in the market can deliver most of what the storage admin needs to optimize performance.

Using a storage virtualization product (or a “storage hypervisor” as some vendors have started to brand their wares), I/O receives a bump in performance automatically because it's serviced initially by the installed memory of the storage hypervisor host before being sent to the disk. This is a kind of spoofing (acknowledging writes as complete before they're actually performed, then queuing the data for orderly physical write to slower disk hardware) that has been used for years. Without it, many name-brand storage products wouldn't exist today.
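The "spoofing" pattern described above reduces to a simple discipline: acknowledge the write as soon as it lands in memory, queue it, and flush to slower disk in order. Here is a minimal sketch of that idea; the class and method names are illustrative, not any product's API.

```python
# Sketch: write-back "spoofing" -- ack writes from memory, flush to disk later.
from collections import deque


class SpoofingCache:
    def __init__(self, backing_store):
        self.backing = backing_store  # a dict standing in for slow disk
        self.pending = deque()        # ordered queue of unflushed writes

    def write(self, block, data):
        self.pending.append((block, data))
        return "ack"                  # acknowledged before the physical write

    def flush(self):
        while self.pending:
            block, data = self.pending.popleft()
            self.backing[block] = data  # the slow physical write happens here


disk = {}
cache = SpoofingCache(disk)
assert cache.write(7, b"payload") == "ack"
assert 7 not in disk                  # not yet on "disk"
cache.flush()
assert disk[7] == b"payload"
```

The obvious hazard, and the reason real products pair this with battery-backed or mirrored cache, is that anything still in the pending queue is lost if the host fails before the flush.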

Plus, the better storage hypervisors automatically balance I/O traffic across all the pathways available between the storage hypervisor host and the back-end storage rigs. And, of course, the back-end physical storage can be grouped (or pooled) into virtual volumes comprising arrays with similar speeds and feeds so they deliver predictable performance characteristics.
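The path-balancing behavior described above can be as simple as rotating I/Os across the available paths. A round-robin sketch, with illustrative path names (real hypervisors weigh queue depth and path health, not just turn order):

```python
# Sketch: round-robin I/O distribution across available storage paths.
from itertools import cycle

paths = ["fc-path-a", "fc-path-b", "iscsi-path-a"]  # illustrative names
next_path = cycle(paths).__next__

# Dispatch six I/Os; each path receives an equal share.
assigned = [next_path() for _ in range(6)]
print(assigned)
# ['fc-path-a', 'fc-path-b', 'iscsi-path-a', 'fc-path-a', 'fc-path-b', 'iscsi-path-a']
```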

Any of the above strategies would help "bolt on" the means to deliver the best possible storage infrastructure performance, or at least provide some helpful tools that most admins lack today. To truly "bake in" performance efficiency management would require a unified language implemented at the level of each device that touches data traffic. This is possible today with RESTful management protocols built on open web standards that nearly every vendor has publicly embraced but few have actually implemented in their wares. To see what's possible with REST, readers can visit cortexdeveloper.com.

The bigger picture

Performance efficiency shouldn't be considered in isolation from the other dimensions of storage efficiency. As mentioned in a previous tip, techniques that have begun to appear in the market can significantly improve the response time of disk arrays by caching read requests directed to disk drives in either inexpensive DRAM or flash solid-state drives (SSDs).

X-IO (formerly Xiotech) took the early lead on this strategy with its patented "Hot Sheets" technology a couple of years ago. Similar approaches are now finding their way into gear from IBM and other vendors.

With “Hot Sheets,” data that has been written to disk and is subsequently receiving frequent and concurrent read requests is copied into the memory component (DRAM or SSD). Subsequent read requests are then redirected to the memory component where they're serviced at solid-state speeds (20,000 IOPS or more, far in excess of the I/O response rates of rotating disk). When accesses to the data diminish, and the data “cools,” read requests are pointed back to the data written on rotating media. The result is an extraordinary performance improvement documented by numerous SPC-1 benchmarks.
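The promote-and-cool lifecycle described above can be sketched as a small state machine: blocks read frequently are promoted to the fast tier, and when accesses diminish they are pointed back at rotating media. The class name, the promotion threshold, and the tier labels below are assumptions for illustration, not X-IO's actual parameters.

```python
# Sketch: hot-block promotion and cooling, in the spirit of "Hot Sheets".
class HotBlockCache:
    def __init__(self, promote_after=3):
        self.promote_after = promote_after  # reads needed to "heat up" a block
        self.read_counts = {}
        self.fast_tier = set()              # blocks served from DRAM/SSD

    def read(self, block):
        """Record a read and report which tier services it."""
        self.read_counts[block] = self.read_counts.get(block, 0) + 1
        if self.read_counts[block] >= self.promote_after:
            self.fast_tier.add(block)       # copy hot data to memory component
        return "fast" if block in self.fast_tier else "disk"

    def cool(self, block):
        """Accesses diminished: point reads back at rotating media."""
        self.fast_tier.discard(block)
        self.read_counts[block] = 0


c = HotBlockCache()
assert c.read(42) == "disk"
assert c.read(42) == "disk"
assert c.read(42) == "fast"   # third read crosses the threshold; block promoted
c.cool(42)
assert c.read(42) == "disk"   # cooled block is served from disk again
```

A production implementation would track access rates over a sliding time window and demote blocks automatically, but the promote/serve/cool cycle is the essence of the technique.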

This architecture appears to be taking hold in the industry and, from a power efficiency standpoint, is far superior to other approaches for improving IOPS in a disk array, such as disk clustering and short-stroking. The latter approaches require many disks, all consuming energy and subject to the 7% annual failure rate that plagues all spinning rust.

Do what you will to monitor and manage storage I/O and optimize IOPS, and you'll still be left with a conundrum. The lousy application performance users are experiencing may, at the end of the day, have little or nothing to do with storage performance. Often, the log jam has more to do with the application code itself, or the server hypervisor, than it does with the storage.

BIO: Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.
