So a pretty exciting week ahead: VMworld 2009 begins on Monday. This is my first time at VMworld, and I am quite excited to attend.

We are working on some pretty cool technologies around virtualization and storage, and I would really like to spend some quality time with the VMware engineers understanding the new vSphere and other cloud and virtualization services.

We get thrown into some very complicated virtualization implementations, both internally and externally (services to customers), and we either implement them successfully or sometimes stumble and learn. This will be a good chance to talk to some very smart people in the industry, build contacts to reach out to, and discuss some existing issues with VMware engineering folks.

I am also looking forward to hanging out with fellow Twitter users and bloggers from the virtualization and storage blogosphere, and to putting faces to the extraordinary news and blogging activity we see from all of them.

I am looking forward to spending some time at the Expo (Solutions Gallery), talking to the exhibitors about the technology and services they provide in this vertical market space and how we can further leverage those to grow our technology solutions practice.

I will be accessible on Twitter at @StorageNerve at all times, and I also plan to blog the entire event on StorageNerve on a daily basis (time permitting). Looking forward to seeing a ton of people from the industry, various customers, and fellow Twitter mates. To get the highlights of VMworld 2009, visit StorageNerve every day from August 30 through September 4, 2009, and look for the tag vmworld-2009 (http://www.storagenerve.com/tag/vmworld-2009) on posts.

Some economic experts think that the economy is improving, or at least getting worse less quickly. Let’s hope so. But for all you IT managers, the budget situation for the rest of 2009 and 2010 is likely to remain tight. Storage, which consumes an increasing share of the CapEx budget, will be heavily impacted. Nevertheless, business continues: you need to address growing data volumes and increasingly stringent SLAs without increasing headcount or CapEx. There are plenty of platitudes about doing more with less; insert your favorite here. Fortunately, there are some simple strategies for coping. Obviously, the best choice is to make do with what you have without sacrificing results or the budget.

Listed below are four steps to making the most of what you have. But first, a little math. According to The InfoPro, average array utilization in data centers is just 35%. The average storage growth rate is pegged by some accounts at 50% compounded annually. At that rate, an “average” IT organization could last fully two years before hitting 80% utilization, the top end of the best-practice range (35% × 1.5² ≈ 79%). Of course, the trick is finding and enabling that storage. Now for the steps.
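The two-year runway above is just compound growth against a fixed ceiling; a quick sketch (the figures are the article's, the function name is mine) makes the arithmetic explicit:

```python
import math

def years_until_full(current_util, annual_growth, target_util):
    """Years until utilization reaches target, assuming data grows at a
    compound annual rate against fixed capacity:
    current_util * (1 + annual_growth)**t = target_util."""
    return math.log(target_util / current_util) / math.log(1 + annual_growth)

# The article's figures: 35% utilized today, 50% annual growth,
# 80% best-practice ceiling.
t = years_until_full(0.35, 0.50, 0.80)
print(f"{t:.1f} years")  # → 2.0 years
```

Plugging in different growth rates shows how sensitive the runway is: at 100% annual growth the same headroom lasts barely over a year.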

Step 1: Find out what you have

In contrast to The InfoPro numbers, user surveys report 70%-80% utilization. Who’s telling the truth? Probably both; the discrepancy lies in how the numbers are measured. Utilization can be measured by data written to disk (which might explain The InfoPro numbers), by storage provisioned (which might explain the user numbers), or by other metrics. The problem is, few users go through the laborious task of measuring every LUN, adding the figures together, and doing the math on a regular basis, regardless of the measurement. Even fewer have the visibility across the enterprise to generate a comprehensive number.

In the mainframe world, nearly all shops have a storage resource management (SRM) tool. In UNIX/Windows environments, very few do. Yet SRM products can provide, and maintain, the utilization data essential to optimizing storage. A good SRM tool not only aggregates data and gives visibility across the enterprise, but also drills down to find out which LUNs are over-provisioned and which are over-utilized. Of course, these tools can do much more, but unlocking tens or hundreds of TB of available space alone makes them worth the effort.
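The core of what an SRM tool does here can be sketched in a few lines. The LUN names, sizes, and thresholds below are purely illustrative; real SRM products collect the inventory automatically rather than from a hand-written list:

```python
# Hypothetical LUN inventory: (name, provisioned_gb, written_gb).
luns = [
    ("lun-db01", 500, 450),
    ("lun-web01", 1000, 120),
    ("lun-app02", 2000, 300),
]

# Fleet-level view: provisioned vs. actually written.
total_prov = sum(p for _, p, _ in luns)
total_written = sum(w for _, _, w in luns)
print(f"fleet utilization: {total_written / total_prov:.0%}")

# Drill-down: flag outliers at both ends (thresholds are assumptions).
for name, prov, written in luns:
    util = written / prov
    if util > 0.80:
        print(f"{name}: over-utilized ({util:.0%})")
    elif util < 0.30:
        print(f"{name}: over-provisioned ({util:.0%})")
```

The point of the sketch is the two views: the aggregate number tells you whether you have headroom at all, while the per-LUN pass tells you where it is hiding.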

Step 2: Adopt thin provisioning

Nearly all Tier 1 and Tier 2 storage array vendors currently support thin provisioning. There are lots of good resources on the Web explaining thin provisioning if you’re not familiar with it, so we won’t digress here. Bottom line: thin provisioning pools all the over-provisioned storage and makes it available to every application on an as-needed basis. No more guessing how much storage to provision for a given LUN, and no more LUN-shrinking and expanding exercises.

Now, I can guess what you’re thinking: “OK, smart guy, I’ve got more than 100 TB of storage and hundreds of applications. How do I get from ‘thick’ to ‘thin’ without a major disruption to the organization?” First, select a “thin aware” file system. “Thin aware” file systems are essential to staying thin over time; without one, a “thin” volume will grow fat again, requiring manual intervention and downtime to reclaim. Second, look for a data movement tool that works across any operating system, is storage-hardware independent, can move data from “thick” to “thin” online (no application downtime), and can automatically reclaim the unused storage. Between the two, you’ll get thin and stay thin, technologically speaking. Can’t help with your waistline, though.
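The mechanics of thin provisioning can be shown with a toy model. This is a sketch of the concept only, not any vendor's implementation: capacity is promised up front, but physical blocks are drawn from a shared pool only on write:

```python
class ThinPool:
    """Toy model of a thin-provisioned pool: LUNs are promised capacity
    up front, but physical space is consumed only when data is written."""

    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.used_gb = 0
        self.luns = {}  # name -> [provisioned_gb, written_gb]

    def provision(self, name, size_gb):
        # Promising capacity costs nothing physically.
        self.luns[name] = [size_gb, 0]

    def write(self, name, gb):
        prov, written = self.luns[name]
        if written + gb > prov:
            raise ValueError(f"{name}: write exceeds provisioned size")
        if self.used_gb + gb > self.physical_gb:
            raise MemoryError("pool exhausted: time to add physical disk")
        self.luns[name][1] += gb
        self.used_gb += gb

pool = ThinPool(physical_gb=1000)
pool.provision("app-a", 800)  # over-provision freely...
pool.provision("app-b", 800)  # ...total promises exceed physical capacity
pool.write("app-a", 200)
pool.write("app-b", 150)
print(pool.used_gb)  # → 350
```

Note the trade-off the model makes visible: the pool happily promises 1,600 GB against 1,000 GB of disk, which is exactly why monitoring actual pool consumption (the `MemoryError` path) becomes the critical operational task in a thin environment.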

Step 3: Implement deduplication

Deduplication is one of those immediate-impact schemes for freeing up storage space. The biggest source of duplicate data is backup and recovery (B/R): by its very nature, B/R backs up the same stuff over and over. Dedup appliances can address the issue, but they add another layer of devices to manage in the data center, and more storage along with it. A better solution is deduplication integrated directly with your B/R application, working at a global level (across remote offices, data centers, and virtual servers) so that duplicate data is never stored in the first place.
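The idea underneath every dedup product is content addressing: identify a chunk of data by a hash of its contents, store it once, and reference it thereafter. A minimal sketch (the class and its API are illustrative, not any product's):

```python
import hashlib

class DedupStore:
    """Minimal content-addressed store: a chunk is kept once, keyed by its
    SHA-256 digest, and every later copy becomes just another reference."""

    def __init__(self):
        self.chunks = {}  # digest -> bytes

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self.chunks.setdefault(key, data)  # store only if unseen
        return key

store = DedupStore()
# Back up the "same stuff over and over": three nightly copies of one file.
for _ in range(3):
    store.put(b"quarterly-report-v7 contents")
print(len(store.chunks))  # → 1 (stored once, not three times)
```

Real backup dedup works on fixed- or variable-size chunks rather than whole files, which is what lets it catch duplicates even when files differ slightly, but the store-by-hash principle is the same.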

Step 4: Archive unstructured data

The bane of a storage manager’s existence is obsolete and orphaned user storage. E-mail is often the main culprit. The trouble is, manually removing it costs more in human effort than it saves in disk space. Fortunately, there are products that can automatically move this data to the archive storage of your choice. Best of all, you get to set the policy: time frame, size, or whatever other criteria you decide should trigger the movement to archive. All of those duplicate PowerPoint presentations will be consolidated into a single instance. Once archived, the data should still be fully discoverable for legal requirements, and users still have full access to it. It may take a few seconds to recover, but recover it will, without tracking down tapes in a vault. Oh, by the way, did I mention it would dramatically reduce your B/R window as well? No need to back up the same thing over and over.
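An age-based policy sweep of the kind described above can be sketched in a few lines. The 180-day threshold and the directory layout are assumptions for illustration; commercial archiving products apply far richer policies (size, file type, duplicate detection) and leave a recall stub behind:

```python
import os
import shutil
import time

def sweep(source_dir, archive_dir, age_limit_days=180):
    """Move files not modified within age_limit_days from source_dir to
    archive_dir; return the names of the files moved."""
    cutoff = time.time() - age_limit_days * 86400
    moved = []
    for name in os.listdir(source_dir):
        path = os.path.join(source_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            shutil.move(path, os.path.join(archive_dir, name))
            moved.append(name)
    return moved
```

Running `sweep("/shares/users", "/archive/users")` nightly from a scheduler is the whole pattern; everything else in a real product is policy richness and the discoverability/recall machinery layered on top.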

None of these steps depends on the others; any one of them will extend the life of your current storage infrastructure. Taken together, they may let you ride out the current economic downturn without buying a single MB of additional capacity.

Late last month, I wrote a post with an open invitation to readers to contribute a guest post on the StorageNerve Blog.

After some careful selection, the first post is about to be released on Monday; stay tuned for it. Over the month of September you will see some additional guest posts on the StorageNerve Blog.

I hope you enjoy the posts; these topics bring forth a variety of subjects for readers that I have personally not been able to cover.

If you feel you can contribute a post about storage, virtualization, or related topics, please feel free to get in touch with me on Twitter or through the Contact link. The requirements for a guest blog post are on the invitation post.


Disclaimer

The opinions expressed here are my own (StorageNerve's). This blog and the content published here are not read or approved in advance by my employer (Accenture) or clients, and do not reflect their views and opinions.