Month: October 2012

Sure, the title was meant to be inflammatory, but at the same time I am seeing one of the most dramatic shifts in enterprise storage in the last 10 years. Some history would probably help here. I began my career in IT 15 years ago. In 1997, major companies ran their entire businesses on either a mainframe or a midrange system, and green screens ruled the world. We barely had email, and it was surely not a collaboration suite. At the time, I was a systems admin and spent days, and often nights, working with the large direct-attached storage systems for either the midrange or the Windows environments.

We slowly moved into shared storage, often for a single system. Our Exchange server had a shared set of disks for the cluster, and the same goes for SQL, but we didn't dare move the midrange systems (AS/400 at the time) to a shared storage solution. Around 2001, I insisted to my management that we should have a shared solution for both open systems (Windows and Linux) and our iSeries, but got amazing pushback. The more we virtualized, the more traction I was able to get. It probably helped that many of the midrange systems were being replaced by monolithic Sun and Windows boxes, so the IBM purists had less sway. About this same time, you saw IBM start to transform itself into a services and software company, the move that Sun never realized it needed to make. With the vast growth of virtualization came the rise of EMC and then startups like NetApp. Over the next five years, shared storage became the accepted go-to platform. As our data growth has exploded, so has the size of the arrays we use to store it all.
So if we have massive data growth, how can I say that storage has died? The answer is simple: I can't. What I can say is that the way we address storage has changed. We are reverting to the direct-attached storage days, with a few exceptions. In the direct-attached days, the big reason for keeping the drives local was that the data was all controlled by the software. It was software-defined storage, just no one called it that. Today we are seeing the same move back to software-defined storage. The major cloud players have all found that users want a choice of where their data goes. vCloud Director now has storage profiles. OpenStack already lets you define tiers of storage. Object-based storage is leading the way in moving data between entities without the need for a set structure. Hadoop and Gluster are saying that where the data sits does not matter and that you should concentrate on how you process it.
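The storage profiles and tiers mentioned above boil down to one idea: the hardware only supplies raw pools of disk, and a software policy decides where data lands. A minimal sketch of that policy layer, in Python, with all names and numbers hypothetical (this is not vCloud Director's or OpenStack's actual API, just the shape of the concept):

```python
# Hypothetical sketch of software-defined placement: the tiers describe
# whatever hardware happens to be available; the profiles describe what
# the user asked for; the policy engine maps one to the other in software.

TIERS = {
    "ssd":  {"latency_ms": 1,  "cost_per_gb": 1.00},
    "sas":  {"latency_ms": 8,  "cost_per_gb": 0.40},
    "sata": {"latency_ms": 15, "cost_per_gb": 0.10},
}

PROFILES = {
    "gold":   "ssd",   # e.g. databases
    "silver": "sas",   # e.g. general VMs
    "bronze": "sata",  # e.g. archives
}

def place(volume_name, profile):
    """Decide, purely in software, which tier backs a new volume."""
    tier = PROFILES[profile]
    return {"volume": volume_name, "tier": tier, **TIERS[tier]}

print(place("db-logs", "gold"))
print(place("archive", "bronze"))
```

The point is that swapping the hardware only changes the entries in `TIERS`; the policy, and the user's view of it, never changes.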
So where does that leave us? We need to look at hardware vendors, right? After all, they control the drives, and we want to make sure our data integrity stays high and that we can control where we place our data. I argue that we should look to the hardware vendors only for a place to put data, not a way to control it. The software-defined storage of today lets me add data integrity, portability, and speed with whatever hardware I want. We have 4TB drives spinning faster than most personal computer drives, and solid-state drives that carry 5-year warranties in sizes approaching a TB. Now we need the likes of HP, Dell, and IBM to press the manufacturers, Sanmina and Quanta among others, to produce for density and environmental factors. HP announced with the Gen8 servers that the hardware RAID controllers on their servers could hold more disks and process at a faster speed. But what about when I want to control the data? Where is my dumb JBOD at density? Dell has started to trend toward higher density with the 3020 60-drive JBOD. Well, almost: they say you have to have the 3260 to manage the JBOD. Seems like a conflict to me.
If we can start to get all of our data controlled by software, on the hardware we want, at the best density rates, we can keep moving toward a point where storage as we know it may very well die.