There’s a good series going on over at the Storage Anarchist’s page about defining storage tiers; if you’re trying to get some insight into how to better organize your own data, it promises to be a good read. Here’s the link to the first of four entries.

I’m going to digress from storage for a moment to discuss current events. In case you didn’t know, VMware is going to become a public company tomorrow. I have been following this for several months, both as an amateur investor and as someone who deals with VMware professionally, and I’ve been seeing lots of questions online about the IPO, so I figured I’d put together a quick post covering some of the basics.

First, VMware sells a software suite that allows multiple workloads to co-exist on the same Intel hardware. This is significant because Intel servers are traditionally deployed with a single application per machine, and Intel hardware is getting more powerful faster than applications can grow their basic requirements. Other platforms (like Unix and mainframe) were built from the ground up to do more than one thing at a time, but Intel cut its teeth in the desktop market, so it did not inherit this quality. Now that Intel servers are powerful and reliable enough to trust with many important company applications, VMware helps companies bring their average resource utilization from 10% up to 80% or higher by consolidating many light workloads onto the same physical machine.
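To put some rough numbers on that consolidation story, here’s a back-of-the-envelope sketch. The utilization figures and server counts are illustrative assumptions of mine, not VMware benchmarks:

```python
# Rough consolidation math: how many lightly loaded servers can share one host?
# All figures here are illustrative assumptions, not measured data.

def consolidation_ratio(avg_utilization, target_utilization):
    """How many servers at avg_utilization fit on one host at target_utilization."""
    return int(target_utilization / avg_utilization)

before = 20                               # physical servers, each ~10% busy
ratio = consolidation_ratio(0.10, 0.80)   # guests that fit per host
after = -(-before // ratio)               # ceiling division: hosts needed afterward

print(ratio)   # 8 guests per host
print(after)   # 3 hosts instead of 20
```

Even with conservative numbers, the hardware savings are why this market took off.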

VMware is owned by EMC, a prominent storage solutions company. VMware has been growing by leaps and bounds, and EMC wants to ensure that its investors can clearly see this jewel in the crown. Thus, EMC has decided to sell about 10% of VMware’s stock to the public. Recently, Intel and Cisco both stepped up to the plate to buy a piece of VMware before it went public.

EMC bought VMware in 2004 for a steal, but had to agree to keep its nose out of VMware’s business. This is relevant to VMware’s bottom line because the biggest competitive differentiator it has over the other virtualization solutions (like Xen and Microsoft) comes from its two-year head start: a massive list of solutions VMware has worked hard to ensure compatibility with, including some serious competitors to EMC. If your company uses a mainstream application and wants to run it under VMware, chances are they’ve tested it and invested time and money into making sure it will work. Of course, they also have a head start on some of the niftier features, like the ability to move running applications from one server to another, but those features will eventually become standard across all virtualization products, while their partner ecosystem will still be years ahead of the competition.

I think this addresses some common questions I’ve seen about VMware and this IPO, but if anyone needs clarification, this is a Q&A blog, so ask away.

What’s your take on virtualization? VSAN from Cisco, SVC from IBM? What other virtualization products are available from other vendors?

Thanks,
John

Cisco VSANs and IBM’s SVC are different things for certain :)

The VSAN allows you to create multiple logical fabrics within the same switch- you tell it what ports are part of what SAN, and you can manage the fabrics individually. It’s especially useful if you’re bridging two locations’ fabrics together for replication or something because it allows you to do “inter VSAN routing” if you have the right enterprise software feature. That would allow you to have two separate fabrics whose devices can see each other, but if the link between the sites fails (which is more likely than a switch failure), you won’t have the management nightmare of having to rebuild the original fabric out of two separated fabrics when the link comes back. VSANs are also commonly used to isolate groups of devices for the purpose of keeping those devices logically separated from parts of the network they’ll never need to interact with.

IBM’s SVC is a different technology, intended to consolidate multiple islands of FC storage. It’s essentially a Linux server cluster that you place between your application servers and the storage. It allows you to take all the storage behind it and create what they call “virtual disks”: essentially a LUN that is presented to a server but is actually composed of multiple RAID arrays (possibly from multiple controllers). This gives you the option of striping your data across more spindles than you normally could, and it allows you to provision storage dynamically as your datasets grow.
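A toy sketch of that striping idea: the real SVC works in terms of extents and managed-disk groups, so the block-level round-robin below is a simplification of my own, just to show how one virtual disk can spread its blocks across several back-end LUNs.

```python
# Simplified model of striping a virtual disk across back-end LUNs.
# Stripe size and LUN count are arbitrary example parameters.

def locate(block, stripe_blocks, backend_luns):
    """Map a virtual-disk block number to (back-end LUN index, block within that LUN)."""
    stripe = block // stripe_blocks   # which stripe this block falls in
    lun = stripe % backend_luns       # stripes rotate round-robin across LUNs
    offset = (stripe // backend_luns) * stripe_blocks + block % stripe_blocks
    return lun, offset

# With 16-block stripes over 4 LUNs, consecutive stripes land on different LUNs:
print(locate(0, 16, 4))    # (0, 0)
print(locate(16, 16, 4))   # (1, 0)
print(locate(64, 16, 4))   # (0, 16)
```

The point is that a sequential or heavy workload against one “virtual disk” ends up exercising spindles on every controller behind the cluster.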

The only downside of the Cisco VSAN technology I can think of is its cost: it’s bloody expensive compared to a cheap low-end solution, and for anything less than a 50-device FC fabric, I would question whether it’s worth it. There is an alternative from Brocade/McData called LSAN, though I am not as familiar with it. I have been told that it’s conceptually simpler, but harder to manage, and it doesn’t have Cisco’s full feature set.

The downside to the IBM SVC is that it adds latency to your disk I/O: every time a server performs a read or write, the request has to go through the Linux cluster first. The SVC has a much larger cache than most controllers, so there’s a better chance that the data you’re looking for is already there, but if it’s not, your read performance might suffer a little because of the extra hop. The advantage is that you can now use incredibly cheap controllers with tiny amounts of cache, and you can migrate data from any manufacturer’s device to any other manufacturer’s device without interrupting your servers. Under a virtualized environment like this, an older DS4300 like yours will perform pretty much on the same level as a more expensive DS4800 or EMC CX3-80 (assuming the same number of drives) because you don’t really use the cache of the underlying system. Another advantage of the SVC is on licensing: most FC storage controllers charge you, either up front or over time, for the number of servers you plan to connect to them. IBM charges a “partition license” fee for LUN masking, and EMC charges a “multipath maintenance” tax. The multipath drivers for SVC are free, and it only needs one partition from the controller, so you might be able to save money that way.

Did you have any specific questions about these topics you want more detail on?

Also, one of the new bloggers in the storage world, Barry Whyte, focuses on the IBM SVC. He just started, but his blog will hopefully become a real resource for people with IBM storage virtualization on their minds.

Aloha Open Systems Guy, can you take another question from me? I’ve got some questions about OS drivers for disk subsystems… What’s up with all the RDAC, MPIO/DSM, and SDD? I’ll try to keep things consistent by limiting my question to one OS (Windows Server 2003).

I’ve heard talk about SDD being superior for the ESS / DS8000 line of storage. Apparently it isn’t even available for active/passive arrays. However, I’ve got a mid-range disk subsystem from IBM, the DS4300 Turbo model.

Until tonight I thought there was only a single choice of multi-pathing driver for me, RDAC. However, when I went about installing my first SAN-connected Windows OS, I ran into all kinds of new information like SCSIport and Storport and now MPIO / DSM.

Can you help de-mystify this enigma for me?

Mahalo nui loa,
John

Certainly! Always happy to get more questions. I’m a chronic sufferer of writer’s block, so your questions help by providing material ;)

Each vendor dictates which multi-path drivers they support, and going outside those constraints is possible, but it will usually void the warranty. My experience with IBM is that they usually support something out of the box if it works, or in special cases if it can be made to work. Since they only support RDAC with the DS4000 series, I’ll bet that nothing else would work; whether that’s by design or technical limitation, I do not know. Either way, I suggest you stick with the driver they recommend.

The only limitation of RDAC is that it does not dynamically load-balance; in terms of failover protection, however, it’s bullet-proof.

edited to add: The other drivers you mention are supported on other IBM systems, by the way.

I’m a recent convert to storage administration. I’m having a hard time cutting through the cruft to find the truth. Could you answer some of these questions?

1 – Which is faster, software-based RAID (e.g. Linux md, Windows Dynamic Disks) or hardware-based RAID? One person said that software-based RAID is faster because the host has a faster processor and more RAM/cache (something like a 3.0 GHz Xeon with 4 GB of RAM would be typical in my environment). But how could that stack up against my (slightly old) IBM DS4300 Turbo (2 GB cache)?

2 – Which is faster, RAID-5 or RAID-10 (or is that RAID-01)? I know everybody says RAID-10, but what about those fancy XOR engines? Or have I fallen prey to marketing?

To answer your questions, I’m first going to give a bit of background info. If any of my statements don’t make sense, please reply and I’ll answer :).

The term “faster” can mean different things to different people. Each type of storage has its strengths and weaknesses, and different applications perform differently on the same storage system. There are two primary application workloads: those that do random I/O, and those that do sequential I/O.

The random workloads are the hardest ones to provide storage for because it’s very difficult to “read ahead” by predicting where the next read will fall. An example of an application that has a random workload would be a database or email server.

The sequential workloads are easier to provide storage for: pre-fetching the next block means that most reads are already in cache by the time they’re requested. Examples of applications like this would be a backup server or certain file servers.

Another general bit of info: in a RAID, reads (not writes) are usually the bottleneck. Writes are usually fed into the cache and acknowledged to the host server immediately. Reads, however, typically make up 70% of the I/O done by a system, and as we discussed, they are often impossible to “pre-cache”.
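A tiny simulation makes the random-versus-sequential caching point concrete. The read-ahead depth, cache behavior, and workloads below are all made-up assumptions, chosen just to show why sequential workloads cache so well:

```python
# Simulate a read-ahead cache that prefetches the next few blocks after
# every read, then compare hit rates for sequential vs. random workloads.
import random

def hit_rate(accesses, prefetch_depth=4):
    cache = set()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
        # On any read, prefetch the blocks that follow it.
        cache.update(range(block + 1, block + 1 + prefetch_depth))
    return hits / len(accesses)

sequential = list(range(1000))                         # backup-style workload
rand = [random.randrange(10**6) for _ in range(1000)]  # database-style workload

print(hit_rate(sequential))  # near 1.0: almost every read was prefetched
print(hit_rate(rand))        # near 0.0: prefetching rarely guesses right
```

This is why the same array can feel blazingly fast to a backup job and sluggish to a database.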

When you’re calculating performance, the two stats you’ll want to know are I/Os per second for random loads and MB per second for sequential loads (abbreviated IOPS and MBPS). When you’re trying to tune a system to be quick for your applications, you need to know the different levels of your system and which one is the bottleneck. Normally, on a decent controller, the number of spindles in the RAID determines the IOPS, and you should get a roughly linear increase in performance as you add drives. Cache is important for the roughly 30% of I/O that is writes (your mileage may vary), but everything goes to disk eventually, and most people experiencing slow performance on their disk controllers simply don’t have enough disks.
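A quick sizing sketch of the “spindles determine IOPS” rule. The per-drive figures below are ballpark numbers of my own for drives of this era, not benchmarks of any particular array:

```python
# Rough random-read IOPS estimate from spindle count.
# Per-drive numbers are typical ballpark assumptions, not measurements.
DRIVE_IOPS = {"15k_fc": 180, "10k_fc": 140, "7200_sata": 80}

def raid_read_iops(drive_type, spindles):
    """Aggregate random-read IOPS scales roughly linearly with spindle count."""
    return DRIVE_IOPS[drive_type] * spindles

# The same 12-bay enclosure filled with 15K FC drives vs. 7200 RPM SATA:
print(raid_read_iops("15k_fc", 12))     # 2160
print(raid_read_iops("7200_sata", 12))  # 960
```

When a random workload is slow, counting spindles this way is usually more productive than shopping for a bigger controller.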

Onto the specifics of your question:

1 – Software or hardware RAID: For most workloads, a dedicated hardware RAID controller is faster. Software RAID has to share resources with the operating system, which is usually not optimized for sharing at that level. The IBM DS4300 you have is actually an LSI box, and it has a very powerful RAID controller for its price. Don’t let your sales rep talk you into replacing it! Those boxes may be a little old, but the only major differences between that and the newer IBMs are that the newer ones use 4 Gb Fibre Channel and carry more cache. It’s very rare that a workload can max out 2 Gb fibre on the front end, and even more rare that the controller can fully utilize all the bandwidth on the disk side. The extra cache can be useful, but you will experience diminishing returns: the benefit of going from 2 to 4 GB is far less than going from 1 to 2 GB. The controller should not be your bottleneck for anything under 80 FC drives on the system you have, so unless you want to go beyond that, keep your box until the maintenance costs more than a replacement. Add more drives if you need IOPS or MBPS, but don’t throw it out. These boxes should be like houses: only buy a bigger one when you need it, not because the last one is obsolete.

2 – RAID 5 or RAID 10: I will compare them on reliability and performance. RAID 5 uses the space of one disk for parity, and RAID 10 uses the space of half the disks for mirroring. Reliability-wise, RAID 10 is the obvious winner: you can lose up to half your disks before you lose data (as long as you don’t lose both drives of the same mirrored pair). If you lose a second drive while rebuilding a RAID 5 array, you will always have to go back to your last backup. Generally, this is more of a worry for large SATA drives than for the smaller and faster FC drives; SATA RAIDs take far longer to rebuild because of the larger amount of data combined with the lower performance per spindle.
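The capacity side of that trade-off is easy to compute. Drive counts and sizes below are arbitrary examples:

```python
# Usable capacity for the two RAID layouts discussed above.

def raid5_usable(drives, drive_gb):
    """RAID 5 loses one drive's worth of space to parity."""
    return (drives - 1) * drive_gb

def raid10_usable(drives, drive_gb):
    """RAID 10 mirrors everything, so only half the raw space is usable."""
    return (drives // 2) * drive_gb

# Twelve 300 GB drives (3600 GB raw) under each layout:
print(raid5_usable(12, 300))   # 3300 GB usable
print(raid10_usable(12, 300))  # 1800 GB usable
```

That capacity gap is why RAID 5 stays attractive despite the weaker fault tolerance.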

As for performance, the per-drive performance and capacity efficiency are better with RAID 5, though random writes pay a parity penalty that a good XOR engine can largely hide. Most people put two RAID 5s on each enclosure and have 4 to 6 RAIDs per hot spare. The XOR engine you mention performs the parity calculations for RAID 5; it is not needed for RAID 10 or any other non-parity type of RAID. Since you do have a fairly fast controller, RAID 5 is attractive, but you have to balance your decision between performance and reliability.
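For the curious, here is what that XOR engine actually computes. RAID 5 parity is just the bytewise XOR of the data strips, and any single lost strip can be rebuilt by XORing the survivors with the parity; the three-data-drive layout below is my own toy example.

```python
# RAID 5 parity in miniature: parity = XOR of the data strips, and a lost
# strip is recovered by XORing the surviving strips with the parity.
from functools import reduce

def xor_strips(*strips):
    """Bytewise XOR of equal-length strips."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_strips(d0, d1, d2)

# Simulate losing drive 1 and rebuilding it from the survivors plus parity:
rebuilt = xor_strips(d0, d2, parity)
assert rebuilt == d1
```

Dedicated hardware does this calculation at wire speed, which is why a decent controller can take much of the sting out of RAID 5 writes.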

Welcome to the newest storage blog in the blogosphere. Storage technology can be complex, and this is the place to come and ask questions to reduce that complexity. I can answer most architecture and design questions (like “what is the difference between iSCSI and Fibre Channel?”), and I can find the answers to most usage and best-practices questions (like “how can I script the CLI to take a snapshot?”).

I am also looking for a co-writer who would be able to help answer the more technical questions- any takers? Please email me.

I am happy to join the community of excellent bloggers who write about this technology- I will add them to my blogroll as I find them.