Or "I can get storage for $0.15 per gigabyte from Amazon S3, what's the problem?"
– Chris Upchurch, May 1 '09 at 14:26

@Chris Upchurch: But the problem is that you might have to write a report on whether to choose Amazon S3, Google App Engine or ... Gosh, that might be painful. ;)
– Sung, May 1 '09 at 15:47


I might turn that around on you. My work is generating revenue, and I need a little more storage to do my job effectively. It's a solid investment, so why can't you just buy more storage?
– user640, May 1 '09 at 15:56

@Chris: Of course it always depends on the situation at hand, but I've found through (costly) experience that using Amazon S3 for baseline storage is not exactly cost effective. S3 is much better used to handle traffic peaks so that you don't have to invest in a system which can handle rare worst case scenarios - but if you start using it for day to day operations you might find that you're much better off paying capital cost...
– Mihai Limbăşan, May 17 '09 at 16:56

8 Answers

Some home truths about storage, or why is enterprise storage so f-ing expensive?

Consumer hard drives offer large volumes of space so that even the most discerning user of *cough* streaming media *cough* can buy enough to store a collection of several terabytes. In fact, disk capacity has been growing faster than the transistor counts on silicon for a couple of decades now.

'Enterprise' storage is a somewhat more complex issue as the data has performance and integrity requirements that dictate a somewhat more heavyweight approach. The data must have some guarantee of availability in the event of hardware failures and it may have to be shared with a large number of users, which will generate many more read/write requests than a single user.

The technical solutions to this problem can be many, many times more expensive per gigabyte than consumer storage solutions. They also require physical maintenance; backups must be taken and often stored off-site so that a fire does not destroy the data. This process adds ongoing costs.

Performance

On your 1TB consumer or even enterprise near-line drive you have just one head actuator, so only one I/O request can be serviced at a time. The disk rotates at 7,200 RPM, or 120 revolutions per second. This means that in theory you can get at most 120 random-access I/O operations per second* and somewhat less in practice. Thus, copying a large file on a single 1TB volume is relatively slow.

On a disk array with 14x 72GB disks, you have 14 independent head actuators over disks spinning at (say) 15,000 RPM, or approximately 250 revolutions per second. This gives you a theoretical maximum of 3,500 random I/O operations per second* (again, somewhat less in practice). All other things being equal, a file copy will be many, many times faster.

* You could get more than one random access per revolution of the disk if the geometry of the reads allowed the drive to move the heads and read a sector that happened to be available within one revolution. If the disk accesses are widely dispersed you will probably average less than one. Where a disk array is formatted in a striped layout (see below), you will get a maximum of one stripe read per revolution of the disk in most circumstances and (depending on the RAID controller) possibly less than one on average.
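The IOPS arithmetic above can be sketched as a quick back-of-envelope model. This is only the simplifying assumption from the footnote (at most one random access per revolution), not a real drive model:

```python
# Back-of-envelope random-IOPS ceiling, assuming at most one random
# access per platter revolution (the footnote's simplification).
def max_random_iops(rpm, disks=1):
    revolutions_per_second = rpm / 60
    return revolutions_per_second * disks

print(max_random_iops(7200))             # single 7,200 RPM drive -> 120.0
print(max_random_iops(15000, disks=14))  # 14x 15,000 RPM array -> 3500.0
```

Real drives will do somewhat worse (seek and controller overhead) or occasionally better (favourable read geometry), as noted above.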

The 7200 RPM 1TB drive will probably be reasonably quick on sequential I/O. Disk arrays formatted in a striped scheme (RAID-0, RAID-5, RAID-10 etc.) can typically read at most one stripe per revolution of the disk. With a 64K stripe we can read 64Kx250 = 16MB or so of data per second off a 15,000 RPM disk. This gives a sequential throughput of around 220MB per second on an array of 14 disks, which is not that much faster on paper than the 150MB/sec or so quoted for a modern 1TB SATA disk.

For video streaming (for example), an array of 4 SATA disks in a RAID-0 with a large stripe size (some RAID controllers will support stripe sizes up to 1MB) has quite a lot of sequential throughput. This example could theoretically stream about 480MB/sec, which is comfortably enough to do real-time uncompressed HD video editing. Thus, owners of Mac Pros and similar hardware can do HD video compositing tasks that would have required a machine with a direct-attach fibre array just a few years ago.
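Both sequential figures follow from the same one-stripe-per-disk-per-revolution rule. A rough sketch, assuming the stripe sizes and spindle speeds quoted above and 7,200 RPM for the SATA disks:

```python
# Rough sequential-read ceiling for a striped array:
# one stripe per disk per platter revolution.
def streaming_mb_per_sec(rpm, stripe_kb, disks):
    revolutions_per_second = rpm / 60
    return stripe_kb * revolutions_per_second * disks / 1024

# 14x 15,000 RPM disks, 64K stripe -> the "around 220MB/sec" figure
print(streaming_mb_per_sec(15000, stripe_kb=64, disks=14))   # 218.75
# 4x 7,200 RPM SATA disks in RAID-0, 1MB stripe -> the 480MB/sec figure
print(streaming_mb_per_sec(7200, stripe_kb=1024, disks=4))   # 480.0
```

In practice controller overhead and less-than-ideal layout will pull both numbers down, which is why the text hedges with "on paper".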

The real benefit of a disk array is on database work which is characterised by large numbers of small, scattered I/O requests. On this type of workload performance is constrained by the physical latency of bits of metal in the disk going round-and-round and back-and-forth. This metric is known as IOPS (I/O operations per second). The more physical disks you have - regardless of capacity - the more IOPS you can theoretically do. More IOPS means more transactions per second.

Data integrity

Additionally most RAID configurations give you some data redundancy - which requires more than one physical disk by definition. The combination of a storage scheme with such redundancy and a larger number of drives gives a system the ability to reliably serve a large transactional workload.

The infrastructure for disk arrays (and SANs in the more extreme case) is not exactly a mass-market item. In addition, it is one of the bits that really, really cannot fail. This combination of build quality and smaller market volume doesn't come cheap.

Total storage cost including backup

In practice, the largest cost for maintaining 1TB of data is likely to be backup and recovery. A tape drive and 34 sets of SDLT or ultrium tapes for a full grandfather cycle of backup and recovery will probably cost more than a 1TB disk array did. Add the costs of off-site storage and the salary of even a single tape-monkey and suddenly your 1TB of data isn't quite so cheap.

The cost of the disks is often a fair way down the hierarchy of dominant storage costs. At one bank I had occasion to work for, SAN storage was costed at £900/GB for a development system and £5,000/GB for a disk on a production server. Even at enterprise vendor prices the physical cost of the disks was only a tiny fraction of that. Another firm I am aware of has a (relatively) modestly configured IBM Shark SAN that cost somewhere in excess of £1 million. Just the physical storage on this is charged out at around £9/gigabyte, or about £9,000 for space equivalent to your 1TB consumer HDD.

I think it's a valid response. The point is, we each have our own areas of expertise, and members of a team need to trust one another. Flipping the question back to the developer like this will help them realize how pointless it is to try to second-guess one another.
– Portman, May 1 '09 at 17:21


Another valid response would be that the guy at Geek Squad could probably figure out how to do it, do it cheaper, and have a lot better attitude about doing it. Seriously, why is this the highest voted answer for this question? I did have a nice chuckle while reading it, but if this is going to be how the site members respond to naive questions, I'll stick with Google and Experts Exchange.
– dfjacobs, May 5 '09 at 5:39

The number one thing people need to realize about storage is that there's a big difference between capacity and IOPS. Things like durability etc. are usually moot; it almost always comes down to IOPS vs. capacity.

Your dev might need more space, but maybe it is not "enterprise class" drive space he is after. Maybe he just needs a place to store .vhd's and ISOs that, in case of a disk crash, can be downloaded again from MSDN. Maybe test runs have large transient space requirements that only need to be met for the duration of the test run. For all of these a $50 Walmart drive can be a valid solution.

It depends on what kind of servers they're asking about. For a basic dev or testing server, 1TB drives from Walmart are probably good enough. If you're dealing with a high-end server that doesn't use off-the-shelf components, ask them if they'd build a race car and buy the tires from an auto parts store to save a few bucks.

A simple one-line answer: 1TB drives are usually SATA, but your server is SCSI. (Even if the server is not SCSI, this might stop the line of enquiry...for now.)

A 300GB SCSI drive is usually 4x the price, then there's backing up the existing data, getting downtime organised, doing the install, something might go wrong, the overtime, etc. etc. All in all, a simple storage upgrade can lead to all sorts of pain - none of which the dev is directly responsible for. Saying that you can buy an off-the-shelf drive that satisfies the current need is hopelessly simplistic.

But you know you should have put bigger drives in the damn servers when you bought them, and you're kicking yourself now! But you wanted the servers installed promptly, and bigger drives would have added to the upfront cost and might have required an extra round of approval...welcome to the sysadmin's world of pain...