Posts Tagged 'Hard Drives'

The jet flow gates in the Hoover Dam can release up to 73,000 cubic feet — the equivalent of 546,040 gallons — of water per second at 120 miles per hour. Imagine replacing those jet flow gates with a single garden hose that pushes 25 gallons per minute (or 0.42 gallons per second). Things would get ugly pretty quickly. In the same way, a massive "big data" infrastructure can be crippled by insufficient IOPS.

IOPS — Input/Output Operations Per Second — measure computer storage in terms of the number of read and write operations it can perform in a second. IOPS are a primary concern for database environments where content is being written and queried constantly, and when we take those database environments to the extreme (big data), the importance of IOPS can't be overstated: If you aren't able to perform database reads and writes quickly in a big data environment, it doesn't matter how many gigabytes, terabytes or petabytes you have in your database ... You won't be able to efficiently access, add to or modify your data set.
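To put the metric in context, IOPS and I/O size together determine raw throughput, which is why a storage device can look "fast" in megabytes per second and still fall over under a random database workload. Here's a quick back-of-the-envelope sketch (the numbers are round figures for illustration, not benchmarks of any particular device):

```python
# Rough illustration: throughput is just IOPS multiplied by the I/O size.
# These figures are illustrative round numbers, not measurements.
def throughput_mb_per_sec(iops, block_size_kb):
    return iops * block_size_kb / 1024.0

print(throughput_mb_per_sec(400, 8))     # ~3.1 MB/s from 400 IOPS at an 8k block size
print(throughput_mb_per_sec(40_000, 8))  # ~312 MB/s from 40,000 IOPS at an 8k block size
```

That's why a database issuing thousands of small random reads and writes per second cares far more about IOPS than about a drive's sequential transfer rate.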

As we worked with 10gen to create, test and tweak SoftLayer's MongoDB engineered servers, our primary focus centered on performance. Since the performance of massively scalable databases is dictated by the read and write operations to that database's data set, we invested significant resources into maximizing the IOPS for each engineered server ... And that involved a lot more than just swapping hard drives out of servers until we found a configuration that worked best. Yes, "Disk I/O" — the number of input/output operations a given disk can perform — plays a significant role in big data IOPS, but many other factors limit big data performance. How is performance impacted by network-attached storage? At what point will a given CPU become a bottleneck? How much RAM should be included in a base configuration to accommodate the load we expect our users to put on each tier of server? Are there operating system changes that can optimize the performance of a platform like MongoDB?

The resulting engineered servers are a testament to the blood, sweat and tears that were shed in the name of creating a reliable, high-performance big data environment. And I can prove it.

Most shared virtual instances — the scalable infrastructure many users employ for big data — rely on network-attached storage. When data has to be queried over a network connection (rather than from a local disk), you introduce latency and more "moving parts" that have to work together. Disk I/O might be amazing on the enterprise SAN where your data lives, but because that data is not stored on-server with your processor or memory resources, performance can sporadically go from "Amazing" to "I Hate My Life" depending on network traffic. When I tested the IOPS for network-attached storage on a large competitor's virtual instances, I saw an average of around 400 IOPS per mount. It's difficult to say whether that's "not good enough" because every application has different needs in terms of concurrent reads and writes, but it certainly could be better. We performed some internal testing of the IOPS for the hard drive configurations in our Medium and Large MongoDB engineered servers to give you an apples-to-apples comparison.

Before we get into the tests, here are the specs for the servers we're using:

The numbers shown in the table below reflect the average number of IOPS we recorded with a 100% random read/write workload on each of these engineered servers. To measure these IOPS, we used a tool called fio with an 8k block size and an iodepth of 128. Remembering that the virtual instance using network-attached storage was able to get 400 IOPS per mount, let's look at how our "base" configurations perform:

Clearly, the 400 IOPS per mount you'd see from SAN-based storage can't hold a candle to the performance of a physical disk, regardless of whether it's SAS or SSD. As you'd expect, the "Journal" reads and writes have roughly the same IOPS across all of the configurations because all four configurations use 2 x 64GB SSD drives in RAID1. In both server sizes, SSD drives provide better Data mount read/write performance than the 15K SAS drives, and the results suggest that having more physical drives in a Data mount will provide higher average IOPS. To put that observation to the test, I maxed out the number of hard drives in both server sizes (10 in the 2U MD server and 34 in the 4U LG server) and recorded the results:

It should come as no surprise that by adding more drives into the configuration, we get better IOPS, but you might be wondering why the results aren't "betterer" when it comes to the IOPS in the SSD drive configurations. While the IOPS numbers improve going from four to ten drives in the medium engineered server and six to thirty-four drives in the large engineered server, they don't increase as significantly as the IOPS differences in the SAS drives. This is what I meant when I explained that several factors contribute to and potentially limit IOPS performance. In this case, the limiting factor throttling the (ridiculously high) IOPS is the RAID card we are using in the servers. We've been working with our RAID card vendor to test a new card that will open a little more headroom for SSD IOPS, but that replacement card doesn't provide the consistency and reliability we need for these servers (which are just as important as speed).

There are probably a dozen other observations I could point out about how each result compares with the others (and why), but I'll stop here and open the floor for you. Do you notice anything interesting in the results? Does anything surprise you? What kind of IOPS performance have you seen from your server/cloud instance when running a tool like fio?
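If you'd like to run a comparable test on your own server or cloud instance before answering that last question, here's a minimal sketch of how the fio workload described above (100% random read/write, 8k block size, iodepth of 128) could be scripted. The job name, target directory, test file size and runtime are assumptions; adjust them for your environment, and don't point the test at a disk holding data you care about:

```python
# Minimal sketch of a random read/write fio benchmark (not our actual test
# harness). Assumes fio is installed and /mnt/data is a scratch directory
# on the mount you want to measure.
import json
import subprocess

def measure_iops(target_dir="/mnt/data", runtime_secs=60):
    cmd = [
        "fio",
        "--name=randrw-test",          # arbitrary job name
        "--directory=" + target_dir,   # where the test file is created
        "--rw=randrw",                 # 100% random, mixed reads and writes
        "--rwmixread=50",              # 50/50 read/write split (assumed)
        "--bs=8k",                     # 8k block size, as in our tests
        "--iodepth=128",               # queue depth of 128, as in our tests
        "--ioengine=libaio",           # asynchronous I/O engine on Linux
        "--direct=1",                  # bypass the page cache
        "--size=4G",                   # test file size (assumed)
        "--time_based",
        "--runtime=" + str(runtime_secs),
        "--output-format=json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    job = json.loads(result.stdout)["jobs"][0]
    return job["read"]["iops"], job["write"]["iops"]

if __name__ == "__main__":
    read_iops, write_iops = measure_iops()
    print("read: %.0f IOPS, write: %.0f IOPS" % (read_iops, write_iops))
```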

After I spent a little time weaving together a story in response to SKinman's "Choose Your Own Adventure" puzzle (which you can read in the comments section), I was reminded of another famous logic puzzle that I came across a few years ago. Because it was begging to be SoftLayer-ized, I freshened it up to challenge our community.

In 1962, Life International magazine published a logic puzzle that was said to be so difficult that it could only be solved by two percent of the world's population. It has been attributed to Einstein, and Lewis Carroll has also been credited with it, but regardless of the original author, it's a great brain workout.

If you haven't tried a puzzle like this before, don't get discouraged and go Googling for the answer. You're given every detail you need to answer the question at the end ... Take your time and think about how the components are interrelated. If you've solved this puzzle before, this iteration might only be light mental calisthenics, but with its new SoftLayer twist, it should still be fun:

Einstein's SoftLayer Riddle

The Scenario: You're in a SoftLayer data center. You walk up to a server rack and you see five servers in the top five slots on the rack. Each of the five servers has a distinct hard drive configuration, processor type, operating system, control panel (or absence thereof) and add-on storage. No two servers in this rack are the same in any of those aspects.

The CentOS6 operating system is being run on the Xeon 3230 server.

The Dual Xeon 5410 server is racked next to (immediately above or below) the server running the Red Hat 6 operating system.

The server using 80GB NAS add-on storage is racked next to (immediately above or below) the server with two 100GB SSD hard drives.

The server running the Red Hat 5 operating system uses Parallels Virtuozzo (3VPS) as a control panel.

The server running the Windows 2008 operating system has two 100GB SSD hard drives.

The server using Plesk 9 as a control panel is in the middle space in the five-server set in the rack.

The top server in the rack is the Dual Xeon 5410 server.

The Xeon 3450 server has two 147GB 10K RPM SA-SCSI hard drives.

The server using 20GB EVault as its add-on storage has one 250GB SATA II hard drive.

The server with four 600GB 15K RPM SA-SCSI hard drives is next to (immediately above or below) the server using 100GB iSCSI SAN add-on storage.

The server using cPanel as a control panel has two 2TB SATA II hard drives.

The server with four 600GB 15K RPM SA-SCSI hard drives is racked next to (immediately above or below) the server using Plesk 10 (Unlimited) as a control panel.

One server will use a brand new, soon-to-be-announced product offering as its add-on storage.

Question: What is the monthly cost of the server that will be using our super-secret new product offering for its add-on storage?

Use the SoftLayer Shopping Cart to come up with your answer. You can assume that the server has a base configuration (unless specifically noted in the clues above), that SoftLayer's promotions are not used, and that the least expensive version of the control panel is being used for any control panel with several price points. You won't be able to include the cost of the add-on storage (yet), so just provide the base configuration cost of that server in one of our US-based data centers with all of the specs you are given.

Bonus Question: If you ordered all five of those servers, how long would it take for them to be provisioned for you?

Submit your answers via comment, and we'll publish the comments in about a week so other people have a chance to answer it without the risk of scrolling down and seeing spoilers.

In 1965, Intel co-founder Gordon Moore observed an interesting trend: "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year ... Certainly over the short term this rate can be expected to continue, if not to increase."

Moore was initially noting the number of transistors that can be placed on an integrated circuit at a relatively constant minimal cost. Because that measure has proven so representative of the progress of our technological manufacturing abilities, "Moore's Law" has become a cornerstone in discussions of pricing, capacity and speed of almost anything in the computer realm. You've probably heard the law used generically to refer to the constant improvements in technology: In two years, you can purchase twice as much capacity, speed, bandwidth or any other easily measurable and relevant technology metric for the price you would pay today at current levels of production.

Think back to your first computer. How much storage capacity did it have? You were excited to be counting in bytes and kilobytes ... "Look at all this space!" A few years later, you heard about people at NASA using "gigabytes" of space, and you were dumbfounded. Fast-forward a few more years, and you wonder how long your 32GB flash drive will last before you need to upgrade the capacity.
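To put that doubling in rough numbers, here's a quick illustrative projection starting from that 32GB flash drive (a simple compounding exercise, not a forecast for any particular product):

```python
# Back-of-the-envelope illustration of Moore's-Law-style doubling:
# capacity after t years is roughly the starting capacity times 2^(t / 2).
def projected_capacity_gb(start_gb, years, doubling_period_years=2):
    return start_gb * 2 ** (years / doubling_period_years)

# Starting from a 32GB flash drive:
for years in (2, 4, 10):
    print("In %d years: ~%.0f GB for roughly the same price"
          % (years, projected_capacity_gb(32, years)))
# In 2 years: ~64 GB, in 4 years: ~128 GB, in 10 years: ~1024 GB
```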

As manufacturers have found ways to build bigger and faster drives, users have found ways to fill them up. As a result of this behavior, we generally go from "being able to use" a certain capacity to "needing to use" that capacity. From a hosting provider perspective, we've seen the same trend from our customers ... We'll introduce new high-capacity hard drives, and within weeks, we're getting calls about when we can double it. That's why we're always on the lookout for opportunities to incorporate product offerings that meet and (at least temporarily) exceed our customers' needs.

If you've been looking for a fantastic, high-capacity storage solution, you should give our QuantaStor offering a spin. The SAN (iSCSI) + NAS (NFS) storage system delivers advanced storage features including thin provisioning and remote replication. These capabilities make it ideally suited for a broad set of applications, including VM deployments, virtual desktops, and web and application servers. From what I've seen, it's at the top of the game right now, and it looks like a perfect option for long-term reliability and scalability.