Oracle Blog

Blog for bonwick

Monday Sep 27, 2010

After 20 incredible years at Sun/Oracle, I have decided to try something new.

This was a very hard decision, and not one made lightly. I have always enjoyed my work, and still do -- everything from MTS-2 to Sun Fellow to Oracle VP. I love the people I work with and the technology we've created together, which is why I've been doing it for so long. But I have always wanted to try doing a startup, and recently identified an opportunity that I just can't resist. (We are in stealth mode, so that's all I can say for now.)

This team will always have a special place in my heart. Being part of the Solaris team means doing the Right Thing, innovating, changing the rules, and being thought leaders -- creating the ideas that everyone else wants to copy. Add to that Oracle's unmatched market reach and ability to execute, and you have a combination that I believe will succeed in ways we couldn't have imagined two years ago. I hope that Solaris and ZFS Storage are wildly successful, and that you have fun making it happen.

To the ZFS community:

Thank you for being behind us from Day One. After a decade in the making, ZFS is now an adult. Of course there's always more to do, and from this point forward, I look forward to watching you all do it. There is a great quote whose origin I have never found: "Your ideas will go further if you don't insist on going with them." That has proven correct many times in my life, and I am confident that it will prove true again.

Sunday Nov 01, 2009

If you already know what dedup is and why you want it, you can skip
the next couple of sections. For everyone else, let's start with
a little background.

What is it?

Deduplication is the process of eliminating duplicate copies of data.
Dedup is generally done at one of three granularities: file-level, block-level, or byte-level.
Chunks of data -- files, blocks, or byte ranges -- are checksummed
using some hash function that uniquely identifies data with very high
probability. When using a secure hash like SHA256, the probability of a
hash collision is about 2^-256 ≈ 10^-77 or, in more familiar notation,
0.00000000000000000000000000000000000000000000000000000000000000000000000000001.
For reference, this is 50 orders of magnitude less likely than an undetected,
uncorrected ECC memory error on the most reliable hardware you can buy.
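The 2^-256-to-10^-77 conversion is just a change of base, and easy to check; a quick sketch in Python (my illustration, not part of the post):

```python
import math

# A 256-bit hash has 2^256 possible outputs, so the chance that two
# different blocks collide on a given comparison is 2^-256.
# Converting that exponent to base 10:
exponent = -256 * math.log10(2)
print(round(exponent, 1))   # -77.1, i.e. about 10^-77
```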

Chunks of data are remembered in a table of some sort that maps the
data's checksum to its storage location and reference count. When you
store another copy of existing data, instead of allocating new space
on disk, the dedup code just increments the reference count on the
existing data. When data is highly replicated, which is typical of
backup servers, virtual machine images, and source code repositories,
deduplication can reduce space consumption not just by percentages,
but by multiples.
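As a toy sketch of that table (illustrative Python only, nothing like ZFS's actual DDT code), the checksum-to-refcount mechanism looks like this:

```python
import hashlib

class DedupStore:
    """Toy block store: maps checksum -> [data, refcount]."""
    def __init__(self):
        self.table = {}    # checksum -> [data, refcount]
        self.writes = 0    # number of actual allocations

    def write(self, block):
        key = hashlib.sha256(block).hexdigest()
        entry = self.table.get(key)
        if entry is None:
            self.table[key] = [block, 1]   # new data: allocate space
            self.writes += 1
        else:
            entry[1] += 1                  # duplicate: bump the refcount
        return key

store = DedupStore()
for _ in range(1000):
    store.write(b"the same 4K block, over and over")
print(store.writes)   # 1 -- only one copy is ever stored
```

A thousand logical writes of the same block cost one physical allocation; that is the "multiples, not percentages" effect on highly replicated data.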

What to dedup: Files, blocks, or bytes?

Data can be deduplicated at the level of files, blocks, or bytes.

File-level dedup assigns a hash signature to an entire file. File-level
dedup has the lowest overhead when the natural granularity of data
duplication is whole files, but it also has significant limitations:
any change to any block in the file changes the whole-file checksum,
which means that if even one block changes, any space savings are lost
because the two versions of the file are no longer identical.
This is fine when the expected workload is something like JPEG or MPEG files,
but is completely ineffective when managing things like virtual machine
images, which are mostly identical but differ in a few blocks.

Block-level dedup has somewhat higher overhead than file-level dedup when
whole files are duplicated, but unlike file-level dedup, it handles block-level
data such as virtual machine images extremely well. Most of a VM image is
duplicated data -- namely, a copy of the guest operating system -- but some
blocks are unique to each VM. With block-level dedup, only the blocks that
are unique to each VM consume additional storage space. All other blocks
are shared.

Byte-level dedup is in principle the most general, but it is also the most
costly because the dedup code must compute 'anchor points' to determine
where the regions of duplicated vs. unique data begin and end.
Nevertheless, this approach is ideal for certain mail servers, in which an
attachment may appear many times but not necessarily be block-aligned in each
user's inbox. This type of deduplication is generally best left to the
application (e.g. Exchange server), because the application understands
the data it's managing and can easily eliminate duplicates internally
rather than relying on the storage system to find them after the fact.
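Anchor points are typically found by content-defined chunking: a rolling hash slides over the byte stream, and a boundary is declared wherever the hash hits a chosen bit pattern, so boundaries follow the content rather than fixed offsets. A toy sketch (my illustration, with a simple rolling sum standing in for a real rolling hash):

```python
def chunk_boundaries(data, window=16, mask=0xFF):
    """Toy content-defined chunking: declare an anchor wherever a
    rolling sum over the last `window` bytes hits a bit pattern."""
    anchors = [0]
    rolling = 0
    for i, b in enumerate(data):
        rolling += b
        if i >= window:
            rolling -= data[i - window]     # slide the window forward
        # The anchor test fires with probability ~1/256 on random
        # data, so chunks average ~256 bytes regardless of alignment.
        if i >= window and (rolling & mask) == mask:
            anchors.append(i + 1)
    return anchors
```

Because each anchor depends only on nearby bytes, inserting data near the start of a stream shifts offsets without disturbing the boundaries found later on -- exactly the alignment independence that block-level dedup lacks, and the reason byte-level dedup costs more to compute.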

ZFS provides block-level deduplication because this is the finest
granularity that makes sense for a general-purpose storage system.
Block-level dedup also maps naturally to ZFS's 256-bit block checksums,
which provide unique block signatures for all blocks in a storage pool
as long as the checksum function is cryptographically strong (e.g. SHA256).

When to dedup: now or later?

In addition to the file/block/byte-level distinction described above,
deduplication can be either synchronous (aka real-time or in-line)
or asynchronous (aka batch or off-line). In synchronous dedup,
duplicates are eliminated as they appear. In asynchronous dedup,
duplicates are stored on disk and eliminated later (e.g. at night).
Asynchronous dedup is typically employed on storage systems that have
limited CPU power and/or limited multithreading to minimize the
impact on daytime performance. Given sufficient computing power,
synchronous dedup is preferable because it never wastes space
and never does needless disk writes of already-existing data.

ZFS deduplication is synchronous. ZFS assumes a highly multithreaded
operating system (Solaris) and a hardware environment in which CPU cycles
(GHz times cores times sockets) are proliferating much faster than I/O.
This has been the general trend for the last twenty years, and the
underlying physics suggests that it will continue.

How do I use it?

Ah, finally, the part you've really been waiting for.

If you have a storage pool named 'tank' and you want to use dedup,
just type this:

zfs set dedup=on tank

That's it.

Like all ZFS properties, the 'dedup' property follows the usual rules
for ZFS dataset property inheritance. Thus, even though deduplication
has pool-wide scope, you can opt in or opt out on a per-dataset basis.

What are the tradeoffs?

It all depends on your data.

If your data doesn't contain any duplicates, enabling dedup will add
overhead (a more CPU-intensive checksum and on-disk dedup table entries)
without providing any benefit. If your data does contain duplicates,
enabling dedup will both save space and increase performance. The
space savings are obvious; the performance improvement is due to the
elimination of disk writes when storing duplicate data, plus the
reduced memory footprint due to many applications sharing the same
pages of memory.

Most storage environments contain a mix of data that is mostly unique
and data that is mostly replicated. ZFS deduplication is per-dataset,
which means you can selectively enable dedup only where it is likely
to help. For example, suppose you have a storage pool containing
home directories, virtual machine images, and source code repositories.
You might choose to enable dedup as follows:

zfs set dedup=off tank/home

zfs set dedup=on tank/vm

zfs set dedup=on tank/src

Trust or verify?

If you accept the mathematical claim that a secure hash like SHA256 has
only a 2^-256 probability of producing the same output given two different
inputs, then it is reasonable to assume that when two blocks have the
same checksum, they are in fact the same block. You can trust the hash.
An enormous amount of the world's commerce operates on this assumption,
including your daily credit card transactions. However, if this makes
you uneasy, that's OK: ZFS provides a 'verify' option that performs
a full comparison of every incoming block with any alleged duplicate to
ensure that they really are the same, and ZFS resolves the conflict if not.
To enable this variant of dedup, just specify 'verify' instead of 'on':

zfs set dedup=verify tank

Selecting a checksum

Given the ability to detect hash collisions as described above, it is
possible to use much weaker (but faster) hash functions in combination
with the 'verify' option to provide faster dedup. ZFS offers this
option for the fletcher4 checksum, which is quite fast:

zfs set dedup=fletcher4,verify tank

The tradeoff is that unlike SHA256, fletcher4 is not a pseudo-random
hash function, and therefore cannot be trusted not to collide. It is
therefore only suitable for dedup when combined with the 'verify' option,
which detects and resolves hash collisions. On systems with a very high
data ingest rate of largely duplicate data, this may provide better
overall performance than a secure hash without collision verification.
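The weak-hash-plus-verify idea can be sketched as follows (toy Python; zlib's adler32 stands in for fletcher4 here, which is my substitution, not how ZFS implements it):

```python
import zlib

def dedup_write(table, block, verify=True):
    """Toy dedup using a weak, fast checksum plus optional verify."""
    key = zlib.adler32(block)    # fast but collision-prone checksum
    for entry in table.setdefault(key, []):
        # A weak hash can collide, so 'verify' compares the actual
        # bytes before treating two blocks as the same.
        if not verify or entry[0] == block:
            entry[1] += 1        # true duplicate: share, bump refcount
            return False         # no disk write needed
    table[key].append([block, 1])  # new (or merely colliding) block
    return True                  # allocate and write it
```

The byte comparison only happens on a checksum match, so when most incoming data is duplicate, the cheap checksum dominates the cost -- which is the scenario where this variant can beat a secure hash.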

Unfortunately, because there are so many variables that affect performance,
I cannot offer any absolute guidance on which is better. However, if
you are willing to make the investment to experiment with different
checksum/verify options on your data, the payoff may be substantial.
Otherwise, just stick with the default provided by setting dedup=on;
it's cryptographically strong and it's still pretty fast.

Scalability and performance

Most dedup solutions only work on a limited amount of data -- a handful
of terabytes -- because they require their dedup tables to be resident
in memory.

ZFS places no restrictions on your ability to dedup. You can dedup
a petabyte if you're so inclined. The performance of ZFS dedup will
follow the obvious trajectory: it will be fastest when the DDTs
(dedup tables) fit in memory, a little slower when they spill over
into the L2ARC, and much slower when they have to be read from disk.
The topic of dedup performance could easily fill many blog entries -- and
it will over time -- but the point I want to emphasize here is that there
are no limits in ZFS dedup. ZFS dedup scales to any capacity on any
platform, even a laptop; it just goes faster as you give it more hardware.

Acknowledgements

Bill Moore and I developed the first dedup prototype in two very intense
days in December 2008. Mark Maybee and Matt Ahrens helped us navigate
the interactions of this mostly-SPA code change with the ARC and DMU.
Our initial prototype was quite primitive: it didn't support gang blocks,
ditto blocks, out-of-space, and various other real-world conditions.
However, it confirmed that the basic approach we'd been planning for
several years was sound: namely, to use the 256-bit block checksums
in ZFS as hash signatures for dedup.

Over the next several months Bill and I tag-teamed the work so that
at least one of us could make forward progress while the other dealt
with some random interrupt of the day.

As we approached the end game, Matt Ahrens and Adam Leventhal developed
several optimizations for the ZAP to minimize DDT space consumption both
on disk and in memory, key factors in dedup performance. George Wilson
stepped in to help with, well, just about everything, as he always does.

For final code review George and I flew to Colorado where many folks
generously lent their time and expertise: Mark Maybee, Neil Perrin,
Lori Alt, Eric Taylor, and Tim Haley.

Our test team, led by Robin Guo, pounded on the code and made a couple
of great finds -- which were actually latent bugs exposed by some new,
tighter ASSERTs in the dedup code.

My family (Cathy, Andrew, David, and Galen) demonstrated enormous
patience as the project became all-consuming for the last few months.
On more than one occasion one of the kids has asked whether we can do
something and then immediately followed their own question with,
"Let me guess: after dedup is done."

Sunday Dec 30, 2007

For over a year I have been the proud and happy owner of a Garmin GPS unit -- the Nuvi 360. I have practically been a walking billboard for the company. Go ahead, ask me about my Nuvi!

That changed today, permanently. When I powered on the Nuvi this morning, it alerted me that its map database was over a year old and should be updated. That makes sense, I thought -- indeed, how nice of them to remind me! So I brought the Nuvi inside, plugged it into my Mac, and went to Garmin's website to begin the update.

Wait a minute, what's this? They want to charge $69 for the update! Excuse me? This isn't new functionality I'm getting, it's a bug fix. The product I bought is a mapping device. Its maps are now "out of date", as Garmin puts it -- well, yes, in the same way that the phlogiston theory is "out of date". The old maps are wrong, which means that the product has become defective and should be fixed. Given the (somewhat pathetic) fact that the Nuvi doesn't automatically update its maps from Web or satellite sources, the least Garmin could do to keep their devices operating correctly in the field is provide regular, free fixes to the map database. I didn't buy a GPS unit so I could forever navigate 2005 America.

But wait, it gets better.

You might imagine that getting the update would require supplying a credit card number to get a license key, downloading the map update, and then using the key to activate it. Nope! You have to order a physical DVD from Garmin, which takes 3-5 weeks to ship. 3-5 weeks! Any reason they can't include a first-class postage stamp as part of the $69 shakedown? And seriously, if you work for Garmin and you're reading this, check out this cool new technology. It really works. Swear to God. You're soaking in it.

Assuming you ordered the DVD, you would not discover until after it arrived -- because this is mentioned nowhere on Garmin's website -- that the DVD will only work for one device. Yes, that's right -- after going to all the trouble to get a physical copy of the map update, you have to get on their website to activate it, and it's only good for one unit. So to update my wife's unit as well as my own, I'd have to order two DVDs, for $138. That's offensive. Even the RIAA doesn't expect me to buy two copies of every CD just because I'm married. And the only reason I know about this is because I checked Amazon first, and found many reviewers had learned the hard way and were livid about it. Garmin's policy is bad, but their failure to disclose it is even worse.

Moreover, the 2008 map update isn't a one-time purchase. There's an update every year, so it's really a $138/year subscription. That's $11.50/month. For maps. For a mapping device. That I already paid for.

What does one get for this $11.50/month map subscription? According to the reviews on Amazon, not much. Major construction projects that were completed several years ago aren't reflected in the 2008 maps, and Garmin still hasn't fixed the long-standing bug that any store that's part of a mall isn't in their database. (Want to find the nearest McDonald's? No dice. You just have to know that the nearest McDonald's is in the XYZ Shopping Center, and ask for directions to that. This is really annoying in practice.)

I can get better information from Google maps, continuously
updated, with integrated real-time traffic data, for free, forever --
and my iPhone will happily use that data to plot time-optimal routes.
(In fact, all the iPhone needs is the right antenna and a SIRF-3
chipset to make dedicated GPS devices instantly obsolete. This is so
obvious it can't be more than a year out. I can live with the stale maps until then, and have a $138 down payment on the GPS
iPhone earning interest while I wait.)

And so, starting today, that's exactly what I'll do.

I don't mind paying a reasonable fee for services rendered. I do mind
getting locked into a closed-source platform and being forced to pay monopoly rents
for a proprietary, stale and limited version of data that's already available to the
general public. That business model is so over.

Everything about this stinks, Garmin. You tell me, unexpectedly, that I have to pay for routine map updates. You make the price outrageous. You don't actually disclose what's in the update. (Several Amazon reviewers say the new maps are actually worse.) You make the update hard to do. You needlessly add to our landfills by creating single-use DVDs. You have an unreasonable licensing policy. And you hide that policy until after the purchase.

Way to go, Garmin. You have pissed off a formerly delighted customer, and that is generally a one-way ticket. You have lost both my business and my respect. I won't be coming back. Ever.

Thursday Sep 13, 2007

Every filesystem must keep track of two basic things: where your data is, and where the free space is.

In principle, keeping track of free space is not strictly necessary: every block is either allocated or free, so the free space can be computed by assuming everything is free and then subtracting out everything that's allocated; and the allocated space can be found by traversing the entire filesystem from the root. Any block that cannot be found by traversal from the root is, by definition, free.

In practice, finding free space this way would be insufferable because it would take far too long for any filesystem of non-trivial size. To make the allocation and freeing of blocks fast, the filesystem needs an efficient way to keep track of free space. In this post we'll examine the most common methods, why they don't scale well, and the new approach we devised
for ZFS.

Bitmaps

The most common way to represent free space is by using a bitmap. A bitmap is simply an array of bits, with the Nth bit indicating whether the Nth block is allocated or free. The overhead for a bitmap is quite low: 1 bit per block. For a 4K blocksize, that's 1/(4096*8) = 0.003%. (The 8 comes from 8 bits per byte.)

For a 1GB filesystem, the bitmap is 32KB -- something that easily fits in memory, and can be scanned quickly to find free space. For a 1TB filesystem, the bitmap is 32MB -- still stuffable in memory, but no longer trivial in either size or scan time. For a 1PB filesystem, the bitmap is 32GB, and that simply won't fit in memory on most machines. This means that scanning the bitmap requires reading it from disk, which is slower still.

Clearly, this doesn't scale.

One seemingly obvious remedy is to break the bitmap into small chunks, and keep track of the number of bits set in each chunk. For example, for a 1PB filesystem using 4K blocks, the free space can be divided into a million bitmaps, each 32KB in size. The summary information (the million integers indicating how much space is in each bitmap) fits in memory, so it's easy to find a bitmap with free space, and it's quick to scan that bitmap.

But there's still a fundamental problem: the bitmap(s) must be updated not only when a new block is allocated, but also when an old block is freed. The filesystem controls the locality of allocations (it decides which blocks to put new data into), but it has no control over the locality of frees. Something as simple as 'rm -rf' can cause blocks all over the platter to be freed. With our 1PB filesystem example, in the worst case, removing 4GB of data (a million 4K blocks) could require each of the million bitmaps to be read, modified, and written out again. That's two million disk I/Os to free a measly 4GB -- and that's just not reasonable, even as worst-case behavior.

More than any other single factor, this is why bitmaps don't scale: because frees are often random, and bitmaps that don't fit in memory perform pathologically when they are accessed randomly.

B-trees

Another common way to represent free space is with a B-tree of extents. An extent is a contiguous region of free space described by two integers: offset and length. The B-tree sorts the extents by offset so that contiguous space allocation is efficient. Unfortunately, B-trees of extents suffer the same pathology as bitmaps when confronted with random frees.

What to do?

Deferred frees

One way to mitigate the pathology of random frees is to defer the update of the bitmaps or B-trees, and instead keep a list of recently freed blocks. When this deferred free list reaches a certain size, it can be sorted, in memory, and then freed to the underlying bitmaps or B-trees with somewhat better locality. Not ideal, but it helps.

But what if we went further?

Space maps: log-structured free lists

Recall that log-structured filesystems long ago posed this question: what if, instead of periodically folding a transaction log back into the filesystem, we made the transaction log be the filesystem?

Well, the same question could be asked of our deferred free list: what if, instead of folding it into a bitmap or B-tree, we made the deferred free list be the free space representation?

That is precisely what ZFS does. ZFS divides the space on each virtual device into a few hundred regions called metaslabs. Each metaslab has an associated space map, which describes that metaslab's free space. The space map is simply a log of allocations and frees, in time order. Space maps make random frees just as efficient as sequential frees, because regardless of which extent is being freed, it's represented on disk by appending the extent (a couple of integers) to the space map object -- and appends have perfect locality. Allocations, similarly, are represented on disk as extents appended to the space map object (with, of course, a bit set indicating that it's an allocation, not a free).

When ZFS decides to allocate blocks from a particular metaslab, it first reads that metaslab's space map from disk and replays the allocations and frees into an in-memory AVL tree of free space, sorted by offset. This yields a compact in-memory representation of free space that supports efficient allocation of contiguous space. ZFS also takes this opportunity to condense the space map: if there are many allocation-free pairs that cancel out, ZFS replaces the on-disk space map with the smaller in-memory version.

Space maps have several nice properties:

They don't require initialization: a space map with no entries indicates that there have been no allocations and no frees, so all space is free.

They scale: because space maps are append-only, only the last block of the space map object needs to be in memory to ensure excellent performance, no matter how much space is being managed.

They have no pathologies: space maps are efficient to update regardless of the pattern of allocations and frees.

They are equally efficient at finding free space whether the pool is empty or full (unlike bitmaps, which take longer to scan as they fill up).

Finally, note that when a space map is completely full, it is represented by a single extent. Space maps therefore have the appealing property that as your storage pool approaches 100% full, the space maps start to evaporate, thus making every last drop of disk space available to hold useful information.

Thursday May 03, 2007

Andrew Morton has famously called ZFS a "rampant layering violation" because it combines the functionality of a filesystem, volume manager, and RAID controller. I suppose it depends what the meaning of the word violate is. While designing ZFS we observed that the standard layering of the storage stack induces a surprising amount of unnecessary complexity and duplicated logic. We found that by refactoring the problem a bit -- that is, changing where the boundaries are between layers -- we could make the whole thing much simpler.

Suppose you had to compute the sum, from n=1 to infinity, of 1/n(n+1).

Expanding that out term by term, we have:

1/(1*2) + 1/(2*3) + 1/(3*4) + 1/(4*5) + ...

That is,

1/2 + 1/6 + 1/12 + 1/20 + ...

What does that infinite series add up to? It may seem like a hard problem, but that's only because we're not looking at it right. If you're clever, you might notice that there's a different way to express each term:

1/n(n+1) = 1/n - 1/(n+1)

For example,

1/(1*2) = 1/1 - 1/2

1/(2*3) = 1/2 - 1/3

1/(3*4) = 1/3 - 1/4

Thus, our sum can be expressed as:

(1/1 - 1/2) + (1/2 - 1/3) + (1/3 - 1/4) + (1/4 - 1/5) + ...

Now, notice the pattern: each term that we subtract, we add back. Only in Congress does that count as work. So if we just rearrange the parentheses -- that is, if we rampantly violate the layering of the original problem by using associativity to refactor the arithmetic across adjacent terms of the series -- we get this:

1/1 + (-1/2 + 1/2) + (-1/3 + 1/3) + (-1/4 + 1/4) + ...

or

1/1 + 0 + 0 + 0 + ...

In other words,

1.

Isn't that cool?

Mathematicians have a term for this. When you rearrange the terms of a series so that they cancel out, it's called telescoping -- by analogy with a collapsible hand-held telescope. In a nutshell, that's what ZFS does: it telescopes the storage stack. That's what allows us to have a filesystem, volume manager, single- and double-parity RAID, compression, snapshots, clones, and a ton of other useful stuff in just 80,000 lines of code.
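The telescoping result is easy to check numerically (my quick sketch); the Nth partial sum should be exactly 1 - 1/(N+1):

```python
from fractions import Fraction

def partial_sum(N):
    """Exact Nth partial sum of 1/(n(n+1))."""
    return sum(Fraction(1, n * (n + 1)) for n in range(1, N + 1))

# Telescoping predicts 1 - 1/(N+1), which tends to 1:
print(partial_sum(4))     # 4/5
print(partial_sum(1000))  # 1000/1001
```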

A storage system is more complex than this simple analogy, but at a high level the same idea really does apply. You can think of any storage stack as a series of translations from one naming scheme to another -- ultimately translating a filename to a disk LBA (logical block address). Typically the filesystem translates a filename to a volume LBA, and the volume manager translates that volume LBA to a disk LBA.

First, note that the traditional filesystem layer is too monolithic. It would be better to separate the filename-to-object part (the upper half) from the object-to-volume-LBA part (the lower half) so that we could reuse the same lower-half code to support other kinds of storage, like objects and iSCSI targets, which don't have filenames. These storage classes could then speak to the object layer directly. This is more efficient than going through something like /dev/lofi, which makes a POSIX file look like a device. But more importantly, it provides a powerful new programming model -- object storage -- without any additional code.

Second, note that the volume LBA is completely useless. Adding a layer of indirection often adds flexibility, but not in this case: in effect we're translating from English to French to German when we could just as easily translate from English to German directly. The intermediate French has no intrinsic value. It's not visible to applications, it's not visible to the RAID array, and it doesn't provide any administrative function. It's just overhead.

The DMU (Data Management Unit) provides both file and block access to a common pool of physical storage. File access goes through the ZPL (ZFS POSIX Layer), while block access is just a direct mapping to a single DMU object. We're also developing new data access methods that use the DMU's transactional capabilities in more interesting ways -- more about that another day.

The ZFS architecture eliminates an entire layer of translation -- and along with it, an entire class of metadata (volume LBAs). It also eliminates the need for hardware RAID controllers. At the same time, it provides a useful new interface -- object storage -- that was previously inaccessible because it was buried inside a monolithic filesystem.

Wednesday Apr 11, 2007

Evidently, my previous post was just a tad too cheerful for some folks' taste. But I speak with the optimism of a man who has cheated death. And ironically, Pete's reference to George Cameron had a lot to do with it.

Several years ago, George and a few other Sun folks went off to form 3par, a new storage company. They all had Solaris expertise, and understood its advantages, so they wanted to use it inside their box. But we weren't open-source at the time, and our licensing terms really sucked. Both of us -- George at 3par, and me at Sun -- tried for months to arrange something reasonable. We failed. So finally -- because Sun literally gave them no choice -- 3par went with Linux.

I couldn't believe it. A cool new company wanted to use our product, and instead of giving them a hand, we gave them the finger.

For many of us, that was the tipping point. If we had any reservations about open-sourcing Solaris, that ended them. It was a gamble, to be sure, but the alternative was certain death. Even if the 3par situation had ended differently, it was clear that we needed to change our business practices. To do that, we'd first have to change our culture.

But cultures don't change easily -- it usually takes some traumatic event. In Sun's case, watching our stock shed 95% of its value did the trick. It was that total collapse of confidence -- that near-death experience -- that opened us up to things that had previously seemed too dangerous. We had to face a number of hard questions, including the most fundamental ones: Can we make a viable business out of this wreckage? Why are we doing SPARC? Why not AMD and Intel? Why Solaris? Why not Linux and Windows? Where are we going with Java? And not rah-rah why, but really, why?

In each case, asking the question with a truly open mind changed the answer. We killed our more-of-the-same SPARC roadmap and went multi-core, multi-thread, and low-power instead. We started building AMD and Intel systems. We launched a wave of innovation in Solaris (DTrace, ZFS, zones, FMA, SMF, FireEngine, CrossBow) and open-sourced all of it. We started supporting Linux and Windows. And most recently, we open-sourced Java. In short, we changed just about everything. Including, over time, the culture.

Still, there was no guarantee that open-sourcing Solaris would change anything. It's that same nagging fear you have the first time you throw a party: what if nobody comes? But in fact, it changed everything: the level of interest, the rate of adoption, the pace of communication. Most significantly, it changed the way we do development. It's not just the code that's open, but the entire development process. And that, in turn, is attracting developers and ISVs whom we couldn't even have spoken to a few years ago. The openness permits us to have the conversation; the technology makes the conversation interesting.

After coming so close to augering into the ground, it's immensely gratifying to see the Solaris revival now underway. So if I sometimes sound a bit like the proud papa going on and on about his son, well, I hope you can forgive me.

Oh, and Pete, if you're reading this -- George Cameron is back at Sun now, three doors down the hall from me. Small valley!

Monday Apr 09, 2007

When you choose an OS for your laptop, many things affect your decision: application support, availability of drivers, ease of use, and so on.

But if you were developing a storage appliance, what would you want from the operating system that runs inside it?

The first thing you notice is all the things you don't care about: graphics cards, educational software, Photoshop... none of it matters. What's left, then? What do you really need from a storage OS? And why isn't Linux the answer? Well, let's think about that.

You need really good tools for performance analysis, so you can figure out how to make your application scale as well as the OS does.

You need extensive hardware diagnostic support, so that when parts of the box fail or are about to fail, you can take appropriate action.

You need reliable crash dumps and first-rate debugging tools so you can perform first-fault diagnosis when something goes wrong.

And you need a community of equally serious developers who can help you out.

OpenSolaris gives you all of these: a robust kernel that scales to thousands of threads and spindles; DTrace, the best performance analysis tool on the planet; FMA (Fault Management Architecture) to monitor the hardware and predict and manage failures; mdb to analyze software problems; and of course the OpenSolaris community, a large, vibrant, professional, high signal-to-noise environment.

The other operating systems one might consider are so far behind on so many of these metrics, it just seems like a no-brainer.

Let's put it this way: if I ever leave Sun to do a storage startup, I'll have a lot of things to think about. Choosing the OS won't be one of them. OpenSolaris is the ideal storage development platform.

I'm speaking, of course, of the rise of general-purpose computing during the 1990s. It was not so long ago that you could choose from a truly bewildering variety of machines. Symbolics, for example, made hardware specifically designed to run Lisp programs. We debated SIMD vs. MIMD, dataflow vs. control flow, VLIW, and so on. Meanwhile, those boring little PCs just kept getting faster. And more capable. And cheaper. By the end of the decade, even the largest supercomputers were just clusters of PCs. A simple, general-purpose computing device crushed all manner of clever, sophisticated, highly specialized systems.

And the thing is, it had nothing to do with technology. It was all about volume economics. It was inevitable.

With that in mind, I bring news that is very good for you, very good for Sun, and not so good for our competitors: the same thing that happened to compute in the 1990s is happening to storage, right now. Now, as then, the fundamental driver is volume economics, and we see it playing out at all levels of the stack: the hardware, the operating system, and the interconnect.

First, custom RAID hardware can't keep up with general-purpose CPUs. A single Opteron core can XOR data at about 6 GB/sec. There's just no reason to dedicate special silicon to this anymore. It's expensive, it wastes power, and it was always a compromise: array-based RAID can't provide the same end-to-end data integrity that host-based RAID can. No matter how good the array is, a flaky cable or FC port can still flip bits in transit. A host-based RAID solution like RAID-Z in ZFS can both detect and correct silent data corruption, no matter where it arises.
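To make the end-to-end argument concrete, here's a minimal sketch (illustrative Python, not actual ZFS code) of single-parity XOR plus a parent-stored checksum. The checksum tells you the data is wrong no matter where the corruption happened; the parity lets you try reconstructing each block in turn until the checksum passes -- which is the essence of how host-based RAID can correct silent corruption that an array never even sees:

```python
import hashlib
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    # Assumes equal-sized blocks, as in a real stripe.
    return bytes(x ^ y for x, y in zip(a, b))

def parity(blocks):
    # Single parity is just the XOR of all data blocks.
    return reduce(xor, blocks)

def checksum(data: bytes) -> str:
    # ZFS stores each block's checksum in its parent, so corruption
    # anywhere in the path (disk, cable, HBA) is detectable on read.
    return hashlib.sha256(data).hexdigest()

def read_with_repair(blocks, p, expected):
    """Return correct data, reconstructing one silently bad block."""
    data = b"".join(blocks)
    if checksum(data) == expected:
        return data
    # Checksum mismatch: try reconstructing each block from parity
    # until the result verifies. Only the right guess will pass.
    for i in range(len(blocks)):
        others = blocks[:i] + blocks[i+1:]
        fixed = blocks[:i] + [xor(parity(others), p)] + blocks[i+1:]
        if checksum(b"".join(fixed)) == expected:
            return b"".join(fixed)
    raise IOError("unrecoverable: more than one block damaged")
```

Note that if the parity block itself is the one that got flipped, the data still verifies and the read succeeds -- the checksum, not the parity, is the arbiter of correctness.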

Second, custom kernels can't keep up with volume operating systems. I try to avoid naming specific competitors in this blog -- it seems tacky -- but think about what's inside your favorite storage box. Is it open source? Does it have an open developer community? Does it scale? Can the vendor make it scale? Do they even get a vote?

The latter question is becoming much more important due to trends in CPU design. The clock rate party of the 1990s, during which we went from 20MHz to 2GHz -- a factor of 100 -- is over. Seven years into the new decade we're not even 2x faster in clock rate, and there's no sign of that changing soon. What we are getting, however, is more transistors. We're using them to put multiple cores on each chip and multiple threads on each core (so the chip can do something useful during load stalls) -- and this trend will only accelerate.

Which brings us back to the operating system inside your storage device. Does it have any prayer of making good use of a 16-core, 64-thread CPU?

Third, custom interconnects can't keep up with Ethernet. In the time that Fibre Channel went from 1Gb to 4Gb -- a factor of 4 -- Ethernet went from 10Mb to 10Gb -- a factor of 1000. That SAN is just slowing you down.

Today's world of array products running custom firmware on custom RAID controllers on a Fibre Channel SAN is in for massive disruption. It will be replaced by intelligent storage servers, built from commodity hardware, running an open operating system, speaking over the real network.

You've already seen the first instance of this: Thumper (the x4500) is a 4-CPU, 48-disk storage system with no hardware RAID controller. The storage is all managed by ZFS on Solaris, and exported directly to your real network over standard protocols like NFS and iSCSI.

Thursday Jan 11, 2007

After sizing up the computers we have at home, my son Andrew made the following declaration: "I want Solaris security, Mac interface, and Windows compatibility." Age 10. Naturally, sensing a teachable moment, I explained to him what virtualization is all about -- bootcamp, Parallels, Xen, etc. And the thing is, he really gets it. I can't wait to see what his generation is capable of.

Saturday Nov 04, 2006

Block allocation is central to any filesystem. It affects not only performance, but also the administrative model (e.g. stripe configuration) and even some core capabilities like transactional semantics, compression, and block sharing between snapshots. So it's important to get it right.

There are three components to the block allocation policy in ZFS:

Device selection (dynamic striping)

Metaslab selection

Block selection

By design, these three policies are independent and pluggable. They can be changed at will without altering the on-disk format, which gives us lots of flexibility in the years ahead.
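The separation might be sketched like this (hypothetical Python names, purely to illustrate the shape -- the real code lives in the ZFS allocator, not here). Each stage is a function you can swap independently, and none of them touches the on-disk format:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AllocPolicy:
    # Each stage is independently pluggable: changing one policy
    # never changes the on-disk format, only where new blocks land.
    select_device: Callable    # pool -> device
    select_metaslab: Callable  # device -> metaslab
    select_block: Callable     # (metaslab, size) -> offset

def bump_cursor(ms, size):
    # Trivial stand-in for a real block policy: hand out the next offset.
    off = ms["cursor"]
    ms["cursor"] += size
    return off

# A toy policy: always the first vdev, always its first metaslab.
toy = AllocPolicy(
    select_device=lambda pool: pool[0],
    select_metaslab=lambda dev: dev["metaslabs"][0],
    select_block=bump_cursor,
)

def allocate(pool, size, policy):
    dev = policy.select_device(pool)
    ms = policy.select_metaslab(dev)
    return dev, policy.select_block(ms, size)
```

The three sections below are, in effect, descriptions of what the real versions of these three functions do.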

So... let's go allocate a block!

1. Device selection (aka dynamic striping). Our first task is device selection. The goal is to spread the load across all devices in the pool so that we get maximum bandwidth without needing any notion of stripe groups. You add more disks, you get more bandwidth. We call this dynamic striping -- the point being that it's done on the fly by the filesystem, rather than at configuration time by the administrator.

There are many ways to select a device. Any policy would work, including just picking one at random. But there are several practical considerations:

If a device was recently added to the pool, it'll be relatively empty. To address such imbalances, we bias the allocation slightly in favor of underutilized devices. This keeps space usage uniform across all devices.

All else being equal, round-robin is a fine policy, but it's critical to get the granularity right. If the granularity is too coarse (e.g. 1GB), we'll only get one device's worth of bandwidth when doing sequential reads and writes. If the granularity is too fine (e.g. one block), we'll waste any read buffering the device can do for us. In practice, we've found that switching from one device to the next every 512K works well for the current generation of disk drives.

That said, for intent log blocks, it's better to round-robin between devices each time we write a log block. That's because they're very short-lived, so we don't expect to ever need to read them; therefore it's better to optimize for maximum IOPS when writing log blocks. Neil Perrin integrated support for this earlier today.

More generally, we'll probably want different striping policies for different types of data: large/sequential, small/random, transient (like the intent log), and dnodes (clumping for spatial density might be good). This is fertile ground for investigation.

If one of the devices is slow or degraded for some reason, it should be avoided. This is work in progress.
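Putting the round-robin granularity and the free-space bias together, device selection might look something like this sketch (the threshold and the exact bias formula are made up for illustration; the real policy lives in the ZFS allocator):

```python
STRIPE_SHIFT = 19   # switch devices every 512K, per the discussion above

def select_device(devices, write_offset):
    """Pick a device for a write at the given logical offset.

    devices: list of dicts with 'free' and 'size' in bytes.
    Round-robins at 512K granularity, then nudges the choice toward
    underutilized devices to smooth out imbalances (e.g. a freshly
    added, empty disk).
    """
    n = len(devices)
    candidate = (write_offset >> STRIPE_SHIFT) % n
    emptiest = max(range(n),
                   key=lambda i: devices[i]["free"] / devices[i]["size"])
    # Slight bias: if the round-robin choice is much fuller than the
    # emptiest device, steer this stripe unit there instead. The 25%
    # threshold here is arbitrary, chosen only for the sketch.
    c, e = devices[candidate], devices[emptiest]
    if e["free"] / e["size"] - c["free"] / c["size"] > 0.25:
        return emptiest
    return candidate
```

With balanced devices this degenerates to plain round-robin; with a newly added empty disk, extra stripe units flow to it until space usage evens out.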

2. Metaslab selection. We divide each device into a few hundred regions, called metaslabs, because the overall scheme was inspired by the slab allocator. Having selected a device, which metaslab should we use? Intuitively it seems that we'd always want the one with the most free space, but there are other factors to consider:

Modern disks have uniform bit density and constant angular velocity. Therefore, the outer recording zones are faster (higher bandwidth) than the inner zones by the ratio of outer to inner track diameter, which is typically around 2:1. We account for this by assigning a higher weight to the free space in lower-LBA metaslabs. In effect, this means that we'll select the metaslab with the most free bandwidth rather than simply the one with the most free space.

When a pool is relatively empty, we want to keep allocations in the outer (faster) regions of the disk; this improves both bandwidth and latency (by minimizing seek distances). Therefore, we assign a higher weight to metaslabs that have been used before, so that new allocations cluster near existing data rather than drifting across the whole disk.

All of these considerations can be seen in the function metaslab_weight(). Having defined a weighting scheme, the selection algorithm is simple: always select the metaslab with the highest weight.
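A toy version of that weighting scheme might look like this (the scale factors are invented for illustration -- the real logic is in metaslab_weight(), and these numbers are not ZFS's):

```python
def metaslab_weight(ms, num_metaslabs):
    """Weight = free space, scaled up for outer (low-LBA) regions,
    with a bonus for metaslabs already in use. A sketch only."""
    # Outer tracks have roughly 2x the bandwidth of inner tracks, so
    # scale linearly from 2.0 (first metaslab) down to 1.0 (last).
    lba_scale = 2.0 - ms["index"] / max(num_metaslabs - 1, 1)
    weight = ms["free"] * lba_scale
    if ms["in_use"]:
        weight *= 1.5   # arbitrary bonus to keep allocations clustered
    return weight

def select_metaslab(metaslabs):
    # The selection policy itself is trivial: highest weight wins.
    return max(metaslabs,
               key=lambda ms: metaslab_weight(ms, len(metaslabs)))
```

All of the policy lives in the weighting function; the selection step never needs to change.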

3. Block selection. Having selected a metaslab, we must choose a block within that metaslab. The current allocation policy is a simple variation on first-fit; it seems likely that we can do better. In the future I expect that we'll have not only a better algorithm, but a whole collection of algorithms, each optimized for a specific workload. Anticipating this, the block allocation code is fully vectorized; see space_map_ops_t for details.
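For reference, plain first-fit is about as simple as allocators get -- here's a sketch over a toy free list (a stand-in for the real data structure, which the next section describes):

```python
def first_fit(free_segments, size):
    """First-fit within a metaslab: walk the free list and take the
    first segment big enough. free_segments is a sorted list of
    [start, length] pairs -- a toy stand-in, not the real structure."""
    for seg in free_segments:
        start, length = seg
        if length >= size:
            seg[0] += size      # carve the allocation off the front
            seg[1] -= size
            return start
    return None                  # too fragmented; caller must try elsewhere
```

Its weakness is also easy to see from the sketch: it tends to chew up the front of the metaslab and leave small fragments behind, which is exactly why a pluggable collection of workload-specific algorithms is attractive.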

The mechanism (as opposed to policy) for keeping track of free space in a metaslab is a new data structure called a space map, which I'll describe in the next post.

Saturday Sep 16, 2006

On Monday, September 18, Bill Moore and I will host a four-hour deep dive into ZFS internals at the 2006 Storage Developer Conference in San Jose. The four-hour format is a lot of fun -- both for us and the audience -- because it allows enough time to explore the architectural choices and trade-offs, explain how things really work, and weave in some amusing stories along the way. Come join the festivities!

Very cool. This is another step toward the ultimate goal: ubiquity. Consumer devices need data integrity just as badly as enterprises do. With ZFS, even the humblest device -- an iPod, a digital camera -- can provide fast, reliable storage using nothing but commodity hardware and free, open-source software. And it's about time, don't you think?