storage Archives - Anchor Cloud Hosting

As you’ve probably noticed, we’ve been evaluating Ceph in recent months for our petabyte-scale distributed storage needs. It’s a pretty great solution and works well, but it’s not the easiest thing to set up and administer properly. One of the bits we’ve been grappling with recently is Ceph’s CRUSH map. In certain circumstances, which aren’t clearly documented, it can fail to do the job and leave you without guaranteed redundancy.

How CRUSH maps work

The CRUSH algorithm is one of the jewels in Ceph’s crown, providing a mostly deterministic way for clients to locate and distribute data on disks across the cluster. This avoids the need for an index server to coordinate reads and writes. Clusters with index servers, such as the MDS in Lustre, funnel…
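The "no index server" idea can be sketched with a toy placement function. This is not CRUSH itself (CRUSH's straw buckets and hierarchy-aware rules are far more involved); it's a minimal rendezvous-style hash sketch, with made-up OSD names, showing how every client can independently compute the same placement from the object name alone:

```python
import hashlib

def place(obj_name, osds, replicas=2):
    # Toy stand-in for CRUSH: rank OSDs by a hash of (object, osd) and
    # take the top N. Any client computes the same answer independently,
    # so no index server is needed to coordinate reads and writes.
    # (Illustrative only; real CRUSH also honours failure domains.)
    scored = sorted(
        osds,
        key=lambda osd: hashlib.sha1(f"{obj_name}:{osd}".encode()).hexdigest(),
    )
    return scored[:replicas]

osds = ["osd.0", "osd.1", "osd.2", "osd.3"]
print(place("myobject", osds))
```

Because placement is a pure function of the object name and the OSD list, two clients that share the same map always agree on where the data lives.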


RADOS Gateway (henceforth referred to as radosgw) is an add-on component for Ceph, the large-scale clustered storage system now mainlined in the Linux kernel. radosgw provides an S3-compatible interface for object storage, which we’re evaluating for a future product offering. We’ve spent the last few days digging through the radosgw source trying to nail some pesky bugs. For once, the clients don’t appear to be breaking spec; it’s radosgw itself. We’re using DragonDisk as our S3-alike client – so what works? PUTting and GETting files works, obviously. Setting the Content-Type metadata returns a failure, and renaming a directory almost works – it gets duplicated to the new name, but the old copy hangs around. Wireshark to the rescue! We started pulling apart packet dumps, and it quickly became evident that setting Content-Type on…
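For context on why Content-Type is a likely suspect: in S3's signature v2 scheme, Content-Type is part of the signed string, so the client and server must agree on it byte-for-byte or the request fails authentication. A minimal sketch of the string-to-sign construction (keys, dates, and paths here are made up, and CanonicalizedAmzHeaders is omitted for simplicity):

```python
import base64
import hashlib
import hmac

def sign_s3_request(secret_key, verb, content_md5, content_type, date, resource):
    # Build the AWS S3 (signature v2) string-to-sign. Content-Type is one
    # of the signed fields, so any disagreement between what the client
    # signs and what the server reconstructs yields a signature failure.
    string_to_sign = "\n".join([verb, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sign_s3_request("secret", "PUT", "", "image/jpeg",
                      "Tue, 27 Mar 2012 19:36:42 +0000", "/bucket/photo.jpg")
```

Changing the Content-Type changes the signature, which is why a server that mishandles the header on its side will reject an otherwise spec-compliant client.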

We’ve been looking at Ceph recently; it’s basically a fault-tolerant distributed clustered filesystem. If it works, that’s nirvana for shared storage: you have many servers, each one pitches in a few disks, and there’s a filesystem sitting on top that’s visible to all servers in the cluster. If a disk fails, that’s okay too. Those are really cool features, but it turns out that Ceph is really more than just that. To borrow a phrase, Ceph is like an onion – it’s got layers. The filesystem on top is nifty, but the coolest bits are below the surface. If Ceph proves to be solid enough for use, we’ll need to train our sysadmins all about it. That means pretty diagrams and explanations, which we thought would…


We don’t normally post about hardware wankery, but this little piece of shininess appeared for free in some of the newer Dell servers we’ve been ordering, and it actually sounds like it’s not an awful hack. CacheCade is an LSI technology (Dell PERC cards are rebranded LSI gear) that adds a read-cache tier to the RAID logic, in the form of solid-state disks. While SSDs are still too expensive for mass-scale primary storage, they’re cheap enough that you can burn a few hundred bucks and get 50GB worth of faster reads. The real benefit of this style of read cache should be for random block reads, where SSDs proverbially drop excrement over rotational media from a great height. The jury is still out for us – we’ve only just started using CacheCade on…
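The read-cache-tier idea itself is simple, and a toy sketch makes it concrete. This isn't LSI's implementation (that lives in controller firmware, and the names and sizes below are ours); it's a minimal LRU read-through cache showing why repeat and hot random reads win: they're served from the small fast tier instead of the slow backing store:

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read-through cache, sketching the CacheCade idea:
    a small fast tier (the SSDs) in front of a slow backing store
    (the rotational disks)."""

    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store      # slow tier (a dict stands in for disks)
        self.capacity = capacity          # "SSD" size, in blocks
        self.cache = OrderedDict()        # fast tier
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block) # refresh LRU position
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]        # expensive rotational read
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least-recently-used block
        return data
```

Sequential scans of cold data see no benefit (every read misses), which matches the intuition that the win is in random reads against a hot working set.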

So you’ve just provisioned your shiny new OS instance with your host of choice, loaded in your confidential data, and away you go without a worry in the world, right? If your data consists only of captioned photos of cute furry animals, then all is well. Perhaps, however, your data is worth just a wee bit more than that (not that we don’t ♥ cute furry animals!). Depending on your host and the product used, your data could be sitting anywhere from locally attached disks to a NAS/SAN to some clustered distributed block device/filesystem, with no easy way to determine who has access to it, what snapshots exist, what will happen to failed media, and so on. For certain customers with certain sensitive applications, that is simply not an acceptable risk. To protect your data…