
"... an engineer who is not only competent at the analytics and technologies of engineering, but can bring value to clients, team well, design well, foster adoptions of new technologies, position for innovations, cope with accelerating change and mentor other engineers" -- CACM 2014/12

Once the corosync and sheepdog services are configured and running, sheepdog needs only one more command: format the cluster. I used the Erasure Code support mechanism. The trick here is that the format command applies to the directory set in the initialization scripts (by default '/var/lib/sheepdog').
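
A minimal sketch of that one command, assuming the 2:1 erasure-coding scheme described below (check 'dog cluster format --help' on your build for the exact option spelling):
# Format the cluster with erasure coding: 2 data strips plus 1 parity strip
dog cluster format --copies 2:1
# Confirm the cluster is up and shows the expected redundancy scheme
dog cluster info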

Following on from the previous article, this describes a corosync configuration for three appliances configured together in a 'triangle'. OSPF/BGP is running on each appliance. With this routing configuration, I am able to assign an IP address to the loopback interface and make each of those addresses mutually reachable from every appliance.
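
For example (a sketch, with hypothetical addresses), each appliance gets a /32 on its loopback, which OSPF/BGP then advertises to the other two:
# On the second appliance
ip addr add 10.0.0.2/32 dev lo
# Once routing adjacencies are up, the other loopbacks are reachable
ping -c 1 10.0.0.1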

I think most corosync examples make the assumption that all nodes are within the same segment. This then suggests a multicast solution. As I am using routing between each appliance, I need a unicast solution.

The following is an example configuration file for the second of three nodes / appliances. Notice that the bind address is the loopback address, and that all three nodes taking part in the quorum are listed. There is a 'mcastport' listed, but because of 'transport: udpu', unicast is actually used on that port number.
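
The complete file does not appear in this excerpt, so here is a minimal sketch of a udpu configuration of that shape for the second node, using the same hypothetical loopback addresses as above (key names follow the corosync 1.x totem syntax and may differ on your version):
totem {
    version: 2
    transport: udpu
    interface {
        ringnumber: 0
        # this node's loopback address
        bindnetaddr: 10.0.0.2
        # with udpu, this is simply the UDP port used for unicast traffic
        mcastport: 5405
        member {
            memberaddr: 10.0.0.1
        }
        member {
            memberaddr: 10.0.0.2
        }
        member {
            memberaddr: 10.0.0.3
        }
    }
}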

Monday, November 6. 2017

In my Sheepdog cluster, I have three nodes, each node having two 1TB SSDs dedicated to the use of a ZFS file system. Each node stripes the two drives together to gain some read performance, and Sheepdog then applies an Erasure Code redundancy scheme across the three nodes to provide a 2:1 erasure-coded tolerant set (in this case, similar to RAID5). With 3 nodes x 2TB = 6TB raw, and the 2:1 scheme keeping two data strips for every parity strip, that should yield about 6TB x 2/3 = 4TB of useful storage space.

Creating the ZFS file system is a two-step process: create a simple zpool, then create the file system on it. This example uses two partitions on the same drive to prove the concept, but in real use two whole drives should be used.
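
A sketch of those two steps (the pool name, dataset name, and partition paths are hypothetical):
# Striping is the default layout when a zpool is given multiple top-level vdevs
zpool create tank /dev/sda5 /dev/sda6
# Create the file system inside the pool, then check the result
zfs create tank/sheep
zfs list tank/sheep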

Thursday, November 2. 2017

I have been looking at various distributed storage solutions, hoping to find something reliable in an open-source style of solution. Some names I've encountered (open and closed source):

Ceph: by some accounts it seems to be resource heavy, but at the same time it appears to be widely used in the industry

Open vStorage: could be a strong contender for me, but I have a bias against Java-based applications.

Lustre: I've been watching this for quite some time, but the features didn't quite mesh with my desires

Zeta Systems: a mixture of proprietary and open solutions, which almost fits my preferences, and uses ZFS as the underlying on-disk format

SheepDog: I keep coming back to looking at this. With a version 1 release a little while ago, the developers indicate it satisfies their 'single point of nothing' criterion, which overlaps with some of my own criteria. In addition, it appears to be resource light, horizontally scalable, and it integrates with the tools I am trying to tie together: lxc, kvm, and libvirt.

As Debian doesn't have a very recent package built, I build from scratch. Since my test environment is small, I use corosync rather than zookeeper. Here are my statements for a package build; I still need to extend this to show the build statement itself as well as the requisite packages:
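
In the meantime, a sketch of a source build with the corosync driver (the Debian package names here are assumptions and vary by release):
# Build dependencies: the autotools chain plus corosync headers
apt-get install build-essential autoconf automake libtool pkg-config libcpg-dev libcfg-dev
git clone https://github.com/sheepdog/sheepdog.git
cd sheepdog
./autogen.sh
./configure    # corosync should be picked up as the cluster driver by default
make && make install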

Sheepdog is Ready: distributed block storage is turning from experiment to production use. It covers performance test scenarios and background on durability, scalability, manageability, and availability (Sheepdog can be run with multipath SCSI targets).

On the Sheepdog mailing list, someone described a mechanism other than sheepfs for presenting a file system:

You can do it through qemu-nbd, formatting it and mounting it:
sheepdog -> qemu-nbd -> /dev/nbd{x} -> xfs/ext3/ext4/.. -> mount
modprobe nbd
qemu-nbd -c /dev/nbd1 sheepdog://localhost:7000/my_volume
# Optionally you can do the rest on a different machine, using nbd-client at this step
mkfs.xfs /dev/nbd1
mount /dev/nbd1 /path/to/mount
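
To tear it down again, unmount and then disconnect the nbd device:
umount /path/to/mount
qemu-nbd -d /dev/nbd1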
