My Life
https://blog.jericon.net
Relaunch of this blog to cover more stuff than just MySQL!
Blog Relaunch!
https://blog.jericon.net/2013/01/02/blog-relaunch/
Wed, 02 Jan 2013 19:08:53 +0000

With the start of the new year, I’m relaunching my blog. As part of this, I have moved it to blog.jericon.net, which frees up the main domain on my own server for other things.

I am no longer going to focus solely on MySQL, though MySQL posts will still be tagged #MySQL. 2013 is going to be a year of change for me. I have a lot of things that I want to do. I’m not going to call them resolutions, because resolutions are meant to be broken. Instead, I’m going to call them goals and wishes. So here they are, along with the category I will be blogging about each of them under:

I am going to track my habits using Lift. These are things that I hope will help me lead a healthier and more balanced life. My goals are all things that are very important to me.

I want to push myself to be more fit. To do that, I need an extreme goal, and I believe giving myself 9.5 months to train for a full marathon is a good one. I will take part in smaller 5k/10k races in the meantime to work up to it. I need to find a good training regimen that will help me meet this goal; just “going out to run” isn’t going to do it. If anyone has suggestions, please let me know. I will be blogging about my progress on this goal under “#Marathon”.

Losing weight also goes hand in hand with #Marathon. I plan on doing this through eating better and regular exercise. I believe that my goal is very much attainable, and not too extreme. My progress and thoughts about this goal will be blogged under “#LoseIt“.

I likely will not blog a whole lot about #DebtFree, but getting out of the Credit Card debt that Jen and I have is an important focus for us this year. As is visiting Jen’s family in the UK.

Lastly, I will be doing 2 photo projects this year. One, which I’m calling #P365 (Project 365), I will be putting on Twitter and Facebook: a photo each day of something in my life. The other project I will put up after the end of the year: a daily photo of me taken with the app “EveryDay”.

Here’s to a wonderful new year!


Chain Copying to Multiple Hosts
https://blog.jericon.net/2012/05/17/chain-copying-to-multiple-hosts/
Fri, 18 May 2012 00:28:48 +0000

This week I was given the task of repopulating our entire primary database cluster, due to an alter that had to be performed on our largest table. It was easiest to run the alter on one host and then populate the dataset from that host everywhere else.

I recalled reading a blog post from Tumblr a while back about how to chain a copy to multiple hosts using a combination of nc, tar, and pigz. I used this, along with a few other things, to greatly speed up our repopulation process. Since I was repopulating production servers, I did a combination of raw data copies and xtrabackup streams across our servers, depending on each server’s position in our replication setup.

For a normal straight copy, here’s what I did:

On the last host, configure netcat to listen and then pipe the output through pigz and tar to uncompress and untar. This needs to be run in the destination directory:

nc -l 1337 | pigz -d | tar xvf -

On any hosts in the middle of the chain, you do the same thing with one extra step. Using a fifo to redirect the stream to the next host:
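The middle-host and source-host commands did not survive in this copy of the post; here is a sketch of what they likely looked like, based on the fifo/tee chain described above. The hostnames (“next-host”, “first-host”), port 1337, and the pv sizing via du are assumptions, not the original values:

```shell
# Middle host, run in the destination directory: the forwarder reads
# from the fifo and sends to the next host in the chain, while tee
# duplicates the incoming stream to both the fifo and the local
# pigz/tar extraction.
mkfifo copy_fifo
nc next-host 1337 < copy_fifo &
nc -l 1337 | tee copy_fifo | pigz -d | tar xvf -
rm copy_fifo

# Source host, run from the directory being copied: tar the data,
# show a progress bar with pv (the du size estimate is a guess at the
# original invocation and is GNU-specific), compress with pigz, and
# send it into the chain.
tar cf - . | pv -s "$(du -sb . | awk '{print $1}')" | pigz | nc first-host 1337
```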

To do this with an xtrabackup stream, the commands are similar. On each host, tar needs the “i” flag added (becoming “tar xvfi -”) so that it ignores the zeroed blocks between the concatenated archive members in the stream. The progress bar here became slightly less accurate, but was still a good rough estimate of the progress. On the source host, the command became:
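The original xtrabackup source command was also lost from this copy of the post; a sketch of what it likely looked like, assuming innobackupex’s tar streaming mode (the temporary directory argument, hostname, and port are placeholders, and pv is included only because the post mentions a progress bar):

```shell
# Source host: stream the backup as tar, watch progress with pv,
# compress with pigz, and send it into the chain. Each receiver must
# extract with "tar xvfi -" so tar tolerates the concatenated stream.
innobackupex --stream=tar ./ | pv | pigz | nc first-host 1337
```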

I found that using this method for a raw copy, I was able to achieve between 300 and 350 MB/sec copying large tables; smaller tables averaged slower speeds. I didn’t do enough testing to pinpoint the bottleneck, but I can say that it was not network, CPU, or I/O: the servers involved have 10 Gbit network and FusionIO drives. Increasing the compression level may have helped add some throughput here as well. Copying a 1.4 TB dataset to 3 destination servers took under 2 hours.

This is definitely a tool that I will be adding to my arsenal to use on a regular basis.