Once this is done, edit the /etc/init.d/pure-ftpd file. This is the service start file. The pure-ftpd service isn't set up like the others: its options aren't read from daemons.conf, everything is hardcoded in the startup script. And I don't like that >:(. So let's change it.

nano /etc/init.d/pure-ftpd

Find this line:

OPTIONS="-4 -H -A -B"

and replace it with:

OPTIONS=$PUREFTPD_OPTIONS

Voilà, the FTP server is done. Now reboot the machine and check whether the service is running once it comes back up:
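
A quick sanity check is to look for the running process and the listening FTP port. The exact commands depend on your userland (busybox vs. a full procps/net-tools), so adjust as needed:

reboot
# after the machine comes back up:
ps aux | grep pure-ftpd
netstat -tln | grep :21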

Introduction

So recently I've started ASIC bitcoin mining. I've ordered some block eruptors (Icarus protocol) and I have a spare ASUS EEE device lying around with 3 USB ports. Ideal for mining: low power, small and silent. However it's a 32-bit device and cgminer isn't built for this :(.

Time to take matters in our own hands.

Installing Xubuntu on the ASUS EEE-PC

Xubuntu is the clear choice for this EEE-PC. Light, versatile, a lot of packages! You can get it here: http://xubuntu.org/

Install is pretty straightforward, I won’t cover this here.

Installing openssh

I like my installations to come with an OpenSSH server, just to see what's going on. (For a web based stats page with phpSysInfo, see my tutorials on Slitaz.)

sudo apt-get update
sudo apt-get install openssh-server

Compiling and installing cgminer

The problem with cgminer is that I can't find it pre-compiled for 32-bit, and the libusb it uses is pretty non-standard, so grabbing the libusb found in Ubuntu won't work.

First make sure you have all the latest packages installed, and grab the dependencies to compile cgminer.
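
A typical dependency set and build sequence looks roughly like this. The package list is an assumption for an Ubuntu release of that era, and --enable-icarus matches the block eruptors; check the README of the cgminer version you grab:

sudo apt-get install build-essential autoconf automake libtool pkg-config libcurl4-openssl-dev libncurses5-dev libudev-dev
git clone https://github.com/ckolivas/cgminer.git
cd cgminer
./autogen.sh --enable-icarus
make
sudo make install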

So, it’s been a while since I created some content about Slitaz. Remember our first server farm?

This additional part expands the existing Slitaz network with a Varnish HTTP cache.

So where to place this HTTP cache? To determine the location of our cache there are two strategies:
a) Place the cache(s) up front so a cache hit will not stress the load balancers.
b) Place the cache(s) after the load balancers, so each cache search will be balanced too.

It just depends on which component is the most powerful one. You might consider multiple caches for redundancy and performance, but I'll be making just one and placing it up front. (See image above.)

Let’s get started shall we?
This part continues from the base system built in Slitaz project – Part 1 – Building a base system.
As a starter let's edit the hardware of our base system. Add a disk of 1.5 GB (this will be used for caching) and increase the RAM to 128 MB. This is needed because I noticed that the poor little thing can't manage Varnish on 64 MB of RAM; it keeps throwing memory errors when starting the service at boot.

For this machine I’ll assign the ip 192.168.1.10. So remember the ip script? Let’s make a call:

/home/base/ip.sh httpcache 192.168.1 10 1
reboot

Now let’s format the newly added 1.5 GB disk.

fdisk /dev/hdb

For convenience: press o, n, p, 1, enter, enter, w

Format the partition and create a mount point for the cache.

mkfs.ext2 -b 4096 /dev/hdb1
mkdir /mnt/data

And edit fstab to add this disk to be mounted.

nano /etc/fstab

/dev/hdb1 /mnt/data ext2 defaults 0 0

Once this is done you need to change the Lighttpd server, which still runs on port 80; we will need that port for serving from our cache.

nano /etc/lighttpd/lighttpd.conf

server.port = 81

Once this is done install the required dependencies and the toolchain to build Varnish.
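
On Slitaz that typically means pulling in the compiler toolchain plus the PCRE headers Varnish needs. The package names below are assumptions, so check your repository first:

tazpkg get-install slitaz-toolchain
tazpkg get-install pcre-dev
tazpkg clean-cache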

Now let's create the configuration directory and config files for Varnish. Varnish requires two files: a secret file (for CLI access) and a configuration file. The configuration file is quite complex and its syntax resembles C. In this example I just used a quick example script found on the web. The important settings are the .host and .port properties; they point Varnish to the virtual loadbalancer address. Tip: with Varnish it's also possible to loadbalance the servers without needing an external loadbalancer. Just google for 'varnish multiple sites config'. However, this is not in the scope of this example.

mkdir /etc/varnish
nano /etc/varnish/default.vcl

The script:

backend default {
  .host = "192.168.1.200";
  .port = "80";
}

sub vcl_recv {
  if (req.request != "GET" && req.request != "HEAD" &&
      req.request != "PUT" && req.request != "POST" &&
      req.request != "TRACE" && req.request != "OPTIONS" &&
      req.request != "DELETE") {
    /* Non-RFC2616 or CONNECT which is weird. */
    return (pipe);
  }
  if (req.request != "GET" && req.request != "HEAD") {
    /* We only deal with GET and HEAD by default */
    return (pass);
  }
  // Remove has_js and Google Analytics cookies. If your system sets additional
  // cookies (e.g. from a javascript analytics file or similar) you will need to
  // strip those here too, otherwise Varnish will not cache the response. This is
  // safe for cookies that your backend (Drupal) doesn't process.
  set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|__utm*|has_js|_chartbeat2)=[^;]*", "");
  // Remove a ";" prefix, if present.
  set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
  // Remove empty cookies.
  if (req.http.Cookie ~ "^\s*$") {
    unset req.http.Cookie;
  }
  if (req.http.Authorization || req.http.Cookie) {
    /* Not cacheable by default */
    return (pass);
  }
  // Skip the Varnish cache for install, update, and cron
  if (req.url ~ "install\.php|update\.php|cron\.php") {
    return (pass);
  }
  // Normalize the Accept-Encoding header
  // as per: http://varnish-cache.org/wiki/FAQ/Compression
  if (req.http.Accept-Encoding) {
    if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
      # No point in compressing these
      remove req.http.Accept-Encoding;
    } elsif (req.http.Accept-Encoding ~ "gzip") {
      set req.http.Accept-Encoding = "gzip";
    } else {
      # Unknown or deflate algorithm
      remove req.http.Accept-Encoding;
    }
  }
  // Let's have a little grace
  set req.grace = 30s;
  return (lookup);
}

// Strip any cookies before an image/js/css is inserted into cache.
sub vcl_fetch {
  if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
    // For Varnish 2.0 or earlier, replace beresp with obj:
    // unset obj.http.set-cookie;
    unset beresp.http.set-cookie;
  }
}

sub vcl_deliver {
  if (obj.hits > 0) {
    set resp.http.X-Cache = "HIT";
    // set resp.http.X-Cache-Hits = obj.hits;
  } else {
    set resp.http.X-Cache = "MISS";
  }
}

sub vcl_error {
  // Let's deliver a friendlier error page. You can customize this as you wish.
  set obj.http.Content-Type = "text/html; charset=utf-8";
  synthetic {"
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html><head><title>"} + obj.status + " " + obj.response + {"</title>
<style type="text/css">
  #page {width: 400px; padding: 10px; margin: 20px auto; border: 1px solid black; background-color: #FFF;}
  p {margin-left: 20px;}
  body {background-color: #DDD; margin: auto;}
</style></head>
<body><div id="page">
<h1>Page Could Not Be Loaded</h1>
<p>We're very sorry, but the page could not be loaded properly. This should be fixed very soon, and we apologize for any inconvenience.</p>
<hr />
<h4>Debug Info:</h4>
<p>Status: "} + obj.status + {" Response: "} + obj.response + {" XID: "} + req.xid + {"</p>
</div></body></html>
"};
  return (deliver);
}

And let's add the properties for our daemon to the daemons.conf file. In this case the options define a 'varnish_storage.bin' file on our added 1.5 GB hard drive, and make Varnish listen on all interfaces on port 80. A CLI is also enabled on the localhost interface on port 6082 (just an example).
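
As a sketch, these options map to the standard varnishd flags (-s file for the storage file, -a for the listen address, -T for the CLI, -f and -S for the VCL and secret files). The variable name your init script reads from daemons.conf is an assumption here:

VARNISHD_OPTIONS="-a :80 -T 127.0.0.1:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -s file,/mnt/data/varnish_storage.bin,1G"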

To test if your setup is functional, just point a web browser to http://192.168.1.10 and check the headers. I'm using Firebug (a Firefox extension) to view the headers in this case; they show the values added by Varnish.
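
You can do the same check from the command line; the X-Cache header comes from the vcl_deliver routine above (this assumes curl is available on whatever machine you test from):

curl -I http://192.168.1.10
# expect "X-Cache: MISS" on the first request and "X-Cache: HIT" on a repeat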

So that's about it. Easy, no? And as always, here are the download files: httpcache.7z (38.3 MB)

This last part will focus on the loadbalancers. These loadbalancers will balance the load over our four webnodes. Because I couldn't get LVS working with heartbeat, I've created a simple Python script which takes over the virtual IP. Basically the same functionality, only without the pain of missing dependencies when compiling an open-source package.

Let's start by prepping 'loadbalancer1'. I will use IP 192.168.1.200 to manage incoming virtual connections, IP 192.168.1.201 for loadbalancer1 and IP 192.168.1.202 for loadbalancer2.

/home/base/ip.sh loadbalancer1 192.168.1 201 1
reboot

Now before we can use HAProxy, we must free up port 80, which is still open by default on the Lighttpd servers. Just edit 'lighttpd.conf' and change the default port to 81 (our admin instance).

nano /etc/lighttpd/lighttpd.conf

# Port, default for HTTP traffic is 80.
#
server.port = 81

Now we can start with HAProxy. Grab the toolchain, as we need to compile it from source.
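
The build itself is the usual HAProxy routine. Treat this as a sketch: the version in the URL is a placeholder and the Slitaz toolchain package name is an assumption:

tazpkg get-install slitaz-toolchain
wget http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.x.tar.gz   # placeholder: pick an actual release
tar xzf haproxy-1.4.x.tar.gz
cd haproxy-1.4.x
make TARGET=linux26
make install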

This is because the service file is called 'init.d' in the package, and not 'fcron', so the package tries to install it over the startup folder. I provided a renamed copy of this on this site. If you wish to do it manually, extract the fcron tazpkg; it contains a small lzma filesystem in which you will see the 'init.d' file.
Grab it from this site:

Now it’s time to make our configuration file. I will place this in ‘/etc/haproxy/haproxy.conf’.

mkdir /etc/haproxy
nano /etc/haproxy/haproxy.conf

So now, this is the configuration file for HAProxy. I defined my gluster nodes and MySQL servers as backends too. This way I can get a quick view of the 'slitaz farm' health on 'http://192.168.1.201/stats' or 'http://192.168.1.202/stats'.
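
As a minimal sketch of what such a haproxy.conf can look like, with only the web frontend shown; the webnode addresses (192.168.1.211-214) and the timeout values are assumptions based on the IPs used elsewhere in this series:

global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

listen webfarm 0.0.0.0:80
    balance roundrobin
    stats enable
    stats uri /stats
    server web1 192.168.1.211:80 check
    server web2 192.168.1.212:80 check
    server web3 192.168.1.213:80 check
    server web4 192.168.1.214:80 check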

Once this is done, reboot the server to see if the file system mounts. After this it's time to install the MySQL dependencies for connecting with the database: I want database support for my webserver.
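
With tazpkg that boils down to something like the following; the exact package name (php-mysql vs. php-mysqli) is an assumption, so check what 'tazpkg search mysql' lists:

tazpkg get-install php-mysql
tazpkg clean-cache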

Almost done. This is a little test script I've made to test the functionality with the backend servers. Just put it on the cluster and each node will show its own name and whether it has a connection with the database. This will be the temporary index page.

nano /var/domains/web/index.php

<?php
echo "<h1>Welcome,</h1><br>";
echo "You are now connected with server: " . exec('hostname');
echo "@" . $_SERVER['SERVER_ADDR'] . "<br>";
echo "Connection with the database is: ";
$link = mysql_connect("192.168.1.231", "web", "web");
if (!$link) {
    die("<font color=\"#FF0000\">inactive</font>");
}
echo "<font color=\"#00FF00\">active</font>";
?>

Now copy the machine 3 times and change the IP of each node to reflect the configuration discussed at the beginning of the post. (Tip: start with the last machine and work up to the first to avoid IP conflicts with 192.168.1.211.)

So this post will focus on creating a MySQL master server and a slave replicating this master. MySQL replication is a good way to have realtime backups.

First let's start with the master server. I will use the IP address 192.168.1.231 for the master and 192.168.1.232 for the slave.
Create a copy of our base and add a 1 GB IDE disk drive to it. This disk will be where our database is written to. (Keeping everything on the 512 MB main disk seems wrong to me.)

Fire up our script, and reboot (as usual)

/home/base/ip.sh mysqlmaster 192.168.1 231 1
reboot

So now let's add our disk. I noticed that in the startup script, Slitaz uses '/var/lib/mysql' a lot. To avoid any problems later on, I will just mount the disk to the place where the MySQL database is kept.

fdisk /dev/hdb

Press: o, n, p, 1, enter, enter, w

mkfs.ext2 -b 4096 /dev/hdb1
mkdir /var/lib/mysql
nano /etc/fstab

/dev/hdb1 /var/lib/mysql ext2 defaults 0 0

Now that our storage is prepped, it's time to install the MySQL server. On the master server I will install the package 'php-mysqli' too, because it is required for phpMyAdmin. (Note: also agree to any additional required packages.)
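
With tazpkg that is roughly the following; the 'mysql' package name is an assumption, though it matches the daemon name added to rcS.conf below:

tazpkg get-install mysql
tazpkg get-install php-mysqli
tazpkg clean-cache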

Now let’s add it as a service (easy peasy stuff, you should be quite familiar with this by now.)

nano /etc/rcS.conf

RUN_DAEMONS="dbus hald slim firewall dropbear lighttpd mysql"

Conveniently, the default Slitaz installation comes with a configuration file for machines with a low amount of memory. So I will delete the default configuration and use this one instead.

rm /etc/mysql/my.cnf
mv /etc/mysql/my-small.cnf /etc/mysql/my.cnf

Now before we can start MySQL, the config file needs a little bit of tweaking. The 'bind-address' setting is used to allow external machines to make a connection; this is needed because we want our webnodes to connect to this database. The setting 'log-bin' also needs to be enabled, which makes our master MySQL server keep binary logs for our slave.

nano /etc/mysql/my.cnf

[mysqld]
bind-address = 192.168.1.231
log-bin=mysql-bin
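
Depending on the MySQL version, replication also expects every server to have its own unique server-id (the slave will get a different one). If the slave later refuses to connect, adding a line like this to the same [mysqld] section is the usual fix:

server-id = 1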

Now reboot the server. This will cause Slitaz/MySQL to generate the needed files for the MySQL database. These are generated the first time the service is started.

Now let's log in to our newly created server. Normally the password is left empty (just press ENTER).

mysql -u root -p

Now this server needs a few extra accounts: one account for our slave server, which is located at 192.168.1.232 and should be restricted to that IP; one account for phpMyAdmin (called myadmin here); and one account to distribute to our web servers so they can connect to the database (optionally restricted to this subnet).

GRANT ALL ON *.* TO slave@'192.168.1.232' IDENTIFIED BY 'slave';
GRANT ALL ON *.* TO myadmin@'localhost' IDENTIFIED BY 'myadmin';
GRANT ALL ON *.* TO web@'192.168.1.0/255.255.255.0' IDENTIFIED BY 'web';
exit

So now, as promised, the installation of phpMyAdmin. I like this little piece of software, and because there is already a web server installed it's easy to include. So just download it and unpack it to the admin domain.
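
In practice that is a download-and-extract into the admin docroot. The URL is a placeholder for whatever release is current on phpmyadmin.net, and /var/domains/admin is the admin docroot used elsewhere in this series:

cd /var/domains/admin
wget "$PHPMYADMIN_TARBALL_URL"   # placeholder: a phpMyAdmin release tarball from phpmyadmin.net
tar xzf phpMyAdmin-*-all-languages.tar.gz
mv phpMyAdmin-*-all-languages phpmyadmin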

Part one of our master server is done. Time to work on our slave. This slave will replicate all changes made in the master database, which makes it a great backup solution: if our master server fails, the slave can be reconfigured as a master server and bring the systems up and running again in no time.

Like with the master, we will continue from our base system and also add a 1 GB IDE disk. I will be using the IP 192.168.1.232 for the slave.

/home/base/ip.sh mysqlslave 192.168.1 232 1
reboot

fdisk /dev/hdb

Press: o, n, p, 1, enter, enter, w

Also this disk needs to be mounted on ‘/var/lib/mysql’ like the master.

mkfs.ext2 -b 4096 /dev/hdb1
mkdir /var/lib/mysql
nano /etc/fstab

/dev/hdb1 /var/lib/mysql ext2 defaults 0 0

On our slave I will not install phpMyAdmin. (Installing it is optional; install instructions are described in the mysqlmaster part.)

Now let’s reboot this server too and let it generate all needed files.

Now the difficult part: creating a data snapshot. This is done to make the servers synchronize. First the master tables have to be locked and a data dump has to be made. Then this data dump needs to be uploaded to the slave and imported, the slave needs to be started, and all tables need to be unlocked on the master database.

This requires two putty sessions to each machine: one on the master to lock the database and one to dump it; one on the slave to stop the slave instance (and later restart it) and one to restore the backup to the slave.
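
Roughly, the snapshot sequence looks like this. Treat it as a sketch: the dump paths match the files deleted further down, but the CHANGE MASTER values (log file and position) must come from your own SHOW MASTER STATUS output:

# on the master, inside mysql: lock the tables and note the binlog position
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

# on the master, in the second shell: dump everything where the slave can fetch it
mysqldump -u root -p --all-databases > /var/domains/admin/dbdump.db

# on the slave: copy the dump over (scp/wget, whatever you prefer) and import it
mysql -u root -p < /home/base/dbdump.db

# on the slave, inside mysql: point it at the master (example values)
CHANGE MASTER TO MASTER_HOST='192.168.1.231', MASTER_USER='slave',
  MASTER_PASSWORD='slave', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;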

Now start our slave again in session 1 (which is still connected to the MySQL database).
Session 1 (mysqlslave):

slave start;

Now unlock our tables again on the master.
Session 2 (mysqlmaster):

unlock tables;

And delete the database dumps on our slave server.

rm /home/base/dbdump.db

And on our master server.

rm /var/domains/admin/dbdump.db

All done, our tables should now synchronize perfectly. Open MySQL Workbench against both databases and watch them synchronize! (I am not going to explain this program to you right now; look it up if you don't know how to use it.)

Finished!

And here are the files (contains mysqlmaster and mysqlslave): mysql.7z (20.7 MB)

Before building the webservers, I'd like to take some time to build a GlusterFS client and Samba server. This machine is optional, but it's great practice for setting up a simple GlusterFS client and it shows a lot about how the client works. It's also fun to test your nodes with.

This machine I'll call 'glusterclient' and I will use the IP 192.168.1.229. Just copy another base machine and change the IP/hostname:

/home/base/ip.sh glusterclient 192.168.1 229 1

As installing GlusterFS is explained in part two, I won’t go over all the details. I’ll just give you the commands:

So once GlusterFS is installed, we make a mount point for the partition:

mkdir /mnt/glusterfs

For some strange reason the command 'mount -t glusterfs 192.168.1.221:/slitaz-volume /mnt/glusterfs' doesn't work. Adding the mount point to /etc/fstab doesn't seem to function either.
The only workaround I have found thus far is adding a call to the glusterfs executable to the startup in 'local.sh'. (If somebody knows a way to make fstab work: comment or contact me!)
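
As a sketch, the call in local.sh can look like this; the --volfile-server/--volfile-id invocation is the standard glusterfs client syntax, but the exact script path and binary location on your install may differ:

# added to /etc/init.d/local.sh
glusterfs --volfile-server=192.168.1.221 --volfile-id=slitaz-volume /mnt/glusterfs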

You can also see that the 'df -h' command produces a line break after the mounted GlusterFS volume name. This is why I changed the 'class.parseProgs.inc.php' file; otherwise it would generate errors at http://192.168.1.229:81/

Now let’s continue and install samba. (Samba seems to need cups to work.)

tazpkg get-install cups
tazpkg get-install samba
tazpkg clean-cache

For Samba I always like to create different users. (Just a security habit I guess…)
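
As a minimal sketch of the user creation and the share definition (the user name and the smb.conf section are assumptions; the share name matches the UNC path used below):

adduser smbuser
smbpasswd -a smbuser

# /etc/samba/smb.conf
[glusterfs]
   path = /mnt/glusterfs
   valid users = smbuser
   read only = no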

After this you can go to your samba share at \\192.168.1.229\glusterfs (and log in using the smb account). It's fun to add a few files and watch the cluster nodes. (Tip: try 'ls -l' in the '/mnt/data' folder of a node and watch.)

In this part I will create four GlusterFS nodes that will stripe the files across each server. GlusterFS defines a minimum of 1 GB of RAM and 8 GB of disk space. Soooo… about the requirements: I'll add a new disk of 8 GB, but forget the memory requirement. I like a challenge; 64 megs ought to be enough (for now).

So just copy the base system and rename it to 'glusternode1'. Add a new 8 GB disk (or more) to it. Again IDE! Then boot the machine.

Remember the ip script? Time to put it to some use. The first thing I like to do is assign a static IP so I can use putty to connect to it. After boot just type:

/home/base/ip.sh glusternode1 192.168.1 221 1

Reboot the machine and voilà. For the gluster nodes I intend to use the following IPs:

I noticed that it seems impossible to generate a PID file for this executable by default. Luckily GlusterFS can make its own with the '--pid-file' option. So before it can be used as a startup daemon you need to pass this as an optional parameter. These options go in daemons.conf.
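
A sketch of what that daemons.conf entry can look like; the variable name the init script reads is an assumption, while --pid-file itself is a real glusterd option:

GLUSTERD_OPTIONS="--pid-file /var/run/glusterd.pid"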

This first start also generates the '/var/lib/glusterd' directory, which we need.

To identify each gluster node, they each have to have their own unique UUID, because if we copy glusternode1 three times, each server will list the same UUID, causing gluster to think all four machines are localhost!
This UUID can be found in the file '/var/lib/glusterd/glusterd.info'. Let's assign a random one for 'glusternode1'.

echo UUID=`uuidgen -r` > /var/lib/glusterd/glusterd.info

Finished with glusternode1! Phew, 3 to go. Shut down glusternode1 (tip: halt) and copy it three times (glusternode2, 3 and 4).

Now let's start assigning the IPs and random UUIDs, beginning with the last node (to avoid IP conflicts).

Of course one of the drawbacks of using such a small amount of RAM (64 MB) is that gluster will choke trying to load. By default glusterfs tries to use 64 MB, which is too much for the machine to handle. So we'll need to tweak the cache-size. I tested a few values and 16 MB seems to work best. (4 MB and 8 MB are too small.)

gluster volume set slitaz-volume cache-size 16MB

So normally everything should be OK now and a 'gluster volume info' command should succeed.

Why the 512 MB RAM and why a regular Slitaz 4.0 image?
Explanation: in Slitaz 3.0 the slitaz-installer was located in the base system (~8 MB iso), but since Slitaz 4.0 I find it much easier to use the GUI installer to install the Slitaz base package, which of course needs more RAM. 512 MB is way too much, but we can still change the amount of RAM our machine uses later on.

Ok, let's continue. Boot up the virtual machine and watch the Slitaz magic do its job. Select 'SliTaz Live' and continue the booting with default settings. (I changed my keymap to be-latin1 as that is the layout of my keyboard.)

First things first. We need to prepare our disk. Slitaz comes with GParted built in, which is super easy to use.

Just jump to 'Applications > System Tools > GParted Partition Editor', pop in the root password (which is 'root') and off we go.

In GParted we select our /dev/hda (if not already selected) and use 'Device > Create Partition Table'. Press Apply and right-click on the gray block. Select 'New' and press the 'Add' button. (EDIT: in VirtualBox it's /dev/sda by default.)
After this just press ‘Apply’ to apply the settings.

Voila, partition set. Quite easy.

Now to install Slitaz: go to the SliTaz Panel, which can be found under 'System Tools > SliTaz Panel'. Use root/root as username/password, and go for the 'Install' option in the top right corner.

Skip the partitioning with 'Continue Installation' (we just did this).

Now we need the base system. From inside SliTaz, download the base iso from http://mirror.slitaz.org/iso/4.0/flavors/slitaz-4.0-base.iso. (Use Midori.)

Log in with root/root and install nano. (I really prefer this one over vi because of its simplicity.)

tazpkg get-install nano

By default dropbear is already installed; we simply need to enable it in our startup file. This file is located at /etc/rcS.conf. Find the line that says 'RUN_DAEMONS' and add dropbear.
By default apache and mysql are also in the daemon list, but you can remove these. (Reboot the server and continue with putty.)

nano /etc/rcS.conf

RUN_DAEMONS="dbus hald slim firewall dropbear"

reboot

Now we can use putty to connect to our server. (Hint: the command for finding out the server IP address is 'ifconfig'.)

Usually I like to know what's happening on my servers. For this I've used phpSysInfo in the past: it offers a quick overview through the web. However, this requires a webserver and PHP.

So download all packages needed for lighttpd (press ‘y’ to install missing dependencies).
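
On Slitaz that is typically something like this; the exact package names (especially for PHP) are assumptions, so check what tazpkg offers:

tazpkg get-install lighttpd
tazpkg get-install php
tazpkg clean-cache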