This is very, very disappointing: Apple discontinues the Xserve. It will be available for order until January 31, 2011. Apple's recommendation is to buy the Mac mini Server edition or the Mac Pro in the future. While the Mac Pro is a great product, I can't imagine putting it in a 19" rack: two Mac Pros on a shelf would take up the space of 12 rack units -- wow.

At ClipDealer we have two Xserves. They are great machines and I will definitely miss being able to buy more of them.

A while ago I wrote about how to use nginx as a proxy to do cookie-based redirects. We use this functionality at work to give everyone easy access to each developer's machine and a view of their work in progress.

I thought it would be nice to show right on the website which developer's machine you are currently accessing. But I always disliked putting the functionality to accomplish this inside the framework or even the app itself, or having to install or modify some special configuration on each developer's machine. I always wanted my proxy to do this kind of work. ... And nginx can.

It's the substitution module of nginx that can replace arbitrary text in an HTTP response. nginx must be compiled with the option --with-http_sub_module.

The following lines show how to fill a variable $name with the name of the developer whose machine we are accessing. The sub_filter statement takes the search pattern as its first parameter and the replacement string as its second -- very easy, isn't it?
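Since the original snippet is not reproduced here, the following is a minimal sketch of such a configuration. The cookie name, developer names, upstream and the injected markup are all assumptions, not the actual setup:

```nginx
# Hypothetical sketch: derive $name from the redirect cookie and inject
# a banner into every HTML page passing through the proxy.
map $cookie_dev $name {
    default  "unknown";
    alice    "Alice";
    bob      "Bob";
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;

        # sub_filter: first parameter is the text to search for,
        # second parameter is the replacement string.
        sub_filter '</body>'
                   '<div id="dev-banner">Machine of $name</div></body>';
        sub_filter_once on;
    }
}
```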

After several days of playing around with newt I am quite disappointed to come to a point where I have to say that newt not only has very limited documentation (I think only about 50% of the newt functionality is documented) -- which would not be that much of a problem, though. The bigger problem is that things are very unstable: segmentation faults, reproducible on different operating systems and platforms -- I'm not sure, though, whether the problem is newt or pecl_newt. However: this is definitely not what I want. So, let's move on to something better.

I had a look at Apple's opensource repository the other day and stumbled over an ncurses package. I had problems with the plain GNU ncurses package before: it would not install, but hang while building the terminfo database. With the ncurses package I found in Apple's opensource repository, I got it to install. Here is the howto:

Note: The first three steps below are for OS X 10.5.x. For OS X 10.6.6, simply download ncurses 5.7 and build it using ./configure --with-shared.

Download ncurses from Apple's opensource repository. You can find it by clicking on the link of the Mac OS X version you are running, e.g. 10.5.8.

Unpack the package and you get a directory called something like "ncurses-21". Next, cd into this directory and run make and sudo make install. If make complains about a missing directory /tmp/ncurses/Build, just create it with mkdir -p /tmp/ncurses/Build and start again.

Now an intermediate build should have been generated. Next, cd /tmp/ncurses/Build, make and sudo make install. Now ncurses should be successfully installed.

Now you are ready to install the ncurses PHP extension, either by executing sudo pecl install ncurses or by downloading ncurses from pecl.php.net and building it manually.

Don't forget to add ncurses.so to your php.ini (extension=ncurses.so).

Now everything should be ready to start writing ncurses apps -- however: for me it still did not work. The simplest ncurses app resulted in the following error message:

Error opening terminal: vt100.

What's going on? dtruss might be your friend to figure this out. Execute the following command, where example.php is the ncurses app you want to run: sudo dtruss php example.php. You should see output like:

You can see that ncurses is looking for a file vt100 in a terminfo directory and fails at /usr/local/share/terminfo. I indeed had no directory /usr/local/share/terminfo, but a directory /usr/share/terminfo. I was not able to figure out where I could configure the correct directory, so I just created a symlink: sudo ln -snf /usr/share/terminfo /usr/local/share/terminfo. After creating the symlink, I was able to execute my example ncurses application.

I always wanted to provide nice user interfaces for some of my command-line tools written in PHP, but was not able to solve one problem until recently. The big problem when writing user interfaces for command-line utilities in PHP is: ... which library to use for actually building the user interface?

You could try to write your own library using ANSI escape sequences -- but your user interfaces would either be very limited, or it would be a hell of a lot of work to write an extensive library providing more than just the basics.
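To illustrate what that raw approach looks like, here is a tiny sketch using nothing but escape sequences (the text and colors are, of course, arbitrary):

```shell
# Clear the screen, move the cursor home, and print a bold green line --
# anything beyond toy output like this quickly becomes a lot of work.
printf '\033[2J\033[H'
printf '\033[1;32m%s\033[0m\n' 'hello from a hand-rolled "UI"'
```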

... until recently. Every once in a while I tried to dig up more information on how to get newt to work, because this is the library I would prefer over the other solutions. However, it did not compile on my system. Recently I searched again and was able to dig up a patch for newt-0.5.22.11. So, here are the steps to get things working:

Download the patch for newt from the page above and apply it as described in the short tutorial provided on that page.

Download slang, which is required by newt. I decided to download the latest snapshot (pre2.2.3-60) and things worked just fine with it. Extract and build it -- I did not use any special flags for this.

Download popt, which is also required by newt. I decided to download popt-1.16, which seems to be the latest release and can be found at the bottom of the download page. Extract and build it -- I did not use any special flags for this.

After installing slang and popt and patching newt, you should now be able to build and install newt.

I had a very annoying problem with proftpd, which seems a common one at first sight: slow logins, combined with the fact that a lot of FTP clients out there ship with a low timeout setting. The problem is that googling "slow connection" or "slow login" in combination with "proftpd" led me in a totally wrong direction. A lot of people seem to have a problem with DNS lookups, which can easily be fixed by adding ...

UseReverseDNS off
IdentLookups off

... to the configuration file, to turn off any DNS lookups. But this did not change anything for me. Running an FTP client in debug mode, it turned out that the authorization itself took a very long time, which led to a timeout with most FTP clients:

The password was sent, and then the FTP client had to wait 10 seconds or longer for a response. Lots of FTP clients have a timeout of less than 10 seconds, which results in a timed-out connection with such a long response time.

After googling for quite some time without finding anything useful on this topic -- besides the DNS lookup problem -- I delved deeper into the proftpd documentation and found a howto which gave me some hints on how to speed up FTP login.

As it turned out, the problem was my SQLAuthenticate directive, which I had just copied from the example configuration file of mod_sql. The configuration was set to:

SQLAuthenticate users userset

The problem with this configuration is that the userset switch seems to be very, very expensive. I still don't know why this switch is set in the example configuration -- the documentation contains no useful examples of when to use or when to avoid it, but eventually I found a forum post by a proftpd maintainer saying that the userset switch does not need to be configured. After changing the above configuration to ...

SQLAuthenticate users

... login is fast as hell. I'm still curious why the switch was there ...

In a previous blog entry I described a method of how to set up master-slave replication with MySQL. In steps #4 and #5 I used mysqldump and the mysql client for creating a database dump on the master and importing it on the slave. The problem with this approach is that the database tables are locked as long as the dump is running. For small databases this might not be a problem, but as data grows, creating a dump takes longer and longer. At work we apparently reached some critical level -- mysqldump ran for hours and hours and would probably still be running if I had not stopped it.

Luckily, there are more suitable tools for large databases available: InnoDB Hot Backup and XtraBackup. I've decided to go with XtraBackup, because it's open source, free and actively developed. InnoDB Hot Backup is closed source and not free.

The following steps are meant to replace steps #4 and #5 of my previous blog post.

1. Building xtrabackup

For Linux I had to build xtrabackup from the source package, because there was no binary package available for my architecture -- it's very easy, though:

Last year we purchased our first Xserve. Yesterday our second one arrived at our office. Today "he's" standing on my desk. Tomorrow, when we move him to our data center, he will help encode videos ...

Write down "File" and "Position" ... you will need them later when starting replication.
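For reference, "File" and "Position" are the binlog coordinates reported by SHOW MASTER STATUS on the master; the values below are purely illustrative:

```sql
-- Run on the master while the tables are still locked.
SHOW MASTER STATUS;
--          File: mysql-bin.000003
--      Position: 73
```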

Now you can unlock the tables:

UNLOCK TABLES;

5. Slave: import database dump

Copy masterdump.sql to the slave server and import the database:

mysql -u root -p... < masterdump.sql

This may take quite some time ...

6. Slave: start replication

Start mysql client on slave and enter the following commands:

CHANGE MASTER TO
MASTER_HOST='<master_host>',
MASTER_USER='<slave_username>',
MASTER_PASSWORD='<slave_password>',
MASTER_LOG_FILE='<mysql-bin file name you've written down in step 4>',
MASTER_LOG_POS=<master position you've written down in step 4>;
START SLAVE;
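Not part of the original steps, but a quick sanity check that replication actually started is to inspect the slave status; both replication threads should report "Yes":

```sql
-- Run on the slave after START SLAVE.
SHOW SLAVE STATUS\G
--     Slave_IO_Running: Yes
--    Slave_SQL_Running: Yes
```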

I'm currently preparing to switch my blog software -- again. After using WordPress and Serendipity for quite some time, I came to the conclusion that I will only be satisfied with my own blog software. Therefore I'm currently developing something based on the PHP5 framework I developed for work. I also decided to switch languages ... now I can practice my English and increase the audience of people who won't be interested in what I am writing.