Category Archives: Serverstuff

In order to prevent email delivery during development and instead log all messages that would have been sent, you can use a simple trick: replace the file /usr/sbin/sendmail (on Ubuntu, use 'locate sendmail' to find it if it lives elsewhere) with a little shell script. Rename the original binary to something like sendmail_bak first, then save the script in its place:
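A minimal sketch of such a stand-in script could look like this (the log path /tmp/fake-sendmail.log is just an assumption, pick any writable file):

```shell
#!/bin/sh
# Hypothetical replacement for /usr/sbin/sendmail: log instead of delivering.
# The log path is an assumption -- choose any file the web server user can write.
LOG=/tmp/fake-sendmail.log
{
    echo "--- $(date) sendmail called with args: $*"
    cat              # the mail message arrives on stdin
} >> "$LOG"
# Falling off the end exits with status 0, so the calling
# application believes the mail was delivered.
```

Remember to make the script executable (chmod +x) so applications that shell out to sendmail keep working.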

In order to deploy bigger changes without bothering your visitors with strange behaviour during a data migration, updates and the like, you can use Apache2's mod_rewrite. Just put the following lines in a .htaccess file in your webroot directory and all traffic (including deep links to subdirectories) gets diverted to the maintenance page. The second line sets an exception for your IP address, so you are the only visitor who is NOT redirected to the maintenance page:
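A minimal version of such a .htaccess could look like this (203.0.113.7 is a placeholder for your own IP address, and /maintenance.html is an assumed name for the maintenance page):

```apache
RewriteEngine On
RewriteCond %{REMOTE_ADDR} !^203\.0\.113\.7$
RewriteCond %{REQUEST_URI} !^/maintenance\.html$
RewriteRule ^ /maintenance.html [R=302,L]

# The second RewriteCond prevents a redirect loop on the
# maintenance page itself; 302 keeps search engines from
# permanently replacing your URLs with the maintenance page.
```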

I was having trouble with a server running Debian 4.0 (etch). Using the standard sources in /etc/apt/sources.list, the supported PHP5 version was 5.2.0-8+etch13, which contained a very annoying bug for my application.

A script that runs daily – let's call it the Importer – regularly exited at random with a "Fatal error: Out of memory (allocated 12320768) (tried to allocate 2851436 bytes) in …" and I had to restart it manually nearly every morning. I had…

So I took a look at the PHP5 changelog to find potentially fixed bugs in newer releases. Bug #39438 described exactly my problem, so a simple upgrade would help me. But it did not work with 'apt-get upgrade' or 'apt-get install php5=5.2.8', since the highest version in the apt source I used was the one that I already had: 5.2.0-8+etch13, issued in November 2006… (pretty ancient)

Finally it was this page that had the information we needed: an alternative apt source

After convincing ourselves that dotdeb was a trustworthy source, we first tried it on our dev system with 'apt-get update; apt-get upgrade;'. At this point I was once more glad to have written so many unit tests. They all passed and everything looked good.
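For reference, the entries added to /etc/apt/sources.list looked roughly like this (the exact dotdeb suite name may differ, so check their instructions first):

```
deb http://packages.dotdeb.org stable all
deb-src http://packages.dotdeb.org stable all
```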

I have just discovered an issue that occurs when you store serialized objects in MySQL.

At first I used the field type TEXT. If somebody then edited another field of such a record with phpMyAdmin and pushed the save button, the stored object in that record got corrupted and could no longer be deserialized and instantiated.

We now use the field type BLOB instead, so phpMyAdmin does not give you the chance to edit the field content inline. And it works.
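A minimal sketch of the change, assuming a hypothetical table my_records with a column serialized_object:

```sql
-- Table and column names are made up for illustration.
-- Stored as BLOB, phpMyAdmin treats the value as binary
-- and no longer offers inline editing.
ALTER TABLE my_records MODIFY serialized_object BLOB;
```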

I have a custom CMS running for a customer, who can edit his events, news etc. in a typical admin area. Each item has an expiry date. Any call to a deep URL of an expired item (e.g. from Google search results) triggered a redirect to the custom 404 page like this:
header("Location: http://www.mycusweb.de/errors/404.php");
exit;

The 404-Page itself would then respond with its own 404 header and display a beautiful error page.

As I found out, this does NOT have the effect that I expected it to have, namely that the next Google bot crawling these URLs would drop them from the index because they end up on a proper 404 page via the redirect. The headers in sequence with the above redirect look like this:
GET /pages/events.php?id=123 HTTP/1.1
HTTP/1.x 302 Found
GET /errors/404.php HTTP/1.1
HTTP/1.x 404 Not Found

But wait, there is a 302. To verify this for your own redirects, you can use the Firefox add-on Live HTTP Headers.

The problem here is the use of a default redirect, which sends the status code '302 Found'. Like a '307 Temporary Redirect', this tells the bot that something has just temporarily moved. In order to 'tell' the bot that this page should no longer be indexed and should be deleted, you must use a '301 Moved Permanently', and you do it like this:
header("HTTP/1.1 301 Moved Permanently");
header("Location: http://www.mycusweb.de/errors/404.php");
exit;

If you’re interested in the HTTP status codes, you can find the RFCs here.

Once all of your redirects that should 'tell' bots to remove URLs from the search index use the 301 variant, you can use the Google removal tool to speed things up instead of waiting for the next crawl.

On the command line, if you close a console with a running job, you kill the job. This is different with the tool 'screen', where you can attach to and detach from a 'screen' session without terminating it. You can even start a job in a screen on one machine, detach, travel somewhere else and re-attach to it from another machine.

If you do not have screen yet, install it on your Debian box with: apt-get install screen

Since I am doing more and more on the command line, I noticed that I sometimes just wait for one task to finish before I can do the next step in a sequence towards a certain goal. What if a running task takes an estimated 3-8 hours and it is Friday afternoon?

In this case you can use the command-line tool 'at' and schedule something like 'at 1am tomorrow, do xyz'.

If you do not have at yet, install it on your Debian box with: apt-get install at

There are 3 commands available:

at <datetime> – Starts the scheduling dialog for a specific date and/or time at which to execute certain commands.

atq – Shows a list of already scheduled and pending jobs.

atrm <job-id> – Deletes pending jobs you would like to remove from the job-queue.

Case: What do you have to do to run a sequence of commands as root at a specific time?

Let's say it is Friday, 15:00 (and the server clock says so too), and tomorrow at 08:00 you would like to run the command

echo "Good morning!" > hello.txt

You would do the following:

Log in as root.

Type 'at 8:00' <enter>.

Since 08:00 today is already in the past, at assumes that you mean tomorrow. But you can use all sorts of time and date formats, for example 'at 08:00 01.06.2008'.

Now the prompt changes to at> and waits for the commands to be executed sequentially at the time you specified.

Now you type the first command to be executed: 'cd ~/stuff' <enter> – this controls where the next command runs, since it writes a file.

Now type your second command: 'echo "Good morning!" > hello.txt' <enter>

The prompt shows another at>. If you have more commands, add them; in our example there are only two. To finish, press Ctrl-D and you have your normal prompt back.

Type the command 'atq' to see the pending jobs in the queue. The job you just added shows its job ID in the first column.

On my network I have a Buffalo Terastation as a network share for backups. Today I figured out how to mount it on another Linux box (Debian Etch) so I can do file operations directly on my backup. The following worked for me:

Created dir /mnt/terastation2

Added the following line to /etc/fstab
//TERASTATION2/SHARE /mnt/terastation2 smbfs username=marco,password=,uid=marco,gid=businex,dmask=707,fmask=606 0 0

And executed
mount -a
as root.

If you only want to mount it once in a while, you can skip the fstab entry and do it directly (as root) via the command:
mount -t smbfs -o username=marco,password=,uid=marco,gid=businex,dmask=707,fmask=606 //TERASTATION2/SHARE /mnt/terastation2