Today I want to post something about the nice little tool fail2ban. As you probably know, fail2ban can be used to block those annoying brute force attacks against your servers. Unlike the equally popular and useful tool DenyHosts, it can protect services other than SSH as well (e.g. HTML login pages served by Apache). The working mechanism also differs from that of DenyHosts: fail2ban uses iptables instead of the BSD-style hosts.deny file to block the offenders. Installation is quite simple; on Debian, for example, just install it through apt and you're good to go, even with the default config.

One thing I was missing was the option to ban IPs forever. You can basically do this by setting bantime to a negative value, but as soon as the iptables rules are reloaded (e.g. by restarting the fail2ban service or the whole system), the entries for the permanently banned IPs are gone.
To overcome this, I made some minor changes to the actions fail2ban executes on start-up and on banning.

IMPORTANT: I strongly advise you to be careful while playing around with automated banning tools, especially if you can't reach your server physically. Make sure you have something useful set in the ignoreip option in the [DEFAULT] section (your current IP address) so you don't accidentally lock yourself out of the system (really nasty with permanent banning active…).
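For example, in jail.local (the second address is just a placeholder, use your own):

[DEFAULT]
ignoreip = 127.0.0.1 203.0.113.42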

First, check the banaction currently in use (you need that to modify the correct action file afterwards) in /etc/fail2ban/jail.local:

#
# ACTIONS
#
...
banaction = iptables-multiport
...

Open up the corresponding action file, /etc/fail2ban/action.d/iptables-multiport.conf, and modify it according to the sample below (the changes are under the # Persistent banning of IPs comment).
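As a minimal sketch of the idea (assuming a fail2ban 0.8.x style iptables-multiport.conf and a blacklist file at /etc/fail2ban/ip.blacklist, which is a name of my own choosing), the relevant parts could look like this:

actionstart = iptables -N fail2ban-<name>
              iptables -A fail2ban-<name> -j RETURN
              iptables -I <chain> -p <protocol> -m multiport --dports <port> -j fail2ban-<name>
              # Persistent banning of IPs: re-apply all previously banned IPs on start-up
              cat /etc/fail2ban/ip.blacklist 2> /dev/null | while read IP; do iptables -I fail2ban-<name> 1 -s $IP -j DROP; done

actionban = iptables -I fail2ban-<name> 1 -s <ip> -j DROP
            # Persistent banning of IPs: remember the banned IP across restarts
            echo <ip> >> /etc/fail2ban/ip.blacklist

With this in place, every ban gets appended to the blacklist file, and actionstart re-bans everything listed in it whenever the jail is (re)started.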

Recently, while fiddling around with my home network, I got really tired of my old Netgear WiFi router and its limited functionality. After finding out that the revision I have doesn't allow running any custom firmware (dd-wrt or Tomato), I decided to look for something more open (and fun). I had already used IPCop, so I started from there, thinking about building a small computer to run it. But having to dedicate a computer solely to being a home router/firewall sucks a bit… So after some further web research I bumped into the ALIX boards. Basically, they're fully featured PCs on a single board that let you hook up a CF card and run an OS from it. Many people successfully run the open source, FreeBSD-based firewall distro pfSense (an offspring of the m0n0wall firewall distro, which by the way also runs on ALIX systems) on them, so I decided to give it a shot. I placed an order for the required hardware on www.pcengines.ch (see “My system”) and after two days everything arrived. Building the system was good fun and after 15 minutes everything was up and running.

My system

1x ALIX.2D2 system board

1x Enclosure 2 LAN, black, USB

1x AC adapter 18V

2x Cable I-PEX -> reverse SMA

2x Antenna reverse SMA

1x Compex WLM54SAG23 miniPCI card

1x SanDisk ULTRA Compact Flash 4 GB

Here you can see some building steps and a screenshot of the pfSense dashboard.

Before you install anything, make sure your board has an up-to-date BIOS installed. To do so, connect the ALIX to your computer using a null modem cable (with a serial-to-USB adapter if needed). I used minicom on my Ubuntu machine with the settings 38400 8N1, without flow control. The most current BIOS as I'm writing this article is 0.99h. If you have an older version installed, you should upgrade it (check the ALIX manual).
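For reference, a connection along these lines should work (the serial device name is only an assumption; adjust it to your adapter):

minicom -b 38400 -D /dev/ttyUSB0

8N1 is minicom's default character framing; hardware flow control can be switched off in the serial port setup (minicom -s) if it isn't already.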

Even though Active Directory communication through a NATed (and port-forwarded) interface is not officially supported by MS, there is a way to do it. I stumbled upon this issue again after forgetting about it for quite some time (I solved it with a nasty hack in the first place – keyword: read-only DNS entries).

Situation:

[DC1]------------>[NATed interface]------------>[DC2]

DC1 addresses DC2 by the address of the NAT interface

CLIENTS address DC2 by its real address

Problem
DC2 updates its DNS record with its current IP address (real address)
DC1 can't reach DC2 through its real IP; instead, it needs the address of the NAT interface.

Solution
Add the following registry key on DC2 to force it to add both its real and its NATed IP to its host DNS records.

The nice thing is that the DNS server serves the address of DC2 that is suitable for the requesting host. If the host is on the same network as DC2, it gets the real IP; if it's on the other side of the NATed interface, it gets the NAT interface's address.

Assumption
The file server is joined to an Active Directory domain through Winbind.

Issue
SMB/filesystem permissions seem not to apply if a folder is owned by a local group and the domain users are members of that group.
The observable effect is an “Access denied” message when trying to access the SMB share from a Windows machine with a domain user, even though the same domain user can access the respective folder through SSH.
A common scenario is a file server that was recently integrated into a domain while there are still local, non-domain users working on it.
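A quick way to see the mismatch is to compare the Unix-side group membership with the SMB-side access (the group and user names below are purely hypothetical):

# Does the local group list the domain user?
getent group projects
# Which groups end up in the domain user's token on the file server?
id 'DOMAIN\jdoe'
# Access test over SMB, directly on the server
smbclient //localhost/share -U 'DOMAIN\jdoe'

If id shows the group but the SMB access still fails, you are looking at exactly the situation described above.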

Install the Zabbix monitoring agent binaries
Installing the Zabbix agent is quite simple; you could try the Red Hat RPMs… I tried the generic Linux 2.6.x binaries and it worked.
The only thing you have to consider is that the ESX console doesn't come with wget, so you will probably have to SCP the rpm package to your ESX server.
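For example (file name and host name are just placeholders):

scp zabbix-agent-1.8.rpm root@esx01:/tmp/
ssh root@esx01 'rpm -ivh /tmp/zabbix-agent-1.8.rpm'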

Create a Firewall rule for the in- and outbound monitoring ports used by Zabbix
There are two ways of doing that:

Issuing the following commands on the ESX command console – nice, but annoying for more than two ESXes:

esxcfg-firewall -openPort 10050,tcp,in,zabbixClient
esxcfg-firewall -openPort 10051,tcp,out,zabbixServer

Or creating an XML file which holds the definition of the rule, which later allows more convenient handling (activating or deactivating) of the rule through the vSphere Client GUI – neat for larger farms of ESX servers.

Here is what you need to do to implement the second option (works for ESX 4):

Connect to the ESX console and create a new XML file in /etc/vmware/firewall called zabbixMonitoring.xml
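As a rough sketch, a custom service definition for classic ESX usually looks something like the following (treat the exact element names as an assumption and compare with the existing files in /etc/vmware/firewall):

<ConfigRoot>
  <service>
    <id>zabbixMonitoring</id>
    <rule id='0000'>
      <direction>inbound</direction>
      <protocol>tcp</protocol>
      <port type='dst'>10050</port>
    </rule>
    <rule id='0001'>
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <port type='dst'>10051</port>
    </rule>
  </service>
</ConfigRoot>

After restarting the management agents (or the host), the rule should show up under Configuration -> Security Profile in the vSphere Client, where it can be enabled or disabled per host.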

Many of today’s businesses rely heavily on their application servers. The times of simple file shares and single-document-based processes are over, and with them goes simple file copying as a backup method.

In this article I want to describe a method to back up the two most common components of a modern application service: the filesystem and the database.
No matter how you solve your backup, the approach should always make sure that the database is consistent and that the filesystem is in sync with the database state.

The following two scripts are deployed as a pre- and a post-backup script. The pre-backup script stops the application, dumps the database contents, creates an LVM snapshot and restarts the application. The post-backup script removes the snapshot.

The fileset for the backup application would then look like this:
Database dumps: /dbdump
Filesystem snapshot: /volume-snapshot
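A minimal sketch of such a pair of scripts, assuming a MySQL database, a logical volume /dev/vg0/appdata and an init script called appserver (all of these names are assumptions):

#!/bin/bash
# pre-backup.sh – stop the application, dump the DB, snapshot the volume, restart
/etc/init.d/appserver stop
mysqldump --all-databases > /dbdump/appserver.sql
lvcreate --size 5G --snapshot --name appdata-snap /dev/vg0/appdata
mkdir -p /volume-snapshot
mount /dev/vg0/appdata-snap /volume-snapshot
/etc/init.d/appserver start

#!/bin/bash
# post-backup.sh – remove the snapshot after the backup job has finished
umount /volume-snapshot
lvremove -f /dev/vg0/appdata-snap

Note that the application is only down for the dump and the snapshot creation; the actual file copy runs against the snapshot while the service is already back online.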

One drawback of this method is that the service has to be stopped in order to get a consistent state of the data. If the service has to be online 24/7, you would have to consider clustering (you need to come up with something to cover unplanned downtimes anyway).

Here is a small excerpt showing how to configure the pre- and post-backup scripts with the open source backup software Bacula. If you use some other backup software, I assume you can click your way through its GUI yourself :)
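A sketch of the relevant Bacula director directives, with made-up resource names and script paths matching the sketch above:

Job {
  Name = "appserver-backup"
  JobDefs = "DefaultJob"
  FileSet = "appserver-fileset"
  ClientRunBeforeJob = "/usr/local/sbin/pre-backup.sh"
  ClientRunAfterJob = "/usr/local/sbin/post-backup.sh"
}

FileSet {
  Name = "appserver-fileset"
  Include {
    Options {
      signature = MD5
    }
    File = /dbdump
    File = /volume-snapshot
  }
}

The two ClientRun directives make the file daemon on the application server execute the scripts right before and right after the job, so the fileset above always sees a fresh dump and a mounted snapshot.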

I assume you use LVM volumes for your XEN guests. I'm not going to use “xm migrate” here; the method works by dd'ing the LVM volume over to the new Dom0, so make sure you have a fitting LVM volume in place on your destination system.
I recommend stopping the machine you're going to move (or consider creating an LVM snapshot). If you know nothing will change, you can try it with the running machine (I did this once, and it resulted in an fsck upon boot, but without any further problems).

With this one you can dd the LVM volume to the new host:

dd if=/dev/x bs=1M | ssh username@remote-server "dd of=/dev/y bs=1M"

To check the status of the copy job, open a new console and issue the following (note: the USR1 signal makes dd print some stats):
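One way to do this (a sketch, assuming a single dd process is running):

kill -USR1 $(pgrep -x dd)

dd then prints the amount of data copied so far on the console where it is running.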

If you work with virtual machines, you have most likely already played around with snapshots. It's a really handy feature that lets you roll back to an earlier stage in the lifetime of a system, just in case something goes wrong. Over the extended lifetime of some VMs, quite a number of snapshots can accumulate and noticeably bloat the folder of the VM. One might think you can just delete the old snaps through SSH console access and the sky is blue again…?

If you just delete the old files through the SSH console, you might run into some serious pain. The proper way is to merge the snapshots back into the vmdk. In the vSphere Client this is done via “Right-click on VM -> Snapshot -> Snapshot Manager -> Delete all”. This is also where the trouble can start in case you run out of storage: during the merge, each snapshot's delta file is consolidated into its parent, which temporarily needs additional space on the datastore.

So if you're really tight on disk space, you might try deleting the snapshots one by one instead of using the “Delete all” option, starting with the newest.

If you have messed up totally and can't delete the snapshots, a last resort could be to attach a hard drive to your physical system (e.g. USB, eSATA, you name it…) and use VMware Converter to clone the messed-up VM away into a clean vmdk.

The conclusion here is to use snapshots carefully and to merge them proactively, avoiding having too many system states flying around.