Open source software security

Protecting Your LAMP Site with a Robots.txt Honeypot

One standard form of reconnaissance used by malicious attackers is to scan a target website for a robots.txt file. The robots.txt file is designed to provide instructions to spiders, or web crawlers, about a site's structure and, more importantly, to specify which pages and directories a spider should not crawl. These files are often used to keep spiders out of sensitive areas of a website, such as administrative interfaces, so that search engines don't index the existence of such pages and functionality. It is precisely for this reason that a malicious attacker will look at a robots.txt file: it often provides a roadmap to sensitive data and administrative interfaces.

Knowing that malicious attackers might read your robots.txt file and explore the listings there allows you to employ a few defensive techniques, or at least some early warning measures. One possibility is to simply waste an attacker's time. For instance, if your site has an administrative interface at /admin, you might list a couple hundred non-existent sub-directories and slip /admin into the list near the middle or end. This would prove frustrating for an attacker looking through the robots.txt entries by hand. If an attacker is using an automated tool, however, they likely won't be slowed down by false entries in the robots.txt file.

The system I'm describing can be implemented in a number of ways, but the basic idea is the same: you fill your robots.txt file with numerous false entries, and each false entry leads to a server response that triggers a blacklisting of the offending IP address. This means that real subdirectories and files can still safely be listed in robots.txt, while checking every entry becomes prohibitively expensive for an attacker.

In principle the system functions in a fairly straightforward manner. Assume we have an administrative login page at /admin that we want to hide from attackers. We create a robots.txt file that contains the following entries:
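As a minimal illustration, a honeypot robots.txt might look like this (every directory name other than /admin is an invented trap entry; a production file would contain many more). With seven traps listed ahead of /admin and a ten-minute ban per trap, a sequential scan costs at least 70 minutes:

```
User-agent: *
Disallow: /backup
Disallow: /cgi-bin_old
Disallow: /credentials
Disallow: /database
Disallow: /install
Disallow: /old_site
Disallow: /passwords
Disallow: /admin
```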

Assuming someone requests any of these pages (except /admin) their IP address is added to the firewall deny listing for a set amount of time. We don't want to permanently ban IP addresses because this then creates a denial of service opportunity for an attacker who could spoof legitimate IP addresses in requests for bad pages. A ten minute ban is usually sufficient.

Using this method and the robots.txt file above, an attacker with a static IP address must spend at least 70 minutes to discover the /admin part of the site. This is a resource exhaustion defense. Of course, a capable attacker will use some mechanism to shift their IP address, so it helps to have quite a few listings in the robots.txt honeypot (a dozen or so is clearly insufficient to slow down a determined attacker).

Now, to implement this functionality we need to do two things. The first is to create a custom script that adds offenders to our blacklist. For the purposes of this example we'll assume a LAMP based site, so we need a script that adds offenders to our iptables rules. We'll put all of the scripts for this task in /var/www/robots_honeypot; this directory and the scripts should be writable and executable by the web server. In its simplest version, the IP blocking script looks like:
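A sketch of such a script, here called ban_ip.sh (the script name is an assumption, and the iptables command and log path are made overridable so the defaults below are illustrative):

```shell
#!/bin/bash
# ban_ip.sh: firewall an offending IP address and log the ban time.
# IPTABLES and LOGFILE can be overridden; the defaults match the article.
IPTABLES=${IPTABLES:-"sudo /sbin/iptables"}
LOGFILE=${LOGFILE:-/var/www/robots_honeypot/robots_honeypot.log}

ban_ip() {
    local ip="$1"
    # Accept only a dotted-quad address; reject anything else.
    if ! printf '%s' "$ip" | grep -Eq '^[0-9]{1,3}(\.[0-9]{1,3}){3}$'; then
        echo "usage: ban_ip <ip-address>" >&2
        return 1
    fi
    # Insert a DROP rule at the top of the INPUT chain.
    $IPTABLES -I INPUT -s "$ip" -j DROP
    # Record the ban time (epoch seconds) so a cron job can expire it later.
    echo "$(date +%s) $ip" >> "$LOGFILE"
}

# When invoked as a script, ban the address given on the command line.
if [ -n "${1:-}" ]; then
    ban_ip "$1"
fi
```

Validating the argument before it reaches iptables matters here, since the address ultimately comes from an untrusted request.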

When called with an IP address as an argument, the script drops traffic from that address at the firewall. This script doesn't have any capability to de-list an IP address, so you'll have to implement another mechanism for that. The easiest way is to log when IP addresses are added to the firewall and then run another script via cron that cleans them out. The cleanup script is fairly simple; it reads and manipulates log files in /var/www/robots_honeypot.
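A sketch of such a cleanup script, under the same assumptions as the blocking script (the iptables command, file paths, and ban duration are configurable, so the defaults here are illustrative):

```shell
#!/bin/bash
# remove_ip.sh: expire bans older than BAN_SECONDS and archive them.
IPTABLES=${IPTABLES:-"sudo /sbin/iptables"}
LOGFILE=${LOGFILE:-/var/www/robots_honeypot/robots_honeypot.log}
EXPIRED=${EXPIRED:-$LOGFILE.expired}
BAN_SECONDS=${BAN_SECONDS:-600}

expire_bans() {
    local now remaining stamp ip
    [ -f "$LOGFILE" ] || return 0
    now=$(date +%s)
    remaining=$(mktemp)
    # Each log line is "<epoch-seconds> <ip-address>", written by the ban script.
    while read -r stamp ip; do
        if [ $((now - stamp)) -ge "$BAN_SECONDS" ]; then
            # Ban has expired: delete the DROP rule and archive the entry.
            $IPTABLES -D INPUT -s "$ip" -j DROP
            echo "$stamp $ip" >> "$EXPIRED"
        else
            # Ban still active: keep the entry in the live log.
            echo "$stamp $ip" >> "$remaining"
        fi
    done < "$LOGFILE"
    mv "$remaining" "$LOGFILE"
}

expire_bans
```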

The script checks each logged entry and, once 600 seconds (ten minutes) have elapsed, removes the listing from /var/www/robots_honeypot/robots_honeypot.log and places it in /var/www/robots_honeypot/robots_honeypot.log.expired as a permanent record. There are a number of other ways you could keep a permanent record of who has tripped the robots.txt honeypot, however.

Assuming all of these scripts are in /var/www/robots_honeypot we can schedule a cron job to take care of them. To do this just add the following to the appropriate crontab:

0,10,20,30,40,50 * * * * /var/www/robots_honeypot/remove_ip.sh

The final step is a trigger on the entries listed in the robots.txt file. This is a little trickier and relies on some PHP as well as a custom 404 page that we can use to catch any and all bad requests. You will have to consult the documentation for your version of Apache to find the particular modifications needed to serve a custom 404 error page. For the purposes of our example, let's say we change our 404 error page so that it points to /var/www/error/custom_404.php. We then implement the following script as custom_404.php:
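A minimal sketch of such a handler. The trap list file (trap_list.txt), the admin address, and the helper script name (ban_ip.sh) are assumptions for illustration; in Apache, the ErrorDocument directive (for example, ErrorDocument 404 /error/custom_404.php in the site configuration) is the usual way to route bad requests to this script:

```php
<?php
// custom_404.php -- hypothetical trap handler; file paths, the trap list,
// and the admin address are illustrative assumptions.
$honeypot_dir = '/var/www/robots_honeypot';
$offender  = $_SERVER['REMOTE_ADDR'];
$requested = $_SERVER['REQUEST_URI'];

// trap_list.txt holds the false robots.txt entries (one per line),
// so real pages like /admin never trigger a ban.
$trap_file = $honeypot_dir . '/trap_list.txt';
$traps = is_readable($trap_file)
    ? file($trap_file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES)
    : array();

foreach ($traps as $trap) {
    if (strpos($requested, $trap) === 0) {
        // ban_ip.sh inserts the iptables DROP rule (via sudo) and logs the ban.
        shell_exec($honeypot_dir . '/ban_ip.sh ' . escapeshellarg($offender));
        mail('admin@example.com', 'robots.txt honeypot tripped',
             "Banned $offender after a request for $requested");
        break;
    }
}

// Always answer with an ordinary-looking 404 page.
header('HTTP/1.1 404 Not Found');
echo '<html><body><h1>404 Not Found</h1></body></html>';
```

Note that the requested URI is never passed to the shell; only the validated, escaped client address reaches the helper script.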

This script adds the offender to the blacklist and log file and emails the administrator about the offense.

Once we're done with this we're nearly set. The last hassle is to make sure the security permissions are in place for this script to run. In order to do this we'll need to utilize sudo, so that the webserver (in this case the apache account) can make changes to iptables. To do this we have to add the following line to our /etc/sudoers file:

apache localhost = NOPASSWD:/sbin/iptables

This will allow the apache account to issue iptables rules without a password. Be aware that doing this could introduce certain security concerns that you should carefully examine before implementing this solution. For instance, if you use this method and your web server is compromised, then an attacker could manipulate your firewall!

Once all of this is in place your blacklist should be up and running. Now when attackers attempt to scan the contents of files listed in your robots.txt they face the possibility of being blacklisted.