Blog

There are some good resources guiding people to make shell scripts for creating incremental backups using rsync, but it’s still a lot to wade through if you’re not very familiar with some of the ins and outs of *nix terminal commands.

** NOTE: I’m still testing these scripts to make sure they all work. In theory, and by all appearances, they do. Use at your own risk (you’ve been warned =) **

These two scripts will give you daily backups for a week, plus three weekly backups (rotated out of the daily set).

1) Make backups on a separate partition, on a separate disk, on a separate server. Don’t have the money for buying a new server? Well, your money can buy you more than you realize. You’ll do just fine to set up a PIII ~500MHz with a reasonable amount of RAM (512MB) and a new hard disk drive. 200GB drives are going for less than $100 these days, so you’ve got yourself a backup server!

Get on with it! Two scripts – one for the daily tasks, one for the weekly. Ok – four: one sets up your cron jobs (but it’s so easy it won’t really count) and one is the exclude list. I’m using Bash for the examples, but T/CSH or any other scripting language should be fine with some modifications.

daily.sh
#!/bin/sh

##################################################
#needs to be run as root (to preserve permissions)
# NOTE: unlike the weekly script, which saves backups
# on the same day every week (e.g., every Sunday),
# this script makes backups that are X days old
# rather than indicating the day of the week they
# were made (daily.0 is made every day, whereas
# weekly.1 is made every Sunday).
##################################################

#the mount point of the drive used for your backups
BACKUP_DIR='/backup'

#mount the drive read-write so we can back up to it.
#you'll want to change the entry in your fstab so it's mounted read-only by default
/bin/mount -o remount,rw "$BACKUP_DIR"

#rotate the daily snapshots: the oldest (daily.6) falls off,
#and yesterday's daily.0 becomes daily.1 (the --link-dest target below)
/bin/rm -rf "$BACKUP_DIR/daily.6"
for i in 5 4 3 2 1 0; do
    [ -d "$BACKUP_DIR/daily.$i" ] && /bin/mv "$BACKUP_DIR/daily.$i" "$BACKUP_DIR/daily.$((i+1))"
done

#the good stuff. Note that you need to set up a public key and allow list between
#the two computers (will cover in a later post)
/usr/bin/rsync -e ssh -vcpogrt --numeric-ids --delete --exclude-from=/path/to/backup_exclude_list.txt --link-dest="$BACKUP_DIR/daily.1" root@serverIPaddress:/ "$BACKUP_DIR/daily.0" > /tmp/rsyncAll 2>&1

#send yourself an email with the messages (and any errors) returned by rsync.
mail -s "rsyncAll backupserver from mainserver" -c youremail@domain.ext root < /tmp/rsyncAll

#remount the backup drive read-only
/bin/mount -o remount,ro "$BACKUP_DIR"
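The public-key setup mentioned in the script’s comment (which I’ll cover in a later post) boils down to something like the following sketch. Run it on the backup server; the key path is a placeholder for demonstration – normally you’d accept the default ~/.ssh/id_rsa – and serverIPaddress stands in for your main server:

```shell
# Generate a passwordless RSA key pair (demo path; normally just press
# Enter at the prompts to accept ~/.ssh/id_rsa)
ssh-keygen -t rsa -N "" -f /tmp/backup_demo_key

# Then copy the public half to the main server so rsync-over-ssh can
# log in without a password, e.g.:
# ssh-copy-id -i /tmp/backup_demo_key.pub root@serverIPaddress
```

Since root logins are involved, you’ll also want to restrict what that key can do on the main server (the "allow list" part), but that’s for the later post.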

exclude list
You’ll need this for sure on your system. I haven’t quite figured out all the things that need to be excluded, but I’ve put most of the important ones here. Note that I’m using Fedora (FC4), so your mileage may vary. Make sure to put each entry on its own line.
/proc/
/tmp/
/mnt/
/backup/
/media/
/media/backup/
/sys/devices/pci*

weekly.sh
The weekly script is really easy – just some copying and moving.

#!/bin/sh
####################################################
#needs to be run as root (to preserve permissions)
# Run this script BEFORE the daily script ...
# Weekly backups are taken on the same day every week
# and are NOT weekly from the current day. Note that
# distinction.
####################################################
BACKUP_DIR='/backup'

#mount the drive read-write so we can back up to it.
/bin/mount -o remount,rw "$BACKUP_DIR"

#rotate the weekly snapshots: the oldest (weekly.3) falls off,
#and the oldest daily snapshot becomes weekly.1
/bin/rm -rf "$BACKUP_DIR/weekly.3"
[ -d "$BACKUP_DIR/weekly.2" ] && /bin/mv "$BACKUP_DIR/weekly.2" "$BACKUP_DIR/weekly.3"
[ -d "$BACKUP_DIR/weekly.1" ] && /bin/mv "$BACKUP_DIR/weekly.1" "$BACKUP_DIR/weekly.2"
[ -d "$BACKUP_DIR/daily.6" ] && /bin/mv "$BACKUP_DIR/daily.6" "$BACKUP_DIR/weekly.1"

#remount the backup drive read-only
/bin/mount -o remount,ro "$BACKUP_DIR"
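crontab file
The crontab file itself might look something like this – the script paths are hypothetical, so adjust them to wherever you saved the scripts:

```
#min hour day-of-month month day-of-week  command
0    0    *            *     0            /root/scripts/weekly.sh
30   0    *            *     *            /root/scripts/daily.sh
```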

All this means is the following: every Sunday at midnight, run the weekly script (you’ll want to do this BEFORE your daily script runs, so that daily.6 gets moved over to the weekly set before it gets overwritten by the job that runs thirty minutes later). The numbers and stars tell cron when to run the scripts; a quick Google search will turn up a LOT more detailed information.

Install it by typing into the terminal (as root): crontab /path/to/crontabfile.txt
You can verify your scheduled jobs by running: crontab -l

The new DFL Beta site is up. It’s finally at a place where the other collaborators can begin adding content without me having to be an intermediary. It’s a little sparse in the fish department right now, but that’s up to our friends down at SIO to fix for us – writing roughly two fish accounts per week. This is also the first version of the Beta that I’m actually proud of. It’s been tough meeting some of my boss’s imposed deadlines (like the arbitrary Groundhog Day one), but the app’s classes are developed well enough that I can now prototype new features fairly quickly.

There was some news last night about the first Mac OS X “virus” in the wild. First of all, it’s NOT a virus. It’s a Trojan horse, which requires the user to execute the program him/herself. It’s low-risk and probably won’t spread beyond the few people who’ve already been infected. The good news is that it appears to be broken and doesn’t seem to do anything malicious.

Here are a few tips you can use to protect yourself from this and other attacks in the future (adapted from an email I sent several of my Mac loving friends and colleagues):

Same as in the Windows world: If you don’t know what the file is, or where it came from, don’t open it.

Make sure your Mac OS X software is up to date.

Make sure you have a non-blank password on your user account.

In Safari, go to the preferences (Safari -> Preferences…) and click on the “General” tab (in Mac OS 10.4; it should be similar in previous versions). UNCHECK the box that says “Open ‘safe’ files after downloading”. This is big, folks. Make sure you’re the one opening downloads, not Safari. It’s a little less convenient to have to open your downloads folder and open the file yourself, but at least you have control over what gets opened and when.

Open the Finder Preferences (Finder -> Preferences…) and select the “Advanced” tab. Check the box “Show all file extensions.” It’s not as pretty, but you’ll be able to immediately spot something like Leap-A: a JPEG image file should end with .jpg.

Unless you know the program to be safe, don’t enter your password when an application requests it.

Keep regular backups just in case anything from this trojan or any other does something bad to your computer. I typically keep weekly backups to an external firewire hard disk drive.

Not news to some: I’ve been working on writing a CMS for my church. We considered some of the readily available FOSS solutions, but none really fit the bill in the ways that we need for one reason or another. So with this post I’m adding a Flood category to my blog.

It’s still in the nascent stages and, quite frankly, very rough and awkward in many cases, but it’s all part of a learning experience. It will run on PHP 5/MySQL 4.1 with Smarty templating, and it’s object-oriented. First strike against us is that this is the first or second project any of us PHP guys have ever done in OOP, so we’re learning a lot and making a ton of mistakes, but it’s all worthwhile. I’m beginning to realize the version 1.0 release is going to be a lot rougher and less complete than we were hoping, but given the timescale for a part-time, all-volunteer force (roughly six months for two programmers), it’s expected. Luckily we don’t need to focus on design or content, but we’re still going to be fairly busy.

In this first release (slated for May ’06): articles with parent/child relationships, a versioning system, public/private content, and a basic calendar system (for event scheduling and notification).
Second release: an email blast system, event registration, and possibly a survey/poll application.
Somewhere in there will probably be a LOT of code refactoring – especially as we get more in-tune with common OOP practices and principles.

Future posts will cover some of the ups and downs of rolling your own CMS, some tips, and advice.

Earlier this week I had the unpleasant task of trying to fix/update the NVidia drivers on my Fedora workstation. Video was working fine. What was giving me problems was the hardware 3D support, which is needed by the uber expensive visualization app I use for data analysis.

I’ve never had a pleasant, much less easy, time upgrading the video drivers on this machine. It started with the RedHat Enterprise Linux distribution – better known as “Always at least two full versions behind current,” or, “This software base is at least two years behind other distributions.” Dell suggests you use their “custom” NVidia drivers. NVidia says to use theirs. Neither worked very well, though Dell’s seemed to be more problematic than it was worth.

Fast forward to Fedora. This link, though it’s probably the best option, didn’t help – the instructions didn’t work because Yum couldn’t resolve some pretty idiotic dependency issues with my kernel. Next option: the actual NVidia installer. To be honest, it didn’t work the first time around, which is why I tried the Yum route in the first place, but I thought I’d give it another shot.

Download the Linux NVidia driver install script.

Change your /etc/inittab to run level 3 (from 5 in the first line of executable code), then reboot (or just log out, hit Control-Alt-F1, and as root run telinit 3).
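Concretely, the line to change is the initdefault entry; the middle field is the default run level:

```
# before:
id:5:initdefault:
# after:
id:3:initdefault:
```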

As root, run the script. At this point, if I recall correctly, you’d actually be able to telinit 5 and use full 3D capabilities (maybe a modprobe nvidia would be required first). I ran into a problem where the NVidia FTP site was down/not responding, so the installer couldn’t download a pre-compiled module for the video drivers. Just answer “Yes” when it asks if you want it to compile one for you.

The problem: if you reboot (run level 5), no more NVidia driver. You’ll see an error at boot that the .ko file can’t be found. So something is happening between the install and the boot process. glxgears won’t work – nothing will – which is to be expected.

To make a long story short: look in your kernel’s drivers directory. The script doesn’t put the .ko files in the correct places – at least not initially. What you need to do is make sure that the nvidia.ko file can be found, one way or another, in both the …/kernel/drivers/ and …/kernel/drivers/video/ directories… yes, both of them.
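In other words, something like the sketch below. The exact paths are from memory and the kernel version will vary, so treat it as a guide, not gospel:

```shell
KVER=$(uname -r)
KDIR="/lib/modules/$KVER/kernel/drivers"

# The installer left nvidia.ko in only one of the two places; make sure
# a copy exists in both directories the boot process searches:
if [ -f "$KDIR/video/nvidia.ko" ] && [ ! -f "$KDIR/nvidia.ko" ]; then
    cp "$KDIR/video/nvidia.ko" "$KDIR/nvidia.ko"
fi
if [ -f "$KDIR/nvidia.ko" ] && [ ! -f "$KDIR/video/nvidia.ko" ]; then
    mkdir -p "$KDIR/video"
    cp "$KDIR/nvidia.ko" "$KDIR/video/nvidia.ko"
fi

# Rebuild the module dependency map so modprobe can find it at boot
/sbin/depmod -a 2>/dev/null || true
```

A symlink instead of a copy should work just as well, if you prefer not to duplicate the file.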

Open /etc/inittab as root and change the run level back to 5. Save and close.

Reboot.

Here’s another link that ultimately wasn’t too useful to me, but gave me some ideas: Here.

I listened to the php|architect Pro PHP podcast, Jan 26 episode, the other day. In it there was an interview with Andi Gutmans about PHP6 and the “new” PHP Collaboration Project.

There are three parts to this project:
1) A PHP Framework
2) Eclipse IDE integration for PHP (plugin)
3) Best practices.

A few points: the PHP Framework is NOT based on the Ruby on Rails model. Rather, it will borrow some of the best features from other languages/frameworks, including Java/J2EE, .NET, and others. Though a mature framework may be some time away, it’ll definitely be a very good thing for PHP and PHP developers, especially for agile web development.

First off, I don’t want people to think I’m some kind of web genius or something. I’ve been doing web design in my spare time since 1995 (save my college years, when I really didn’t have the time), and it hasn’t been until recently that I’ve put much more effort into developing my skills. My designs are getting better and I’m very pleased with my self-taught PHP skills.

That said, I hadn’t really had much chance to dabble with CSS until this past fall. I’d used it many times for simple styling, but never for serious layout. After struggling with some layouts ImageReady prepared from a mockup, I decided it was time to leave the table behind (except for the exact sliced-image layout). I must say that, given the same amount of time trying to get both methods to work, I was considerably farther along using CSS than I was using tables. Even better, the code is so much cleaner – as expected for a CSS project. I love CSS!

Feb 2 is the tentative due date for the DFL rewrite. Actually it’s the first phase of the upgrade, which will bring it up to speed with the current site in many ways. Additional functionality will be added every few weeks after that.

One of the biggest new features in this update will be the addition of the MRI dataset viewer/rendering application. Let’s just say this is groundbreaking work – the first time anything like this has been done successfully on the web. My coworker, German (yes, that’s his real name), has been working very hard on this application.