There is a directory A whose contents are changed frequently by other people.

I have made a personal directory B where I keep all the files that have ever been in A.

Currently I just occasionally run rsync to copy the files from A to B. However, I fear that some files will be added to A, and then removed from A again, before I get the chance to copy them over to B.

What is the best way to prevent this from occurring? Ideally, I'd like my current backup script to run every time the contents of A change.

Cool. Would I just put the above script in my bashrc so that it runs when I log in? Is there another way to have it running all the time?
– oadams Nov 7 '11 at 11:53

For login, yes: /etc/profile for system-wide, or .bash_profile for just your user. To run it after boot, it depends on your flavour of Unix/Linux: /etc/rc.local, /etc/rc.d/ or /etc/init.d/.
– jasonwryan Nov 7 '11 at 17:05

For larger directories you might want to consider using the --monitor switch (and piping its output to your loop instead); otherwise there is a lot of overhead from starting inotifywait over and over again.
– Tobias Kienzler Jan 10 '13 at 17:10

You can use fswatch, a portable tool that selects the appropriate event mechanism where available (Linux, Mac, *BSD) and falls back to stat(2) elsewhere. I did not write it, but I use it. It is open source (GNU GPL).
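The same loop with fswatch might look like this (placeholder paths; fswatch's -o option batches events and prints one count per batch):

```shell
# watch_with_fswatch: portable variant of the watcher. fswatch -o
# prints one line (an event count) per batch of changes, and each
# line triggers a sync.
watch_with_fswatch() {
    fswatch -o "$1" |
    while read -r _count; do
        rsync -a "$1"/ "$2"/
    done
}

# To run: watch_with_fswatch /path/to/A /path/to/B
```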