Copy a chmod binary from a similar system to root's home dir and run that. I think you could probably even compile it from source (probably somewhere in http://www.gnu.org/software/coreutils/) without a working chmod. If things are really messed up, mount the filesystem from a rescue disk, or attach the disk to another system and run chmod from there. Lots of ways.
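One concrete sketch of the "lots of ways" point (the /tmp path is illustrative): cp preserves the source file's mode bits, so you can copy any still-executable binary and then overwrite its contents with chmod's via cat, which leaves the mode untouched.

```shell
# Assumes /bin/chmod has lost its execute bit but is still readable
# (running as root it would be, even at mode 000).
cp /bin/ls /tmp/chmod.rescue        # inherits ls's executable mode bits
cat /bin/chmod > /tmp/chmod.rescue  # swap in chmod's contents; mode unchanged
test -x /tmp/chmod.rescue && echo "rescued chmod is executable"
```

On the broken box you'd then run `/tmp/chmod.rescue 755 /bin/chmod` as root to fix the original.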

Go after the dumbass that did it with a BFH now, worry about fixing it later.

(Aside from copying in the binary from another machine, pretty much *any* higher-level language will make the exact same chmod() syscall that the chmod binary does - perl, php, python, c, c++, ruby, java, whatever.)
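A minimal sketch of that point, demonstrated on a scratch copy so nothing system-wide is touched (the /tmp path and the 0755 mode are assumptions): the scripting language just issues the chmod(2) syscall itself.

```shell
cp /bin/chmod /tmp/chmod.broken
# simulate the damage, then repair it, via python's os.chmod wrapper
python3 -c 'import os, sys; os.chmod(sys.argv[1], 0o000)' /tmp/chmod.broken
python3 -c 'import os, sys; os.chmod(sys.argv[1], 0o755)' /tmp/chmod.broken
test -x /tmp/chmod.broken && echo "execute bit restored"
```

The perl equivalent of the repair line would be `perl -e 'chmod 0755, "/tmp/chmod.broken"'` - same syscall underneath.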

1. Usually "rootkit" means a hacked kernel, and if you've hacked the kernel, there's not much reason to do something as grossly obvious as making a userland binary like sshd immutable - it's actually counterproductive, because it makes it MORE obvious that the box is compromised.

2. chattr -i /bin/chmod to allow a reinstallation of coreutils to replace the (presumably hacked) binary, along with a jillion other potentially compromised ones.

3. As always, of course, the safest thing to do here is nuke from orbit and reload (data only!) from known good backups, checking all configs with the finest-toothed of combs.

Nobody ever sets bad permissions on one file, they do it recursively on a crap load of important stuff. The command below saved my rear once.

for p in $(rpm -qa); do rpm --setperms "$p"; done

There's a certain risk of accidentally leaving sensitive data, for example a private key, world-readable after such an incident.

We do have config management to enforce certain permissions/ownerships (and monitor deviations), and on some boxes also SELinux contexts, for sensitive data. Still, such a large-scale permission accident would likely lead to a nuke-and-reinstall recovery. The main reason is that nobody is interested in finding out at $ungodly-hour that a certain file neither had the permissions it ought to have nor supervision by either package or config management. Those cases are getting rarer as time goes by as we improve the setup, but I'm pretty sure we haven't caught them all yet.
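The "monitor deviations" idea can be sketched with nothing but find, stat, and diff - the paths and snapshot format below are hypothetical stand-ins, not any particular config management tool:

```shell
# Record mode, owner, and group for everything under a sensitive path,
# then diff a later snapshot against the baseline to spot drift.
snapshot() { find "$1" -exec stat -c '%a %U %G %n' {} + | sort; }

mkdir -p /tmp/permdemo
touch /tmp/permdemo/private.key
chmod 600 /tmp/permdemo/private.key            # intended tight mode

snapshot /tmp/permdemo > /tmp/perms.baseline
chmod 644 /tmp/permdemo/private.key            # simulate accidental loosening
snapshot /tmp/permdemo > /tmp/perms.now

diff /tmp/perms.baseline /tmp/perms.now || echo "permission drift detected"
```

A real setup would alert on the diff (or auto-remediate) rather than just echo, but the snapshot-and-compare loop is the core of it.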

This came up in an interview I had once, and I've heard from others that it's asked in a lot of interviews. I said to mount a rescue CD and fix it that way, to which whoever was interviewing me nodded in acceptance. But I want to say:

Job: The chmod binary was set to 000, how do you fix pls?
You: I don't. The server has been compromised. Permissions are the least of my problems.
Job: No, how u fix?
You: I shut the machine down and mount the entire hard drive read-only. Determine what data can be saved. Did this server have backups?

I know a lot of these are hypothetical situations, but they annoy me. You might as well ask something completely ridiculous, like:

1. Due to a mass renaming gone wrong, every file on your web server has been renamed to a random hex number. For instance, chmod is now named 0x452ec1, index.html is 0xee396a, and so on. This is a remote ssh login, and you have no rescue CD. This server is allowed 2 minutes/year of downtime. Using this pen and Post-It pad, write a bash script that would determine what the real file names were via inodes, and change them back to their original state.

2. You have written an SQL query that accidentally deleted your primary keys. How do you restore the live data using only mysqldump, sed, and bc?

3. Your power supply has caught fire, but miraculously, the server is still running. Show how you would replace the power supply without any downtime, using only Venn diagrams.