best practices - documentation

The story is this: we are cleaning some stuff up and centralizing all accounts (LDAP via AD), and we are nuking all the local accounts on all the Unix machines except a couple of DR accounts. There has been pushback; a few folks are saying they need local accounts, etc., etc.

I need to find some "best practice" material that spells out why local accounts on hundreds of machines are bad.

I am under NDA, but we are a pretty security-aware type of shop. Part of the issue with local accounts is:

1. You have no idea who is provisioned where. Sure, you can run scripts to check, but this is messy.
2. When people leave or change job function, you need to remove that access locally on all those machines. Sure, you can script it, but again it's messy.
3. When new machines are provisioned, you're basically embedding all your "local" accounts in your image or install scripts.
4. By using centralized auth tools, you can control a lot, and keep just a few DR accounts or super-admin types of accounts.

When there are 500 servers and growing, managing local accounts sucks.
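Point 1 above, the "who is provisioned where" audit, can at least be sketched. A minimal example, assuming you gather each host's /etc/passwd yourself (e.g. over ssh) and that UID >= 1000 marks a human account; both are assumptions, so adjust for your site's UID policy:

```python
#!/usr/bin/env python3
"""Sketch of a per-host local-account audit.

Assumptions (not from the original post): /etc/passwd content is
collected out-of-band, and UID >= 1000 means "human" account.
"""

def local_accounts(passwd_text, min_uid=1000):
    """Return login names from passwd(5)-format text with uid >= min_uid."""
    names = []
    for line in passwd_text.splitlines():
        if not line or line.startswith("#"):
            continue
        fields = line.split(":")
        if len(fields) < 7:
            continue  # malformed line, skip it
        name, uid = fields[0], int(fields[2])
        if uid >= min_uid and name != "nobody":
            names.append(name)
    return names

if __name__ == "__main__":
    with open("/etc/passwd") as f:
        print(local_accounts(f.read()))
```

Run it against each host and diff the result against what the directory says should exist; anything left over is an unmanaged local account.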

Unnecessary local accounts should be avoided because they bring additional management overhead and are a potential security risk, especially if left unattended, and doubly especially if they have a local password set.

That being said, I'm not sure I agree with your approach to the issue. You have users who claim they need a local account for $reason. Don't go around looking for blanket reasons why local accounts are bad; rather, check whether $reason is actually a valid need, whether there is an alternative approach that doesn't require local accounts, how much effort that would be, etc.

Also, your criticisms 1-3 are actually a tooling issue. With a config management framework like Puppet, I can centrally assign local account creation and SSH public key deployment to nodes. Revoking an account from a system, or revoking a public key, can literally be done by changing the value of a parameter from 'present' to 'absent', with all systems purging the account on the next run.

With decent report filtering/retention (for example, keep all change reports for 6 months, and extract the security-relevant changes to keep a record of those indefinitely) and svn/git logs, you get very decent auditing too.
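The present/absent flip described above might look something like this in a Puppet manifest (a sketch only; the user name and key are placeholders):

```puppet
# Flip ensure to 'absent' and every node purges the account
# (and its home dir) on the next agent run.
user { 'jdoe':
  ensure     => present,
  managehome => true,
}

# Public key deployment is revoked the same way: ensure => absent.
ssh_authorized_key { 'jdoe@workstation':
  ensure => present,
  user   => 'jdoe',
  type   => 'ssh-ed25519',
  key    => 'AAAA...placeholder...',
}
```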

Actually, it's not my idea, though I agree with it. It's part of our new security policies, and we are being mandated to do it. I am just the "bad guy" who gets to implement it. So, they have network logons, a wheel group for root access, a jump box with root keys, and a password vault with all the root accounts.
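The wheel-group root access mentioned above could be the classic sudoers rule (a sketch, assuming sudo is the mechanism rather than su/pam_wheel):

```
# /etc/sudoers.d/wheel -- members of wheel may run any command as root
%wheel ALL=(ALL:ALL) ALL
```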

One option that may help you is to mount your home directories from a central server over NFS using autofs. We combine this with LDAP auth for some of our VMs. This way, we define in LDAP which hosts a user has access to, and when they log in, their home directory is mounted so all their SSH keys, scripts, files, etc. are available everywhere.

Backups are simplified because there is only ever one set of home dirs to worry about, and granting/revoking access is done at a single point.
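The autofs side of that setup can be sketched in two map files (the server name and export path here are placeholders, not from the post):

```
# /etc/auto.master -- hand /home over to the auto.home map
/home  /etc/auto.home  --timeout=300

# /etc/auto.home -- wildcard entry: mount <server>:/export/home/<user> on demand
*  -fstype=nfs4,rw  nfshome01:/export/home/&
```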

I'm partial to having a root account with a devilish password that is written down and locked away under key (plus kept in people's secure password storage so that it is readily available in emergencies), to be used only when the directory is down.