21 Answers

It's really, really, really hard. It requires a very complete audit. If you're very sure the old person left something behind that'll go boom, or that will require their re-hire because they're the only one who can put the fire out, then it's time to assume you've been rooted by a hostile party. Treat it like a group of hackers came in and stole stuff, and you have to clean up after their mess. Because that's what it is.

Audit every account on every system to ensure it is associated with a specific entity.

Accounts that seem to be associated with systems, but that no one can account for, should be mistrusted.

Accounts that aren't associated with anything need to be purged (this needs to be done anyway, but it is especially important in this case)
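
Much of this account sweep can be scripted. A minimal sketch in Python, assuming a passwd(5)-style file and an HR roster you supply yourself (the names here are placeholders; for AD you'd work from a directory export instead):

```python
# Flag local accounts above the system-UID range that no one on the
# HR roster can vouch for. Sketch only -- adapt the data source for
# LDAP/AD exports, NIS maps, or application user tables.

def unowned_accounts(passwd_lines, roster, min_uid=1000):
    """Return login names with uid >= min_uid not present in roster."""
    suspects = []
    for line in passwd_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split(":")
        name, uid = fields[0], int(fields[2])
        if uid >= min_uid and name not in roster:
            suspects.append(name)
    return suspects

sample = [
    "root:x:0:0:root:/root:/bin/bash",
    "alice:x:1000:1000:Alice:/home/alice:/bin/bash",
    "svc_backup:x:1001:1001::/var/backups:/usr/sbin/nologin",
]
print(unowned_accounts(sample, {"alice"}))  # → ['svc_backup']
```

Run the equivalent against every system; anything it flags goes on the "mistrust until accounted for" list.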

Change any and all passwords they might conceivably have come into contact with.

This can be a real problem for utility accounts as those passwords tend to get hard-coded into things.

If they were a helpdesk type responding to end-user calls, assume they have the password of anyone they assisted.

If they had Enterprise Admin or Domain Admin to Active Directory, assume they grabbed a copy of the password hashes before they left. These can be cracked so fast now that a company-wide password change will need to be forced within days.

If they had root access to any *nix boxes assume they walked off with the password hashes.

Review all public-key SSH key usage to ensure their keys are purged, and audit if any private keys were exposed while you're at it.
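
That SSH sweep is scriptable too. A hedged sketch: walk the authorized_keys files you collect from each host and flag entries whose OpenSSH-style SHA256 fingerprint or comment matches the departed admin (the fingerprints and comment strings you feed it are your own; options-prefixed key lines need fuller parsing than this):

```python
# Flag authorized_keys entries matching known-bad fingerprints or comments.
import base64
import hashlib

def key_fingerprint(line):
    """OpenSSH-style SHA256 fingerprint of one key line, or None."""
    parts = line.split()
    if len(parts) < 2:
        return None
    try:
        blob = base64.b64decode(parts[1])
    except Exception:
        return None
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

def flag_keys(lines, bad_comments=(), bad_fingerprints=()):
    hits = []
    for n, raw in enumerate(lines, 1):
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        fp = key_fingerprint(line)
        parts = line.split()
        comment = parts[-1] if len(parts) >= 3 else ""
        if fp in bad_fingerprints or any(c in comment for c in bad_comments):
            hits.append((n, comment, fp))
    return hits
```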

If they had access to any telecom gear, change any router/switch/gateway/PBX passwords. This can be a really royal pain as this can involve significant outages.

Fully audit your perimeter security arrangements.

Ensure all firewall holes trace to known authorized devices and ports.

Ensure remote WAN links trace to fully employed people, and verify it. Especially wireless connections. You don't want them walking off with a company paid cell-modem or smart-phone. Contact all such users to ensure they have the right device.
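
For the firewall-hole part of that audit, even a rough pass over an `iptables-save` dump helps you spot ACCEPT rules nobody can explain. A sketch, assuming an allow-list of known-good source networks that you maintain by hand (real rule sets, with chains, interfaces, and nftables, need a real parser):

```python
# Flag ACCEPT rules whose source network is not on a known-good list.
# Only illustrates "every hole traces to something known".
import re

def unknown_accepts(rules, known_sources):
    suspects = []
    for rule in rules:
        if "-j ACCEPT" not in rule:
            continue
        m = re.search(r"-s\s+(\S+)", rule)
        src = m.group(1) if m else "0.0.0.0/0"  # no -s means "from anywhere"
        if src not in known_sources:
            suspects.append(rule)
    return suspects
```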

Fully audit internal privileged-access arrangements. These are things like SSH/VNC/RDP/DRAC/iLO/IPMI access to servers that general users don't have, or any access to sensitive systems like payroll.

Work with all external vendors and service providers to ensure contacts are correct.

Ensure they are eliminated from all contact and service lists. This should be done anyway after any departure, but is extra-important now.

Validate that all contacts are legitimate and have correct contact information; this is to find ghosts that can be impersonated.

Start hunting for logic bombs.

Check all automation (task schedulers, cron jobs, UPS call-out lists, or anything that runs on a schedule or is event-triggered) for signs of evil. By "All" I mean all. Check every single crontab. Check every single automated action in your monitoring system, including the probes themselves. Check every single Windows Task Scheduler; even workstations. Unless you work for the government in a highly sensitive area you won't be able to afford "all"; do as much as you can.
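
A first mechanical pass over the system cron directories can be scripted; a careful attacker won't match obvious patterns, so this narrows the reading list rather than replaces it. The indicator strings and the `/etc/cron*` glob are illustrative assumptions (user crontabs, at jobs, and systemd timers need their own passes):

```python
# Grep system cron directories for entries that smell like call-home
# jobs. The indicator list is illustrative, not exhaustive.
import glob

INDICATORS = ("curl", "wget", "nc ", "ncat", "base64", "/dev/tcp", "python -c")

def suspicious_cron_lines(path_glob="/etc/cron*/*"):
    hits = []
    for path in glob.glob(path_glob):
        try:
            with open(path, errors="replace") as f:
                for n, line in enumerate(f, 1):
                    line = line.strip()
                    if not line or line.startswith("#"):
                        continue
                    if any(tok in line for tok in INDICATORS):
                        hits.append((path, n, line))
        except OSError:
            continue  # directories, unreadable files
    return hits
```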

Validate key system binaries on every server to ensure they are what they should be. This is tricky, especially on Windows, and nearly impossible to do retroactively on one-off systems.
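
Where you do have a trusted baseline (package-manager verification like `rpm -V` or `dpkg --verify`, or a manifest you built before the departure), checking binaries against it is mechanical. A sketch with a hand-built SHA-256 manifest:

```python
# Compare files against a manifest of known-good SHA-256 hashes.
# The hard part is having a trustworthy manifest from *before* the
# departure; hashing a possibly-compromised box against itself proves nothing.
import hashlib

def sha256_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest):
    """manifest: {path: expected_hex}. Return paths that fail to match."""
    bad = []
    for path, expected in manifest.items():
        try:
            actual = sha256_file(path)
        except OSError:
            actual = "<missing>"
        if actual != expected:
            bad.append(path)
    return bad
```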

Start hunting for rootkits. By definition they're hard to find, but there are scanners for this.

Not easy in the least, not even remotely close. Justifying the expense of all of that can be really hard without definite proof that the now-ex admin was in fact evil. The entirety of the above is likely not even doable with in-house staff alone, which will mean hiring security consultants to do some of this work.

If actual evil is detected, especially if the evil is in some kind of software, trained security professionals are the best to determine the breadth of the problem. This is also the point when a criminal case can start being built, and you really want people who are trained in handling evidence to be doing this analysis.

But, really, how far do you have to go? This is where risk management comes into play. Simplistically, this is the method of balancing expected risk against loss. Sysadmins do this when we decide which off-site location to put backups in: a bank safe-deposit box vs. an out-of-region datacenter. Figuring out how much of this list needs following is an exercise in risk management.

In this case the assessment will start with a few things:

The expected skill level of the departed

The access of the departed

The expectation that evil was done

The potential damage of any evil

Regulatory requirements for reporting perpetrated evil vs preemptively found evil. Generally you have to report the former, but not the latter.

The decision of how far down the above rabbit-hole to dive will depend on the answers to these questions. For routine admin departures where expectation of evil is very slight, the full circus is not required; changing admin-level passwords and re-keying any external-facing SSH hosts is probably sufficient. Again, corporate risk-management security posture determines this.

For admins who were terminated for cause, or evil cropped up after their otherwise normal departure, the circus becomes more necessary. The worst-case scenario is a paranoid BOFH-type who has been notified that their position will be made redundant in 2 weeks, as that gives them plenty of time to get ready; in circumstances like these Kyle's idea of a generous severance package can mitigate all kinds of problems. Even paranoids can forgive a lot of sins after a check containing 4 months' pay arrives. That check will probably cost less than the security consultants needed to ferret out their evil.

But ultimately, it comes down to the cost of determining if evil was done versus the potential cost of any evil actually being done.

+1 - The state of the art with respect to auditing system binaries is pretty bad today. Computer forensics tools can help you verify signatures on binaries, but with the proliferation of different binary versions (particularly on Windows, what w/ all the updates happening every month) it's pretty hard to come up with a convincing scenario where you could approach 100% binary verification. (I'd +10 you if I could, because you've summed-up the entire problem pretty well. It's a hard problem, especially if there wasn't compartmentalization and separation of job duties.)
– Evan Anderson, Aug 18 '10 at 15:51


@evan The binary problem is REALLY bad. There are just so many library files, in so many locations, it's hard to keep up.
– sysadmin1138♦, Aug 18 '10 at 16:00

+++ Re: changing service account passwords. This should be thoroughly documented anyway, so this process is doubly important if you're to be expected to do your job.
– Kara Marfia, Aug 18 '10 at 19:03


Great answer. Also, don't forget to remove the departed employee as an authorized point of contact for service providers and vendors. Domain registrars. Internet service providers. Telecommunications companies. Ensure all these external parties get the word that the employee is no longer authorized to make any changes or discuss the company's accounts.
– Mox, Aug 25 '10 at 1:44

I would say it is a balance between how much concern you have and how much money you are willing to spend.

Very concerned:
If you are very concerned then you may want to hire an outside security consultant to do a complete scan of everything from both an outside and internal perspective. If this person was particularly smart you could be in trouble; they might have left something that will stay dormant for a while. The other option is to simply rebuild everything. That may sound very excessive, but you will learn the environment well, and you get a disaster-recovery exercise out of it as well.

Mildly Concerned:
If you are only mildly concerned you might just want to do:

A port scan from the outside.

Virus/Spyware Scan. Rootkit Scan for Linux Machines.

Look over the firewall configuration for anything you don't understand.

Change all passwords and look for any unknown accounts (make sure they didn't re-activate the account of someone who is no longer with the company so they could use it, etc.).

This might also be a good time to look into installing an Intrusion Detection System (IDS).

Watch the logs more closely than you normally do.
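
For the "port scan from the outside" item, a plain TCP connect scan is enough as a first pass before a proper nmap run from a host outside your network; only scan hosts you're authorized to scan. A dependency-free sketch:

```python
# Quick-and-dirty TCP connect scan: report which of the given ports
# accept a connection. Slower and noisier than nmap, but stdlib-only.
import socket

def open_ports(host, ports, timeout=0.5):
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found
```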

For the Future:
Going forward, when an admin leaves, give him a nice party, and then when he's drunk just offer him a ride home -- then dispose of him in the nearest river, marsh, or lake. More seriously, this is one of the good reasons to give admins generous severance pay. You want them to feel as okay about leaving as possible. Even if they shouldn't feel good, who cares? Suck it up and make them happy. Pretend it is your fault and not theirs. The increased cost of unemployment insurance and the severance package don't compare to the damage they could do. This is all about the path of least resistance and creating as little drama as possible.

@Kyle: That was supposed to be our little secret...
– GregD, Aug 18 '10 at 15:28


Dead-man switches, Kyle. We put them there in case we go away for a while :) By "we", I mean, uh, they?
– Bill Weiss, Aug 18 '10 at 15:49


+1 - It's a practical answer, and I like the discussion based on a risk / cost analysis (because that's what it is). Sysadmin1138's answer is a little more comprehensive re: the "rubber meets the road", but doesn't necessarily go into the risk / cost analysis and the fact that, much of the time, you have to set some assumptions aside as being "too remote". (That may be the wrong decision, but nobody has infinite time / money.)
– Evan Anderson, Aug 18 '10 at 15:56

First things first - get a backup of everything on off-site storage (e.g. tape, or HDD that you disconnect and put in storage). That way, if something malicious takes place, you may be able to recover a little.

Next, comb through your firewall rules. Any suspicious open ports should be closed. If there is a back door then preventing access to it would be a good thing.

User accounts - look for your disgruntled user and ensure their access is removed as soon as possible. SSH keys, /etc/passwd entries, LDAP entries, even .htaccess files should all be scanned.

On your important servers look for applications and active listening ports. Ensure the running processes attached to them appear sensible.
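
One way to do that listening-port check without fully trusting the netstat/ss binaries on the box is to read /proc/net/tcp directly (Linux only; a kernel-level rootkit can still lie to you, so treat this as a cross-check, not proof). A sketch of the parsing:

```python
# Parse /proc/net/tcp text and return (ip, port) pairs in LISTEN state.
# State code "0A" is TCP_LISTEN; addresses are little-endian hex.

def listeners(proc_net_tcp_text):
    out = []
    for line in proc_net_tcp_text.splitlines()[1:]:
        fields = line.split()
        if len(fields) < 4 or fields[3] != "0A":
            continue
        ip_hex, port_hex = fields[1].split(":")
        octets = [str(int(ip_hex[i:i + 2], 16)) for i in (6, 4, 2, 0)]
        out.append((".".join(octets), int(port_hex, 16)))
    return out
```

Compare its output on each server against what `ss -tln` claims and against your own list of expected services.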

Ultimately a determined disgruntled employee can do anything - after all, they have knowledge of all the internal systems. One hopes that they have the integrity not to take negative action.

backups may also be important if something does happen, and you decide to go with the prosecution route, so you may want to find out what the rules for evidence handling are, and make sure you follow 'em, just in case.
– Joe H., Aug 19 '10 at 13:51


But don't forget that what you have just backed up may include rooted apps/config/data etc.
– Shannon Nelson, Aug 24 '10 at 22:54

If you have backups of a rooted system, then you have evidence.
– XTL, Mar 20 '12 at 14:04

If auditing and logging tools are properly in place, you will have an audit trail. Otherwise, you're going to have to perform a full penetration test.

First step would be to audit all access and change all passwords. Focus on external access and potential entry points-- this is where your time is best spent. If the external footprint is not justified, eliminate it or shrink it. This will allow you time to focus on more of the details internally. Be aware of all outbound traffic as well, as programmatic solutions could be transferring restricted data externally.

Ultimately, being a systems and network administrator allows full access to most, if not all, things. With this comes a high degree of responsibility. Hiring for this level of responsibility should not be taken lightly, and steps should be taken to minimize risk from the start. A professional, even one leaving on bad terms, would not take actions that are unprofessional or illegal.

There are many detailed posts on Server Fault that cover proper system auditing for security as well as what to do in case of someone's termination. This situation is not unique from those.

Don't forget the likes of Teamviewer, LogmeIn, etc... I know this was already mentioned, but a software audit (many apps out there) of every server/workstation wouldn't hurt, including subnet(s) scans with nmap's NSE scripts.

Some ways a vindictive admin could leave a backdoor behind:

A periodic program that initiates a netcat outbound connection on a well-known port to pick up commands, e.g. port 80. If well done, the back-and-forth traffic would have the appearance of normal traffic for that port; on port 80 it would have HTTP headers, and the payload would be chunks embedded in images.

Programs that check to see if one or more of the other backdoors is still in place. If it is not, then a variant of it is installed, and the details emailed to the BOFH.

Since much in the way of backups is now done with disk, modify the backups to contain at least some of your rootkits.

Ways to protect yourself from this sort of thing:

When a BOFH-class employee leaves, install a new box in the DMZ. It gets a copy of all traffic passing the firewall. Look for anomalies in this traffic. The latter is non-trivial, especially if the BOFH is good at mimicking normal traffic patterns.

Redo your servers so that critical binaries are stored on read-only media. That is, if you want to modify /bin/ps, you have to go to the machine, physically move a switch from RO to RW, reboot single user, remount that partition rw, install your new copy of ps, sync, reboot, toggle switch. A system done this way has at least some trusted programs and a trusted kernel for doing further work.

Of course, if you are using Windows, you're hosed.

Compartmentalize your infrastructure. This is not always reasonable for small to medium-sized firms.

Ways to prevent this sort of thing:

Vet applicants carefully.

Find out if these people are disgruntled and fix the personnel problems ahead of time.

When you dismiss an admin with these sorts of powers sweeten the pie:

a. His salary, or a fraction of it, continues for a period of time, or until there is a major change in system behaviour that is unexplained by the IT staff. This could be on an exponential decay, e.g. full pay for 6 months, 80% of that for the next 6 months, and 80% of that for the 6 months after.

b. Part of his pay is in the form of stock options that don't take effect for one to five years after he leaves. These options are not removed when he leaves. He has an incentive to make sure that the company will be running well in 5 years.

This is pretty tough to do at a small company (i.e. 1-2 Sys Admin type folks)
– Beep beep, Jun 14 '11 at 3:53

It's a pain to enforce, but it is enforceable. One of the big ground rules is that nobody just logs onto a box and administers it, even via sudo. Changes should go through a configuration management tool, or should happen in the context of a firecall-type event. Every single routine change to systems should go through puppet, cfengine, chef, or a similar tool, and the entire body of work for your sysadmins should exist as a version controlled repository of these scripts.
– Stephanie, Nov 9 '12 at 7:16

Unless you're really, really paranoid, my suggestion would simply be to run several network-monitoring tools (tcpview, wireshark, etc.) to see if there is anything suspicious attempting to contact the outside world.

Change the administrator passwords and make sure there are no 'additional' administrator accounts that don't need to be there.

Check logs on your servers (and computers they directly work on). Look not only for their account, but also accounts that are not known administrators. Look for holes in your logs. If an event log was cleared on a server recently, it is suspect.
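
A cleared or truncated log usually shows up as an unusually long silence. A small sketch that flags gaps in syslog-style timestamps (the `%b %d %H:%M:%S` format and one-hour threshold are assumptions; adjust both for your log format and normal traffic):

```python
# Flag gaps between consecutive timestamped lines that exceed a threshold.
# Classic syslog format carries no year, so one is supplied.
from datetime import datetime, timedelta

def log_gaps(lines, year=2010, threshold=timedelta(hours=1)):
    out, prev = [], None
    for line in lines:
        try:
            ts = datetime.strptime(line[:15], "%b %d %H:%M:%S").replace(year=year)
        except ValueError:
            continue  # unparsable line; skip rather than abort the sweep
        if prev is not None and ts - prev > threshold:
            out.append((prev, ts))
        prev = ts
    return out
```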

Check the modified date on files on your web servers. Run a quick script to list all the recently changed files and review them.
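
That "quick script" can be as simple as a recursive mtime walk. Remember that an attacker with root can forge mtimes, so the absence of recent changes proves little; the 30-day window is an arbitrary example:

```python
# List files under a directory tree modified within the last N days,
# newest first. Point it at web roots and script directories.
import os
import time

def recently_modified(root, days=30):
    cutoff = time.time() - days * 86400
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                mtime = os.path.getmtime(path)
            except OSError:
                continue
            if mtime >= cutoff:
                hits.append((mtime, path))
    return [p for _t, p in sorted(hits, reverse=True)]
```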

Check the last updated date on all of your group policy and user objects in AD.

Verify all of your backups are working and the existing backups still exist.
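
A freshness-and-size check like the sketch below catches silently dead backup jobs; it is not a substitute for actually restoring a sample. The glob pattern and thresholds are placeholders for your own scheme:

```python
# Report whether the newest file matching a backup pattern is recent
# enough and non-trivially sized. Restoring a sample is the real test.
import glob
import os
import time

def backup_status(pattern, max_age_days=2, min_bytes=1024):
    """Return (ok, newest_path); (False, None) if nothing matches."""
    paths = glob.glob(pattern)
    if not paths:
        return (False, None)
    newest = max(paths, key=os.path.getmtime)
    fresh = time.time() - os.path.getmtime(newest) < max_age_days * 86400
    big_enough = os.path.getsize(newest) >= min_bytes
    return (fresh and big_enough, newest)
```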

On servers running Volume Shadow Copy services, check whether previous snapshot history has gone missing.

I already see lots of good things listed and just wanted to add these other things you can quickly check. It would be worth it to do a full review of everything. But start with the places with the most recent changes. Some of these things can be quickly checked and can raise some early red flags to help you out.

Basically, I'd say that if you have a competent BOFH, you're doomed... there are plenty of ways of installing bombs that would go unnoticed. And if your company is used to ejecting those who are fired manu militari (by force), be sure that the bomb will have been planted well before the layoff!!!

The best way is to minimize the risk of having an angry admin... Avoid the "layoff for cost-cutting" (if he is a competent and vicious BOFH, the losses you may incur will probably be far bigger than what you'll save from the layoff)... If he made some unacceptable mistake, it's better to have him fix it (unpaid) as an alternative to layoff... He'll be more careful next time not to repeat the mistake (which is an increase in his value)... But be sure to hit the right target (it's common for incompetent people with good charisma to deflect their own faults onto the competent but less social ones).

And if you're facing a true BOFH in the worst sense (and that behaviour is the reason for the layoff), you'd better be prepared to reinstall from scratch every system he has been in contact with (which will probably mean every single computer).

Don't forget that a single bit change can make the whole system go haywire (a setuid bit, a Jump-if-Carry turned into Jump-if-No-Carry, ...) and that even the compilation tools may have been compromised.

Good luck if he really knows anything and has set anything up in advance. Even a dimwit can call/email/fax the telco with disconnect orders, or even ask them to run full test patterns on the circuits during the day.

Seriously, showing a little love and a few grand on departure really lessens the risk.

Oh yeah, in case they call to "get a password or something", remind them of your 1099 rate and the 1-hour minimum and 100 travel expenses per call, regardless of whether you have to go anywhere...

I suggest that you start at the perimeter. Verify your firewall configurations; make sure you do not have unexpected entry points into the network. Make sure the network is physically secure against him re-entering and getting access to any computers.

Verify that you have fully working and restorable backups. Good backups will keep you from losing data if he does do something destructive.

Check any services that are allowed through the perimeter, and make sure he has been denied access. Make sure those systems have good working logging mechanisms in place.

+1 - if a server is root-level compromised you have to start again from scratch. If the last admin couldn't be trusted, assume root-level compromise.
– James L, Aug 18 '10 at 15:09


Well...Yes...Best solution...Also kind of hard to convince management to redo everything. Active Directory. Exchange. SQL. Sharepoint. Even for 50 users this is no small task...much less when it's for 300+ users.
– Jason Berg, Aug 18 '10 at 15:18

If you can't redo the server, the next best thing is probably to lock down your firewalls as much as you can. Follow every single possible inbound connection and make sure it is reduced to the absolute minimum.

Presumably, a competent admin somewhere along the way made what is called a BACKUP of the base system configuration. It would also be safe to assume backups are done with some reasonable frequency, allowing a known-safe backup to be restored.

Given that some things do change, it is a good idea to run from your backup virtualized if possible until you can ensure the primary installation is not compromised.

Assuming the worst becomes evident, you merge what you are able to, and input by hand the remainder.

I'm shocked no one has mentioned using a safe backup, prior to myself. Does that mean I should submit my resume to your HR departments?

I can't understand the relevance of the suppliers to the question.
– John Gardeniers, Aug 19 '10 at 11:09

Because the supplier could be a friend of, or connected to, the previous IT team. If you keep the same supplier and change everything else, you risk informing the old IT team and making everything worthless. I wrote this based on previous experience.
– lrosa, Aug 24 '10 at 17:32

Well, unless you've handed your private keys to the supplier, not sure what the previous IT team stands to gain by this: "So as you say, Bob, they generated new keys, new passwords, and closed all access from outside? Hmm. [opens a Mac laptop, runs nmap; types for two seconds] Ok, I'm in." (CUT!)
– Piskvor, Aug 25 '10 at 7:09

It's not only a matter of perimeter access, but a matter of internal IT infrastructure. Say you want to carry on an attack based on social engineering: knowing internal structure is very handy (Mitnick rules).
– lrosa, Aug 26 '10 at 13:33

Try to take his point of view.

You know your system and what it does, so try to imagine what could be invented to connect from outside, even once you are no longer the sysadmin...

Depending on how the network infrastructure is built and how it all works, you are the best person to know what to look for and where it could be hidden.

But since you seem to be dealing with an experienced BOFH, you have to search nearly everywhere...

Network tracking

Since the main goal is to take remote control of your system across your internet connection, watch the firewall (or even replace it, because it could be compromised too!!) and try to identify each active connection.

Replacing the firewall won't guarantee full protection, but it ensures nothing is left hidden there. Then, if you watch the packets forwarded by the firewall, you should see everything, including unwanted traffic.

You can use tcpdump to capture everything (like the paranoids do ;) and browse the dump file with an advanced tool like wireshark. Take some time to see what such a capture looks like (it may need 100 GB of free disk space).