Is there any way to make a seasoned Linux sysadmin productive without giving him full root access?

This question comes from a perspective of protecting intellectual property (IP), which in my case, is entirely code and/or configuration files (i.e. small digital files that are easily copied). Our secret sauce has made us more successful than our smallish size would suggest. Likewise, we are once-bitten, twice shy from a few former unscrupulous employees (not sysadmins) who tried to steal IP. Top management's position is basically, "We trust people, but out of self-interest, cannot afford the risk of giving any one person more access than they absolutely need to do their job."

On the developer side, it's relatively easy to partition workflows and access levels such that people can be productive but see only what they need to see. Only the top people (actual company owners) have the ability to combine all the ingredients and create the special sauce.

But I haven't been able to come up with a good way to maintain this IP secrecy on the Linux admin side. We make extensive use of GPG for code and sensitive text files... but what's to stop an admin from (for example) su'ing to a user and hopping on their tmux or GNU Screen session and seeing what they're doing?

(We also have Internet access disabled everywhere that could possibly come into contact with sensitive information. But, nothing is perfect, and there could be holes open to clever sysadmins or mistakes on the network admin side. Or even good old USB. There are of course numerous other measures in place, but those are beyond the scope of this question.)

The best I can come up with is basically using personalized accounts with sudo, similar to what is described in Multiple Linux sysadmins working as root. Specifically: no one except the company owners would actually have direct root access. Other admins would have a personalized account and the ability to sudo into root. Furthermore, remote logging would be instituted, and the logs would go to a server only the company owners could access. Seeing logging turned off would set off some kind of alerts.
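For concreteness, a minimal sketch of that scheme. Everything here is illustrative: the group name, log host, and port are made up, and the fragments are written to the current directory rather than /etc so they can be inspected without touching a real system:

```shell
# Sketch: personalized admin accounts escalate via sudo, and auth/sudo
# logs are forwarded to a log host only the owners can reach.
# "loghost.example.internal" is a placeholder name.

# sudoers drop-in: members of "admins" may become root with their own
# password; sudo I/O sessions are recorded (log_input/log_output).
cat > ./10-admins.sudoers <<'EOF'
%admins ALL=(ALL) ALL
Defaults log_input, log_output
EOF

# rsyslog drop-in: forward authpriv messages (which include sudo
# activity on most distributions) over TCP to the owner-controlled
# log host.
cat > ./30-forward-auth.conf <<'EOF'
authpriv.*  @@loghost.example.internal:6514
EOF

echo "wrote example sudoers and rsyslog fragments"
```

In a real deployment the first fragment would live in /etc/sudoers.d/ and the second in /etc/rsyslog.d/, and the alerting on "logging stopped" would live on the log host itself, out of the admins' reach.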

A clever sysadmin could probably still find some holes in this scheme. And that aside, it's still reactive rather than proactive. The problem with our IP is such that competitors could make use of it very quickly, and cause a lot of damage in very short order.

So still better would be a mechanism that limits what the admin can do. But I recognize that this is a delicate balance (particularly in the light of troubleshooting and fixing production issues that need to be resolved right now).

I can't help but wonder how other organizations with very sensitive data manage this issue? For example, military sysadmins: how do they manage servers and data without being able to see confidential information?

Edit: In the initial posting, I meant to preemptively address the "hiring practices" comments that are starting to surface. One, this is supposed to be a technical question, and hiring practices IMO tend more towards social questions. But, two, I'll say this: I believe we do everything that's reasonable for hiring people: interviews with multiple people at the firm; background and reference checks; all employees sign numerous legal documents, including one that says they've read and understood our handbook, which covers IP concerns in detail. Now, it's out of the scope of this question/site, but if someone can propose "perfect" hiring practices that filter out 100% of the bad actors, I'm all ears. Facts are: (1) I don't believe there is such a perfect hiring process; (2) people change - today's angel could be tomorrow's devil; (3) attempted code theft appears to be somewhat routine in this industry.

First thing that came to mind when reading your final question ... Snowden.
– Hrvoje Špoljar Feb 1 '16 at 18:25

You can get far with appropriate SELinux policies, but this will be quite expensive to implement. At the end of the day, sysadmins must have some access to the system and the files thereon, in order to do their jobs. Your problem is not technical, it is in the hiring process.
– Michael Hampton♦ Feb 1 '16 at 18:26

The military uses security clearances and two-person integrity. Even then, sometimes there are breaches. The chances of both people having nefarious plans are much less.
– Steve Feb 1 '16 at 22:00

The reason this smells like a people problem is because it's a people problem.
– Sirex Feb 2 '16 at 4:08

13 Answers
What you are talking about is known as the "Evil Sysadmin" risk. The long and short of it is:

A sysadmin is someone who:

has elevated privileges;

is technically adept, to a level that would make them a good 'hacker';

interacts with systems in anomalous scenarios.

The combination of these things makes it essentially impossible to stop malicious action. Even auditing becomes hard, because you have no 'normal' to compare with. (And frankly - a broken system may well have broken auditing too).

There are a bunch of mitigating steps:

Privilege separation - you can't stop a guy with root from doing anything on the system. But you can make one team responsible for networking, and another team responsible for 'operating systems' (or Unix/Windows separately).

Limit physical access to kit to a different team, who don't get admin accounts... but take care of all the 'hands' work.

Separate out 'desktop' and 'server' responsibility. Configure the desktop to inhibit removal of data. The desktop admins have no ability to access the sensitive data; the server admins can steal it, but have to jump through hoops to get it out of the building.

Auditing to a restricted-access system - syslog and event-level auditing, to a relatively tamper-resistant system that they don't have privileged access to. But collecting it isn't enough; you need to monitor it - and frankly, there are a bunch of ways to 'steal' information that might not show up on an audit radar (poacher vs. gamekeeper).

Apply 'at rest' encryption, so data isn't stored 'in the clear' and requires a live system to access. This means that people with physical access can't get at the data on a system that isn't actively running, and that in an 'anomalous' scenario where a sysadmin is working on it, the data is less exposed (e.g. if the database isn't working, the data probably isn't readable).

Two-man rule - if you're OK with your productivity being crippled, and your morale likewise. (Seriously - I've seen it done, and the persistent state of working while being watched makes for extremely difficult working conditions.)

Vet your sysadmins - various records checks may exist depending on country. (Criminal records check, you might even find you can apply for a security clearance in some cases, which will trigger vetting)

Look after your sysadmins - the absolute last thing you want to do is tell a "trusted" person that you don't trust them. And you certainly don't want to damage morale, because that increases the chance of malicious behaviour (or 'not quite negligence, but a slip in vigilance'). But pay according to responsibility as well as skill set. And consider perks that are cheaper than salary but probably valued more, like free coffee or pizza once a week.

You can also try to apply contract conditions to inhibit it, but be wary of the above.
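To make the auditing point concrete: Linux audit rules of roughly this shape record access to a sensitive tree and use of escalation entry points. The watched paths and key names are examples only; a real rules file would live under /etc/audit/rules.d/ and be loaded with `auditctl -R`:

```shell
# Example auditd rules file; the paths and key names are illustrative.
cat > ./secret-sauce.rules <<'EOF'
# Any read/write/execute/attribute change under the IP tree.
-w /srv/secret-sauce -p rwxa -k secret-sauce
# Executions of privilege-escalation entry points.
-w /usr/bin/sudo -p x -k priv-esc
-w /bin/su -p x -k priv-esc
EOF

echo "wrote $(grep -c '^-w' ./secret-sauce.rules) watch rules"
```

The events these rules generate are only useful if they leave the box immediately (e.g. via the syslog forwarding described elsewhere in this thread) so the person being audited can't scrub them.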

But pretty fundamentally - you need to accept that this is a trust thing, not a technical thing. Your sysadmins will always be potentially very dangerous to you, as a result of this perfect storm.

I occasionally wear the sysadmin hat. I have found it necessary to keep powerful tools lying around where I can get at them, some of which have as their designed purpose circumventing system access controls (in case somebody manages to lock everybody out of a workstation, of course). By the very nature of their operation, auditing is weakened or ineffective.
– joshudson Feb 3 '16 at 17:43

Yes. For all the reasons locksmiths can break into your house legitimately, sysadmins might need to break into the network.
– Sobrique Feb 3 '16 at 17:55

@Sobrique: there's something that I still don't understand: why do we hear about rogue sysadmins stealing lots of data, and not about rogue maids doing the same (they could have done it back when everything was on paper)?
– user2284570 Jul 28 '16 at 20:39

Volume - terabytes of hardcopy is big. And damage - sabotaging an IT system can literally wreck a company.
– Sobrique Jul 28 '16 at 20:52

Everything said so far here is good stuff, but there is one 'easy' non-technical way that helps negate a rogue sysadmin - the four-eyes principle, which basically requires that two sysadmins be present for any elevated access.

EDIT:
The two biggest items that I've seen in comments are cost and the possibility of collusion. One of the best ways I've considered to avoid both of those issues is to use a managed service provider (MSP) solely for verification of actions taken. Done properly, the techs wouldn't know each other. Given the technical prowess an MSP should have, it would be easy enough to have a sign-off on actions taken... maybe even as simple as a yes/no to anything nefarious.

@Sirex: Yes, that's the problem with security - it always has a cost.
– sleske Feb 2 '16 at 9:55

No, I've worked in fairly small (100-1000 person) organisations that did exactly that. They just accepted that their procedures would make all sysadmin activity cost between four and ten times as much money as it otherwise would, and they paid up.
– MadHatter Feb 2 '16 at 10:07

This is the only real answer. Our kit sits inside some secure gov't locations and (amongst other steps) we use this approach to ensure that no one person can access an elevated terminal. A detailed request is raised for work to be done, lots of people sign off on it (50+, sigh), then two admins get together and do the change. It minimises risk (and also silly mistakes). It's expensive and a monumental pain to get anything done, but that's the price of security. Re: 50+ signatories, that includes network team, DC team, project managers, security, storage, pen tester, software vendor, etc.
– Basic Feb 2 '16 at 10:12

You'd probably struggle to hire, yes. It'd be an instant show stopper for me as I like to actually get things done and it would cripple my work enjoyment.
– Sirex Feb 2 '16 at 17:42

Note that the increased cost doesn't apply to every sysadmin action. However, it does apply to both the elevated actions themselves and their preparation. IOW, you can't say: "sysadmin works 40 hours a week, 4 of them elevated, so the cost increase would be only 10%". On the plus side, the scheme also catches normal, honest mistakes. That saves money.
– MSalters Feb 2 '16 at 22:13

If people truly need admin access to a system then there is little you can do to restrict their activities on that box.

What the majority of organisations do is trust, but verify - you might give people access to parts of the system but you use named admin accounts (e.g. you don't give them direct access to root) and then audit their activities to a log they cannot interfere with.

There's a balancing act here; you might need to protect your systems, but you do need to trust people to do their jobs too. If the company was formerly "bitten" by an unscrupulous employee, then this might suggest that the company's hiring practices are poor in some way, and those practices were presumably created by the "top managers". Trust begins at home; what are they doing to fix their hiring choices?

This is a great observation-- good auditing lets people get their work done yet remain accountable for their actions.
– Steve Bonds Feb 1 '16 at 19:59

Proper use of auditd and syslog can go a long way as well. That data can be monitored by a myriad of security tools to look for odd or clearly bad behavior.
– Aaron Feb 2 '16 at 4:25

The real problem with auditing is that there is a LOT of noise. After some months, no one will look at the logs except when something has happened.
– TomTom Feb 2 '16 at 10:40

I agree, TomTom; getting the signal-to-noise ratio right on security logs is an issue, but you still need to log, I think. @Sobrique I would say, though, that 'evil sysadmins' are predominantly a hiring issue rather than a technology issue; you need to close both sides of the gap. So I would require 'best practice' day to day, improve hiring processes, consider '4 eyes' for true secret-sauce stuff as Tim alluded to, and sift logs.
– Rob Moir Feb 2 '16 at 11:24

A little bit, but bear in mind that over a 'job cycle' a previously good-faith actor can turn into a malicious one. That's more about disenfranchisement and morale than hiring practices per se. The things that make someone a good sysadmin would also make them a good hacker. So perhaps, on that point, hiring mediocre sysadmins is the way forward?
– Sobrique Feb 3 '16 at 15:11

You could put yourself into an insane technical mind twist trying to come up with a way to give a sysadmin power without giving them power (it's likely doable, but would ultimately be flawed in some way).

From a business-practice standpoint there is a set of simple solutions. Not cheap solutions, but simple ones.

You mentioned that the pieces of IP you are concerned about are divided and only people at the top have the power to see them all. This is essentially your answer. You should have multiple admins, and NONE of them should be an admin on enough systems to put together the complete picture. You would of course need at least 2 or 3 admins for each piece, in case an admin is sick or in a car accident or something. Maybe even stagger them: say you have 4 admins and 4 pieces of information. Admin 1 can access systems that have pieces 1 and 2, admin 2 can get to pieces 2 and 3, admin 3 can get to 3 and 4, and admin 4 can get to 4 and 1. Each system has a backup admin, but no admin is able to compromise the complete picture.
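That staggered assignment is just a rotation - admin i covers piece i and the next piece, wrapping around at the end - which a tiny sketch makes clear:

```shell
# Staggered admin-to-piece assignment: with N admins (and as many
# pieces), admin i gets piece i and piece i+1, wrapping at the end,
# so every piece has a backup admin but nobody sees everything.
n=4
for i in $(seq 1 $n); do
  next=$(( i % n + 1 ))
  echo "admin$i: piece$i piece$next"
done
# Prints:
#   admin1: piece1 piece2
#   admin2: piece2 piece3
#   admin3: piece3 piece4
#   admin4: piece4 piece1
```

With this rotation every piece ends up covered by exactly two admins, so each has a backup, yet no single admin can assemble the whole picture.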

One technique the military uses as well is limiting the movement of data. In a sensitive area there may be only a single system capable of burning a disc or using a USB flash drive; all other systems are restricted. And the ability to use that system is extremely limited, requiring specific documented approval from higher-ups before anyone is allowed to put any data on anything that could lead to information spillage. By the same token, you ensure that network traffic between different systems is limited by hardware firewalls. Your network admins who control the firewalls have no access to the systems they are routing for, so they can't specifically get access to information, and your server/workstation admins ensure that all data to and from a system is configured to be encrypted, so that your network admins can't tap the network and gain access to the data.

All laptops/workstations should have encrypted hard drives, and each employee should have a personal locker they are required to lock the drives/laptops in at the end of the night to ensure that no one comes in early/leaves late and gains access to something they aren't supposed to.

Each server should at the very least be in its own locked rack, if not its own locked room, so that only the admins responsible for each server have access to it, since at the end of the day physical access trumps all.

Next there is a practice that can either hurt or help: limited contracts. If you think you can pay enough to keep attracting new talent, the option of only keeping each admin for a pre-determined length of time (i.e. 6 months, 1 year, 2 years) would allow you to limit how long someone has to attempt to put together all the pieces of your IP.

My personal design would be something along the lines of: split your data into however many pieces - let's say, for the sake of having a number, 8. You have 8 git servers, each with its own set of redundant hardware, each administered by a different set of admins.

Encrypted hard drives for all workstations that will touch the IP, with a specific "project" directory on the drive that is the only directory users are allowed to put their projects in. At the end of each night they are required to sanitize their project directories with a secure deletion tool; then the hard drives are removed and locked up (just to be safe).

Each bit of the project has a different admin assigned to it, so a user would only interact with the workstation admin they are assigned to; if their project assignment changes, their data is wiped and they are assigned a new admin. Their systems should not have disc-burning capabilities and should use a security program to prevent the use of USB flash drives to transfer data without authorization.

Thank you, lots of good stuff there, and lots we are doing. While I like the multiple admins idea, we aren't really big enough to need that. I really only need one admin, so if I had four, they would generally be bored out of their minds. How do we find the top-tier talent we want, but only give them a teeny-tiny workload? I'm afraid smart people will get bored quickly and move on to greener pastures.
– Matt Feb 2 '16 at 15:36

Yeah, that is a big problem, and actually one the government field suffers from heavily. The place I got my start as an admin was often referred to as "the revolving door." The whole thing is a difficult problem to handle. Information assurance in general is a pretty tough nut to crack :\
– Gravy Feb 2 '16 at 16:07

@Matt How do we find the top-tier talent we want, but only give them a teeny-tiny workload? To some extent, you can mitigate this by also giving them a large test/R&D environment, access to the cool new toys and encouraging them to spend significant portions of their workday on developing their tech skills. An effective 25% workload is probably pushing that too far, but I would be absolutely over the moon about a job that's 50% actual work and 50% technical development/R&D (assuming pay is at the same level as a normal ~100% "actual work" job).
– HopelessN00b Feb 23 '16 at 18:57

It would be similar to the challenge of hiring a janitor for a building. The janitor gets to have all the keys and can open any door, but the reason is that the janitor needs them to do the job. Same with system admins. Symmetrically, one can think of this age-old problem and look at the ways trust has been granted historically.

Although there's no clean-cut technical solution, the fact that there's none shouldn't be a reason not to try any; an aggregation of imperfect solutions can give somewhat great results.

A model where trust is earned:

Give fewer permissions to begin with

Gradually increase permissions

Put a honeypot and monitor what happens in the coming days

If the sysadmin reports it instead of trying to abuse it, that's a good start

It is very, very hard to secure hosts against those with administrative access. While tools like PowerBroker attempt to do this, the cost is both adding something else that can break AND adding barriers to attempts at fixing it. Your system availability WILL drop when you implement something like this, so set that expectation early as the cost of protecting things.

Another possibility is to see if your app can run on disposable hosts via a cloud provider or in a locally hosted private cloud. When one breaks, instead of sending in an admin to fix it, you throw it away and auto-build a replacement. This will require quite a lot of work on the application side to make the apps run in this model, but it can solve a lot of operational and security issues. If done poorly, it can create some significant security problems, so get experienced help if you go that route.

You split your responsibility by having security engineers whose job it is to make system configurations and installs, but they get no credentials or access to machines in production. They also run your audit infrastructure.

You have production admins who receive the systems but don't have the keys to boot a machine without the SELinux policies being active. Security doesn't get the keys to decrypt sensitive data stored at rest on disk, for when they get a broken machine pulled from service.

Use a centralized authentication system with strong auditing, like Vault, and make use of its crypto operations. Hand out Yubikey devices to make keys absolutely private and unreadable.

Machines are either wiped on breakage or handled by ops and security together, with executive oversight if you feel the need.

Admins, by the nature of the job, have access to everything. They can see every file in the file system with their admin credentials. So you'll need a way to encrypt the files so that admins can't see them, but the files are still usable by the teams that should see them. Look into Vormetric Transparent Encryption (http://www.vormetric.com/products/transparent-encryption).

The way it would work is that it sits between the filesystem and the applications that access it. Management can create the policy that says "Only the httpd user, running the webserver daemon can see the files unencrypted". Then an admin with their root credentials can try to read the files and only get the encrypted version. But the web server and whatever tools it needs sees them unencrypted. It can even checksum the binary to make it harder for the admin to get around.

Of course you should enable auditing so that in the event an admin tries to access the files a message gets flagged and people know about it.

Who, then, can update the files? How do they go about it?
– Michael Hampton♦ Feb 1 '16 at 19:54

Plus... If I'm root on that box, the odds are I can subvert the webserver daemon. Even if it's checking binary hashes to make sure I haven't replaced the daemon, there's going to be some way I can trick the webserver into making a seemingly-legitimate request for the data.
– Basic Feb 2 '16 at 10:08

Actually, sysadmins may not have access to all the files - C2 and better-secured systems can block admins. They can FORCE access, but this is irrevocable (setting them as the user) and leaves traces (a log, which they can delete but not easily change).
– TomTom Feb 2 '16 at 10:44

This wouldn't help since an admin can 'become' httpd... Admins can also read /dev/mem and therefore all keys.
– John Keates Feb 2 '16 at 20:29

@TomTom: The possibility of an operating system satisfying C2 is a myth. Windows NT4 passed certification, but it turned out to be a fraudulent pass. The ability to back out the force access always existed, and I have used it, and we have a procedure that depends on it working because some program tries to use it to verify that its files weren't tampered with, but we need to change them.
– joshudson Feb 3 '16 at 17:48

The only practical way is restricting who can do what with sudo. You could potentially also do most of what you want with SELinux, but it would probably take forever to figure out the correct configuration, which may make it impractical.

Non-disclosure agreements. Hire a sysadmin, have them sign an NDA, and if they break their promise, take them to court. This may not prevent them from stealing secrets, but any damages they cause by doing so are recoverable in court.

Military and government sysadmins have to obtain security clearances of different grades depending on how sensitive the material is. The idea being that someone who can obtain a clearance is less likely to steal or cheat.

The idea being that 1) getting and maintaining that clearance limits their ability to do shady businesses; 2) jobs requiring higher security clearances pay better, which means a strong incentive to keep that clearance regardless of whether you like your current employer.
– Shadur Feb 2 '16 at 9:29

That said, again, the OP specifically said he's asking for preventative measures -- sure, you can sue them for NDA violation afterwards but does a sysadmin strike you as likely to make enough money to recover the kind of damages he's implying?
– Shadur Feb 2 '16 at 9:31

You're not just recovering losses from the sysadmin, but from whomever or whatever other business is making money from those secrets.
– Michael Martinez Feb 2 '16 at 22:52

This is a very secretive environment to begin with. So say a bad sysadmin steals the secret sauce; tracking down who he sold it to is basically impossible. What if he steals some code, leaves on good terms, and sells the code to a competitor? Suddenly our profits are eroding, but we don't know how (this is finance, an anonymous marketplace).
– Matt Feb 3 '16 at 23:34

@Matt That's as much a risk for the people in charge as it is for those who aren't: people with secret sauce worry about someone else stealing it, when it's just as likely one of them will.
– Michael Martinez Feb 4 '16 at 3:56

I see where you're coming from, but I think most of us have more professional ethics than that. I don't steal my clients' secrets, but it's not because they pay me enough that I don't feel the need, it's because it'd be wrong to do that; I suspect I'm not the only person here who feels that way.
– MadHatter Feb 2 '16 at 16:33

Yes, as Ben Franklin once wrote, "Never charge someone more than it costs to kill you." But in reverse.
– Bruce Ediger Feb 2 '16 at 18:15

You can't bribe someone into integrity. That's just not going to work. As a wise man once said: we have established what sort of person you are, now we are just haggling over price. But you can earn someone's loyalty by doing right by them. Pay is part of that, but so are a lot of things. Autonomy and mastery - give freedom and training and development. Problem is, the temptation might be to oppress them so they don't stray, and then bribe them to "fix" that. But like all abusive relationships, that will backfire.
– Sobrique Feb 3 '16 at 20:36

I just wrote that it is another way, not a good way. I think that when someone is hiring a new sysadmin, there must be trust, because a sysadmin is - next to the CEO and CFO - someone who can destroy the company in minutes. A good admin will not work for small money (or will some of you?), and a sysadmin who will work for little money is more dangerous.
– Ondra Sniper Flidr Feb 4 '16 at 15:36

I'm thinking this question might not be possible to answer fully without some more details, such as:

How many sysadmins do you expect to keep "restricted"?

What do people need "sysadmin" access to do?

First of all, be aware of what you can do with sudo. With sudo, you can grant elevated permissions to run only a single command (or variations, like commands that start with "mount -r", while other commands are not permitted). With SSH keys, you can assign credentials that permit a person to run only a certain command (like "sudo mount -r /dev/sdd0 /media/backup"). Therefore, there is a relatively easy way to allow just about anybody (who has an SSH key) to be able to do some specific operations without letting them do absolutely everything else.
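A sketch of both mechanisms, reusing the mount example above. The user name, device path, and key material are placeholders, and the fragments are written locally for inspection rather than installed:

```shell
# sudoers entry: a hypothetical "backupop" user may run exactly one
# command as root, and nothing else.
cat > ./backupop.sudoers <<'EOF'
backupop ALL=(root) NOPASSWD: /bin/mount -r /dev/sdd0 /media/backup
EOF

# authorized_keys entry: whatever the key holder asks for, sshd runs
# only the forced command; no port forwarding, no interactive tty.
# The key blob here is a truncated placeholder.
cat > ./backupop.authorized_keys <<'EOF'
command="sudo /bin/mount -r /dev/sdd0 /media/backup",no-port-forwarding,no-pty ssh-ed25519 AAAAC3...exampleonly backupop@bastion
EOF

echo "wrote restricted-sudo and forced-command entries"
```

Combined, the key can only trigger the one operation, and even a stolen password for that account can only run the one command via sudo.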

Things get a bit more challenging if you want the technicians to be performing fixes of what is broken. This can typically require a higher amount of permissions, and perhaps access to run a wide variety of commands, and/or writing to a variety of files.

I suggest considering the approach of web-based systems, like CPanel (which is used by a number of ISPs) or cloud-based systems. They can often make problems on a machine go away, by making the entire machine go away. So, a person may be able to run a command that replaces the machine (or restores the machine to a known-good image), without necessarily giving the person access to read lots of data, or making small changes (like combining a minor fix with the introduction of an unauthorized back door). Then, you need to trust the people who make the images, but you are reducing how many people you need to trust to do the smaller stuff.

Ultimately, though, a certain amount of trust does need to be provided to some non-zero number of people who help design the system and who operate at the highest level.

One thing that large companies do is to rely on things like SQL servers, which store data that can be remotely accessed by a larger number of machines. Then, a larger number of technicians can have full root access on some machines, while not having root access to the SQL servers.

Another approach is to be too big to fail. Don't think that large militaries or giant corporations never have security incidents. However, they do know how to:

recover,

limit damage (by separating valuable things)

have counter-measures, including threats of litigation

have processes to help limit undesirable exposure to press, and have plans how to influence the spin of any negative stories that do develop

The basic assumption is that any damage that does occur is simply a risk and cost of doing business. They expect to continue to operate, and ongoing developments and improvements over the years will limit how much a single incident affects their long-term value.

Place the root administration machine into a room locked with two keys, and give one each to the two admins. The admins will always work together, observing each other's activities. It should be the only machine containing the private key to log in as root.

Some activities may need no root rights, so only part of the work would require going to that room for "pair programming".

You may also video-record activities in that room, mostly to make sure that nobody works alone (that much is easily visible). Also, make sure the working place is arranged so that the screen is easily visible to both people (maybe a large TV screen with big fonts).