Secure Your Unix Infrastructure (Part 2)

Break down your network by server type and follow this guide to batten down the hatches.

Last week, we began to discuss infrastructure
security, and promised a more detailed look at our suggestions. It may seem strange that
we keep saying "infrastructure security," instead of "server" or "Unix" security, but there
is a reason.

We're trying to back up a bit, and examine the entire set of servers at once. Often we're
focused on the gritty details of host-specific security issues, and forget to look at an
overall design. The design of an entire network can make a tremendous difference in the
end.

We defined three types of servers, which can be further broken down according to your
needs. They were:

Public servers, which are Internet-accessible.

Login servers, which allow non-admin users to log in.

Everything else, such as the MySQL, internal LDAP or NFS servers, which are only
reachable from internal networks.

Based on how we've defined these servers, the network layout and firewall rules we'd
apply are obvious.
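For example, assuming a Linux firewall built with iptables, the per-class policy might be sketched like this (the addresses, ports, and network ranges are placeholders, not a drop-in ruleset):

```
# Public server: expose only the service it exists to provide.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT                 # e.g. a public web server
iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -j ACCEPT   # admin ssh, internal only
iptables -A INPUT -j DROP

# Internal-only server (MySQL, LDAP, NFS): nothing from outside at all.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -s 10.0.0.0/8 -j ACCEPT                     # placeholder internal range
iptables -A INPUT -j DROP
```

Login servers sit between the two: users reach them from wherever they work, but the servers themselves should get no more inbound exposure than that.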

The question that remains is how we manage these servers while still remaining
secure. This is the difficult part of the design, because one poor decision can spell
disaster.

Taking a higher-level view, we know that some servers, even in the set of "everything else,"
are going to be more important than others. One or more servers will need to be "trusted" by
all the others, so that automated changes can happen. Account creation, host integrity
monitoring à la Tripwire or Samhain, and even configuration file backups all need to be
configured and maintained from a server that is able to access the other servers as root.
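The host-integrity piece need not be elaborate to be useful. A minimal sketch in the spirit of Tripwire or Samhain, assuming GNU sha256sum; the baseline path and function names here are our own invention, and a real deployment would keep the baseline on the master, not on the monitored host:

```shell
#!/bin/sh
# Minimal host-integrity sketch in the spirit of Tripwire/Samhain.
# The baseline location below is illustrative only.
BASELINE=${BASELINE:-/tmp/integrity.baseline}

# Record checksums of critical files; run once from the trusted master.
build_baseline() {
    sha256sum "$@" > "$BASELINE"
}

# Later, verify nothing has changed; sha256sum -c exits non-zero on any mismatch.
check_baseline() {
    sha256sum -c --quiet "$BASELINE"
}
```

Real tools also track permissions, ownership, and inode data, and sign their databases, but the core idea is exactly this comparison against a known-good baseline.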

Such a server, a master server we'll call it, should only have administrator accounts
available for login access. Administrators' account passwords need to be different from
those on all other servers, and the master should provide no services to the outside world.
We want zero chance of a compromised public server leading to a compromise of the master.
When a seemingly unimportant machine is compromised, a trojaned login or ssh binary is often
part of the rootkit, which can expose user account passwords. This is why sudo is a
bad idea: it effectively grants root access to your user password. A compromised su binary
can disclose the root password too, and that's why a master server is so important.

The master server should be able to ssh into all other servers as root, but only via an
ssh key; password-based root logins must never be allowed. If the master is
compromised, then yes, every server is too. That's why the master is a fortress: it runs
only the ssh service, and it connects to other machines, never the other way around.
Configuration file backups, host integrity databases, etc. can all be stored on the master.
The idea is to never run su or sudo on a potentially insecure machine; instead, simply
ssh in as root from a secure server.
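Concretely, this can be enforced in two places on every managed server. A sketch (the key and the master's address are placeholders):

```
# /etc/ssh/sshd_config on every managed server:
# key-based root logins only; newer OpenSSH versions spell the
# option value "prohibit-password"
PermitRootLogin without-password
PasswordAuthentication no

# /root/.ssh/authorized_keys on every managed server:
# restrict the master's key so it is only accepted from the
# master's address (192.0.2.10 is a placeholder)
from="192.0.2.10" ssh-rsa AAAAB3... root@master
```

The from= restriction means that even a stolen copy of the master's private key is useless from anywhere but the master itself.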

Publicly accessible servers are mostly vulnerable due to the applications they run, but
login servers also pose problems, and they are vulnerable in many more ways. Your users, be
they developers, students or customers, don't care about security. They will run whatever
they get their hands on, including SQL servers, PHP-based web applications with a poor
security record, and anything else that seems useful. When unknown users make their way in
through these holes, you'd better be up-to-date on operating system patches.

Patching the OS isn't optional, nor is it a leisurely activity. To reiterate: all servers need
to be updated the moment a security update is available. It is trivial to
gain root access on Unix servers that are only patched on weekends, because exploits appear
very quickly after the updates that fix them. There are also exploits that are brand
new; those are the ones to fear. SELinux, or an appropriately configured Unix machine, can go
a long way toward preventing these exploits from working. If you're unlucky enough to be
hit with a zero-day attack, then the best you can hope for is an overall secure
infrastructure.
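Exactly how you pull in security updates is distribution-specific. For instance (run as root; plugin and tool availability vary by release):

```
# Debian/Ubuntu:
apt-get update && apt-get upgrade

# RHEL/CentOS, with the yum security plugin installed:
yum update --security

# FreeBSD base system:
freebsd-update fetch install
```

Whatever the mechanism, the point is that it runs promptly when an advisory lands, not on a weekly maintenance window.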

Servers that are insecure, and that possibly allow user logins, are special in more ways
than may be apparent at first glance. If we require shared home directories, it
may be difficult to support the segmented environment described here. NFS shares
exported to insecure clients need to be carefully scrutinized — especially when
developers or researchers require root on their machines.

Given that NFS implies zero security, granting access to NFS shares to an uncontrolled
client is quite scary. Essentially, you must assume that everything in the shared filesystem
is going to be compromised, since root on the other side can easily su to anyone who happens
to own files. The old standard workaround is to move these types of shares to their own
partitions, and then export those to the untrusted client. There's Kerberized NFS, which is
a pain to set up, and there are also alternative file systems that provide a bit of help.
AFS comes to mind, but if you need enterprise features, you'd better stay away: it doesn't
support snapshots or compatible ACLs, among other things.
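The dedicated-partition workaround might look like this in /etc/exports (hosts and paths are placeholders). Note that root_squash only blunts the most direct attack, mapping client root to nobody; as described above, root on the client can still su to any local user and read that user's files:

```
# /etc/exports sketch:
# the untrusted client gets its own partition, isolated from
# everything else, with client root mapped to nobody
/export/untrusted-homes   badclient.example.com(rw,root_squash)

# trusted internal clients keep the normal export
/export/homes             10.0.0.0/8(rw,root_squash)
```

The isolation, not the squashing, is what limits the damage: only the data on that one partition is exposed to the untrusted client.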

We could carry on for pages about best practices and pitfalls. The general idea behind an
infrastructure-wide view is to minimize risk in two ways: make the network hard to
penetrate, and hard to fan out through once an attacker gets in. With proper monitoring in
place, you should be able to detect an intrusion quickly and put a stop to it.

It doesn't matter whether you have 200 or 2,000 servers; the general principles are the same.
They may seem simple, but they are often taken for granted in the heat of configuring a new
or damaged server. Remember: