
ers81239 writes "I've recently become a Linux administrator within the Department of Defense. I am surprised to find out that the DoD actually publishes extensive guidance on minimum software versions. I guess that isn't so surprising, but the version numbers are. Kernel 2.6.30, ntp 4.2.4p7-RC2, OpenSSL 0.9.8k and the openssh to match, etc. The surprising part is that these are very fresh versions which are not included in many distributions. We use SUSE Enterprise quite a bit, but even openSUSE factory (their word for unstable) doesn't have these packages. Tarballing on this many systems is a nightmare and even then some things just don't seem to work. I don't have time to track down every possible lib/etc/opt/local/share path that different packages try to use by default. I think that this really highlights the trade-offs of stability and security. I have called Novell to ask about it. When vulnerabilities are found in software, they backport the patches into whatever version of the software they are currently supporting. The problem here is that doesn't give me a guarantee that the backport fixes the problem for which this upgrade is required (My requirements say to install version x or higher). There is also the question of how quickly they are providing the backports. I'm hoping that there are 100s of DoD Linux administrators reading this who can bombard me with solutions. How do you balance security with stability?"

I thought the DoD would forbid running newer versions that haven't been run and scrutinized enough by a lot of people.

I thought they would do as many big-iron companies do and run older versions with security patches applied. I mean, if I remember right, just last week exploits were found in newer versions like Linux kernel 2.6.30 and Firefox 3.5. I think this is more likely to happen with newer releases of software than with older releases tested through the years.

I think there might be some changelog analysis going on too. If you see "Huge exploit xyz fixed in this patch", you're more likely to use the new, untested version just because a known exploit is closed. With security software, they're always fixing, improving, and generally securing their software.

I personally keep pretty up-to-date, and I can understand that a government agency would want to be completely on top of things.

Regardless of what the actual machine does, it's still attached to the DoD's primary network, which has a lot of less boring info traversing it. By securing the boring stuff, you can keep it from becoming a host to those looking for the other stuff.

The submitter says using back-ported security fixes (presumably from some official repository) is not an option because it doesn't give a guarantee of fixing the vulnerability the original update was for. If this is a problem I'm curious as to why he thinks manually installing the latest versions is any better. Is someone being paid to guarantee the efficacy of security fixes but only in those latest versions? If that is the case why not just pay them to audit the back-ported fixes in a repository instead?

Just build it off of slackware and distribute the whole thing using apt. That way, you just need to build the whole thing on one set of systems and distribute out to all the boxen you need to update/install.

Yes, except I would recommend using the same Linux distributions already in place, but adding your own package server to their repository list (or better yet, create a local mirror, modifying only the packages you need).

For example, if you were running an RPM based distribution, create a YUM server, add it to the existing machine's /etc/yum.conf, and set up a nice little makefile system to easily build new RPMs from the .tar.gz packages; that way you only do the build once.
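A minimal sketch of that build-once, publish-once flow, assuming rpmbuild and createrepo are available on a dedicated build host. The repo path and the ntp tarball name are placeholders, not anything from an actual DoD setup:

```shell
# Build an RPM from an upstream tarball once, then publish it to a local
# YUM repo that every other machine points at. Paths are illustrative.
REPO_DIR=${REPO_DIR:-/srv/repo/custom/x86_64}
TARBALL=${TARBALL:-ntp-4.2.4p7.tar.gz}

# Derive the package name-version from the tarball name (strip .tar.gz)
pkg_name=${TARBALL%.tar.gz}
echo "building $pkg_name into $REPO_DIR"

# Only attempt the real build where the RPM toolchain exists
if command -v rpmbuild >/dev/null; then
    rpmbuild -ta "$TARBALL"                  # -ta: build from the spec inside the tarball
    cp "$HOME"/rpmbuild/RPMS/*/*.rpm "$REPO_DIR/"
    createrepo "$REPO_DIR"                   # regenerate metadata so yum clients see it
fi
```

Clients then only need the one extra repo entry in /etc/yum.conf; the build happens exactly once.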

I'm pretty sure you didn't mean to imply that Firefox is a part of Linux, but you did.

And this is why I have arguments with my Windows admin friends (I'm Faust) about how many security flaws "Linux" has. Arrrrgh! But I don't need to preach this to the /. crowd. Just be careful with this stuff.

I think you missed the point my post was trying to make. I was talking about software. It applies as well when newer versions of Windows come out. This is a principle above the OS that you are using that is pretty well known.

Another poster has stated it better than I did:

"the sweet-spot between security and stability *is* using back-ported security fixes."

Linux and Firefox were only examples, the holes did NOT exist in previous versions of Linux, same for Firefox. They are newly introduced holes that go wi

It's not as treacherous as it sounds. Security through obscurity. It does theoretically allow a vulnerable machine to be accessible to the network, but it makes the network collective less vulnerable to a targeted hack into the system, for botnet or node search. This is sort of what some security pundits were referring to when talking about a diverse OS ecosystem better able to survive a virus.

Also, the reality is you're not going to keep any machine, accessible through a network, 100% secure. They all b

I run Fedora 11 and normally keep up with the latest updates including the kernel. On my machine:

kernel-2.6.29.5-191.fc11.x86_64

ntp-4.2.4p7-2.fc11.x86_64

openssl-0.9.8k-5.fc11.x86_64

openssh-5.2p1-2.fc11.x86_64

OK, that was just a few, and it appears that Fedora 11 is a bit behind, considering that this distro is fairly cutting edge and IMHO not appropriate for current enterprise usage. The DoD people who suggest really bleeding edge releases appear to have a crystal ball which must tell them that that

You'll probably have to solve it by using build scripts and tarballs. If you're feeling really ambitious, go up the ladder and find out why it's specified only by version.
Finding versions for each distro is probably more work for the people who do the list.

The most logical thing, surely, is to have a script that grabs the latest source, builds suitable binary RPMs and a binary DEB, and then moves these files to the correct directory for a repository manager.

(For RPMs, you could simply use the distro-supplied SPEC file and have the script replace values as needed. This only breaks when files are added/deleted, which usually doesn't happen.)
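A hypothetical skeleton of that "grab source, bump the spec, rebuild" loop. The bump_spec helper rewrites the Version: line of a distro-supplied spec file in place, exactly as the parenthetical suggests; the spec path and versions here are made up for the demo:

```shell
# Rewrite the Version: field of an existing spec file so the distro's own
# build recipe is reused against a newer tarball. Demo file is throwaway.
bump_spec() {
    spec=$1 newver=$2
    sed -i -e "s/^Version:.*/Version: $newver/" "$spec"
}

# Demonstrate on a tiny fake spec fragment
printf 'Name: openssl\nVersion: 0.9.8j\n' > /tmp/demo.spec
bump_spec /tmp/demo.spec 0.9.8k
grep '^Version:' /tmp/demo.spec
```

After the bump, rpmbuild -ba against the edited spec (with the new tarball in SOURCES) produces the updated package; as noted, this only breaks when the file list changes.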

Alternatively, standardize on Slackware and banish the distro-specific issues to history. The drawbacks are less support and fewer fixes, but since the DoD can't track or test all variants, it's reasonable to assume they only track issues for the vanilla version. Distro-modded versions could have flaws added as well as flaws removed, and in the DoD, it's better to have an absence of known threats.

"The most logical thing, surely, is to have a script that grabs the latest source, builds suitable binary RPMs and a binary DEB, and then moves these files to the correct directory for a repository manager."

Which, more or less, is exactly what is done at the distribution level... and then you find that it takes about two years to stabilize the compatibility problems that arise with such a practice.

Then you look elsewhere and find that's what Debian does (to name an example) and that's why it takes Debian a

The most logical thing, surely, is to have a script that grabs the latest source, builds suitable binary RPMs and a binary DEB, and then moves these files to the correct directory for a repository manager.

I smell something fishy. Sounds to me like whoever is making money off securing DoD systems is also involved in specifying what versions to use. If you run something that's been battle tested and known to be "safe" (relative term) then there's no money to be made.

Here's a cheap way to make DoD Linux systems safe: don't connect them to the public internet, period.

Jesus christ on a crutch. If I see this stupidly retarded statement one more time... If you've been here on /. more than about 3 seconds, you would have come across a little tidbit of information alluding to the different networks within the US DoD, and their various levels of security. Not everything that lives on a hard drive in the DoD is sooper sekrit and needs to be cut off from the outside world.

Some of these networks are truly open. Some are only accessible from a .mil domain. Some are not connected to the internet at all, and split off with an air gap. And some are even more restrictive than that.

Your oh-so-insightful remark is also a cheap way to hamper operations. We need a -1 Dumbass mod.

An air gap means the network isn't connected to the public internet, or to unsecured networks. The "air gap" is the open air between the secure computers and the insecure computers. At present most networking gear has a hard time routing packets through open air, but I hear they're ginning up a new RFC to address that.

An air gap just means that the computer networks aren't physically connected to each other at all. They exist entirely on separate networks, and the secure one usually isn't connected to any network with computers outside the building, much less on the internet.

I can give you some examples of their restrictions, and I didn't even see the room. In fact, I was supporting an enterprise management package used at their site, and I wasn't even allowed to know where their site was. They're not allowed to have any communications lines or devices inside the room, so doing support involved a relay of three people, really: one person actually on the phone, shouting to the guy holding open the door, who shouted to the guy sitting at the keyboard. And keep in mind that this is a

Right away, classified generally means no connections to public networks/communications. (It's in theory possible, with sufficiently sophisticated security software, but practically never done.) Air gap. The only way to transfer data off the secure "island" is via hand-carried media (sneakernet). For most systems, any media mounted on the system is automatically cl

If I were working on something that needed real security, I'd prefer a steel-reinforced concrete gap with armed guards outside. But I guess an air gap would do. It doesn't do much for the physical security though. :)

He's not kidding. The waiver is called a Plan of Action and Milestones (POA&M) if he's going by the DoD/DISA IA vulnerabilities and their vulnerability management system. This is the only way they can actually set maintenance schedules. A lot of the admins submit these 'waivers' with a plan of action which includes quarterly or monthly patch days, otherwise they'd have to run patches every other day, possibly breaking their applications and services. It's a lot easier to bulk patch and test the app/service once a month or quarter than every day. The frequency of DoD IA notices is so high that this is the only manageable solution.

1) Does the DoD contribute heavily to security software programs or packages? If so, they probably know which libraries are needed as they've been using them to provide the updates.

2) Maintenance of multiple server systems is always difficult. This is why Rocks was developed and why some develop their own startup and build scripts for clusters or server farms. Advanced scripting techniques are a must in a large environment.

3) Even if DoD doesn't contribute, they'll always point out the latest stable software and security fix. If you're talking about the defense of the country, how could you say, "We recommend this version...the one with the security hole that was fixed in the next version."

If you need to stay cutting edge, why not use a rolling distribution such as Arch or Gentoo? You could also set up your own repository where you build the Suse packages once and then push them out to all systems.

"If you need to stay cutting edge, why not use a rolling distribution such as Arch or Gentoo?"

The DoD, in general, *really* doesn't like do-it-yourself stuff. Yah, you can run Linux, but it has to be from Officially Approved vendors (Red Hat, SuSE). Only they have the secret decoder ring or whatever it is the DoD wants.

I'm sure any number of arguments against this will occur to people. You learn real quick: What you think doesn't matter for squat. You're in the army now, and they do it the army way.

Gentoo is great for learning how to fix things that are broken. That's because it's not uncommon to find that a new version you want to install breaks something -- e.g. the config file syntax has changed subtly and the ebuild doesn't warn you of this before you carry out the upgrade. The longer you leave between doing an emerge -uD world, the more likely this is to happen.

And while you should keep a separate test network to test these things on, IME the test network is never

Let's take it a step further. Why not have the DoD make DoD Linux SS (super secret) version and DoD Linux RE (regular edition) with the specific packages they want. Lots of people roll their own, why not the DoD? Then the DoD posts the links to their new distros on DistroWatch.org.

Because only the major vendors have been approved for use within the DOD?

Been there, done that for 9+ years... it all really depends on how much common sense your IT security group has and how tech savvy they are. My favorite place was where the head IT security guy was an avid computer geek, so when the new vulnerability lists came out, as long as we could provide a memo for the record explaining how we mitigated the vulnerability (backporting the fix, upgrading to the next version, removing the software, e

Why wouldn't CyberCommand (or whatever it is called) maintain a mirror of the OSes they approve of? It is easy enough to set up, and they can log all the machines to make an inventory. Even make sub-mirrors for different commands.

That was my first thought. If the DOD requires specific versions, they should maintain repositories of them on their own servers. Perhaps one on their secure/classified network, and one on a more accessible network. They could be writable by only a few key people, so their chances of becoming corrupted would be very low.

For all the "waiting to compile" jokes, Gentoo is a highly serviceable distribution.

Really, Gentoo has also been called a meta-distribution, because of its customization capabilities. In that light, it can be highly appropriate for a "controlled shop", where more-or-less locked-down systems are centrally administered.

"Why on earth would you need to update all the time? If it were me, I would install gentoo once, then only update those packages on the DoD's list."

You don't think they came up with a list saying "kernel 2.6.30" a year ago, do you? That list basically ends up saying "use the latest published releases", so, yes, you need to update all the time to keep up with the list.

Gentoo lets you build binary packages too: build the packages on one box and send the binaries to the rest. That way you can update what you need to and leave what you don't. With binary packages, dependencies often have to be updated too; for example, something built against glibc 2.8 will require glibc 2.8 at runtime, even if it's perfectly capable of being compiled against a much older version.
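A sketch of that build-once flow in Portage terms: --buildpkg creates binary packages under PKGDIR on the build host, and clients install with --usepkg/--getbinpkg. The run wrapper below only echoes the commands when emerge isn't present, so this is a dry-run illustration, not a tested deployment recipe:

```shell
# Dry-run-friendly wrapper: execute for real only where emerge exists
run() { if command -v emerge >/dev/null; then "$@"; else echo "would run: $*"; fi; }

export PKGDIR=/var/cache/binpkgs                 # where built packages land (example path)

run emerge --buildpkg --update --deep @world     # build host: compile and save binary packages
run emerge --usepkg --getbinpkg net-misc/ntp     # client: prefer the prebuilt binary package
```

The glibc caveat from the comment applies here too: binaries built against a newer glibc drag that glibc along as a runtime dependency on every client.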

That the fresh versions aren't included in many distros shouldn't be surprising. However, that the DoD demands these bleeding edge versions should raise an eyebrow. I can't see how they can possibly have done a thorough job at certifying something when it's just out the door and bugs are still being weeded out.

If I were to set up a highly secure system, I would probably go with Trusted Solaris from a few years back, or something similarly well tested, like Red Hat Enterprise Linux (kernel 2.6.18) with SElin

Get ready for paperwork! You will need to apply for exceptions for everything that's out of compliance... I've worked in similar institutions, though not the DoD, but most places run this the same way. The list of software in compliance is usually generated by the infosec team, and it's more of a wish-list than a demand... but to pass your audit, you will need to have permission to run out-of-spec software, and document why it's out of spec (vendor doesn't support that ver) and what you're using instead (the ver. the vendor supports). This is generally so the pen-test, NIDS and Intrusion Response people know what they're dealing with.

Have a chat with your info security shop - they'll walk you through it, and they're secretly envious of unix admins. They yearn for your aura of splendor and awe.

Either way there's no excuse to be compiling packages on each server and managing the usual /usr/local & /opt mess. Not to mention, with autoyast, iirc you can configure it to update packages at specific times of the day unless there's a reboot necessary (and even to reboot automagically for new kernels).

Find the email address of the guy who writes those specs (located somewhere on the doc, I'm sure, or on the server that hosts those docs), and email him and ask him where your local repository for the latest .mil-approved packages is.

There are many, many ways to deal with this, but fortunately while DoD says "update to this specific version," what they really mean is "close this specific vulnerability." Get used to hearing about IAVMs and VMS (Vulnerability Management System).

Taking the case of OpenSSL specifically, it's not uncommon for there to be patches released for vulnerabilities affecting a previous version. If you're using a vendor like Redhat (and in the mind of DoD, Redhat/SuSE = Linux, and nothing else) what you'll end up with is a version of OpenSSL that appears vulnerable, but in fact has a backported patch applied to the vulnerable distribution. Once you've applied the updated RPM, you can say in good conscience that you've mitigated the vulnerability, and you can close the finding.
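One way to check this in practice is to grep the vendor package's changelog for the CVE in question instead of trusting the upstream version string. A small sketch, with CVE-2009-1377 (an OpenSSL 0.9.8 DTLS issue) used purely as an example identifier:

```shell
# Check whether a vendor RPM's changelog mentions a given CVE, as evidence
# that the fix was backported even though the version number looks old.
check_cve() {
    pkg=$1 cve=$2
    if rpm -q --changelog "$pkg" 2>/dev/null | grep -q "$cve"; then
        echo "$pkg: $cve mentioned in changelog (likely backported)"
    else
        echo "$pkg: $cve NOT found; investigate further"
    fi
}

check_cve openssl CVE-2009-1377
```

A changelog mention isn't proof the fix is correct, but it's exactly the kind of documented evidence that lets you close the finding with the auditors.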

Where it gets stickier is where you have code that depends on a specific version of a library that might be vulnerable. In that case, you need to dig in and understand the specific uses and how you might be able to mitigate the vulnerability by turning off a publicly listening service or applying some strict file controls, or maybe you don't exercise the vulnerable function in the library and can justify it that way.

Ultimately, you have to be able to convince your DAA (Designated Approving Authority) to accept the risk. If you can't immediately close the issue, you have the option of doing a POA&M (Plan of Action and Milestones) that will outline how you're going to mitigate the issue until you can close it.

There are a ton of resources, but specifically I'd start here:

http://iase.disa.mil

You also might find this interesting as a way to secure Redhat machines:

First of all, if you work for the Navy, the distribution must be within DADMS, so you can't just run any random distribution.
I also run a few linux machines for the DoD (the Navy specifically). The rules are enforced by the scanners. I take the vendor (RedHat in my case) at their word that their backported patch has fixed the issue. Their patch documentation, issued with the security alert, states that they have implemented the fix. The network security scanner doesn't pick up that you have patched it, because the version number doesn't match. I submit RedHat's patch document with the report, as evidence that I have done it. It satisfies the auditors, because, to them, it's no worse than trusting Microsoft that they have patched their stuff.
I don't have the time to investigate and test to see if the vendor actually fixed the problem with their backported patch. I leave that for the security experts to ping on them if they failed to do their job. Besides, that's what I'm paying RedHat for. I don't have the time to make sure that Microsoft fixed all of their stuff either. I patch and go, and document what I have done. As long as there is a paper trail to prove that you have been patching, all is well.

DADMS is DoN-only for a reason; nobody else has the NMCI problem, and it didn't exist prior to NMCI. It's somewhat disconcerting to sit in on a meeting for a joint POR system, and have flag officers wonder WTF the Navy isn't implementing. "Uh, it's not in DADMS, sir." Sparks fly to say the least.

That said, the procedure is pretty simple, and since DITSCAP/DIACAP provide for it, you run specific vendor patches for whatever vendor-supported OS you're running (sorry Gentoo fanboys, roll-your-own isn't allowed in production systems). The Unix SRR script *should* be able to figure out if the backport is applied in a vendor-supplied patch, and pronounce it okay.

(The SRR scripts are publicly-available to everyone; if you're not running a commercial distro, you'll probably get some weird results, but it's still pretty good at picking out possible problems, even on systems that aren't officially-supported. I've run it on everything from Debian (including GNU/Hurd) to OS X. http://iase.disa.mil/stigs/SRR/index.html [disa.mil])

If something is revealed that's not accurate, you document:
a) why you can't fix it (i.e. whatever system is running on top ceases to work, the vendor refuses to fix, the vendor is tango uniform, it's Wednesday and you don't feel like it, etc.),
b) why the scanner goofed up and picked out a problem that doesn't exist (yes, this version is different, but the vendor backported the fix [with proper vendor reference] to this, which is applied), or
c) the fix hasn't been released and fully tested yet.

Cases a and c are what a POA&M is for, which is normally submitted along with the accreditation package, and updated periodically.

RHEL5 is getting a little stale and we often need more recent versions for various reasons; I found that downloading SRPMs from koji.fedoraproject.org and recompiling them on RHEL usually worked. The only annoying thing is that from F11 on the RPM compression has changed and RHEL can't unpack them, so I have to unpack them on my Fedora system first. Then I just build them, sign them with our GPG key, and copy them over to our local repo, and just run "createrepo." It's not that big a deal.
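That routine can be sketched roughly as follows. The SRPM filename and repo path are placeholders, and signing assumes a %_gpg_name already configured in ~/.rpmmacros; the real tool invocations are guarded so this reads as an outline, not a turnkey script:

```shell
# Rebuild a Fedora SRPM on RHEL, sign the result, and publish it to a
# local yum repo. Filenames and paths are examples only.
SRPM=ntp-4.2.4p7-2.fc11.src.rpm
REPO=/srv/repo/rhel5/x86_64

echo "rebuilding $SRPM for $REPO"
if command -v rpmbuild >/dev/null; then
    rpmbuild --rebuild "$SRPM"                      # compile the SRPM on the RHEL box
    rpm --addsign "$HOME"/rpmbuild/RPMS/*/*.rpm     # sign with the local GPG key
    cp "$HOME"/rpmbuild/RPMS/*/*.rpm "$REPO/"
    createrepo "$REPO"                              # refresh metadata for yum clients
fi
```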

Or get the sources straight from <a href="http://fedoraproject.org/wiki/Using_Fedora_CVS">CVS</a> -- the downside is you won't know whether a particular revision results in a successful build or not, without checking Koji.

I am a Linux administrator at a DoD site. I have never seen anything that says that you must run kernel 2.6.30 or anything like that. Can you please provide a link to where you read this? (links to CAC-authenticated websites are ok)

DoD I-8500.2 requires you to run an OS that is EAL certified at a certain level depending on your classification. The only Linux distributions I know of that have EAL certification are SLES (9 and 10) and RHEL (4 and 5). I keep hearing about people that run things like Fedora, CentOS, and Ubuntu on DoD networks, but I have no idea how they get away with that.

As far as software versions go, what versions you must be at are dictated by IAV-A, IAV-B, and IAV-T notices. The IAV-A may say that there is a vulnerability that affects kernel versions < 2.6.30 and that you must go to 2.6.30 to be compliant, but as long as your vendor's kernel version addresses the CVEs that the IAV-A references then you are covered.

In this, like in many other things, the Windows way of thinking has poisoned the issue. The way Windows people think, reinforced by Microsoft's implementation of Patch Tuesday, has been picked up by systems auditors and managers and bureaucrats everywhere. So the mantra today is that you must patch. Hurry! There's a new version! If you don't install it now we're all gonna die! This comes from the fact that that is a pretty simple metric that can be written in policies and checked during audits.

If you lose data or your system gets abused and you're patched to the latest version you're off the hook. If you don't have the latest patch however you're fired. Even if the latest patch fixes a local privilege escalation on libgd2 and all your server does is DHCP and it was actually exploited by someone cleverly guessing your co-worker's password.

Same thing with firewalls: if all you run is a web server, I say you make sure nothing else is running that opens any ports. It's no use to set up a firewall, because the thing that is most vulnerable, port 80, will need to be open anyway. But get caught without a firewall in some places and you're fired.

It's a lot easier to write a meaningless list of requirements than to think about needs and policies and design the requirements

It's a lot safer to follow some dumb list of requirements than to try to understand what your systems are doing and configure accordingly

It's a lot easier for an auditor to check a list of requirements against the output of some version-checker than to actually know what these things do

It's the dumbing down of engineering that passes for systems administration these days. It's the Windows way of thinking.

Congratulations. It's now your job to check every *single* *freaking* *package* where the DISA specs prescribe a particular version, and see whether or not your vendor backported the security fix. Usually the DISA specs will contain a vulnerability id (CVE-ID or similar) that you can reference against. Google is your friend. The overall process is murder. It's a big reason why I got out of government IT.
On a related note, I find the Linux vendor practice of keeping old version numbers, but backporting new fixes into their own trees (Red Hat's "version x.y.z-ELsomeothernumber" system for example) to be categorically infuriating, but that's a different rant.
--jwriney

As a former DoD Linux admin (one of the first for that organization), the best way I've found to keep everything in sync is to build updates yourself (essentially, you're doing the vendor's work for them). I know of the guidelines you speak of and the regular advisories, and it was quite a task to implement something reasonable. In the end though, the only way I could both satisfy the security concerns and maintain the rpm database integrity was to build updated versions of the vulnerable software myse

Run ArchLinux - pacman is *perfect* for this role. Just set up a local repository and have your client image include only that repository. Set up a cron job on the image to do a "pacman -Syu" nightly - that's "update your package list from the repo, and install any newer versions".

Then you have a test system that you can test new versions on, and when you're ready to launch, update that package in your local repo. That night, all your clients will update to the new version.
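A minimal sketch of that setup, with the repo name, paths, and cron line all illustrative. repo-add builds the pacman database that clients sync against:

```shell
# Server side: maintain a local pacman repository. Once a package passes
# the test system, drop its .pkg.tar.* here and re-run repo-add.
REPO_NAME=local-repo
REPO_DIR=/srv/pacman/$REPO_NAME

echo "publishing to $REPO_DIR/$REPO_NAME.db.tar.gz"
if command -v repo-add >/dev/null; then
    repo-add "$REPO_DIR/$REPO_NAME.db.tar.gz" "$REPO_DIR"/*.pkg.tar.*
fi

# Client side: nightly unattended update, e.g. as a crontab entry:
#   0 3 * * * /usr/bin/pacman -Syu --noconfirm
```

Clients would carry a pacman.conf whose only [repo] section points at this server, so the nightly -Syu can only ever pull what you've vetted.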

I do IA work for the DoD. I primarily do Certification and Accreditation for the Department of Navy.
The DoD 8500.2 controls require your operating systems to be Common Criteria certified. The EAL level is going to depend on your classification. There are several Linux distributions that have gone through the certification process [commoncriteriaportal.org]. For specific versions of specific software (Linux kernel, OpenSSL, etc.) you're probably referring to the IAVA (IAV-A, IAV-B, IAV-T) notices. These are specific known vulnerabilities that usually come from CVE [mitre.org] or some other repository. They change as often as I change my underwear (insert joke about average slashdotter here). It would be impossible to keep a system up to date without significantly breaking functionality.

The thing I keep seeing is lazy DISA auditors that see the STIGs [disa.mil] as black and white. Most of the testers I've run into aren't technical people. They run the automated SRR scripts [disa.mil] and ding you for having your kernel version out of spec. If I were to sit them down and ask why a particular control was an open finding they'd tell me "Because the STIG said so" without digging deeper as to why.

The most recent test I was on, the testing team hit the sysadmins for an out-of-date kernel on a VMware ESX box. VMware uses a highly customized version of RHEL. Installing the most recent kernel would turn the box into a paperweight. The best advice I can give you is to first check with the tester to find out exactly what the vulnerability is and what their recommended fix action is. Depending on your tester you may be wasting your time. I've seen far too many testers leave comments like "Not up to STIG compliance".
Check with your vendor to see if they have issued a patch to address that vulnerability. Once you have that information you can place your comments into a POA&M and go back to your DAA and explain why a given open finding isn't really a finding and/or won't be fixed. You can also look into mitigation factors to see if you can reduce the severity. Many controls will state "If you're doing X, Y and Z this finding may be reduced from a CAT I to a CAT II".

Good luck with your C&A and be glad you're not on the documentation side of things:^)

Set up a few servers to host the patches and pay someone to turn those newer versions into valid rpm and deb packages so they can be used to patch said systems.

Past that, the policy needs to be reviewed. If they're patching because of a physical vuln vs. a remote attack then that's just plain stupid. If the system is properly secured according to TS/SCI etc. then it's not a huge issue to ensure upgrades for at-the-keyboard attacks. If you lack phys security then you have much bigger problems, any linux syste

You ought to get to work polishing your resume. Discretion is key in the defense community and blabbing about your job on Slashdot is a bad idea. For a start you should have phrased the question in a more vague manner rather than outright naming your employer. They will find out who you are and you can kiss any access permissions goodbye in the near future.

While it might be, is that any reason not to want to find a software solution to make his life easier? Heck, I thought that was what software solutions were all about? Also, have you considered it might not be his only job?

Ya, I'm pretty sure that's a no-no. Unless he didn't really work for DoD, and he was just hoping that blurb would make him brownie points with the Slashdot crowd. Reading the comments so far, it doesn't look like it made him any new friends. :)

I hadn't bothered to look, but if you look a little bit... Well, let's just make a trail.

From the posting, you see his Slashdot username.
From the username, you can look at his public Slashdot profile.
From his profile, you can see his site.
From his site, you can see where he's deployed; what he brought with him; the names, types, brands and models of equipment; the typ

You hit the nail pretty much on the head. There are good reasons DoD personnel go through all those "silly" information awareness courses. Apparently, none of that sunk in with this individual.
I hadn't bothered to follow the trail at all, knowing others undoubtedly already have.