Posted
by
kdawson
on Friday July 18, 2008 @07:14AM
from the remind-me-of-your-name-again dept.

alexs writes "Red Hat's response to updating bind through RHN, patching the DNS hole, contained a fatal error which reverts all name servers to caching-only servers. This means that anyone running their own DNS service promptly lost all of the DNS records for which they were acting as primary or secondary name servers. Expect quite a few services provided by servers running RHEL to, errr, die until their system administrators can restore their named.conf. Instead of installing the new /etc/named.conf as /etc/named.conf.rpmnew, Red Hat moved the current /etc/named.conf to /etc/named.conf.rpmsave and replaced /etc/named.conf with the default caching-only configuration. The fix is easy enough, but this is a schoolboy error which I am surprised Red Hat made. Unfortunately we were hit and our servers went down overnight while RHN dropped its bomb, and I am frankly surprised there has not been more of an uproar about this."

Actually, I caught the error just from looking at the output of up2date/yum. It clearly said named.conf saved to named.conf.rpmsave. So all you have to do is compare what changed, merge any changes you need, and copy named.conf.rpmsave back over named.conf.
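
For anyone cleaning up, the recovery amounts to diff-then-restore. A minimal sketch using a scratch directory and made-up file contents (not a live /etc; the zone line is purely illustrative):

```shell
# Recreate the situation in a scratch dir: the update moved the real config
# to named.conf.rpmsave and dropped in a caching-only default.
tmp=$(mktemp -d)
printf 'zone "example.com" IN { type master; file "example.com.zone"; };\n' > "$tmp/named.conf.rpmsave"
printf '// caching-only default\n' > "$tmp/named.conf"

# Step 1: see what the update actually changed before blindly copying back.
diff -u "$tmp/named.conf" "$tmp/named.conf.rpmsave" || true

# Step 2: restore the saved config over the default (merge by hand first if
# the diff showed anything from the new file you need, e.g. port changes).
cp "$tmp/named.conf.rpmsave" "$tmp/named.conf"
```

On a real box you would run the diff against /etc/named.conf and /etc/named.conf.rpmsave, merge anything you actually want from the new default, then restart named.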

Just as I said on the day of the release, be careful, don't just blindly update things.

People make mistakes. Even _________ (insert linux vendor here) packagers. This patch should be remembered and trotted out every time a new sysadmin is taught how to do the job. Remember the DNS bind patch of '08? That's why you test before patching production servers.

Oh, and the 'I don't have enough money for a spare of every server' excuse won't play. That's one reason to buy consistent models. Your test equipment can serve as emergency replacements or vice versa. And if all else fails, testing on

You know, not everyone has non-production servers. Every server we have IS production. And if you are paying for Red Hat Enterprise, you expect Red Hat to have tested these updates themselves. If this was a Microsoft error, Slashdot would be all over Microsoft for allowing this to happen.

To be sure, Red Hat ballsed it up, but if you're running a service that "can't go down", you HAVE to test your patches. If you don't have spare physical machines, test them in a virtual machine, or repartition your workstation to make enough room for a server install.

If it's important enough to not go down, it's important enough to test.

>You know, not everyone has non-production servers. Every server we have IS production. And if you are paying for Red Hat Enterprise, you expect Red Hat to have tested these updates themselves. If this was a Microsoft error, Slashdot would be all over Microsoft for allowing this to happen.

You are wrong; stop whining. You're just painting yourself as misinformed.

1) The updates WERE tested.
2) The admin installed "caching-nameserver", then configured his install to act far outside the default.
3) He allows automatic updates straight into production. So do you, it seems. Good luck with that! RHEL documentation says not to do this, but you're a bigshot "paying" for something different. I suggest you get a sidekick, and stick to the Windows side of your "enterprise".
4) He didn't revert his .conf file, as is usually needed when a new line is added to a server .conf. This is SO NORMAL you'd have to be a n00b to get bitten!

Your MS comparison is apples and oranges. If this guy did TEN MINUTES worth of testing he'd realize something's up, and he could revert the rpm package. How many MS updates prohibit uninstall? Quite a few!

In Windows, you can't diff the before & after config; Windows admins would rather stay blind to what they're installing, because that's the norm and it's accepted.

You are right about making do with what you get, but exactly how did he lack resources in this case? He already has RHEL (and updates, so I'm guessing his support contract is up to date).

It's not like they're charging more for a non-caching domain name services server. In fact, he took a perfectly good non-caching name server, and then installed pre-packaged configuration files to make it a caching-nameserver. Then he started hacking away at the config file. Small wonder that fixes to the caching-namese

What the fuck is wrong with you people? You think every System Admin out there had just one job to do and that's administer the servers? In my job I do everything. VOIP Phones, new employee setup, updates, backups, desktop support, fix the copier, follow up with accounting and executive assistant as to why we ran out of paper yet again etc. etc. etc. The point is the company SHOULD hire another IT person but they can't afford it and there is no freakin way I could ever test every update that comes out. Of c

You mean to tell me you don't even have an old desktop machine sitting around with RHEL on it to "play" with?
Come on, pull the other leg. Or maybe find a new line of work.
Not being able to afford non-production servers and a test lab is one thing, but not taking the old computer you replaced on the secretary's desk and using it to do some basic testing of mission-critical updates is ridiculous.
Or hell, just dual boot your machine if it comes to that. You have to do SOME testing of SOME things.

And/or, if your company is so broke it can't afford a couple hundred bucks to put together a low-end box to run update tests on, then you are doomed anyway.

"Free Software", even when serviced and supported by a corporation such as RedHat, is about knowing WTF you are doing and being responsible for your own stuff, as opposed to being a drooling button-pusher assuming everyone else will take care of you and suing them when they don't.

I'd rather have three layered 98%-secure systems than one 99.9%-secure system (100% will never exist).
It lessens the chance of being attacked by someone who knows how to defeat each layer. Also, if a 99.9% system could exist, I'd just put it under or atop two or three other security layers, just in case. This is especially true when each layer can be put in place with almost zero effort (Java policies, chroots, filesystem privileges).

Note as well that the initial release included a default conf file which specified a fixed source port, which of course breaks the fix.

[Updated 10th July 2008]
We have updated the Enterprise Linux 5 packages in this advisory. The
default and sample caching-nameserver configuration files have been updated
so that they do not specify a fixed query-source port. Administrators
wishing to take advantage of randomized UDP source ports should check their
configuration file to ensure they have not specified fixed query-source ports.
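
Checking for the fixed-port line is a one-line grep. A hedged sketch against a throwaway config file (the sed rewrite is illustrative; on a real server you'd edit named.conf by hand and reload named):

```shell
tmp=$(mktemp -d)
# A config still carrying the old fixed-port line (the thing that breaks the fix):
cat > "$tmp/named.conf" <<'EOF'
options {
    directory "/var/named";
    query-source port 53;
};
EOF

# Flag any fixed query-source lines...
grep -n 'query-source' "$tmp/named.conf"

# ...and comment them out so BIND picks random UDP source ports.
sed -i 's|^[[:space:]]*query-source.*|    // removed: let BIND randomize source ports|' "$tmp/named.conf"
```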

Personally I'm surprised there's not been more uproar about the requirement to move internal DNS servers (yes, that means your Windows Domain Controllers in most corporate environments) outside any NATing devices (eg: firewalls), as many NATs also break the fix by rewriting outbound UDP DNS queries to use the same or incrementing source ports. Anyone here moved their AD outside the firewall?

Apparently [checkpoint.com] they did; either that, or Checkpoint's protection feature for the general class of DNS poisoning attacks just happened to protect against this one too. However, even if it did protect against it, I doubt they could have released a same-day press release saying so if they hadn't been notified of the vulnerability ahead of time.

Judging by the CERT details, it sounds like there are two things you need to do: you need to be able to predict the 16-bit random transaction number, and the 16-bit random port. My reading (and this was very brief, so someone *please* correct me if I'm wrong here) is that the older DNS servers had two flaws: a flaw in the RNG for the 16-bit transaction number, and fixed or predictable ports.

A NAT will reintroduce only the second problem because it gives you predictable ports, but obviously, relying solely on the unpredictability of a 16-bit transaction id is a little scary. Because of the birthday paradox, (assuming the attacker has perfect knowledge about which port you're choosing) an attacker would need to send only something on the order of 2^8 packets to poison the cache.

No, the birthday problem doesn't apply when you are trying to match a specific person's birthday.
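
Both readings can be made precise. As a rough model (an assumption of this sketch, not something either poster spelled out), suppose n spoofed responses race against k outstanding queries, each query carrying an independent uniform 16-bit transaction ID:

```latex
P(\text{success}) \;=\; 1 - \left(1 - \frac{k}{2^{16}}\right)^{n} \;\approx\; 1 - e^{-nk/2^{16}}
```

With k = 1 (matching one specific ID, the parent's point) even odds take n ≈ 2^16 ln 2 ≈ 45,000 packets; only if the attacker can keep roughly k ≈ n queries in flight at once does the birthday-style n ≈ 2^8 figure apply.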

Personally I'm surprised that this is getting modded Informative. I suppose the NAT piece is informative, but I think "Anyone here moved their AD outside the firewall?" qualifies as either -1 Job or +5 Funny.

Hand off DNS queries emerging from AD servers inside your firewall to caching-only servers in your DMZ. I have all my AD servers on RFC1918 IP numbers with no NAT, because they strike me as devices I'd prefer to keep as far away from the big bad Internet as possible.

I'm not sure what you're getting at with building from sources. Seems like overkill and doesn't solve the main problem because you can still screw it up. All anyone's saying is that you should test this on a server that you don't care about, or at least test it on one, before upgrading all of them.

The thing you're forgetting is that Microsoft REGULARLY does that, even with irrelevant minor updates. That's why people get so worked up about Microsoft. They're going to let this Red Hat incident slip by because Red Hat doesn't have a track record of messing things up.

Yup. I've got all the torches, can you grab an extra pitchfork? I'll pay you back when we get to the castle.

Seems like the general opinion is that no admin worthy of avoiding the boiling-oil treatment would have applied the patch blindly to a production environment, but that still doesn't let Red Hat off the hook.

It is not a bug to get a caching nameserver if you install caching-nameserver... it would be a bug to install caching-nameserver and NOT GET a caching nameserver.
A caching name server IS one that does not have any zones and only looks up zones from the DNS root servers. It is a configuration error to install the caching-nameserver package on a machine that is doing anything other than being a caching name server.
Stupid admins have been complaining about this for 5 years... but the documentation and bug entries all make it clear NOT to install the caching-nameserver package on DNS servers that control zones.

Yes but a caching name server is (or at least was for a long time) the Red Hat default. Go figure. That's all most people want or need. Any bets that lots of the people who got bit with this built the machine internally with the caching name server RPM installed and then just edited or copied over the production named.conf file to turn it into their "real" name server when the box went into production?

I guess the sysadmins could put an option in a configuration file somewhere listing which files to "keep untouched" during package upgrades, no? So that the configuration file wouldn't be overwritten. I think I've seen something similar in Debian distros. Anyway, when I install a new (custom) kernel in Ubuntu for example, synaptic asks me whether I want to overwrite GRUB's menu.lst with the newly generated one, view the differences, keep my old one, etc. Surely there's something similar in Redhat?
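
The cautious behavior being asked for can be sketched in a few lines of shell (a toy model of dpkg-style conffile handling, using a scratch directory and hypothetical file contents, not the actual dpkg or rpm code):

```shell
tmp=$(mktemp -d)
printf 'my customized config\n' > "$tmp/named.conf"      # the admin's edited file
printf 'new package default\n'  > "$tmp/named.conf.pkg"  # the incoming default

# Never silently overwrite a config that differs; park the new default
# alongside it for the admin to inspect and merge.
if cmp -s "$tmp/named.conf" "$tmp/named.conf.pkg"; then
    mv "$tmp/named.conf.pkg" "$tmp/named.conf"        # unmodified: safe to replace
else
    mv "$tmp/named.conf.pkg" "$tmp/named.conf.rpmnew" # modified: keep admin's file
fi
ls "$tmp"
```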

Umm... I disagree completely. The only way I would consider a patch "put out properly" is if it was tested in my exact, or near-exact, environment. I can only assume that I'm not important enough for that.

The user has misconfigured their DNS and has installed a package called, SURPRISE, caching-nameserver along with the other bind packages.

caching-nameserver IS just that, a caching-nameserver. It SHOULD NEVER BE installed on a DNS server that is used for Primary or Secondary DNS control. The bind packages do not in any way modify named.conf, but if you want a caching nameserver and if you have installed the caching-nameserver package, then you would EXPECT that it would replace the named.conf file.

The real question is: how does crap like this get posted as a feature article on Slashdot?

BUT... how can you create a caching nameserver without changing that file???
If you do not change that file, you do NOT have a caching nameserver... which was the whole point of installing that package.

I'm not familiar with the package in question, but I assume it also installed some binaries. If it found that there was already a config file of that name, it should have asked what to do.

If setting up the caching-nameserver was a matter of changing config options, you don't need a package for that, you need a HOWTO.

I would hazard to guess that unfamiliarity with the package is the real root cause of this. From the package description for caching-nameserver-7.3-3 (which could be a very old version):

The caching-nameserver package includes the configuration files which will make BIND, the DNS name server, act as a simple caching nameserver. Many users on dialup connections use this package along with BIND for such a purpose.

If you would like to set up a caching name server, you'll need to install the caching-nameserver package; you'll also need to install bind.

And so there we have it - a package designed to install and maintain the very generic files needed to configure a caching DNS server. DNS server not included.

And sure - this could be a HOWTO. But making a package allows for quick-and-simple configuration. And since this kind of thing is so generic, it really lends itself to packaging. I disagree that it should only be a HOWTO.

What kind of environment are you in where you don't first test patches that are going out to live production machines? Regardless of the fact that it is Linux and not Windows, you should always test your patches before you roll them to production.

> What kind of environment are you in where you don't first test patches that are going out to live production machines? Regardless of the fact that it is Linux and not Windows, you should always test your patches before you roll them to production.

Disclaimer: I test first.

You know, a lot of people work in small shops that can't afford multiple redundant servers. I suspect that businesses with a single DNS/web/mail server are a lot more common than Slashdotters this morning seem to think. What are those admins supposed to do? They're receiving a critical security patch from a trusted vendor, and I imagine a lot of them feel pretty safe applying it to their sole production server. This doesn't make them stupid or incompetent.

I have the luxury of lots of hardware that can fill in for other gear in a pinch, but lots of people don't. They don't deserve scorn for it.

Couple of points. Firstly, the smart but under-resourced admin should use this incident as evidence of what happens when management won't cough up for required equipment. A test environment is not a luxury; it's a necessity for a reliable system. Secondly, something like VMware can let you set up and test an environment on whatever hardware you do have lying around - you could even run it on the prod box at a pinch.

You nailed the demographic, but these are EXACTLY the group that should not be running their OWN exposed servers. This would be true for any server, but doubly so for DNS.

DNS hosting is cheap and the expense is predictable (unlike the commotion raised when "Windows IT" people start whining about what they "pay Red Hat for"). UNIX does what you ask. If you ask for 10 meters of rope, tie it to a beam and stick your head in... it will let you.

A few months prior to the release of RHEL 5.2, they released a kernel update (2.6.18-53.1.6.el5) which added a patch for an issue that could make a system oops when files with certain characters in their names were present on NFS shares. However, this patch also contained a bug which broke NFS lookup caching and subsequently crippled NFS performance, to the point of NFS being completely unusable when working with many smaller files. They released a patch for it, but it would only apply cleanl

Red Hat makes this mistake a LOT. It makes the update process very unreliable. SuSE isn't as bad but they still have problems if you customize a piece of software's configuration in an unexpected way.

Debian is king here. The incremental patches almost never break a configuration and the major release upgrades tend to work; they often change package names if the new "version" has a major incompatible change in the configuration.

openSUSE has warnings in those files NOT to use them, and tells you which one you need to use instead. E.g. for Apache's httpd.conf it says: "# If possible, avoid changes to this file." It then goes on to explain a lot about what to place where.

There are many other files like that. Another example, from /etc/bash.bashrc: "# PLEASE DO NOT CHANGE /etc/bash.bashrc. There are chances that your changes # will be lost during system upgrades. Instead use /etc/bash.bashrc.local # for your local settings, favourite global aliases, VISUAL and EDITOR # variables"

> openSUSE has warnings in those files NOT to use them and tell you which one you need to use.

True enough. Doesn't help when I want the application to do something it is capable of but which wasn't envisioned by the SuSE packagers. Like binding sendmail to a non-privileged port as a non-privileged user and then using iptables to redirect port 25 up to that port.

But you should be testing things like this first, and whenever you upgrade you should really be looking for all .rpmsave (or equivalent) files first, to make sure nothing has changed in the meantime. Otherwise you're just removing your config and replacing it with the default, whatever happens. You should also be checking each .rpmnew (or equivalent) to see whether anything has changed in terms of syntax, defaults, etc. (which, let's be honest, is quite likely for such an important update - especially given that we hardly know what the exact problem is yet). I wouldn't go so far as to suggest intimate analysis of packages while they are still packed, unless the systems you are running are quite critical to the operation of a business.
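
That post-update sweep for leftover files is one find command. Sketched here against a scratch directory with made-up file names (point it at /etc on a real system):

```shell
tmp=$(mktemp -d)
touch "$tmp/named.conf" "$tmp/named.conf.rpmsave" "$tmp/sysctl.conf.rpmnew"

# After any update run, list the leftovers the package manager created;
# every hit is a config file you need to inspect and merge.
find "$tmp" \( -name '*.rpmsave' -o -name '*.rpmnew' \)
```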

Part human-error on RH's part (it happens). Part incompetence in not testing the updates yourself first. Chances are that if I were affected by this, I would catch it as part of "right, what did that package change?", or notice as part of usual testing later, and then just move the file. I probably wouldn't even bother to send RH a note.

If you have a DNS server, that suggests that there are reliant computers. As courtesy to all those reliant computers you HAVE to test changes and check carefully what they are doing first. If you were "stung" by it, it suggests you hit this problem on ALL your DNS servers and/or that you only have one DNS server anyway. To deploy packages like this on such a setup is just asking for trouble.

Red Hat will create an rpmsave file when we make a significant change to the configuration file, or a mandatory change. Other than that, we keep the original config file, and store the rpm-config as rpmnew.

Have you considered using a configuration management tool such as Bcfg2 [bcfg2.org] or cfengine to make sure your own config files are restored after package updates are made? You can never really trust those package maintainers...
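
The core of what those tools do for a single file can be sketched in a few lines of shell - a toy convergence check under made-up paths, not a substitute for Bcfg2 or cfengine:

```shell
tmp=$(mktemp -d)
printf 'my real zones and options\n' > "$tmp/named.conf.good"  # known-good copy
printf 'caching-only default\n'      > "$tmp/named.conf"       # what the update left behind

# Toy convergence pass: if the live file drifts from the known-good copy,
# put it back. (A real tool would also reload named and log the event.)
if ! cmp -s "$tmp/named.conf" "$tmp/named.conf.good"; then
    cp "$tmp/named.conf.good" "$tmp/named.conf"
    echo "named.conf drifted; restored known-good copy"
fi
```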

I must say that I am very surprised that this patch acted one way in the poster's test environment and another when it was installed on their production machine... That's very odd.

What, he didn't test it before placing it in production? Never mind, move along - nothing to see here.

If the poster made an error (as suggested by a previous post), or if he installed a patch without testing it, bad on the original poster - but if the patch truly was bad (a possibility), then bad on RHN for letting something bad out.

Does the red hat version of apt-get (yum? I've used debian exclusively for a long time, so I forget what command it was) not prompt you when it wants to overwrite a config file? On any debian (or debian derived) machine I've used, apt-get always asks what you want to do if your config file is different than the package's.

This is news? Redhat (like every OS vendor I've ever dealt with) has been pushing out updates with broken assumptions for years.

In fact, this isn't even the first time they've done something similar when updating bind: back in 2004 they released RHEL 3 Update 4 and many people had precisely the same experience. Additionally, when applied, Update 4 removed /etc/rc*.d/S*named and /etc/rc*.d/K*named and then shut named off.

As a quick glance at Red Hat's bugzilla [redhat.com] shows, the first problem (the same one you experienced in this release) wasn't a schoolboy mistake on the packager's part, or a bug. It was the result of a poorly understood choice by the person who originally provisioned the machine.

Rather than installing just the original bind-9.2.4, the people who had their named.conf overwritten had installed bind plus a package called caching-nameserver. It's that package that, when updated, backed up and overwrote their bind config. The "caching-nameserver" package should only be installed if you want to run a caching nameserver, because the caching-nameserver package isn't an application at all - it's simply a named.conf file.

The real bug (back in 2004) wasn't actually in Update 4's bind package. As it turns out, the package it replaced incorrectly contained a `chkconfig --del named` in its uninstall script.

Anyone without proper alerting and a good QA process found that one out the hard way. I had customers who'd gotten so blasé about performing nighttime maintenance without proper reversion testing that they scheduled nightly cron jobs that ran up2date at midnight and rebooted the production machine. Naturally, they woke up in the morning to find they'd just suffered 8 hours of downtime.

This sounds like how RPMs have behaved as long as I can remember. It looks at three versions of a config file: #1 the one from the old package, #2 the one currently on disk, and #3 the one in the new package. If the config file hasn't been customized (1 and 2 are identical), it moves the old file to .rpmold (if 1 and 3 differ) and puts #3 into place. If the config file has been customized, it checks whether 1 and 3 differ. If they don't, then nothing's changed, the customized config file is still valid, and it drops #3 in with the .rpmnew extension. But if 1 and 3 differ, then something in the config file may have changed and the customized config file may no longer be valid. But it's got customizations in it that the admin may need to refer to. So it outputs a warning message about what it's doing, moves the customized config file to .rpmsave, installs #3, and the admin is expected to have seen the warning and to merge their customizations into the new config file. You do watch for warnings and errors during the update, right?

In this case RPM is right, old named.conf files aren't valid. If they're based off RH's old stock config files, they have the source port locked and that disables much of the security fix. So the admins do have to check and modify their customized files before the system's finally ready (or at least RPM has to assume they do, since it can't know exactly what their changes were). That's exacerbated by probably having caching-nameserver installed, but I think a stock BIND install has a similar named.conf until you add your own zones to it.

I'd chalk this one up to admins who a) don't understand an inherent limitation of package-management systems (namely, it doesn't know why you changed something, only that you changed it), b) didn't watch the update process for errors, and c) didn't check the systems for functionality after the update.
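
That three-way comparison can be modeled as a tiny shell function - a simplified sketch of the policy, not RPM's actual code, and it ignores the %config(noreplace) variant:

```shell
# decide OLD DISK NEW: prints what the package manager should do with a
# config file, given the old package's copy, the on-disk copy, and the
# new package's copy.
decide() {
    if cmp -s "$1" "$2"; then
        echo "install new file"               # admin never touched it
    elif cmp -s "$1" "$3"; then
        echo "keep customized file"           # package default unchanged
    else
        echo "save as .rpmsave, install new"  # both changed: admin must merge
    fi
}

tmp=$(mktemp -d)
printf 'default\n'     > "$tmp/old"
printf 'customized\n'  > "$tmp/disk"
printf 'new default\n' > "$tmp/new"
decide "$tmp/old" "$tmp/disk" "$tmp/new"   # prints: save as .rpmsave, install new
```

The named.conf incident is the third branch: the admin had customized the file AND the package shipped a new default, so the customized copy went to .rpmsave.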

Don't entrust a function like DNS to a single vendor. With some services that is hard, as authors support a limited range of OSes/hardware or charge too high a price per installation to make redundancy affordable.

But not DNS. Free solutions abound, and the commercial ones are quite cheap too. They are available for every imaginable "server-grade" OS/hardware combination. If you use more than one server for DNS in your enterprise and all of them run the same platform, you aren't doing your job.

Mind you, I don't blame the victims here — Red Hat screwed up royally, and that's that. Just advising on how to avoid being hit by such (inevitable) mistakes — from any vendor — in the future.

The accountants doing the computation just called and they need a little more time before they have a good estimate on the cost of this: they are waiting for their computers to finish defragmenting their disks and for their antivirus apps to scan every word file for vba worms.

Because the named.conf file gets stomped, the 'backup' RPMSAVE file it creates is the caching-only file, not the original named.conf file.

I caught this a couple of weeks ago on a test server (where *all* patches should be tested first, Microsoft or otherwise). Best way to fix? cp /etc/named.conf /root/named.conf.backup ; up2date --nox -u ; cp /root/named.conf.backup /etc/named.conf ; /etc/init.d/named restart

On most (all?) other distros it works perfectly. I ran Debian for ages in production (supporting piles of services) with apt-get update/upgrade running regularly. SuSE and Gentoo also do a good job of keeping you informed about changes in updates and whether post-update human interaction is needed.

The crucial difference here is the mindset of RH. It hasn't changed one damn iota in a decade. The very same problem that made me throw RH6/7 out of production in the past, the very same RH stupidity, is still there.

RH is the only distro I have ever tried - and I have tried many of them - that would silently, without any warning or prompt, replace your config files with the shipped version. It took them ages to learn that files can be renamed - and it seems even that lesson didn't go through completely.

This is not a single mistake. It has been happening for more than a decade now: RH during maintenance can and does override your configuration. The RH folks simply have no basic respect for their users...

> RH is the only distro I have ever tried - and I have tried many of them - that would silently, without any warning or prompt, replace your config files with the shipped version.

First, it doesn't do this without any warning... the output of rpm (which does the actual install) is forwarded to yum, or RHN, or whatever is running the "figure out everything I need and get it" process, and that is displayed to you when you are applying the patch. It clearly states in that output what happened with the file.

Second, for some updates (particularly security updates like this one), it is appropriate to save the old config file and load a default one, especially if that default one helps provide

I just pulled up the SRPM and looked, and bind-chroot has:
%ghost %config(noreplace) %prefix/etc/named.conf
%ghost %config(noreplace) %prefix/etc/named.caching-nameserver.conf
%ghost %config(noreplace) %prefix/etc/rndc.key
It should not replace that file with an .rpmsave file.

There is a good reason this was tagged, "kdawsonfud." The person who reported the problem had caching-nameserver installed, not just bind, which explains why we aren't seeing widespread outages; most people don't install caching-nameserver when they don't want a caching nameserver.

Did the OP have the package caching-nameserver installed? If so, that package's whole point is to change the bind configuration into doing just caching.

I PAY REDHAT GOOD MONEY FOR THIS!

I don't need you implying that they can't prevent my mistakes, or read my mind.

(Joking, but look at all the "if this were Microsoft..." people skimming right OVER this fact. You saw it, I saw it, and that should be enough to shut them up... and never mind the bad practices by the submitter. He installed the WRONG PACKAGE, folks.)