Critical crypto bug in OpenSSL opens two-thirds of the Web to eavesdropping

For a more detailed analysis of this catastrophic bug, see this update, which went live about 18 hours after Ars published this initial post.

Researchers have discovered an extremely critical defect in the cryptographic software library an estimated two-thirds of Web servers use to identify themselves to end users and prevent the eavesdropping of passwords, banking credentials, and other sensitive data.

The warning about the bug in OpenSSL coincided with the release of version 1.0.1g of the open-source program, which is the default cryptographic library used in the Apache and nginx Web server applications, as well as a wide variety of operating systems and e-mail and instant-messaging clients. The bug, which has resided in production versions of OpenSSL for more than two years, could make it possible for people to recover the private encryption key at the heart of the digital certificates used to authenticate Internet servers and to encrypt data traveling between them and end users. Attacks leave no traces in server logs, so there's no way of knowing if the bug has been actively exploited. Still, the risk is extraordinary, given the ability to disclose keys, passwords, and other credentials that could be used in future compromises.

"Bugs in single software or library come and go and are fixed by new versions," the researchers who discovered the vulnerability wrote in a blog post published Monday. "However this bug has left a large amount of private keys and other secrets exposed to the Internet. Considering the long exposure, ease of exploitations and attacks leaving no trace this exposure should be taken seriously."

The researchers, who work at Google and software security firm Codenomicon, said even after vulnerable websites install the OpenSSL patch, they may still remain vulnerable to attacks. The risk stems from the possibility that attackers already exploited the vulnerability to recover the private key of the digital certificate, passwords used to administer the sites, or authentication cookies and similar credentials used to validate users to restricted parts of a website. Fully recovering from the two-year-long vulnerability may also require revoking any exposed keys, reissuing new keys, and invalidating all session keys and session cookies. Members of the Tor anonymity project have a brief write-up of the bug here, and this analysis provides useful technical details.

OpenSSL is by far the Internet's most popular open-source cryptographic library and TLS implementation. It is the default encryption engine for Apache and nginx, which together, according to Netcraft, run 66 percent of websites. OpenSSL also ships in a wide variety of operating systems and applications, including the Debian Wheezy, Ubuntu, CentOS, Fedora, and openSUSE Linux distributions, as well as OpenBSD and FreeBSD. The missing bounds check in the handling of the Transport Layer Security (TLS) heartbeat extension affects OpenSSL 1.0.1 through 1.0.1f.
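To make the defect concrete, here is a toy model of the heartbeat handling in Python. The real code is C inside OpenSSL (the fix touches ssl/d1_both.c and ssl/t1_lib.c), so this is a sketch of the logic error only, not the actual implementation: the claimed payload length is trusted without being checked against the bytes actually received.

Code:

PROCESS_MEMORY = bytearray(b"request data...private key bits...session cookies...")

def handle_heartbeat_vulnerable(msg: bytes) -> bytes:
    # RFC 6520 heartbeat: 1-byte type, 2-byte payload length, payload, padding
    claimed = int.from_bytes(msg[1:3], "big")   # attacker-controlled
    # Missing bounds check: 'claimed' may exceed the payload actually sent,
    # so echoing 'claimed' bytes reads past the request into adjacent heap
    # memory -- up to 64KB, since the field maxes out at 65535.
    return bytes(PROCESS_MEMORY[:claimed])

def handle_heartbeat_fixed(msg: bytes):
    claimed = int.from_bytes(msg[1:3], "big")
    # The 1.0.1g fix: silently drop any heartbeat whose claimed length plus
    # the 3-byte header and minimum 16 bytes of padding exceeds the message.
    if 3 + claimed + 16 > len(msg):
        return None
    return msg[3:3 + claimed]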

The bug, which is officially referenced as CVE-2014-0160, makes it possible for attackers to recover up to 64 kilobytes of memory from the server or client computer running a vulnerable OpenSSL version. Nick Sullivan, a systems engineer at CloudFlare, a content delivery network that patched the OpenSSL vulnerability last week, said his company is still evaluating the likelihood that private keys appeared in memory and were recovered by attackers who knew how to exploit the flaw before the disclosure. Based on the results of the assessment, the company may decide to replace its underlying TLS certificate or take other actions, he said.

Attacking from the outside

The researchers who discovered the vulnerability, however, were less optimistic about the risks, saying the bug makes it possible for attackers to surreptitiously bypass virtually all TLS protections and to retrieve sensitive data residing in the memory of computers or servers running OpenSSL-powered software.

"We attacked ourselves from outside, without leaving a trace," they wrote. "Without using any privileged information or credentials we were able steal from ourselves the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business critical documents and communication."

They called on white-hat hackers to set up "honeypots" of vulnerable TLS servers designed to entrap attackers in an attempt to see if the bug is being actively exploited in the wild. The researchers have dubbed the vulnerability Heartbleed because the underlying bug resides in the OpenSSL implementation of the TLS heartbeat extension as described in RFC 6520 of the Internet Engineering Task Force.

The bug, which is officially referenced as CVE-2014-0160, makes it possible for attackers to recover up to 64 kilobytes of memory from the server or client computer running a vulnerable OpenSSL version

This bit needs to be emphasised. The bug doesn't allow you to grab private keys. It allows you to grab a small chunk of memory from the target. This *might* contain private keys, or nuclear launch codes, but it's just as likely (or more likely, depending on your level of paranoia) that it contains junk, or is empty.

(I haven't seen anything that notes if it allows an attacker to control the memory chunk it reads, or if it's arbitrary, which would change things up a bit).

The article did say the researchers were able to attack themselves from the outside using no privileged information and recover "the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business critical documents and communication". Unless you want to assume they're lying, that seems pretty devastating.

Whether or not the attacker is able to control which memory chunk it reads, it's obviously an exploitable hole in practice. Just contemplating the fallout from this gives me heartburn, and I'm not even a sysadmin. Given that the attack leaves no log traces, I don't see any way to safely handle this besides getting the patch out there and then revoking and reissuing every certificate and credential. And even then, there's no guarantee the attacker wasn't able to install some kind of backdoor while they had access.

Edit: Ninja-ed.

I wonder how it is that it leaves no trace? Clearly information has to be sent out from the vulnerable server for the attacker to get it. Is there just no way to distinguish it from normal traffic?

If it's a man-in-the-middle then the attacker already has access to the info, they're tapping the wire. No need to touch the server aside from reading the key. The diagram used for the article suggests it is a MITM.

The Heartbleed bug is not a man-in-the-middle attack. It exploits OpenSSL's handling of TLS's heartbeat, which is an encrypted portion of the connection. The reason it leaves no traces is not because it can't be detected, but because it's just not logged by OpenSSL. Theoretically it could be logged (though there might be enough false positives that it's not helpful... I've not found out whether that's the case yet).
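For what it's worth, a plaintext probe is detectable at the wire level. Here is a minimal sketch of the kind of check an IDS rule performs, assuming record holds one raw TLS record; note it only catches heartbeats sent before the handshake finishes encrypting things, which is how the public exploit scripts send them:

Code:

import struct

def looks_like_heartbleed(record: bytes) -> bool:
    # TLS record header: type (1), version (2), length (2); heartbeat
    # message: type (1), payload length (2), payload, >=16 bytes padding.
    if len(record) < 8 or record[0] != 24:        # 24 = heartbeat record
        return False
    (record_len,) = struct.unpack(">H", record[3:5])
    (claimed,) = struct.unpack(">H", record[6:8])
    # A well-formed heartbeat needs header + payload + padding to fit.
    return 3 + claimed + 16 > record_len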

Attackers apparently also have control (or at least influence) over what 64KB of memory they can capture, and can keep requesting more memory with each heartbeat (so the 64KB limit isn't that limiting).
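In outline the attack loop is trivial. A sketch, where send_malformed_heartbeat is a hypothetical stand-in for the handshake-plus-crafted-record work that the real test scripts do:

Code:

import socket

def send_malformed_heartbeat(sock: socket.socket) -> bytes:
    """Hypothetical stand-in: perform the TLS handshake, send a heartbeat
    request claiming a 64KB payload while sending almost none, and return
    whatever the server echoes back. The public test scripts implement
    this in roughly a hundred lines."""
    raise NotImplementedError

def bleed(host: str, rounds: int = 100) -> bytes:
    leaked = b""
    for _ in range(rounds):  # each heartbeat leaks a fresh chunk of heap
        with socket.create_connection((host, 443)) as s:
            leaked += send_malformed_heartbeat(s)
    return leaked  # then sift the dump for keys, cookies, passwords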

Heartbeat can be disabled in OpenSSL, but only via a recompile with -DOPENSSL_NO_HEARTBEATS (in which case you might as well apply the patch).

It compromises everything you ever sent, if somebody was savvy enough (i.e. your traffic was high-value enough) to record it in case "SOME DAY" that data becomes decryptable. SOME DAY is now, and recording of all high-value data can be presumed to have commenced.

If perfect forward secrecy (PFS) was not used? Yup, screwed. If PFS was used? Your sessions from before the private key was stolen are still completely safe. Your sessions after the private key was stolen are only vulnerable to an active man-in-the-middle attack (the attacker isn't just recording the data, but actively modifying it). Your sessions with PFS, even after the private key was stolen, are still immune to purely passive eavesdropping.

A group like the NSA might be able to record a huge amount of data, but it makes it absurdly more difficult (as in, orders of magnitude more so) if they have to actively MITM every session in real time. They can't just decrypt on demand; they have to throw a huge amount of CPU power into it as well.
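To illustrate what opting in to forward secrecy looks like, here is a sketch using Python's ssl module to offer only ephemeral key exchange; the cipher string and certificate paths are assumptions to adapt, not a recommendation:

Code:

import ssl

# Server context offering only ephemeral (ECDHE/DHE) key exchange, so a
# later theft of the certificate's private key can't decrypt recorded
# traffic. Cipher string and file paths are illustrative.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:!aNULL:!eNULL")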

Considering that I just downloaded 1,221MB of updates on my Windows 7 / Office 2013 desktop, I think we can safely conclude that security vulnerabilities are not unique to open or closed source. It's a software problem, however it happens to have been developed.

No one says explicitly, but with Apache, you can turn off TLS negotiation. This (logically) should preclude the heartbeat, no? Is this a quick workaround? heartbleed.com suggests that even this would not stop the issue but was very vague about it.

It seems to me security researchers have a lot to gain by over-hyping their own discoveries. Nonetheless, **perhaps** the memory chunk varies between attacks, thus getting the key is inevitable given enough attempts.

*edit* Still, kudos to the researchers. That's a major problem.

It definitely allows you to recover the SSL private key from Apache in practice, I've tested it on a local copy of Apache with the vulnerable version of OpenSSL and managed to obtain its private key without too much difficulty. I get the impression that other HTTP servers may be a little harder to pull this off on though; all the successes I've seen reported so far were with Apache.

If the login was done, say, two weeks ago (prior to the time this flaw was discovered/published) and the current login is done via cookies, are the login credentials still exposed?

Can't know without details about what's going on server-side.

For a pathetic example: if auth server-side is done by comparing the password one types in to a database that's entirely loaded into memory (which I can't rule out from where I'm sitting), then your login credentials could still be exposed even if you have never logged in to the service at all.

Keep in mind that Red Hat generally doesn't rev the version of the deployed code; they choose a version for a particular release, then simply patch it, so you likely won't see 1.0.1g. That's why their kernel says 2.6.32 when it's got a ton of backports in it, including support for 3.x features.

It should, and I'm only guessing about WebSphere - I know they run IBM, and WebSphere is the most likely candidate - but the site does show up as vulnerable.

It's pretty common to run the application server behind a web server too. Frequently something like:

- Domain resolves to CDN
- CDN forwards any requests it can't fulfill to the load balancer
- Load balancer picks a web server
- Web server forwards any requests for dynamic content to the app server using a connector like mod_jk

I'm not sure how the CDN might factor into this. And I think the load balancer normally does a straight pass-through to the web server - I don't think they usually negotiate the SSL connection themselves. But I would expect the web server to be the really vulnerable bit.

The app servers are often not directly accessible to the internet, so your SSL connection is with the web server and the web server uses the internal connector to facilitate your communication with the app server. If that's the case I would expect it to be the web server leaking memory contents, not the app server.

The web server's memory would potentially contain any info sent between it and the app server, and thus between you and the app server, so your communications are still at risk. But anything strictly internal to the app server, like in-memory database caches, at-rest data encryption keys, credentials/certs for communicating with back-end web services - all that should be safe.

I'm a developer, not a sysadmin, so I welcome any corrections. But I've worked on a lot of J2EE stacks, including some with WebSphere, and that's almost always been the setup.

Actually, since the load balancer usually needs to see inside the session to make sure it goes to the right webserver node, it is common practice to terminate the SSL session on the load balancer, and then either proceed with straight http over the trusted internal network, or re-encrypt it and establish a new SSL session between load balancer and webserver.

Also, the bank being Israeli, I doubt CDNs are involved.

Good point, I forgot about that. Can't have sticky sessions if the lb can't see the cookies. Maybe environments that don't need sessions to stay pinned to a specific node, due to some kind of shared session-store, would be able to terminate directly on the web server. But I have no idea how common that kind of setup is.

Still, it's worth noting that the kind of layered architecture we're talking about should prevent the app server's memory from being exposed directly. That doesn't do you any good if you're only worried about what you yourself are sending and receiving, but there's a lot of other juicy stuff on the app server. And I would expect a site of any size to at least be behind a load balancer, so that's probably (minor) good news for everyone.

However, if the load balancer itself is vulnerable (I believe F5 is), then it's probably trivial to steal the cert it uses to terminate SSL sessions, and if it's not revoked, it can be used in MITM attacks later.

Absolutely. Anything you're actually sending and receiving is up for grabs.

One of our boxes accepts uploads via web file upload (apache2/centos 6.5). A file uploaded yesterday was plain as day in the clear with the hbtest.py tool. On successive runs, we got full session cookies.

Oh for F! sake you all... If you're on *nix you don't need the distro to do anything.

EDIT: If your distro is updating then no, you don't need to do any of this. Use aptitude or whatever. If, on the other hand you're using a slow-on-updates or no longer supported distro then you need to take some kind of action yourself.

Then md5sum, sha-sum, or ssl-whirlpool the package against signatures you can find on various sites. If your distribution doesn't have it installed, use your package manager to install GCC and your kernel headers (apt-get, rpm, or if you're lucky you can just emerge the thing in Gentoo).
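If you'd rather script the digest comparison than eyeball it, a minimal sketch with Python's hashlib; the expected digest here is a placeholder you'd take from openssl.org:

Code:

import hashlib

EXPECTED_SHA256 = "<digest published on openssl.org>"  # placeholder

def tarball_ok(path: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream, don't slurp
            h.update(chunk)
    return h.hexdigest() == EXPECTED_SHA256

print(tarball_ok("openssl-1.0.1g.tar.gz"))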

Go root..

Code:

tar -xf openssl-1.0.1g.tar.gz
cd openssl-1.0.1g
./config

It should be pretty straightforward from there. If you can write a Java program you can compile C/C++ from source. I can't remember if it needs g++ or not, most likely not (OpenSSL itself is plain C). But if it complains about g++ then download it the same way you did gcc.

If you want the man files installed you may need to set MANDIR and MANSUFFIX on the "make install" line; check the top-level Makefile and the INSTALL file in the source tree (OpenSSL uses its own config script rather than automake/autoconf). If anything's missing, look for the README or the how-to on the openssl website.

Most applications that use OpenSSL should still work... If they do not, you may need to recompile them linking to the new SSL libs. The easy way to find out what's using OpenSSL is to use your package manager to look at the openssl package and see what's in the "needed by" area. Those configure statements will look something like ./configure --with-ssl=/usr/local/ssl (the exact flag name varies from application to application).

I've been using a command line tool to probe a bunch of sites today; amongst about 500 sites, I've only found 6 vulnerable. It's looking like the hole is closing fast, generally. Of all the big American sites I tried, financial/e-commerce/tech service, none were *currently* vulnerable. Of the vulnerable, 4 were high-traffic consumer-oriented sites that most Americans will have never heard of. I've contacted the 2 other sites (startups); one was aware and working on it (the engineer I got forwarded to cursed a blue streak about the business people who wouldn't let him take the site down while he got the fix in place - he had to beg for an emergency maintenance window) and the other was *apparently* clueless and said they'd forward the news to the engineers. (Some people are in freakout mode, so I'll give Joe Random Support Dude a small benefit of the doubt.)

There is more code on any full-fledged *nix system than any one unaided individual can effectively manage. Every time you leave the repos to install cowboy code like you're recommending here, that's another mailing list you need to be on and ACTIVELY MONITORING and another package you need to fiddle with building, installing, managing dependencies, hoping the new version is backward compatible with all you have installed that depends on it, etc.

If you fall down on any of this, you fall down on the NEXT security update. Which you won't notice for months - or years - until suddenly oh, hey, you're completely owned and not sure how LONG you've been completely owned. Fun.

Package managers exist for a REASON, and every major distribution out there had a patched version available within hours of the release of this vulnerability. If you were using Ubuntu or Debian with something like the unattended-upgrades package running, you didn't even need to do ANYTHING to get patched - the cron job would have taken care of it for you. If you weren't using unattended-upgrades, apt-get (or yum, depending on your distro) update and apt-get (or yum) upgrade was all it took.

Please, STOP giving terrible cowboy advice about ops.

It's not horrible advice if they're running a system that is late (or never) in getting patches. If their distro is on top of it, then let the distro fix it. If NOT, it's up to the person sitting in front of the machine (provided it's THEIR machine) to fix the problem.

If their vulnerability isn't being fixed then it's up to them to fix it. That's not what I meant by "you don't need the distro to do anything." I meant: if your distro isn't doing anything... but it's too late to edit that out.

No, I don't feel bad at all... Cause if it's not fixed by a distro then the user has to start somewhere.

Who cares about ars ... really no disrespect but some perspective needed.

It's still not patched on the login site of one of my banks. I've got a Mastercard with them, dammit. I've recorded the SSL cert fingerprint and will not be logging in until this bug is fixed _AND_ the cert is changed.

Think this is paranoid? Check some of the data coming back from this bug. 64KB of data is a LOT when it's leaking from the address space of the process using OpenSSL.
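Recording and re-checking a fingerprint is easy to script, by the way. A sketch using only Python's standard library, with a placeholder hostname:

Code:

import hashlib
import socket
import ssl

def cert_fingerprint(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER certificate
    return hashlib.sha256(der).hexdigest()

print(cert_fingerprint("bank.example.com"))  # hostname is a placeholder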

Even that is not enough.

Unless you want to check the thumbprint every time you go to your bank's website, you will have to wait for the certificate authority to blacklist the certificate, and then wait for the browser vendors to update their browsers to include the blacklisted certificate. Otherwise you could be man-in-the-middle attacked with the old certificate.

If it really has affected 66% of servers on the net, there is no way browser vendors are going to blacklist 66% of all existing certificates.

My guess is this whole thing will just be whitewashed. Servers will upgrade their software and won't bother changing the certificate, or they will get a new cert and not worry about blacklisting the old cert.

One thing is for sure: unless you know the new thumbprints, NEVER access any banks on public wifi etc.

If 66% of servers are leaking their keys, SSL should essentially be considered broken for protection against man in the middle attacks.

OpenSSL code is supposedly a huge mess, and they keep piling on to it... Remember the donation effort to get TrueCrypt audited? This is like 100x more urgent.

To be more precise, even though up to two-thirds of Web servers may have the TLS heartbeat extension installed, only 17.5% have it enabled. The problem is big, but only a quarter as big as the title of this article would lead one to believe.

Just a note... I've been on the internet since the early 90's. You know, back then if you knew how to fix an issue people were generally positive about that sort of thing.

No, not now. Now, doing things yourself, or knowing how to, makes you "dangerous" or "unpredictable" and you get called a "cowboy" for posting shit that may actually help someone out who has no other means.

I like this knee-jerk-reactionary internet less every day... It's just about time we geeks made a new medium where you don't have to deal with sensational and alarmist reactions to everything you post.

Question from someone who's not a techie (and whose head 99% of this discussion goes over): On top of forums, are sites like Facebook & Twitter, webmail pages like Bell Mail, and stores like Amazon.ca affected by this? What if we use email clients instead of webmail (I suppose it makes no difference if this affects the server that stores our email)?

Just a note... I've been on the internet since the early 90's. You know, back then if you knew how to fix an issue people were generally positive about that sort of thing.

Dude, I've been on the internet since the early 90s too. I'm also a professional ops guy who manages a LOT of servers. Stuff that flew in the early 90s does not fly any more. Systems are MANY orders of magnitude more complex, the attackers are orders of magnitude more numerous, and their tools are almost incomprehensibly better.

As to that whole "OpenBSD / LFS / Gentoo users are doing fine!" thing... yeah, not so much. There's a reason you don't see many orgs running any of those distros in production, and "hard to effectively maintain" is by far the biggest one. You may THINK your OpenBSD, LFS, or Gentoo box is super secure... but is it? Really? You are pretty much your own distro maintainer. How many mailing lists do you monitor? And I don't just mean "subscribe to, dump the messages in a folder, and maybe skim something out of it every now and then", I mean MONITOR. Daily. Carefully. Reliably.

What does the dependency chain look like on... well... every application you use? Because you need to know. Worse, if you're compiling from scratch, you can't even really compare notes effectively with anybody else. You may know what version of Apache you're running... you may know (some, maybe even most) of its dependencies... but what about their dependencies? Which ones are dynamically linked, and which ones statically? What's your update and upgrade plan? When it's time to rebuild it, which libraries will you rebuild it with? If you don't know immediately which you've linked dynamically and which statically, how will you test to be sure you've updated vulnerable versions of old libraries in THIS application that you statically linked there, when there is a separate copy of the library in the system itself that other apps link dynamically to?

Again, package managers exist for a reason.

And, seriously, that's just for a relatively simple headless server. You want to get into a full-on desktop and talk about managing it effectively as a one-man show? Noooooooooope. I have 829 packages installed on a relatively simple mailserver (including dependencies)... that number spikes to 2,298 on a GUI'ed up workstation, even though that's still relatively simple and clean (for a workstation). One single person cannot manage that much code effectively. Trying to do so just means you're going to inevitably leave old vulnerabilities in, and one day one of them will bite you.

And again, I'm going to disagree. Of the 2,298 packages a distro installs... how many do you really actually need or use? I can spend 3 hours on a Slackware installation saying "no, no, no, no" because if I don't need it, then it doesn't need to be installed. Why waste internet bandwidth downloading updates for things that you're not using? That's a waste of time and disk space.

If you want to talk about vulnerability and management complexity... Why are we installing literally millions of lines of code with an untold number of bugs, in packages that we either don't need or won't use? I can build a system with everything that I need in 300 or so packages and set up a cron job to query the latest versions and then send a sys-mail if one gets updated (see the sketch below). I can recompile it and dependent applications in a few minutes. The only time I have to recompile almost everything on the machine is when the sanitized kernel headers or glibc must be changed; that situation is an uncommon occurrence.
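A sketch of that kind of cron job; the details are assumptions (in particular, scraping the openssl.org download page is illustrative, not a stable interface):

Code:

import re
import smtplib
import subprocess
import urllib.request
from email.message import EmailMessage

def installed_version() -> str:
    out = subprocess.run(["openssl", "version"], capture_output=True, text=True)
    return out.stdout.split()[1]                     # e.g. "1.0.1g"

def latest_version() -> str:
    # Assumption: tarball names on the download page look like
    # openssl-1.0.1g.tar.gz; the regex and URL may need adjusting.
    html = urllib.request.urlopen("https://www.openssl.org/source/").read().decode()
    return max(re.findall(r"openssl-(\d+\.\d+\.\d+[a-z]?)\.tar\.gz", html))

def send_sysmail(cur: str, new: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"OpenSSL update available: {cur} -> {new}"
    msg["From"] = msg["To"] = "root@localhost"
    with smtplib.SMTP("localhost") as s:              # local MTA assumed
        s.send_message(msg)

if __name__ == "__main__":
    cur, new = installed_version(), latest_version()
    if cur != new:
        send_sysmail(cur, new)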

My advice was not directed at server admins. They should know how to do this stuff. My advice was directed at the guy whose distro isn't shipping the fix and who has no other choice. In that situation it's far more irresponsible to not even try to update than it is to leave things alone.

Gentoo users shouldn't have any issue. Portage will tell them "Important news items are awaiting" at which point they can emerge the updates and portage will recompile them and any dependent applications from scratch automatically. That's what portage was designed to do...

Again, I wasn't speaking of corporate desktops or servers. But if you want to go there then I'll let you in on something: tools do exist that do this sort of thing, while being platform and distribution agnostic. If I had custom desktops, I could compile and package for internal distribution with any one of about a dozen or so systems management platforms. I do this for a living, windows, linux, OSX... I can create custom patches and deliver to them all, even if they're scratch built Linux boxes. Since the desktops we have are Red Hat, I just use RPM. But the situation would be absolutely doable if these were custom boxes.

I stand by my statement that calling me "cowboy" was an alarmist reaction. Since when is suggesting that someone, who has no other choice in the situation, compile software from scratch and install it a super dangerous thing? I didn't say hey, using gasoline to wash oil out of clothes is good. I said, try compiling from scratch. The worst case scenario is the person has to use their package manager on the boot disk to reinstall that app and dependencies.

I'll admit that my initial post was ambiguous about who should try compiling from scratch, and I've since fixed that. I didn't mean it as a try-this-first scenario, I meant, and still mean... Try compiling if you're out of options, it can't hurt any more than an in-place upgrade (to fix a security hole) can also break your applications.

As to that whole "OpenBSD / LFS / Gentoo users are doing fine!" thing... yeah, not so much. There's a reason you don't see many orgs running any of those distros in production, and "hard to effectively maintain" is by far the biggest one. You may THINK your OpenBSD, LFS, or Gentoo box is super secure... but is it? Really? You are pretty much your own distro maintainer. How many mailing lists do you monitor? And I don't just mean "subscribe to, dump the messages in a folder, and maybe skim something out of it every now and then", I mean MONITOR. Daily. Carefully. Reliably.

Ummm, the BSDs have the pkg* package management tools.

Yeah, I know, but it's really hard to stay on them, and they generally aren't as up-to-date as they really should be. In actual practice from my experience you almost always end up needing to resort to the ports tree for one application or another, which then tends to break the package management system before long, and then you're on ports for pretty much everything, and then you're deep into management hell of having to chase down dependency issues between one port and another, long recompile sessions for every update, etc.

It was totally manageable to do it that way in the 90s. Now... not so much.

Since when is suggesting that someone, who has no other choice in the situation, compile software from scratch

That isn't what you actually said, or my reaction would have been very different.

What you actually said was:

Quote:

Oh for F! sake you all... If you're on *nix you don't need the distro to do anything.

Download the source for 1.0.1g here...

... and you said it something like 20 hours after patched packages were already available on all major linux distros. (For reference, Debian sent out an advisory with recommendations to upgrade to already-released binaries on 4/7 at 17:36 EST, Ubuntu followed up with the same on 4/7 at 18:01, and you made your recommendation at 17:52 on Apr 8.)

I'm genuinely sorry if this makes you feel attacked, but the advice you wrote, as you wrote it (not as "if you have no other choice", which is unlikely to be the case for anyone who would be reading that advice anyway) was actively harmful.

Yes, I admitted that in the last post. I've got this little problem where I end up with too much response to fit in my ability to get it out. I wrote that whole thing because I read in one comment where a guy had an unsupported distro... My reaction was "geeze, try compiling!" and so all that came out and the "as a last resort" part fell out of my output buffer. I'm not trying to give folks bad advice, I just missed posting the caveat on the first go. I'm a habitual post editor because sideways details slip my mind in the initial writing. I, like most of us, think at a speed far in excess of what I can type.

Yeah, I know, but it's really hard to stay on them, and they generally aren't as up-to-date as they really should be.

I'll disagree with that.

In regards to FreeBSD at least (I have no experience with Open or Net), it's painfully easy to stay within the package management system. Generally things are straight forward enough to install binaries, while at other times I want finer control and I'll configure options in the ports tree first before installing. Whether you compile from source or use binaries (or a mix), the management system handles dependencies and versioning brilliantly.

And all the major projects are up to date, generally no more than a day or so behind the vendors.

Quote:

In actual practice from my experience you almost always end up needing to resort to the ports tree for one application or another, which then tends to break the package management system before long, and then you're on ports for pretty much everything, and then you're deep into management hell of having to chase down dependency issues between one port and another, long recompile sessions for every update, etc.

I have no idea what this even means. I've been using FreeBSD for over a decade now, and that's just nonsense (you're really displaying a linux mindset). Package management was one of the major reasons I left the dependency hell of Debian for FreeBSD. Ports is a part of the package management system of FreeBSD, which handles both source and binary installs.

You can install apps outside of the package management system if you want, but then I'd have to call you a cowboy.

You need to have a look at pkgng in addition to a local repo built with poudriere. It's the best of all worlds, and well suited to a large production environment.

Also, 800+ packages on a "simple" mail server? Good god, man. My most dependency-happy FreeBSD box in production tops out under 350.

I have no idea what this even means. I've been using FreeBSD for over a decade now, and that's just nonsense (you're really displaying a linux mindset).

I actually got my start in the BSD world. My first dedicated webservers were BSD in the mid-late 90s, and I didn't even seriously flirt with Linux until Ubuntu Feisty in 2007 - prior to that, I'd managed sites and services on some Linux systems - mostly Red Hat, mostly with out of date subscriptions - and absolutely hated them.

I still professionally maintain a significant number of FreeBSD servers (20-ish), though I no longer personally deploy it by preference.

You can tar me with a lot of brushes, but "never done it any other way than Linux" definitely isn't one of them.

A simpler relay server on a VM whose host I control weighs in at 399. Which, yes, is a higher number of packages than a comparable FreeBSD server... But that's also partially feature, not bug; those packages include apparmor, a lot of core utilities that are generally just referred to as base system on a FreeBSD box, and considerably fewer statically linked dependencies.

Regarding security... FreeBSD didn't issue notifications and patches for Heartbleed until a full 25 hours after Debian and Ubuntu did. Not a great showing, on probably the highest-impact security patch in the last 5 years. :-\