They have taken all the reasonable precautions, and if their passphrase was strong, then the danger of my servers being compromised by meteor strike is a much greater worry.

The only thing that concerns me is this: in the Fedora announcement, they said with a high level of confidence that they don't believe the passphrase for their signing key was compromised, because they hadn't signed any packages during the period of time the box was compromised. They are going to change the signing key anyway, just in case. This is a good thing.

In the Red Hat announcement, we can infer the passphrase and signing key were compromised, because the attacker signed tampered openssh packages. Even though the official RHN distribution channel was not compromised, the attacker most likely still has their private key and passphrase and can continue signing packages and attempting to distribute them. Red Hat needs to step up and reissue a new signing key. There was no announcement of this.

Or we could infer that the system was used for its purpose by the attacker. He signed those packages. Red Hat looked at the logs; no other packages were signed. So the passphrase is still very likely to be safe.

God, I seriously hope they don't have the passphrase saved so that you don't need to type it in to sign a package. If that is the case their security is very lax. Also, if it's saved, it almost certainly is compromised, because it's stored on disk somewhere. It would be trivial for the attacker to pull it out of whatever script or text file it's saved in.
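
If it were stored, recovering it would look something like this - a hypothetical demonstration, not Red Hat's actual layout:

```shell
# Hypothetical demonstration - NOT Red Hat's actual setup. If the signing
# passphrase lives in a helper script, one grep recovers it.
mkdir -p demo-scripts
echo 'PASSPHRASE="hunter2"' > demo-scripts/sign-helper.sh   # the bad practice
grep -r "PASSPHRASE" demo-scripts                           # the attacker's one-liner
```

A passphrase that is only ever typed at a console never hits the disk, which is the whole point of having one.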

I don't know about anyone else, but I am surprised that their package signing machine is connected via a network to other machines.

Our code signing machine is locked in a cage and powered up only for purposes of code signing. Executables to be signed are written to a previously wiped USB drive which is attached to the signing machine only when packages are to be signed. The signing machine has not been connected to a network since before the keys were generated. The private key only exists on that machine and in a single separately encrypted backup.
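
In rough outline (a sketch, not our exact scripts - filenames and paths are placeholders):

```shell
# On the networked build machine: stage packages onto the wiped USB drive,
# with a manifest so tampering in transit is detectable:
sha256sum release/*.rpm > /media/usb/MANIFEST
cp release/*.rpm /media/usb/

# On the air-gapped signing machine: verify, then sign at the console:
cd /media/usb
sha256sum -c MANIFEST                 # refuse to sign anything that changed
for pkg in *.rpm; do
    gpg --detach-sign --armor "$pkg"  # passphrase typed by hand, never stored
done
```

The manifest check matters: the USB drive is the one channel that crosses the air gap, so it gets treated as hostile in both directions.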

I've always considered that to be a minimally paranoid means of keeping private keys private. Really paranoid would be "signed on one machine, checked and signed again on another machine."

Meh!
Well, my code signing machine is more secure. We don't put USB sticks directly into the signing machine. We copy the package to a USB stick and then to the 'transfer' machine. The code signing machine is then 'connected' to the transfer machine by infrared link, which is unblocked by lifting a large steel slab out of the way. The transfer happens via zmodem, and it is scanned on both the transfer machine and the code signing machine. Finally we sign the package and transfer it back just before the poor intern's strength gives out and the steel slab slams back down, killing the connection and the intern... (just in case he saw me type in the 42-character passphrase to the private key).

Red Hat needs to offer more info before you can make a solid judgement about this.

If the attacker gained access to the actual system where signing takes place then Red Hat needs to change the key.

But from the announcement wording - they are suggesting that the attacker was able to submit packages to be signed but the actual signing server was not compromised.

They should not have been so vague about this, because it is a crucial distinction for their customers trying to make a security judgement.

As a customer I'm not happy with the vague details on what was compromised. I'm sure they did it because they have concerns about describing their package signing systems in detail, but they need to open the kimono and give us the details we need to make a decision about reloading our systems.

Merely saying, "trust us - anything that came from the official channel is safe" doesn't fly. You let an attacker gain unauthorized access - you need to re-earn trust at this point by giving us some detailed info.

What surprises me about this the most is that the system was connected to the network/Internet at all. I had always been of the understanding that to prevent this, the signing server was a stand-alone system accessible only by sneaker-net with physical media. You take your package on CD/DVD/USB key to the server, sign it, then take the signed package back via physical media and distribute it.
One Federal Gov't agency in my home town does this, and the server is behind three locked doors too, with three different people needed to get physical access. Why didn't Red Hat/Fedora do something like this? It really isn't that much of a pain in the ass when you think about it...

You're missing the most interesting possibility in my mind: employee sabotage. Why should open source be immune to a bad apple attempting to subvert the system for their own gain? A mid-level employee signs a package and distributes it, a customer running a rootkit checker or clamav on their system notices that the copy they have is suspicious, reports it, and suddenly you have a situation where the key itself may or may not be compromised and some checking needs to be done everywhere.

Yes, that is what surprised me, too. However, I'd think they would know what they are doing, and are acting in good faith, because they could have tried to keep the whole incident secret instead.

I don't see why an attacker would sign the packages on that server, instead of just taking the key and signing them elsewhere. Because of this, Red Hat now has the signatures of the tampered OpenSSH packages. If the attacker had signed them elsewhere, they wouldn't, making the packages more valuable.

Also, I assume this means any historic packages signed with the old key, and not already in your possession at the time of the intrusion, cannot be trusted. By this I mean any old versions of packages downloaded after the time the attacker got his hands on the passphrase.

Good point. If the attacker still has the private key and passphrase, he can trivially repackage any older RPMs and sign them again.
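
Which is worth spelling out: a GPG signature proves which key signed a package, not when, and not whether it is the current version. So signature verification cannot detect this attack - a sketch (package filename and key ID are examples):

```shell
# An old, vulnerable RPM re-signed with the stolen key verifies exactly
# like a legitimate one; rpm only checks the signature against its keyring:
rpm --checksig openssh-server-4.3p2-16.el5.x86_64.rpm
# prints "... gpg OK" either way

# The durable fix is revoking the old key and importing a replacement:
rpm -e gpg-pubkey-<old-key-id>        # drop the compromised public key
rpm --import RPM-GPG-KEY-redhat-new   # trust the new one instead
```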

Our RedHat TAM tells us that "the signing infrastructure is completely different between fedora and RHEL" and that RHEL uses "a submit to be signed" method. So essentially, someone submitted packages and the system automatically signed them.

The targeted policy is the weaker of the two most common policies. The other is strict, which is a bit too harsh for most.

SELinux also has modes. The only one worth using in production is enforcing which actually enforces the rules. There's also permissive which logs when rules are violated but lets them happen anyway; this is good for development but obviously won't save you from a real attack.
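
For reference, the usual knobs - a sketch assuming a stock Red Hat style SELinux install:

```shell
# Check and switch modes (standard SELinux tooling):
getenforce                # prints Enforcing, Permissive, or Disabled
setenforce 0              # permissive until reboot (debugging only)
setenforce 1              # back to enforcing

# Persistent settings live in /etc/selinux/config:
#   SELINUX=enforcing     # enforcing | permissive | disabled
#   SELINUXTYPE=targeted  # targeted | strict
```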

Don't worry, whatever this "linux" thing is, it can't possibly run without an Operating System to support it, e.g. Microsoft Windows®. All applications require an Operating System to run, including "linux".

I think the parent is talking more about general viruses that are just sent out into the tubes with the intent of auto-rooting insecure boxen.

What you're saying is true - "Any system with something desirable on it is at risk of getting wHacked" - but one system with important information on it is not going to spawn a breed of viruses meant to just root ALL of the boxes with that OS.

Given that Linux has a lot of market share in the server department, I would imagine that the reward for compromising a system would be greater for linux right now than windows. After all, would you rather hack into 1000 home desktops or get a server from EBay, Slashdot, or any major to medium site that gets credit card numbers at some point?

However, since infecting a server is lower profile than infecting 1000 home computers, people looking for notoriety won't be doing it. I imagine that if someone fin

The most scary and amusing thing is how quick some people on this site and others are to dismiss local exploits. They all think "you have to be on the console, so fuck it, this isn't important and doesn't affect me". They are wrong. These days, a remote exploit is often just a human operator plus a local exploit.

There's absolutely nothing to stop anybody from installing an executable that runs automatically under a user account, without ever needing root. And that executable can do a lot of the things it may want to do without ever needing root access, either.

The point is, there's no need to change system files or bind to privileged ports.

Your documents contain LOTS of yummy personal information for people to steal. Identity thieves and credit card thieves will love all that stuff.

Spammers need relays to send their spam through. You can run a relay just fine as a normal user. Same thing with the DDoS bot for extortionists and script kiddies.

You can mess with the internals of Firefox without root access too, through plugins. Easy to put a password stealer in there. Or you could mess with your desktop settings so that when you try to launch a browser, you get a compromised version instead.

I'd say I've covered all the major reasons somebody would want to infect your machine here, and not a single system file or privileged port was needed for it.

Not if you don't have access to the firewall settings which will open the port that allows someone to connect to your relay.

Unless you happen to run one of the desktop distros which usually have a default policy of ACCEPT.

Of course, the "only works for one user" argument is better if presented in reverse. If your less-computer-literate kid/spouse/parent can't accidentally run code that (...)

Read all my documents through the world-readable home folders? Another convenience feature.

My experience is that people don't keep the accounts truly separate; that's just for convenience. "Hey, can I just check my webmail for a sec?" "Sure" - and your email's compromised.

Furthermore, you'll be in a position to be able to clean their account up for them without having to wipe and reinstall the whole machine

In theory. In practice, I expect the malware authors to find so many ways of hiding (or of just reinfecting when you "rescue" his documents) that it won't practically happen. At least not without someone more experienced than the average guy.

They don't need your relay. If they're running on your machine, they can fetch their payload and then start sending it out through your local MTA or configured SMTP server. If you can send e-mail, so can they.
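
Right - handing mail to the local MTA needs no privileges at all. A sketch (addresses are placeholders):

```shell
# Any unprivileged process can inject mail through the local MTA's
# submission binary; no root, no inbound port, no relay of your own:
/usr/sbin/sendmail -t <<'EOF'
To: victim@example.com
From: forged@example.com
Subject: whatever the payload fetched

body goes here
EOF
```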

Yes, but without access to the system FF folder, that plugin will go in your per-user plugin directory, and will only run for you. So only your passwords will be stolen, and not those of anyone else on the computer.

Given that most computers running Firefox these days are single-user systems - whether running Linux, OS X, or Windows 98 - that's little comfort.

Then consider Linux systems. Most systems these days are set up with sudo access, as is OS X. All the bug has to do is watch to see when you run sudo yourself, and then bam, it has a (usually) five-minute window to run itself as root and infect the rest of your system.
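
That window is sudo's cached-credential timestamp, and it's tunable. A sketch of the standard sudoers knobs, if you want to close it:

```shell
# /etc/sudoers (always edit via visudo):
#   Defaults timestamp_timeout=0   # re-prompt for the password on every sudo
#   Defaults tty_tickets           # cache per terminal rather than per user

# And drop any cached credentials as soon as you finish a privileged task:
sudo -k
```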

It can also grab your ~/.ssh/known_hosts and then reach out to those to see which ones accept your private key; install itself there, and, again, watch for sudo access. It's not hard for someone to go from there out to infecting every machine you have access to, and root on every machine you have root on, and potentially every system that every user on that system has access to, and so on, and so on.
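
The harvesting half of that is mitigated by OpenSSH's hashed known_hosts support, which replaces readable hostnames with HMACs so a compromised account doesn't hand over a ready-made target list:

```shell
# Hash the hostnames in the existing file (ssh-keygen leaves a .old backup):
ssh-keygen -H -f ~/.ssh/known_hosts
rm -f ~/.ssh/known_hosts.old

# Keep new entries hashed too - in ~/.ssh/config or /etc/ssh/ssh_config:
#   HashKnownHosts yes
```

It doesn't stop the sudo-watching half, but it makes lateral movement a brute-force problem instead of a lookup.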

Like change system files? Nope. How about bind to privileged ports? Nope. So... it can mess up my documents? Darn.

Why do you have the computer? Just so you can have some privileged ports and system files that a remote exploit to an unprivileged account can't touch? Or do you actually, you know, use your computer for stuff?

Because if you're using your computer for anything, then that's what's really valuable.

Case in point: The private keys used to sign Red Hat/Fedora packages qualify as "documents" in your scenario.

- Mounting /home with noexec
- Using the grsecurity patch, which can deny execution of files not in directories owned by root, as well as usage of network sockets
- Using SELinux

The tools are there. All that's needed is to use them.
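
The noexec item is a one-line fstab change - a sketch, assuming /home is its own partition:

```shell
# /etc/fstab - add noexec (nodev and nosuid are cheap while you're there):
#   /dev/sda3  /home  ext3  defaults,noexec,nodev,nosuid  0  2

# Apply without a reboot:
mount -o remount,noexec /home

# Direct execution in /home now fails with "Permission denied",
# though "sh ~/evil.sh" still works - noexec is a speed bump, not a wall.
```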

The need to download random binaries to your home directory and run them is infrequent in Linux. The most frequent case is application installers, but many of those need root access anyway (nvidia drivers for instance), and most come with the distribution. A way to fix the occasional need to do this would be a sudo-like tool that needs to be used to execute a file, but doesn't grant root privileges.

The need to download random binaries to your home directory and run them is infrequent in Linux. The most frequent case is application installers, but many of those need root access anyway (nvidia drivers for instance), and most come with the distribution.

Quite a few people who have posted comments to other Slashdot articles have claimed that the difficulty of installing software that did not "come with the distribution" holds back the spread of GNU/Linux on the home and small-office desktop. There are plenty of apps that just aren't suitable for the major distributions' repositories. For example, some apps are not notable because they're for a vertical market [wikipedia.org]. Others have good reason not to be free software with free content, such as many video games.

A way to fix the occasional need to do this would be a sudo-like tool that needs to be used to execute a file, but doesn't grant root privileges.

If you're going to mount /home noexec, you should also mount /tmp noexec as well. In fact, I'd wager you should do that well before you bother with /home. A lot of wormy/trojany stuff wants to write, unpack, build, and execute in /tmp. In fact, while you're at it, make sure only root can run make and gcc, or get at any of the libs. All command-line network tools (wget, ftp, etc.) should also only be run by root. Now go through and get rid of most (all?) of the setuid root stuff. Then crank down the firewall to only allow incoming 22 and 80 (or whatever). That will take care of a wide range of automated stuff.
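
The firewall part of that recipe, sketched with iptables (ports are examples):

```shell
# Default-deny inbound; allow loopback, established traffic, SSH, and HTTP:
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
```

Note that none of this stops outbound connections, which is what a user-level bot actually needs.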

So cleanup is easier. But the damage may already be done, as criminals may now have your passwords, your credit card numbers, and your personal information. Plus you were probably sending spam up until the moment you noticed the infection.

PS: It's pretty disingenuous to make a point of the Windows virus not letting you "search for help online", when your Linux scenario was all about asking a friend for help in the first place.

The Windows cleanup is merely a little longer, as it requires an OS reinstall and backup restore (also, that is what most people would do on Linux anyway). The vast majority of systems out there are single-user, you know.

A keylogger wouldn't need root access. All it has to do is monitor the keyboard and send out packets.

In an ideal operating environment, any process that monitors the keyboard would show up in a list of accessibility tools, and the user could view this list using the "access" icon (shaped like a stylized man in a wheelchair [wikipedia.org]) in the system notification area.

Synergy, a keyboard-sharing app, must run as the user. I use it to share one keyboard between a Windows laptop and a Linux desktop at work. As the keyboard is only hooked up to my Linux box, and Synergy runs as me, I must assume that user-level access is all that is needed for a keylogger.

With the growing interest in Linux, I wonder if we'll see more parity of viruses between Windows and Linux.

It also goes to show that the human side is usually where compromises come into play. Most likely some admin had a weak password that was cracked, and that admin had permission to sign packages, or other things he should not have had.

I don't care how secure your OS is. If you don't follow proper security procedures, including using strong pas

That's correct. And as much as there are many issues with Windows security that -could- be exploited, usually, even there, the human side is easier to exploit... So those "skills" are portable... It will be interesting to see how the ecosystem reacts when it starts happening more and more... technological fixes won't do...

I wouldn't even assume it was an admin. My guess is that an HR person of some sort had a weak password, and that from there the attacker was able to sneak into Red Hat's internal network. Within that network, the attacker would have had a much easier time getting into higher-security systems, and eventually start getting those packages signed. Whoever it was probably spent several weeks working on this, especially given the sophistication of the attack (targeting the signing server to apparently compromis

Given enough time and energy, practically any network-connected system can be hacked. That is because security is *hard*, and there are few people who have the means to create chains that contain only strong links, and put those strong chains in the hands of a big audience.

But given workable tools, I think security comes down more to procedures, and a competent sysadmin than anything else. I'd put more faith in a well-managed Windows server than a Linux server with an idiot as admin. With all factors equal,

No, Windows' popularity is only a small part of the reason it is the only OS with viruses in the wild. The biggest reason is that it uses the discredited "security through obscurity" approach, but that, too, isn't the only reason Windows is insecure.

Mac and Linux are based on UNIX, which was developed for mainframes; mainframes have always needed security. Windows was developed, before the wide popularity of the internet, for stand-alone computers. Stuff like ActiveX is fine on a computer until you network it.

What do you mean, "even Linux server"? How about "even every single computer in the world"?

Whoever told you that Linux, Windows, OS X, OpenBSD or whatever is 100% invulnerable to unauthorized access is either ignorant or lying. Nothing is perfect, and there is no reason why Linux advocates should deny that fact. Saying "haha, look, Linux IS insecure after all!!!!1111" is not any more useful than "haha, look, you're a human being and you made a mistake after all!".

The advantage (for the virus distributors) that Windows, OS X, and iPhones have is that they share a common denominator of services and applications. It is much easier to target a system if you know what services are available.

For the most part Linux systems are custom builds with a variety of applications and services (and versions of) enabled. Someone could target a specific distribution or sets of distributions, and this has happened several times (slap

However, I'd say this is totally unrelated to the Debian bug. The Debian bug was caused as a result of a change a Debian package maintainer made. Since he only made the change for the Debian package and didn't push it back upstream, it's highly unlikely that they are related.

Last week? Does that mean earlier this week, or the week before the week I'm in? At what point in whatever week was last week? If I did an install/update after a certain date am I covered?

It would be nice if they weren't so vague about the time frame. Maybe it is to encourage people to check and not assume they will not have problems, but in a situation like this, the more accurate a picture I have of what is going on, the better I feel.

On a related note, you should not use Fedora in a production environment anyway. That's what RHEL is for. Fedora = Testing. RHEL = Stable. At least in theory.

I thought it was, RHEL == RedHat Support, Fedora == Community Support. Fedora might have some bleeding edge stuff in it, if you upgrade often, but it seems about as stable as the corresponding RHEL release. The difference is the support you can count on.

Well, RHEL also maintains a stable kABI within the entire major release, and only rebases packages when absolutely necessary (maintaining most library ABIs as well). For example, RHEL 4 ships Apache 2.0.52, and has since launch. Security and bug fixes are backported, but the fundamental behavior remains the same for any instance of RHEL 4. This is also true of libraries.

This means that a given piece of 3rd-party software is more likely to keep working after an update in RHEL than in Fedora.

I thought that too, but I was wrong, and it bit me in the ass. Fedora is NOT appropriate for a production environment, period. Fedora drops all support 13 months after release, which means they stop issuing security patches, period. In a production environment where you're likely to have 13 month uptimes, that would mean a reinstall every time you reboot the machine.

If you want a RedHat-based distro with long-term community support, the one you're looking for is CentOS, or so I'm told. For your desktop

I'd suggest reading both advisories again. But I'll be nice and spell it out. It seems neither OS's repositories were compromised. From the Fedora advisory: "Among our other analyses, we have also done numerous checks of the Fedora package collection, and a significant amount of source verification as well, and have found no discrepancies that would indicate any loss of package integrity." From the RHEL advisory: "Based on these efforts, we remain highly confident that our systems and processes prevented the

Fedora is changing their key as a precaution "because Fedora packages are distributed via multiple third-party mirrors and repositories". While it seems Red Hat doesn't care as much about people getting packages from non-RHN sources, so they just issued an advisory.

Your two statements seem to contradict each other, if you consider the third-party mirrors and distribution sources as "Fedora" repositories.

The Fedora repository and signed packages may or may not have been compromised. RHEL packages are believed to be safe. Ergo, it's not much of an issue for production (read: critical) servers, since they should not be running a non-production distro.

Pretty sure most of us are above this anyway, but let's avoid a distro flamewar. You can look through my past comments and see that RH is far from my preferred distro, and I love to take shots at them. But now is not the time. Anyone can get hacked, and it sucks. And they're being responsible about reporting and mitigating.

Our RHEL5/x86_64 system has been affected by this problem: I have run the script from the Red Hat openssh blacklist page [redhat.com], and found that all four openssh packages (openssh, openssh-clients, openssh-askpass, openssh-server) had their checksums on the blacklist. I took the server down, created a backup snapshot of the root disk, and I am currently reinstalling it, while checking other volumes and the root volume snapshot for any signs of intrusion.
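
For others checking by hand, the blacklist test boils down to comparing checksums. A hypothetical re-implementation (Red Hat's actual script may differ in format and hash choice):

```shell
# check_blacklist FILE BLACKLIST - flag FILE if its SHA-256 digest appears
# in BLACKLIST (assumed format: one hex digest per line).
check_blacklist() {
    sum=$(sha256sum "$1" | awk '{print $1}')
    grep -q "^$sum" "$2" && echo "BLACKLISTED: $1"
}
```

For installed packages you would feed it the digests rpm reports, rather than loose files on disk.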

The most annoying thing is that Red Hat remains silent on the main problem: what the compromised packages contained, how to determine whether the possible attacker exploited the access offered by those packages or not, when exactly the packages were signed, and what other precautions to take on other servers (notify users who use the same password as on a compromised server, check for other modified binaries, etc.). I have verified that I had trojanized binaries on my system, but apart from that, it is not clear what else the possible attacker managed to do.

Red Hat says the packages were not distributed over RHN, so I wonder how I got them. I had another repository in my yum.conf: rpmforge. Maybe this was the source of the malware. My syslog (even the copy on a syslog server) did not say anything about upgrading openssh in the last month or so. However, on Aug 15 it upgraded the YUM RHN plugin. On the same day our dovecot stopped responding, saying the time went backwards (and yes, the clock did move several weeks back and then forward, according to the dovecot log). Also, rpm -qi said the package was built on Aug 13 13:13:03, and signed five minutes later. However, the install time reported by rpm on my system was July 25 (which would correlate with the time slip reported by dovecot).
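
For anyone doing the same forensics, those timestamps come straight out of the rpm database:

```shell
# Build date, install date, and signature, as recorded by rpm:
rpm -qi openssh-server | grep -E '^(Build Date|Install Date|Signature)'

# Or with explicit query tags:
rpm -q --qf '%{NAME}: built %{BUILDTIME:date}, installed %{INSTALLTIME:date}\n' \
    openssh-server
```

Keep in mind that INSTALLTIME comes from the local rpm database, so an attacker with root (or a skewed clock, as above) can make it lie.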

The most likely attack was probably one of those lame SSH dictionary scans on port 22. These are usually just an extreme annoyance to admins who must provide port 22 service and haven't heard of SSHguard.

Or just use SSH key authentication; this is what it's for. Anyone clever enough to use SSH on a Red Hat project server should be able to manage key authentication.
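
Server-side, that means turning password authentication off entirely once keys are in place - standard sshd_config directives:

```shell
# Put your public key on the server first:
ssh-copy-id user@server

# Then in /etc/ssh/sshd_config:
#   PasswordAuthentication no
#   ChallengeResponseAuthentication no
#   PermitRootLogin no

# Reload sshd (keep your current session open until a fresh login works):
service sshd reload
```

With passwords off, the dictionary scans have literally nothing to guess at.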