An anonymous reader writes "Following the recent compromises of the Linux kernel.org and SourceForge, the FreeBSD Project is now reporting that several machines have been broken into. After a brief outage, ftp.FreeBSD.org and other services appear to be back. The project announcement states that some deprecated services (e.g., cvsup) may be removed rather than restored. Users are advised to check for packages downloaded between certain dates and replace them, not because known trojans have been found, but because the project has not yet been able to confirm that none exist. Apparently initial access was via a stolen SSH key, but fortunately the project's clusters were partitioned so that the effects were limited. The announcement contains more detailed information, and we are left wondering: would proprietary companies that get broken into be so forthcoming? Should they be?"

And so, when Microsoft gets broken into by a bunch of hackers, do you think they are going to let the public know? No, they are going to keep it under wraps.

No, they are *not* going to keep it under wraps, at least not if the break-in puts its users or customers at risk.

The reason is simple: Microsoft is required by law to disclose [proskauer.com] any such breach. The penalties for "keeping it under wraps" are severe and could include paying restitution/punitive damages to each individual customer/user.

But don't let such a minor detail stand in the way of spewing your MHD [slashdot.org] all over Slashdot.

"requiring notice to individuals when the security of their personal information has been compromised"

Those laws have nothing to do with a security breach of this sort unless personal information is stored on the machine as well; in this context, the only people who **might** be notified would be the people writing the code.

Although this is a troll, there still is an unanswered question: how did the SSH key get stolen? While it's nice to see that FreeBSD wasn't breached due to a vulnerability in *its* systems, someone obviously had a vulnerability in their system. To all the sysadmins out there, I think that's what keeps you up at night: how do you ensure that your users safeguard their secrets? Other than a "corporate policy" document imploring them to use "good judgement"?

[t]here still is an unanswered question: how did the SSH key get stolen? While it's nice to see that FreeBSD wasn't breached due to a vulnerability in *its* systems, someone obviously had a vulnerability in their system.

The explanation is simple enough, and provided on the compromise notice:

The compromise is believed to have occurred due to the leak of an SSH key from a developer who legitimately had access to the machines in question, and was not due to any vulnerability or code exploit within FreeBSD.

It only takes one instance of walking away from your workstation, leaving it running, to have a co-worker slip into your chair and email your .ssh directory to some obscure offshore email address, then remove the outgoing email from the "sent" list. A stolen phone, a purloined laptop: the possibilities are endless, although in the latter two instances you would expect revocations to be issued (assuming you had a backup copy somewhere).

Once someone has your private key they ARE you, and if it was done without being immediately discovered, the key could linger in the wild for months or years.

If you run on freebsd, examine your tar and tar.gz
Access via ssh key, someone may have changed the tree
If you only use base release, power down and anti-freeze
For package add post 9/16, SVN and confirm you're clean

They wouldn't be until they were forced to, due to possible leaking of customer data. I don't blame them; I've worked at a company whose ad servers got hacked and used to spread malware, causing customers of ours to be blocked by Google. After fixing the compromised servers we were contacted by some of our customers and had to lie (blame a 3rd party) so as not to lose them. Another thing: companies rarely go after the hackers, even if they're dealing with total script kiddies (which is usually the case). While patching

If you had to lie about a security issue, you should immediately lose all trust and your company should immediately go out of business. Simple as that. Especially sleazy fucking advertising companies, which already tend to be some of the worst culprits.

Worthless, lying, malware-serving companies such as your own are exactly the type I make every attempt to block in every major way possible (cookies, scripting, advertisement images, etc.). Of course, I don't discriminate--I block them all; none of their b

and we are left wondering, would proprietary companies that get broken into be so forthcoming?

No, we are not left wondering (unless one thinks that FreeBSD has a patent on especially leaky SSH developer keys) so instead we pretend that we are left wondering to justify hanging around and scribbling on the bathroom wall.

If Apple can't keep their mitts on an iPhone prototype and Google can't keep their mitts on a Nexus prototype, do you really think these butter-finger organizations have any better control over t

They really do seem to know what they're doing, and are very proactive with their security.

The security team and cluster admins have also been working very hard over the past few months to partition the FreeBSD cluster a lot better. If this attack had happened in a month or two, you wouldn't be hearing about it, because nothing of value would have been compromised. The attack was against the legacy package-building infrastructure, which is due to be retired soon. It was able to get access to more systems because it had the developers' home directories mounted (this isn't the case with the new

"...and we are left wondering, would proprietary companies that get broken into be so forthcoming? Should they be?"
Short answer:
No, they do not want to scare the stockholders.
and... Yes, they should be because openness allows people to recover or protect themselves faster.

I wonder, is it insider trading if you openly and publicly give ALL the information you have on a break-in that you (the company) detected, see that the immediate reaction of the market is far out of proportion to the actual harm, and then buy stock like crazy in the company you work at, only to sell it at a large profit a few weeks later? Are there laws against that? Considering that you have not hidden any information, you simply believe that you have a better appreciation of that information as part of w

That's precisely why there are requirements, in some cases, for executives of companies to file notices in advance when they buy and sell certain stocks. As long as those are followed, it is usually fine.

There should be, as it's a major conflict of interest that opens a lot of bad doors to stock manipulation. You shouldn't be allowed to use (play with) the stock of the company you are employed by unless you own the company lock, stock, and barrel, thereby only shooting your own foot. Your described situation is technically insider trading.
Are there laws? Probably not. Legal responses? Depends on who you piss off.

Unless the shareholders decide to throw a short-sighted tantrum and force a company's hand, a company should be aware of the very bad PR possible from being caught withholding this sort of information.

Tattling on yourself is good karma and protects you from being embarrassed later.

...that any company which holds personally identifiable information (so that's all of them; it goes from CRM databases to employee records and payroll) has a statutory obligation to register company details with the Information Commissioner's Office and to report any breaches to the Information Commissioner [ico.gov.uk].

For the definition of "breach", read: lost or stolen mobile phone, laptop, notepad, application or registration document, tablet, audio recording, video capture, or any other method, known or unknown, of recording personally identifiable information.

I believe this has already become an EU directive. If you lose personal data, you have to make it known within 24 hours of becoming aware of it; otherwise your company faces fines. And the fines have been increased to make companies feel it.

You don't seem to be aware that SSH keys are typically encrypted, and still require a password to unlock. Yes, some people foolishly enable passwordless use of SSH keys, but that does not reflect on the principle of SSH key login in general.

Judging from a recent security audit I participated in, you are mistaken. The number of SSH keys that were _not_ passphrase-encrypted, in a typical multi-user environment, vastly exceeded the number that were encrypted. These keys were stored on an unsecured NFSv3 environment, and on poorly secured backup tapes. This configuration is common, and we even found several GitHub and SourceForge SSH keys for known participants in open source projects there.

While the number of security errors in those environments was quite large, they're quite commonplace. They are partly the result of the fact that SSH servers have no way of restricting users to passphrase-protected keys, and SSH key generators, especially those in the OpenSSH codebase, do not enforce the use of passphrase-protected keys. (They issue a warning, but do not enforce the use of a passphrase.) There are certainly tools available to help manage passphrase-protected SSH keys, but even where available, they remain rarely used.

This is compounded by the lack of effective centralized management tools for SSH key access, and by expiration and revocation technologies for SSH that are nonexistent, or only recently implemented and rarely used. SSH should only be considered robust for protecting individual sessions from decryption. Its key technology should not be considered a robust authentication technology, due to these commonplace flaws.

There are better general authentication approaches: SSH, in both the OpenSSH and SecureCRT tool suites, now offers Kerberos authentication. This is a much safer technology, not vulnerable to the various "stolen passphrase-free key" issues of normal SSH. Unfortunately, I've not seen any way for it to mesh well with SSH configurations that rely on the "ForceCommand" option being tuned for individual users and their SSH keys, especially source control technologies such as the "git", "Subversion", and "CVS" access at SourceForge.

This is compounded by the lack of effective centralized management tools for SSH key access...

It works both ways. Precisely because there *is no* centralized control of SSH keys, my workplace cannot implement crazy password aging schemes, or demand at least one digit in each passphrase. End result: I take much better care of my ssh keys than of my plain login passwords.

What's the point of encrypting a private key? It would add the hassle of typing a password every time without any real security benefit: if an attacker got access to your account, installing a keylogger is not a problem, right? Just use common sense and do not store unencrypted backups of your keys.

I'm afraid that you are mistaken: your ignorance of the technology is widespread, and leads to precisely the behavior I described of leaving the SSH keys unencrypted and widely available.

Look into the "ssh-agent" tool, the wrappers for it, and the various system keychains on different operating systems. It may take thought to handle it for your particular environment, but the simplest approach from a common shell environment is below:
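A minimal sketch, assuming OpenSSH's ssh-agent under a POSIX shell (the key path and hostname are illustrative):

```shell
# Start an agent for this login session and export its environment variables
eval "$(ssh-agent -s)"

# Load a passphrase-protected key; the passphrase is prompted for once,
# and the decrypted key then lives only in the agent's memory
ssh-add ~/.ssh/id_ed25519

# Subsequent ssh/scp/svn+ssh invocations use the cached key without re-prompting
ssh committer@example.org

# On logout, flush cached keys and kill the agent
ssh-add -D
eval "$(ssh-agent -k)"
```

With this, the key file on disk stays encrypted, and you type the passphrase once per session rather than once per connection.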

You can password-protect SSH keys. Furthermore, you can store them on an encrypted volume.
Passwords can be brute-forced rather easily, because most people tend to use weak passwords. Brute-forcing an SSH server that enforces PKI, however... I guess the only way to get in is to steal a user's key, which means you need physical access to it, or the user has been really careless.

Shit happens. One intrusion through OpenSSH into two out of an entire cluster of FreeBSD servers doesn't mean jack shit as to the overall security of using SSH as your authentication method. I'll continue to use SSH, and I'm sure pretty much anyone else who uses it now will see no reason to stop just because an encryption key was used as an entry point to a high-profile server. According to TFA, steps are being taken to prevent this from happening again by deprecating legacy services... and SSH doesn't look

On this topic, a certain Frenchman is foremost in my mind. He's an arrogant, reckless, childlike asshole who isn't nearly as smart as he likes to think he is. His half-assed "innovations" are leading the project down a bad path, and I get the sense they won't realize what happened for quite some time; when they do, they'll have to redo things that have just been redone, at great cost in manpower and project confidence.

Could you possibly state plainly and precisely what you mean? For Christ's sake, man, yo

Uhh, people use OpenSSH because it's free, it's everywhere, and they don't need any goddamn hardware token generators and other nonsense like that.

You security hardware guys have been pushing this crap since at least the 1980s. Seriously, it hasn't taken off in 25+ years because it's not practical and it's not what the public in general wants to deal with.

So give it a rest already! Aside from a few niche users, the public at large is not going to pay good money for a hardware token generator, and they sure

Amen. Not to mention, if you lose your token, you're screwed.
To create a strong password that is easy for you to remember, follow these simple steps:
Do not use personal information. You should never use personal information as a part of your password. It is very easy for someone to guess things like your last name, pet's name, child's birth date and other similar details.
Do not use real words. There are tools available to help attackers guess your password. With today's computing power, it doesn't take

But the real reason is not there: it's that people frequently store such passwords in unencrypted, easily accessible locations such as a file called "passwords.doc" on their desktop, or send them via unencrypted email because they're too hard to remember or explain on a voice

Yes! These things have finally gotten cheap enough (around $20) that those of us with access to a lot of servers ought to have one.

For those not in the know, these things look like a USB flash drive, but have more number-crunching power than storage. You load your SSH private key onto the USB fob and the key never leaves the device. Plug the fob into a USB port and ssh offloads the private-key RSA operations to the fob, which won't do anything unless you enter a PIN. As the private key never leaves the devi

BTW, have we ever seen a satisfying explanation for what happened at kernel.org and linuxfoundation.org? We were initially told that it was something similar (stolen password/compromised user system), but AFAICT they have never explained how that could lead to the servers being root'ed. A rootkit *was* installed. That requires careless use of root privileges or an exploit of a privilege escalation vulnerability. Which was it?

I had an account on a machine in the same rack as kernel.org, and that machine was taken away for forensic analysis and still isn't back. Apparently (I don't do security research, but I work on a team that does) kernel.org contained the world's best collection of rootkits found to date, which was incredibly useful to people doing work in this area.