
darthcamaro (735685) writes "The Heartbleed OpenSSL vulnerability has dominated IT security headlines for two weeks now as the true impact of the flaw and its reach is being felt. But what will all of this cost? One figure that has been suggested is $500 million, using the 2001 W.32 Nimda worm as a precedent. Is that number too low — or is it too high?"

Marketing and legal departments have put a lot of time and effort into this. The IT side would be expensive, but I keep hearing from my CFO about the post-Target world.

Yeah, he sounds like a moron. Nothing changed with the Target breach except for his recognition that this computer stuff can be serious. There are a lot of people like that and they took notice of the Heartbleed vulnerability.

If you've been sending and receiving sensitive SSL traffic for a while before the vulnerability was disclosed, then you should consider all that traffic retroactively compromised, and that might involve calling up a lot of customers and breaking some bad news to them.

Being well-prepared helps, but it's ridiculous to suggest that this only costs money for the ill-prepared.

True - being able to manage your browser-recognized CAs should be a core function of IT anyway, along with cert replacement. The real cost will be borne by customers, who are largely unschooled and don't know enough to install new CAs (the worst-case scenario, where CA certs are replaced across the board and no SSL/TLS CA certs are valid). On the other hand, it might be enough to do a quick browser check and get them to finally upgrade to a decent browser version that does include the latest CAs. Which, in r

Patches are free, but I hear that Akamai covered the cost of the few thousand dollars per cert to revoke, and several thousand to get a new cert, for each of their customers. Certs aren't free, and not all CAs did this for free.

Read the woes of the MirOS maintainer (short version: startssl.com are being jerks, and will leave possibly compromised certificates active since this hobbyist hacker cannot afford their pricing): https://www.mirbsd.org/

That's ridiculous. I download firmware patches, software patches, etc on a daily basis. Patching heartbleed wouldn't even be out of the ordinary for my job as CIO. It basically costs IT nothing.

If you are downloading patches, you are no CIO, regardless of the title you gave yourself. Any company large enough to need a real CIO would have gone through an extensive testing/qualification process for an emergency out-of-band patch. You would be lamenting the many man-hours your teams lost while testing the patch (which, due to the urgency, could not go through the normal QA process you use before deploying patches). It took Amazon all day to deploy the patch across their load balancers.

Testing departments are useless when you can take a snapshot and roll back if a problem is detected. Also, if you are in an organisation as big as you claim, your critical systems run unencrypted behind an SSL accelerator and application firewall. Testing is so 2000-ish...

Sure.... I've heard that before... rollback fixes everything... When the time clocks lose punches because they can't upload data to the attendance system you can just tell managers to manually reconcile timecards for 10,000 employees since IT didn't bother to test anything.

Patches are free and part of your normal cycle. But there is cost in new certificates, and time and cost in getting thousands if not millions of user credentials changed or reissued. Patching is the cheap and easy part; some companies will incur considerable cost because of this.
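One small part of that cleanup can at least be automated: checking whether a site's certificate was issued after the public disclosure date (2014-04-07), a rough sign that the operator actually re-keyed. Here is a minimal sketch using only Python's standard library, working against the dict shape that `ssl.SSLSocket.getpeercert()` returns; the two sample certificates below are made up for illustration.

```python
import ssl

# Heartbleed was publicly disclosed on 2014-04-07; a certificate whose
# notBefore predates that was potentially exposed and should be reissued.
DISCLOSURE = ssl.cert_time_to_seconds("Apr  7 00:00:00 2014 GMT")

def reissued_after_disclosure(peercert: dict) -> bool:
    # peercert mimics the dict returned by ssl.SSLSocket.getpeercert();
    # its notBefore field uses OpenSSL's "%b %d %H:%M:%S %Y GMT" format.
    issued = ssl.cert_time_to_seconds(peercert["notBefore"])
    return issued >= DISCLOSURE

# Hypothetical certificates, not real sites:
old_cert = {"notBefore": "Jan 15 12:00:00 2013 GMT"}
new_cert = {"notBefore": "Apr 10 09:30:00 2014 GMT"}
print(reissued_after_disclosure(old_cert))  # False -- possibly still the old key
print(reissued_after_disclosure(new_cert))  # True
```

Note that a post-disclosure issue date only suggests a reissue; it says nothing about whether the old private key was also revoked, which is the part the CAs were charging for.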

That's ridiculous. I download firmware patches, software patches, etc on a daily basis. Patching heartbleed wouldn't even be out of the ordinary for my job as CIO. It basically costs IT nothing.

Yes and no. If you are patching for home use then there is basically no cost, however if you are patching corporate systems then the cost can be considerable since you actually have to involve all the managers that have an interest in all the relevant applications on the systems that need patching.

Just patching a corporate system without testing whether the update breaks any applications is almost a sure way of getting fired. Many businesses have, or should have, a change management process, and the professional IT

If Heartbleed leads to subsequent intrusions, then the price tag will definitely go into the ten-digit range. Even if web services are patched, both external and internal [1], it still will be expensive.

[1]: There are a lot of embedded devices that use OpenSSL, and some may not be able to be updated, especially if they are used for continuous production runs. All in all, if one factors in man-hours and opportunity costs, the total is larger — especially because the OpenSSL patches are not done yet, so even patched systems will need to be re-patched once a stable, "blessed" release is made.

NPR this morning mentioned that, in all of 2013, OpenSSL received just $2000 in donations that they could use for "maintenance of the code base" work. (All of their other income was earmarked for specific work for specific customers.)

Funny enough, they said they've gotten some $10,000 this year, in the last few weeks, though note that most of this is small donations from other countries. There's no indication yet that any of the big U.S. corps most affected by this want to pony up the cash for a full security audit, though maybe some have employees working on it internally (for their own servers' versions, or maybe to share upstream).

I liked the analogy made in the NPR story, that OpenSSL is like public works infrastructure, except it has no tax authority for maintenance income. Not that I think paying for software should be mandatory, but hopefully some people will decide that, even when they don't have to pay "tax" on something, sometimes it's in their best interest to do so.

You are forgetting a tenet of economics: there is no such thing as a free lunch. Someone is paying for that software. Heartbleed shows how you pay for it. You can use open source and never donate, living off the work of others for free. That's perfectly acceptable. But when the shit hits the fan, you have to pony up OT and scramble to patch and fix.

It is absolutely mandatory to pay for software. Everything requires resources to be built and maintained. If someone used OpenSSL and did not contribu

There's no indication yet that any of the big U.S. corps most affected by this want to pony up the cash for a full security audit, though maybe some have employees working on it internally (for their own servers' versions, or maybe to share upstream).

Perhaps the money is going to a more qualified team, the OpenBSD team (fyi - OpenSSH is also theirs, OpenSSL was not). They are doing a massive cleanup pass on the OpenSSL code which is to be followed by a security audit of the code.

There was some exploitation of the bug very soon after disclosure, but I can't see a way to win here. You can't tell everyone about the bug without telling the bad guys...

Actually you can. An AC in another story figured it out, and was promptly modded all the way up to +1.

You simply tell everyone that there is a vulnerability, but you do not tell them any details about what the vulnerability is. Instead, you simply announce a release date & time for a patch. People can either shut down their servers until the patch is released, or, if they're feeling lucky, they can keep running the old code until the patch is released, since no one actually knows what the vulnerability is.

You simply tell everyone that there is a vulnerability, but you do not tell them any details about what the vulnerability is. Instead, you simply announce a release date & time for a patch.

This is brilliant, and I'm kicking myself for not having thought of it.

The only problem I can see is that of whether the average repair-averse manager can be properly jolted by a good-faith announcement. Businesses often prefer PR bullshit to actual repairs, and will only invest in proper repairs if they're going to be utterly humiliated otherwise, and if they see no other way out. It's not unheard of for security researchers to be threatened with lawsuits should they disclose, for instance.

I might as well get ahead of all the fear-mongering "security" companies that will state all kinds of absurd numbers, so I am going to say 1 trillion dollars and countless lives lost.

Years ago I worked for an IT consulting company, and those bozos made a lot of hay from the Y2K bug. They had guys going around telling customers that they should stockpile food, because all the Cummins diesel engines had a Y2K bug that required advanced mechanical repairs to solve, and basically all food trucks, fuel trucks, fire trucks, etc. were going to be shut down for at least a month. So I made a bet with one of them that this was total BS. On speakerphone I called Cummins and very quickly got onto the phone with one of their top engineering guys. He said that the only clock in the engines was there to keep track of hours of operation; it didn't actually know what date it was, just total hours. He guessed that the only other clock in many trucks would be on the dashboard, to say what time of day it was.

The IT bozo's answer: "Cover up"

So while the Heartbleed bug was pretty damn serious, definitely cost money, and (I am willing to bet) caused way more damage than Y2K, I am now willing to bet that Heartbleed will go on to cost way more in fear-mongered consulting fees and anti-open-source fear mongering. My brother-in-law just stated that Heartbleed showed how weak open source really is. He didn't have the faintest idea of what open source was. This guy is in a position to influence government decisions and is surrounded by decision makers who probably have half the IT knowledge he does. So when the mega consultants are done whispering in the government's ears, I suspect that there will be fewer open source projects, and that the mega consultants will start selling services such as "open source code audits" — and these audits will show vulnerabilities such as "widely leaked source code".

So while the fear mongering will tally up some absurd numbers it will be the defrauding of customers that will really make heartbleed expensive.

Heartbleed will re-write your hard drive. Not only that, but it will scramble any disks that are even close to your computer. It will recalibrate your refrigerator's coolness setting so all your ice cream goes melty. It will demagnetize the strips on all your credit cards, screw up the tracking on your television, and use subspace field harmonics to scratch any CDs you try to play.

It will give your ex-girlfriend your new phone number. It will mix Kool-aid into your fishtank. It will drink all your beer

Point out to your brother-in-law that weak closed-source software has killed people, destroyed hundreds of millions of dollars' worth of spacecraft, caused blackouts, caused the loss of continental long-distance service, etc., etc.

Nothing is perfect. But this bug may have caused a leap in open source evolution. It seems that many people (myself included) have been complaining about the SSL project, but nobody did anything about it. Now it looks like at least one group has taken the reins and is renovating the project. I don't know how much of the new project will include the people who were running it a few weeks ago. But it seems that more people will be looking at the mega fork (as opposed to the usual dumb littl

No, a single case doesn't prove the norm. Those of us who follow the forums of major open source projects see the many eyeballs doing great things all the time. And now that many eyeballs are being focused on OpenSSL, the world will benefit.

Yeah, it's difficult to see how it could cost *that* much, although I would argue that it could be a little more complicated than you mention if you don't have a perfect inventory of all of your software and devices.

It was (and is) a serious enough bug that it was a drop-everything-and-start-patching situation. Since it can take time to determine whether your software and devices are vulnerable, it is likely that people had to work overtime (does anyone actually get paid overtime anymore?).
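For anything that links against the system OpenSSL, one quick first pass is to check the reported library version: upstream OpenSSL 1.0.1 through 1.0.1f shipped the bug, while 1.0.1g and the older 0.9.8/1.0.0 lines did not. A rough sketch using Python's `ssl` module, with the caveat that many distros backport fixes without bumping the version string, so a match is only a prompt to dig deeper, not proof of vulnerability:

```python
import re
import ssl

# Upstream OpenSSL 1.0.1 through 1.0.1f carried Heartbleed;
# 1.0.1g and later (and the 0.9.8 / 1.0.0 lines) did not.
VULNERABLE = re.compile(r"OpenSSL 1\.0\.1([a-f]\b|\s)")

def looks_vulnerable(version_string: str) -> bool:
    # Heuristic only: distros often patch without changing the
    # version string, so treat a match as "needs investigation".
    return bool(VULNERABLE.search(version_string))

print(ssl.OPENSSL_VERSION)  # version string Python was linked against
print(looks_vulnerable("OpenSSL 1.0.1f 6 Jan 2014"))  # True
print(looks_vulnerable("OpenSSL 1.0.1g 7 Apr 2014"))  # False
```

For appliances and embedded devices that don't expose a shell, even this much isn't possible, which is exactly why the inventory problem mentioned above made the response so labor-intensive.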

Yup, it is similar to trying to add up the cost of the Malaysian plane search in the Indian Ocean. The Aussie defence force is paid for anyway; they just happened to spend several weeks in one spot.

High. You can't compare the virus of 2001 with the vulnerability of today. First, most sites were patched immediately, so it is likely that existing staff, paid their regular wages, did the work. But accountants have a funny way of assigning costs: even if no extra pay or workers were required, if it took 4 hours to fix, they will assign 4 hours of labor plus overhead to it. So, while it is possible that the number, based on assigned costs, could reach $500M, it would also mean that all the affected

Just brainstorming... Would it be possible to create an open source license, which would mostly resemble GPL, but which had an additional clause that would require companies to pay the developers royalties when the code is used for commercial purposes?

That would be infeasible. There wouldn't just be one royalty, there would be a separate royalty for each separate piece of code being used. I see you're using bash, that's a royalty to the bash project. I see you're using Apache, that's a different royalty to the Apache foundation. You see where this is going.

In the 2000s (before Oracle), I negotiated a license with MySQL that allowed our company to bundle the software in my commercial app (for ease of install, especially at demo time), even though someone could have downloaded and installed their own copy of MySQL for free. The OEM license cost something like $150-250/license (it kept going up, of course).

Heartbleed was introduced into the OpenSSL software library by 31-year-old Robin Seggelmann, a Frankfurt, Germany developer who says that it was likely introduced while he was working on OpenSSL bug fixes around two years ago. “I was working on improving OpenSSL and submitted numerous bug fixes and added new features. In one of the new features, unfortunately, I missed validating a variable containing a length.” The error was also missed by a reviewer responsible for double-checking the code, “so the error made its way from the development branch into the released version,” Seggelmann said.
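The flaw Seggelmann describes is a classic missing bounds check: the heartbeat handler trusted the length field in the request and echoed back that many bytes, regardless of how many the client actually sent. The toy simulation below (in Python, not the actual OpenSSL C code; the buffer contents and function names are invented for illustration) shows the shape of the bug and of the fix:

```python
# Toy model of the Heartbleed over-read. Server memory is a flat byte
# string; the heartbeat request carries a payload plus a self-declared
# payload length. The secret string is obviously made up.
SERVER_MEMORY = b"hello" + b"SECRET_PRIVATE_KEY" + b"..."

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    # BUG: echo back claimed_len bytes without checking it against
    # len(payload) -- the missing length validation Seggelmann mentions.
    memory = bytes(payload) + SERVER_MEMORY
    return memory[:claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # FIX: silently discard requests whose claimed length exceeds
    # the payload actually received.
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

leaked = heartbeat_vulnerable(b"hi", 20)  # client sent 2 bytes, asked for 20
print(leaked)                 # "hi" plus 18 bytes of adjacent server memory
print(heartbeat_fixed(b"hi", 20))         # b"" -- malformed request dropped
```

In the real attack the adjacent memory could contain session cookies, passwords, or private key material, and a request could ask for up to 64 KB at a time, repeatedly.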

Cost to fix? Free. Cost to roll out? 1 trillion dollars, because companies like to milk every excuse in the book.

In a very small non-technical business which relies on some ssl based services, where I am the only nerd, here's my experience.

I had to:

- Test everything with SSL that we use in-house (we got off easy), then patch openssl on our internal web server. That was mostly for fun, since our network is fairly secure, and nobody that uses our internal network would be smart enough to exploit heartbleed. But still, NAT invaders, you never know. Maybe an hour spent, probably less.

- Explain this bug to everyone who isn't tech savvy: how it probably won't make a difference for us, but what it means for security. It wasn't worth calling a meeting over, so I did it individually; it took a while, though.

- Make all employees reset ALL of their passwords on the SSL websites we use, after testing a small sample of them and finding several were affected by the bug, better safe than sorry. From a micromanagement standpoint, this is actually a gigantic expense of time, since we generally don't cycle passwords on many of these sites very often, and often share non-critical accounts between employees. There's wasted time when everyone types the old password, scratches their head, tries to remember the new one, has to find someone else to ask, etc.. A customer could walk away in frustration if it takes too long. Probably an hour or two spent.

- Contact any of the web service providers that we use, that I know were affected, sit around, wait on hold (for a long time obviously) to try to get some kind of plan of action or disaster report out of them. Many hours spent, but probably a waste of time anyway.

- Loss of business from downtime of two critical sites that shut down for a few days when they discovered the bug. Not as bad as it could have been if it were a larger business.

So how much did it cost our organization specifically? A couple hundred bucks in time total might be a reasonable estimate. Definitely not a problem for an end user like us.

This is nothing compared to a bad IT problem - for example, when our entire network got ravaged by Zeus.....

We're talking every email account compromised, our static IPs placed on god knows how many blacklists, a practically worldwide email blacklist of our entire domain with very difficult removal, loss of HUGE amounts of business data to CryptoLocker, and loss of reputation when many of our customers also got the virus from opening emails from us, or received spam under our name. Our ISP even cut us offline until repairs were done; we were down for a week.

It even hit a backup drive with CryptoLocker because someone left it plugged in, which was very unfriendly when the banks needed to audit some business data that was cryptolockered in two places. Management freaked and required very expensive antivirus software that slowed our computers to a crawl, requiring an upgrade or replacement of every system in the entire building.

I bet Zeus cost us over 50 grand, we had to change our domain name, which is the worst way out, and who knows what kind of data those assholes got while they were abusing our mail server.

We were tempted to burn the building to the ground and change our name to recover from that one.

y'know... there is a moral to this tale: if businesses and individuals making money from software (libre) had properly funded it, putting some of the money that they saved from not purchasing proprietary software into the hands of those software teams, would we be talking about this now? in all probability, the answer is no. the reason is that those teams would be able to expand, take on more people, pay for security audits and so on, which, as we have discovered, they were otherwise not in a position to do.

so my take on this is that it is really really simple: businesses have received what they paid for, and got what they deserved.

i have been through this experience - directly - a number of times. i worked on samba - quietly - for three years. whilst the other members of the team were receiving shares from the Redhat and VA Linux IPOs, which they were able to sell and receive huge cash sums - i was busy reverse-engineering Windows NT Domains so that businesses world-wide could save billions of dollars.... and not one single one of those businesses called me up to say thank you, have some cash. as a result, about a year after terminating work on samba i was working on a building site as a common labourer.

it was the same story with the Exchange 5 reverse-engineering, which the Open Exchange Team mirrored (copied, minus the Copyright and Credits).

there is a moral to this tale: unlike proprietary software, which has a price tag commensurate with its perceived value, the process of even *offering* payment to individuals working on a software libre project - which has usually been downloaded from a completely different location (via a distro) - is completely divorced from the developers' actual efforts.

even in shops in rural districts, it is understood that if the door is unlocked and the shopkeeper is not there, you help yourself, open the till, sort out your own correct change and walk out. but in the software libre world there is often not even that level of expectation! the software is "free", therefore it is monetarily zero cost, therefore we should not have to pay, right? and businesses are pretty pathological about taking whatever they can get without paying for it.

so the short version is: there is a huge disconnect in software libre between service provision (the software) and paying for that service, and i really cannot see a solution here. perhaps this really should be bigger news: perhaps in this openssl vulnerability we have an opportunity to make that clear.

y'know... there is a moral to this tale: if businesses and individuals making money from software (libre) had properly funded it, putting some of the money that they saved from not purchasing proprietary software into the hands of those software teams, would we be talking about this now? in all probability, the answer is no.

And that's a flaw in the open source model. There is the assumption that people will review the code and give back to it... but it is just naive. It assumes that companies actually care about utopian ideals and not just making money for shareholders.

Additionally in the field of system administration, when issues like this occur it is always about appropriating blame. Some places would rather let hackers break their systems than risk upsetting customers with downtime to fix issues. If a hacker gets in, the

Agreed, but what should that mechanism be? My business runs on open-source software. Pretty much everything is behind our reverse proxy, Pound. One of the numerous libraries which Pound relies on is OpenSSL.

To whom do I give money? Debian? The applications I use like Apache and Pound? Do I enumerate all the libraries that all the applications use and give each of those hundreds of projects a few pennies?