Meta

I’ve often said that the End User License Agreement (EULA) is eventually going to be challenged and defeated in court. It is quite simply too broad, too vague, and too one-sided to pass the “reasonable person” test. And, from Bruce Schneier’s blog, there appears to be a step in that direction. Granted, it is a selected case, and it deals with company-to-company contracts and situations, but it is a first step. So let’s talk about this for a second.

The EULA was put in place to (presumably) protect the software vendor from lawsuits: errors are always going to exist in software, so vendors needed protection from liability for reasonable errors and issues that can be fixed quickly. This was a reasonable step (IMO) to help software innovation. However…

Software companies have taken this idea way too far, and in doing so have tried to absolve themselves of any responsibility for any error, no matter how damaging or preventable, and of any responsibility for restitution, no matter how reasonable. This is entirely silly. If your software sucks, you need to be held accountable for it. If your software is good, but you make reasonable errors and a reasonable attempt to correct those errors in a timely fashion, you shouldn’t be held to account too harshly for that. It’s a matter of common sense vs. lawyers, and the lawyers get paid to see only the side of their client.

Maybe it would be reasonable for users and producers to get together, without lawyers, to make sure both sides come to equitable answers, but until then I will remain on record as saying the death of the EULA is inevitable.

Ok, I saw this on the Veracode folks’ site a while back, but it still bears mentioning, or as they say in some circles, QFT.

Is my Code Good?

So lots of folks have been wondering about a security mindset and how that maps to product creation or implementation of software. Some even state that this mindset is in fact part of mathematics and can be taught there.

Security is about thinking about stuff and how it can be broken, while the usual computer engineering is more busy thinking about how stuff can be built. The trick is, we need our builders to think about BOTH while building systems, or we can’t sufficiently and cost-effectively move away from security as a separate governance function toward security as an engineering process. And if we can’t do that, we’ll be stuck paying for security as an afterthought, vs. having security “built-in”. I see trends suggesting this might be changing at the OS / platform level; however, the Web 2.0 and Cloud / Grid folks really seem to need to pick up on this lesson.