Posted by EditorDavid on Saturday June 17, 2017 @06:06PM from the end-times-for-end-user-license-agreements dept.

mikeatTB shares an article from TechRepublic:
Software engineers have largely failed at security. Even with the move toward more agile development and DevOps, vulnerabilities continue to take off... Things have been this way for decades, but the status quo might soon be rocked as software takes an increasingly starring role in an expanding range of products whose failure could result in bodily harm and even death. Anything less than such a threat might not be able to budge software engineers into taking greater security precautions. While agile and DevOps are belatedly taking on the problems of creating secure software, the original Agile Manifesto did not acknowledge the threat of vulnerabilities as a problem, but focused on "working software [as] the primary measure of progress..."

"People are doing exactly what they are being incentivized to do," says Joshua Corman, director of the Cyber Statecraft Initiative for the Atlantic Council and a founder of the Rugged Manifesto, a riff on the original Agile Manifesto with a skew toward security. "There is no software liability and there is no standard of care or 'building code' for software, so as a result, there are security holes in your [products] that are allowing attackers to compromise you over and over." Instead, almost every software program comes with a disclaimer to dodge liability for issues caused by the software. End-User License Agreements (EULAs) have been the primary way that software makers have escaped liability for vulnerabilities for the past three decades. Experts see that changing, however.
The article suggests incentives for security should be built into the development process -- with one security professional warning that in the future, "legal precedent will likely result in companies absorbing the risk of open source code."

Because of the move toward more agile development and DevOps, vulnerabilities continue to take off...

You cannot build a secure application without planning the whole thing out first. This ADHD / MBA / lazy fuck / quick profits / fuck the customer approach to development ("agile") is cult kool-aid, and all the young ones drank it.

It will take computer science decades to recover from this, if it ever does. I think we may have already peaked.

Just look at medical devices. They don't cost that much to make but have to go through a long certification process that needs to be paid back.

Same with software. Something like SOX, PCI or HIPAA will pop up to certify "secure software" and software that is patched on a regular basis and people will end up paying for it. And on top of that every piece of software will be "certified" on some platform, similar to a game console. If you run it outside of the certified hardware you lose the ability to sue.

You're all idiots if you think you will be able to run software on some of the ridiculous configurations I've seen in my time and expect vendors to pay for it when it breaks because of your stupidity.

Doesn't matter. After the lawsuits of the 80's and 90's there are now "best practices" and "standards of care" and standards for almost everything because you can't just sue. You have to prove someone did something wrong.

Same here. Industry will make up some best practices, it will be a certification or some other process that costs a lot of money, it will mean hiring people to push the paper and make sure the paperwork is right and everyone will pay.

The whole medical ecosystem is seriously screwed up, starting with the reimbursement models. People scream about high costs, but one particular med device company I worked for spent $600 per device, all in, for production and FDA overhead. The devices sold for $15K and the company was barely breaking even. Where did the other $14K+ go? Mostly sales and marketing, plus lobbying for increased reimbursements.

... software takes an increasingly starring role in an expanding range of products whose failure could result in bodily harm and even death. Anything less than such a threat might not be able to budge software engineers into taking greater security precautions.

What you are seeing is the maturing of software engineering as a profession. A few hundred years ago, if you needed surgery you would go to your barber [wikipedia.org]. The reason for this was that they were usually in possession of the right tools. The medical profession eventually matured to what we have today, where a surgeon is a specialized physician. But that didn't happen overnight, and lots of people died in the process. In fact, we didn't even have a germ theory of infectious disease until well into the 19th century.

The point is that right now hardware, including its firmware components, is oftentimes made without the involvement of a software engineer. It wasn't that long ago that software engineers didn't even exist and in time as the profession matures we will get to the point where developing a piece of hardware without the participation of a software engineer will be unthinkable. But we are not there yet.

An important side note is that there is a difference between a coder, a developer, a programmer, a software engineer, and several other specialized disciplines in the software arena. I think that a precondition to solving the problem identified by the article has less to do with things like development methodology (that is not central to the problem at hand) and more to do with establishing minimum standards for anyone who claims to be a software engineer. For instance, a surgeon in 2017 has to meet vastly different minimum qualifications than a surgeon did in 1917. We didn't even have software engineers a hundred years ago, so who knows what the field will actually look like by the time it really starts to mature.

Sorry, I stopped there, at "Even with the move toward more agile development and DevOps". What's the supposedly positive link between the two? Neither "old" nor "new" methods will ever mean better software than the people using them.

Bad engineers using old methods (V-cycle? Tons of documents?) or new methods (you said agile, as in "get as many things done as possible, as quickly as possible, using shiny web apps like Trello or Kanban-something"?) won't make secure software. Maybe with good engineers you can achieve good results, whatever the method is.

More or less related: ISO 9001 doesn't mean that the certified company makes good products; it means that it always produces the same quality, good or bad.

This may sound a bit like a troll, but I'd like to add that, since young engineers favor agile methods, and considering their lack of experience, combined with the messiness I sense in agile methods, I tend to think that agile methods would produce less secure software...

I tend to think that agile methods would produce less secure software...

Couldn't agree more. The notion that the road to computing security runs through agile and Devops seems to me to be as unlikely as the notion that the way to get you and your bicycle from New York to Bermuda is to head off on said bike for the Bering Strait (in Winter) so you can get to Singapore then think out your next step.

FWIW, I think the road to computing security probably is ill paved, difficult, unpleasant and involves shrinking attack surfaces by eliminating unneeded capabilities (e.g. #@$%^ Javascript) . It probably also requires shrinking the toolset to a bare minimum of proven libraries and protocols. That's not much fun, so it probably won't happen until we've exhausted the long list of entertaining but ineffective alternatives.

Software engineers have largely failed at security. Even with the move toward more agile development and DevOps, vulnerabilities continue to take off. More than 10,000 issues will be reported to the Common Vulnerabilities and Exposures project this year.

How about you get what you pay for? Many management teams have decided that adding security costs money and it's more cost effective not to spend many cycles on it, but rather to just deal with problems as they pop up.

I don't think you can spin that as software engineers "failing." If the management wants security, they can pay for training, consultants, audits, bug bounties, etc. There are lots of ways to address this issue. Besides, perhaps the number of bugs is skyrocketing as a natural consequence of all of the new software projects and products.

So very much this. Given the notion that out of fast, cheap, and $x, you can have any two; it's practically a truism that PHB/MBA types will always choose fast and cheap, no matter the value of $x. The only exceptions are when you're contractually or legally obligated to have $x, such as in PCI or HIPAA environments. And even then, fast or cheap is only given up for $x very begrudgingly, and sometimes only on paper but not in reality.

How about you get what you pay for? Many management teams have decided that adding security costs money and it's more cost effective not to spend many cycles on it, but rather to just deal with problems as they pop up.

Software hasn't had its "Pinto" moment yet, where a jury decides that a company needs to be punished for that type of calculus.

The problem with that is that most security issues are a result of attack vectors that were not known yet when the software was under development.

Software patches address that: software at release is tested against known attack vectors, and security patches are likewise tested against known attack vectors, just a few more of them as more become known over time.

Reading the article, it's all people with an interest in peddling solutions to the problem, naturally. This is a marketing paper.

Claiming that software engineers have "failed" at security is akin to claiming that police have "failed" at stopping crime. And the courts aren't going to suddenly start blaming companies for the actions of threat actors unless there is some representation that the products they're creating are unhackable.

If it were literally only companies which were on the hook, you'd see a bloom (not a renaissance, because there has not been a dark age yet) of OSS. If companies are liable, but J. Random Coder on the street is not, then you're going to see FoSS take center stage simply due to lack of liability.

Uh, no. Given a choice between "use product from A and if there is a problem they are liable" and "use FOSS product and if there is a problem I am liable", who do you think is going to go with the second option?

Uh, no. Given a choice between "use product from A and if there is a problem they are liable" and "use FOSS product and if there is a problem I am liable", who do you think is going to go with the second option?

So you're saying that there are EULAs today where the developers ACCEPT liability? No, today EULAs deny liability, just like FOSS. As far as liability is concerned, today proprietary and FOSS are equivalent.

But you are talking about a situation where they are unequal - proprietary software can be held legally responsible while FOSS cannot. In that case, one would have to be nuts to choose FOSS.

The tiniest bit of actual research reveals that the issue with Therac-25 was actually the lack of physical safety interlocks which the software, written for an earlier model in the Therac line, assumed were present. Software developers were left out of the development of Therac-25 as the hardware and product guys assumed they could just use the existing software as-is and didn't bother to ask anyone who knew better.

The problem, then, was poor product (hardware) engineering and a series of lapses in judgment.

The earlier models had two, redundant, safety mechanisms in place to prevent killing patients, one in software and one in hardware. Yes, it was an unforgivable management decision to deliberately compromise that redundancy by removing the hardware safety mechanism in Therac-25, but that does not excuse the bug in the software safety mechanism.

The software was responsible (not solely responsible, before Therac-25, but still responsible) for preventing fatal radiation doses, and it failed to do so.

Consider that it wasn't a bug as that particular failure mode was not something the software was responsible for handling in the models for which it was written. When writing software for very specific and well-defined hardware, how long do you spend on use cases specific to undefined hardware? How do you even develop for undefined hardware?

If (and that's a big "if") companies become liable for software failures, then most likely there will be a guideline of standard programming practices. Likely it would restrict companies to using programming languages that have already been heavily analyzed and had their security weaknesses identified. CMU has composed guidelines for multiple languages and platforms [cert.org], violations of which could easily be identified programmatically. Such regulation would be a deathblow to companies using script kiddies to scrape together code.
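
(As a minimal, hypothetical sketch of "identified programmatically", in Python: a crude lint pass that flags patterns in the spirit of published secure-coding rules. The banned patterns here are invented for illustration; real checkers parse the code properly rather than grepping it.)

import re
import sys

# Hypothetical banned patterns; a real tool would inspect the AST.
BANNED = [
    (re.compile(r"\beval\s*\("), "eval() on dynamic input"),
    (re.compile(r"""execute\s*\(\s*["'].*%s"""), "string-formatted SQL"),
    (re.compile(r"\bpickle\.loads?\b"), "unpickling untrusted data"),
]

for path in sys.argv[1:]:
    with open(path, encoding="utf-8") as src:
        for lineno, line in enumerate(src, 1):
            for pattern, why in BANNED:
                if pattern.search(line):
                    print(f"{path}:{lineno}: {why}")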

So who're you going to sue? The retailer? The distributor? The importer? Or will you try going after the producer: overseas, in a different jurisdiction, and possibly out of business already (or they've simply shut that company down and moved on to the next)?

Look what happened to general aviation when Cessna, Piper, et al. got the pants sued off them. A small four-place plane used to cost about as much as a mid-range Cadillac; after the lawyers got through with them, they cost $200K.

... to define a "state of the art" regarding security. It should contain things like not mixing user input with SQL queries unless the input goes through a whitelist of characters or is escaped by a function proven to work.

Essentially that "state of the art" should always be a bit above what idiots do, in order to weed out idiots. Ideally it's defined in a way that compilers can prove it is followed (in the above case, user-input strings and SQL queries could have different types).
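
(One minimal illustration, assuming Python and sqlite3, with an invented users table: parameterized queries already approximate the "different types" idea, because the query text and the user-supplied value travel through separate channels and are never concatenated.)

import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # BAD: concatenating user input into the query; a username like
    # "x' OR '1'='1" silently changes the query's meaning.
    #   conn.execute("SELECT id FROM users WHERE name = '" + username + "'")

    # GOOD: a parameterized query keeps query text and user input
    # separate, so the input can never be parsed as SQL.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchone()

A language could enforce the same separation at compile time by giving raw user input and SQL fragments distinct, non-interchangeable types.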

Large corporations have armies of attorneys to cover their asses. Liability for software faults would benefit them because they have the resources to kill almost any lawsuit against them. The open source world, however, would wither and die, because no weekend coder is going to risk everything over a mistake. Expect large corporations to fully endorse software liability laws, since it would remove the one kind of competition that they can't compete against on cost or functionality.

If the brakes go out on your car and you crash into a tree, is the person who made the brakes liable? If the software "goes out" on your car and you crash into a tree, is the person who made the software liable?

If you build a car with an easily hackable lock (the tennis ball trick) and someone breaks in, are you liable for the theft? If you build a car with an easily hackable electronic lock and someone breaks in, are you liable for the theft?

Do you see the parallels here? Just because someone can do something bad to you doesn't automatically make whoever built the product liable for it.

Agile and DevOps won't do anything on their own to improve your security. I'd have a really hard time taking seriously anyone who thought they did. Also, the current state of the industry is not likely to change as long as there are intelligence agencies that feel it's beneficial for software not to be secure. If your OS were truly secure, you can bet there'd be a constant push by those guys to introduce backdoors they could exploit.

Even with the move toward more agile development and DevOps, vulnerabilities continue to take off...

*needs citation
Seriously, I'm a software developer, I often have to be involved in a variety of security-related aspects of development, and I've been doing it for twenty years. My anecdotal evidence is that security exploits are way *way* down in terms of risk and severity compared to when I entered the industry... I could be wrong (the plural of anecdote is not data), but to me it feels like the opposite.

If it's such a security issue, shouldn't it already be done correctly in the library or the logging system? These sorts of things are exactly what a developer shouldn't have to worry about. If the underlying system receives a string from a log library, the string should already be cleaned, or the underlying system should clean it up itself.
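
(A rough sketch of what "the underlying system should clean it up" could look like, using Python's standard logging module; the filter itself is invented for illustration. It scrubs control characters so attacker-supplied input can't forge extra log lines or smuggle terminal escapes into a log viewer.)

import logging
import re

_CONTROL_CHARS = re.compile(r"[\x00-\x1f\x7f]")  # newlines, escapes, etc.

class SanitizingFilter(logging.Filter):
    # Replace control characters in the message before it is emitted.
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = _CONTROL_CHARS.sub(" ", str(record.msg))
        return True

logger = logging.getLogger("app")
logger.addHandler(logging.StreamHandler())
logger.addFilter(SanitizingFilter())
# The injected newline and fake entry get flattened into one line:
logger.warning("login failed for user: " + "evil\nFAKE: admin logged in")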

LOL, HP Fortify, the tool that marks almost every line as a vulnerability to cover its own ass. It generates so many false positives that it is beyond useless. We'll just keep doing our own reviews... And if junk in your log manages to cause a hack, then it is not your software at fault; it is the log viewer software that is at fault. If that happens to be VIM or your shell, then yes, I boldly claim that is a bug in those pieces of software.

This doesn't align with the business interest. If it costs money and doesn't save money or make money you're wasting your time.

If your company were going to be held liable for security vulnerabilities, finding and plugging these holes during development would be part of your job. As things are, there's no reason to look for or deal with them unless there's a way to make your customers pay for it. This holds true for all custom software, either open or closed source.

If your company were going to be held liable for security vulnerabilities, finding and plugging these holes during development would be part of your job. As things are, there's no reason to look for or deal with them unless there's a way to make your customers pay for it. This holds true for all custom software, either open or closed source.

It really depends on how big the company is, how often they get busted, and what exactly they are liable for. As it stands now, the average small company can go 20 years without an incident. The small company that skimps on security can likely outcompete and outlast the small company that doesn't. Sure, if they get unlucky and have a security incident it could bankrupt them, but the odds are in their favor: skipping security gives them a competitive advantage over the company that doesn't.

I'm glad that someone here recognizes this fact. I can't count how many companies I've seen that did things "right" go under or get bought by companies that took every software shortcut known to man.

The basic fact is that if the customer is ignorant of the intangibles like quality, they'll prioritize reputation and then price. If you're a smaller company, you won't survive long enough to get a reputation for excellence if you don't go cheap enough to allow you to undercut all your competitors. And (in

The developers aren't at fault. The people in charge have to be the ones to demand security. Blaming pros and cons on Agile or DevOps misses how companies really work. If the management puts security as a required feature, then it'll get added in even with Agile. Nobody should be dumb enough to allow bottom tier developers to set their own goals.

You also need management to actually hire security experts. A lot of failures come from having novices work on security (novices can mean those with decades of software experience but only a superficial understanding of security and zero academic understanding of crypto).

No, Agile won't allow security to be built in. Agile builds dirty snowballs with little integration other than slapping one feature on top of another. There is no mechanism for going back and developing a model of how the features integrate to produce security holes. DevOps is no better.

In spite of people confusing in-flight entertainment systems with avionics: yes, it is done all the time in the aviation industry. Every piece of software that controls the airplane must be built to RTCA DO-178B/C design processes. Among other things, every input and output to every module is specified in the design process, and out-of-bound input responses are chosen. Then, in writing the software, the inputs are checked and then validated against random and maliciously crafted input. Bogus states are injected to verify the system responds safely.
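
(A toy sketch of that discipline in Python; the airspeed requirement is invented, and real DO-178 code looks nothing like this, but it shows specified input ranges with a chosen out-of-bound response, plus requirement-based tests.)

# Hypothetical requirement: airspeed input is valid in [0, 500] knots;
# out-of-range or non-finite input shall yield the fail-safe value
# and raise a fault flag.
FAIL_SAFE_AIRSPEED = 0.0

def filtered_airspeed(raw: float) -> tuple[float, bool]:
    """Return (airspeed, fault). Out-of-bound input produces the
    specified fail-safe response instead of propagating downstream."""
    if not (0.0 <= raw <= 500.0):  # also rejects NaN and infinities
        return FAIL_SAFE_AIRSPEED, True
    return raw, False

# Requirement-based tests cover nominal, boundary, and malicious input.
assert filtered_airspeed(250.0) == (250.0, False)
assert filtered_airspeed(-1.0) == (FAIL_SAFE_AIRSPEED, True)
assert filtered_airspeed(float("nan"))[1] is True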

It's not really that much more expensive, as mature engineers aren't really more expensive than programmers, are a lot more effective, and the debug cycle is a lot faster when it's designed in at the front.

Dude, I worked in this industry. It's FUCKING INCREDIBLY EXPENSIVE. Like on the order of 100x more expensive than writing line-of-business commercial software. A 10-line subroutine can EASILY require 100 hours of engineering and testing to meet spec. Everything has to be written out in some sort of design document beforehand, every requirement flowed down to the lines of code that cover it, documented test cases that cover each requirement, total coverage of all possible inputs at every call boundary, etc.

I mean, yes, theoretically it would be great if all of this was done in every piece of software, but software's PURPOSE is to be flexible and quickly and efficiently implement functions in a way that can be modified without exorbitant cost. If you force things to the level of safety of flight critical software then you might as well literally build dedicated silicon for everything, because it will be cheaper.

The truth is software will probably never achieve this kind of level of reliability and security in general. It just isn't worth it. Even safety critical software needs to be cheaper than that. If we want the functionality and convenience of embedding software in cars, airplanes, etc then we better be willing to accept the consequences. The only alternative is likely automated software development performed entirely by AIs, but I doubt that will fix the problem either. There's always some guy that can make the smarter AI that can figure out the security hole in the software your dumber one wrote.

According to the sites about the CGA I've found, it does cover software. However, it doesn't cover anything sold "for commercial use", so business software wouldn't be covered. An OS or program bought for home use would be, though. Open source would not be covered, as only items sold "in trade" are. Private sales are not covered, so custom software made on contract would not be covered.

This is in addition to warranties, so where the warranty for a refrigerator may be 12 months, the CGA might say 10 years.

The CGA also says that the supplier is liable for any additional harm; e.g. if your phone catches fire and burns the house down, the supplier is liable for all losses, including the house, accommodation while it is rebuilt, etc.

You also can NOT contract your way out of the CGA.

And then everyone comes around and cries out, "Why is the US price for [item] $500 while I'm paying $1000 for it?!"

Liability for general purpose computing is not going to happen. It would make software way more expensive, and mean locked down desktops and laptops that prevent users from downloading, connecting, and configuring. People are not going to accept that.

For safety critical software, such as automotive control (not infotainment), elevator systems, etc. we already have liability regulations.

Liability for a insulin dispenser makes sense. Liability for a free webapp does not.

Liability for general purpose computing is not going to happen. It would make software way more expensive, and mean locked down desktops and laptops that prevent users from downloading, connecting, and configuring.

In addition to that, we have the most vulnerable OS being the biggest OS, and the Chinese building the Internet of Things out of essentially open systems, so what would we do? Sue them?

This isn't to blame the victims, but the ascendancy of personal computing for the masses means that most computing devices are owned by people with very little idea of security. In a world where people click on random stuff they get in email, it's gonna be very hard to get any real security.

Because people want to do what they want, and if the AI is getting in the way, many people will find a way around the AI, then complain when they get burned. That's just human nature, and until you can change that I don't see a way to fix the problem.

There are various results in computer science that show mathematically that it's impossible for a computer to do such a thing once programs reach a certain complexity, which we reached a long time ago. Unfortunately, I think the ultimate solution will be something like a Secure Model of Computation with many restrictions.

The Secure Model of Computation would be defined over a less flexible subset of a Turing machine that deliberately avoids the properties on which the proof of the halting problem's undecidability rests.
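
(For reference, the impossibility alluded to above is the classic halting-problem diagonalization, sketched here in Python; halts() is hypothetical, and the point is precisely that it cannot exist.)

def halts(f, x) -> bool:
    # Assumed decider: returns True iff f(x) eventually halts.
    # No such total, correct function can exist.
    raise NotImplementedError

def diag(f):
    # Do the opposite of whatever halts() predicts f does on itself.
    if halts(f, f):
        while True:   # loop forever if f(f) supposedly halts
            pass
    return None       # halt immediately if f(f) supposedly loops

# diag(diag) would halt if and only if it doesn't halt -- contradiction.
# Rice's theorem generalizes this to any non-trivial semantic property,
# including most forms of "is this program secure?", which is why a
# restricted, non-Turing-complete model is the usual escape hatch.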

The profit margin is pretty thin for many devices and the software to run them, and the lifetime of a device or software is likewise very short. Security is about the last thing on their minds. Milking whatever profit can be had out of product A while Product B is getting ready for release is a problem.

Then there is the "we" issue. The collective "we" is still using stupid passwords like Password1 and doesn't think twice about clicking on email links. At this point, it is obvious that the collective "we" is not going to be of much help in matters of computing security.

It's nothing short of amazing that 30-year-old SMBv1 is still being shipped toggled on (it is finally being removed from the OS). This next part might be conjecture: it's been known to be a gaping security hole for years, so why was it still there? Microsoft had no problem making a shitload of peripherals obsolete with Vista, and no issue abandoning Windows 7 users. But SMBv1? That must be included, and it must be turned on by default. So it's not hard to imagine that someone wanted it turned on by default.

You and I know what best practices are, so why the fuck don't we "AI" the computing devices?

Oh, very simple: Because it is not possible today and it may never be possible. Strong AI is a dream/nightmare, but not anything that we can reasonably expect to ever exist at this time. There is actually no indication that it is even possible in this universe. And should it be possible, it may well come with self-awareness and free will and may flat-out refuse to work for you.

Liability for "free" software rests with the people who use it to make money. They are the ones on the hook to ensure that the "free" software is suitable for the purpose for which they are selling it.

Organizations which use "free" software directly are themselves responsible for whatever happens as a result of using that "free" software.

GPL is rather long-winded - take a look at the MIT license for a notion of where liability for "free" software lies.

Before you say "that's gonna change when liability comes into the picture": no, not at all. People writing software who don't know how it is going to be used cannot conceivably be held liable, any more than Sir Isaac Newton's estate could be held liable for a mishap on the Space Shuttle.

It's management that won't pay for properly written, properly tested software. That takes time (measured in metric shit-tons), and that makes it too expensive in every case I've ever seen.

Security cannot possibly be the result of dotting every i and crossing every t. It cannot require exhaustive testing, massive expense, or painstaking attention to every detail. Any approach requiring these things for success is almost certainly guaranteed to fail. Security must never require human perfection.

The only realistic way to get to a secure system is by pursuing designs which are inherently secure, where coders would have to intentionally create a vulnerability or otherwise knowingly subvert the design to introduce one.

Could have been the entry point for an insightful analysis. You didn't comment on (or notice) that it's not a spurious correlation. The EULA was perfected by Microsoft so that liability was eliminated as a real concern for the developers.

I think the other major wrinkle perfected by MS was selling to the makers, not the actual users.

There are other possibilities, but because the rules of the game are biased for YUGE companies, not increasing consumer choice and freedom, we're screwed. I suppose Trump's voters h

Indeed. DevOps has basically failed (not many people can do it, and those that can had already been doing it before it had a name). Agile is mainly a method of making sure management does not stand in the way of developers too much, but again, it needs highly competent people to work well.

As such, claiming that the failure of two hype movements is responsible for insecure software is excessively stupid or a marketing lie. I suspect the latter.

If you disagree, please do show me a practical way how to write completely secure code.

There is no need for that, and asking for it shows you are a novice at software security. In actual reality, it just needs to be harder to break in than what your target adversary can do or can afford. That is often pretty easy to reach, given competent architects, designers, and implementers. The real problem is that most software is written by incompetent people without the first clue about security. Hence breaking in is often excessively simple. Just look at the recent Intel vulnerability (the management engine).