Posted
by
Soulskill
on Monday October 25, 2010 @04:47PM
from the you-can't-goto-there-from-here dept.

Ponca City writes "Until now, computer security has been reactive. 'Our defenses improve only after they have been successfully penetrated,' says security expert Fred Schneider. But now Dr. Dobb's reports that researchers at Cornell are developing a programming platform called 'Fabric,' an extension to the Java language that builds security into a program as it is written. Fabric is designed to create secure systems for distributed computing, where many interconnected nodes — not all of them necessarily trustworthy — are involved, as in systems that move money around or maintain medical records. Everything in Fabric is an 'object' labeled with a set of policies on how and by whom data can be accessed and what operations can be performed on it. Even blocks of program code have built-in policies about when and where they can be run. The compiler enforces the security policies and will not allow the programmer to write insecure code (PDF). The initial release of Fabric is now available at the Cornell website."
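Fabric itself attaches its labels statically and enforces them at compile time, so the following is only a rough runtime analogy, with all names invented, of the "object labeled with a set of policies" idea from the summary:

```java
import java.util.Set;

// Hypothetical sketch only: a value tagged with the principals allowed to
// read it. Fabric enforces this statically, at compile time; the runtime
// check here is just an analogy for data carrying its own access policy.
final class Labeled<T> {
    private final T value;
    private final Set<String> readers;

    Labeled(T value, Set<String> readers) {
        this.value = value;
        this.readers = Set.copyOf(readers);
    }

    T readAs(String principal) {
        if (!readers.contains(principal)) {
            throw new SecurityException(principal + " may not read this value");
        }
        return value;
    }
}
```

In Fabric the compiler would reject the disallowed read before the program ever runs; the runtime exception above only stands in for that compile-time error.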

Yes, and it does not prevent burglary either. If you mess up the transport & application protocol you are in trouble, but what has that to do with secure *programming*? Christ, I bet you can make programs with it that display your password in 10 feet high numbers as well (given a large enough monitor).

Secure software takes longer to develop. That is the primary reason it is not widely practiced. Unless this new language makes secure programming as quick as insecure programming, corners are always going to be cut and security will suffer.

Assuming that outside forces are never brought to bear. Until companies are held accountable for exploits in their code, it's not going to change. Requiring companies to share even a portion of the expense when a vulnerability is exploited would do wonders for the situation.

Requiring companies to share even a portion of the expense when a vulnerability is exploited would do wonders for the situation.

Yeah: since the expense is potentially unlimited, it would become far too risky for any mere mortal to write code, thus leaving only the large companies that can duke it out in a court willing to do that. I can feel Microsoft drooling...

Well, it will maybe bring down the barrier. And don't forget that insecure software costs loads of money - maybe not up front, but certainly later, during the maintenance cycle. Of course, if you are able to charge for maintenance hours, you may be in luck.

Secure software may make a lot of sense for core security components. Proving that something is secure can be hard, so any software that brings that cost down is very welcome. Whether this is required depends on the type of software, of course.

Or raise the barrier to those who understand the policies -- the rest will be writing crappier code than ever, because they think that their ill-written policy will protect them so they won't have to think security anymore.

That sounds more like it would get in the way - perhaps I've come up with a more secure and robust algorithm than they've thought of and all it requires is a bit of data transfer from one section to another - but it's deemed insecure due to their constraints - even though I've handled security in a different section.

On top of that - his initial statement basically makes no difference:

Our defenses improve only after they have been successfully penetrated

So you expect all programmers to switch to Fabric just overnight? Who's going to go through the hassle of learning a new language?

it's deemed insecure due to their constraints - even though I've handled security in a different section.

Yep - sounds like more bloat to me. In ten years time, we're going to be running our software on hardware five times as powerful as that which we use today and the software will do the same things it does today no faster.

And then some old person will implement an email client in C using only the oldest and slimmest of libraries and everybody's heads will explode with shock at the speed of it.

Mutt's nice, but I like to send HTML emails, and last time I looked, that wasn't Mutt. If someone made something as lean and powerful as Mutt, with the HTML editing capabilities of Thunderbird (nowadays a bloated corpse of an email client), it would be my dream email client.

It's essentially mandatory access controls at the object level. In which case, you shouldn't need to add it to the language syntax - you should be able to code it into the virtual machine and use the security labeling available in the native OS. The security would then be scripted as data, rather than hard-coded, allowing any existing program to gain this security with no modification to the code, merely a suitable XML file with the MAC labeling data. Minimal bloat, and the speed should be unaffected (since the enforcement would live in the virtual machine anyway).
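The labeling-as-data idea above might look something like the following; this is a purely hypothetical format invented for illustration, not anything Fabric, SELinux, or any real MAC system actually ships:

```xml
<!-- Hypothetical MAC labeling file, invented for illustration only:
     attach read/write policies to classes as data, with no code changes. -->
<mac-policy>
  <object class="com.example.MedicalRecord">
    <read principals="doctor,patient"/>
    <write principals="doctor"/>
  </object>
</mac-policy>
```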

...you should be able to code it into the virtual machine and use the security labeling available in the native OS.

No, this REQUIRES the native OS to cooperate to be successful.

According to the summary, the security policy is enforced by the COMPILER. That means if you change the access policy to a data file, you need to RECOMPILE all the code that touches that data, since the access is compiled in. At least, if the summary is correct, you do.

Your computer is only five times as powerful as one built a decade before it?

I was trying to extrapolate where I think things are going. I don't know how much further we're going to go in terms of sheer processor speed - less far in the next ten years, in terms of multiples of what has gone before, than in the previous ten. There might be some revolutions that lead to much faster computers in the areas where that's needed (scientific computing, perhaps), but less likely in general public computers. We're probably going to see increases in cores, and that doesn't translate as well into general-purpose speed.

You would switch to Fabric if you are concerned with security. Even though insecurity will have found its way into your code without Fabric, the point is that Fabric is security oriented, making it HARDER to do insecure things. As for your argument about your own mechanism being deemed insecure, I'm sure that would be a user error as opposed to a problem with the language itself, since there has to be a mechanism to specify what is and is not to be trusted within the language, if you wish to do such a thing.

That sounds more like it would get in the way - perhaps I've come up with a more secure and robust algorithm than they've thought of and all it requires is a bit of data transfer from one section to another - but it's deemed insecure due to their constraints - even though I've handled security in a different section.

Much like SELinux. At some point, the security aspects are frustrating enough that you just turn the damn thing off.

Your response doesn't give any reasons for not using Fabric other than maybes and what ifs that may or may not apply to anyone including yourself. You might as well say "why in the world would I ever use Fabric, I live in a log cabin and forage for food, it makes no sense!"

Also, who said they expect all programmers to switch to Fabric at all, let alone overnight? It's a choice. They are offering a potential solution for Java-based systems that can, in their opinion, improve the overall security of the system.

I find the quote about defenses only improving after they've been penetrated funny for a different reason. Fabric may be built to be more "secure", but it's going to work exactly the same way as any other language or program built with a language. More likely than not, if Fabric becomes popular, someone is going to figure out a way to exploit the code to do something it wasn't intended for, and they'll have to come up with a way to solve that problem after the fact, just like everyone else. Isn't that exactly the pattern the quote describes?

They'd have needed to make unbounded string the default literal character type, and give it a better name - say, just "string". They'd have needed to make it easier to use the heap. Garbage collection would need to be built in (optionally disableable) rather than optional - and, in practice, never implemented.

And they should start from the SPARK subset of Ada.

But Ada won't ever go anywhere, and wishing it would is futile. It's been consigned to the embedded-systems market, and that's not likely to change.

Unfortunately, you're 100% right. I had a somewhat similar system just occur where a library for processing credit cards has an option to check the card to make sure it's valid before even trying to process it. Well, using my personal card as a test it said it was invalid. Take that check out and it works perfectly, so I'm not sure what it is actually checking, according to the docs everything required was there, but what should have been a useful feature just gave me a false negative, so I can't trust it.

Considering that many of the people who are going to write this are the same who think that 0777 is a good permission for files and directories, I say that it's a given that this won't be too useful.

Security comes from a mindset, not from more tools. Sure, tools can be helpful, but only in the hands of those who understand how the tools work, how to use them, and the technical reasons why they use them. Otherwise it will only instill a false sense of security, and may very well make the devs skip the analysis of their own code.

As experience teaches us, the first thing that people who need to share do is "chmod -R a+rwx."

So, any security which requires signing of code to run will become looser and looser over time as problems are encountered. That bug is causing problems in production and it takes a week to validate and sign it? Loosen the validation to get it to 15mins, or turn it off completely.

You are correct. Actually, strong languages like Occam-Pi are better for security as the security is largely a product of making things much more specific. It is ambiguity that allows a lot of bugs to creep in.

As experience teaches us, the first thing that people who need to share do is "chmod -R a+rwx."

Which is why I believe that for a company there is only one path to security - a combination of upper management understanding that security is important - or they, not some lowly cubicle worker, could be out of a job - and Mandatory Access Controls.

"People" shouldn't even be allowed to determine the permissions of a file. And they absolutely should not ever be able to change the "a" value, which should be hardcoded to 0 everywhere. Remember "default deny"? We'd not dream of having firewalls that work any other way, but our filesystems are wide open.

It sounds good, but it only addresses one security aspect of a system. It runs on top of Java, which I seem to recall is blessed with a few bugs - how do they avoid those, including all the ones that will appear in future versions?

Then the Java stack sits on top of an OS, and that is a massive "attack surface" or whatever the current bullshit from the consultants is (OK, that includes me).

Then the OS sits on top of some sort of hardware with its own built-in software (BIOS etc.) problems.

They partition it into several pieces so that you have modular access conditions. Java is already built in such a way that you cannot directly access the hardware - you can just run byte code. Of course, there may be bugs in native libraries or in the byte code execution, but that is a rather small attack surface. Basically, that's always what you try to do; you limit the exposure of security relevant features. There will of course still be bugs, but they should be much more localized.

But I may at least be sure that that bug in the TCP socket library is not exposed to the part of the code that verifies user input, or badly written code in library X.

You may be sure of nothing. You may have increased confidence in resistance of your software to flaws, but there's always a set of very clever attackers who are constantly defeating these kinds of security measures: discovering new, untapped flaws in old software; or discovering new, untapped flaws in the users pushing buttons on your systems.

For an example of why you still need to worry even if your OS supports the NX bit, see Return Oriented Programming [wikipedia.org]. And ROP can be coded to use an application vulnerability.

I don't know about anybody else, but the first thing I thought of when I read TFS was SELinux. [wikipedia.org] The only difference, really, is that instead of having the OS prevent various files from being accessed in insecure fashions, it prevents programs from doing insecure things to their own data. It's an interesting idea, but it's based on the assumption that you can't ever trust programmers to Do Things The Right Way. Of course, when you look at all of the buffer overflow exploits in Windows, that does begin to look like a reasonable assumption.

but wouldn't it be better to teach proper programming technique in the first place?

Good God, no. How much failure do you need to see in the real world before you guys stop with this old saw about improving or hiring perfect programmers? Programmers are people, and they make mistakes, even the best of them. Any tool that helps automate away common mistakes is a good thing.

How much failure do you need to see in the real world before you guys stop with this old saw about improving or hiring perfect programmers?

Where did you get that strawman from? I never wrote anything like that! I'm talking about simple things, such as using a function that only copies a specified number of bytes into a buffer instead of one that copies everything, so that you can't overrun your buffer. Making that a habit doesn't make you a perfect programmer, and it doesn't prevent any and all bugs, but it does rule out a whole class of them.

I don't see any straw man. Tripe about "trust programmers to Do Things The Right Way" and "teach proper programming technique" when it comes to buffer overflows is the same old shit we've been hearing for years from C programmers. It's much better if the language makes it impossible to have a buffer overrun, or in regards to the current article, to violate security policies.

The straw man, of course, was your introduction of "perfect programmers," which is something I'd never mentioned. If you can, and do make it a habit of using string copy functions that prevent buffer overflows, all well and good; if not, it's not a bad idea to use a language that requires you to specify how many bytes to copy. The important thing isn't what language you use, but that you write programs where buffer overflows can't happen.

I said "improving or hiring perfect programmers", because while you didn't say perfect, your message about "trusting" the programmers and educating them is part of the same old message from C programmers that these mistakes can be eliminated with sufficiently educated/good programmers. To prevent all buffer overflows, programmers need to be perfect.

No, just using something like strncpy is not good enough. Even that requires the programmer to get the n right, and there are many, many places buffer overruns can occur besides string copies.

Please understand me: first, I'm not that interested in arguing with you and second, I don't think this is a bad idea. My original point was simply that it's similar in concept to SELinux. Yes, I realize that making a habit of using strncpy() instead of strcpy() isn't a Magic Bullet, but it will block the simplest forms of buffer overflow because it won't let you copy more bytes than the buffer will hold.
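For contrast, here is roughly what "the language makes it impossible" looks like in plain Java, where every array access is bounds-checked at runtime no matter what length the programmer passes (class and method names invented for the example):

```java
// In Java every array carries its length and each access is bounds-checked,
// so the classic C overrun (copying more bytes than the destination holds)
// fails loudly with an exception instead of silently corrupting memory.
final class BoundedCopy {
    static byte[] copyInto(byte[] dst, byte[] src, int n) {
        System.arraycopy(src, 0, dst, 0, n); // throws if n exceeds either array
        return dst;
    }
}
```

Even a wrong n here produces an IndexOutOfBoundsException rather than a smashed stack, which is exactly the class of mistake strncpy still leaves open in C.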

wouldn't it be better to teach proper programming technique in the first place?

The thing is, people have been saying that for years. Every iteration of the programming system, whether it be automatic program correction, garbage collection, references-that-are-not-pointers, object-orientation, modularity, high-level programming, type systems, virtual memory, or whatever other language abstraction designers have come up with, there's always been some crusty old programmer in the back of the room shouting about how "kids these days should just learn how to program."

I realize that you intended this as a rhetorical question, but I'm going to answer it anyway. I don't care if that guy (or gal) knows about optimal register allocation, but I agree that it shouldn't be required for such a task. Still, writing code that minimizes the chance of buffer exploits or other common security issues isn't that hard, and I'd hope that whoever wrote my medical records software can manage that much.

I do development as well, and you can *bet* I don't ever trust programmers to do things the right way - including myself. The trick is to minimize exposure of the system and limit the severity of the bugs.

Java is only so-so. For instance, it does offer memory protection between classes, but it is not as modularized as it should be, and you do have many mutable and non-thread-safe constructs (e.g. the Java byte array).
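The mutable-byte-array complaint is easy to illustrate; the defensive-copy idiom below is the standard workaround (the KeyHolder class is an invented name for the example):

```java
// Java arrays are mutable, so handing out an internal byte[] lets callers
// change an object's state behind its back; copying in and out is the
// standard guard.
final class KeyHolder {
    private final byte[] key;

    KeyHolder(byte[] key) {
        this.key = key.clone(); // copy in: later mutation by the caller is harmless
    }

    byte[] getKey() {
        return key.clone();     // copy out: callers can't mutate our state
    }
}
```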

Yeah, well, that claim is neither in the introduction nor in the conclusion. Actually, the word "impossible" does not seem to be present *anywhere* in the paper, so we may safely assume that this is another gross Slashdot overstatement (actually the article is about data access within a development model / runtime system, so you might even say it is plain BS).

Depending on policy-based security requires that you also have a language for policies that make it impossible to write bad policies.

See, that's a good example of some sports very dear in the Corporate Olympic Games: blame-shifting (you need stamina) and finger-pointing (a game of speed). The software engineer is in a much better position now to win these games - bad policies are what happens at deployment/admin time.

Back in the 1970s at System Development Corporation (SDC), in conjunction with groups at SRI, RSRE (in the UK), and elsewhere, we were doing a lot of work on provably correct systems, including operating systems.

(The notion of "correct" was limited to a security criterion - a correct box did not need to work, only to meet the security criterion.)

We used languages such as Ina Jo and Pascal filled with lots and lots of formally shaped assertions about explicit and side effects.

People interested in this should also have a look at the E language. It is also a secure programming language, but it goes a different route: there are no policies; instead, a reference to an object gives the right to access the object. This works because there is no global access to objects. They call it object capability security. There is also a Java compiler add-on to enforce capability security. The relevant website is http://www.elang.org/ [elang.org]
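The reference-as-permission idea can be approximated even in plain Java; this is only an illustrative sketch of the object-capability style, not E's actual syntax, and all names below are invented:

```java
// Object-capability sketch: holding a reference *is* the permission. There is
// no global registry to look an Account up in, so code that was never handed
// the object simply cannot touch it.
@FunctionalInterface
interface BalanceView { long balance(); }

class Account {
    private long balance;

    Account(long initial) { balance = initial; }

    long balance() { return balance; }

    void deposit(long amount) { balance += amount; }

    // Attenuation: hand out a weaker capability that can only read the
    // balance and cannot be cast back to the full Account.
    BalanceView readOnly() { return this::balance; }
}
```

Attenuation like readOnly() is the capability-world analogue of a restrictive policy: instead of labeling data, you simply never hand out more authority than the callee needs.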

Yeah, here's a tip: the reason there is a lot of insecure code out there is that there are a lot of programmers who think security problems are overstated and a bunch of hype. That's one of the reasons there are so many problems: they think they have it all figured out when they clearly don't. They think security just slows them down and gets in their way. So how are you going to get this recalcitrant bunch to use the super-secure language?

I'd love to be able to use existing class libraries (like the JDK), because I know them and there's lots of existing software dependent on them, but add features like these in Fabric to that class hierarchy - by making the existing Class Object inherit from something with those new features, like a Class SecureObject, then satisfying all the requirements of the newly featured objects in all the source code that calls the class library. But I don't see any way to insert other classes deeper towards the root of a class hierarchy.

The language is either not Turing complete and then mostly useless for practical general computing, or it is Turing complete and then it provides no real security.

It might avoid some classes of problems, but it will never free a programmer from having to clarify his/her intentions. Security is an abstraction-level-free problem, meaning that it can equally be an issue at the x86_64 instruction-set level and at the level of high-level contractual/social agreements that code has to handle.

Security is also a tradeoff between a system being secure and being usable. You can make things more secure by allowing a system to do less. I'm not saying that this new programming language is useless, but it all comes down to a careful description of the language. If the creators advocate it as a secure programming language that makes code written in it secure by default, then they are almost certainly wrong and will quickly become a laughingstock. On the other hand, if they market it as a language that avoids, or makes it impossible to commit, certain classes of security problems, as a language whose core code is audited for security issues, and as a language that makes it clear security is a mindset, then I see it being useful.

The language is either not Turing complete and then mostly useless for practical general computing, or it is Turing complete and then it provides no real security.

You say that based on what analysis?

Sure, a programming language alone will never bring you 100% perfect security. Neither does any other system, method or tool. Sure, you and I can write great, safe code in C. But the large majority of programmers have too little experience, don't care all that much, and are under perpetual time pressure to deliver functionality. The choice of language can dramatically change the quality and security of your code.

The language is either not Turing complete and then mostly useless for practical general computing, or it is Turing complete and then it provides no real security.

It doesn't have to be all-or-nothing, though. You could make a similar argument about the usefulness of ACLs or Unix permissions or virtual memory: "these won't fix everything!" And yet all of those do make our systems more secure, and we certainly wouldn't want to get rid of them just because they're not perfect.

Speaking of ACLs - you could bolt something like that onto an existing language pretty easily. Imagine a "hardened Python" (that's what she said!) that runs in a Java-like sandbox with no access to the filesystem or network.

A bad and/or lazy programmer will copy-and-paste code with a bug in it without realizing they've duplicated the bug. It's better to fix, or at least verify, any code you are "borrowing" by copy-and-paste.

Also, if you are copying-and-pasting code several times, it's almost always better to make a function (or inline function, or macro, or template) so that if there is a bug in the code, then fixing it once fixes it everywhere it's used.
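That advice is simple to sketch (the helper below is hypothetical): once the repeated code lives in one function, a bug fix there propagates to every call site.

```java
// Invented helper for illustration: one shared validator instead of the same
// pasted loop in several places, so a bug fix here fixes every call site.
final class Validate {
    static boolean isSafeId(String s) {
        if (s == null || s.isEmpty()) return false;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            boolean ok = (c >= 'a' && c <= 'z')
                      || (c >= '0' && c <= '9')
                      || c == '-';
            if (!ok) return false;
        }
        return true;
    }
}
```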

The point is that certain mistakes shouldn't be made again. Many languages provide facilities for it. Sometimes it requires more drastic measures, like deprecating a call and replacing it with something new. Not that I really understand the nuances, but there was that whole strlcpy() change in the past. It wasn't strictly speaking necessary, however it did cut down on a lot of mistakes that programmers would make.

It was, from what I gather, important enough that it's not just what the designers of Pascal did.

"As long as you remove PEEK and POKE, you will have a completely secure system..."

Absolutely wrong. If the language implementation doesn't include run-time buffer overflow checking, which is often overlooked in high-level languages and really can't exist in lower-level ones, you've got an attack vector. Directly peeking at and poking into memory locations is only 0.01% of the battle. What if you want to use your "secure" BASIC as a CGI? Now you've got a whole new bushel of issues. The most secure language still has to contend with the problem of unintended consequences and unintended uses.