Posted
by
BeauHD on Friday March 18, 2016 @02:34PM
from the money-is-a-good-incentive dept.

wiredmikey writes from an article on SecurityWeek: Pwn2Own 2016 has come to an end, with researchers earning a total of $460,000 in cash for disclosing 21 new vulnerabilities in Windows, OS X, Flash, Safari, Edge and Chrome. On the first day of the well-known hacking competition, contestants earned $282,500 for vulnerabilities in Safari, Flash Player, Chrome, Windows and OS X. On the second day, Tencent Security Team Sniper took the lead after demonstrating a successful root-level code execution exploit via a use-after-free flaw in Safari and an out-of-bounds issue in Mac OS X. The exploit earned them $40,000 and 10 Master of Pwn points. This year's contestants earned nearly $100,000 less than at Pwn2Own 2015, when researchers walked away with more than $550,000 for their exploits.

This kind of stuff is depressing. You'd like to say, "Oh, the programmers are doing the best they can," but when you have an open bug list that looks like this [mozilla.org], you can't possibly ensure that your code is secure, not even close. That kind of codebase is like a playground for hackers.

I thought you were linking to some sort of security-related bugs. But these are just plain bugs. And the codebase involved in rendering web pages is huge, because it's not an easy thing to do (try it; I maintained a text-mode browser for a couple of years). And huge codebases have many bugs, because the effort to keep them free of minor bugs is just not worth it to anyone, unless the code is flying airplanes or is directly responsible for moving hundreds of millions of dollars.

Welcome to the real world - we just don't know how to write bug-free software without it being too onerous, expensive and boring (and the resulting code too slow). And there's no short-term prospect of learning how, either. The only things we can do are fix the major bugs and, security-wise, design the whole system so that most bugs don't matter.

I thought you were linking to some sort of security-related bugs. But these are just plain bugs.

You're making an interesting distinction. When the folks at OpenBSD (renowned for proactive security) audit their code [openbsd.org], they intentionally avoid this distinction:

During our ongoing auditing process we find many bugs, and endeavor to fix them even though exploitability is not proven. We fix the bug, and we move on to find other bugs to fix. We have fixed many simple and obvious careless programming errors in code and only months later discovered that the problems were in fact exploitable.

And huge codebases have many bugs, because the effort to keep them free of minor bugs is just not worth it to anyone unless

From this statement, I know what your code looks like, and I hope that I never have to work in it.
I invoke upon you every insult of wrath ever to have been uttered from the mouth of Linus, oh bug producer.

I was expecting a more verbose and lively response as I'd noticed who started the thread and then seen the reply. I'm also familiar with your varied signatures, comments, and journal posts. I'm familiar with your programming philosophy (or what you've shared of it) and how you feel about bugs and security - as well as your feelings about their production and those who produced them.

I read the usernames before reading the comments, very frequently, out of habit. It helps me build a mental profile. As you know, I like and respect your views. So, I was already expecting you to comment. This is, after all, about bugs. Then I saw the reply. I had my hopes up.

I'm kind of surprised that you didn't go with something akin to, "All bugs are potential security problems." Or, "Code should be bug free." (That would have been funny.) I was then expecting a bunch of links to books on the subject. ;-)

I was really looking to make it a teaching moment: to show him that there actually are people (like Donald Knuth) who program with a very low bug count, such that their bug tracker is always empty (because they have few enough bugs to fix them as soon as they are reported), and that there are people who even teach how to accomplish that kind of programming [jamesshore.com].

But if he's gone this long without coming to that awareness, what can I say to him that would change his mind? Is there anything? He seems too

I dunno... It's not *that* huge? I've been playing with Dillo lately. I sometimes use eLinks and Lynx. The code base for all of those is not that large. On Windows, I used to sometimes use something called Off-By-One which is kind of neat, actually. I almost licensed the source at one point to build my own browser just for gits and shiggles. It was not all that large and a quick look tells me that it's still not all that large - when built.

Yeah, you're right, and I'm interested in seeing how Rust turns out; it's a good project and I support it.
However, I've seen enough security bugs in Java code to know that memory protection and array overflow checking isn't enough to stop security bugs. I don't think a strong type system is enough either.

It's up to the programmers to improve their skill. They need to try to think of everything that can go wrong, instead of focusing on "getting it to work."

The OS I linked to (which I'm tying on now) shows your apps in the Launcher menu under each 'domain' or VM you've set up. For instance, I have an 'untrusted' VM and a 'personal' VM, as well as 'banking'. Each one of those has a Firefox entry in the launcher; each is completely isolated from the others (especially the VMs I setup with no network access).

Qubes also has disposable VMs that you can use to quickly launch a browser, and the VM is destroyed when you close the browser.

"Jails" sounds impressive and strong, but its still kernel-based and therefore built on sand. Kernels are great at supplying functional features -- and that's what Qubes uses them for -- but their complexity means their isolat

I'd no more rely on that than I would rely on anything else. Security is a process, not an application. You'll note the first one was patched and I'm thinking that SELinux isn't the same as firejail. I don't actually have an Intel NIC, at least not in this box. That's happenstance, not an objective.

But, you're right. Don't rely on it. Absolutely not. If you're relying on one thing then you're damned stupid, regardless of what operating system you use. I'd like to think that I might be stupid bu

I don't rely on SELinux or Linux namespaces or anything else based on that stuff, because they are ground up and packaged like a big block of bologna. The nature of the tools you choose is important, and large monolithic kernels are the last thing anyone should use for security.

That sounds nice and looks good on paper but it's entirely unrealistic in the real world. Starting at the top, nothing will ever be secure. There are just degrees of security. There are goals and risks, what risks will you take to meet your goals? I've used several OSes with microkernel designs. I've used Qubes-OS, MINIX, and QNX. They're fine OSes if you want serious limitations. Until that's no longer a problem, they're entirely unrealistic options.

No OS should be a paragon of consumer convenience out of the box. That is the "realist" position of convenience taken by the industry for the past couple decades and it hasn't worked.

I don't mind Qubes telling me there are some processes that won't work for risky behavior. The act of not thinking about *where* workflows take place doesn't fly here... I always have to choose, and that in itself is awesome.

Qubes' biggest challenge is hardware support, but that is a factor of consumer attitudes as well. There a

They might get there and be a more realistic option. Remember, they've changed the industry to reflect the needs of the many and that has resulted in aiming for the lowest common denominator. There's not a lot that can be done, with any immediacy, in that area. I suspect I dislike it as much as you - I'm big on personal responsibility and knowing how to use your tools if you're going to use them at all.

I'm not a fan of the monolithic kernel architecture. I use it because of its ecosystem, more so than anyth

LOL It's all good. The gist of it is that I support Qubes and hope to be able to use it someday. Right now, I have needs it doesn't really fill. I've even aided them financially 'cause I want to use it.

"User after free" is now a common vulnerability term that refers to vulnerabilities that referencing memory after it has been freed, which can cause a program to crash, use unexpected values, or execute code. It's pretty common in C and C++ applications.https://cwe.mitre.org/data/definitions/416.html

I worked on a code base where we took elaborate precautions to be 100% sure we had no use-after-free bugs (macros that would crash the system any time it happened). I was just shocked how many we found, and how frequently people kept generating new ones. Too many C programmers who shouldn't be, I guess.
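One way to get the crash-early behavior described above (the macro name and poison byte here are illustrative, not taken from that codebase) is to scribble a recognizable pattern over the block before freeing it, then null the pointer so any later dereference faults immediately instead of silently reading stale data:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of a poison-and-null free macro: stale readers see 0xDD
   garbage instead of plausible data, and a dereference of the nulled
   pointer crashes deterministically rather than "working" by luck. */
#define POISON_FREE(p, n) do {          \
        memset((p), 0xDD, (n));         \
        free(p);                        \
        (p) = NULL;                     \
    } while (0)
```

Tools like AddressSanitizer and debug allocators apply the same idea automatically, which is how such codebases discover just how many of these bugs they actually have.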

I worked on a code base where we took elaborate precautions to be 100% sure we had no use-after-free bugs (macros that would crash the system any time it happened). I was just shocked how many we found, and how frequently people kept generating new ones. Too many C programmers who shouldn't be, I guess.

Usually it's because of two things.

1) Race conditions - you need to get rid of an object but the object is being used in another thread. Freeing the object now would mean the other thread would be using an in
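A common fix for that first case is reference counting: each thread using the object holds a reference, and the memory is only freed when the last reference is dropped, so no thread can free it out from under another. A minimal C11 sketch (all names hypothetical):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* Reference-counted object: freed only when the last user lets go. */
typedef struct {
    atomic_int refs;
    int payload;
} obj_t;

obj_t *obj_new(int payload) {
    obj_t *o = malloc(sizeof *o);
    if (o) { atomic_init(&o->refs, 1); o->payload = payload; }
    return o;                   /* caller holds the initial reference */
}

void obj_ref(obj_t *o) { atomic_fetch_add(&o->refs, 1); }

/* Returns 1 when this call dropped the last reference and freed o. */
int obj_unref(obj_t *o) {
    if (atomic_fetch_sub(&o->refs, 1) == 1) { free(o); return 1; }
    return 0;
}
```

The atomics matter: a plain `refs++`/`refs--` reintroduces the very race being fixed, because two threads can read and write the counter concurrently.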