1clue - I can see that you're really keen on strictly enforcing rules, which is reasonable when the software isn't completely understood, but it's still a very arbitrary standard. It's clear why you chose your examples: we know each of them has exhibited bugs in the past, in both the actual server/program AND in poor user configuration, and both cases benefit from privilege separation. A lot of application developers recommend privilege separation simply because they are not willing to find and fix every bug that could cause privilege escalation - mostly because they know it's become too difficult to anticipate every possibility (and thus impossible to be "correct by design"). And again, it doesn't matter whether the escalation goes from a "VM" environment like HTML to running arbitrary machine code as a regular user or as root: it's still a security hole that needs to be fixed.

But what should the lower bound really be? Why not run 'ls' as an unprivileged user as well - why does it get a pass for being run as root? Granted, 'ls' is simple enough to be fairly well checked, such that specially crafted directories with bad filenames will never cause a buffer overflow and make 'ls' run arbitrary code. Why shouldn't other software be subject to similar scrutiny, even if it's much harder? And if it were carefully inspected, why shouldn't it get the same trust that most people extend to 'ls'? After all, once 'ls' has grabbed the disk blocks and inode contents - getting past permissions - it no longer needs root to process the data and print it on your display. Yet nobody runs it under privilege separation, even though it too could harbor bugs.

Now must curl/wget also be run under privilege separation by your rules? I suspect so, but I do wonder how many people wouldn't bat an eye before running wget as root, even though they wouldn't (and shouldn't) run firefox as root.

You get two scenarios:

Go to the nearest beach and get a 1 liter container of sand. Examine each particle of sand in the container, determine what type of rock or mineral it is, its size, and the irregularity of its surface. Put this information into a database.

Perform the same test on ALL sand on the planet - anything from the center of the earth to the edge of space. This includes sand at the beach, on playgrounds, on random hills in nature, and sand a mile underground. Every particle of rock or mineral of an appropriate size to be called sand must be included. Include the micrometeorites that fall from the sky every day. Include sand made by rocks bumping along rivers, by wave action at lakes and oceans, by human machine activity, by volcanic action, by earthquakes - anywhere sand is created. And the list must always be current.

Every admin on the planet, myself included, chooses item 1. You choose item 2. Item 1 is acceptable, reasonable, and not particularly difficult, because the tools one must use as root on a UNIX system are fairly static and relatively few. Item 2 covers everything on the planet: sources you can think of and sources you can't, deposits of sand that have never been seen or detected by humans, particles suspended in water or ice or floating in the air. Even if you could name all the places to find sand, there are so many grains that you would need a billion programmers on each of a billion planets to catalog everything - and there's no point, because it's an unnecessary task that nobody wants to do and certainly that nobody would pay for. In the case of software, you must vet not only the software but also the testers, because you know nothing of their abilities or intentions. Some would gladly take your money to catalog bugs, then put that information into their own database while putting false goodness into yours, and then develop malware to exploit the bugs.

Again, you're fixated on bugs. Bugs we can live with. Bugs are funny that way: they exist without anyone's knowledge, because nobody has yet thought of the reason they're bugs. Nobody has triggered the badness, and nobody has figured out why that particular bit of code is wrong. Bugs don't accidentally encrypt your hard drive, or accidentally send your credit card info to Nigeria, or accidentally coordinate with thousands of other boxes to initiate a DDoS attack on some other site. Or accidentally turn on your webcam and microphone and then accidentally send that stream someplace like China or North Korea or Russia or some section of the middle east occupied by ISIS.

Deliberate malware is what counts here. You do understand that malware exists, right? This thread is about a guy who ran a web browser as root and got ransomware from it. Ransomware is not buggy software; it's functional software written to hold your system hostage for money. You understand that people dedicate their time to stealing things they have no right or permission to take? Malware is hidden from public view because its authors and users don't want you to know about it. If it were vetted, it would surely be vetted by the people who want to use it - and those people would give it a big gold star, because they want everyone to trust it.

This is my last post on your off-topic discussion. If you can't see the obvious line in the sand then you need to start using a Mac. The OP figured out that he'd done something wrong before he even started the thread.

You're simply not accepting the fact that bugs (or rm -rf / stupidity) are the only reason running any piece of software as root becomes a security risk. Ultimately the VM should NOT let malware run outside of the VM. Sure, it can run amok INSIDE the VM, but it should not escape - which is what happened to the OP. The exception would be if the malware wrote on the screen:

I sure hope the OP did not do this, and I'm sure it did not happen. (If it did, then we're done with this discussion and this thread.) Rather, I give him the benefit of the doubt and suspect the malware was installed automatically, without his knowledge; the only thing we know is that the "usual suspect" software had been running. As root.

We are assuming the "malware" running inside the adobe-flash/firefox "VM" escaped (which we have not confirmed). The malware is allegedly HTML or flash. But those things should only be able to affect the browser (HTML) or adobe-flash (flash). This malware is NOT supposed to affect bash (it is NOT a shell script) or ld-linux.so (it is NOT a Linux binary).

I still want to see the exact HTML or flash that actually did in his system, if it is indeed the vector. If he would post the websites he visited, I would gladly set up a VM running a "VM" (i.e. adobe-flash / firefox) AS ROOT to learn what the vector is, and report to adobe and/or mozilla.org which API call had the bug that allowed the native malware code to run. Only then can we prove without question that this was the entry point; right now we're only pointing fingers. As most adobe-flash exploits are written for Windows and maybe Macintosh, I still have a hard time believing this malware was written specifically for Linux - attackers know that *MOST* people behave like you and me and DON'T run firefox as root. It is absolutely ludicrous for a hacker to write malware for the 1% of boxes out there running Linux times the 0.01% of Unix users who run firefox as root: a very, very small number of target victims. It just isn't worth it.

Yes, I've had enough fact-twirling. You know we now have too much complex software with hidden bugs that, combined with mischievous code, allow unwanted code to run where it shouldn't. The ultimate goal is to prevent unwanted code from running on our machines in the first place, and more effort needs to go into writing software correctly.

You're simply not accepting the fact that bugs (or rm -rf / stupidity) are the only reason running any piece of software as root becomes a security risk. Ultimately the VM should NOT let malware run outside of the VM. Sure, it can run amok INSIDE the VM, but it should not escape - which is what happened to the OP. The exception would be if the malware wrote on the screen:

That's the first hit on Google. There are tons of hits. Any more questions?

I haven't run virtualbox for a while, but anything like vmware-tools is software on the guest that hooks into an API on the host - meaning there is a guest-side component and a host-side component. You can create your own tools, and you can install tools someone else gave you. Either way it is trusted software, whether rightly trusted or not. It can either hook into the hypervisor API or be separate from it.

KVM has shared drives using 9p and other approaches. This is also guest code hooked up to host code, at the kernel level. If a guest gets a virus or some other malware and it lands on that shared drive, sure enough it's on the host and on any other guest too - they only need to touch that file and they're done.

Long story short there are lots of ways that a host or co-guest can get malware from a single guest, and they mostly boil down to convenience tools or to stupid security shortcuts because it's a VM. Use Google instead of ranting about how things should or shouldn't be.

Quote:

As most adobe-flash exploits are written for Windows and maybe Macintosh, I still have a hard time believing this malware was written specifically for Linux - attackers know that *MOST* people behave like you and me and DON'T run firefox as root. It is absolutely ludicrous for a hacker to write malware for the 1% of boxes out there running Linux times the 0.01% of Unix users who run firefox as root: a very, very small number of target victims. It just isn't worth it.

I'm not a flash programmer, but most languages I use have some equivalent of getFilesystemRoot() or getHomeDirectory(), and programmers are encouraged to use those because they work across all platforms - not just the PC platforms but mainframes and other oddball hardware as well. Given that, if the black hat is half competent, the malware might work on IBM's AS/400 and other odd hardware as well, potentially without the author even knowing those platforms exist.
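For what it's worth, the portable-path point is a one-liner in most runtimes. Here's Python's stdlib version of the idea (not anything Flash itself exposes): the same call resolves correctly on Unix, Windows, and anything else the interpreter runs on, which is exactly what makes a portable payload cheap to write.

```python
from pathlib import Path

# Path.home() consults HOME on Unix (falling back to the passwd database)
# and USERPROFILE on Windows, so identical code finds the user's directory
# on any platform - the author never needs to know which one it landed on.
home = Path.home()
print(home)                 # e.g. /home/alice or C:\Users\alice
assert home.is_absolute()   # always a full, platform-native path
```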

If you set up a 9p (or nfs or whatever, it doesn't matter) share that gives a VM full read/write access to your host's home directory, that is the equivalent of being able to run rm -rf $HOME on the host too. The answer is still the same: be careful configuring - don't do stupid stuff. A properly written hypervisor should deny all access outside its well-defined API, which gatekeeps every command issued by the guest, checking each one and translating it into host commands. If it normally allows unfettered access to the host, there really is no reason to run a VM: you might as well run the programs straight on the box and not pay the virtualization performance penalty.

I set up QEMU with a 9p share to pass data back to the host, and gave it access only to one specific directory deep within the host filesystem. Unless there's a QEMU bug, there should be no chance for any code to write to my host's root directory. One possible bug would be QEMU allowing a 9p file access to "../../../../../../etc/passwd" to slip through to the host VFS and land on /etc/passwd. If that happens, it is clearly a bug.
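The gatekeeping check a 9p server (or any host-side file gateway) should apply against that kind of traversal can be sketched in a few lines. This is my own illustration, not QEMU's actual code - the function name and share layout are invented:

```python
import os

def safe_resolve(share_root: str, guest_path: str) -> str:
    """Resolve a guest-supplied path against the shared directory,
    refusing anything that escapes it (e.g. '../../../../etc/passwd')."""
    root = os.path.realpath(share_root)
    # Join, then canonicalize: realpath collapses '..' and symlinks,
    # so an escape attempt lands outside the share and fails the prefix test.
    candidate = os.path.realpath(os.path.join(root, guest_path.lstrip("/")))
    if candidate != root and not candidate.startswith(root + os.sep):
        raise PermissionError(f"path escapes share: {guest_path!r}")
    return candidate

# safe_resolve("/srv/share", "notes/a.txt")              -> allowed
# safe_resolve("/srv/share", "../../../../etc/passwd")   -> PermissionError
```

The key point is that the check happens after canonicalization, on the host side, so the guest cannot smuggle '..' components past it.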

1clue wrote:

I'm not a flash programmer, but most languages I use have some equivalent of getFilesystemRoot() or getHomeDirectory(), and programmers are encouraged to use those because they work across all platforms - not just the PC platforms but mainframes and other oddball hardware as well. Given that, if the black hat is half competent, the malware might work on IBM's AS/400 and other odd hardware as well, potentially without the author even knowing those platforms exist.

However, flash is not a language for general-purpose programming: it's meant for web applications only. There is no need for one of these flash programs to know your home directory - at most a well-shielded directory whose path the untrusted flash program doesn't need to know. If the untrusted flash program somehow breaks out, say by opening "../../../../../../etc/local.d/baselayout.start", the flash "VM" is broken, and that is clearly a bug. (In fact it seems Flash is supposed to have its own sandbox ... but it leaks... badly...) So under normal circumstances, flash should only be able to write to files the user has specified - not /etc/motd. Once a containment escape happens, all bets are off.

Flash is no different from a traditional VM specifically meant to provide separation between questionable applications/OS and the machine hosting the VM. Why should there be an API that passes absolute host paths to a client? A separation VM really has no need to know what directory its images live in. There may be a "bypass" configuration option for the rare case where you really do need it, but it should be off by default and require manual enabling - which makes any exposure a configuration problem.

On the other hand, Java is a special exception, but only because the Java "VM" was actually intended to run on the host computer like the output of a C compiler; it was never meant to provide isolation. Thus anything written for Java was indeed meant to affect the host, despite Java being a virtual machine. Java does have functions like getcwd(), chdir(), getpwent(), etc., and they are genuinely useful for applications. A Java web plugin running a Java web app, on the other hand, I would hope has isolation - though I'm not sure whether it does. For this reason, and because Java has legitimate uses for chdir(), it should never be used as a plugin to run questionable webapps, regardless of whether it runs as root or as a regular user. We have seen time and time again that Java has all sorts of security violations, each of which Oracle in turn fixes - indicating the behavior was not intended.

I would think it would probably be 4 or more bug reports, since one of the teams went through 3 different items (VMware, the Edge browser, and the Windows kernel, among others). Sadly, security always ends up being a compromise with usability. The most we can do is get as secure as we can without losing too much usability in the process. Even with the increasing use of VMs, we have a growing need to share information into and out of the VM, so you end up needing to decide what is secure enough to minimize the associated risk. And even then, it doesn't get rid of the biggest security risk: the person behind the keyboard.

If you set up a 9p (or nfs or whatever, it doesn't matter) share that gives a VM full read/write access to your host's home directory, that is the equivalent of being able to run rm -rf $HOME on the host too. The answer is still the same: be careful configuring - don't do stupid stuff. A properly written hypervisor should deny all access outside its well-defined API, which gatekeeps every command issued by the guest, checking each one and translating it into host commands. If it normally allows unfettered access to the host, there really is no reason to run a VM: you might as well run the programs straight on the box and not pay the virtualization performance penalty.

I set up QEMU with a 9p share to pass data back to the host, and gave it access only to one specific directory deep within the host filesystem. Unless there's a QEMU bug, there should be no chance for any code to write to my host's root directory. One possible bug would be QEMU allowing a 9p file access to "../../../../../../etc/passwd" to slip through to the host VFS and land on /etc/passwd. If that happens, it is clearly a bug.

1clue wrote:

I'm not a flash programmer, but most languages I use have some equivalent of getFilesystemRoot() or getHomeDirectory(), and programmers are encouraged to use those because they work across all platforms - not just the PC platforms but mainframes and other oddball hardware as well. Given that, if the black hat is half competent, the malware might work on IBM's AS/400 and other odd hardware as well, potentially without the author even knowing those platforms exist.

However, flash is not a language for general-purpose programming: it's meant for web applications only. There is no need for one of these flash programs to know your home directory - at most a well-shielded directory whose path the untrusted flash program doesn't need to know. If the untrusted flash program somehow breaks out, say by opening "../../../../../../etc/local.d/baselayout.start", the flash "VM" is broken, and that is clearly a bug. (In fact it seems Flash is supposed to have its own sandbox ... but it leaks... badly...) So under normal circumstances, flash should only be able to write to files the user has specified - not /etc/motd. Once a containment escape happens, all bets are off.

Yet both flash and javascript expose functionality to use files on the client hard disk - I spent 5 minutes verifying both. I don't know how good their security is, but let's assume for a moment that some black hat knows something nobody else knows.

Back when Java made a big splash, everyone thought it was good for a web browser and not much else. The original intent was that Java byte code would be natively executed by an embedded controller, so of course files were part of the language. Later everyone figured out that Java sucks inside a web browser, to the point that Java support will be officially eradicated from all major browsers by sometime next year. It turns out, however, that on the server side Java is a huge deal, even today: it has fantastic filesystem support and an outstanding networking stack. And as of a few years ago there is even CPU hardware that uses Java byte code as its native machine code.

I guess my point is that most languages start with one or two fairly elegant goals and then either die in obscurity or blossom into something nobody had quite imagined at the beginning; often the end result bears little resemblance to the original plan.

Quote:

Flash is no different from a traditional VM specifically meant to provide separation between questionable applications/OS and the machine hosting the VM. Why should there be an API that passes absolute host paths to a client? A separation VM really has no need to know what directory its images live in. There may be a "bypass" configuration option for the rare case where you really do need it, but it should be off by default and require manual enabling - which makes any exposure a configuration problem.

Who's going to know about turning those settings on, and who's going to know why not to? Most office people I know would have no idea how - or, if you explained it to them, no idea why it would be undesirable.

Quote:

On the other hand, Java is a special exception, but only because the Java "VM" was actually intended to run on the host computer like the output of a C compiler; it was never meant to provide isolation. Thus anything written for Java was indeed meant to affect the host, despite Java being a virtual machine. Java does have functions like getcwd(), chdir(), getpwent(), etc., and they are genuinely useful for applications. A Java web plugin running a Java web app, on the other hand, I would hope has isolation - though I'm not sure whether it does. For this reason, and because Java has legitimate uses for chdir(), it should never be used as a plugin to run questionable webapps, regardless of whether it runs as root or as a regular user. We have seen time and time again that Java has all sorts of security violations, each of which Oracle in turn fixes - indicating the behavior was not intended.

There's a security library involved in Java. If you have Java code that was written for or installed onto a host, it runs in unprotected mode and therefore has pretty good access to the hardware, within certain limits. If you open an applet in a web browser, there are fairly decent security measures to prevent access - but of course it's code, so there is a way around them whether we know about it or not.

None of this is actually affected by the root user directly. Your premise that all code should be vetted is ridiculous and quite obviously impossible to satisfy. The purpose of the root user is not to perform normal user tasks at all, but to perform a limited set of system administration tasks that cannot be done without escalated privileges. It is obvious to everyone except you that many tasks are extremely undesirable to do as root.

For example, say I'm writing a C app that will be modifying large blocks of the disk. If I use my own user account, then if I mess something up the damage is probably limited to my own account. If I use root, I could literally destroy my entire installation with one misplaced character. And yet the C compiler is heavily reviewed by capable developers, so it should be allowed to run as root, right? And even if I download some C code as root from some website in Afghanistan I've never heard of, the C compiler should be smart enough to prevent that source code from doing anything dangerous to my system, right? Because it's vetted? Surely every bit of source code on an actual web page does exactly what the documentation says it does, right? If it's on a web page then it's been vetted, or "they" wouldn't let it be on the site.

Flash is no different from a traditional VM specifically meant to provide separation between questionable applications/OS and the machine hosting the VM. Why should there be an API that passes absolute host paths to a client? A separation VM really has no need to know what directory its images live in. There may be a "bypass" configuration option for the rare case where you really do need it, but it should be off by default and require manual enabling - which makes any exposure a configuration problem.

Who's going to know about turning those settings on, and who's going to know why not to? Most office people I know would have no idea how - or, if you explained it to them, no idea why it would be undesirable.

That's why it's off by default in "secure mode" and should never be turned on unless the user knows why it's necessary. The idea is to write flash code such that enabling these options is unnecessary - and it shouldn't be necessary. Any code you didn't write yourself is automatically suspect if it requires enabling them. One would hope people know that enabling writes to your hypervisor by default on your VMware VM is a potential risk to the system.

Quote:

Your premise that all code should be vetted is ridiculous and extremely obviously impossible to do. The purpose of root user is not to perform normal user tasks at all, but to perform a limited subset of system administration tasks which cannot be performed without escalated privileges. It is obvious to everyone except you that many tasks are extremely undesirable to do as root.

You're making a completely incorrect assumption and therefore drawing a nonsensical conclusion, seemingly out of self-righteous spite, which adds no value to your argument. After all, you first suggested vetting and now you say it's ridiculous - so why did you suggest it? Of course there is no reason to run things as root when it's not necessary, precisely because one is never absolutely sure what "real" code will do - whether it's buggy code or untrusted code (or to protect yourself from accidentally running system-killing commands) - but you have to admit it does get frustrating after a while (note 1). Also, "extremely undesirable" is an extremely vague term that means different things to different people; at the very least it draws an unclear and very arbitrary line between what's OK and what's not.

As a curiosity, how do people handle OpenSSL server keys and the signing tools? Handling OpenSSL server keys absolutely does not require root. Yet I suspect a significant number of people use openssl as root, because the servers need to be started as root anyway and it's a "usability issue" to switch back and forth just to create and sign keys - even though you may have to sign someone else's keys, which is technically untrusted data. And then there's the fact that if the OpenSSL user gets compromised, whichever user it is, it's pretty much game over anyway.

---
note 1: Aren't we all annoyed when we have to download click-through package files (i.e., "restricted fetch" ebuilds) for portage? The "convenient" way is to download and click through as root in firefox and save directly into /usr/portage/distfiles/ - which I'd never do. I always endure the pain of downloading as a regular user, chown/chmod-ing the file, and moving it into place...

Flash is no different from a traditional VM specifically meant to provide separation between questionable applications/OS and the machine hosting the VM. Why should there be an API that passes absolute host paths to a client? A separation VM really has no need to know what directory its images live in. There may be a "bypass" configuration option for the rare case where you really do need it, but it should be off by default and require manual enabling - which makes any exposure a configuration problem.

Who's going to know about turning those settings on, and who's going to know why not to? Most office people I know would have no idea how - or, if you explained it to them, no idea why it would be undesirable.

That's why it's off by default in "secure mode" and should never be turned on unless the user knows why it's necessary. The idea is to write flash code such that enabling these options is unnecessary - and it shouldn't be necessary. Any code you didn't write yourself is automatically suspect if it requires enabling them. One would hope people know that enabling writes to your hypervisor by default on your VMware VM is a potential risk to the system.

I remember a time years ago when a secretary infected an entire windows-based office by opening an infected email that supposedly contained a video. She did it something like 6 times. We were a small company at the time, but it spammed everybody, since she had mailing lists for the whole office. We kept telling her not to open that email, just delete it. She kept insisting that she wanted to see the video.

Quote:

Quote:

Your premise that all code should be vetted is ridiculous and extremely obviously impossible to do. The purpose of root user is not to perform normal user tasks at all, but to perform a limited subset of system administration tasks which cannot be performed without escalated privileges. It is obvious to everyone except you that many tasks are extremely undesirable to do as root.

You're making a completely incorrect assumption and therefore drawing a nonsensical conclusion, seemingly out of self-righteous spite, which adds no value to your argument. After all, you first suggested vetting and now you say it's ridiculous - so why did you suggest it? Of course there is no reason to run things as root when it's not necessary, precisely because one is never absolutely sure what "real" code will do - whether it's buggy code or untrusted code (or to protect yourself from accidentally running system-killing commands) - but you have to admit it does get frustrating after a while (note 1). Also, "extremely undesirable" is an extremely vague term that means different things to different people; at the very least it draws an unclear and very arbitrary line between what's OK and what's not.

You vet code which needs to be run as root. You don't bother with anything special for any other code. You're the one who insists that all code must be usable as root - you're the 'goose and gander' guy - and then you insist that the software should be smart enough to check for security issues.

In other words, you seem to be saying that every piece of software someone might want to run must duplicate the Linux security system in its own code, so that when running with escalated privileges no security is lost.

Quote:

As a curiosity, how do people handle OpenSSL server keys and the signing tools? Handling OpenSSL server keys absolutely does not require root. Yet I suspect a significant number of people use openssl as root, because the servers need to be started as root anyway and it's a "usability issue" to switch back and forth just to create and sign keys - even though you may have to sign someone else's keys, which is technically untrusted data. And then there's the fact that if the OpenSSL user gets compromised, whichever user it is, it's pretty much game over anyway.

---
note 1: Aren't we all annoyed when we have to download click-through package files (i.e., "restricted fetch" ebuilds) for portage? The "convenient" way is to download and click through as root in firefox and save directly into /usr/portage/distfiles/ - which I'd never do. I always endure the pain of downloading as a regular user, chown/chmod-ing the file, and moving it into place...

I've never configured a keyserver, but since they're especially focused on security, I suspect they do what apache2 does: run a single very simple thread as root to handle the low-numbered-port issue, and run the keyserver itself as a non-privileged user.
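The pattern described - hold root only long enough to grab the privileged resource, then drop it for good - can be sketched like this. The user name 'www-data' and port 443 are placeholder assumptions for illustration, not anything apache2 hardcodes:

```python
import os
import pwd
import socket

def bind_then_drop(port: int, user: str) -> socket.socket:
    """Bind a privileged (<1024) port while still root, then permanently
    drop to an unprivileged account before touching any network data."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))      # this is the only step that needs root
    sock.listen(5)
    pw = pwd.getpwnam(user)
    os.setgroups([])           # shed supplementary groups first
    os.setgid(pw.pw_gid)       # group before user: setuid() would forbid it later
    os.setuid(pw.pw_uid)       # point of no return - root is gone
    return sock

# Usage (must start as root):
#   server = bind_then_drop(443, "www-data")
#   ...accept connections as www-data; a bug here can no longer touch root.
```

The ordering matters: setgroups/setgid must happen while still root, because once setuid() succeeds the process can never regain the privilege to change them.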

One of my favorite features of Ubuntu is that they've disabled the root user. It's not hard to get a root shell if you know how, but most n00bs don't, since it's not documented on the site. By the time they figure it out, hopefully they've developed enough common sense to realize they shouldn't have a root login just hanging around for whatever they might want to do.

I remember a time years ago when a secretary infected an entire windows-based office by opening an infected email that supposedly contained a video. She did it something like 6 times. We were a small company at the time, but it spammed everybody, since she had mailing lists for the whole office. We kept telling her not to open that email, just delete it. She kept insisting that she wanted to see the video.

That program should not have allowed those privileges by default - there should be an administrator shutoff. The program has a bug if it automatically grants administrator access to any code that attempts to run.
Stupid is as stupid does. Did she get fired, like the root firefoxer (who, though stupid, may not have actually done any damage - unlike disrupting the whole company with spam)?

Quote:

You vet code which needs to be run as root. You don't bother with anything special for any other code. You're the one who insists that all code must be usable as root - you're the 'goose and gander' guy - and then you insist that the software should be smart enough to check for security issues.

In other words, you seem to be saying that every piece of software someone might want to run must duplicate the Linux security system in its own code, so that when running with escalated privileges no security is lost.

I'm purely targeting software development itself: stop making time-to-market the top goal - take a step back and reduce mistakes. A whole security system built into each program is not only unnecessary but redundant (if the user runs unprivileged, which they should). But developers had better take more time to make sure their software doesn't allow race-condition stack smashing (harder to prevent) and doesn't access arbitrary files when it has no need to (which should be easy!). I just don't want devs to assume that their programs will always run in an unprivileged environment and so be sloppy with security. After all, running Firefox as root in a clean environment without opening any webpages should have no inherent risk, despite being a questionable thing to do, as you'd just be staring at a blank page. If anyone is skittish about even that, I'd suggest deleting firefox altogether.

And well, for code bugs in the kernel or in whole-system virtualization, there's not always a good defense; those are better left for the developers to handle. If they release too often, there's no way to constantly vet every release.

I'm intrigued by the idea of running a 'portage checksums' test. Googled a bit, but didn't quite find the magic command line for that. Any references please? Is this going to check the /usr/portage tree integrity, or the installed files integrity?

eohrnberger ... as I think was mentioned earlier in the thread, this is no guarantee (the checksums exist on the same filesystem as the tampered files ... and so may be considered equally suspect). That said, here are a few examples:

... app-portage/portage-utils are generally faster in my experience (which I would expect, given they are C, rather than python, or pipes).
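For anyone searching for the same "magic command line": the commonly cited commands for this kind of installed-files integrity check (assuming the relevant packages are installed; this is a sketch, not a substitute for khay's elided examples) are along these lines:

```shell
# From app-portage/portage-utils (C, fast): verify every installed
# package against the digests portage recorded at install time.
qcheck

# From app-portage/gentoolkit (python, slower): the equivalent check
# for all installed packages.
equery check '*'
```

Both verify files that portage installed against its recorded digests; neither can detect files that were added outside the package manager.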

best ... khay

Thanks. Running this now. So far it's only turned up conf files that have changed, so, so far, so good. And yes, I realize that it's only going to detect files that portage has installed which are different, not ones that have been added. I accept this as a limitation.

It was already installed and run, but I had not updated its local baseline. So yeah, I noticed this and made a mental note to update the local baseline once I'm satisfied that the system is OK. That's gonna take a while. So far, no additional incidents of weirdness.

But the balance between usability and security is one that each organization needs to find for itself, and the last component in the triad is cost: not only the cost of imposing the security, but also the cost should a system be compromised, the cost of any data that might be compromised, and the cost of confidential data escaping into the wild.

eohrnberger ... a great majority of such costs are in fact negative externalities. In terms of the major software vendors (and the subservient industry), who bears the cost of "compromise [... and ...] data escaping into the wild"? In fact, they not only avoid bearing the cost, they can profit from it; an entire service industry is built around "usability ... and security [sic]" ... with linux being similarly driven (a trend which can be seen in how linux has gone from being a user-driven phenomenon to being almost entirely under the governance of the vendors who have been able to extract revenue from others' labour, and then control and monopolise the market).

I remember one time, years ago, a secretary infected the entire windows-based office by opening an infected email which supposedly contained a video. She did it like 6 times. It was a small company at the time, but the malware spammed everybody in it, since she had mailing lists for the whole office. We kept telling her not to open that email, just delete it. She kept insisting that she wanted to see the video.

That program should not have allowed those privileges by default. Administrator access should have been shut off. A program has a bug if it automatically grants administrator access to any code that attempts to run.
Stupid is as stupid does. Did she get fired, just like the root firefoxer (who, though stupid, may not have actually done any damage, unlike her disrupting the whole company with spam)?

In that case, all software courtesy of Microsoft. And I didn't hold the position then that I hold now, nor were any of us connecting to confidential data of the nature I currently work with.

Quote:

Quote:

You vet code which needs to be run as root. You don't bother with anything special for any other code. You're the one who insists that all code must be usable as root. You're the 'goose and gander' guy, and then you insist that the software should be smart enough to check for security issues.

In other words, you seem to be saying that every piece of software someone might want to run must duplicate the Linux security system in its own code, so that when running with escalated privileges no security is lost.

I'm purely targeting the software development itself, to stop treating time-to-market (TTM) as the top goal: take a step back and reduce mistakes. A whole security system built into each program is not only unnecessary but redundant (if the user runs unprivileged, which they should). But developers had better take more time to make sure their software doesn't allow race-condition stack smashing (harder to prevent) and doesn't access arbitrary files when it has no need to (should be easy!). I just don't want devs to assume that their programs will always be run in an unprivileged environment so they can be sloppy with their security. After all, running Firefox as root in a clean environment without opening any webpages had better have no inherent risks, despite being a questionable thing to do, as you'd just be staring at a blank page. However, if anyone is skittish about even doing this, I'd suggest deleting Firefox altogether.

And well, for code bugs in the kernel or in whole-system virtualization, there's not always a good defense; those are better left for the developers to handle. If they release too often, there's no way to constantly vet every release.

Dude. You're using a web browser as your example code that you think should be flawless. I can think of no more inherently insecure application except if we knew of a repository of pure malware that we could download and infect our own systems voluntarily. That's my point! A browser inherently runs code from untrusted sources. Video. Javascript. Dozens of scripting languages and markup which is semi-executable or completely executable. The very nature of the app makes it inherently unsafe to run with escalated privileges. It doesn't matter how secure you THINK the browser code itself is. Somebody always knows more than the programmers and the code review people.

Virtual machines consist of a software application which, possibly with hardware assistance, can pretend to be a physical computer. Think about that for awhile.

You still insist on calling all of this "bugs." Stop it. Yes, bugs exist and yes some of them are security vulnerabilities, but the real danger is malware. Malware is not a bug. Malware could be theoretically flawless and therefore bug free, but it's still malware.

The premise is that bugs allow malware to do their stuff. Malware cannot run if the software does not allow it to run.

If the 'bug' is the person deciding to run the malware in a terminal or double-clicking on a video icon, that person is the bug.

The premise is that bugs allow malware to do their stuff. Malware cannot run if the software does not allow it to run.

If the 'bug' is the person deciding to run the malware in a terminal or double-clicking on a video icon, that person is the bug.

You presume that:

All bugs are known.

All bugs are known to the people who are writing or reviewing the software being vetted.

All potential attack vectors are known.

Team White is bigger and/or more knowledgeable than Team Black.

None of the above points are true. In fact, there is much more monetary incentive to be part of Team Black, and many more "unconventional" players on Team Black.

Bugs exist in software which are not known to be bugs by even one human. For a human to call a code snippet a bug means that a human understands how this snippet can operate in a way other than designed. For the bug to be called a security vulnerability, the human needs to understand how, at least in theory, that the code could be used to gain unauthorized access or at least prevent normal operation of the system containing the bug.

Knowledge that a bug or vulnerability exists is not required for the system containing it to be affected badly.

For the developers and reviewers to address these bugs, they need to know about the bugs. Bugs discovered by Team Black stay with Team Black. Bugs discovered by Team White are systematically shared, and some of the people being shared with secretly belong to Team Black.

The premise that ALL software needs to be vetted the way you describe is impossible for several reasons, many of which I've mentioned above. Since you seem impervious to those reasons they won't be mentioned here again.

...If the 'bug' is the person deciding to run the malware in a terminal or double-clicking on a video icon, that person is the bug.

So even if you understand that the browser itself is not perfect, you DO understand why running a browser as root (which bypasses the security system) is a bad idea. Why are we arguing? Your point has no merit. Coders already spend a lot of effort on ensuring their software is not only functional but bug-free and secure under normal circumstances.

You're arguing because you like to argue, not for any anticipated gain for software security.

made a mental note to update the local baseline once I'm satisfied that the system is OK. That's gonna take a while. So far, no additional incidences of weirdness.

I'm still interested in how the system got compromised. Have you got any closer to working this out? Have all potential evidence and clues to this been destroyed?

I suggest you boot from a live CD/DVD when doing any comparison of checksum values, so you know that you get true values from tools you can trust. If you have a clean system with the same USE flags etc. that you can compare with, that will save you having to build the whole system again in a chroot. Consider everything suspect until you can confirm with the package manager and checksum values that it is OK. This will still leave you with quite a bit of investigation work, but you may get lucky and identify only a small number of binaries which fail to match.
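The compare-from-trusted-media idea can be sketched with plain coreutils (the paths here are a throwaway demo directory, not real system locations; in practice the baseline would be recorded while the system is known-clean and stored off the suspect filesystem):

```shell
# Record a baseline while the files are trusted (demo directory).
mkdir -p /tmp/baseline-demo
cd /tmp/baseline-demo
echo "known good" > a.bin
sha256sum a.bin > baseline.sha256   # store this copy off the suspect disk

# Later, after suspected tampering, verify against the baseline.
echo "tampered" > a.bin             # simulate a modified binary
sha256sum -c baseline.sha256 || echo "MISMATCH detected"
```

`sha256sum -c` prints a FAILED line for each file whose digest no longer matches; anything it flags then needs the kind of per-file investigation described above.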

I could split out the philosophical posts, but some parts of the thread have posts that weave between philosophy and the original topic, so splitting could make the conversation harder to follow. If the philosophy debaters want to continue, I'll try to carve up the thread and leave appropriate cross-links. Otherwise, I'll leave the posts all in one thread.