It rather involved being on the other side of this airtight hatchway

Yes, a code injection bug is a serious one indeed. But it doesn't become a security hole until it actually allows someone to do something they normally wouldn't be able to.

For example, suppose there's a bug where if you type a really long file name into a particular edit control and click "Save", the program overflows a buffer. With enough work, you might be able to turn this into a code injection bug, by entering a carefully-crafted file name. But that's not a security hole yet. All you've found so far is a serious bug. (Yes, it's odd that I'm underplaying a serious bug, but only because I'm comparing it to a security hole.)

Look at what you were able to do: You were able to get a program to execute code of your choosing. Big deal. You can already do that without having to go through all this effort. If you wanted to execute code of your own choosing, then you can just put it in a program and run it!

The hard way

Write the code that you want to inject, compile it to native machine code.

Analyze the failure, develop a special string whose binary representation results in the overwriting of a return address, choosing the value so that it points back into the stack.

Write an encoder that takes the code you wrote in step 1 and converts it into a string with no embedded zeros. (Because an embedded zero will be treated as a string terminator.)

Write a decoder that itself contains no embedded zeros.

Append the encoded result from step 3 to the decoder you wrote in step 4 and combine it with the binary representation you developed in step 2.

Type the resulting string into the program.

Watch your code run.

The easy way

Write the code that you want to inject. (You can use any language, doesn't have to compile to native code.)

Run it.

It's like saying that somebody's home windows are insecure because a burglar could get into the house by merely unlocking and opening the windows from the inside. (But if the burglar has to get inside in order to unlock the windows...)

Code injection doesn't become a security hole until you have elevation of privilege. In other words, the attacker gains the ability to do something they normally wouldn't. If the attack vector requires setting a registry key, then the attacker must already have obtained the ability to run enough code to set a registry key, in which case they can forget about "unlocking the window from the inside" and just replace the registry-setting code with the full-on exploit. The alleged attack vector is a red herring. The burglar is already inside the house.

Or suppose you found a technique to cause an application to log sensitive information, triggered by a setting that only administrators can enable. Therefore, in order to "exploit" this hole, you need to gain administrator privileges, in which case why stop at logging? Since you have administrator privileges, you can just replace the application with a hacked version that does whatever you want.

Of course, code injection can indeed be a security hole if it permits elevation of privilege. For example, if you can inject code into a program running at a different security level, then you have the opportunity to elevate. This is why extreme care must be taken when writing unix root-setuid programs and Windows services: These programs run with elevated privileges and therefore any code injection bug becomes a fatal security hole.

A common starting point from which to evaluate elevation of privilege is the Internet hacker. If some hacker on the Internet can inject code onto your computer, then they have successfully elevated their privileges, because that hacker didn't have the ability to execute arbitrary code on your machine prior to the exploit. Next time, we'll look at some perhaps-unexpected places your program can become vulnerable to an Internet attack, even if you think your program isn't network-facing.

You know, it’s at times like this, when I’m stuck in a Vogon air lock with a man from Betelgeuse, about to die of asphyxiation in deep space, that I really wish I’d listened to what my mother told me when I was young.

Elevation of privilege could be in the same security context as the running program – consider a kiosk or other locked-down system that doesn’t allow users at the keyboard to run just any program of their choosing. By injecting code through a text box, an attacker could run arbitrary code – in the same security context as the original program, yet still an elevation of privilege.

Many of the good guys think that it is hard to come up with code once you have a partial hole like this. It is in fact trivial, and there are nice tools out there to do it. Two example sites are http://www.shellcode.org and http://www.metasploit.com. You just pick the code you want, and they have multiple encoders for you. For example, they even have encoders that will let your payload live in a buffer that has had toupper() or tolower() called on it! These are the good guy sites! I did see one a while back that had a database of all the DLLs in all the Windows versions, where they were loaded, and what opcodes were available. If the only thing you could control was a return address, they had sufficient information for you to pick what you wanted run.

> suppose you found a technique to cause an application to log sensitive information, triggered by a setting that only administrators can enable

While it's true that you must have gained administrator access and could run or do anything else you want, this scenario is one where you likely wouldn't want to. Get access and enable logging, then come back days, weeks, or months later to grab the logged information.

One entry, one chance of being caught. If the entry isn't caught, but the logging later is, it could be attributed to another (current or prior) individual with admin access and left in place for you. Later you come back and use the same exploit to retrieve the log.

Not so if you take that opportunity to leave rootkits in place or processes that keep sending the logs or other data back to you… endless opportunities to be caught.

Note that Raymond’s comments and analysis were from an application-centric point of view. If the application is running in the context of the user, it cannot perform an Elevation of Privilege (EOP) attack.

However it can make other attacks which might include revelation of private data, repudiation, etc.

Note that the tables are turned for components. If you’re a component that’s used in multiple apps, your EOP potential is the greatest of all the applications which run you. Maybe you were written to run in low-rights IE but who knows when a service running as LocalSystem loads your code for some reason and starts using it?

Unfortunately it looks like addressing the security problem is more like trying to exert control over a chaotic system; you cannot exercise absolute control. Instead you can fix identifiable defects and then have mechanisms in place which systemically address issues by attempting to put bounds on bad behavior.

Raymond is technically correct, but I’m not sure this entry is very helpful to most people. In analyzing security implications of a bug, you have to ask how it can be exploited and by whom. Security researchers spend a lot of time playing hypothetical games in how somebody can use the bug to do something they are not supposed to be able to do.

The problem here is that most developers aren’t capable of adequately doing this kind of assessment, and can be led astray by naive reasoning. At a deeper level, security problems occur when technology acts in ways different than what a user (or another developer) might expect. Some of the biggest security problems come via social engineering, whereby a user is tricked into causing a payload to execute because what he is doing should not be dangerous in his view of the world.

Think about macro or e-mail script viruses. The virus writers didn’t need access to users’ machines to spread them. And yet they were devastating in their effect.

When looking at a bug that might not be a security hole, you have to consider a myriad of complicated issues. Can an attacker use this to trick somebody into putting a malformed string into the vulnerable dialog box, and thereby gain execution? Users would tend to think that they don’t need to be as careful about dialog box inputs as, say, running executables from unknown sources.

There’s also the issue of several non-vulnerability bugs combining to produce a vulnerability. Sometimes a series of partial breaks, none of which amount to much by themselves, can allow a complete compromise together.

Let’s be careful with this stuff. In the security world there are far too many major security problems that were initially labelled as non-exploitable bugs. In general, non-exploitability is a very difficult thing to prove.

In a very limited scope, it might not be a security problem by some definition. But suppose the user is running on a system with Software Restriction Policies in place — certainly then it would be a security problem.

Also, many parameters can be controlled by attackers. For instance, the Save Dialog. If an attacker made a filename that made code execute, then placed this file in a shared directory, a user could "save" to that file, thus triggering the injection.

Seriously, by that definition, you could come to the conclusion that most of IE’s security holes weren’t holes — they all ran in the user’s context. This narrow definition only works if you assume that there’s zero outside influence in any of the data leading to the hole. And in many apps, those places that aren’t affected by data at all are probably few.

I've heard this argument many times when people are suggesting to punt a bug. Unfortunately, the fact that *you* can't think of a way of exploiting the code injection to elevate privilege doesn't mean that some other clever person won't find some combination of components that assume good behavior of your component. This is especially true in the case of a reusable component (such as the example edit control), since it is possible that someone will reuse this control in a context where they allow untrusted data to be stuck into the edit control.

I'm surprised you downplay the importance of running code of your choosing on a computer (code injection) regardless of privilege level.

You say that you can simply compile a program and run it, but you are forgetting that very few systems are configured to allow you to do that without very explicit permission to do so (via an installer).

It's pretty well spelled out in a book published by Microsoft Press (called Secure Code, I believe?). They say: "If a bad guy runs his code on your computer, it's not your computer anymore," along with (10?) other things like "If a bad guy can physically access your computer, it's not your computer anymore."

In its simplest form, code injection grants login capability to a computer you should not have access to. It in effect bypasses access control checks and completely obliterates who knows how many layers of external security.

Aside from that, I personally do not believe that there are any OSs out there that aren't vulnerable in some obscure way to a rootkit. And all you need to deploy a rootkit is a code injection bug.

Privilege elevation is a serious bug, but code injection is, in my books, the wildcard of bugs.

There is code injection that leads to elevation and code injection that doesn’t. Clearly, code injection that leads to elevation is worse than code injection that doesn’t. Does this constitute "downplaying" the second case?

Yes, injecting code into a machine that the attacker doesn't have access to is an elevation of privilege, but that's not what I'm talking about here. What I'm talking about here is an attacker attacking himself. Gaining a privilege you already have is not elevation.

I hardly intended to give people excuses to write holes. Not all holes are equal, in the same way not all bugs are equal and not all 911 calls are equal. That doesn’t mean you have an excuse to write bugs or ignore 911 calls either.

I retract my last post. I think I didn't take into account your point that gaining access to a computer you don't have access to is a privilege escalation.

Just to make it clear though, I'm not trying to flame or anything. I'm just expressing my personal point of view that code injection is more serious than privilege elevation in a purely abstract way (even though, exactly as you point out, this might not be the case on a per-case basis).

I personally believe this for the following very abstract reason: all possible attacks live in a 'vector space' of actions and commands that a user can invoke. This vector space of actions navigates the user through a finite set of states. For example, if all you had as a vector space were the commands "File->Open", "File->Print" and "File->Close", there would be no vector for writing.

Anyway, the point is that a code injection instantly obliterates any analysis done on the system and opens the vector space to every instruction your processor can run (within its privilege). This effectively short-circuits all the FSA assumptions and analysis we've made. I find this a much more troublesome thought than knowing that my program is running along smoothly in a well-known state, even though it may have elevated privilege.

I think I understand Raymond's point of view, but anyone with even a minimal interest in security knows that as soon as someone can execute code on your machine, you're dead.

I remember that was exactly Microsoft's excuse some time ago for many security problems (or why they didn't consider them a high priority). They did retract that point of view (or at least, PR shone some light on security), so it seems a bit odd to see it here.

As already mentioned there is the kiosk case, or just simple "fullscreen" programs running on a PC with no quit command.

Remember the first IE versions? You could bypass the "kiosk" mode by using the Load/save common control (too much Explorer power in there).

Another much more dangerous case (in my point of view) is denial of service. Any competent "hacker" (using the media/movies definition) will know some form of denial of service if they can execute user code on the system. From here I can imagine some ARP poisoning (I'm just picturing a laptop added to the local network — easier if wifi is involved) and eventually bad things happening to the local network environment.

My point is that spam and child pornography are enough for a local network to be considered seriously compromised, and from the point where you can inject code those are easy to do (leave the credit card number theft for the movies; it isn't needed to do serious enough damage).

As a side note, I believe this is a point of view many people forget (and no *nix fanatic thinks of).

Actually, not just load/save on IE; Netscape as well. It was a problem with Windows, not IE.

I remember we used it on our library PCs to gain access so we could install anything we want. Granted that was not a Windows NT machine, so there was no inherent access control, but still a security breach.

So I’d rather not make the optimistic claim about "code injection could be harmless". You never know how your software is going to be deployed, so it’s useless from a software developer point of view anyway. By saying so you’d give people a false sense of security. I just fail to see the point of this article.

Still, it’s possible to chain attacks together. Perhaps there’s a hole which doesn’t let you do anything except write a registry key somewhere. That’s still an elevation-of-privilege attack, of course, but not an "execute any code" attack. It doesn’t even have to be a buffer overrun, it just needs to have poor validation.

Attackers can then use that seemingly-less-serious hole to activate another exploit, eventually getting themselves to arbitrary code execution (and pwn1ng j00r b0x).

I think you did downplay it too much. You started with an example of a person hacking themselves when they knew that they were hacking themselves. Sure it’s fair to downplay this. But this does not extend to a case where a person sort of participates in hacking themselves when they don’t know that they’re sort of participating in hacking themselves. The latter case is a security problem even though it doesn’t involve elevation of privilege.

By the way a burglary that leaves a window unlocked, just like a Trojan installing a backdoor or a rootkit, isn’t rare in real life. If the victim doesn’t notice that they’ve been burglarized then they might bring home some valuables the next day. Or the burglar will find it easier to make repeat visits in search of valuables. Or the burglar who doesn’t find what they were looking for will give up with their original plan and plant some forged evidence that will get the innocent victim convicted of a crime.

I have a question, then. Is it privilege escalation if one is able to use this code injection to circumvent hardware DRM? Or does that just fall back into merely a violation of whatever security policy is in place? I'd guess it's privilege escalation, as with hardware DRM it's never the hardware owner who has full privilege over the hardware. After all, a large intent of hardware DRM is to prevent the user from being able to "attack" the executing code.

Well, I was merely remarking on the 4th paragraph, where you said that you can already do such a thing (injection) by running your own program.

My point is that in the rare scenario where you have a computer at your discretion (your laptop, desktop, or even a shell account on some server), it's true that it's not a big deal to be able to inject code.

But for the rest of the cases, injecting code onto a machine you do not otherwise have access to is a huge deal, even if you can only do things that are of reduced privilege level and perfectly 'legal' under the credentials of the user — which are by definition not yours, since you are a third party injecting code into their program.

Example: imagine a bank that has a LAN with semi-sensitive information on their intranet (like tomorrow's keycodes for the front door). Every employee checks the intranet every evening. Accessing that page doesn't need any special credential or privilege, since from the intranet it is legally accessible to everyone. Injecting code onto a machine inside their intranet could lead to a serious compromise.

Ok ok, that example is a bad one (people should have some sort of authentication), but my point remains the same. It is not just an assumption that foreign code cannot be executed on a given system; it is part of a security policy in place. And a code injection bug effectively overrides an entire security policy. That's bad. In my opinion, much worse than a privilege elevation bug.

Here's another example: suppose you have an app that just sets the system clock. The fact that this application has a privilege elevation bug is relatively benign, given that the app will still continue to do what it's supposed to do — with the only caveat that maybe you shouldn't be allowed to do it at all.

Same for a web browser (or even Word): privilege elevation might allow you to view files that you don't have access to.

What you *won't* be able to do, unless you have a code injection tool, or a very 'loose' tool like Notepad, is anything that actually (subtly) destabilizes the system. I mean, how much damage can you do — short of ham-fistedly erasing important files — to a system using Word?

Mike Swain: I understand Raymond’s point, and in a very limited context, it’s understandable. But my point is that almost nothing these days is just restricted to a user sitting at a console. For instance, filenames, etc. are often controlled by external users. There’s a LOT of data that is controlled by external users. In fact, I guess one could argue that all input can be played with from the outside…

For instance, suppose there's a bug in MSN Messenger where if you type in "Foo" as your username, it deletes all files in My Documents. That's hardly an escalation because you could just go and delete everything in My Documents. However, if an attacker can entice someone to set that as their username ("d00d chng your nick to Foo lol"), then they've been successful in their attack, since the victim has an expectation that entering a username should not modify his files. Same thinking that opening a document should not run code, or typing weird strings into random dialogs shouldn't run code, etc.

Managing a Citrix server is already a pain, but this article doesn’t show any pity at all. Instead it reads (to me) that Microsoft is just not interested in these application servers, where users are not trusted at all, and therefore should never be able to run their own code. It is difficult enough already.

Raymond, I do understand your point, but it is rather academic, uninteresting, and at worst dangerous if naive developers take it as anything greater.

The kiosk example that Aaron gave is a good one. If I am able to walk up to an internet kiosk, put in my $2 for a few minutes of access, and type a URL into IE that overflows a buffer and launches a worm the likes of which the planet has never seen before, I reckon that's a pretty major security hole, even if no one can use the exploit remotely. Your argument makes a little sense in the context of me sitting at home on my own PC typing in a specially crafted URL, but once I've mastered the URL and can launch it from whatever PC I like, there is a problem.

*** It’s all about context, and that context changes outside the control of the people who wrote the original application. ***

I don’t follow your filename comment either. You mentioned someone who writes the file, but the file doesn’t need to exist at all or even be a valid filename if it is about overflowing a dialog box buffer.

Matthew, I believe I do understand the point Raymond is making, but the counterpoint that several of us are making is that when the context is shifted from a machine where you have Administrator privileges to some other very restrictive environment then some "plain old bug" becomes an elevation of privilege.

What I mean by saying his point is very academic and uninteresting is that it is similar to going to the trouble of arguing that guns are not always dangerous because people with no arms have difficulty firing them. Sure, there are situations where buffer overflow bug #9999 does not give any more access to the system than is already granted, but since a change in context can change that, it is a security hole. If Windows 2003 Server had a hardcoded Administrator account with a known username and password, it would be a security hole plain and simple, regardless of whether the bank using it has it behind a firewall or not. The reason is that you have to think beyond the details of your own system and think about where it could be deployed. While I appreciate the point Raymond is making, I find it smells of "well, it works on my machine" — and I realise that that is not at all what he is saying, but it doesn't matter, because that is how it sounds, and I know some developers are likely to take it as such and treat it as gospel. This is why I said it is dangerous.

I’d like to add that I’m not "yet again" misreading Raymond and flaming him for it. I’ve not taken issue with anything he has said in the past.