I think any other answer could fill a book and still not be as accurate as this one from @JeffFerland!
– F. Hauri, Feb 6 '13 at 22:45


Why can people still rob banks? Why do people still die in car crashes? Why can people still get away with people trafficking and murder? Why do seemingly solid legal cases still fall through due to technicalities? As many constraints and safety mechanisms as we implement, there will always be holes, there will always be new ways of abusing systems, and there will always be people who operate outside social norms.
– Polynomial, Feb 7 '13 at 1:21


+1, this question is at the same time extremely simple and extremely complex. Amazing question.
– That Brazilian Guy, Feb 7 '13 at 13:37

6 Answers

The computer cannot guess what it is "supposed" to do. Instead, it does exactly what it is told to do -- that's what programming is about. As a corollary, computers have no initiative whatsoever, so if they are asked to do something stupid or nonsensical then they just do it.

A bug is what happens when the sequence of instructions written by the programmer makes the computer do something stupid when presented with a specific set of data -- that is, stupid with regard to what the programmer wanted the computer to do in his high-level, abstract picture of the system. But the computer does not know what the programmer wants, and even if it did, it would not have the means to understand it. The computer just follows the instructions to the letter.
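A tiny Python sketch (the function and values are hypothetical, invented for illustration) of what "follows the instructions to the letter" looks like in practice: the programmer intends one thing, writes something subtly different, and the computer faithfully does the wrong thing.

```python
def average(values):
    """Intended: return the arithmetic mean of a list of readings."""
    total = 0
    for i in range(1, len(values)):  # BUG: starts at 1, silently skips values[0]
        total += values[i]
    return total / len(values)

# The programmer expects 10.0; the computer dutifully prints 7.5.
print(average([10, 10, 10, 10]))
```

The machine is not "wrong" here in any sense it can perceive: it executed exactly the loop it was given. The gap between the docstring and the loop body is the bug.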

A security issue is a kind of bug which can be exploited to the advantage of a malevolent third party, who will trigger the situation where the bug occurs in order to benefit from the nonsensical behaviour of the computer.

So, to sum up, security issues occur in computers because bugs exist and because evil exists.

Bugs exist because making bug-free software (or, for that matter, bug-free hardware) appears to be overwhelmingly hard. This is an active research area and it is not yet ready to deliver anything workable. As an anecdotal example, consider that the program used during the Apollo 11 Moon landing had two bugs, to the effect that Neil Armstrong had to go into american-manliness-overflow mode and use the manual controls. If they could not make a bug-free program for the main computer of a 25-billion-dollar program, how do you expect average programmers to do better?

Also on the matter of formal verification, ponder this well-known quote from Donald Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it."

The existence of evil is also an open problem, one which has been researched for at least 4000 years and does not seem likely to be solved any time soon.

Actually yes, yes it does. Everybody and their dog will tell you it is because people are stupid, which is pretty much true in the sense that we are all imperfect, but this is not quite the whole story.

Basically, a guy who is essentially the Hungarian-born version of Feynman invented the modern architecture of computers, called the von Neumann architecture. Its essence is that programs and data are all stored in the same memory, so one can manipulate the other.

So, using the usual humans are computers metaphor, imagine you have a friend who can only read instructions from a piece of paper and do exactly what they say, discarding the complexities of humans having different interpretations and all that social stuff that's ruining my metaphor. In this case, your friend is in a library. You hand him a piece of paper. He blindly follows your instructions and by the time he has finished (which is remarkably quick, by the way) he has sorted your album collection alphabetically AND had time to tell you he needs an update.

Now, Mr Malicious is a nasty piece of work. He takes a piece of paper, writes "buy a gun, rob a bank and stamp on the Beatles albums in the collection you just sorted" and stuffs it in a Chesney Hawkes album. Then he gives your friend some instructions which, surprise surprise, involve him opening the Chesney Hawkes album and reading those instructions. Oh, a piece of paper with instructions on it! Jumping up and down on the Beatles albums, your friend sprints out to buy a gun and rob a bank.

See - programs are data. They are stored in the same place. This is an immensely useful feature, an explicit design feature of modern computers. It allows us to have compilers that take data and produce code, have dynamic languages and all kinds of other wonderful things. But it also brings about the insecurity of being able to accidentally execute things that should be data as code, and thereby make the computer proverbially drive itself off a cliff.
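The "programs are data" point can be made concrete in any high-level language. A minimal Python sketch (the strings here are invented examples): a plain string sitting in memory as data becomes executable code, which is exactly the power, and the danger, that the answer describes.

```python
# In a von Neumann machine, code lives in memory as ordinary data.
# High-level languages expose the same idea: a program can treat a
# string (data) as code and run it.

source = "print('I started life as a plain string')"

code_object = compile(source, "<string>", "exec")  # data -> executable code
exec(code_object)                                  # the machine runs it

# The flip side: if an attacker controls the string, the attacker
# controls what the machine does next.
tainted = "2 + 2"       # imagine this arrived over the network
result = eval(tainted)  # blindly executing data as code
print(result)           # harmless here, catastrophic if tainted were malicious
```

This is the same mechanism that makes compilers and dynamic languages possible, and the same mechanism an exploit abuses when it tricks a program into treating attacker-supplied data as instructions.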

There is an alternative, called the Harvard architecture. In this scenario, code and data are separate things, and one cannot put malicious code where data should be, because a Harvard-architecture processor would just look at it and go "yeah, right!". This does not mean such PCs could not crash - quite the contrary, the logic instructions can still contain bugs - it's just that we wouldn't be able to exploit them so readily.

The Harvard architecture also has some severe limitations. You could write compilers for one, but you wouldn't be able to test what you wrote without first transferring the program from data storage to code storage. Dynamic languages get much, much harder. Self-updating programs become difficult. And so on.
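The Harvard idea above can be sketched as a toy machine (a deliberately simplified model, not a real instruction set): instructions and data live in two separate memories, the "CPU" fetches only from the instruction memory, and writes can only touch data memory, so data can never be smuggled in as code.

```python
INSTRUCTION_MEMORY = (   # a tuple: immutable, i.e. read-only
    ("LOAD", 0),         # acc = data[0]
    ("ADD", 1),          # acc += data[1]
    ("STORE", 2),        # data[2] = acc
)

def run(data_memory):
    """Execute the fixed program against a mutable data memory."""
    acc = 0
    for op, addr in INSTRUCTION_MEMORY:  # fetch only from instruction memory
        if op == "LOAD":
            acc = data_memory[addr]
        elif op == "ADD":
            acc += data_memory[addr]
        elif op == "STORE":
            data_memory[addr] = acc      # writes go only to data memory
    return data_memory

# Even if an attacker fills data memory with bytes that *look* like
# instructions, this machine never fetches from it, so they never run.
print(run([2, 3, 0]))
```

The limitation mentioned above falls straight out of the sketch: there is no way for `run` to generate new instructions at runtime, which is exactly what a compiler or a self-updating program would need.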

So, yes, it is the nature of humans to not always get everything right, and it is the nature of computer architecture that this fact can be leveraged to do bad things.

That does not mean there are no workarounds - indeed, W^X and the NX bit are Harvard-architecture-like concepts for the von Neumann x86 backend.

Notes:

Technically, an x86 CPU is a modified Harvard architecture. The backend (the bit you program from) is von Neumann; the core is Harvard.

wait, how could Neumann be Dutch? I thought he was Hungarian who later emigrated to the USA.
– vsz, Feb 7 '13 at 7:16

@vsz yep, I probably got that wrong. He had "von" in the name, and I assumed without reading... :) @Hendrik yep, I think I said as much. You can still crash programs, but you won't be able to exploit those crashes in quite the same way.
– user2213, Feb 7 '13 at 9:30

Insecurity in computers exists for pretty much the same reason computers are so good at what they do: They follow the instructions they're given, precisely. The root of the problem, as @JeffFerland succinctly put it, is that those instructions are written by humans. This is a problem for two reasons: Some humans have malicious intent, and others, well-meaning individuals, are simply fallible.

The first half of the issue is obvious. Bad people write software that does bad things. Then, they get the software to run on other people's machines through social engineering or other means. They do this because there is profit in it, or because they have a certain political or personal agenda.

The second half is where we run into problems like Flash and Java - programs otherwise intended for good purposes, which inadvertently facilitate execution of malicious code. This happens because programmers, as humans, are imperfect. As such, sometimes the code they write is imperfect in such a way that computers (which will still follow the code's instructions perfectly) running their programs can be leveraged by malicious actors to run their bad software.

Think of computers as if they were Amelia Bedelia. When Mrs. Rogers tells Amelia Bedelia to "measure two cups of rice", she of course means for Amelia to set out two cups of rice for Mrs. Rogers to use when she gets home. However, only following the instructions exactly as they are given, Amelia pulls out two cups of rice, takes some measurements, and puts the rice away. Sometimes a programmer will do something similar: they will write instructions with a certain intent, but when the computer executes them exactly as written, or is allowed to incorporate user input into those instructions, it may end up doing something the programmer or user did not intend or expect.
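The "incorporate user input into those instructions" failure mode has a classic code shape: an injection. A minimal Python sketch (the calculator functions are hypothetical, invented for illustration) contrasts blindly executing user input with parsing it as inert data.

```python
import ast

def fragile_calculator(user_input):
    # Intended: evaluate a simple arithmetic expression.
    # Actual: executes WHATEVER Python expression the user supplies,
    # exactly as written - Amelia Bedelia as an interpreter.
    return eval(user_input)

def safer_calculator(user_input):
    # ast.literal_eval accepts only literals (numbers, strings,
    # tuples, ...), so instructions hidden in the input are rejected
    # instead of executed.
    return ast.literal_eval(user_input)

print(fragile_calculator("1 + 1"))    # works as intended
print(safer_calculator("(1, 2, 3)"))  # parsed as data, never executed
# safer_calculator("__import__('os').getcwd()") raises ValueError
# instead of running attacker-chosen code.
```

The difference between the two functions is exactly the difference between treating user input as instructions and treating it as data.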

This question is really the same as with any tool. A computer is a tool; it can only do what a person (either the user or a software developer) tells it to do. Tools can be used for good or bad, and sometimes end up with unintended uses. The more complex a tool or system is, the more likely it is to have unintended possible uses, and computers are among the most complex systems in our world today.

Ultimately, a lot of the problem comes down to the fact that security and usability are constantly at war, and good security practice is a matter of balancing them. If I had a box that was completely impenetrable and had no key, then nobody could get into the box, but it would also be completely useless, since nobody could get at its contents (even the person who should have them). So I make a key and give it to the good guy, but someone could steal the key. Making the box usable for the good user has also made it possible for a bad person to do bad things and get into the box.

Computers can only do what they are told. Generally, for the sake of usability, computer makers default to assuming that what a user requests is what the user wants to do. This means that if the user runs a bad piece of code, the computer lets bad things happen: it doesn't know that it is a bad thing, it just knows the user asked it to run.

Similarly, since programmers aren't perfect, there are sometimes ways to make their programs (which the user wants to run) do something that neither the user nor the developer wanted to happen. This is how a lot of viruses spread without direct user interaction. The user may be browsing with something like Java enabled; the user tells the computer to visit a website, but what wasn't expected is that the site contains a piece of Java code that exploits a bug in Java to do something neither the user nor the developers of Java wanted the code to be able to do. Since the computer only knows that the user wanted the page loaded and the code run, it does just that.

Again, it has no way to know that it was a bad thing that the user asked for. Anti-virus and anti-malware software try to identify these bad things before they happen, but it's an imperfect game because it's a very complex system.

As for why people try to exploit them, there are many reasons. Originally, it kind of started out as a thing for "fun" and a challenge, but now, with the Internet, there is big money to be made, either by farming out compromised computers as "bots" that can be used for mounting other attacks or sending spam e-mail. There is also big money to be made in the theft and sale of personal and financial information. It's really no different from any other criminal enterprise now.

A computer is like a tool: it can be used for good and it can be used for bad. Its original design was not intended for harm, but it has the potential. For instance, a knife was designed to cut things like meat or wood; the problem is that people turned it into a weapon. Some for ideals, some for profit, some just for the fun of it.

Because computers are little more than tools. The term "computer" is very descriptive despite all of the abstraction that we attempt to layer on top of them; it is a device that "computes", plain and simple. Whether, at any given nanosecond, it is computing the color of a pixel in a UI, the address of data in its memory, etc, it is no more or less than an incredibly fast binary calculator hooked up to a lot of peripheral components that provide inputs to and outputs from the basic programming the CPU is currently churning its way through.

Given that, the question of "why" has a simple answer; tools can be used for good or ill. Hammers can pound nails or skulls. Saws can cut wood or flesh. And computers can sequence DNA to find the cure for cancer, or steal your bank account information.

Why doesn't the computer just do the things it is supposed to?

It does. It does exactly what it is told to do by the program that it is currently executing. The problem is that the program the computer is currently executing isn't necessarily something you told it to execute explicitly by the stroke of a key or the click of a mouse. For a very long time now, we've used multiple layers of software (and hardware) to allow for modularity; any computer can have any hardware plugged into it, and run any program to work with it (at least that's the theory). More recently we have invented layers to allow a computer to juggle many programs at once. These layers of abstraction such as the OS, virtual machines, daemons (services), etc, which hide what the computer is really doing on any given clock, can be manipulated by an attacker to run software without your conscious knowledge.

Why do some people write malware, instead of programs with a constructive purpose beyond doing damage and violating the law?

Because,

...some men aren't looking for anything logical, like money. They can't be bought, bullied, reasoned, or negotiated with. Some men just want to watch the world burn. - Alfred Pennyworth, The Dark Knight

For most "black hats", the mayhem they cause is fun and entertaining, the same way you or I would enjoy a video game in a completely sandboxed environment. They, however, are doing things in the real world: the same layer of digital separation between you and the consequences of your actions, with the added thrill of knowing it's real.

Does computer insecurity exist because of the nature of computers?

To a point, yes. Computers are powerful, but they are extremely dumb. They require humans to do their thinking for them, to design them in a way that is difficult to subvert, to program them in a way that is difficult to subvert, and to use them in a way that is difficult to subvert. The inherent difficulty of this is similar to the inherent difficulty (maybe the impossibility) of designing a "completely foolproof system":

A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools. - Douglas Adams

In both cases, you're very simply trying to pre-emptively outsmart someone willing to spend a lot of time and effort finding a way to misuse what you're designing once the finished product has left your hands. You effectively have to come up with the same ideas that the other person would have, and incorporate mechanisms to defeat that line of thinking. The more complex the system is internally, the more of those ideas become possible, and the less likely you are to have thought of everything. The more you put in place to prevent misuse, the more complexity you add. It's a vicious cycle.