Another month, another few dozen patches to install -- it's never-ending. It's frustrating.

Software coding tools supposedly have security built in by default. We have "safe" programming languages. We have programmers using SDL (security development lifecycle) coding tools and techniques. We have operating systems with more secure defaults and vendors that fuzz and attack their own software with a vengeance to find holes. We have companies spending billions of dollars to eliminate software bugs.

Here are five reasons why software is still full of bugs, despite so many well-meaning attempts to eradicate them:

1. Human nature

Most -- though not all -- coding bugs originate from human error. Some can be attributed to unexpected behavior of a coding tool or compiler. But the majority result from mistakes made by a human programmer.

No matter how great the SDL training or the security tools we receive, we are still human and we make mistakes. If you want to know why we still have computer software vulnerabilities, it's because humans are fallible.

That said, we're not doing enough to reduce human error. Many programmers simply aren't given sufficient (or any) SDL training, nor do they have incentives to program securely. I'm always surprised by how many programmers who write security software for a living don't understand secure programming. You can bet the bank that most security software you run has as many bugs as, if not more than, the software it is supposedly protecting.

But even highly trained coders who try their best miss bugs. For instance, long ago, a bad guy created a buffer overflow in a browser using an HTML tag field that determined color. Instead of entering a hex value like FFFFFF, the hacker could enter executable code into the color field; the browser would consume it and overflow the buffer. Voilà! Exploit. Few could have anticipated that one.

2. Increasing software complexity

By its nature, software keeps getting more complex, which of course means more lines of code. No matter how good you are, there will be a certain number of mistakes (though not always exploitable ones) per line of code. People who count such things say that if you make only one mistake per every 50 lines of code, you're doing pretty well. Most programmers veer closer to a mistake for every five to 15 lines of code. Consider that the Linux kernel has more than 15 million lines of code ... you do the math.

Even without coding errors, programmers can't anticipate an application's overall interactions in the Internet age. Most programs must talk to other APIs, save and retrieve files, and work across a multitude of devices. All those variables increase the chances of a successful exploit.