Good coding practices mean good data security

Data breaches are a dime a dozen these days. Are hackers getting better? Not really. It turns out that bad coding practices lead to insecure code and glaring vulnerabilities. Who knew?

It seems like there’s another high-profile data breach in the news every other day. What’s going on here? Are hackers and bots getting better at attacking our systems? Or could it be something else?

According to a survey done by O’Reilly Media and the Software Improvement Group (SIG), these security flaws might have more to do with bad coding practices than anything else. Incompetence beats malice any day, apparently.

The picture this survey data paints is particularly compelling.

Peculiar incentives

It’s no secret that the tech world is biased toward the future. Innovation and being on the cutting edge are paramount. Often, that means a certain lack of regard for maintaining legacy code, and it means there’s little incentive for companies to put money into things customers can’t see. Paying developers to come up with a cool new widget is easy to sell to investors; justifying security checks is less interesting.

The most common kind of security assessment for code is penetration testing. The most common form, the black box test, usually consists of running common manual or automated attacks against a finished system to see whether an attacker can gain unauthorized access.

According to O’Reilly’s survey, only 4% of respondents believed penetration testing to be sufficient.

That may be because penetration testing has some well-known limitations. For one, it can only be performed on a working system. That means developers only get feedback very late in the development process, when the pressure is on from the company to release the software regardless of any bugs or vulnerabilities.

Another limitation is that this kind of testing relies on mostly superficial checks of the system’s entry points. Black box testing is like “checking the security of a bank by rattling the doors and windows”. Any critical errors that lie deep in the system will stay hidden until it’s too late.

Finally, security testers are usually given only a short window, often a couple of weeks, to find any glaring holes. Anyone trying to take advantage of the system faces no such deadline when scoping it out for vulnerabilities. Hackers have all the time in the world. Developers? Not so much.

How to avoid the blame game

It’s easy to pass the blame around for this sort of thing. After all, no one wants to be responsible for a simple goto fail; error that put millions of users at risk. How can developers avoid making these sorts of mistakes?
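The goto fail; reference is to Apple’s 2014 TLS bug (CVE-2014-1266), in which a single duplicated goto line caused a signature check to be skipped entirely. Here is a minimal C sketch of the same pattern; the function and flags are simplified stand-ins, not Apple’s actual code:

```c
#include <stddef.h>

/* Simplified illustration of the "goto fail" pattern
   (modeled on CVE-2014-1266; names are hypothetical).
   Returns 0 on success, -1 on failure. */
static int verify_signature(int hash_ok, int sig_ok)
{
    int err = 0;

    if (!hash_ok) { err = -1; goto fail; }
        goto fail;  /* duplicated line: ALWAYS jumps, with err still 0 */
    if (!sig_ok)  { err = -1; goto fail; }  /* never reached */

fail:
    return err;
}
```

Because the stray goto runs unconditionally, verify_signature(1, 0) returns 0: a bad signature “verifies” successfully. The indentation makes the second goto look like part of the if above it, which is exactly why the bug survived human review for so long.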

O’Reilly’s survey suggests that an overwhelming majority of respondents believe code should be shown to independent security experts. Unfortunately, even though most said these reviews are worth the cost, few companies budget for them, and almost none actually conduct them.

So if the company won’t pay for security testing, what’s a poor developer supposed to do? Peer review is one option. Having a second (or third, or fourth, or fifth) set of eyes on the code might just find that crucial error you missed.

Clean code is another option. We’ve all written some ugly code in our time. While some believe it’s more important to solve a problem than to make the solution look good, that attitude can cause problems down the line. Making sure there are no redundancies, keeping dependencies minimal, and writing expressive code goes a long way toward minimizing security flaws on the developer’s end. Clean code won’t solve all the world’s problems, but it doesn’t hurt.
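As a hypothetical sketch of what removing redundancy buys you: two request handlers that each copy-paste the same input check can silently diverge, so that a security fix lands in one and not the other. Consolidating the check into one expressive helper means the fix lands everywhere at once. All names here are illustrative.

```c
#include <stddef.h>

#define MAX_NAME_LEN 64

/* One shared, expressive validation helper: tightening a rule here
   automatically applies to every caller. */
static int name_is_valid(const char *name, size_t len)
{
    return name != NULL && len > 0 && len <= MAX_NAME_LEN;
}

/* Both handlers delegate validation instead of each re-implementing
   (and possibly diverging on) the same check. */
static int handle_create(const char *name, size_t len)
{
    if (!name_is_valid(name, len))
        return -1;   /* reject bad input up front */
    return 0;        /* ... create the record ... */
}

static int handle_rename(const char *name, size_t len)
{
    if (!name_is_valid(name, len))
        return -1;
    return 0;        /* ... rename the record ... */
}
```

The design choice is the point: duplicated validation logic is exactly the kind of “ugly but working” code that turns into a vulnerability when one copy gets patched and the other doesn’t.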

Besides, as the old saying goes, “always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.”