6 Reasons Secure Coding Can Still Lead to a Shipwreck

Why application security means more than just ensuring that developers are writing secure code.

Despite our best efforts to write secure code, computer security breaches at major banks, retailers, and government agencies are making front page headlines on a regular basis. Here are six reasons writing better code may only address a fraction of a bank’s total application security risk.

1. The tip of the iceberg: Modern banking applications are written by developers using a combination of their own software together with open-source components, third-party libraries, and development frameworks -- much in the way manufacturers use a mix of in-house and sourced components in their finished products. Various industry studies suggest that only 10 to 30 percent of a custom application's code is written by a company's own developers. So even the most secure coding practices in the world, perfectly executed, will address at best 30 percent of the potential risk. That's only the tip of the iceberg.

2. It’s not my iceberg: This statistic varies among financial institutions, but in most large banks at least half of all software applications are purchased from a third party, perhaps with a small element of customization. Since it’s unlikely that a bank can access the source code of third-party applications, verifying whether they have been developed with acceptable security best practices is extremely difficult. As we’ve witnessed over the past two years, many high-profile security breaches have exploited vulnerabilities in third-party applications or the IT supply chain.

3. The ship has already left harbor: The best time to enforce rigorous software security standards is early in the development lifecycle. Unfortunately, most banking applications were developed before stringent security standards were a core element of the original design. Many can’t be taken out of production and re-written to remediate a security flaw -- especially since the timeframe for software remediation can take up to three months in a large organization. Banks running major systems that have known security flaws are hoping that some combination of virtual firewall patching and serendipity will prevent a catastrophic and highly public outcome.

4. The ship is registered in Somalia and has a foreign crew: Most large banks today, even those that favor in-house development over third-party packages, do not maintain all of their custom code in-house in a single, easily managed location. They may use multiple development centers around the world spanning different time zones, languages, and cultures. In fact, a significant proportion of so-called in-house development is actually contracted out to software consultancies, independent developers, or favored software contractors in countries from India and China to Hungary, Estonia, Ukraine, and Russia. Maintaining effective quality control over software development security practices becomes increasingly difficult as the supply chain is elongated.

5. The ship looks beautiful but has holes in the hull: A further range of risks arises from the application platforms and run-time environments on which banking applications are deployed. No amount of secure development practices will protect against vulnerabilities in the application platform (e.g., Apache Tomcat, WebLogic, JBoss, WebSphere, etc.) or in the runtime environment itself (like Java, which is favored in the banking sector).

6. Captain Phillips is drunk on the bridge: The assumption that a policy requiring developers to follow security best practices will protect banks from coding errors runs counter to a basic fact of human nature: people do not perform perfectly in all circumstances. Furthermore, the demands of delivering a feature set on time to the business unit that is paying for the software may conflict with the extra care needed to enforce all appropriate security standards.

So while it should remain our goal, even the most secure coding is not enough to prevent Titanic-scale disasters when it comes to application security. In addition to security awareness training for developers and secure coding policies and procedures, financial institutions should consider instituting security controls at the application deployment phase. These include penetration testing and an emerging technology that analyst firm Gartner calls Runtime Application Self-Protection, or RASP. This approach implements security protection (not merely detection) within the execution environment and is particularly useful in guarding against zero-day attacks.
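To make the idea concrete, here is a minimal, purely illustrative Python sketch of what "protection within the execution environment" means: a guard that blocks, rather than merely logs, a tautology-style SQL injection before the query reaches the database. Real RASP products instrument the runtime itself (the JVM, for instance) and require no changes to application code; every name below is invented for illustration.

```python
import re

# Naive tautology pattern such as: ' OR '1'='1  (real products use far
# richer analysis than a single regex -- this is only a sketch).
TAUTOLOGY = re.compile(r"('|\")\s*or\s+('|\")?1('|\")?\s*=\s*('|\")?1", re.IGNORECASE)

class BlockedRequest(Exception):
    """Raised when the runtime guard refuses to execute a query."""

def rasp_guard(execute):
    """Wrap a database call with a runtime check that blocks, not just detects."""
    def guarded(query, params=()):
        # A parameterized query (non-empty params) is passed through untouched;
        # a raw query containing a tautology is stopped before execution.
        if params == () and TAUTOLOGY.search(query):
            raise BlockedRequest("possible SQL injection blocked at runtime")
        return execute(query, params)
    return guarded

@rasp_guard
def run_query(query, params=()):
    # Stand-in for a real database call.
    return f"executed: {query}"

print(run_query("SELECT * FROM accounts WHERE id = ?", (42,)))  # allowed
try:
    run_query("SELECT * FROM accounts WHERE name = '' OR '1'='1'")
except BlockedRequest as e:
    print(e)  # blocked before reaching the database
```

Because the guard sits in the execution path rather than at the network edge, it can act even on an attack pattern no perimeter signature has seen yet, which is why the approach is pitched at zero-day protection.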

Brian Maccaba is CEO of Waratek. His former company, Cognotec, developed AutoDeal, a pioneering Web-based foreign exchange trading platform that was adopted by more than sixty banks worldwide. London Institutional Investor magazine named him among the top thirty individuals in ...

It's also important for security to adapt to developers' methodologies and workflows. The reason policy documents don't get read is that many modern development practices don't rely much on documentation.

So it's much easier for security requirements to be met if they're specified in the tools and language that the development team is already using for their other requirements. A good example of this is writing security user stories, if that's what the team is using, or creating security requirements on an issue tracker instead of in a document.
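As an illustration, a security requirement framed as a user story (the session-timeout scenario and its numbers here are purely hypothetical) might read:

```
As an account holder,
I want my session to expire after 15 minutes of inactivity,
so that an unattended terminal cannot be used to access my account.

Acceptance criteria:
- Any request after 15 idle minutes redirects to the login page.
- The server-side session token is invalidated, not merely the browser cookie.
```

Written this way, the requirement flows through the same backlog grooming, estimation, and definition-of-done checks as every other story, instead of living in a policy document nobody opens.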

In my experience, developers want to do the right thing -- we as security practitioners just need to make that as accessible as possible.

Thanks for your replies to the comments, Brian. It would be very scary to see a breach of that magnitude against a bank. It would certainly send the message to the public that their money isn't safe at the bank, which could have dire consequences, as Lehman did. At that point further regulations would be needed just to restore trust in the industry, even if the regulations themselves weren't very effective.

Yes, a breach could prompt a plethora of well-meaning, but not so useful, regulations aimed at making the industry safer. Some of the regulations will be useful, while others will simply create more work.

Hopefully security procedures and safeguards can stay a step ahead of the hackers.

1 - RASP is a new security mitigation technology. Because it is not correlated with other cyber defenses, it will inevitably increase the overall security effectiveness of your portfolio.

2 – Running an application in a secure virtual container creates a new option for the administrator. If you have a choice between running your app in a secure or non-secure container which should you choose?

3 – It is true that running apps in secure containers remediates security vulnerabilities without fixing the underlying code. This is highly beneficial in a number of scenarios. For example, the remediation is done immediately, whereas the code-fixing and go-live cycle in a major institution takes three months or more. Even if the code is going to be fixed, it makes sense to use the runtime protection in the interim, until the amended code can be put into production. Secondly, it will protect against security weaknesses in the code that have not yet been identified. Thirdly, it can defeat unknown zero-day exploits.

4 – The comment regarding the necessity of continuing to run the app in the secure container is well taken. As the IT world increasingly moves application workloads around virtual and cloud environments, the fact that the application is in a secure container may be a significant advantage. Where security is based on physical defenses, such as a WAF, those defenses cannot move with a virtual application across the virtual infrastructure or cloud.

5 – Despite the comment that SQLi and XSS are trivial to fix, the unfortunate reality is that these two exploits still account for the vast majority of cyber security breaches as evidenced by OWASP and other leading reports.
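For context, the "trivial" fix the earlier comment refers to is the parameterized query. A minimal sketch using Python's standard sqlite3 module (the table, column names, and data are hypothetical) shows both the vulnerable pattern and its one-line repair:

```python
import sqlite3

# In-memory database with one hypothetical row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100)")

user_input = "' OR '1'='1"  # classic tautology payload

# Vulnerable: splicing user input into the SQL string makes the
# WHERE clause always true, so every row comes back.
vulnerable = conn.execute(
    "SELECT name FROM accounts WHERE name = '" + user_input + "'"
).fetchall()
print(vulnerable)  # returns every row despite the bogus name

# Fixed: a bound parameter is treated as a literal value, never as SQL.
safe = conn.execute(
    "SELECT name FROM accounts WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # no rows match the literal string
```

The fix is indeed simple per query; the difficulty the OWASP figures reflect is applying it consistently across thousands of legacy queries spread through a large codebase.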

Regulation of security for financial institutions is reminiscent of the debate on risk management and capital adequacy over the past decade. Lengthy debates ultimately gave way to a mix of increased regulation for all, together with significantly more sophisticated methodologies being adopted by the leading international players.

The collapse of Lehman and the financial meltdown of 2008 led to significantly increased and more specific regulatory requirements. A serious cyber security breach at a major financial institution would probably have a similar effect.

Banks (and probably all other businesses) generally think they can set their own policies -- hence the pushback on Dodd-Frank, and the pushback on stricter security standards. In many cases this probably is true, but we've also seen that there isn't a great track record of self-policing. So it behooves all banks to set very strict security standards/policies and to rigidly enforce them, because all it will take is one more high-profile card or records breach in financial services for the politicians and regulators to pile on and impose security-related regulation. It's probably inevitable.

Security policies are only part of the battle, as you point out. Many companies have secure coding policies that their developers never read (and some are not even aware the policy exists). Financial firms need to make sure that their finely worded policy is more than a piece of paper that sits on a shelf. There needs to be oversight and enforcement of the policy to make sure it is actually being followed.

That is a really insightful article. I don't agree with your conclusion, however. Runtime Application Self-Protection (RASP) is not the answer to many teams' problems.

* RASP only covers a small class of web application vulnerabilities, such as XSS and SQLi. These are trivially easy to fix by developers, and existing WAF technologies already do a good job here for those teams without the resources to effect a fix.

* RASP is promised to work out of the box, yet prudent implementors should really have a dedicated in-house monitoring staff to ensure that false positives are being corrected and important business transactions are not being prevented.

* RASP intentionally introduces new behaviour into a system, but this behaviour and the heuristics applied are proprietary to the vendor - a black box. Certification and validation activities for some of the systems using RASP may need to consider this additional complexity. The new behaviour becomes part of the application runtime behaviour and therefore must also be validatable.

* RASP (and in many ways WAF technologies, too) never actually correct the underlying vulnerability, instead they perform the role of a compensating control. Whilst the compensation is in place, risk from the threat of the vulnerability being exploited is indeed effectively mitigated, but a strong dependency is introduced: without the compensating control, the system becomes vulnerable immediately.

* With RASP, not only are years of licence fees for the compensating control now necessary; a new risk is also introduced. If the dependency is not clearly documented and service managers are not continually trained on it, it may simply be forgotten. Removing the control will leave the system to fail open.

RASP is certainly an exciting technology with much potential ahead of it, but managers should consider the potential downsides before chasing the next new shiny thing.