Abstract

The move to the cloud brings a number of new security challenges, but the application remains your last line of defense. Engineers are extremely well poised to perform the tasks critical for securing the application, provided that certain key obstacles are overcome.

Introduction

This paper explores three ways to help development teams bear the burden of security that the cloud places on them:

Use penetration testing results to help engineers determine how to effectively "harden" the most vulnerable parts of the application.

Apply the emerging practice of "service virtualization" to provide engineers the test environment access needed to exercise realistic security scenarios from the development environment.

Implement policy-driven development to help engineers understand and satisfy management's security expectations.
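To make the third point concrete before it is explored in depth: a management-level security expectation (for instance, "no hard-coded credentials in source code") can be encoded as an automated check that runs in the development environment. The following is a minimal sketch only; the policy wording and the regex patterns are illustrative assumptions, not an exhaustive rule set or any particular vendor's implementation.

```python
import re

# Hypothetical policy: "source code must not contain hard-coded credentials."
# Each pattern is an illustrative heuristic, not a complete detection rule.
CREDENTIAL_PATTERNS = [
    re.compile(r'password\s*=\s*["\'][^"\']+["\']', re.IGNORECASE),
    re.compile(r'api[_-]?key\s*=\s*["\'][^"\']+["\']', re.IGNORECASE),
]

def check_policy(source: str) -> list:
    """Return a violation message for each line that breaks the policy."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern in CREDENTIAL_PATTERNS:
            if pattern.search(line):
                violations.append(f"line {lineno}: possible hard-coded credential")
    return violations

sample = 'db_password = "hunter2"\nuser = load_user()\n'
print(check_policy(sample))  # flags line 1 only
```

A check like this turns a vague management expectation into something an engineer sees, and can act on, at coding time.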

New Risks, Same Vulnerability

Before the move to the cloud, few organizations lost sleep over application security because they assumed their internally-controlled security infrastructure provided ample protection. With the move to cloud, security concerns are thrust into the forefront as organizations consider how much security control they are willing to relinquish to cloud service providers and what level of exposure they are willing to allow.

The fact of the matter is that, with or without the cloud, failure to secure the application is, and always has been, a dangerous proposition. Even when the bulk of network security rested under the organization's direct control, attackers still managed to launch successful attacks via the application layer. From the 2002 breach at the Australian Taxation Office, where a hacker accessed tax details on 17,000 businesses, to the 2006 incident in which Russian hackers stole credit card information from Rhode Island government systems, to the recent attack that brought down the National Institute of Standards and Technology (NIST) vulnerability database, it is clear that a deficiency in the application layer can be the one and only entry point an attacker needs.

Public cloud, private cloud, or no cloud at all: the application is your last line of defense, and if you don't properly secure it, you're putting the organization at risk. Nevertheless, the move to the cloud does bring some significant changes to the application security front:

Applications developed under the assumption of a bulletproof security infrastructure might need to have their strategies for authorization, encryption, message exchange, and data storage re-envisioned for cloud-based deployment.

The move to cloud architectures increases the attack surface, potentially exposing more entry points for hackers. This surface grows further as applications embrace distributed computing technologies such as mobile clients, web front ends, and public APIs.

As applications shift from monolithic architectures to composite ones, they become highly interconnected with third-party services, and a poorly engineered or malfunctioning dependency can raise the security risk of every connected component. For example, a recent attack on Yahoo exploited a vulnerability in a third-party application. A composite application is only as secure as its weakest link.

As organizations push more (and more critical) functionality to the cloud, the potential impact of an attack or breach escalates from embarrassing to potentially devastating in terms of safety, reputation, and liability.

With the move to the cloud placing more at stake, it's now more critical than ever to make application security a primary concern. The industry has long recognized that development can and should play a significant role in securing the application. This is underscored by the DoD's directive for certifications in the area of software development security (e.g., via CISSP). Select organizations that have successfully adopted a secure application development initiative have achieved promising results. However, such success stories still remain the exception rather than the rule.

Should Development Be Responsible for Application Security?

Due to software engineers' intimate familiarity with the application's architecture and functionality, they are extremely well poised to accomplish the various tasks required to safeguard application security. Yet a number of factors impede engineers' ability to shoulder the burden of security:

The organization's security objectives are not effectively communicated to the development level.

For engineers to determine whether a particular module they developed is secure, they need to access and configure dependent resources (e.g., partner services, mainframes, databases) for realistic security scenarios, and such access and configurability are not commonly available within the development environment.

Management often overlooks security when defining non-functional requirements for engineers and planning development schedules; this oversight, paired with the myopic nature of coding new functionality, commonly reduces security concerns to an afterthought.

Security testing frequently does not begin until the testing phase, when it is typically too late to make the necessary critical architectural changes.
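The dependency-access impediment above is the one service virtualization targets. Real tooling in this space is far more capable, but the core idea can be sketched simply: stand up a simulated dependency that engineers can configure to return the security-relevant responses (expired tokens, malformed payloads, outages) that the real partner service rarely produces on demand. The endpoint, scenario names, and response shapes below are hypothetical.

```python
import json
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class VirtualPartnerService(BaseHTTPRequestHandler):
    """A minimal virtualized 'partner service' with switchable behavior,
    so security scenarios (e.g., an expired auth token) can be exercised
    from the development environment without the real dependency."""
    scenario = "ok"  # switchable: "ok" or "expired_token"

    def do_GET(self):
        if VirtualPartnerService.scenario == "expired_token":
            status, body = 401, {"error": "token expired"}
        else:
            status, body = 200, {"status": "ok"}
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body).encode())

    def log_message(self, *args):
        pass  # keep request logging quiet

def start_virtual_service() -> HTTPServer:
    # Port 0 lets the OS pick a free port.
    server = HTTPServer(("127.0.0.1", 0), VirtualPartnerService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def call_service(port: int) -> int:
    """Return the HTTP status the virtualized dependency answers with."""
    try:
        return urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
    except urllib.error.HTTPError as err:
        return err.code

server = start_virtual_service()
port = server.server_address[1]
VirtualPartnerService.scenario = "expired_token"  # dial up a security scenario
print(call_service(port))  # the application under test must handle a 401 here
VirtualPartnerService.scenario = "ok"
print(call_service(port))
server.shutdown()
```

The point is configurability: an engineer can flip the dependency into a hostile or degraded state on demand, something rarely possible against a shared staging environment.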

In the following sections, we explore how strategies related to penetration testing, service virtualization, and policy-driven development can better prepare engineers to bear the heavy burden of security that accompanies the shift to the cloud.

Moving Beyond Penetration Testing: Divide and Conquer

Penetration testing is routinely used to barrage the application with attack scenarios and determine whether the application can fend them off. When a simulated attack succeeds, you know for a fact that the application has a vulnerability that makes you susceptible to a particular breed of attack. It alerts you to real vulnerabilities that can be exploited by known attack patterns: essentially, sitting ducks in your application. When a penetration attack succeeds, there is little need to debate whether the vulnerability needs to be repaired. It's not a matter of "if," but rather of "how" and "when."
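In miniature, a penetration check is just this: fire known attack payloads at an application entry point and report which ones get through. The handlers and payloads below are illustrative stand-ins for real application code and a real attack corpus, which would be far larger.

```python
# Illustrative-only sketch of the penetration-testing idea: replay a few
# classic attack payloads and report which ones a handler wrongly accepts.
ATTACK_PAYLOADS = [
    "' OR '1'='1",                  # SQL injection
    "<script>alert(1)</script>",    # cross-site scripting
    "../../etc/passwd",             # path traversal
]

def naive_handler(user_input: str) -> bool:
    """Hypothetical application code: accepts any non-empty input."""
    return bool(user_input)

def hardened_handler(user_input: str) -> bool:
    """Hypothetical hardened code: allows only alphanumerics and spaces."""
    return user_input.replace(" ", "").isalnum()

def penetration_check(handler) -> list:
    """Return the payloads the handler fails to reject."""
    return [p for p in ATTACK_PAYLOADS if handler(p)]

print(penetration_check(naive_handler))     # every payload gets through
print(penetration_check(hardened_handler))  # none do
```

Each payload that survives `penetration_check` is a sitting duck of exactly the kind the paragraph above describes.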

The common reaction to a reported penetration failure is to have engineers patch the vulnerability as soon as possible, then move on. In some situations, taking the path of least resistance to eliminating a particular known vulnerability is a necessary evil. However, relying solely on a "whack-a-mole" strategy for application security leaves a considerable amount of valuable information on the table, information that could be critical for averting the next security crisis.

Switching to a non-software example for a moment, consider what happened when the US Army realized how susceptible Humvees were to roadside bombs in the early 2000s. After initial ad-hoc attempts to improve security with one-off fixes (such as adding sandbags to floorboards and bolting miscellaneous metal to the sides of the vehicles), the Army devised add-on armor kits to address structural vulnerabilities and deployed them across the existing fleet. In parallel with this effort, they also took steps to ensure that additional protection was built into new vehicles requisitioned from that point forward.

How does such a strategy play out in terms of software? The first step is recognizing that successful attacks, actual or simulated, are a valuable weapon for determining which parts of your application are the most susceptible to attack. For example, if the penetration tests run this week succeed in an area of the application where penetration tests have failed before (and this is also an area that you've already had to patch twice in response to actual attacks), that module is clearly suffering from underlying security issues that probably won't be solved by yet another patch...
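The pattern-spotting described above can be automated by tallying findings per module across penetration-test runs and actual incidents, then flagging modules whose cumulative count crosses a threshold as candidates for structural hardening rather than another one-off patch. The finding log and threshold below are hypothetical.

```python
from collections import Counter

# Hypothetical finding log: (module, source) pairs accumulated over time
# from penetration-test runs and actual incident reports.
findings = [
    ("billing", "pen-test"), ("billing", "incident"),
    ("billing", "pen-test"), ("search", "pen-test"),
    ("billing", "incident"), ("profile", "pen-test"),
]

def hot_spots(findings, threshold=3):
    """Modules whose cumulative findings meet the threshold are candidates
    for redesign or hardening rather than yet another patch."""
    counts = Counter(module for module, _ in findings)
    return [m for m, n in counts.items() if n >= threshold]

print(hot_spots(findings))  # ['billing']
```

Here the billing module, hit four times across simulated and real attacks, is the software analogue of the Humvee's structural vulnerability: the signal to armor the design, not just patch the hole.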

Cynthia Dunlop, Lead Content Strategist/Writer at Tricentis, writes about software testing and the SDLC, specializing in continuous testing, functional/API testing, DevOps, Agile, and service virtualization. She has written articles for publications including SD Times, Stickyminds, InfoQ, ComputerWorld, IEEE Computer, and Dr. Dobb's Journal. She has also co-authored and ghostwritten several books on software development and testing for Wiley and Wiley-IEEE Press. Dunlop holds a BA from UCLA and an MA from Washington State University.
