I am always amazed when I read the daily cloud blogs, articles and news headlines. Any given day will bring conflicting points of view from cloud industry experts and pundits on how secure clouds are, both private and public. There never seems to be a real consensus on how far security in the cloud has evolved. How, then, can any corporate CIO sort through the conflicting information and make an informed decision? The good news is that several cloud industry publications, security vendors and research organizations are making a concerted effort to cut through the hype and provide CIOs with unbiased, research-driven data to help with the decision-making process.

According to Gartner’s 2011 CIO Agenda survey, just 3% of the CIOs surveyed say the majority of their IT operations are in the cloud today. Looking ahead, 43% say that within four years they expect to have the majority of their IT running in the cloud on Infrastructure-as-a-Service (IaaS) or Software-as-a-Service (SaaS) technologies. This article will review the security issues that are holding back CIOs right now, and what will be needed to accelerate that growth.

CIOs have a fiduciary duty and the ultimate responsibility (legally and ethically) to ensure that the corporation’s sensitive information and data are protected from unauthorized access. CIOs also have limited budgets and resources to work with, so they are always researching new and emerging technologies that will reduce cost, increase security and scalability, and maximize efficiencies in their infrastructure. Independent studies have demonstrated that both IaaS and SaaS cloud models decrease cost, increase scalability and are extremely efficient when it comes to rapid deployment of new systems. So what are the main security issues that have CIOs delaying a move to the cloud?

Perceived Lack of Control in the Cloud

To a CIO, control is everything; on the surface, hosting your sensitive information on an outsourced, shared, multi-tenant cloud platform would seem like a complete surrender and loss of control. How can you control the risk and security of an information system that resides in someone else’s data center and is co-managed by outsourced personnel?

There are several secure cloud service providers that understand this concern and have built their entire core business around providing facilities, services, policies and procedures that give their clients complete transparency and control over their information systems. Most secure cloud service providers have adopted and implemented the same security best practices and regulatory and compliance controls that CIOs enforce inside their own internal organizations, such as PCI DSS 2.0, NIST 800-53, ISO 27001 and ITIL.

In fact, CIOs can leverage a secure CSP’s infrastructure and services that may otherwise be cost-prohibitive to implement internally, thus giving them greater control over their information systems and sensitive data than they might have if those systems were hosted internally.

Another area of concern for CIOs is the perceived outsourcing of the risk management of their systems. A great level of trust is required between a secure CSP and a CIO. The CIO is dependent on the cloud service provider for patch management, vulnerability scanning, virus/malware detection, intrusion detection, firewall management, network management, account management, log management, and the list goes on and on. Certainly outsourcing all of these critical tasks would constitute loss of control, right? Wrong! As part of their standard service offering, most secure cloud service providers give customers system access, dashboards, portals, and configuration and risk reports in real time, giving CIOs complete control of and transparency into their systems. In fact, CIOs should consider secure cloud service providers as more of an extension of their own IT departments.

Multi-tenant Cloud Security – is it possible?

One area that keeps CIOs and potential cloud adopters awake at night is the idea that their virtual machines and data would reside on the same server as other customers’ VMs and data. In addition, multiple customers would also be accessing the same server remotely. As discussed in the previous paragraph, to a CIO control is everything. So is it possible to isolate and secure multiple environments in a multi-tenant cloud? The answer is YES.

So how do you secure a virtual environment hosted in a multi-tenant cloud? The same security best practices that apply to a dedicated, standalone information system also apply to a VM. Virtual machines live in a virtual network on the hypervisor, the operating system that your virtual machines run on top of. Through VM isolation, each tenant’s VMs are placed on their own virtual network, segregated from other tenants’ VMs; there is no way for other tenants to see your VMs or your data. The same goes for network security: you simply implement firewalls in front of your VMs, just as you would in front of a dedicated system.
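To make the firewall point concrete, here is a minimal sketch of a per-tenant, default-deny packet filter of the kind a provider places in front of each tenant’s VMs. The rule structure, addresses and ports are illustrative assumptions, not any specific vendor’s configuration.

```python
from ipaddress import ip_address, ip_network

# Illustrative allowlist for one tenant's VMs (all values are assumptions)
TENANT_RULES = [
    {"src": ip_network("203.0.113.0/24"), "port": 443, "action": "allow"},  # HTTPS from corporate range
    {"src": ip_network("0.0.0.0/0"),      "port": 22,  "action": "deny"},   # no public SSH
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' using first-match semantics; default deny."""
    for rule in TENANT_RULES:
        if ip_address(src_ip) in rule["src"] and dst_port == rule["port"]:
            return rule["action"]
    return "deny"  # anything not explicitly allowed is dropped

print(filter_packet("203.0.113.7", 443))   # allow
print(filter_packet("198.51.100.9", 22))   # deny
```

The default-deny posture at the end is the same best practice you would enforce on a dedicated firewall; multi-tenancy changes where the filter runs, not what it does.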

Another area of concern for CIOs that should not be left out is the topic of disk wiping and data remanence. In a public cloud, multi-tenant environment, customer data is typically co-mingled on a shared storage device. Conventional wisdom says that the only way to truly remove data from a disk drive is to literally shred the drives, and degaussing disks is time-consuming, expensive and not practical for a public cloud environment. So what can a cloud service provider do to address this problem and provide assurance to CIOs, system owners, and security and compliance officers that their data has been completely wiped from all storage in the public cloud? Again, the approach is the same as it would be for a dedicated system: boot the VM with a DoD-approved disk-wiping utility and perform the recommended number of passes to properly wipe the data from the shared storage.
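The multi-pass overwrite described above can be sketched as follows. This is a simplified illustration against a single file; production wiping of shared storage must use a vetted, DoD-approved utility run against the underlying block device, and the three-pass count here is an assumption, not a standard.

```python
import os
import secrets

def wipe_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes before deleting it.

    Illustrative only: a real disk-wiping utility operates on the raw
    block device and verifies each pass; this sketch shows the idea of
    repeated overwrites followed by removal.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # random data each pass
            f.flush()
            os.fsync(f.fileno())  # force the write through to storage
    os.remove(path)
```

Note that on copy-on-write or journaled storage even this pattern may leave remnants, which is why providers rely on purpose-built utilities and documented procedures.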

In summary, there are a variety of reasons CIOs are delaying their move to the cloud, from lifecycle management considerations to budgetary reasons. One area of concern that should not delay the move is cloud security. If architected and configured properly, using security best practices, either a private or a public cloud can securely host and protect your information systems and sensitive data.

Mark McCurley is the Director of Security and Compliance for FireHost, where he oversees security feature development and management of the company’s cloud hosting platform and PCI-compliant hosting environments. Prior to joining FireHost, McCurley played a key role in the development of a large managed service provider’s compliance practice, focused on delivering IT security, compliance and C&A services to commercial and Federal agencies. His career has centered around data centers and customer IT systems that need to adhere to federal, DoD and commercial compliance mandates and directives. He holds CISSP, CAP and Security+ certifications, and specializes in security and compliance for the following federal, DoD and commercial compliance mandates: DIACAP, FISMA, SOX, HIPAA and PCI.

Recent high-profile security incidents have heightened awareness of how Distributed Denial of Service (DDoS) attacks can compromise the availability of critical Web sites, applications and services. Any downtime can result in lost business, brand damage, financial penalties, and lost productivity. For many large companies and institutions, DDoS attacks have been a sobering wake-up call, and threats to availability are also one of the biggest potential hurdles to moving to, or rolling out, a cloud infrastructure.

Arbor Networks’ sixth annual Worldwide Infrastructure Security Report shows that DDoS attacks are growing rapidly and can vary widely in scale and sophistication. At the high end of the spectrum, large volumetric attacks, reaching sustained peaks of 100 Gbps, have been reported. These attacks exceed the aggregate inbound bandwidth capacity of most Internet Service Providers (ISPs), hosting providers, data center operators, enterprises, application service providers (ASPs) and government institutions that interconnect most of the Internet’s content.

At the other end of the spectrum, application and service-layer DDoS attacks focus not on denying bandwidth but on degrading the back-end computation, database and distributed storage resources of Web-based services. For example, service or application-level attacks may cause an application server to patiently wait for client data—thus causing a processing bottleneck. Application-layer attacks are the fastest-growing DDoS attack vector.
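A common defense against this “patiently waiting” pattern is to bound how long a worker will wait for client data, so a stalled or deliberately slow sender cannot pin server resources indefinitely. A minimal sketch, assuming a plain TCP handler:

```python
import socket

def handle_client(conn: socket.socket, deadline_s: float = 10.0) -> bytes:
    """Read a client's request, but never wait longer than deadline_s
    between receives; stalled senders are dropped instead of tying up
    the worker forever."""
    conn.settimeout(deadline_s)
    chunks = []
    try:
        while True:
            data = conn.recv(4096)
            if not data:          # peer closed cleanly
                break
            chunks.append(data)
    except socket.timeout:
        pass                      # slow client: give up and free the worker
    finally:
        conn.close()
    return b"".join(chunks)
```

Real application servers layer further limits on top of this (header timeouts, maximum request sizes, connection caps per source), but the read deadline is the core idea.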

Detecting and mitigating the most damaging attacks is a challenge that must be shared by network operators, hosting providers and enterprises. The world’s leading carriers generally use specialized, high-speed mitigation infrastructures—and sometimes the cooperation of other providers—to detect and block attack traffic. Beyond ensuring that their providers have these capabilities, enterprises must also deploy intelligent DDoS mitigation systems to protect critical applications and services.

Why Existing Security Solutions Can’t Stop DDoS Attacks

Why can’t enterprises protect themselves against DDoS attacks when they have sophisticated security technology? Enterprises continuously deploy products like firewalls and Intrusion Prevention Systems (IPS), yet the attacks continue. While IPS, firewalls and other security products are essential elements of a layered-defense strategy, they do not solve the DDoS problem. They are designed to protect the network perimeter from infiltrations and exploits and to act as policy enforcement points, so they rely on stateful traffic inspection to enforce network policy and integrity. This makes these devices susceptible to state resource exhaustion, which results in dropped traffic, device lock-ups and potential crashes.
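The state-exhaustion failure mode can be illustrated with a toy model of a bounded connection table. The class, capacity and flow identifiers are assumptions for illustration, not how any real firewall or IPS is implemented:

```python
from collections import OrderedDict

class StateTable:
    """Toy model of a stateful device's connection table: once capacity
    is reached, new flows are refused, which is exactly the condition a
    state-exhaustion DDoS attack tries to create."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.flows = OrderedDict()

    def admit(self, flow_id) -> bool:
        if flow_id in self.flows:
            return True               # existing flow keeps working
        if len(self.flows) >= self.capacity:
            return False              # table full: new flows are dropped
        self.flows[flow_id] = True
        return True

table = StateTable(capacity=3)
for i in range(3):                    # attacker opens junk flows
    table.admit(("attacker", i))
print(table.admit(("legit-user", 443)))  # False: legitimate flow refused
```

The point of the model is that the attacker does not need to break the device, only to fill its table; legitimate and malicious flows are then refused alike.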

The application-layer DDoS threat actually amplifies the risk to data center operators. That’s because IPS devices and firewalls become more vulnerable to the increased state demands of this emerging attack vector—making the devices themselves more susceptible to the attacks. Moreover, there is a distinct gap in the ability of existing edge-based solutions to leverage the cloud’s growing DDoS mitigation capacity, the service provider’s DDoS infrastructure or the dedicated DDoS mitigation capacity deployed upstream of the victim’s infrastructure.

Current solutions do not take advantage of the distributed computing power available in the network and cannot coordinate upstream resources to deflect an attack before saturating the last mile. No existing solution enables both DDoS mitigation at the edge and in the cloud.

Enterprises need comprehensive, integrated protection from the data center edge to the service provider cloud. For example, when data center operators discover they are under a service-disrupting DDoS attack, they should be able to quickly mitigate the attack in the cloud by triggering a signal to the upstream infrastructure of their provider’s network.

The following scenario demonstrates the need for cloud signaling from an enterprise’s perspective. A network engineer notices that critical services such as corporate sites, email and DNS are no longer accessible. After a root cause analysis, the engineer realizes that the company’s servers are under a significant DDoS attack. Because external services are down, the entire company, along with its customers, is suddenly watching every move. The engineer must then work with customer support centers at multiple upstream ISPs to coordinate a broad DDoS mitigation response to stop the attack.

Simultaneously, he must provide constant updates internally to management teams and various application owners. To be effective, the engineer must also have the right internal tools available in front of the firewalls to stop the application-layer attack targeting the servers. All of this must be done in a high-pressure, time-sensitive environment.

Until now, no comprehensive threat resolution mechanism has existed that completely addresses application-layer DDoS attacks at the data center edge and volumetric DDoS attacks in the cloud. True, many data center operators have purchased DDoS protection services from their ISP or MSSP. But they lack a simple mechanism to connect the premises to the cloud and a single dashboard to provide visibility. Such capabilities could stop targeted application attacks as well as upstream volumetric threats that may be distributed across multiple providers.

The previous hypothetical scenario would be quite different if the data center engineer had the option of signaling to the cloud. Once he discovered that the source of the problem was a DDoS attack, the engineer could choose to mitigate it in the cloud by triggering a cloud signal to the provider network. The cloud signal would include details about the attack to increase the effectiveness of the provider’s response. This would take internal pressure off the engineer from management and application owners. It would also allow the engineer to communicate with the upstream cloud provider to give more information about the attack and fine-tune the cloud defense.
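The article does not specify the wire format of a cloud signal, but conceptually the payload carries enough attack detail for the upstream provider to act. A hypothetical sketch follows; every field name is an assumption for illustration, not Arbor’s actual Cloud Signaling protocol:

```python
import json

# Hypothetical cloud-signal payload from a data center edge device to an
# upstream provider's mitigation infrastructure (field names are assumptions).
signal = {
    "site_id": "dc-east-1",
    "event": "ddos_mitigation_request",
    "attack": {
        "vector": "syn_flood",
        "dest": "198.51.100.25",        # victim address under attack
        "observed_bps": 4_200_000_000,  # bits/sec seen at the edge
        "observed_pps": 6_500_000,      # packets/sec seen at the edge
    },
}

print(json.dumps(signal, indent=2))
```

Whatever the real encoding, the design point is the same: the edge device that can see the application-layer detail shares it with the upstream infrastructure that has the capacity to absorb the volumetric component.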

As DDoS attacks become more prevalent, data center operators and service providers must find new ways to identify and mitigate evolving DDoS attacks. Vendors must empower data center operators to quickly address both high-bandwidth attacks and targeted application-layer attacks in an automated and simple manner. This saves companies from major operational expense, customer churn and revenue loss. It’s called Cloud Signaling and it’s the next step in protecting data centers in the cloud, including revenue-generating applications and services.

Rakesh Shah has been with Arbor Networks since 2001, helping to take products from early stage to category-leading solutions. Before managing the product marketing group, Rakesh was the Director of Product Management for Arbor’s Peakflow products, and he was also a manager in the engineering group. Previously, Rakesh held various engineering and technical roles at Lucent Technologies and CGI/AMS. He holds an M.Eng. from Cornell University and a B.S. from the University of Illinois at Urbana-Champaign, both in Electrical and Computer Engineering.

Cloud computing changes the equation of responsibility and accountability for information security and poses some new challenges for enterprise IT. At Vormetric we are working with service providers and enterprises to help them secure and control sensitive data in the cloud with encryption, which has given us a good perspective on the issues surrounding who is responsible for cloud security.

While data owners are ultimately accountable for maintaining security and control over their information, the cloud introduces a shared level of responsibility between the data owner and the service provider. This division of responsibility varies depending on the cloud delivery model and specific vendor agreements with the cloud service provider (CSP). In addition, the use of multi-tenant technology by CSPs to achieve economies of scale by serving customers using shared infrastructure and applications introduces another layer of risk.

Where the buck stops or gets passed on poses some new operational and legal issues. Let’s look at each cloud delivery model to understand how each creates a slightly different balance of security responsibility between the data owner and CSP.

Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) models typically place much of the responsibility for data security and control in the hands of the SaaS or PaaS provider. There is not much leeway for enterprises to deploy data security or governance solutions in SaaS and PaaS environments since the CSP owns most of the IT and security stack.

Infrastructure-as-a-Service (IaaS) tilts the balance towards a greater degree of shared responsibility. IaaS providers typically provide some baseline level of security such as firewalls and load balancing to mitigate Distributed Denial of Service (DDoS) attacks. Meanwhile, responsibility for securing the individual enterprise instance and control of the data inside of that instance typically falls to the enterprise.

A widely-referenced example that clearly describes IaaS security responsibilities can be found in the Amazon Web Services Terms of Service. While enterprises can negotiate liability, terms and conditions in their Enterprise Agreements with service providers, the IaaS business model is not well suited for CSPs to assume inordinate amounts of security risk. CSPs aren’t typically willing to take on too much liability because this could jeopardize their business.

Since an enterprise’s ownership of security in the cloud gradually increases between SaaS, PaaS and IaaS, it’s important to clearly understand the level of responsibility provided in the terms and conditions of CSP agreements.

Having established what a cloud provider is delivering in the way of security, enterprises should backfill these capabilities with additional controls necessary to adequately protect and control data. This includes identity and access management, encryption, data masking and monitoring tools such as Security Information and Event Management (SIEM) or Data Loss Prevention (DLP). One valuable resource for evaluating cloud service provider security is the Cloud Security Alliance Cloud Controls Matrix.
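As a concrete illustration of one backfilled control, data masking can be as simple as the following sketch; the function name and card-number format are illustrative assumptions:

```python
def mask_pan(pan: str) -> str:
    """Mask a primary account number down to its last four digits,
    an example of the data-masking controls an enterprise might layer
    on top of a CSP's baseline security."""
    digits = pan.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # ************1111
```

Masking like this protects data in lower-trust contexts (logs, test environments, support screens) even when the CSP’s own controls are sound.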

Enterprises looking to further mitigate the risk of data security incidents in the cloud can also investigate cyber insurance offerings that protect against events such as cyber extortion, loss of service or data confidentiality breaches. Finally, enterprises should develop both a data recovery plan and an exit strategy in case they need to terminate their relationship with a CSP.

Cloud security is a new and evolving frontier for enterprises as well as CSPs. Understanding the roles, responsibilities, and accountability for security in the cloud is critical for making sure that data is protected as well in the cloud as it is in an enterprise data center. The process starts with a thorough due diligence of what security measures are provided and not provided by the CSP, which enables enterprises to know where they need to shore up cloud defenses. Until further notice, the cloud security buck always stops with the enterprise.

Infosec veterans probably remember (with a smirk) how Public Key Infrastructure (PKI) was heralded as the next “big thing” in information security at the dawn of the 21st century. While PKI failed to reach the broad adoption the hype suggested, certain PKI capabilities, such as key management, are still important. The Diffie-Hellman key exchange protocol, which solved the serious technical challenge of how to establish a shared secret key over an insecure channel, essentially laid the groundwork for PKI.
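Diffie-Hellman’s core trick, each party combining its own private value with the other’s public value to arrive at the same shared secret, can be shown with textbook-sized numbers. This is for illustration only; real deployments use vetted large prime groups or elliptic curves, and private values are chosen at random.

```python
# Textbook Diffie-Hellman with tiny parameters, illustration only.
p, g = 23, 5              # public modulus and generator
a, b = 6, 15              # private values (normally random and large)

A = pow(g, a, p)          # Alice sends A over the insecure channel
B = pow(g, b, p)          # Bob sends B over the insecure channel

shared_alice = pow(B, a, p)   # Alice combines her secret with Bob's public value
shared_bob = pow(A, b, p)     # Bob combines his secret with Alice's public value
assert shared_alice == shared_bob   # both arrive at the same key
print(shared_alice)       # the shared secret itself never crosses the wire
```

An eavesdropper sees p, g, A and B, but recovering the secret from those requires solving the discrete logarithm problem, which is infeasible at real-world key sizes.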

I had not thought about key management until a recent visit to my local car dealer for an oil change. While waiting, I noticed several dealer employees struggling with a large wall-mounted metal box. This box is the dealer’s central repository for all car keys on the dealer’s lot. The box is accessed via a numeric keypad which appeared to be a sensible approach since the keypad logs all access attempts for auditing and tracking purposes.

However, on this particular day, the numeric codes would not open the box, leaving the keys inaccessible and employees quite frustrated. I left before seeing how the problem was resolved, but this incident reminded me of key management and how this technology is still crucial for data management, especially with the rise of cloud computing.

Key management often goes unnoticed for extended periods of time and only surfaces when a problem appears, as was the case at the dealer. When problems appear, key management is either the solution or the culprit. In the latter case, key management is generally the culprit because of an improper implementation. Poor key management can create several significant problems such as:

Complete compromise: A poor key management system, if broken, could mean that all keys are compromised and all encrypted data is thus at risk (see my postscript for a great example). And fixing a broken key management system can be complex and costly.

Inaccessibility: As I witnessed at the dealer, a poorly implemented key management system may prevent some or all access to encrypted data. That may seem good from a security standpoint, but the security must be weighed against the inconvenience and productivity loss created by being unable to access data.

With the continued stream of data breaches that appear in daily headlines, a common refrain is that data encryption is the solution to preventing data breaches. While data encryption is certainly a good security best practice and important first step, especially for sensitive data or PII, effective key management must accompany any data encryption effort to ensure a comprehensive implementation.

Here’s why.

Just throwing encryption at a problem, especially after a breach, is not a panacea; it must be deployed within the context of a broader key management system. NIST Special Publication 800-57, “Recommendation for Key Management, Part 1: General,” published in March 2007, states:

“The proper management of cryptographic keys is essential to the effective use of cryptography for security. Keys are analogous to the combination of a safe. If the combination becomes known to an adversary, the strongest safe provides no security against penetration. Similarly, poor key management may easily compromise strong algorithms. Ultimately, the security of information protected by cryptography directly depends on the strength of the keys, the effectiveness of mechanisms and protocols associated with keys, and the protection afforded the keys.”

Even though this NIST publication is more than four years old, this statement is still relevant.

A centralized key management solution should address the three R’s: renewal, revocation and recovery. Key management is necessary to solve problems such as:

Volume of keys: In a peer-to-peer model, using freeware like PGP may work, but when you are an organization with thousands of users, you need centralized key management. Just as organizations need to revoke privileges and entitlements when a user leaves, you need to do the same with cryptographic keys. This can only be achieved via central key management and would crumble in a peer-to-peer model.

Archiving and data recovery: Data retention policies vary by regulation and policy, but anywhere from three to 10 years is common. If archived data is encrypted (generally a good practice), key management is necessary to ensure that the data can be recovered and decrypted in the future if needed as part of an investigation. The growth of cloud-based storage makes this problem particularly acute.
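The three R’s can be sketched as a minimal centralized key registry. Class, method and field names here are illustrative assumptions, not any product’s API:

```python
from datetime import datetime, timedelta, timezone

class KeyRegistry:
    """Minimal sketch of centralized key management covering the three
    R's: renewal, revocation and recovery."""

    def __init__(self):
        self._keys = {}     # key_id -> {"material", "revoked", "expires"}
        self._escrow = {}   # key_id -> archived material, oldest first

    def create(self, key_id, material, ttl_days=365):
        self._keys[key_id] = {
            "material": material,
            "revoked": False,
            "expires": datetime.now(timezone.utc) + timedelta(days=ttl_days),
        }
        # Escrow a copy so archived data stays recoverable years later.
        self._escrow.setdefault(key_id, []).append(material)

    def renew(self, key_id, new_material, ttl_days=365):
        # Rotate material while archiving the old copy for recovery.
        self.create(key_id, new_material, ttl_days)

    def revoke(self, key_id):
        # e.g. invoked when an employee leaves the organization.
        self._keys[key_id]["revoked"] = True

    def is_active(self, key_id) -> bool:
        rec = self._keys[key_id]
        return not rec["revoked"] and rec["expires"] > datetime.now(timezone.utc)

    def recover(self, key_id, version=-1):
        # Retrieve archived material to decrypt old backups or archives.
        return self._escrow[key_id][version]
```

A real system would add access controls, auditing of every operation and hardware protection of the escrowed material, which is exactly the checks-and-balances point made below.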

Organizations that encrypt data without a centralized comprehensive key management system are still at risk of a breach because the lack of a centralized system can cause inconsistencies and error-prone manual processes. Further, today’s sophisticated hackers are more likely to attack a poorly implemented key management system rather than attack an encrypted file, much like the German Army flanked France’s Maginot Line in 1940 to avoid dealing with the line’s formidable defenses. This is why an important aspect of key management is ensuring appropriate checks and balances on the administrators of these systems as well as ongoing auditing of the key management processes and systems to detect any potential design errors, or worse, malicious activity by authorized users.

Key management is not going away. As cloud computing adoption grows, key management is going to become even more crucial especially around data storage in the cloud. We have already seen some examples with online storage providers that show how key management is already an issue in the cloud. Cloud computing and encryption are great concepts, but organizations must accompany these with a sound key management strategy. Otherwise, the overall effectiveness of such systems will be reduced.

PS: A great example of what happens with an ineffective key management implementation is convicted spy John Walker, who managed cryptographic keys for U.S. Naval communications but copied the keys and gave them to the USSR for cash. Walker compromised a significant volume of U.S. Navy encrypted traffic, but because there was no significant auditing of his duties, his spying went undetected for years. There are several books on the Walker case, but I recommend Pete Earley’s “Family of Spies.”

Merritt Maxim is director of IAM product marketing and strategy at CA Technologies. He has 15+ years of product management and product marketing experience in Identity and Access Management (IAM) and the co-author of “Wireless Security.” Merritt blogs and is an active tweeter on a range of IAM, security & privacy topics. Merritt received his BA cum laude from Colgate University and his MBA from the MIT Sloan School of Management.

Despite a broader interest in cloud computing, many organizations have been reluctant to embrace the technology due to security concerns. While today’s businesses can benefit from cloud computing’s on-demand capacity and economies of scale, the model does require that they relinquish part of their control over applications and data.

Unfortunately, security controls vary significantly from one cloud provider to the next. Therefore, companies need to make certain the providers they use have invested in state-of-the-art security measures. This will help ensure that a company’s customer security and data protection policies can be seamlessly extended to the cloud applications to which they subscribe. Best practices dictate that critical information should be protected at all times, and from all possible avenues of attack. When evaluating cloud providers, practitioners should address four primary areas of concern — application, infrastructure, process and personnel security — each of which is subject to its own security regimen.

1. Application Security

With cloud services, the need for security begins as soon as users access the supporting application. The best cloud providers protect their offerings with strong authentication and equally potent authorization systems. Authentication ensures that only those with valid user credentials (who can also prove their identity claims) obtain access, while authorization controls allow administrators to decide which services and data items users may access and update. Multi-factor authentication may also be provided for controlling access to high sensitivity privileges (e.g. administrators) or information.
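The split between authentication (proving who you are) and authorization (deciding what you may do) can be sketched as a simple role-to-permission check; the role and permission names are illustrative assumptions:

```python
# Authorization check applied after a user has already authenticated.
# Roles and permissions below are illustrative, not a real product's model.
ROLE_PERMISSIONS = {
    "admin":  {"read", "update", "configure"},
    "editor": {"read", "update"},
    "viewer": {"read"},
}

def authorize(user_roles: set, action: str) -> bool:
    """Allow the action if any of the user's roles grants it."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

print(authorize({"viewer"}, "read"))       # True
print(authorize({"viewer"}, "configure"))  # False
```

In a cloud provider’s stack, a check like this runs on every request, and the high-sensitivity roles (such as admin) are the ones gated behind the multi-factor authentication mentioned above.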

All application-level access should be protected using strong encryption to prevent unauthorized sniffing or snooping of online activities. Application data needs to be validated on the way in and on the way out to ensure security. Robust watermarking features ensure that materials cannot be reproduced or disseminated without permission. More advanced security measures include the use of rights management technology to enforce who can print, copy or forward data, and prevent such activity unless it is specifically authorized, as well as impose revocation and digital shredding even after documents leave the enterprise.

2. Infrastructure Security

Best-in-class providers will have a highly available, redundant infrastructure to provide uninterruptible services to their customers. A cloud provider or partner should use real-time replication, multiple connections, alternate power sources and state-of-the-art emergency response systems to provide complete and thorough data protection. Network and periphery security are paramount for infrastructure elements. Therefore, leading-edge technologies for firewalls, load balancers and intrusion detection/prevention should be in place and continuously monitored by experienced security personnel.

3. Process Security

Cloud providers, particularly those involved in business critical information, invest large amounts of time and resources into developing security procedures and controls for every aspect of their service offerings. Truly qualified cloud providers will have earned SAS 70 Type II certification or international equivalents. Depending upon geography or industry requirements, they may have enacted measures to keep their clients in compliance with appropriate regulations (e.g., the U.S. Food and Drug Administration (FDA) 21 CFR 11 regulations for the Pharmaceutical industry). ISO-27001 certification is another good measure of a provider’s risk management strategies. These certifications ensure thorough outside reviews of security policies and procedures.

4. Personnel Security

People are an important component of any information system, but they can also present insider threats that no outside attacker can match. At the vendor level, administrative controls should be in place to limit employee access to client information. Background checks of all employees and enforceable confidentiality agreements should be mandatory.

Putting Providers to the Test

When evaluating a cloud provider’s security approach, it’s important to ask them to address how they provide the following:

Holistic, 360-degree security: Providers must adhere to the most stringent of industry security standards, and meet client expectations, regulatory requirements and prevailing best practices.

This includes their coverage of application, data, infrastructure, product development, personnel and process security.

Complete security cycle: A competent cloud provider understands that implementing security involves more than technology — it requires a complete lifecycle approach. Providers should offer a comprehensive approach to training, implementation and auditing/testing.

Proactive security awareness and coverage: The best cloud providers understand that security is best maintained through constant monitoring, and they take swift, decisive steps to limit potential exposures to risks.

Defense-in-depth strategy: Savvy cloud vendors understand the value of defense in depth, and can explain how they use multiple layers of security protection to protect sensitive data and assets.

24/7 customer support: Just as their applications are available around-the-clock, service providers should operate support and incident response teams at all times.

Tips for Obtaining Information from Service Providers

When comparing cloud providers, it is essential to check their ability to deliver on their promises. All cloud providers promise to provide excellent security, but only through discussions with existing customers, access to the public record and inspection of audit and incident reports can the best providers be distinguished from their run-of-the-mill counterparts.

Ideally, obtaining information about security from providers should require little or no effort. The providers who understand security — particularly those for whom security is a primary focus — will provide detailed security information as a matter of course, if not a matter of pride.

Fahim has been with IntraLinks since January 2008. Prior to joining IntraLinks, he served as CEO at Sereniti, a privately held technology company. He was also the Managing Partner of K2 Software Group, a technology consulting partnership providing product solutions to companies in the high tech, energy and transportation industries. Previously, Fahim held executive and senior management positions in engineering and information systems with ICG Telecom, Enron Energy Services, MCI, Time Warner Telecommunications and Sprint.

I am a huge proponent of cloud-based solutions, but I also have a bone to pick with people who look to the cloud just for the cloud’s sake and do not take the time to do due diligence. While the cloud can bring strong technical, economic and business benefits if managed correctly, it can also cause pain, just like any solution for which you do not follow clear evaluation criteria to make sure it meets your needs today and in the future.

In my many discussions with IT leaders and from my own experience, I have outlined the top six cloud gotchas that you need to watch out for:

Standards: The cloud, while seemingly everywhere right now, is still relatively young, with minimal standards. This one is particularly important with Platform as a Service (PaaS) vendors. Many of these platforms provide an easy-to-use and fast-to-deploy application development and life cycle environment. However, most are also based on proprietary platforms that do not play well with other solutions. It’s important to understand the potential for proprietary lock-in, as well as how you interface with the cloud platform or with the API infrastructure.

Flexibility: This seems an odd cloud gotcha, since flexibility and agility are touted among the cloud’s greatest benefits. In this case, I’m talking about flexibility within the cloud environment and in the way you interact with the cloud. What communication protocols are supported, such as REST, SOAP or FTPS? In the PaaS world, what languages are supported? Is it flexible, or is it, for example, a Java-only or .NET-only environment? Does it have a flexible API infrastructure?

Reliability & Scalability: Everyone knows that the cloud provides on-demand scalability, but make sure your solution scales both up and DOWN – with the latter being the sticking point for most companies. Burst capacity and quick additions of capacity might be easy, but what if you want to scale back your deployment? Make sure that is just as easy and carries no penalties. Overall, know the bandwidth capability across the deployment, not just the first or last mile. On the reliability front, be wary of claims of four or five nines (99.99% or 99.999% uptime) and ask for an uptime report from your cloud vendor. Build uptime into your SLA (service level agreement) if the cloud deployment is mission critical for your business.
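The “nines” in an uptime claim translate directly into minutes of allowed downtime, which is worth computing before you sign an SLA. A quick sketch (plain Python; the figures below are simple arithmetic, not any vendor’s numbers):

```python
# Convert an SLA uptime percentage into maximum allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(uptime_percent: float) -> float:
    """Minutes of downtime per year permitted by a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

for nines in (99.9, 99.99, 99.999):
    print(f"{nines}% uptime -> {downtime_minutes_per_year(nines):.1f} min/year")
```

Three nines allows roughly 525 minutes of downtime a year, while five nines allows only about five; that gap is exactly why an uptime report, not just a marketing claim, belongs in the SLA discussion.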

Security: This one is probably the most discussed and debated. I believe, and many vendors have proved, that a cloud-based solution can be as secure as, if not more secure than, an on-premise approach. But as with technology in general, not all clouds are created equal, and security needs to be evaluated holistically. The platform should provide end-to-end data protection, which means encryption both in motion and at rest, as well as strong and auditable access control rules. Do you know where the data is located amid the vendor’s many data centers, and is the level of data protection consistent across all of those environments? Does the vendor use secure protocols, such as SSL/TLS, for moving the data? Look for key compliance adherence by the vendor, such as PCI DSS and SAS 70 Type II. There’s a reason the Cloud Security Alliance (CSA) is now developing PCI courseware: there’s a clear link between the security capabilities of a cloud platform and its ability to meet the stringent security and data protection demands of the PCI mandate.
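On the “encryption in motion” point, a cloud consumer can enforce modern transport security from its own side of the connection. A minimal sketch using Python’s standard ssl module (the endpoint name is hypothetical, and the actual network call is left commented out):

```python
import ssl

# Build a client context; certificate verification is on by default.
context = ssl.create_default_context()

# Refuse legacy protocol versions; only TLS 1.2+ connections will succeed.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Sanity checks: hostname verification and certificate validation are enabled.
assert context.check_hostname is True
assert context.verify_mode == ssl.CERT_REQUIRED

# The context would then wrap a socket to the provider's API endpoint, e.g.:
# with socket.create_connection(("api.example-cloud.test", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="api.example-cloud.test") as tls:
#         print(tls.version())
```

A check like this belongs on the client even when the vendor promises strong transport security, since it makes the “encryption in motion” requirement verifiable rather than assumed.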

Costs: I can hear everyone now saying “duh,” this is obvious. Yes, the initial cost of deployment or your monthly subscription fee is an easy evaluation. However, look for hidden or unexpected costs, and make sure you fully understand the pricing model. Many cloud solutions are cost-effective for a standard deployment, but then each additional module or add-on feature hits you with extra costs. Does the vendor charge separately for support? Are upgrades to new versions included? Also, there are often pricing tiers or “buckets,” and when you hit the next tier, your costs can increase significantly. Finally, look for a way to clearly show your ROI or success metrics for the solution. Align your costs with your expected results, whether quantitative or qualitative. This is particularly important if your company is new to cloud consumption, as your ability to show success with an initial deployment will influence future implementations.

Integration: Integration is truly the missing link in the cloud. It’s so appealing to put our data in the cloud or develop new applications or extend our current infrastructure that sometimes we forget that the data in the cloud needs to be accessible, secured and managed just like on-premise data. How are you migrating data to the cloud? If you are putting everything on a physical disk and shipping it to the cloud vendor, doesn’t that rather run contrary to the whole cloud benefit? How are you exchanging and sharing information between cloud-based environments and on-premise infrastructure or even between two clouds? Think about integration before you deploy a new cloud solution and think about integration among internal systems and people as well as external partners and corporate divisions. Gartner is doing a lot of work in this area, and has a new market category called “cloud brokers”.

As I’ve said many times in presentations on the cloud, you should first buy the solution, then buy the cloud. The cloud is not a panacea, and while a cloud architectural approach brings strong business and IT value, you need to thoroughly evaluate any solution to ensure it not only meets your company’s technical and business requirements, but also enables you to grow and evolve.

Margaret Dawson is vice president of product management for Hubspan. She’s responsible for the overall product vision and roadmap and works with key partners in delivering innovative solutions to the market. She has over 20 years’ experience in the IT industry, working with leading companies in the network security, semiconductor, personal computer, software, and e-commerce markets, including Microsoft and Amazon.com. Dawson has worked and traveled extensively in Asia, Europe and North America, including ten years working in the Greater China region, consulting with many of the area’s leading IT companies and serving as a BusinessWeek magazine foreign correspondent.

When you meet someone for the first time, in a place you have never been, do you trust him? Would you have him hold your wallet, or would you share sensitive personal information with him? Of course not. Obviously this person is not trusted by you at this point in time, but that doesn’t mean he never could be. Assuming you have good, trustworthy friends, it’s possible that this person could be trusted if you got to know him better. This analogy can be applied to the current state of security and trust with the public cloud.

The biggest barrier to broader and faster adoption of public cloud services (whether SaaS, PaaS, or IaaS) is trust. Consider the results of nearly any survey on cloud adoption, or talk with your friends and colleagues in IT, and you’ll find the message is the same: the public cloud has great promise and impressive early adoption, but there remains a nagging set of concerns that are proving hard to address. Many characterize these concerns as being about security. While I agree there are important issues around security that need to be resolved, such as how security can be managed jointly by the cloud provider and cloud consumer, I prefer characterizing the issue more broadly as being about trust. Overall, though, it stands to reason that the greater the trust, the greater the adoption.

Trust is about more than just security controls. Trust also emerges from good execution of “abilities,” such as reliability, availability, portability, and interoperability. Is the public cloud trustworthy for organizations’ more sensitive and mission-critical applications and data? The only one who can ultimately decide this for you is you. While trust can be influenced by third parties, it can only occur between two parties.

In order to improve their trustworthiness, cloud providers should:

Avoid being a black box, particularly for security and “ability”-related systems and processes. I am not saying public cloud providers should publicly disclose everything and risk elevating their vulnerability levels, but they should give customers as much control over, and visibility into, the services, systems and processes being delivered for them as possible. The systems and processes within their services should not be a secret. Control or visibility of the customers’ services should extend all the way up the application stack: from the network through the storage, servers, applications, and data. People tend to trust those who don’t appear to be hiding anything, and thus transparency by cloud providers can help foster trust. Audits can also serve as vehicles to gain trust, whether they are done by third parties or by the customers themselves.

Improve trust by reducing technical lock-in. Portability will be high on the list of cloud consumers. Instead of keeping your customers through technical lock-in, put your fate in your customers’ hands right at the beginning of the relationship and make sure they have all the flexibility needed to swap vendors. Make sure that your cloud service, as appropriate, has data and application service portability that is crisply defined and free or inexpensive to invoke. Bend over backward to avoid technical lock-in, and strive to keep your customers through great service at a great price. In addition, offer clear SLAs with strong warranties. Put clear financial penalties in your SLAs for missing their terms, and maybe even bonuses for surpassing them. I recognize that this may be counterintuitive for some, but if the goal is enhancing trust, this is a great way to do it.

When things go wrong be open and honest about them. Said another way, keep your promises. And if you can’t, tell your cloud customers quickly and honestly about your mistakes and explain what you are going to do better next time. In fact, this should be part of your corporate philosophy, so that your prospective customers hear about it before they actually experience it. Just like with personal relationships, most good cloud provider/cloud consumer relationships can survive some broken promises.

We all know that trust is relative, as in “I trust that person (or service) more than that one” or “I trust this service more than I used to.” Mathematically I think of it this way: Trust = Performance x Time. As good performance accumulates over time overall trust goes up. And good performance over a short time period elicits some more trust, but not much more. For public cloud services to attain their prospective position as the next major IT service delivery architecture (following mainframe, client/server, and Web) it is imperative that the industry take proactive steps to improve their trustworthiness.

Matthew Gardiner is a Director working in the Security business unit at CA Technologies. He is a recognized industry leader in the security and Identity & Access Management (IAM) markets worldwide. He writes and is interviewed regularly in leading industry media on a wide range of IAM, cloud security, and other security-related topics. He is a member of the Kantara Initiative Board of Trustees. Matthew has a BSEE from the University of Pennsylvania and an SM in Management from MIT’s Sloan School of Management. He blogs regularly at: http://community.ca.com/members/Matthew-Gardiner.aspx and also tweets @jmatthewg1234. More information about CA Technologies can be found at www.ca.com.

Everyone loves standards, right? When is the last time you heard a vendor proudly say that their product or service was closed and proprietary? However, it also seems that every time a new IT architecture sweeps through the market, this time one based on cloud models, the lesson of the critical value of standards needs to be relearned. While it is easy to poke fun at standards by saying such things as “I love standards because there are so many from which to choose,” it is also easy to see the incredible value that they can unlock. Look at the Internet itself as an example. It is hard to imagine the cloud reaching its potential without a set of widely adopted standards – security and otherwise.

In the context of this blog when I refer to security standards, I am talking about security interface standards (basically cloud security APIs) that enable security systems in one domain, whether in a cloud service or in an on-premise enterprise system, to communicate and interoperate programmatically with security systems in other domains. The absence of such standards drives the use of customized integrations which have been the bane of IT agility since the beginning of modern computing.

Why is it that everyone loves standards in concept, including those for security, yet standards definition and deployment are often less than speedy? Why doesn’t everyone involved just pull together and solve this obvious problem now, instead of waiting until we are all suffering from the lack of standards? While this is a general issue with standards, let’s look at it through the lens of the emerging public cloud-based services (public IaaS, PaaS, and SaaS). There are both rational and less rational reasons why standards are developed and used more slowly than would deliver maximum benefit.

While not the only factor to consider, the reality is that standards must be seen as an element of the overall competitive struggle among vendors, where differentiation is key. There are logical economic reasons why market-dominant vendors — in this case dominant cloud service providers — tend to be wary of using publicly available interface standards for their services: doing so makes their differentiation that much harder and lowers the cost of switching to competitive services. Thus interface standards can serve as a competitive threat.

While no vendor will come out explicitly against standards (remember that everybody loves them), when pressed on the issue, they will come back with answers such as, “existing standards are too immature” or the “market is moving too fast to standardize yet” to explain why they are not moving more quickly to standardize their interfaces. Of course they might be partially right, but these are not objections which generally hold up under explicit and consistent customer demand for standardization. See the broad adoption of SAML by cloud providers as an example of what this pressure can accomplish.

This leads me to one of the less rational reasons why standards are not used as readily as they could be: lack of customer vision! Without a clear long-term vision of how cloud services will be engaged to support the business, customers of today’s cloud service providers basically stumble into using the available proprietary interfaces, and thus enable the current providers largely to get away with not providing standards-based interfaces. IT departments are doing what they need to get the job done, which optimizes short-term results, but unfortunately at the expense of the longer term.

What does the future of the cloud look like over the next three to five years? In my view organizations of all sizes will be deep in the middle of a dynamic and hybrid mix of public cloud services, private cloud services, and traditional on-premise IT systems. The mix will vary by organization. We could see 20 percent public cloud services and 80 percent on-premise and private cloud services at some organizations, and a 50/50 split or some other mix at others. Even within the public cloud category there will be a tremendous variety of usage at most organizations, not only in the types of cloud services used (Infrastructure-as-a-Service, Platform-as-a-Service, Software-as-a-Service) but also in the variety of service providers from which they receive them. If you agree with this view of the future, then you should understand the need to use security interface standards to enable effective security management across them.

If supporting dynamic and hybrid IT requires organizations to continually build-up and tear down proprietary security integrations that bridge their on-premise and cloud worlds, then they will either be spending an inordinate amount of time and money creating these integrations, or worse will be living in the middle of a hodge-podge of security silos, which are neither secure nor convenient for the users.

For the cloud to reach its potential as the next transformative IT architecture, akin to the Internet itself, it is critical that it operate like Lego bricks that can be assembled and re-assembled quickly and securely as required. Furthermore, it is imperative that automated controls, both preventive and detective, can be configured to flow back and forth among all components of the organization’s mix of public and on-premise IT systems. This prospective future is not as far off as it might seem. Many security interface standards already exist (XACML, WS-Security, CloudAudit) that were built to enable the hybrid cloud and on-premise application world, and some, such as SAML, are already relatively widely deployed. The primary issue now is the adoption of these standards.

While I recognize that collective action on the use of security standards such as these is not easy, I believe it is imperative that customers start envisioning and working towards this future now – and pushing their cloud service providers to get onboard with it too.


Federation is a model of identity management that distributes the various individual components of an identity operation among different actors, the presumption being that the jobs can be distributed according to which actors are best suited or positioned to take them on. For instance, when an enterprise employee accesses services at a SaaS provider, single sign-on (SSO) has the employee’s authentication performed by her company, but the subsequent authorization decision made by the SaaS provider.

Federation’s primary underlying mechanisms are ‘security tokens.’ It is through the creation, delivery, and interpretation of security tokens between the actors involved in a transaction that each is given the necessary information to perform its function. Security tokens serve to insulate each actor from the specific IT and security infrastructure of its partners, customers, and others by standardizing how identity information can be shared across company and policy boundaries. Returning to the enterprise employee SSO example: after authenticating the employee, the enterprise creates a security token attesting to that fact, along with additional attributes that might determine what actions she can perform at a particular SaaS provider (e.g. she is in Engineering, not Sales), and then delivers the security token to the SaaS provider. The SaaS provider, rather than directly authenticating the employee against some stored password, instead relies on the authentication performed by the enterprise, and acts accordingly.
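The token flow above can be sketched in miniature. The following is an illustrative toy, not SAML itself: the “enterprise” signs a token carrying the subject and her attributes with a key it shares with the SaaS provider, and the provider verifies the signature instead of checking a password (all names and the shared key are hypothetical):

```python
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"enterprise-and-saas-shared-secret"  # hypothetical out-of-band key

def issue_token(subject: str, attributes: dict) -> str:
    """Enterprise side: sign an attestation that the subject authenticated."""
    payload = json.dumps({"sub": subject, "attrs": attributes}).encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())

def verify_token(token: str) -> dict:
    """SaaS provider side: trust the enterprise's attestation, not a password."""
    payload_b64, sig_b64 = token.split(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        raise ValueError("invalid token signature")
    return json.loads(payload)

token = issue_token("alice@example.test", {"department": "Engineering"})
claims = verify_token(token)
# The provider can now authorize based on claims["attrs"]["department"].
```

Real federation protocols add expiry times, audience restrictions, and public-key signatures, but the insulation property is the same: the provider never sees a password, only a verifiable attestation.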

SSO simplifies life for the employee because she need not manage a password for each SaaS application her job demands. Furthermore, SSO provides security benefits for the employer, such as being able to easily and quickly terminate access to all those applications should the employee leave the company. SSO arguably offers even greater value when the service being accessed is a mobile web application (i.e. delivered through the browser on an employee’s mobile phone). Data entry remains challenging on mobile devices, even more so when corporate password policy requires entering a mix of case and characters. If an employee is tempted to create (or reuse) an easy password at her desktop, then she will be doubly so on a phone.

The federation standards for browser-based SSO to web applications are well-established (if perhaps a bit duplicative), with OpenID being the preferred choice on the consumer web. In the enterprise and cloud world, the Security Assertion Markup Language (SAML) is the default, with WS-Federation an option in Microsoft environments. SSO for mobile web applications works the same as for desktop browsers: the protocol messages and security tokens are delivered through the browser between the actors. The only potential difference is that the HTML served up may be optimized for the smaller screen and/or processing capabilities of the phone.

The popularity of the iPhone App Store and Android Market in the consumer world highlights an increasingly important alternative to browser-based applications: native applications, which have the user download and install the application to her device; the application then interacts with servers to retrieve data rather than relying on the browser. Both native and web applications have their pros and cons. A seeming trend toward the native model may well be reversed as HTML5 makes possible richer user experiences and device integration for web applications.

The native applications on the phone push and pull data from the server typically through REST APIs. The IdM challenge for native applications is how the native application can authenticate to these APIs so that the API can make an appropriate access control decision. Security tokens provide a solution, offering similar advantages as they do for web applications. Critically though, the federation protocols relevant for web applications (i.e. SAML, OpenID, WS-Federation) are generally not optimized for the requirements, challenges, and opportunities presented by native applications.

OAuth 2.0 is a federation protocol, currently nearing finalization as an IETF standard, that can be optimized for just such native applications. OAuth emerged from the consumer web (an archetypal use case being that of one web site posting to a user’s Twitter stream) but has evolved to meet enterprise and cloud requirements. For mobile native applications, OAuth defines 1) how a native application can obtain a security token from an ‘authorization server’ and 2) how to include that security token on its calls to the relevant REST APIs. Importantly, OAuth supports the concept of the user being able to control the issuance of security tokens to native applications, and so indirectly control the authorizations the native applications have for accessing personal data behind APIs. Before OAuth, the default authentication model for native applications was the so-called ‘password anti-pattern,’ in which the native application would ask the user to provide her password for the site hosting the APIs the native application wanted to call. Teaching users to share their passwords with arbitrary (and potentially untrustworthy) applications is less than ideal. OAuth mitigates the practice by having the native application authenticate to the API with a security token rather than the password itself.
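The two OAuth steps called out above can be sketched with Python’s standard library alone. The endpoint URLs, client identifier, and token value are all hypothetical, and no network traffic is actually sent; the point is the shape of the two requests:

```python
import urllib.parse
import urllib.request

# Step 1: build a token request to the authorization server (hypothetical
# endpoint; in a real flow the user first approves the grant, which is
# exactly what avoids the password anti-pattern).
token_request = urllib.request.Request(
    "https://auth.example-cloud.test/oauth2/token",
    data=urllib.parse.urlencode({
        "grant_type": "authorization_code",
        "code": "AUTH_CODE_FROM_USER_APPROVAL",
        "client_id": "native-app-123",
    }).encode(),
    method="POST",
)

# Step 2: include the issued token on calls to the protected REST API.
access_token = "example-access-token"  # would come from the token response
api_request = urllib.request.Request(
    "https://api.example-cloud.test/v1/messages",
    headers={"Authorization": f"Bearer {access_token}"},
)

# The API can now make its access-control decision from the token alone;
# the user's password never reaches the native application.
assert api_request.get_header("Authorization") == "Bearer example-access-token"
```

The key design point is in step 2: the credential on the wire is a revocable, scoped token, so the API host can cut off one misbehaving application without forcing the user to change her password everywhere.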

By abstracting away the particulars of each participant’s security infrastructure, and by obviating the need to place passwords ‘on the wire,’ federation (and the more fundamental security token model) offers many benefits for both web (browser-based) and native (installed) mobile applications. Ultimately, an authentication and authorization framework for mobile applications should address the needs of both application models through support for the relevant federation protocols, such as SAML and OAuth.

About Paul Madsen

Paul Madsen is a Senior Technical Architect within the Office of the CTO at Ping Identity. He has served in various design, chairing, editing, and education roles for a number of federation standards, including OASIS Security Assertion Markup Language (SAML), OASIS Service Provisioning Markup Language (SPML), and Liberty Identity Web Services Framework (ID-WSF). He participates in a number of the Kantara Initiative’s activities, as well as various other cloud identity initiatives. He holds an M.Sc. in Applied Mathematics and a Ph.D. in Theoretical Physics from Carleton University and the University of Western Ontario respectively.

Managed cloud services are quickly being adopted by large enterprises. Organizations are increasingly embracing cloud technologies for core services like financial systems, IT infrastructure, online merchant sites, and messaging solutions. This adoption rate is creating an ever-increasing role for audit and compliance in the cloud.

Before cloud computing gave IT environments elasticity, flexibility, and transportability, it was relatively simple to demonstrate regulatory compliance. Prior to the cloud, an organization was able to isolate all of the devices, operating systems, and applications on which sensitive or regulated data could reside, and auditors had an easy task of auditing the security controls and verifying the policies, procedures and processes for those isolated environments. However, as the industry began to adopt more flexible solutions such as cloud, it became more difficult to contain environments so that auditors could provide the same review without a significantly higher level of work. While a managed cloud services company may deploy the same policies and security solutions for cloud computing as it would in a traditional IT environment, proof of those controls becomes more difficult to demonstrate to the satisfaction of the auditors.

For example, if an organization had a virtualized environment that had well-defined boundaries or security zones, and even during a failover or disaster recovery all events, logs, and incidents were easily tracked and verified, it took little effort for an auditor to review and provide the assurance of compliance for the environment.

Cloud changes this game a bit, with its ability to move environments dynamically, without human intervention. This move could be within a single data center, but is often from data center to data center, from coast to coast, or even from continent to continent. This flexibility, while often necessary to support business needs, introduces a level of complexity that many auditors have had difficulty with. When the auditor can’t pin down the environment, how can she or he assess its compliance?

But a number of cloud providers have been working to overcome these challenges in conjunction with their auditors. For example, SAS 70 (soon to be SSAE 16) has been especially difficult for auditors to assess in cloud environments. Depending on the controls, a SAS 70 audit will likely require aggregating the review of physical access to the facility, at-console access to systems, and logical access to the environment. To add to the complexity, there may be differing controls for the application that provides the user interface versus the application presented to the end users. Furthermore, the controls in place may incorporate role-based access controls with built-in workflow for provisioning and approvals. This makes for a very complicated system of buttons and levers to assess. However, by providing a common platform for audit trails and logs, managed cloud providers are simplifying the work for the assessor and allowing for the aggregation and correlation of those events into a simplified platform.

In addition to the aggregation of these access events, following are additional controls that cloud service providers are incorporating in order to provide the common manageability of and the ability to audit a cloud platform:

Security Event Correlation – By incorporating industry-leading Security Information and Event Management (SIEM) solutions, more cloud providers are able to aggregate the logs from multiple platforms, multiple customer-specific and customer-shared devices, and multiple data centers into a centralized security management solution that provides an easy-to-review aggregation point for all related security events.
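The aggregation-and-correlation idea can be illustrated with a small sketch (plain Python; the log records, source names, and alert threshold are invented for illustration): events from several sources are merged into one time-ordered stream, and repeated failed logins for the same account are flagged:

```python
from collections import Counter

# Invented sample events from three separate platforms/data centers,
# each as (timestamp, source, action, user).
firewall_log = [("2011-06-01T10:02:11", "fw-east", "login_fail", "alice")]
app_log = [
    ("2011-06-01T10:02:05", "app-01", "login_fail", "alice"),
    ("2011-06-01T10:03:40", "app-02", "login_ok", "bob"),
]
vpn_log = [("2011-06-01T10:02:20", "vpn-west", "login_fail", "alice")]

# Aggregate: merge all sources into a single time-ordered event stream.
events = sorted(firewall_log + app_log + vpn_log)

# Correlate: flag any account with repeated failures across sources.
failures = Counter(user for _, _, action, user in events if action == "login_fail")
alerts = [user for user, count in failures.items() if count >= 3]
print(alerts)  # -> ['alice']
```

No single source here looks alarming on its own; only the merged view shows three failures for the same account across three systems, which is precisely the cross-platform visibility a SIEM gives both operators and auditors.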

Centralized Authentication – Providing a single authority for authentication and authorization, while centralizing all accounting, is a significant step to providing the proof of access and attempted access to an auditor. This authentication, authorization, and accounting (AAA) is a critical aspect of audit and verification of access to key systems housing data or intellectual property.

Data Replication – A growing requirement for organizations moving to cloud is the seamless failover and recovery of applications in the event of an outage. While we have always enjoyed highly available, fault-tolerant systems, the gating factor has always been the integrity and currency of the backend data. In order to provide assurance that the data in all systems and all data centers is consistent, data replication solutions are often deployed to guarantee the low Recovery Point Objective (RPO) often required in a Disaster Recovery solution. These may require high-bandwidth, low-latency backend solutions to deliver the infrastructure to support such replication, and most globally diverse managed cloud service providers deliver these networks across their infrastructure.

Common Monitoring and Management Solutions – A single pane of glass is often required to provide a unified view of the entire infrastructure. This gives an auditor the ability to verify that the provider is delivering the level of service guaranteed by the solution. Auditors often look for event handling and common management across all systems. By automating the deployment of such monitoring solutions, and relying on a common platform for management (including patch management, software revision control, and system lockdown procedures), a level of assurance can be provided to the auditor that all systems are uniform and follow the controls of the monitoring and management criteria.

As the adoption of cloud accelerates, auditors will increasingly be required to understand these ever-changing, elastic environments and to provide the same compliance and accreditation that they have historically provided for more static, pre-defined solutions. These requirements are growing at a significant pace, and the industry relies heavily on managed cloud service providers to guide auditors through these more difficult assessments.

Allen Allison, Chief Security Officer at NaviSite (www.navisite.com)

During his 20+ year career in the information security industry, Allen Allison has served in management and technical roles, including the development of NaviSite’s industry-leading cloud computing platform; chief engineer and developer for a market-leading managed security operations center; and lead auditor and assessor for information security programs in the healthcare, government, e-commerce, and financial industries. With experience in the fields of systems programming; network infrastructure design and deployment; and information security, Allison has earned the highest industry certifications, including CCIE, CCSP, CISSP, MCSE, CCSE, and INFOSEC Professional. A graduate of the University of California, Irvine, Allison has lectured at colleges and universities on the subject of information security and regulatory compliance.