So what is RASP? Runtime Application Self-Protection (RASP) is an application security technology that embeds into an application or its runtime environment, examining requests at the application layer to detect attacks and misuse in real time. RASP functions in the application context, which enables it to monitor security – and apply controls – very precisely. This means better detection, because you see exactly what the application is being asked to do, and better performance, because you only need to check the relevant subset of policies for each request.
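To make that pattern concrete, here is a minimal sketch – not any vendor’s actual API – of how an embedded, in-context check might look as Python WSGI middleware. The class, rule format, and blocking response are hypothetical simplifications of what commercial RASP agents do via runtime instrumentation.

```python
# A minimal sketch of the RASP pattern: a check that runs inside the
# application process, so each request is evaluated in application context.
# Class, rule format, and responses are illustrative, not a product API.

class RaspMiddleware:
    def __init__(self, app, rules):
        self.app = app
        self.rules = rules  # map of endpoint -> policies relevant to it

    def __call__(self, environ, start_response):
        path = environ.get("PATH_INFO", "")
        query = environ.get("QUERY_STRING", "")
        # Running in context means we evaluate only the subset of
        # policies that apply to this specific endpoint.
        for rule in self.rules.get(path, []):
            if rule(query):
                start_response("403 Forbidden",
                               [("Content-Type", "text/plain")])
                return [b"Blocked by runtime protection"]
        return self.app(environ, start_response)

# Example policy: block an obvious SQL injection probe on one endpoint.
rules = {"/search": [lambda q: "' OR '1'='1" in q]}
# app = RaspMiddleware(app, rules)  # wrap an existing WSGI application
```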

We are excited about this research paper, because we are excited about what the DevOps approach has already delivered to many organizations, both small and large. What’s more, firms that are simply on the path toward a full DevOps process already enjoy the advantages of streamlined testing and build processes with continuous integration. Our focus for this research was on how to embed security and security testing into DevOps, leveraging automated workflows to implement security testing and provide fast feedback to developers when something is amiss. We provide a basic overview of DevOps, and then several perspectives on how security folks and developers can work together to engineer security into a DevOps pipeline.

We cover application security extensively on this blog, but normally we are trying to demystify a specific technology area to help companies understand what to look for in products, and how to differentiate real capabilities from marketing fluff.

This research paper was conceived as a way to help security people understand and better work with development. We explain what development teams are trying to do and how they want to work, and offer pragmatic advice to help mesh the goals of both organizations into a unified process. And on this topic, we really wanted to give back to the community! We’ve included much of what we have learned about secure code development over the last two decades, as well as things we’ve learned from other development teams, CISOs, and security vendors, to provide a simple guide to promoting security in Agile software development teams.

This research paper provides a detailed approach for effectively deploying, managing, and integrating a Web Application Firewall into your application security program. Our research shows that WAFs have a bad name, not because of any specific technology flaw, but mostly due to mismanagement. So we wrote Pragmatic WAF Management to cover how WAFs work, why some customers fail to derive value, and how to effectively deploy a WAF to protect applications against the increasing variety of web-based attacks.

Open source software is ubiquitous. Nearly every company is running some. Many organizations are not even aware of it – or at least weren’t until the Heartbleed vulnerability. Then they discovered what many firms already know: there is open source running in your company, and it’s an integral part of your operations.

Earlier this year I participated in the 2014 Open Source Development and Application Security Survey, as I have done the last couple years. As a developer and former development manager – and let’s face it, an overtly opinionated one – I am always interested in adding my viewpoint to these inquiries, even if I am just one developer voice among thousands. But I have also benefitted from these surveys – looking at the stuff my peers are using, and even selecting open source distributions based on this shared data.

Big data is touted as a ‘transformative’ technology for security event analysis – with promises that it will detect threats in the ever-increasing volume of event data generated from in-house, mobile, and cloud-based services. We hear big data will do more, do it better, and cost less. IT and security personnel, having seen this type of hyperbole many times, are justifiably skeptical: we have been promised rainbows and flying unicorns before. The combination of industry hype, vendor positioning, and general confusion in the press about the meaning of big data makes seasoned security folks all the more wary. But big data’s hype is not just hot air – it genuinely addresses fundamental scalability and detection problems that can cripple current analytics systems. Big data is a huge step forward.

Denial of Service attacks can encompass a number of different tactics, all aimed at impacting the availability of your applications and/or infrastructure. In Defending Against Denial of Service Attacks we described both network-based and application-targeting attacks. In this paper we dig much deeper into application DoS attacks. For good reason – as the paper says:

These attacks require knowledge of the application and how to break or game it. They can be far more efficient than just blasting traffic at a network, requiring far fewer attack nodes and less bandwidth. A side benefit of requiring fewer nodes is simplified command and control, allowing more agility in adding new application attacks. Moreover, the attackers often take advantage of legitimate application features, making defense considerably harder.

API gateways are an emerging hot spot in IT services. They offer companies a platform to selectively expose IT systems to end users. But well beyond just slapping a web server in front of an app, gateways both facilitate use of an application and protect it. Gateways enable third-party developers, outside your organization, to support different use cases in different environments – such as new applications, mobile apps, and service mash-ups – while allowing you to control security, function, and access to data. They provide a glue layer between your systems and the outside world.
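To make the ‘glue layer’ idea concrete, here is a minimal sketch of the mediation pattern a gateway applies to every external call – authenticate the caller, enforce a rate limit, then forward to the internal system. All keys, paths, and limits here are hypothetical, for illustration only; they are not any particular gateway product’s API.

```python
import time
from collections import defaultdict

# Hypothetical sketch of the gateway "glue layer": every external call is
# authenticated, access-checked, and rate-limited before it reaches the
# internal system. Keys, paths, and limits are illustrative.
API_KEYS = {"partner-123": {"allowed_paths": {"/v1/orders"}}}
_recent = defaultdict(list)  # api_key -> timestamps of recent requests

def gateway(api_key, path, forward):
    client = API_KEYS.get(api_key)
    if client is None or path not in client["allowed_paths"]:
        return 403, "access denied"        # per-partner access control
    now = time.time()
    window = [t for t in _recent[api_key] if now - t < 60]
    if len(window) >= 100:
        return 429, "rate limit exceeded"  # protect the backend
    _recent[api_key] = window + [now]
    return 200, forward(path)              # mediated call to the internal system

# Usage: status, body = gateway("partner-123", "/v1/orders", my_backend_call)
```

A real gateway layers far more on top – token exchange, schema validation, response filtering – but the pattern of mediating every call is the same.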

This research paper describes API gateways in detail, shows how they are deployed, and provides the key information to select and implement a gateway.

Big data systems have become incredibly popular, because they offer a low-cost way to analyze enormous sets of rapidly changing data. But the sad fact is that Hadoop, Mongo, Couch and Riak have almost no built-in security capabilities, leaving data exposed on every storage node. This research paper discusses how to deploy the most fundamental data security controls – including encryption, isolation, and access controls/identity management – for a big data system. But before we discuss how to secure big data, we have to decide what big data is. So we start with a definition of big data, what it provides, and how it poses different security challenges than prior data storage clusters and database systems. From there we branch out into two major areas of concern: high-level architectural considerations and tactical operational options. Finally, we close with several recommendations for security technologies to solve specific big data security problems, while meeting the design challenges of scalability and distributed management, which are fundamental to big data clusters.

Web Application Security is an incredibly difficult undertaking, and one of the papers we are most proud of is this one: Building a Web Application Security Program (attached below). Web applications not only share many of the threats and issues of traditional applications, but by their nature carry a whole additional set of issues to worry about. They require a different approach and analysis, and we hope you will follow the use cases and adapt the suggested technologies and process improvements to meet your organizational needs. As the science of web application security advances quickly, and attacks against web applications and platforms continue to evolve, our approach and recommendations will change. We anticipate periodic updates to the content, so we recommend revisiting this section for alterations and amendments.

One of the bigger issues when migrating to the cloud is translating and extending your existing security controls, especially our old friend, network security. While cloud networking may resemble what we are used to, under the covers it behaves, and is managed, very differently. This paper covers the fundamentals and provides practical advice for managing cloud network security, including specifics for major cloud providers.

Over the last few decades we have been refining our approach to network security. Find the boxes, find the wires connecting them, drop a few security boxes between them in the right spots, and move on. Sure, we continue to advance the state of the art in exactly what those security boxes do, and we constantly improve how we design networks and plug everything together, but overall change has been incremental. How we think about network security doesn’t change – just some of the particulars.

One of the fastest growing cloud services is Cloud File Storage and Collaboration, also known as Enterprise Sync and Share. These tools allow organizations to centralize and manage unstructured data in entirely new ways. They also promise massive security benefits, including centralized control over unstructured data, with a full audit log of all user and device activity.

But not all services are created equal – inherent and optional security features vary widely. Transitioning to these new services also requires a strong understanding of both the platform’s security capabilities and how best to leverage them to reduce your organization’s risk.

This paper started with a blog post called Inflection, which looked at a series of developing security trends and attempted to predict their eventual outcomes. I researched it for nearly 18 months; this paper compiles my thoughts on where the security industry is headed, why, and how it affects us now. From the introduction:

Disruption defines the business of information security. New technologies change how businesses work and what risks people take. Attackers shift their strategies. But the better we security professionals predict and prepare for these disruptions, the more effective we can be.

As analysts, we at Securosis focus most of our research on the here and now – on how best to tackle the security challenges faced by CISOs and security professionals when they show up to work in the morning. Occasionally, as part of this research, we note trends with the potential to dramatically affect the security industry and our profession.

One of a CISO’s most difficult challenges is sorting the valuable wheat from the overhyped chaff, and then figuring out what it all means in terms of risk to the organization. There is no shortage of technology or threat trends, and CISOs need to determine which matter and how they impact security.

The rise of cloud computing is a legitimate transformation that is fundamentally changing core security practices. Far more than a mere outsourcing model, cloud computing alters the very fabric of our infrastructure, technology consumption, and delivery models. In the long term, the cloud and mobile computing are likely to mark a larger shift than the appearance of the Internet.

This paper details the critical differences between cloud computing and traditional infrastructure for security professionals, and suggests where to focus security efforts. We show that the cloud doesn’t necessarily increase risks – it shifts them, providing new opportunities for substantial security improvement.

A few months back I did a series of posts on how to leverage Amazon EC2, APIs, Chef, and Ruby to improve security over what you can do with traditional infrastructure. I decided to collect these posts together, clean them up, and release them as a standalone paper.

Hopefully you find this interesting. I consider this the future of our industry.

The benefits of Infrastructure as a Service (IaaS), public or private, are driving more and more organizations to cloud computing; but one of the biggest concerns – even for internal deployments – is data security. The cloud fundamentally changes how data is stored, and brings both security and compliance concerns. We see this creating a resurgence of interest in encryption, with some very practical approaches available:

Infrastructure as a Service (IaaS) is often thought of as merely a more efficient (outsourced) version of traditional infrastructure. On the surface we still manage things that look like traditional virtualized networks, computers, and storage. We ‘boot’ computers (launch instances), assign IP addresses, and connect (virtual) hard drives. But while the presentation of IaaS resembles traditional infrastructure, the reality underneath is decidedly not business as usual.

Explaining the EMV shift and payment security is difficult — there is a great deal of confusion about what the shift means, what security it really delivers, and whether it actually offers real benefits for merchants. The real story is far more interesting.

The paper discusses the use of tokenization for payment data, personal information, and health records. It covers two important areas of tokenization: First, the paper is one of the few critical examinations of tokenization’s suitability for compliance. There are many possible applications of tokenization, some of which make compliance easier, and others that are simply not practical. Second, the paper dispels the myth that tokenization replaces encryption – in fact tokenization and encryption complement each other. This version has been updated to include PCI guidance on tokenization.

“We read the guidance but we don’t know what falls out of scope!” is the universal merchant complaint. “Where are the audit guidelines?” is the second most common criticism. On August 12, 2011, the PCI task force driving the study of tokenization published an “Information Supplement” called the PCI DSS Tokenization Guidelines. The merchant community was less than thrilled. The problem is that the PCI document is sorely lacking in actual guidance. Even the section on “Maximizing PCI DSS Scope Reduction” is a collection of broad security generalizations rather than practical advice. After spending the better part of two weeks on this wishy-washy paper, we propose a better title: “Begrudging Acknowledgement of Tokenization Without Guidance”.

The Payment Card Industry (PCI) Data Security Standard (DSS) was developed to encourage and enhance cardholder data security and facilitate the broad adoption of consistent data security measures. The problem is that the guidance provided is not always clear. This is especially true when it comes to secure storage of credit card information. The gap between recommended technologies and how to employ them leaves a lot of room for failure. This white paper examines the technologies and deployment models appropriate for both security and compliance, and provides actionable advice on how to comply with the PCI-DSS specification.

Today we see encryption growing at an accelerating rate in data centers, for a confluence of reasons. A trite way to summarize them is “compliance, cloud, and covert affairs”. Organizations need to keep auditors off their backs; keep control over data in the cloud; and stop the flood of data breaches, state-sponsored espionage, and government snooping (even by their own governments).

Thanks to increasing demand we have a growing range of options, as vendors and even free and Open Source tools address this opportunity. We have never had more choice, but with choice comes complexity – and outside your friendly local sales representative, guidance can be hard to come by.

It’s all about the data. You want to make data useful by making it available to users and applications which can leverage it into actionable information. You share data between applications, partners, and analytics systems to derive the greatest business intelligence value possible. But what do you do when you cannot guarantee the security of those systems? How can you protect information regardless of where it moves? One approach is called Data Centric Security, and it is designed to protect data instead of infrastructure.

iOS 7 is a significant update, with serious implications for enterprise management and data security (don’t worry, all good).

The short version is that iOS is quite secure – far more than a general-purpose computer. But you need to understand Apple’s security philosophy to comprehend their design decisions and your integration options. Apple has a clear vision of the future for BYOD, and it is very different than the way most organizations have managed personal devices in the past.

This paper updates our guidance for iOS and includes a deep dive into iOS 7 security and management features. Special thanks to WatchDox for licensing this content so we can release it for free!

You have heard of denial of service attacks, but database denial of service? It may come as a surprise, but database denial of service attacks have become common over the past decade. Lately they are very popular among attackers, as network-based attacks become more difficult. We have begun to see a shift in Denial of Service (DoS) tactics by attackers, moving up the stack from networks to servers and from servers to the application layer. Over the last 18 months we have also witnessed a new wave of vulnerabilities and isolated attacks against databases, all related to denial of service. We don’t hear much about them because they are lost among the din of network DoS and even SQL injection (SQLi) attacks.

Database DoS doesn’t make headlines compared to SQLi, because injection attacks often take control of the database and can be more damaging. But interruption of service is no longer ignorable. Ten years ago it was still common practice to take a database or application off the Internet while an attack was underway. But now web services and their databases are critical business infrastructure. Take down a database and a company loses money – possibly quite a lot.

Between new initiatives such as cloud computing, and new mandates driven by the continuous onslaught of compliance, managing encryption keys is evolving from something only big banks worry about into something which pops up at organizations of all sizes and shapes. Whether it is to protect customer data in a new web application, or to ensure that a lost backup tape doesn’t force you to file a breach report, more and more organizations are encrypting more data in more places than ever before. And behind all of this is the ever-present shadow of managing all those keys.

Few terms strike as much dread in the hearts of security professionals as key management. Those two simple words evoke painful memories of massive PKI failures, with millions spent to send encrypted email to the person in the adjacent cube. Or perhaps they recall the head-splitting migraine you got when assigned to reconcile incompatible proprietary implementations of a single encryption standard. Or memories of half-baked product implementations that worked fine in isolation on a single system, but were effectively impossible to manage at scale. Where by scale I mean “more than one”.

Over the years key management has mostly been a difficult and complex process. This has been aggravated by the recent resurgence in data encryption – driven by regulatory compliance, cloud computing, mobility, and fundamental security needs.

Understanding and Selecting Data Masking Solutions, our newest paper, covers use cases, features, and deployment models; it also outlines how masking technologies work. We started the research to understand big changes we saw happening with masking products, with many new customer inquiries about use cases not traditionally associated with data masking. We wanted to discuss these changes and share what we see with the community. This work is the result of dozens of conversations with vendors, customers, and security professionals over the last 18 months, discussed openly on the blog during our development process.

Our goal has been to ensure the research addresses common questions from both technical and non-technical audiences. We did our best to cover the business applications of masking in a non-technical, jargon-free way. Not everyone who is interested in data security has a black belt in data management or security, so we geared the first third of the paper to problems you can reasonably expect to solve with masking technologies. Those of you interested in the nuts and bolts need not fear – we drill into the myriad technical variables later in the paper.

Data Loss Prevention (DLP) is one of the farthest reaching tools in the security arsenal. A single DLP platform touches endpoints, network, email servers, web gateways, storage, directory servers, and more. There are more potential integration points than just about any other security tool – with the possible exception of SIEM. And then we need to build policies, define workflow, and implement blocking… all based on nebulous concepts like “customer data” and “intellectual property”. It is no wonder many organizations are intimidated by the prospect of implementing a large DLP deployment. But our 2010 survey indicates that over 40% of organizations use some form of DLP.

This paper examines business requirements for securing databases; it also discusses how these requirements are addressed by assessment, discovery, monitoring, auditing, and blocking technologies. Database Security Platforms (DSP) are the next evolution of Database Activity Monitoring (DAM), integrating several new technologies into a unified platform for compliance and security, which identifies and reports on transactions that fail to meet business best practices.

Four years ago, when we initially developed the Data Security Lifecycle, we mentioned a technology we called File Activity Monitoring. At the time we saw it as similar to Database Activity Monitoring, in that it would give us the same insight into file usage as DAM provides for database access. The technology did not actually exist, but it seemed like a very logical next step from DLP and DAM.

For Database Activity Monitoring, the deployment model directly affects performance, management, cost, and how well the technology serves your requirements. Appliances, software, and virtual appliances are the three basic deployment models for DAM. While many security platforms offer these same deployment models, what you have learned with firewalls or intrusion detection systems does not apply here – DAM is unique in the way it collects, processes, and ultimately manages information. This white paper provides an in-depth analysis of the tradeoffs between appliance, software, and virtual appliance implementations of Database Activity Monitoring. Each model includes particular advantages that make it a perfect fit for some environments and completely unsuitable for others.

Data Loss Prevention has matured considerably since the first version of this report three years ago. Back then, the market was dominated by startups with only a couple major acquisitions by established security companies. The entire market was probably smaller than the leading one or two providers today. Even the term ‘DLP’ was still under debate, with a menagerie of terms like Extrusion Prevention, Anti-Data Leakage, and Information Loss Protection still in use (leading us to wonder who, exactly, wants to protect information loss?).

While we have seen maturation of the products, significant acquisitions by established security firms, and standardization on the term DLP, in many ways today’s market is even more confusing than a few years ago. As customer interest in DLP increased, competitive and market pressures diluted the term – with everyone from encryption tool vendors to firewall companies claiming they prevented “data leakage”. In some cases, aspects of ‘real’ DLP have been added to other products as value-add features. And all along the core DLP tools continued to evolve and combine, expanding their features and capabilities.

Tokenization is currently one of the hottest topics in database and application security. In this report we explain what tokenization is, when it works best, and how it works – and give recommendations to help choose the best solution.

Tokenization replaces the original sensitive data with non-sensitive placeholders. It is closely related to encryption – both mask sensitive information – but its approach to data protection is different.
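As a concrete (and deliberately oversimplified) sketch of the idea, the following Python fragment shows a toy token vault: the sensitive value stays in protected storage, while a format-preserving random placeholder circulates downstream. Real systems add encrypted storage, collision handling, and strict access control; all names here are hypothetical.

```python
import secrets

# Toy token vault, for illustration only. Production systems encrypt the
# vault, guarantee token uniqueness, and tightly restrict detokenization.
class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> original value

    def tokenize(self, pan: str) -> str:
        # Format-preserving placeholder: random digits, keeping the last
        # four so receipts can still show "card ending in 1111".
        token = "".join(secrets.choice("0123456789")
                        for _ in range(len(pan) - 4)) + pan[-4:]
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only authorized systems should ever be able to reach this call.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
# Downstream systems store and process `token`; a stolen token is useless
# without access to the vault.
```

Note the contrast with encryption: there is no key that mathematically recovers the original value – the mapping exists only in the vault.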

This paper includes descriptions of major database encryption and tokenization technologies, a decision tree to help determine which type of encryption is best for you, and example use cases drawn from real world deployments.

If you are considering any database encryption or tokenization project, this paper should save you hours of research and architecture development time.

Two of the most common criticisms of Data Loss Prevention (DLP) that come up in user discussions are a) its complexity and b) the fear of false positives. Security professionals worry that DLP is an expensive widget that will fail to deliver the expected value – turning into yet another black hole of productivity. But when used properly DLP provides rapid assessment and identification of data security issues not available with any other technology.

We don’t mean to play down the real complexities you might encounter as you roll out a complete data protection program. Business use of information is itself complicated, and no tool designed to protect that data can simplify or mask the underlying business processes. But there are steps you can take to obtain significant immediate value and security gains without blowing your productivity or wasting important resources.

Our goal with this paper is to help customers cut through the marketing fluff, and spotlight the differentiators between current database assessment platforms and the previous generation of DBA tools. While we discuss the individual functional components that constitute assessment platforms, don’t get scared off by the technical discussions. We also cover business justification and compliance for those who are not responsible for managing databases, but need information from the database to do their jobs. We did our best to address questions that will be posed by the different groups who are interested in database assessment technologies.

Database Assessment is distinctly different from other forms of platform and network assessment you may already be familiar with. This is partially due to the complexity of the database itself, and also because assessment provides information to multiple audiences besides the database administrators (DBAs). Databases require specialized skills to manage and secure. As database threats evolve – and as we see continuing growth of compliance requirements relevant to data and database infrastructure – most admins rely on assessment support for specialized security and compliance policies. These topics are outside the core job skills of the average DBA. Assessment tools have evolved into full-fledged enterprise-class products that address not only underlying vulnerability and patch management issues, but a complete range of security, compliance, and operational tasks.

This is our Understanding and Selecting a Database Activity Monitoring Solution white paper. It examines the business requirements for monitoring databases, as well as the technologies that assist in capturing and analyzing that activity. Rich discusses the compliance and security issues organizations face, and the options at their disposal to identify and report on transactions that fail to meet business best practices. As there are many ways to collect information in and around relational databases, and still more methods to analyze and report on the findings, Rich digs into the nuts and bolts to offer the reader a comparative analysis of the available technology options, and how they address end user requirements. We recommend using this research in conjunction with other application security tools, as many web and traditional applications rely on database technology to store, manage, and report on data – linking the compliance and security requirements together.

We’ve seen a renaissance of sorts regarding endpoint security. To be clear, most of the solutions on the market aren’t good enough. Attackers don’t have to be advanced to make quick work of the endpoint protection suites in place. That realization has created a wave of innovation on the endpoint which promises a better chance to prevent and detect attacks. But the reality is that far too many organizations can’t even get the fundamentals of endpoint security right.

But the fact remains that many organizations are not even prepared to deal with unsophisticated attackers. You know, that dude in the basement banging on your stuff with Metasploit. Those organizations don’t really need advanced security now – their requirements are more basic. It’s about understanding what really needs to get done – not the hot topic at industry conferences. They cannot do everything to fully protect endpoints, so they need to start with essentials.

Anti-virus is basically dead, at least according to the biggest anti-virus vendor. The good news is that signature-based AV has actually been dead for a long time; even the big players have been broadening their capabilities to assess, prevent, detect, and investigate advanced malware on endpoints and servers. There has been a tremendous amount of activity and innovation in protecting endpoint and servers, driven by necessity:

Endpoint protection has become the punching bag of security. For every successful attack, the blame seems to point directly to a failure of endpoint protection. Not that this is totally unjustified — most solutions for endpoint protection have failed to keep pace with attackers.

The lack of demonstrable progress [in stopping malware] comes down to two intertwined causes. First, devices are built using software that has defects attackers can exploit. Nothing is perfect, especially not software, so every line of code presents an attack surface. Second, employees can be fooled into taking action (such as installing software or clicking a link) that enables attacks to succeed.

Application Control technology can have a significant impact on the security posture of protected devices, but it has long been maligned. There was never any doubt of its value in stopping attacks, especially those using sophisticated malware – being able to block the execution of unauthorized executables takes many common attacks out of play. But that protection comes with a user experience cost.
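As a sketch of the underlying mechanism (hugely simplified relative to real agents, which hook process creation in the operating system), the core decision is a default-deny lookup against a list of approved executables. The hash list below is a placeholder, not a real policy.

```python
import hashlib

# Simplified sketch of application control: default-deny execution based
# on an allowlist of known-good executable hashes. Real products enforce
# this by hooking process creation; this shows only the policy decision.
APPROVED_HASHES = {
    # SHA-256 digests of vetted builds; this one is the digest of an
    # empty file, included purely as a placeholder.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_execution_allowed(path: str) -> bool:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    # Anything not explicitly approved is blocked – which is exactly
    # where the user experience cost comes from.
    return digest in APPROVED_HASHES
```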

Our updated and revised 2014 Endpoint Security Buyer’s Guide updates our research on key endpoint management functions, including patch and configuration management and device control. We have also added coverage of anti-malware, mobility, and BYOD. All very timely and relevant topics. The bad news is that securing endpoints hasn’t gotten any easier. Employees still click things, and attackers have gotten better at evading perimeter defenses and obscuring attacks.

Humans, alas, remain gullible and flawed. Regardless of any training you provide employees, they continue to click stuff, share information, and fall for simple social engineering attacks. So endpoints remain some of the weakest links in your security defenses.

This paper provides a strategic view of Endpoint Security Management, addressing the complexities caused by malware’s continuing evolution, device sprawl, and mobility/BYOD. The paper focuses on periodic controls that fall under good endpoint hygiene (such as patch and configuration management) and ongoing controls (such as device control and file integrity monitoring) which detect unauthorized activity and prevent it from completing. The crux of our findings involves using an endpoint security management platform to aggregate the capabilities of these individual controls, providing policy and enforcement leverage to decrease the cost of ownership and increase the value of endpoint security management.

We’ve been spending a lot of time recently doing research on malware, both the tactics of the attackers and understanding the next wave of detection approaches. That’s resulted in a number of reports, including network-based approaches to detect malware at the perimeter, and the Herculean task of decomposing the processes involved in confirming an infection, analyzing the malware, and tracking its proliferation in our Malware Analysis Quant. But those approaches largely didn’t address what’s required to detect malware on the devices themselves, and block the behaviors we know are malicious.

So we’ve written up the Evolving Endpoint Malware Detection report to cover how detection techniques are changing, why it’s important to think about behavior in a new way, and why context is your friend if you want to keep attackers at bay while keeping your users from wringing your neck.

Endpoint Security is a pretty broad topic. Most folks associate it with traditional anti-virus or even the newfangled endpoint security suites. In our opinion, looking at the issue just from the perspective of the endpoint agent is myopic. To us, endpoint security is as much a program as anything else.

In this paper we discuss endpoint security from a fundamental blocking and tackling perspective. We start with identifying the exposures and prioritizing remediation, then discuss specific security controls (both process and product), and also cover the compliance and incident response aspects.

This paper covers our recommendations for using endpoint DLP – including major features, what to look for, and deployment recommendations. Since we generally recommend full-suite DLP solutions over endpoint-only solutions, you will notice the paper focuses more on endpoint DLP as part of a larger DLP program.

Thanks to Symantec for sponsoring (as always, the content was developed completely independently of any sponsorship).

We are proud to announce the availability of our Cloud Identity and Access Management research paper. While you have likely been hearing a lot about cloud services and mobile identity, how it all works is not typically presented. Our goal for this research paper is simple: present the trends in IAM in a clear fashion, so that security and software development professionals understand the new services at their disposal. This paper shows how cloud computing is driving extensible architectures and standardization of identity protocols, and how identity and authorization are orchestrated across in-house IT and external cloud services. Changes to IAM architectures provide the means to solve multiple challenges; additionally, external service providers offer commoditized integration with the cloud and mobile devices — reducing development and management burdens.

Visible devices are only some of the network-connected devices in your environment. There are hundreds, quite possibly thousands, of other devices you don’t know about on your network. You don’t scan them periodically, and you have no idea of their security posture. Each one can be attacked, and might provide an adversary with an opportunity to gain a presence in your environment. Your attack surface is much larger than you thought. In our Shining a Light on Shadow Devices paper, we discuss how attacks on these devices can become an issue on your network, along with tactics to gain visibility, and then control, over all these network-connected devices.

The more things change, the more they stay the same. We’ve been talking about Reacting Faster and Better for years, and will continue to do so, since trying to prevent every attack remains futile. So the best path forward is to continue advancing the ability to prevent attacks, while spending just as much time on detecting the attacks that successfully compromise your defenses. This detection-centric view of the world has been a central theme in our research, and encompasses a variety of areas to focus on, including the network, endpoints, and applications.

We know many organizations have already spent a bunch of money on detection — particularly intrusion detection, its big brother intrusion prevention, and SIEM. But these techniques haven’t worked effectively either, so now is the time to approach the issue with fresh eyes. By taking a fresh forward look at detection – not from the standpoint of what we have already done and implemented (IDS and SIEM), but in terms of what we need to do to isolate and identify adversary activity – we can look at the kinds of technologies needed right now to deal with modern attacks. Times have changed and attackers have advanced, so our detection techniques need to evolve as well.

We have been writing extensively about the disruption currently hitting security, driven by cloud computing and mobility. Our Inflection: The Future of Security research directly addresses the lack of visibility caused by these macro trends. At the same time, greater automation and orchestration promise to enable security to keep up with the cloud, in terms of both scale and speed. Meanwhile each day’s breach du jour in the mass media keeps security topics at the forefront, highlighting the importance of protecting critical information.

These trends mean organizations have no choice but to encrypt more traffic on their networks. Encrypting the network prevents adversaries from sniffing traffic to steal credentials, and ensures data moving outside the organization is protected from man-in-the-middle attacks. So we expect a much greater percentage of both internal and external network traffic to be encrypted over the next 2-3 years.

What’s a couple hundred gigabits per second of traffic between friends, right? Because that is the magnitude of recent volumetric denial of service attacks, which means regardless of who you are, you need a plan to deal with that kind of onslaught.

Regardless of motivation attackers now have faster networks, bigger botnets, and increasingly effective tactics to magnify the impact of their DDoS attacks – organizations can no longer afford to ignore them.

In Defending Against Network-based Distributed Denial of Service Attacks we dig into the attacks and tactics now being used to magnify those attacks to unprecedented volumes. We also go through your options to mitigate the attacks, and the processes needed to minimize downtime.

We all know and love the firewall. The cornerstone of every organization’s network security defense, firewalls enforce access control policies and determine what can and cannot enter your network. But, like almost every device you have had for a while, you take them for granted and perhaps don’t pay as much attention as you need to. Until a faulty rule change opens up a hole in your perimeter large enough to drive a tanker through. Then you get some religion about more effectively managing these devices.

Things are getting more complicated as next-generation functionality brings a need to define and manage application policies; new devices and infrastructure evolution make it difficult to know what is allowed and what isn’t.

Detecting malware feels like a losing battle. Between advanced attacks, innovative attackers, and well-funded state-sponsored and organized crime adversaries, organizations need every advantage they can get to stop the onslaught. We first identified and documented Network-Based Malware Detection (NBMD) devices as a promising technology back in early 2012, and they have made a difference in detecting malware at the perimeter. Of course nothing is perfect, but every little bit helps.

But nothing stays static in the security world so NBMD technology has evolved with the attacks it needs to detect.

Hot on the heels of our Building an Early Warning System paper, we have taken a much deeper look at the network aspect of threat intelligence in Network-based Threat Intelligence. We have always held to the belief that the network never lies (okay – almost never), and that provides a great basis on which to build an Early Warning System.

We are pleased to put the finishing touches on our Denial of Service (DoS) research and distribute the paper. Unless you have had your head in the sand for the last year, you know DoS attacks are back with a vengeance, knocking down sites both big and small. That has created a situation where it’s no longer viable to ignore the threat, and we all need to think about what to do when we inevitably become a target.

We know it’s a shock, but your endpoint protection suite isn’t doing a good enough job of blocking malware attacks. So the industry has resorted to additional layers of inspection, detection, and even protection to address its shortcomings. One place the focus is turning – and where we see considerable innovation – is the network. We see a new set of devices, and enhancements to existing perimeter platforms, focused on detecting and blocking malware.

We have been saying for years that you can’t assume your defenses are sufficient to stop a focused and targeted attacker. That’s what React Faster and Better is all about. But say you actually buy into this philosophy: what now? How do you figure out the bad guys are in your house? And more importantly, how did they get there, and what are they doing? The network is your friend, because it never lies.

Attackers can do about a zillion different things to attack your network, and 99% of them depend on the network in some way. They can’t find another target without using the network to locate it. They can’t attack a target without connecting to it. Furthermore, even if they are able to compromise the ultimate target, the attackers must then exfiltrate the data. So they need the network to move the data. Attackers need the network, pure and simple. Which means they will leave tracks, but you will see them only if you are looking.

What should you do right now? That’s one of the toughest questions for any security professional to answer. The list is endless, the priorities clear as mud, the risk of compromise ever present. But doing nothing is never the answer. We have been working with practitioners to answer that question for years, and we finally got around to documenting some of our approaches and concepts.

That’s what “Fact-Based Network Security: Metrics and the Pursuit of Prioritization” is all about. We spend some time defining ‘risk’, trying to understand the metrics that drive decisions, working to make the process a systematic way to both collect data and make those decisions, and understanding the compliance aspects of the process. Finally we go through a simple scenario that shows the approach in practice.

We all know of the inherent challenges that mobile devices, and the need to connect to anything from anywhere, present to security professionals. We’ve done some research on how to start securing those mobile devices, and now we have continued broadening that research with a look at these issues from a network-centric perspective. Let’s set the stage for this paper:

Everyone loves their iDevices and Androids. The computing power that millions now carry in their pockets would have required a raised floor and a large room full of big iron just 25 years ago. But that’s not the only impact we see from this wave of consumerization, the influx of consumer devices requiring access to corporate networks. Whatever control you thought you had over the devices in the IT environment is gone. End users pick their devices and demand access to critical information within the enterprise. Whether you like it or not.

What? A research report on enterprise firewalls. Really? Most folks figure firewalls have evolved about as much over the last 5 years as ant traps – they think of firewalls as old, static, and generally uninteresting. They’re wrong, of course. Firewalls continue to evolve, and their new capabilities can and should impact your perimeter architecture and firewall selection process. That doesn’t mean we will be advocating yet another rip and replace job at the perimeter (sorry, vendors), but there are definitely new capabilities that warrant consideration – especially as the maintenance renewals on your existing gear come due.

Those of you who have followed Securosis for a while know that our Quant research is the big daddy of all our projects. We build a very granular process map for a certain function, build a metrics model, and in some cases survey our community to figure out what they do and what they don’t. We have already tackled Patch Management, Network Security Operations, and Database Security Operations. Our latest Quant study tackled Malware Analysis.

The Database Security Operations Quant research project – Database Quant for short – was launched to develop an unbiased metrics model to describe the costs of securing database platforms. In the process we developed the most in-depth database security program framework we can find, as well as all the key metrics to measure database security efforts. Our goal is to provide organizations with a tool to better understand the security costs of configuring, monitoring, and managing databases. By capturing quantifiable and precise metrics that describe the daily activities of database administrators, auditors, and security professionals, we can better understand the costs associated with security and compliance efforts. Database Quant was developed through independent research and community involvement, to accurately reflect all the substantive efforts that comprise a database security program.

As described in the Network Security Operations (NSO) Quant report, for each process we determined a set of metrics to quantify the cost of performing the activity. We designed the metrics to be as intuitive as possible while still capturing the necessary level of detail. The model collects an inclusive set of potential network security operations metrics, and as with each specific process we strongly encourage you to use what makes sense for your own environment.

The lack of credible and relevant network security metrics has been a thorn in the side of security practitioners for years. We don’t know how to define success. We don’t know how to communicate value. And ultimately, we don’t even know what we should be tracking operationally to show improvement – or failure – in our network security activities. The Network Security Operations (NSO) Quant research project was initiated to address these issues.

Our Incident Response in the Cloud Age paper digs into the impacts of the cloud, faster and virtualized networks, and threat intelligence on your incident response process. Then we discuss how to streamline response, given the lack of people available to perform the heavy lifting of incident response. Finally we bring everything together with a scenario to illuminate the concepts.

Our Building a Threat Intelligence Program paper offers guidance for designing a program and systematically leveraging threat intelligence. This paper is all about turning tactical use cases into a strategic TI capability to enable your organization to detect attacks faster.

In Building a Vendor (IT) Risk Management Program, we explain why you can no longer ignore the risk presented by third-party vendors and other business partners, including managing an expanded attack surface and new regulations demanding effective management of vendor risk. We then offer ideas for how to build a structured and systematic program to assess vendor (IT) risk, and take action when necessary.

Despite the bunch of research we have published about SIEM over the years, it remains a misunderstood and underutilized technology. Lots of organizations aggregate their logs (you can thank PCI-DSS for that), but not enough actually use their SIEM effectively. In the SIEM Kung Fu paper, we tell you what you need to know to get the most out of your SIEM, and solve the problems you face today by increasing your capabilities (the promised Kung Fu).

Given that most organizations have realized threat prevention has limitations, there has been a renewed focus on threat detection. But like most other terms in security, ‘threat detection’ has been manipulated to mean almost everything. So we figured it was time to clarify what we think threat detection is, and how it’s evolving to deal with advanced attacks, sophisticated adversaries, and limited resources.

In the Threat Detection Evolution paper, we start by reviewing security data collection, including both internal and external data sources that can help in the detection efforts. Then we discuss taking that data and reliably figuring out what is an attack. We wrap up the paper by going through the process using a quick wins scenario to bring the concepts into action.

Threat Intelligence remains one of the hottest areas in security. With its promise to help organizations take advantage of information sharing, early results have been encouraging. We have researched Threat Intelligence deeply; focusing on where to get TI and the differences between gathering data from networks, endpoints, and general Internet sources. But we come back to the fact that having data is not enough – not now and not in the future.

It is easy to buy data but hard to take full advantage of it. Knowing what attacks may be coming at you doesn’t help if your security operations functions cannot detect the patterns, block the attacks, or use the data to investigate possible compromise. Without those capabilities it’s all just more useless data, and you already have plenty of that.

This cloud thing is going to have major repercussions for how you protect technology assets over time. But what does that even mean? We start this paper by defining how and why the cloud is different, and then outline a number of trends we expect to come to fruition, as described in our paper The Future of Security. Then we look at how security monitoring functions need to evolve, as an increasing amount of technology infrastructure runs in the cloud.

We continue to investigate the practical uses of threat intelligence (TI) within your security program. After tackling how to Leverage Threat Intel in Security Monitoring, now we turn our attention to Incident Response and Management. In this paper, we go into depth on how your existing incident response and management processes can (and should) integrate adversary analysis and other threat intelligence sources to help narrow down the scope of your investigation.

We’ve also put together a snappy process map depicting how IR/M looks when you factor in external data as well.

As we continue our research into the practical uses of threat intelligence (TI), we have documented how TI should change existing security monitoring (SM) processes. In our Leveraging Threat Intelligence in Security Monitoring paper, we go into depth on how to update your security monitoring process to integrate malware analysis and threat intelligence. Updating our process maps demonstrates that we don’t consider TI a flash in the pan – it is a key aspect of detecting advanced adversaries as we move forward.

As much as you probably dislike thinking about other organizations being compromised, this provides a learning opportunity.

Has your SIEM failed to meet expectations despite significant investment? Has your platform failed to keep up with emerging threats and scalability requirements? If you are questioning whether your existing product or service can get the job done, you are not alone. Given the rapid evolution of requirements, and the changing needs of enterprise users, it is no surprise that many vendors have been passed by as they work to address market demands from 4 years ago. You are likely more than a little frustrated by the difficulty of managing, scaling, and actually doing something useful with SIEM. But there comes a point where the futility of riding a mule in a horse race becomes obvious, and then it’s time to find a replacement steed.

We have always been fans of making sure applications and infrastructure are ready for prime time before letting them loose on the world. It’s important not to just use basic scanner functions either – your adversaries are unlikely to limit their tactics to things you find in an open source scanner. Security Assurance and Testing enables organizations to limit the unpleasant surprises that happen when launching new stuff or upgrading infrastructure.

Adversaries continue to innovate and improve their tactics at an alarming rate. They have clear missions, typically involving exfiltrating critical information or impacting the availability of your technology resources. They have the patience and resources to achieve their missions by any means necessary. And it’s your job to make sure deployment of new IT resources doesn’t introduce unnecessary risk.

Everyone has an opinion about security awareness training, and most of them are negative. Waste of time! Ineffective! Boring! We have heard them all. And the criticism isn’t wrong – much of the content driving security awareness training is lame. Which is probably the kindest thing we can say about it. But it doesn’t need to be that way. Actually, it cannot remain this way – there is too much at stake. Users remain the lowest-hanging fruit for attackers, and as long as that is the case attackers will continue to target them. Educating users about security is not a panacea, but it can and does help.

It’s not like a focus on security awareness training is the flavor of the day for us. We have been talking about the importance of training users for years, as unpopular as it remains. The main argument against security training is that it doesn’t work. That’s just not true. But it doesn’t work for everyone. Like security in general, there is no 100%. Some employees will never get it – mostly because they just don’t care – but they do bring enough value to the organization that no matter what they do (short of a felony) they are sticking around. Then there is everyone else. Maybe it’s 50% of your folks, or perhaps 90%. Regardless of the number of employees who can be influenced by better security training content, wouldn’t it make your life easier if you didn’t have to clean up after them? We have seen training reduce the amount of time spent cleaning up easily avoidable mistakes.

Continuous Monitoring has become an overused and overhyped term in security circles, driven by US Government mandate (now called Continuous Diagnostics and Mitigation). But that doesn’t change the fact that monitoring needs to be a cornerstone of your security program, within the context of a risk-based paradigm. So your pals at Securosis did their best to document how you should think about Continuous Security Monitoring and how to get there.

Given that you can’t prevent all attacks, you need to ensure you detect attacks as quickly as possible. The concept of continuous monitoring has been gaining momentum, driven by both compliance mandates (notably PCI-DSS) and the US Federal Government’s guidance on Continuous Diagnostics and Mitigation, as a means to move beyond periodic assessment. This makes sense given the speed that attacks can proliferate within your environment. In this paper, Securosis will help you assemble a toolkit (including both technology and process) to implement our definition of Continuous Security Monitoring (CSM) to monitor your information assets to meet a variety of needs in your organization.

Most folks think the move towards the extended enterprise is very cool. You know, get other organizations to do the stuff your organization isn’t great at. It’s a win/win, right? From a business standpoint, there are clear advantages to building a robust ecosystem that leverages the capabilities of all organizations. But from a security standpoint, the extended enterprise adds a tremendous amount of attack surface.

In order to make the extended enterprise work, your business partners need access to your critical information.

Much of the security industry spends significant time and effort focused on how hard it is to deal with today’s attacks. Adversaries continue to improve their tactics. Senior management doesn’t get it, until there is a breach… then your successor can educate them. And the compliance mandates hanging over your organization like an albatross remain 3-4 years behind the attacks you see daily. The vendor community compounds the problem by positioning every product and/or service as a solution to the APT problem. Which means they don’t really understand advanced attackers at all. But complaining doesn’t solve problems, so we put together a CISO’s Guide to Advanced Attackers to help you structure a programmatic effort to deal with these adversaries.

It makes no difference what a security product or service does – they are all positioned as the only viable answer to stop the APT. Of course this isn’t useful to security professionals who actually need to protect important things. And it’s definitely not helpful to Chief Information Security Officers (CISOs) who have to explain their organization’s security program, set realistic objectives, and manage the expectations of senior management and the Board of Directors.

One topic that has resonated with the industry is Early Warning. Clearly looking through the rearview mirror and trying to contain the damage from attacks already in progress hasn’t been good enough, so shortening the window between attack and detection remains a major objective for fairly mature security programs. Early Warning is all about turning security management on its head, using threat intelligence about attacks on others to improve your own defenses.
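
As a simple illustration of the Early Warning idea (our sketch, with hypothetical file formats), the snippet below matches a feed of attacker IPs seen hitting other organizations against your own firewall logs – hits are attacks you may be able to get ahead of:

```python
import csv
import ipaddress

def load_indicators(path):
    """Load one IP or CIDR per line, e.g. from a sharing community feed."""
    with open(path) as f:
        return [ipaddress.ip_network(line.strip()) for line in f if line.strip()]

def early_warning(log_path, indicator_path):
    """Flag firewall log rows whose source IP appears in the indicator feed.
    Assumes a CSV export with a 'src_ip' column."""
    indicators = load_indicators(indicator_path)
    hits = []
    with open(log_path) as f:
        for row in csv.DictReader(f):
            src = ipaddress.ip_address(row["src_ip"])
            if any(src in net for net in indicators):
                hits.append(row)
    return hits

# e.g. early_warning("firewall.csv", "attacker_ips.txt")
```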

If you think back to the Endpoint Security Management Buyer’s Guide, we identified four specific controls typically used to manage the security of endpoints, and broke them up into periodic and ongoing controls. That paper helped you identify what was important and guided you through the buying process. At the end of that process you face a key question – what now? It’s time to implement and manage your new toys, so this paper provides a series of processes and practices for successfully implementing and managing patch and configuration management tools.
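
Configuration management ultimately comes down to detecting drift from an approved baseline. Here is a minimal Python sketch of that idea – the watched file list and baseline location are assumptions for illustration, not anything prescribed by the paper:

```python
import hashlib
import json
import pathlib

WATCHED = ["/etc/ssh/sshd_config", "/etc/sudoers"]  # example monitored files

def fingerprint(paths):
    """Hash each watched config file so any change is detectable."""
    return {p: hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
            for p in paths if pathlib.Path(p).exists()}

def check_drift(baseline_file="baseline.json"):
    current = fingerprint(WATCHED)
    try:
        baseline = json.loads(pathlib.Path(baseline_file).read_text())
    except FileNotFoundError:
        # First run: record the approved baseline.
        pathlib.Path(baseline_file).write_text(json.dumps(current, indent=2))
        return []
    return [p for p, digest in current.items() if baseline.get(p) != digest]

if __name__ == "__main__":
    for path in check_drift():
        print(f"DRIFT: {path} no longer matches the approved baseline")
```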

Organizations have traditionally viewed vulnerability scanners as tactical products, largely commoditized and only valuable around audit time. How useful is a 100-page vulnerability report to an operations person trying to figure out what to fix next? Not very – although those 100-page reports make auditors smile, offering a nice listing of audit deficiencies to address in the findings of fact. But the tide is definitely turning. We see a clear shift from a largely compliance-driven orientation to a more security-centric view. We document this evolution toward a vulnerability/threat management platform in our new Vulnerability Management Evolution paper.
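
To show what "security-centric" looks like in practice versus the 100-page report, here is a toy prioritization sketch – the fields, weights, and scores are invented for illustration – ranking findings by asset criticality and exploit availability rather than listing them raw:

```python
# Toy findings; real input would come from your scanner's export or API.
findings = [
    {"host": "db-01",   "cve": "CVE-2014-0160", "cvss": 7.5, "exploit_public": True},
    {"host": "kiosk-7", "cve": "CVE-2013-2465", "cvss": 9.0, "exploit_public": False},
]
ASSET_CRITICALITY = {"db-01": 3, "kiosk-7": 1}  # 3 = crown jewels

def priority(f):
    """Weight raw CVSS by how much the asset matters and whether
    a public exploit exists."""
    weight = 2 if f["exploit_public"] else 1
    return f["cvss"] * ASSET_CRITICALITY.get(f["host"], 1) * weight

for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):5.1f}  {f['host']:8}  {f['cve']}")
```

Note how the lower-CVSS Heartbleed finding on the crown-jewel database outranks the higher-CVSS issue on a kiosk – exactly the call a raw report never makes for you.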

Most organizations focus on the attackers out there – which means they may miss attackers who have the credentials and knowledge to do real damage. These are “privileged users”, and far too many organizations don’t do enough to protect themselves from that group. By the way – this doesn’t necessarily require a malicious insider. It is very possible (if not probable) that a privileged user’s device might get compromised, giving an attacker access to the administrator’s credentials. A bad day all around. So we wrote a paper called Watching the Watchers: Guarding the Keys to the Kingdom describing the problem and offering ideas for solutions.
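
One simple control along these lines (our sketch, not from the paper – the account names and jump-box addresses are hypothetical) is flagging privileged logins that don't originate from approved admin hosts:

```python
PRIVILEGED_ACCOUNTS = {"root", "domain-admin", "dba"}
APPROVED_JUMP_HOSTS = {"10.0.0.5", "10.0.0.6"}  # hypothetical admin bastions

def audit_logins(events):
    """Yield privileged logins originating anywhere but the jump hosts."""
    for e in events:
        if e["user"] in PRIVILEGED_ACCOUNTS and e["src_ip"] not in APPROVED_JUMP_HOSTS:
            yield e

logins = [
    {"user": "dba", "src_ip": "10.0.0.5"},       # expected admin path
    {"user": "root", "src_ip": "192.168.3.44"},  # suspicious source
]
for alert in audit_logins(logins):
    print("INVESTIGATE:", alert)
```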

Is it time? Are you waving the white flag? Has your SIEM failed to meet expectations despite significant investment? If you are questioning whether your existing product or service can get the job done, you are not alone. You likely have some battle scars from the difficulty of managing, scaling, and actually doing something useful with SIEM. Given the rapid evolution of SIEM/Log Management offerings – and the evolution of requirements, with new application models and this cloud thing – you should be wondering whether a better, easier, and less expensive solution meets your needs.

How do you answer the inevitable question “Are we good at security?” If you are like most organizations, you stutter quite a bit and then fall back on either irrelevant numbers (like AV or patch coverage) or a qualitative assessment – “We had 2 incidents last month, down from 5 the month prior”. Either way, the answer isn’t what management needs, or deserves.

In this paper we focus on security metrics as the foundation, but more importantly on how to leverage a security benchmark – comparing your metrics against a peer group – to provide a useful basis for evaluating your security posture.
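
To make the benchmark idea concrete, here is a toy Python sketch – the peer data and metric are invented for illustration – showing how a single number becomes far more meaningful once you can place yourself relative to peers:

```python
# Hypothetical peer data: days to patch critical vulnerabilities, one value
# per peer organization (e.g. from an industry benchmarking consortium).
peer_days_to_patch = [12, 18, 21, 25, 30, 33, 40, 45, 60, 90]
our_days_to_patch = 28

def percentile_rank(value, peers):
    """Fraction of peers we outperform (lower days-to-patch is better)."""
    return sum(1 for p in peers if value < p) / len(peers)

rank = percentile_rank(our_days_to_patch, peer_days_to_patch)
print(f"We patch critical vulnerabilities faster than {rank:.0%} of peers")
```

"28 days" means little on its own; "faster than 60% of our peers" is an answer management can actually use.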

If you don’t already have attackers in your environment you will soon enough, so we have been spending a lot of time with clients figuring out how to respond in this age of APT (Advanced Persistent Threat) attackers and other attacks you have no shot at stopping. You need to detect and respond more effectively. We call this philosophy “React Faster and Better”, and have finally documented and collected our thoughts on the topic.

SIEM and Log Management platforms have seen significant investment, and the evolving nature of attacks means end users are looking for more ways to leverage their security investments. SIEM/Log Management does a good job of collecting data, but extracting actionable information remains a challenge. In part this is due to the “drinking from the fire hose” phenomenon, where the speed and volume of incoming data make it difficult to keep up. Additionally, the data needs to be pieced together with sufficient reference points from multiple event sources to provide context. But we find that the most significant limiting factor is often a network-centric perspective on data collection and analysis. As an industry we look at network traffic rather than transactions; we look at packet density instead of services; we look at IP addresses rather than user identity. We lack context to draw conclusions about the amount of real risk any specific attack presents.
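
A toy example of what identity-centric enrichment looks like – our sketch, where the lookup tables are hypothetical stand-ins for DHCP, VPN, and directory data:

```python
# Hypothetical mappings a real deployment would build from DHCP leases,
# VPN sessions, and directory services.
IP_TO_USER = {"10.1.4.22": "jsmith", "10.1.7.9": "svc-backup"}
USER_RISK = {"jsmith": "standard", "svc-backup": "privileged"}

def enrich(event):
    """Turn 'this IP did something' into 'this user did something',
    which is what triage decisions actually need."""
    user = IP_TO_USER.get(event["src_ip"], "unknown")
    event["user"] = user
    event["user_risk"] = USER_RISK.get(user, "unknown")
    return event

raw = {"src_ip": "10.1.7.9", "action": "bulk_file_read", "bytes": 10_000_000}
print(enrich(raw))  # a privileged service account reading bulk data: high priority
```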

Anyone worried about security and/or compliance has probably heard about Security Information and Event Management (SIEM) and Log Management. But do you really understand what the technology can do for your organization, how the products are architected, and what is important when trying to pick a solution for your organization?

Unfortunately far too many end user organizations have learned what’s important in SIEM/LM the hard way – by screwing it up. But you can learn from the pain of others, because we have written a fairly comprehensive paper that delves into the use cases for the technology, the technology itself, how to deploy it, and ultimately how to select it. We assembled this paper from the Understanding and Selecting a SIEM/Log Management blog series from June and July 2010.

The Business Justification for Data Security is one of our more important pieces of research. It describes how to evaluate data security investments, map the potential investment to your business needs, and then build a business justification case. It starts with a discussion of data security issues, then reviews alternative models (and their flaws), and finishes by presenting our justification methodology. The whitepaper is attached.

Simple website compromises can feel like crimes with no clear victims. Who cares if the Joey’s Bag of Donuts website gets popped? But that is not a defensible position any more. Attackers don’t just steal data from these websites – they also use them to host malware, command and control nodes, and proxies to defeat IP reputation systems.

Even today, strange as it sounds, far too many websites have no protection at all. They are built on vulnerable technologies without a thought for securing critical data, and then let loose in a very hostile world. These sites are sitting ducks for script kiddies and organized crime.

The next chapter in our Threat Intelligence arc, which started with Building an Early Warning System and then delved down to the network in Network-based Threat Intelligence, now moves on to the content layer. Or at least one layer. Email continues to be the predominant initial attack mechanism. Whether it is to deliver a link to a malware site or a highly targeted spear phishing email, many attacks begin in the inbox.

So we thought it would be useful to look at how a large aggregation of email can be analyzed to identify attackers and prioritize action based on the adversaries’ mission. In Email-based Threat Intelligence we use phishing as the jumping-off point for a discussion of how email security analytics can be harnessed to continue shortening the window between attack and detection.
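
As a small illustration of the kind of analysis we mean (our sketch, not the paper's methodology – the folder layout is an assumption), the snippet below counts the destination domains across a corpus of reported phishing emails; clusters of one domain suggest a single campaign worth blocking first:

```python
import email
import re
from collections import Counter
from pathlib import Path

URL_RE = re.compile(r"https?://([\w.-]+)")

def phishing_domains(maildir):
    """Count destination domains across reported phishing emails saved as
    .eml files (multipart bodies skipped for brevity)."""
    counts = Counter()
    for msg_file in Path(maildir).glob("*.eml"):
        msg = email.message_from_bytes(msg_file.read_bytes())
        body = msg.get_payload(decode=True) or b""
        counts.update(URL_RE.findall(body.decode("utf-8", errors="ignore")))
    return counts

# e.g. phishing_domains("reported_phish/").most_common(10)
```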

Since we haven’t been able to compile these into a paper, here is a list of links to our latest cloud security and DevOps content.

Visible devices are only some of the network-connected devices in your environment. There are hundreds, quite possibly thousands, of other devices you don’t know about on your network. You don’t scan them periodically, and you have no idea of their security posture. Each one can be attacked, and might provide an adversary an opportunity to gain a foothold in your environment. Your attack surface is much larger than you thought. In our Shining a Light on Shadow Devices paper, we discuss how attacks on these devices can become an issue on your network, along with tactics for gaining visibility, and then control, over all these network-connected devices.
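
A crude way to start shining that light – a sketch with assumed addresses; real discovery would be passive and far more thorough – is sweeping a subnet and diffing whatever answers against your asset inventory:

```python
import ipaddress
import socket

KNOWN_ASSETS = {"10.0.1.10", "10.0.1.11"}  # your managed-device inventory

def responds(ip, port=80, timeout=0.3):
    """Crude liveness probe via a TCP connect attempt."""
    try:
        with socket.create_connection((str(ip), port), timeout=timeout):
            return True
    except ConnectionRefusedError:
        return True   # something answered, even with the port closed
    except OSError:
        return False  # timeout or unreachable

def find_shadow_devices(cidr="10.0.1.0/28"):
    live = {str(ip) for ip in ipaddress.ip_network(cidr).hosts() if responds(ip)}
    return live - KNOWN_ASSETS  # answering on the wire, missing from inventory

# print(find_shadow_devices())
```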

Our Incident Response in the Cloud Age paper digs into the impact of the cloud, faster and virtualized networks, and threat intelligence on your incident response process. Then we discuss how to streamline response, given the shortage of people to perform the heavy lifting of incident response. Finally we bring everything together with a scenario to illuminate the concepts.
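
As one example of that streamlining, response steps that took hours in a data center can be a few API calls in the cloud. The sketch below (ours, assuming the AWS boto3 SDK, configured credentials, and a pre-built quarantine security group) isolates a suspect instance and snapshots its volumes for forensics:

```python
import boto3  # AWS SDK; assumes credentials and region are configured

def quarantine_and_snapshot(instance_id, quarantine_sg_id):
    """Isolate a suspect EC2 instance and preserve its disks for forensics."""
    ec2 = boto3.client("ec2")
    # Swap every security group for a restrictive quarantine group,
    # cutting the attacker off without destroying volatile state.
    ec2.modify_instance_attribute(InstanceId=instance_id,
                                  Groups=[quarantine_sg_id])
    # Snapshot each attached EBS volume before anything else changes.
    reservations = ec2.describe_instances(InstanceIds=[instance_id])
    instance = reservations["Reservations"][0]["Instances"][0]
    for mapping in instance.get("BlockDeviceMappings", []):
        if "Ebs" in mapping:
            ec2.create_snapshot(VolumeId=mapping["Ebs"]["VolumeId"],
                                Description=f"IR evidence: {instance_id}")
```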

Our Building a Threat Intelligence Program paper offers guidance for designing a program and systematically leveraging threat intelligence. This paper is all about turning tactical use cases into a strategic TI capability to enable your organization to detect attacks faster.