Imagine a world where product owners, Development, QA, IT Operations, and Infosec work together, not only to help each other, but also to ensure that the overall organization succeeds. By working toward a common goal, they enable the fast flow of planned work into production (e.g., performing tens, hundreds, or even thousands of code deploys per day), while achieving world-class stability, reliability, availability, and security.

In this world, Infosec is always working on ways to reduce friction for the team, creating the work systems that enable developers to be more productive and get better outcomes. By doing this, small teams can fully leverage the collective experience and knowledge of not just Infosec but also QA and Ops in their daily work, deploying safely, securely, and quickly into production without being dependent on other teams.

This enables organizations to create a safe system of work, where small teams are able to quickly and independently develop, test, and deploy code, delivering value to customers safely, securely, and reliably. This allows organizations to maximize developer productivity, enable organizational learning, create high employee satisfaction, and win in the marketplace.

Instead of inspecting security into our product at the end of the process, we will create and integrate security controls into the daily work of Development and Operations, so that security is part of everyone's job, every day.

The Need for Force Multiplication

One interpretation of DevOps is that it arose from the need to enable developer productivity: as the number of developers grew, there weren't enough Ops people to handle all the resulting deployment work.

This shortage is even worse in Infosec. James Wickett vividly described why Infosec needs DevOps:

The ratio of engineers in Development, Operations, and Infosec in a typical technology organization is 100:10:1. When Infosec is that outnumbered, without automation and integrating information security into the daily work of Dev and Ops, Infosec can only do compliance checking, which is the opposite of security engineering; and besides, it also makes everyone hate us.

Getting Started

1. Integrate security into development iteration demonstrations.

Here's an easy way to prevent Infosec from being a blocker at the end of the project: invite Infosec into product demonstrations at the end of each development interval. This helps everyone understand team goals as they relate to organizational goals, lets Infosec see implementations as they are built, and gives them the chance to offer input on what's needed to meet security and compliance objectives while there's still ample time to make corrections.

2. Ensure security work is in our Dev and Ops work tracking systems.

Infosec work should be as visible as all other work in the value stream. We can do this by tracking it in the same work tracking system that Development and Operations use daily, so it can be prioritized alongside everything else.
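Many teams go a step further and script this so that scanner findings land in the tracker automatically. The sketch below files a finding as an issue through Jira's REST API; the base URL, project key, credentials, and field values are placeholders, and your tracker's fields may differ.

    # Hypothetical sketch: file an Infosec finding in the same Jira project
    # that Dev and Ops use, so it is prioritized alongside all other work.
    import requests  # third-party HTTP library

    JIRA_URL = "https://jira.example.com"   # placeholder
    AUTH = ("infosec-bot", "api-token")     # placeholder credentials

    def file_security_finding(summary: str, description: str) -> str:
        """Create a Jira issue for a security finding and return its key."""
        payload = {
            "fields": {
                "project": {"key": "OPS"},          # the shared Dev/Ops project
                "summary": f"[security] {summary}",
                "description": description,
                "issuetype": {"name": "Bug"},
            }
        }
        resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                             json=payload, auth=AUTH, timeout=10)
        resp.raise_for_status()
        return resp.json()["key"]

    if __name__ == "__main__":
        key = file_security_finding("TLS certificate expires in 14 days",
                                    "Renew the cert for the payments service.")
        print(f"Filed {key}")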

3. Integrate Infosec into blameless post-mortem processes.

Also consider doing a postmortem after every security issue to prevent a repeat of the same problem. In a presentation at the 2012 Austin DevOpsDays, Nick Galbreath, who headed up Information Security at Etsy for many years, described how they treated security issues: "We put all security issues into JIRA, which all engineers use in their daily work, and they were either 'P1' or 'P2,' meaning that they had to be fixed immediately or by the end of the week, even if the issue is only an internally-facing application."

4. Integrate preventive security controls into shared source code repositories and shared services.

Shared source code repositories are a fantastic way to enable anyone to discover and reuse the collective knowledge of the organization, not only for code, but also for toolchains, deployment pipelines, standards, and security. Security information should include any mechanisms or tools for safeguarding applications and environments, such as libraries pre-blessed by security to fulfill their specific objectives. Also, putting security artifacts into the version control system that Dev and Ops use daily keeps security needs on their radar.
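For example, a "pre-blessed" library might wrap password hashing so product teams never hand-roll it. The sketch below is a minimal, standard-library-only illustration of what such a shared module could look like; the module name and scrypt parameters are illustrative, not a vetted standard.

    # blessed_crypto.py - hypothetical security-approved helper that teams
    # reuse from the shared repository instead of hand-rolling crypto.
    import hashlib
    import hmac
    import secrets

    # Parameters chosen for illustration; your security team would pin these.
    _SCRYPT = dict(n=2**14, r=8, p=1)

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, hash) for storage; a fresh random salt per password."""
        salt = secrets.token_bytes(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, **_SCRYPT)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        """Constant-time comparison to avoid timing side channels."""
        candidate = hashlib.scrypt(password.encode(), salt=salt, **_SCRYPT)
        return hmac.compare_digest(candidate, digest)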

5. Integrate security into the deployment pipeline.

To keep Infosec issues top of mind for Dev and Ops, we want to continually give those teams fast feedback about potential risks associated with their code. Integrating security into the pipeline means automating as many security tests as possible so that they run alongside all other automated tests. Ideally, these tests run on every code commit by Dev or Ops, even in the earliest stages of a software project.
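One low-friction starting point is a security check written as an ordinary test, so it runs with the rest of the suite on every commit. The sketch below is a deliberately simplified secret scan; real scanners are far more thorough, and the patterns here are illustrative only.

    # test_no_secrets.py - a simplified security check that runs alongside
    # the normal test suite on every commit (e.g., via pytest in CI).
    import pathlib
    import re

    # Illustrative patterns only; production scanners use far richer rules.
    SECRET_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),                       # AWS access key id
        re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),   # private key
        re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I), # hardcoded password
    ]

    def test_repo_contains_no_obvious_secrets():
        offenders = []
        for path in pathlib.Path(".").rglob("*.py"):
            if path.name == "test_no_secrets.py":
                continue  # skip this file; it contains the patterns themselves
            text = path.read_text(errors="ignore")
            for pattern in SECRET_PATTERNS:
                if pattern.search(text):
                    offenders.append(str(path))
        assert not offenders, f"Possible secrets committed in: {offenders}"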

6. Protect the deployment pipeline from malicious code.

Unfortunately, malicious code can be injected into the infrastructure that supports CI/CD. A good place to hide that code is in unit tests, because no one looks at them and because they're run every time someone commits code to the repo. We can (and must) protect deployment pipelines through steps such as the following (a minimal sketch of one such check appears after the list):

Hardening continuous build and integration servers, and ensuring we can reproduce them in an automated manner

Reviewing all changes introduced into version control to prevent continuous integration servers from running uncontrolled code
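As a minimal illustration of the second point, the sketch below refuses to run build scripts whose hashes differ from a reviewed, version-controlled manifest; the file paths and manifest format are assumptions for the example.

    # verify_pipeline.py - hypothetical pre-flight check: refuse to run build
    # scripts whose hashes differ from a reviewed, version-controlled manifest.
    import hashlib
    import json
    import pathlib
    import sys

    MANIFEST = "pipeline-manifest.json"  # assumed format: {"path": "sha256hex"}

    def sha256(path: pathlib.Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def main() -> int:
        expected = json.loads(pathlib.Path(MANIFEST).read_text())
        tampered = [p for p, digest in expected.items()
                    if sha256(pathlib.Path(p)) != digest]
        if tampered:
            print(f"Refusing to build; unreviewed changes in: {tampered}")
            return 1
        print("All pipeline scripts match the reviewed manifest.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())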

7. Secure your applications.

Development testing usually focuses on the correctness of functionality. Infosec, however, often focuses on testing for what can go wrong. Instead of performing these tests manually, aim to generate them as part of automated unit or functional tests so that they can be run continuously in the deployment pipeline. It's also useful to define design patterns to help developers write code to prevent abuse, such as putting in rate limits for services and graying out submit buttons after they've been pressed.
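To make the rate-limit pattern concrete, here is a minimal token-bucket limiter; the capacity and refill rate are illustrative, and a production service would typically keep this state in a shared store rather than in process memory.

    # Hypothetical token-bucket rate limiter: each client gets a bucket that
    # refills at a fixed rate; requests are rejected once the bucket is empty.
    import time

    class TokenBucket:
        def __init__(self, capacity: float = 10, refill_per_sec: float = 1.0):
            self.capacity = capacity
            self.refill_per_sec = refill_per_sec
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill tokens for the time elapsed since the last request.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.refill_per_sec)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # caller should respond with HTTP 429

    # Usage: one bucket per client identifier (illustrative, in-memory only).
    buckets: dict[str, TokenBucket] = {}
    def is_allowed(client_id: str) -> bool:
        return buckets.setdefault(client_id, TokenBucket()).allow()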

8. Secure the software supply chain.

It's not enough to protect our applications, environments, data, and pipelines; we must also ensure the security of our software supply chain, particularly in light of startling statistics about just how vulnerable it is. While the use of and reliance on commercial and open source components is convenient, it's also extremely risky. When selecting software, then, it's critical to detect components or libraries that have known vulnerabilities and to work with developers to carefully select components with a track record of being fixed quickly.
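As one concrete approach, the sketch below checks pinned Python dependencies against the public OSV vulnerability database (https://osv.dev); the requirements-file handling and error handling are simplified for illustration, and dedicated tools such as pip-audit do this far more robustly.

    # check_deps.py - query the OSV database for known vulnerabilities in
    # pinned dependencies (simplified illustration).
    import json
    import urllib.request

    def osv_vulns(name: str, version: str) -> list:
        """Return known vulnerabilities for a PyPI package version via OSV."""
        query = json.dumps({
            "package": {"name": name, "ecosystem": "PyPI"},
            "version": version,
        }).encode()
        req = urllib.request.Request("https://api.osv.dev/v1/query", data=query,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.load(resp).get("vulns", [])

    # Assumes a simple requirements.txt with exact "name==version" pins.
    with open("requirements.txt") as f:
        for line in f:
            if "==" in line:
                name, version = line.strip().split("==")
                vulns = osv_vulns(name, version)
                if vulns:
                    ids = [v["id"] for v in vulns]
                    print(f"{name}=={version}: known vulnerabilities {ids}")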

9. Secure your environments.

We must ensure that all our environments are in a hardened, risk-reduced state. This involves generating automated tests to ensure that all appropriate settings have been correctly applied for configuration hardening, database security, key lengths, and so forth. It also involves using tests to scan environments for known vulnerabilities and using a security scanner to map out the environment.
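For instance, hardening checks can be expressed as ordinary automated tests. The sketch below asserts a couple of SSH hardening settings on a Linux host; the expected values are illustrative and should come from your own hardening baseline.

    # test_ssh_hardening.py - example hardening assertions run as normal tests.
    # The expected settings are illustrative; use your organization's baseline.

    EXPECTED = {
        "PermitRootLogin": "no",
        "PasswordAuthentication": "no",
    }

    def read_sshd_config(path="/etc/ssh/sshd_config") -> dict:
        settings = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    key, _, value = line.partition(" ")
                    settings[key] = value.strip()
        return settings

    def test_sshd_is_hardened():
        actual = read_sshd_config()
        for key, want in EXPECTED.items():
            assert actual.get(key) == want, f"{key} should be '{want}'"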

10. Integrate information security into production telemetry.

Internal security controls are often ineffective in quickly detecting breaches, because of blind spots in monitoring or because no one is examining the relevant telemetry every day. To adapt, integrate security telemetry into the same tools that Development, QA, and Operations use. This gives everyone in the pipeline visibility into how applications and environments are performing in a hostile threat environment, where attackers are constantly attempting to exploit vulnerabilities, gain unauthorized access, plant backdoors, and commit fraud (among other insidious things!).
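A simple starting point is emitting security events as structured logs into the same telemetry stack everyone already watches. The sketch below logs failed logins as JSON lines so an existing dashboard can graph and alert on them; the event and field names are illustrative.

    # security_telemetry.py - emit security events as structured JSON logs so
    # they flow into the same monitoring stack Dev, QA, and Ops already use.
    import json
    import logging
    import time

    logger = logging.getLogger("security")
    logging.basicConfig(level=logging.INFO, format="%(message)s")

    def emit_security_event(event: str, **fields):
        """Log one security event as a single JSON line."""
        record = {"event": event, "ts": time.time(), **fields}
        logger.info(json.dumps(record))

    # Example: a spike in failed logins becomes visible on shared dashboards.
    emit_security_event("login_failed", user="alice", source_ip="203.0.113.7")
    emit_security_event("password_reset_requested", user="alice")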
