Ongoing cross-training, threat information sharing, executive support and a strong threat modeling infrastructure help the company's security and development staff work collaboratively.

How would you describe the relationship between your organization’s security and development teams? Chances are, you’d use words like “tense” or “distrustful.” That’s because the two groups often feel they are working at cross-purposes and getting in each other’s way. Security sees itself as fixing the vulnerabilities that developers create, while developers see security as a series of speed bumps that keep them from reaching their milestones on schedule.

That’s the crux of the problem. Why can’t there be one set of shared goals for both teams? Software giant Microsoft believes it has achieved a common purpose between its development and security operations, and that this shared purpose has resulted in better security for both its internal and commercial software and services.

Microsoft’s approach is simple and is based on good, consistent training and communication. Executing that approach is not so simple. It requires buy-in from both groups, ongoing training, effective communication and, importantly, a strong endorsement from executive management.

CSO recently spoke with Bret Arsenault, Microsoft’s CISO, and Bharat Shah, vice-president for security engineering in Microsoft’s cloud and AI division, about how the company’s developers and security professionals collaborate to build security into its tools and products.

Move to the cloud a driving factor

Microsoft’s historical dominance on the desktop and networks has made it a target for hackers, so making its products secure has long been a priority. It’s estimated that more than a billion people use Windows on their desktops alone.

As those products have moved to the cloud, the stakes around security are now higher. Windows 10 has 39% of the worldwide market share for desktop operating systems. In the first three months of 2019, Microsoft claims its commercial cloud business (including Azure, Office 365 and Dynamics 365) grew by 41% with $9.6 billion in sales. Customers of these products and services expect them to be exceptionally secure.

“We've been building security into products for a very long time at Microsoft,” says Shah. “What has changed in the last 10‑odd years is that we've moved to the cloud. Cloud means we're continuously updating our products and changing our products. Everything we used to do we've had to transform in this new world, where a lot of the security things we want engineers to adopt, we've operationalized, built into our engineering systems.”

“I don't view security as fundamentally any different than, say, reliability or availability. When we moved to the cloud, we moved from ‘eliminate all the bugs’ to be resilient, to recover if something were to go wrong. The same principle applies for security,” says Shah.

The cloud has made that easier in some ways. For example, Shah says the cloud allows Microsoft to learn about and react to cyberattacks more quickly. “Previously, we would give our products to customers and their IT, and their IT would be facing cyberattacks. We probably didn't know about it as quickly as we do now in the cloud,” he says.

How security evolved at Microsoft

Before he became CISO, Arsenault was the CTO for product security, and the company had no CISO. “[That was] before we decided to break that up and put security into the DNA of all the products we build. We had it all centralized.” Now, Arsenault has responsibilities similar to those of other CSOs, such as reporting on security operations to Microsoft’s board and running traditional IT security. He also works with all the other operations teams, including Shah’s security operations team, and reviews where the company is with security hygiene each month with the leadership team.

In addition to being accountable for operational security across the company, Arsenault is also responsible for business continuity and disaster recovery, running a program for system resiliency. “I get to work with Bharat and the leaders in Office in other areas, where we run this sort of governance council called the Information Risk Management Council,” he says. “Bharat is the executive sponsor for that from O365.” Other groups on the council include Bing, Office, Azure and other products.

“Originally, we had the secure development lifecycle, which was really the philosophy of how to do threat modeling and ensure code quality in the boxed products we ship. Then you go to 2008, where we started doing online services,” says Arsenault. “Now our security engineering teams are doing operational security as well as service and product security. We review that with every team every month and make sure we're there.”

To better enable sharing of data and best practices, Microsoft decided to consolidate its security operations centers (SOCs) about four years ago. “I had my own security operations center, Bharat had a security operations center,” says Arsenault. Now, a single shared CDOC [Cyber Defense Operations Center] gives all the teams what Arsenault calls a “fusion center,” letting them better leverage the data and “feed it right back into the life‑cycle of product development.”

Cross-training security and IT teams

A lack of understanding and poor communication can doom the relationship between security and development teams. The two groups need to share knowledge, and they need to feel empowered to help each other achieve shared goals. To that end, Microsoft has created an ongoing Strike training program. “How you change and drive a culture – change the DNA – is through education and a set of behaviors and measurement,” says Shah.

Microsoft looks at that change from three perspectives. First is training for all employees through the standards of business conduct, which Arsenault says always includes security training. The next level is what Microsoft calls “security foundations,” which addresses security for all employees in greater depth.

The third level adds Strike training, designed exclusively for Microsoft engineers. It is closed‑session training that walks them through what threat actors are doing and helps them understand the global threat landscape.

These Strike sessions train developers and engineers to understand the reasons behind Microsoft's security practices, the techniques and tactics hackers use, and the engineering tools available to them. The goal is to help them build a network of peers and resources they can leverage to ensure that security is built into everything they do. A typical agenda would include content like:

A keynote describing security strategy

A keynote detailing a recent security event covered in the news

Breakout sessions on topics like authentication and authorization, fundamentals of secure engineering in the cloud, threat modeling, new privacy legislation, secrets management, red team learnings, proper procedures for use of third-party software

“How do you make sure you're doing the right thing with code, with identity, with secrets, with all the other things?” says Arsenault. “We're very prescriptive. That training is run by every engineering team. It's a little different than the engineering you do for Bing and the engineering in Azure than the engineering in IT work.”

While Arsenault sees training as a critical factor in bringing IT and security together, what makes it all work are the monthly reviews with the operations teams, how Microsoft manages and assesses risk, and how that risk assessment is reported from the engineers up to the board and the risk management council.

“Once a month, my boss reviews the security scorecards with each of these directs and pushes things that you think are important for the team,” says Shah. “That is a culture that is top‑down. Training is super awesome at the bottom‑up level.”

Security assurance specialists on Shah’s team see the mistakes and compromises in the code and look at all the code reviews. They then pass what they learn to the rest of the engineering team. In some cases, this has helped “eliminate a whole class of bugs,” says Shah. “Occasionally, what we learn, we push it back into our tools group or into our compiler group, into even things like static analysis, to just catch these things at scale.”

In-house security services for engineers

A large part of what Shah and his team do is build security services for Microsoft’s engineers, allowing them to “blend” security into their engineering processes and systems. “Number one, we build these high‑scale services that help our engineers get security right.”

One of those services addresses vulnerability management and scanning. “Azure runs across more than 90 data centers, millions of VMs,” says Shah. “We don't have the option of scanning a VM at a time, so we've built a large‑scale vulnerability scanning infrastructure, where we can scan twice, thrice, or even four times a day, if we have to, at scale.” This allows Microsoft’s engineers to quickly find and fix unpatched VMs or services.
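Shah doesn’t describe the scanner’s internals, but the core idea of fanning scans out across a fleet in parallel, rather than scanning one VM at a time, can be sketched. Every name below (the scan function, its result format, the fleet) is a hypothetical illustration, not Microsoft’s actual service:

```python
import concurrent.futures

def scan_vm(vm_id: str) -> dict:
    """Placeholder for a real vulnerability scan of a single VM."""
    # A real scanner would inspect installed packages, patch levels, open
    # ports, etc. Here we fake a result so the sketch is self-contained.
    return {"vm": vm_id, "unpatched": vm_id.endswith("7")}  # toy rule

def scan_fleet(vm_ids: list[str], max_workers: int = 32) -> list[dict]:
    """Scan every VM concurrently and return all findings, in order."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(scan_vm, vm_ids))

def unpatched(results: list[dict]) -> list[str]:
    """Filter results down to the VMs that need attention."""
    return [r["vm"] for r in results if r["unpatched"]]

fleet = [f"vm-{n}" for n in range(20)]
findings = unpatched(scan_fleet(fleet))
```

Run several times a day, a loop like this surfaces unpatched machines quickly; the real engineering challenge Shah alludes to is doing the same across more than 90 data centers and millions of VMs.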

Threat modeling part of the development process

At a recent security event, one CISO commented that he worried about his team doing something that would break a software build. If his team broke the build, he believed, it would break the trust he had built up with the DevOps team.

Because Microsoft has trained its engineers on the security risks, they better understand when breaking a build is necessary, according to Shah. “We start off at the design time by threat modeling and following design practices. You're coding and you're halfway through and we find something wrong, or some static analysis tool says, ‘Hey, this is a buffer over‑run,’ we absolutely will break the build,” he says. “Yeah, occasionally we have false‑positives, somebody gets grumpy, but we have built it into our engineering system to try and get things right.”
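The break-the-build gate Shah describes can be sketched as a simple CI check: fail the build when a static-analysis report contains a finding at or above some severity bar. The finding format, severity levels, and threshold below are illustrative assumptions, not Microsoft’s actual tooling:

```python
# Hedged sketch of a "break the build" gate over static-analysis output.
SEVERITY = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def should_break_build(findings: list[dict], threshold: str = "high") -> bool:
    """Return True if any finding meets or exceeds the threshold severity."""
    bar = SEVERITY[threshold]
    return any(SEVERITY[f["severity"]] >= bar for f in findings)

report = [
    {"rule": "buffer-overrun", "file": "parser.c", "severity": "critical"},
    {"rule": "unused-variable", "file": "util.c", "severity": "info"},
]

if should_break_build(report):
    print("Build broken: fix the flagged issues before merging.")
```

The threshold is the tuning knob Shah hints at: set it too low and false positives make “somebody get grumpy”; set it too high and a buffer overrun ships.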

Arsenault notes that an order to break a build to fix a security issue should ideally come from the development team, which is why threat modeling is part of the process. “The CISO should never break a build,” he says.

When engineering does decide to break a build, that can become a teaching moment. Arsenault cited a recent example where an engineering team shut down part of a release because it found a bug and recognized the impact it could have. That team was proud of what they had done, but Arsenault suggested a better way to handle it in the future. “I was like, ‘I am proud of you, but you overdid it. There's a way you could've continued moving on the path, based on the risk profile, and actually fixed it within a day and not caused as much disruption as you did.’”

“That goes back to changing the culture, where the engineers care so much about it,” he adds. “That's been the journey over the last ten years I've been in the role. I'm more interested now when I'm putting the reins on it, because they're so aggressive about it, which is a fantastic place to be in.”

Microsoft’s threat modeling process relies on what Arsenault calls “human intelligence” and “signal intelligence.” The signal intelligence comes from an automated threat system that uses 150 different feeds from sources such as industry research and anti-virus vendors, plus internal feeds from cloud, endpoint security and identity systems. That data is then fed into the product engineering systems.

“If there's something that's significant [in that feed], then we call an audible,” says Arsenault. “If we saw something that was new and different, we would call an audible. We would reach out to those security executives and say, ‘Hey, we have something that is disruptive, or has to be dealt with in real‑time.’”

On the human side, threats might be discussed in regular monthly meetings to assess the risk or decide whether to call an audible. “The better we are at doing [threat modeling], we don't use the audible button,” says Arsenault.
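The “signal intelligence” side of this process can be sketched as feed fusion: merge indicators from many feeds, deduplicate them, and flag anything not seen before as a candidate for an “audible.” The feed contents, indicator format, and novelty rule below are assumptions made purely for illustration:

```python
def fuse_feeds(feeds: list[list[str]]) -> set[str]:
    """Union all feed indicators into one deduplicated set."""
    merged: set[str] = set()
    for feed in feeds:
        merged.update(feed)
    return merged

def audible_candidates(current: set[str], known: set[str]) -> set[str]:
    """Indicators that are new relative to everything seen before."""
    return current - known

known_indicators = {"evil.example", "198.51.100.7"}
today = fuse_feeds([
    ["evil.example", "203.0.113.9"],   # e.g., an anti-virus vendor feed
    ["203.0.113.9", "198.51.100.7"],   # e.g., internal endpoint telemetry
])
new = audible_candidates(today, known_indicators)  # only the unseen indicator
```

In Arsenault’s terms, an empty `new` set means the monthly human review suffices; a non-empty one is “something that is new and different,” worth escalating to the security executives in real time.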

A better sense of empathy with security

The engineers on Shah’s team build large-scale services just as engineers in other Microsoft groups do. “In that sense, we have a better sense of empathy when security becomes a little bit of a pain in the neck.” That, along with what security shares with them about threat risk, gives them needed insight into why they need to develop with security in mind.

Shah notes that building enterprise cloud services requires a focus on reliability and availability, and lessons learned there help improve security. “You see somebody having a small downtime. You point it out to the engineers. They go off and say, ‘OK, here's how we are going to mitigate it. Here's how we're going to build redundancy. Here's how we're going to build isolation,’ things like that,” he says. “Putting engineers right in front of challenges is just magical in terms of getting the right things to happen. Once engineers are pointed to the right problem, they do the right thing. This is the biggest asset we have at the end of the day.”