Marcus Ranum: Gunnar, your blog (1 Raindrop) is one of my favorite security forums, since you seem to be as comfortable with “the big picture” strategic problems as with the practicalities, and you do it fluently and coherently -- do you realize how rare that is?

Gunnar Peterson: Thanks for the kind words on the blog. In terms of bouncing between big picture and practical issues, I think this is a must in security. We’re vulnerable to poor design and implementation. Getting the level of abstraction calibrated correctly is one of the enduring challenges in infosec. How many times have we seen a big picture policy or architecture document essentially filled with low-level configuration settings that offer no strategic guidance? Conversely, we often see low-level implementations where the assumptions inherent in the implementation cascade back up through the big picture and ripple through the whole security architecture: “Well of course for this little widget to run you have to open XYZ firewall ports, disable the sandbox, and send everything in the clear.”

So the challenge in security architecture is clear separation between the views and artifacts for the big picture versus the practical views, and then working toward making the decisions explicit and then consistent across both.

Marcus: The list of things I’ve been dying to hash out with you is impossibly long, but I thought I’d take the opportunity to dig into a few topics where your views puzzle me a bit.

First off, you frequently point out – quite correctly – that the security world’s answer to problems is firewalls and SSL and that’s all we’ve come up with in the last 16 years. But here’s the problem: What if that’s all there is? It seems to me the real answers have always been implausible: software quality, system design, secure-by-default implementations, etc. In other words, stuff nobody wants. Is the “firewalls, SSL” mantra really just, as Henry Rollins would say, “The menu today is fish” – anything else is nothing you want to seriously consider? I know I’m guilty on this issue; I’ve been complaining about the very idea of doing “secure transactions” on an operating system that can’t even enforce a kernel/user boundary, but the whole world is not going to suddenly plow under their investment in today’s insecure operating systems and switch to something better -- if something better existed.

Gunnar: “What if that’s all there is?” is a fair point for many Web apps, and in fact, many companies struggle to deploy their firewalls + SSL architecture at all, much less anything more advanced such as secure-by-default implementations. Remember the Google and China story from almost two years back? We heard all the stories about the ingenious hacks, but what did Google do to address them? They turned on SSL!

I think the “firewalls + SSL is all we need” mindset is rooted in a couple of things. First is career risk, job security and cost. The auditor is going to ask many questions about firewalls and a couple about SSL. You have to be able to address these--no question. The problem, though, I think, is more a legacy mindset where companies assume they have a perimeter that is somehow enforceable, and that the separation is buying them something in terms of security. The reality is developers have been tunneling through and around the firewalls for a decade-plus, so if you take a hard look at the policies the firewalls are enforcing for you, it’s not much. The firewall has become a ritual, and it gets interpreted as, “we’re inside the firewall, so everything is OK.” This is used to justify egregious security design against core assets, with predictable results.

Doing something other than firewalls + SSL architecture means the infosec team must engage in design and architecture, and this is something that historically they are not focused on. On the whole, infosec has a very operations-centric view of the world. Obviously, this is not all bad, but it tends to explain why and where a lot of these gaps spring from.

I see three areas where infosec can play a role in concrete improvements in security architecture, and do so in a cost-effective manner.

Identity architecture: Many of the weaknesses in firewalls + SSL stem from the total lack of identity in these systems; SAML, OAuth and other identity protocols have stepped in to fill this gap, and enterprises can and should deploy them today to improve their overall security posture.
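To make the contrast concrete, here is a minimal sketch of the identity-first idea: a request is authenticated by a bearer token issued by an identity provider, rather than trusted because it arrived from "inside the firewall." The token store, token strings and claims are hypothetical stand-ins for what a real SAML IdP or OAuth authorization server would provide.

```python
# Sketch: identity-based request authentication instead of perimeter trust.
# TOKEN_STORE is a hypothetical stand-in for an identity provider's
# token-introspection endpoint.

import time

# Hypothetical issued tokens: token -> (subject, expiry timestamp)
TOKEN_STORE = {
    "tok-abc123": ("alice", time.time() + 3600),   # valid for an hour
    "tok-expired": ("bob", time.time() - 60),      # already expired
}

def authenticate(bearer_token):
    """Return the authenticated subject, or None if the token is invalid."""
    entry = TOKEN_STORE.get(bearer_token)
    if entry is None:
        return None                  # unknown token: reject
    subject, expires_at = entry
    if time.time() >= expires_at:
        return None                  # expired token: reject
    return subject
```

The point of the sketch is the policy, not the mechanism: a request with no valid identity is rejected no matter what network segment it came from.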

Authorization: Authorization is deeply embedded in millions of lines of spaghetti code. This has several negative consequences for the enterprise: you can’t audit it, you don’t know its structure or behavior, and you get a whole class of nasty emergent bugs that rear their heads at runtime. Attribute-based access control through standards such as XACML gives enterprises a way to externalize their authorization rules, pulling them out of code and into configuration so they can be designed and audited.

Logging and monitoring: This area really plays to infosec’s historical operational strength. Logging and monitoring need to occur at the application, data and identity layers to get a real view of what’s going on in the system. Many enterprises have done the easy yet expensive part by standing up SIEM/log management systems, but the hard and important part is integration with the applications. So again, you have the issue of infosec needing to establish itself as a design/build partner to ensure we get the proper hooks into the applications to get the data we need to analyze how they’re actually being used.
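What an application-layer "hook" might emit can be sketched as a structured security event, so the SIEM has identities and resources to correlate rather than raw network logs. The field names and event taxonomy here are assumptions for illustration, not a standard.

```python
# Sketch: a structured application-layer security event for SIEM ingestion.
# Field names are illustrative; in practice the record would be shipped
# to the log pipeline rather than returned.

import json
import time

def security_event(event_type, subject, resource, outcome):
    """Build one structured audit record as a JSON string."""
    return json.dumps({
        "ts": time.time(),
        "event": event_type,     # e.g. login, authz_decision, data_access
        "subject": subject,      # the authenticated identity, not just an IP
        "resource": resource,    # what was touched
        "outcome": outcome,      # success / denied / error
    })
```

The design choice worth noting is that the event carries the authenticated subject and the resource, which is exactly the application- and identity-layer context a network-only view cannot supply.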

Marcus: Which brings me to the next pain point: It seems to me that, as customers of computing, we keep making the same decisions over and over. I won’t call them mistakes, because we’ve survived so far, albeit at tremendous costs, but the consequences of those decisions keep coming back to haunt us.

I’m thinking here of how we might start with an operating system that has a fair security model, but turn it off because users need runtime-loadable device drivers, and presto, you get malware. Or, another example: You have virtualization, and the new unit of computing becomes “a machine” instead of “a process” because that OS process separation guarantee broke down, and now we’ve got stacks and stacks of virtual machines that are just as mismanaged as the physical boxes used to be. And, presumably, the guarantee that virtual machines are separate will break down, in the same way--and for the same reasons--that the process separation guarantee broke down in individual operating systems. We’ve got horrible security in the operating systems at the endpoints, and we’re migrating that horribleness to the servers for the same reasons.

Gunnar: Yes, “This is how we have always done it” is a dangerous precedent--it assumes what you are doing is already good enough and the attackers won’t learn anything new. In fact, what you are doing probably isn’t working very well, and we know that attackers are getting better and bolder all the time. The assumptions about “what’s good enough for security” that were made in building out the current generations of software are very naïve and will eventually be reconciled with reality, probably sooner rather than later. Security, and especially access control, is like a chess opening: you put your pawns, knights and bishops where you think it makes sense, but guess what? Black gets to move too! No one thinks in a chess game, “Well, I have my pawns, knights and bishops in the perfect spot; now no one can beat me.” But people make these assumptions all the time in security architecture.

Marcus: Exactly! I am constantly amazed that people’s default assumption about the future is it’s going to look like the present, only more. That kind of thinking is what keeps us from thinking about potential game-changing shifts in technology or how it’s used. That’s right back to your point about “firewalls + SSL” -- firewalls worked until programmers got annoyed with them and wrote firewall-friendly applications to make the firewall effectively vanish.

I want to keep hammering on the topic of system administration. Isn’t that really what the cloud is all about? Scriptable APIs for creating and configuring systems, elastic systems created by massive automation, ruthlessly centralized configuration management–aren’t these the kind of things good systems administrators have always wanted to, or tried to do? Is the cloud management’s reaction to the high cost of good systems administrators, or is it the business’ reaction to the fact that good systems administrators occasionally say, “No, that’s a bad idea”? This also seems like setting ourselves up for a cyclical move between centralized IT (and price-hikes resulting from lock-in) and departmental computing (and sticker shock resulting from security and system administration costs) -- where does it stop?

Gunnar: I think that’s right for a lot of cloud deployments. There’s also an emerging discipline called DevOps that is sort of like the return of the Unix scripting sysadmin of the ’90s in terms of power, but the scale is a lot larger. I’m not sure decentralization and departmental computing is such a bad idea on its own, but given the small percentage of operational and security talent in most enterprises relative to the project teams, it makes it hard to get in the game unless the department has its own security resources.
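The "scriptable, ruthlessly centralized configuration" idea Marcus describes can be sketched as declarative desired state plus an idempotent convergence step, which is the core pattern behind DevOps tooling. The host names and settings below are invented for illustration.

```python
# Sketch: declarative configuration management in miniature.
# Desired state is data; converge() computes the changes needed to
# reach it, and running it twice produces no further changes (idempotence).

DESIRED = {
    "web-01": {"firewall": "on", "ssh_root_login": "off"},
    "web-02": {"firewall": "on", "ssh_root_login": "off"},
}

def converge(actual):
    """Return, per host, only the settings that differ from desired state."""
    changes = {}
    for host, desired_settings in DESIRED.items():
        current = actual.get(host, {})
        diff = {k: v for k, v in desired_settings.items()
                if current.get(k) != v}
        if diff:
            changes[host] = diff
    return changes
```

A host already in the desired state yields no changes, which is what makes this kind of automation safe to run continuously at scale.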

A big challenge to security architecture is system knowledge. Infosec teams are spread across many development and operational teams, which is good in the sense they get a broad perspective, but for any one given project, the infosec team has to rely heavily on the development and operational talent they work with. This means security results are influenced by the abilities of developers and system administrators to correctly depict the system, its problems and solution opportunities.

One thing the cloud movement should do in a positive sense is bring up a lot of the buried assumptions about what is valuable and give the infosec team a platform to make positive contributions about how to protect the most valuable assets. I think there is more willingness across the enterprise to hear the infosec team’s thoughts and ideas than at any other time, because non-IT security people realize more than ever their reliance on IT systems, and they realize on some level that cloud poses new threats. The challenge to infosec is to offer prescriptions, not obsess over diagnosis. The prescriptions should be patterns that can be readily implemented for a reasonable cost and should be vetted by the infosec team.

Marcus: Let’s shift gears a bit to the level of national interest. Obviously, cyberwar is a concern for many governments – how do you see that playing out with outsourcing? Do you see that as an inherent contradiction?

Gunnar: The supply chain for enterprises working in the national interest is quite long indeed. I tend to think economics, trade and incentives work the best over time. In these scenarios, it’s all about giving up a little control to get something else of value. But, if you look at the real supply chains for real companies, this has already happened and businesses will continue to move in this direction where it makes sense.

What worries me more about outsourcing is not Widget X or Widget Y’s design being stolen; those are point-in-time things. I think that when outsourcing goes too far, you lose the engineering skill sets, entrepreneurialism and other assets that have made this country so successful. If we become a nation of CFOs, what is the point of that?

Marcus: I absolutely agree with you, there. One of the big problems I’ve always seen with becoming de-skilled is you’re in a horrible negotiating position when you realize you’ve got a very expensive doo-dad you don’t understand, can’t understand, can’t fix – and are at the mercy of market forces. I used to have a car that had computer-controlled everything, and I knew if I took it to the shop and they told me, “It needs a new haydiddlediddle for $4,000,” it wasn’t even meaningful for me to say, “I want to get a second opinion.” That seems to me to be where we’re heading with some of these new network-centric services: What do you tell an organization if they want to store all their stuff somewhere in a cloud, and tear down the systems they used to use to keep backups of their data?

Gunnar: I met someone recently from a big cloud company, and when they found out I was in security, they said they had a lot of respect for the “weird yet crucial” world of infosec. I thought “weird yet crucial” summed it up perfectly. In a perfect world, there would be no infosec department, but we don’t live in a perfect world. Security is about dealing with everything else that doesn’t work -- the edge cases and the risk.

So my advice is--make security architecture a first-class citizen in your organization. The kinds of things you will need to excel at to be secure in the cloud are things that require real security expertise. The team should have authority to work at a high level on strategy and governance; and enough resources to execute at a low level with developers, testers and the like. This isn’t as easy as 1-2-3-cloud! But, it’s a better way to ensure you have a margin of safety in your cloud applications. After all, it’s your business.
