PLEASE NOTE: I HAVE PERMANENTLY MOVED MY BLOG TO http://www.rationalsurvivability.com/blog

July 13, 2007

The last few days of activity involving Google and Microsoft have really catalyzed some thinking and demonstrated some very intriguing indicators as to how the delivery of applications and services is dramatically evolving.

I don't mean the warm and fuzzy marketing fluff. I mean real anchor technology investments by the big boys putting their respective stakes in the ground as they invest hugely in redefining their business models to set up for the future.

Enterprises large and small are really starting to pay attention to the difference between infrastructure and architecture and this has a dramatic effect on the service providers and supply chain who interact with them.

It's become quite obvious that there is huge business value associated with divorcing the need for "IT" to focus on physically instantiating and locating "applications" on "boxes" and instead delivering "services" with the Internet/network as the virtualized delivery mechanism.

Google v. Microsoft - Let's Get Ready to Rumble!

My last few posts on Google's move to securely deliver a variety of applications and services represent the uplift of the "traditional" perspective of backoffice SaaS offerings such as Salesforce.com, but they also highlight the migration of desktop applications and utility services to the "cloud."

This really executes on the thin-client, Internet-centric vision from back in the day o' the bubble, when we saw a ton of Internet-borne services such as storage, backup, etc. using the "InternetOS" as the canvas for service.

So we've talked about Google. I maintain that their strategy is to ultimately take on Microsoft -- including backoffice, utility and desktop applications. So let's look @ what the kids from Redmond are up to.

What Microsoft is developing toward with its vision of a CloudOS was just recently expounded upon by one Mr. Ballmer.

Not wanting to lose mindshare or share of wallet, Microsoft is maneuvering to give the customer control over how they want to use applications and, more importantly, how those applications might be delivered. Microsoft Live bridges the gap, taking traditional desktop capability and putting it into the "cloud."

Let's explore that a little:

In addition to making available its existing services, such as mail and
instant messaging, Microsoft also will create core infrastructure
services, such as storage and alerts, that developers can build on top
of. It's a set of capabilities that have been referred to as a "Cloud OS," though it's not a term Microsoft likes to use publicly.

...

Late last month, Microsoft introduced two new Windows Live Services,
one for sharing photos and the other for all types of files. While
those services are being offered directly by Microsoft today, they
represent the kinds of things that Microsoft is now promising will be
also made available to developers.

Among the other application and infrastructure components Microsoft
plans to open are its systems for alerts, contact management,
communications (mail and messenger) and authentication.

...

As it works to build out the underlying core services, Microsoft is
also offering up applications to partners, such as Windows Live
Hotmail, Windows Live Messenger and the Spaces blogging tool.

Combine the advent of "thinner" end-points (read: mobility products) with high-speed, lower-latency connectivity and we can see why this model is attractive and viable. I think this battle is heating up and the consumer will benefit.

A Practical Example of SaaS/InternetOS Today?

So, taking a step back from Google and Microsoft for a minute, let's take a snapshot of how one might compose, provision, and deploy applications and data as a service using a similar model over the Internet with tools other than Live or Google Gears.

Let me give you a real-world example -- deliverable today -- of this capability with a functional articulation of this strategy: on-demand services and applications provided via virtualized datacenter delivery architectures using the Internet as the transport. I'm going to use a mashup of two technologies: Yahoo Pipes and 3Tera's AppLogic.

Yahoo Pipes is "...an interactive data aggregator and manipulator that lets you mashup your favorite online data sources." Assuming you have data from various sources that you want to present, an environment such as Pipes will allow you to dynamically access, transform and present that information any way you see fit.

This means that you can create what amount to applications and services on demand.
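If you want to see what that looks like under the hood, here's a rough sketch -- my own illustration in plain Python, not anything Pipes actually exposes, and the feed URLs are placeholders -- of the fetch/merge/filter dance a Pipes mashup does for you visually:

```python
# Illustrative only: a stdlib-only stand-in for a Pipes-style mashup.
# The feed URLs are placeholders, not real sources.
import urllib.request
import xml.etree.ElementTree as ET

FEEDS = [
    "http://example.com/news/rss.xml",       # hypothetical source 1
    "http://example.org/research/rss.xml",   # hypothetical source 2
]

def fetch_items(url):
    """Pull an RSS feed and yield (title, link) pairs."""
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    for item in tree.iter("item"):
        yield item.findtext("title", default=""), item.findtext("link", default="")

def mashup(feeds, keyword):
    """Aggregate all feeds, keep only items mentioning the keyword."""
    merged = []
    for url in feeds:
        merged.extend(fetch_items(url))
    return [(title, link) for title, link in merged if keyword.lower() in title.lower()]

if __name__ == "__main__":
    for title, link in mashup(FEEDS, "security"):
        print(f"{title} -> {link}")
```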

Let's agree, however, that while you have the data integration/presentation layer, in many cases you would traditionally require a complex collection of infrastructure in which this source data is housed, accessed, maintained and secured.

However, rather than worry about where and how the infrastructure is physically located, let's use the notion of utility/grid computing to dynamically make available an on-demand architecture that is modular, reusable and flexible enough to make this service delivery a reality -- using the Internet as a transport.

3Tera's AppLogic is used by hosting providers to offer true utility computing. You get all the control of having your own virtual datacenter, but without the need to operate a single server.

Deploy and operate applications in your own virtual private datacenter

Set up infrastructure, deploy apps and manage operations with just a browser

Scale from a fraction of a server to hundreds of servers in days

Deploy and run any Linux software without modifications

Get your life back: no more late night rushes to replace failed equipment

In fact, BT is using them as part of the 21CN project which I've written about many times before.

So check out this vision, assuming the InternetOS as a transport. It's the drag-and-drop, point-and-click Metaverse of virtualized applications and data combined with on-demand infrastructure.

You first define the logical service composition and provisioning through 3Tera with a visual drag-and-drop canvas, defining firewalls, load-balancers, switches, web servers, app servers, databases, etc. Then you click the "Go" button. AppLogic provisions the entire thing for you without you even necessarily knowing where these assets physically are.
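To make the "composition is just data" idea concrete, here's a purely hypothetical sketch -- this is emphatically not AppLogic's actual definition format or API, just my own illustration -- of describing a topology and handing it to a provisioner that worries about placement for you:

```python
# Hypothetical, vendor-neutral sketch of "define the logical composition,
# then press Go." This is NOT AppLogic's real definition language or API.
topology = {
    "fw1":  {"type": "firewall",      "connects_to": ["lb1"]},
    "lb1":  {"type": "load_balancer", "connects_to": ["web1", "web2"]},
    "web1": {"type": "web_server",    "connects_to": ["app1"]},
    "web2": {"type": "web_server",    "connects_to": ["app1"]},
    "app1": {"type": "app_server",    "connects_to": ["db1"]},
    "db1":  {"type": "database",      "connects_to": []},
}

def provision(topology):
    """Stand-in for the 'Go' button: instantiate each component somewhere
    in the grid, then wire up the declared connections."""
    placed = {}
    for name, spec in topology.items():
        # In a real utility-computing fabric, placement is the provider's
        # problem; here we just pretend each node landed on some host.
        placed[name] = f"{spec['type']}@grid-node-{len(placed) + 1}"
        print(f"provisioned {name} as {placed[name]}")
    for name, spec in topology.items():
        for peer in spec["connects_to"]:
            print(f"wired {name} -> {peer}")
    return placed

provision(topology)
```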

Then, use something like Pipes to articulate how data sources can be accessed, consumed and transformed to deliver the requisite results. All over the Internet, transparently and securely.

June 17, 2007

I've been trying to construct a palette of blog entries over the last few months which communicates the need for a holistic network, host and data-centric approach to information security and information survivability architectures.

I've been paying close attention to the dynamics of the DLP/CMF market/feature positioning as well as what's going on in enterprise information architecture with the continued emergence of WebX.0 and SOA.

That's why I found this Computerworld article written by Jay Cline very interesting, as it focused on the need for a centralized data governance function within an organization in order to manage the risk associated with coping with the information management lifecycle (which includes security and survivability). The article went on to discuss how roles within the organization, namely the CIO/CTO, will evolve in parallel.

Nothing terribly earth-shattering here, but the exclamation point of this article is that enabling a centralized data governance organization takes a (gasp!) tricky combination of people, process and technology:

"How does this all add up? Let me connect the dots: Data must soon become centralized,its use must be strictly controlled within legal parameters, and information must drive the business model. Companies that don’t put a single, C-level person in charge of making this happen will face two brutal realities: lawsuits driving up costs and eroding trust in the company, and competitive upstarts stealing revenues through more nimble use of centralized information."

Let's deconstruct this a little, because while I totally get the essence of what is proposed, there are some realities that must be inserted into the discussion. Working backwards:

I agree that data and its use must be strictly controlled within legal parameters.

I agree that a single, C-level person needs to be accountable for the data lifecycle.

However, whilst I don't disagree that it would be fantastic to centralize data, I think it's a nice theory but the wrong universe.

Interestingly, Richard Bejtlich focused his response to the article on this very notion, but I can't get past a couple of issues, some of them technical and some of them business-related.

There's a confusing mish-mash alluded to in Richard's blog of "second home" data repositories that maintain copies of data and somehow also magically enforce data control and protection schemes outside of the repository while simultaneously allowing the flexibility of data creation "locally." The competing theme for me is that centralization of data is really irrelevant -- it's convenient -- but what you really need is (and you'll excuse the lazy use of a politically-charged term) "DRM" functionality that works irrespective of where the data is created, stored, or used.

Centralized storage is good (and selfishly so for someone like Richard) for performing forensics and auditing, but it's not necessarily technically or fiscally efficient and doesn't necessarily align to an agile business model.

The timeframe for the evolution of this data centralization was not really established, but we don't have the most difficult part licked yet -- the application of either the accompanying metadata describing the information assets we wish to protect OR the ability to uniformly classify and enforce its creation, distribution, utilization and destruction.

Now we're supposed to be able to magically centralize all our data, too? I know that large organizations have embraced the notion of data warehousing, but it's not the underlying data stores I'm truly worried about; it's the combination of data from multiple silos within the data warehouses that concerns me, and its distribution to multi-dimensional analytic consumers.

You may be able to protect a DB's table, row, column or a file, but how do you apply a policy to a distributed ETL function across multiple datasets and paths?
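Here's a toy illustration of why I ask (the datasets, the labels and the propagation rule are all invented for the example): once two sources are joined, the derived record needs a policy of its own, and about the best you can do is propagate the most restrictive label of the inputs.

```python
# Toy sketch of why per-table/row controls don't survive ETL: a join
# produces derived data that needs its own policy. Labels, data and the
# "most restrictive wins" rule are all invented for illustration.
LEVELS = {"public": 0, "internal": 1, "confidential": 2}

customers = [  # (customer_id, name, label)
    (1, "Acme Corp", "internal"),
]
payments = [   # (customer_id, card_number, label)
    (1, "4111-1111-1111-1111", "confidential"),
]

def most_restrictive(a, b):
    return a if LEVELS[a] >= LEVELS[b] else b

def etl_join(customers, payments):
    """Join the two sources and carry a derived label with each output row."""
    out = []
    for cid, name, c_label in customers:
        for pid, card, p_label in payments:
            if cid == pid:
                out.append({"customer": name, "card": card,
                            "label": most_restrictive(c_label, p_label)})
    return out

for row in etl_join(customers, payments):
    print(row)   # the joined row is 'confidential' even though one input wasn't
```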

ATAMO? (And Then A Miracle Occurs)

What I find intriguing about this article is that the so-described pendulum effect of data centralization (data warehousing, BI/DI) and resource centralization (data center virtualization, WAN optimization/caching, thin client computing) seems to be on a direct collision course with the way in which applications and data are being distributed with Web2.0/Service Oriented architectures and delivery underpinnings like rich(er) client-side technologies such as mash-ups and AJAX...

So what I don't get is how one balances centralizing data when today's emerging infrastructure and information architectures are constructed to do just the opposite: distribute data, processing and data re-use/transformation across the Enterprise. We've already let the data genie out of the bottle and now we're trying to cram it back in? (*please see below for a perfect illustration)

I ask this again within the scope of deploying a centralized data governance organization and its associated technology and processes within an agile business environment.

/Hoff

P.S. I expect that a certain analyst friend of mine will be emailing me in T-Minus 10, 9...

*Here's a perfect illustration of the futility of centrally storing "data." Click on the image and notice the second bullet item...:

May 21, 2007

Jon Oltsik crafted an interesting post today regarding the bifurcation of opinion on where the “intelligence” ought to sit in a networked world: baked into the routers and switches or overlaid using general-purpose compute engines that ride Moore’s curve.

I think that I’ve made it pretty clear where I stand. I submit that you should keep the network dumb, fast, reliable and resilient and add intelligence (such as security) via flexible and extensible service layers that scale both in terms of speed but also choice.

You should get to define and pick what best of breed means to you and add/remove services at the speed of your business, not the speed of an ASIC spin or an acquisition of technology that is in line with neither the pace and evolution of classes of threats and vulnerabilities nor the speed of an agile business.

The focal point of his post, however, was to suggest that the real issue is the fact that all of this intelligence requires exposure to the data streams which means that each component that comprises it needs to crack the packet before processing. Jon suggests that you ought to crack the packet once and then do interesting things to the flows. He calls this COPM (crack once, process many) and suggests that it yields efficiencies -- of what, he did not say, but I will assume he means latency and efficacy.

So, here’s my contentious point that I explain below:

Cracking the packet really doesn’t contribute much to the overall latency equation anymore thanks to high-speed hardware, but the processing sure as heck does! So whether you crack once or many times, it doesn’t really matter, what you do with the packet does.

Now, on to the explanation…

I think that it’s fair to say that many of the underlying mechanics of security are commoditizing so things like anti-virus, IDS, firewalling, etc. can be done without a lot of specialization – leveraging prior art is quick and easy and thus companies can broaden their product portfolios by just adding a feature to an existing product.

Companies can do this because of the agility that software provides, not hardware. Hardware can give you economies of scale as it relates to overall speed (for certain things) but generally not flexibility.

However, software has it’s own Moore’s curve or sorts and I maintain that unfortunately its lifecycle, much like what we’re hearing @ Interop regarding CPU’s, does actually have a shelf life and point of diminishing return for reasons that you're probably not thinking about...more on this from Interop later.

Jon describes the stew of security componentry and what he expects to see @ Interop this week:

I expect network intelligence to be the dominant theme at this week's Interop show in Las Vegas. It may be subtle but it's definitely there. Security companies will talk about cracking packets to identify threats, encrypt bits, or block data leakage. The WAN optimization crowd will discuss manipulating protocols and caching files, while application-layer guys crow about XML parsing, XSLT transformation, and business logic. It's all about stuffing networking gear with fat microprocessors to perform one task or another.

That’s a lot of stuff tied to a lot of competing religious beliefs about how to do it all as Jon rightly demonstrates and ultimately highlights a nasty issue:

The problem now is that we are cracking packets all over the place. You can't send an e-mail, IM, or ping a router without some type of intelligent manipulation along the way.

<nod> Whether it’s in the network, bolted on via an appliance or done on the hosts, this is and will always be true. Here’s the really interesting next step:

I predict that the next big wave in this evolution will be known as COPM for "Crack once, process many." In this model, IP packets are stopped and inspected and then all kinds of security, acceleration, and application logic actions occur. Seems like a more efficient model to me.

To do this, it basically means that this sort of solution requires Proxy (transparent or terminating) functionality. Now, the challenge is that whilst “cracking the packets” is relatively easy and cheap even at 10G line rates due to hardware, the processing is really, really hard to do well across the spectrum of processing requirements if you care about things such as quality, efficacy, and latency and is “expensive” in all of those categories.
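To put some shape on that cost split, here's a toy sketch (the engines and their checks are invented for illustration, not anyone's product): the "crack" is a one-time, bounded chunk of work, while the engines that consume the parsed flow are where the real work -- and the latency -- lives.

```python
# Toy sketch of "crack once, process many." The parsing, engines and
# checks below are invented for illustration only.
from dataclasses import dataclass

@dataclass
class ParsedFlow:
    src: str
    dst: str
    payload: bytes

def crack(raw: bytes) -> ParsedFlow:
    """Cheap, bounded work: one pass to pull apart headers and payload."""
    # Real parsing is more involved, but it's the kind of fixed-cost work
    # that hardware handles well even at 10G line rates.
    return ParsedFlow(src="10.0.0.1", dst="10.0.0.2", payload=raw)

# The "process many" side: each engine sees the already-cracked flow.
def av_scan(flow):     return b"EICAR" in flow.payload       # toy signature match
def ids_inspect(flow): return len(flow.payload) > 1500       # toy anomaly check
def dlp_check(flow):   return b"4111-1111" in flow.payload   # toy leakage check

ENGINES = [av_scan, ids_inspect, dlp_check]

def copm(raw: bytes) -> dict:
    """Crack once, then fan the parsed flow out to every engine that cares."""
    flow = crack(raw)                                              # once
    return {engine.__name__: engine(flow) for engine in ENGINES}  # many

print(copm(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"))
```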

The intelligence of deciding what to process and how once you’ve cracked the packets is critical.

This is where embedding this stuff into the network is a lousy idea.

How can a single vendor possibly provide anything more than “good enough” security in a platform never designed to solve this sort of problem whilst simultaneously trying to balance delivery and security at line rate?

This will require a paradigm shift for the networking folks. It will mean starting from scratch and integrating high-speed networking with general-purpose compute blades; re-purposing a chassis (like, say, a Cat65K), stuffing it with nothing but security cards and grafting it onto the switches; or stacking appliances (big or small, single form factor or in blades) and grafting them onto the switches once again. And by the way, simply adding networking cards to a blade server isn't an effective solution, either. "Regular" applications (and esp. SOA/Web 2.0 apps) aren't particularly topology-sensitive. Security "applications," on the other hand, are wholly dependent upon and integrated with the topologies into which they are plumbed.

It’s the hamster wheel of pain.

Or, you can get one of these which offers all the competency, agility, performance, resilience and availability of a specialized networking component combined with an open, agile and flexible operating and virtualized compute architecture that scales with parity based on Intel chipsets and Moore’s law.

What this gives you is an ecosystem of loosely-coupled, best-of-breed (BoB) security services through which a packet/flow, once cracked, can be intelligently passed in any order and ruthlessly manipulated, governed by policy, with decisions about how and what to do to that packet/flow ultimately made based upon content in context.
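If you want a feel for what "governed by policy" might mean in the abstract -- and this is a hypothetical sketch of the idea, not how any particular product expresses it -- think of the policy as a mapping from the cracked flow's context to an ordered chain of services:

```python
# Hypothetical sketch only: a "policy" mapping flow context to an ordered
# chain of loosely-coupled services. Service names and rules are invented.
POLICY = [
    # (match predicate on the cracked flow's context, ordered service chain)
    (lambda ctx: ctx["direction"] == "inbound" and ctx["app"] == "http",
     ["firewall", "ids", "web_app_firewall"]),
    (lambda ctx: ctx["direction"] == "outbound",
     ["firewall", "dlp", "url_filter"]),
]
DEFAULT_CHAIN = ["firewall"]

def chain_for(ctx):
    """Return the ordered list of services this flow should traverse."""
    for predicate, chain in POLICY:
        if predicate(ctx):
            return chain
    return DEFAULT_CHAIN

print(chain_for({"direction": "inbound", "app": "http"}))
# -> ['firewall', 'ids', 'web_app_firewall']
```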

The consolidation of best-of-breed security functionality delivered in a converged architecture yields efficiencies that are spread not only across the domains of scale, performance, availability and security, but also across the traditional economic scopes of CapEx and OpEx.

May 06, 2007

Gunnar once again hits home with an excellent post defining what he calls the Security Architecture Blueprint (SAB):

The purpose of the security architecture blueprint is to bring focus to the key areas of
concern for the enterprise, highlighting decision criteria and context for each domain.
Since security is a system property it can be difficult for Enterprise Security groups to
separate the disparate concerns that exist at different system layers and to understand
their role in the system as a whole. This blueprint provides a framework for
understanding disparate design and process considerations; to organize architecture and
actions toward improving enterprise security.

I appreciated the graphical representation of the security architecture blueprint as it provides some striking parallels to the diagram that I created about a year ago to demonstrate a similar concept that I call the Unified Risk Management (URM) framework.

(Ed.: URM focuses on business-driven information survivability architectures that describe as much risk tolerance as they do risk management.)

Here are both the textual and graphical representations of URM:

Managing risk is fast becoming a lost art. As the pace of technology’s evolution and adoption overtakes our ability to assess and manage its impact on the business, the overrun has created massive governance and operational gaps resulting in exposure and misalignment. This has caused organizations to lose focus on the things that matter most: the survivability and ultimate growth of the business.

Overwhelmed with the escalation of increasingly complex threats, the alarming ubiquity of vulnerable systems and the constant onslaught of rapidly evolving exploits, security practitioners are ultimately forced to choose between the unending grind of tactical practices focused on deploying and managing security infrastructure versus the strategic art of managing and institutionalizing risk-driven architecture as a business process.

URM illustrates the gap between pure technology-focused information security infrastructure and business-driven, risk-focused information survivability architectures, and shows how this gap is bridged using sound risk management practices in conjunction with best-of-breed consolidated Unified Threat Management (UTM) solutions as the technology anchor tenant in a consolidated risk management model.

URM demonstrates how governance organizations, business stakeholders, network and security teams can harmonize their efforts to produce a true business protection and enablement strategy utilizing best of breed consolidated UTM solutions as a core component to effectively arrive at managing risk and delivering security as an on-demand service layer at the speed of business. This is a process we call Unified Risk Management or URM.

(Updated on 5/8/07 with updates to URM Model)

The point of URM is to provide a holistic framework against which one may measure and effectively manage risk. Each one of the blocks above has a set of sub-components that breaks out the specifics of each section. Further, my thinking on URM became the foundation of my exploration of the Security Services Oriented Architecture (SSOA) model.

April 13, 2007

I really look forward to reading Gunnar Peterson's blog. He's got a fantastic writing style and communicates extremely effectively about one of my favorite topics: SOA and security. His insightful posts really get to the point in a witty and meaningful way. I'm going to try to make one of the OWASP meetings he is presenting at soon.

Gunnar made a fantastic post commenting on Arnon Rotem-Gal-Oz's writings on Service Firewall Patterns, but within the context of this discussion, his comments regarding the misalignment of developers, network folks, security practitioners and enterprise architects are well said:

One of my issues with common practice of enterprise architecture is
that they frequently do not deep dive into security issues, instead
focusing on scalability, detailed software design, and so on. But here is
the thing - the security people don't know enough about software
design, and the software people don't know enough about security to
really help out.

Sadly, this is very true, and it goes back to the same line of commentary I've made in this regard: the complexity of security is rising unchecked, and all the policy in the world isn't going to help when the infrastructure is not capable of solving the problem and neither are the people who administer it.

Add to this the reality that many security mechanisms
cannot make a business case as a one off project, but need to be part
of core infrastructure to be economic, and wel[l], you get the situation
we have today.

Exactly. While this may not have been Gunnar's intention, this describes exactly why embedding security functionality into the "network" as a result of economic cram-down, and expecting packet jockeys to apply a level of expertise they don't have to solving security problems "in the network," is going to fail.

The architects define the "what", and unless security is
one of those whats, it is not feasible to make the case for many
specialized security services at a project by project level. This is
why, enterprise architects that enable increased integration within and
across enterprises, must also invest time and resources in revamping
security services that enable this to be done in a reliable fashion.

...but sadly, to Gunnar's point above, just as security people don't know enough about software design and software people don't know enough about security, enterprise architects often don't know what they don't know about networking or security. The problem is systemic, and even with the best intentions in mind, an architect rarely gets the opportunity to ensure that, after the blueprints are handed down, the "goals" for security are realized in an operational model consistent with the desired outcome.

I'm going to post separately on Rotem-Gal-Oz's Service Firewall Pattern shortly, as there are tremendous synergies between what he suggests we should do and, strangely, the exact model we use to deliver a security service layer (in virtualized gateway form) that provides this very thing.

March 20, 2007

The article below is dated today, but perhaps this was just the TechTarget AutoBlogCronPoster gone awry from 2004?

Besides the fact that this revelation garners another vote for the RationalSecurity "Captain Obvious" (see right) award, the simple fact that XML gateways are being highlighted here as a stand-alone market is laughable -- especially since the article clearly shows that XML security gateways are being consolidated and bundled with application delivery controllers and WAF solutions by vendors such as IBM and Cisco.

XML is, and will be, everywhere. SOA/Web Services is only one element in a greater ecosystem impacted by XML.

Of course the functionality provided by XML security gateways is critical to the secure deployment of SOA environments; these gateways should be considered table stakes, just like secure coding...but of course we know how consistently-applied compensating controls are painted onto network and application architectures.

The dirty little secret is that while they are very useful and ultimately an excellent tool in the arsenal, these solutions are disruptive, difficult to configure and maintain, performance pigs and add complexity to an already complex model. In many cases, asking a security team to manage this sort of problem introduces more operational risk than it mitigates.

Can you imagine security, network and developers actually having to talk to one another?! *gasp*

Here is the link to the entire story. I've snipped pieces out for relevant mockery.

ORLANDO, Fla. -- Enterprises are moving forward with service
oriented architecture (SOA) projects to reduce complexity and increase
flexibility between systems and applications, but some security pros
fear they're being left behind and must scramble to learn new ways to
protect those systems from Web-based attacks.

<snip>

"Most network firewalls aren't designed to handle the latest
Web services standards, resulting in new avenues of attack for digital
miscreants, said Tim Bond, a senior security engineer at webMethods
Inc. In his presentation at the Infosec World Conference and Expo, Bond
said a growing number of vendors are selling XML security gateways,
appliances that can be plugged into a network and act as an
intermediary, decrypting and encrypting Web services data to determine
the authenticity and lock out attackers.

"It's not just passing a message through, it's actually taking
action," Bond said. "It needs to be customized for each deployment, but
it can be very effective in protecting from many attacks."

Bond said that most SOA layouts further expose applications by
placing them just behind an outer layer of defense, rather than placing
them within the inner walls of a company's security defenses along with
other critical applications and systems. Those applications are
vulnerable, because they're being exposed to partners, customer
relationship management and supply chain management systems. Attackers
can scan Web services description language (WSDL) -- the XML language
used in Web service calls -- to find out where vulnerabilities lie,
Bond said.

<snip>

A whole market has grown around protecting WSDL, Bond said.
Canada-based Layer 7 Technologies Inc. and UK-based Vordel are
producing gateway appliances to protect XML and SOAP language in Web
service calls. Reactivity, which was recently acquired by Cisco Systems
Inc. and DataPower, now a division of IBM, also address Web services
security.

Transaction values will be much higher and traditional SSL,
the security communications protocol for point-to-point communications,
won't be enough to protect transactions, Bond said.

<snip>

In addition to SQL-injection attacks, XML is potentially
vulnerable to schema poisoning -- a method of attack in which the XML
schema can be manipulated to alter processing information. A
sophisticated attacker can also conduct an XML routing detour,
redirecting sensitive data within the XML path, Bond said.

Security becomes complicated with distributed systems in an
SOA environment, said Dindo Roberts, an application security manager at
New York City-based MetLife Inc. Web services with active interfaces
allow the usage of applications that were previously restricted to
using conventional custom authentication. Security pros need new
methods, such as an XML security gateway to protect those applications,
Roberts said.
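For what it's worth, here's a tiny, hypothetical sketch of the flavor of check Bond is describing for the routing detour (standard-library Python, with an invented endpoint allowlist): look at the WS-Addressing endpoints in the SOAP header and flag anything pointing somewhere you don't trust. A real XML security gateway does far more -- schema validation, content inspection, policy enforcement -- but this is the general shape.

```python
# Illustrative only: flag WS-Addressing endpoints that point off an
# (invented) allowlist -- one symptom of an XML routing detour.
import xml.etree.ElementTree as ET

WSA = "{http://www.w3.org/2005/08/addressing}"
ALLOWED_HOSTS = ("partner.example.com", "internal.example.com")  # made-up allowlist

def routing_detour(soap_xml: str) -> bool:
    """Return True if any addressing endpoint points outside the allowlist."""
    root = ET.fromstring(soap_xml)
    for tag in ("To", "ReplyTo", "FaultTo"):
        for el in root.iter(WSA + tag):
            # ReplyTo/FaultTo wrap an Address element; To is the address itself.
            addr = el.findtext(WSA + "Address", default=el.text or "")
            if not any(host in addr for host in ALLOWED_HOSTS):
                return True
    return False

envelope = """<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
                          xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <s:Header>
    <wsa:ReplyTo><wsa:Address>http://attacker.example.net/sink</wsa:Address></wsa:ReplyTo>
  </s:Header>
  <s:Body/>
</s:Envelope>"""

print(routing_detour(envelope))   # True: the reply would be detoured
```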

March 02, 2007

Gunnar Peterson (1 Raindrop blog) continues to highlight the issues of implementing security models which are not keeping pace with the technology they are deployed to protect. Notice I didn't say "designed" to protect.

Specifically, in his latest entry titled "Understand Web 2.0 Security Issues - As Easy as 2, 1, 3" he articulates (once again) the folly of a security problem we cannot solve because we simply refuse to learn from our mistakes and proactively address security before it becomes a problem:

"So let's do the math, we have rich Web 2.0 and its rich UI and lots
of disparate data and links, we are protecting these brand new
2007-built apps with a Web 1.0 security model that was invented in
1995. This would not be a bad thing at all if the attacker community
had learned nothing in the last 12 years; alas, they have already
upgraded to attacker 3.0, and so can use Web 2.0 to both attack and distribute attacks."

The evolution of modern enterprise information architecture has driven tectonic shifts in how information is made available and consumed across constituent layers within the Enterprise ecosystem. The paradigm itself has undergone fundamental changes as the delivery mechanism and application model have transitioned from Client/Server to Internet/Web-based and now to loosely-coupled, componentized Service Oriented Architectures (SOA).

SOA provides for transformational methods of producing, accessing and consuming information across a delivery “platform” (the network) and provides quantifiable benefits across multiple boundaries: the reduction of integration and management total cost of ownership (TCO), asset and resource modularity and reusability, business process agility and flexibility, and the overall reduction of business risk.

Enterprise information architects have responded to this paradigm change by adopting methodologies such as Extreme Programming (XP) which is designed to deliver on-demand software layers where and when they are needed. XP enables and empowers developers and information architects to rapidly respond to changing business requirements across the entire life cycle. This methodology emphasizes collaboration and a modular approach toward delivering best-of-breed solutions on-demand.

These highly dynamic, just-in-time solutions pose distribution, management, protection and scaling issues that static product-centric network and security paradigms cannot adapt to quickly enough; each new technology presents new architectural changes, new vulnerabilities and new attack surfaces against which threats must be evaluated. Unfortunately, there is no analog to Extreme Programming in the security world.

The networks charged with the delivery of this information and the infrastructure tasked with its secure operation have failed to keep evolutionary pace, are still mostly rigid and inflexible and are unable to deliver given a misalignment of execution capabilities, methodologies and ideologies.

This brief will first demonstrate that pure network infrastructure is, and always will be, fundamentally and unfortunately at odds with the technology and services designed to protect the information that is transported across it.

The brief will then introduce the concept of a Security Service Oriented Architecture (SSOA) that effectively addresses the network/security conflict. By using an Enterprise Unified Threat Management (UTM) system overlaid across traditional network technology it becomes possible to eliminate individual security appliance sprawl and provide best-of-breed security value with maximum coverage exactly where needed, when needed and at a cost that can be measured, allocated and applied to most appropriately manage risk.

I'll be interested in your comments regarding the abstract as well as the entire brief once I link to it.