
December 28, 2007

In his post here, Alan Shimel pointed us to an interesting article written by Matt Hines regarding the "herd intelligence" approach toward security. He followed it up here.

All in all, I think both the original article in which Andy Jaquith was quoted and Alan's interpretations shed interesting light on this problem-solving perspective.

I've got a couple of comments on Matt and Alan's scribbles.

I like the notion of swarms/herds. A picture in Science News illustrates the notion of "rapid response," wherein "mathematical modeling is explaining how a school of fish can quickly change shape in reaction to a predator." If you've ever seen this in the wild or even on film, it's an incredible thing to see in action.

It should then come as no surprise that I think the "security problem" is more efficiently solved (assuming one preserves the current construct of detection and prevention mechanisms) by distributing both functions and coordinating activity as part of an intelligent "groupthink," even when executed locally. This is exactly what I was getting at in my "useful predictions" post for 2008:

Grid and distributed utility computing models will start to creep into security

A really interesting by-product of the "cloud compute" model is that as data, storage, networking, processing, etc. get distributed, so shall security. In the grid model, one doesn't care where the actions take place so long as service levels are met and the experiential and business requirements are delivered. Security should be thought of in exactly the same way.

The notion that you can point to a physical box and say it performs function 'X' is so last Tuesday. Virtualization already tells us this. So, imagine if your security processing isn't performed by a monolithic appliance but instead is contributed to in a self-organizing fashion wherein the entire ecosystem (network, hosts, platforms, etc.) all contribute in the identification of threats and vulnerabilities as well as function to contain, quarantine and remediate policy exceptions.

Sort of sounds like that "self-defending network" spiel, but not focused on the network and with common telemetry and distributed processing of the problem. Check out Red Lambda's cGrid technology for an interesting view of this model.

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances. Each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition information can be communicated up and downstream to one another and to one or more management facilities.
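No such standard protocol is named in the post, so purely as an illustration, here is a minimal sketch of what a shared telemetry record might look like; every field name and the JSON-over-anything transport are my own assumptions:

```python
# Hypothetical telemetry record an end node might emit to its peers and
# to one or more management facilities. Field names and the transport
# (JSON over any signaling channel) are illustrative assumptions only.
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class ThreatObservation:
    node_id: str          # which end node saw the event
    observed_at: float    # epoch seconds
    indicator: str        # e.g. a file hash or source IP
    kind: str             # "malware-sample", "scan", "dos", ...
    confidence: float     # local confidence, 0.0 - 1.0
    disposition: str      # what the node did: "quarantined", "blocked", "observed"

    def to_wire(self) -> str:
        """Serialize for up/downstream sharing."""
        record = asdict(self)
        record["event_id"] = str(uuid.uuid4())
        return json.dumps(record)

# Example: a host that quarantined a suspicious binary tells the herd about it.
obs = ThreatObservation(
    node_id="host-042",
    observed_at=time.time(),
    indicator="sha1:da39a3ee5e6b4b0d3255bfef95601890afd80709",
    kind="malware-sample",
    confidence=0.8,
    disposition="quarantined",
)
print(obs.to_wire())
```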

This is what Andy was referring to when he said:

As part of the effort, security vendors may also need to begin sharing more of that information with their rivals to create a larger network effect for thwarting malware on a global basis, according to the expert.

It may be hard to convince rival vendors to work together because of the perception that it could lessen differentiation between their respective products and services, but if the process clearly aids on the process of quelling the rising tide of new malware strains, the software makers may have little choice other than to partner, he said.

Secondly, Andy suggested that basically every end-node would effectively become its own honeypot:

"By
turning every endpoint into a malware collector, the herd network
effectively turns into a giant honeypot that can see more than existing
monitoring networks," said Jaquith. "Scale enables the herd to counter
malware authors' strategy of spraying huge volumes of unique malware
samples with, in essence, an Internet-sized sensor network."

I couldn't agree more! This is the sort of thing that I was getting at back in August when I was chatting with Lance Spitzner regarding using VM's for honeypots on distributed end nodes:

I clarified that what I meant was actually integrating a HoneyPot running in a VM on a production host as part of a standardized deployment model for virtualized environments. I suggested that this would integrate into the data collection and analysis models the same way as a "regular" physical HoneyPot machine, but could utilize some of the capabilities built into the VMM/HV's vSwitch to actually enable the virtualization of a single HoneyPot across an entire collection of VM's on a single physical host.
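To make that a bit more concrete, here's a minimal sketch (my own illustration, nothing Lance and I actually specified) of the kind of steering logic such a vSwitch-integrated honeypot could rely on: traffic aimed at addresses no production VM on the host actually uses gets handed to the honeypot VM for collection. Port names, addresses and the rule itself are hypothetical.

```python
# Sketch of vSwitch-style steering logic for a per-host honeypot VM.
# Everything here (port names, addresses, the steering rule) is a
# hypothetical illustration of the idea, not a real hypervisor API.

PRODUCTION_VMS = {"10.0.10.21", "10.0.10.22", "10.0.10.23"}  # addresses in use on this host
HONEYPOT_PORT = "honeypot-vm-port"

def steer(frame_dst_ip: str, default_port: str) -> str:
    """Return the virtual port a frame should be delivered to.

    Anything aimed at dark/unused space on this host's subnet is handed
    to the honeypot VM instead of being dropped, so it can be collected
    and fed into the same analysis pipeline as a physical HoneyPot.
    """
    if frame_dst_ip in PRODUCTION_VMS:
        return default_port
    return HONEYPOT_PORT

print(steer("10.0.10.22", "prod-vm-port"))   # -> prod-vm-port
print(steer("10.0.10.99", "prod-vm-port"))   # -> honeypot-vm-port (unused address)
```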

Thirdly, the notion of information sharing across customers has been implemented cross-sectionally in industry verticals with the advent of the ISACs, such as the Financial Services Information Sharing and Analysis Center, which seeks to inform and ultimately leverage distributed information gathering and sharing to protect its subscribing members. Generally-available services like Symantec's DeepSight have also tried to accomplish similar goals.

As Matt pointed out in his article, gaining actionable intelligence from the monstrous amount of telemetric data generated by participating end nodes means there is a need to really prune for false positives. This is the trade-off between simply collecting data and actually applying intelligence at the end-node and effecting disposition.

This requires technology with a small enough footprint, which we're starting to see emerge, paired with the compute power we have in endpoints today.
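As a toy illustration of that pruning trade-off (the threshold and the in-memory store are assumptions, not anyone's product), the herd might refuse to act on a single node's report and wait for corroboration from several distinct members:

```python
# Toy corroboration filter: the herd only effects a disposition once
# enough distinct nodes have reported the same indicator.
from collections import defaultdict

CORROBORATION_THRESHOLD = 3   # distinct nodes required before acting

_sightings = defaultdict(set)  # indicator -> set of reporting node ids

def report(indicator: str, node_id: str) -> bool:
    """Record a sighting; return True once the herd should act on it."""
    _sightings[indicator].add(node_id)
    return len(_sightings[indicator]) >= CORROBORATION_THRESHOLD

for node in ("host-001", "host-017", "host-101"):
    acted = report("sha1:deadbeef", node)
    print(node, "->", "act" if acted else "wait")
```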

Finally, as the "network" (which means the infrastructure as well as the "extrastructure" delivered by services in the cloud) gains more intelligence and information-centric granularity, it will pick up some of the slack -- at least from the perspective of sloughing off the low-hanging fruit by using similar concepts.

I am hopeful that as we gain more information-centric footholds, we won't need to worry about responding to every threat but rather only to those that might impact the most important assets we seek to protect.

Ultimately the end-node is largely irrelevant from a protection perspective, as it should be little more than a presentation facility; the information is what matters. As we continue to make progress toward more resilient operating systems that leverage encryption and mutual authentication within communities of interest/trust, we'll become more resilient and information-assured.
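That phrase, "mutual authentication within communities of interest/trust," can be made concrete with something as ordinary as TLS client certificates; a minimal sketch, assuming hypothetical certificate file names, is below:

```python
# Minimal sketch of mutual authentication within a community of trust,
# using TLS client certificates. File names are hypothetical placeholders.
import ssl

def community_server_context() -> ssl.SSLContext:
    """Server side: only peers presenting a cert signed by the community CA get in."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.load_cert_chain(certfile="node.crt", keyfile="node.key")   # this node's identity
    ctx.load_verify_locations(cafile="community-ca.crt")           # the community's trust root
    ctx.verify_mode = ssl.CERT_REQUIRED                            # reject peers outside the community
    return ctx

# Usage (once the placeholder files exist):
#   ctx = community_server_context()
#   secure_sock = ctx.wrap_socket(plain_sock, server_side=True)
```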

The sharing of telemetry to allow these detective and preventative/protective capabilities to self-organize and perform intelligent offensive/evasive actions will evolve naturally as part of this process.

November 14, 2007

A couple of weeks ago I penned a blog entry titled "The Battle for the HyperVisor Heats Up" in which I highlighted an announcement from Phoenix Technologies detailing their entry into the virtualization space with their BIOS-enabled VMM/Hypervisor offering called HyperCore.

It seems that everyone and their mother is introducing a virtualization platform, and the commonality of basic functionality demonstrates how the underlying virtualization enabler -- the VMM/Hypervisor -- is becoming a commodity.

We are sure to see fatter, thinner, faster, "more secure" or more open Hypervisors, but this will be an area with less and less differentiation. Table stakes. Everything's becoming virtualized, so a VMM/Hypervisor will be the underlying "OS" enabling that transformation.

To illustrate the commoditization trend as well as a rather fractured landscape of strategies, one need only look at the diversity in existing and emerging VMM/Hypervisor solutions. Virtualization strategies are beginning to revolve around a set of distinct approaches where virtualization is:

Provided for and/or enhanced in hardware (Intel, AMD, Phoenix)

A function of the operating system (Linux, Unix, Microsoft)

Delivered by means of an enabling software layer (nee platform) that is deployed across your entire infrastructure (VMware, Oracle)

The challenge for a customer is deciding in whom to invest now. Given the fact that there is not a widely-adopted common format for VM standardization, the choice today of a virtualization vendor (or vendors) could profoundly affect one's business in the future, since we're talking about a fundamental shift in how your "centers of data" manifest.

What is so very interesting is that if we accept virtualization as a feature, defined as an abstracted platform isolating software from hardware, then the next major shift is in the extensibility, manageability and flexibility of the solution offering, as well as how partnerships knit together between the "platform" providers and the purveyors of toolsets.

It's clear that VMware's lead in the virtualization market is right in line with how I described the need for differentiation and extensibility, both internally and via partnerships.

VMotion is a classic example; it's clearly an internally-generated killer app that the other players do not currently have, and it really speaks to being able to integrate virtualization as a "feature" into the combined fabric of the data center. Binding networking, storage and computing together is critical. VMware has a slew of partnerships (and potential acquisitions) that enable even greater utility from their products.

Cisco has already invested in VMware and a recent demo I got of Cisco's VFrame solution shows they are serious about being able to design, provision, deploy, secure and manage virtualized infrastructure up and down the stack, including servers, networking, storage, business process and logic.

In the next 12 months or so, you'll be able to buy a Dell or HP server using Intel or AMD virtualization-enabled chipsets pre-loaded with multiple VMM/Hypervisors in either flash or BIOS. How you manage, integrate and secure it with the rest of your infrastructure -- well, that's the fun part, isn't it?

I'll bet we'll see more and more "free" commoditized virtualization platforms with the wallet ding coming from the support and licenses to enable third party feature integration and toolsets.

October 03, 2007

An interesting story in this morning's New York Times titled "Unlike U.S., Japanese Push Fiber Over Profit" talked about Japan's long-term investment in building the world's first all-fiber national network and how Japan leads the world's other industrialized nations, including the U.S., in low-cost, high-speed services centered around Internet access. The article states that approximately 8 million Japanese subscribe to fiber-enabled service offerings that provide performance at roughly 30 times that of a corresponding xDSL offering.

For about $55 a month, subscribers have access to up to 100Mb/s download capacity.

France Telecom is rumored to be rolling out services that offer 2.5Gb/s downloads!

I have Verizon FIOS which is delivered via fiber to my home and subscribe at a 20Mb/s download tier.

What I find very interesting about the emergence of this sort of service is that if you look at a typical consumer's machine, it's not well hardened, not monitored and usually easily compromised. At this rate, the bandwidth of some of these compromise-ready consumers' home connections is eclipsing that of mid-tier ISPs!

This is even more true, anecdotally, of online gamers, who are typically also P2P filesharing participants and early adopters of new shiny kit -- it's a Bot Herder's dream come true.

At xDSL speeds of a few Mb/s, a couple of infected machines as participants in a targeted synchronized fanning DDoS attack can easily take down a corporate network connected to the Internet via a DS3 (45Mb/s.) Imagine what a botnet of a couple of 60Mb/s connected endpoints could do -- how about a couple of thousand? Hundreds of thousands?
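The back-of-the-envelope arithmetic is worth spelling out; the link speeds are the ones cited above, everything else is rounded:

```python
# Back-of-the-envelope aggregate attack bandwidth vs. a DS3 uplink.
DS3_MBPS = 45

def bots_needed(victim_uplink_mbps: float, bot_uplink_mbps: float) -> float:
    """How many saturating bots it takes to fill the victim's pipe."""
    return victim_uplink_mbps / bot_uplink_mbps

print(bots_needed(DS3_MBPS, 3))    # ~15 xDSL bots at a few Mb/s each
print(bots_needed(DS3_MBPS, 60))   # less than one fiber-connected bot
print(2_000 * 60)                  # a couple thousand 60 Mb/s bots ~ 120,000 Mb/s (120 Gb/s)
```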

This is great news for some: this sort of capacity is economically beneficial to cyber-criminals because it reduces the exposure risk of Botnet Herders; they don't have to infect nearly the same number of machines to deliver exponentially higher attack yields given the size of the pipes. Scary.

I'd suggest that the lovely reverse DNS entries that service providers use to annotate logical hop connectivity will be even more freely used to target these high-speed users; you know, like (fictional):

bigass20MbpsPipe.vzFIOS-05.bstnma.verizon-gni.net (7x.y4.9z.1)
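To show just how little effort that sort of targeting takes, here's a trivial sketch; the hostname is the fictional one above and the pattern is merely an assumption about how such naming conventions tend to look:

```python
# Illustration only: how much a provider's reverse-DNS naming convention
# can give away about a subscriber's service tier.
import re
from typing import Optional

SPEED_HINT = re.compile(r"(\d+)\s*mbps", re.IGNORECASE)

def tier_from_ptr(hostname: str) -> Optional[int]:
    """Pull an advertised speed, in Mb/s, out of a PTR-style hostname."""
    match = SPEED_HINT.search(hostname)
    return int(match.group(1)) if match else None

# In practice the hostname would come from a reverse lookup, e.g.
# socket.gethostbyaddr(ip)[0]; here we reuse the fictional example above.
print(tier_from_ptr("bigass20MbpsPipe.vzFIOS-05.bstnma.verizon-gni.net"))  # -> 20
```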

As an interesting anecdote from the service provider perspective, the need for "Clean Pipes" becomes even more important, and the providers will be even more financially motivated to prevent abuse of their backbone long-hauls by infected machines.

This, in turn, will drive the need for much more intelligent, higher-throughput infrastructure and security service layers to mitigate the threat, which is forcing folks to take a very hard look at how they architect their networks and apply security.

July 13, 2007

The last few days of activity involving Google and Microsoft have really catalyzed some thinking and demonstrated some very intriguing indicators as to how the delivery of applications and services is dramatically evolving.

I don't mean the warm and fuzzy marketing fluff. I mean some real anchor technology investments by the big boys putting their respective stakes in the ground as they invest hugely in redefining their business models to set up for the future.

Enterprises large and small are really starting to pay attention to the difference between infrastructure and architecture and this has a dramatic effect on the service providers and supply chain who interact with them.

It's become quite obvious that there is huge business value associated with divorcing the need for "IT" to focus on physically instantiating and locating "applications" on "boxes" and instead delivering "services" with the Internet/network as the virtualized delivery mechanism.

Google v. Microsoft - Let's Get Ready to Rumble!

My last few posts on Google's move to securely deliver a variety of applications and services represent the uplift of the "traditional" perspective of backoffice SaaS offerings such as Salesforce.com, but they also highlight the migration of desktop applications and utility services to the "cloud."

This is really executing on the thin-client, Internet-centric vision from back in the day o' the bubble, when we saw a ton of Internet-borne services such as storage, backup, etc. using the "InternetOS" as the canvas for service.

So we've talked about Google. I maintain that their strategy is to ultimately take on Microsoft -- including backoffice, utility and desktop applications. So let's look @ what the kids from Redmond are up to.

Where Microsoft is heading with its vision of a CloudOS was just recently expounded upon by one Mr. Ballmer.

Not wanting to lose mindshare or share of wallet, Microsoft is maneuvering to give the customer control over how they want to use applications and, more importantly, how those applications might be delivered. Microsoft Live bridges the gap between the traditional desktop and the "cloud."

Let's explore that a little:

In addition to making available its existing services, such as mail and instant messaging, Microsoft also will create core infrastructure services, such as storage and alerts, that developers can build on top of. It's a set of capabilities that have been referred to as a "Cloud OS," though it's not a term Microsoft likes to use publicly.

...

Late last month, Microsoft introduced two new Windows Live Services, one for sharing photos and the other for all types of files. While those services are being offered directly by Microsoft today, they represent the kinds of things that Microsoft is now promising will be also made available to developers.

Among the other application and infrastructure components Microsoft plans to open are its systems for alerts, contact management, communications (mail and messenger) and authentication.

...

As it works to build out the underlying core services, Microsoft is also offering up applications to partners, such as Windows Live Hotmail, Windows Live Messenger and the Spaces blogging tool.

Combine the advent of "thinner" end-points (read: mobility products) with high-speed, lower-latency connectivity and we can see why this model is attractive and viable. I think this battle is heating up and the consumer will benefit.

A Practical Example of SaaS/InternetOS Today?

Taking a step back from Google and Microsoft for a minute, let's take a snapshot of how one might compose, provision and deploy applications and data as a service over the Internet using a similar model, with tools other than Live or GoogleGear.

Let me give you a real-world example -- deliverable today -- of this capability with a functional articulation of this strategy: on-demand services and applications provided via virtualized datacenter delivery architectures using the Internet as the transport. I'm going to use a mashup of two technologies: Yahoo Pipes and 3Tera's AppLogic.

Yahoo Pipes is "...an interactive data aggregator and manipulator that lets you mashup your favorite online data sources." Assuming you have data from various sources that you want to present, an application environment such as Pipes will allow you to dynamically access, transform and present this information any way you see fit.

This means that you can create what amount to applications and services on demand.
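As a rough analogy for what Pipes does (this is plain code, not Pipes itself, and the sample items are made up), the access/transform/present loop looks something like this:

```python
# A minimal, self-contained analogy for the Pipes pattern: take items from
# several sources, normalize them, filter on a keyword and sort by date.
from datetime import datetime

sources = {
    "feed-a": [{"title": "Herd intelligence and malware", "published": "2007-12-28"}],
    "feed-b": [{"title": "10GbE switches at Interop", "published": "2007-05-23"},
               {"title": "Clean pipes revisited", "published": "2007-05-06"}],
}

def mashup(feeds: dict, keyword: str) -> list:
    """Aggregate, filter and sort -- the access/transform/present loop."""
    items = [dict(item, source=name) for name, feed in feeds.items() for item in feed]
    matches = [i for i in items if keyword.lower() in i["title"].lower()]
    return sorted(matches, key=lambda i: datetime.strptime(i["published"], "%Y-%m-%d"),
                  reverse=True)

for item in mashup(sources, "pipes"):
    print(item["published"], item["title"], "via", item["source"])
```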

Let's agree, however, that while you have the data integration/presentation layer, in many cases you would traditionally require a complex collection of infrastructure in which this source data is housed, accessed, maintained and secured.

However, rather than worry about where and how the infrastructure is physically located, let's use the notion of utility/grid computing to dynamically make available an on-demand architecture that is modular, reusable and flexible enough to make service delivery a reality -- using the Internet as the transport.

3Tera's AppLogic is used by hosting providers to offer true utility computing. You get all the control of having your own virtual datacenter, but without the need to operate a single server.

Deploy and operate applications in your own virtual private datacenter

Set up infrastructure, deploy apps and manage operations with just a browser

Scale from a fraction of a server to hundreds of servers in days

Deploy and run any Linux software without modifications

Get your life back: no more late night rushes to replace failed equipment

In fact, BT is using them as part of the 21CN project which I've written about many times before.

So check out this vision, assuming the InternetOS as a transport. It's the drag-and-drop, point-and-click Metaverse of virtualized application and data combined with on-demand infrastructure.

You first lay out the logical service composition and provisioning through 3Tera's visual drag-and-drop canvas, defining firewalls, load-balancers, switches, web servers, app servers, databases, etc. Then you click the "Go" button and AppLogic provisions the entire thing for you, without you even necessarily knowing where these assets are.
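Purely to illustrate the idea -- this is not AppLogic's actual format -- the logical composition you sketch on that canvas can be thought of as a declarative description like the following, which the grid then materializes wherever it has capacity:

```python
# Hypothetical declarative description of the virtual datacenter composed
# on the canvas; component names and fields are illustrative only.
service = {
    "name": "storefront",
    "components": [
        {"type": "firewall",      "name": "fw1",  "allow": ["tcp/80", "tcp/443"]},
        {"type": "load_balancer", "name": "lb1",  "pool": ["web1", "web2"]},
        {"type": "web_server",    "name": "web1", "connects_to": ["app1"]},
        {"type": "web_server",    "name": "web2", "connects_to": ["app1"]},
        {"type": "app_server",    "name": "app1", "connects_to": ["db1"]},
        {"type": "database",      "name": "db1",  "volumes": ["data-20GB"]},
    ],
}

def provision(description: dict) -> None:
    """Stand-in for the 'Go' button: walk the description and place each component."""
    for component in description["components"]:
        # In the utility-computing model you neither know nor care which
        # physical host this lands on, only that the dependency graph holds.
        print(f"placing {component['type']} '{component['name']}' somewhere in the grid")

provision(service)
```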

Then, use something like Pipes to articulate how data sources can be accessed, consumed and transformed to deliver the requisite results. All over the Internet, transparently and securely.

May 23, 2007

Interop has been great thus far. One of the most visible themes of this year's show is (not surprisingly) the hyped emergence of 10Gb/s Ethernet. 10G isn't new, but the market is now ripe with products supporting it: routers, switches, servers and, of course, security kit.

With this uptick in connectivity, as well as the corresponding float in compute power thanks to Mr. Moore AND some nifty evolution of very fast, low-latency, reasonably accurate deep packet inspection (including behavioral technology), the marketing wars have begun over who has the biggest, baddest toys on the block.

Whenever this discussion arises, without question the notion of "carrier class" gets bandied about in order to essentially qualify a product as being able to withstand enormous amounts of traffic load without imposing latency.

One of the most compelling reasons for these big pieces of iron (which are ultimately a means to an end to run software, after all) is the service provider/carrier/mobile operator market, which certainly has its fair share of challenges in terms of not only scale and performance but also security.

I blogged a couple of weeks ago regarding the resurgence of what can be described as "clean pipes" wherein a service provider applies some technology that gets rid of the big lumps upstream of the customer premises in order to deliver more sanitary network transport.

What's interesting about clean pipes is that much of what security providers talk about today is only a small fraction of what is actually needed. Security providers, most notably IPS vendors, anchor the entire clean pipes strategy around "threat protection" that appears somewhat one-dimensional.

This normally means getting rid of what is generically referred to today as "malware," arresting worm propagation and quashing DoS/DDoS attacks. It doesn't speak at all to the need for things that aren't purely "security" in nature, such as parental controls (URL filtering), anti-spam, P2P, etc. It appears that, in the strictest definition, these aren't threats?

So, this week we've seen the following announcements:

ISS announces their new appliance that offers 6Gb/s of IPS

McAfee announces their new appliance that offers 10Gb/s of IPS

The trumpets sounded and the heavens parted as these products were announced, touting threat protection via IPS at levels supposedly never approached before. More appliances. Lots of interfaces. Big numbers. Yet to be seen in action. Also, to be clear, a 2U rackmount appliance that is not DC-powered and not NEBS-certified isn't normally called "Carrier-Class."

I find these announcements interesting because even with our existing products (which run ISS and Sourcefire's IDS/IPS software, by the way) we can deliver 8Gb/s of firewall and IPS today and have been able to for some time.

Lisa Vaas over @ eWeek just covered the ISS and McAfee announcements and she was nice enough to talk about our products and positioning. One super-critical difference is that along with high throughput and low latency you get to actually CHOOSE which IPS you want to run -- ISS, Sourcefire and shortly Check Point's IPS-1.

You can then combine that with firewall, AV, AS, URL filtering, web app. and database firewalls and XML security gateways in the same chassis, to name a few other functions -- all best of breed from top-tier players -- and this is what we call Enterprise- and Provider-Class UTM, folks.

Holistically approaching threat management across the entire spectrum is really important, along with the speeds and feeds, and we've all seen what happens when more and more functionality is added to the feature stack: you turn a feature on and you pay for it performance-wise somewhere else. It's robbing Peter to pay Paul. The processing required to do IPS at 10G line rates is very different once you add AV to the mix.
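A rough way to see why stacking functions hurts at these speeds: at 10G line rate, every frame gives you only about a microsecond of processing budget, and each function you enable has to share it. A quick back-of-the-envelope sketch (rounded, ignoring framing overhead):

```python
# Per-packet processing budget at 10 Gb/s line rate. Every function you
# stack -- IPS, AV, URL filtering -- has to share this budget or
# throughput drops.
LINE_RATE_BPS = 10e9

def per_packet_budget_us(frame_bytes: int) -> float:
    """Time available to process one frame before the next one arrives."""
    return (frame_bytes * 8) / LINE_RATE_BPS * 1e6

print(per_packet_budget_us(1500))  # ~1.2 microseconds per full-size frame
print(per_packet_budget_us(64))    # ~0.05 microseconds for minimum-size frames
```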

The next steps will be interesting and we'll have to see how the switch and overlay vendors rev up to make their move to have the biggest toys on the block. Hey, whatever did happen to that 3Com M160?

Then there's that little company called Cisco...

{Ed: Oops. I made a boo-boo and talked about some stuff I shouldn't have. You didn't notice, did you? Ah, the perils of the intersection of Corporate Blvd. and Personal Way! Lesson learned. ;) }

May 06, 2007

Jeff Bardin over on the CSO blog pitched an interesting stake in the ground when he posited "Connectivity As A Utility: Where are My Clean Pipes?"

Specifically, Jeff expects that his (corporate?) Internet service functions in the same manner as his telephone service via something similar to a "do not call list." Basically, he opts out by placing himself on the no-call list and telemarketers cease to call. Others might liken it to turning on a tap and getting clean, potable water; you pay for a utility and expect it to be usable. All of it.

Many telecommunications providers want to charge you for having clean pipes, deploying a suite of DDoS services that you have to buy to enhance your security posture. Protection of last mile bandwidth is very key to network availability as well as confidentiality and integrity. If I am subscribing for a full T1, shouldn't I get the full T1 as part of the price and not just a segment of the T1? Why do I have to pay for the spam, probes, scans, and malicious activity that my telecommunications service provider should prevent at 3 miles out versus my having to subscribe to another service to attain clean pipes at my doorstep?

I think that most people would agree with the concept of clean pipes in principle. I can't think of any other utility where service delivery is approached with such a lackadaisical, best-effort attitude and where the consumer can almost always expect that some amount (if not the majority) of the utility is unusable.

Over the last year, I've met with many of the largest ISPs, MSSPs, telcos and mobile operators on the planet, and all are in some phase of deploying some sort of clean pipes variant. Gartner even predicts that a large amount of security will move "into the cloud."

In terms of adoption, EMEA is leaps and bounds ahead of the US and APAC in these sorts of services and will continue to be. The relative oligopolies associated with smaller nation states allows for much more agile and flexible service definition and roll-outs -- no less complex, mind you. It's incredible to see just how disparate and divergent the gap is between what consumers (SME/SMB/Mobile as well as large enterprise) are offered in EMEA as opposed to the good-ol' U S of A.

However, the stark reality is that the implementation of clean pipes by your service provider(s) comes down to a balance of two issues: efficacy and economics, with each varying dramatically with the market being served; the large enterprise's expectations and requirements look very, very different from the SME/SMB.

Let's take a look at both of these elements.

ECONOMICS

If you had asked most service providers about so-called clean pipes up to a year ago, you could expect to get an answer based upon a "selfish" initiative aimed at stopping wasteful bandwidth usage upstream in the service provider's network, not really protecting the consumer.

The main focus here is really on DDoS and viri/worm propagation. Today, the closest you'll come to "clean pipes" is usually some combination of the following services deployed both (still) at the customer premises as well as somewhere upstream:

DoS/DDoS

Anti-Virus

Anti-Spam

URL Filtering/Parental Controls

Managed Firewall/IDS/IPS

What is interesting about these services is that they basically define the same functions you can now get in those small little UTM boxes that consolidate security functionality at the "perimeter." The capital cost of these devices and the operational levies associated with their upkeep are pretty comparable in the SME/SMB, and when you balance what you get in "good enough" services for this market as well as the overall availability of these "in the cloud" offerings, UTM makes more sense for many in the near term.

For the large enterprise, the story is different. Outsourcing some level of security to an MSSP (or perhaps even the entire operation) or moving some amount upstream is a matter of core competence: let internal teams focus on the things that matter most while the low-hanging fruit is filtered out and monitored by someone else. I describe that as filtering out the lumps. Some enormous companies have outsourced not only their security functions but their entire IT operations and data center assets in this manner. It's not pretty, but it works.

I'm not sure they are any more secure than they were before, however. The risk simply was transferred whilst the tolerance/appetite for it didn't change at all. Puzzling.

Is it really wrong to think that companies (you'll notice I said companies, not "people" in the general sense) should pay for clean pipes? I don't think it is. The reality is that for non-commercial subscribers such as home users, broadband or mobile users, some amount of bandwidth hygiene should be free -- the potable water approach.

I think, however, that should a company which expects elevated service levels and commensurate guarantees of such, want more secure connectivity, they can expect to ante up. Why? Because the investment required to deliver this sort of service costs a LOT of money -- both to spin up and to instantiate over time. You're going to have to pay for that somewhere.

I very much like Jeff's statistics:

We stop on average for our organization nearly 600 million malicious emails per year at our doorstep averaging 2.8 gigabytes of garbage per day. You add it up and we are looking at nearly a terabyte of malicious email we have to stop. Now add in probes and scans against HTTP and HTTPS sites and the number continues to skyrocket.
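(For what it's worth, those figures hang together; a quick sanity check:)

```python
# Quick check on the figures in the quote above.
per_day_gb = 2.8
print(per_day_gb * 365)       # ~1022 GB of garbage per year -- "nearly a terabyte"
print(600_000_000 // 365)     # ~1.6 million malicious emails stopped per day
```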

Again, even though Jeff's organization isn't small by any means, the stuff he's complaining about here is really the low-hanging fruit. It doesn't make a dent in the targeted, malicious and financially-impacting security threats that really demand a level of service no service provider will be able to deliver without a huge cost premium.

I won't bore you with the details, but the level of high-availability, resilience, performance, manageability, and provisioning required to deliver even this sort of service is enormous. Most vendors simply can't do it and most service providers are slow to invest in proprietary solutions that won't scale economically with the operational models in place.

Interestingly, vendors such as McAfee, even as recently as 2005, announced with much fanfare that they were going to deliver technology, services and a united consortium of participating service providers with the following lofty clean-pipe goals (besides selling more product, that is):

The initiative is one part of a major product and services push from McAfee, which is developing its next generation of carrier-grade security appliances and ramping up its enterprise security offerings with NAC and secure content management product releases planned for the first half of next year, said Vatsal Sonecha, vice president of market development and strategic alliances at McAfee, in Santa Clara, Calif.

McAfee is working with Cable and Wireless PLC, British Telecommunications PLC (British Telecom), Telefónica SA and China Network Communications (China Netcom) to tailor its offerings through an invitation-only group it calls the Clean Pipes Consortium.

http://www.eweek.com/article2/0,1895,1855188,00.asp

Look at all those services! What have they delivered as a service in the cloud or clean pipes? Nada.

The chassis-based products which were to deliver these services never materialized, and neither did the services. Why? Because it's really damned hard to do correctly. Just ask Inkra, Nexi, CoSine, etc. Or you can ask me. The difference is, we're still in business and they're not. It's interesting to note that every one of those "consortium members," with the exception of Cable and Wireless, is a Crossbeam customer. Go figure.

EFFICACY

Once the provider starts filtering at the ingress/egress, one must trust that the things being filtered won't have an impact on performance -- or confidentiality, integrity and availability. Truth be told, as simple as it seems, it's not just about raw bandwidth. Service levels must be maintained and the moment something that is expected doesn't make its way down the pipe, someone will be screaming bloody murder for "slightly clean" pipes.

Today, if you asked a service provider what constitutes their approach to clean pipes, most will refer you back to the same list I referenced above:

DoS/DDoS

Anti-Virus

Anti-Spam

URL Filtering/Parental Controls

Managed Firewall/IDS/IPS

The problem is that most of these solutions are disparate point products run by different business units at different parts of the network. Most are still aimed at the perimeter service -- it's just that the perimeter has moved outward a notch in the belt.

Look, for the SME/SMB (or mobile user), "good enough" is, for the most part, good enough. Having an upstream provider filter out a bunch of spam and viri is a good thing, and most firewall rules in place in the SME/SMB block everything but a few inbound ports to DMZ hosts (if there are any) and allow everything from the inside to go out. Not very complicated, and it doesn't take a rocket scientist to see, from the perspective of what is at risk, that this service doesn't pay off handsomely.

For the large enterprise, I'd say that if you expect operational service levels to be met, think again. What happens when you introduce web services, SOA and heavy XML onto externally-exposed network stubs? What happens when Web2/3/4.x technologies demand more and more security layers deployed alongside the mechanics and messaging of the service?

You can expect problems, and the lack of transparency will be an issue in all but the simplest of cases.

Think your third party due diligence requirements are heady now? Wait until this little transference of risk gets analyzed when something bad happens -- and it will. Oh how quickly the pendulum will swing back to managing this stuff in-house again.

This model doesn't scale and it doesn't address the underlying deficiencies in the most critical elements of the chain: applications, databases and end-point threats such as co-opted clients as unwilling botnet participants.

But to Jeff's point, if he didn't have to spend money on the small stuff above, he could probably spend it elsewhere where he needs it most.

I think services in the cloud/clean pipes makes a lot of sense. I'd sure as hell like to invest less in commoditizing functions at the perimeter and on my desktop. I'm just not sure we're going to get there anytime soon.