PLEASE NOTE: I HAVE PERMANENTLY MOVED MY BLOG TO http://www.rationalsurvivability.com/blog

October 03, 2007

An interesting story in this morning's New York Times titled "Unlike U.S., Japanese Push Fiber Over Profit" talks about Japan's long-term investment in building the world's first all-fiber national network and how Japan leads the world's other industrialized nations, including the U.S., in low-cost, high-speed services centered around Internet access. The article states that approximately 8 million Japanese subscribe to fiber-enabled service offerings that provide performance roughly 30 times that of a corresponding xDSL offering.

For about $55 a month, subscribers have access to up to 100Mb/s download capacity.

France Telecom is rumored to be rolling out services that offer 2.5Gb/s downloads!

I have Verizon FiOS, which is delivered via fiber to my home, and I subscribe to a 20Mb/s download tier.

What I find very interesting about the emergence of this sort of service is that if you look at a typical consumer's machine, it's not well hardened, not monitored and usually easily compromised. At this rate, the bandwidth available to some of these compromise-ready consumers' home connections is eclipsing that of mid-tier ISPs!

Anecdotally, this is even more true of online gamers, who are typically also P2P file-sharing participants and early adopters of shiny new kit -- it's a Bot Herder's dream come true.

At xDSL speeds of a few Mb/s, a relatively small number of infected machines participating in a targeted, synchronized fanning DDoS attack can easily take down a corporate network connected to the Internet via a DS3 (45Mb/s). Imagine what a botnet of a couple of 60Mb/s-connected endpoints could do -- how about a couple of thousand? Hundreds of thousands?
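For the sake of illustration, here's a back-of-the-envelope calculation in Python. The figures are my own assumptions (attack traffic rides the upstream side of the consumer link, so I've guessed at typical upload rates), but the shape of the math is what matters:

```python
# Back-of-the-envelope botnet capacity math -- all figures are illustrative
# assumptions, not measurements. Attack traffic rides the upstream side of
# the consumer link, so upload rates are what count.
DS3_MBPS = 45          # corporate Internet uplink from the example above
XDSL_UP_MBPS = 1       # a "few Mb/s down" xDSL line often has ~1 Mb/s up
FIBER_UP_MBPS = 60     # hypothetical symmetric fiber endpoint

def bots_to_saturate(target_mbps, bot_up_mbps):
    """How many bots it takes to fill the target link, ignoring overhead."""
    return -(-target_mbps // bot_up_mbps)   # ceiling division

print(bots_to_saturate(DS3_MBPS, XDSL_UP_MBPS))    # ~45 xDSL bots
print(bots_to_saturate(DS3_MBPS, FIBER_UP_MBPS))   # one fiber bot does it alone
print(f"{2000 * FIBER_UP_MBPS / 1000:.0f} Gb/s")   # 2,000 fiber bots of aggregate fire
```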

This is great news for some: this sort of capacity is economically beneficial to cyber-criminals because it reduces the exposure risk of Botnet Herders; they don't have to infect nearly the same number of machines to deliver exponentially higher attack yields given the size of the pipes. Scary.

I'd suggest that the lovely reverse DNS entries that service providers use to annotate logical hop connectivity will be used even more freely to target these high-speed users; you know, like this (fictional):

bigass20MbpsPipe.vzFIOS-05.bstnma.verizon-gni.net (7x.y4.9z.1)
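To make the point concrete, here's a trivial sketch of how little effort that targeting takes. The PTR record is the fictional one above, and the pattern is just my guess at the sort of strings a Bot Herder would grep for in bulk rDNS sweeps:

```python
import re

# The fictional PTR record from above; the pattern is only an illustrative
# guess at the strings a bot herder might look for when sweeping rDNS.
ptr = "bigass20MbpsPipe.vzFIOS-05.bstnma.verizon-gni.net"

FAT_PIPE = re.compile(r"fios|fibre|fiber|ftth|\d+\s*mbps", re.IGNORECASE)

if FAT_PIPE.search(ptr):
    print("high-bandwidth target candidate:", ptr)
```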

As an interesting aside from the service provider perspective, the need for "Clean Pipes" becomes even more important, and the providers will be even more financially motivated to prevent abuse of their backbone long-hauls by infected machines.

This, in turn, will drive the need for much more intelligent, higher-throughput infrastructure and security service layers to mitigate the threat, which is forcing folks to take a very hard look at how they architect their networks and apply security.

July 25, 2007

Listen, I'm a renaissance man and I look for analogs to the security space anywhere and everywhere I can find them.

I maintain that next to the iPhone, this is the biggest thing to hit the security world since David Maynor found Jesus (in a pool hall, no less).

I believe InfoSec Sellout has already produced a zero-day for this using real worms. No Apple products were harmed during the production of this webserver, but I am sad to announce that there is no potential for adding your own apps to the KermitOS...an SDK is available, however.

The frog's dead. Suspended in a liquid. In a jar. Connected to the network via an Ethernet cable. You can connect to the embedded webserver wired into its body parts. When you do this, you control which one of its legs twitches. pwned!

The Experiments in Galvanism frog floats in mineral oil, a webserver
installed in its guts, with wires into its muscle groups. You can
access the frog over the network and send it galvanic signals that get
it to kick its limbs.

Experiments in Galvanism is the culmination of studio and gallery
experiments in which a miniature computer is implanted into the dead
body of a frog specimen. Akin to Damien Hirst's bodies in formaldehyde,
the frog is suspended in clear liquid contained in a glass cube, with a
blue ethernet cable leading into its splayed abdomen. The computer
stores a website that enables users to trigger physical movement in the
corpse: the resulting movement can be seen in gallery, and through a
live streaming webcamera.
- Risa Horowitz

Garnet Hertz has implanted a miniature webserver in the body of a
frog specimen, which is suspended in a clear glass container of mineral
oil, an inert liquid that does not conduct electricity. The frog is
viewable on the Internet, and on the computer monitor across the room,
through a webcam placed on the wall of the gallery. Through an Ethernet
cable connected to the embedded webserver, remote viewers can trigger
movement in either the right or left leg of the frog, thereby updating
Luigi Galvani's original 1786 experiment causing the legs of a dead
frog to twitch simply by touching muscles and nerves with metal.

Experiments in Galvanism is both a reference to the origins of
electricity, one of the earliest new media, and, through Galvani's
discovery that bioelectric forces exist within living tissue, a nod to
what many theorists and practitioners consider to be the new new media:
bio(tech) art.
- Sarah Cook and Steve Dietz

July 13, 2007

The last few days of activity involving Google and Microsoft have really catalyzed some thinking and demonstrated some very intriguing indicators as to how the delivery of applications and services is dramatically evolving.

I don't mean the warm and fuzzy marketing fluff. I mean some real anchor technology investments by the big boys putting their respective stakes in the ground as they invest hugely in redefining their business models to set up for the future.

Enterprises large and small are really starting to pay attention to the difference between infrastructure and architecture, and this has a dramatic effect on the service providers and supply chains that interact with them.

It's become quite obvious that there is huge business value associated with divorcing the need for "IT" to focus on physically instantiating and locating "applications" on "boxes" and instead delivering "services" with the Internet/network as the virtualized delivery mechanism.

Google v. Microsoft - Let's Get Ready to Rumble!

My last few posts on Google's move to securely deliver a variety of applications and services represent the uplift of the "traditional" perspective of back-office SaaS offerings such as Salesforce.com, but they also highlight the migration of desktop applications and utility services to the "cloud."

This is really executing on the thin-client, Internet-centric vision from back in the day o' the bubble, when we saw a ton of Internet-borne services such as storage, backup, etc. using the "InternetOS" as the canvas for service.

So we've talked about Google. I maintain that their strategy is to ultimately take on Microsoft -- including backoffice, utility and desktop applications. So let's look @ what the kids from Redmond are up to.

What Microsoft is developing toward with its vision of a CloudOS was just recently expounded upon by one Mr. Ballmer.

Not wanting to lose mindshare or share of wallet, Microsoft is maneuvering to give the customer control over how they want to use applications and, more importantly, how they might be delivered. Microsoft Live bridges the gap between the traditional desktop and the "cloud," putting that capability online.

Let's explore that a little:

In addition to making available its existing services, such as mail and
instant messaging, Microsoft also will create core infrastructure
services, such as storage and alerts, that developers can build on top
of. It's a set of capabilities that have been referred to as a "Cloud OS," though it's not a term Microsoft likes to use publicly.

...

Late last month, Microsoft introduced two new Windows Live Services,
one for sharing photos and the other for all types of files. While
those services are being offered directly by Microsoft today, they
represent the kinds of things that Microsoft is now promising will be
also made available to developers.

Among the other application and infrastructure components
Microsoft plans to open are its systems for alerts, contact management,
communications (mail and messenger) and authentication.

...

As it works to build out the underlying core services, Microsoft is
also offering up applications to partners, such as Windows Live
Hotmail, Windows Live Messenger and the Spaces blogging tool.

Combine the advent of "thinner" end-points (read: mobility products) with high-speed, lower-latency connectivity and we can see why this model is attractive and viable. I think this battle is heating up and the consumer will benefit.

A Practical Example of SaaS/InternetOS Today?

So if we take a step back from Google and Microsoft for a minute, let's take a snapshot of how one might compose, provision, and deploy applications and data as a service using a similar model over the Internet with tools other than Live or Google Gears.

Let me give you a real-world example -- deliverable today -- of this capability with a functional articulation of this strategy: on-demand services and applications provided via virtualized datacenter delivery architectures using the Internet as the transport. I'm going to use a mashup of two technologies: Yahoo Pipes and 3Tera's AppLogic.

Yahoo Pipes is "...an interactive data aggregator and manipulator that lets you mashup your favorite online data sources." Assuming you have data from various sources you want to present, an application environment such as Pipes will allow you to dynamically access, transform and present this information any way you see fit.

This means that you can create what amounts to applications and services on demand.
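Pipes itself is a visual, browser-based tool, but to ground the idea, here's a rough Python analogue of what a Pipes-style mashup does: fetch a couple of sources, filter, merge and re-present them. The feed URLs and keyword are placeholders of my own, not anything from Pipes:

```python
import json
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder feed URLs -- purely illustrative; swap in real sources.
FEEDS = [
    "http://example.com/security-news.rss",
    "http://example.org/vendor-advisories.rss",
]

def fetch_items(url):
    """Pull an RSS feed and yield (title, link) tuples."""
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    for item in root.iter("item"):
        yield item.findtext("title", ""), item.findtext("link", "")

def mashup(feeds, keyword):
    """Aggregate the feeds, keep items mentioning the keyword, sort by title."""
    return sorted((title, link)
                  for url in feeds
                  for title, link in fetch_items(url)
                  if keyword.lower() in title.lower())

# Re-present the transformed data however you like -- here, as JSON.
print(json.dumps(mashup(FEEDS, "botnet"), indent=2))
```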

Let's agree, however, that while you have the data integration/presentation layer, in many cases you would traditionally require a complex collection of infrastructure in which this source data is housed, accessed, maintained and secured.

However, rather than worry about where and how the infrastructure is physically located, let's use the notion of utility/grid computing to dynamically make available an on-demand architecture that is modular, reusable and flexible enough to make this service delivery a reality -- using the Internet as a transport.

3Tera's AppLogic is used by hosting providers to offer true utility computing. You get all the control of having your own virtual datacenter, but without the need to operate a single server.

Deploy and operate applications in your own virtual private datacenter

Set up infrastructure, deploy apps and manage operations with just a browser

Scale from a fraction of a server to hundreds of servers in days

Deploy and run any Linux software without modifications

Get your life back: no more late night rushes to replace failed equipment

In fact, BT is using them as part of the 21CN project, which I've written about many times before.

So check out this vision, assuming the InternetOS as a transport. It's the drag-and-drop, point-and-click Metaverse of virtualized application and data combined with on-demand infrastructure.

You first define the logical service composition and provisioning through 3Tera's visual drag-and-drop canvas, laying out firewalls, load-balancers, switches, web servers, app servers, databases, etc. Then you click the "Go" button. AppLogic provisions the entire thing for you without you even necessarily knowing where these assets are.
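I haven't reproduced 3Tera's actual template format here; the following is just a hypothetical Python sketch of what that logical composition boils down to -- a graph of components and the links between them, handed to a provisioning engine when you hit "Go":

```python
# Hypothetical sketch of a logical service composition -- not 3Tera's actual
# template format, just the shape of the idea: components, the links between
# them, and a stand-in for the provisioning step.
topology = {
    "components": {
        "fw1":  {"type": "firewall",      "policy": "web-dmz"},
        "lb1":  {"type": "load_balancer", "vip": "10.0.0.10"},
        "web1": {"type": "web_server",    "image": "linux-apache"},
        "web2": {"type": "web_server",    "image": "linux-apache"},
        "app1": {"type": "app_server",    "image": "linux-tomcat"},
        "db1":  {"type": "database",      "image": "linux-mysql"},
    },
    "links": [
        ("internet", "fw1"), ("fw1", "lb1"),
        ("lb1", "web1"), ("lb1", "web2"),
        ("web1", "app1"), ("web2", "app1"),
        ("app1", "db1"),
    ],
}

def provision(t):
    """Stand-in for the 'Go' button: walk the graph and instantiate each piece."""
    for name, spec in t["components"].items():
        print(f"provisioning {spec['type']:>13} '{name}'")
    for a, b in t["links"]:
        print(f"wiring {a} -> {b}")

provision(topology)
```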

Then, use something like Pipes to articulate how data sources can be accessed, consumed and transformed to deliver the requisite results -- all over the Internet, transparently and securely.

July 09, 2007

3) What do you make of Google's foray into security? We've seen them crawl sites and index malware. They've launched a security blog. They acquired GreenBorder. Do you see them as an emerging force to be reckoned with in the security space?

...to which he responded:

I doubt Google has plans to make this a direct revenue generating exercise. They are a platform for advertising, not a security company. The plan is probably to use the malware/solution research for building in better security in Google Toolbar for their users. That would seem to make the most sense. Google could monitor a user's surfing habits and protect them from their search results at the same time.

To be fair, this was a loaded question because my opinion is diametrically opposed to his. I believe Google *is* entering the security space and will do so in many vectors and it *will* be revenue generating.

This morning's news that Google is acquiring Postini for $625 million doesn't surprise me at all, and I believe it proves the point.

In fact, I reckon that in the long term we'll see the Google Toolbar morph into a much more intelligent and rich client-side security application proxy service, whereby Google pairs the client-side security of the Toolbar with the GreenBorder browsing environment and tunnels/proxies all outgoing requests to GooglePOPs.
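From the client's side, that "tunnel everything through the POP" model is nothing exotic -- it amounts to routing every outbound request through an upstream filtering proxy. A minimal sketch, with a made-up POP address standing in for whatever Google would actually run:

```python
import urllib.request

# Hypothetical "clean pipes" POP -- the address is invented for illustration.
CLEAN_POP = "http://pop.example.net:3128"

# Route every request from this client through the filtering POP, which can
# then scan, cache and (per the scenario above) instrument the traffic.
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": CLEAN_POP, "https": CLEAN_POP})
)

resp = opener.open("http://example.com/")
print(resp.status, len(resp.read()))
```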

What's a GooglePOP?

These GooglePOPs (Google Points of Presence) will house large search and caching repositories that will -- in conjunction with services such as those from Postini -- provide a "clean pipes" service to the consumer. Don't forget the utility services that recent acquisitions such as GrandCentral and FeedBurner provide...it's too bad that eBay snatched up Skype...

Google will, in fact, become a monster ASP. Note that I said ASP and not ISP. ISP is a commoditized function. Serving applications and content as close to the user as possible is fantastic. So pair all the client-side goodness with security functions AND add GoogleApps and you've got what amounts to a thin-client version of the Internet.

Remember all those large sealed shipping containers (not unlike Sun's Project Blackbox) that Google is rumored to be placing strategically around the world -- in conjunction with their mega datacenters? I think it was Cringely who talked about this back in 2005:

In one of Google's underground parking garages in Mountain View ...
in a secret area off-limits even to regular GoogleFolk, is a shipping
container. But it isn't just any shipping container. This shipping
container is a prototype data center.

Google hired a pair of
very bright industrial designers to figure out how to cram the greatest
number of CPUs, the most storage, memory and power support into a 20-
or 40-foot box. We're talking about 5000 Opteron processors and 3.5
petabytes of disk storage that can be dropped-off overnight by a
tractor-trailer rig.

The idea is to plant one of these puppies
anywhere Google owns access to fiber, basically turning the entire
Internet into a giant processing and storage grid.

Imagine that. Buy a ton of dark fiber, sprout hundreds of these PortaPOPs/GooglePOPs and you've got the Internet v3.0.

Existing transit folks that aren't Yahoo/MSN will ultimately yield to the model because it will reduce their costs for service and they will basically pay Google to lease these services for resale back to their customers (with re-branding?) without the need to pay for all the expensive backhaul.

Your Internet will be served out of cache..."securely." So now, instead of just harvesting your search queries, Google will have intimate knowledge of ALL of your browsing -- scratch that -- all of your network-based activity. This will provide not only much more targeted ads, but also the potential for ad insertion and traffic prioritization for preferred Google advertisers, all the while offering "protection" to the consumer.

SMBs and average Joe consumers will be the first to embrace this as cost-based S^2aaS (Secure Software as a Service) becomes mainstream, and this will then yield a trickle-up to the Enterprise and service providers as demand pressures them into providing like levels of service...for free.

It's not all scary, but think about it...

Akamai ought to be worried. Yahoo and MSN should be worried. The ISPs of the world investing in clean pipes technologies ought to be worried (I've blogged about Clean Pipes here).

Should you be worried? Methinks the privacy elements of all this will spur some very interesting discussions.

May 21, 2007

Jon Oltsik crafted an interesting post today regarding the bifurcation of opinion on where the “intelligence” ought to sit in a networked world: baked into the routers and switches or overlaid using general-purpose compute engines that ride Moore’s curve.

I think that I've made it pretty clear where I stand. I submit that you should keep the network dumb, fast, reliable and resilient and add intelligence (such as security) via flexible and extensible service layers that scale in terms of both speed and choice.

You should get to define and pick what best of breed means to you and add/remove services at the speed of your business, not the speed of an ASIC spin or an acquisition of technology that is in line with neither the pace and evolution of classes of threats and vulnerabilities nor the speed of an agile business.

The focal point of his post, however, was to suggest that the real issue is the fact that all of this intelligence requires exposure to the data streams which means that each component that comprises it needs to crack the packet before processing. Jon suggests that you ought to crack the packet once and then do interesting things to the flows. He calls this COPM (crack once, process many) and suggests that it yields efficiencies -- of what, he did not say, but I will assume he means latency and efficacy.

So, here’s my contentious point that I explain below:

Cracking the packet really doesn't contribute much to the overall latency equation anymore thanks to high-speed hardware, but the processing sure as heck does! So whether you crack once or many times doesn't really matter; what you do with the packet does.

Now, on to the explanation…

I think that it's fair to say that many of the underlying mechanics of security are commoditizing, so things like anti-virus, IDS, firewalling, etc. can be done without a lot of specialization -- leveraging prior art is quick and easy, and thus companies can broaden their product portfolios by just adding a feature to an existing product.

Companies can do this because of the agility that software provides, not hardware. Hardware can give you economies of scale as it relates to overall speed (for certain things) but generally not flexibility.

However, software has its own Moore's curve of sorts, and I maintain that unfortunately its lifecycle, much like what we're hearing @ Interop regarding CPUs, does actually have a shelf life and point of diminishing returns for reasons that you're probably not thinking about...more on this from Interop later.

Jon describes the stew of security componentry and what he expects to see @ Interop this week:

I expect network intelligence to be the dominant theme at this week's Interop show in Las Vegas. It may be subtle but it's definitely there. Security companies will talk about cracking packets to identify threats, encrypt bits, or block data leakage. The WAN optimization crowd will discuss manipulating protocols and caching files, Application layer guys crow about XML parsing, XSLT transformation, and business logic. It's all about stuffing networking gear with fat microprocessors to perform one task or another.

That’s a lot of stuff tied to a lot of competing religious beliefs about how to do it all as Jon rightly demonstrates and ultimately highlights a nasty issue:

The problem now is that we are cracking packets all over the place. You can't send an e-mail, IM, or ping a router without some type of intelligent manipulation along the way.

<nod> Whether it’s in the network, bolted on via an appliance or done on the hosts, this is and will always be true. Here’s the really interesting next step:

I predict that the next big wave in this evolution will be known as COPM for "Crack once, process many." In this model, IP packets are stopped and inspected and then all kinds of security, acceleration, and application logic actions occur. Seems like a more efficient model to me.

Doing this basically means that this sort of solution requires proxy (transparent or terminating) functionality. Now, the challenge is that whilst "cracking the packets" is relatively easy and cheap even at 10G line rates thanks to hardware, the processing is really, really hard to do well across the spectrum of processing requirements if you care about things such as quality, efficacy, and latency -- and it is "expensive" in all of those categories.

The intelligence of deciding what to process and how once you’ve cracked the packets is critical.
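To make the model concrete, here's a minimal sketch of what crack-once-process-many looks like in code. The service functions and policy table are purely illustrative stand-ins; the point is that the traffic is parsed exactly once and the services then work on that shared, already-cracked representation -- which is also where the real cost lives:

```python
from dataclasses import dataclass, field

@dataclass
class Flow:
    """Canonical, already-'cracked' view of the traffic shared by every service."""
    src: str
    dst: str
    app_proto: str
    payload: bytes
    verdicts: list = field(default_factory=list)

def parse(raw: bytes) -> Flow:
    """Stand-in parser: real code would decode L2-L7 here, exactly once."""
    return Flow(src="10.0.0.1", dst="192.0.2.80", app_proto="http", payload=raw)

# Illustrative services -- each operates on the parsed Flow rather than
# re-cracking the raw packet. The expensive work happens inside these.
def antivirus(f):    f.verdicts.append("av:clean")
def ids(f):          f.verdicts.append("ids:no-match")
def dlp(f):          f.verdicts.append("dlp:allowed")
def wan_optimize(f): f.verdicts.append("wanopt:compressed")

# Policy decides which services apply, and in what order, per flow context.
POLICY = {"http": [antivirus, ids, dlp, wan_optimize],
          "dns":  [ids]}

def crack_once_process_many(raw_packet: bytes) -> Flow:
    flow = parse(raw_packet)                     # the single crack
    for service in POLICY.get(flow.app_proto, []):
        service(flow)                            # process many, no re-parsing
    return flow

print(crack_once_process_many(b"GET / HTTP/1.1\r\n\r\n").verdicts)
```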

This is where embedding this stuff into the network is a lousy idea.

How can a single vendor possibly provide anything more than “good enough” security in a platform never designed to solve this sort of problem whilst simultaneously trying to balance delivery and security at line rate?

This will require a paradigm shift for the networking folks: it will mean either starting from scratch and integrating high-speed networking with general-purpose compute blades, re-purposing a chassis (like, say, a Cat65K) and stuffing it with nothing but security cards grafted onto the switches, or stacking appliances (big or small -- single form factor or in blades) and grafting them onto the switches once again. And by the way, simply adding networking cards to a blade server isn't an effective solution, either. "Regular" applications (and esp. SOA/Web 2.0 apps) aren't particularly topology-sensitive. Security "applications," on the other hand, are wholly dependent on and integrated with the topologies into which they are plumbed.

It’s the hamster wheel of pain.

Or, you can get one of these, which offers all the competency, agility, performance, resilience and availability of a specialized networking component combined with an open, agile and flexible operating and virtualized compute architecture that scales with parity based on Intel chipsets and Moore's law.

What this gives you is an ecosystem of loosely-coupled best-of-breed (BoB) security services that can be intelligently combined in any order once the traffic is cracked, and ruthlessly manipulated as it passes through them, governed by policy -- ultimately making decisions on how and what to do to a packet/flow based upon content in context.

The consolidation of best-of-breed security functionality delivered in a converged architecture yields efficiencies that are spread across the domains of scale, performance, availability and security, as well as the traditional economic scopes of CapEx and OpEx.

April 02, 2007

I found Thomas Ptacek's comments regarding DNSSEC deliciously ironic, not for anything directly related to secure DNS, but rather for a point he made in substantiating his position regarding DNSSEC while describing the intelligence (or lack thereof) of the network and application layers.

This may have just been oversight on his part, but it occurs to me that I've witnessed something on the order of a polar magnetic inversion of sorts. Or not. Maybe it's the coffee. Ethiopian Yirgacheffe does that to me.

Specifically, Thomas and I have debated previously about this topic and my contention is that the network plumbing ought to be fast, reliable, resilient and dumb whilst elements such as security and applications should make up a service layer of intelligence running atop the pipes.

Thomas' assertions focus on the manifest destiny that Cisco will rule the interconnected universe and that security, amongst other things, will -- and more importantly should -- become absorbed into and provided by the network switches and routers.

While Thomas' arguments below are admittedly regarding the "Internet" versus the "Intranet," I maintain that the issues are the same. It seems that his statements below, which appear to endorse the "...end-to-end argument in system design" regarding the "...fundamental design principle of the Internet," are at odds with his previous aspersions regarding my belief. Check out the bits in red.

...You know what? I don’t even agree in principle. DNSSEC is a bad thing, even
if it does work.

How could that possibly be?

It violates a fundamental design principle of the Internet.

Nonsense. DNSSEC was designed and endorsed by several of the
architects of the Internet. What principle would they be violating?

The end-to-end argument in system design. It says that you want to
keep the Internet dumb and the applications smart. But DNSSEC does the
opposite. It says, “Applications aren’t smart enough to provide
security, and end-users pay the price. So we’re going to bake security
into the infrastructure.”

I could have sworn that the bit in italics is exactly what Thomas used to say. Beautiful. If Thomas truly agrees with this axiom and that indeed the Internet (the plumbing) is supposed to be dumb and applications (service layer) smart, then I suggest he should revisit his rants regarding how he believes embedding security in the network is a good idea, since it invalidates the very "foundation" of the Internet.