“Since the Esquire article about me appeared and the Institute for Advanced Technologies in Global Resilience (IATGR) was introduced to the world, there have been a number of posts about whether the kinds of solutions we hope to work on will actually work. John Robb, for example, believes that things like ResilienceNet are Byzantine because they seek a centralized solution to a myriad of problems. Robb writes:

‘I contend that within exceedingly complex environments, the only true way to approach resilience is through decentralized processes. If you don’t approach the problem from this perspective (a philosophy of system design), the complexity overwhelms you and you fall into a cycle of rapidly diminishing returns.’

Robb is correct that our approach is to connect valuable information from varied sources and automatically analyze it using Oak Ridge supercomputers, providing the results of that analysis to those with a need to know. He fails to recognize, however, that the system takes advantage of decentralized processes to generate added value rather than trying to create a super system that stands on its own. Others are concerned that those supercomputers offer a single point of failure for such a system. Ultimately, I see the system using the power of grid computing to overcome this vulnerability. In much the same way that Web 2.0 uses mash-ups, our approach to security will present information in a much more meaningful and timely manner to those who must respond to prevent, mitigate, or recover from adverse events. In other words, what looks like a centralized system is much more likely to be decentralized and distributed in ways that even Robb would agree are resilient. The entire conversation is worth following on Robb’s blog. Another blog worth reading on the subject is Shawn Beilfuss’ post on The Age of Resilience.”

I agree with Steve that the quality of discussion in Robb’s thread was exceptional. An interesting aspect was that all of the participants could be classified as proponents of engineering resilience into systems through technical design and political policy, yet they clashed over what would constitute the ideal premise (or “philosophy”) for building resilience.

I am currently multitasking; more thoughts later in an update.

UPDATE:

“scalefree” posted the following in the thread at John’s site:

“The way you build in strong & resilient structures is by taking the math of resilience into account. The math of resilience is the math of networks, which says (very very simplified) that when you want to make a system strong & resilient, you distribute, decentralize & make redundant its structures. If you want to do it properly you use some specific algorithms to figure out how it should be decentralized, but that’s the basic idea.”

True enough. Mapping out a network or analyzing an existing one is a mathematical process, and the structure of the network establishes functional parameters. On the other hand, how many dimensions are there to the concept of resilience in play here?
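To make scalefree’s point concrete, here is a minimal sketch of the topology side of that math. The graphs and the function are purely illustrative (nothing here comes from ResilienceNet or any system discussed above): a hub-and-spoke network collapses when its hub is removed, while a ring with redundant paths stays connected.

```python
from collections import defaultdict, deque

def largest_component(edges, removed=frozenset()):
    """Size of the largest connected component after removing the given nodes."""
    adj = defaultdict(set)
    nodes = set()
    for u, v in edges:
        nodes.update((u, v))
        adj[u].add(v)
        adj[v].add(u)
    nodes -= removed
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:          # breadth-first walk of one component
            n = queue.popleft()
            size += 1
            for m in adj[n]:
                if m not in removed and m not in seen:
                    seen.add(m)
                    queue.append(m)
        best = max(best, size)
    return best

# A centralized "star": eight nodes that all talk through hub 0.
star = [(0, i) for i in range(1, 9)]
# A decentralized "ring": nine nodes, each with two neighbours.
ring = [(i, (i + 1) % 9) for i in range(9)]

print(largest_component(star, removed={0}))  # hub gone -> 8 isolated nodes, largest piece is 1
print(largest_component(ring, removed={0}))  # one node gone -> remaining 8 still connected
```

This is the simplified version of the argument: redundancy and decentralization change what a single failure can do to the whole. The open question in the thread, of course, is whether that intrinsic math is the only dimension of resilience that matters.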

How a network may be used by external actors is not always a variable that may be anticipated. The internet is a case in point. The cultural evolution of message texting as related in Rheingold’s Smartmobs is another. The mathematical arguments hold true within the network itself but not always extrinsic to it. I’m not sure the two – user and network – can be cleanly separated or controlled by algorithmic logic.

1 comment on this post.

Shawn in Tokyo:

November 29th, 2006 at 1:48 am

Hi Mark,

I updated my post in regards to the centralization question and resilience.

My argument is that if people focus too much on physical architecture, they will miss the level of resilience that can be built on the other architectures I often refer to in SCM: financial, informational, relational, and innovational. I attempt to address the bad actors and also reference Tom. I think it all fits in.