In this blog, I will be writing about aspects of Enterprise Architecture that straddle the boundary between the business and the technology realms.
The views that I express on here are mine. They aren't filtered, controlled, managed, refined, doctored, censored, or intended to represent any views that my employer has.

Wednesday, January 28, 2009

I have been doing consulting work for a couple of companies whose products are entirely informational - essentially companies that provide information services over the web. At both of them I have been struck by the mixing of the technology that delivers the "product" (often really a service, but it helps my head to think in terms of a product!) with the technology that runs the business.

An example from the genuine product world will help illustrate what I am thinking about. When making and selling hamburgers, there is a clear separation between what is delivered and how it is accounted for and tracked - essentially how the back office runs. Selling hamburgers is a sufficiently different proposition from building the back office business systems that I wouldn't attempt to combine the two. Yes, it is important that the sales information flows... (see the earlier post on flow of goods, flow of money, flow of information). However, I don't have my fryer installation crew and my cooks building the systems.

Where the product is informational, companies often reason that since the product and the back office both rely on IT, they must be the same thing. So the people who think in terms of product, features, etc. become responsible for the potentially more mundane chores of installing and managing the back-end systems - giving the internal business the data it needs to run and manage the business, and giving the sales/support and other staff the tools they need to do their jobs.

In reality these are entirely different groups - and should be. Yes, they might share common technology needs/data/platforms (although there is little guarantee even of that). Yes, they may share communications infrastructure and communication methods/platforms. But the activities required to deliver a world-class product and the activities required to provide robust "run/manage the business" systems are as different as flipping burgers and accounting for the flipped burgers. Mixing the teams (and thus not getting a proper separation of "IT" responsibilities) leads to some very brittle systems - often because the value of the "run/manage the business" applications is almost always subordinated to the "develop and operate the product" systems.

Tuesday, January 27, 2009

These are strange bedfellows at first blush. But the more I think about them the more parallels I see.

Every book I read (and every web solution I build) talks at length about statelessness - especially session statelessness. Obviously data state is important; I really would like my bank account to know its balance and not derive it by applying all the transactions since the day I opened it every time I want to know my balance.

But I digress.

When I was a neophyte developer, the IBM 3270 family of "green screens" were just becoming mainstream. I had the enviable task of writing a series of macros in PL/I to emulate the behavior of the assembler macros for "basic mapping support." Fun project...

Anyhow, in doing said project, I learned more about the 3270 than any human should have to. The key lesson of the device was that the hardware was directly addressable, had the concept of a "field," and would send back only fields that had the "Modified Data Tag" bit turned on. That meant that if a field were modified by a user, that field would be sent back, but unchanged data wasn't. If nothing else, that cut down the amount of data transmitted compared with approaches that refreshed the whole screen.

One much exploited approach was that the serving application could send the data out with the modified data tag already turned on. This of course meant that the device would send that data back regardless of whether the user actually modified the data or not. Immediately there was an opportunity to manage session state. Just send the stuff you needed next time back in a field with the modified data tag on. That way you have enough context for the next invocation.

The next leap was the ability to use "invisible" fields - fields that were mapped to the screen but marked invisible (so you couldn't see their contents). Handy for passwords, etc. However, if you set a field to invisible with the modified data tag on, you could send suitable session data back, but the user at the screen didn't have to deal with it. You got the best of both worlds: context information sent with the request, and no visible impact to the user.

Does this all sound familiar? Of course nowadays it comes in the header instead of the data, but it is the same general idea. If an architectural approach demands context data with every call, have the server send it back as part of the resource, so it automatically comes in on the next transmission.
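The 3270 trick maps almost directly onto hidden form fields (and, nowadays, cookies and headers). Here is a minimal sketch of the idea - the field name `ctx` and the helper names are my own invention, and a real application would sign or encrypt the payload rather than trust the client to echo it back untouched:

```python
import base64
import json

def embed_context(context):
    """Serialize context into a hidden form field - the web analogue of a
    3270 invisible field with the Modified Data Tag already turned on:
    the client echoes it back untouched on the next submit."""
    payload = base64.b64encode(json.dumps(context).encode()).decode()
    return '<input type="hidden" name="ctx" value="%s">' % payload

def extract_context(payload):
    """Recover the context the server sent out with the previous response."""
    return json.loads(base64.b64decode(payload))

# Round trip: the server sends context out; the browser sends it back verbatim.
field = embed_context({"account": "12345", "step": 2})
payload = field.split('value="')[1][:-2]  # strip the trailing '">'
print(extract_context(payload))  # {'account': '12345', 'step': 2}
```

The design point is the same as with the 3270: the server stays stateless because every request arrives carrying the context the server itself planted on the previous response.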

This posting by Mr. Vogels is very insightful - along the data replica dimension of distributed data management. This is clearly an important dimension, but it isn't the only one. There is a more general problem of data ambiguity. This isn't necessarily just a database problem, but an overall systems problem.

The basic thought is that when you have two representations of a piece of data that are supposed to have the same value, when do they actually have the same value (and who cares?).

We can imagine the following cases.

The 2 values must always have identical values (to an outside observer). Inside the "transaction" that sets the state of a datum, the values can be mutually inconsistent, but that inconsistency is not manifested to an observer outside of the transaction.

The 2 values will need to be "eventually consistent" - this case is admirably covered by Mr. Vogels.

The 2 values will rarely have identical values, but there are mechanisms for "explaining" the discrepancies.

The first case is almost a default desire - yes, we would like that, please. The second case is well covered from a data-replication perspective - essentially dealing with a common schema. The third case is the tricky one.

The first case is unattainable at large scale: using ACID transactions across replicas of data at Internet scale is simply impractical for performance reasons.

The third case is interesting because of situations where "transactions" can occur against either copy of the data independently and in arbitrary sequences. The communication mechanisms between the systems that can update copies of the data may be reliable, or they may be intermittent - but that isn't really the issue.

So, to illustrate this kind of system, let's take a popular application - Quicken. Many people use Quicken to manage their household accounts. The idea is to be able to use Quicken as a kind of front end to bank accounts - but it is only intermittently connected.

At any moment, the balance that Quicken reports and the balance that the bank reports are very likely to be different values. Of course, from a data management perspective they are actually different fields; however, that subtlety will be lost on the majority of users. Why will the 2 have different values for the balance field? There are lots of reasons, e.g.

Transactions have arrived at the bank without Quicken being notified yet. For example, in an interest-bearing account, the interest payment will be automatically added to the balance on the bank's view of the account. Or possibly a paid-in check has bounced - the bank will have debited the check amount and (possibly) added a penalty.

In general, transactions are processed in a different sequence. When a user writes checks, there is no guarantee that the bank will process them in the order in which they were written (in fact, policy varies - e.g., processing the biggest checks first when there are many to be processed, because that maximizes overdraft charges in the event that an account goes overdrawn).

These reasons boil down to the need to have system autonomy for throughput (imagine having to wait at the bank to process check 101 until check 100 had been processed).

Of course it doesn't matter to us that the systems are rarely fully synchronized, that the "balance" doesn't agree across them - we have accounting methods to help us reconcile. In other words we can accept that everything is OK without caring whether the systems have the same value of the balance.
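That reconciliation idea can be sketched in a few lines. This is illustrative only - the transaction IDs, amounts, and helper names are invented, and real reconciliation has to cope with duplicates, partial matches, and timing windows:

```python
def balance(opening, txns):
    """The final balance is order-independent: opening plus all postings.
    (Only intermediate balances - and overdraft fees - depend on sequence.)"""
    return opening + sum(amount for _, amount in txns)

def reconcile(local, remote):
    """Explain the discrepancy between two copies of the 'same' account:
    the transactions each side knows about that the other doesn't (yet)."""
    local_ids = {txn_id for txn_id, _ in local}
    remote_ids = {txn_id for txn_id, _ in remote}
    in_transit = [t for t in local if t[0] not in remote_ids]   # written, not yet cleared
    unnotified = [t for t in remote if t[0] not in local_ids]   # interest, fees, bounces
    return in_transit, unnotified

quicken = [("chk100", -50.00), ("chk101", -20.00)]
bank    = [("chk101", -20.00), ("int-jan", 1.25)]   # chk100 hasn't cleared; interest posted

# The two "balance" fields disagree...
assert balance(500.00, quicken) != balance(500.00, bank)

# ...but the discrepancy is fully explained, so everything is OK:
in_transit, unnotified = reconcile(quicken, bank)
assert balance(500.00, quicken) + sum(a for _, a in unnotified) == \
       balance(500.00, bank) + sum(a for _, a in in_transit)
```

The point is that the test for "everything is OK" is the reconciliation, not equality of the balance fields - which is exactly why the two systems can run autonomously.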

Friday, January 16, 2009

While this posting doesn't deal only with Enterprise Architecture, it does begin to explore how we choose the tool (from quite a wide array) for any given communication at any given moment.

Just thinking of my own case, I have an unseemly number of communication mechanisms/paths.

These obviously aren't all 2-way, but with 25 major channels - plus several news sources and the Twitter folks I follow (about 50) - it is clear that I have too much time on my hands!

So why do it? It really comes down to personae and convenience. Taking just the corporate emails - each company (including my employer) has its own email infrastructure. Each client uses its own email addressing scheme to send stuff around. I can't get from one client's system into another's (and nor should I be able to).

If I am doing frivolous things, I tend to use my hotmail account. If I am doing semi-serious, but still relatively public things, I use my gmail account. For my own business, and when I know the person at the other end, I typically use my own business email.

Twitter is a great source of interesting updates. Admittedly of the 500 or so Tweets/day that I receive, about 50 are interesting to me and about 30 really interesting. So my filters are not as good as they could be.

I use the phone, but not a lot. Most of my communication is asynchronous. I text a lot, contribute to my own blogs, read a bunch of news sources. The only things I don't seem to do are listen to/download music or video.

So why is this important from a business perspective? Because we each make our own choices about which media to use. The enterprise needs to enable many different channels for the various purposes.

Is Twitter a corporate tool? Absolutely - especially for corporate travel departments. It's the easiest way to get information out quickly.

Is email a corporate tool? Sadly, yes. But as we have observed many times, it is very heavyweight. Sometimes it is the only way to get information in and out of corporations.

Phone/Voicemail? Absolutely.

I would argue that every form of communication that I use has its place in my daily corporate life. Even hotmail and gmail have helped when the corporate network is down and I have to get a response out.

Enterprises are really going to have to rethink communication - recognizing that critical information is going to leak across many channels. Draconian security groups will simply be bypassed, since information will continue to flow.

Then we have the symmetry/asymmetry question. How much of what I do is simply reading other people's stuff (following them personally, subscribing to their publications, or whatever) vs. engaging in dialog?

When dialog of some kind is needed, which of the many tools at my disposal do I use? My rule of thumb is whatever the person I am communicating with last used when talking to me. Of course, it depends on whether it is a single short thought (Twitter), a complex large file (Groove/email/SharePoint), or something in between....

Tuesday, January 6, 2009

There's an odd dichotomy happening. We are seeing pretty massive shifts to services obtained over the network for all sorts of things (buying stuff being the most obvious, but there are so many). Yet we also insist on having our "applications" local too?

By local application, I mean a chunk of functionality that runs on a client device and must be installed independently of the rest of the functionality on the device. So the browser is an application, but things that run inside it aren't (at least not by this definition).

Much of what we can do with our smartphones, etc. can be done using a browser (possibly using the mobile version of the web site), but where there are very specific look and feel needs, we tend to download and install specific applications. This is, of course, especially true on the iPhone, where the Apple applications are legion and well liked by the applerati.

So what drives this? First and foremost, I believe, is the desire to be in control of one's own destiny. The networks are not yet ubiquitous enough that we can rely on them to have what we need available whenever we might need it.

Second, network cost - that can get expensive after a while.

Third, pure preference - we like the look and feel of the apps we install and not of those we don't.

Fourth, capability. Organizing/classifying is a core "client side" requirement. Rich experiences for doing that preclude us from using totally network-based approaches - although tagging really assists with this.

There are probably lots of other reasons, but with capability moving into the network, it seems strange that installed apps are growing in strength and popularity.