Friday, June 30, 2006

He is, of course, right. I guess it comes down to the degree to which the entity in question can be written purely on top of other standardised APIs (versus requiring knowledge of internals) and the amount of value that can be added by different implementations.

James Governor posted a couple of weeks ago about the goings-on in the UK government's massive National Health Service IT project. (Disclaimer: I have no inside knowledge of what's going on there... I don't even know if any of my colleagues in other teams are tangentially involved.)

The problems of the omniscient, benevolent central planner are well understood in politics and economics. In anything other than trivial situations, it simply isn't possible for one entity to know all the information necessary to make a perfectly correct decision. You absolutely have to work on the basis that you don't know everything, that actions will have unintended consequences and that it is better to start small, iterate and learn from your mistakes. Whenever this lesson is ignored, failure occurs. Consider "planned" town centres versus ones that grew organically (I'd rather live in London than in one of the UK's "planned" towns). Consider "planned" economies versus market economies. The Mythical Man Month is as much a book about this problem as it is a book about software.

So, what makes IT projects different? Of course, the answer is nothing. They are subject to all the same problems that every other top-down project faces.

This is why agile programming, iterative development techniques, extreme programming and all that good stuff has sprung up.

So, I guess I have two questions:

1) Is the NHS project as much of a disaster as the press suggests? (My take: I suspect it is running broadly to plan but that it is the objectives for the programme that are broken... are huge swathes of it even needed?)

2) Do we have any good examples of government (or other large-scale) projects where agile techniques have been used to prototype solutions, get participant buy-in and demonstrate value quickly? (My take: there are probably examples everywhere but I'm too lazy to dig them out :-p)

I've just been issued with a replacement for my ageing ThinkPad T41p. I now have a shiny new Lenovo/IBM ThinkPad T60p.

Shiny... Check. New... Check... What more could you want?

Here are the five best and five worst things about it:

Five Best

1600x1200 screen

2GB RAM, expandable to 4GB

Feels pretty solidly built (unlike some other laptops I've had in the past, mentioning no model numbers...)

Has a fingerprint scanner... a nice little gimmick. Seems to work quite well if your fingers are dry.

Feels fast

Five Worst

Comes with a tool that thinks it knows how to configure the network settings better than I do. It doesn't.

It's a little too big for a travelling user but I'd rather big and robust than small and broken.

Lenovo have changed the standard ThinkPad power plug. Arghhh!!!! Apparently it's because the new models need a higher voltage but still...... gone are the days when I could swan along to a meeting without power, knowing I could borrow somebody else's adapter.

Err... that's about it.... there's not much not to like

Summary

It's great. I just wish it didn't have that network configuration tool whose name I'd better not mention. (I should mention that most of my colleagues swear by that tool so perhaps it's just me.)

It's not just conference badges. At my current client, the ID badges are designed to be hung from the neck on a string with a single clip. Surprise, surprise.... half of the people walking around the site are displaying the blank side of their badge. It would be funny if this wasn't a moderately sensitive site :-(

11. Something to pulverize fish bones into fine powder without damaging the soft fish. The lithotripter uses ultrasonics to do this to kidney stones, but I want a commercial one to do this to fish bones in supermarkets, restaurants, and even homes. Gone will be the unpleasantness of finding bones in one's mouth, with the attendant risk of choking. Instead the bone powder will add nourishment to the meal.

to the profound:

14. Something that makes mining easier, less life-threatening to its participants, and with less environmental impact: bugs specially tailored to gobble up the coal, copper, manganese or whatever, and sent down to extract the stuff. They are flushed out, the desired resource extracted, and all organized by guys in white coats who are not exposed to the hazards associated with conventional mining.

Friday, June 23, 2006

I was in IBM's Burlingame lab several times last year in the run up to the release of WebSphere Process Server. I remember lots of things I enjoyed about those trips (Upper Class on VS20, being commuting distance from San Francisco, the weather...) but the thing I remember most is the little stall at the base of the building where there was a really friendly guy who sold delicious muffins and some excellent hazelnut coffee.

I'm not going to attempt a full answer, and I'm not even going to refer to BPEL4People. Rather, I'm just going to make an argument for why human task support inside a process is useful and why human task support outside a process is useful.

Human tasks outside a process

The idea of being able to "invoke a human as a service" is such a good idea that the process people shouldn't be allowed to keep it for themselves. I can think of hundreds of applications where being able to put a piece of work on someone's worklist - and know when they've done it - would be brilliant. Imagine a banking application that allowed you to leave a message for your bank manager, know when they've read it and which allows them to give their answer directly.

An external human task manager lets you do this and it's really, really useful. WebSphere Process Server has one of these things and it works really well. Once you have it, you keep on thinking of new uses for it.

The obvious next thought, however, is that we've succeeded in some unstated goal of abstracting humans into (expensive) web services.

The problem is that, as great as an external task manager component is, a human is not a machine and there are aspects of human behaviour that are qualitatively different. That is: there are behaviours that one would like to model that are not best expressed through a simple (e.g. WSDL) interface.

These behaviours become particularly apparent when developing a solution that automates part of a business process.

Human tasks inside a process

BPEL is the industry's current attempt to define an executable language for describing a business process. I think that's a little ambitious - although I do think it is more than just a web services scripting language.

In BPEL, we are encouraged to think about the concept of an "invoke". This is an entity in BPEL that says: "At this point, we need to invoke some functionality that exists elsewhere. Here is the input data and this is where we should store the data that comes back". At execution, the BPEL engine turns this into a web services call (most usually, at least).

This model can be thought of as the archetypal command-and-control approach. "Do this!". "Now do that!". "Did it work? Good! Now do this!"

Many processes are like this and many applications can be built on this model. In such cases, "invoking" a human in this manner is reasonable.

However, many other classes of process can be thought of in terms of a flow. Somebody does something, then a bit of automation happens, then somebody else does something. Such processes are often typified by a collective knowledge of what needs to happen next. People just seem to "know" when they are required.

When trying to model such processes, it is far more productive to model the human interactions inline. The interplay of the various people in the process is intrinsic to the process. "Swapping out" a human for a machine (as could be done in the "invoke" case) just doesn't make sense. Instead, it is useful to be able to say things like: "This step is done by a human. It can't be the one who did the previous step but it must be a manager. If they haven't done it within a day, escalate it to their boss".
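To make that concrete, here's a hypothetical sketch, in plain Python, of the kind of constraints an inline human task carries. This is not WebSphere or BPEL4People syntax; every name here is invented for illustration:

```python
from datetime import timedelta

# Hypothetical model of an inline human task: a required role, a
# "separation of duties" exclusion, and an escalation rule.
class HumanTask:
    def __init__(self, name, required_role, separation_from=None,
                 escalate_after=None, escalate_to=None):
        self.name = name
        self.required_role = required_role      # e.g. "manager"
        self.separation_from = separation_from  # earlier step whose performer is excluded
        self.escalate_after = escalate_after    # how long before escalation kicks in
        self.escalate_to = escalate_to          # e.g. "performer's boss"

    def eligible(self, person, roles, history):
        """A person qualifies if they hold the required role and did
        not perform the excluded earlier step."""
        if self.required_role not in roles.get(person, set()):
            return False
        if self.separation_from and history.get(self.separation_from) == person:
            return False
        return True

# "This step must be done by a manager, but not by whoever entered the
# claim; if it sits for a day, escalate."
approve = HumanTask("approve_claim", required_role="manager",
                    separation_from="enter_claim",
                    escalate_after=timedelta(days=1),
                    escalate_to="performer's boss")

roles = {"alice": {"manager"}, "bob": {"manager"}, "carol": {"clerk"}}
history = {"enter_claim": "alice"}  # alice performed the previous step

print([p for p in roles if approve.eligible(p, roles, history)])  # -> ['bob']
```

The point is that `eligible` needs the process history to do its job, which is exactly the context an inline task gets for free.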

Sure... this could be configured external to the process in the external human task manager but the enforced separation seems unnatural to me in such a case.

Conclusion

My claim, therefore, is that we need support for human tasks both inside a process and outside a process. It may well be that BPEL4People is over-complicated (I'm not qualified to say) but I do suspect that it will prove to be on the right lines.

Tuesday, June 20, 2006

James Governor points to an IBM press release about how we're getting into "mashups" in a big way. (As an aside, I'm sure putting "mashup" in scare-quotes will probably seem as quaint as spelling "internet" with quotes in a year or so... but I'm sticking with the quotes for now :-) )

Looks like I have yet another reason to get round to learning some PHP.

I took a swipe at Bruce Silver a few days ago and implied he didn't know anything about WebSphere Process Server. Somewhat embarrassingly, it turns out that he has written a 28 page report on it and that it appears as the first link on the Integration Developer documentation homepage. Oops...... Sorry, Bruce. (Given that he also hints at the existence of such an article in the post I referenced, I think an apology is the least I owe him).

When a new specification is proposed - especially a revision to an existing one - it is incumbent upon the proposer to justify the need for it. I think the various "Human Task" use cases that Bruce outlines are pretty important but it's easy to see how this could be construed as an attempt to make it harder for other vendors to conform to the spec.

My view is that the problem is actually the other way round: I'm increasingly of the opinion that standardisation often occurs too soon and that major revisions are a reflection that the initial specs fail to anticipate potential problems or extended use cases. Unfortunately, if vendors choose to delay standardisation, they're accused of being proprietary or risk finding themselves with no influence amongst those who decide to standardise earlier. When the incentives are so strongly stacked in favour of early standardisation, it's not surprising that those who gain the most experience with a spec discover its deficiencies and seek to remedy them.

What I suspect has raised suspicions around BPEL in particular is that there have been plenty of attempts to describe business processes in the past... this is hardly a brand new field (FDL, FDML, BPML, etc, etc).... I don't have a good come-back to that yet. I'm working on it :-)

... but there's absolutely no way on earth I'll be flying on an A380 until they've been around for at least a year...

Tim Worstall at Nightcap Syndication seems to believe the problems in getting it launched are worse than have been publicly admitted... so I guess I have a few years before I have to worry about going on one :-)

Sunday, June 18, 2006

Mainframe Blog recounts a sorry tale of raising a PMR (Problem Management Report) with a vendor for support on z/OS. (Given that only IBM uses the term "PMR", I can only assume the author is trying to protect us from the shame....)

Level 1 support (the job of checking a customer's entitlement to support, capturing the abstract of the problem and routing it to the appropriate level 2 team) has always struck me as one of the roles most suitable for automation.

I long ago realised the importance of having chosen an abstract for my PMRs in advance of calling up - and spelling out every word. As frustrating as dealing with a level 1 support organisation is, I don't think the pain is necessarily a reflection on the quality of the staff: it's just not reasonable to expect a single person to be familiar with every product we support and to be familiar with every technical term unique to each of those products.

Last year, I finally got sick of carrying my laptop charger, my phone charger and my iPod charger every time I went anywhere and I bought an iGo Juice 70 when I was in the US.

It's a very clever device and I'm surprised more people don't know about them.

Well.... I'm on a long term project now and find I'm not using the iGo any more (my phone doesn't need charging while I'm away as I'm never away from home for more than three nights in a row). So, I've put it up for sale.

It's my first ever venture onto eBay as a seller.... and I'm rather hoping it doesn't go for £1... it cost me somewhat more than that.

So, readers, if you or anyone you know wants one of these, bids, wins the auction and mentions that they saw it on my blog when they contact me, then I'll throw in an adapter for free (it comes with a US plug, you see). Can't say fairer than that....

This weekend was the first one I've spent in London for several weeks... so I was very glad that the weather was so good. Nothing beats a lazy Saturday afternoon relaxing in Regent's Park, picnicking with friends, drinking wine and eating snacks - before heading into town in the evening.

However, the more interesting part of his article is his discussion of BPEL4People. To my shame, I haven't yet read this spec but I have a fair idea what will be in it (for reasons that will soon become clear).

Joe links to an article by Bruce Silver. Bruce doesn't seem to like BPEL4People very much. It seems that he doesn't see a need to make a "human" a first-class activity type in BPEL... believing that it's sufficient to standardise an interface that human task manager services must implement (externally to the process).

At first glance, a wholly external human task management service does, indeed, have many advantages: assigning work to a person is achieved by "invoking a human as a service" and you can swap between automated and human tasks simply by changing where a particular invoke activity points to... why "hard code" the use of a human in the process?

WebSphere Process Server offers this way of working and it works well.

But there's a problem, and the problem is process context. A tenet of SOA is that it doesn't really matter which system implements an interface provided they do implement it and implement it with a quality of service that you find acceptable. Unfortunately, in BPM, you really, really do care who performs a particular step in a process, and the entire context of the process is sometimes needed to determine who the right person is.

The only way you can pass enough context to an external human task manager for the most complicated scenarios is by including a lot of human-task-specific stuff in the interface. That means you can't swap human implementations and automated implementations in a seamless fashion and it means that the interface will be rather unpleasant: huge amounts of process context would be flowed across multiple service calls, regardless of whether it was needed.

The solution in WebSphere Process Server (and, I suspect, also in BPEL4People) is to accept that, in many cases, having human tasks expressed directly in BPEL is the superior way to do things (we offer a choice). If a human task really is performed by a human, it's more natural to drop that human task straight onto the BPEL canvas and, because the task is inline, it has access to all the context it could possibly need (i.e. for complex role resolution such as "this task can only be done by the manager of the person who performed task A", etc, etc).

As for Bruce's claim that mandating support for all five cases in the specification is "overly ambitious and unlikely to be adopted beyond IBM and SAP themselves – if even they can achieve it", I'd urge him to take a closer look at WebSphere Process Server. This product is a gem in IBM's software crown that is beginning to get the wider recognition that it deserves. It's quite amusing to see people discussing features of Process Server that are available today as if they're some sort of unachievable nirvana :-)

Friday, June 16, 2006

I devised my own theoretical weight-loss plan some time ago. The only problem was that I could find no way to market it. It's based on a trivial bit of physics.

Let's start with some assumptions or background:

* Let's assume the core temperature of the body is 37 degrees centigrade.
* Remember that the specific heat capacity of water is about 4.2 joules per gram kelvin. That's almost precisely 1 calorie per gram per degree. (Hmmm... strange that....)

This means that if you drink a litre of ice-cold water, you will burn off 37 kilocalories by the simple process of warming it up inside your own body.

Fantastic! You can burn off over a hundred kilocalories simply by drinking three litres of icewater.

I thought I had stumbled upon the germ of a truly great business idea until I did the arithmetic for a pound of fat: at roughly 3,500 kilocalories per pound, you'd have to drink nearly a hundred litres of ice-cold water to lose a single pound. Oops....
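For anyone who wants to check the sums, here's the arithmetic, taking water as 1 kg per litre and a pound of body fat as roughly 3,500 kilocalories:

```python
# Energy needed to warm a litre of ice-cold water to body temperature.
SPECIFIC_HEAT = 4186        # joules per kg per kelvin (~1 kcal per kg per K)
JOULES_PER_KCAL = 4186

litres = 1.0                # one litre of water is about 1 kg
delta_t = 37 - 0            # from ice-cold (0 C) to core body temperature

joules = litres * SPECIFIC_HEAT * delta_t
kcal = joules / JOULES_PER_KCAL
print(f"{kcal:.0f} kcal per litre")                 # -> 37 kcal per litre

# A pound of body fat is roughly 3,500 kcal, so:
litres_per_pound = 3500 / kcal
print(f"{litres_per_pound:.0f} litres per pound")   # -> 95 litres per pound
```

So the diet "works", in the sense that the physics is sound; it just doesn't work fast enough to market.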

Thursday, June 15, 2006

I try to stay away from overly geeky subjects on this blog (no, really...) but sometimes true perfection has to be recognised.

As I do more and more design and development of solutions using web services (SOAP/HTTP, in particular), I find myself wanting to see exactly what is flowing over the wire. (That's code for: getting interoperability to work isn't always trivial....)

The number of times I've been saved by a groovy little tool called TCPMon is large. What's more, almost nobody knows about this tool.... until I tell them.... and then they can't get enough.

So, what is it? It's nothing more than a little app that listens for TCP requests on one port, dumps what it gets, forwards the data (unchanged) to a TCP port at another (or the same) machine and then dumps what comes back, before returning it to the original client... i.e. it's a proxy that dumps the traffic.

So, if you've ever wanted to see the SOAP-ENV or the fault or whatever as it flies over the network, now you can... and you've been able to for ages (since at least WAS 5 - and probably longer since I think there's also an apache version).

How?

If you're running any WebSphere Application Server based product, grab a command prompt, navigate to the lib directory and launch tcpmon from there.

Tell it which port to listen on (that is where you will subsequently point your client) and which hostname and port to forward the requests to (i.e. the "real" server) and off you go. Fab!
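For the curious, the essence of what tcpmon does can be sketched as a tiny forwarding proxy. This is an illustrative toy, not the real tool, and the ports and host below are made-up examples:

```python
# Toy version of tcpmon: listen on one port, forward every byte to the
# "real" server, and dump the traffic in both directions to the console.
import socket
import sys
import threading

LISTEN_PORT, TARGET_HOST, TARGET_PORT = 9080, "localhost", 8080  # examples only

def pump(src, dst, label):
    """Copy bytes from src to dst, dumping everything that flows past."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        sys.stdout.write(f"--- {label} ---\n{data.decode('latin-1')}\n")
        dst.sendall(data)

def main():
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", LISTEN_PORT))
    server.listen(5)
    while True:
        client, _ = server.accept()
        # Open the onward connection to the real server, then shuttle
        # bytes in both directions on separate threads.
        upstream = socket.create_connection((TARGET_HOST, TARGET_PORT))
        threading.Thread(target=pump, args=(client, upstream, "request"),
                         daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client, "response"),
                         daemon=True).start()

# To run it: call main() (it serves forever), then point your SOAP client
# at port 9080 instead of 8080 and watch the XML fly past.
```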

Note: the "ws" in the package name makes me suspect this isn't actually a supported part of the product so don't blame me if it doesn't work for you and don't even *think* of pretending that my mentioning it gives you the right to raise a PMR :-)

My view on "climate change" is that it comes down to a value judgment: do you believe that the jaw-droppingly poor people of today should be kept poor in order to reduce the risk that the human race may die out sooner than it might have? If so, sacrificing economic growth and prosperity today (and hence reducing the number of people that are lifted from poverty thanks to trade) is the right thing to do. If not, it is clearly the wrong thing to do.

The problem is: how can we weigh the balance? Perhaps climate change will affect us far more savagely and quickly than many expect? Perhaps it's a mirage. Accurate computer models are critical if we are to come remotely close to guessing right. If they're wrong, our leaders will make the wrong decisions. Dangerously wrong decisions.

4. Never say, "on the one hand, on the other hand." Always say, "I think X because I believe Y is most important; other economists will tell you Z because they think Q is most important. They're probably wrong because of R."

Tuesday, June 13, 2006

RedMonk's James Governor has been on a roll of late.... his posts for the last week have been uniformly educational. Sadly, my feed reader couldn't handle his new clever feed and so I missed them all. For anyone else who's missed his output recently, I suggest you check if your RSS reader is confused, too.

This was the first I'd heard of that company (which, given the relatively close-knit community of ISSW, suggests that they really did succeed without needing armies of IBM consultants...). It's an extremely positive reference.

Now..... I said he was on a roll. Well..... what I really mean was that he spelled HIPAA correctly. It doesn't take much to impress me.

Only talk about vendors and products. Never actually start a conversation about the problem space that customers may face. Steer all conversations to your contrived taxonomy instead of seeking to understand and explain in their own terms

State in some form or fashion that success is tied to obtaining explicit buy-in from senior management, ignoring the fact that folks not only have heard this on too many occasions but that this is somewhat obvious to folks worth their salt

State something even more insultingly obvious such as the importance of understanding your enterprise's specific business requirements

I really hope there's more to it than that..... I mean *I* can do that!

Dave Lorenzo's Career Intensity is a fabulous treasure trove of career advice (that always leaves me somewhat deflated by my inability to put any of it into action... but that's another post).

In one of his posts today, he talks about "Building Buzz" and how he was impressed by seeing a successful professional's request for help. What caught my attention was his claim that David Maister's books are "required reading" for McKinsey, Bain and BCG consultants.

My personal opinion is that any professional in IT should have read a book on concurrent programming and a book on transaction systems. Forget project management, architecture, SOA or anything else: if you don't understand transactions and concurrency, you have no business in IT; you're just too dangerous and you certainly won't be working on any of my projects.

And just as almost nobody except tool vendors works directly in or even thinks much about Java bytecode, the day may come when almost nobody except tool vendors works directly in or thinks much about BPEL

I agree with that sentence, but the implication is odd. I don't know anybody even today who works directly in BPEL or thinks about it. On my client projects, when I am building executable processes, I will use a graphical tool (in my case WebSphere Integration Developer or WebSphere Business Modeler, depending on what I'm doing). With one or two exceptions, I can't think of a time where I've even needed to look at the generated BPEL.

Robert Scoble is the guy who single-handedly transformed my opinion of Microsoft. I joined IBM in 2000 with a headful of weird beliefs about politics, economics, the IT industry and the world in general. In particular, I had a strong antipathy towards both Microsoft and Sun (I forget why I hated Sun but I'm sure there was a good reason...). As is common amongst recent university graduates, it didn't take me long to realise that most of the opinions I'd formed at college were hopelessly naive but I never really lost my distrust of Microsoft. I developed a grudging respect but no more.

Robert changed that... he helped show that, just like in any other company, there are real people working in Microsoft, doing real work. Raymond showed me how clever they are, but Scoble is the one who showed me they're not evil.

Saturday, June 10, 2006

I took a mini-swipe at Sun's boss yesterday for his compulsive obsession over power consumption. On the other side of the world, however, Scoble was having lunch with him and getting to know him. Interesting article. I still have no idea what Sun is for (question: if they didn't exist today, would anyone invent them?) but if anyone is going to make them relevant again, I suspect it might just be Jonathan.

He does, of course, have a point; the amount of heat pumped out by my humble ThinkPad does not go unnoticed when it's sitting on my lap on a warm day; I dread to think how much heat is being pumped out of all the data centres in the world.

I think his relentless harping on about this subject could be more evidence that our industry is finally maturing. Implicit in his comments is the hint that he's selling commodity boxes and differentiates himself on things like power consumption. Taken with Vinnie Mirchandani's ongoing campaign to get corporates to start looking at their phone bills, there's more than a small risk that everything's about to get very boring indeed :-)

Brian Peacock is a Hursley IBMer who has recently made the transition from the walled-garden of our internal blog community to the big, scary outside world. Having recently returned to work after a five month absence recovering from a Subarachnoid Haemorrhage, he has a rather unique perspective.

Tuesday, June 06, 2006

As Sandy Kemsley points out, this whole thing seems to have been driven by Gartner (and Oracle, who are touting their Fusion middleware). I agree that using the "SOA 2.0" moniker is silly and potentially dangerous; it freaks out the thoughtful client and gives the incorrect impression to everyone else that the SOA bandwagon is utterly out of control.

However, we should not lose the key insight that Oracle and Gartner have brought to the table: SOA is not just "CORBA 2.0"; it also embraces event-driven thinking. A lot of the marketing slides out there could easily lead you to believe that SOA is a new name for DCE. Not so... and Gartner/Oracle have done us all a service by getting some attention for this often-overlooked side of the equation.

However, IBM's SOA reference architecture has always made it clear that an event bus is critical to SOA (if I'm not mistaken, WebSphere ESB was even announced at the same time as IBM's big SOA launch last year). If I had only just realised the importance of events, the last thing I'd be doing would be drawing attention to myself by implying my "SOA 1.0" thoughts were missing 50% of the necessary functionality.

Vinnie Mirchandani has a bee in his bonnet about the cost of telecoms. I was originally sceptical about his claims that we were being colossally over-charged and argued that the benefits of proliferating wireless hotspots, "3G" connections and high-speed home broadband far outweighed their cost. In other words, I believed that there was a large consumer surplus when purchasing these services. Vinnie argued with me in the comments to my posting and helped me see his side of the argument.

He has now expanded on his argument and written an article on the topic for Real Finance where he points out how the cost of conference lines, calling cards, ad-hoc employee wifi access and all the rest can really mount up.