Wednesday, January 31, 2007

There is a scary thing going on in IT at the moment, and it's around Enterprise Architecture. Now I've always maintained that there is no such thing as IT strategy, but I'm getting worried that Enterprise Architecture is starting to be turned into a goal in its own right. There are some good reasons to have an Enterprise Architecture function: it helps to provide consistency and governance. The challenge, however, is when Enterprise Architecture starts trying to lay down 5 or 10 year plans, and even worse when it starts trying to make technology-based decisions today based on those plans of tomorrow. As Sam Lowe put it to me the other day: the goal of Enterprise Architecture should be purely at the IT level, it shouldn't be at the technology level.

Now I have a slight problem with Sam's idea in that IT does actually mean "Information Technology", so clearly IT isn't abstracted from technology. What I think Sam was trying to say, however, is that EA isn't about solutions, it's about the general goals that IT should have within an organisation. I'll go along with that as an approach. There is a rule of thumb I've always used: plan for a year, have a goal for 2, a target for 3 and vague fag packet stuff beyond then.

The problem is that EA is beginning to be seen as the goal in itself, so Enterprise Architects lay out the "vision" of where the organisation should get to in 5 or even 10 years' time and create huge amounts of documentation to detail this end state. Simply put, that is a massive waste of time and effort and something that will only have a negative impact on the projects being done today.

Enterprise Architecture should never be the goal; it should aim to guide and steer and help to select the good from the bad. This should not be done against some grand IT or technology vision but against the business value that elements will deliver. The reality is that this will change significantly over a 5 year period, let alone more, so any decisions made today on the basis of progression towards the mythical enterprise architecture are an increased burden taken on to achieve something that won't happen.

Note here I'm talking about enterprise architecture. If you have a 5 year programme of work to deliver against a new business area or goal then that is fine, because that isn't an enterprise thing, it's a solution thing.

Enterprise Architecture isn't a goal, in the same way as IT strategy isn't a goal. The point is that the business changes, IT needs to adapt in line with the business, and someone needs to herd the cats to make things consistent. This is why the key skills in EA are not the creation of big documents that try to define some vague future but the ability to influence people and to help them reach the right choice for their current solution, one that also makes sure that the next thing down the line isn't going to be screwed.

Technology changes far too fast for anyone to make bold ten year plans. Back in 1997 some companies hadn't caught onto the concept of the internet; I even interviewed with an IT company back then who didn't have email that went outside the company! Anybody who attempted an Enterprise Architecture with a 10 year goal back then would almost certainly have looked like a muppet after only 3 years, and I doubt that the rate of IT development has slowed since then.

Enterprise Architecture is required, and it's required as an active day to day part of the operation of IT. It should continually evolve and iterate the "to be" state of the IT estate in line with the demands of the business, and it should never be allowed to start defining goals that sit way beyond the technology horizon of today. Enterprise Architecture is not R&D; it's meant to be a practical way of ensuring that solutions are consistent. Word documents rarely help make anything better, normally they make things worse, which means that EA has to be about active participation in and direction of projects and programmes, and about the people and communication skills that this requires. It also means that while EA is about the general cases and direction of IT, it must understand the technologies of today and those that will soon be arriving. Without a proper understanding of technology it is hard to see how EA can ensure that pieces are progressing as required.

Saturday, January 27, 2007

I've been involved in a number of technology selection pieces over the last 18 months or so, and one of the biggest challenges is rating the products against what they need to be used for rather than against each other. One of the biggest challenges has been convincing people to rate down products that have more facilities than are required.

So I've decided to have a crack at defining the products that I'd like to see and the selection criteria around them. I covered this a while ago in Splitting up the ESB, but since then I've realised that it's not quite as cut and dried. So the old picture was something like this... where the idea is that a BSB works across multiple SOA Apps and then you can have a "BSB of BSBs". What I've found since then (well, it's been 12 months) is that there is actually an even simpler BSB out there, and there is another layer where the choice between a pure BSB and an ISB style needs to be considered. This means that the model I tend to use now is like this.

The DSB is just on the diagram to note something to talk about. It stands for "Domain Service Bus", and this is where you have the decision between using an ISB style of product, mainly if you already have a legacy estate and there really isn't much point building a "pure" SOA layer above it; the only thing that is important here is the external services to the domain and getting those right. This is where the older EAI products with their new ESB badges and the newer "richer" (aka more complex) ESB products come into their own.

The big issue I’ve found however when making selection choices is that vendors take a “shotgun” approach to all responses and tend to lob in lots and lots of products with very little actual architectural clarity, and almost never do they try to sell anything other than the most feature-rich product. Now in the ISB space this is fine, as feature-rich means more able to do what you want with the heterogeneous landscape you are trying to manage. For the BSB, however, the focus is exactly the opposite, for two reasons:

1) It’s a homogeneous environment with XML as the document format
2) It’s the layer you want to keep the most simple to enable the most flexibility

This means that you want the fewest features possible, to stop people doing stupid things. I’d argue in fact that at the BSB level this should be one of the cheapest products that a company ever buys. The BSB does not need to support multiple document formats, it does not need to support multiple transport mechanisms, and in reality it needs to support only one packaging approach, namely WS-*. Now even if the RESTians scream and shout for inclusion this doesn’t add too much complexity, because it’s still XML over HTTP even if there are certain differences. So let’s keep REST in for now to be fully buzzword compliant. Now I know here that some people are going to scream “performance” and mention JMS v WS-RM and talk about email and those pieces. But my point is that the drive at the BSB should be towards mandate and simplicity. Taking 802.11 as an approach, it can support multiple protocols (b, g, a, n) but it’s all based around the same standards group. So let’s start with the simplest approach and then evolve it, rather than starting with multiple approaches.

So what facilities does the BSB actually need? During all of these evals I’ve done, I’ve come to realise that the facilities in the BSB don’t, in some ways, make an attractive proposition for a software vendor, because the product will be cheap. A BSB is in fact MQSeries for the 21st Century. The thing I most loved about MQSeries was its price for what it did. Clustering, failover, simple load balancing, a joy to configure, and it pretty much always worked as advertised (except in MQSeries 5.1 using JMS on AIX with IBM’s Java implementation, but we got a patch pretty quickly… MQSeries 5.2), and once live it was solid as a rock. This is what the BSB should be, just the basics.

And by monitoring I mean being able to see the flows, bottlenecks and responses across the bus; it’s operational monitoring rather than pure business monitoring, but this is information that can flow up to a proper business monitoring solution. Complex load-balancing, i.e. SLA based, is something I’d leave out for now but somewhere I’d expect the vendors to be going.

So this means no async to sync mapping, no "lightweight" process, no processes at all, in fact nothing that really requires much of a product to support it. It needs to be fast, it needs to be simple and it needs to be able to federate. In fact I'd say that a product like this would tend to live all over the place and just "happen" to connect up to other products. These BSBs would delegate to registries and policy engines; they would just provide the standard mechanism for those elements to be enforced.

Now in my simple mind this looks like a good "base" product (à la MQSeries) for connecting pieces together; it isn't smart, it's just simple. All it does is enable different pieces to connect and be kept apart using simple routing and mediation. Lob in some semantic matching and it could get even easier.
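To make the "dumb bus" point concrete, here is a minimal sketch in Python of what I mean: registration, simple routing, mediation and a bit of operational counting, and nothing else. All the class and service names here are invented for illustration, not taken from any real product.

```python
# A deliberately dumb "Business Service Bus": it routes by service name,
# applies simple mediations, counts traffic, and has no process logic at all.

class BusinessServiceBus:
    def __init__(self):
        self.endpoints = {}   # service name -> callable endpoint
        self.mediators = []   # functions applied to each message in turn
        self.stats = {}       # per-service call counts (operational monitoring)

    def register(self, service_name, endpoint):
        self.endpoints[service_name] = endpoint

    def add_mediator(self, mediator):
        # e.g. header enrichment or schema checks; still no process logic
        self.mediators.append(mediator)

    def send(self, service_name, message):
        for mediate in self.mediators:
            message = mediate(message)
        self.stats[service_name] = self.stats.get(service_name, 0) + 1
        return self.endpoints[service_name](message)

bus = BusinessServiceBus()
bus.register("invoice", lambda msg: {"status": "accepted", "id": msg["id"]})
bus.add_mediator(lambda msg: {**msg, "traced": True})
print(bus.send("invoice", {"id": 42}))
```

The point of the sketch is what is missing: no transformation engine, no orchestration, no async-to-sync mapping, just routing and mediation that a registry or policy engine could plug into.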

So that is my view of what a BSB needs to be. It's the MQSeries of SOA technology; it's for linking things up that know what they are talking about and know why they want to talk. I don't want the bells and whistles, I want it simple and I want it fast. This is the bit I want to use to do the cross domain pieces and the bit where I want to prevent people being "clever", so I can do the really smart stuff somewhere else.

I came across a post talking about the challenges of SOA and Financial Silos which references an item about too much SOA technology and makes the good point that we can't expect the business to fund strategic IT and should concentrate on delivering against what the business wants.

I'm with that, especially as I don't think there is IT strategy, but where I think this article misses the point is that it holds up its hands and says "okay then, let's just do the techy stuff at the bottom", which really is just accepting that SOA is just EAI, and like in Scrabble, if you lay down those two you end up with the same score.

The real problem, which is referenced but then sort of glossed over, is that IT needs to be looking well away from the technology and away from the projects and doing two things. Firstly, IT should be looking to get the investment to make changes by economising on those parts of the budget that it does control (i.e. support, development and infrastructure of existing systems), and secondly, IT should be looking to do the cheap things that will have the largest impact. This means organisational and governance changes and understanding what the business service architecture should be. The article (IMO) also makes the mistake of thinking that BPM is where the real end game is here, which it certainly isn't, as BPM is just another execution choice for IT and not a different way of actually doing IT.

If you think "well we can't make any changes but at least we can use SOA technologies on this project, that will help won't it?" then you are deluding yourself.

Friday, January 26, 2007

Reading a piece on El Reg about London Underground and Vista highlighted brilliantly something that has been batted around internally at work.

For Web 2.0 to really deliver, the hard work is in getting the existing systems ready, not in the flashy GUI. In the demo above, the London "frozen points on a railway line that is always >10m" Underground knocked up a very funky demo using 3D rendering and all those flashy Vista collaborative and display features. The hard work, they say, had already been done in exposing systems in a way which could actually be consumed like that.

The point here is that not only do SOA and Web 2.0 work well together, it's actually really hard to see how you can have enterprise grade Web 2.0 without changing the way you deliver your existing IT. Please note here that again I'm not talking about WS-* v REST, as those are just implementation technology decisions. What I'm talking about is creating an existing IT estate that can be easily consumed by "Mashup" or dynamic applications.

This shift requires a lot more than just lobbing Web Services onto systems, as doing that would rapidly bring most mainframes to a grinding halt. So what you've got to do is make the estate function in that way from the perspective of the consumers; this means that people might think they are accessing the mainframe directly when in fact there is a sophisticated cache in front of it. This means presenting services in a way that is sensible to be consumed, and yet again this isn't about technology, it's about taking a consumer's view of service description and thinking about the exposure of the estate both in terms of the services and the Virtual Services consumed externally. The key here is that a lot of future value for organisations is going to be based around that form of external collaboration with suppliers, customers and partners. Enabling that is a goal of both SOA (thinking), SOA (technology), Web 2.0 (technology) and Web 2.0 (Kool-aid drinking PPT jockeys). This won't be done in a green field environment; it will be done based on the existing applications, ERPs and mainframes that the organisation has today and which were never intended to work in that way.
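The "cache in front of the mainframe" idea can be sketched very simply. This is an illustrative Python sketch only; the class, the TTL and the stand-in backend function are all invented, and a real facade would obviously worry about invalidation, concurrency and staleness policies.

```python
import time

class MainframeFacade:
    """Presents a consumable service in front of a slow backend by
    answering from a cache where it can. Purely illustrative."""

    def __init__(self, backend, ttl_seconds=60):
        self.backend = backend     # the expensive call, e.g. a host bridge
        self.ttl = ttl_seconds
        self.cache = {}            # key -> (value, fetched_at)

    def get_customer(self, customer_id):
        entry = self.cache.get(customer_id)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]        # consumer never touches the mainframe
        value = self.backend(customer_id)
        self.cache[customer_id] = (value, time.time())
        return value

calls = []
def slow_mainframe_lookup(customer_id):
    calls.append(customer_id)      # stand-in for the real host call
    return {"id": customer_id, "name": "ACME Ltd"}

facade = MainframeFacade(slow_mainframe_lookup)
facade.get_customer(1)
facade.get_customer(1)             # second call is served from the cache
print(len(calls))                  # the backend was only hit once
```

The consumer's view is just "a customer service"; whether it hit the mainframe or the cache is invisible, which is exactly the point.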

So while there are some cracking Web 2.0 apps out there using existing technologies, it's going to be a massive change for existing IT enterprises to cope with this switch from "I know the user now and I know what they are going to do" applications to "I'll know the user when they call and I'll know what they want when they ask". This is the key behind the switch from user based systems to participation based systems and the key challenge for Web 2.0 delivery.

If organisations don't embrace these thought and management changes, and then (N.B. and then) have the technology to deliver the change, then all Web 2.0 will be is a fancy dress and some lipstick on a butt ugly pig.

SOA doesn't need Web 2.0, but it's looking like being the best interaction model for future systems.

Web 2.0 does need SOA if it's going to help enterprises deliver external value.

Monday, January 22, 2007

I've been looking at some SaaS solutions recently; some have been niche vertical elements, others have been more "traditional" ERP type solutions that are delivered over the web, like Salesforce.com and Oracle OnDemand, and it got me thinking about some conversations I had with a chap called Paul Luckett. Namely, if you have an infrastructure that can execute processes and you have some form of semantic mapping of your data to the format that the process requires, then why would you not just buy processes rather than applications?

The next logical step is then to not have the infrastructure but to "rent" processes based on how much you actually use them. Now for certain areas the argument can be made that there is too much competitive advantage for SaaS to really take off, but for lots of current ERP areas, such as Finance and HR, is there really so much value in customising things like employee onboarding, training approval, invoice creation or accounting? The answer is pretty much always "no" when it comes to putting in an ERP, so why, if you have middleware with the characteristics above, wouldn't you just "buy" the process from someone who has formalised and optimised it already?
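The semantic mapping part of this is the hinge of the argument, so here's a toy Python sketch of it: your local data is mapped into the schema the hosted process expects, and from then on you are buying the process, not the application. Every field name and the "rented" process itself are invented for the example.

```python
# Sketch of "rent a process": map local fields onto a standardised
# process schema, then invoke the hosted process. All names are made up.

FIELD_MAPPING = {
    # our local schema -> the standardised process schema
    "cust_ref":     "customerId",
    "amount_pence": "amountMinorUnits",
    "ccy":          "currency",
}

def map_to_standard(local_record, mapping=FIELD_MAPPING):
    return {standard: local_record[local] for local, standard in mapping.items()}

def rented_invoice_process(standard_record):
    # stands in for a hosted, pay-per-use "create invoice" process
    return {"invoiceFor": standard_record["customerId"],
            "total": standard_record["amountMinorUnits"],
            "currency": standard_record["currency"]}

local = {"cust_ref": "C-100", "amount_pence": 2500, "ccy": "GBP"}
invoice = rented_invoice_process(map_to_standard(local))
print(invoice)
```

In real life the mapping would be richer than a field rename table, but the shape of the argument is the same: once the mapping exists, the process behind it is a commodity you can source from anyone.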

If however it is one of these commodity things that really doesn't add value, would you really want to pay money to operate it yourself? Wouldn't you prefer to consider it as a utility, in the same way as people have outsourced payrolls and telephones? I'd argue for a strong "yes" on that, and I'd even go a stage further. If you have an extremely common process, say creating an invoice, why wouldn't you actually just 100% standardise that via something like OASIS (like an ebXML that is light enough to work) and then have companies compete to be the most reliable host for that process?

I'm talking here about the back-end processes where ERPs currently dominate the market. But with the rise of SaaS, the Semantic Web and process standards, is there really a future for building your own HR, CRM or Finance solution? Isn't Oracle's OnDemand exactly the way the world needs to go?

So what is an invoice worth? 1 cent a time? 2 cents? It's not going to be much. The commoditisation will rise up another level and more processes will be put into packages, but I think it's reasonable to expect that in 5 years' time you'll see the majority of new back-end processes being either downloaded and run or just rented. This gives whole new challenges around data security and retention to be solved, but it does sort of argue that the vertical solution direction that SAP and Oracle are taking is the right one, and that the new "cheap" ERPs on the block might be arriving after the bus has left.

Thursday, January 18, 2007

N.B. I have no inside information, it just makes sense.

Note for non-Brits: Sky+ is a Personal Video Recorder linked to the main satellite TV supplier in the UK, Sky, which is owned in part by News International... who own Fox and the Sun.

Thanks to the Mrs we've now got Sky+, and a wonderful thing it is as well. What is odd about it though is that there is 80Gb taken up by Sky for whatever they want, leaving me with 80Gb to record programmes. Given we haven't actually recorded a TV programme in about 3 years this isn't a big issue, although we have recorded 8 in the first week of Sky+, but it got me thinking about what Sky could do with that space.

Then I saw the Google tie-up and I suddenly thought: what would I do if I had over 40 hours of available space on a hard-drive? Sure, I'd use some of it to start pumping Video on Demand at people, but I'd also be looking at ways to increase the effectiveness of advertising spend.

What I'd look to do would be to charge advertisers a premium to have their adverts pre-loaded onto the boxes; these adverts would then be played to targeted users based on both their current viewing and what Sky/Google knows about them. So you watch football, golf and Oz Clarke, you get the adverts for a Spanish Wine/Football/Golf holiday. You are watching Big Brother and Sky knows you already have a mobile; well, today's programme isn't sponsored by Carphone Warehouse, it's sponsored by Direct Line insurance because your car policy is up for renewal. Watching lots of ITV? Then here comes the debt consolidation advert just for you.

One of the big challenges of TV advertising is that advertisements are broadcast like a sawn-off shotgun fired at long range at viewers, with a hope that someone will get hit. Having even 5 hours of 30 second adverts (600 adverts) would be a very powerful proposition for companies, because unlike the broadcast adverts they could apply on every channel; when you skip channels you would still see the same adverts because, after all, it's the box that controls what you watch. Sky have a big advantage here over someone like TiVo in that everyone knows that Sky is the company that makes the box, Sky is the company that controls the box, and you pay all the money to Sky to watch the TV. This gives them a degree of control that isn't quite so available right now in the US market.

So what would it take to build such a solution? Well, firstly you need the content on the disk; this is trivial, you just have a trickle feed on the 2nd tuner (Sky+ has two tuners) that builds up the adverts, which are after all pretty small, and then you need some smart user-to-advert matching technology, step in Google, which needs some information about the user, hello Sky.
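The matching side is conceptually simple too. Here's a toy Python version: score each advert already sitting on the disk against what is known about the viewer, and serve the best fit. The profiles, adverts and scoring rule are entirely made up; a real Google-style matcher would obviously be far more sophisticated.

```python
# Toy advert-to-viewer matching: pick the stored advert whose tags
# overlap most with the viewer's known interests.

def score_advert(advert, profile):
    # one point per interest the advert's tags share with the viewer
    return len(set(advert["tags"]) & set(profile["interests"]))

def pick_advert(stored_adverts, profile):
    return max(stored_adverts, key=lambda ad: score_advert(ad, profile))

stored_adverts = [
    {"name": "golf-holiday", "tags": ["golf", "travel", "wine"]},
    {"name": "car-insurance", "tags": ["motoring", "renewal-due"]},
]
viewer = {"interests": ["football", "golf", "wine"]}
print(pick_advert(stored_adverts, viewer)["name"])   # golf-holiday
```

The interesting part isn't the scoring, it's that the inventory is local: the box can run this on every ad break, on every channel, without touching the broadcast stream.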

I'm really struggling to see what would stop Sky rolling this out on all of their own channels (clearly ITV would be upset if they did it across all the channels) and what the major technical challenges would be.

Maybe this is the way that TV adverts survive: they stop being broadcast and start being "served" from the PVR, and you watch them because they are something that you want to see. It's a little bit scary, but it would seem to make a hell of a lot of financial sense. Hell, using a bit of fancy streaming they could really personalise it so the advert would say "Steve, get off your fat arse and get down to Globo Gym today".

Targeted adverts in a real-time or recorded broadcast stream using adverts served from a hard-disk. Bagsy on the patent.

Wednesday, January 17, 2007

One of the things that never ceases to amaze me in IT organisations is the way they treat support so differently from projects. Often these two pieces are contained in completely different parts of the organisation with different objectives and measures; unbelievably often, the architects and project managers from the "build" side don't have to know anything about the existing systems, or indeed the impact of project issues on that existing IT estate.

It also creates one of the most dangerous things in IT from a cost perspective, namely the focus on "Budget to Live" (BTL) rather than Total Cost of Ownership (TCO). What I mean by BTL is that a project has a budget to spend to get the project live, and that is what they will spend to go live, thus demonstrating how clever they are and why they should all get paid more money.

And time after time I see the following behaviours:

Testing cut short to "get it out the door"

Testing only positive scenarios

Taking coding short-cuts that "are a bit messy but its quicker"

Allowing "code first" design (and I don't mean TDD)

Excluding existing systems from question of "how to build this"

Deliberately hiding problems

Claiming "live", going to the pub and saying "it's a support problem now"

"It was fine when I left it"

This is often exacerbated by the "architects" within these companies who practically aim to be ignorant of the challenges of actually deploying and managing a solution. Thus the architects are there for the first few months and then leave it to the lowly designers and implementers, and when it goes wrong... well, it isn't their fault, the architecture was fine, and of course those massive maintenance issues for this "optimal" architecture are down to the lack of sophistication in support.

All of this means that projects store up a whole load of hidden costs that are only realised after the project goes live. In many organisations there is a "hand-over" between the two groups and no cross fertilisation of resources, so operations doesn't get to be involved in the project to point out the issues; they just have to deal with the problems. The problem is that the project will be seen as successful, whereas the support will be seen as expensive. The project will take the plaudits and operations will carry the can; this isn't an effective way of delivering a decent IT organisation.

So what I'd recommend is that architects be made to learn the existing estate and be measured on the total cost of ownership for their area; their bonus should come from delivering on the promises that they made at the start of the project. A project manager's bonus should be split 20% on go-live and 80% on 12 month TCO, to make sure they focus on the bigger picture. Most importantly of all, however, the accountants should recognise where cost should be assigned, namely against the project budget that created the mess. By making these changes an organisation can start to think in terms of TCO not BTL.
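To show how the 20/80 split changes the incentive, here's a rough Python illustration. The function, the weighting of the TCO portion and all the figures are invented; the only point is that cutting corners to hit go-live stops paying once most of the bonus rides on the 12 month cost promise.

```python
# Rough illustration of the 20/80 bonus split: the reward is weighted
# towards the 12-month cost-of-ownership promise, not the go-live date.

def pm_bonus(bonus_pot, went_live, promised_tco, actual_tco):
    go_live_part = 0.2 * bonus_pot if went_live else 0.0
    # pay the TCO portion in proportion to how close actuals are to promise
    tco_ratio = min(promised_tco / actual_tco, 1.0)
    tco_part = 0.8 * bonus_pot * tco_ratio
    return go_live_part + tco_part

# cut-corners project: live on time, but support costs double the promise
print(pm_bonus(10000, went_live=True, promised_tco=500000, actual_tco=1000000))

# honest project: live on time, TCO on promise, full pot paid
print(pm_bonus(10000, went_live=True, promised_tco=500000, actual_tco=500000))
```

Under a pure BTL measure both projects look the same on go-live day; under this split the one that stored up hidden support costs loses most of its bonus.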

If you aren't considering support in the same breath as new spend then the odds are you are doing BTL, and your IT is much more expensive in total than it should be.

Monday, January 15, 2007

Reading over on InfoQ I came across this interesting article about one of the most senior people, if not the most senior person, from IBM Software leaving to join Microsoft. Don Ferguson is pretty central to lots of things that IBM have done and is extremely important in the overall picture at IBM. From his old IBM blog:

Don chairs the SWG Architecture Board, which oversees the architecture and integration of WebSphere, DB2, Lotus, Tivoli and Rational products. Don was the original Chief Architect for the WebSphere family of products.

Which sort of scopes out the calibre that Microsoft have got themselves here. I've been concerned in the last few years that Microsoft just don't appear to be focused on the enterprise market. It was clearly going to take a pretty heavy hitter to move them forwards and, most importantly, give them the clarity they need to build a full enterprise-ready stack of products.

With Longhorn Server, and the revision of most of the enterprise stack that it requires, being scheduled (according to rumour) for next year, this gives Don only a short time to implement a complete overhaul of how they work. As a champion of SCA at IBM it will be interesting to see how ideas like that are either used, or replicated, inside of Microsoft. It will be interesting how much of the current Vista and .NET 3.0 pieces make it into the enterprise stack, or whether Don will decide that something better is required.

In the post I referenced above I said that Microsoft need to "Buy BEA, or get a vision for SOA that allows the enterprise to evolve at a different pace to the operating system." At least now they've hired someone with a good track record in proper enterprise software. Maybe there might be competition from Redmond after all.

Saturday, January 13, 2007

I've been reading The God Delusion recently with its rather nicely argued points about theism, deism, atheism and the like. Reading it made me think however about how a clearly designed environment, like IT, operates and while reading some of the chapters around monotheism it became clear.

IT loves to proclaim the "one" truth of IT and burn at the stake all those who do not agree. It is a profession of many fundamentalist preachers who preach "the one true way" of IT and proclaim that all others are damned. It is a profession of followers of those preachers who deride others for "not understanding the truth" rather than seeing that maybe there is another side that they don't know about. Now clearly I'm open to this charge around SOA; I pretty much think that viewing businesses as services is the only practical way I've seen so far that works, and right now I'm preaching that gospel.

And here comes the other bit: we may proclaim to be fixed to "the one true IT god" but we will switch as easy as pie to the next "god" that comes along. Sometimes this is in the guise of a powerful preacher (e.g. Microsoft) who convinces the followers that all that had been said before (e.g. VMs are slow and not the way to go) was true and that all that is said now (e.g. VMs are the way forward and the only way to work) is true, and sure enough the gullible followers develop with this religion and rarely look for solutions outside of their creed.

Consistent in all of these beliefs is that all problems are, at their root, an IT problem that needs an IT solution. Thus the battle will rage on: WS v REST, Java v .NET, Spring v J2EE, ESB v ESB, BPEL v BPMN, lazy loading v pre-fetching, OO v procedural, Netbeans v Eclipse, and for all of these there will be people who take, in effect, religious convictions as to the "right" of their side in being the one true way to solve the problem.

The history of IT fundamentalism has taught us that one answer has pretty much never been the sole right answer that will be true for all ages, and that just as the religion of EAI has pretty much died, so the religion of ERP is liable to falter in the coming years. The history of IT fundamentalism has also taught us of its immense dangers to projects and even businesses; the .com bust is a great example. WS and REST will be looked back on as a footnote in the debate and we will all have moved on to a new religion with new fundamentalist preachers proclaiming its truth.

Maybe however we should look to what has caused these IT debates and what is driving these changes. The answer, in reality, is that the big changes have not been made by the fundamentalist preachers of IT; they have been made by people who saw a problem that needed fixing and provided a solution that hadn't existed before. This means that the key to IT is not the solution but in fact the problem, and each generation has new problems that now require solving.

The Church-Turing Thesis sums up this dilemma: in fact all current problems are reducible to previously solved problems or are by definition unsolvable. This means that the only question is about the efficiency of the latest IT cult rather than it being genuinely different. This presents the IT preachers and their followers with a problem: how to push their new cult when in reality it is going to be short lived and is in fact just a different view on the same old problems. The answer is of course denial and fixation. We hear, most often from the followers rather than the preachers, that those who do not follow the true path are doomed to failure and (a common refrain) "do not get it". The failure to accept this new religion is placed straight at the doors of the non-converts.

The reality of IT however is that these religious fads are, like religions in the real world, just things that give us false comfort about the real underlying problems and challenges that we face. It is easier to have a religious rod of conviction that "REST" or "Agile" or "SOA" or "Web Services" or "PHP" or ".NET" etc etc is the best way than to actually sit back and think that maybe it might be better to use something different on this problem because it is different to the previous problem. The current approach of "the answer is X, now what is the problem" and then blaming the problem is just plain madness.

The only rational position in IT is to be a political agnostic: you should accept that different IT religions are required in different areas and you should be skeptical about new religions that come along. When you find something that works for a given set of problems you should believe in it, but you should do so in the knowledge that something else might come along. Fundamentalism should have no place in IT, nor should the "believers" seek to condemn the heretics who dare to question their preacher or "bible". The political agnostic is one who chooses to believe for a period of time because right now that belief works for them; you could also refer to them as the slutty polytheist if you are of the fundamentalist monotheist persuasion.

One size doesn't fit all, one solution will not be the answer to all problems, and there sure as hell isn't a Silver Bullet. It's fine to have certain beliefs about things that help, but these should be backed up by data, and you should accept that they may not be true in all circumstances. Beneath it all should be the recognition that the true challenge is not what the "one" IT solution is, but what your current problem is and what is best suited to solve it. The problem is that the preaching of reason never generates the same degree of noise as that generated by the truly fanatical. So the messages that are heard in volume are not those of reason, debate and discussion but rants about technologies and the continual, almost maniacal, obsession with "the developer".

IT fundamentalism is a reaction to the fact that most IT challenges are created outside of new IT projects, whether by customers, business, markets or the existing estate. It is a reaction by those who aren't assured in what they are doing and need some technology deity to cling to. IT fundamentalism dictates a "technology truth" above everything else, as it is just too scary to admit that IT not only doesn't have all the answers but...

Wednesday, January 10, 2007

One of the statements I've made on a semi-regular basis is that the concept of business process being the most important thing in a business is a myth. The other day this was brought home to me perfectly when I was looking at packaged solutions and particularly pieces around the "sales process".

Now I'm not going to say that there is no such thing within an organisation as a sales process, of course there is, but what I'm saying is that this isn't actually the important factor when looking at the success or failure of a sales organisation. Having a process that says "Stage 3 - Agreed issue with client" and "Stage 9 - Contract Signed" isn't the thing that actually drives successful sales.

No, it's sales people, sales targets, bonuses and, most importantly, goal-driven behaviour that are the important factors in sales. Understanding what the goals are is much more important than understanding what the process is; different people will have massively different approaches to selling, and while they will move from one step to another, and this will give you the ability to measure and audit their performance, it will not give you the ability to help focus or improve it.

It's an old truism that KPIs drive behaviour, and it's massively the case in sales. It is therefore, when modelling the architecture, more important to consider these KPIs and the underlying business goals than to worry about the audit process that will allow elements to be measured. This is a great example of where the business value and importance are assessed via the KPIs and goals, but are measured by the implementation of a process.

This is one of the critical things to remember when creating an SOA: understand that the mechanism for the implementation and measurement of a service can be different from the mechanism for defining the external drivers and business-critical value of the service. In some cases process might be both of these elements, in others it will be neither, but do not fall into the trap of thinking that because the measurement is by process, the process is actually the thing that drives the business behaviour and value.

In fact I'd say that for most Services the concept of "Goals" will be more useful than the concept of "Process".
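To make the distinction concrete, here is a minimal sketch of the idea above: the process stages are kept purely for measurement and audit, while the thing the service is assessed against is the goal. All the names here (SalesService, Deal, the stage strings, the target figure) are invented for illustration, not taken from any real package.

```python
class Deal:
    def __init__(self, value, stage):
        self.value = value   # contract value
        self.stage = stage   # process stage: used for measurement/audit only


class SalesService:
    def __init__(self, quarterly_target):
        self.quarterly_target = quarterly_target  # the goal that drives behaviour
        self.deals = []

    def record(self, deal):
        self.deals.append(deal)

    # Measurement happens via the process stages...
    def deals_at_stage(self, stage):
        return [d for d in self.deals if d.stage == stage]

    # ...but value is assessed against the goal.
    def progress_to_goal(self):
        signed = sum(d.value for d in self.deals if d.stage == "contract signed")
        return signed / self.quarterly_target


sales = SalesService(quarterly_target=1_000_000)
sales.record(Deal(400_000, "contract signed"))
sales.record(Deal(300_000, "agreed issue with client"))
print(sales.progress_to_goal())  # 0.4 -- performance judged by the goal, not the stage
```

The stages still exist, and still give you the audit trail, but nothing in the assessment of success depends on them.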

Tuesday, January 09, 2007

Reading a post about Thinking about REST I suddenly realised one of the things that I don't like about REST's view of the world: it's the concept that resources can always "do" things on their own. This is one of the big challenges that people tend to come across when doing object modelling, and I really hope that REST isn't (in the traditional way of IT) revisiting the errors of previous generations.

The line that brought it home to me was

5. Services are seldom resources. In a REST design, a "stock quote service" is not very interesting. In a REST design you would instead have a "stock" resources and a service would just be an index of stock resources.

Now the point here is the idea that the "stock" resources are able, in themselves, to perform the required actions of their consumers.

The first stage is to take a step back and explain what I mean by the object purist mistake.

The scenario is this: you are modelling a payroll system and you "know" that objects contain both the behaviour and the data for everything in the enterprise, that everything which isn't an object is just about looking up objects, and that all functionality and business logic must be implemented in an object. This leads to an "Employee" object which contains the method "pay", enabling that employee to be paid; the payroll process therefore just loops through all the employees hitting the "pay" method and all is fine.

The problem is that this isn't how a payroll system works from a business perspective, and it certainly isn't how employees work in the real world (imagine if you could pay yourself, whenever you wanted, and were in control of the amount to be paid). The point here is that certain pieces of business logic work at a higher level of abstraction than objects, and you need something else to co-ordinate those elements and apply the business rules that are specific to the business rather than to the employee.
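As a rough sketch of the corrected model: the Employee holds the data relevant to the employee, while the payroll rules live in a coordinating service rather than in an Employee.pay() method. The names (PayrollService, the flat tax figure) are made up purely to illustrate the shape.

```python
class Employee:
    def __init__(self, name, salary, bank_account):
        self.name = name
        self.salary = salary
        self.bank_account = bank_account
    # Deliberately no pay() method: an employee cannot pay themselves.


class PayrollService:
    """Co-ordinates the payroll run and owns the business rules."""

    def __init__(self, ledger):
        self.ledger = ledger  # payments get recorded here

    def run_payroll(self, employees):
        for emp in employees:
            # Simplistic monthly-pay-with-flat-tax rule: the point is that
            # this rule belongs to payroll, not to the employee.
            net = emp.salary / 12 * 0.8
            self.ledger.append((emp.bank_account, round(net, 2)))


ledger = []
PayrollService(ledger).run_payroll([Employee("Jo", 36_000, "12-34-56")])
print(ledger)  # [('12-34-56', 2400.0)]
```

The higher-level abstraction (the payroll run) co-ordinates the objects; the objects stay responsible only for what they genuinely own.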

The REST example above is making the same mistake, but with an even more obvious example to prove it. Think about a piece of "stock" sitting in a warehouse: even if we say it's "smart stock" with a sensor tag and an RFID tag on it, there really isn't much that a box can actually do. You can ask it questions like "what are you?", but it's not actually possible for a piece of stock to order itself; it can't pick itself up, it can't write a shipment note and it can't drive a van.

Asking a piece of stock to order itself doesn't make sense in the real world and modelling something in a "pure" way in the virtual world that doesn't represent how businesses actually work is not a great idea.

Now what could work is exactly what worked with objects: namely, a service providing discovery, co-ordination and higher-level business logic capabilities, with the resources providing the information that is relevant to the specific entity. Limiting services to just discovery, and trying to shoe-horn logic into a resource where it doesn't belong, is liable to lead to a system that is unable to match the changes of the real world, precisely because it doesn't model how the real world works.
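A minimal sketch of that split, with all names (StockItem, OrderService, the warehouse index) invented for the example: the resource answers "what are you?" questions, while the service performs the co-ordination that a box of stock cannot.

```python
class StockItem:
    """A resource: holds state and answers questions, does nothing on its own."""

    def __init__(self, sku, location):
        self.sku = sku
        self.location = location


class OrderService:
    """Higher-level business logic: finding stock and raising shipments."""

    def __init__(self, warehouse):
        self.warehouse = warehouse  # index of StockItem resources

    def order(self, sku, destination):
        item = next((s for s in self.warehouse if s.sku == sku), None)
        if item is None:
            return None
        # The service, not the piece of stock, writes the shipment note.
        return {"sku": item.sku, "from": item.location, "to": destination}


warehouse = [StockItem("BOX-1", "aisle 4")]
note = OrderService(warehouse).order("BOX-1", "London")
print(note)  # {'sku': 'BOX-1', 'from': 'aisle 4', 'to': 'London'}
```

The resource stays dumb, as it is in the real world, and the ordering logic sits where it can change with the business.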

"Pure" models don't work, its why lots of economics is bunk as it assumes that people don't exist and then the theory holds, IT must not fall into the same comfortable trap of beholding a system as conceptually elegant when logically and physically its a bust.

Thursday, January 04, 2007

With this in mind I thought I'd revisit something from the SOA Book and the OASIS paper around virtual services. The concept of virtual services is pretty simple:

Virtual Services should be used where a collection of internal services is combined to provide an external view for a customer, thus creating a "virtual service" that is one which provides no direct business function but which offers a facade over those services.[...] Virtual Services therefore provide a way to indicate where business logic can be co-ordinated and potentially simplified.

As people are beginning to look more at Ajax and Mashup applications on top of SOA infrastructures I thought it was worth going over some of the guidelines and rules for what virtual services are and how they should be used.

The first point is that a virtual service is not a trivial thing; they can be immensely complicated beasts in their own right, for instance a voice-based processing interface that can do some fairly complex aggregation of information. This is, however, different from actually handling the processing of the information itself. So a virtual service should be pulling in information from various sources, creating the most effective interface, whether via REST, WS, Ajax, Flex or something else, and then delivering that to its consumers. So, some basic rules for virtual services:

No business process should be managed in a virtual service

No transactions should be managed in a virtual service

No comparison-based validation of content should be done, only validation against data format (schema)

Virtual services should be focused on the consumer's view of the service

Virtual services should be subject to "light touch" management

Virtual services should be focused on the presentation of information and not its processing

Always enable mediation to be done between the virtual service and the services it consumes

Think of these mashup applications as virtual services which contain the page flow and the information aggregation and representation, but which forward the information for processing to the "back-end" services. In effect the mashup is the bloke on the till at Burger King and the services are the people behind the bin flipping burgers.
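The rules above can be sketched in miniature: a virtual service that aggregates two back-end services, validates only data format, and forwards all real processing to them. The service names and the data shapes here are entirely made up for illustration.

```python
def account_service(customer_id):
    # Back-end service owning account logic (stubbed out for the sketch).
    return {"customer": customer_id, "balance": 250}


def orders_service(customer_id):
    # Back-end service owning order processing (stubbed out for the sketch).
    return {"customer": customer_id, "open_orders": 2}


def customer_dashboard(customer_id):
    """Virtual service: aggregation and presentation only, no business logic."""
    # Schema-style validation only -- no comparison against business data.
    if not isinstance(customer_id, str) or not customer_id:
        raise ValueError("customer_id must be a non-empty string")
    # Pull information from the real services and shape it for the consumer.
    account = account_service(customer_id)
    orders = orders_service(customer_id)
    return {
        "customer": customer_id,
        "balance": account["balance"],
        "open_orders": orders["open_orders"],
    }


print(customer_dashboard("C42"))
# {'customer': 'C42', 'balance': 250, 'open_orders': 2}
```

Note what the facade does not do: no transactions, no process management, no judgement about whether the balance or order count is "right" — that all stays behind the counter with the back-end services.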

Tuesday, January 02, 2007

One of the things that is often overlooked when people consider quality of service is the impact of the consumer on that service. Sometimes, rarely, people consider the concept of "high value" customers and the idea that these should have some form of "fast track".

There is, however, another piece that impacts both the cost and the quality of service, and that is the ability of the consumer to subvert the "standard process". Returning from our Christmas break we had one of those annoying "bugger, it's broken" moments on something we pay a monthly subscription for. Basically the hardware is shagged, but because it's five years old we now "own" it, and the buggers are trying to charge us about 70 quid (500 dollars by the start of 2008) to "fix" something which probably doesn't even cost 30 quid. I'm on the phone for about 30 minutes pointing out how this isn't fair.

Then in steps the Mrs with an "oh god, just give it to me", and twenty minutes later we've got a contract upgrade and better equipment, and it's costing us 50 quid to get something we'd thought about spending 100 quid on, with everything else sorted as well.

What my wife knew was that playing poker with these folks involves the "bluff" of cancelling the contract; this means that she gets a much better quality of service than I do, as I tend to follow the rules of "what is said to be available".

The point here is that while a system might have a perceived process, indeed a process which the server side wishes to enforce, a rigid enforcement of that process can lead to lower customer satisfaction and potentially reduced value being handled by the service.

What this means is that it is actually not up to the server to purely dictate a straight process and response to the consumer; it is up to the consumer and producer to negotiate the right result for both parties. A QoS policy that assumes the producer is the final arbiter of quality is liable to produce a very unfriendly service, and one which doesn't interact with people in the manner which they wish.
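The shape of that negotiation can be sketched very roughly: the producer publishes its standard options, but an out-of-band counter from the consumer (the "cancel" bluff) can produce a result that was never on the menu. Every name and figure here is invented to mirror the anecdote, not drawn from any real system.

```python
# The producer's published "standard process": option -> price in quid.
STANDARD_OPTIONS = {"repair": 70, "replace": 100, "do_nothing": 0}


def negotiate(consumer_offer):
    """Producer logic that treats a credible cancellation threat as a real input,
    rather than assuming the producer is the final arbiter of the outcome."""
    if consumer_offer in STANDARD_OPTIONS:
        return consumer_offer, STANDARD_OPTIONS[consumer_offer]
    if consumer_offer == "threaten_to_cancel":
        # Retaining the customer beats losing the contract entirely.
        return "contract_upgrade", 50
    # Unrecognised offers fall back to the cheapest standard outcome.
    return "do_nothing", 0


print(negotiate("repair"))              # ('repair', 70)
print(negotiate("threaten_to_cancel"))  # ('contract_upgrade', 50)
```

The point of the sketch is only that the fourth option exists at all: a service whose outcomes are fixed by the producer alone cannot produce the contract-upgrade result.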

It's fine for the service to give you three options, but if my wife can't find the fourth then we'll be going somewhere else.