Saturday, February 22, 2003

I had a realization. I have been programming services in an object-oriented world - and it is clunky.

I've taken a look at IBM's WSIF, which acts as an abstraction layer across invocation schemes, but it too is clunky. I should be able to import a WSDL into my environment and immediately begin making calls to its operations as if they were regular method calls - the programmer never knows the difference.
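A rough sketch of the experience I'm after, using JDK dynamic proxies (java.lang.reflect.Proxy). StockQuote is a hypothetical interface a WSDL importer might generate, and the handler is where a real implementation would marshal the call over SOAP, JMS, or a local binding - everything here is invented for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Hypothetical interface that a WSDL importer might generate from a portType.
interface StockQuote {
    double getPrice(String symbol);
}

class ServiceImporter {
    @SuppressWarnings("unchecked")
    static <T> T importService(Class<T> iface) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) {
                // A real handler would dispatch over the wire (SOAP/JMS/local);
                // this stand-in just returns a canned value.
                return 42.0;
            }
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}
```

The caller just writes `StockQuote quote = ServiceImporter.importService(StockQuote.class); quote.getPrice("IBM");` - an ordinary method call, with no hint that a remote invocation may be happening underneath.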

I believe that we need a major language upgrade. Note - in order to make this happen we don't need a VM upgrade - just a language upgrade. There is no reason why I shouldn't be able to compile my favorite SOL (Service Oriented Language) to run in a JVM or CLR.

Corey Williams recently pointed me at Water, a language that promotes itself as being used for "Simplified Web Services and XML Programming". I haven't downloaded it yet, but will check it out. Sidenote - how did this make it to version 3.1 and have a book come out without me noticing? I must be getting old.

Thursday, February 20, 2003

Eric Newcomer, the CTO of Iona, recently commented on the intellectual property rights around web services. Although he takes no clear stand, it appears as though he prefers that ALL of the web service standards remain open and free:

"The path we take to the future may well depend upon the outcome of the current standoff around intellectual property rights in two key areas: orchestration and reliable messaging. Some of the vendors developing specifications for these areas are raising the question of possibly charging patent or royalty payments for the rights to implement the specifications. The leading standards bodies, and traditional industry practice around software standards to date, tend to favor royalty-free implementations. "

He goes on to comment that:
"It's ironic that software vendors propose standards in the name of benefiting their customers while still trying to maintain control over the specification adoption and evolution process in support of their own interests. "

Personally, I find the idea of blending open standards and royalty-based standards an interesting proposition. I find no irony in helping customers while making money at the same time. I do, however, find lunacy in giving everything away for free and commoditizing the products. The real question that Eric and friends must answer is whether we are trying to create a valuable market around web services. If so, the 80-20 rule (free-to-royalty) should be strongly considered.

Tuesday, February 18, 2003

Developers can build applications with WebSphere's new, integrated workflow engine and tools. They can easily and visually choreograph complex business processes to flexibly integrate J2EE and Web services applications and packaged applications. An automobile manufacturer, for example, can coordinate a complex manufacturing cycle -- order parts, alert the dealership and check consumer credit -- all within a workflow that spans the network. If credit approval fails, an exception can be requested from a loan officer; if denied, the workflow can automatically cancel and neatly accommodate the other steps, such as reassigning parts to another vehicle. IBM WebSphere's workflow engine is based on XML technology that IBM is now working to standardize, in advanced form, with industry partners. This includes BPEL4WS (Business Process Execution Language for Web Services), WS-TX (Web Services Transactions), WS-C (Web Services Coordination) -- all specifications that IBM is helping to develop and drive into standards bodies.
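For a sense of what such a workflow looks like on the wire, here is a rough BPEL4WS-style fragment for the automobile example. All partner, operation, and variable names are invented, and the syntax loosely follows the early BPEL4WS drafts, so treat this as a sketch rather than a valid process definition:

```xml
<process name="buildVehicle"
         xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/">
  <faultHandlers>
    <!-- if credit is denied, reassign the parts to another vehicle -->
    <catch faultName="creditDenied">
      <invoke partnerLink="partsSupplier" operation="reassignParts"
              inputVariable="partsOrder"/>
    </catch>
  </faultHandlers>
  <sequence>
    <invoke partnerLink="partsSupplier" operation="orderParts"
            inputVariable="partsOrder"/>
    <invoke partnerLink="dealership" operation="alertDealer"
            inputVariable="dealerAlert"/>
    <invoke partnerLink="creditBureau" operation="checkCredit"
            inputVariable="creditRequest" outputVariable="creditResult"/>
  </sequence>
</process>
```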

Sunday, February 09, 2003

This is good news - and bad news. It is good because people are beginning to view web services in a state beyond 'emerging'. Many have already built production web service applications - and they think they've pretty much got it figured out. The bad news is that most people don't have it figured out AND web services (GXA style) are still emerging.

Either way, the community needs good meeting grounds. Perhaps we shouldn't read into this at all - perhaps Tim and friends are of the opinion that there are already too many web service conferences and they don't want to get in that competitive space. Personally, I'd like to see an O'Reilly WS-Con.

p.s. Is it just me or does the conference look lame? Wireless, publishing and fat clients? The stuff that looks interesting is the nano/biocomputing and the session on geospatial annotation: http://conferences.oreillynet.com/cs/et2003/view/e_sess/3579 - kind of a bummer since they got some real good speakers.

Friday, February 07, 2003

About every two or three weeks I find myself looking at WSIF. I'm usually poking around to see if something is in there or not. My recent pokings were to review the granularity of the calls. I realized that WSIF could be used to abstract the developer from knowing if the call was local or remote, but it didn't help with "granularity modulation".

"Granularity Modulation" (a term I just made up 15 seconds ago) deals with the need to dynamically change the granularity of the call based on network latency and serialization requirements. The really cool thing about WSIF is that I can describe an invocation once and not care if it was a JMS call (to China), a SOAP call (to my local server farm) or a local invocation (to my running JVM). It was designed to give you invocation neutrality - one API for all your calling needs. However, it wasn't designed to give you location transparency (which are many of my calling needs). I know, I know - - there is no such thing as true location transparency... right... all the usual problems. Well, there is a path towards location transparency - and, IMHO, invocation neutrality and granularity modulation are the first two steps.

Many developers, as a rule of thumb, will design local calls with fine grained access and remote calls with coarse grained access. The thinking here is that with local calls much of the data that is being operated on stays in memory. With remote calls, many times the calls are not stateful and large chunks of information need to be passed across the network (latency & serialization). The key is that they actually designed the interface differently based on the location of the operation. This is what I am trying to fix - I want to design the interface the same way for both.

So, the first obvious question is how do you automatically convert between coarse and fine grained calls - clearly there is a mismatch. Yes, I know. But, in fact, in most cases, one may use several fine grained calls to accomplish what a single coarse grained call might accomplish. If I chose to, I could describe my interfaces from both a fine and a coarse grained perspective. The mode that was invoked would be based on the location (local or remote) and would be determined at runtime - not design time. Variations on this theme include using the WSDL compiler to spit out both versions for you or having a smarter JVM that is aware of the WSIF binding.
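A minimal sketch of what I mean by granularity modulation. The caller codes against one interface; a factory picks a fine-grained (local) or coarse-grained (remote-style) strategy at runtime. Every class name here is invented for illustration, and the "remote" path is a stand-in for what would really be a single coarse SOAP call:

```java
// One interface, described once - the caller never knows which granularity it got.
interface CustomerService {
    String fullName(int customerId);
}

// Fine-grained strategy: several cheap in-memory calls, fine for a local JVM.
class LocalCustomerService implements CustomerService {
    private String first(int id) { return "Ada"; }
    private String last(int id)  { return "Lovelace"; }
    public String fullName(int id) { return first(id) + " " + last(id); }
}

// Coarse-grained strategy: one round trip that returns everything at once
// (imagine a single SOAP call instead of the two fine-grained calls above).
class RemoteCustomerService implements CustomerService {
    private String fetchRecord(int id) { return "Ada Lovelace"; } // stand-in for the remote hop
    public String fullName(int id) { return fetchRecord(id); }
}

class ServiceFactory {
    // The locality test happens at runtime, not design time.
    static CustomerService lookup(boolean isLocal) {
        return isLocal ? new LocalCustomerService() : new RemoteCustomerService();
    }
}
```

Either way, `ServiceFactory.lookup(...).fullName(1)` answers the same question with the same result - the granularity decision has been pushed out of the interface design and into the runtime.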

This issue should also be observed under our 'coupling microscope'. By making the interface coarse only or fine only - did we enable loose coupling? Nope. Does the interface design hinder loose coupling? Yes. I want to be able to ask a service (or object) the same set of questions and get the right response. Design time decisions based on location will usually lead to tight coupling or, at the least, minimize the reusability of the asset.

Tuesday, February 04, 2003

Cysive, a consulting company turned product company, has released a whitepaper documenting that, indeed, native calls are faster than SOAP calls. Congratulations on this discovery! It only took 34 pages for them to unearth and articulate this finding.

After reading the report, I figured it would be better to not comment on it and just let the readers come to their own judgement.

The unfortunate news is that Cysive is only losing about $8 million a quarter and they have about $75 million in the bank. Thus, we have the wonderful opportunity of getting this kind of "insight" for at least another 7 quarters. Thank God for big IPOs.

Saturday, February 01, 2003

It has been stated that one of the primary advantages of web services is loose coupling. I believe that web services have the ability to provide loose coupling, but it is something that must be achieved; it isn't granted.

An example of this is in the web service invocation model. Most people are aware that web services fall into the SOA model, thus they have a consumer, a producer and a directory. The idea is that a consumer will look in a directory to find a producer, then use this information to call the producer. This can provide several benefits, such as:
- the ability to find competing service providers (trading partner style)
- the ability to find multiple, redundant services (same service, different boxes)
- the ability to abstract the caller from backward compatible revisions in the service (such as a change in physical location of the service provider)

It is this last example where most people start shouting that web services enable loose coupling. Yet, what I have found after reviewing sample web service code is that most people don't actually design their software to take advantage of this feature. I've created a few terms that should help articulate the point:

Optimistic Invocation
Most of the sample source that I have seen does "optimistic invocation". That is, the client makes the assumption that it already knows where the service provider is (surely nothing will ever change in the production environment...). Thus, you see code like this (excuse my pseudo):

Proxy MyProxy = new Proxy("http://www.HardCodedLocation.com/myservice/");
MyProxy.invoke(someOperation, blah blah blah);

Here, the programmer is an optimist. They assume to know where the service is located and call it. This is quick (no directory lookup) and simple (no directory lookup).

Pessimistic Invocation
Another method of calling a service is to assume that we know nothing:

Directory MyDirectory = new Directory();
Proxy MyProxy = MyDirectory.lookup("some service description...");
MyProxy.invoke(someOperation, blah blah blah);

Here, the programmer checks the directory before each call... making sure that the client has the latest information on service providers. This adds a few lines of code and increases the processing time by adding a directory lookup to each call.

Realistic Invocation
The Realistic Invocation model is designed to find a happy medium between 'optimistic' and 'pessimistic'. The idea is to start out optimistic and, if that doesn't work, fail over to pessimistic:

Proxy MyProxy = new Proxy("http://www.HardCodedLocation.com/myservice/");
try {
    // give the optimistic way a try...
    MyProxy.invoke(someOperation, blah blah blah);
}
catch (Exception e) {
    // if optimistic failed, switch to pessimistic
    Directory MyDirectory = new Directory();
    MyProxy = MyDirectory.lookup("some service description...");
    MyProxy.invoke(someOperation, blah blah blah);
}

Wow... 'Realistic Invocation' just turned into a whole bunch of code. Well, that should be easy enough to fix. You can either turn this model into a helper function or apply an aspect across the service calls. The real point is that optimistic invocation doesn't help much with loose coupling... pessimistic invocation usually isn't realistic from a performance standpoint, thus using a realistic model (with helper functions) gives you a simple, effective method of doing distributed calls while preserving the loosely coupled model.
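Here is what that helper function might look like, fleshed out just enough to run. Proxy and Directory are stand-ins for whatever stub and registry API you actually use (say, a JAX-RPC stub and a UDDI lookup) - the only real content is the optimistic-first, pessimistic-on-failure logic:

```java
// Stand-in for a generated service stub; "reachable" simulates whether the
// cached endpoint is still valid.
class Proxy {
    private final String endpoint;
    private final boolean reachable;
    Proxy(String endpoint, boolean reachable) {
        this.endpoint = endpoint;
        this.reachable = reachable;
    }
    String invoke(String operation) throws Exception {
        if (!reachable) throw new Exception("endpoint moved: " + endpoint);
        return operation + " @ " + endpoint;
    }
}

// Stand-in for a registry lookup (UDDI, LDAP, whatever you have).
class Directory {
    Proxy lookup(String description) {
        return new Proxy("http://current-location.example/myservice/", true);
    }
}

class RealisticInvoker {
    static String invoke(Proxy cached, Directory dir, String operation) throws Exception {
        try {
            return cached.invoke(operation);   // optimistic: use the cached location
        } catch (Exception e) {
            Proxy fresh = dir.lookup("some service description...");
            return fresh.invoke(operation);    // pessimistic: re-resolve and retry once
        }
    }
}
```

The client code shrinks back down to one line per call - `RealisticInvoker.invoke(myProxy, myDirectory, someOperation)` - and the directory lookup only costs you anything when the cached location has actually gone stale.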