Saturday, May 31, 2003

GXA (a.k.a. WS-*) is creating a girthy protocol stack; that is, the circumference of the protocol stack seems to grow daily.

The girth of the base Internet protocol stack was pretty small: UDP, TCP, DNS, etc. The cohesion within each protocol was high; however, the coupling between the protocols was moderate to high. That is, there were a significant number of times when a given protocol called out a "unique binding" rather than calling out "a requirement for a concern".

In the world wide web, the girth was minimal: HTTP, URI, HTML, etc. In the world wide grid, the girth seems to be rather large: SOAP, UDDI, WSDL, WS-Inspection, WS-Transaction, WS-Security, WS-ReliableMessaging, etc.

Girthy protocols worry me only when they are mandatory. If I can choose the specific protocols that I want to use in the stack, rather than having to accommodate each layer in the stack, I have increased flexibility and decreased the problems associated with girthy protocol stacks.

Protocols become mandatory when the coupling between protocols is high. This occurs when one protocol SPECIFIES that you MUST use another specific protocol. This is what I call a "unique binding". It occurs all the time in the web services protocols. Rather than stating that a protocol requires the remedy of some concern (e.g., addressing), the protocol calls out a specification by name. In these cases, we end up creating "protocol frameworks".
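The difference is easy to see in code. A minimal sketch (the class names are hypothetical, and a string tag stands in for a wire format): one messaging layer names a specific addressing spec by name, while the other only demands that *some* implementation of the addressing concern be supplied.

```python
from typing import Protocol

# A "unique binding": the messaging layer calls out a specific
# addressing spec by name, so nothing else can ever satisfy it.
class TightlyBoundMessaging:
    def send(self, msg: str) -> str:
        return "[WS-Addressing]" + msg  # spec named, not a concern stated

# A "requirement for a concern": any remedy that satisfies the
# addressing interface will do.
class Addressing(Protocol):
    def address(self, msg: str) -> str: ...

class LooselyBoundMessaging:
    def __init__(self, addressing: Addressing):
        self.addressing = addressing  # whatever remedies the concern

    def send(self, msg: str) -> str:
        return self.addressing.address(msg)

class SimpleAddressing:
    def address(self, msg: str) -> str:
        return "[simple]" + msg
```

The loosely bound version composes with any addressing scheme; the tightly bound one drags its named dependency into every stack that adopts it.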

The creation of protocol frameworks that are bound and versioned via a profile (like the WS-I Basic Profile) may be required given the state of the art of protocol design. However, as we begin to adopt the WS-* stack, we must be aware of the coupling and versioning issues that will arise in the future.

Thursday, May 29, 2003

The Conversation

“We just converted the last of our client/server applications to thin-client!” exclaimed a friend of mine at a local Fortune 500 company. “Why?” I asked. “What do you mean, why? We changed our architecture to n-tier with application servers and browser interfaces.” To which I could only ask, “Why?”

I could see that I was irritating him, but I thought I’d let him hang for a minute. This was a technically adept, well-paid engineer. He commented, “We went to n-tier because we could reduce our deployment costs. All the applications are now delivered to the clients without requiring PC administrators to install them! We can also deliver applications to our business partners over the Internet.”

“Those are all great things. You must be proud. Did you give up anything to accomplish this feat?” I asked. “No, I don’t think so – what do you mean?” he asked. “Well, did you increase your server-side costs? Did you deliver the application with the same robust user interface that you had in the client/server world? If I remember correctly, the client/server version used to connect to a bar-code reader on the PC. Was that hard to do in HTML?”

Now he was aggravated. I had just pointed out three things that were worse in the new system, things he preferred not to talk about. Conversations similar to this one are popping up in virtually every Fortune 500 I.T. shop. Why did we compromise on so many items? The answer is simple: the immediate advantages of the browser outweighed any other solution that was presented at that point in time.

The Browser

The web browser was designed to enable a simple method for linking information across physical locations and rendering structured documents. These goals were met with overwhelming success. The browser was so successful that people began thinking of it as the sole method of sending and receiving information. Even traditional applications like email were redesigned for this new model. It was apparent that a software designer could kluge just about any piece of software to work in this paradigm. As time went on, the software community began to realize that stuffing every application into the same architectural model would not adequately serve the needs of the user or the engineer. The intentions behind the web browser were noble. No one could have predicted that an entire generation of developers would attempt to transition all of their programming needs to this new model. Had they, the browser would surely have had a different design.

Before the browser we had some pretty powerful features: robust user interfaces, access to local peripherals, client-side computing and storage, etc. Plenty of sophisticated features were traded for the promise of ubiquitous graphical rendering. Well, today we have ubiquitous rendering but none of the other features that developers consistently need at the client. We have tried to remedy this issue but have met with limited success. The Java applet failed to achieve significant success due to limited support in the Microsoft Internet Explorer browser, while the ActiveX control remained riddled with security issues.

Clean Sheet of Paper

Developers are now re-thinking the role of the browser. What would client-side computing look like if you had a clean sheet of paper? Should the browser be the primary mechanism for delivering fat client-side code? Should it always be a client and never a server? The answers to these vatic questions are leading many to believe that the client may be ready to undergo yet another major transformation. No one is suggesting that we do away with the browser, rather that we use it for what it was originally intended to do: rendering structured documents with hyperlinks.

The new goal is to balance a few simple concepts:
· I need continued access to my thin-client applications and web sites. Anything that I had in a browser I want to continue to have!
· Sometimes I work offline and I should be able to keep working when I’m not connected to the network.
· I need to communicate and collaborate with co-workers and business associates, often in real-time.
· I don’t want complicated installs nor does my I.T. department want to roll out new applications.
· Some personal information I’d like to keep on my computer rather than at a remote server.

Client Side Platforms

Five years ago I.T. shops were against adding any more to the client than was absolutely needed. Today, as CPU performance skyrockets and disk and memory prices plummet, there is less emphasis on hardware restrictions and a renewed interest in delivering more robust applications to the user. Software developers are also tired of cursing HTML, trying to make it do things that it was never meant to do.

Returning many of the basic features that we had before browsers is the goal of the next generation of client side platforms. The CSP is really not a new concept. The CSP is merely a combination of those technical features that software developers need when writing client-side systems, while stressing strong (yet loose) integration between those components.

Saturday, May 24, 2003

Patricia Seybold suggests:
"Beware of Business Process Management: Be Careful about Adopting Internally-Driven Business Processes; Instead, Design a Customer-Adaptive Enterprise Using a Services-Oriented Approach"

Friday, May 23, 2003

Jim Waldo is a smart man. Sit down and talk with him and you'll quickly realize that he spends a significant amount of time thinking about software and distributed computing. Jim was early into distributed systems with CORBA and more recently led the Jini effort at Sun. Those of us who know the Java specs know that Jini was one nice piece of software. We also know that it wasn't widely adopted. And Jim is still ticked off.

When I last talked to Jim, he was pissed off at the JXTA group for getting both the attention and the internal Sun research dollars. In "Waldo-World", everything can be done inside of Java! No need to worry about all that "other stuff" (RPG, COBOL, .NET, etc.). In my humble opinion, Jim is smoking dinosaur dope.

Communications (a superset of distributed computing) is based on ubiquitous protocols. Why do we have them? Uh, to speed up adoption and to make all of the "unique" things in the world (including Jini) actually talk to each other. Does it slow down innovation? Yes sir. Does it make inter-application, inter-department, inter-business, inter-nation computing possible? Yep.

Jim has a ton to offer the web services community. If he would quit ragging on standards, reverse engineer Jini and publish it in terms of open standards - I would back him 100%. Until then, good luck with Jini. FYI, my Jini book sits on my bookshelf right between my CORBA books and my DCE books.

Monday, May 19, 2003

"There really isn’t a big mystery as to why the Web services framework—the standards that define an XML-based distributed computing architecture—have gained a credible foothold so quickly. Competitive pressures, the need to reduce costs, and a variety of other business factors are driving enterprises to integrate business processes and IT. Enterprises are creating both internally and externally facing systems that tie employees, customers, partners, suppliers, contractors, and other constituents into their business processes, for example."

Although it isn't clear, it appears as though Microsoft will embed the web service protocol stack right into all of their many operating systems. Their release of the WSE as an add-on library, and its relationship to IIS, suggests that Microsoft feels that (like everything else) web services should be close to the kernel. Usually I disagree with their monolithic OS thinking, but this time I tend to agree.

If MS does this, it may force others to follow suit, namely Linux and other branded Unix flavors. This may cause a problem for IBM, BEA and other companies that want to sell you an application server (with web service enhancements). If MS drops the WS stack into the OS, this may cause IBM to do the same, and then the Linux flavors may have to get in the game as well. I'm no rocket scientist, but this doesn't sound like good news for BEA and the other web service platform vendors. They would have to claim that they had a superior implementation of the stack or rely on sales from the tooling (IDE, etc.) - which is a tough business. I also anticipate OEM relationships between groups like Red Hat and groups like Systinet, Cape Clear and The Mind Electric...

Meanwhile, the W3C ws-choreography working group can't seem to figure out what it is doing that has any value. They seem to have realized that just about everything they dream up has already been covered by BPEL or BPSS... yet they seem hell-bent on keeping a working group together. From what I can tell, the purpose of the working group is to determine the purpose of the working group. I can only assume that the people who participate in this group don't like real work and would rather participate in a completely irrelevant working group.

Personally, I'll go with BPEL. As the W3C participants debate what ws-choreography is, my company has finished our orchestration implementation of BPEL. Now we are moving on to finding snafus and areas for improvement that can be fed back into the spec...

If it sounds like I think the W3C is dicking around with specifications that have already been written - you are right. KILL THIS WORKING GROUP.

Friday, May 09, 2003

I had a great conversation yesterday with one of the IBM guys. He was very excited about aspect-oriented programming, while I was promoting the service-oriented model. As we got deeper into the conversation, we realized that the concept of aspects was very applicable in the service-oriented world.

In the SOA, we "apply" new functionality or "aspects" to a service via one of several mechanisms. For discussion purposes, let's use the "logging" example: an aspect that can be applied across a set of objects (or services) and logs all the calls.

There are several ways that we can add functionality. Perhaps the most straightforward in the service-oriented world is the pipe & filter. Here we simply pipe the output of our primary service into a new service (like logging) - and we've created a discrete, assembly-line approach to adding functionality. The downside is that each participant in the pipe chain must be explicitly called, and the programmer is exposed to a significant amount of raw detail that they may wish to avoid.
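A minimal pipe & filter sketch (the service and function names are hypothetical): each stage is a service, and the logging aspect is just one more filter wired into the chain.

```python
def parse_order(msg: dict) -> dict:
    # the "primary" service in the chain
    msg["parsed"] = True
    return msg

def log_filter(msg: dict) -> dict:
    # the cross-cutting aspect, expressed as just another filter
    print("LOG:", msg)
    return msg

def pipeline(*stages):
    # each participant is called explicitly, in order
    def run(msg):
        for stage in stages:
            msg = stage(msg)
        return msg
    return run

process = pipeline(parse_order, log_filter)
```

Adding an aspect means splicing another filter into the chain - exactly the explicit wiring the paragraph above complains about.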

A second method for performing this feat is to create a "tightly coupled web service" - here we just hard code the stuff right into the web service. So service 1 is hardcoded to call service 2 (logging). Perhaps not the best idea, but likely the most common means in the first go-round of creating web services.
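In code, the tight coupling is literal: service 1's body names service 2. A sketch with hypothetical names:

```python
def logging_service(msg: dict) -> None:
    # service 2: the logging aspect
    print("LOG:", msg)

def order_service(msg: dict) -> dict:
    # service 1 hard-codes its call to service 2; it cannot run,
    # or be tested, without the exact service it names
    logging_service(msg)
    return {"status": "processed", **msg}
```

Swapping the logging implementation now means editing every service that calls it, which is why this is the easiest method to write and the hardest to live with.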

A third method is to describe the functionality at the protocol layer. Take WS-ReliableMessaging: the information gets baked into the stream, and now both participants know that they should "apply" extra functionality to make it work. This is a great way to force the adoption of functionality with minimal impact on the programmer, but it is really targeted at scenarios where two or more participants must engage in a conversation and have a minimal agreement. Dropping too many "aspects" into the protocol will create fat and rigid protocols that are hard to debug.
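A rough sketch of the protocol-layer idea, using a dict as a stand-in for a SOAP-style envelope (the header names are made up): the endpoint refuses any message carrying a mandatory header it doesn't implement, which is how the protocol forces both parties to apply the aspect.

```python
HANDLED_ASPECTS = {"reliable-messaging"}  # aspects this endpoint implements

def process(envelope: dict) -> str:
    # honor the "must understand" contract: fail loudly rather than
    # silently ignore a mandatory aspect we don't support
    for header in envelope.get("headers", []):
        if header["must_understand"] and header["name"] not in HANDLED_ASPECTS:
            raise ValueError("cannot honor header: " + header["name"])
    return "delivered: " + envelope["body"]
```

The programmer writing the service body never sees the aspect at all; the faulting behavior is what keeps the two sides honest.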

A fourth method for adding functionality to a service is to create a "composite web service". Here we have one web service front-ending several services. A great example is using BPEL4WS. This allows you to call a single service, which turns around and calls a script of services for you. BPEL is designed to be loosely coupled, where new services can easily be added. The neat thing is that from the outside, a BPEL process looks like a web service - and it is. The BPEL method is a great way to create loosely coupled service composition, but it does it via standard flow-control mechanisms. Thus it takes on a loosely coupled version of the pipe & filter model.
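The composite idea in miniature (all names hypothetical, with a Python list standing in for the BPEL process definition): the facade looks like one service from the outside, while the "script" of partner services can be extended without touching any caller.

```python
def credit_service(order: dict) -> dict:
    return {**order, "credit": "approved"}

def shipping_service(order: dict) -> dict:
    return {**order, "shipped": True}

# the declarative "script": add or swap partner services here,
# without changing the facade or its callers
PROCESS = [credit_service, shipping_service]

def order_facade(order: dict) -> dict:
    # from the outside, this is just one more service
    for service in PROCESS:
        order = service(order)
    return order
```

Note the family resemblance to the pipe & filter sketch: the composition is still flow control, just moved behind a single front door.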

What I am looking for is the fifth way - and as far as I know, it doesn't exist (yet). I'm picturing a WSDL model that understands pre- and post-conditional aspects. Here the WSDL can have declarative aspects attached to it. The closest thing I've seen is "XL" (see http://xl.in.tum.de/publ/www2002.html). I'm not sure that this is the right way to do declarative, aspect-based constructs for a service-oriented world, but it sure as hell advances the thinking.
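Lacking that WSDL-level construct, one can at least mimic its shape in code. A speculative sketch (the decorator and service names are my own invention, not from any spec): pre- and post-condition aspects are declared on the operation rather than coded into its body.

```python
import functools

def aspects(pre=None, post=None):
    # attach declarative pre/post aspects to a service operation
    def wrap(service):
        @functools.wraps(service)
        def run(msg):
            if pre:
                pre(msg)        # pre-condition aspect
            result = service(msg)
            if post:
                post(result)    # post-condition aspect
            return result
        return run
    return wrap

calls = []  # records what the aspects observed

@aspects(pre=lambda m: calls.append(("pre", m)),
         post=lambda r: calls.append(("post", r)))
def quote_service(item: str) -> str:
    # the operation body knows nothing about its aspects
    return "quote for " + item
```

The declaration sits next to the operation's signature, which is roughly where a WSDL-level construct would put it - on the contract, not in the code.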

In an unexpected move, Sun Microsystems Inc. said it will join the Organization for the Advancement of Structured Information Standards' (OASIS') Web Services Business Process Execution Language technical committee.
A Sun spokesman told eWEEK that Sun will be joining the WSBPEL technical committee and will be in attendance at the first face-to-face meeting of the group, slated for May 16.

"Absolutely, Sun will be joining," the spokesman said. "We will be there at the first meeting on May 16. No rep named yet but we will have one by then."

The move indicates something of a turnabout for Sun, which is supporting an alternative specification to handle Web services orchestration, known as WS-Choreography. That standard, also supported by Oracle Corp. and others, is being developed under the World Wide Web Consortium (W3C).

Tuesday, May 06, 2003

From OASIS,
"OASIS is pleased to announce that the UDDI v2 specification has been approved as an OASIS Standard. The UDDI Spec TC is to be congratulated on the work they have done in developing this specification."

Sunday, May 04, 2003

It occurred to me that we techies have failed to create a name for the web service based Internet. You know, that thing that is service oriented, message driven, self-describing, standards-based, loosely coupled - yet structured? Roughly, it is the universal services network sitting on top of the Internet and the wireless Internet shooting XML specified messages around according to specifications driven by the non-profits and the computing giants. For a while people called it, Web II or Web V2 - but those have faded away. Today, I propose "World Wide Grid" or "WWG" as the name for the aforementioned network.

"Huh?", you say...still don't get it?? OK, one more attempt: it is the combination of the WS-I Basic Profile (SOAP, UDDI, WSDL, XML Schema) + the new WS-I Security Profile + an addressing & routing scheme (WS-Routing, WS-Addressing) + a reliable messaging scheme (WS-Reliability) + a transaction scheme (WS-Coordination and WS-Transaction). These protocols will be complemented by a series of optional protocols. The optional protocols will be utilized in a "Must-Understand" scenario between conversing parties. This will include everything from billing, provisioning and SLAs, to logging.

But why the WWG? Doesn't "grid" sound too much like a high-performance, distributed computing grid, like the efforts at Globus? Maybe - but before too long the "low-performance" and "high-performance" grids will have a significant amount of overlap and may likely become one grid. What about the OGSA efforts? That's easy - many people (like IBM marketing) tend to use "grid" to mean "run-time dynamic allocation of resources (CPU, disk, memory, network)". This is clearly part of a grid - the part that many companies will promote to customers as having a near-term advantage (cost savings through server consolidation) - but it only touches on the real opportunity of creating a secure messaging network where anyone in the world can participate. Call me old fashioned, but what many call grids are not much more than advanced versions of MPI - and this, in my opinion, is missing the boat.

Back to marketing - I imagine the T.V. commercial where Bill Gates, Tim Berners-Lee, Larry Ellison, Scott McNealy and Sam Palmisano stand up together and tell the world that they are backing, creating and selling the next web, the WWG. They go on to explain that it will connect every business, enable extended supply chains and create efficiencies that have only been dreamt about... I then imagine the NASDAQ once again being a popular place to make investments.

Web services are a network. And the network needs a name.

I know that many people read my blog - and many of you blog as well. So here is my challenge - support me in creating a new name for this next generation network -- OR -- make a better (or different) recommendation. Blog it and let's revisit.

Friday, May 02, 2003

1 May 2003 -- IBM Corp says its new WebSphere software will help companies get more value from their existing IT resources by allowing them to automatically manage multiple applications running on multiple clusters of servers as a single environment.
IBM says by virtualizing the resources available across a grid of WebSphere servers, the new technology -- called IBM Server Allocation for WebSphere Application Server -- allows customers to simultaneously increase application performance and resource utilization.

The technology, according to IBM, will enable companies to manage business applications running on different servers, and with differing priorities, usage patterns and computing profiles, as a single environment that can automatically adapt to sudden changes, much like the electrical grid. The company says it uses open Web services standards and will be enhanced as emerging Open Grid Services Architecture (OGSA) protocols mature.

More details: http://www-916.ibm.com/press/prnews.nsf/jan/66F0366806B9363785256D19004988C4