Discussions

Delivering the keynote at European DevWeek in London on Tuesday, .Net evangelist Don Box said HTTP presents a major challenge for Web services, for peer-to-peer applications and even for security. A replacement will eventually have to be found, he said, but it is not at all clear who will provide this replacement.

Well, you shouldn't have, because the point that HTTP is not appropriate as a transport protocol from an interoperability standpoint is perfectly legitimate. It's been made by more reliable sources (IBM, for instance), and is not just some kooky Microsoft theory.

As we move more toward utilizing some transport protocol as the "glue" that binds applications together, it's shockingly apparent that HTTP is not the answer. Do the phrases transactionality and guaranteed delivery shed any light on the situation? How about state persistence?

The great thing about HTTP (at least one of its many) is that it is very lightweight, extensible, and can be bastardized to fit any situation that you may dream of. The answer to every situation in which HTTP falls short? Just layer protocols on top of it and away you go! Never mind the fact that it was meant as a simple request/response, stateless document transport protocol.

HTTPR is a great idea, but unfortunately it misses the point. We've been getting by using nonstandard mechanisms (otherwise known as HACKS) to get around HTTP's shortcomings, and simply layering on or modifying features is not going to change that.

It's time for something new... and something that doesn't start with J (because we all know that if Microsoft doesn't buy into it, it won't fly... and I don't see them backing an acronym whose first letter stands for "Java").

> As we move more toward utilizing some transport protocol
> as the "glue" that binds applications together, it's
> shockingly apparent that HTTP is not the answer. Do the
> phrases transactionality and guaranteed delivery shed any
> light on the situation? How about state persistence?

LOL! When was it not 'apparent' that HTTP is a stateless protocol?
Running web services over HTTP was always a dumb idea. There are protocols for all those other situations you describe and they work perfectly well.
Companies have built their firewalls to connect/accept traffic on port 80 because of the general perception that HTTP is a low-risk protocol. What Microsoft wants is an HTTP++ which will do all those wonderful things that web services require, and yes, they want to run it on port 80 so that they can claim it is as secure as HTTP.
Who wants to bet that the next version of IIS will run this new 'standardised' web-service-enabled HTTP?

That something new has existed for a while. It's known as IIOP, and an alternative is DCOM. The problem is that these are not firewall friendly, and the SOAP guys thought that using HTTP would give them nirvana.

Web services, as cool as they sound, do not solve any particular business problem; right now they are a cool technology waiting for a business need.

Spot on. I find it funny that a big "pro" is 'If you use SOAP over HTTP you can sneak through the firewall as it is probably already open!'.

And how long will it take for the sysadmins to look for SOAPAction headers and stop them from going through? ;)
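The filtering joked about here is trivial to sketch. This is a toy illustration, not real firewall configuration; the SOAPAction header name is real, but the SoapFilter class and its method are invented for the example:

```java
import java.util.Map;

// Toy version of a firewall/proxy rule that refuses any HTTP request
// announcing itself as SOAP. Only the SOAPAction header name is real;
// the SoapFilter class itself is invented for illustration.
public class SoapFilter {

    // Returns true if the request may pass through on port 80.
    public static boolean allow(Map<String, String> headers) {
        for (String name : headers.keySet()) {
            if (name.equalsIgnoreCase("SOAPAction")) {
                return false; // looks like a web service call, not a page fetch
            }
        }
        return true;
    }
}
```

A plain browser request carries no SOAPAction header and sails through; a SOAP 1.1 request over HTTP is required to carry one, which is exactly what makes it easy to spot.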

I do see potential in "Web Services" *IF* vendors really do buy in and allow everyone to talk to each other. This is purely political and has nothing to do with technology. We will have to wait and see if this happens :)

> particular business problem, and it currently is a cool
> technology waiting for a business need.

Um, right. You couldn't be further from the truth, and apparently have not had to deal with the never-ending problem of enterprise interoperability. And by interoperability, I am talking about thousands of disparate applications utilizing dozens of technologies, many of which were not designed to talk to each other and never will be able to without a technology-neutral solution.

> You couldn't be further from the truth, and apparently have not had to deal with the never-ending problem of enterprise interoperability. And by interoperability, I am talking about thousands of disparate applications utilizing dozens of technologies, many of which were not designed to talk to each other and never will be able to without a technology-neutral solution.

If you need to deal with enterprise integration, consider using an EAI vendor. Are you telling me that with web services, you can flip a switch and every application integrates with every other enterprise application?

BTW, EAI solutions work well today; you just have to be willing to spend money.

Most enterprise integration vendors ARE switching to SOAP as their communications protocol, and are moving towards allowing logic to be exposed via WSDL. And anyhow, unless I need to map business processes and flow, create a centralized integration architecture, or need a really snazzy solution, why spend money when a web service can be developed and hosted for free, living among my other web sites?

> Are you telling me that with web services, you can flip a
> switch and every application integrates with every other
> enterprise application?

No, not at all. Merely that it is that much easier and actually possible to implement using open standards such as those utilized by web services. I never implied that it is as easy as "flipping a switch."

> Give me a concrete business need that is solved by web
> services.

I-N-T-E-G-R-A-T-I-O-N. Application A needs to share business logic with Application B. It's as simple as that.

John: "Most enterprise integration vendors ARE switching to SOAP as their communications protocol"

That may be their stated intent, but most are looking to primarily support technologies such as JCA (and even COM+) because it allows them to integrate directly with 2pc tx managers and plug efficiently into application servers, which are hosting the logic that is actually doing the integration.

BTW - Out of curiosity, where do you work John? What kind of work do you do with integration?

> primarily support technologies such as JCA (and even
> COM+) because it allows them to integrate directly with
> 2pc tx managers and plug efficiently into application
> servers, which are hosting the logic that is actually
> doing the integration.

I'm not sure that gluing into specific technologies is their primary concern as much as gluing into existing systems, period. Whatever technologies are required to do so, they will use them, especially if utilizing proprietary APIs will result in a more scalable and reliable system - which it will in most cases. But my guess is that web services will be most EAI vendors' catch-all solution for fairly lightweight integration situations (by lightweight, I mean from a transactional standpoint, when the overhead of the transport protocol is not over-burdening).

However, all of the largest players in the EAI industry have announced support - if not outright evangelism - for web services as a whole. WebMethods, for instance, even belongs to the Web Services Interoperability Organization, and was an active participant in the development of many standards upon which web services are based.

My guess is that a quick search will turn up many hits per EAI vendor announcing their support for web services.

...not sure where I was going with this. This discussion has gotten way off topic, and I believe that I am to blame.

> BTW - Out of curiousity, where do you work John? What
> kind of work do you do with integration?

I work at EMC, who suffers from the (most likely standard) problem of integration across the corporate enterprise. The concrete work that I have done with integration is fairly minimal - limited to one-offs here and there, rather than a centralized infrastructural approach - although I have experimented and evaluated many different infrastructure approaches.

Enterprise integration is actually a personal interest of mine as well, since, at least in a corporate enterprise, it is one of the most frequent problems and the single motivation for so many new glue-like applications that are perfect candidates for integration. There is so much disparate, replicated business logic, replicated data... you know, the usual.

Thanks for the detail, and I agree that HTTP-based / SOAP-based / XML-based / etc.-based access will be the catch-all integration approach. Have you had a chance to look at JCA yet? I wonder if it could be used on top of and/or under web services ... particularly once security context and 2pc are standardized for web services. It's already obvious how it will fit with JDBC and datasources of that style.

Web services decouple the protocol binding from the service interface definition.

So, WSDL can be used to define a service which can later be bound to one or more protocols. You'd make a binding for each protocol and then just start listeners for those protocols. But the interface itself doesn't change.
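The interface/binding split can be sketched in plain Java. Everything here (StockQuote, both binding classes, the canned price) is invented for illustration, and the "transports" just build strings instead of opening sockets; the point is that the service interface never changes while the bindings multiply:

```java
// The service interface: this is the part WSDL's portType describes.
interface StockQuote {
    double lastPrice(String symbol);
}

// One implementation of the service logic, shared by every binding.
class StockQuoteImpl implements StockQuote {
    public double lastPrice(String symbol) {
        return "IBM".equals(symbol) ? 120.5 : 0.0; // canned data for the sketch
    }
}

// Each "binding" adapts the same interface to a different transport.
// Here they fake their wire formats as strings.
class HttpBinding {
    private final StockQuote service;
    HttpBinding(StockQuote service) { this.service = service; }
    String handle(String symbol) {
        return "HTTP/1.0 200 OK\r\n\r\n<price>" + service.lastPrice(symbol) + "</price>";
    }
}

class QueueBinding {
    private final StockQuote service;
    QueueBinding(StockQuote service) { this.service = service; }
    String handle(String symbol) {
        return "MSG price=" + service.lastPrice(symbol);
    }
}

public class BindingDemo {
    public static void main(String[] args) {
        StockQuote svc = new StockQuoteImpl();
        // Same interface, same implementation, two different "wires".
        System.out.println(new HttpBinding(svc).handle("IBM"));
        System.out.println(new QueueBinding(svc).handle("IBM"));
    }
}
```

Adding a third transport means writing a third adapter and starting a third listener; StockQuote and StockQuoteImpl stay untouched, which is exactly the decoupling WSDL's binding section buys you.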

Microsoft's marketing department is probably the best in history. This is a primo example of FUD.

<quote source="ms marketing" type="Establish fact, water it down">
Among the problems with HTTP, said Box, is the fact that it is a Remote Procedure Call (RPC) protocol; something that one program (such as a browser) uses to request a service from another program located in another computer (the server) in a network without having to understand network details.
</quote>

Yes, this is very true. What else is there? Everything networked boils down to an RPC. Heck, even message queues are a form of RPC. So what?

Also: Note that Box says "browser" here... He's playing on the two minute timeout in browsers.... This is a function of browser software, not HTTP.

<quote source="ms marketing" type="Add equal part Fear and/or Uncertainty">
This works for small transactions asking for Web pages, but when Web services start running transactions that take some time to complete over the protocol, the model fails. "If it takes three minutes for a response, it is not really HTTP any more," Box said. The problem, said Box, is that the intermediaries -- that is, the companies that own the routers and cables between the client and server -- will not allow single transactions that take this long.
</quote>

Really, that depends on your design pattern, doesn't it? If you have a really long transaction, maybe you'd like to send the message and check back later for a response. Provide a queue. I'm not necessarily advocating a polling system, but that's a very simple example of one way this could work.

Maybe you have another HTTP listener that waits for a response.

Whatever RPC protocol you're using, you always have to address this issue.

Again, Box is playing off the two minute browser timeout. But if you need a method that takes five days to complete, you're going to have to create some kind of queue (or something) that the end user has to look at later (or be notified of an event, etc.)
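The queue-and-ticket idea described here can be sketched in a few lines. All the names are invented for illustration; a real system would expose submit and poll as two separate HTTP endpoints rather than method calls:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the "submit now, look again later" pattern.
// The client never holds an HTTP connection open for the full job:
// it gets a ticket back immediately and polls with it later.
public class TicketQueue {
    private final Map<Integer, String> results = new ConcurrentHashMap<Integer, String>();
    private int nextTicket = 0;

    // Client calls this (over HTTP, say); the response is just a ticket.
    public synchronized int submit(final String request) {
        final int ticket = nextTicket++;
        new Thread(new Runnable() {
            public void run() {
                // Stand-in for a long-running piece of work.
                results.put(ticket, "done: " + request);
            }
        }).start();
        return ticket;
    }

    // Client polls later with the ticket; null means "not finished yet".
    public String poll(int ticket) {
        return results.get(ticket);
    }
}
```

Each HTTP exchange stays short even when the job takes days, which is precisely why no intermediary ever sees a three-minute request.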

<quote source="ms marketing" type="Stab at it until it's dead" modifier="Repeat, if necessary">
"We have to do something to make it (HTTP) less important," said Box. "If we rely on HTTP we will melt the Internet. We at least have to raise the level of abstraction, so that we have an industry-wide way to do long-running requests -- I need a way to send a request to a server and not get the result for five days."
</quote>

See above. It's all in how the web service is designed and defined. Sure, some web services will be able to reply with an immediate response to the same request. We're actually doing this where I work, for requests that can last many minutes. We don't have a problem with it; it's proven to be very reliable and stable.

Longer requests simply require a different mode of thinking. As we're all well aware (or should be), Microsoft is normally behind the curve when it comes to thinking up new ideas; instead they come up with new ways to reinvent the wheel.

I have to say that I completely agree with you Chris. My take on this is that in Microsoft's grand vision, they've identified HTTP as a stumbling block. So in order to pave the way for some new patentable, albeit standards-based, protocol, they put drones like Don Box out there to sow the seeds of doubt.

I remember a few years back when Don Box was at the center of the CORBA vs DCOM wars at the time taking place on usenet. Now he's been upgraded to .Net evangelist rather than retired like DCOM.

The problem isn't the protocol; it's the fact that the internet is inherently unreliable and HTTP makes the best of a bad job. Any new protocol still has to traverse this unreliable territory.

So my basic point is that Box is acting as an advance scouting party, sowing the seeds of Microsoft's latest internet strategy, which will need to be played out to stop .Net collapsing under its own weight. And what better place to start... with an audience that's willing to swallow everything that Don Box can regurgitate. (Pun intended)

<quote>
I have to say that I completely agree with you Chris. My take on this is that in Microsoft's grand vision, they've identified HTTP as a stumbling block. So in order to pave the way for some new patentable, albeit standards-based, protocol, they put drones like Don Box out there to sow the seeds of doubt.
</quote>

Don is undoubtedly partial to Microsoft, but I would certainly not call him a drone. Besides the fact that he is extremely smart and a very funny speaker, he took a very active part in the design of SOAP 1.0 and he is the author of the very first implementation.

To me, Microsoft's message is loud and clear: HTTP is insufficient to transport the Web Service and another protocol is needed. This protocol will most likely be a combination of SOAP/WSDL.

Nobody disputes that and all the companies (including BEA) are working to make this interoperability a reality.

My instincts are very much in accord with yours Julian! Since M$ have clearly failed to 'embrace and extend' [read muscle in on and break] HTTP, why not try to invent something new. But first, start with the FUD...

> scouting party, sowing the seeds of Microsoft's latest
> internet strategy, which will need to be played out to
> stop .Net collapsing under is own weight.

You know, it seems that with Microsoft, they're damned if they do and they're damned if they don't. The main complaint against Microsoft has always been the proprietary nature of all of their products and solutions. Now, they come out with a new platform to combat Java (competition is a good thing), and are pushing open standards, rather than proprietary, for RPC. This is a good thing.

So, Microsoft thinks that HTTP should be changed... maybe they're right, perhaps they're wrong. At least they are not doing it themselves this time, using a proprietary extension built into IIS or one of the various other solutions that could have been utilized. If they're encouraged to move away from the Microsoft way of doing things, we'll all be better off.

<quote>
The problem isn't the protocol, its the fact that the internet is inherently unreliable and HTTP makes the best of a bad job. Any new protocol still has to traverse this unreliable territory.
</quote>

I think this crystallizes, for me at least, the key point in this discussion. How are the proposed replacement protocols dealing with this issue? Is it truly an issue that can be addressed by a protocol?

Another way of looking at this problem is the question of synchronous vs. asynchronous behaviour. Can you actually have guaranteed delivery AND a guaranteed response time? And would that response time be adequate to preserve the web experience we've grown accustomed to?

I like HTTP. I hope it is with us for a very long time. As protocols go its simple, widely implemented and works well over fast and slow connections. What I don't like is EJB.

EJB reminds me of one C developer's initial reaction when he realized the .h files were missing in Java. "How will I define interfaces?" he asked. Java doesn't have a good way to share interfaces, and that is translated into EJB's way of declaring interfaces. EJBs have all the overhead of a Bean just to describe what they do. It's messy, complicated and unnecessary.

Huh? It's called an "interface". C's .h files were a maintenance and namespace nightmare.

If your point was "EJB is hard", yes, I agree, but it's not because of Java's "bad interface sharing".

For web services WSDL is the "interface language", and the implementation can be anything, including EJB-based ones.

To address the point at hand... HTTP is fine. SOAP over HTTP isn't, but only because of security concerns. We've already got message queuing and JMS, but there's the angle... there's no JMS for MS MessageQueue! Thus, MS needs a "protocol" for queuing that supplants the API layer of JMS. Marketing 101: if you don't own the APIs, make a new one. Since messaging isn't really an MS strength (in terms of market share), they'd rather own a new protocol. Pretty clever, actually ;)

<quote>
To address the point at hand... HTTP is fine. SOAP over HTTP isn't, but only because of security concerns. We've already got message queuing and JMS, but there's the angle... there's no JMS for MS MessageQueue! Thus, MS needs a "protocol" for queuing that supplants the API layer of JMS. Marketing 101: if you don't own the APIs, make a new one. Since messaging isn't really an MS strength (in terms of market share), they'd rather own a new protocol. Pretty clever, actually ;)
</quote>

I don't get your point. Sure, MS has no JMS for MSMQ, but they have their own API for MSMQ (of course). Having JMS doesn't solve the problem SOAP is trying to solve. JMS is only an API; each app server has its own implementation of JMS, and I believe that at the level of the wire protocol there is no standard. When two app servers use different implementations of JMS with different wire protocols, what is the chance that they can communicate with each other? Also, SOAP over HTTP tries to overcome the firewall difficulties; how firewall-friendly are message queuing products? And lastly, with JMS you can send/receive any message; at this level there is no standard for how the message should be formatted and interpreted, whereas SOAP and web services provide a standard at this level. Sure, SOAP could run over a message queuing product (though probably not firewall-friendly), even using JMS, but in my opinion SOAP/web services and JMS are at different levels in the communication stack.
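The layering point can be illustrated with a toy sketch: the same SOAP envelope travels unchanged over either transport. Both transport classes here are invented loopbacks; a real HttpTransport would POST to a URL, and a real QueueTransport would hand the message to MSMQ, MQSeries, or a JMS provider and read a reply queue:

```java
// Sketch of the layering argument: SOAP defines the message, the
// transport merely carries it. All names are invented for illustration;
// both transports are loopbacks that fake a server turning a Request
// element into a Response element.
interface Transport {
    String send(String soapEnvelope); // returns the reply envelope
}

class HttpTransport implements Transport {
    public String send(String soapEnvelope) {
        // In real life: open a socket, POST the envelope, read the reply.
        return soapEnvelope.replace("Request", "Response");
    }
}

class QueueTransport implements Transport {
    public String send(String soapEnvelope) {
        // In real life: put the envelope on a queue, read a reply queue.
        return soapEnvelope.replace("Request", "Response");
    }
}

public class LayeringDemo {
    static final String ENVELOPE =
        "<Envelope><Body><GetPriceRequest symbol=\"IBM\"/></Body></Envelope>";

    public static void main(String[] args) {
        // Same envelope, two transports, same reply: SOAP sits above the wire.
        System.out.println(new HttpTransport().send(ENVELOPE));
        System.out.println(new QueueTransport().send(ENVELOPE));
    }
}
```

The envelope never knows which wire carried it, which is the sense in which SOAP and JMS occupy different levels of the stack rather than competing with each other.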

But I'll take something useful and easy over something useful and complex any time.

I think in a lot of situations SOAP web services are useful enough and they're simpler to create and use than EJBs.

No doubt EJBs cover functionality areas that web services don't, but my take is that such functionality areas are in the 20 part of the 80-20 business scenarios. The bad news is that you have to fight with the complexity of EJBs even if you are in the 80 part...

Edgar: "I think in a lot of situations SOAP web services are useful enough and they're simpler to create and use than EJBs."

That's a function of the tools, not the technology. EJBs are simpler technologically than web services, but the web services push grew out of a mature technology that had good tools (HTTP servers, XML utilities, HTML tools, etc.). EJBs are just as easy (or easier) with the right tools. When the platform vendors figure out that they need to make EJB easy, you'll start to see tools like the one Cedric showed recently at eWorld that 100% automated the deployment descriptor creation and editing. Expect to see similar pushes from IBM and the other leading platform vendors. As far as the Java tools companies, several already make it simple -- I think the Together stuff is one example and Neuvis is another.

HTML presents info to humans using HTTP.
All security is server-side, with SSL for secure info. But crucially, all RPC is secure inside your corp. Data feeds tend to be flat-file FTP/proprietary data feeds.

XML/SOAP presents data to machines using HTTP (say).
You can still use SSL or equivalent, but it places load on servers. And RPC clients are now both internal and external.

RISKY business, this. Opening up your internal APIs to external clients using the web... I bet most folks are writing internal SOAP/XML handlers which their managers then open up to clients/potential clients and competitors. The great thing about HTML/HTTP as opposed to current RPC technologies is that it focuses the mind on who is internal and who is external. And this is formalised in firewalls.

So, overall, I would side with the folks saying HTTP should NEVER be used for data transport RPC. Way too risky, even with roles and god knows what. You need a port-based solution so security chaps can get their brains around what you want. And to enforce this you NEED a different protocol; otherwise people just put their HTTP listeners on different ports, which misses the point.

As for Microsoft, I agree it is in FUD mode (they once said TCP/IP was proprietary, ha ha).

Jonathan
====================================================
Code generation for EJB/Struts/Beans/Value Objects - The LowRoad version 4.06 is out with local interfaces.
http://www.faraway.co.uk/tallsoft/lowroad/

I actually worked with the people who invented IIOP (and very clever people they were too). In fact, the acronym IIOP didn't originally stand for "Internet Inter-Orb Protocol"; it stood for "Internet Inter-Operability Protocol", and was bastardised to refer to Orbs later, as it was a CORBA interoperability requirement. Now, if we can get away from this notion that IIOP has anything particular to do with CORBA, we can use it as the protocol for web services. IIOP supports distributed secure transactions, defines a wire protocol to ensure compatibility between heterogeneous systems, and is actually quite efficient on the wire to boot. Why re-invent the wheel? HTTP can't cut it in this environment; IIOP can. Yes, we're going to have to get the network admins to open another port or two, SO WHAT!!!

That is a really good thing. The protocol enforces the type of access. Don't forget that a good security person starts every conversation by saying NO. Any open port means a security risk.

Imagine an internal RPC server which hands out the customer database - e.g. card numbers, credit histories etc. This system and all its RPC must never be exposed; it is purely internal. So we need an internal and an external mechanism, which is very, very simple - port number.

Which is why HTTP is good with respect to current RPC techs - it's on a different port. But bad with respect to the future - people are using port 80 for XML/SOAP, and so we lose the obvious decoupling of internal high-value data from external high-RISK connections.

You can't be serious. You'd never use only port numbers to segregate your internal "users" from your external ones. This is far too dangerous. Once an external user is on your LAN the potential is there for attack, even if only denial of service. There should always be a DMZ between the external connection and the internal one. Your web service is exposed by the Internet-facing server. This would make RPC calls via a separate network adapter and firewall to another server which would do your back-end stuff. Even if we were able to get SOAP and HTTP to work you'd want to do that, surely?

Yes, but you missed my point. You keep all the firewalls etc. But from an app design point of view, having different protocols on different ports for internal apps and external apps is a fine thing.

It means that security people can audit the protocol open to external apps, and there is no risk of those ports being used to hack internal web services - providing your developers keep to guidelines and don't create tunnels.

This decoupling of internal and external RPC (by whatever means) is really useful. And this is what HTTP has given us to date. But SOAP/XML is changing that.

People are now writing web services for internal use which are on the same port as external clients.

It used to be that internal RPC listened on other ports than external clients - HTTP was open/insecure; CORBA/IIOP/RMI/bespoke protocols were internal and the firewalls were not opened up.

I agree hackers, once in, can do anything. I agree DMZs/firewalls get hacked. That is a separate argument. I'm really arguing from a design perspective.

It's sad... HTTP is a victim of its own name. Box is saying that HTTP is a protocol for sending HTML pages, and I'm kind of astonished that a person who seems to be so intelligent is so ignorant in this regard, especially when he equates the timeout of browsers with a timeout in HTTP (when, in fact, there is no timeout in HTTP; if you want to wait 5 days for a response, just write an HTTP client that will wait that long).
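That claim is easy to demonstrate: the timeout lives in the client, not in the protocol. A minimal sketch - the slow server here is a stand-in, and a 1.5-second delay stands in for "five days":

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Demonstrates that the "timeout" is a client decision, not part of HTTP.
// A patient client simply sets no read timeout and waits as long as it likes.
public class PatientClient {

    public static String fetchSlowly() throws Exception {
        // Toy server: waits before answering, like a slow web service.
        final ServerSocket server = new ServerSocket(0); // any free port
        new Thread(new Runnable() {
            public void run() {
                try {
                    Socket s = server.accept();
                    Thread.sleep(1500); // pretend this took five days
                    OutputStream out = s.getOutputStream();
                    out.write("HTTP/1.0 200 OK\r\n\r\nslow result".getBytes());
                    out.close();
                } catch (Exception ignored) { }
            }
        }).start();

        Socket client = new Socket("localhost", server.getLocalPort());
        client.setSoTimeout(0); // 0 = block forever; HTTP doesn't mind
        client.getOutputStream().write("GET / HTTP/1.0\r\n\r\n".getBytes());

        BufferedReader in = new BufferedReader(
                new InputStreamReader(client.getInputStream()));
        String line, body = null;
        while ((line = in.readLine()) != null) body = line; // keep last line
        client.close();
        return body;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchSlowly()); // arrives after the delay, no timeout
    }
}
```

The two-minute limit Box leans on is a browser default, and here no browser is involved: the response arrives after the delay with nothing in HTTP objecting. (Intermediaries dropping idle connections is a separate, real issue, but it belongs to the network, not the protocol.)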

The fact of the matter is that HTTP was a very well-designed protocol. It's stateless, so you don't need to maintain a connection to a server to do useful work (and consume valuable server resources); it defines a header that is extensible to support the application that is using the protocol; the contents of the message can be any binary payload; it defines a standard mechanism to transmit data; you can pass credentials in a standard manner; it has standard attributes defined, but you can just make up your own if you like (ex: SOAPAction); etc. Any state required for a transaction can be managed on the client (ex: sessionId), which makes applications much more scalable in this model.

The whole argument about waiting 5 days for a response - well, that's a case for asynchronous messaging, and in that case I would recommend, in an internet environment, using HTTP to communicate with a queuing system (the request of the HTTP message will contain the command, and the response will contain an ID that the client can use to find out the actual result later). I'm also not a big fan of polling, so a solution could be constructed to support notification of events over HTTP.

The point is: people wanting for a replacement of HTTP seem to be looking for functionality in a protocol where it doesn't belong. Protocols define message structure and rules about transmission (ex: TCP/IP says that packets will be delivered in order and reliably, and defines the structure of a TCP packet). Any functionality beyond this and you are stepping out of bounds from the protocol to the application behavior. Don't look to the protocol layer to solve these problems, look to the application layer.

Utilizing the Web to share application processes is surely not the brightest of ideas. It does not provide the reliability and security required for enterprise-class transactions.
However, a combination of SOAP with JMS or JAXM can provide us the best of both worlds (messaging and Internet standards).

Yes, I believe it definitely does. The reasons, the way I see it, are:
1. HTTP - Hypertext Transfer Protocol - was designed, as the name says, to transfer text and not full-scale objects. What this means is that we are transferring text from server to client and vice versa and then converting it into objects in the OOP language of our choice.
2. When we use standard browsers, HTTP forces us to send both the data and the GUI code from server to client for every request. This means that if I just want to send the name of a person, I would also have to send across more data about how it would be displayed, i.e. the <html> to </html>. This in turn means the application runs slower than it might under other protocols.
3. The biggest problem with HTTP, from what little I can see, is that it is a request-response protocol, i.e. until there is a request the server cannot respond. What this means is that it is a pull protocol and not a push one. Most serious applications tend to need the client to do certain things when certain events occur on the server, which is only possible in very roundabout ways with HTTP, whereas this should be built into any protocol meant for building applications.

To end the long story: HTTP is for text transfer and not a platform to build applications, so we definitely need a protocol which was meant for building applications.

> To end the long story: HTTP is for text transfer and not a
> platform to build applications, so we definitely need a
> protocol which was meant for building applications.

I agree with this statement of yours, but...
All the reasons you describe are not failures of HTTP. They are failures of trying to build an RPC platform over HTTP, aka SOAP.
The real question is: does SOAP need to use a different underlying protocol other than HTTP? YES.
Should it supplant HTTP or should it be called a replacement for HTTP? NO.
Should HTTP.Net/HTTP++ run on port 80 in place of HTTP? NO WAY.

The bottom line is that trying to run an RPC protocol over HTTP was always a dumb idea. Don Box is trying to blame HTTP for what ails SOAP instead of gracefully accepting that he and the 'geniuses' behind SOAP screwed up royally.

"Contrary to popular belief, SOAP has not been ratified by the W3C. As of this writing it is just a submission, which, essentially, means they thought it was nifty enough to start a discussion about it. "

"SOAP's two biggest developers are Microsoft and IBM. Microsoft has incorporated SOAP into its latest OS and Visual Studio. As usual, though, MS has decided that they don't need to adhere to the whole spec. In their SOAP for Java SDK they modify the namespace (which breaks some other implementations), use a limited combination of 1.0 and 1.1 fault codes (and improperly document it), and include many serious bugs..."
