Years ago it was Carl and Gary's that was the central hub for the VB community. The place we all started browsing and then jumped off to other locations. There really hasn't been an equivalent hub (or portal) for a very long time.

Robert has been working (along with Duncan) on this for quite a while now, soliciting input from a lot of people in the VB community - including authors, speakers and others. The site has been slowly evolving, and now is really starting to show some great promise as a central hub for the VB community.

OK, now I feel better. Perhaps I jumped the gun with my previous post.

Rich Turner gave an awesome presentation – totally on the mark from start to end.

He was very, very clear that the prescriptive guidance is to use asmx (web services) to cross service boundaries and to use Enterprise Services (COM+), MSMQ, Remoting or asmx inside a service boundary.

Note that inside a service might be multiple tiers. Multiple physical tiers. You might cross network boundaries (though that should be minimized), but that’s OK. This is all inside your service, within your control. Since it is inside your control, you should choose the appropriate technology based on all criteria (such as performance, transactional support, security, infrastructure support, deployment and so forth).

This is the best and most clear guidance I’ve heard from Microsoft yet. Very nice!

I’m at a Microsoft training event, being briefed on various technologies by people on the product teams – including content on Indigo.

The unit manager gave an overview, and someone asked about the recommended architecture guidance around today’s Remoting technology. He reiterated that the recommendation is to only use it within a process. This, after he’d just finished pointing out that there are scenarios today that are only solved by remoting.

Say what?

Then several other Indigo team members covered various features of Indigo and how they map to today’s technology and how we may get from today to Indigo. Numerous times it came up that Indigo incorporates much of the Remoting model (because it is good), and that most code using Remoting today will transparently migrate to Indigo when it gets here.

So what now?

First, the prescriptive guidance is nuts. They are saying conflicting things and just feeding confusion. Remoting is sometimes the only answer, but don't use it?

I'm sorry, I have to build real apps between now and whenever Indigo shows up. If Remoting is the answer, then it is the answer. End of story.

Second, it turns out that you are fine with Remoting as long as you don’t create custom sinks or formatters. Avoid those, and your code will move to Indigo just as easily as any asmx code you write today (which is to say, with minimal code changes).

And of course you should avoid the TCP channel and custom hosts – use IIS, the HttpChannel and the BinaryFormatter and life is good.
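For reference, here is a minimal sketch of what that looks like in a web.config for an IIS-hosted Remoting endpoint. The type, assembly and URI names are hypothetical; under IIS the objectUri must end in .rem or .soap.

```xml
<configuration>
  <system.runtime.remoting>
    <application>
      <service>
        <!-- Hypothetical type and assembly names -->
        <wellknown mode="SingleCall"
                   type="MyApp.CustomerService, MyApp.Server"
                   objectUri="CustomerService.rem" />
      </service>
      <channels>
        <channel ref="http">
          <serverProviders>
            <!-- Use the BinaryFormatter over HTTP instead of the default SOAP formatter -->
            <formatter ref="binary" typeFilterLevel="Full" />
          </serverProviders>
        </channel>
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>
```

IIS provides the host process and the HTTP endpoint, so no custom host code is needed at all.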

Finally, (as I’ve discussed before), Remoting is for communication between tiers of your own application. If you are communicating across trust boundaries (between separate applications) then you should use web services – or better yet use WSE 2.0.

Conversely, if you are using web services or WSE 2.0, then you have inserted a trust boundary and you shouldn’t be pretending that you are communicating between tiers – you are now communicating between separate applications.

Though not strictly a computer thing, I have a classic tale of customer service. And hey, customer service is an issue for everyone, all the time. As computer professionals we provide a service to our customers, whether internal or external. And providing good customer service means we get better raises and/or get to keep our job.

This tale of customer service is actually two interlinked stories. They illustrate that customer service is all about how you treat the customer. It is all about interpersonal relationships and interaction.

Many years ago I changed the oil in my car myself. My wife and I couldn’t afford to pay to have it changed, and it wasn’t hard to do on cars built in the 70’s and early 80’s. But over time it became harder and harder to dispose of the used oil, and my income improved as my career moved along.

So eventually we started going to Valvoline Rapid Oil Change. They were quick, efficient and inexpensive. More expensive than me doing it, but by this point in life it was worth the extra money. After using their services for quite some time, they offered to change the PCV valve in our car. It was a $6 part. Not overly hard to change, and their price wasn’t much higher than the price of the part itself. In other words, it seemed worth it, so we said sure, go ahead.

In those years Minnesota required annual emissions testing for all cars in the Twin Cities area. Our test was scheduled for perhaps a month after the PCV valve had been replaced. And the car failed. Of course the state test doesn’t say why the car failed, just that it failed.

So off we went to the Chevy dealer. $60 later it turns out that the emissions failure was due to a faulty PCV valve.

Now I knew darned well that the valve was new. After all, Rapid Oil Change had just changed it less than a month before. So I went over to the oil change place, with their receipt for the PCV valve, the state emissions failure form and the dealer receipt showing the work done at the dealer.

I asked to talk to the manager. He didn’t look happy, and as I explained the story, he looked less happy. Before I could even explain where I hoped this conversation would go, he cut me off and told me that there was no way he could know if their PCV valve was faulty or if the problem had some other root cause.

In other words, he shut me down. No apology, much less offering to defray the $60 his shop had cost me.

In all the years since then we’ve never used Valvoline Rapid Oil Change. Any time I get a chance, I tell people to avoid the chain. Just by my wife and me not using them, they’ve certainly lost more than the $60. And I like to think that I’ve cost them other business as well, thus hopefully providing some just punishment for employing such a crappy manager and for installing faulty parts.

Contrast this to an experience I had just this past week. Last weekend we bought a small pop-up camper trailer so it will be easier to go camping as a family.

The trailer has a round 6-pin wiring plug for the lights. It has the 6-pin plug so it can get power directly from the car alternator/battery and thus it can charge the deep cycle battery in the camper while we drive down the road. Unfortunately my van has a flat 4-pin connector, which works great for most small trailers (like my boat trailer), but doesn’t match up to the 6-pin.

I figured I could rig up an adapter that would at least get the lights working. My thought was that I just didn’t need to recharge the battery from the van, and that it would be cheap and easy to get the lights working.

It turns out that modern cars are finicky and have complex wiring… They aren’t nearly as easy or fun to work with as my 1976 Datsun F-10, or my 1987 Cavalier… So I managed to blow out the fuse for my tail and brake lights on the van, and I still didn’t have the trailer working right.

I called around and ended up going to Burnsville Trailer Hitch, a specialist in these sorts of things. $110 later I had a fully wired 6-pin connector on my van, along with a 6 to 4 converter for my boat trailer. I thought that was money well-spent, since I’d explored running power from the battery back to the plug myself and it would have taken me far longer than the 45 minutes it took them.

But my story isn’t done. Here’s the catch. When I got home, I tried the plug and it didn’t work. The signal and brake lights on the trailer just didn’t work.

I called back to the store, and they offered to look at the problem if I brought the van back. I wasn’t totally thrilled, since we’re talking about a 15 mile drive each way, but there was nothing to be done. Off I went.

About 10 minutes after I got there, the guy comes in and says that there’s this converter unit that safely combines the signal and brake light wires from the van (which has separate bulbs for signal and brake) into a single pair of wires for left and right on the trailer (which has one bulb on each side for signal and brake). Turns out this converter was blown.

Now I knew about the converter, having wired one into my previous car. With him pointing out that it was blown, I strongly suspected that it was my doing, from my earlier attempt to wire the trailer.

Over my protests, he said they’d replace it for free. Seriously, I protested a bit, saying that it could easily have been my fault. His answer? It could have easily been their fault since they might have blown it when they did the wiring for the 6-pin connector.

So here we have a store with awesome customer service. I was and am impressed, and will recommend people go there whenever it is appropriate.

But here’s the real kicker. The oil change place pissed off a regular customer, losing not only me and my wife, but everyone we can convince to avoid the place. The trailer hitch place has no reason to expect I’ll ever return. After all, how often do you need a trailer hitch? Yet they were professional and went above and beyond the call to provide great service.

As I said to start, I’m not sure this has anything to do directly with computers, but I’ll bet you that if you treat your customers the way Burnsville Trailer Hitch treated me, you’ll have happy customers, get better raises and have a more secure job overall!

They already have some cool content, including relevant blog listings, links to sites with useful code, tips and tricks and so forth. And they plan to do more in the near future, including some more interactive content so we, as the community, can help create and manage some of the site's content.

I think this is an excellent step on the part of the VB team to help support the huge VB community, and I appreciate it. Thanks guys!

There is this broad-reaching debate that has been going on for months about remoting, Web services, Enterprise Services, DCOM and so forth. In short, it is a debate about the best technology to use when implementing client/server communication in .NET.

I’ve weighed in on this debate a few times with blog entries about Web services, trust boundaries and related concepts. I’ve also had discussions about these topics with various people such as Clemens Vasters, Juval Lowy, Ingo Rammer, Don Box and Michele Leroux Bustamante. (all the “experts” tend to speak at many of the same events, and get into these discussions on a regular basis)

It is very easy to provide a sound bite like “if you aren’t doing Enterprise Services you are creating toys” or something to that effect. But that is a serious oversimplification of an important issue.

Because of this, I thought I’d give a try at summarizing my thoughts on the topic, since it comes up with Magenic’s clients quite often as well.

Before we get into the article itself, I want to bring up a quote that I find instructive:

“The complexity is always in the interfaces” – Craig Andrie

Years ago I worked with Craig and this was almost like a mantra with him. And he was right. Within a small bit of code like a procedure, nothing is ever hard. But when that small bit of code needs to use or be used by other code, we have an interface. All of a sudden things become more complex. And when groups of code (objects or components) use or are used by other groups of code things are even more complex. And when we look at SOA we’re talking about entire applications using or being used by other applications. Just think what this does to the complexity!

Terminology

I think a lot of the problem with the debate comes because of a lack of clear terminology. So here are the definitions I’ll use in the rest of this article:

Layer: A logical grouping of similar functionality within an application. Often layers are separate .NET assemblies, though this is not a requirement.

Tier: A physical grouping of functionality within an application. There is a cross-process or cross-network boundary between tiers, providing physical isolation and separation between them.

Application: A complete unit of software providing functionality within a problem domain. Applications are composed of layers, and may be separated into tiers.

Service: A specific type of application interface that specifically allows other applications to access some or all of the functionality of the application exposing the service. Often this interface is in the form of XML. Often this XML interface follows the Web services specifications.

I realize that these definitions may or may not match those used by others. The fact is that all of these terms are so overloaded that intelligent conversation is impossible without some type of definition/clarification. If you dislike these terms, please feel free to mentally substitute your own favorite overloaded terms throughout the remainder of this article :-)

First, note that there are really only three entities here: applications, tiers and layers.

Second, note that services are just a type of interface that an application may expose. If an application only exposes a service interface, I suppose we could call the application itself a service, but I suggest that this only returns us to overloading terms for no benefit.

A corollary to the above points is that services don’t provide functionality. Applications do. Services merely provide an access point for an application’s functionality.

Finally, note that services are exposed for use by other applications, not other tiers or layers within a specific application. In other words, services don’t create tiers, they create external interfaces to an application. Conversely, tiers don’t create external interfaces, they are used exclusively within the context of an application.

Layers

In Chapter 1 of my .NET Business Objects books I spend a fair amount of time discussing the difference between physical and logical n-tier architecture. By using the layer and tier terminology perhaps I can summarize here more easily.

An application should always be architected as a set of layers. Typically these layers will include:

Presentation

Business logic

Data access

Data management

The idea behind this layering concept is two-fold.

First, we are grouping similar application functionality together to provide for easier development, maintenance, reuse and readability.

Second, we are grouping application functionality such that external services (such as transactional support, or UI rendering) can be provided to specific parts of our code. Again, this makes development and maintenance easier, since (for example) our business logic code isn’t contaminated by the complexity of transactional processing during data access operations. Reducing the amount of external technology used within each layer reduces the surface area of the API that a developer in that layer needs to learn.

In many cases each layer will be a separate assembly, or even a separate technology. For instance, the data access layer may be in its own DLL. The data management layer may be the JET database engine.

Tiers

Tiers represent a physical deployment scenario for parts of an application. A tier is isolated from other tiers by a process or network boundary. Keeping in mind that cross-process and cross-network communication is expensive, we must always pay special attention to any communication between tiers to make sure that it is efficient given these constraints. I find it useful to view tier boundaries as barriers. Communication through the barriers is expensive.

Specifically, communication between tiers must be (relatively) infrequent, and coarse-grained. In other words, send few requests between tiers, and make sure each request does a relatively large amount of work on the other side of the process/network barrier.
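To make the contrast concrete, here is a minimal sketch in Python (used here only as neutral pseudocode; the class and method names are invented, and in a real .NET application each call would be a remote call crossing the tier boundary):

```python
# Conceptual sketch: fine-grained vs coarse-grained tier interfaces.
# Each method call stands in for one cross-process/network round trip.

class FineGrainedCustomer:
    """Chatty interface: loading one customer costs three round trips."""
    def __init__(self, data):
        self._data = data
    def get_name(self):
        return self._data["name"]
    def get_address(self):
        return self._data["address"]
    def get_phone(self):
        return self._data["phone"]

class CoarseGrainedCustomerService:
    """Coarse interface: one round trip returns the whole record."""
    def __init__(self, db):
        self._db = db
    def fetch(self, customer_id):
        # All the work happens on the far side of the barrier; a complete
        # data structure comes back in a single response.
        return dict(self._db[customer_id])

db = {42: {"name": "A. Customer", "address": "1 Main St", "phone": "555-0100"}}
svc = CoarseGrainedCustomerService(db)
record = svc.fetch(42)  # one "round trip" instead of three
```

The coarse-grained version is what you want at a tier boundary; the fine-grained version is fine (and convenient) between layers deployed in the same tier.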

Layers and Tiers

It is important to understand the relationship between layers and tiers.

Layers are deployed onto tiers. A layer does not span tiers. In other words, there is never a case where part of a layer runs on one tier and part of the layer runs on another tier. If you think you have such a case, then you have two layers – one running on each tier.

Because layers are discrete units, we know that we can never have more tiers than layers. In other words, if we have n layers, then we have n or fewer tiers.

Note that thus far we have not specified that communication between layers must be efficient. Only communication between tiers is inherently expensive. Communication between layers could be very frequent and fine-grained.

However, notice also that tier boundaries are also layer boundaries. This means that some inter-layer communication does need to be designed to be infrequent and coarse-grained.

For all practical purposes we can only insert tiers between layers that have been designed for efficient communication. This means that it is not true that n layers can automatically be deployed on n tiers. In fact, the number of potential tiers is entirely dependent on the design of inter-layer communication.

This means we have to provide terminology for inter-layer communication:

Fine-grained: Communication between layers involves the use of properties, methods, events, delegates, data binding and so forth. In other words, there’s a lot of communication, and each call between layers only does a little work.

Coarse-grained: Communication between layers involves the use of a very few methods. Each method is designed to do a relatively large amount of work.

If we have n layers, we have n-1 layer interfaces. Of those interfaces, some number m will be coarse-grained. This means that we can have at most m+1 tiers.

In most applications, the layer interface between the presentation and business logic layers is fine-grained. Microsoft has provided us with powerful data binding capabilities that are very hard to give up. This means that m is virtually never n-1, but rather starts at n-2.

In most modern applications, we use SQL Server or Oracle for data management. The result is that the layer interface between data access and data management is typically coarse-grained (using stored procedures).

I recommend making the layer interface between the business logic and data access layers coarse-grained as well. This provides flexibility in placing these layers into different tiers so we can achieve different levels of performance, scalability, fault-tolerance and security as required.

In a web environment, the presentation is really just the browser, and the actual UI code runs on the web server. Note that this is, by definition, two tiers – and thus two or more layers. The interaction between web presentation and web UI is coarse-grained, so this works.

In the end, this means we have some clearly defined potential tier boundaries that map directly to the coarse-grained layer interfaces in our design. These include:

Presentation <-> UI (web only)

Business logic <-> Data access

Data access <-> Data management

Thus, for most web apps m is 3 and for most Windows apps m is 2. So we’re talking about n layers being spread (potentially) across 3 or 4 physical tiers.
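As a quick sanity check of the arithmetic, here is a trivial sketch (Python, purely illustrative) of the n layers / m coarse-grained interfaces rule:

```python
# Rule from the text: n layers give n-1 inter-layer interfaces; only the
# m coarse-grained interfaces can become tier boundaries, so the maximum
# number of physical tiers is m + 1.

def max_tiers(num_coarse_interfaces):
    return num_coarse_interfaces + 1

# Windows app: 4 layers (presentation, business logic, data access,
# data management) but only 2 coarse-grained interfaces (BL<->DA, DA<->DM)
print(max_tiers(2))  # -> 3

# Web app adds the coarse-grained browser<->UI boundary, so m = 3
print(max_tiers(3))  # -> 4
```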

Protocols and Hosts

Now that we have an idea how layers and tiers are related, let’s consider this from another angle. Remember that layers are not only logical groupings of domain functionality, but are also grouped by technological dependency. This means that, when possible, all code requiring database transactions will be in the same layer (and thus the same tier). Likewise, all code consuming data binding will be in the same layer, and so forth.

The net result of this is that a layer must be deployed somewhere that the technological dependencies of that layer can be satisfied. Conversely, it means that layers that have few dependencies have few hard restrictions on deployment.

Given that tiers are physical constructs (as opposed to the logical nature of layers), we can bind technological capabilities to tiers. What we’re doing in this case is defining a host for tiers, which in turn contain layers. In the final analysis, we’re defining host environments in which layers of our application can run.

We also know that we have communication between tiers, which is really communication between layers. Communication occurs over specific protocols that provide appropriate functionality to meet our communication requirements. The requirements between different layers of our application may vary based on functionality, performance, scalability, security and so forth. For the purposes of this article, the word protocol is a high-level concept, encompassing technologies like DCOM, Remoting, etc.

It is important to note that the concept of a host and a protocol are different but interrelated. They are interrelated because some of our technological host options put restrictions on the protocols available.

In .NET there are three categorical types of host: Enterprise Services, IIS and custom. All three hosts can accommodate ServicedComponents, and the IIS and custom hosts can accommodate Services Without Components (SWC).

The following table illustrates the relationships:

Host: Enterprise Services (Server Application)
Protocols: DCOM
Technologies: Simple .NET assembly, ServicedComponent

Host: IIS
Protocols: Web services, Remoting
Technologies: Simple .NET assembly, ServicedComponent, SWC

Host: Custom
Protocols: Remoting, Web services (w/ WSE), DCOM
Technologies: Simple .NET assembly, ServicedComponent, SWC

The important thing to note here is that we can easily host ServicedComponent objects or Services Without Components in an IIS host, using Web services or Remoting as the communication protocol.

All three hosts can host simple .NET assemblies. For IIS and custom hosts this is a native capability. However, Enterprise Services can host normal .NET assemblies by having a ServicedComponent dynamically load .NET assemblies and invoke types in those assemblies. Using this technique it is possible to create a scenario where Enterprise Services acts as a generic host for .NET assemblies. I do this in my .NET Business Objects books, for instance.
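The dynamic-loading idea can be sketched conceptually in Python, with importlib standing in for Assembly.Load and Activator.CreateInstance (the qualified-name format and the invoke helper are illustrative, not the actual API from my books):

```python
import importlib

# Conceptual analogue of a generic host: given a qualified type name,
# dynamically load the containing module, create an instance, and invoke
# the requested method. In .NET, the hosting ServicedComponent would do
# this with Assembly.Load and Activator.CreateInstance.

def invoke(qualified_name, method, *args):
    module_name, class_name = qualified_name.rsplit(".", 1)
    module = importlib.import_module(module_name)  # analogue of Assembly.Load
    instance = getattr(module, class_name)()       # analogue of Activator.CreateInstance
    return getattr(instance, method)(*args)

# Example with a standard-library type standing in for a hosted component:
result = invoke("collections.Counter", "most_common")
```

The key point is that the host itself has no compile-time reference to the hosted assembly; everything is resolved by name at runtime.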

Hosts

What we’re left with is a choice of three hosts. If we choose Enterprise Services as the host then we’ve implicitly chosen DCOM as our protocol. If we choose IIS as a host we can use Web services or Remoting, and also choose to use or not use the features of Enterprise Services. If we choose a custom host we can choose Web services, Remoting or DCOM as a protocol, and again we can choose to use or not use Enterprise Services features.

Whether you need to use specific Enterprise Services features is a whole topic unto itself. I have written some articles on the topic, the most broad-reaching of which is this one.

However, there are some things to consider beyond specific features (like distributed transactions, pooled objects, etc.). Specifically, we need to consider broader host issues like stability, scalability and manageability.

Of the three hosts, Enterprise Services (COM+) is the oldest and most mature. It stands to reason that it is probably the most stable and reliable.

The next oldest host is IIS, which we know is highly scalable and manageable, since it is used to run a great many web sites, some of which are very high volume.

Finally there’s the custom host option. I generally recommend against this except in very specific situations, because writing and testing your own host is hard. Additionally, it is unlikely that you can match the reliability, stability and other attributes of Enterprise Services or IIS.

So do we choose Enterprise Services or IIS as a host? To some degree this depends on the protocol. Remember that Enterprise Services dictates DCOM as the protocol, which may or may not work for you.

Protocols

Our three primary protocols are DCOM, Web services and Remoting.

DCOM is the oldest, and offers some very nice security features. It is tightly integrated with Windows and with Enterprise Services and provides very good performance. By using Application Center Server you can implement server farms and get good scalability.

On the other hand, DCOM doesn’t go through firewalls or other complex networking environments well at all. Additionally, DCOM requires COM registration of the server components onto your client machines. Between the networking complexity and the deployment nightmares, DCOM is often very unattractive.

However, as with all technologies it is very important to weigh the pros of performance, security and integration against the cons of complexity and deployment.

Web services is the most hyped of the technologies, and the one getting the most attention by key product teams within Microsoft. If you cut through the hype, it is still an attractive technology due to the ongoing work to enhance the technology with new features and capabilities.

The upside to Web services is that it is an open standard, and so is particularly attractive for application integration. However, that openness has very little meaning between layers or tiers of a single application. So we need to examine Web services using other criteria.

Web services is not a high-performance or low-bandwidth technology.

Web services use the XmlSerializer to convert objects to/from XML, and that serializer is extremely limited in its capabilities. To pass complex .NET types through Web services you’ll need to manually use the BinaryFormatter and Base64 encode the byte stream. While achievable, it is a bit of a hack to do this.
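Here is a rough analogue of that hack in Python, with pickle standing in for the BinaryFormatter (the function names and payload are invented for illustration):

```python
import base64
import pickle

# Analogue of the BinaryFormatter + Base64 hack: an XML-based channel can
# only carry text, so serialize the full object graph to bytes, then
# Base64-encode those bytes into a string field of the message.

def to_wire(obj):
    raw = pickle.dumps(obj)                       # "BinaryFormatter.Serialize"
    return base64.b64encode(raw).decode("ascii")  # now safe to embed in XML

def from_wire(text):
    raw = base64.b64decode(text)                  # back to raw bytes
    return pickle.loads(raw)                      # "BinaryFormatter.Deserialize"

payload = {"id": 7, "items": [("widget", 2), ("gadget", 1)]}
assert from_wire(to_wire(payload)) == payload  # full-fidelity round trip
```

Note the trade-off: the complex type survives the trip intact, but the message body is now an opaque blob, so you lose the interoperability that was the point of using XML in the first place.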

However, by using WSE we can get good security and reliability features. Also Web services are strategic due to the focus on them by many vendors, most notably Microsoft.

Again, we need to evaluate the performance and feature limitations of Web services against the security, reliability and strategic direction of the technology, keeping in mind that hacks exist to overcome the worst of the feature limitations and allow Web services to offer functionality similar to DCOM or Remoting.

Finally we have Remoting. Remoting is a core .NET technology, and is very comparable to RMI in the Java space.

Remoting makes it very easy for us to pass complex .NET types across the network, either by reference (like DCOM) or by value. As such, it is the optimal choice if you want to easily interact with objects across the network in .NET.

On the other hand, Microsoft recommends against using Remoting across the network. Primarily this is because Remoting has no equivalent to WSE and so it is difficult to secure the communications channel. Additionally, because Microsoft’s focus is on Web services, Remoting is not getting a whole lot of new features going forward. Thus, it is not a long-term strategic technology.

Again, we need to evaluate this technology by weighing its superior feature set today against its lack of long-term strategic value. Personally I consider the long-term risk manageable, assuming you employ intelligent application designs that shield you from potential protocol changes.

This last point is important in any case. Consider that DCOM is also not strategic, so using it must be done with care. Also consider that Web services will undergo major changes when Indigo comes out. Again, shielding your code from specific implementations is of critical importance.

In the end, if you do your job well, you’ll shield yourself from any of the three underlying protocols so you can more easily move to Indigo or something else in the future as needed. Thus, the long-term strategic detriment of DCOM and Remoting is minimized, as is the strategic strength of Web services.

So in the end what do you do? Choose intelligently.

For the vast majority of applications out there, I recommend against using physical tiers to start with. Use layers – gain maintainability and reuse. But don’t use tiers. Tiers are complex, expensive and slow. Just say no.

But if you must use physical tiers, then for the vast majority of low to medium volume applications I tend to recommend using Remoting in an IIS host (with the Http channel and BinaryFormatter), potentially using Enterprise Services features like distributed transactions if needed.

For high volume applications you are probably best off using DCOM with an Enterprise Services host – even if you use no Enterprise Services features. Why? Because this combination is more than twice as old as Web services or Remoting, and its strengths, limitations and foibles are well understood.

Note that I am not recommending the use of Web services for cross-tier communication. Maybe I’ll change my view on this when Indigo comes out – assuming Indigo provides the features of Remoting with the performance of DCOM. But today it provides neither the features nor performance that make it compelling to me.