SonOfPirate

Thursday, November 17, 2011

I am currently working on a very large-scale service-oriented enterprise application which consists of a server-side application using WCF Web API to expose a RESTful API. There will be numerous client applications accessing the services. As part of this communication, we will be passing client context information, such as the current language, the application name, etc. in HTTP headers with each request. So, I need a way to pull this information out of the request pipeline and persist it in a way that the service code can make use of the context information PER REQUEST.

My Little MEF Problem

The simplest solution to my problem would be to create a MessageHandler that pulls the header information out, places it into a context object, and inserts that object into the HttpRequestMessage.Properties collection so that it is passed to the service method. This would look something like:

public class ClientContextMessageHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        // Pull the language out of the Accept-Language header.
        var language = request.Headers.AcceptLanguage.First().Value;

        var context = new ClientContext()
        {
            Culture = new CultureInfo(language)
        };

        // Attach the context to the request so the service method can use it.
        request.Properties.Add("ClientContext", context);

        return base.SendAsync(request, cancellationToken);
    }
}

Unfortunately, I am working with a layered architecture and we use Dependency Injection to decouple our dependencies (for testing, maintainability and extensibility). We have chosen to use MEF. This decision is part of a larger discussion, but it boiled down to the fact that it is now part of the .NET Framework, will receive continued support from Microsoft, and allows us to have dynamic composition of our API so we can easily extend the solution by simply dropping assemblies in the runtime folder.

My problem with the above solution is that I need the context information to be available throughout the application - in any layer, anywhere it's needed. I do not want to pass the object around in every method call; therefore, I need a way to inject the object at runtime while processing a request.
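For illustration, here is how a lower-layer class could declare the dependency and let MEF supply it at composition time. OrderProcessor is a hypothetical example, and the exact shape of IClientContext is my assumption based on how it's used throughout this post:

```csharp
using System.ComponentModel.Composition;
using System.Globalization;

// As used throughout this post (assumed shape).
public interface IClientContext
{
    CultureInfo Culture { get; }
}

// Hypothetical service-layer class; MEF satisfies the import at composition time.
public class OrderProcessor
{
    [Import]
    public IClientContext ClientContext { get; set; }

    public string FormatTotal(decimal total)
    {
        // Uses the injected per-request context instead of a parameter
        // threaded through every method call.
        return total.ToString("C", ClientContext.Culture);
    }
}
```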

I'll come back to this.

WCF Threading

I won't give a dissertation on WCF threading - mostly because I'm no expert. But suffice it to say that WCF uses thread pooling and, as a result, a specific thread may be reused for multiple requests.

Why does this matter? Because any data saved in the thread, either using the [ThreadStatic] attribute or a named slot, will be persisted across multiple requests. So it is possible that data stored for one request is exposed to another. Not what we want.

However, knowing that a thread will only ever be servicing a single request at any one time should help lead us to a workable solution so long as we eliminate any risk of 'bleed through'.
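A toy console illustration of the bleed-through risk (not the actual WCF plumbing, just the mechanics of [ThreadStatic] state surviving between units of work on the same thread):

```csharp
using System;
using System.Threading;

class BleedThroughDemo
{
    [ThreadStatic]
    private static string _language;

    // Simulates handling one request on the current thread.
    static void HandleRequest(string name, string languageHeader)
    {
        if (languageHeader != null)
            _language = languageHeader;  // this request sets the per-thread value

        // A later request on the same thread sees whatever was left behind.
        Console.WriteLine("{0} sees language: {1}", name, _language ?? "(none)");
    }

    static void Main()
    {
        var worker = new Thread(() =>
        {
            HandleRequest("request 1", "fr-FR");  // prints "fr-FR"
            HandleRequest("request 2", null);     // also prints "fr-FR" - bleed-through!
        });
        worker.Start();
        worker.Join();
    }
}
```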

Defining a Thread-Safe Context

I think I have enough information at this point to at least define my ClientContext class. Here's what I came up with:

internal sealed class ClientContext : IClientContext
{
    [ThreadStatic]
    private static ClientContext _instance;

    private ClientContext() { }

    public CultureInfo Culture { get; set; }

    [Export(typeof(IClientContext))]
    public static ClientContext Current
    {
        get
        {
            if (_instance == null)
                _instance = new ClientContext();

            return _instance;
        }
    }
}

I've essentially created a per-thread singleton that I can access through the Current property. Whenever I call ClientContext.Current, I know I will get the object for the thread I am currently executing on.

Notice that I've marked the Current property with the Export attribute. This tells MEF to use this property whenever I need to inject an instance of the IClientContext interface in my code.

Reworking the MessageHandler

Now that I have a thread-safe context, let's rework the message handler to use this class instead of instantiating a new object each time.

public class ClientContextMessageHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        var language = request.Headers.AcceptLanguage.First().Value;

        // Update the per-thread singleton rather than creating a new object.
        var context = ClientContext.Current;
        context.Culture = new CultureInfo(language);

        return base.SendAsync(request, cancellationToken);
    }
}

Notice that I am no longer storing the context in the HttpRequestMessage.Properties collection. I don't need to do this because the instance is stored in the [ThreadStatic] variable and exposed via the MEF export.

Message Handlers vs Operation Handlers

Despite all looking good, I found the solution wasn't reliable. Sometimes, when pumping a lot of requests into my service, the code would fail with a NullReferenceException. After some digging, I found that the Culture property was sometimes null in my service code even after setting it properly in the message handler. I quickly learned that the message handler occasionally ran on a different thread than my service method. Uh-oh!

Fortunately I had an opportunity to discuss this problem with Glenn Block (of Prism, MEF, WCF Web API and now Node.js fame) who was in town for a speaking engagement. It turns out that it is just a matter of making sure the context is set on the same thread the request handler (the service) is executing. The solution is to use an HttpOperationHandler instead of a MessageHandler.

According to Glenn, message handlers operate asynchronously which means they could execute on a different thread from the request handler (service) so we should never do anything in a message handler that requires thread affinity. On the other hand, operation handlers run synchronously on the same thread as the request handler, therefore we can rely on thread affinity.

I'm not sure if I'm passing the right information to the base class constructor, but the rest of the code was a simple port from the original message handler. I'm also using the generic base class to make the code simpler.
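Since the original snippet isn't reproduced here, this is a sketch of what the ported handler might look like against the WCF Web API preview bits. The HttpOperationHandler<TInput, TOutput> generic base class and the output-parameter name passed to its constructor are my assumptions:

```csharp
using System.Globalization;
using System.Linq;
using System.Net.Http;
using Microsoft.ApplicationServer.Http.Dispatcher;

public class ClientContextOperationHandler
    : HttpOperationHandler<HttpRequestMessage, HttpRequestMessage>
{
    public ClientContextOperationHandler()
        : base("response")  // name of the single output parameter (assumed)
    {
    }

    protected override HttpRequestMessage OnHandle(HttpRequestMessage request)
    {
        // Runs synchronously on the same thread as the service method,
        // so the [ThreadStatic] instance set here is the one the service sees.
        var language = request.Headers.AcceptLanguage.First().Value;
        ClientContext.Current.Culture = new CultureInfo(language);

        return request;
    }
}
```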

WTF!

So I run the application, make a test call and see that my operation handler is called as expected and the context information is set, but the Culture property is now always null in my service method! What's worse is that I am able to verify that the operation handler and service method are, in fact, running on the same thread by checking the Thread.CurrentThread.ManagedThreadId value.

Coping with MEF

The problem is in how MEF satisfies the imports. Unfortunately, I don't know the details of why this is the case, but I do know how I got around it.

First, I removed the Export attribute from the Current property of my ClientContext class. Then I created the following ClientContextExporter, whose sole purpose is to make MEF behave the way I want.

[Export(typeof(IClientContext))]
internal sealed class ClientContextExporter : IClientContext
{
    public CultureInfo Culture
    {
        get { return ClientContext.Current.Culture; }
    }
}

This type implements the same interface but delegates all members to the ClientContext.Current object. The class is marked with the Export attribute so MEF will use this class to satisfy any imports.

Wrap It Up Already!

So, that's it. I have the following types:

ClientContext (with the Export attribute removed) is the implementation of the context and provides the container for the thread-safe instances injected by MEF,

ClientContextOperationHandler plugs into the WCF Web API request pipeline to extract the header information and set the properties of the current ClientContext object.

ClientContextExporter simply delegates to ClientContext.Current and is the export used by MEF to satisfy any imports.

Wednesday, October 12, 2011

I am currently working in an enterprise environment on a solution that involves multiple Silverlight applications running on multiple systems in a client-server environment. (To clarify, there can be multiple Silverlight applications running on a single system and there are several such systems on the network.) The topic of messaging has been prevalent lately and I see a lot of confusion when discussing the various types of messaging. It would make these discussions so much easier if we had common yet unique terminology for each type.

As I see it, we have the following types of messaging (along with the terminology I use to keep 'em straight!):

Inner-Application Messaging

One of the (sub-)patterns promoted by the MVVM pattern widely used for Silverlight applications is that of a Messenger. This is also known as a Broker or Event Aggregator, depending on the framework you are using. Regardless of the name, this is a way of using loosely-coupled messages to communicate between objects (view models in MVVM) rather than tightly-coupled events. This type of messaging only pertains to communication within an application.

I refer to inner-application messaging as simply, "messaging".

Inter-Application Messaging

I actually prefer the term Notification for the next type of messaging but, as the world of mobile devices has taken off, this term has become synonymous with inter-application communication. With Silverlight 4, Microsoft introduced a form of Silverlight client-to-Silverlight client communication using the LocalMessageSender and LocalMessageReceiver classes, as well as a way to display a cute little popup window when a message is received using the NotificationWindow class. Another example is sending toast notifications in Windows Phone 7.

I've been calling inter-application messaging, "notifications".
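For reference, the Silverlight local messaging API looks roughly like this; "receiverName" is just an illustrative channel name that both applications must agree on:

```csharp
using System.Windows.Messaging;

public class LocalMessagingExample
{
    // In the sending Silverlight application:
    public static void Send()
    {
        var sender = new LocalMessageSender("receiverName");
        sender.SendAsync("Hello from the other app");
    }

    // In the receiving Silverlight application:
    public static void Receive()
    {
        var receiver = new LocalMessageReceiver("receiverName");
        receiver.MessageReceived += (s, e) =>
        {
            // e.Message holds the string sent by the other application.
        };
        receiver.Listen();
    }
}
```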

Inter-Machine Messaging

In my environment, we have a broadcasting system that runs on the server and pushes messages out to the client applications running on any machine in the network. I used to refer to this as a Notification Service, keeping in line with such commercial products as SQL Server Notification Services. However, since we don't have a better name for inter-application messaging and notifications seem to be so popular there, a new term is called for.

I've started using "broadcasting" for inter-machine messaging.

By no means am I suggesting that my terminology is the best. In fact, quite the opposite and I solicit anyone that stumbles across this post to share your opinions. It has truly been difficult to sit in meetings and keep everyone on the same page when discussing messaging. We really need some standard terminology.

Sunday, September 11, 2011

As you can see, it's been quite a while since I've blogged. I thought the 10th anniversary of the 9/11 attacks was a great time to rekindle my blog.

Today is a somber day for those of us that lived through the attacks. I, fortunately, was not directly affected and can only imagine what life has been like for those that were. We, like most of the country, sat in awe as the images from New York, Pennsylvania and Washington D.C. streamed across the TV and felt the fear and uncertainty of what would come next. I can still remember sitting at my desk, listening as the morning radio show crew learned of the attacks, turning the volume up as everyone in the office huddled around and trying to use the web to find out more.

The web was still just a toddler in those days so information was difficult to track down. We were able to see still images on the CNN site to drive home the description we were hearing on the radio. Information was scattered and all over the map in those early hours and days. None of us could fathom the magnitude of what was taking place or what would follow.

My kids were all young on 9/11/01. Too young to really understand what had happened other than the fact that something had happened. Because of this, my wife and I decided nine years ago to buy the 9/11 DVD made by Jules and Gédéon Naudet. If you haven't watched this documentary, I suggest getting a copy. I've yet to see a more compelling, truthful and downright riveting source of information on that day. We watched it once and put it on the shelf.

Today my kids are 22, 20, 19, 18 and 14. They are finally old enough to fully understand the events of that day, and I can think of no better way to remember and honor all of those that lost their lives than to bring my family together, significant others included, and watch the DVD once again. We will have a nice dinner then gather in the family room to relive that day. I'm sure we will spend a couple of hours afterwards answering questions and talking about the days that followed.

The bottom line is that we remember. And after today my children will have a deeper understanding of a day that truly changed the world we all live in.

Sunday, July 11, 2010

So the first question you may have is, “what does he mean by modern technology?” For those of us that grew up in the ‘80s or before, we know that pretty much anything that has to do with computers is “modern technology.” But I am referring to the new toolset of the on-demand generation: Twitter, Facebook, MySpace and so on – social networking.

You see, to my kids, computers are as basic to their lives as color televisions were when I grew up – every household had at least one and eyebrows were raised if your family didn’t. My youngest daughter was better with a mouse at 3 years-old than anyone else in my family – including me! They’ve grown up with Windows and Office and, for the most part, the Internet. And while those of us that had a hand in creating the modern computing culture are proud that we’ve brought the world within a few keystrokes, there is an unfortunate side-effect that is becoming more and more apparent as a result of these advances: e-narcissism.

Having two teenage daughters at home, it was a common dinner-time joke to ask one of them if they truly thought anyone cared that after school they were “doing homework then movie time with Jake” as she had posted to Twitter. But when my teenage son took over the texting championship, maintaining his lead over my daughters every month for over a year now, I had to stop and ask myself what our kids were really learning and how the always-connected technology we were handing them was shaping their values, beliefs and personalities.

If you haven’t read “The Narcissism Epidemic: Living in the Age of Entitlement”, I strongly suggest doing so; especially if you are a parent of younger kids. I’ve lived what the authors describe and have battled for the past few years to keep my kids’ heads in the right place despite a generation that is falling victim to the narcissism of social networking and cloud computing.

But, as much as I could go on and on, it is another side-effect of this technology that has drawn my attention recently. With the proliferation of electronic communication mediums such as blogs, tweets, social networking sites and hardware that supports these tools such as cell phones, access to individuals has been greatly enhanced. And while this is awesome when we want answers or to learn something new or stay on top of what’s happening in the world, there is a flip-side. Access also leads to transparency. And by this I mean that we get a chance to learn more about someone than ever before.

Psychiatrists don’t analyze a person by reading a medical history – they listen as the person talks. This is how they get to know that person, learn about them and get inside their head. The more they talk, the more the doctor learns. This holds true in general as well. You walk up to an attractive person at a party and it is through dialogue that you determine if a real connection exists. But there is a difference when speaking to someone in a chair across the room or standing in front of you at a party versus sharing your thoughts to a faceless, nameless entity on the Internet.

Sociologists, Human Resources personnel and business managers recognized this problem as e-mail became more and more commonplace in our culture. So often someone will stick their chest out a little farther in an e-mail than they ever would if in front of their target audience. And so we also see the same trend in today’s social networking. Where it crosses a line is when the target audience is known and the authors seem to forget that the Internet is open for anyone to see.

No longer is workplace gossip spread at the water cooler. Rumors fly across the Internet via tweets, comments and wall posts at the speed of light. And unless we are diligent, they are out there for everyone, including the target of the rumor or comment, to see.

I just left a position that, on paper, was one of the best jobs I’ve ever had. The group was already using Visual Studio 2010, Team Foundation Server 2010, Silverlight 4 and so on. I was programming some pretty cool applications and the people I was working with were, for the most part, excellent. I can truly say that in nearly 30 years of developing software I’ve never worked with a more intelligent group of developers as part of one team. And while the position was a step-backwards on my career path and only used about 25% of what I brought to the table, the latter could have easily changed over time and the former was an acceptable trade-off to be part of a cutting-edge team. It was getting to know several of the people on the team that pushed me out the door.

I didn’t get to know them over lunch or while working on projects. It was during those times that I found them to be very intelligent and competent developers. No, it was what I learned about their true personalities online that shaped my decision to move on.

I don’t know if they didn’t realize, forgot or simply don’t care that their conversations on Twitter were visible to anyone. But it certainly gave me the opportunity to see their true colors.

I actually stumbled upon the conversations accidentally when I started to follow one of their blogs. Alongside the list of recent posts was a list of recent tweets. One of them caught my eye and motivated me to look farther. It was a harsh criticism, full of expletives, about another of my co-workers and a task he had chosen to take on in the hopes of improving the user experience of several of our applications. Now it isn’t easy to follow a Twitter conversation if you aren’t a follower of everyone involved, but you can put the dialogue together if you review each person’s tweets.

As it turns out, four of the developers on the team have been having extensive conversations about the other team members, including their opinions of those members and the work they do, etc. “behind their backs”. Unfortunately, it really isn’t a private conversation when you hold it via Twitter. Suggesting that the other developers, myself included, are “kids” that “don’t know WTF they are doing” were probably not intended for my eyes – or anyone else on the team. But, social networking provides this level of transparency and, unless you are careful, your inner thoughts may become public with the click of a mouse.

Through the arrogance and disrespect, I learned that these individuals were truly ignorant and could not be trusted. Unfortunately, they are also some of the most tenured developers in the group and the most influential with management. And while they refer to their teammates as “kids, er, architects” tongue-in-cheek, I’ve seen the fruits of their labor and they have nothing to warrant the feather in their caps. I worked with a team 1/3 the size of theirs which produced a production-ready, mission-critical application for the U.S. Army in ¼ the time these so-called “code ninjas” generated some of the most hacked up, spaghetti code I’ve ever seen that doesn’t even come close to meeting the requirements for the project. They blame their poor coding on the time crunch and pressure they were under “over the holidays” but any developer worth his salt knows that good developers will write good code under any circumstance. Bad developers will make excuses. Arrogant developers will make fun of and blame others.

The bottom line is that I would have known none of this, been oblivious to the true feelings of these former co-workers had it not been for the transparency that social networking provides. Depending on how you view my story, you may feel this is good or bad. I would hope the developers that made those comments on Twitter will be ashamed and see this as a negative, but I suspect that I’m an a** for writing this. I guess I’ll know when they tweet about it later, eh?

Sunday, July 4, 2010

Recently I’ve found myself in a number of conversations about best practices for consuming WCF services in client applications. It is my position that WCF is an implementation detail and that the vast majority of uses either ignore or abandon one of the core principles of object-oriented programming for the sake of ease, or because developers simply don’t give it a second thought. By reminding ourselves that we should be following solid OO principles first, our best practices for using WCF become clear.

Always Code Against Interfaces
I’m sure you’ve heard it before: we should always code against interfaces rather than concrete implementations. While this sounds great in theory, there are practical benefits to this principle as well.

Interfaces allow us to decouple implementations so calling code doesn’t have to be concerned with how an operation will be performed – only that it will satisfy that contract established by the interface. This decoupling is key to Dependency Injection where the actual implementation of the interface is determined externally at runtime. Breaking dependencies allows us to create mocks and stubs for testing purposes or change the concrete class at runtime.

In fact, interfaces provide the most robust way for us to adhere to the OO principle of polymorphism in the .NET world. Regardless of the actual type we are working with, we are able to treat the object as if it were the interface we need.

Making WCF Testable
One of the most common problems I’ve seen when consuming WCF services in client applications is testing client code without actually calling service methods. One reason for this is that most developers simply reference the proxy service class and call the desired service method(s). This creates tight coupling between client code and the service proxy. A better solution would be to have client code use an interface that represents the service then we could mock the service for testing purposes.
Unfortunately, the lack of a readily available interface to work with serves as a deterrent for most developers and they simply relegate this as a future refactoring task. This reduces the code coverage of your unit tests and degrades the overall confidence in the quality of the application.

I won’t entertain the debate between using the auto-generated proxy types created using the Visual Studio “Add Service Reference” wizard or sharing code between the server and client in this article. Suffice it to say, in either case, we still have to create an interface to serve our needs.

Back To The Point
We can now come full-circle to the point of this article: WCF is an implementation. Following the logic that an interface hides the actual implementation from calling code and our need to decouple the WCF service from the client by working with an interface instead of directly using the service proxy, we can ascertain that WCF really is an implementation detail and our client code really doesn’t and shouldn’t care that it is calling a WCF service. From the client’s perspective, it simply needs an object that implements the interface. The implementation could be a WCF service, mock object for testing, a traditional XML web service, a class from a shared library or some new technology or approach that has yet to be announced.

Give this a little more thought and you’ll see how this applies to many other OO principles and best practices. Decoupling the services means our client code can better adhere to the single-responsibility principle. If we want to change the service implementation from WCF to some other technology, we only have to make that change in one place, the service implementation, which keeps code DRY (Don’t Repeat Yourself). Service implementation is also encapsulated in one place, making the code reusable and easier to maintain.

Putting It All Together
I find having actual code to demonstrate a concept helps drive the information home, so let’s use the example of a client application that is consuming a Customers service which provides simple methods to retrieve and save information about customers in our fictitious application. We will create a Silverlight application using the Model-View-ViewModel (MVVM) design pattern for the user interface.

Instead of designing the WCF service, let’s start with the use-case for our client application. When our application starts, we will display a list of existing customers in the user interface. Following the MVVM pattern, our list will be data bound to the CustomerListViewModel which uses our service to retrieve the list of customers from the server.

We use constructor injection to obtain a reference to an implementation of the ICustomerService interface used by the view model to perform service operations.
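A minimal sketch of that constructor injection; the [ImportingConstructor] attribute assumes MEF, but any container's equivalent works the same way:

```csharp
using System.ComponentModel.Composition;

public class CustomerListViewModel
{
    private readonly ICustomerService _service;

    // The container supplies the ICustomerService implementation:
    // the real service proxy at runtime, a mock under test.
    [ImportingConstructor]
    public CustomerListViewModel(ICustomerService service)
    {
        _service = service;
    }
}
```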

The view model uses lazy-loading to retrieve the list of customers. Because Silverlight forces us to make all calls asynchronously, we will return an empty collection initially but could just as easily return null since Silverlight data-binding will handle null values gracefully.

The BeginLoad method is responsible for starting the asynchronous call to the service. Because Silverlight only supports asynchronous communication with the server, our service pattern must provide an easy way to handle all method calls. This is accomplished by passing a callback delegate to the service method that will be called when the operation completes.

private void BeginLoad()
{
_service.ListCustomers(OnLoadComplete);
}

When the operation completes, the OnLoadComplete method is called. After checking if the operation was cancelled on the server, we translate the results and update the view model property.
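A sketch of what OnLoadComplete might look like; the member and type names here are illustrative, since the post's actual listing isn't shown:

```csharp
private void OnLoadComplete(IEnumerable<ICustomer> results)
{
    // A null result signals the operation was cancelled on the server.
    if (results == null)
        return;

    // Wrap each data object in its own view model; the DTOs themselves
    // are never handed to the UI.
    Customers = new ObservableCollection<CustomerViewModel>(
        results.Select(r => new CustomerViewModel(r)));
}
```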

One thing to note is that we are not persisting references to the data objects returned from the service. This is consistent with a pure-MVVM approach where we only ever expose view models to our UI.

As you can see, nothing in the code above requires WCF or cares if that is the technology used to perform the service operation. However, we can easily mock the ICustomerService interface for unit testing purposes. And if something changes in the way the service is implemented, we won’t have to make a single change to our view model class – as long as the contract isn’t broken.

Let’s take a look at the interface for the service our view model uses:
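The original listing isn't reproduced here, but based on the description it would look something like this; the delegate and member names are illustrative:

```csharp
// Strongly-typed callbacks keep the calling code simple.
public delegate void ListCustomersCallback(IEnumerable<ICustomer> results);
public delegate void SaveCustomerCallback(bool succeeded);

public interface ICustomerService
{
    // Each operation takes its inputs plus a callback that is
    // invoked when the asynchronous call completes.
    void ListCustomers(ListCustomersCallback callback);
    void SaveCustomer(ICustomer customer, SaveCustomerCallback callback);
}
```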

Each method accepts whatever parameters are needed to perform the operation as well as a callback method that is executed when the operation completes. The delegates are strongly-typed to make coding easier.

Here is where we finally see our WCF proxy come into play. Our service implementation also abstracts away the async pattern used by the auto-generated proxy class by wiring a local handler to the completed event and passing the callback method to the service method. The callback delegate will be passed automatically to the completed handler in the event args.

In the handler for the completed event, we check for any error that may have occurred on the server then obtain the callback method from the event arguments. The callback is invoked with the results of our service method call. (I'm using AutoMapper to simplify mapping the DTO returned by the service to the interface expected by our code.)
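Putting those two paragraphs together, the wrapper might look roughly like this. CustomersClient and its event/args names stand in for whatever the “Add Service Reference” wizard generated in the actual project, and only ListCustomers is shown:

```csharp
[Export(typeof(ICustomerService))]  // MEF export; swap for your container
public class CustomerServiceProxy : ICustomerService
{
    private readonly CustomersClient _client = new CustomersClient();  // generated proxy

    public CustomerServiceProxy()
    {
        _client.ListCustomersCompleted += OnListCustomersCompleted;
    }

    public void ListCustomers(ListCustomersCallback callback)
    {
        // The callback rides along in the generated method's userState
        // argument so the completed handler can retrieve it.
        _client.ListCustomersAsync(callback);
    }

    private void OnListCustomersCompleted(object sender, ListCustomersCompletedEventArgs e)
    {
        if (e.Error != null)
            return;  // surface or log the server fault as appropriate

        var callback = (ListCustomersCallback)e.UserState;

        // Map each DTO onto the ICustomer interface the client code
        // expects (the post uses AutoMapper for this).
        callback(e.Result.Select(dto => Mapper.Map<CustomerDto, ICustomer>(dto)));
    }

    // SaveCustomer omitted for brevity.
}
```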

Conclusion
The point of this article was to illustrate that, from the perspective of the client application, WCF is an implementation detail, and that a simple pattern can be used that makes our code more adherent to solid object-oriented principles. Hopefully the explanation and sample walk-through have demonstrated how this approach will also make our code more testable and flexible in the process. While it may seem like additional work and code, the benefits gained far outweigh the effort required, and after putting it into place, I’m sure you’ll see for yourself how much easier it is to test and maintain your code as your application changes.

Tuesday, April 6, 2010

I've become an avid believer in the MVVM design pattern for XAML-based applications but find certain aspects remain ambiguous as the pattern moves through its "toddler" years. One of these areas has to do with the relationship between the Model and the ViewModel.

Decoration
In many articles and examples, I see that the Model is directly exposed as a property on the ViewModel. This seems like the easy way out to me, but does fit one approach to the pattern. In this case, the ViewModel is designed to "extend" the Model by exposing additional properties and methods used by the UI - an implementation of the Decorator pattern, if you will. But, this approach has a couple of issues:

First, there might not be a one-to-one relationship between the data contained in the Model and the way the UI wants to display it. Perhaps we want to display a person's name as their full name but the Person object only contains FirstName and LastName properties. Sure, we could modify the Person class to expose a FullName property, but what if we don't have access to that code - perhaps Person came from an external service? Isn't it the role of the ViewModel to make these adaptations for us?

What if we have an object graph with parent-child (or even grandchild) relationships? Wouldn't we want the ability to manage the child objects in their own view? At which point, we'd need all that extra stuff the ViewModel offers for the child views, too.

On the other hand, let's not discount the simplicity of exposing the Model object as a property on the ViewModel. One property and a convenient way to swap the object under-the-hood of the ViewModel without having to worry about multiple property changed events, etc.

Delegation
Another common approach I've seen is to store a reference to the Model object in a private variable and have the ViewModel delegate to the object in property setters/getters. One advantage of this is that we can preserve any business logic that may be contained in the Model object but it does increase the code in our ViewModel significantly as we have to mimic each property we want exposed to the View. Plus, if the Model object contains validation logic, etc. we have to include plumbing to hook that up. And, let's not forget the need to handle property changed events from the Model so we can bubble them to the UI.
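A minimal sketch of the delegation approach; Person and its properties are illustrative:

```csharp
using System.ComponentModel;

public class PersonViewModel : INotifyPropertyChanged
{
    private readonly Person _model;

    public PersonViewModel(Person model)
    {
        _model = model;
    }

    // Each exposed property delegates to the Model, preserving any
    // business logic it contains, and bubbles the change to the UI.
    public string FirstName
    {
        get { return _model.FirstName; }
        set
        {
            if (_model.FirstName == value) return;
            _model.FirstName = value;
            OnPropertyChanged("FirstName");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}
```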

Child objects or collections are then wrapped in their own ViewModel class and exposed to the UI, thereby, supporting deep object graphs in a consistent manner.

Truthfully, I only see this as a reasonable approach if you are using rich business objects for your Model. Simple DTOs don't contain any additional logic that provides value when delegating. Of course, it makes it easy to swap them under-the-hood and possibly saves a step when interacting with a service to persist changes, but I don't see this as good enough justification.

Duplication
The other option is to use the Model as a DTO and "load" the ViewModel state from the DTO when retrieved. The ViewModel would then manage the state internally via data-binding with the View. For persistence, the ViewModel would create a new DTO, populate it and execute whatever method(s), presumably a service call, to save the state.

This choice suffers, as the previous one does, from the additional coding required to "re"implement the various properties for the View to consume. However, it does provide clean separation of layers, as the View knows absolutely nothing about the Model. And, if I'm not mistaken, one of the diagrams for the MVVM pattern I've seen shows exactly that.

So, the question in my mind is what purpose a ViewModel should serve and which approach best suits that role. While I hate recoding properties that have already been implemented in my Model object, there are some very good reasons why I think this is the right approach:

Separation of layers. The View knows nothing about the Model so changes to the Model don't affect the View.

The ViewModel is there to serve a particular use-case which, following solid OO principles, should be behavior-driven, while our Model object (given today's trend toward entity-based data access technologies) is most likely data-centric.

The ViewModel assumes the responsibility of reformatting, repackaging, re... the data obtained from the Model into a form required by the UI.

The ViewModel adds INotifyPropertyChanged support and alleviates this responsibility from the Model. This may not be the case if dealing with rich model objects, where we may want to delegate and have the ViewModel bubble the PropertyChanged event.

As the last point demonstrates, heading down one path often turns on itself and takes us right back to the original question/debate. I'm still forming my opinion and welcome your thoughts, comments and observations to help me along this journey.

Thursday, February 18, 2010

I am only a few chapters in but I'm already adding "Clean Code: A Handbook of Agile Software Craftsmanship" by Robert C. Martin (available at Amazon.com) to my list of required reading for my development teams. Despite the fact that Robert bases his examples on Java, I believe the concepts ring true to the .NET family of languages and every lesson, guideline and recommendation holds value for any developer.