Introduction

.NET 2.0 introduces many new features in Windows Forms and in deployment technology that take Smart Client development to the next level. Previously, we had to worry about far too many issues while developing a Smart Client, including rich (owner-drawn) controls, multithreading, and auto-update. All of these have been taken care of in the 2.0 Framework. However, most existing web-based systems, including Web Services and web sites, are developed in .NET 1.1. So, in this article, we will look at a real-life scenario where a .NET 2.0 based Smart Client uses a well-engineered, Service Oriented Architecture (SOA) based collection of web services developed using .NET 1.1. The sample application takes you from early design issues to deployment complexities, and shows how you can make a Smart Client talk smartly to a collection of web services that are themselves well engineered using SOA. The server side fully utilizes Enterprise Library June 2005, using all of its application blocks. The resulting product is a practical example of best practices in current technologies and architecture design trends.

Feature Walkthrough

Login

Now this is something that you don't see every day. This looks like the login screen of a preview release of Longhorn, which has since been renamed Windows Vista. The final login screen is quite different from this one, but I really liked this design and decided to use it in my own apps.

When you click the Login button, a background thread is spawned which connects to the web service and passes the credentials. As the login runs in the background, the UI remains active and you can cancel the login and exit. We will see how to make a truly responsive UI by calling web services asynchronously using the new BackgroundWorker component.
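As a sketch of the idea, here is how a login call can be moved off the UI thread with BackgroundWorker. The proxy call in the comment is a placeholder, not the article's actual class:

```csharp
using System;
using System.ComponentModel;

// Sketch: moving the login web service call off the UI thread with
// BackgroundWorker. "AuthServiceProxy" in the comment is a placeholder
// name, not the article's actual proxy class.
public class LoginWorker
{
    private readonly BackgroundWorker worker = new BackgroundWorker();

    // Exposed so callers can see the outcome of the background login.
    public volatile bool Completed;
    public bool LoginSucceeded;

    public LoginWorker()
    {
        worker.WorkerSupportsCancellation = true;
        worker.DoWork += worker_DoWork;
        worker.RunWorkerCompleted += worker_RunWorkerCompleted;
    }

    public void BeginLogin(string user, string passwordHash)
    {
        // DoWork runs on a thread-pool thread, so the UI stays responsive.
        worker.RunWorkerAsync(new string[] { user, passwordHash });
    }

    public void CancelLogin()
    {
        worker.CancelAsync();
    }

    private void worker_DoWork(object sender, DoWorkEventArgs e)
    {
        string[] credentials = (string[])e.Argument;
        // Placeholder for the real call, e.g.:
        // e.Result = new AuthServiceProxy().Authenticate(credentials[0], credentials[1]);
        e.Result = credentials.Length == 2;
    }

    private void worker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
    {
        // When started from a WinForms UI thread, this runs on the UI
        // thread, so it is safe to close the login form or show errors here.
        LoginSucceeded = e.Error == null && (bool)e.Result;
        Completed = true;
    }
}
```

Because the Cancel button stays responsive, the user can abandon the login at any time with CancelLogin.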

Fig: Login in progress

Loading Required Data Asynchronously

When the main form loads, it needs to load the course list and the student list. As this takes time and we cannot keep the user stuck on the UI, it loads in the background using two different asynchronous web service calls. You can develop small controls in such a way that each control itself manages its asynchronous loading by utilizing the convenient BackgroundWorker component.

The UI remains fully responsive while the application is loading. As this is a long task, the user may want to explore the menus or do some other chores while the application gets ready to work.

Tabbed Interface: Document-centric View

The application follows an enhanced version of the Model-View-Controller architecture, which we will discuss here. The architecture I have implemented is similar to what you see in Microsoft Office, Visual Studio and other desktop applications that provide Automation support. You will see how you can leverage this design idea to create a truly decoupled, extensible and scriptable application that significantly reduces your development and maintenance time.

Each tab represents a Document, which can be a Student Profile, an Options module, a Send/Receive module or any other module that inherits from the Document class.

Modules can load their data asynchronously. For example, here the course list of a student loads asynchronously. Although the basic profile information of all students is loaded during application startup, the details are loaded on demand, saving initial bandwidth consumption and reducing the overall memory footprint of the application.

Owner-drawn Tree View

We will explore the new WinForms features of .NET 2.0 which make most of the controls fully owner-drawn. You can now create an owner-drawn TreeView, ListView, ListBox and almost any other control. Most controls now offer a virtual DrawItem method and/or event which you can override or subscribe to in order to provide custom rendering. You can also define whether the items' height is variable or fixed. When you have the power to draw the items yourself, you can do anything inside a node: show pictures along with text, draw text in different fonts and colors inside the same node, make nodes hierarchical, and so on.

The Course TreeView on the left is owner-drawn. You can see that the nodes provide multiline content with a gradient background. It also renders multiple properties of an object inside one node. We will see how to do this later on.

Offline Capability

Offline capability is the best feature of Smart Clients. Smart Clients can download data that you need to work with later on and then disconnect from the information source. You can then work on the data while you are mobile or out of office. Later on, you can connect to the information store and send/receive the changes that you have made during the offline period.

One example: you can open the student data you want to work with, then disconnect from the web service or go offline. You can then modify these students by editing profiles, adding/dropping courses, modifying accounts and so on. Later, when you get connected or go online, click the "Send/Receive" button and the changes are synchronized back to the server.

Both you and the server get the latest and up-to-date information.

You can enhance the offline experience further by providing local storage of information so that users can shut down the application after going offline and can later on start the application to work with the local data. This can easily be done by serializing the course and student collection to some local XML file.
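A minimal sketch of such local persistence with XmlSerializer follows. The Student class here is a simplified stand-in for the application's real StudentModel:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// Sketch: persisting the offline working set to a local XML file with
// XmlSerializer. "Student" is a simplified stand-in for the real model.
public class Student
{
    public string FirstName;
    public string LastName;
}

public static class OfflineStore
{
    public static void Save(List<Student> students, string path)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(List<Student>));
        using (StreamWriter writer = new StreamWriter(path))
            serializer.Serialize(writer, students);
    }

    public static List<Student> Load(string path)
    {
        // A missing file simply means no offline data was saved yet.
        if (!File.Exists(path)) return new List<Student>();
        XmlSerializer serializer = new XmlSerializer(typeof(List<Student>));
        using (StreamReader reader = new StreamReader(path))
            return (List<Student>)serializer.Deserialize(reader);
    }
}
```

On shutdown you would call Save with the in-memory collections; on startup, Load restores them before any web service call is attempted.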

Going Online

Here you see the Outlook 2003 style Send/Receive module. This module collects the pending saves and sends them to the server. It also fetches the latest information for the students you have modified.

In case of any errors received from the server, you can see them on the second tab. This tab uses an owner-drawn ListBox control to show both the icon and the text of each error message.

For such offline synchronization, you need to handle concurrency on the server side. We will discuss the server architecture and how we have implemented concurrency later on.

Being Smart Client

Requirements

Local Resources and User Experience

What all Smart Client applications share is the ability to exploit local resources such as hardware for storage, processing or data capture: compact flash memory, CPUs and scanners, for example. Smart Client solutions offer high-fidelity end-user experiences by taking full advantage of all that the Microsoft® Windows® platform has to offer. Examples of well-known Smart Client applications are Word, Excel, MS Money, and even PC games such as Half-Life 2. Unlike "browser-based" applications such as Amazon.com or eBay.com, Smart Client applications live on your PC, laptop, Tablet PC, or smart device.

Connected

Smart Client applications are able to readily connect to and exchange data with systems across the enterprise or the internet. Web services allow Smart Client solutions to utilize industry standard protocols such as XML, HTTP and SOAP to exchange information with any type of remote system.

Offline Capable

Smart Client applications work whether connected to the Internet or not. Microsoft® Money and Microsoft® Outlook are two great examples. Smart Clients can take advantage of local caching and processing to enable operations during periods of no or intermittent network connectivity. Offline capabilities are not only useful in mobile scenarios, however; desktop solutions can take advantage of offline architecture to update backend systems on background threads, keeping the user interface responsive and improving the overall end-user experience. This architecture can also provide cost and performance benefits, since the user interface need not be shuttled to the Smart Client from a server. Because Smart Clients exchange just the data they need with other systems in the background, the volume of data exchanged is reduced (even on hard-wired client systems, this bandwidth reduction can bring huge benefits). This in turn increases the responsiveness of the user interface (UI), since the UI is not rendered by a remote system.

Intelligent Deployment and Update

In the past, traditional client applications have been difficult to deploy and update. It was not uncommon to install one application only to have it break another. Issues such as "DLL Hell" made installing and maintaining client applications difficult and frustrating. The Updater Application Block for .NET from the Patterns & Practices team provides prescriptive guidance to those who wish to create self-updating .NET Framework-based applications that are to be deployed across multiple desktops. The release of Visual Studio 2005 and the .NET Framework 2.0 ushers in a new era of simplified Smart Client deployment and updating with a new deploy-and-update technology known as ClickOnce.

(The above text is copied and shortened from the MSDN site.)

How is This a Smart Client

Local Resources and User Experience: The application downloads from a website using the ClickOnce deployment feature of .NET 2.0 and runs locally. It fully utilizes local resources, multithreading and the graphics card's power, giving you the best user experience .NET 2.0 has to offer.

Connected: The application works by calling several web services which act as the backend of the client.

Offline Capable: You can load the student data you want to work with and go offline. You can make changes to the student profiles, add/drop courses, modify accounts etc. and then synchronize them back when you go online.

Intelligent Deployment and Update: The application uses Updater Application Block 2.0 to provide an auto-update feature. Whenever I release a new version or deploy bug fixes to a central server, all users of the application automatically get the update behind the scenes. This saves every user from going to the website and downloading the new version each time I release something new. It also allows me to deliver bug fixes to everyone within a very short time.

Multithreaded: Another favorite requirement of mine for making a client really Smart is to make the application fully multithreaded. Smart Clients should always be responsive. They must not get stuck while downloading or uploading data. Users keep working without knowing that something big is going on in the background. For example, while one student's course list is loading, you can move around the course list on the left or work on another student's data which is already loaded.

Crash Proof: A Smart Client becomes a Dumb Client when it crashes in front of a user showing the dreaded "Continue" or "Quit" dialog box. In order to make your app truly Smart, you need to catch any unhandled error and publish the error safely. In this app, I will show you how this has been done using .NET 2.0's Unhandled Exception trap feature.
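A sketch of how such a trap can be wired up in a .NET 2.0 WinForms application follows. The logging call is a placeholder, not the article's actual error publisher:

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

// Sketch: trapping unhandled exceptions so the user never sees the
// default "Continue/Quit" dialog. PublishError is a placeholder for
// whatever logging/publishing mechanism the application uses.
static class CrashGuard
{
    public static void Install()
    {
        // Route exceptions thrown on the UI thread to our handler
        // instead of the built-in error dialog. Must be called before
        // any Form is created.
        Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
        Application.ThreadException += OnUiThreadException;

        // Exceptions on non-UI threads still surface here.
        AppDomain.CurrentDomain.UnhandledException += OnDomainException;
    }

    static void OnUiThreadException(object sender, ThreadExceptionEventArgs e)
    {
        PublishError(e.Exception);
        MessageBox.Show("An unexpected error occurred. It has been logged.");
    }

    static void OnDomainException(object sender, UnhandledExceptionEventArgs e)
    {
        PublishError(e.ExceptionObject as Exception);
    }

    static void PublishError(Exception ex)
    {
        // Placeholder: write to a log file, the event log, or publish
        // to the server through a web service.
    }
}
```

Calling CrashGuard.Install() as the first line of Main is enough; from then on, no exception reaches the user as a crash dialog.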

The Client

The greatest feature of this application is that it provides an automation model similar to what you see in Microsoft Office applications. As a result, you can write scripts to automate the application, and you can also develop custom plug-ins. The idea and implementation of this architecture is fairly large and is explained in a separate article.

The basic idea is to make an observable Object Model that all the UI modules, or Views, subscribe to in order to provide services based on changes made to the object model. For example, as soon as you add a new Document object to the Documents collection, the Tab module catches the Add event and creates a new tab. The Views also reflect changes made on the UI back to the object model. For example, when the user switches a tab, the view sets the Selected property to true on the corresponding Document object.

This is the object model of the Client application. _Application is the starting point. This is not WinForms' Application class; it is my own Application class with a "_" prefix. It contains collections of StudentModel, CourseModel, and Document. These are all observable classes which provide notification through a variety of events. They all inherit from a class named ItemBase which provides the foundation for an observable object model. ItemBase exposes many events which you can subscribe to in order to get notified whenever something happens to the object.

When you extend your object from ItemBase, you gain the ability to observe changes made to your objects. Your derived objects can call ItemBase's NotifyChange method to let others know when the object changes. For example, let's look at the StudentModel, which listens to any changes made to its properties. Whenever there is a change, it calls the NotifyChange method to let others know that it has changed.
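The pattern can be sketched like this. This is a minimal ItemBase; the article's real class exposes many more events than the single one shown here:

```csharp
using System;

// Sketch of the observable-object idea: a base class with a change
// event, and a model that calls NotifyChange from its property setters.
public class ItemBase
{
    public event EventHandler OnChange;

    protected void NotifyChange()
    {
        if (OnChange != null) OnChange(this, EventArgs.Empty);
    }
}

public class StudentModel : ItemBase
{
    private string firstName;

    public string FirstName
    {
        get { return firstName; }
        set
        {
            if (firstName == value) return; // no real change, no event
            firstName = value;
            NotifyChange(); // every subscriber hears about the edit
        }
    }
}
```

Any module, from the profile user control to the title bar, can now subscribe to OnChange without knowing who caused the edit.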

So, this gives you the following features:

Whenever you make changes in the Student object, e.g. set its FirstName property to "Misho" (my name), the "First Name" text box on the UI gets updated. This happens because the StudentProfileUserControl which contains the text box has already subscribed to receive events from the StudentModel and has dedicated its life solely to reflecting changes in the object model on the UI.

Similarly, whenever the user types anything in the first name text box, the UserControl immediately updates the FirstName property of the StudentModel. As a result, the OnChange event is fired and another module which is Far Far Away (Shrek 2) receives the notification and updates the title bar of the application to reflect the current name of the student.

So, you can now have objects which can broadcast events. Any module can at any time subscribe or unsubscribe to a student object and provide some service without ever knowing of the existence of the other modules. This makes your program's architecture extremely simple and loosely coupled, as you only have to think about how to listen to changes in the global object model and how to update that object model. You never think: hmm... there's a title bar which needs to be changed when the user types something in the "First Name" text box which is inside a StudentProfile UserControl. Hmm... the user control is in another DLL. How can I capture the change notification on the text box and carry it all the way to the top Form? Hmm... looks like we need to expose a public event on the user control and we need to subscribe to it somehow from the Form. Now from the Form, how do I get a reference to the user control which is hosted inside another control, which is inside another control, which is also inside another control? Hmm... So, we have to add events to all these nested user controls in order to bubble up the event raised deep inside the StudentProfile control. Life is so enjoyable, isn't it?

Instead of doing all this, what you can do is make the Form subscribe to the OnChange event of the StudentCollection. The StudentProfile control will reflect changes from the UI to the Student object and raise the event. The Form will instantly capture it and update the title bar. But this requires you to have a global object model just like what you see in MS Word's object model.

Let's see the difference between how we normally design desktop applications and how an automation model can dramatically change the way we develop them:

Making such an object model is quite easy when you have the ItemBase class, which I have perfected over the years, project after project. All you need to do is extend it, pass the parent to the base in the constructor, and call the methods on the base whenever you like. For example, the StudentModel class could hardly be simpler.

Similarly, there is a CollectionBase class which is the base of all observable collections. This class provides you with two major features:

Notification of item: add, remove, clear.

Notification from any of the contained item.

The second feature is the most useful one. If you have a StudentCollection class which contains 1000 Student objects, you cannot just subscribe to 1000 objects' OnChange events. Instead, you subscribe to the OnItemChange event of the collection, which is automatically fired whenever a child object raises its OnChange event. The CollectionBase class handles all the difficulties of event subscription and propagation from its child objects for your convenience.

ItemBase provides another feature which is listening to a child collection. For example, a Course object has a collection of Section objects which in turn has a collection of SectionRoutine objects. The Course object, being an ItemBase, can receive events raised from the SectionCollection which inherits CollectionBase. As a result, Course object can receive events from Section objects contained inside the SectionCollection.
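The event-forwarding idea behind CollectionBase can be sketched like this. The class names are simplified stand-ins; the real classes expose many more events (add, remove, clear, and so on):

```csharp
using System;
using System.Collections.Generic;

// Sketch: a collection that re-broadcasts change events from its items
// as a single OnItemChange event, so callers subscribe once instead of
// subscribing to every contained object.
public class ObservableItem
{
    public event EventHandler OnChange;

    public void NotifyChange()
    {
        if (OnChange != null) OnChange(this, EventArgs.Empty);
    }
}

public class ObservableCollection<T> where T : ObservableItem
{
    private readonly List<T> items = new List<T>();

    // Raised whenever any contained item raises its own OnChange.
    public event EventHandler OnItemChange;

    public void Add(T item)
    {
        item.OnChange += ForwardItemChange; // hook once, on add
        items.Add(item);
    }

    public void Remove(T item)
    {
        item.OnChange -= ForwardItemChange; // unhook to avoid leaks
        items.Remove(item);
    }

    private void ForwardItemChange(object sender, EventArgs e)
    {
        // The original item stays the sender, so subscribers know
        // exactly which child changed.
        if (OnItemChange != null) OnItemChange(sender, e);
    }
}
```

Nesting such collections (a Course holding a SectionCollection, for instance) gives the cascading notification the article describes.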

Command Pattern

For desktop applications, this is probably the best pattern of all. The Command Pattern has many variants, but in its simplest form a command is just an object with an Execute method behind a common interface.

I have used the Command Pattern for all activities that involve a web service call and a UI update. Here is a list of the commands I have used, which will give you an idea of how to break your code into small commands:

LoadAllCourseCommand - Load all courses and sections by calling Course web service and populate the _Application.Courses collection.

LoadStudentDetailCommand - Load details of a particular student. Called when you load a student for editing.

SaveStudentCommand - Called by the "Send/Receive" module to save a student's information to the server.

CloseAllDocumentsCommand - Called by the Window->Close All menu item to close all open documents.

CloseCurrentDocumentCommand - Called by the Window->Close menu item and the Tab->Close menu item in order to close the current document.

ExitCommand - Exits application.

So, you see, all the user activities are translated into some form of command. Commands are like small packets of instructions. Normally each command is self-sufficient: it does everything, like calling the web service, fetching data, updating the object model and optionally updating the UI. This makes the commands truly reusable and very easy to call from anywhere you like. It also reduces duplicate code in different modules which provide similar functionality.
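The commands above can be sketched with a minimal interface. The class below is a toy stand-in; the real commands call web services and update the object model as described:

```csharp
using System;

// Sketch of the Command idea: each user activity is a small object with
// an Execute method behind one common interface.
public interface ICommand
{
    void Execute();
}

public class CloseAllDocumentsCommand : ICommand
{
    public int DocumentsClosed; // recorded so the effect is observable

    public void Execute()
    {
        // The real command would iterate _Application.Documents and
        // close each one; here we only record that it ran.
        DocumentsClosed = 3;
    }
}

public class CommandInvoker
{
    public int CommandsRun;

    // A single choke point: logging, error handling or undo support
    // can be added here once, for every command.
    public void Run(ICommand command)
    {
        command.Execute();
        CommandsRun++;
    }
}
```

Because every activity goes through the same interface, a script or plug-in can trigger exactly the same commands the menus do.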

The Send/Receive document contains a list of Students that need to be synchronized. The SendReceiveUI inherits from DocumentUIBase. This is a base user control for all Document type user controls. DocumentUIBase provides a reference to the Document which it needs to show.

Here worker is a BackgroundWorker component, one of .NET 2.0's marvels. This component provides a very convenient way to add multithreading to your UI. When you put this component on your Form or UserControl, you get three events:

DoWork - where you do the actual work. This method is executed on a different thread, and you cannot access any UI element from this event or from any function that is called, directly or indirectly, from this event.

ProgressChanged - This event is fired when you call the ReportProgress method on the worker component. This event is marshaled back to the UI thread, so you are free to use the UI components. However, what you do here blocks the main UI thread, so you should do only a very small amount of work here. Also remember not to call ReportProgress too frequently.

RunWorkerCompleted - This is fired when the execution of the DoWork event is complete. This function is called on the UI thread, so you are free to use UI controls.

Here's the DoWork handler definition:

private void worker_DoWork(object sender, DoWorkEventArgs e)
{
    // This method will run on a thread other than the UI thread.
    // Be sure not to manipulate any Windows Forms controls created
    // on the UI thread from this method.
    this.BackgroundThreadWork();
}

I generally follow the practice of including the word "Background" in the methods which are called from here. This gives me a visual reminder that I am not on the UI thread. Any other function called from such a function is also marked with the "Background" word.

The actual work is performed in this method. Let's see what the Send/Receive module does:

In this method, we call the web services for doing the actual work. As this method is running in a background thread, the UI does not get blocked while the web service calls are in progress.

You see, the tasks are actually encapsulated inside small commands. These commands make the web service call and update the object model. I need not worry about refreshing the UI or repopulating the student list drop-down from here, because the UI modules update themselves whenever the object model is updated. I need not think about any other part of the application at all and can focus on the duties relevant to this context only.

So you see, whenever you add a new module to your application, you need not change existing modules' code in order to utilize the new module. The new module just subscribes to the object model and does its job whenever some events are raised. You can also tear a module off from any part of the application without worrying about its dependency on other modules. There is no dependency. Everyone depends only on the object model. Everyone is listening to the object model for its own purpose. Nobody cares who comes in and who goes away.

This is the true power of an automation supported object model.

Owner-drawn ListBox

You have seen in the "Send/Receive" module's screenshot that a simple ListBox is showing both an icon and text. This is possible if you tell the ListBox not to do any rendering itself and to let you take care of the rendering. This is called owner drawing. .NET 2.0's ListBox control has the DrawMode property which can have the following enumerated values:

Normal - The framework does the default rendering of list box items which is plain text.

OwnerDrawFixed - You get to draw the items, but each item has a fixed height.

OwnerDrawVariable - You get to decide the height of each item and draw each item as you like.

After setting the drawing mode, subscribe to the DrawItem event and do your drawing.

You get everything you need to know about the drawing context from the parameter e, which gives you the Graphics and Bounds of the current item. You can see in the code that, within the bounds, we can use the Graphics class to draw whatever we like. First we fill the area with white, which cleans it. Then we check whether this is the current item or not and draw the focus rectangle accordingly. After that we draw an image, followed by the text of the item.
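A minimal sketch of such a handler, following the steps just described, might look like this. The placeholder icon and layout offsets are illustrative, not the article's actual values:

```csharp
using System.Drawing;
using System.Windows.Forms;

// Sketch of an OwnerDrawFixed ListBox's DrawItem handler: fill the
// background, mark the selected item, then draw an icon followed by
// the item text. "errorIcon" is a placeholder image.
public class ErrorListBox : ListBox
{
    private readonly Image errorIcon = new Bitmap(16, 16); // placeholder icon

    public ErrorListBox()
    {
        DrawMode = DrawMode.OwnerDrawFixed;
        DrawItem += OnDrawListItem;
    }

    private void OnDrawListItem(object sender, DrawItemEventArgs e)
    {
        if (e.Index < 0) return; // nothing to draw for an empty list

        // 1. Clean the item area.
        e.Graphics.FillRectangle(Brushes.White, e.Bounds);

        // 2. Show which item is selected.
        if ((e.State & DrawItemState.Selected) == DrawItemState.Selected)
            e.DrawFocusRectangle();

        // 3. Icon first, then the message text beside it.
        e.Graphics.DrawImage(errorIcon, e.Bounds.Left + 2, e.Bounds.Top + 2, 16, 16);
        e.Graphics.DrawString(Items[e.Index].ToString(), Font,
            Brushes.Black, e.Bounds.Left + 22, e.Bounds.Top + 2);
    }
}
```

The same three steps (clear, mark selection, draw content) apply to any owner-drawn list control.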

Owner-drawn TreeView

Just like the owner-drawn ListBox, we can make the TreeView control owner-drawn and do whatever we like with the nodes. For example, the screenshot shows that we are rendering a gradient background and multiline text on a node. This was not possible with .NET 1.1 unless you went under the hood and worked with the Win32 API. But now, .NET 2.0 lets you do it all the managed way, which is the easy way.

First, we override the OnDrawNode method which does the actual drawing.

Most of the code you see here deals with the current node state and draws the focus rectangle or a gray rectangle. At the end, depending on the current node object, we call the actual render function.

We have the full power to draw text and icons using any style or font in this method. This gives us complete control over the rendering of the TreeView control, while fully reusing its functionality as it is.
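As a hedged sketch of the approach (gradient colors, node height and the idea of pulling a second line of text from the node's Tag are all illustrative choices, not the article's actual code):

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Windows.Forms;

// Sketch of an owner-drawn TreeView: a gradient background plus two
// lines of text per node. The second line is read from Node.Tag as an
// illustrative convention.
public class CourseTreeView : TreeView
{
    public CourseTreeView()
    {
        DrawMode = TreeViewDrawMode.OwnerDrawAll;
        ItemHeight = 36; // room for two lines of text
    }

    protected override void OnDrawNode(DrawTreeNodeEventArgs e)
    {
        Rectangle bounds = e.Bounds;
        if (bounds.Width <= 0 || bounds.Height <= 0) return;

        // Gradient background for every node.
        using (Brush back = new LinearGradientBrush(
            bounds, Color.White, Color.LightSteelBlue, 90f))
        {
            e.Graphics.FillRectangle(back, bounds);
        }

        if ((e.State & TreeNodeStates.Selected) == TreeNodeStates.Selected)
            ControlPaint.DrawFocusRectangle(e.Graphics, bounds);

        // First line: node text; second line: extra detail from Tag.
        e.Graphics.DrawString(e.Node.Text, Font, Brushes.Black,
            bounds.Left + 2, bounds.Top + 2);
        string detail = e.Node.Tag as string;
        if (detail != null)
            e.Graphics.DrawString(detail, Font, Brushes.Gray,
                bounds.Left + 2, bounds.Top + 18);
    }
}
```

Setting ItemHeight up front is the piece people usually miss; without it the second line of text is clipped.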

Web Service Connection Configuration

When you add a reference to a web service, the proxy code is generated with the web service location embedded in the code. So, during development, if you add the web service reference from the local computer, the web service reference is set to localhost. Before going to production, you then need to change all the web service proxies so that they point to the production server.

Another problem is, what if your web service locations are not static? What if you need to make the URL configurable from your own configuration file, instead of fixing it from Visual Studio?

I solve it the following way:

I store the server name in a configuration variable.

I use a ServiceProxyFactory class which gives me the web service proxy reference I need, instead of directly instantiating a web service proxy.

So, basically I am following the Factory pattern, where a factory decides which proxy to return, and it also prepares the reference a bit before returning it.
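A sketch of what such a factory can look like follows. "StudentService" stands in for a Visual Studio generated proxy (whose SoapHttpClientProtocol base class exposes a settable Url property), and Config stands in for the application's own configuration reader:

```csharp
using System;

// Sketch of the Factory pattern for web service proxies: one place
// builds every proxy and overrides the design-time URL with a
// configured one. All names here are illustrative stand-ins.
public class StudentService
{
    public string Url; // generated proxies expose this as a property
}

public static class Config
{
    // In the real application this value comes from the configuration file.
    public static string ServerUrl = "http://localhost";
}

public static class ServiceProxyFactory
{
    public static StudentService CreateStudentService()
    {
        StudentService proxy = new StudentService();
        // Override the design-time URL baked in by "Add Web Reference".
        proxy.Url = Config.ServerUrl + "/StudentService.asmx";
        return proxy;
    }
}
```

Switching between development, staging and production then becomes a one-line configuration change instead of a rebuild.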

The Server - Service Oriented Architecture

Requirements

A service is a self-contained, stateless function which accepts one or more requests and returns one or more responses through a well-defined interface. Services can also perform discrete units of work such as editing and processing a transaction. Services do not depend on the state of other functions or processes. The technology used to provide the service does not form part of this definition.

Moreover, a service needs to be stateless:

Not depending on any pre-existing condition. In an SOA, services do not depend on the condition of any other service. They receive all information needed to provide a response from the request. Given the statelessness of services, consumers can sequence (orchestrate) them into numerous sequences (sometimes referred to as pipelines) to perform application logic.

Moreover, SOAs are:

Unlike traditional object-oriented architectures, SOAs are composed of loosely coupled, highly interoperable application services. Because these services interoperate across different development technologies (such as Java and .NET), the software components become very reusable.

SOA provides a methodology and framework for documenting enterprise capabilities and can support integration and consolidation activities.

The Sample SOA

There are four services in the web project included in the source code.

Enterprise Library

Enterprise Library is a major new release of the Microsoft Patterns & Practices application blocks. Application blocks are reusable software components designed to assist developers with common enterprise development challenges. Enterprise Library brings together new releases of the most widely used application blocks into a single integrated download.

Extensibility. All application blocks include defined extensibility points that allow developers to customize the behavior of the application blocks by adding in their own code.

Ease of use. Enterprise Library offers numerous usability improvements, including a graphical configuration tool, a simpler installation procedure, and clearer and more complete documentation and samples.

Integration. Enterprise Library application blocks are designed to work well together and are tested to make sure that they do. It is also possible to use the application blocks individually (except in cases where the blocks depend on each other, such as on the Configuration Application Block).

Application blocks help address the common problems that developers face from one project to the next. They have been designed to encapsulate the Microsoft recommended best practices for .NET applications. They can be added into .NET applications quickly and easily. For example, the Data Access Application Block provides access to the most frequently used features of ADO.NET in simple-to-use classes, boosting developer productivity. It also addresses scenarios not directly supported by the underlying class libraries. (Different applications have different requirements and you will not find that every application block is useful in every application that you build. Before using an application block, you should have a good understanding of your application requirements and of the scenarios that the application block is designed to address.)

It's a great block to use in your application. However, if you already have a custom database-based authentication & authorization implementation, you will have to do the following in order to replace your home-grown A&A code with the industry standard practices provided by the Enterprise Library:

If you have a user class, extract the password field from it. Keep the user name field.

Map the user name field to the "Users" table created by the database script provided with the Security Application Block. Create a foreign key from your user table to EL's Users table on the User Name column.

Change your authentication code to use the AuthenticationProvider available in EL.

Replace your home-grown role-based authorization with EL's role-based authorization.

But checking just the user's role membership is not sufficient. In complex applications, we need to provide task-based authorization, which means checking whether the user has permission to perform a particular task. This is done by authorizing the user against a named rule rather than checking a role directly.
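As a sketch of the idea: the Security Application Block exposes this through its authorization provider's Authorize call against a named rule; the tiny hard-coded rule table below only illustrates the concept without the library:

```csharp
using System;
using System.Security.Principal;

// Sketch of task-based authorization: instead of asking "is the user in
// role X", code asks "may the user perform task Y", and a rule maps
// tasks to roles. Task and role names here are illustrative.
public static class TaskAuthorizer
{
    public static bool Authorize(IPrincipal user, string task)
    {
        // In Enterprise Library these rules live in the configuration
        // file; this switch is a stand-in for that rule store.
        switch (task)
        {
            case "EditStudentProfile":
                return user.IsInRole("Registrar") || user.IsInRole("Admin");
            case "ModifyAccounts":
                return user.IsInRole("Accounts");
            default:
                return false; // unknown tasks are always denied
        }
    }
}
```

The payoff is that calling code names the task, so the task-to-role mapping can change without touching the call sites.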

The Security Console application provided with the source code allows you to create users, create roles, and assign users to roles. More on this later.

However, defining a rule is a bit different. My first expectation was that a rule would be stored in the database so that it could be changed without redeploying. Instead, it turned out to be static, defined in the application configuration file using the Enterprise Library Configuration application.

Sending Password Over the Wire

If you are not using HTTPS/SSL or Integrated Windows Authentication, you face the problem of sending the password over the wire. You cannot send the password as plain text because anyone can eavesdrop and steal it. So, you have to send an MD5/SHA hash of the password to the web service. The simplest way to do this is to send the credentials in a SOAP header. Here's a class called CredentialSoapHeader which contains the user credentials:

public class CredentialSoapHeader : SoapHeader
{
    /// <summary>
    /// User name of the user
    /// </summary>
    public string Username;

    /// <summary>
    /// Hash of password, do not send password clear text over the wire
    /// </summary>
    public string PasswordHash;

    /// <summary>
    /// Version no of client. If it does not match with server's version no,
    /// no request is served
    /// </summary>
    public string VersionNo;

    /// <summary>
    /// Temporary security key generated for a particular client
    /// </summary>
    public string SecurityKey;
}

There are two ways you can pass authentication information to the server:

Always send user name and password hash on every web service call.

After authenticating for the first time, the server generates a security token and you can use that token for later calls. However, this is not fully secure either, as anyone can capture the user name and security token using a network sniffer and then impersonate the user.

On the client side, we have seen that we use a factory for generating the web service proxy, and the factory prepares the proxy reference with the credentials.
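The hashing step mentioned above can be sketched like this, using SHA-1 from the framework. The choice of SHA-1 and hex encoding is illustrative; the article allows MD5 or another hash as well:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Sketch: computing the password hash that travels in the SOAP header,
// so the clear-text password never goes over the wire.
public static class PasswordHasher
{
    public static string Hash(string password)
    {
        using (SHA1 sha = SHA1.Create())
        {
            byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(password));
            // Hex-encode so the hash is safe to place in an XML header.
            StringBuilder hex = new StringBuilder(digest.Length * 2);
            foreach (byte b in digest) hex.Append(b.ToString("x2"));
            return hex.ToString();
        }
    }
}
```

The factory would compute this once at login and copy it into the CredentialSoapHeader's PasswordHash field on every proxy it hands out.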

Enforcing Update from the Server

One interesting trick you may have noticed in the above code is that we pass VersionNo inside the SoapHeader. When you deploy Smart Clients, if you do not have an auto-update feature which forces users to update the application before using it, you will run into the problem that people won't update the application even though you tell them to. As a result, they may run old code which corrupts data on the server and creates problems for themselves and others. Imagine a scenario where you have fixed a delicate account calculation and asked all the Accounts people to get the latest version, but someone forgets and continues to use the old version, which miscalculates the account statement. This becomes a maintenance nightmare for you.

The solution is to forcibly reject any client which is not up to date. In the client application, store a configuration variable containing the version number. On the server side, store the allowed version number in a configuration file. Every request carries the SOAP header which contains the VersionNo. Match these numbers against each other; if they don't match, reject the request. This prevents users from logging in with an unsupported version of the client, or from performing any operation on the server if they are already logged in.

if( this.Credentials.VersionNo != Configuration.Instance.VersionNo )
    throw new SoapException("The version you are using" +
        " is not compatible with the server.",
        SoapException.VersionMismatchFaultCode, "Security");

Configuring Security Application Block

Before getting started, make sure you have added the Data Access Application Block to the configuration and mapped it properly to your database. This database will be used everywhere below.

Step 1: Create an Authentication Provider. I am using the Database Provider. You can use other options like the Active Directory Authentication Provider; see the EL documentation for details. After adding the Database Provider, select the database where the authentication information is stored. I am using the same SmartInstitute database for storing this configuration.

Step 2: Find the SecurityDatabase.sql file in EL's source code. This is the SQL script which prepares the database. If you want to use the default "Security" database, you can run it as-is. But if you want to use your own database, open the file in an editor and perform the following search & replace:

Replace N'Security' with N'YourDatabaseName'

Replace [Security] with [YourDatabaseName]

Don't run the file yet! It will drop your database. First remove the DROP DATABASE command from the beginning of the file. Here's how it should look:

Step 5: Add a Rules Provider. This one is a bit different. Right click on the provider name and create a new rule. Here's how the rule screen looks:

Here you define a static expression of the roles and identities which match a rule. If a user and his roles satisfy the rule's expression, the user is authorized for that rule. We have already seen how to authorize against a rule.
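Checking a user against a configured rule looks roughly like this. This is a sketch against the June 2005 Security block API; the rule name "AllowSaveStudent" is an invented example:

```csharp
using System.Security.Principal;
using System.Web.Services.Protocols;
using Microsoft.Practices.EnterpriseLibrary.Security;

public class AuthorizationExample
{
    public static void DemandRule(IIdentity identity, string[] roles)
    {
        // Build a principal from the authenticated identity and its roles.
        IPrincipal principal = new GenericPrincipal(identity, roles);

        // Ask the configured rules provider whether this principal
        // satisfies the rule's role/identity expression.
        IAuthorizationProvider provider =
            AuthorizationFactory.GetAuthorizationProvider();

        if (!provider.Authorize(principal, "AllowSaveStudent"))
            throw new SoapException("You are not authorized to perform " +
                "this operation.", SoapException.ClientFaultCode, "Security");
    }
}
```

A guard like this at the top of each web method keeps the authorization check in one line per operation.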

Step 6: Add a Security Cache provider. We will cache credentials in order to avoid a database call for authentication every time a web service method is hit.

Preparing the Security Console Application

The Security Console application is provided with EL's source code. You will have to modify dataConfiguration.config, define your own database name, and then build it.

Authenticating using EL: The Surprise

Here's the code you need to use in order to authenticate using EL's AuthenticationProvider class:
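In outline, the authentication call looks like the sketch below (June 2005 Security block API; exact overloads may differ slightly in your EL build). Note that NamePasswordCredential takes the password as bytes, which is exactly where the surprise comes from:

```csharp
using System.Security.Principal;
using System.Text;
using Microsoft.Practices.EnterpriseLibrary.Security;

public class AuthenticationExample
{
    public static IIdentity Authenticate(string userName, string password)
    {
        // The provider hashes the supplied password bytes and compares
        // the result with the hash stored in the database.
        NamePasswordCredential credential = new NamePasswordCredential(
            userName, Encoding.UTF8.GetBytes(password));

        IAuthenticationProvider provider =
            AuthenticationFactory.GetAuthenticationProvider();

        IIdentity identity;
        if (!provider.Authenticate(credential, out identity))
            return null;
        return identity;
    }
}
```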

But you don't have the password in plain text, because the client sends the MD5 hash of the password. Nor can you retrieve the stored password from the database through any of the AuthenticationProvider functions in order to bypass this process and match the hashes directly. So it seems you have no choice but to change EL's code and run into a maintenance nightmare.

The easy solution: pass the MD5 hash of the password to AuthenticationProvider as-is, but store the hash of that hash in the database. This means the password is hashed twice before being stored. The Security Console application creates the user accounts, so you can easily change its code to hash the hash of the password and store the double-hashed bytes. When you pass the MD5 hash to AuthenticationProvider, it hashes it again and matches the result against the bytes in the database, which also contain the hash of the hash, so they match.

This puts no extra load on the server: the first hash is computed by the client, and the server still computes only a single hash, just as before.
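The double-hash scheme can be sketched like this, using plain .NET 2.0 cryptography (Md5Hash and StorableHash are helper names invented for the example):

```csharp
using System.Security.Cryptography;
using System.Text;

public class PasswordHasher
{
    // One round of MD5; the client applies it once, the server (inside
    // AuthenticationProvider) applies it a second time.
    public static byte[] Md5Hash(byte[] data)
    {
        using (MD5 md5 = MD5.Create())
        {
            return md5.ComputeHash(data);
        }
    }

    // What the Security Console must store: Hash(Hash(password)).
    public static byte[] StorableHash(string plainTextPassword)
    {
        byte[] clientSideHash =
            Md5Hash(Encoding.UTF8.GetBytes(plainTextPassword));
        return Md5Hash(clientSideHash);
    }
}
```

The client calls Md5Hash once on the plain text password and sends the result; the Security Console calls StorableHash when creating accounts, so the two sides end up comparing the same double-hashed bytes.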

Data Access

Data Access Application Block

Enterprise Library provides a convenient Data Access Application Block for all your database needs. Its Database class provides all the methods needed for database access. However, those who were already using the previous Patterns & Practices Data Access Application Block will run into a major problem: there is no SqlHelper and no SqlHelperParameterCache. This forces you to go through all your data access code and change it to use Database instead of SqlHelper.

This is a real example of why you should never use third party code directly; always use your own wrapper. No matter how standard the code appears to be, and however reliable its source, once you become dependent on it and it changes, you are doomed.

Due to this problem, I have created a SqlHelper class which has the same signature as the previous DAB:
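The wrapper is sketched below with a single overload; the real class mirrors the full SqlHelper surface. The Database/DBCommandWrapper calls shown are to the best of my reading of the June 2005 API, so verify them against your EL build:

```csharp
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data;

// Drop-in replacement exposing the old DAAB-style static methods on top
// of EL's Database class, so existing data access code compiles unchanged.
// Sealed with a private constructor because the server targets .NET 1.1,
// which has no static classes.
public sealed class SqlHelper
{
    private SqlHelper() { }

    public static DataSet ExecuteDataset(string spName,
        params object[] parameterValues)
    {
        Database db = DatabaseFactory.CreateDatabase();
        DBCommandWrapper command =
            db.GetStoredProcCommandWrapper(spName, parameterValues);
        return db.ExecuteDataSet(command);
    }
}
```

Calls such as `SqlHelper.ExecuteDataset("GetAllStudents")` then keep working exactly as they did against the old block.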

Once you have a class like this, your existing code stays untouched and works with EL. However, if you were using SqlHelperParameterCache before, you will have to make your own version of that as well:

Caching

The Caching Application Block is the only block where I haven't run into any issues during usage. That said, the cache expiration callback feature seems to be missing.
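Typical usage on the server looks like this (a sketch against the June 2005 Caching block API; StudentFacade, the cache key, and the five minute expiration are invented for the example):

```csharp
using System;
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Caching;
using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

public class StudentCache
{
    public static DataSet GetAllStudents()
    {
        CacheManager cache = CacheFactory.GetCacheManager();

        // Serve from the cache when possible; otherwise load from the
        // database and cache the result with an absolute expiration.
        DataSet students = cache.GetData("AllStudents") as DataSet;
        if (students == null)
        {
            students = StudentFacade.GetAllStudents(); // expensive DB call
            cache.Add("AllStudents", students,
                CacheItemPriority.Normal, null,
                new AbsoluteTime(TimeSpan.FromMinutes(5)));
        }
        return students;
    }
}
```

A SaveStudent method would then call `cache.Remove("AllStudents")` to invalidate the entry; note that this clears only the local server's cache, which is exactly the cluster problem discussed next.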

Caching in a Server Cluster

When you have a cluster of servers, you will run into caching problems. Consider this scenario:

You are caching a list of objects on a web method. For example, I am caching all the students once GetAllStudents is called.

Now, the client modifies one student and I have to clear the Student cache so that it is again loaded from the database and cached.

The web server in the cluster which receives the "SaveStudent" command clears up its own cache. But it does not clear the cache of other web servers.

The next call from the client goes to another web server which is still holding an old copy of the cached objects. So, the client sees old information of the students even after making changes and saving them.

The practical solution to this problem is a centralized cache store: one dedicated server handling all the cached objects that are costly to load from the database. But this single server can become a single point of failure and a performance bottleneck under heavy load.

Performance Monitoring

EL has rich instrumentation features. It uses both WMI events and performance counters throughout the application blocks, giving you the opportunity to monitor the performance of database calls, authentication and authorization success/failure, cache hit rates, and so on. You can use these performance counters to detect bottlenecks in your server application, and the Windows Performance Monitor to see how the EL App Blocks are doing. Here's a screenshot which shows different performance counters from the EL App Blocks:
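You can also read the same counters programmatically, for example to feed your own monitoring. The category and counter names below are assumptions for illustration; check the actual names exposed in Performance Monitor under the Enterprise Library categories before using them:

```csharp
using System;
using System.Diagnostics;

public class CounterReader
{
    public static void Main()
    {
        // Category and counter names are illustrative; confirm them in
        // perfmon first. The two-argument constructor opens the counter
        // read-only on the local machine.
        PerformanceCounter cacheHits = new PerformanceCounter(
            "Enterprise Library Caching",   // category (assumed name)
            "Cache Hits/sec");              // counter (assumed name)

        Console.WriteLine("Cache hits/sec: {0}", cacheHits.NextValue());
    }
}
```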

Code Generation using Code Smith Template

Code Smith is a tool that can dramatically reduce your development time. Every day we write common code for database calls, object population, object-to-UI population and vice versa. All of this can be generated with a single mouse click using Code Smith.

The entire server side code, except the Facade, is generated using the wonderful .NET data tiers generator. However, it generates code against the old Patterns & Practices Application Blocks, and it was pretty difficult to modify the templates to produce EL-specific code. In the source code provided with this article, you will find the modified template which generates EL-specific code.

ClickOnce Deployment

Let's take a look at my favorite feature of .NET 2.0: ClickOnce deployment. This is truly a revolutionary step for deploying Desktop/Smart Client applications; deployment and versioning have never been this easy. I remember when we used to hand-code custom installers and service releases in the C++ days. Things did not change much in the VB era, which introduced DLL hell. .NET removed the DLL hell, but auto update was still not part of the Framework, so you had to use custom libraries like the Updater Application Block to provide automatic updates. With ClickOnce, everything ends here. This is the ultimate deployment you can ever have.

While configuring ClickOnce, you need to make changes in the following section:

This dialog box defines the text you see on the published website. After publishing your application by right clicking on the EXE project and selecting "Publish", you get a page like this:

After deploying, you can freely make changes to the application, add new features, and fix bugs. When you are done, right click on the EXE project and click Publish again. All users get the updated version as soon as they launch the application; ClickOnce takes care of downloading and installing the latest version.

And of course, for those reluctant users who never close the application and so never get the updated code, you can stop them whenever you want by implementing the VersionNo concept introduced earlier.

Installing & Running the Sample Application

Configuring the Server

The server is developed using .NET 1.1, so you will need Visual Studio 2003.

The "Server" folder contains only the code; the database is located in the "Database" folder. Follow these steps:

Download Enterprise Library June 2005 and install it properly. After installing, go to Start->Programs->Enterprise Library June 2005->Install Services.

Restore the database from the "Database" folder. Use the database name "SmartInstitute". If you change the name, you will have to edit dataConfiguration.config and set the correct user name, password and database name.

Create a virtual directory using IIS Manager named "SmartInstituteServices" which maps to "Server\SmartInstituteServices".

Configuring the Client

The client is developed using .NET 2.0, so you will need Visual Studio 2005.

Open the "client\SmartInstituteClient.sln"

Go to SmartInstitute.App -> Properties

Go to Signing tab

Click "Create Test Certificate"

The default configuration assumes you have the web service at "localhost". If you want to change it, go to the SmartInstitute.Automation project, open "Properties\Settings.settings", and modify the "WebServiceURL" property.

Whenever you "Add Reference" or "Update Reference" on a web service reference, Visual Studio generates all the related classes inside the web proxy class (Reference.cs). If you open the code of the proxy, you will see that all the related classes like Student, Course, CourseSection etc. are created from what VS discovers in the WSDL. This is a big problem, because all the domain entity objects are already defined in SmartInstitute.dll, which is shared between the server and the client. I could not find any way to tell VS to reuse the SmartInstitute namespace instead of generating duplicate classes in the proxy. As a result, whenever you update the reference, you can no longer build the project.

How to fix this problem:

Open the Reference.cs file inside the Web References folder. You will not normally see it in VS; turn on the "Show All Files" feature by clicking the button at the top of the Solution Explorer.

Scroll down until you find the CredentialSoapHeader class.

After this class, you will see all those entity classes defined. Remove these classes and the enumeration definitions.

Remember not to delete the delegates and events you see in the proxy. Those are required.

I haven't found a way to do this automatically yet. Either I completely missed a VS feature that already does this job, or I need to write a macro which does it.

Conclusion

We have covered the design, development, deployment and maintenance of a .NET 2.0 Smart Client. You have also seen how you can implement an object model similar to that of the Microsoft Office applications in your own projects; such an object model enforces a loosely coupled architecture by design, which provides great extensibility. We have also seen how to build a Service Oriented Architecture using XML Web Services, and how Enterprise Library dramatically reduces development cost and time by providing a rich collection of reusable components. The sample code gives you ample examples of dealing with real life Smart Client development hurdles, and the reusable components will dramatically reduce the time needed for similar projects.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.

Comments and Discussions

I am having trouble running the solution in Visual Studio 2010. Could you please upload an updated version of it? And if possible, could you use WCF instead of Web Services, because I am totally into WCF and have no knowledge of Web Service configuration. Thanks

The error is that it couldn't load the type "SmartInstituteServices.Global" in the file Global.asax; the error details tell me that the language referenced by the language attribute is not supported by Visual Studio IntelliSense and statement completion. I edited this file and added the attribute

Dear Omar,
Mate I am from Australia and I love your articles!
I have been thinking of using the MDA design pattern for ages now, and I think I finally have the need for its full power on this huge gov project. You did this back in 2005!

The customer has specified that we must go WPF and they are also happy that we are looking at Prism2.
I think we will also try and use some of the nice features in Daniel Vaughans Calcium framework for Prism2.

I have been searching around on the web a lot, but I have not been able to find any other examples of the use of this framework. It seems very strange when, as you say, most of the flagship MS products like Word and Excel use, or have facades using, the MDA approach.

It seems that the MVVM model and others have taken precedence. I was thinking that the MVVM model is really a simplification of the MDA model, with the ViewModel component of MVVM acting as the application model. I would really like to use the MDA model for a composite application, as I can see that our modules will be highly interacting and would therefore require event pub/sub between the modules for handling communications, and this is really not a scalable approach.

I think we should look to collaborate to get the MDA model into the WPF/Prism community.
Have you got the time to chat/work on this?

I really like the automation model described; it's the natural evolution of the other frameworks I had worked with in Windows applications (that particular lineage ended in 2000 when I switched over to web work full time). It would be nice to expand this to include a template framework as well, a way to keep all the forms consistent across the application. It's really a great article and I'm going to look to implement and extend it in VS2008; hopefully I can post something back here in the future.

The architect concerns himself with the depth and not the surface, with the fruit and not the flower. - Lao-Tsu, revisited by Philippe Kruchten

Could not help noticing that the heart of everything lies in the ItemBase and ItemCollectionBase classes; thanks for the code for these. It would also be interesting to integrate the bus architecture you mentioned in another article into this app, so updates are visible to other users in near real time (I'm assuming that this is not what happens in this app; correct me if I am wrong). Anyway, here is my question. I'm mostly a web application developer and have never worked on a commercial Windows app, although I did understand the code and the concept. Can you shed some light on how this may be adapted for web applications? One obvious way might be to develop a similar object model in JavaScript and again use web services over AJAX (probably what a lot of modern fancy AJAX sites do), but frankly my JavaScript is not that good, and most projects I work on have such strict deadlines that it is usually just getting data and displaying it in the classical ASP.NET page cycle.
Is there any way of emulating what you showed in a web application on the server? I'm thinking because often views have to be recreated even when the underlying data has not changed. If we find some way of implementing this model so that it stays alive on the web server... although it may defeat the purpose of scalability by eating too many resources on the web server... I'm still just thinking.