
Most modern web applications provide real-time functionality (the "real-time web") through a set of technologies and practices that let users receive information as soon as it is published, rather than requiring them or their software to check a source periodically for updates. Moreover, in highly scalable and complex architectures, server-side code execution is often asynchronous. For example, consider a task-based UI that submits a command like "book a plane ticket" to a web service. The server-side command processing could be performed hours later: the command might simply be enqueued on a command bus to be processed afterwards. In scenarios like that, the client can't count on an updated read model right after sending the command. As a consequence, in order to receive feedback as soon as possible, all the involved clients would have to poll the server until the command execution reaches a significant state (e.g. in progress, completed, canceled) and the read model is updated and ready for queries.

Before WebSockets, classic implementations of this kind of real-time feature were not so easy, and they adopted strategies like the forever frame (see "Comet") or periodic/long polling. Today, all modern browsers and web servers fully support WebSockets and can establish bi-directional, persistent communication, so that a client can receive content through a "push" performed by the server. In the ASP.NET world, SignalR is a growing new library that uses WebSockets under the covers when available and gracefully falls back to other techniques and technologies when it isn't, while the application code stays the same. SignalR also provides a very simple, high-level API for doing server-to-client RPC (calling JavaScript functions in clients' browsers from server-side code) in an ASP.NET application, as well as useful hooks for connection management, e.g. connect/disconnect events, grouping connections, and authorization.

Developers currently using the Ext.NET component framework can take advantage of SignalR by combining it with the Ext.NET MessageBus component. The MessageBus provides a simple and robust client-side infrastructure for propagating notifications to listening UI components. The reference scenario discussed in this post is represented in the figure below:

1. The client browser establishes a persistent connection to a server-side SignalR application hub. After that, the client maintains a reference to an auto-generated hub proxy.
2. The Ext.NET UI components submit commands to the server.
3. At any point of the server-side command execution, the server can use the SignalR hub to push notification messages back to all the involved clients via RPC.
4. Any client receiving a SignalR message through the hub proxy redirects the message to the Ext.NET MessageBus.
5. Based on the specific type of message, the Ext.NET UI components are updated through a message handler function. In fact, each Ext.NET component has a MessageBusListeners property (client-side handlers of MessageBus client-side events) and a MessageBusDirectEvents property (server-side handlers of MessageBus client-side events).

Let's have a look at a minimalistic example implemented in an ASP.NET MVC web application. Here's the view:
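The view listing was lost from this copy of the post; an approximate reconstruction in Ext.NET Razor style (the action URLs, handlers and buffer value are illustrative, only the "Customers." message prefix and the Buffer property come from the description below) could look like:

```cshtml
@Html.X().ResourceManager()

@(Html.X().Button()
    .Text("Add customer")
    .DirectEvents(de => de.Click.Url = Url.Action("AddCustomer")))

@(Html.X().Button()
    .Text("Remove customer")
    .DirectEvents(de => de.Click.Url = Url.Action("RemoveCustomer")))

@(Html.X().GridPanel()
    .Title("Customers")
    .Store(Html.X().Store()
        .Proxy(Html.X().AjaxProxy().Url(Url.Action("GetCustomers"))))
    .MessageBusListeners(new MessageBusListener
    {
        Name = "Customers.*",            // react to any "Customers." message
        Handler = "this.store.reload();",
        Buffer = 500                     // refresh only after 500ms of silence
    }))
```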

As you can see, the view is composed of the following Ext.NET components:

– A couple of buttons which send commands to the server (e.g. Add/Remove a customer)
– A grid panel which holds the current customer data
– A grid panel which holds trace data about messages received through the SignalR connection.

The integration between the client-side SignalR hub proxy and the Ext.NET MessageBus is done through the loadHub JavaScript function: it wraps the SignalR hub named "applicationHub" so that all received messages are redirected to the Ext.NET MessageBus and, in turn, to the listening UI components. Please note that the SignalR "publish" function and the Ext.NET MessageBus "publish" function accept the same parameters: the message name and the message data. For this reason, the integration between the two worlds is practically natural.
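The loadHub function itself isn't included in this copy of the post; its essence can be sketched like this (assuming the SignalR JavaScript client with generated proxies, and that the Ext.NET client-side bus is exposed as Ext.net.Bus — names not confirmed by the text are this sketch's assumptions):

```javascript
function loadHub() {
    // Proxy auto-generated by SignalR for the server-side "applicationHub".
    var hub = $.connection.applicationHub;

    // Whatever the server publishes is forwarded, untouched, to the
    // Ext.NET MessageBus: both "publish" functions take (name, data).
    hub.client.publish = function (name, data) {
        Ext.net.Bus.publish(name, data);
    };

    // Open the persistent connection.
    $.connection.hub.start();
}
```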

In the example above, the Store of the first GridPanel is reloaded each time its MessageBusListener intercepts a message whose name starts with the prefix "Customers.". Pay attention to the Buffer property: it's very useful when the component is under a storm of messages and we want the UI to be refreshed only after a specified delay during which no further messages have been received.

What about server-side code? Well, the server-side code is not the focus of this post. The most important thing to consider here is that at some point of the server-side command execution, the code retrieves the SignalR hub, selects which clients will receive the RPC (for simplicity, in this example a message is sent to all connected clients) and finally pushes a message containing the data the clients need to update the UI. Here's an example:
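The original server-side snippet isn't included here; pushing to every client from outside the hub looks roughly like this (SignalR 1.x/2.x style; ApplicationHub, AddCustomerCommand and the "Customers.Added" message name are this sketch's assumptions, only the hub-retrieval pattern is SignalR's own):

```csharp
using Microsoft.AspNet.SignalR;

public class AddCustomerCommand
{
    public int CustomerId;
}

public class AddCustomerCommandHandler
{
    public void Handle(AddCustomerCommand command)
    {
        // ... execute the command and update the read model ...

        // Get the hub context from anywhere in the server-side code.
        var hub = GlobalHost.ConnectionManager.GetHubContext<ApplicationHub>();

        // The dynamic "publish" call becomes a JavaScript function
        // invocation on every connected client.
        hub.Clients.All.publish("Customers.Added", new { Id = command.CustomerId });
    }
}
```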

I believe dynamic dispatch is a very cool feature of C# 4.0. It was designed to simplify interop between statically typed C# and dynamically typed languages or COM components by deferring method resolution to runtime, dynamically applying the same overload selection logic that the C# compiler would normally use at compile time. This makes a form of multiple dispatch possible, and a common usage of the technique can be found in many implementations of the visitor pattern. Maybe you have already read something like this:
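A typical sketch of the idiom (the message types and handler names are illustrative):

```csharp
using System;

public class Message { }
public class StartMessage : Message { }

public class DefaultMessageHandler
{
    // Single entry point: the cast to dynamic makes the runtime binder
    // pick the most specific ProcessMessage overload for the actual type.
    public string HandleMessage(Message message)
    {
        return ProcessMessage((dynamic)message);
    }

    protected string ProcessMessage(Message message)
    {
        return "generic message";
    }

    protected string ProcessMessage(StartMessage message)
    {
        return "start message";
    }
}
```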

This code works great when you don't know the exact type beforehand and you don't want to use a big switch statement. If you use dynamic dispatch within the same class, everything works as expected. But look at what happens when you execute the following lines of code:
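The original snippet isn't shown; a self-contained reconstruction of the scenario (illustrative types) behaves like this:

```csharp
using System;

public class Message { }
public class StopMessage : Message { }

public class DefaultMessageHandler
{
    public string HandleMessage(Message message)
    {
        // The runtime binder resolves this call in the context of
        // DefaultMessageHandler, so only overloads visible here count.
        return ProcessMessage((dynamic)message);
    }

    protected string ProcessMessage(Message message)
    {
        return "handled by base fallback";
    }
}

public class DerivedMessageHandler : DefaultMessageHandler
{
    // Never reached through the inherited HandleMessage.
    protected string ProcessMessage(StopMessage message)
    {
        return "handled by derived";
    }
}

class Program
{
    static void Main()
    {
        var handler = new DerivedMessageHandler();
        // Prints "handled by base fallback", not "handled by derived".
        Console.WriteLine(handler.HandleMessage(new StopMessage()));
    }
}
```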

The problem here is that every call to the ProcessMessage method via dynamic dispatch is bound only to the implementations of the method visible from the DefaultMessageHandler class, so the overloads added in derived classes will never be executed. At first glance, this behaviour may be unexpected.

A simple solution

In order for our code to work as expected, we can simply override the HandleMessage method in the DerivedMessageHandler class, leaving its body exactly the same as the one defined in the base class. Alternatively, we could move the HandleMessage method entirely to the derived classes.
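A sketch of the first fix (same illustrative types): the overriding copy moves the runtime binder's resolution context into the derived class, where the extra overload is visible.

```csharp
using System;

public class Message { }
public class StopMessage : Message { }

public class DefaultMessageHandler
{
    public virtual string HandleMessage(Message message)
    {
        return ProcessMessage((dynamic)message);
    }

    protected string ProcessMessage(Message message)
    {
        return "handled by base fallback";
    }
}

public class DerivedMessageHandler : DefaultMessageHandler
{
    // Identical body, but now overload resolution happens in the
    // context of DerivedMessageHandler, so its overloads participate.
    public override string HandleMessage(Message message)
    {
        return ProcessMessage((dynamic)message);
    }

    protected string ProcessMessage(StopMessage message)
    {
        return "handled by derived";
    }
}
```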

I'd like to share a couple of extension methods that helped me in situations where I needed to convert rendered WPF windows or controls to bitmaps. Many devs know how complex this task was in Windows Forms. In WPF, instead, it's quite simple, at least if you're familiar with the RenderTargetBitmap class and the range of BitmapEncoders. In order to convert a visual to a bitmap, I like to write something like this:

myVisual.ToBitmapSource().ToPngFile(@"C:\ScreenShot.png");

The ToBitmapSource() extension method allows you to get a single, constant set of pixels at a certain size and resolution representing the visual (please note that a BitmapSource uses automatic codec discovery, based on the codecs installed on the user's system). I've always found it useful to replace the default black background that WPF reserves for transparency with a custom brush, so I introduced the transparentBackground parameter (default: white) which overrides the default black one.
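The two extension methods could be implemented along these lines (a sketch, not the original source; the 96 DPI and the VisualBrush-based rendering are this sketch's choices):

```csharp
using System;
using System.IO;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

public static class VisualExtensions
{
    // Renders a visual to a 96-DPI bitmap, painting the given brush
    // (white by default) behind it instead of WPF's black transparency.
    public static BitmapSource ToBitmapSource(this Visual visual, Brush transparentBackground = null)
    {
        var bounds = VisualTreeHelper.GetDescendantBounds(visual);
        var bitmap = new RenderTargetBitmap(
            (int)Math.Ceiling(bounds.Width), (int)Math.Ceiling(bounds.Height),
            96, 96, PixelFormats.Pbgra32);

        // Draw the background first, then the visual on top of it.
        var drawing = new DrawingVisual();
        using (var context = drawing.RenderOpen())
        {
            var rect = new Rect(new Point(), bounds.Size);
            context.DrawRectangle(transparentBackground ?? Brushes.White, null, rect);
            context.DrawRectangle(new VisualBrush(visual), null, rect);
        }

        bitmap.Render(drawing);
        return bitmap;
    }

    public static void ToPngFile(this BitmapSource source, string path)
    {
        var encoder = new PngBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(source));
        using (var stream = File.Create(path))
            encoder.Save(stream);
    }
}
```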

Starting from Enterprise Library 5.0, Unity supports interception mechanisms which capture calls to an object at invocation time and add behaviors to it through lightweight code generation (IL emit). It's something very similar to the aspect-oriented programming (AOP) approach.

However, Unity is NOT an AOP framework implementation for the following reasons:

It uses interception to enable only preprocessing behaviors and post-processing behaviors.

It does not insert code into methods, although it can create derived classes containing policy pipelines.

It does not provide interception for class constructors.

Instance Interception vs Type Interception

With instance interception, when the application resolves the object through the container:

1. The Unity interception container obtains a new or an existing instance of the object and creates a proxy.

2. It then creates the handler pipeline and connects it to the target object before returning a reference to the proxy.

3. The client then calls methods and sets properties on the proxy as though it were the target object.

Interface interception

With type interception, the container uses a derived class instead of a proxy (it resembles AOP techniques). Type interception avoids the possible performance penalties of using a proxy object by dynamically deriving a new class from the original class and inserting calls to the behaviors that make up the pipeline. When the application resolves the required type through the Unity container:

1. The Unity interception container extension creates the new derived type and passes it, rather than the resolved type, back to the caller.

2. Because the type passed to the caller derives from the original class, it can be used in the same way as the original class.

3. The caller simply calls the object, and the derived class passes the call through the behaviors in the pipeline just as is done when using instance interception.

Type Interception

However, there are some limitations with this approach. It can only be used to intercept public and protected virtual methods, and cannot be used with existing object instances. In general, type interception is most suited to scenarios where you create objects especially to support interception and allow for the flexibility and decoupling provided by policy injection, or when you have mappings in your container for base classes that expose virtual methods.

Interceptors and Behaviors

Unity uses an Interceptor class to specify how interception happens, and an InterceptionBehavior class to describe what to do when an object is intercepted. There are three built-in interceptors in Unity:

VirtualMethodInterceptor: a type-based interceptor that works by generating a new class on the fly that derives from the target class. It uses dynamic code generation to create a derived class that gets instantiated instead of the original, intercepted class and to hook up the call handlers. Interception happens only on virtual methods. You must set up interception at object creation time and cannot intercept an existing object.

InterfaceInterceptor: an instance interceptor that works by generating a proxy class on the fly for a single interface. It can proxy only one interface on the object, using dynamic code generation to create the proxy class. The proxy supports casting to all the interfaces or types of the target object, but it only intercepts methods on that single interface, and you cannot cast the proxy back to the target object's class or to other interfaces on the target object.

TransparentProxyInterceptor: an instance interceptor that uses remoting proxies to do the interception. It's useful when the type to intercept is a MarshalByRefObject or when only methods from the type's implemented interfaces need to be intercepted. The object must either implement an interface or inherit from System.MarshalByRefObject. If the marshal-by-reference object is not a base class, you can only proxy interface methods. The TransparentProxy process is much slower than a regular method call.

Interception is based on a pipeline of one or more behaviors that describe what to do when an object is intercepted. You can create your own custom behaviors by implementing the IInterceptionBehavior interface. The interception behaviors are added to a pipeline and are called for each invocation of that pipeline, as shown below.

Here's an example of an interception behavior which intercepts a call to a method and logs some useful info if the call throws an exception internally:
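A possible implementation (Unity 2.x interception API; logging to the console is just for brevity):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Practices.Unity.InterceptionExtension;

public class ExceptionLoggerInterceptionBehavior : IInterceptionBehavior
{
    // This behavior always does something, so interception can't be skipped.
    public bool WillExecute
    {
        get { return true; }
    }

    public IEnumerable<Type> GetRequiredInterfaces()
    {
        return Type.EmptyTypes;
    }

    public IMethodReturn Invoke(IMethodInvocation input, GetNextInterceptionBehaviorDelegate getNext)
    {
        // Let the rest of the pipeline (and finally the target method) run.
        IMethodReturn result = getNext()(input, getNext);

        // If the call threw, log some useful info before returning.
        if (result.Exception != null)
        {
            Console.WriteLine("{0}.{1} failed: {2}",
                input.Target.GetType().Name,
                input.MethodBase.Name,
                result.Exception.Message);
        }

        return result;
    }
}
```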

In detail, you must provide an implementation of the two IInterceptionBehavior interface methods, Invoke() and GetRequiredInterfaces(), and set the WillExecute property.

The WillExecute property indicates whether the behavior performs any operation when invoked. This is used to optimize interception: if the behaviors won't actually do anything (for example, in PIAB when no policies match), the interception mechanism can be skipped completely.

The GetRequiredInterfaces method returns the interfaces required by the behavior for the intercepted objects. The Invoke method executes the behavior's processing.

The Invoke method has two parameters: input and getNext. The input parameter represents the current call to the original method, while getNext is a delegate you execute to get the next behavior in the chain.

Now let's see a usage example. The following code executes a statement that makes my IService implementation raise an ArgumentNullException. This exception will be logged thanks to the ExceptionLoggerInterceptionBehavior registered for my IService interface.
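The registration and the failing call could look like this (IService, Service and DoSomething are illustrative names; only the extension and registration pattern is Unity's own):

```csharp
using System;
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.InterceptionExtension;

var container = new UnityContainer();
container.AddNewExtension<Interception>();

// Intercept IService calls through a proxy and run the logging behavior.
container.RegisterType<IService, Service>(
    new Interceptor<InterfaceInterceptor>(),
    new InterceptionBehavior<ExceptionLoggerInterceptionBehavior>());

var service = container.Resolve<IService>();

try
{
    // Raises ArgumentNullException inside the implementation;
    // the behavior logs it before it bubbles up to the caller.
    service.DoSomething(null);
}
catch (ArgumentNullException) { }
```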

These days I'm using Unity as the IoC and DI container in a project. One of the must-have features of a modern container is the ability to be configured at runtime, preferably with a fluent mapping registration interface. Moreover, one of the expected features is support for decorator or chain-of-responsibility configurations with intuitive code. A simple scenario could be something like this:

The intent is straightforward and quite common: registering the relationship between the Service and the ServiceDecorator class, so when someone asks for an IService, he gets a Service instance wrapped in a ServiceDecorator instance. Let’s have a look at the most used solutions.

Solution 1: Using an InjectionConstructor

Well, configuring the Unity container to support a decorator in the old-fashioned way requires some rather unreadable code. In fact, making this work looks like a kind of magic.
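The listing went along these lines (the registration name "Inner" is exactly the kind of magic string criticized below):

```csharp
using Microsoft.Practices.Unity;

var container = new UnityContainer();

// Register the wrapped component under a name, then force the
// decorator's constructor to receive that named registration.
container.RegisterType<IService, Service>("Inner");
container.RegisterType<IService, ServiceDecorator>(
    new InjectionConstructor(new ResolvedParameter<IService>("Inner")));

var service = container.Resolve<IService>(); // a ServiceDecorator wrapping a Service
```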

I think the code above is not optimal, because it uses magic strings. It also has a dangerous disadvantage: changes to the ServiceDecorator constructors will generate runtime errors instead of compile-time errors.

Solution 2: Using an InjectionFactory

This is a substantial improvement over the previous solution. It lets you specify a factory function which can use the injected container to explicitly resolve any dependencies of the decorated/decorator constructors.
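In code (same illustrative IService/Service/ServiceDecorator types):

```csharp
using Microsoft.Practices.Unity;

var container = new UnityContainer();

// The factory function states the decoration explicitly; the injected
// container c could also resolve any other constructor dependencies.
container.RegisterType<IService>(
    new InjectionFactory(c => new ServiceDecorator(new Service())));

var service = container.Resolve<IService>(); // a ServiceDecorator wrapping a Service
```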

This is absolutely my favourite solution. It's less code and, most importantly, it describes the intent in a better way, because it's focused on the developer's object model and not on Unity's.

These days I'm having fun experimenting with the Razor View Engine support introduced in Ext.NET v2.0 (Beta3 at the time of writing), and I'm appreciating how fluently you can configure any Ext.NET component. There isn't good documentation about these new features yet (on the Ext.NET official site you can read "Server-side Ext.NET API documentation is being worked on"), so if you want more info about Razor support please have a look at the Ext.NET official forum (I've found this thread particularly helpful!). Moreover, in the Ext.NET Examples Explorer you can gather the knowledge necessary to translate WebForms code examples into Razor views.
In this post I’d like to show one of my first tests: a simple GridPanel supporting server-side data paging via AJAX proxy.
Here's the interesting Razor syntax:
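The original listing is missing from this copy; a rough reconstruction (fluent method names follow the Ext.NET 2 Razor API, while the action name and the "data"/"total" JSON properties are this example's assumptions) could look like:

```cshtml
@(Html.X().GridPanel()
    .Title("Customers")
    .Store(Html.X().Store()
        .AutoLoad(true)
        .PageSize(10)
        .Proxy(Html.X().AjaxProxy()
            .Url(Url.Action("GetCustomers"))
            .Reader(Html.X().JsonReader()
                .Root("data")
                .TotalProperty("total"))))
    .ColumnModel(Html.X().Column().Text("Company Name").DataIndex("CompanyName").Flex(1))
    .BottomBar(Html.X().PagingToolbar()))
```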

First of all, please note the Html.X() helper method. This is the entry point for configuring any Ext.NET component. As a prerequisite, you can call the Html.X().ResourceManager() helper method: like its Ext.NET WebForms counterpart, it automatically injects every script and stylesheet you need into your page. The output should be something like this:

OK, now through the Html.X() helper you can start configuring the GridPanel. In my example I have an AjaxProxy which calls a controller action in order to get back some JSON data to bind to the GridPanel. Some points of interest:

start and limit are the two standard ExtJS query-string parameters sent by the proxy to the remote data source in order to tell it how to page data.

The AjaxProxy can process the resulting JSON via a JsonReader. In particular, pay attention to the Root() and TotalProperty() methods: they tell the reader, respectively, which root property of the JSON response contains the data rows and which property contains the total results count. Both pieces of information are essential for correct grid rendering.

In a project of mine I'm using a simple Request/Response service layer very similar to Davy Brion's amazing Agatha project. Everything started a while ago when I was searching for a smart way to design a client-server infrastructure focused on messages rather than operations. This layer would be not only a classic WCF-based service, but also some kind of in-process facade to my business layer where I could centralize any cross-cutting concern. So I was focused on the capability of moving my service layer and its business logic to a separate machine and hosting it through WCF without any significant modifications to my code (provided that the service layer doesn't share state with upper layers like the presentation layer). After reading Davy's "Why I Dislike Classic Or Typical WCF Usage", I was convinced to commit to one service contract with one service operation, avoiding spending time thinking about how to design and implement service contracts and operations. In this way, the first (great) advantage I got is that I can add functionality simply by defining a request message, a response message and a request handler which executes the logic needed between receiving a request and sending a response. Very simple and effective!

Now let's define a simple rule: every concrete request type must have a corresponding concrete response type. The idea is basically to consider each service operation as a request which must have a response. For each request you define, you need to provide a handler which does whatever it needs to do to handle the request and returns a response. In my solution, a simple generic base request handler has been defined in the following way:
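The original definition isn't shown in this copy; one way to write such a base handler (member and type names are this sketch's, not necessarily the original's) is:

```csharp
using System;

// The message contracts.
public class Request { }
public class Response { }

// The untyped contract the service dispatches through.
public interface IRequestHandler
{
    Response Handle(Request request);
}

// Typed base handler: each concrete handler pairs a request type with
// its response type and only implements the typed method.
public abstract class RequestHandler<TRequest, TResponse> : IRequestHandler
    where TRequest : Request
    where TResponse : Response
{
    public Response Handle(Request request)
    {
        return HandleTypedRequest((TRequest)request);
    }

    protected abstract TResponse HandleTypedRequest(TRequest request);
}

// A tiny concrete example.
public class EchoRequest : Request { public string Text; }
public class EchoResponse : Response { public string Text; }

public class EchoRequestHandler : RequestHandler<EchoRequest, EchoResponse>
{
    protected override EchoResponse HandleTypedRequest(EchoRequest request)
    {
        return new EchoResponse { Text = request.Text };
    }
}
```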

As you can see, the service itself implements the IRequestHandler interface and the actual implementation is very minimal: it's just a small class which resolves the appropriate handler through an abstraction called IRequestHandlerProvider, which internally may use an IoC container capable of resolving the request handler associated with a request type. The service then delegates the execution to a handler by passing the request to it, and finally returns a typed response to the client.

Enter MEF

Now we need an IoC container to resolve and create the instances of the request handlers. As shown in this post, you can use the Castle Windsor IoC container for dependency injection; that basically allows you to register each valid request handler present, for example, in a given assembly. My purpose, though, was to find a simple way to plug in request handlers defined in external assemblies, getting everything registered automatically in a centralized request handler provider when the application starts up. So I turned to MEF. What I tried to do is treat each request handler as an extension, because in my project each "Request-Response-Handler" tuple represents an extension unit. Moreover:

MEF is an integral part of the .NET Framework 4.

MEF offers a set of discovery approaches for locating and loading available extensions, even in a "lazy" fashion.

MEF allows tagging extensions with additional metadata, which facilitates rich querying and filtering: an extensibility element can provide metadata for its exported items.

Following this philosophy, beyond the classic Export attribute I introduced a RequestHandlerMetadataAttribute useful for simplifying the process of resolving the handler related to a specific request type. So, the LoginRequestHandler defined before can be decorated as follows:
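A sketch of the metadata view and the custom export attribute (the standard MEF pattern of deriving from ExportAttribute; LoginRequest/LoginRequestHandler come from the post, the rest of the shapes are assumptions):

```csharp
using System;
using System.ComponentModel.Composition;

// The metadata view: getter-only properties that MEF will project
// the attribute values onto.
public interface IRequestHandlerMetadata
{
    Type RequestType { get; }
}

// Exporting attribute carrying the handled request type as metadata.
[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class RequestHandlerMetadataAttribute : ExportAttribute
{
    public RequestHandlerMetadataAttribute(Type requestType)
        : base(typeof(IRequestHandler))
    {
        RequestType = requestType;
    }

    public Type RequestType { get; private set; }
}

// The decorated handler:
[RequestHandlerMetadata(typeof(LoginRequest))]
public class LoginRequestHandler : RequestHandler<LoginRequest, LoginResponse>
{
    protected override LoginResponse HandleTypedRequest(LoginRequest request)
    {
        // ... authenticate and build the response ...
        return new LoginResponse();
    }
}
```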

In order to allow us to access the metadata, MEF introduces a special kind of Lazy<T, M> that has attached metadata. M in this case is an interface (called a "metadata view") that contains only getter properties. MEF automatically generates a proxy class that implements this interface and plugs all the metadata in for us. This is very cool! What happens behind the scenes is that MEF uses reflection emit to construct the typed metadata view.

As a result, I've used the IRequestHandlerMetadata metadata view to easily find the right handler for a request of a given type, as shown in the GetRequestHandler() method of the following MefRequestHandlerProvider.
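A sketch of the provider (assuming IRequestHandlerProvider exposes a GetRequestHandler(Type) method, as the text suggests):

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.Linq;

public class MefRequestHandlerProvider : IRequestHandlerProvider
{
    // MEF injects every exported handler together with its metadata view;
    // thanks to Lazy<T, M>, handlers are instantiated only when used.
    [ImportMany]
    public IEnumerable<Lazy<IRequestHandler, IRequestHandlerMetadata>> Handlers { get; set; }

    public IRequestHandler GetRequestHandler(Type requestType)
    {
        // The metadata lets us pick the handler without creating any of them.
        var handler = Handlers
            .SingleOrDefault(h => h.Metadata.RequestType == requestType);

        if (handler == null)
            throw new InvalidOperationException(
                "No handler registered for " + requestType.Name);

        return handler.Value;
    }
}
```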

Putting it all together, in the application startup I placed the following initialization code, which uses the MefRequestHandlerProvider. As you may know, request handlers can be located through catalogs (e.g. AssemblyCatalog, AggregateCatalog etc.). When using a client proxy, I need to expose just one method which never needs to be updated. That's a good advantage!
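The startup code could be sketched like this (the catalog mix and the RequestProcessingService name are this sketch's assumptions):

```csharp
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

// Compose the provider from this assembly plus any plugged-in assemblies
// dropped in the application folder.
var catalog = new AggregateCatalog(
    new AssemblyCatalog(Assembly.GetExecutingAssembly()),
    new DirectoryCatalog("."));

var container = new CompositionContainer(catalog);

var provider = new MefRequestHandlerProvider();
container.ComposeParts(provider); // satisfies the [ImportMany] of handlers

// The single-operation service only depends on the provider abstraction.
var service = new RequestProcessingService(provider);
```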

Conclusion

In this post I've tried to show how I used MEF to plug request handlers into a Request/Response service layer similar to Agatha. I haven't written anything about serialization issues when exposing the service layer through WCF, nor have I treated aspects of request handlers such as object lifecycles or error management. I think any questions about these service layer insights can find good answers in Davy Brion's Request/Response Service Layer series.

One of the most important patterns that a good ORM should support in order to address the object-relational impedance mismatch is the Identity Map pattern. The mismatch is a set of conceptual and technical difficulties that emerge when objects or class definitions are mapped in a straightforward way to database tables or relational schemas.

What’s Identity Map?

In Martin Fowler's book "Patterns of Enterprise Application Architecture", the Identity Map is defined as a way of ensuring "that each object gets loaded only once by keeping every loaded object in a map" and of looking up "objects using the map when referring to them". If the requested data has already been loaded from the database, the identity map returns the same instance of the already instantiated object; if it has not been loaded yet, it loads it and stores the new object in the map. In this way, it follows a principle similar to lazy loading. As a result, the Identity Map pattern introduces a consistent way of querying and persisting objects (e.g. through a context-specific in-memory cache) which prevents applications from retrieving the same object data from the database more than once.

OK, in order to better understand this concept, let's start from a non-Identity-Map example. If we have an application that uses a simple persistence layer which performs a database query and then materializes one or more objects, we might see code that creates different instances of the same logical entity:
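The original snippet isn't shown; the idea is something like this (customerRepository and GetById are hypothetical):

```csharp
using System;

// Two separate reads of the same row materialize two distinct objects.
Customer customer1 = customerRepository.GetById(42);
Customer customer2 = customerRepository.GetById(42);

Console.WriteLine(ReferenceEquals(customer1, customer2)); // False

customer1.CompanyName = "Changed";
// customer2 still holds the original value; saving both now means
// the last write silently wins.
```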

In this example, customer1 and customer2 both contain separate copies of the data for the same customer. If we change the data in customer1, the change has no effect on customer2. If we make changes to both and then save them back to the database, one just overwrites the changes of the other. That’s because our persistence framework just doesn’t know that customer1 and customer2 both contain data for the same logical entity.

Conclusion: multiple objects containing data for the same entity lead to concurrency problems when it's time to save data.

How does Entity Framework approach the Identity Map pattern?

Now let’s have a look at the Identity Map way! In the unit test below, we have some Entity Framework code in which three different object queries are executed in order to get data for the same customer:
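The unit test isn't included in this copy; it went along these lines (MyEntities, the Customers set and the property names are assumptions):

```csharp
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestMethod]
public void Three_queries_return_one_entity_instance()
{
    using (var context = new MyEntities())
    {
        // Three different object queries for the same logical customer.
        var customer1 = context.Customers.First(c => c.CustomerId == 1);
        var customer2 = context.Customers.Single(c => c.CustomerId == 1);
        var customer3 = context.Customers.Where(c => c.CustomerId == 1).ToList().First();

        // All three variables reference the same tracked object.
        Assert.AreSame(customer1, customer2);
        Assert.AreSame(customer2, customer3);

        // So a change through one reference is visible through the others.
        customer1.CompanyName = "Changed";
        Assert.AreEqual("Changed", customer2.CompanyName);
    }
}
```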

As you can see, all three customer references are now equal. Moreover, when we change a property on customer1, we see the same change on customer2 and customer3. In fact, they're all references to a single object that is managed by EF's ObjectContext. Behind the scenes, EF ensures that only one entity object is created: the multiple entities we try to load are just multiple references to that one object, regardless of how many times or in how many different ways we load the entity. This behavior is compliant with the Identity Map pattern!

The key is EntityKey

So how does this work? First of all, every entity type has a key that uniquely identifies that entity.

If your Customer entity inherits from EntityObject (the base class for all data classes generated by the Entity Data Model tools) or simply implements the IEntityWithKey interface, in the debugger you'll notice that Customer has a property EF created for you named EntityKey (which corresponds to the primary key in the database). EntityKey contains all the information the ObjectContext needs in order to maintain an Identity Map. You can think of the map as a "cache" that contains only one instance of each object, identified by its EntityKey.

REMEMBER: Entity Framework v4 does not require you to implement IEntityWithKey in a custom data class especially if you use POCO entities.

In the previous example, when we get customer1 from our context, by default EF runs the query, creates an instance of Customer (uniquely identified by its key CustomerId), stores that object in the cache, and gives us back a reference to it. When we get customer2 from the context, the context does run the query again and pulls data from our database, but then it sees that it already has a customer entity with the same EntityKey in the cache so it throws out the data and returns a reference to the entity that’s already in cache. The same thing happens for customer3.

So how many database queries will EF perform if we write something like this?
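For example (MyEntities is a hypothetical ObjectContext):

```csharp
using System.Linq;

using (var context = new MyEntities())
{
    // Same entity, three queries: with the default settings EF still
    // hits the database three times, then discards the duplicate data.
    var customer1 = context.Customers.First(c => c.CustomerId == 1);
    var customer2 = context.Customers.First(c => c.CustomerId == 1);
    var customer3 = context.Customers.First(c => c.CustomerId == 1);
}
```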

Wait… if there’s a cache, why is it performing three queries? The second part of Martin Fowler’s definition of Identity map says “… looks up objects using the map when referring to them”. An obvious question is: if I’m loading an object that already exists in my cache, and EF is just going to return a reference to that cached object and throw away any changes it gets from the database query, can’t I just get the object directly from my cache and skip the database query altogether? That could really reduce database load.

The answer is: you can explicitly get an entity directly from the cache without hitting the database, but only if you use a special method to get the entity by its EntityKey. Here's an example:

object customerObj;
if (context.TryGetObjectByKey(entityKey, out customerObj))
{
    // the customer has been found in the cache
    Customer customer = (Customer)customerObj;
}

What if we don't know the actual value of an EntityKey? Well, then we can't use this feature.

In fact, having to use the EntityKey is a big limitation, since most of the time you want to look up data by some other field and not by a primary key, which could be a Guid or another value that's impossible to know in advance.

Identity Map and MergeOptions

Now two interesting questions:

Can I customize the strategy that EF uses to compare the datasource values and the cache entities values?
What happens to cached entities when the underlying database rows change?

After customer1 is loaded, someone changes the record in the DB. Will customer2 have the original values, or the new values? Remember that customer1 and customer2 are references to the same entity object in the cache, and our first db hit when we got customer1 did pull the original value, but then the query for customer2 also hit the database and pulled data. How does EF handle that? The answer is: it depends on the MergeOption enumeration. The possible options are:

AppendOnly (default): it simply throws the new data out. If an object is already in the context, the current and original values of the object's properties are not overwritten with data source values. The state of the object's entry and the state of the properties in the entry do not change, and the Identity Map is guaranteed. Here's a test example:
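A sketch of such a test (MyEntities and the out-of-band update helper are hypothetical):

```csharp
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestMethod]
public void AppendOnly_discards_fresh_data_for_cached_entities()
{
    using (var context = new MyEntities())
    {
        var customer1 = context.Customers.First(c => c.CustomerId == 1);

        // Someone else changes the row (e.g. a second context or raw SQL).
        UpdateCompanyNameOutsideContext(customerId: 1, name: "NewName");

        // The query hits the database, but the entity is already cached:
        // with MergeOption.AppendOnly the fresh values are thrown away.
        var customer2 = context.Customers.First(c => c.CustomerId == 1);

        Assert.AreSame(customer1, customer2);
        Assert.AreNotEqual("NewName", customer2.CompanyName);
    }
}
```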

OverwriteChanges: unlike AppendOnly, it applies the new data. If an object is already in the context, the current and original values of the object's properties are overwritten with data source values, discarding any changes we made in the meantime. The Identity Map principle is still preserved.

NoTracking: in this scenario, objects are not tracked in the ObjectStateManager. Each time we hit the DB to get a customer, EF provides a new instance of the Customer class. So, in this case, the Identity Map principle is broken (we can find some analogies with the non-Identity-Map solution presented at the beginning of this post).

PreserveChanges: this option is a compromise between the AppendOnly and the OverwriteChanges options.

If we don’t change any property of our entity (i.e. the state of the entity is Unchanged), the current and original values in the entry are overwritten with data source values. The state of the entity remains Unchanged and no properties are marked as modified.

If we change a property of our entity (i.e. the state of the entity is Modified), the current values of modified properties are not overwritten with data source values. The original values of unmodified properties are overwritten with the values from the data source.

Entity Framework v4 compares the current values of unmodified properties with the values that were returned from the data source. If the values are not the same, the property is marked as modified.

… I received the following strange error while trying to instantiate the Uri class…

System.UriFormatException: Invalid URI: Invalid port specified.

But why?
The answer is not so obvious. It's because I was executing that code before the pack:// scheme had been registered. In fact, this scheme is registered when the Application object is created. The very simple solution is to execute the following code just before running the test:

[TestInitialize]
public void OnTestInitialize()
{
    if (!UriParser.IsKnownScheme("pack"))
        new System.Windows.Application();
}

In this post I will show how different DataTemplates related to a hierarchy of classes can be nested and, therefore, reused. The concept is very simple, but applying it in a real scenario may not be so trivial!

Let's assume we have a base ViewModel useful for editing and saving an object of our model. If the object's class has some derivations, we may want to derive the base ViewModel too, in order to mirror the model's inheritance hierarchy. Moreover, most probably we have to define different editing views taking the whole inheritance hierarchy into account. In that case, we'd like to reuse as much XAML as possible.

So, let's assume we have a base abstract Customer class and some concrete specializations, like EducationCustomer and GovernmentCustomer (see the image below). Then we design ViewModels to edit concrete instances of the Customer class. In the class diagram below you can see a base ItemEditViewModel<T>: a simple generic ViewModel which exposes a generic Item to be modified and a SaveCommand to persist it somewhere. The class also defines an abstract method, OnCanSaveItem(), which a concrete implementation must override in order to specify its own validation rules.
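A sketch of that base class (ViewModelBase, RelayCommand and the exact member names are assumptions; only Item, SaveCommand and OnCanSaveItem come from the description):

```csharp
using System.Windows.Input;

public abstract class ItemEditViewModel<T> : ViewModelBase where T : class
{
    private T item;

    // The object being edited by the view.
    public T Item
    {
        get { return item; }
        set { item = value; RaisePropertyChanged("Item"); }
    }

    // Persists Item; enabled only when validation passes.
    public ICommand SaveCommand { get; private set; }

    protected ItemEditViewModel()
    {
        SaveCommand = new RelayCommand(OnSaveItem, OnCanSaveItem);
    }

    // Concrete ViewModels provide their own validation rules.
    protected abstract bool OnCanSaveItem();

    protected abstract void OnSaveItem();
}
```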

OK, we have just defined the model and the ViewModels. Now the interesting part! Our DataTemplates could share some portions of XAML (e.g. the edit DataTemplate of GovernmentCustomer is almost identical to that of EducationCustomer, differing by just one field). So, how can we reuse DataTemplates? First, we can define the edit DataTemplate for the base Customer class:

As you can see, in this example the edit DataTemplate is referenced by key, but in a real scenario you can define your own mechanism to bind the right ViewModel to a DataTemplate for the Item to be edited and saved. In this example, the output is the following

While working on a project of mine, I had to implement interprocess communication (IPC) on a Windows CE device. In my scenario, the device vendor uses the Message Queue Point-To-Point infrastructure so that native processes can communicate with managed processes through IPC. On other Windows platforms, IPC can be achieved through named pipes (native) or remoting (managed), but none of these options is available on Windows CE. Point-to-point message queues are a little-known IPC mechanism that is efficient, flexible, and unique to Windows CE version 4.0 and later. Moreover, it can interact with the operating system, for example to retrieve power information.

If you don’t know this feature of Windows CE, first of all you should read this MSDN article:

After analyzing the managed wrapper proposed by the article, I’ve started to refactor the source code in order to make it more suitable for my needs. So, I’d like to share my design and implementation🙂.

Let’s explain some key concepts:

A message queue can be addressed by a name or, more generally, by a handle. The handle is the only way to refer to the queue if it has no name (NULL). Note that the empty string is considered a valid, non-null name.

A message queue can be read-only or write-only: a process can get a handle to a message queue just for reading or just for writing messages. If you want to read from and write to the same queue, you have to create two handles pointing to the same queue.

Message queues are FIFO. Writers can write messages in a queue until it’s full and readers can read messages from a queue until it’s empty. When a reader process invokes a read operation on a message queue, the first unread message is removed from the queue and delivered to the reading process.

As you can see in the class diagram above, I’ve defined an abstract MessageQueue class which holds the queue info (e.g. the max length, the max message length, the current readers/writers count and so on) and exposes a generic factory method for creating concrete implementations.

WriteOnlyMessageQueue

It’s a concrete class for writing messages to a queue. The class exposes some overloads of the Write() method, which write a message to the queue and let you choose whether to block the calling thread until the message can be written (that is, until the queue is no longer full).

ReadOnlyMessageQueue

It’s a concrete class for reading messages from a queue. The class exposes some overloads of the Read() method, which read a message from the queue and let you choose whether to block the calling thread until a message can be read (that is, until the queue is no longer empty).

AutoReadOnlyMessageQueue

It’s a concrete class derived from ReadOnlyMessageQueue. It uses a monitoring thread to automatically read messages as soon as they are written to the queue. The class exposes a MessageRead event which is fired for each message read from the queue.

As you can see, in this case you need a handle to the source process that owns the message queue, while the queue handle is the same one returned by the Create() method.
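Typical usage of the wrapper could look like the sketch below. The exact signatures of the generic Create() factory, Write() and the MessageRead event args are assumptions inferred from the class descriptions above, not the actual API:

```csharp
using System;

class QueueUsageSketch
{
    static void Demo()
    {
        // Hypothetical factory calls: create a writer and an auto-reader
        // pointing to the same named queue (two handles, as explained above).
        using (var writer = MessageQueue.Create<WriteOnlyMessageQueue>("MyQueue"))
        using (var reader = MessageQueue.Create<AutoReadOnlyMessageQueue>("MyQueue"))
        {
            // Fired by the monitoring thread for each message read from the queue.
            reader.MessageRead += (sender, e) => Console.WriteLine(e.Message.Length);

            // Write a message, blocking the calling thread while the queue is full.
            writer.Write(new byte[] { 0x01, 0x02 }, true);
        }
    }
}
```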
Finally, I’ve created a simple smart device project for testing this class library (Visual Studio 2008 / Framework 2.0 or 3.5), simply by refactoring the example proposed by the author of the MSDN article mentioned above.

OK, this is not the classic DateTime picker bound to a textbox… with a jQuery calendar😉.
If you need a custom datetime editor template that splits the datetime parts in drop-down lists like this…

<%= Html.EditorFor(model => model.BirthDate, "Date") %>

…or like this…

<%= Html.EditorFor(model => model.EventDateTime, "DateTime") %>

…then this post may help you. As you may know, in ASP.NET MVC 2 the default model binder has some difficulty recombining split DateTime parts posted from the View. So, if you need to define a DateTime property in your model and make a custom editor template that splits the DateTime parts into different controls (e.g. TextBox and/or DropDownList), you should first read this smart solution by Scott Hanselman. The idea is to separate the way we render the month field, the day field, the year field etc. from the mechanism that will assemble them back into a DateTime structure for model binding.
Starting from the Global.asax, the first thing to do is to register Scott’s custom model binder and then specify all the available options (the strings there are the suffixes of the fields in your View that will hold the Date, the Time, the Day etc.):

ModelBinders.Binders[typeof(DateTime)] = new DateTimeModelBinder()
{
    Date = "Date", // Date parts are not split in the View
                   // (e.g. the whole date is held by a TextBox with id "xxx_Date")
    Time = "Time", // Time parts are not split in the View
                   // (e.g. the whole time is held by a TextBox with id "xxx_Time")
    Day = "Day",
    Month = "Month",
    Year = "Year",
    Hour = "Hour",
    Minute = "Minute",
    Second = "Second"
};

Now, let’s have a look at the editor templates. In the Views\Shared\EditorTemplates directory we can put two simple templates: Date.ascx and DateTime.ascx. The former renders only the drop-down lists for the date part of the DateTime structure (Month, Day, Year), while the latter renders the time part too. Here is the code for Date.ascx:

That’s all!
Note that in the editor template above, the Hour, Minute and Second parts are rendered as HTML hidden fields, because Scott’s DateTimeModelBinder configured in the Global.asax expects a value for all six parts of the split DateTime structure. It’s just a clean workaround to make Scott’s model binder work without any change to the original code. In a real implementation the hidden fields should not be required😉.

Now, what about validation? Well, both client-side and server-side validation are quite trivial: the server-side validation can be obtained through a custom ValidationAttribute that checks whether the DateTime value is correct (e.g. the value should not be equal to DateTime.MinValue or DateTime.MaxValue).
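A minimal sketch of such an attribute could look like this (the name DateRequiredAttribute matches the attribute referenced later in this post; the exact checks are an assumption):

```csharp
using System;
using System.ComponentModel.DataAnnotations;

// Server-side validator: a DateTime bound from split fields that failed
// to assemble typically ends up at an extreme value, so reject those.
public class DateRequiredAttribute : ValidationAttribute
{
    public override bool IsValid(object value)
    {
        if (!(value is DateTime))
            return false;

        var date = (DateTime)value;
        return date != DateTime.MinValue && date != DateTime.MaxValue;
    }
}
```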

The corresponding client-side validation adapter can be implemented by deriving from the DataAnnotationsModelValidator class, which allows us to emit a validation rule for the client. In this scenario, the part of the DateTime structure that can be validated is the Date part.
So, we can create a SplittedDateRequiredValidator in order to check whether each drop-down holds a valid value. To accomplish this, a simple solution is to make the client-side validator aware of the IDs of the <select> elements holding the DateTime’s Month, Day and Year values.

Before looking at the JavaScript validator code, let’s register the SplittedDateRequiredValidator as the client-side validation adapter for all model properties decorated with the DateRequiredAttribute. To accomplish that, we have to put the following line of code in the Global.asax…
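The registration could look like this (DateRequiredAttribute and SplittedDateRequiredValidator being the types described in this post):

```csharp
// In Application_Start(): wire the client-side adapter to the attribute,
// so MVC emits the custom validation rule for decorated properties.
DataAnnotationsModelValidatorProvider.RegisterAdapter(
    typeof(DateRequiredAttribute),
    typeof(SplittedDateRequiredValidator));
```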

Finally, the client-side validator will evaluate the selected index of the drop-down lists in order to ensure that the user has selected a valid date (note that the isValidDate function simply checks whether the user has specified an existing date).

In this post, I’d like to show how we can pass multiple values to an ICommand implementation by using the CommandParameter property of an input control. This is especially useful in MVVM architectures, so that the View can interact with the ViewModel in a clean way, working around the fact that the Execute method of the ICommand interface accepts only a single object.
A solution is to use the MultiBinding class, which allows us to attach a collection of Binding objects to the target CommandParameter property of our input control. As a concrete example, let’s consider a simple search box with an OK button and an “ignore case” checkbox. The OK button is bound to a custom FindCommand defined in the ViewModel.

When the user clicks on the OK button, we want two parameters to be passed to the command: the string to be searched and the “ignorecase” option. How can we bind these two parameters to the Button’s CommandParameter?
Well, first we have to create a class to hold the parameters.

As you can see in the code above, we can iterate through the list of input values of the Convert method, check their type, and then assign the properties of the parameter class accordingly. Obviously you can implement different solutions (you always know the order of the parameters set in the XAML), but the most important thing is that the return value of the Convert method is what will be passed as the argument to the Execute method of our FindCommand.
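For reference, the parameter class and the converter described above could be sketched as follows (the class and property names are illustrative assumptions):

```csharp
using System;
using System.Globalization;
using System.Windows.Data;

// Holds the two values the FindCommand needs.
public class FindParameters
{
    public string SearchText { get; set; }
    public bool IgnoreCase { get; set; }
}

public class FindParametersConverter : IMultiValueConverter
{
    public object Convert(object[] values, Type targetType,
                          object parameter, CultureInfo culture)
    {
        // Inspect each bound value and assign it by type.
        var result = new FindParameters();
        foreach (var value in values)
        {
            if (value is string)
                result.SearchText = (string)value;
            else if (value is bool)
                result.IgnoreCase = (bool)value;
        }
        // This object becomes the argument of ICommand.Execute().
        return result;
    }

    public object[] ConvertBack(object value, Type[] targetTypes,
                                object parameter, CultureInfo culture)
    {
        throw new NotSupportedException();
    }
}
```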

To wire up the XAML to take advantage of this class, we have to include the <Button.CommandParameter> element. This contains the <MultiBinding> element, which has the Converter attribute. In the code below, the converter is added as a resource to the button to make this post easier to read, but convention usually dictates that resources are added at the Window level for reuse and readability.
Under the MultiBinding.Bindings element, we add a <Binding> element for each parameter that we need to pass to the command.

When using the SerialPort.GetPortNames() method, you are querying the current computer for a list of valid serial port names. For example, you can use this method to determine whether “COM1” and “COM2” are valid serial ports on your computer. The port names are obtained from the system registry (if the registry contains stale or otherwise incorrect data, this method will return incorrect data). The limit of this approach is that you get just an array of port names (e.g. { “COM1”, “COM2”, … }) and nothing else! If the COM ports are physical there’s no problem, but what about virtual ports connected, for example, through a USB adapter? Well, you can determine whether a port is valid, but you don’t know exactly which COM number was assigned to your device. So you need more information!

In the system Device Manager, you can see the COM port friendly name under the "Ports (COM & LPT)" heading. This means that the right COM port number can be found by using WMI🙂 A solution comes from the WMI Code Creator tool, which allows you to generate VBScript, C#, and VB.NET code that uses WMI to complete a management task such as querying for management data, executing a method from a WMI class, or receiving event notifications. A suitable WMI query is “SELECT * FROM Win32_PnPEntity WHERE ConfigManagerErrorCode = 0”. Here is a code example showing how to enumerate the information of the COM ports currently available on your system (including the friendly name, of course) by executing the query above.
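A minimal version of that enumeration could look like this (it requires Windows and a reference to System.Management.dll; the "(COM" filter on the friendly name is a simple heuristic):

```csharp
using System;
using System.Management; // add a reference to System.Management.dll

class ComPortLister
{
    static void Main()
    {
        // Query all correctly-working Plug and Play devices
        // (ConfigManagerErrorCode = 0 means "device is working properly").
        var searcher = new ManagementObjectSearcher(
            "SELECT * FROM Win32_PnPEntity WHERE ConfigManagerErrorCode = 0");

        foreach (ManagementObject device in searcher.Get())
        {
            // The friendly name shown in Device Manager,
            // e.g. "USB Serial Port (COM5)".
            var name = device["Name"] as string;
            if (name != null && name.Contains("(COM"))
                Console.WriteLine(name);
        }
    }
}
```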

There are a lot of ways to convert a byte array to the corresponding hexadecimal string. I usually adopt the BitConverter class in order to optimize the readability of the code, but starting from .NET Framework 3.5 the same task can be accomplished in a single line of code through extension methods:
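For example, both approaches side by side:

```csharp
using System;
using System.Linq;

class HexDemo
{
    static void Main()
    {
        var bytes = new byte[] { 0xDE, 0xAD, 0xBE, 0xEF };

        // Classic, readable approach: "DE-AD-BE-EF" -> "DEADBEEF"
        string viaBitConverter = BitConverter.ToString(bytes).Replace("-", "");

        // Single line with extension methods (LINQ):
        string viaLinq = string.Concat(bytes.Select(b => b.ToString("X2")));

        Console.WriteLine(viaBitConverter); // DEADBEEF
        Console.WriteLine(viaLinq);         // DEADBEEF
    }
}
```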

An important feature of the ASP.NET MVC framework is the possibility of creating asynchronous controllers. As with Asynchronous Pages in ASP.NET 2.0, the aim is to avoid “thread starvation” in your web application, preventing web clients from receiving a bad 503 status code (Server too busy). In fact, when the web server receives a request, a thread is taken from the application thread pool maintained by the .NET Framework. In a synchronous scenario, this thread lives (and can’t be reused) until all the operations complete. The asynchronous pipeline works better when the logic creates bottlenecks waiting for network-bound or I/O-bound operations. Considering that an asynchronous request takes the same amount of time to process as a synchronous one, minimizing the number of threads waiting on blocking operations is a good practice, particularly appreciated by your web server when it’s bombarded by hundreds of concurrent requests. Now, have a look at this simple asynchronous controller:
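The controller code did not survive in this post, so here is a minimal sketch of the classic MVC 2 ListAsync/ListCompleted pattern. NewsService is a hypothetical network-bound service with an event-based async API; the action and parameter names are assumptions:

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

public class NewsController : AsyncController
{
    public void ListAsync()
    {
        // Tell the framework one async operation is in flight.
        AsyncManager.OutstandingOperations.Increment();

        var service = new NewsService(); // hypothetical network-bound service
        service.GetHeadlinesCompleted += (sender, e) =>
        {
            try
            {
                // Values placed here are matched by name to the
                // parameters of ListCompleted below.
                AsyncManager.Parameters["headlines"] = e.Headlines;
            }
            finally
            {
                // Always decrement, even on failure, or the request
                // will keep waiting until it times out.
                AsyncManager.OutstandingOperations.Decrement();
            }
        };
        service.GetHeadlinesAsync();
    }

    public ActionResult ListCompleted(IList<string> headlines)
    {
        return View(headlines);
    }
}
```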

By default, ASP.NET MVC won’t call the ListCompleted method until the AsyncManager associated with the request says that there are no outstanding asynchronous operations. But it’s possible that one or more asynchronous operations might never complete! Moreover, if the callback for one of your asynchronous operations throws an exception before it calls the AsyncManager.OutstandingOperations.Decrement() method, the request will keep waiting for a decrement until it times out! So, putting the AsyncManager.OutstandingOperations.Decrement() call inside a finally block would be fine🙂. The AsyncManager object has a built-in default timeout of 45 seconds, so if the count of outstanding operations doesn’t reach zero within that time, the framework will throw a System.TimeoutException to abort the request. If you want to set a different timeout, you can use the AsyncTimeout filter to specify a different duration. If you want to allow asynchronous operations to run for an unlimited period, use the NoAsyncTimeout filter instead.

Finally, we have to say that most applications will have an ASP.NET global exception handler that deals with timeout exceptions in the same way as other unhandled exceptions. But if you want to treat timeouts in a custom way, providing different feedback to the user, you can create your own exception filter or override the controller’s OnException() method (e.g. to redirect users to a special “Try again later” page).

Web Development Helper is a free browser extension for Internet Explorer that provides a set of tools and utilities for web developers, especially Ajax and ASP.NET developers. The tool provides features such as a DOM inspector, an HTTP tracing tool, and script diagnostics with an immediate window. Web Development Helper works with IE6 and later, and requires the .NET Framework 2.0 or greater to be installed on the machine.

If you are migrating your database across different platforms or applications, you know that it cannot be done with simple copy-and-paste operations. To forget about the difficulties associated with database conversion, you should try ESF Database Convert. This wizard-based tool addresses almost any database conversion need. The advanced converting mechanisms of the tool provide smooth conversion directly from/to any of the following database formats: Oracle, MySQL, SQL Server, PostgreSQL, Visual FoxPro, Firebird, InterBase, Access, Excel, Paradox, Lotus, dBase, Text and others (e.g. Access to Oracle, Oracle to SQL Server, SQL Server to MySQL, MySQL to PostgreSQL…). You can also convert any database format with an ODBC DSN. ESF Database Convert includes support for CLOB/BLOB columns, primary/foreign keys, indexes and auto-IDs, and maps table and field names/types during conversion. It provides all the required conversion options, taking into account the peculiarities of both input and output database formats. You can convert data exactly the way you want it. The tool comes with a batch conversion mode that can enhance productivity by speeding up the entire conversion process; its users regularly convert databases with millions of records.

Here you can find an interesting diagram of programming language history. Years go by, but surprisingly you can see how apparently incompatible paths (OO and functional programming) are slowly fusing over time. For about 50 years, computer programmers have been writing code. New technologies continue to emerge, develop, and mature. Now there are more than 2,500 documented programming languages!

Here is a preview😉

Moreover, O’Reilly has produced a poster called History of Programming Languages which plots over 50 programming languages on a multi-layered, color-coded timeline.