It’s strange how common application architecture concerns are often ignored when it comes to supporting design-time data. By “design-time data” I am referring to pre-canned/fake data that is shown in Views while they are being displayed and edited in Expression Blend, or the Visual Studio visual design surface (a.k.a. Cider). There are two commonly seen approaches to supporting design-time data, each of which has severe drawbacks.

The most commonly seen approach is to use the d:DataContext and d:DesignInstance settings, which assign a designer-only data context for a View. The d:DataContext setting is ignored at run-time. This approach might seem great because it appears to keep design-time concerns consolidated in the View layer, which is the application layer in which “design-time” applies most directly. Nothing could be further from the truth. While having support for design-time data settings in the View layer might be great for early UI prototyping, it quickly becomes problematic as the rest of the application layers fall into place. The main problem with d:DataContext is that you now have to maintain two separate data contexts: the real one and the design one. Often people create entirely separate classes (or the designer tools generate dynamic classes) that mimic the real data objects and ViewModel objects that are used at run-time. This introduces a grotesque amount of duplication in the code base, just for the sake of showing some fake data in Blend. Having duplicated classes and extra settings in XAML strikes me as patently absurd. Have fun maintaining that application over the years…

Another common approach to creating design-time data is to keep the Views unaware of being in design-time, and to have their ViewModel objects handle it for them. This can easily lead to ViewModel classes that are littered with conditional logic that does one thing at design-time and something different at run-time. For example, a ViewModel that exposes a collection of Foo objects might issue a network call to get Foos at runtime, but just new up a bunch of Foos at design-time. Complicating ViewModels with repetitive conditional logic has a distinctly rotten code smell. If a ViewModel is a Model of a View, and a View should not be smart enough to know about “design-time” then, by extension, neither should its Model (i.e. the ViewModel). There has to be a better way!

Let’s take a step back and return to basics. In a layered application architecture, you separate the responsibilities and concerns into various layers. The View layer is responsible for showing things. In MVVM, the ViewModel layer is responsible for maintaining the logical state of Views, and processing user interactions. So far, we have not mentioned anything about data access. That’s the job of a layer below the ViewModel, perhaps called the Model layer or the DataAccess layer. If the application needs data, it goes to the DataAccess layer to get it. Design-time data is still data. It, too, should come from the DataAccess layer. Everything above the DataAccess layer, such as ViewModels and Views, should not be responsible for making decisions about how data is accessed, and that includes design-time data.

In an application that I’m working on these days, I created an interface that the ViewModel objects use to access and save data. It’s a modestly sized application, so only one interface has been needed so far. In a larger system you could have as many interfaces as you need to meet your data access requirements. The data access interface is implemented by three classes: one for run-time data access, one for design-time data access, and another for unit test-time data access. I use dependency inversion (specifically, the Service Locator pattern) to supply an implementation of the data access interface to the ViewModels. When the Views and ViewModels are created and start living their lives, they are blissfully unaware of the run-time context in which they exist. When running my unit tests, I provide the data access implementation with whatever data I want it to return to the ViewModel that invokes it. The design-time implementation loads up some fake data from disk and passes it back into the ViewModels, which then bubble it up to the Views on the design surface. Of course, at run-time the “real” data access implementation is used to retrieve the actual data processed by the application.
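To make the idea concrete, here is a minimal sketch of the approach. The names (IFooDataAccess, DesignTimeFooDataAccess, ServiceLocator, etc.) are hypothetical stand-ins, since the post does not show the real interface from my application:

```csharp
using System;
using System.Collections.Generic;

public class Foo
{
    public string Name { get; set; }
}

// The single data access interface that the ViewModels depend on.
public interface IFooDataAccess
{
    IEnumerable<Foo> GetFoos();
    void SaveFoo(Foo foo);
}

// The design-time implementation just returns canned data. The run-time
// implementation would hit a service or database instead, and the
// test-time implementation would return whatever a test configures.
public class DesignTimeFooDataAccess : IFooDataAccess
{
    public IEnumerable<Foo> GetFoos()
    {
        yield return new Foo { Name = "Sample Foo 1" };
        yield return new Foo { Name = "Sample Foo 2" };
    }

    public void SaveFoo(Foo foo) { /* no-op at design time */ }
}

// A bare-bones service locator. The only context-aware code in the app
// is the startup (or designer) code that decides what to register here.
public static class ServiceLocator
{
    static readonly Dictionary<Type, object> _services = new Dictionary<Type, object>();

    public static void Register<T>(T service) => _services[typeof(T)] = service;
    public static T Resolve<T>() => (T)_services[typeof(T)];
}

// The ViewModel never knows which implementation it received.
public class FooListViewModel
{
    public List<Foo> Foos { get; }

    public FooListViewModel()
    {
        var dataAccess = ServiceLocator.Resolve<IFooDataAccess>();
        Foos = new List<Foo>(dataAccess.GetFoos());
    }
}
```

Swapping the registered implementation is the only change needed to move between run-time, design-time, and test-time.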

To me, the primary benefit of this approach is that almost the entire system has no concept of “run-time” vs. “design-time” vs. “test-time.” The only place in the code base that is aware of the run-time context is the tiny bit of code that determines which implementation of the data access interface to make available to the ViewModels.

I must admit, I had never really become too comfortable with the C# ‘yield’ keyword until recently. I knew that it was introduced in C# 2.0 as a means of simplifying the creation of an enumerator. I also knew that the C# compiler transforms the code in a method that uses the yield keyword. A compiler-generated state machine class exists behind the scenes, which implements IEnumerator and contains the logic to produce the sequence you declared in C#. Beyond that vague understanding, I was not too familiar with it. It felt “odd” so I rarely used it.
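A tiny example makes the key point visible. Calling an iterator method runs none of its body; each MoveNext call on the enumerator runs just enough code to reach the next ‘yield return’:

```csharp
using System;
using System.Collections.Generic;

static class YieldDemo
{
    // The compiler rewrites this method into a hidden state machine class.
    // Nothing below executes until the caller starts enumerating.
    public static IEnumerable<int> Numbers()
    {
        Console.WriteLine("producing 1");
        yield return 1;
        Console.WriteLine("producing 2");
        yield return 2;
    }
}
```

Calling YieldDemo.Numbers() prints nothing; only iterating the result triggers the Console.WriteLine calls, one “producing” line per MoveNext.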

The article also has a demo program that lazy-loads each item’s children. That demo does not provide the ability to search. Shortly after publishing the article, two people asked how to have a lazy-loaded tree with search capabilities. Aside from the fact that performing a search that produces no matching items will force all of the items to be loaded, it is a reasonable question. I decided to implement a solution.

At the time, I misunderstood exactly how my search method used the ‘yield’ keyword. I was under the false assumption that it executed the search logic to completion when the FindMatches method was called, and stored the results in a collection of some type. As we will see later, this is entirely untrue, but I, too, can be an idiot sometimes. :)

I took the load-on-demand sample and added in the ability to search the items. The UI looks like this:

Based on my misunderstanding of how a yield-based enumerator works, I thought it would be a bad fit in this situation. Since I (incorrectly) thought that it walked down the entire data tree upon creation, performing a search with it would have forced the entire tree to be loaded into memory at once. So I thought that I had to create a custom search algorithm that would perform the search and only load as many items into memory as necessary to find the first matching item. After finding the first match, the next time I perform the search, it should only load as many items as necessary to find the next matching item. And so on, and so on…

I have plenty of experience building recursive algorithms that walk over trees, but I had never built one like this. Most recursive algorithms store their state on the callstack. Each time the recursive method is called, a new frame is pushed onto the callstack, and it keeps track of the local variable values for you. By ‘local variables’ I mean things like the current item, the index of a for loop, and the parent item to which we return after processing a sub-tree.

That’s all well and good, until you need to create a recursive method that returns a value and, later on, needs to resume processing from where it left off last time it was invoked. This is exactly what I needed to build. I needed to create a recursive algorithm that stores its state in an external data structure, so that I can effectively save and load that state between executions. It was an exciting Sunday morning programming exercise, for sure!
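The shape of such an algorithm can be sketched briefly. This is not the code from the demo app; it is a simplified illustration with a hypothetical Node type, showing the core trick of keeping the pending-work stack in a field so each call resumes where the last one stopped:

```csharp
using System;
using System.Collections.Generic;

// A hypothetical node type standing in for TreeViewItemViewModel.
public class Node
{
    public string Name;
    public List<Node> Children = new List<Node>();
}

// A resumable depth-first search: the pending-work stack lives in a field
// instead of on the callstack, so FindNext can pick up where it left off.
public class ResumableSearch
{
    readonly Stack<Node> _pending = new Stack<Node>();
    readonly string _term;

    public ResumableSearch(Node root, string term)
    {
        _term = term;
        _pending.Push(root);
    }

    public Node FindNext()
    {
        while (_pending.Count > 0)
        {
            Node current = _pending.Pop();

            // Push children in reverse so they are visited in document order.
            for (int i = current.Children.Count - 1; i >= 0; --i)
                _pending.Push(current.Children[i]);

            if (current.Name.Contains(_term))
                return current; // state survives in _pending for the next call
        }
        return null; // no more matches
    }
}
```

Each call to FindNext visits only as many nodes as needed to reach the next match, which is exactly the load-on-demand behavior I was after.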

The source code of the demo app is available at the bottom of this post. In it, you’ll find a TreeViewItemViewModelSearch.cs file. It contains all of the code involved with this implementation. That file contains a static class, TreeViewItemViewModelSearch, which contains just one public method:

The SearchResult class implements IEnumerable<TreeViewItemViewModel> and its GetEnumerator returns an instance of SearchEnumerator, which implements IEnumerator<TreeViewItemViewModel>. This search logic is invoked by the enumerator’s MoveNext method:


The helper classes seen in that algorithm are listed below:

As it turns out, none of this is necessary at all! If you look in the TreeViewItemViewModel class, you will see how there are two implementations of the search functionality. One of them uses the very elaborate code we just saw, and the other, which works JUST AS WELL as my code, is a simple method with the ‘yield’ keyword. Click on the following image to see how both techniques are used:

As seen in the FindMatches_Yield method, all that I had to do was add in code that lazy-loads the child items before it searches them! The compiler-generated implementation of my search logic will be invoked every time the enumerator’s MoveNext is called, and it only searches for one item at a time. This is perfect for a load-on-demand scenario. If I had known about this earlier, I never would have bothered to write that custom search enumerator. But then again, it was a lot of fun and quite interesting to implement, so it’s all good!
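For readers without the download handy, the yield-based version boils down to something like the following sketch. The LazyNode type and EnsureChildrenLoaded member are hypothetical simplifications of the real TreeViewItemViewModel, but the recursion-plus-yield structure is the point:

```csharp
using System;
using System.Collections.Generic;

public class LazyNode
{
    public string Name;
    public bool ChildrenLoaded;
    public List<LazyNode> Children = new List<LazyNode>();

    // Stand-in for the real lazy-load; fetches children on first touch.
    public void EnsureChildrenLoaded()
    {
        if (!ChildrenLoaded)
        {
            // ...load child items from the data source here...
            ChildrenLoaded = true;
        }
    }
}

public static class LazySearch
{
    // Each MoveNext on the returned enumerator runs only far enough to
    // find the next match, loading children just before descending.
    public static IEnumerable<LazyNode> FindMatches(LazyNode item, string term)
    {
        if (item.Name.Contains(term))
            yield return item;

        item.EnsureChildrenLoaded();
        foreach (LazyNode child in item.Children)
            foreach (LazyNode match in FindMatches(child, term))
                yield return match;
    }
}
```

Because the iterator is lazy, a subtree is only loaded when the enumeration actually descends into it, so the compiler-generated enumerator gives the load-on-demand behavior for free.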

I have been thinking a lot about the WPF TreeView control lately. My recent article about it, which, by the way, is my wedding gift to Sacha Barber, discusses how the ideal way of working with a TreeView is to bind it to a ViewModel, and then program against the ViewModel. This got me thinking.

When I first started learning about Windows Forms, I thought the TreeView control had a strange name. Why not call it the Tree control? Why did they add the word “View” to the end? At the time, the .NET 1.0 era, I don’t think any other control had the word “View” tacked onto the end of its name, except for ListView. After a while, I got used to the name and never gave it much thought, until now.

The WinForms TreeView control is not really providing a “view” of a tree: it is the tree. You create a tree data structure in the control, where each node in the tree is of type TreeNode, and the control renders those nodes. One could argue that the WinForms TreeView control’s name is a misnomer, because it can only show its own TreeNodes. By the same reasoning, we should call the Menu a MenuView and the ComboBox a ComboBoxView, but that is not the case. WinForms should have a Tree control instead of a TreeView control.

The WPF TreeView control is very different. Granted, at the end of the day, it still has a tree data structure consisting of TreeViewItem objects, but the similarities end there. We can use the TreeView in our WPF programs to literally provide a view of a tree. The fact that the WPF TreeView supports data binding allows us to create a tree data structure, perhaps of custom ViewModel objects, and provide it to the TreeView control’s ItemsSource property. At that point, the TreeView will do all the grunt work of creating TreeViewItems and hooking them together for us. We are not required to focus much on the TreeView, but more on the data we actually care about.

We can provide in-depth customizations for how the TreeView renders the items, by setting properties such as ItemContainerStyle to affect properties of every TreeViewItem, and ItemTemplate to specify what each item should look like and where its child items come from. We declare all of this in XAML, and let TreeView handle all the details of applying our styles and templates. We can remain focused on the tree data structure, its behavior, its state, and just tell the TreeView how we want to view the tree.
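A minimal sketch of what that looks like in XAML, assuming hypothetical RootItems, IsExpanded, Children, and Name properties on the ViewModel objects:

```xml
<TreeView ItemsSource="{Binding RootItems}">
  <TreeView.ItemContainerStyle>
    <!-- Applied to every TreeViewItem the TreeView generates for us. -->
    <Style TargetType="TreeViewItem">
      <Setter Property="IsExpanded" Value="{Binding IsExpanded, Mode=TwoWay}" />
    </Style>
  </TreeView.ItemContainerStyle>
  <TreeView.ItemTemplate>
    <!-- HierarchicalDataTemplate tells the TreeView what each item looks
         like and where its child items come from. -->
    <HierarchicalDataTemplate ItemsSource="{Binding Children}">
      <TextBlock Text="{Binding Name}" />
    </HierarchicalDataTemplate>
  </TreeView.ItemTemplate>
</TreeView>
```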

It might seem like a minor difference, but, in my opinion, it represents a significant change of mindset. In WPF, the TreeView control provides a view of a tree.

In a surge of Mahler-inspired geekery, I wrote and published what I consider to be one of my best WPF articles. If you have ever thought that the WPF TreeView is too complicated and doing anything non-trivial with it is difficult, think again! Over the past few days I have been solidifying my TreeView programming techniques, thanks to an invigorating e-mail thread with Sacha Barber, and it all culminated in this article:

WPF is great because it practically requires you to separate an application’s data from the UI. All of the problems listed in the previous section derive from trying to go against the grain and treat the UI as a backing store. Once you stop treating the TreeView as a place to put data, and start treating it as a place to show data, everything starts working smoothly. This is where the idea of a ViewModel comes into play.

In my opinion, one of the biggest differences between programming in WinForms and WPF is how much you use data binding. WPF programs typically make heavy use of data binding, but many WinForms apps do not use it nearly as much.

I just published an article on CodeProject that is intended to help people with WinForms and ASP.NET backgrounds to start thinking in terms of WPF data binding. It walks you through four iterations of the same simple app: going from all code with no data binding to all XAML with data binding.

A few months ago I flew out to Redmond so that I could speak at Microsoft’s WPF Bootcamp. I gave a presentation about implementing the Model-View-Whatever pattern in WPF, which was received very well by the attendees. After giving that presentation I solidified the material into my ‘Using MVC to Unit Test WPF Applications‘ article on CodeProject. This seminal work is the foundation of the structural skinning architecture seen in Podder.

The entire WPF Bootcamp 2008 was videotaped and recently published online. Karsten announced it a few days ago in this post. All of the videos and associated downloads are hosted in a Silverlight application (of course), which you can view here. NOTE: That page comes up empty for me in FireFox, so be sure to open it in Internet Explorer.

You can find my presentation under Day 3. It is called “Hello, Real World!” and runs for about one hour. I highly suggest you check out some of the other presentations, too. There were many interesting things discussed during the bootcamp.

In case you want to download the entire video files, which are quite large, you can grab them individually from this page. My video is around 330MB so it might take a while to download it, but if you’re just dying to fill up that new external hard drive… :)

This blog post explains what routed commands are, and why you should use them. My explanation does not cover all of the details, nor does it show all of the various uses of routed commands. For a more comprehensive review of commanding, but with a very different (and somewhat misleading) perspective on the issue, be sure to check out the official documentation here.

Many demonstrations of routed commands explain them in the context of text editing, which is just one of the many ways to use them. In fact, so many explanations about routed commands focus on how to use them in text editing scenarios that many people I have met assume routed commands have no practical use outside of text editing. That is an incorrect, yet understandable, interpretation of one of WPF’s coolest features.

What is a routed command? Well, the answer to that question is too complicated to explain without first taking a step back and explaining some other things. To understand what a routed command is, we will examine what “routed” means, and then what “command” means. Once we have those two ideas defined, we can combine them and see what happens.

Routed

In WPF, the visual elements that constitute a user interface form a hierarchical tree structure. This “element tree” has many purposes, and is a core part of how WPF operates. The root node in the tree is a container, such as a Window or a Page. Everything seen within that top-level container is a descendant element in its element tree. There are two “views” of this element tree: the visual tree and logical tree. To learn more about that concept, I recommend you read this article.

In general, the term “route” refers to a path between two points in a network. Take that idea and apply it to the WPF element tree, and now you have a path between visual elements in a user interface. Now imagine an electric current quickly flowing through that route, zapping each element it passes through. When an element is zapped by the electric current, it has the opportunity to come to life and do whatever it wants (i.e. execute an event handler). That is pretty much how I think about routed events, except my complete mental model involves unicorns, robots, and laser beams, as seen in the diagram below:

…just kidding.

One could provide a more mundane definition of a routed event as a notification that travels a route between two elements in the WPF visual tree. They are not really events, in the standard sense of .NET events. People wrap routed events with standard .NET events, out of convention, just to make them more usable and “.NETish.” But in reality, routed events are not events. They are a feature of the WPF framework, intimately tied to the WPF visual tree.

Routed commands use routed events, as do many other parts of WPF, so it is important to understand them thoroughly. If you are still hazy on the idea of routed events, check out the blog post I wrote about the topic here.

Command

The Command pattern is nothing new. It means that you create an object that knows how to do some task, and then the entire application relies on that object to do that task. For example, if I am creating an image editing application, I could create a command object called “CreateNewImage” or something like that. When the user either clicks the “New” menu item, or clicks the “New” toolbar button, or hits Ctrl + N, etc., the same CreateNewImage command object would create a new image.

In WPF, we have the ICommand interface, which makes it easy to create command objects that WPF classes can use. When you implement ICommand, you place all the knowledge of what needs to be done to accomplish the task in the command. You can then assign that command to, say, a Button’s Command property so that the command will execute when the user clicks the Button.

The ICommand interface has three members that all commands must define. It has two methods and one event. The CanExecute method determines if the state of the application currently supports the execution of the command. If CanExecute returns true, then the Execute method can be invoked, which is what actually performs the execution logic baked into the command. If CanExecute returns false, all controls whose Command property references that command object are disabled so that the user cannot attempt to execute the command at that time. This is a very convenient feature, especially if multiple controls in the user interface are all bound to that command.

If the command determines that its “can execute” status has changed, it can raise the CanExecuteChanged event, defined by the ICommand interface. This lets WPF know that it should call the command’s CanExecute method again, to query the new status.
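A common way to implement ICommand for ordinary (non-routed) commands is the pattern usually called RelayCommand or DelegateCommand. This is a standard sketch, not code from any of my articles:

```csharp
using System;
using System.Windows.Input;

// An ICommand that delegates its work to the two delegates it is given.
public class RelayCommand : ICommand
{
    readonly Action _execute;
    readonly Func<bool> _canExecute;

    public RelayCommand(Action execute, Func<bool> canExecute = null)
    {
        _execute = execute ?? throw new ArgumentNullException(nameof(execute));
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    // If no predicate was supplied, the command can always execute.
    public bool CanExecute(object parameter) => _canExecute == null || _canExecute();

    public void Execute(object parameter) => _execute();

    // Call this when the "can execute" status changes, so that WPF
    // re-queries CanExecute and enables/disables bound controls.
    public void RaiseCanExecuteChanged() => CanExecuteChanged?.Invoke(this, EventArgs.Empty);
}
```

Assigning an instance of this to a Button’s Command property wires up both execution and automatic enabling/disabling.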

Routed + Command = RoutedCommand

So what happens when you mix routed events with commands? Well, you get a very powerful way to think about how an application represents and implements its features. You get routed commands.

A routed command is a command object that does not know how to accomplish the task it represents. It simply represents the task/feature. When asked if it can execute and when told to execute, it simply delegates that responsibility off to somewhere else.

To whom does it delegate? The answer, confusingly enough, is that it does not know.

A routed command does not determine if it can execute and what to do when executed. Instead, some routed events travel through the element tree, giving any element in the UI a chance to do the command’s work for it. It truly is just a semantic identifier: a named entity that represents a feature of a program.

When a routed command is asked if it can execute, the routed PreviewCanExecute and CanExecute events tunnel down and bubble up the element tree. These events give all elements between the root of the tree (such as a Window) and the source element that references the command a chance to determine if the command can execute. When the command is told to execute, the PreviewExecuted and Executed routed events travel the same event route, checking to see if anybody cares to react to the event. If so, they can run a handler for the event; if not, the event finishes zapping the elements and nothing happens.
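In markup, that wiring looks something like the following sketch. MyCommands.CreateNewImage is a hypothetical custom RoutedCommand, and the two handlers would live in the Window’s code-behind:

```xml
<Window x:Class="MyApp.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="clr-namespace:MyApp">
  <Window.CommandBindings>
    <!-- The Window, an ancestor of the Button, supplies the command's
         "can execute" logic and execution logic. -->
    <CommandBinding Command="{x:Static local:MyCommands.CreateNewImage}"
                    CanExecute="CreateNewImage_CanExecute"
                    Executed="CreateNewImage_Executed" />
  </Window.CommandBindings>
  <Button Command="{x:Static local:MyCommands.CreateNewImage}" Content="New" />
</Window>
```

The Button only names the command; the CanExecute and Executed events route up to the Window’s CommandBinding, which does the actual work.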

Who Cares?

You might be wondering why it is a good idea to use routed commands at all. Why bother? What’s wrong with just hooking a Button’s Click event and doing things the “normal” way?

Well, you do not have to use routed commands if you do not want to. You certainly can just hook a Button’s Click event and go to town. By extension, why bother with a data layer and a business layer? Why not just stick your whole application into Window1.xaml.cs and be done with it? That would be much easier, right?

My point is, there are very real advantages to adhering to the loose coupling proffered by routed commands. Let’s review them.

First, all controls using the same RoutedCommand will automatically be disabled when the command cannot execute. If you have ever used a program where clicking on enabled buttons does not actually do anything, you will appreciate this.

Second, there is less event handler code to write since most of the wiring is provided for you by the commanding system. You do not have to add event handlers for each UI element that executes the same command, whereas using events directly off of UI elements requires many handlers that all basically do the same thing.

Third, if you use my implementation of Model-View-Controller or Structural Skinning, using routed commands is an absolute must. Learn more about MVC here and Structural Skinning here.

Fourth, using routed commands makes it possible to decouple the Software Engineering team from the Visual Design team. The developers don’t have to worry about what type of element is consuming application functionality; as long as the UI executes the right commands, all is well. This also frees the designers from having to worry about such details so that they can focus on creating a great user experience.

Fifth, but certainly not last, using routed commands as part of your design process forces you to map functional requirements to commands up front. This process of taking a list of required features and translating them into RoutedCommand objects with meaningful names is invaluable to help the team as a whole understand the system they are about to build. It also creates a set of short terms that people can use to refer to features.

Show me Some Code

I threw together a quick demo project in Visual Studio 2008, showing the simplest possible usage of a custom routed command. Download the code here: Routed Command Demo. Be sure to rename the file extension from .DOC to .ZIP, and then decompress the file.