Most people think of workflows as a tool to represent and automate back-end business processes. Back-end business processes normally require some user interaction but their main purpose is not to drive the user experience or manage the UI. However, there is a growing type of application that leverages workflow as a tool to drive the user interaction and drive the user experience of an interactive process. This type of technology is called page flow.

Last year at TechEd, we showed off some bits we had been working on internally that were designed to make that possible: the ability to model the user interaction of an application using workflow. This approach lets developers continue managing the complexity of their application in a structured and scalable manner. It turned out that the code we showed at TechEd wasn't going to end up in any of the product releases, so the dev team requested permission to release that code as a sample of how one can implement a generic navigation framework using WF that can support multiple UI technologies (e.g., ASP.NET and WPF). This year, I just finished giving a talk showing this off and announcing that it is available today!

Thanks go to Shelly Guo, the developer, and Israel Hilerio, the PM, who worked on this feature, and to Jon Flanders for providing packaging and quality control.

Navigate to setup.exe and run the setup; this will copy the sample projects and the source code for the sample, as well as some new Visual Studio project templates.

Now, let's open up a sample project. Navigate to the samples directory and open the ASPWorkflow sample, which shows off both an ASP.NET front end and a WPF controller (you can actually use the two together). Let's get to the good stuff right away and open up the workflow file.

Wow… what's going on here? It kind of looks like a state machine, but not really. What has been done here is to create a new base workflow type. Things like SequentialWorkflow and StateMachineWorkflow aren't the only ways to write workflows, they are just two common patterns of execution. A NavigatorWorkflow type has been created (and you can inspect the source and the architecture document to see what this does) and a WorkflowDesigner has been created for it as well (again, this source is available as a guide for those of you who are creating your own workflow types).

Each of the activities you see on the diagram above is an InteractionActivity, representing the interaction between the user (via the UI technology of their choosing) and the process. A nice model is to think of the InteractionActivity as mapping to a page within a UI. The output property is the information that is sent to that page (a list of orders or addresses to display) and the input is the information that is received from the page when the user clicks "submit". The InteractionActivity is a composite activity, allowing one to place other activities within the activity to be executed when input is received. The interesting property of the InteractionActivity is the Transitions collection. By selecting this and opening its designer, we are presented with the following dialog:

This allows us to specify n transitions from this InteractionActivity, or "page," to other InteractionActivities, and we can specify each transition via a WF activity condition. This way, we could forward orders greater than $1000 to a credit verification process, or orders containing fragile goods through a process to obtain insurance from a shipper. What's cool about this is that my page does not know about that process; it just says "GoForward," and my process defines what comes next. This decouples the pages from the logic of your process.

Finally, let's look inside an ASP.NET page and see what we need to do to interact with the process:

AspNetUserInput.GoForward("Submit", userInfo, this.User);

This code specifies the action and submits a userInfo object (containing various information gathered from the page) to the InteractionActivity (in this case, it submits to the Page2 InteractionActivity). If we look at what we've configured as the Input for this InteractionActivity, we see the following, which we can then refer to in the transition rules in order to make decisions about where to go next:
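To make the shape of this more concrete, here is a hedged sketch of what a page's submit handler might look like. AspNetUserInput.GoForward is the sample's API as shown above; the UserInfo type, its properties, and the control names are illustrative assumptions, not taken from the sample.

```csharp
// Illustrative code-behind for an ASP.NET page using the sample's
// navigation framework. UserInfo and the controls are assumptions.
protected void SubmitButton_Click(object sender, EventArgs e)
{
    UserInfo userInfo = new UserInfo();
    userInfo.Name = NameTextBox.Text;
    userInfo.OrderTotal = decimal.Parse(TotalTextBox.Text);

    // "Submit" names the action; the workflow's transition rules,
    // not the page, decide which InteractionActivity comes next.
    AspNetUserInput.GoForward("Submit", userInfo, this.User);
}
```

The page never names the next page; that knowledge stays in the workflow's Transitions collection.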

There's plenty of other stuff we could talk about here (support for the back button, persistence, etc.), and I could continue to ramble on in another record-length blog post, but I will stop here for now. I will continue to blog about this, and I look forward to hearing any and all feedback on what you'd be interested in seeing. Moving forward, there aren't any formal plans around this, but if there is enough interest in the community, we could get it created as a project on CodePlex. If that sounds intriguing, either contact me through this blog or leave a comment so that I can gauge the interest in such a scenario.

The wheels of evangelism never stop rolling. Just a few months ago I was blogging that .NET 3.0 was released. I've been busy since then, and now I can talk about some of that. Today, the March CTP of Visual Studio "Orcas" was released to the web. You can get your fresh hot bits here. Samples will be coming shortly. Thom has a high level summary here.

UPDATE: Wednesday, 2/28/2007 @ 11pm. The readme file is posted here; a few minor corrections have been made to the caveats below.

More updates... corrections to another caveat (a post-build event is required to get the config to be read).

A Couple of Minor Caveats

Since this is a CTP, it's possible that sometimes the wrong bits end up in the right place at the wrong time. Here are a few things to be aware of (not intended to be a comprehensive list):

Declarative Rules in Workflows: There is an issue right now where the .rules file does not get hooked into the build process correctly.

Solution: Use code conditions, or load declarative rules for Policy activities using the RulesFromFile activity available at the community site.

WF Project templates are set to target the wrong version: As a result, trying to add assemblies that are 3.0.0.0 or greater will not be allowed.

Solution: Right-click the project, select Properties, and change the targeted version of the framework to 3.0.0.0 or 3.5.0.0.

A ServiceHost may not read config settings because the app config does not get copied to the bin directory (update: only on Server 2003): you will get an exception that "no application endpoints can be found."

Solution: For the time being, configure the WorkflowServiceHost in code (using AddServiceEndpoint() and referencing the WorkflowRuntime property to configure any services on the workflow runtime).
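A sketch of that workaround follows. The contract, binding choice, address, and connection string are illustrative; the WorkflowRuntime property is the one mentioned above, and SqlWorkflowPersistenceService stands in for whatever runtime services you need.

```csharp
// Work around the config issue by wiring the endpoint up in code.
WorkflowServiceHost host = new WorkflowServiceHost(typeof(OrderWorkflow));
host.AddServiceEndpoint(
    typeof(IOrderService),              // contract (illustrative)
    new WSHttpContextBinding(),         // a context-aware binding
    "http://localhost:8080/OrderService");

// Runtime services (persistence, tracking, etc.) can be configured in
// code via the WorkflowRuntime property as well.
host.WorkflowRuntime.AddService(
    new SqlWorkflowPersistenceService(connectionString));

host.Open();
```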

This also means that a number of the workflow enabled services samples will not work out of the box. Replace the config based approach with the code based approach and you will be fine. I will try to post modified versions of these to the community site shortly.

WorkflowServiceHost exception on closing: You will get an exception that "Application image file could not be loaded... System.BadImageFormatException: An attempt was made to load the program with an incorrect format"

Solution: Use the typical "These are not the exceptions you are looking for" jedi mind trick. Catch the exception and move along in your application, as if there is nothing to see here.

Tools from the Windows SDK that you've come to know and love, like SvcConfigEditor and SvcTraceViewer, are not available on the VPC.

Solution: Copy these in from somewhere else and they will work fine. The SvcConfigEditor will even pick up the new bindings and behaviors to configure the services for some of the new functionality.

The CTP is not something designed for you to go into production with; it's designed to let you explore and learn more about the technology. There is no go-live license associated with it. Since most of these issues have some workaround, this shouldn't prevent you from checking these things out (because they are some kind of neat).

New Features In WF and WCF in "Orcas"

Workflow Enabled Services

We've been talking about this since we launched at PDC 2005. There was a session at TechEd 2006 in the US and Beijing that mentioned bits and pieces of this. One of the key focus areas is the unification of WCF and WF. Not only have the product teams joined internally, but the two technologies are very complementary. So complementary, in fact, that everyone usually asks "so how do I use WCF services here?" when I show a workflow demo. That's fixed now!

The Send and Receive activities live inside of the workflow that we define. The cool part of the Receive activity is that we have a contract designer, so you don't have to dive in and create an interface for the contract; you can just specify it right on the Receive activity, allowing you a "workflow-first" approach to building services.

Once we've built a workflow, we need a place to expose it as a service. We use the WorkflowServiceHost, a subclass of ServiceHost, in order to host these workflow-enabled services. The WorkflowServiceHost takes care of the nitty-gritty details of managing workflow instances, routing incoming messages to the appropriate workflow, and performing security checks as well. This means that the code required to host a workflow as a WCF service is now reduced to four lines of code or so. In the sample below, we are not setting the endpoint info in code due to the issue mentioned above.
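The sample itself is a screenshot, so here is a hedged reconstruction of those four-or-so lines (the workflow type name is illustrative):

```csharp
// Host a workflow type as a WCF service.
WorkflowServiceHost host = new WorkflowServiceHost(typeof(OrderWorkflow));
host.Open();
Console.WriteLine("Service is up; press Enter to exit.");
Console.ReadLine();
host.Close();
```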

To support some of the more sophisticated behavior, such as routing messages to a running workflow, we introduce a new channel extension responsible for managing context. In the simple case, this context just contains the workflowId, but in a more complicated case, it can contain information similar to the correlation token in v1 that allows the message to be delivered to the right activity (think three Receives in parallel, all listening on the same operation). Out of the box there are the wsHttpContextBinding and the netTcpContextBinding, which implicitly support the idea of maintaining this context token. You can also roll your own binding and attach a Context element into the binding definition.
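As a config sketch, selecting one of the context-aware bindings is just a matter of naming it on the endpoint (the address, contract, and service here are illustrative):

```xml
<endpoint address="http://localhost:8080/OrderService"
          binding="wsHttpContextBinding"
          contract="IOrderService" />
```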

The Send activity allows the consumption of a service, and relies on configuration to determine exactly how we will call that service. If the service we are calling is another workflow, the Send activity and the Receive activity are aware of the context extensions and will take advantage of them.

With the Send and Receive activities, it gets a lot easier to do workflow-to-workflow communication, as well as more complicated messaging patterns.

Another nice feature of the work that was done to enable this is that we now have the ability to easily support durable services. These are "normal" WCF services written in code that utilize an infrastructure similar to the workflow persistence store in order to provide durable storage of state between method calls.
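A hedged sketch of what such a durable service can look like follows. The attribute names are as they appear in the .NET 3.5 bits (the CTP may differ slightly), and the shopping-cart service itself is invented for illustration:

```csharp
// State in the fields below is persisted between calls.
[Serializable]
[DurableService]
public class ShoppingCartService : IShoppingCart
{
    private List<string> items = new List<string>();

    [DurableOperation]
    public void AddItem(string item)
    {
        items.Add(item);
    }

    // Completing the instance removes the persisted state.
    [DurableOperation(CompletesInstance = true)]
    public int Checkout()
    {
        return items.Count;
    }
}
```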

As you can imagine, I'll be blogging about this a lot more in the future.

JSON / AJAX Support

While there has been a lot of focus on the UI side of AJAX, there still remains the task of creating the sources for the UI to consume. One can return POX (Plain Old XML) and then manipulate it in the JavaScript, but that can get messy. JavaScript Object Notation (JSON) is a compact, text-based serialization of a JavaScript object that the browser can consume directly.
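For example, a Customer object might come across the wire as the following JSON (an illustrative shape, not taken from the samples):

```json
{ "Name": "Contoso", "Id": 42, "Orders": [ { "OrderId": 7, "Total": 99.95 } ] }
```

On the browser side, this deserializes straight into a JavaScript object, so there is no XML parsing to write.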

In WCF, we can now return JSON with a few switches of config. The following config:

<service name="CustomerService">
  <endpoint contract="ICustomers"
            binding="webHttpBinding"
            bindingConfiguration="jsonBinding"
            address=""
            behaviorConfiguration="jsonBehavior" />
</service>

<webHttpBinding>
  <binding name="jsonBinding" messageEncoding="Json" />
</webHttpBinding>

<behaviors>
  <endpointBehaviors>
    <behavior name="jsonBehavior">
      <webScriptEnable />
    </behavior>
  </endpointBehaviors>
</behaviors>

will allow a function like this:

public Customer[] GetCustomers(SearchCriteria criteria)
{
    // do some work here
    return customerListing;
}

to return JSON when called. In JavaScript, I would then have Customer objects to manipulate. We can also serialize from JavaScript to JSON, so this provides a nice way to send parameters to a method. So, in the above method, we can send in the complex SearchCriteria object from our JavaScript. There is an extension to the behavior that creates a JavaScript proxy. So, by referencing /js as the source of the script, you can get IntelliSense in the IDE, and we can call our services directly from our AJAX UI.

We can also use the JSON support in other languages like Ruby to quickly call our service and manipulate the object that is returned.

I think that's pretty cool.

Syndication Support

While we have the RSS Toolkit in V1, we wanted to make syndication part of the toolset out of the box. This allows a developer to quickly return a feed from a service. Think of this as another way to expose your data for consumption. We have introduced a SyndicationFeed object that is an abstraction of the idea of a feed that you program against. We then leave it up to config to determine if that is an Atom or RSS feed (and would it be WCF if we didn't give you a way to implement a custom encoding as well?). This is cool if you just want to create a simple feed, but it also allows you to create a more complicated feed whose content is not just plain text. For instance, the Digg feed has information about the submission, and the Flickr feed has info about the photos. Your customer feed may want an extension that contains the customer info you will allow your consumers to access. The SyndicationFeed object allows you to create these extensions, and the work of encoding to the specific format is taken care of for you. So, let's see some of that code (note, this is from the samples above):
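The code screenshots aren't reproduced here, but a hedged sketch of working with the SyndicationFeed object (types from System.ServiceModel.Syndication; the feed contents and URLs are invented for illustration) looks roughly like this:

```csharp
// Build a feed against the abstract SyndicationFeed model...
SyndicationFeed feed = new SyndicationFeed(
    "Customer Feed",
    "Recently added customers",
    new Uri("http://example.com/customers"));

feed.Items = new List<SyndicationItem>
{
    new SyndicationItem(
        "New customer: Contoso",
        "Contoso signed up today.",
        new Uri("http://example.com/customers/1"))
};

// ...and let a formatter (or config) decide the wire format:
Rss20FeedFormatter rss = new Rss20FeedFormatter(feed);
```

Swapping Rss20FeedFormatter for Atom10FeedFormatter is all it takes to emit Atom instead; the feed-building code does not change.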

HTTP Programming Support

In order to enable both of the above scenarios (Syndication and JSON), there has been work done to create the webHttpBinding to make it easier to do POX and HTTP programming.

Here's an example of how we can influence this behavior and return POX. First the config:

<service name="GetCustomers">
  <endpoint address="pox"
            binding="webHttpBinding"
            contract="Sample.IGetCustomers" />
</service>

Now the code for the interface:

public interface IRestaurantOrdersService
{
    [OperationContract(Name = "GetOrdersByRestaurant")]
    [HttpTransferContract(Method = "GET")]
    CustomerOrder[] GetOrdersByRestaurant();
}

The implementation of this interface does the work to get the CustomerOrder objects (a data contract defined elsewhere), and the returned XML is the data contract serialization of CustomerOrder (omitted here for brevity). With parameters this gets more interesting, as these are things we can pass in via the query string or via a POST, allowing arbitrary clients that can form URLs and receive XML to consume our services.
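As a hedged sketch of adding a parameter (reusing the HttpTransferContract attribute from the CTP shown above; the parameter name is illustrative, and later builds renamed these attributes):

```csharp
[OperationContract(Name = "GetOrdersByRestaurant")]
[HttpTransferContract(Method = "GET")]
CustomerOrder[] GetOrdersByRestaurant(string restaurantName);
```

A client could then issue an ordinary HTTP GET with the parameter on the query string and receive the data contract XML back.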

Partial Trust for WCF

I'm not fully up to date on all of the details here, but there has been some work done to enable some of the WCF functionality to operate in a partial trust environment. This is especially important for situations where you want to use WCF to expose a service in a hosted environment (like creating a service that generates an RSS feed off of some of your data). I'll follow up with more details on this one later.

WCF Tooling

You now get a WCF project template that also includes a self-hosting option (similar to the magic Visual Studio ASP.NET hosting). This means that you can create a WCF project, hit F5, and have your service available. This is another area where I will follow up later.

One of the things that my team is working on is the next version of the workflow designer. In order to help us get real feedback, we engaged with our usability teams to design and execute a usability study.

For details on what the test looks like, see this great Channel 9 video from when we did them three years ago for the first version of the WF designer. The setup is still the same (one-way glass mirror, cameras tracking the face, screen, and posture of the subject); the only difference is the software. We're busy testing out some new concepts to make workflow development much more productive. At this stage of the lifecycle, we're really experimenting with some different designer metaphors, and a usability test is a great way to get real feedback.

One thing I've always tried to avoid since I came to Microsoft is being sucked into the Redmond bubble. The symptoms of placement inside said bubble are a gradual removal from the reality that everyday developers face. When I came to the company two years ago, I was chock full of great thoughts and ideas from the outside, and much less tolerant of the "well, that's just how it works" defense.

Slowly, though, as you start to get deep into thinking about a problem, and tightly focusing on that problem, those concerns start to fade away as you look to optimize the experience you are providing. Sitting in on the usability labs yesterday was a great reminder to me of how easily one can slip into the bubble. Our test subject was working with a workflow in the designer and had a peculiar style of working with the property window in VS. Now, when I use VS, I use the property grid in one way: I have it docked, with the dock set to automatically hide. I have known some developers who prefer the Apple/Photoshop style where the property pane floats. The customer's way of working with the property grid was that he had it floating, but he would close it after every interaction. This required him to do one of two things to display the grid again: either go to the View menu, or (as was his style) right-click on an element and select Properties.

The prototype we were doing the usability testing with, however, does not have that feature wired up, in fact, it currently doesn't display the properties item in the context menu at all. Not because we have an evil, nefarious plan to remove the properties item inconsistently throughout our designer, but rather because no one gave it any thought when we put the prototype together as we had other UI elements we wanted to focus on.

This became a serious problem for our customer, as the way he expected to work was completely interrupted. At one point, we asked him to dock the property window so we could continue with the test. This was the most fascinating part of the study to me: watching him work to dock the property grid in the left panel. I've become so used to the docking behavior in VS (see screenshot below) that it didn't even occur to me that this might present a problem for the user. Instead, we watched for three minutes or so as he attempted to figure out how to move the window and then tried to process the feedback that the UX elements give. About 60 seconds in, the property grid was at a location similar to the screenshot, just a centimeter or two away from being in "the right place." Watching his face, we saw him look slightly confused and then move it elsewhere. Two more times he came back to that same spot, just far enough away to not get the feedback that might help him in the right direction. It was at this point that the spontaneous yelling started among the observers in the room. It was becoming crystal clear how much difficulty something that has become so obvious to us, something we have internalized and accepted as "just the way the world works," was causing. The yelling was things like "Move up, move up," "no, wait, over, over," "oh, you almost, almost, no...," as we tried to will the customer, through the soundproof wall, into doing what we wanted him to do.

This situation repeated itself time and time again with different UI elements, and it was very, very educational to see the way different users manage their workspace and interact with a tool that I've become so familiar with that I forget to see the forest for the trees. I also realized that although I had worked with a lot of customers and other developers, very rarely had I paid attention to how they work, rather than simply their work.

Now, here's where I open up the real can of worms. We're looking to make usability improvements in the WF designer. Are there any that really bother you? What can we do to make you a more productive WF developer?

I recently worked with a customer who was implementing what I would call a "basic" human workflow system. It tracked approvals, rejections and managed things as they moved through a customizable process. It's easy to build workflows like this with an Approval activity, but they wanted to implement a pattern that's not directly supported out of the box. This pattern, which I have taken to calling "n of m", is also referred to as a "Canceling partial join for multiple instances" in the van der Aalst taxonomy.

The basic description of this pattern is that we start m concurrent actions, and when some subset of them, n, completes, we can move on in our process and cancel the other concurrent actions. A common scenario for this is where I want to send a document for approval to 5 people, and when 3 of them have approved it, I can move on. This comes up frequently in human or task-based workflows. There are a couple of "business" questions which have to be answered as well; the implementation can support any set of answers to these:

What happens if an individual rejects? Does this stop the whole group from completing, or is it simply noted as a "no" vote?

How should delegation be handled? Some businesses want this to break out from the approval process at this point.

The first approach the customer took was to use the ConditionedActivityGroup (CAG). The CAG is probably one of the most sophisticated out-of-the-box activities that we ship in WF today, and it does give you a lot of control. It also gives you the ability to set the Until condition, which allows us to specify the condition under which the CAG completes and the other branches are cancelled (see Using ConditionedActivityGroup).

ConditionedActivityGroup

What are the pros and cons of this approach?

Pros

Out of the box activity, take it and go

Focus on approval activity

Possibly execute same branch multiple times

Cons

Rules get complex (what happens if an individual rejection causes everything to stop?)

I need to repeat the same activity multiple times (especially in this case: it's an approval, and we know what activity needs to be in the loop)

I can't control what else a developer may put in the CAG

We may want to execute on some set of approvers that we don't know at design time. Imagine an application where one of the steps is defining the list of approvers for the next step; the CAG would make that kind of thing tricky.

This led us to the decision to create a composite activity that would model this pattern of execution. Here are the steps we went through:

Build the Approval activity

The first thing we needed was the approval activity. Since we know this is eventually going to have some complex logic, we decided to take the basic approach of inheriting from SequenceActivity and composing our approval activity out of other activities (sending email, waiting on notification, handling timeouts, etc.). We quickly mocked up this activity to have an "Approver" property and a property for a timeout (which will go away in the real version, but is useful for putting some delays into the process). We also added some code activities which Console.WriteLine'd some information so we knew which one was executing. We can come back to this later and make it arbitrarily complex. We also added the cancel handler so that we can catch when this activity is canceled (and send out a disregard email, clean up the task list, etc.). Implementing ICompensatableActivity may also be a good idea so that we can play around with compensation if we want to (note that we will only compensate the closed activities, not the ones marked as canceled).

Properties of the Approval Activity

Placing the Approval Activity inside our NofM activity.

What does the execution pattern look like?

Now that we have our approval activity, we need to determine how this new activity is going to execute. This will be the guide that we use to implement the execution behavior. There are a couple of steps this will follow:

Schedule the approvals to occur in parallel, one per approver submitted as one of the properties.

Wait for each of those to finish.

When one finishes, check to see if the condition to move onward is satisfied (in this case, we increment a counter toward a "number of approvers required" variable).

If we have not met the criteria, we keep on going. [we'll come back to this, as we'll need to figure out what to do if this is the last one and we still haven't met all of the criteria.]

If we have met the criteria, we need to cancel the other running activities (they don't need to make a decision any more).

Implement the easy part of this (scheduling the approvals to occur in parallel)

I say this is the easy part, as this is documented in a number of places, including Bob and Dharma's book. The only trickery occurring here is that we need to clone the template activity, that is, the approval activity that we placed inside this activity before we started working on it. This is a topic discussed in Nate's now-defunct blog.

protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
{
    // Here's what we need to do:
    // 1. Schedule these for execution, subscribe to when they are complete
    // 2. When one completes, check if rejection; if so, barf
    // 3. If approve, increment the approval counter and compare to above
    // 4. If reroute, cancel the currently executing branches
    ActivityExecutionContextManager aecm = executionContext.ExecutionContextManager;
    int i = 1;
    foreach (string approver in Approvers)
    {
        // this will start each one up
        ActivityExecutionContext newContext = aecm.CreateExecutionContext(this.Activities[0]);
        GetApproval ga = newContext.Activity as GetApproval;
        ga.AssignedTo = approver;
        // this is just here so we can get some delay and "long running ness" in the demo
        ga.MyProperty = new TimeSpan(0, 0, 3 * i);
        i++;
        // I'm interested in what happens when this guy closes
        newContext.Activity.RegisterForStatusChange(Activity.ClosedEvent, this);
        newContext.ExecuteActivity(newContext.Activity);
    }
    return ActivityExecutionStatus.Executing;
}

Code in the execute method

One thing that we're doing here is calling RegisterForStatusChange(). This is a friendly little method that allows me to register for a status-change event (thus it is very well named). It is a method on Activity, and I can register for different activity events, like Activity.ClosedEvent or Activity.CancelingEvent. On my NofM activity, I implement IActivityEventListener of type ActivityExecutionStatusChangedEventArgs (check out this article as to what that does and why). This requires me to implement OnEvent, which, since it comes from a generic interface, is strongly typed to accept the right type of event arguments. That's always a neat trick that makes me thankful for generics. That leads us to the next part.

Implement what happens when one of the activities complete

Now we're getting to the fun part: how we handle what happens when one of these approval activities returns. For the sake of keeping this somewhat brief, I'm going to work off the assumption that a rejection does not adversely affect the outcome; it is simply one less person who will vote for approval. We can certainly get more sophisticated, but that is not the point of this post! ActivityExecutionStatusChangedEventArgs has a very nice Activity property which returns the Activity that caused the event. This lets us find out what happened, what the decision was, who it was assigned to, etc. I'm going to start by putting the code for my method in here, and then we'll walk through the different pieces and parts.

public void OnEvent(object sender, ActivityExecutionStatusChangedEventArgs e)
{
    ActivityExecutionContext context = sender as ActivityExecutionContext;
    // I don't need to listen any more
    e.Activity.UnregisterForStatusChange(Activity.ClosedEvent, this);
    numProcessed++;
    GetApproval ga = e.Activity as GetApproval;
    Console.WriteLine("Now we have gotten the result from {0} with result {1}", ga.AssignedTo, ga.Result.ToString());
    // here's where we can have some additional reasoning about why we quit;
    // this is where all the "rejected cancels everyone" logic could live
    if (ga.Result == TypeOfResult.Approved)
        numApproved++;
    // close out the activity
    context.ExecutionContextManager.CompleteExecutionContext(context.ExecutionContextManager.GetExecutionContext(e.Activity));
    if (!approvalsCompleted && (numApproved >= NumRequired))
    {
        // we are done! we only need to cancel all executing activities once
        approvalsCompleted = true;
        foreach (Activity a in this.GetDynamicActivities(this.EnabledActivities[0]))
            if (a.ExecutionStatus == ActivityExecutionStatus.Executing)
                context.ExecutionContextManager.GetExecutionContext(a).CancelActivity(a);
    }
    // are we really done with everything? we have to check so that all of the
    // canceling activities have finished cancelling
    if (numProcessed == numRequested)
        context.CloseActivity();
}

Code from "OnEvent"

The steps here, in English

UnregisterForStatusChange - we're done listening.

Increment the number of activities which have closed (this will be used to figure out if we are done)

Write out to the console for the sake of sanity

If we've been approved, increment the counter tracking how many approvals we have

Use the ExecutionContextManager to CompleteExecutionContext, this marks the execution context we created for the activity done.

Now let's check if we have the right number of approvals. If we do, mark a flag so we know we're done worrying about approves and rejects, and then proceed to cancel the remaining running activities with CancelActivity. CancelActivity schedules the cancellation; it is possible that this is not a synchronous operation (we can go idle waiting for a cancellation confirmation, for instance).

Then we check if all of the activities have closed. Once the activities are scheduled for cancellation, each one will eventually cancel and then close. This causes the event to be raised, and we step through the above pieces again. Once every activity is done, we finally close out the activity itself.

Using it

I placed the activity in a workflow, configured it with five approvers, and set it to require two approvals to move on. I also placed a code activity outputting "Ahhh, I'm done," and put a Throw activity in there to raise an exception and cause compensation to occur, illustrating that only the two activities that completed are compensated.
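In code form, configuring the activity might look like this (the property names are the ones used earlier in the post; the approver names are illustrative):

```csharp
NofM approvalStep = new NofM();
approvalStep.Approvers = new List<string>
    { "alice", "bob", "carol", "dave", "eve" }; // five approvers
approvalStep.NumRequired = 2;                   // two must approve to move on
```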

So, what did we do?

Create a custom composite activity with the execution logic to implement an n-of-m pattern

Saw how we can use IActivityEventListener in order to handle events raised by our child activities

Saw how to handle potentially long running cancellation logic, and how to cancel running activities in general.

Saw how compensation only occurs for activities that have completed successfully

Extensions to this idea:

More sophisticated rules surrounding the approval (if a VP or two GMs say no, we must stop)

Non-binary choices (interesting for scoring scenarios: if the average score gets above 95%, regardless of how many approvers remain, we move on)

Create a designer to visualize this, especially when displayed in the workflow monitor to track it

While I was on break, a number of folks pinged me asking me about this blog post by Tad Anderson.

I find the investment in time to learn how to use 3.0/3.5 has been a complete waste time. So we have release 1.0 and 1.5 of WWF becoming obsolete in favor of version 2.0. These are the real release numbers on these libraries, and that is how they should have been labeled. They are not release 3.0 and 3.5.

First, your investment in the existing technologies is not a "waste of time." The idea of modeling your app logic declaratively via workflows doesn't change, nor do the ideas surrounding how one builds an application with workflows. What we are fixing is that we are making it substantially easier to use, and enabling more advanced scenarios (like implicit message correlation). What you will not be able to re-use is some of the things you did the first time and thought, "hmmm, I wonder why I have to do that [activity execution context cloning, HandleExternalEvent, I'm looking at you]." From a designer perspective, you're not going to have to keep remembering the quirks of the v1 designer. I think about this similarly to the way we went from ASMX web services to WCF. The APIs changed, but the underlying thinking of building an app on services did not. Regarding version numbers, all of our libraries are versioned to the version of the framework they ship with (see WPF, WCF, etc). Internally we struggled with what to call the thing we're working on now and decided to stick with the framework version (so WF 4.0, rather than WF 2.0).

Secondly, it's important to note, we're not getting rid of the 3.0, 3.5 technologies. We're investing to port them to the new CLR, and working to make the designers operate in VS 2010. If you get sufficient return by using WF in your apps today, use WF today. If WF doesn't meet your needs today, and if we're fixing that by something that we're doing in 4.0, then it makes sense to wait. Note, I'm not defining "return" for you. Depending upon how you define that, you may reach a different conclusion than someone in a similar setting.

Thirdly, activities you write today on 3.0/3.5 will continue to work, even inside a 4.0 workflow by way of the interop activity. Much as WPF has the ability to host existing WinForms content, we have the ability to execute 3.0-based activities.

There is a larger issue of how we (the big, Microsoft "we") handle a combination of innovation, existing application compatibility, and packaging of features. I'm not sure how we avoid the fact that, inevitably, any framework in version n+1 will introduce new features, some of which will not be compatible with framework version n, and some of which may do similar things to features in framework version n. Folks didn't stop writing WinForms apps when WPF was announced (they still write WinForms apps). As I mentioned, this is a big issue, but not one I intend to tackle in this post :-)

The feedback we got from customers around WF was centered on the need for a few things:

Activities and Workflows

A fully declarative experience (declare everything in XAML)

Make it easier to write complex activities (see my talk for the discussion on writing Parallel or NofM)

Make data binding easier

Runtime

Better control over persistence

Flow-in transactions

Support partial trust

Increase perf

Tooling

Fix Perf and usability

Make rehosting and extensibility easier

Most of these would require changes to the existing code base, and breaking changes would become unavoidable. The combination of doing all of these things makes the idea of breaking all existing customers absolutely untenable. We're doing the work to make sure that the WF apps you write today will keep on working, and with services as the mechanism to communicate between them, one can gradually introduce 4.0 apps as well. Given the commitment we have to our v1 (or netfx3) customers, we don't want to introduce those kinds of breaking changes.

Kathleen's article summarizes this very nicely, and rather than be accused of cherry-picking quotes, I encourage you to read the whole article.

What We Will See

The designers for Foo will leverage a new service in order to display a list of database tables. We will also need to publish this service to the editing context, and handle the fact that we don’t know who might publish it (or when it might be published). Note that in VS, there is no way to inject services except by having an activity designer do it. In a rehosted app, the hosting application could publish additional services (see part 4) that the activities can consume. In this case though, we will use the activity designer as our hook.

Publishing a Service

Let’s look at the designer for Foo (as Foo is our generic, and relatively boring activity).

Not much to this, except a drop-down list that is currently unbound (but a name is provided). Also note that there is a button that says to “publish the service”. Let’s first look at the code for the button click.

What are we doing here? We first check if this service is already published using the Contains method. We can do this because ServiceManager implements IEnumerable<Type>.

One could also consume the service using GetService<TResult>. You may also note that there is a GetRequiredService<T>. This is a call that we know won’t return null, as the services we are requesting must be there for the designer to work. Rather than returning null, this will throw an exception. Within the designer, we generally think of one service as required:

Let’s look at the definition of the service. Here you can see that we are using both an interface and then providing an implementation of that interface. You could just as easily use an abstract class, or even a concrete class, there is no constraint on the service type.

If there is not a service present, we will publish an instance of one. This becomes the singleton instance for any other designer that may request it. Right now, we have a designer that can safely publish a service. Let’s look at consuming one.

Consuming a Service

Let’s look at some code to consume the service. There are two parts to this. One is simply consuming it, which we already saw above in discussing GetService and GetRequiredService. The second is hooking into the notification system to let us know when a service is made available. In this case, it’s a little contrived, as the service isn’t published until the button click, but it’s good practice to use the subscription mechanism, as we make no guarantees on ordering or timing of service availability.

Subscribing to Service

Here, using the Subscribe<TServiceType> method, we wait for the service to be available. The documentation summarizes this method nicely:

Invokes the provided callback when someone has published the requested service. If the service was already available, this method invokes the callback immediately.

In the OnModelItemChanged method, we will subscribe and hook a callback. The callback’s signature is as follows:

As you can see, in this callback, the service instance is provided, so we can query it directly. You may ask, “why not in Initialize?” Well, there are no guarantees that the editing context will be available at that point. We could either subscribe to the context being made available, or just use ModelItemChanged:
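The whole publish/get/subscribe contract, including the immediate callback when the service is already published, can be mimicked in a few lines of plain C#. This is a sketch of the pattern only (SimpleServiceManager is a stand-in I invented, not the real designer type):

```csharp
using System;
using System.Collections.Generic;

// Sketch of the publish/subscribe pattern the ServiceManager uses.
public class SimpleServiceManager
{
    private readonly Dictionary<Type, object> services = new Dictionary<Type, object>();
    private readonly Dictionary<Type, List<Action<object>>> pending =
        new Dictionary<Type, List<Action<object>>>();

    public bool Contains<T>() { return services.ContainsKey(typeof(T)); }

    public void Publish<T>(T instance)
    {
        services[typeof(T)] = instance;  // singleton for later requesters
        List<Action<object>> callbacks;
        if (pending.TryGetValue(typeof(T), out callbacks))
        {
            foreach (var cb in callbacks) cb(instance);  // notify waiters
            pending.Remove(typeof(T));
        }
    }

    public T GetService<T>() where T : class
    {
        object s;
        return services.TryGetValue(typeof(T), out s) ? (T)s : null;
    }

    public T GetRequiredService<T>() where T : class
    {
        var s = GetService<T>();
        if (s == null)
            throw new InvalidOperationException("Required service missing: " + typeof(T).Name);
        return s;
    }

    // If the service is already there, invoke immediately;
    // otherwise remember the callback until someone publishes.
    public void Subscribe<T>(Action<T> callback) where T : class
    {
        if (Contains<T>()) { callback(GetService<T>()); return; }
        List<Action<object>> callbacks;
        if (!pending.TryGetValue(typeof(T), out callbacks))
            pending[typeof(T)] = callbacks = new List<Action<object>>();
        callbacks.Add(o => callback((T)o));
    }
}
```

The point of the sketch is the ordering guarantee: a subscriber never has to care whether it ran before or after the publisher.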

This wraps up a basic introduction to the ServiceManager type and how to leverage it effectively to share functionality in designers.

Let’s look at a before and after shot in the designer:

Before & After

What about Items?

Items follow generally the same Get, Subscribe, and Publish pattern, but rather than Publish, there is a SetValue method. If you have “just data” that you would like to share between designers (or between the host and the designer), an Item is the way to go about that. The most commonly used item we’ve seen customers use is the Selection item, in order to get or set the currently selected model item.

That’s our tour of basic publish and subscribe with Services and Items.

One of the things I am working on now is the next release of the workflow designer. One thing I have heard a number of requests for over the years is the ability to refactor workflows. I'd love to get some feedback if this would be valuable (I'm pretty convinced it is), and if it is valuable, what kind of things would you like to be able to do? One I have heard before is the selection of some subset of activities in a workflow and select "refactor to new activity" which would pull that out to a new custom activity.

What other things make sense? Please reply in comments, a blog post that pingbacks, or send me email at mwinkle_AtSign_Microsoft_dot_com.

<Usual disclaimers apply, this is not something we have made any decisions about, I am just trying to gather some data>

With all due respect to George and Ira Gershwin, I have a quick question for the readers of this blog. In V1, there is an interesting scenario that gets talked about frequently, and that's the file extension of our XML form of workflow.

When we debuted at PDC05, there existed an XML representation of the workflow which conformed to a schema that the WF team had built, and it was called XOML. Realizing that WPF was doing the same thing to serialize objects nicely to XML, we moved to that (XAML), but the file extensions had been cast in stone due to VS project setups. So, we had XAML contained in a XOML file.

Is this a problem for you? I could see three possible solutions in the future <insert usual disclaimer, just gathering feedback>:

XOML -- we have a legacy now, let's not change it

XAML -- it's XAML, so change the file extension to match it (and introduce an overload to the XAML extension, which for now is associated with WPF)

something else, say .WFXAML -- this reflects the purpose, is unique to declarative workflows and doesn't have any weird connotations (What does xoml stand for???).

Is this an issue? Is this something you would like to see changed? Do any of these solutions sound like a good idea, bad idea, etc?

One common scenario that was often requested by customers of WF 3 was the ability to have templated or “grey box” activities, or “activities with holes” in them (hence the Swiss cheese photo above). In WF4 we’ve done this in a way that we call ActivityAction.

Motivation

First I’d like to do a little bit more to motivate the scenario.

Consider an activity that you have created for your ERP system called CheckInventory. You’ve gone ahead and encapsulated all of the logic of your inventory system, maybe you have some different paths of logic, maybe you have interactions with some third party systems, but you want your customers to use this activity in their workflows when they need to get the level of inventory for an item.

Consider more generally an activity where you have a bunch of work you want to get done, but at various, and specific, points throughout that work, you want to allow the consumer of that activity to receive a callback and provide their own logic to handle that. The mental model here is one of delegates.

Finally, consider providing the ability for a user to specify the work that they want to have happen, but also make sure that you can strongly type the data that is passed to it. In the first case above, you want to make sure that the Item in question is passed to the action that the consumer supplies.

In WF3, we had a lot of folks who wanted to be able to do something like this. It’s a very natural extension to wanting to model things as activities and compose them into higher-level activities. We like being able to string together 10 items as a black box for reuse, but we really want the user to specify exactly what should happen between steps 7 and 8.

A slide that I showed at PDC showed it this way (the Approval and Payment boxes represent the places I want a consumer to supply additional logic):

Introducing ActivityAction

Very early on in the release, we knew this was one of the problems that we needed to tackle. The mental model that we are most aligned with is that of a delegate/callback in C#. If you think about a delegate, what are you doing? You are giving an object the implementation of some bit of logic that the object will subsequently call. That’s the same thing that’s going on with an ActivityAction. There are three important parts to an ActivityAction:

The Handler (this is the logic of the ActivityAction)

The Shape (this determines the data that will be passed to the handler)

The way that we invoke it from our activity
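Before looking at the WF code, the delegate analogy can be made concrete in plain C#. In this sketch (SimpleTimer is an illustrative name of mine; the timer scenario and the OnCompletion name come from the demo), the handler is the Action<TimeSpan> the consumer supplies, the shape is its single TimeSpan parameter, and the invocation is the call the class makes when its work finishes:

```csharp
using System;
using System.Threading;

// Plain-C# analogy for ActivityAction: the class owns the work and
// decides exactly what data flows out to the consumer's logic.
public class SimpleTimer
{
    // "Shape": the consumer's handler receives exactly one TimeSpan.
    public Action<TimeSpan> OnCompletion { get; set; }

    public void Run(Action body)
    {
        var start = DateTime.UtcNow;
        body();                                // the contained work
        var elapsed = DateTime.UtcNow - start;
        if (OnCompletion != null)
            OnCompletion(elapsed);             // the "ScheduleAction" moment
    }
}
```

A consumer wires in its own logic the same way it would pass a delegate: new SimpleTimer { OnCompletion = ts => Console.WriteLine(ts) }.Run(() => Thread.Sleep(10));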

Let’s start with some simple code (this is from a demo that I showed in my PDC talk). This is a timer activity which allows us to time the execution for the contained activity and then uses an activity action to act on the result.

It is important to note that I use the second parameter (and the third through 16th if that version is provided) in order to provide the data. This way, the activity determines what data will be passed to the handler, allowing the activity to determine what data is visible where. This is a much better way than allowing an invoked child to access any and all data from the parent. This lets us be very specific about what data goes to the ActivityAction. Also, you could make it so that OnCompletion must be provided, that is, the only way to use the activity is to supply an implementation. If you have something like “ProcessPayment” you likely want that to be a required thing. You can use the CacheMetadata method in order to check and validate this.

Creation of DelegateInArgument<TimeSpan> : This is used to represent the data passed by the ActivityAction to the handler

Creation of the ActivityAction to pass in. You’ll note that the Argument property is set to the DelegateInArgument, which we can then use in the handler

The Handler is the “implementation” that we want to invoke. Here it’s pretty simple: it’s a WriteLine, and when we construct the argument we construct it from a lambda that uses the passed-in context to resolve the DelegateInArgument when that executes.

At runtime, when we get to the point in the execution of the Timer activity, the WriteLine that the hosting app provided will be scheduled when the ScheduleAction is called. This means we will output the timing information that the Timer observed. A different implementation could have an IfThen activity and use that to determine if an SLA was enforced or not, and if not, send a nasty email to the WF author. The possibilities are endless, and they open up scenarios for you to provide specific extension points for your activities.

That wraps up a very brief tour of ActivityAction. ActivityAction provides an easy way to create an empty place in an activity that the consumer can use to supply the logic that they want executed. In the second part of this post, we’ll dive into how to create a designer for one of these, how to represent this in XAML, and a few other interesting topics.

It’s that time of year that I’ll be taking a little bit of time off for the holidays, so I will see y’all in 2010!

How do I do this?

A lot of times people get stuck with the impression that there are only two workflow models available: sequential and state machine. True, out of the box these are the two that are built in, but only because there is a set of common problems that map nicely into their execution semantics. As a result of these two being "in the box," I often see people doing a lot of very unnatural things in order to fit their problem into a certain model.

The drawing above illustrates the flow of one such pattern. In this case, the customer wanted parallel execution with two branches ((1,3) and (2,5)). But they had an additional factor that played in here: 4 could execute, but only when both 1 and 2 had completed. 4 didn't need to wait for 3 and 5 to finish; 3 and 5 could take a long period of time, so 4 could at least start once 1 and 2 were completed. Before we dive into a simpler solution, let's look at some of the ways they tried to solve the problem, in an attempt to use "what's in the box."

The "While-polling" approach

The basic idea behind this approach is that we will use a parallel activity, and in the third branch we will place a while loop that loops on the condition of "if activity x is done" with a brief delay activity in there so that we are not busy polling. What's the downside to this approach:

The model is unnatural, and gets more awkward given the complexity of the process (what do we do if activity 7 has a dependency on 4 and 5?)

The polling and waiting is just not an efficient way to solve the problem

This is a lot to ask a developer to do in order to translate the representation she has in her head (the first diagram) into the model we are forcing on her.

The SynchScope approach

WF V1 does have the idea of synchronizing some execution by using the SynchronizationScope activity. The basic idea behind the SynchronizationScope is that one can specify a set of handles that the activity must be able to acquire before allowing its contained activities to execute. This lets us serialize access and execution. We could use this to mimic some of the behavior that the polling is doing above. We will use sigma(x, y, z) to indicate the synchronization scope and its handles (just because I don't get to use nearly as many Greek letters as I used to).

This should work, provided the synchronization scopes can obtain the handles in the "correct" or "intended" order. Again, the downside here is that this gets to be pretty complex: how do we model 4 having a dependency on 3 and 2? Well, our first synchronization scope now needs to extend to cover the whole left branch, and then it should work. For a simple case like the process map I drew at the beginning, this will probably work, but as the dependency map gets deeper, we are going to run into more problems trying to make this work.

Creating a New Execution Pattern

WF is intended to be a general purpose process engine, not just a sequential or state machine process engine. We can write our own process execution patterns by writing our own custom composite activity. Let's first describe what this needs to do:

Allow activities to be scheduled based on all of their dependent activities having executed.

We will start by writing a custom activity that has a property for expressing dependencies. A more robust implementation would use attached properties to push those down to any contained activity

Analyze the list of dependencies to determine which activities we can start executing (perhaps in parallel)

When any activity completes, check where we are at and if any dependencies are now satisfied. If they are, schedule those for execution.

So, how do we go about doing this?

Create a simple activity with a "Preconditions" property

In the future, this will be any activity using an attached property, but I want to start small and focus on the execution logic. This one is a simple Activity with a "Preconditions" array of strings, where the strings are the names of the activities which must execute first:

Create the PreConditionExecutor Activity

Let's first look at the declaration and the members:

[Designer(typeof(SequentialActivityDesigner), typeof(IDesigner))]
public partial class PreConditionExecutor : CompositeActivity
{
    // this is a dictionary of the executed activities to be indexed via
    // activity name
    private Dictionary<string, bool> executedActivities = new Dictionary<string, bool>();

    // this is a dictionary of activities marked to execute (so we don't
    // try to schedule the same activity twice)
    private Dictionary<string, bool> markedToExecuteActivities = new Dictionary<string, bool>();

    // dependency maps
    // currently Dictionary<string, List<string>> that can be read as
    // activity x has dependencies in list a, b, c
    // A more sophisticated implementation will use a graph object to track
    // execution paths and be able to check for completeness, loops, all
    // that fun graph theory stuff I haven't thought about in a while
    private Dictionary<string, List<string>> dependencyMap = new Dictionary<string, List<string>>();

We have three dictionaries, one to track which have completed, one for which ones are scheduled for execution, and one to map the dependencies. As noted in the comments, a directed graph would be a better representation of this so that we could do some more sophisticated analysis on it.

Now, let's look at the Execute method, the one that does all the work.

Basically, we first construct the execution tracking dictionaries, initializing those to false. We then create the dictionary of dependencies. We then loop through the activities and see if there are any that have no dependencies (there has to be at least one; this would be a good point to raise an exception if there isn't). We record in the dictionary that this one has been marked to execute and then we schedule it for execution (after hooking the Closed event so that we can do some more work later). So what happens when we close?

There are a few lines of code here, but it's pretty simple what's going on.

We remove the event handler

If we're still executing, mark the list of activities appropriately

Loop through and see if any of them have a dependency on the activity that just completed

If they do, remove that entry from the dependency list and check if we can run it (if the count == 0). If we can, schedule it, otherwise keep looping.

If all the activities have completed (there is no false in the Executed list) then we will close out this activity.
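Stripped of the WF scheduler, the Execute and Closed logic just described boils down to the following sketch. PreconditionScheduler is an illustrative stand-in for the activity; "scheduling" here just appends a name to a list instead of asking the runtime to execute a child:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of the PreConditionExecutor scheduling logic with plain collections.
public class PreconditionScheduler
{
    private readonly Dictionary<string, List<string>> dependencyMap;
    private readonly HashSet<string> executed = new HashSet<string>();
    private readonly HashSet<string> marked = new HashSet<string>();
    public List<string> ScheduledOrder = new List<string>();

    public PreconditionScheduler(Dictionary<string, List<string>> preconditions)
    {
        // copy so we can consume entries as dependencies are satisfied
        dependencyMap = preconditions.ToDictionary(
            kv => kv.Key, kv => new List<string>(kv.Value));
    }

    // Mirrors Execute(): kick off everything with no preconditions.
    public void Start()
    {
        foreach (var name in dependencyMap.Keys
                     .Where(k => dependencyMap[k].Count == 0).ToList())
            Schedule(name);
        if (ScheduledOrder.Count == 0)
            throw new InvalidOperationException("No activity is free of preconditions");
    }

    // Mirrors the Closed handler: record completion, release dependents.
    public void OnClosed(string name)
    {
        executed.Add(name);
        foreach (var kv in dependencyMap)
        {
            kv.Value.Remove(name);  // this dependency is now satisfied
            if (kv.Value.Count == 0 && !marked.Contains(kv.Key))
                Schedule(kv.Key);
        }
    }

    public bool AllDone { get { return executed.Count == dependencyMap.Count; } }

    private void Schedule(string name)
    {
        marked.Add(name);          // don't schedule the same activity twice
        ScheduledOrder.Add(name);  // stand-in for ExecutionContext scheduling
    }
}
```

Running the sketch against the four-activity graph from the drawing shows 4 released only after both 1 and 2 close, independent of 3 and 5.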

To actually use this activity, we place it in the workflow, place a number of the child activity types within it (again, with the attached property, you could put nearly any activity in there) and specify the activities that each depends on. Since I haven't put a designer on it, I just use the SequenceDesigner. Here's what it looks like (this is like the graph I drew above, but kicks off with the "one" activity executing first):

Where can we go from here

Validation: remember all that fun graph theory stuff, checking for cycles and completeness and no gaps? Yeah, we should probably wire some of that up here to make sure we can actually execute this thing.

Analysis of this might be interesting, especially as the process gets more complex (identifying complex dependencies, places for optimization, capacity stuff)

A designer to actually do all of this automatically. Right now, it is left as an exercise to the developer to express the dependencies by way of the properties. It would be nice to have a designer that would figure that out for you, and also validate so you don't try to do the impossible.

Make this much more dynamic and pull in the preconditions and generate the context for the activities on the fly. This would be cool if you had a standard "approval" activity that you wanted to have a more configurable execution pattern. You could build the graph through the designer and then use that to drive the execution

I'm going to hold off on posting the code, as I've got a few of these and I'd like to come up with some way to put them out there that would make it easy to get to them and use them. You should be able to pretty easily construct your own activity based on the code presented here.

I've been having some fun playing around with Visual Studio 2008 and the .NET Framework 3.5, and wanted to summarize some of the content I've put up on channel9 and other places.

Samples

The Conversation Sample remixed -- if there is one sample in the SDK to help you understand what is going on with context passing and duplex messaging, this is the sample that helped me learn it. I had this sample reworked a little bit so that you don't have 5 console windows open.

Pageflow sample 1, live hosted -- watch this as pageflow is hosted "live" in the cloud. This lets you interact with a pageflow as well as dive into the code using some tools my team has built.

Pageflow sample 2, live hosted as above -- this is the sample that shows how we can leverage the navigator workflow type to be in multiple paths at the same time (a parallel state machine almost).

// standard disclaimer applies, this is based on the released Beta 1 bits, things are subject to change, if you are reading this in 2012, things may be, look, smell, work differently. That said, if it’s 2012 and you’re reading this, drop me a line and let me know how you found this!

First, let’s start with your existing WF projects. What happens if I want to create a 3.5 workflow? We’re still shipping that designer; in fact, let’s start there on our tour. This shows off a feature of VS that’s pretty cool: multitargeting.

Click New Project

Notice the “Framework Version” dropdown in the upper right hand corner.

This tells VS which version of the framework you would like the project you are creating to target. This means you can still work on your existing projects in VS 2010 without upgrading your app to the new framework. Let’s pick something that’s not 4.0, namely 3.5. You’ll note that the templates may have updated a bit, select Workflow from the left hand tree view and see what shows up.

There isn’t anything magical about what happens next, you will now see the 3.5 designer inside of VS2010. You’re able to build, create, edit and update your existing WF applications.

Let’s move on and switch over to a 4.0 workflow.

Create a new project and select 4.0

Create a new WF Sequential Console application and name it “SampleProject”. Click Ok.

We’ll do a little bit of work here, but you will shortly see the WF 4.0 designer. It looks a little different from the 3.x days, we’ve taken this time to update the designer pretty substantially. We’ve built it on top of WPF, which opens up the doors for us to do a lot of interesting things. If you were at PDC and saw any Quadrant demos, you might think that these look similar. We haven’t locked on the final look and feel yet, so expect to see some additional changes there, but submit your feedback early and often, we want to know what you think.

Let’s drop some activities into our sequence and see what’s there to be seen.

We’ve categorized the toolbox into functional groupings for the key activities. We heard a lot of feedback that it was tough to know what to use when, so we wanted to provide a little more help with some richer default categories. Add an Assign activity, a WriteLine activity and a Delay activity to the canvas by clicking and dragging over to the sequence designer.

You’ll note that we’ve now got some icons on each activity indicating something is not correct. This is a result of the validation executing and returning details about what is wrong. Think of these as the little red squiggles that show up when you spell something wrong. You can hover over the icon to see what’s wrong.

You can also see that errors will bubble up to their container, so hovering over sequence will tell you that there is a problem with the child activities.

What if I have a big workflow, and what if I want to see a more detailed listing of errors? Open up the Error View and you will see the validation results are also displayed here. You’ll note there is some minor formatting weirdness. This is a bug that we fixed but not in time for the Beta1 release.

Now, let’s actually wire up some data to this workflow. WF4 has done a lot of work to be much more crisp about the way we think about data within the execution environment of a workflow. We divide the world into two types of data: Arguments and Variables. If you mentally map these to the way you write a method in code (parameters, and state internal to the method), you are on the right track. Arguments determine the shape of an activity: what goes in, what goes out. Variables allocate storage within the context of an activity’s execution. The neat thing about variables is that once the containing activity is done, we can get rid of them, as our workflow no longer needs them (note, we pass the important data in and out through the arguments). To do this, we have two special designers on the canvas that contain information about the arguments and variables in your workflow.

First, let’s click on the Argument designer and pass in some data.

Arguments consist of a few important elements

Name

Type

Direction

Default Value (Optional)

Most of these are self-explanatory, with the one exception being the Direction. You’ll note that this has In, Out and Property. Now, when you are editing the arguments, you are actually editing the properties of the underlying type you are creating (I’ll explain more about this in a future post). A more appropriate name might be “Property Editor,” but the vast majority of what you’ll be creating with it is arguments. Anyway, if you select In or Out, this basically wraps the type T in an argument, so it becomes a property of type InArgument<T>. We just provide a bit of shorthand so you don’t always have to pick InArgument as the type. The default value takes an expression, but in this case, we won’t be using it.

Let’s go ahead and add an argument of type TimeSpan named DelayTime. You’ll need to select Browse for Types and then search for TimeSpan.
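To picture the In/Out shorthand with that DelayTime argument, here is a toy version of the wrapping. These are illustrative stand-ins I wrote for this post, not the real System.Activities types:

```csharp
using System;

// Toy stand-in for the real InArgument<T>, just to show the wrap
// the Direction column implies.
public class InArgument<T>
{
    public T Value { get; set; }

    // Picking "In" in the designer is shorthand for this wrap:
    // you type T, the property becomes InArgument<T>.
    public static implicit operator InArgument<T>(T value)
    {
        return new InArgument<T> { Value = value };
    }
}

// Toy stand-in for the underlying type the argument designer edits.
public class MyWorkflowArguments
{
    // Name = DelayTime, Type = TimeSpan, Direction = In
    public InArgument<TimeSpan> DelayTime { get; set; }
}
```

So setting the argument reads naturally thanks to the conversion: var args = new MyWorkflowArguments { DelayTime = TimeSpan.FromSeconds(5) };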

Variables are similar, but slightly different; variables have a few important elements:

Name

Type

Scope

Default Value (Optional)

Remember earlier, I mentioned that a variable is part of an activity; this is what Scope refers to. Variables will only show up for the scope of the selected activity, so if you don’t see any, make sure to select the Sequence, and then you will be able to add a variable. Let’s add a new variable, named StringToPrint, of type String.

Now let’s do something with these in the workflow. One thing I’m particularly happy with that we’ve done on the designer side of things is to enable people to build activity designers more easily. There are lots of times where you have activities that have just a few key properties that need to be set, and you’d like to be able to see that “at a glance.” The Assign designer is like that.

Now, let’s dig into expressions. One big piece of feedback from 3.0 was that people really wanted richer binding experiences. You see this as well with WPF data binding. We’ve taken it to the next level, and allow any expression to be expressed as a set of activities. What this means is that we do have to “compile” textual expressions into a tree of activities, and this is one of the reasons we use VB to build expressions. In the fullness of time, other languages will come on board. But how do we use it? Let’s see. Click on the “To” text box on the Assign activity. You will see a brief change of the text box, and then you will be in a VB expression editor, or what we’ve come to refer to as “The Expression Text Box” or ETB. Start typing S, and already you will see IntelliSense begin to scope down the choices. This will pick up all of the variables and arguments in scope.

On the right side, we won’t use any of the passed-in arguments; we’ll show off a richer expression. Now, the space on the right side of the designer is kind of tight for something lengthy, so go to the property grid and click on the “…” button for the Value property.

This just touches the surface of what is possible with expressions in WF4, we can really get much richer expressions (3.x expressions are similar to WPF data binding, they are really an “object” + “target” structure).

Not everything makes it to the canvas of the designer surface, and for that, we have the property grid. If you’ve used the WPF designer in VS2008, this should look pretty familiar to you. Select the delay activity, and use the property grid to set the duration property to the InArgument you created above. This experience is similar, with the ETB embedded into the property grid for arguments.

Finally, repeat with the WriteLine and bind to the StringToPrint variable.

Navigating the Workflow

There are two different things that we have to help navigating the workflow, our breadcrumb bar at the top and the overview map (which appears as the “Mini Map” in the beta). Let’s look at the overview map. This gives you a view of the entire workflow and the ability to quickly scrub and scroll across it.

Finally, across the top we display our breadcrumbs which are useful when you have a highly nested workflow. Double click on one of the activities, and you should see the designer “drill into” that activity. Now notice the breadcrumb bar, it displays where you have been, and by clicking you can navigate back up the hierarchy. In beta1, we have a pretty aggressive breadcrumb behavior, and so you see “collapsed in place” as the default for many of our designers. We’re probably going to relax that a bit in upcoming milestones to reduce clicking and provide a better overview of the workflow.

Finally, there may be times when we don’t want the designer view but would rather see the XAML. To get there, just right-click on the file and choose “View Code”.

This will currently ask you if you are sure that you want to close the designer, and you will then see the XAML displayed in the XML editor. For the workflow we just created, this is what it looks like:

This is the standard program.cs template with two modifications. The first is passing data into the workflow, indicated by the dictionary we create to pass into the WorkflowInstance. This should look familiar if you have used WF in the past.

So, last week I wrapped up a conversation at TechReady, our internal conference, where I was talking about the integration between WF and WCF in .NET 3.5. The talk was somewhat bittersweet: it's the last conference where I'm scheduled to talk about WF 3.0/3.5, and I'll start talking about WF 4.0 at PDC this fall.

There are a series of 4 demos that we'll talk about in this series:

Basic Context Management

Simple Duplex

Long Running Work Pattern

Conversations Pattern

I've gotten a lot of requests to post the code samples, so I want to do that here:

Sample 1, Basic Management of Context

The goal of this sample is to show the way that the context channel works, and how to interact with it from imperative code.

Ingredients:

One basic workflow service that simply has two Receive activities bound to the same operation inside of a sequence.

Inside each Receive, I have placed a Code activity that simply outputs a little bit of info (the variables declared on lines 1 and 2 are used by the Receive activities):

Line 14 is where the magic happens: here we grab the context token from the IContextManager.

Line 19 is where the magic completes: we apply this token to the new proxy. Note that this proxy could be running on a different machine somewhere, but once I have the context token, I can use it to communicate with the same workflow instance that the first call did.
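To make the pattern concrete, here is a minimal sketch of the client side. The proxy class name WorkflowServiceClient and the DoWork operation are hypothetical placeholders for your generated proxy; IContextManager (with GetContext/SetContext) is the .NET 3.5 channel property described above.

```csharp
// Sketch, assuming a generated proxy named WorkflowServiceClient
// with an operation DoWork bound to the workflow's Receive activities.

// 1. Make the activating call and grab the context token.
WorkflowServiceClient proxy1 = new WorkflowServiceClient();
proxy1.DoWork();
IContextManager contextManager =
    proxy1.InnerChannel.GetProperty<IContextManager>();
IDictionary<string, string> contextToken = contextManager.GetContext();

// 2. Apply that token to a brand-new proxy so the second call is
//    routed to the same workflow instance as the first.
WorkflowServiceClient proxy2 = new WorkflowServiceClient();
proxy2.InnerChannel.GetProperty<IContextManager>()
      .SetContext(new Dictionary<string, string>(contextToken));
proxy2.DoWork();
```

Setting the context must happen before the second channel is opened; after that, every call on proxy2 carries the same instance routing information.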

So, what have we shown:

Manipulating context in workflow and imperative code

How to extract the context token

How to explicitly set the context token

The caching behavior of the context channel (as seen in Scenario 1)

The behavior of the context channel to return the context token only on the activating message

A question recently came up on an internal list about how to start a workflow to do some work and then have it accept a message via a Receive activity. This led to an interesting discussion that provides some insight into how the WorkflowServiceHost instantiates workflows in conjunction with the ContextChannel.

Creating a Message Activated Workflow

By default, the WorkflowServiceHost will create a workflow when the following two conditions are true:

The message received is headed for an operation that is associated with a Receive activity that has the CanCreateInstance property set to true

The message contains no context information

It is interesting to note that you don't even need to use a binding element collection that contains a ContextBindingElement. The ContextBindingElement is responsible for creating the ContextChannel, whose job on the receive side is to do two things:

Extract the context information and pass that along up the stack (hand it off into the service model)

On the creation, and only on the creation, of a new instance, return the context information to the caller in the header of the response.

So, if we want to create workflows based on messages dropped into an MSMQ queue, we can do that by not adding the ContextBindingElement into a custom binding on top of the netMsmqBinding, and by associating the operation with a Receive activity that has CanCreateInstance set to true. Note that any subsequent communication with the workflow will have to occur over a channel through which we can pass context.

Creating a Non-Message Activated Workflow

In the case that this post is about, we do not want to activate off an inbound message. The way to do this doesn't require much additional work. We first need to make sure that none of our Receive activities have CanCreateInstance set to true, which means that no incoming message can activate the workflow. The workflow will then do some work prior to executing the Receive activity and waiting for the next message. Our workflow will look like this (pretty simple):

When we want to start a workflow, we need to reach into the workflow service host and extract the workflow runtime and initiate the workflow:

WorkflowServiceHost myWorkflowServiceHost = new WorkflowServiceHost(typeof(Workflow1));
// do some work to set up the workflow service host
myWorkflowServiceHost.Open();
// when something triggers starting the workflow
WorkflowRuntime wr = myWorkflowServiceHost.Description.Behaviors.Find<WorkflowRuntimeBehavior>().WorkflowRuntime;
WorkflowInstance wi = wr.CreateWorkflow(typeof(Workflow1));
wi.Start();
// need to send wi.InstanceId somewhere for others to communicate with it

The last note is important. In order for a client to eventually be able to communicate to the workflow, the workflow instance Id will need to be relayed to that client.

One of the things that worked out incredibly well at TechEd was our chalk talks. We had a small theater set up with about 20-30 seats, a whiteboard and a small monitor for presentations. A number of the chalk talks on Windows Workflow Foundation were "steal a chair" events, where more people showed up than chairs. These talks were a great chance to dive deep into some specific areas of functionality, answer questions and head on over to the whiteboard to work through some design issues as well.

One thing that isn't so nice about the new http://wf.netfx3.com site is that all of the file listings do not roll up into one single nice syndication feed. I want the ability to aggregate all of the files into nice rss feeds so I can stay on top of samples, activities, etc. In order to enable this I had to create new blogs that aggregate the individual folder feeds, and then another one to aggregate those blogs.

The great thing about this position is that I get to pay attention to all of the conversations folks are having about WF.

The bad thing about this position is that I get to pay attention to all of the conversations folks are having about WF. I'm still learning how to listen to everything and anything that's going on in the world of WF, so forgive me for being a little slow to respond.

Brian Noyes started off with this post last Wednesday about the complexities inherent in WF, which was followed by Scott listing some of the common gotchas in WF development. Tomas also has two posts (part 1 and part 2) on the subject. Jon weighed in somewhere in the middle there discussing some of the points raised. Some interesting comments are being posted to Jon's and Brian's posts.

For me, this is really rewarding to see the community having these conversations about the technology. Please keep having them, and if you have feedback, post it into our connect site. These things get routed directly to the team. Things are pretty much ready to go on v1, but that means we're working on planning what vNext and beyond are looking like, and we need to hear these things!

That being said, there is complexity in our model, and a lot of that comes from being extensible enough to manage the logic of your Windows Forms app using the same engine that runs the document life cycle workflows in MOSS. I think this is the benefit of providing this "foundational" API to enable workflow in any application, but it does come at the cost of a learning curve and complexity, and there are valid arguments about whether certain pieces of complexity are necessary. So, what do we do about it? Let's have a real informal $100 exercise. Basics of the exercise here:

You have $100 engineering dollars to spend. No matter how many millions we'd actually wind up spending, we use $100 as an easy number for people to keep in their heads.

There are well over $100 worth of features you want.

The challenge is in determining how to spread the $100 in a way that produces the most aggregate value.

What would you like to see added, improved, "fixed" in WF? Some thoughts, but don't feel limited to these: [Standard disclaimer: These are just my ideas, and nothing here means it will become part of the product.]

I get a number of queries about when WF will ship, or (more frequently) when the tools will be ready. The answers are "done" and "done." I've had two internal requests in the last week along these lines, so I wanted to try to state this as clearly as possible. The tools for WF are released and supported. The tools for WCF and WPF are in CTP and will continue to be updated in a CTP like fashion.

The story: Workflow is going to be out in full force, we've got a ton of great sessions lined up, and we've tried to create a bunch of good chalk talks for people to attend when they just can't get enough workflow! Ok, to be specific, we've got 15 workflow sessions, and another 8 chalk talks on top of that. You can't find the chalk talks from the main site yet, so I'm going to list them below. When you get to Boston, come find us in the Connected Systems (CON) area in the developer Technical Learning Center (TLC), which I believe is going to be color-designated the blue TLC. We'll have something that lets you know when you can attend these great sessions. And remember, these aren't recorded, so it's a one-time show; you may never be able to catch these presentations again!

In Windows Workflow Foundation, the tracking service keeps log information about workflow events and activity execution statuses. The workflow runtime automatically identifies events related to executing workflow instances and outputs them to a tracking service. This chalk talk will cover the capabilities of the out of box SQL-based tracking service as well as how and why you would build a custom tracking service.

In this chalk talk we’ll look at ways to leverage WF to handle the business logic in your web application. First, we’ll look at hosting options (in process, exposed via WCF) and then move into a few different patterns for workflow. These will include using WF to manage short lived business logic (from postback to render), participating in long running business process managed by WF, and using the Rules Engine to drive validation and other rules based scenarios. We’ll also discuss security considerations in these approaches as well as listen to how you’re planning on using WF in your web applications.

Windows Workflow Foundation comes with a workflow designer which you normally use in Visual Studio 2005. The workflow designer component is allowed to be rehosted in your application. This talk will describe how you can add the workflow designer into your application so that your application can create and edit sequential and state machine workflow models. We will cover workflow designer feature integration of activity property binding, the rules editor and using code handlers with your designed workflow models.

In a web application the transitions between multiple web pages is often written in code. The business logic deciding which page to send the user to next gets hidden in with the procedural code in the page. User interface page flow is a concept to allow the declarative modeling of page transitions and this can be implemented using Windows Workflow Foundation. This talk will describe the concept in more detail and give you a sneak peak at the advances that Microsoft is planning in this area.

The WorkflowSchedulerService defines how CPU threads can be used by the workflow runtime. Standard ACID transactions are supported in Windows Workflow Foundation through the TransactionScope composite activity. Long running processes that require some compensatory action when an exception occurs are also supported through the CompensatableTransactionScope. This talk will discuss these interesting areas of Windows Workflow Foundation.

Windows Workflow Foundation provides a rich set of features to support powerful fault handling, robust Atomic and long-running transactions, and flexible compensation support for failed transactions. This session will examine how to manage exceptions within a workflow, how to use the System.Transactions namespace, how to implement both atomic and long-running transactions, and how to utilize compensation and the compensate activity to recover from faults occurring during a transaction’s execution. Demonstrations will be provided to highlight the features and techniques developers need to know to build resilient and reliable workflow applications.

CON-TLC305: Inside the WF Runtime. Presenter: Bob Schmidt

The WorkflowRuntime is the engine that manages executing workflow instances in Windows Workflow Foundation. It handles events for workflow instances, interacts with services that the host application adds and manages workflow persistence. This talk will drill down into the workflow runtime and give you some insight as to how it works. This will be an advanced talk and you should have some prior exposure to Windows Workflow Foundation prior to attending.

The breakout session “Windows Workflow Foundation: Building Rules-Based Workflows” gave an introduction to the rules engine capabilities provided in Windows Workflow Foundation (WF). In this chalk talk, learn more about the WF Rules extensibility mechanisms, which support more advanced scenarios. See an example of how to externalize rules so that they can be maintained separately from the workflow assembly. In addition, learn how to author and execute rules outside of a workflow. Also, see how you can create custom expression and action types that can be used directly in your rules.

I would like to take this moment to apologize for all of the attendees who were at our WIN302 session this afternoon here in Barcelona. Moments before we were scheduled to begin, a very nasty power issue hit our room, causing the lights to go out, all of the equipment on stage to shut down, and reset all of the audio equipment (replaced with a series of rather nasty sounding "pops".) Our demo machine, which we had just spent the last few hours getting set "just so," was also a casualty of this.

Following 10 minutes of working with stage crews, audio techs, and David frantically trying to get the demo machine back to a usable state, we decided to begin the talk. It had been 5 minutes since anyone had run down onto the stage yelling into a walkie-talkie, so I figured we were in the clear.

David was still working on the demo, so I began the talk, and quickly needed to fill time while David worked on the demo machine.

In short, by the time we got back to being ready, things were all jabberwockied up, and I was most certainly off my game, and as a result found myself rambling when I should have been focused, grasping for phrasing when I should have been driving the message, and stumbling in a talk where I had hoped to be knocking it out of the park.

I want to apologize to the attendees, because you deserved a much better talk than the one you got (and David and I are going to make it up in part 2, tomorrow).

Reading through the feedback was pretty hard, this is a crowd that has very high expectations, and today did not meet that bar.

Just when you think you have things all ready to go.

How could we have done better?

A backup machine, set up in exactly the same fashion as the first, would still have been knocked out by the same power issue.

I need a way to save the state of all of my open Visual Studio windows and script that out, so I can run one script that opens all of the instances and all of the right files (jumping to the right spot would be nice as well).

Not freaking out. We had just gotten set and ready to go, and the power issue really knocked me off kilter.

So, we walk away and we learn something, and we'll be back to do it again tomorrow. Everyone has these nightmare conference stories, but that still doesn't make things better.

In my last post, I covered using ActivityAction in order to provide a schematized callback, or hole, for the consumers of your activity to supply. What I didn’t cover, and what I intend to here, is how to create a designer for that.

If you’ve been following along, or have written a few designers using WorkflowItemPresenter, you may have a good idea of how we might go about solving this. There are a few gotchas along the way that we’ll cover as we go.

First, let’s familiarize ourselves with the Timer example in the previous post:

So, let’s build a designer for this. First we have to provide a WorkflowItemPresenter bound to the .Body property. This is pretty simple. Let’s show the “simple” XAML that will let us easily drop something on the Body property
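The “simple” XAML might look something like the following sketch (the class name actionDesigners.ActivityDesigner1 is a placeholder matching the fuller listing later in this post):

```xml
<sap:ActivityDesigner x:Class="actionDesigners.ActivityDesigner1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:sap="clr-namespace:System.Activities.Presentation;assembly=System.Activities.Presentation">
  <!-- One presenter, bound straight to the activity's Body property -->
  <sap:WorkflowItemPresenter HintText="Drop the body here"
                             BorderBrush="Black" BorderThickness="2"
                             Item="{Binding Path=ModelItem.Body, Mode=TwoWay}"/>
</sap:ActivityDesigner>
```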

Not a whole lot of magic here yet. What we want to do is add another WorkflowItemPresenter, but what do I bind it to? Well, let’s look at how ActivityDelegate, the root class for ActivityAction and ActivityFunc (which I’ll get to in my next post), is defined:
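Here is a simplified sketch of the relevant shape (the real classes have more members than shown):

```csharp
// Simplified sketch -- Handler is the piece that matters for the designer.
public abstract class ActivityDelegate
{
    // The activity tree that runs when the delegate is invoked.
    public Activity Handler { get; set; }
}

public sealed class ActivityAction<T> : ActivityDelegate
{
    // The named argument that expressions inside Handler can reference.
    public DelegateInArgument<T> Argument { get; set; }
}
```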

Hmmm, Handler is an Activity; that looks kind of useful. Let’s try that:

[warning, this XAML won’t work, you will get an exception, this is by design :-) ]

<sap:ActivityDesigner x:Class="actionDesigners.ActivityDesigner1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:sap="clr-namespace:System.Activities.Presentation;assembly=System.Activities.Presentation"
    xmlns:sapv="clr-namespace:System.Activities.Presentation.View;assembly=System.Activities.Presentation">
  <StackPanel>
    <sap:WorkflowItemPresenter HintText="Drop the body here"
                               BorderBrush="Black" BorderThickness="2"
                               Item="{Binding Path=ModelItem.Body, Mode=TwoWay}"/>
    <Rectangle Width="80" Height="6" Fill="Black" Margin="10"/>
    <!-- this next line will not work like you think it might -->
    <sap:WorkflowItemPresenter HintText="Drop the completion here"
                               BorderBrush="Black" BorderThickness="2"
                               Item="{Binding Path=ModelItem.OnCompletion.Handler, Mode=TwoWay}"/>
  </StackPanel>
</sap:ActivityDesigner>

While this gives us what we want visually, there is a problem with the second WorkflowItemPresenter (just try dropping something on it):

Now, if you look at the XAML after dropping, the activity you dropped is not present. What’s happened here:

The OnCompletion property is null, so binding to OnCompletion.Handler will fail

We (and WPF) are generally very forgiving of binding errors, so things appear to have succeeded.

The instance was created fine, the ModelItem was created fine, and it was put in the right place in the ModelItem tree, but there is no link in the underlying object graph; basically, the activity that you dropped is not connected.

Thus, on serialization, there is no reference to the new activity in the actual object, and so it does not get serialized.

How can we fix this?

Well, we need to patch things up in the designer, so we will write a little bit of code using the OnModelItemChanged event. The code is pretty simple: when something is assigned to ModelItem, if the value of OnCompletion is null, initialize it. If it is already set, we don’t need to do anything (for instance, if you used an IActivityTemplateFactory to initialize it). One important thing here (putting on the bold glasses): YOU MUST GIVE THE DELEGATEINARGUMENT A NAME. VB expressions require a named token to reference, so please put a name in there (or bind it; more on that below).
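A minimal sketch of that code-behind, written against the beta API names, might look like this (the "duration" argument name matches the Timer example from the previous post):

```csharp
// Sketch of the designer code-behind described above (beta API names).
// If nothing has been assigned to OnCompletion yet, initialize it with
// an ActivityAction whose DelegateInArgument HAS A NAME ("duration"),
// so VB expressions inside the handler have a token to reference.
protected override void OnModelItemChanged(object newItem)
{
    base.OnModelItemChanged(newItem);
    if (this.ModelItem.Properties["OnCompletion"].Value == null)
    {
        this.ModelItem.Properties["OnCompletion"].SetValue(
            new ActivityAction<TimeSpan>
            {
                Argument = new DelegateInArgument<TimeSpan> { Name = "duration" }
            });
    }
}
```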

Well, this works :-) Note that you can see the duration DelegateInArgument that was added.

Now, you might say something like the following: “Gosh, I’d really like to not give it a name and have someone type that in” (this is what we do in our ForEach designer, for instance). In that case, you would need to create a text box bound to OnCompletion.Argument.Name, which is left as an exercise for the reader.

Alright, now you can get out there and build activities with ActivityActions, and have design time support for them!

One question brought up in the comments on the last post was “what if I want to not let everyone see this” which is sort of the “I want an expert mode” view. You have two options. Either build two different designers and have the right one associated via metadata (useful in rehosting), or you could build one activity designer that switches between basic and expert mode and only surfaces these in expert mode.

I’ve been meaning to throw together some thoughts on attached properties and how they can be used within the designer. Basically, you can think about attached properties as injecting some additional “stuff” onto an instance that you can use elsewhere in your code.

Motivation

In the designer, we want behavior and view tied to interesting aspects of the data. For instance, we would like a view to update when an item becomes selected. In WPF, we bind the style based on an “IsSelection” property. Now, our data model doesn’t have any idea of selection; that’s something we’d like the view layer to “inject” onto any model item so that a subsequent view can take advantage of it. You can view attached properties as nice syntactic sugar that saves you from keeping a bunch of lookup lists around. Since things like WPF bind to an object very well, and not so much to a lookup list, this ends up being an interesting model.

To be clear, you could write a number of value converters that take the item being bound, look up in a lookup list somewhere, and return the result that will be used. The problem we found is that we were doing this in a bunch of places, and we really wanted to have clean binding statements inside our WPF XAML, rather than hiding a bunch of logic in the converters.

One thing that might look a little funny to some folks who have used attached properties in other contexts (WF3, WPF, XAML) is the IsBrowsable property. The documentation is a little sparse right now, but this property determines how discoverable the attached property is. If it is set to true, the attached property will show up in the Properties collection of the ModelItem to which it is attached. That means it can show up in the property grid and you can bind WPF statements directly to it, as if it were a real property of the object. Attached properties by themselves have no actual storage representation, so these exist as design-time-only constructs.

Getter/ Setter?

One other thing that you see on AttachedProperty&lt;T&gt; is the Getter and Setter properties. These are of type Func&lt;ModelItem,T&gt; and Action&lt;ModelItem,T&gt;, respectively. They allow you to perform some computation whenever a get or set is called against the attached property. Why is this interesting? Well, let’s say you’d like a computed value, such as “IsPrimarySelection” checking with the Selection context item to see if an item is selected. Or you might customize the setter to store the value somewhere more durable, or to update a few different values. The other benefit is that since all of these updates go through the ModelItem tree, any changes will be propagated to other listeners throughout the designer.
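To make the Getter/Setter idea concrete, here is a hedged sketch (beta API names; it assumes an EditingContext named editingContext is in scope, and uses the Dog sample type from this post as the owner):

```csharp
// Sketch: a browsable attached property whose value is computed from
// the Selection context item, published via AttachedPropertiesService.
AttachedProperty<bool> isPrimarySelection = new AttachedProperty<bool>
{
    Name = "IsPrimarySelection",
    IsBrowsable = true,
    OwnerType = typeof(Dog),   // the sample type from this post
    // Getter computes the value on demand from the Selection context item.
    Getter = (ModelItem mi) =>
        mi == editingContext.Items.GetValue<Selection>().PrimarySelection,
    // Setter performs a side effect instead of writing to real storage.
    Setter = (ModelItem mi, bool value) =>
    {
        if (value) { Selection.SelectOnly(editingContext, mi); }
    }
};
editingContext.Services.GetService<AttachedPropertiesService>()
              .AddProperty(isPrimarySelection);
```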

Looking at Some Code

Here is a very small console based app that shows how you can program against the attached properties. An interesting exercise for the reader would be to take this data structure, put it in a WPF app and experiment with some of the data binding.

Lines 13-17 output the properties, and let’s see what that looks like:

---- Enumerate properties on dog (note new property) ----
Property : Name
Property : Noise
Property : Age
Property : IsAnInterestingDog

---- Enumerate properties on cat (note no new property) ----
Property : Name
Property : Noise
Property : Age
Ok, so that’s interesting: we’ve injected a new property, only on the dog type. If I get dogMI.Properties[“IsAnInterestingDog”], I have a value that I can manipulate (albeit returned via the getter).

As we can see, we’ve now injected this behavior, and we can extract the value.

Let’s get a little more advanced and do something with the setter. Here, if isYoungAnimal is set to true, we will change the age (it’s a bit contrived, but it shows the dataflow on simple objects; we’ll see a more interesting case in a minute).

1: // now, let's do something clever with the setter.
2: Console.WriteLine("---- let's use the setter to have some side effect ----");

Pay attention to what the Setter does now. We create the method through which subsequent SetValue’s will be pushed. Here’s that output:

---- let's use the setter to have some side effect ----
cat's age now 10

Finally, let’s show an example of how this can really function as some nice sugar to eliminate the need for a lot of value converters in WPF by using this capability as a way to store the relationship somewhere (rather than just using at a nice proxy to change a value):

1: // now, let's have a browsable one with a setter.
2: // this plus dynamics are a mini "macro language" against the model items
3:
4: List<Object> FavoriteAnimals = new List<object>();
5:
6: // we maintain state in FavoriteAnimals, and use the getter/setter func
7: // in order to query or edit that collection. Thus changes to an "instance"

Line 14 – Create a setter that acts upon the FavoriteAnimals collection to either add or remove the element

Line 28-32 – do a few different sets on this attached property

NOTE: you can’t do that in beta2 as the dynamic support hasn’t been turned on. Rather you would have to do isFavoriteAnimal.SetValue(dogMi, true).

Line 35 then prints the output to the console, and as expected we only see the dog there:

-- Who are my favorite animals?
Sasha

I will attach the whole code file at the bottom of this post, but this shows you how you can use the following:

Attached properties to create “computed values” on top of existing types

Attached properties to inject a new (and discoverable) property entry on top of the designer data model (in the form of a new property)

Using the Setter capability both to propagate real changes to the type (providing a cleaner interface) and to store data about the object outside of the object, in a way that still makes it feel like part of the object.

This is some really nice syntactic sugar that we sprinkle on top of things

What do I do now?

Hopefully this post gave you some ideas about how the attached property mechanisms work within the WF4 designer. These give you a nice way to complement the data model and create nice bindable targets that your WPF Views can layer right on top of.

A few ideas for these things:

Use the Setters to clean up a “messy” activity API into a single property type that you then build a custom editor for in the property grid.

Use the Getters (and the integration into the ModelProperty collection) in order to create computational properties that are used for displaying interesting information on the designer surface.

Figure out how to bridge the gap to take advantage of the XAML attached property storage mechanism, especially if you author runtime types that look for attached properties at runtime.

Use these, with a combination of custom activity designers to extract and display interesting runtime data from a tracking store

When we start doing this two-way style of messaging, we open up the ability to model some interesting business problems. In the previous post, you'll note that I did not include the code, because I mentioned we need to be more clever in scenarios where we listen in parallel.

First, a brief diversion into how the Receive activity works. Everybody remembers workflow queues, the technology that underlies all communication between a host and a workflow instance. The Receive activity works by creating a queue that the WorkflowServiceHost (specifically the WorkflowOperationInvoker) uses to send the message received off the wire into the workflow. Normally, the Receive activity just creates a queue named after the operation it is bound to. However, if we have two Receive activities listening for the same operation at the same time, a single queue is no longer enough to route responses, since we want to route to the correct Receive activity instance.

There is a property on the Receive activity called ContextToken. Normally this is null in the simple case. However, when we want our Receive activity to operate in parallel, we need to indicate that it should be smarter when it creates its queue.

By setting this property (you can just type in a name), and then selecting the common owner that all of the parallel Receives share, the Receive activity will create a queue named [OperationName]+[ConversationId]. The conversation ID takes the form of a GUID, and is the second element inside a context token.

The sample that I show for this talk is simply the conversations sample inside the SDK. This is the sample to check out to understand all sorts of interesting ways to use the context tokens to model your processes.

Now, there are two conversation patterns here. One is the one shown above, which I refer to as an n-party conversation where n is fixed at design time. We can accomplish this with the parallel activity. The other is where n is arbitrary (imagine you send out to business partners stored in the database). The way to do this is to use the Replicator activity. The Replicator is a little known gem shipped in 3.0 that essentially gives you "ForEach" semantics. But, by flipping the ExecutionType switch to parallel, I now get the behavior of a parallel, but operating with an arbitrary n branches.

So, in order to enable conversations, we need to tell our Receive activity to be a little smarter about how it generates its queue name, and then we simply follow the duplex pattern we discussed in the last two posts. Once we do that, we're in good shape to start modeling some more interesting communication patterns between multiple parties.

Where can we go from here?

We can just make the patterns more interesting. One interesting one would be the combination of the long running work with cancellation and a Voting activity in order to coordinate the responses and allow for progress to be made when some of the branches complete (if I have 3 yes votes, I can proceed). The power of building composite activities is that it gives me a uniform programming model (and a single threaded one to boot) in order to handle the coordination of different units of work. Get out there and write some workflows :-)

Standard beta disclaimer: this is written against the beta1 APIs. If you're reading this in 2014, the bits will look different. When the bits update, I will make sure to have a new post that updates these (or points to SDK samples that do).

In yesterday’s post, we went over the core components of the designer. Let’s now take that and build an application that rehosts the designer, and then we’ll circle back around and talk about what we did and what comes next.

Start VS, Create a new project, and select a WPF project

Inside the VS project, add references to the System.Activities.* assemblies. For now, that list looks like this:

System.Activities.dll

System.Activities.Design.Base.dll

System.Activities.Design.dll

System.Activities.Core.Design.dll

You might think the list of design assemblies is excessive. In subsequent milestones we’ll probably collapse it into two design assemblies: one with the designer infrastructure and one with the activity designers.

Create some layout in the WPF window to hold the various designer elements. I usually do a three column grid for toolbox, property grid and designer canvas.
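That layout might be sketched like this (a hedged example; the grid1 name is what the later code-behind adds the designer view to, and the column widths are arbitrary):

```xml
<!-- Three-column layout: toolbox | designer canvas | property grid -->
<Grid x:Name="grid1">
  <Grid.ColumnDefinitions>
    <ColumnDefinition Width="200"/>  <!-- toolbox -->
    <ColumnDefinition Width="*"/>    <!-- designer canvas -->
    <ColumnDefinition Width="250"/>  <!-- property grid -->
  </Grid.ColumnDefinitions>
</Grid>
```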

Now that we’ve got the layout down, let’s get down to business. First let’s just get an app that displays the workflow designer, and then we’ll add some other interesting features. We wanted to make it easy to get a canvas onto your host application and to program against it. The key type is WorkflowDesigner; it encapsulates all of the functionality, and the operating context, required. Let’s take a quick look at the type definition:

Context: Gets or sets an EditingContext object that is a collection of services shared between all elements contained in the designer and used to interact between the host and the designer. Services are published and requested through the EditingContext.

View: Returns a UI element that allows the user to view and edit the workflow visually.

The editing context is where we will spend more time in the future, for now the View is probably what’s most interesting, as this is the primary designer canvas. There are also some useful methods to load and persist the workflow as well.
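In rough, abbreviated form (member names and signatures here are a sketch based on the beta bits and may shift before release), the surface area we care about looks something like this:

```csharp
public class WorkflowDesigner
{
    // Shared service/state bag the host and the designer use to talk to each other.
    public EditingContext Context { get; set; }

    // The primary designer canvas, as a WPF UIElement you can drop into your layout.
    public UIElement View { get; }

    // Load an object graph (or workflow XAML from a file) into the designer,
    // and persist the edited workflow back out.
    public void Load(object instance) { }
    public void Load(string fileName) { }
    public void Save(string fileName) { }
}
```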

Let’s start off real simple, and write some code that will display a basic sequence, and we’ll get more sophisticated as we go along.

```csharp
 1: using System.Windows;
 2: using System.Windows.Controls;
 3: using System.Activities.Design;
 4: using System.Activities.Core.Design;
 5: using System.Activities.Statements;
 6:
 7: namespace BlogPostRehosting
 8: {
 9:     /// <summary>
10:     /// Interaction logic for Window1.xaml
11:     /// </summary>
12:     public partial class Window1 : Window
13:     {
14:         public Window1()
15:         {
16:             InitializeComponent();
17:             LoadWorkflowDesigner();
18:         }
19:
20:         private void LoadWorkflowDesigner()
21:         {
22:             WorkflowDesigner wd = new WorkflowDesigner();
23:             (new DesignerMetadata()).Register();
24:             wd.Load(new Sequence
25:             {
26:                 Activities =
27:                 {
28:                     new Persist(),
29:                     new WriteLine()
30:                 }
31:             });
32:             Grid.SetColumn(wd.View, 1);
33:             grid1.Children.Add(wd.View);
34:         }
35:     }
36: }
```

Let’s walk through this line by line:

Line 22, construct the workflow designer.

Line 23, Call Register on the DesignerMetadata class. Note that this associates all of the out of the box activities with their out of the box designers. This is optional as a host may wish to provide custom editors for all or some of the out of box activities, or may not be using the out of box activities.

Lines 24-31, call Load, passing in an instance of an object graph to display. This gives the host some flexibility, as this instance could come from XAML, a database, JSON, user input, etc. We simply create a basic sequence with two activities.

Line 32, set the grid column for the view.

Line 33, add the view to the display.
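For example, instead of building the object graph in code, the same designer can be pointed at workflow XAML on disk. A minimal sketch (the file path here is hypothetical, and I'm assuming the Load overload that takes a file name):

```csharp
WorkflowDesigner wd = new WorkflowDesigner();
(new DesignerMetadata()).Register();

// Load an existing workflow definition from a XAML file rather than an in-memory graph.
wd.Load(@"C:\Workflows\Sample.xaml");

Grid.SetColumn(wd.View, 1);
grid1.Children.Add(wd.View);
```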

This gives us the following application:

Now, that was pretty simple, but we’re also missing some key things, namely the property grid. It’s important to note, however, that this has all of the functionality of the designer (the variables designer, the overview map, etc.). This will react just the same as if you were building the workflow in VS.
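The designer hands the property grid back as just another UI element, so wiring it up is one more add to the layout. A sketch, assuming the designer exposes the grid as a PropertyInspectorView property and that the third grid column is reserved for it:

```csharp
// Drop the designer's property inspector into the third column,
// next to the canvas we added above.
Grid.SetColumn(wd.PropertyInspectorView, 2);
grid1.Children.Add(wd.PropertyInspectorView);
```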

Adding the designer’s property inspector to our layout will let us see the property grid (so things get a little more interesting).

So, we’re able to display the workflow and interact with it, but we probably also want to have a constrained authoring experience (not just editing), so that comes in the form of the ToolboxControl. For the sake of this blog post, we’ll use this in XAML, but we certainly can code against it imperatively as well.
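A minimal sketch of what the XAML usage might look like, assuming a sapt: prefix mapped to the toolbox control's namespace and the ToolboxCategory/ToolboxItemWrapper element names (names and namespaces may shift between milestones):

```xml
<!-- xmlns:sapt is assumed to map to the assembly/namespace containing ToolboxControl -->
<sapt:ToolboxControl Grid.Column="0">
  <sapt:ToolboxCategory CategoryName="Basic">
    <!-- ToolName takes an assembly-qualified activity type name -->
    <sapt:ToolboxItemWrapper ToolName="System.Activities.Statements.Sequence, System.Activities" />
    <sapt:ToolboxItemWrapper ToolName="System.Activities.Statements.WriteLine, System.Activities" />
  </sapt:ToolboxCategory>
</sapt:ToolboxControl>
```

With the toolbox in column 0, the canvas in column 1, and the property grid in column 2, users can drag activities from the toolbox onto the canvas and edit them, just as they would inside VS.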