I haven't seen many "Fatal Execution Engine Error (79FFEE24)" sorts of messages lately - not since .NET 1.0/1.1 days.

But we were wrestling with an issue for multiple weeks in which a WCF service would crash with exactly this error. Since this service uses the DataContractSerializer, this post seemed quite relevant. The workarounds suggested there were not going to be a fit for us, unfortunately.

The really insidious part of this problem was that the initial failure appeared to be in a third-party library (that our service invoked) - complete with a stack trace that pointed deep into the third-party code. However, when we removed the call to that library, we got the "Fatal Execution Engine" exception while WCF was attempting to serialize the service response. (Really nasty, and it points to some sort of stack corruption, perhaps.) You could see that WCF serialization was at fault by analyzing a crash dump of the IIS worker process with WinDbg.

When we talked with some folks at Microsoft support, they indicated that the DataContractSerializer issue is fixed in .NET 4.0, but that no hotfix is available for .NET 3.5 SP1 at this time.

The workaround they proposed - and which has resolved our issue - was to place every assembly containing a type T that appears as IEnumerable&lt;T&gt; in a contract into the GAC. (In other words, if your contract has IEnumerable&lt;T&gt; elements, then every such type T has to be strong-named and in the GAC.)

Why does this work? The bug in DataContractSerializer apparently does not manifest when assemblies are loaded as "domain neutral" (shared across all appdomains). You can force strong-named, GAC'd assemblies to be loaded as domain neutral by using the LoaderOptimization attribute. If you're hosting in IIS, you automatically get LoaderOptimization(LoaderOptimization.MultiDomainHost) behavior for your application - and under MultiDomainHost, only assemblies loaded from the GAC are shared as domain neutral, which is why the workaround requires the GAC. If you're not hosting in IIS, this bug doesn't seem to appear at all.
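To make the shape of the workaround concrete, here's a minimal sketch. The contract and type names (IOrderService, OrderSummary) are hypothetical - only the IEnumerable&lt;T&gt; pattern comes from the scenario above:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Illustrative contract - returning IEnumerable<T> is the shape that
// triggers the bug, so OrderSummary's assembly must be strong-named
// and installed in the GAC for the workaround to take effect.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    IEnumerable<OrderSummary> GetOrders();
}

[DataContract]
public class OrderSummary
{
    [DataMember]
    public int Id { get; set; }
}

public static class GacCheck
{
    // Sanity check that the workaround is in effect:
    // Assembly.GlobalAssemblyCache is true only when the assembly
    // was actually loaded from the GAC.
    public static bool IsTypeGacLoaded(Type t)
    {
        return t.Assembly.GlobalAssemblyCache;
    }
}
```

Logging something like GacCheck.IsTypeGacLoaded(typeof(OrderSummary)) at service startup is one way to catch a deployment where an assembly quietly slipped out of the GAC.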

This workaround is a hassle, of course - it ripples across the application in many ways...but it does resolve the issue.

I'll be giving a talk for Microsoft's "Build Your Skills" event on March 24th in St. Louis (and March 31st in Minneapolis) on the topic of code profiling. We'll look at the profiling tools built into Visual Studio Team System Developer, and a few others to boot. You can get all the details here. There is a whole slate of great talks planned for the day, so register now if you're in the area...

A frequent pattern in TFS/Team Build is to merge from one branch to
another using a label as the basis for the merge. (That is, you select
a label in the source branch that designates the point you want to
merge "from".) Often, this label was applied by Team Build
automatically.

This might play out like: "I know this build of this feature branch is
good; I'll use the corresponding label as the basis for a merge back to
the trunk." Etc.

If this sounds like you and your shop, be sure to enable the feature that Buck Hodges discusses here to make sure that your build label sticks around even when your retention policy indicates the corresponding build should be deleted.
Otherwise, if the merge process takes a while (due to conflict
resolution, or lunch), you might find that upon completion of your work,
you get an error indicating the label you were using cannot be found.

If this scenario does play out poorly for you...you could
attempt to deduce the time at which your build label was applied and
then apply your own label (with the same name) to that point in time on
the source branch. The merge process will then complete...
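In tf.exe terms, the label-based merge (and the recovery step) might look like the following sketch - the server paths, label name, and timestamp are all illustrative:

```shell
# Merge from the feature branch to trunk, using the build label as the
# source version (the "L" prefix means "label" in a versionspec):
tf merge $/MyProject/Feature $/MyProject/Main /version:LMyFeature_20090301.3 /recursive

# If retention policy has already deleted the label, re-create one with
# the same name at the point in time the build ran (deduced from the
# build log), then re-run the merge above:
tf label MyFeature_20090301.3 $/MyProject/Feature /version:D2009-03-01T14:30 /recursive
```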

Many folks have noted that a lot of the Visual Studio elements that have been present within BizTalk to support the development experience are no longer...quite so BizTalk-specific! BizTalk projects now build upon C# projects, and thus a lot of the differences that you used to see in navigating property pages, compilation settings and build mechanics are now gone. This is a very good thing - it allows you to leverage skills you already have on your team.

MSBuild support is now first class. Everyone who went through the trouble to install (and invoke) DevEnv.exe on their build server in order to build BizTalk projects will be glad to know that this is no longer required. (Builds can be ever so much faster when you aren't relying on DevEnv...)

Because of the close relationship with C# projects, you can now have C# artifacts directly in your BizTalk projects. Many people have noted that when you "Add new item...", C# classes aren't offered as an option. The product group has explained that this is because currently, the designers (such as the Orchestration designer) are unable to provide IntelliSense for types that are within the same assembly. So, you are required to use "Add existing item...".

(This sort of reminds me of the interaction between pipelines and schemas - the former require fully qualified assembly information at run time, which they won't get if you combine these two artifacts in the same assembly.)

If this limitation persists, will the feature get used? I tend to think so. There are often cases where the smart thing to do in an orchestration is to delegate to a component...but the work you do in that component is, at times, so specific to the orchestration that it makes sense for the two to be co-located for deployment and organizational purposes. What do you think?

The support for unit testing is extremely welcome - check it out. Debugging maps is great as well, but I'm now (often) partial to the external XSLT approach these days.

XmlPreProcess is a general-purpose tool in its own right for managing configuration files across multiple environments. The tool has pulled in previously separate functionality (the excellent stuff done by Tom) so that it can consume spreadsheets (that describe environment variations) directly, rather than needing a separate process for that. Very slick stuff!

I had a chance to sit down with Jeff Brand while down in Omaha for my presentation at the HDC. We recorded a podcast on all (most) things Scrum and TFS – you can check it out here. It was great fun – thanks Jeff!

I’m not able to attend the PDC this year, but it’s been interesting to watch the definition of Oslo take better shape after following it for a long while.

I was reading Jon Flanders’ post on the topic, where he reiterated the “Oslo as language, model repository, and visual editor aka Quadrant” theme. The language, known as “M”, draws this quote from Jon: “visual models are useful for a certain portion of the developer population, but for the most part developers like to write code, which means text.”

Text is good. I can store text in a version control system, and branch it when needed. I can merge it with well-known tools later on, and compare the differences between versions easily. I can have multiple developers work on it at once with a sane reconciliation process upon check-in.

Even visual models that have a textual representation behind the scenes struggle with these basics…because the text is not first class – it is (often) just a convenient serialization format that is sufficiently opaque for the benefits of text to be lost.

So I’m all for “M”. It will be fun to watch the vision unfold further.