Since WPF came out there’s been one quirk, one “optimization” in data binding that has been a serious pain.

Interestingly enough the same quirk is in Windows Forms, but the WPF team tells me that the reason it is also in WPF is entirely independent from how and why it is in Windows Forms.

The “optimization” is that when a user changes a value in the UI, say in a TextBox, that value is put into the underlying source object’s property (whatever property is bound to the Text property of the TextBox). If the source object changes the value in its setter, that change never appears in the UI. Even if the setter raises PropertyChanged, WPF ignores it and leaves the original (bad) value in the UI.

To overcome this, you’ve had to put a ValueConverter on every binding expression in WPF. In CSLA .NET I created an IdentityConverter, which is a value converter that does nothing, so you can safely attach a converter to a binding when you really didn’t want a converter there at all, but you were forced into it to overcome this WPF data binding quirk.
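A pass-through converter along these lines is all it takes — this is a sketch of the idea, not the exact code shipped in CSLA .NET:

```csharp
using System;
using System.Globalization;
using System.Windows.Data;

// A do-nothing value converter. Attaching any converter to a binding
// forces pre-4.0 WPF to re-read the source property after the setter
// runs, so changes made in the setter show up in the UI.
public class IdentityConverter : IValueConverter
{
  // Source -> UI: pass the value through unchanged
  public object Convert(object value, Type targetType,
    object parameter, CultureInfo culture)
  {
    return value;
  }

  // UI -> source: pass the value through unchanged
  public object ConvertBack(object value, Type targetType,
    object parameter, CultureInfo culture)
  {
    return value;
  }
}
```

In XAML the binding then looks like `Text="{Binding Name, Converter={StaticResource IdentityConverter}}"`, assuming the converter has been declared as a resource.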

WPF 4.0 fixes the issue. Karl Shifflett describes the change very nicely in this blog post.

This should mean that I can remove the (rather silly) IdentityConverter class from CSLA .NET 4.0, and that makes me happy.

I have put a beta release of version 3.6.3 online for download. This version is now feature complete, and my plan is to release it around the end of next week. I'll only be changing this version for show-stopping issues; otherwise this is the final code.

If you are using 3.6.x, you should download and test this version. There are important bug fixes in this version - please see the change logs for details. If you are developing Silverlight, WPF or Windows Forms applications (in particular), you'll almost certainly want some of these fixes!

There are minor new features as well, including

Named connections in the GetManager methods for ConnectionManager and similar types

ReadProperty() method in the ObjectFactory base class

But the primary focus is on fixing bugs and refining key usage scenarios.
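To give a rough idea of the two minor features above — a hedged sketch, where `MyDb`, `CustomerFactory`, `CustomerEdit` and `IdProperty` are all placeholder names and the exact overloads may differ slightly:

```csharp
// Named connection: "MyDb" is assumed to be the name of a connection
// string in the config file's <connectionStrings> section.
using (var mgr = Csla.Data.ConnectionManager.GetManager("MyDb"))
{
  // The connection is opened once and reference-counted, so nested
  // GetManager("MyDb") calls reuse the same connection.
  var cmd = mgr.Connection.CreateCommand();
  // ... run queries
}

// In an ObjectFactory subclass, ReadProperty reads a managed property
// value from a business object without reflection tricks.
public class CustomerFactory : Csla.Server.ObjectFactory
{
  public void Update(CustomerEdit obj)
  {
    var id = ReadProperty(obj, CustomerEdit.IdProperty);
    // ... persist using id
  }
}
```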

I decided to try using Visual Studio 2010 Beta 1 to open CSLA .NET for Windows 3.6.3.

Unfortunately this isn’t as smooth as one would hope.

VS runs an upgrade wizard on the solution, but this breaks the project file. I had to manually edit the project file in Notepad to remove a bunch of stuff in the configuration. How did I know what to remove? I created a brand new Class Library project for .NET 4.0 and compared the contents of that file to the Csla file…

Now that I could open the solution I figured it would just build. Unfortunately not.

Two references were broken: System.ComponentModel.dll and System.Runtime.Serialization.dll. Perhaps the project referenced them by specific version number; I’m not sure. I do know that the references were broken, so I removed and re-added the references to these assemblies.
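For illustration, this is the kind of difference that can break a reference across toolset versions in the project file — the version numbers here are made up, not taken from the actual Csla project:

```xml
<!-- Pinned to a specific assembly version from the old toolset;
     fails to resolve after the upgrade -->
<Reference Include="System.Runtime.Serialization, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
  <SpecificVersion>True</SpecificVersion>
</Reference>

<!-- After remove/re-add: a simple name the new toolset resolves itself -->
<Reference Include="System.Runtime.Serialization" />
```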

Next is an issue with the web service reference required for the old asmx data portal channel. I suspect the issue is that I need to update (or remove and re-add) the service reference to the web services host. But I am not entirely sure it is worth carrying this legacy channel forward (or the Remoting or Enterprise Services ones) into CSLA .NET 4.0 – WCF is the preferred solution after all, and it has been around for a few years. So at least for now I just removed the service reference, the web service proxy class in Csla.DataPortalClient and the web service host class in Csla.Server.Hosts.

The result is that the solution builds. I haven’t tried running (or building) the unit test project yet, and I suspect there’ll be a few issues there as well, but at least the basic build of Csla.dll is now possible.

We interrupt the normal technical content of this blog to bring you an important news flash.

The Star Trek movie is awesome!!

Through a fortuitous accident, my sons and I were able to see a pre-screening of the movie this evening. We literally snuck in at the last minute.

I am a long-time trekkie. I love TOS and ST:TNG (with the usual caveats). DS9 was a sad rip-off of Babylon 5, Voyager was “Star Trek does Space: 1999”, and Enterprise got good only after it was canceled (the last half-season was totally on track). And I’m not really going to talk about the movies. Khan was great, and IV was fun; otherwise not so much…

So having watched the universe and characters I loved so much slowly dwindle and fade into utter drivel over the past many years, I had serious reservations about this new movie. Of course it is hard to imagine they could do more damage to the Star Trek universe, so I suppose there was nothing really to lose either.

On the upside, my hope was that this movie would do for Star Trek what the new Doctor Who did for that show, or the new Battlestar Galactica did for that show. There is evidence that the beloved content of my youth, when handled by competent, respectful and loving hands, can be given new and often better life now, with today’s special effects.

And my hope has been realized.

This movie treats the characters, the universe and the overall content and setting with respect. They took the setting and characters and breathed new life into them – capturing the humor, the interplay, the drive – the very essence of the original concept – and they created a movie for me.

A movie for the little kid who sat three feet in front of my Grandfather’s TV (because we didn’t get that channel at my house). But a movie for the person I am today, remembering what it was like to be that kid. Just like the new Doctor Who and BSG shows did with their original inspiration.

It has been many, many years since the words “These are the voyages of the starship Enterprise…” had the power to send chills up my spine. To make me smile, and to think that the future really is bright and wonderful.

At the end of this movie, when Leonard Nimoy spoke those words, I felt those things like I did so many years ago, sitting on my Grandfather’s floor.

In working on MCsla I have learned quite a few things. Obviously I had to learn the ins and outs of MGrammar, MSchema, MGraph and the various related command line tools. But that’s not really the interesting part, because that sort of knowledge is specific to a bunch of unreleased tools and so is transitory.

What is more interesting is learning about the process of building a runtime that works off metadata. Which leads to learning about what metadata is needed for a runtime. Which leads to learning about how you get that metadata – where it comes from.

In my simplistic worldview I thought I’d create a single DSL to create metadata for my runtime. And this is what I’ve done in my MCsla prototype, but it really isn’t sufficient for any long-term real effort.

Why?

Because my DSL describes business objects. It doesn’t describe details about the UI, or about the data access layer (DAL) or any data mappings into the database. In my prototype I am inferring enough information from the metadata to create a basic UI, a limited DAL and some data mappings, but this is only useful for a prototype.

The conclusion I’ve come to is that any real CSLA .NET application runtime will require a number of related metadata schemes:

1. Business object description

2. UI description

3. DAL description

4. Data mapping description

If you create a single DSL to cover all these at once I think you’ll be in trouble. The trouble will come in at least two forms. First, this single DSL will almost certainly approach the complexity of C# or other 3GLs, and at that point why use a DSL at all? Second, such a DSL would almost certainly break down walls around separation of concerns. Let me talk about this second point further.

One of the primary strengths of CSLA .NET is that it encourages clear separation of concerns. The business layer supports numerous UI layers at one time – Web Forms, WPF, ASP.NET MVC, Silverlight, Windows Forms, WCF services, Workflows, etc. Similarly, the business layer doesn’t know or care how or where the DAL gets the data, as long as the DAL can provide create/read/insert/update/delete operations.

It is a virtual certainty that you’ll need a different DSL for ADO.NET EF than for raw ADO.NET (connection/command/datareader) data access layers. Similarly, you’ll probably need a different DSL to describe a web UI, as opposed to a XAML-based UI. The technology semantics are so different that it is hard to imagine a single DSL that encompasses both UI styles (at least one that is rich enough to make the end users happy).

As a result, I suspect we need a different DSL for each technology of each layer of the application architecture. And that’s OK, because it is a good bet that Microsoft will create a DSL for ADO.NET EF (as an example), so we don’t need to. That’ll probably cover at least item number 4 in my earlier list.

The role of the “DAL description DSL” is probably to map the items in the business DSL to the items in the data mapping DSL.

Already it becomes clear that some DSLs (like any DSL for ADO.NET EF, or for a framework like MCsla) will exist in their own world, for their own narrow domain. And that’s fine, because that is the point of a DSL.

And at the same time, it becomes clear that we’ll need to create cross-DSL DSLs to bridge the gaps between different DSLs. So that hypothetical “DAL description DSL” would be specific to both MCsla and the data mapping DSL.

To put it another way, it seems to me that we’re looking at two types of DSL: “real” domain languages that target a specific domain, and “glue” domain languages that target the gaps between other domain languages.
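A purely hypothetical illustration of the distinction — none of this syntax exists in MCsla or anywhere else, it just shows the shape of the two kinds of language:

```
// "Real" DSL: describes a business object entirely in its own domain
Object Customer
  Property Id   : Integer  required
  Property Name : String   maxlength(50)

// "Glue" DSL: bridges the business DSL and the data-mapping DSL
Map Customer to Table Customers
  Id   -> CustomerId
  Name -> CustomerName
```

The first fragment knows nothing about databases; the second fragment exists only to connect two other languages, which is exactly why it must be specific to both of them.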

I suspect the end result is that to create a comprehensive runtime, the runtime will need to pull in metadata for each layer of the application, and for the description of how the layers interact. A developer will need to learn four or more DSLs to describe the various layers of the application, and tooling will need to evolve to validate the various interconnected bits of DSL code so we aren’t left catching mismatched bits of DSL input at runtime.

To summarize: this DSL/runtime stuff is very cool, but it seems to me that it will take years of maturation before everything comes together in a really productive manner.

One caveat: clearly you could limit the flexibility of the architecture/runtime and simplify the problem space, and thus this multi-DSL issue. My fear, though, is that then you are no different from CASE and many other similar failed concepts of the past – so that seems rather uninteresting to me.

The 4th video in the CSLA .NET for Silverlight video series is now available. This is 65 minutes of in-depth content covering the creation of 2-, 3- and 4-tier applications using CSLA .NET for Silverlight.

If you’d like to get a better idea of the structure, style and quality of the video series you can download a FREE promotional montage. This promotional video includes lecture and coding content, providing an example of everything you’ll see in the full video.

The N-Tier Architecture segment covers a lot of ground, including these topics:

How to share code files and objects between a Silverlight client and .NET server

How to have server-only and client-only sections in your business layer

How to configure an ASP.NET server for the Silverlight data portal

How to configure the Silverlight client to use a remote data portal

How to use the MobileFactory attribute to create an observer object to validate inbound client requests on the web server

How to set up the web server to act as a "bridge", routing all client calls to a real application server behind a second firewall
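As an example of the shared-code approach from the list above — a sketch only, with placeholder names, though the compiler-directive technique itself is how CSLA .NET for Silverlight shares files — a single code file can be linked into both the Silverlight and .NET projects, with client-only and server-only sections:

```csharp
// This file is linked into both the Silverlight project (which defines
// the SILVERLIGHT compilation symbol) and the .NET server project.
public partial class CustomerEdit : BusinessBase<CustomerEdit>
{
#if SILVERLIGHT
  // Client-only section: compiled only into the Silverlight assembly,
  // e.g. async factory methods that call the remote data portal.
#else
  // Server-only section: data access code that never compiles into
  // the client, so no data-layer references leak to Silverlight.
  private void DataPortal_Fetch(SingleCriteria<CustomerEdit, int> criteria)
  {
    // ... load fields from the database
  }
#endif
}
```

Because the two sections compile into different assemblies, the business layer can present one logical type to both sides of the wire while keeping server-only dependencies off the client.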