Version 2.1.3 includes some bug fixes and minor enhancements. The enhancements are primarily around the CslaDataSource web control, which now provides more integrated support for paging and sorting.

Most importantly, version 2.1.3 is the specific version described in my CSLA .NET Version 2.1 Handbook. I am releasing this as a ~160 page ebook, available in editions for both C# and VB, for purchase directly from my web site.

This new book is best thought of as a sequel to my Expert C# 2005 Business Objects and Expert VB 2005 Business Objects books. It covers the changes made since CSLA .NET 2.0 was released: how the framework has changed, and how to use the new features and capabilities. Please note that this is not a replacement for those books: it expands on them.

I hope you enjoy CSLA .NET 2.1.3, and the new book. Thank you, code well, have fun!

Thanks to a lot of work by fellow Magenic consultant Chuck Macomber, my CSLA .NET code is now in Subversion (svn) rather than in cvs. Thank you Chuck!!

I'm told that svn offers many advantages over cvs. Just at the moment all I see are the differences where I need to learn and adjust, but that will pass :)

I've left cvs alive for the moment, but Chuck was able to port all the history into svn, so once things get settled I should be able to entirely drop cvs.

The svn repository is at svn://svn.lhotka.net/csla, and it should be open for anonymous read. As with cvs, I'm allowing anyone to read the contents of the repository.

Also, the web viewer (www.lhotka.net/cslacvs) remains alive, but now defaults to using the svn repository. The cvs repository is still visible there too; just use the drop-down in the upper right-hand corner of the page to switch to it. One known issue at this time is that the diff function doesn't work in svn. I don't know why, but Python seems unable to find the diff program under svn, though it works fine under cvs.

If you have any idea how to solve this diff issue, I'm all ears. Google, thus far, has been of little value... (I am now guessing the diff issue was a caching problem, because it simply started working all by itself... Gotta love web development...)

Going forward, all changes to CSLA .NET will occur in the svn repository.

Version 2.1.2 includes some bug fixes and minor enhancements. The enhancements are primarily around the CslaDataSource web control, which now provides more integrated support for paging and sorting.

Most importantly, version 2.1.2 is the specific version described in my CSLA .NET Version 2.1 Handbook. I am releasing this as a ~160 page ebook, available in editions for both C# and VB, for purchase directly from my web site.

Look for the C# edition in the very near future (it is in the final review stage).

This new book is best thought of as a sequel to my Expert C# 2005 Business Objects and Expert VB 2005 Business Objects books. It covers the changes made since CSLA .NET 2.0 was released: how the framework has changed, and how to use the new features and capabilities. Please note that this is not a replacement for those books: it expands on them.

Releasing this as an ebook in PDF format is a risk on my part. I put just as much time and effort into writing this book as I have any other book I've written, but I want to provide a more cost-effective way for you to purchase this material than through normal publishing channels. It is my sincere hope that you will purchase this book if you read it (even if, after a few people have purchased it, they start sharing it for free).

I'm not necessarily giving up on traditional publishing channels. I'll most likely use a traditional publishing approach when I reach a point where CSLA .NET and/or my thinking changes to such a degree that there'd be value for you, as a reader, in me rewriting my books in a major way. But when I'm writing about extra functionality that rests on the existing work, it seems like I'm providing better value to you by selling a lower-cost sequel than by forcing you to repurchase much of the same material to get at the new parts.

But all this hinges on people's honesty. On your honesty.

I hope you enjoy CSLA .NET 2.1.2, and the new book. Thank you, code well, have fun!

There's this "five things about me" tagging meme going around, and I've now been tagged by Bill, Craig and Andrea.

1. I grew up in the middle of Minnesota, surrounded by hundreds of acres of forest and lakes. Our nearest neighbor was a mile away. The nearest town was 14 miles away, and the nearest real town over 30 miles away. The nearest city: a 2.5 hour drive. As a youth, I lived one of those lives of adventure you might read about from long in the past. I spent my time fishing, hunting, trapping, snowmobiling, boating, swimming and generally wandering through the woods and lakes of central Minnesota.

3. I love speculative fiction (the fancy name for science fiction and fantasy). Tolkien, Asimov, Niven, Reynolds, Scalzi, McKillip, Norton and many others are on my favorites list. Between all these books, and my wife's collection of all the essays, letters and other writings of America's founding fathers (plus many random other books on many topics) we have an entire room set aside as a library.

4. My first overseas trip was to London, to attend a Babylon 5 convention to commemorate the conclusion of the show. My wife was (and technically still is) a moderator on the Babylon 5 moderated newsgroup (remember usenet? :) ).

5. I'm a would-be game developer. Years ago I wrote a MUD for the DEC VAX called Mordecai. It consumed all the resources the university VAX could muster back then, and I even ended up working with the Operations guys to build in capabilities to throttle the game so it wouldn't be playable during prime homework usage periods and that sort of thing. I've written bits and pieces of a much more ambitious .NET equivalent, but that project stalled a while back. The reality today is that writing code is such a small part of building a game in total (graphics, physics engines and so forth are a much bigger part) that it is hard to get as excited about this stuff as I used to...

Kindly enough, and because I don't have time to track down who's been tagged or not, I'm not tagging anyone, so this thread of execution ends here ;)

In both cases the issue is that data binding doesn’t refresh the value from the data source after it updates the data source from the UI. This means that any changes to the value that occur in the property set code aren’t reflected in the UI.

The question he posed to me was whether it was a good idea to have a property set block actually change the value. In most programming models, goes the thought, assigning a value to a property can't result in that property value changing. So any changes to the value that occur in the set block of a property are counter-intuitive, and so you simply shouldn't change the value in the setter code.

Here’s my response:

The idea of a setter (which is really just a mutator method by another name) changing a value doesn't (or shouldn't) seem counter-intuitive at all.

If we were talking about assigning a value to a public field I’d agree entirely. But we are not. Instead we’re talking about assigning a value to a property, and that’s very different.

If all we wanted were public fields, we wouldn't need the concept of "property" at all. The concept of "property" is merely a formalization of the following:

public fields are bad

private fields are exposed through an accessor method

private fields are changed through a mutator method

creating and using accessor/mutator methods is awkward without a standard mechanism

So the concept of "property" exists to standardize and formalize the idea that we need controlled access to private fields, and a standard way to change their value through a mutator method.

Consider the business rule that says a document id must follow a certain form - like SOP433. The first three characters must be alpha and upper case, the last three must be numeric. This is an incredibly common scenario for document, product, customer and other user-entered id values.

Only a poor UI would force the user to actually enter upper case values. The user should be able to type what they want, and the software will fix it.

But putting the upper case rule in the UI is bad, because code in the UI isn't reusable, and tends to become obsolete very rapidly as technology and/or the UI design changes. There's nothing more expensive over the life of an application than a line of code in the UI. So while it is possible to implement this rule in a validation control, in JavaScript, in a button click event handler - none of those are good solutions to the real problem.

Yet if that rule is placed purely in the backend system, then the user can't get any sort of interactive response. The form must be "posted" or "transmitted" to the backend before the processing can occur. Users want to immediately see the value be upper case or they get nervous.

So then we're stuck. Many people implement the rule twice. Once in the UI to make the user happy, and once in the backend, which is the real rule implementation. And then they try to keep those rules in sync forever - the result being an expensive, unreliable and hard to maintain system.

I've watched this cycle occur for 20 years now, and it is the same time after time. And it sucks.

This, right here, is why VB got such a bad name through the 1990’s. The VB forms designer made it way too easy to write all the logic in the UI, and without any other clear alternative that's what happened. The resulting applications are very fragile and are impossible to upgrade to the next technology (like .NET). Today, as we talk, many thousands of lines of code are being written in Windows Forms and Web Forms in exactly the same way. Those poor people will have a hell of a time upgrading to WPF, because none of their code is reusable.

What's needed is one location for this rule. Business objects offer a workable solution here. If the object implements the rule, and the object runs on the client workstation, then (without code in the UI) the user gets immediate response and the rule is satisfied. And the rule is reusable, because the object is reusable - in a way that UI code never can be (or at least never has been).

That same object, with that same interactive rule, can be used behind Windows Forms, Web Forms, WPF and even a web services interface. The rule is always applied, because it is right there in the object. And for interactive UIs it is immediate, because it is in the field's mutator method (the property setter).

So in my mind the idea of changing a value in a setter isn't counter-intuitive at all - it is the obvious design purpose behind the property setter (mutator). Any other alternative is really just a ridiculously complex way of implementing public fields. And worse, it leaves us where we've been for 20+ years, with duplicate code and expensive, unreliable software.
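To make this concrete, here is a minimal sketch of the SOP433-style rule living in a property setter. The Document class and the exception-based validation are my own illustration for this post, not CSLA .NET's actual rule mechanism:

```csharp
using System;
using System.Text.RegularExpressions;

public class Document
{
  private string _id = string.Empty;

  public string Id
  {
    get { return _id; }
    set
    {
      // Normalize rather than reject: upper-case whatever the user typed,
      // so the UI never has to implement this rule itself.
      string newValue = (value ?? string.Empty).Trim().ToUpper();

      // Enforce the SOP433 form: three letters followed by three digits.
      if (!Regex.IsMatch(newValue, @"^[A-Z]{3}[0-9]{3}$"))
        throw new ArgumentException(
          "Id must be three letters followed by three digits");

      _id = newValue;
    }
  }
}
```

Assigning "sop433" to Id results in the property holding "SOP433" - the mutator did its job, and any UI bound to the object (assuming data binding refreshes properly, which is the subject of the next post) shows the corrected value immediately.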

Here's an issue from Windows Forms that appears to have crept into WPF as well – along with a solution (thanks to Sam Bent and Kevin Moore from the WPF team):

Consider a class with a property that enforces a business rule - such as that the value must be all upper case:

public class Test : INotifyPropertyChanged
{
  public event
    System.ComponentModel.PropertyChangedEventHandler
    PropertyChanged;

  // ...

  private string _data;

  public string Data
  {
    get
    {
      return _data;
    }
    set
    {
      _data = value.ToUpper();
      OnNotifyPropertyChanged("Data");
    }
  }
}

Bind this to a TextBox and type in a lower-case value. The user continues to see the lower-case value on the screen, even though the object obviously has an upper-case value. The PropertyChanged event is blissfully ignored by WPF data binding.

I believe this is the same "optimization" as in Windows Forms, where the assumption is that since the value was put into the object by data binding it can't be different from what's on the screen - so no refresh is needed. Obviously that is an unfortunate viewpoint, as it totally ignores the idea that an object might be used to centralize business logic or behavior...

In Windows Forms the solution to this issue is relatively simple: handle an event from the BindingSource and force the BindingSource to refresh the value. Bill McCarthy wrapped this solution into an extender control, which I included in CSLA .NET, making the workaround relatively painless.

In WPF the solution is slightly different, but also relatively painless.

It turns out that this optimization doesn’t occur if an IValueConverter is associated with the binding, and if the binding’s UpdateSourceTrigger is not PropertyChanged.

For the TextBox control the UpdateSourceTrigger is LostFocus, so it is good by default, but you’ll want to be aware of this property for other control types.

An IValueConverter object’s purpose is to format and parse the value as it flows between the target control and the source data object. In my case however, I don’t want to convert the value at all, I just want to defeat this unfortunate “optimization”. What’s needed is an identity converter: a converter that does no conversion.
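A minimal sketch of such an identity converter (the class name here is my own):

```csharp
using System;
using System.Globalization;
using System.Windows.Data;

public class IdentityConverter : IValueConverter
{
  // Pass the value through unchanged in both directions; this converter's
  // only job is to defeat the data binding "optimization" so the target
  // control refreshes from the source after an update.
  public object Convert(object value, Type targetType,
    object parameter, CultureInfo culture)
  {
    return value;
  }

  public object ConvertBack(object value, Type targetType,
    object parameter, CultureInfo culture)
  {
    return value;
  }
}
```

With the converter declared as a resource, it can be attached in XAML along the lines of Text="{Binding Path=Data, Converter={StaticResource identity}}", after which typing a lower-case value into the TextBox and tabbing away shows the upper-case result.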

“You are killing me. I wrote a rather scathing review of your Professional Business Objects with C# on Amazon.com and on my own blog. However, recently I read a transcription of an ARCast with Ron Jacobs where you talked about business objects. I believe I agreed with everything you said. What are you trying to do to me?

I am just a regular shmoe developer, who preaches when listened to about the joys and benefits of OO design for the common business application. I feel too many develop every application like it is just babysitting a database. Every object’s purpose is for the CRUD of data in a table. I have developed great disdain for companies, development teams, and senior developers who perpetuate this problem. I felt Expert C# 2005 Business Object perpetuates this same kind of design, thus the 3 star rating on Amazon for your book.

In the ARCast you mentioned a new book coming out. I am hoping it is the book I have been looking for. If I wrote it myself it would be titled something like, “Business Objects in the Real World.” It would address the problems of data-centric design and how some objects truly are just for managing data and others conduct a business need explained by an expert. These two objects would be pretty different and possibly even use a naming convention to explicitly differentiate the two. For example I don’t want my “true” business objects with getters, setters, isDirty flags or anything else that might make them invalid and popping runtime errors when trying to conduct their business.

Anyway, I could ramble on (It’s my nature). However, I just want to drop you a line and say there is a real disconnect between us, but at the same time, I wanted to show everyone at my office what you were saying in your interview. You were backing me up! However, just months ago I was using your writings to explain what is wrong with software development. We seem to be on the same page, or close anyway, but maybe coming to different conclusions. I guess that’s what is bothering me.”

The following is my response, which I thought I’d share here on my blog because I ended up rambling on more than I’d planned, and I thought it might be interesting/useful to someone:

I would guess that the disconnect may flow from our experiences during our careers - what we've seen work and not work over time.

For my part, I've become very pragmatic. The greatest ideas in the world tend to fall flat in real life because people don't get them, or they have too high a complexity or cost barrier to gain their benefit. For a very long time OO itself fit this category. I remember exploring OO concepts in the late 80's and it was all a joke. The costs were ridiculously high, and the benefits entirely theoretical.

Get into the mid-90's and components show up, making some elements of OO actually useful in real life. But even then RAD was so powerful that the productivity barrier/differential between OO and RAD was ridiculously high. I spent untold amounts of time and effort trying to reconcile these two to allow the use of OO and RAD both - but with limited success. The tools and technologies simply didn't support both concepts - at least not without writing your own frameworks for everything and ignoring all the pre-existing (and established) RAD tools in existence.

Fortunately by this time I'd established both good contacts and a good reputation within key Microsoft product teams. Much of the data binding support in .NET (Windows Forms at least) is influenced by my constant pressure to treat objects as peers to recordset/resultset/dataset constructs. I continue to apply this pressure, because things are still not perfect - and with WPF they have temporarily slid backwards somewhat. But I know people on that team too, and I think they'll improve things as time goes on.

In the meantime Java popularizes the idea of ORM - but solutions exist for only the most basic scenarios - mapping table data into entity objects. While they claim to address the impedance mismatch problem, they really don't, because they aren't mapping into real OO designs, but rather into data-centric object models. Your disdain for today’s ORM tools must be boundless :)

For better or worse, Microsoft is following that lead in the next version of ADO.NET - and I don't totally blame them; a real solution is complex enough that it is hard to envision, much less implement. However, here too I hold out hope, because the current crop of "ORM" tools are starting to create nice enough entity objects that it may become possible to envision a tool that maps from entity objects to OO objects using a metadata scheme between the two. This is an area I've been spending some time on of late, and I think there's some good potential here.

Through all this, I've been working primarily with mainstream developers (“Mort”). Developers who do this as a job, not as an all-consuming passion. Developers who want to go home to their families, their softball games, their real lives. Who don't want to master "patterns" and "philosophies" like Agile or TDD; but rather they just want to do their job with a set of tools that help them do the right thing.

I embrace that. This makes me diametrically opposed to the worldviews of a number of my peers who would prefer that the tools do less, so as to raise the bar and drive the mainstream developers out of the industry entirely. But that, imo, is silly. I want mainstream developers to have access to frameworks and tools that help guide them toward doing the right thing - even if they don't take the time to understand the philosophy and patterns at work behind the scenes.

I don't remember when I did the ARCast interview, but I was either referring to the 2005 editions of my business objects books which came out in April/May, or to the ebook I'm finishing now, which covers version 2.1 of my CSLA .NET framework. Odds are it is the former, and it isn't the book you are looking for - though you might enjoy Chapter 6.

In general I think you can take a couple approaches to thinking about objects.

One approach, and I think the right one, is to realize that all objects are behavior-driven and have a single responsibility. Sometimes that responsibility is to merely act as a data container (DTO or entity object). Other times it is to act as a rich binding source that implements validation, authorization and other behaviors necessary to enable the use of RAD tools. Yet other times it is to implement pure, non-interactive business behavior (though this last area is being partially overrun by workflow technologies like WF).

Another way to think about objects is to say there are different kinds of object, with different design techniques for each. So entity objects are designed quasi-relationally, process objects are designed somewhat like a workflow, etc. I personally think this is a false abstraction that misses the underlying truth, which is that all objects must be designed around responsibility and behavior as they fit into a use case and architecture.

But sticking with the pure responsibility/behavior concept, CSLA .NET helps address a gaping hole. ORM tools (and similar tools) help create entity objects. Workflow is eroding the need for pure process objects. But there remains this need for rich objects that support a RAD development experience for interactive apps. And CSLA .NET helps fill this gap by making it easier for a developer to create objects that implement rich business behaviors and also directly support Windows Forms, Web Forms and WPF interfaces – leveraging the existing RAD capabilities provided by .NET and Visual Studio.

Whether a developer (mis)uses CSLA .NET to create data-centric objects, or follows my advice and creates responsibility-driven objects is really their choice. But either way, I think good support for the RAD capabilities of the environment is key to attaining high levels of productivity when building interactive applications.

In November Dunn Training held the first ever official CSLA .NET three day class. It was a smashing success, and resulted in a lot of great feedback and comments. Here are a couple quotes from attendees of the November class:

"At first, I was not sold on the need for CSLA. After attending this class and seeing the examples and proof, I'm on the bandwagon. This class proved the usefulness of CSLA and sold me on giving up writing all the plumbing myself."

"Miguel and Mark have provided one of the best ways to get up to speed using CSLA."

"The best career enhancing training investment I have made in the last 10 years. Rocky is lucky to have the DUNN team doing this training. Great stuff - solid, professional and accurate."

For my part, I want to thank everyone who contributed to the CSLA .NET forums, and to the CSLAcontrib project. The time and energy you have all put in over the past few months has been a great help to the CSLA .NET community, and I know there are many people out there who are grateful for your efforts!

Most importantly though, I want to thank all the users of CSLA .NET and everyone who has purchased copies of my books. At the end of the year I received numerous emails thanking me for creating the framework (and I appreciate that), but I seriously want to thank all of you for making this a vibrant community. CSLA .NET is one of the most widely used development frameworks for .NET, and that is because each of you has taken the time to learn and use the framework. Thank you!

For me 2006 was a year of change. Starting with CSLA .NET 2.0 I've been viewing CSLA .NET as not just an offshoot of my books, but as a framework in its own right. Of course many people have been treating it that way for years now, but I hope it has been helpful to have me treat point releases a bit more formally over the past number of months.

This extends to version 2.1, which represents an even larger change for me. With version 2.1 I'm releasing my first self-published ebook to cover the changes. This ebook is not a standalone book, rather it is best thought of as a "sequel" to the 2005 book. However, it is well over 150 pages and covers both the changes to the framework itself, as well as how to use the changes in your application development. The ebook is undergoing technical review. That and the editing process should take 2-3 weeks, so the ebook will be available later this month.

Looking at the rest of 2007 it is clear that I'll be spending a lot of time around .NET 3.0 and 3.5.

I'll be merging the WcfChannel into CSLA .NET itself, as well as implementing support for the DataContract/DataMember concepts. This, possibly coupled with one WPF interface implementation for collections, will comprise CSLA .NET 3.0.

It is not yet clear to me what changes will occur due to .NET 3.5, but I expect them to be more extensive. Some of the new C#/VB language features, such as extension methods and lambda expressions, have the potential to radically change the way we think about interacting with objects and fields. When you can add arbitrary methods to any type (even sealed types like String) many interesting options become available.

Then there's the impact of LINQ itself, and integration with the ADO.NET Entity Framework in one manner or another.

ADO EF appears, at least on the surface, to be YAORM (yet another ORM). If that continues to be true, then it is a great way to get table data into data entities, but it doesn't really address mapping the data into objects designed around use cases and responsibility. If you search this forum for discussions on nHibernate you'll quickly see how ADO EF might fit into the CSLA .NET worldview just like nHibernate does today: as a powerful replacement for basic ADO.NET and/or the DAAB.

LINQ is potentially more interesting, yet more challenging. It allows you to run select queries across collections. At first glance you might think this eliminates the need for things like SortedBindingList or FilteredBindingList. I’m not sure that’s true though, because the result of any LINQ query is an IEnumerable&lt;T&gt;. This is the most basic type of sequence in .NET; so basic that the result must often be converted to a more capable list type.

Certainly when you start thinking about n-level undo this becomes problematic. BusinessBase (BB) and BusinessListBase (BLB) work together to implement the undo capabilities provided by CSLA .NET. Running a LINQ query across a BLB results in an IEnumerable<T>, where T is your BB-derived child type. At this point you’ve lost all n-level undo support, and data binding (Windows Forms, and any WPF grid) won’t work right either.
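As a hedged sketch of the problem (Customer here is a hypothetical BusinessBase-derived type, and customers a BusinessListBase-derived collection):

```csharp
// customers is a BusinessListBase-derived collection of
// BusinessBase-derived Customer objects.
var result = from c in customers
             where c.IsActive
             select c;

// result is IEnumerable<Customer>: the BLB behaviors (n-level undo,
// rich data binding support) are no longer available on the query result.

// Converting regains basic list semantics, but produces a plain
// List<Customer>, not a BLB - the undo and binding plumbing is still gone.
List<Customer> list = result.ToList();
```

This is why, at least for now, the query result would need to be copied back into (or wrapped by) a CSLA-style collection before it could serve as a first-class binding source.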

So at the moment, I’m looking at LINQ being most useful in the Data Access Layer, along with ADO EF, but time will tell.

The point of all this rambling is this: I didn’t rush CSLA .NET 1.0 or 2.0. They came out when I felt I had a good understanding of the issues I wanted to address in .NET 1.0 and 2.0 respectively. And when I felt I had meaningful solutions or answers to those issues. I’m treating .NET 3.5 (and presumably CSLA .NET 3.5) the same way. I won’t rush CSLA .NET to meet an arbitrary deadline, and certainly not to match Microsoft’s release of .NET 3.5 itself. There’s no point coming out with a version of CSLA .NET that misses the mark, or that provides poor solutions to key issues.

So in 2007 I’ll most certainly be releasing the version 2.1 ebook and CSLA .NET 3.0 (probably with another small ebook). Given that Microsoft’s vague plans are to have .NET 3.5 out near the end of 2007, I don’t expect CSLA .NET 3.5 to be done until sometime in 2008; but you can expect to see beta versions and/or my experiments around .NET 3.5 as the year goes on.

Of course I’ll be doing other things beyond CSLA .NET in 2007. I’m lined up to speak at the SD West and VS Live San Francisco conferences in March. I’m speaking in Denver and Boulder later in January, and I’ll be doing other speaking around the country and/or world as the year goes on. Click here for the page where I maintain a list of my current speaking engagements.

To close, thank you all for your support of the CSLA .NET community, and for your kind words over the past many months. I wish you all the best in 2007.