I just got back from a week’s vacation/holiday in Great Britain and I feel very refreshed.

And that’s good, given that just before going to the UK I wrapped up the first draft of Chapter 1 in the new book Billy Hollis and I are writing. As you have probably gathered by now, this book uses DataSet objects rather than my preferred use of business objects.

I wanted to write a book using the DataSet because I put a lot of time and energy into lobbying Microsoft to make certain enhancements to the way the objects work and how Visual Studio works with them. Specifically I wanted a way to use DataSet objects as a business layer – both in 2-tier and n-tier scenarios.

Also, I wanted to write a book using Windows Forms rather than the web. This reflects my bias of course, but also reflects the reality that intelligent/smart client development is making a serious comeback as businesses realize that deployment is no longer the issue it was with COM and that development of a business application in Windows Forms is a lot less expensive than with the web.

The book is geared toward professional developers, so we assume the reader has a clue. The expectation is that if you are a professional business developer (a Mort) that uses VB6, Java, VB.NET, C# or whatever – that you’ll be able to jump in and be productive without us explaining the trivial stuff.

So Chapter 1 jumps in and creates the sample app to be used throughout the book. The chapter leverages all the features Microsoft has built into the new DataSet and its Windows Forms integration – thus showing the good, the bad and the ugly all at once.

Using partial classes you really can embed most of your validation and other logic into the DataTable objects. When data is changed at a column or row level you can act on that changed data. As you validate the data you can provide text indicating why a value is invalid.
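As a rough sketch of what this pattern looks like (the table and column names here are hypothetical, standing in for whatever a designer-generated typed DataSet would produce), a partial class can override the DataTable's OnColumnChanging method and attach error text to the row:

```csharp
using System.Data;

// Hypothetical partial class extending a generated typed DataTable
// named CustomerDataTable; the class and column names are illustrative.
public partial class CustomerDataTable
{
    // ADO.NET calls this before a proposed column value is committed,
    // which makes it a natural hook for column-level business rules.
    protected override void OnColumnChanging(DataColumnChangeEventArgs e)
    {
        base.OnColumnChanging(e);

        if (e.Column.ColumnName == "Name")
        {
            string value = e.ProposedValue as string;
            if (string.IsNullOrEmpty(value))
                // Error text the UI can surface through ErrorProvider
                // or the DataGridView's error icons.
                e.Row.SetColumnError(e.Column, "Name is required");
            else
                e.Row.SetColumnError(e.Column, "");
        }
    }
}
```

Row-level rules work the same way via OnRowChanging, and the SetColumnError text is what (bugs permitting) flows back to the UI.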

The bad part at the moment is that there are bugs that prevent your error text from properly flowing back to the UI (through the ErrorProvider control or DataGridView) in all cases. In talking to the product team I believe that my issues with the ErrorProvider will be resolved, but that some of my DataGridView issues won’t be fixed (the problems may be a “feature” rather than a bug…). Fortunately I was able to figure out a (somewhat ugly) workaround to make the DataGridView actually work like it should.

The end result is that Chapter 1 shows how you can create a DataSet from a database, then write your business logic in each DataTable. Then you can create a basic Windows Forms UI with virtually no code. It is really impressive!!

But then there’s another issue. Each DataTable comes with a strongly-typed TableAdapter. The TableAdapter is a very nice object that handles all the I/O for the DataTable – understanding how to get the data, fill the DataTable and then update the DataTable into the database. Better still, it includes atomic methods to insert, update and delete rows of data directly – without the need for a DataTable at all. Very cool!

Unfortunately there are no hooks in the TableAdapter by which you can apply business logic when the Insert/Update/Delete methods are called. The end result is that any validation or other business logic is pushed into the UI. That’s terrible!! And yet that’s the way my Chapter 1 works at the moment…

This functionality obviously isn’t going to change in .NET or Visual Studio at this stage of the game, meaning that the TableAdapter is pretty useless as-is.

(to make it worse, the TableAdapter code is in the same physical file as the DataTable code, which makes n-tier implementations seriously hard)

Being a framework kind of guy, my answer to these issues is a framework. Basically, the DataTable is OK, but the TableAdapter needs to be hidden behind a more workable layer of code. What I’m working through at the moment is how much of that code is a framework and how much is created via code-generation (or by hand – ugh).

But what’s really frustrating is that Microsoft could have solved the entire issue by simply declaring and raising three events from their TableAdapter code so it was possible to apply business logic during the insert/update/delete operations… Grumble…
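To illustrate the kind of layer I mean (all names here are hypothetical — CustomersTableAdapter stands in for whatever the designer generates, and its method signatures are illustrative), a hand-written wrapper can supply exactly the hooks the TableAdapter lacks:

```csharp
using System;
using System.ComponentModel;

// Hypothetical wrapper hiding a generated TableAdapter behind a layer
// that raises events so business logic can run before any database I/O.
public class CustomerAdapter
{
    private CustomersTableAdapter _adapter = new CustomersTableAdapter();

    // The three hooks Microsoft could have declared on the
    // TableAdapter itself.
    public event EventHandler<CancelEventArgs> Inserting;
    public event EventHandler<CancelEventArgs> Updating;
    public event EventHandler<CancelEventArgs> Deleting;

    public int Insert(string name, string city)
    {
        CancelEventArgs e = new CancelEventArgs();
        if (Inserting != null) Inserting(this, e);
        if (e.Cancel)
            throw new InvalidOperationException(
                "Insert rejected by business logic");
        // Only reaches the database if no handler canceled.
        return _adapter.Insert(name, city);
    }

    // Update and Delete would follow the same raise-then-delegate pattern.
}
```

The UI (or a higher layer) subscribes to the events, so validation stays out of the forms; the open question is how much of this wrapper belongs in a framework base class versus code-gen.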

The major bright point out of all this is that I know business objects solve all these issues in a superior manner. Digging into the DataSet world merely broadens my understanding of how business objects make life better.

Though to be fair, the flip side is that creating simple forms to edit basic data in a grid is almost infinitely easier with a DataTable than with an object. Microsoft really nailed the trivial case with the new features - and that has its own value. While frustrating when trying to build interesting forms, the DataTable functionality does mean you can whip out the boring maintenance screens with just a few minutes' work each.

Objects on the other hand, make it comparatively easy to build interesting forms, but require more work than I'd like for building the boring maintenance screens...

Like any author who writes practical, pragmatic content I am constantly torn. Torn between showing how to write maintainable code vs fast code. Between distilling the essence of an idea vs showing a complete solution that might obscure that essence.

Just at the moment I'm building a demo application, including its database. Do I create the best database I can, hopefully showing good database design techniques and subsequently showing how to write an app against an "ideal" database? Or do I create the database to look more like the ones I see when I go to clients - so it will have good parts and some parts that are obviously ill-designed. This latter approach allows me to show how to write an app against what I believe to be a more realistic database.

I'm opting for the latter approach. Yet sitting here right now, I know that I'll get lots of emails (some angry) berating me for creating and/or using such a poor database in a demo. "Demos should show the right approach" and so forth. Of course if I were to use a more ideal database design I'd get comments at conferences (some angry) because my demo app "isn't realistic" and only works in "a perfect world".

See, authors can't win. All we can do is choose the sub-group from which we're going to get nasty emails...

But that's OK. The wide diversity of viewpoints in our industry is one of our collective strengths. Pragmatists vie against idealists, performance-hounds vie against those focused on maintainability. Somewhere in the middle is reality – the cold, hard reality that none of us have the time to write performant, maintainable code using perfect implementations of all best practices and known design patterns. Somewhere in the middle are those hard choices each of us makes to balance the cost of development vs the cost of licensing vs the cost of hardware vs the cost of maintenance vs the cost of performance.

I think ultimately that this is why computer books don’t sell in the numbers you’d expect. If there are 7+ million developers, why does a good selling computer book sell around 20,000 copies?

(The exceptions being end-user books and theory books. End-user books because end users just want the answers, and theory books because they often rise above the petty bickering of the real world. Every faction can interpret theory books to say what they like to hear, so everyone likes that kind of book. Theory books virtually always “reinforce” everyone’s different world views.)

I think this is the reason even “good selling” books do so poorly: only a subset or faction within the computer industry will agree with any given book. That faction tends to buy the book. Other factions might buy a few copies, but those readers get a bad taste in their mouths, so they don’t recommend or propagate the book. Instead they find other books that do agree with their world view.

Note that I’m not complaining. Not at all!

I’m merely observing that C# people won’t (as a general rule) buy a VB book, and Java people won’t buy a C# book. OO people won’t buy a book on DataSet usage, and people who love DataSets won’t waste money on an OO book. People who love the super-complex demos from Microsoft really hate books that use highly distilled examples, while people who just want the essence of a solution really hate highly complex examples (and thus the books that use them).

As I write the 2nd editions of my Business Objects books I’m simultaneously writing both VB and C# editions. Several people have asked whether it wouldn’t be better to interleave code samples, have both languages in the book or something so that I don’t produce two books. But publishers have tried this. And the reality is that C# people don’t like to buy books that contain any VB code (and vice versa). Mixed language books simply don’t sell as well as single-language books. And like it or not, the idea behind publishing books is to sell them – so we do what sells.

But that solution doesn’t work for realistic vs idealistic database designs – which is my current dilemma. You can’t really write a book twice – once with an ideal database and once with a more realistic one. Nor can you really double the size (and thus cost) of a book to fit both ideas into it at once.

So I’m settling for the only compromise I can find. Parts of my database are pretty well designed. Other tables are obviously very poor. Thus some of my forms are trivial to create because they come almost directly from the “ivory tower”, while other forms rely on more complex and less ideal techniques because they come from the ugly world of reality.

Just to put questions to rest and reassure anyone who is concerned, I am actively working on CSLA .NET 2.0 and still intend to release 2nd editions of both the Expert VB and C# Business Objects books next spring - probably March or April.

A few people have asked if this is still the case, since my web site post was a few months ago and I haven't updated it. But it turns out that the post remains pretty accurate and I really don't have much more to say :)

CSLA .NET 2.0 will use generics - primarily for collections. It will support the new databinding, primarily because the new databinding uses the same interfaces as today (yea!), with the exception that ASP.NET Web Forms databinding will require some UI helper classes because it sadly isn't as transparent as Windows Forms.

The big change will be that the DataPortal will support multiple transports (most likely including remoting, asmx and enterprise services). This sets the stage for transparent support of Indigo (now Windows Communication Foundation, WCF) when it comes out later. But it also allows choice between today's three primary RPC technologies in a very transparent manner.

The other big change from the .NET 1.x book versions is that CSLA .NET 2.0 and the books will cover the functionality of CSLA .NET version 1.5 and beyond. In other words, most of the functionality of 1.5 will roll forward, including the RuleManager, context transfer (for globalization), exception handling and so forth. The one big change here is that I don't intend to support in-place sorting of collections, instead opting for a sortable view approach more akin to the way the DataView works against a DataTable. This will be more powerful, simpler and (I think) faster.

I am also trying to preserve as much backward compatibility as possible while adding the new capabilities. For instance, I intend on keeping BusinessCollectionBase and adding a new BusinessListBase which uses generics. That said, there's no doubt that some changes to existing code will be required - I just don't know exactly what yet...

Of course my primary goal with the original books wasn't to promote CSLA .NET specifically as much as to demonstrate the process (and some specific solutions) of creating a framework in .NET to support distributed OO. That remains my goal, and I think the lack of earth-shattering change helps with this. A good framework shields business applications from changes in the underlying platform. In .NET 2.0 data binding changes, but we largely don't care. Generics get added, and we can use or ignore them without penalty. Indigo will show up and we will be shielded from its changes. ADO.NET 2.0 has some incompatibilities with 1.x and we are largely shielded from those changes. On the whole I am pretty pleased with how little change is actually required moving from .NET 1.x to 2.0.

The FCC has decided to allow phone companies to screw us consumers over just like cable companies do. They really should have gone the other way and forced cable to be more like DSL.

It will be interesting to see what my particular phone company does, as I've been with the same ISP for many years. Now I could be forced to switch to my phone company. Odds of them allowing static IP addresses are probably about as good as with cable - which is to say not good...

In short, the FCC probably just cost me several hundred dollars a year, either in buying a hosting service for all my domains and sites, or in buying my phone company's corporate level service, which is a lot more expensive than my current ISP.

If it comes to that though, at least I'll get faster service. Our cable company provides a lot faster connectivity than my DSL, and if I'm going to have to pay super-high rates to get a static IP address I'd rather go with cable and get the faster speeds...

While the current government might be against raising taxes, they certainly seem to be happy to help corporations get more of our money...

Shocked? You shouldn’t be. The term “distributed objects” is most commonly used to refer to one particular type of n-tier implementation: the thin client model.

I discussed this model in a previous post, and you’ll note that I didn’t paint it in an overly favorable light. That’s because the model is a very poor one.

The idea of building a true object-oriented model on a server, where the objects never leave that server is absurd. The Presentation layer still needs all the data so it can be shown to the user and so the user can interact with it in some manner. This means that the “objects” in the middle must convert themselves into raw data for use by the Presentation layer.

And of course the Presentation layer needs to do something with the data. The ideal is that the Presentation layer has no logic at all, that it is just a pass-through between the user and the business objects. But the reality is that the Presentation layer ends up with some logic as well – if only to give the user a half-way decent experience. So the Presentation layer often needs to convert the raw data into some useful data structures or objects.

The end result with “distributed objects” is that there’s typically duplicated business logic (at least validation) between the Presentation and Business layers. The Presentation layer is also unnecessarily complicated by the need to put the data into some useful structure.

And the Business layer is complicated as well. Think about it. Your typical OO model includes a set of objects designed using OOD sitting on top of an ORM (object-relational mapping) layer. I typically call this the Data Access layer. That Data Access layer then interacts with the real Data layer.

But in a “distributed object” model, there’s the need to convert the objects’ data back into raw data – often quasi-relational or hierarchical – so it can be transferred efficiently to the Presentation layer. This is really a whole new logical layer very akin to the ORM layer, except that it maps between the Presentation layer’s data structures and the objects rather than between the Data layer’s structures and the objects.

What a mess!

Ted is absolutely right when he suggests that “distributed objects” should be discarded. If you are really stuck on having your business logic “centralized” on a server then service-orientation is a better approach. Using formalized message-based communication between the client application and your service-oriented (hence procedural, not object-oriented) server application is a better answer.

Note that the terminology changed radically! Now you are no longer building one application, but rather you are building at least two applications that happen to interact via messages. Your server doesn't pretend to be object-oriented, but rather is service-oriented - which is a code phrase for procedural programming. This is a totally different mindset from “distributed objects”, but it is far better.

Of course another model is to use mobile objects or mobile agents. This is the model promoted in my Business Objects books and enabled by CSLA .NET. In the mobile object model your Business layer exists on both the client machine (or web server) and application server. The objects physically move between the two machines – running on the client when user interaction is required and running on the application server to interact with the database.

The mobile object model allows you to continue to build a single application (rather than 2+ applications with SO), but overcomes the nasty limitations of the “distributed object” model.