Every now and again I blog about the fact that Magenic needs good developers (.NET, SQL, Sharepoint, BizTalk, CSLA .NET) in San Francisco, Minneapolis, Chicago, Atlanta and Boston. This continues to be true, and if you are at all interested, read on!

My colleague, Brant Estes, put together a little web site to try and seduce you into applying for work at Magenic. More fun, though, is the broad-reaching tech quiz he built to illustrate the kind of knowledge we expect people to have before we hire them.

I'll be brutally honest, and say that I scored 84%, which entitles me to keep working at Magenic (whew!). I figure that's not too bad, given that the quiz covers areas (like HTML) that I try to avoid like the plague (and I didn't cheat and use Google while taking the test :) ). Brant tells me that passing is 70%.

Whether you are interested in working at Magenic or not, the quiz is a fun challenge, so feel free to take a look (just give yourself around 10-15 minutes to go through it).

I'm busy getting my work and home life in order, as I head off to Barcelona with my oldest son for a week. I'm speaking at the Barcelona .NET user group on Wednesday, 21 November, so if you are in the area stop by and say hello! Otherwise, we'll be around the Barcelona area, enjoying Spain - I really love Spain!

Some of what I'm trying to get organized are the upcoming changes to CSLA .NET in version 3.5. Thematically, of course, the focus is on support for LINQ, primarily LINQ to Objects (against CSLA objects) and LINQ to SQL. As usual, however, I'm also tackling some items off the wish list, and making other changes that I think will make things better/easier/more productive - and some that are just fun for me :)

When talking about LINQ to Objects, there are a couple of things going on. First, Aaron Erickson, a fellow Magenicon, is incorporating his i4o concepts into CSLA .NET to give CSLA collections the ability to support indexed queries. Second, some basic plumbing needs to happen for LINQ to play well with CSLA objects, most notably ensuring that the result of a query against a CSLA collection (other than a projection) is a live, updatable view of the original collection.

By default LINQ just creates a new IEnumerable list containing the selected items. But this can be horribly unproductive, because a user might add a new item or remove an existing item from that list - which would have no impact at all on the original list. Aaron has devised an approach by which a query against a BusinessListBase or ReadOnlyListBase results in a richer list that is still connected to the original - much like SortedBindingList and FilteredBindingList are today. This way, when an item is removed from the view, it is also removed from the real list.
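You can see the default, disconnected behavior with any LINQ to Objects query - here against a plain List&lt;string&gt;, not a CSLA collection:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class DetachedQueryDemo
{
    // Removes an item from a query result, then reports the original
    // list's count - demonstrating that the two lists are unrelated.
    public static int Run()
    {
        var customers = new List<string> { "Alice", "Bob", "Ann" };

        // Where/ToList copies matching references into a brand-new list.
        var startsWithA = customers.Where(c => c.StartsWith("A")).ToList();

        startsWithA.Remove("Alice"); // affects only the query result...

        return customers.Count;      // ...so this is still 3
    }

    static void Main()
    {
        Console.WriteLine(Run()); // 3
    }
}
```

A connected view, by contrast, would remove "Alice" from `customers` too - that's the behavior Aaron's approach provides for CSLA lists.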

Back to the i4o work: allowing indexing of properties on child objects contained in a list can make for much faster queries. If you only run a single query against a list it might not be worth building an index. But if you run multiple queries to get different views of a list (projections or not), having an index on one or more child object properties can make a very big performance difference!
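i4o's actual implementation is more sophisticated, but the core economics can be sketched with a simple hash-based lookup: pay an O(n) build cost once, then answer each subsequent query with a hash lookup instead of a full scan.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class IndexDemo
{
    // Build the index once - a single O(n) pass over the list.
    public static ILookup<char, string> BuildIndex(IEnumerable<string> names)
    {
        return names.ToLookup(n => n[0]);
    }

    static void Main()
    {
        var names = new List<string> { "Alice", "Bob", "Ann", "Carol" };

        // Without an index, every query re-scans the whole list:
        var scanned = names.Where(n => n[0] == 'A').ToList();

        // With the index, each query is a hash lookup, so running
        // several different queries amortizes the one-time build cost.
        var index = BuildIndex(names);
        var indexed = index['A'].ToList();

        Console.WriteLine(scanned.Count == indexed.Count); // True
    }
}
```

One query doesn't justify the build pass; five or ten queries against the same list usually do - which is exactly the tradeoff described above.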

When talking about LINQ to SQL, the focus is really on data access. LINQ to SQL, in the CSLA world-view, is merely a replacement for ADO.NET. The same is true for the ADO.NET Entity Framework. Architecturally, both of these technologies simply replace the use of a raw DataReader or a DataSet/DataTable like you might use today. Alternately, you can say that they compete with existing tools like NHibernate or LLBLGen Pro - tools that people already use instead of ADO.NET.

In short, this means that your DataPortal_XYZ methods may make LINQ queries instead of using an ADO.NET SqlCommand. They may load the business object's fields from a resulting LINQ object rather than from a DataReader. In my testing thus far, I find that some of my DataPortal_Fetch() methods shrink by nearly 50% by using LINQ. That's pretty cool!
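A fetch in this style might look something like the following sketch - MyDataContext, the Customers table and the field names are all made up, and SingleCriteria is used loosely; the point is the shape of the code:

```csharp
// Sketch only - a query expression replaces the SqlCommand,
// parameter setup and DataReader loop of the ADO.NET version.
private void DataPortal_Fetch(SingleCriteria<Customer, int> criteria)
{
    using (var ctx = new MyDataContext())
    {
        var data = ctx.Customers.Single(c => c.Id == criteria.Value);

        // Load the business object's fields from the LINQ result
        // instead of from dr.GetInt32()/dr.GetString() calls.
        _id = data.Id;
        _name = data.Name;
    }
}
```

Compared to opening a connection, building a command, adding parameters, executing a reader and walking columns by ordinal, it's easy to see where the ~50% shrinkage comes from.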

Of course nothing is faster than using a raw DataReader. LINQ loads its objects using a DataReader too - so using LINQ does incur overhead that you don't suffer when using raw ADO.NET. However, many applications aren't performance-bound as much as they are maintenance-bound. That is to say, it is often more important to improve maintainability and reduce coding than it is to eke out that last bit of performance.

To that end, I'm working on enhancing and extending DataMapper to be more powerful and more efficient. Since NHibernate, LINQ and the ADO.NET EF all return data transfer objects or entity objects, it becomes important to be able to easily copy the data from those objects into the fields of your business objects - and that has to happen without incurring too much overhead. You can do the copies by hand with code that copies each DTO property into a field, which is fast, but is a lot of code to write (and debug, and test). If DataMapper can do all that in a single line of code that's awesome, but the trick is to avoid incurring too much overhead from reflection or other dynamic mapping schemes.
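The core idea - and the source of the overhead concern - is visible in a naive reflection-based mapper. DataMapper itself is richer (ignore lists, explicit maps, and so on); this is just the general shape:

```csharp
using System;

class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

class CustomerFields
{
    public int Id { get; set; }
    public string Name { get; set; }
}

static class SimpleMapper
{
    // Copy every readable source property to a same-named writable
    // target property. Each GetProperty/GetValue/SetValue call goes
    // through reflection - exactly the overhead worth minimizing.
    public static void Map(object source, object target)
    {
        foreach (var sp in source.GetType().GetProperties())
        {
            var tp = target.GetType().GetProperty(sp.Name);
            if (tp != null && tp.CanWrite)
                tp.SetValue(target, sp.GetValue(source, null), null);
        }
    }

    static void Main()
    {
        var dto = new CustomerDto { Id = 42, Name = "Acme" };
        var fields = new CustomerFields();

        SimpleMapper.Map(dto, fields);    // one line instead of a copy per field

        Console.WriteLine(fields.Name);   // Acme
    }
}
```

The hand-written per-field version avoids all of that reflection cost, which is why the efficiency work matters when one line of mapping replaces it.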

One cool side-effect of this DataMapper work, is that it will work for LINQ and EF, but also for XML services and anything else where data comes back in some sort of DTO/entity object construct.

There are also some issues with LINQ to SQL and context. Unfortunately, LINQ's context object wasn't designed to support n-tier deployments, and doesn't work well with any n-tier model, including CSLA. This means that LINQ doesn't/can't keep track of things like whether an object is new/old or marked for deletion. On the upside, CSLA already does all that. On the downside, it makes doing inserts/updates/deletes with LINQ to SQL more complex than if you used LINQ in the simpler 2-tier world for which it was designed. I hope to come up with some ways to bridge this gap a little so as to preserve some of the simpler LINQ syntax for database updates, but I'm not overly confident...
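To make the gap concrete, here's a rough sketch of an n-tier update - MyDataContext and CustomerData are hypothetical, and the attach behavior should be verified against your LINQ to SQL version:

```csharp
// Sketch - the context that originally fetched the row is long gone
// by the time the update runs, so the entity must be re-attached to
// a brand-new context.
protected override void DataPortal_Update()
{
    using (var ctx = new MyDataContext())
    {
        var data = new CustomerData { Id = _id, Name = _name };

        // Attach "as modified" so SubmitChanges generates an UPDATE.
        // Note this generally requires a timestamp/version column (or
        // relaxed UpdateCheck settings) on the table - part of the
        // extra complexity n-tier imposes on LINQ to SQL.
        ctx.Customers.Attach(data, true);
        ctx.SubmitChanges();
    }
}
```

In the 2-tier world the original context would still be alive and tracking changes, and none of this re-attach ceremony would be needed.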

I'm also doing a bunch of work to minimize and standardize the coding required to build a business object. Each release of CSLA has tended to standardize the code within a business object more and more. Sometimes this also shrinks the code, though not always. At the moment I'm incorporating some methods and techniques to shrink and standardize how properties are implemented, and how child objects (collections or single objects) are managed by the parent.

By borrowing some ideas from Microsoft's DependencyProperty concept, I've been able to shrink the typical property implementation by about 35% - from 14 lines to 9:
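The new style looks roughly like this - reconstructed here from the CSLA 3.5 managed-property syntax, so treat the exact signatures as approximate:

```csharp
private static PropertyInfo<string> NameProperty =
    RegisterProperty<string>(typeof(Customer), new PropertyInfo<string>("Name"));

public string Name
{
    get { return GetProperty<string>(NameProperty); }
    set { SetProperty<string>(NameProperty, value); }
}
```

The get/set blocks no longer need the CanReadProperty/CanWriteProperty checks, the field comparison or the PropertyHasChanged call - the base class handles all of that through the registered PropertyInfo.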

In this case the implementation is more similar to a DependencyProperty, in that BusinessBase really does contain and manage the child object. There's value to that though, because it means that BusinessBase now automatically handles IsValid and IsDirty so you don't need to override them. The next step is to make BusinessBase automatically handle PropertyChanged/ListChanged/CollectionChanged events from the child objects and echo those as a PropertyChanged event from the parent. Along with that I'll also make sure that the child's Parent property is automatically set. This should eliminate virtually all the easy-to-forget code related to child objects - resulting in a lot less code and much more maintainable parent objects.
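The event hookup that BusinessBase will automate is easy to sketch by hand - and just as easy to forget, which is the point. (The class names here are illustrative, not CSLA types.)

```csharp
using System;
using System.ComponentModel;

class Child : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            var h = PropertyChanged;
            if (h != null) h(this, new PropertyChangedEventArgs("Name"));
        }
    }
}

class Parent : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;
    public Child Child { get; private set; }

    public Parent(Child child)
    {
        Child = child;
        // The easy-to-forget hookup: echo the child's change as a
        // PropertyChanged on the parent so data binding refreshes.
        child.PropertyChanged += (s, e) =>
        {
            var h = PropertyChanged;
            if (h != null) h(this, new PropertyChangedEventArgs("Child"));
        };
    }
}

static class EchoDemo
{
    static void Main()
    {
        var parent = new Parent(new Child());
        parent.PropertyChanged +=
            (s, e) => Console.WriteLine("parent changed: " + e.PropertyName);
        parent.Child.Name = "junior"; // prints: parent changed: Child
    }
}
```

Once BusinessBase wires this up itself (along with setting the child's Parent reference), that whole constructor block of boilerplate disappears from parent objects.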

As I noted earlier, there are a lot of things on the wish list, and I'll hit some of those as time permits.

One item that I've done in VB and need to port to C# is changing authorization to call a pluggable IsInRole() provider method, allowing people to alter the algorithm used to determine if the current user is in a specific role. The default continues to do what it has always done by calling principal.IsInRole(), but for some advanced scenarios this extensibility point is very valuable.
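The pattern is a simple replaceable delegate - the real CSLA extensibility point differs in its details, but this shows the idea:

```csharp
using System;
using System.Security.Principal;

static class RoleCheck
{
    // The pluggable check. The default just delegates to
    // principal.IsInRole(), matching today's behavior.
    public static Func<IPrincipal, string, bool> IsInRole =
        (principal, role) => principal.IsInRole(role);

    static void Main()
    {
        var user = new GenericPrincipal(
            new GenericIdentity("kim"), new[] { "Admin" });

        Console.WriteLine(IsInRole(user, "Admin"));    // True - default algorithm

        // An advanced scenario: swap in a custom algorithm, here
        // granting everyone an implicit "Everyone" role.
        IsInRole = (p, role) => role == "Everyone" || p.IsInRole(role);

        Console.WriteLine(IsInRole(user, "Everyone")); // True - custom algorithm
    }
}
```

Business object authorization code keeps calling the one check; only the application decides what "in role" actually means.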

Beyond that I'll hit items on the wish list given time and energy.

However, I need to have all major work done on 3.5 by the end of January 2008, because I'll start working on Expert 2008 Business Objects, the third edition of the Business Objects book for .NET, in January. The tentative plan is to have the book out around the end of June, which will be a tall order given that it will probably expand by 300 pages or so by the time I'm done covering .NET 3.0 and 3.5...

Now I must do some packing, and get into the mindset required to survive 11 hours in the air - they just don't make airplane seats for people who are 6'5" tall.

I always encourage people to go with 2-tier deployments if possible, because more tiers increases complexity and cost.

But it is important to recognize when the value of 3-tier outweighs that complexity/cost. The primary three motivators are:

Security – using an app server allows you to shift the database credentials from clients to the server, adding security because a would-be hacker (or savvy end-user) would need to hack into the app server to get those credentials.

Network infrastructure limitations – sometimes a 3-tier solution can help offload processing from CPU-bound or memory-bound machines, or it can help minimize network traffic over slow or congested links. Direct database communication is a chatty dialog, and using an app server allows a single blob of data to go across the slow link, and the chatty dialog to occur across a fast link, so 3-tier can result in a net win for performance if the client-to-server link is slow or congested.

Scalability – using an app server provides for database connection pooling. The trick here is that if you don’t need it, you’ll harm performance by having an app server, so this is only useful if database connections are taxing your database server. Given the state of modern database hardware/software, scalability isn’t a big deal for the vast majority of applications anymore.

The CSLA .NET data portal is of great value here, because it allows you to switch from 2-tier to 3-tier with a configuration change - no code changes required*. You can deploy in the cheaper and simpler 2-tier model to start with, and if one of these motivators later justifies the complexity of moving to 3-tier, you can make that move with relative ease.
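The switch itself is just an appSettings change in the client's config file - something like this (the key names are per CSLA 3.x as I recall them; verify against your CSLA version):

```xml
<appSettings>
  <!-- Omit these settings (or set the proxy to "Local") for 2-tier. -->
  <add key="CslaDataPortalProxy"
       value="Csla.DataPortalClient.RemotingProxy, Csla" />
  <add key="CslaDataPortalUrl"
       value="http://myserver/myapp/RemotingPortal.rem" />
</appSettings>
```

No business object or UI code references these settings directly - the data portal reads them at runtime, which is what makes the move a deployment decision rather than a coding one.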

* disclaimer: if you plan to make such a move, I strongly suggest testing in 3-tier mode during development. While the data portal makes this 2-tier to 3-tier switch seamless, it only works if the code in the business objects follows all the rules around serialization and layer separation. Testing in a 3-tier mode helps ensure that none of those rules get accidentally broken during development.

I was recently asked whether CSLA incorporates thread synchronization code to marshal calls or events back onto the UI thread when necessary. The short answer is no, but the question deserves a bit more discussion to understand why it isn't the business layer's job to handle such details.

The responsibility for cross-thread issues resides with the layer that started the multi-threading.

So if the business layer internally utilizes multi-threading, then that layer must abstract the concept from other layers.

But if the UI layer utilizes multi-threading (which is more common), then it is the UI layer's job to abstract that concept.

It is unrealistic to build a reusable business layer around one type of multi-threading model and expect it to work in other scenarios. Were you to use Windows Forms components for thread synchronization, you'd be out of luck in the ASP.NET world, for example.

So CSLA does nothing in this regard, at the business object level anyway. Nor does a DataSet, or the entity objects from ADO.NET EF, etc. It isn't the business/entity layer's problem.

CSLA does do some multi-threading, specifically in the Csla.Wpf.CslaDataProvider control, because it supports the IsAsynchronous property. But even there, WPF data binding does the abstraction, so there are no threading issues in either the business or UI layers.

I was recently asked how to get the JSON serializer used for AJAX programming to serialize a CSLA .NET business object.

From an architectural perspective, it is better to view an AJAX (or Silverlight) client as a totally separate application. That code is running in an untrusted and terribly hackable location (the user’s browser – where they could have created a custom browser to do anything to your client-side code – my guess is that there’s a whole class of hacks coming that most people haven’t even thought about yet…)

So you can make a strong argument that objects from the business layer should never be directly serialized to/from an untrusted client (code in the browser). Instead, you should treat this as a service boundary – a semantic trust boundary between your application on the server, and the untrusted application running in the browser.

What that means in terms of coding, is that you don’t want to do some sort of field-level serialization of your objects from the server. That would be terrible in an untrusted setting. To put it another way - the BinaryFormatter, NetDataContractSerializer or any other field-level serializer shouldn't be used for this purpose.

If anything, you want to do property-level serialization (like the XmlSerializer or DataContractSerializer or JSON serializer). But really you don’t want to do this either, because you don’t want to couple your service interface to your internal implementation that tightly. This is, fundamentally, the same issue I discuss in Chapter 11 of Expert 2005 Business Objects as I talk about SOA.

Instead, what you really want to do (from an architectural purity perspective anyway) is define a service contract for use by your AJAX/Silverlight client. A service contract has two parts: operations and data. The data part is typically defined using formal data transfer objects (DTOs). You can then design your client app to work against this service contract.

In your server app’s UI layer (the aspx pages and webmethods) you translate between the external contract representation of data and your internal usage of that data in business objects. In other words, you map from the business objects in/out of the DTOs that flow to/from the client app. The JSON serializer (or other property-level serializers) can then be used to do the serialization of these DTOs - which is what those serializers are designed to do.
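The translation boils down to a pair of mapping methods in the UI layer. All the type names here are illustrative, not CSLA types:

```csharp
using System;

// The external contract: a deliberately dumb DTO that a JSON (or
// other property-level) serializer can handle safely.
class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Stand-in for the real business object - its internals stay private.
class Customer
{
    public int Id { get; private set; }
    public string Name { get; private set; }
    public Customer(int id, string name) { Id = id; Name = name; }
    public void Rename(string name) { Name = name; }
}

static class CustomerTranslator
{
    // UI-layer mapping: business object out to the contract...
    public static CustomerDto ToDto(Customer c)
    {
        return new CustomerDto { Id = c.Id, Name = c.Name };
    }

    // ...and contract data back in, through the object's own methods
    // so its business rules still apply to untrusted input.
    public static void FromDto(CustomerDto dto, Customer c)
    {
        c.Rename(dto.Name);
    }

    static void Main()
    {
        var customer = new Customer(7, "Acme");
        var dto = ToDto(customer);        // this is what gets JSON serialized
        dto.Name = "Acme Ltd";            // a change comes back from the client...
        FromDto(dto, customer);           // ...and is applied under our rules
        Console.WriteLine(customer.Name); // Acme Ltd
    }
}
```

The serializer only ever sees the DTO, so the business object's shape can change without breaking the client contract - and vice versa.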

This is a long-winded way of saying that you really don’t want to do JSON serialization directly against your objects. You want to JSON serialize some DTOs, and copy data in/out of those DTOs from your business objects – much the way the SOA stuff works in Chapter 11.