Polymorphic View

Tuesday, April 27, 2010

Visual Studio 2010 introduced a new way to do Web Deploy using a transformation mechanism. At first glance it can appear complex if you have never used XSLT transforms; however, it doesn't need to be complex or difficult, and you really don't need to know XSLT transforms to do this. There are a lot of blog posts about how you can write your own transforms or change the way the feature works in VS2010, but very few show how simply you can use the existing features. That is my focus in this post.

Simple way to change settings for Release in Web.config

First, make sure your Web.config has a dropdown arrow next to it. If it doesn't, simply right-click on your Web.config and select Add Config Transforms.

This will add two or more Web.config transform files under the Web.config:

Web.Debug.config

Web.Release.config

Others, if you have more configurations set up in Configuration Manager

If you double-click one of these files, you will be presented with a very blank and confusing place to start if you have never seen XSLT Transforms before. Do not worry if you haven’t. I will cover 3 simple cases to get you started using them and feeling comfortable with them, and you can grow and learn more about them with some links I will provide to do even more powerful things.

It is important to note that your Web.Config in your application is your base configuration details. The Config Transform files are used to change sections or individual settings from the base config. This is the mindset you need to have when changing things and deploying a new web.config.

How to get started using the Config Transform file to change settings for Production

The first case I will cover is the case where you need to completely replace a section in your Web.Config such as AppSettings.

To accomplish this simply write the following in your Config Transform file (such as Web.Release.config):
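The original post showed this as a screenshot. A minimal sketch of such a transform (the three keys are made up for illustration):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <!-- Replace the entire appSettings section from the base Web.config -->
  <appSettings xdt:Transform="Replace">
    <add key="SiteMode" value="Production" />
    <add key="CacheMinutes" value="60" />
    <add key="LogLevel" value="Error" />
  </appSettings>
</configuration>
```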

Notice that I have added the appSettings section to the Config Transform and added xdt:Transform="Replace" to it. This instructs the deployment to completely replace what I have in my appSettings in the original Web.config with this one, if I am deploying with the Release configuration (we will get to deploying in a bit).

As a result of the above setting, the Release deploy will contain an appSettings section with these 3 settings, even though my base config has completely different values in it.

What if you wanted to only change a setting value and not a complete section? How would you accomplish this?
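A transform along these lines does it (the connection string value is made up; "MyDbConnection" is the name used in this post):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Match the entry by its name attribute; set only the attributes listed -->
    <add name="MyDbConnection"
         connectionString="Data Source=ProdServer;Initial Catalog=MyDb;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```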

This instructs the Web.config transform that you wish to find a setting called "MyDbConnection" in the connectionStrings section of your Web.config, and change the attributes you are specifying. It will only change the attributes you write here; if there is an attribute you do not wish to overwrite, simply don't include it, and its original value will be copied over.

This is accomplished by the two statements:

xdt:Transform="SetAttributes": this instructs the Config Transform to replace or set the listed attributes on this setting from the base Web.config.

xdt:Locator="Match(name)": this instructs the Config Transform to match the setting based on the name attribute, in the exact location given. Meaning, it says: match the setting under connectionStrings whose name attribute is MyDbConnection, and replace or set any attributes I have listed here.

Note that above I am using the Transform "SetAttributes". If you had used xdt:Transform="Replace" instead, it would simply have replaced the whole setting matched by that name. So instead of setting only the attributes specified, it would replace the entire setting in the default Web.config with this one.

Great, now how do I add new settings without replacing a whole Section in my Web.Config?

Notice I added a new setting in the connectionStrings called "AddedConnection". By simply specifying xdt:Transform="Insert" on the setting, it will be added to the end of the connectionStrings from the default Web.config.
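A sketch of what that transform looks like (the connection string value is made up; "AddedConnection" is the name from this post):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Appended to connectionStrings; the base Web.config entries are kept -->
    <add name="AddedConnection"
         connectionString="Data Source=ReportServer;Initial Catalog=Reports;Integrated Security=True"
         xdt:Transform="Insert" />
  </connectionStrings>
</configuration>
```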

There are myriad options here. To keep it simple, I will select FileSystem as the publish method, but you could just as easily select Web Deploy or FTP, and it will handle all the details through your settings.

Once you click Publish, you can compare the two:

Original default Web.Config Values:

The Published version of the Web.Config values:

Great, now how do I find out how to do more advanced things?

The MSDN documentation is a great place to drill down on the details here, and Scott Hanselman did a great Mix 10 talk on this, which is a great place to start doing even more advanced things: here.

I hope this helped you get started doing deployments in at least a bit more structured way, utilizing this incredibly powerful mechanism. You can do a lot more than what I have shown you; you can even deploy T-SQL scripts to run against an environment in conjunction with publishing your site. Take a look at the links above to go a lot more in depth.

Monday, October 20, 2008

Recently Microsoft updated ASP.NET MVC to Beta status, and along with that came code changes. One of the changes, to IModelBinder and how it interacts with parameter binding, broke what I wrote in the last post (utilizing ASP.NET MVC as a JSON service provider, and coding the entire web site on the client side).

Upon loading the Beta bits, I realized my AJAX parameter function was looping infinitely. The reason is that the model binder defaults to trying to bind complex types in the action parameters to the form values being posted back; however, since I am not doing a form postback, it could not bind the complex type, and could not run the ActionExecuting on the AjaxParameterAttribute I had set.

So to fix this, I thought I would use the IModelBinder and re-do that section of my code using the new bits. Here is the result:
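The original code block did not survive here. A sketch of what such a binder can look like, assuming a JSON request body; the class name is an assumption, and the Beta-era MVC signatures are from memory (note the non-generic Deserialize overload may require the generic Deserialize&lt;T&gt; via reflection on .NET 3.5):

```csharp
using System.IO;
using System.Web.Mvc;
using System.Web.Script.Serialization;

// Deserializes the JSON request body into the action's parameter type.
public class JsonModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext,
                            ModelBindingContext bindingContext)
    {
        var request = controllerContext.HttpContext.Request;
        string json;
        using (var reader = new StreamReader(request.InputStream))
            json = reader.ReadToEnd();

        var serializer = new JavaScriptSerializer();
        // Target whatever type the action parameter declares
        return serializer.Deserialize(json, bindingContext.ModelType);
    }
}
```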

As you can see, I updated the example to be more real-world by caching the original server-side object to compare against the modified version:

I also added 2 new Javascript helpers to test Dates with the Client Side AJAX Templating engine:
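The helpers themselves were lost with the original code blocks. A sketch of what the two date helpers can look like (the function names are assumptions): one converts a Javascript Date to a display string for the template, the other parses the user's text back into a Date.

```javascript
// "convert": Date -> display string, e.g. 10/20/2008
function dateToDisplay(date) {
    return (date.getMonth() + 1) + "/" + date.getDate() + "/" + date.getFullYear();
}

// "convertBack": display string -> Date; returns null on unparseable input
function displayToDate(text) {
    var ms = Date.parse(text);
    return isNaN(ms) ? null : new Date(ms);
}
```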

And the HTML Code to test Live Binding Dates with the MVC Beta:

Before I change the Live Binding Date using the Microsoft AJAX Client Template:

After I merely type a different date, with no postback, it automatically converts the string back to the Javascript Date property on my model object (notice how the convert and convertBack live templates handle displaying and changing the Javascript object; I have complete control over it):

After post back, testing the IModelBinder to serialize the Javascript Object in JSON to a C# object:

This is just cool. Any way you slice it, ASP.NET MVC with the Microsoft AJAX client templating engine is simple, effective, and elegant. I fall more in love with these frameworks the more I play with them.

Also, in my opinion, using IModelBinder is a cleaner approach than my original solution. It just feels more separated out.

Thursday, October 9, 2008

Wow, that’s a really long title, but that is what this post is actually about, and why I think it is a great way to handle many web applications today.

I have recently been doing a lot more Javascript client-side applications and working a lot closer with JSON objects to build complex web pages. The problem I typically run into is getting my C# POCO objects translated to JSON objects on the client side, so I can work with them, change them, and then post them back to the server to be handled appropriately.

The basics of what I wanted: to work seamlessly with my model data and have it automatically converted to JSON or C# objects at the correct layer.

I have used ADO.NET Data Services and Entity Framework (with the EF POCO Adapter project by Jaroslaw Kowalski here, which, by the way, is a preview of how POCO will work in Entity Framework 2.0 and has absolutely changed my mindset on Entity Framework).

First, Some Background

Using ADO.NET Data Services allowed me to utilize jQuery as the controller and presentation layer, and the ADO.NET Data Service and Entity Framework as the model. However, if I wanted to split my domain-specific logic from my individual application's business logic on the server side, I would have some work to do. Suppose you already have a Service (business) layer on the server side that sets up your POCO objects and handles application-specific logic before pushing them down to the data layer (in my case the EFPocoAdapter), or that handles loading data in a very specific way; that would not be easy to do using ADO.NET Data Services.

I can enforce domain-specific rules by tying into the Adapter from EFPocoAdapter in the following way:

Example of such a POCO Object tied into EFPocoAdapter:
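The original example was lost. A sketch of what it can look like; IDomainEntity's shape and the User properties here are assumptions based on the description below:

```csharp
using System.Collections.Generic;

// Marker contract the domain layer checks on save
public interface IDomainEntity
{
    // Returns true when the entity passes its domain rules
    bool Validate(ICollection<string> errors);
}

// POCO entity mapped through the EFPocoAdapter
public class User : IDomainEntity
{
    public int Id { get; set; }
    public string UserName { get; set; }
    public string Email { get; set; }

    public bool Validate(ICollection<string> errors)
    {
        if (string.IsNullOrEmpty(UserName))
            errors.Add("UserName is required.");
        if (string.IsNullOrEmpty(Email))
            errors.Add("Email is required.");
        return errors.Count == 0;
    }
}
```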

As you can see, my User model object represents a user in my system. I tie that entity to IDomainEntity to specify that it requires domain validation; when the Adapter in my Domain layer saves this model, it will run validation rules against it to verify it is correct.

However, if I had Business Logic I wanted to impose in my Service layer that does not belong in my Domain Data Layer, I cannot easily use ADO.NET Data Services without some additional work.

So in essence I want to structure my layers as such: Model Layer => Domain Data Layer (EFPocoAdapter) => Service Layer (Business)

The above would represent my Model for the MVC Framework. By doing this, I can cleanly separate out each concern in its respective layers and maintain reusability and simplicity.

However, if ADO.NET Data Services cannot get me there easily, what do I do? I could use WCF and that has worked fine for me in the past, however, since I fell in love with the MVC Framework, I wanted to see how I can utilize that.

Alright, so what is this post really about?

With that bit of history out of the way, what is my intent here? I essentially want to utilize ASP.NET MVC to make a web service provider that returns my C# model objects in JSON format, and allows me to post a Javascript object back to the server and have it automatically converted back to a C# object to process on the server.

So in essence, architecturally speaking, the Javascript logic becomes the Presenter that calls to the Server-Side Controller, that will call the Service Layer to get the Model Data from the Domain Layer.

In a way, I want to seamlessly convert between C# objects and Javascript objects without doing any more work than necessary, and utilize some new features such as templating to get two-way binding to that Javascript object so that, again, I have little work to do.

So how did I do it? Well, with the help of Scott Hanselman in this post (here) and Omar AL Zabir in this post (here) I had enough tools to accomplish what I needed.

How did I do it?

First I needed a way to convert a C# model object to a Javascript object from MVC. I created the following ActionResult class to accomplish this:

Notice that depending on the content type, it will return JSON or XML. The standard JsonResult type in ASP.NET MVC requires that I know I want JSON. Instead, this utilizes a known mechanism in HTTP for specifying what you want: the content type. There is no need to re-invent the wheel here; the content type is the type the request is expecting.
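The class itself was lost with the original code blocks. A minimal sketch of such a result, assuming JavaScriptSerializer for the JSON side; AjaxResult is the name this post uses, the internals are my reconstruction:

```csharp
using System.Web.Mvc;
using System.Web.Script.Serialization;
using System.Xml.Serialization;

// ActionResult that honors the request's content type: XML when asked, else JSON
public class AjaxResult : ActionResult
{
    private readonly object _data;

    public AjaxResult(object data) { _data = data; }

    public override void ExecuteResult(ControllerContext context)
    {
        var request = context.HttpContext.Request;
        var response = context.HttpContext.Response;

        bool wantsXml = request.ContentType != null &&
                        request.ContentType.Contains("text/xml");

        if (wantsXml)
        {
            response.ContentType = "text/xml";
            new XmlSerializer(_data.GetType()).Serialize(response.Output, _data);
        }
        else
        {
            response.ContentType = "application/json";
            response.Write(new JavaScriptSerializer().Serialize(_data));
        }
    }
}
```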

Next I needed a way to convert a Javascript object back to a C# object in the parameter of an MVC action. I created the following ActionFilterAttribute to accomplish this:
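The attribute code was lost as well. A sketch of what it can look like; AjaxParameterAttribute is the name this post uses, and the property names and deserialization details are assumptions:

```csharp
using System;
using System.Web.Mvc;
using System.Web.Script.Serialization;

// Before the action runs, deserialize the named JSON request value
// into the declared parameter type.
public class AjaxParameterAttribute : ActionFilterAttribute
{
    public string ParameterName { get; set; }
    public Type ParameterType { get; set; }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        string json = filterContext.HttpContext.Request[ParameterName];
        if (!string.IsNullOrEmpty(json))
        {
            var serializer = new JavaScriptSerializer();
            // Non-generic overload assumed; on .NET 3.5 you may need
            // the generic Deserialize<T> invoked via reflection.
            filterContext.ActionParameters[ParameterName] =
                serializer.Deserialize(json, ParameterType);
        }
    }
}
```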

I then set up a test Controller to test this setup:
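The controller code did not survive; a sketch of such a controller, using the AjaxResult and AjaxParameterAttribute types described in this post. The service-layer abstraction (IUserService) is an assumption:

```csharp
using System.Web.Mvc;

public class UserController : Controller
{
    private readonly IUserService _service; // assumed service-layer abstraction

    public UserController(IUserService service) { _service = service; }

    // GET: returns the user list as JSON or XML based on the content type
    public ActionResult GetUserList()
    {
        return new AjaxResult(_service.GetUsers());
    }

    // POST: the filter converts the posted JSON into User[] before this runs
    [AjaxParameter(ParameterName = "users", ParameterType = typeof(User[]))]
    public ActionResult UploadUser(User[] users)
    {
        _service.SaveUsers(users);
        // Echo back a possibly modified model for the client to rebind to
        return new AjaxResult(new { success = true, model = users });
    }
}
```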

Notice that the GetUserList action returns a list of Users, a C# domain object, and depending on the content type from the client will return XML or JSON.

Next, notice that I mark up the "POST" action UploadUser to convert the User[] parameter from JSON format to a C# domain object, so that I can then use my Service layer to save it or do whatever I want with it. I then return a JSON response to the client, along with a possibly changed model object for the client to rebind to.

Now, I need a client side script to help me handle calling and tying everything together. For this, I included the following Javascript Libraries:

MicrosoftAjax.js

MicrosoftAjaxTemplates.js (you can find this at CodePlex under ASP.NET)

jQuery

JSON2 *the version changed by Rick Strahl* (you can use whatever you want, I just like this one)

A custom Helper jQuery Plugin to help make everything simpler :

This custom Javascript library will allow me to make my Javascript client code more concise and easier to maintain.

Next, let’s set up a View to test this scenario:

There is a lot going on here, but to put it simply: I am using MVC to route the user to this Test View, which loads the User data via a JSON endpoint request. Once that C# model object is transformed into a Javascript object, I use Microsoft templating to two-way bind it to my controls. As the user changes values, the Javascript object is automatically updated, just as it would be in WPF.

Once the user is finished, they can click the save button, and the client-side code will make an AJAX call passing that Javascript object back to the server MVC endpoint (a controller action). That action will automatically convert the JSON object to its equivalent C# model object, so that my controller can save it or do whatever it wants, and then respond to the AJAX request with any errors or messages, as well as a possibly modified model object for my View to rebind to.

Phew, that may seem complicated, but it’s really very simple, and requires very little to accomplish once all of your code is in place.

Enough about what it is doing behind the scenes, show us results!

Upon browsing to my http://localhost/home I am greeted with the following:

As I change the values in the text box, notice that the Javascript object on the client is changing also (I have not posted back yet!):

After I post back:

I am testing that if some of my domain logic or application logic fails, I can get that information back to the client to fix. To test this, the AJAX post will be considered a failure (as you can see from the Test Controller code) if any value contains the word "fail". This lets me change my C# model object slightly in the controller and push it back to the client as a Javascript object for the controls to rebind to, so the user can fix any issues and repost if necessary.

Here is the Debug against the post so you can see how the Javascript object is transformed into a C# object:

As you can see, once set up with the AjaxResult, AjaxParameterAttribute, and the custom jQuery client-side helper plugin, it is a snap to build out AJAX-driven pages without having to worry about going from server-side objects to client-side objects. To me this brings us much closer to a Silverlight/Flash type application, especially considering the speed increases coming to Javascript on all major browsers.

Wednesday, September 3, 2008

I recently read an article on getting around how Entity Framework works when trying to fit it into the MVC Storefront app that Rob Conery did a video series on recently. Muhammad Mosa does an excellent job writing about the same experiences I faced when trying to mock up an architecturally sound design utilizing Entity Framework (swapping out LINQ to SQL).

In the article, Muhammad mentions 2 ways to get around the shortcomings of what EF can project client-side versus server-side, and how to map that into a Repository that flows with fluent filters as extension methods, like the MVC Storefront application series.

The first way he mentions is a Virtual Proxy pattern class that wraps the entity object and allows you to lazy-load its references, and the object itself, when you reference its properties.

The second way is to explicitly define a complex projection (the select new from a LINQ statement into another typed object, such as a data transfer object (DTO)) as a client-side query with the AsEnumerable method, pulling all of the data from the database into memory to do the complex shaping, but leaving its child references as IQueryable to lazy-load.
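A sketch of what that second approach looks like in code; the entity, DTO, and context names here are assumptions for illustration:

```csharp
// Requires System.Linq and System.Collections.Generic.
// Everything after AsEnumerable() runs as LINQ to Objects in memory,
// while the child collection stays queryable so it lazy-loads later.
public IList<CategoryDto> GetShapedCategories(MyEntities context)
{
    return context.Categories
        .AsEnumerable() // pull the categories into memory for complex shaping
        .Select(c => new CategoryDto
        {
            Name = c.Name,
            // child reference left as IQueryable to lazy-load on enumeration
            Products = c.Products.AsQueryable()
        })
        .ToList();
}
```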

Please refer to the above link to read Muhammad's article to get a clearer picture of what this means.

I decided to discuss these approaches in the comments, and realized how long the comment got, so I wanted to post it here for any readers to join in the discussion, so that hopefully, we can come up with a cleaner solution.

Entity Framework Architectural Discussion

The bad thing about the second way (the client-side evaluation), besides the mass amount of data you need to pull into memory, is that you are now segregating your methods into some, like GetCategory7(), with an AsEnumerable(), and others, like GetProducts(), with no AsEnumerable(). So you are segregating how you handle each pull from the data store based on which is the main object and which are its child references, which isn't all that bad if you have a repository per main-object call.

However, suppose you have a Product Repository: it would have to duplicate what you are pulling from your Category Repository, but with an AsEnumerable to correctly shape it.

This will make for duplicate logic in pulling data depending on if you need that entity as a child reference or base object.

Based on that alone, I would stick with the first option, which is basically a Virtual Proxy pattern on top of each entity object. The problem here is the mass amount of mock-up for each property, especially when you already went through the trouble of creating your model in the EDM. Now you have to maintain both the Entity Framework model and your wrapper proxy class every time something changes. Again, excessive duplicate work.

Unfortunately, I hate to say it, but a third way, just using Entity Framework as the Repository object and calling into each entity, is probably the least-work option, but it couples you completely to EF. You can then marry your business rules and logic to the partial class of each entity. This also allows you to utilize ADO.NET Data Services in a seamless way to expose your domain layer to multiple applications (which you can do with the first and second ways, but you will need to implement IUpdateable in your Repository, which can be a pain).

There is no "ultimate" way to do this at this time. But to me, saving work up front, especially when it could all change for v2, is probably the simpler way. Layer-wise, though, the proxy pattern approach is definitely more architecturally sound; the problem is you are back to building out DTO objects that match your entity model, basically regurgitating each property in your proxy DTO class.

I don't know, but for situations where I want to utilize ADO.NET Data Services, approach 3 seems the better route for now. I just hope that when POCO objects make their way in, I don't get burned by having to re-write much.

Essentially, the typical data layer model has you making a Repository Layer, a Service Layer (your business layer), and the application Layer (your MVP/MVC layer).

By utilizing EF at the Repository Layer, you completely couple yourself at that layer to EF, but gain the flexibility to easily change your model independently from your data schema (which is typically what the Repository does anyway).

I am testing out using the ADO.NET Data Service as the service layer, since you can inject logic before query calls and on updates/creates/etc. Unfortunately, to fully utilize my rules logic, it is simpler to push that into the partial class for each entity and throw an exception, which I can trap in the Data Service layer and respond with.

From there, the application's Model layer in an MVC application will have an application-specific business layer that simply consumes the ADO.NET Data Service to get the data and utilize the entities in the application. So rather than trying to make my domain layer with EF/ADO.NET Data Services enforce the business rules for everything, it just makes sure my domain rules are correct for my model, and each application handles the business rules that are correct for that specific application (since this model is meant for multiple applications to consume, it can't be everything to everyone, but it can make sure data is valid at the domain layer).

Hope this helps, this is just some of the things I have played around with. Maybe we can poke holes in both our ideas enough to come up with a cleaner solution.

Sunday, August 10, 2008

So recently I have been on this kick to learn Design Patterns in Ruby, and to figure out all kinds of neat ways to do the same design patterns in a dynamic language that is as versatile and elegant as Ruby. While playing around with the decorator pattern in Ruby, I started trying to think of slick ways to accomplish the pattern in C# 3.0.

In Ruby, there is a really slick (and somewhat dangerous) way to accomplish the decorator pattern (hold on guys, I am getting to the C# part in a bit; I am trying to explain how I got there):

Imagine a string class that holds a string, with a method called "write_line". Suppose for some reason you wanted to apply the decorator pattern to this, and decorate the string at runtime to make it output in various ways. In Ruby, you could tackle it the GoF (Gang of Four) way, or you could do the following:

Alias method Wrapper:
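The Ruby snippet was lost here. A sketch of the alias-method wrapper (class and method names follow the post's description; I return the string rather than printing it so the decoration is easy to see):

```ruby
class SimpleString
  def initialize(text)
    @text = text
  end

  def write_line
    @text
  end
end

# Reopen the class: alias the original method, then redefine it so the
# new version decorates the old one at runtime.
class SimpleString
  alias_method :plain_write_line, :write_line

  def write_line
    "** #{plain_write_line} **"
  end
end
```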

Decorate with Modules:
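This snippet was lost too. A sketch of decorating with modules: extending a single instance puts the module ahead of the class in the method lookup chain, so super reaches the original method (module and class names are my own):

```ruby
class SimpleString
  def initialize(text)
    @text = text
  end

  def write_line
    @text
  end
end

module TimeStamped
  def write_line
    "[stamped] #{super}"   # decorate, then fall through to the class method
  end
end

s = SimpleString.new("hello")
s.extend(TimeStamped)      # only this instance is decorated
```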

Ok, so since Ruby is so dynamic, it can edit the class instance on the fly and inject modules into the inheritance chain ahead of the class itself. You can utilize this to decorate your base functionality at runtime.

However, C#, since it is statically typed, really won't allow you to tackle it this way. I started thinking about some of the new features in C# 3.0, some of the things I have read about, and some of the things I have played around with using piping, filtering, and fluent interfaces (I first started learning about this in the MVC Storefront series by Rob Conery [I highly recommend watching this series, btw]). It occurred to me that although we can't accomplish the decorator pattern the Ruby way, we can accomplish it in a nice, readable way using piping and a fluent interface.

Take for example the standard Decorator Pattern typically seen in C# (using the above example in the classic GoF way):
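The original code block is missing. A sketch of the classic GoF version for this example; the names follow the post's wording, and a NumberingDecorator would look just like the timestamp one:

```csharp
using System;
using System.Collections.Generic;

public abstract class StringComponent
{
    public abstract void Write(List<string> lines);
}

// The concrete component that does the real work
public class SimpleString : StringComponent
{
    public override void Write(List<string> lines)
    {
        lines.ForEach(Console.WriteLine);
    }
}

// A decorator wraps another component and adds behavior around it
public class TimeStampDecorator : StringComponent
{
    private readonly StringComponent _inner;

    public TimeStampDecorator(StringComponent inner) { _inner = inner; }

    public override void Write(List<string> lines)
    {
        Console.Write("{0}: ", DateTime.Now); // decorate, then delegate
        _inner.Write(lines);
    }
}

// usage: new TimeStampDecorator(new SimpleString()).Write(lines);
```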

The above uses the chain of calls started by Write to build out the string that is timestamped, numbered, and output.

This is fine and dandy, but not very pretty to look at. And since the Decorator pattern basically hides the base object (that does the real work) inside itself, it simply chains out until it gets to the base object ("chaining" being the operative word here; similar to piping).

So what if we do this in a fluent type of way?

var outputString = new List<string> { "hello", "world", "this", "is a", "test" };
var myDecoratedString = new SimpleString().WithNumbering().WithTimeStamp();
myDecoratedString.Write(outputString);

This is much more readable if you ask me, and it is very clear and concise into what is happening. It is saying that you want a SimpleString object that has Timestamp and Numbering to decorate it.

The Decorator objects have been transformed into extension methods that act on the SimpleString object and decorate it from the extension method.

A simple example of this would be the following:
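The example code did not survive. A sketch of one way to build it, holding the Write logic as a delegate so each extension method can wrap it; the names (SimpleString, WithTimeStamp, WithNumbering) follow the post, but the details are my reconstruction:

```csharp
using System;
using System.Collections.Generic;

public class SimpleString
{
    // The decoratable entry point, held as a delegate so it can be wrapped
    public Action<List<string>> Write;

    public SimpleString()
    {
        Write = lines => lines.ForEach(Console.WriteLine);
    }
}

public static class SimpleStringDecorators
{
    public static SimpleString WithTimeStamp(this SimpleString target)
    {
        var inner = target.Write;            // capture the current behavior
        target.Write = lines =>
        {
            Console.Write("{0}: ", DateTime.Now);
            inner(lines);                    // then run what was there before
        };
        return target;                       // returning target keeps it fluent
    }

    public static SimpleString WithNumbering(this SimpleString target)
    {
        var inner = target.Write;
        target.Write = lines =>
        {
            int i = 0;
            inner(lines.ConvertAll(l => string.Format("{0}. {1}", ++i, l)));
        };
        return target;
    }
}
```

Usage is then exactly the fluent chain shown earlier: new SimpleString().WithNumbering().WithTimeStamp() wraps the delegate twice, and calling Write runs the whole decorated chain.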

Output would be the following:

Success!

Now, I am fully aware this only decorates one function on the concrete component. But there is no reason you couldn't make the class decorator (in this example the SimpleClassDecorator) have multiple Action<T> (or Action<T, S>..., or even Func<T, TResult> if you need return values).

In most cases, however, there is usually one entry point that makes a class start doing something. Keep in mind, while the SimpleClassDecorator class may look somewhat ugly (because you are encapsulating the block in a delegate), it pays off when you call your decorators. And you only have to write one SimpleClassDecorator, but you can write tons of decorating extension methods to decorate your concrete component object, and they are quite easy to write going forward.

Anyways, this certainly isn't the only way to do a fluent decorator pattern, but it was my first attempt at 2 a.m. (I hate it when I think of something in the middle of the night and need to figure it out).

Also, it’s worth noting that it would be trivial to do the same thing in Ruby, just using blocks.

Friday, August 1, 2008

Lately, I have been using ASP.NET MVC exclusively on a lot of projects and have been looking for a clean way to enforce the DIP (Dependency Inversion Principle), utilizing the Strategy pattern, throughout my applications. If these terms seem alien to you, you should really look into reading about them. If you want to write highly maintainable code (that is subject to spec changes every month), work very little to achieve this, and in general have clean code, these principles will go a long way toward helping you out. Also, once you start doing things this way, you will never want to go back to the old clunky way of over-inheritance and coupling-ridden code. Before you go further, if you don’t know what these things are, do yourself a favor and Google “Dependency Inversion Principle” and “Strategy Pattern” and do a quick read on the subject.

Once you get the hang of these ideas, a Dependency Injection framework (also referred to as an Inversion of Control container) essentially helps you very quickly set up and enforce DIP throughout your code, seamlessly and effectively. It’s a tool to help you achieve this principle and pattern throughout your code.

So after playing around with several Dependency Injection (DI) frameworks (Unity, Spring, etc.) I finally settled on one that I instantly fell in love with because of its simplicity, speed, and complete flexibility: Ninject (www.ninject.org). It is by far one of the simplest dependency injection frameworks I have found, and it fits with how I like to do things. A lot of other DI frameworks want to be set up through a configuration file (most likely XML) that states how to map your abstractions to concrete classes. Ninject, however, is all about coding modules to set up your dependencies (which can refer to configuration files where necessary). It is also completely open, so you can modify and build on it to suit your needs.

Now don’t get me wrong, there are several great DI frameworks out there, so before you flame me, hear me out. DI frameworks are like opinions: everybody has one, each person thinks theirs is the best, and you can’t tell them any differently. The beauty is that you just use what works for you. The ideas I post here should be usable with any DI framework with some massaging.

Let’s start with the common MVP pattern in a classic ASP.NET Web Forms application, and then, once we understand those concepts, apply them to an ASP.NET MVC web application.

Typically you will need to set up a module to define how Ninject should map your abstractions to concrete classes:
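The module code was lost with the original post. A sketch using Ninject 1.0-era syntax; the concrete types (SqlRepository, DataService) are assumptions, while the bindings mirror the ones described below:

```csharp
using Ninject.Core;

// A module groups your binding rules; the kernel loads it at startup
public class AppModule : StandardModule
{
    public override void Load()
    {
        Bind<IRepository>().To<SqlRepository>();
        Bind<IService>().To<DataService>();

        // Explicit self-binding: "if asked for this type, just instantiate it"
        Bind<PagePresenter>().ToSelf();
    }
}
```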

In this example I am telling Ninject how to resolve my abstractions IRepository and IService to the concrete classes I want (the AppHelper is a static class that handles grabbing whichever configuration details are needed for my application, similar to how Rails does this). You can see the syntax is very readable. You simply tell it that you want to bind this abstraction to this concrete type: Bind<AbstractionType>().To<ConcreteType>(). The only weird one up there is Bind<PagePresenter>().ToSelf(). This one simply tells Ninject that if I ask it for this type, it should just instantiate it (you could also specify default constructor parameters for Ninject to use when it creates it). You could have omitted this, as it does this by default, but I like to be explicit in my rules.

There is no voodoo going on here. To further clarify what is going on and how Ninject creates your concrete classes down the chain, let us look at a brief implementation of the presenter, service, and repository classes (when we create a Presenter, it will in turn need a Service, which will in turn need a Repository implementation instance):

Presenter:

Service:

Repository:

As you can see, using MVP (Model View Presenter), in the view we will need the following:

The only piece necessary now is to set up your view (or you can simply use NinjectHttpApplication for your Global.asax.cs and PageBase for your View, located in Ninject.Framework.Web) to load up your Presenter using Ninject:

Injecting the view object causes Ninject to see the property with the "Inject" attribute and start the chain of creating all of your concrete class instances by looking at the abstract types required (using the module we defined earlier). We didn’t have to write a single new line in our code; Ninject handled all of this for us.

Ok, so it works really nicely in ASP.NET Web Forms, but what about ASP.NET MVC? Well, this is where I had to extend Ninject, as it doesn’t ship helper support that fits nicely into the ASP.NET MVC style of doing things.

Typically, as you saw above, we get the view to inject all the way down the tree of instantiations. What you didn’t see is that you can use Ninject.Framework.Web to help in your Global.asax class file (by changing it from HttpApplication to NinjectHttpApplication) to handle a lot of this setup for us, and make your page inherit from PageBase to auto-inject the view and start it down the chain.

So how would we do this in MVC? Well you could do the same thing with the Global.asax file and make a base Controller class to inject the controller instance object, or you can do it the MVC way, and make it nice and clean.

So what is one way to do that? I am glad you asked. First I thought about how a classic ASP.NET page works using the MVP pattern versus the MVC pattern. A classic ASP.NET request comes first to a web page, so the view gets the initial response, and it makes sense to start the chain of injections from there on down. But in MVC, the request comes first to the controller, so logically we need to start the chain of injections there. The most effective way to do this (without tying the controller to a base class, say NinjectControllerBase, in case we want to utilize some swanky open source ideas out there that need the base class) is to use a controller factory to handle all of this for us.

Here is such an implementation of one (keep in mind that KernelContainer is located in Ninject.Framework.Web, and the bottom code would be put into a dll that you can add to all of your MVC applications) :
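The factory code itself is only available via the link at the end of the post. A sketch of the shape it can take; the MVC Beta-era signatures and the KernelContainer members are from memory, so treat the details as assumptions:

```csharp
using System.Web.Mvc;
using System.Web.Routing;
using Ninject.Core;

// Base factory: resolves the controller normally, then lets Ninject
// fill in any members marked with [Inject].
public abstract class NinjectAbstractControllerFactory : DefaultControllerFactory
{
    protected NinjectAbstractControllerFactory()
    {
        // KernelContainer lives in Ninject.Framework.Web
        KernelContainer.Kernel = CreateKernel();
    }

    // Derived factories supply a kernel loaded with their modules
    protected abstract IKernel CreateKernel();

    public override IController CreateController(RequestContext requestContext,
                                                 string controllerName)
    {
        var controller = base.CreateController(requestContext, controllerName);
        KernelContainer.Inject(controller); // start the injection chain here
        return controller;
    }
}
```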

So what does all of this allow you to do? Well assuming you package up the above into its own dll (say Ninject.Framework.MVC), you can just add it to your project and do the following to add Ninject support to your application.

1) Use the simple default Ninject controller factory to add in support in your Global.asax file in the RouteTable function.

2) Make your own controller factory that inherits from NinjectAbstractControllerFactory and just override CreateKernel (this is where you would create a new Ninject kernel with the module we created early on), then add it in the Global.asax file when building the routes for your MVC application.

As you can see, you can call it 2 ways. If you don’t care about customizing the controller factory, you can just use the first method, which builds the controller factory for you and sets up injection for all controllers. Alternatively, make your own that inherits from the NinjectAbstractControllerFactory above. You can choose how you want it to load.

This is a great way to have IoC/DI in your MVC application without getting in the way of your other code. You set up your rules once and forget about them. From there you can just start adding the [Inject] attribute to the constructors, properties, etc. that you want Ninject to take care of.

This example will load DataService and the Repository (via the IService implementation) that we specified in our module – similar to how it worked in the MVP pattern mentioned above, except that our controller factory did all the setup for us behind the scenes, tucked away.

Feedback, questions, and comments are welcome. Keep in mind this is merely one approach; there are several others that could be taken, but I like this one. I have used it on three projects so far, and it really is efficient and simple to set up.

If anyone knows of a good place to store code, I can zip up all of this code and make a simple example for you to play with and tweak to how you would like it to work.

For now, as per suggestion, you can copy the above code (the Factory Code) from google docs here.

Sunday, June 1, 2008

I have recently been having some debates about the importance of Silverlight / Flash / (and even AIR) to the web. Some people in these discussions say that AJAX-enabled web sites are enough for what you need on the web. I wholeheartedly disagree with this assessment, so I thought I would write a blog post about why I feel RIA development is an important next step for our view of the web in the coming years.

Over the past five or so years, we have been hearing a trendy term passed around: “Web 2.0”. To me, this simply means socializing on the Web in new and exciting ways. Sites such as Digg, Twitter, etc. have ushered in a new age for our society and new ways for us to communicate. This is a great step forward for our generation. However, many have started to wonder what the next version of the Web will look like (“Web 3.0”).

I have a hypothesis about what will matter most in the next version of the web, and it will be somewhat seamless to the typical user who surfs it: a better organization of the domain layer. By that I mean spreading your business logic and domain layer across multiple web services and RESTful services that allow users’ data to be consumed in multiple ways – through desktop applications, web applications, or by other companies tying into your data. Granted, a lot of players are already there: Amazon, Ebay, Microsoft, etc. But once this becomes the normal way of creating content on the web, I think it will be an expected feature of all applications. To me, the idea of “the next web version” is the multiplicity of ways of getting to your data and consuming it.

All that being said, how do Silverlight, Flash, and AIR (web-like desktop apps that consume web resources) play into this? There is an excellent article from nikhilk.net that talks about this: http://www.nikhilk.net/Entry.aspx?id=190. In it he describes the Reach vs. Capabilities spectrum: classic simple HTML has the broadest reach, followed by AJAX apps, then AJAX + RIA (Silverlight and AIR), then RIA apps, and finally desktop apps at the greatest-capability end. I think this is an accurate assessment of the technology at hand.

To me, Silverlight (and its peers) will be the new way to present web controls and feature components that give some depth and control to the user, in a presentation that has both speed and power. Silverlight gives you the same flexibility as a Windows application, while the separation of concerns (domain objects) rests on your web services. It helps you build out an enterprise system in the spirit of that “next generation of the web,” because it forces you to think about how to structure your domain objects so they can be consumed in multiple ways – by an AJAX web site, a desktop application, and the RIA application itself. In the process, it lets you spread your business logic and domain logic across multiple servers, while still providing a very quick interface for the client, maintaining state locally, and giving the user a better environment for manipulating their data.

One thing I keep hearing is that Silverlight is the next ActiveX. It is really nothing like ActiveX, except for the fact that it runs on the client side. ActiveX was a way for the client to download a DLL created by the site and run that business logic locally. Silverlight is a complete framework running on the client side; the domain layer “should be” created and consumed from the server side (web services / RESTful APIs). The point here is that an ActiveX control can be any user-created DLL that is registered with your browser and interacts with your OS. Silverlight is a partial .NET Framework running on your system; it doesn’t interfere with the OS or register any components with it. It is completely self-contained, unlike ActiveX. An ActiveX object could wreak havoc on your system, whereas a Silverlight application is restricted in what it can do, because it is sandboxed within its framework. In that respect it is no more ActiveX-like than Flash is. What Silverlight is, is a fraction of the .NET 3.5 Framework running locally on the client, with a WPF/E (Windows Presentation Foundation Everywhere) front end that provides vector-style graphics, a rich GUI, and mechanisms to consume data in multiple ways.

All that being said, I do not think having the majority of your website in Silverlight is a good thing; far from it. Silverlight, to me, is great for complex user controls and components that require speed, a rich GUI, depth, and simplicity of use. I can foresee a lot of admin sections being written in Silverlight, as well as key complex components such as an events calendar, attendance components, a form builder, etc. – really anything that needs a complex GUI while remaining simple to use.

We all know that JavaScript is great for simple things, but the more complex a page and its GUI get, the more of a headache JavaScript can become for the typical developer – and a heavily scripted interface can also end up much slower. Looking back at the issues I have run into lately across several projects, over 35-40% of them were about getting JavaScript and .NET to play nicely together. Had I been using ASP.NET MVC, a lot of those problems could have been mitigated; still, the more complex your JavaScript code is, the more cumbersome it gets and the more difficult it is to maintain. I am not saying JavaScript is bad and convoluted, but to me it should be confined to making simple features feel more alive.

I am working on a proof-of-concept application built on the ASP.NET MVC framework that includes a user control written in Silverlight (developed using the same MVC pattern), to see how all the pieces fit together. I read an article by Marlon at http://marlongrech.wordpress.com/2008/03/20/more-than-just-mvc-for-wpf/ that I am very fascinated to try out in Silverlight. It talks about using the MVC + Mediator pattern to communicate between all of the pieces in WPF. However, since Silverlight is a subset of WPF, and I cannot currently find a way to use the EventManager to register events on a XAML view from another class, I am doing a mix of MVC + M with a little more work on the view’s side to expose events that the controller can consume. I am trying to keep as little code as possible on the view’s side and confine it all to the controller. It is coming along great, and I really think this is the way to bring MVC to Silverlight alongside the ASP.NET MVC Framework, keeping the separation consistent on both sides.
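To illustrate the “events on the view, logic in the controller” compromise described above, here is a minimal sketch. All type and member names are hypothetical; the point is only the shape: the view raises plain CLR events from its XAML code-behind, and the controller subscribes to them, so the view holds no application logic:

```csharp
using System;
using System.Windows;
using System.Windows.Controls;

// Hypothetical Silverlight view: the code-behind only translates UI
// events into semantic events the controller can consume.
public partial class CalendarView : UserControl
{
    public event EventHandler DateSelected;

    // Wired up in XAML to a button/day-cell click.
    private void OnDayClicked(object sender, RoutedEventArgs e)
    {
        var handler = DateSelected;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

// Hypothetical controller: owns the logic, listens to the view.
public class CalendarController
{
    public CalendarController(CalendarView view)
    {
        view.DateSelected += delegate
        {
            // update the model, call web services, refresh the view...
        };
    }
}
```

This keeps the dependency pointing from controller to view, mirroring the separation used on the ASP.NET MVC side of the project.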

I am currently writing a number of domain base classes and interfaces to speed up the connection of these two technologies and reduce the time spent combining them in the same project. I am also working on a "drop in and go" ASP.NET Membership security feature for ASP.NET MVC by extending a starter kit by Troy, found at http://www.squaredroot.com/post/2008/04/MVC-Membership-Starter-Kit.aspx, to fit my typical membership needs.

Before I leave off today, I would like to leave some links to some good Silverlight Examples:


Who/What?

Polymorphic Ideology: I reject your reality, and substitute my own.

Who am I? My name is Corey Gaudin. I am a Senior Software Engineer and Software Architect working in the .NET Framework. I am also the co-founder and leader of the Acadiana .NET User Group. I try to specialize in Web and SOA solutions using the .NET Framework.