
So I’ve found it hard to accept the approaches taken by many of the MVVM and other MV* JavaScript frameworks and libraries that have come out over the last few years, primarily due to the way they mix logical directives and markup. This mixing of concerns is reminiscent of the practice of mixing JavaScript inside of HTML that the industry fought hard to end during the early part of this century. Today we have frameworks that not only mix in “binding” templates like Mustache or Handlebars, but also introduce new languages that provide flow, filter, and execution directives inside of attributes on HTML elements. Instead of “onclick” we implement things like “ng-click”. While this initially seems like a simple approach, I worry that we are introducing maintainability headaches. And like most “new shinies” in JS, even these practices are being superseded by newer practices that are equally concerning. Now, instead of placing code in our markup, there are libraries like ReactJS and Imba that seek to place markup inside of code.

A few years ago, I was having a conversation with another developer, Cory House, about the mixing of concerns that occurs when using these libraries and frameworks. I mentioned to him that I had been using a different technique that allowed for the creation of views and controls without directly using jQuery, while still allowing for unobtrusive JavaScript. At the time I was using a framework that I had built exclusively for the company that I was working for. I have since moved on, however, and have now begun work on a completely new library that fully encapsulates the concepts that I had discussed with Cory and others back in 2012-2013.

Today, I would like to announce that I have begun development of a new MV* library named AtomicJS. This library provides an engine to build web applications based on a design pattern that I have been referring to as Model – View – View Adapter – Controller. In this pattern, the View is completely abstracted away from the Controller via the View Adapter. In the case of AtomicJS, the “View” can be built in any language/markup, including HTML. The “View Adapter” is constructed from View Adapter Definitions written in plain JavaScript, usually as a single POJO, with definitions and initializers for the controls found within the View. The “Controller” is also written in JavaScript, and the “Model” can be a simple JSON object or other POJO. Other supporting constructs such as Service Proxies and Observers are employed as desired and are generally written in JavaScript.

Model – View – View Adapter – Controller

The entire library is configurable using Dependency Injection and one or more composition roots. You can inject the view adapter support “engine” that provides the functional interfacing between the “View Adapters” built from the View Adapter Definitions and the HTML DOM provided by a web browser, or you can inject a different engine to provide a different set of rendering/interfacing adapter methods. Since all dependencies are injected, “mockist style” unit testing of the components of the pattern is very simple.
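To make the idea concrete, here is a minimal TypeScript sketch of injecting an engine at a composition root and swapping in a recording fake for “mockist” tests. All of the names here are hypothetical illustrations, not the actual AtomicJS API:

```typescript
// Hypothetical engine interface; the real AtomicJS names may differ.
interface RenderEngine {
  setText(controlId: string, text: string): void;
}

// A browser-backed engine would wrap the HTML DOM. For unit tests,
// inject a fake engine that simply records the calls it receives.
class RecordingEngine implements RenderEngine {
  calls: Array<[string, string]> = [];
  setText(controlId: string, text: string): void {
    this.calls.push([controlId, text]);
  }
}

// A view adapter receives its engine through the constructor (DI),
// so it never touches the DOM directly.
class GreetingAdapter {
  constructor(private engine: RenderEngine) {}
  showGreeting(name: string): void {
    this.engine.setText("greeting", `Hello, ${name}!`);
  }
}
```

Because the adapter only knows the engine interface, the same adapter code runs unchanged against the browser engine or the test fake.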

Check out the following constructs from the current TodoMVC.com-based demo for AtomicJS.

The following is the view adapter definition that defines the functional layout for the entire TodoMVC demo app:
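The original listing from the demo is not preserved in this archive. As a rough, hypothetical sketch (the real AtomicJS definition format may differ), a view adapter definition for the todo app might be a single POJO along these lines:

```typescript
// Hypothetical view adapter definition: a plain object that names the
// controls in the view and supplies an initializer for wiring each one.
const todoViewAdapterDefinition = {
  controls: {
    newTodoInput: { selector: "#new-todo" },
    todoList: { selector: "#todo-list" },
    clearCompletedButton: { selector: "#clear-completed" },
  },
  // Initializers wire control events to controller methods, keeping the
  // markup itself free of logic (unobtrusive JavaScript).
  initializers: {
    newTodoInput: (
      control: { onEnter: (handler: (text: string) => void) => void },
      controller: { addTodo: (text: string) => void },
    ) => {
      control.onEnter(text => controller.addTodo(text));
    },
  },
};
```

The engine would build the actual “View Adapter” from a definition like this, resolving each selector against whatever view technology is injected.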

So, it’s been a while since I’ve published a post on this site. Today I found myself answering a Stack Overflow question with essentially what I had planned on covering for this part of my series on generics in .Net. So I figured that I would post the same content here. As a result, this 5th part is being published out of order and before parts 2-4. Hopefully, it won’t be another year before I get around to posting those parts. So with that said, here is the post. Note that some of the concepts employed in this post will be covered in parts 2-4.

For those who work with multiple generic classes that share the same generic type parameters, the ability to declare a generic namespace would be extremely useful. Unfortunately, .Net (or at least C#) does not support the idea of generic namespaces, but we can use generic classes to accomplish the same goal. Take the following example classes related to a logical entity:
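The original C# listing is not preserved in this archive. As a rough analogue of the idea in TypeScript (which also lacks generic namespaces), a generic factory function can play the role of the outer “namespace” class, letting several related classes share one set of type parameters declared in a single place. The names here are illustrative, not the original example:

```typescript
// A generic "namespace": the factory declares the type parameter once,
// and every class defined inside shares it.
function entityNamespace<TKey>() {
  class Entity {
    constructor(public id: TKey, public name: string) {}
  }

  class Repository {
    private items = new Map<TKey, Entity>();
    save(entity: Entity): void { this.items.set(entity.id, entity); }
    find(id: TKey): Entity | undefined { return this.items.get(id); }
  }

  return { Entity, Repository };
}

// The shared type argument is specified exactly once.
const { Entity: Product, Repository: ProductRepository } = entityNamespace<number>();
```

If the key type ever changes, only the single type argument at the “namespace” boundary changes, rather than every related class.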

Then, through the use of partial classes, you can separate the classes into separate nested files. I recommend using a Visual Studio extension like NestIn to support nesting the partial class files. This allows the “namespace” class files to also be used to organize the nested class files in a folder-like way.

The above is a simple example of using an outer class as a generic namespace. I’ve built “generic namespaces” containing 9 or more type parameters in the past. Having to keep those type parameters synchronized across the nine types that all needed to know the type parameters was tedious, especially when adding a new parameter. The use of generic namespaces makes that code far more manageable and readable.

In my opinion, one of the most powerful features of .Net, and one that really sets it apart from other language frameworks, is its implementation of generics. This post will be the first in a multi-part series on the subject.

What Are Generics?

So, what are generics and how do we use them? Generic programming is a form of coding where type parameters are declared in the signatures of classes or methods in place of specific types, so that they can be specified later as type arguments. These type parameters are then used in place of specific types in the implementation of the classes or methods where they are declared. The following are examples of declared type parameters on a class and on a couple of methods of a normal class:
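The original C# listings are not preserved in this archive; rendered in TypeScript, which uses the same angle-bracket syntax, the two forms look like this (names are illustrative):

```typescript
// A type parameter T declared on a class:
class Wrapper<T> {
  constructor(private value: T) {}
  get(): T { return this.value; }
}

// Type parameters declared on individual methods of a non-generic class:
class Util {
  static firstOrNull<T>(items: T[]): T | null {
    return items.length > 0 ? items[0] : null;
  }
  static swap<T>(pair: [T, T]): [T, T] {
    return [pair[1], pair[0]];
  }
}
```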

In the above examples, <T> declares that the class or method where it is defined contains zero or more references to an as-of-yet-unspecified type T that will be provided later. Once specified, all instances where T is used are effectively substituted with the type argument provided.

Consuming Generic Classes

Perhaps the most common use of generics in .Net is the use of the generic classes found within the System.Collections.Generic namespace. And of these classes, perhaps the most commonly used is the List<T> class. The List<T> class is the generic equivalent of the ArrayList class. It is virtually identical in function, as both implement the IList interface and both essentially contain and manage an array of items. Where the List<T> class shines is in the type safety of ensuring that each instance of the class contains only a homogeneous list of type T or its derivatives. ArrayList, by comparison, is homogeneous only to the System.Object type. Since nearly all types derive from System.Object, the ArrayList class does not do much to ensure type safety in most practical cases, especially when one has a specific subtype in mind for a homogeneous list.

So the List<T> class allows you to specify a type argument for the parameter T that “constrains” the list to a specific type or its derivatives. When you construct an instance of List<T> at run-time, the type argument that you provide is effectively substituted for T throughout the implementation of List<T>, and a new class type is created. Any further instances created using the same type argument for T are also instances of this new class type.

One of the main advantages of this form of reuse is in leveraging the same general functionality of a list type across multiple item types without having to write the same general code repeatedly. Take the following example class snippets:
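The original snippets are not preserved in this archive; they presumably looked something like the following TypeScript rendering, with the same method duplicated for each item type:

```typescript
// Two near-identical classes, differing only in their item type.
class IntList {
  constructor(private items: number[]) {}
  getValueAtIndexOrDefault(index: number): number {
    return index >= 0 && index < this.items.length ? this.items[index] : 0;
  }
}

class StringList {
  constructor(private items: string[]) {}
  getValueAtIndexOrDefault(index: number): string {
    return index >= 0 && index < this.items.length ? this.items[index] : "";
  }
}
```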

Consider how the two implementations of GetValueAtIndexOrDefault are virtually identical. They vary only between the use of the types int and string with regard to the type of items each list contains. Using generics, the above two implementations could be abstracted into the following:
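Rendered in TypeScript, the generic abstraction might look like this sketch (the default value is passed in explicitly, since TypeScript has no equivalent of C#’s default(T)):

```typescript
// One generic class replaces both per-type lists.
class TypedList<T> {
  constructor(private items: T[], private defaultValue: T) {}
  getValueAtIndexOrDefault(index: number): T {
    return index >= 0 && index < this.items.length
      ? this.items[index]
      : this.defaultValue;
  }
}

// Equivalents of IntList and StringList:
const intList = new TypedList<number>([1, 2, 3], 0);
const stringList = new TypedList<string>(["a", "b"], "");
```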

Notice how the code is now DRYed up. The type parameter T is now substituted for whatever type arguments we want to specify. To effectively replace the IntList and StringList class types, we would create instances of List<int> and List<string> respectively.

I plan on posting more on the topic of generics in future posts covering the following concepts:

So while working on my assigned project at work, I observed some lag populating some drop-downs on a screen that I was working on. Immediately, I suspected that the drop-downs were being loaded by asynchronous calls performed after the screen had loaded. So I fired up Firebug, and sure enough, there they were: three calls were being made to the server to get three different sets of data to populate three drop-downs. Despite the calls being small and relatively quick (28-49 ms each), the effect was still a visible lag, long enough that I was able to open one of the drop-downs before it had finished being populated.

Now some might say: so what? Big deal. Why fuss over 49 ms? Well, despite the very visible lag, this is just a symptom of a problem that could very well escalate should more drop-downs of this nature get added to the screen. And if this “pattern” is replicated on other, more complicated screens, it could lead to user frustration and inefficient use of resources, especially if the app is stateless and requires authentication.

I’ve followed a guideline, almost a rule, over the past 4 years that I have been building service oriented single page apps, and that is: perform no more than one call per user action. A user initiated action is nothing more than a simple use case. And simple use cases should define a contract that specifies what must be submitted and what must be provided if the preconditions are met. To me, these extra calls are being made because the use case requirements were not fully implemented in one service call. Think of the activity flow or sequence diagram one might draw for a given use case. In its simplest form, the user actor initiates some action with the system and the system responds. Now, I’m not talking about calls for images or other visual elements. When I refer to calls, I’m talking about service method invocations. This, by and large, is why I take issue with using (non-pragmatic) RESTful services with their full HATEOAS-driven implementations as the model for building services for client-side applications, but that’s for another post on another day.

Another concern that I have when I see chatty applications that engage in this type of behavior is that it indicates to me that the client application is more familiar with the intimate details of the middleware than it probably should be. This could lead to unintended exposure of pieces of the system that should otherwise be encapsulated. This could expose security concerns and allow new unexpected permutations of workflows and interactions to occur with those functions.

Now it may seem that I am arguing against reusability here, but I assure you that it’s quite the opposite. One could still potentially have those same fine-grained functions, but encapsulate them as implementation detail code that is reused by more coarse-grained, use case specific methods. In fact, by doing this you will likely find that some of the functionality that resides in the front end code might make more sense encapsulated in the use case service method code on the server. This code would then be automatically reused if one were to build an alternative front end that leveraged the same service methods. You might even reduce the amount of data that you are sending over the wire, especially if some of it would otherwise be filtered out or is only used to help process or relate the data.

For example, let’s say that we organize products by manufacturer. And let’s say that we have two drop-down lists to filter with, one that is a list of manufacturers and the other a list of products. If we obtain these two lists separately and then drive the contents displayed in the product list based on the selected manufacturer, then we are writing code on the front end to handle that processing. Additionally, we likely have a manufacturer id or some sort of code, possibly the name, on the product records returned that associates the products with the manufacturers. An alternative approach would have been to send down a dictionary with the manufacturer names as keys and object values containing each manufacturer’s products, already broken out. The processing of the two lists would then be completed on the server, and an alternative front end would not have to repeat the same logic. Consider the following:
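The original snippet is not preserved in this archive; the contrast described above might look like this (the data shapes and names are illustrative):

```typescript
// Separate lists: the client must correlate products to manufacturers
// itself, using the manufacturerId on each product record.
const separate = {
  manufacturers: [{ id: 1, name: "Acme" }, { id: 2, name: "Globex" }],
  products: [
    { id: 10, name: "Anvil", manufacturerId: 1 },
    { id: 11, name: "Rocket Skates", manufacturerId: 1 },
    { id: 12, name: "Widget", manufacturerId: 2 },
  ],
};

// Combined payload: the server has already grouped products under each
// manufacturer, so the client just rebinds the nested list on selection.
const combined: Record<string, { products: string[] }> = {
  Acme: { products: ["Anvil", "Rocket Skates"] },
  Globex: { products: ["Widget"] },
};
```

With the combined shape, no filtering logic lives in the front end, and the correlating ids never need to cross the wire at all.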

The code on the front end can now simply rebind the product list control with the products property of the company object bound to the item selected in the company list control. With the two separate lists, the front end code would have to scan through the list of products to locate the related products to bind to the products list control when the selected item in the company list is changed. Consider also that there may be even more complicated rules that might affect those lists such as the user’s location for availability or a combination of other factors and it becomes easier to see why we might want to move that concern to a more centralized location.

I’m interested in other opinions on this topic. What do you think? Should we strive to make our front ends “pretty and dumb,” or should we move more processing logic to the front end and have finer-grained access to resources from the middleware? Please leave your thoughts in the comments below.

Just wanted to post a quick update regarding the development efforts on Atomic Stack. I’ve started writing unit tests for the classes developed so far. Development on the target code will be mostly halted until the tests have caught up. After that, I plan on adhering to TDD practices going forward on the project.

I’ve started with unit tests for the .Net side, but also plan on writing tests for the web/js side. If anyone has any suggestions on JavaScript unit testing tools to check out, please leave a comment below. I should be exploring options for testing the web side over the next couple of weeks. I’ve already planned ahead for unit testing on the web side by abstracting away the HTML DOM by way of the as-yet-incomplete baseApplication class. There is an htmlDomApplication concrete class that provides the wiring to the HTML DOM. I plan on using an alternative implementation to provide a mock for unit testing.

So today, I found myself needing to map exception types to HTTP status codes, for the purpose of looking up which status code to report back from any service endpoint invocation that has been interrupted by an unhandled exception. Now, I could have simply set up a lazily instantiated static instance of a Dictionary<Type, HttpStatusCode> somewhere and referred to it. Or I could have set up a function with a switch statement on an exception parameter’s type, casing on typeof() calls on various exception types to return HttpStatusCodes. Each of these has its drawbacks, though. The switch statement would only apply in this one case of translating the types. The dictionary would only provide a map to translate from the exception type to an HTTP status code. If we ever decided that we wanted to make other decisions or take other actions based on one of the exception types, we would either have to write more switch statements or expand the value type of the dictionary.

So I discussed using a subclassable enum with a colleague. I built a reference implementation using the Subclassable Enum implementation from AtomicStack. Its class signature looked something like this:
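The original C# signature is not preserved in this archive. The shape of the idea, sketched as a TypeScript class-based enum (the names here are hypothetical, not the actual AtomicStack API), looks like this:

```typescript
// A "subclassable enum": each value is a class instance carrying both
// data and behavior, rather than a bare integer constant.
class HttpStatusExceptionMapping {
  private static readonly all: HttpStatusExceptionMapping[] = [];

  static readonly NotFound = new HttpStatusExceptionMapping("EntityNotFoundError", 404);
  static readonly Validation = new HttpStatusExceptionMapping("ValidationError", 400);
  static readonly Default = new HttpStatusExceptionMapping("Error", 500);

  private constructor(
    public readonly exceptionName: string,
    public readonly statusCode: number,
  ) {
    HttpStatusExceptionMapping.all.push(this);
  }

  // Look up the mapping for an error by its name, falling back to the
  // Default entry for anything unrecognized.
  static forException(error: Error): HttpStatusExceptionMapping {
    return (
      HttpStatusExceptionMapping.all.find(m => m.exceptionName === error.name) ??
      HttpStatusExceptionMapping.Default
    );
  }
}
```

Because each entry is a full object, new behavior (logging, retry policy, and so on) can later be attached to the entries themselves instead of to yet another switch statement.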

This has led me to write this post about what exactly a Subclassable Enum is and some of the ways it can be useful. First, let’s start with some of the problems that the Subclassable Enum helps to solve.

One of the things that some people have wished they could do in .Net is create enums based on string values. With the standard enum type, the set of constants defined by the enum must have an underlying integral type. If no type is specified, then the underlying type defaults to Int32. The following is not a valid .Net enum:

Another useful feature of subclassable enums is the ability to controllably allow others to extend the list of enum values. As long as you don’t mark the enum class as sealed, then it is open to extension. For example, consider the following extensions to the Status enum from above:
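The original C# listings are not preserved in this archive. Rendered in TypeScript, a string-backed Status enum and its extension might look like the following sketch:

```typescript
// A string-backed "subclassable enum": each value is a class instance
// carrying its underlying string.
class Status {
  static readonly Active = new Status("Active");
  static readonly Inactive = new Status("Inactive");

  protected constructor(public readonly value: string) {}
}

// Because Status is not sealed, a subclass can extend the value set.
class ExtendedStatus extends Status {
  static readonly Pending = new ExtendedStatus("Pending");
  static readonly Locked = new ExtendedStatus("Locked");

  protected constructor(value: string) {
    super(value);
  }
}
```

Note that the original values remain reachable through the subclass (ExtendedStatus.Active is the same instance as Status.Active), so consumers can migrate to the extended enum without breaking.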

Now we can call the SetStatus method from above with any of the following calls:

public void CallSetStatus()
{
    Person person = new Person();

    // The original Status values work
    person.SetStatus(Status.Active);
    person.SetStatus(Status.Inactive);

    // The original Status values are available via ExtendedStatus too
    person.SetStatus(ExtendedStatus.Active);
    person.SetStatus(ExtendedStatus.Inactive);

    // The new Status values also work
    person.SetStatus(ExtendedStatus.Pending);
    person.SetStatus(ExtendedStatus.Locked);
}

With Subclassable Enums, iterating over the list of registered enums is simple. For example consider the following iteration over the Status enum from above:

foreach(Status status in Status.AllValues) { ... }

Or you can iterate over their underlying values with the following:

foreach(String status in Status.AllNaturalValues) { ... }

Subclassable enums also benefit from subclassing like any other class. You can define enums that are abstract and require subclass implementations that override abstract functionality. For example, consider the following change to the Status enum:

Now consider that you want to make another decision based on a status. For example, let’s say that you want to optionally log user activity based on status. With a classical enum you might write a helper utility method like the following:

Since the LogActivity method is abstract, all enumerated values are now required to at least implement the method. With classical enums, the LogActivity utility method may have been defined far from the GetStatusPermissions method, and there is no guarantee that new statuses added to the classical enum actually get cases defined for them across the various related switch statements.
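The original listings are not preserved in this archive; a TypeScript sketch of the abstract-method variant makes the contrast concrete. Each value supplies its own behavior, so a value without an implementation is a compile-time error rather than a forgotten switch case (names are illustrative):

```typescript
// An abstract subclassable enum: every value must implement logActivity.
abstract class Status {
  protected constructor(public readonly value: string) {}

  // There is no switch statement to keep in sync; the behavior lives
  // on the value itself.
  abstract logActivity(activity: string): string;

  static readonly Active: Status = new (class extends Status {
    constructor() { super("Active"); }
    logActivity(activity: string): string {
      return `logged: ${activity}`; // active users have activity logged
    }
  })();

  static readonly Inactive: Status = new (class extends Status {
    constructor() { super("Inactive"); }
    logActivity(_activity: string): string {
      return "skipped"; // inactive users are not logged
    }
  })();
}
```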

And finally, another benefit of subclassable enums is that you are not restricted to a single type of underlying value for the enum entry. There may be times when you would like an enum to represent two or more different types of values for a given entry. Consider the following change to the Status enum, for example:

Now each status presents both a string constant and an integer constant. The StringIntegerEnum base class provides the ability to obtain both unique lists of underlying values as well as being able to substitute the enum entry for either a string or an integer. So for example, the status might be stored in the database using its integer value, but may be operated on mostly by its string value in the middleware code. Just as StringEnums are able to be converted to and from their string values, StringIntegerEnums are able to be converted to and from either their string values or their integer values. This provides the ability to use subclassable enums as an enumerable mapping structure.
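A TypeScript sketch of the dual-valued idea (the names here are illustrative, not the actual StringIntegerEnum API) might look like this:

```typescript
// An enum entry backed by both a string and an integer, convertible
// either way -- e.g. stored by integer in the database, but matched by
// string in middleware code.
class Status {
  private static readonly all: Status[] = [];

  static readonly Active = new Status("Active", 1);
  static readonly Inactive = new Status("Inactive", 2);

  private constructor(
    public readonly stringValue: string,
    public readonly integerValue: number,
  ) {
    Status.all.push(this);
  }

  static fromString(value: string): Status | undefined {
    return Status.all.find(s => s.stringValue === value);
  }

  static fromInteger(value: number): Status | undefined {
    return Status.all.find(s => s.integerValue === value);
  }
}
```

Because both lookups resolve to the same instances, the enum doubles as an enumerable mapping structure between its two underlying value sets.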

As you can see, subclassable enums provide a greater degree of flexibility and versatility than classical enums. There is, of course, a cost to subclassing enums: in order to be derivable, these types are class instances and will not perform the same as classical enums. But I think we can see that the trade-off is likely worth it if you have any of the above requirements. Additionally, avoiding the proliferation of switch statements based on classical enums may be justification enough all by itself.

Check out the subclassable enum implementation on Github in the AtomicStack project: SubclassableEnum.cs

I have started writing the Atomic Stack Coding Standards documentation. This documentation will begin with coding styles and best practices in an attempt to encourage consistency in the code among the various contributors.

This is just a quick post to inform you that I’ve begun development on what I’m calling the Atomic Domain Specific Querying Language, or ADSQL for short. I’ve set up a TestService class in the AtomicWeb project to serve as a playground for exploring the language. Please note that this code will not execute without throwing an exception, as there is very little implementation behind the declarations; in fact, it’s mostly filled with thrown NotImplementedExceptions. Execution is not a goal at this time. Right now, I am planning on inviting others to explore the grammar of the language and to either contribute or suggest additional elements. The language is defined in the Schema folder of the Atomic.Net project, and I will be adding more elements over the next few days/weeks.

The basic strategy for consuming the language in an IDE that provides IntelliSense-style assistance is to use dot notation to discover the next applicable elements to choose from in order to continue writing a statement. If typing a dot results in no suggestions, remove the dot and try a square bracket; this indicates that the language requires an input argument. The input argument may be a lambda expression that branches into the language constructs of another related entity. Consider the following as an example:
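The TestService listing is not preserved in this archive. A stripped-down TypeScript sketch of the discovery-driven style (every name here is hypothetical, not the actual ADSQL schema) might read:

```typescript
// Hypothetical criteria elements for a related User entity.
class UserCriteria {
  private names: string[] = [];

  Name(value: string): UserCriteria {
    this.names.push(value);
    return this;
  }

  matchesName(name: string): boolean {
    return this.names.includes(name);
  }
}

// Hypothetical criteria language for a Document entity. Each member
// returns the next legal set of elements, so IDE completion after a
// dot "discovers" how the statement may continue.
class DocumentCriteria {
  private titles: string[] = [];
  createdBy?: UserCriteria;

  Title(value: string): DocumentCriteria {
    this.titles.push(value);
    return this;
  }

  get And(): DocumentCriteria {
    return this; // a conjunction simply continues the chain
  }

  // An element that takes an input argument: the lambda receives the
  // criteria language of the related User entity.
  CreatedBy(createdByWhere: (user: UserCriteria) => void): DocumentCriteria {
    this.createdBy = new UserCriteria();
    createdByWhere(this.createdBy);
    return this;
  }
}

// A statement then reads left to right, one discovered element at a time:
const criteria = new DocumentCriteria()
  .Title("Spec")
  .And
  .CreatedBy(user => user.Name("alice"));
```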

The .CreatedBy element chained from the .And expects a lambda expression in which the single argument provided to the expression will be the appropriate criteria language elements for the .CreatedBy property. In this case, the argument provided for the createdByWhere parameter will be a User.Criteria language element. The developer can then formulate the appropriate sub query criteria that applies to the CreatedBy property.

In addition to more language elements, I will also be adding a few more core entities and another project to demonstrate how application developers can add their own entities to the schema.

I’ll discuss the inner workings of how generics are used to construct and route the language elements in another post. In the meantime, if you’re interested in how the language expressiveness is achieved, please explore the Schema folder in the Atomic.Net project.

So, I’ve been thinking about how I want to jump-start my blog for a very long time. The problem is that, for as much as I like technology, I’ve never really taken to the typed word. I’ve always preferred to make a phone call rather than write up an email or send a text. For as fast as I can type, it seems that I can never type fast enough to get my ideas recorded, except perhaps in the case of programming, which is good, given that I’m a programmer. But when it comes to free-flowing thoughts like those generally relayed via speech, I would definitely prefer to dictate rather than type. So, I’m going to attempt to use the Voice Memos iOS application as a way to stage the content for my blog.

Recently I decided to start an open source project, based on some opinions I received at St. Louis Days of .Net, which echoed similar sentiments from the previous year’s conference. You see, over the years I’ve been fortunate to have been tasked with solving some of the most difficult challenges faced by the various teams that I have been a part of. I’ve also been fortunate to have worked with some very talented and skilled individuals on those teams. Some I collaborated with, and others I was literally schooled by. I’ve somehow managed to hold on to the practices that have proven useful and advantageous in the various projects that I’ve worked on and assimilated them as recurring patterns. I’ve described some of these patterns to certain individuals over the last few years, and nearly every time, I’ve been asked if any of it was embodied in an open source implementation. The unfortunate answer has always been: nothing that I’m involved with, and nothing that I’m aware of.

So the purpose of this open source project is to embody the set of tools that I will be implementing and leveraging during the course of the development of a personal closed source project of my own. As such, requirements will flow from my personal project to the tool-set project. The tool-set will be comprised of a stack of technologies that I will leverage to build an n-tier web application. These technologies will be new implementations based upon the patterns that I have successfully leveraged over the years. These will include things in the following list, which I plan to go into further detail on in future posts:

Classical inheritance implementation on top of ECMAScript 5/JavaScript 1.8.5 with base class method call dispatching, public and protected access modifiers, instance and static scopes with constructors

Pure client side MVC solution implemented using JSON/HTML/JavaScript

and of course more…

The name of this set of tools is Atomic Stack. The server-side tiers (Atomic.Net) are being implemented in .Net using C#, due to its amazing generics support. The client-side tiers (AtomicWeb/AtomicJS) are being built upon HTML5/ECMAScript 5/CSS 3. The project goals will include adherence to development practices including:

Separation of Concerns (including Unobtrusive JavaScript)

Clean Coding Principles

Tier/Down Design and Development (with wide client-side development cycles and narrow vertical server-side development sprints)

In addition I will be looking at employing additional practices not currently in use including the following:

Design by Contract (at least for application hosted services)

Test Driven Development

I’m sure that if anyone actually comes across this blog, some may point out that there are a ton of frameworks and libraries out there. They may question: do we really need yet another framework or library, much less a stack of them? Frankly, I’m not sure. I do know, however, that I have ideas, and I would like to contribute those ideas in a tangible way to the community. And due to my past experience, there is a certain degree of independence yet cohesiveness among these ideas, which compels me to attempt to start from scratch and create these new tools with minimal constraints, built solely upon the raw platforms underneath (.Net, JavaScript, HTML, CSS, etc.). Of course, I will very likely be incorporating additional dependencies on things that are already well written and tested (like Mike Woodring’s DevelopMentor ThreadPool, HtmlAgilityPack, and jQuery). But it is very likely that I will avoid some tools whose implementations I find lacking (for example, the MS Entity Framework).

Anyway, I’ve droned on long enough, and I’m not quite sure how to end this post. If you are interested in checking out the project, please visit http://atomicstack.com.

It’s been almost 6 years since I’ve purchased a copy of Windows for myself. I switched to Mac in 2006 because of growing frustration with Microsoft and the way they treat their customers. I completely skipped Windows Vista and 7. I had heard recently that Microsoft was trying to get back into the good graces of consumers, so I thought, why not? I’ll give them another try. So I dropped a couple of hundred bucks on two copies of Windows 8 System Builder. A mere week and a half after installing one of the copies on my iMac, the machine died (it required a new logic board and video card). Now, that timing is interesting to begin with, given that I had not had any problems with the machine in the 15 months that I’d owned it. But I digress. In the days before I had to take the machine in to AppleCare, I was unable to get any Windows updates to install. After receiving the machine back, I was still not able to install any updates, so I opted to use the “Refresh your PC” option. After Windows “refreshed,” I noticed that it was no longer activated. At this point, I felt my blood begin to boil, because I just knew this meant that I was going to have to call the activation line. Sure enough, the activation wizard was unable to reactivate, and I had to make the call at 4:53 in the morning. This is a disgrace. After abandoning Microsoft for 6 years, they have managed to make me feel like they are accusing me of thievery less than two weeks after I tried to give them another shot. I am furious! This is the last money I throw their way for a long time.

Allow me to give a counter-scenario that demonstrates how this should have gone: because the iMac required a new logic board/motherboard, my iTunes on the OS X side was no longer authorized to play my purchased content. Since I had already authorized 5 of my 6 Macs (including the iMac before the logic board was replaced), I could not re-add this machine as an authorized machine. What was the resolution? I simply deauthorized all of my machines and reauthorized them one by one. Simple. Elegant. No phone calls to make. No stupid IVR system to deal with while furious that I had to spout out a bunch of numbers. No need to then be transferred and have to re-spout those same damn numbers again. No need to type in another endlessly long set of numbers read off by that slow IVR system. No need to feel like I had just paid a company $200 for the privilege of being treated like an accused thief.

It’s no wonder Apple is kicking their butts in the consumer space. Microsoft still does not know how to treat its customers properly!

Why couldn’t they simply use the same model as iTunes, only with a single authorization allowed? The old logic board is dead; it won’t be phoning home again. Simply deauthorize that board and allow the replacement board to be authorized.

Thanks, Microsoft, for renewing the ill perceptions I had long held and had eventually allowed to fade. I feel reinvigorated to spread the word again amongst my family and friends. I fully recall now why I became a “Microsoft hater”.