Adventures in Software Development

Category Archives: SPA Development

Unit Tests

This weekend, I finished up adding unit tests and code coverage reports to the Excido solution. I was able to get 100% code coverage on my data model project and my service/business layer project.

I did not attempt to write tests for the other layers for several reasons, the most important being that, with the exception of the front-end AngularJS project and a utility project, the other layers are almost entirely configuration and specialization of third-party libraries like Web API, Breeze and Entity Framework. That isn’t to say that those layers shouldn’t have unit tests, but I’ve decided to spend my time working on other parts of the project for now. If I had more resources, the front-end project and the utility project would definitely have unit tests.

Having unit tests on the data model and business layers gives me confidence that the most important parts of the project cannot inadvertently be broken without me knowing it. The tests can run locally on my machine as I’m coding, but more importantly, they are automatically run on the build server whenever any new code is checked in. If any new code breaks something important, I’ll be notified right away. This gives me the ability to make big changes to the software without worrying about breaking old code.

Having the test harness configured and working also allows me to easily start using Test-Driven Development (TDD). TDD is a software development process in which the developer first writes a unit test that defines what the software should do and then writes the software so that it passes the unit test. This encourages simple designs, inspires confidence and provides technical documentation of what the software is supposed to do.

Up until this point in the project, I’ve written all of my code first and only gotten around to writing unit tests afterward. From here on out, for the important layers, I’ll write the unit tests first and then write the code to make them pass.
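The test-first cycle is easiest to see in a tiny example. The sketch below is in TypeScript (the client-side language of this project; the real server-side tests are in C#), and the `isExpired` function and its parameters are hypothetical, not the actual Excido code:

```typescript
// Hypothetical TDD sketch: step 1 is to write the test first,
// describing what the code should do before the code exists.
function testIsExpired(): void {
  const created = new Date("2015-01-01T00:00:00Z");
  const now = new Date("2015-01-08T00:00:00Z");
  // Content with a 7-day lifetime should be expired exactly one week later...
  if (!isExpired(created, 7, now)) throw new Error("expected content to be expired");
  // ...while content with an 8-day lifetime should still be live.
  if (isExpired(created, 8, now)) throw new Error("expected content to still be live");
}

// Step 2: write just enough code to make the test pass.
function isExpired(created: Date, lifetimeDays: number, now: Date): boolean {
  const msPerDay = 24 * 60 * 60 * 1000;
  return now.getTime() - created.getTime() >= lifetimeDays * msPerDay;
}

testIsExpired(); // throws if the implementation is wrong
```

Running the test before the implementation exists fails (red), and writing the minimal implementation makes it pass (green), which is the rhythm TDD prescribes.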

Code Coverage

Code coverage is simply a measure used to describe the degree to which the source code of a program is tested by a particular test suite. A program with high code coverage has been more thoroughly tested and has a lower chance of containing software bugs than a program with low code coverage.

Since the data model and business layers of this project will contain all of the code that dictates how the software works, it is important to me that as much of the code as possible is tested. The code coverage report from my testing software shows that 100% of my important layers are currently covered by unit tests. Knowing this, I can confidently add code to these layers without worrying that some obscure part of the code might be affected, because 100% of the code, even the obscure branches that might hardly ever run, is tested.
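Those obscure branches are exactly what coverage tooling surfaces. Here is a small TypeScript sketch (the function and its names are illustrative, not from the Excido code) of a function with a rarely-hit branch that a coverage report would flag as untested if the test suite only exercised the common case:

```typescript
// Illustrative sketch: a function with an obscure branch. A coverage tool
// reports which of these branches the test suite actually executes.
function slugFor(title: string): string {
  const slug = title.trim().toLowerCase().replace(/[^a-z0-9]+/g, "-");
  // Obscure branch: a title with no usable characters at all.
  if (slug === "" || slug === "-") return "untitled";
  return slug;
}

// Covering only the common case would leave the "untitled" branch untested,
// and the coverage report would show less than 100% branch coverage.
const common = slugFor("Hello World"); // exercises the normal branch
const obscure = slugFor("!!!");        // exercises the rarely-hit branch
```

A suite that includes both calls covers both branches; dropping the second call is precisely the kind of gap a coverage percentage below 100% reveals.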

Moq

While writing the unit tests, I made extensive use of Moq. Moq is a mocking framework written specifically for .NET that uses LINQ expression trees and lambda syntax.

Mock objects are simulated objects that mimic the behavior of real objects. Last week, I wrote about using dependency injection and interfaces. Because each of my components delegates its work to other components that are only referenced through interfaces, the behavior of any single component can be tested without using any other specific component. Special components that always behave in specific ways can therefore be injected into the component being tested, ensuring that it is always tested in exactly the same situation. Those special components are the mock objects.

For example, last week I described how my repository layer requires an implementation of the IDataContext interface to delegate fetching records from the database. In the actual software, there is an implementation of IDataContext that uses Entity Framework to fetch data from the database. When I’m testing my repository layer, I don’t want to set up Entity Framework and maintain a special database that has to be returned to a certain state every time I want to run the tests. Since the repository layer can work with any IDataContext, I can give it an implementation that doesn’t use Entity Framework but instead always returns a specific set of records of my choosing. That special implementation of IDataContext is a mock object.
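Conceptually, the arrangement looks like the TypeScript sketch below. The real IDataContext is a C# interface and the record shape, method names and data here are all hypothetical; only the pattern, a repository that depends on an interface and a hand-rolled fake that always returns fixed records, mirrors what the actual tests do:

```typescript
// Hypothetical record shape and context interface, for illustration only.
interface ContentRecord { slug: string; body: string; }

interface IDataContext {
  query(): ContentRecord[];
}

// The repository only knows about the interface, never a concrete context.
class Repository {
  constructor(private context: IDataContext) {}
  getBySlug(slug: string): ContentRecord | undefined {
    return this.context.query().find(r => r.slug === slug);
  }
}

// A hand-rolled mock: an IDataContext that always returns the same records,
// with no database or Entity Framework involved.
const fakeContext: IDataContext = {
  query: () => [{ slug: "abc", body: "hello" }],
};

const repo = new Repository(fakeContext);
const result = repo.getBySlug("abc"); // the fixed record, straight from the fake
```

Because the fake's behavior is fixed, every test run exercises the repository under exactly the same conditions.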

Writing special implementations of all of the interfaces required to test a class can be tedious and time-consuming. Moq makes this very easy by allowing the declaration and behavior of mock objects to be written in-line using LINQ expressions.

Take a look at line 9 of the unit test for the GetSlugContent method of the repository:

In that one line of code, I am able to declare an implementation of the IDataContext interface that always returns a specific list of records regardless of what query is run. Under any other circumstances, this would be useless, but when I’m testing that my repository layer is faithfully returning the records given to it by the underlying data context, this is perfect.

Moq can do much more; without it, writing effective unit tests for a one-person project like this one would be almost impossible.

Progress

Much of the work over the past several weekends has been invisible — adding a repository layer and setting up unit testing. It may be hard for people on the outside of the project to appreciate the importance of this work, but it will definitely make work easier as the project evolves and things need to be changed.

Hopefully, next weekend I can make some visible progress to keep the stakeholders happy. 😉

Repository Layer

A well-designed stack is like a rainbow layer cake; the layers are well defined and separable.

I added a repository layer to the server-side stack. This is a very important, but invisible addition to the Excido project.

A few weeks ago, I wrote that I really wasn’t happy that the Web API controller could reach all the way down the stack to the Entity Framework layer. I had originally accepted this because I want to use Breeze on the client side, and the default Breeze configuration on the server side is strongly coupled to Entity Framework. I was going to mitigate this in the future with some asymmetric layering (not the hairstyle). Breeze brings a lot to the table on the client side and I didn’t want to give that up, but I also wasn’t comfortable with the layering on the server side.

I knew that Breeze isn’t hard-coded to Entity Framework; in fact, I have used it at work with NHibernate. So I knew that the data access could be reconfigured; I just didn’t know how hard it would be. It turns out that it isn’t very hard at all. The Breeze website has many sample applications using different combinations of technologies, including several using ASP.NET Web API with different data sources. Using these as a reference, I was able to put together my own Breeze Context Provider that is independent of Entity Framework.

With control of the Breeze Context Provider, I was able to insert a repository layer between the Breeze Context Provider and Entity Framework, isolating Entity Framework from the higher layers of the stack.

Currently, my repository layer doesn’t do anything other than pass the calls straight through to Entity Framework, so one might wonder why I went through all of the trouble to put it in there.

A poorly designed stack is like a rainbow swirl cake; the layers are inseparable.

One reason is that neither Breeze nor Entity Framework is written by me, and I have no control over them. If I wrote my program allowing the different layers direct access to each other, they would very quickly become dependent on each other. If one of the technologies I have no control over changes, goes away or otherwise becomes unusable, I wouldn’t be able to replace it without rewriting my whole application.

For example, if, hypothetically, a new version of Breeze were released that had awesome new features but no longer worked with Entity Framework, I wouldn’t be able to use it without also replacing Entity Framework. Replacing two entire layers of the program could prove impossible without a complete rewrite.

Another good reason is that I need a place to put business rules. For example, I want to enforce that a creation time-stamp can never be changed. I also want to enforce that expired content cannot be retrieved and I am going to want to have more than one definition of expired.

Since these rules need to be obvious to the end user, it might make sense to implement them on the client side. That’s actually not a bad idea and a user-friendly UI should make it easy for the users to follow the rules. However, my server-side is sitting on the open internet via a public Web API. I can’t be sure that calls made to my back-end are coming from a web client that I’ve written. The business rules will have to be on the server-side even if they are echoed on the client-side. Also, I may want to write a desktop application that uses the same lower layers as my Web API. If I put the business rules too close to the top of the stack, I’ll have to re-write them for every front end I write.

The best place for the business rules is somewhere that all requests must pass through. That place is my repository layer. The repository layer sees every request and every update that goes to the data layer. Since the upper layers no longer have access to the lower layers, they cannot bypass the repository layer.
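The two rules mentioned above can be sketched as checks inside the repository. This is a conceptual TypeScript sketch, not the real Excido code (which is C#); the `Share` shape, class name and in-memory storage are all hypothetical stand-ins:

```typescript
// Hypothetical share shape; timestamps are plain numbers for simplicity.
interface Share { id: number; created: number; expires: number; body: string; }

// Because every request passes through the repository, it is the natural
// choke point for enforcing business rules.
class ShareRepository {
  constructor(private rows: Share[]) {}

  // Rule: expired content cannot be retrieved.
  get(id: number, now: number): Share | undefined {
    const row = this.rows.find(s => s.id === id);
    return row && row.expires > now ? row : undefined;
  }

  // Rule: a creation time-stamp can never be changed.
  update(changed: Share): void {
    const row = this.rows.find(s => s.id === changed.id);
    if (!row) throw new Error("not found");
    if (row.created !== changed.created) {
      throw new Error("creation time-stamp cannot be changed");
    }
    row.body = changed.body;
    row.expires = changed.expires;
  }
}
```

Note that "expired" is just a comparison against `now` here; keeping that check in one place is what makes it easy to support more than one definition of expired later.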

Dependency Injection

I’m enforcing the separation of the layers with dependency injection through interfaces. Each layer expects an instance of the layer below it to be passed to it in its constructor. The constructor parameters are defined using interfaces rather than classes so that any implementation of the interface can be used and the layer will work.

Figure 1 – Excido Server Side Stack

Figure 1 shows that the Web API controller declares in its constructor that it expects an IBreezeContextProvider. It doesn’t know or care how the IBreezeContextProvider interface is implemented; it just cares that it is implemented. As long as the API controller has an instance of an IBreezeContextProvider, it can do its job.

The class BreezeContextProvider (notice it doesn’t start with an I) is an implementation of that interface, but it similarly needs an instance of an IRepository<T> to do its job. It declares this in its constructor and does all of its work using whatever implementation is given to it.

In this case, the implementation of that interface is my repository layer.

Likewise, my repository layer requires an implementation of the IDataContext interface to work. The Entity Framework DbContext for my project implements the IDataContext interface and can be used by the repository layer to do its job. I can, if I need to, write other implementations of the IDataContext interface and the repository layer would work just as well.

So, suppose that in the hypothetical scenario described above I decided to stop using Entity Framework. If all of the layers were coupled, replacing it would be impossible. With decoupled layers, I need only write a new implementation of IDataContext and the rest of the program will continue working unchanged.

I’m using Unity Container in the Web API project to do automatic dependency injection for me. This allows me to map all of the interfaces in the program to classes in one place. Unity takes care of making sure each layer gets the dependencies it needs. Unity Container was formerly called the Unity Application Block and was maintained by the Microsoft patterns & practices team. It has since become an open-source project.
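Unity's registration API is C# (it lets you bind an interface to a class in one place), and I'm not reproducing it here. As a rough, language-neutral illustration of what a composition root does, here is a hand-rolled TypeScript sketch; the `IGreeter` interface and everything else in it is hypothetical:

```typescript
// A hypothetical abstraction and one concrete implementation of it.
interface IGreeter { greet(name: string): string; }

class PoliteGreeter implements IGreeter {
  greet(name: string): string { return `Hello, ${name}!`; }
}

// The composition root: the one place where interface names are mapped
// to factories that produce concrete implementations.
const container = new Map<string, () => unknown>();
container.set("IGreeter", () => new PoliteGreeter());

// Resolving walks the mapping instead of constructing classes directly.
function resolve<T>(name: string): T {
  const factory = container.get(name);
  if (!factory) throw new Error(`no registration for ${name}`);
  return factory() as T;
}

const greeter = resolve<IGreeter>("IGreeter");
const message = greeter.greet("Excido");
```

A real container like Unity adds much more (constructor-argument resolution, lifetimes, scoping), but the core idea, one central interface-to-class mapping, is the same.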

I got tired of looking at all of the plain text on the Excido web site, so tonight I put on my graphic designer hat and put together a rudimentary logo for it. The logo is very simple; it’s basically just the word Excido with a giant red X. However, I think it stands out and the big red X alludes to the idea of content that automatically gets deleted.

While I was at it, I created a smaller version of the logo that can be used as a favicon, the tiny icon that web browsers show on tabs and in the URL bar. I think this adds a little bit of flash to the site so it’s not so plain.

Continuous build and deployment capabilities are built into both Visual Studio Online and Microsoft Azure, and the two platforms integrate well. I was able to get the Team Foundation Build service that is built into Visual Studio Online to automatically build the Excido solution whenever I check in any changes. I was also able to get it to automatically deploy the new build to Microsoft Azure.

I’m currently deploying directly to the production environment, so what you see at http://www.excido.net/ is always the latest code. When I get to version 1.0, I’ll take advantage of Azure’s “slots”, or staged deployments. These will allow me to deploy to a staging environment where I can verify the deployment and then swap the staging slot with the production slot.

This probably sounds like an advertisement, but the ease of configuration and online availability of the combination of Azure and Visual Studio Online make state-of-the-art, business-class DevOps tools available to everyone, including one-man operations like mine. Even though I’m working alone and only able to work on this project a few days every month, I’m able to keep making what I think is respectable progress.

It’s hard to believe that there are still much larger organizations that spend countless hours and dollars fighting this battle that has already been won.

A few days ago while describing the architecture of Excido I wrote that I was ok with “allowing the Web API to reach directly into Entity Framework in order to take advantage of data access service built into Breeze.” This didn’t sit right with me. I suppose I wasn’t as ok with it as I thought.

I was comfortable allowing it (I thought) because of the benefits that Breeze brings to the client side, like change tracking and caching. The client-side programmer part of me really, really wants those benefits. However, the software architect part of me really, really doesn’t like it. By allowing the Web API and Breeze to reach directly into Entity Framework, I’m tightly coupling the architecture from the web browser all the way down to the ORM. While I can’t imagine replacing Breeze or WebAPI or Entity Framework, tightly coupling the three of them in my code makes my code unlikely to be able to adapt to any changes at all. Sure, it’s unlikely that I will be replacing one of the components, but if a new version of one of them comes out with attractive features, it is unlikely that my code will be agile enough to handle the upgrade.

There is also a security issue. Even though the WebAPI is only expecting calls from the BreezeJS client, there’s nothing to stop anyone else from sending requests to the WebAPI. If the WebAPI has access all the way down to the ORM, malicious users could gain that access as well. While there are no pressing security requirements right now, it is entirely reasonable that users will expect their records to be secure.

The Breeze server component does allow me to add code to inspect and cancel requests before they are passed to the ORM. I could use that to apply some security at that point, but then I would be coupling my business code even more tightly to Breeze. I’m more than convinced that there need to be loosely coupled business and repository layers between the WebAPI and the ORM.

It turns out that Breeze does allow for this. The part of Breeze that interfaces with Entity Framework is called a ContextProvider. Breeze will allow me to write my own ContextProvider that interfaces with my own business layer instead of Entity Framework. It is possible for me to have the benefits of Breeze on the client side without compromising my architecture on the server side.

Unfortunately, writing my own ContextProvider is not an insignificant task. There are several examples of custom ContextProviders, but there is not a simple tutorial on how to write one. It won’t be impossible, but it won’t be easy.

The software architect part of me wanted to get working on this right away. I started laying out the architecture in my head and taking notes. However, the project owner part of me knows that I should finish my current sprint before rushing off to redesign the architecture. I decided to simply add some new Features to the Product Backlog and agree to have that conversation with myself at a later date.

I wanted to simply add an item to the backlog that said “re-work the architecture” or “add layers between WebAPI and EF”. I’m using the Scrum process template in Visual Studio Online and it suggests entering backlog items in terms of “features”. The question I had to ask myself is: “What feature will the software have that requires me to re-work the architecture?”

This was a good question to ask. If the new architecture wouldn’t bring any feature to the software, there’s no reason to do it. I came up with the following features:

Users should only be able to edit their own content.

Users should not be able to edit the creation date of any content.

These features will require me to write some sort of security into the software. I suppose that I could use the hooks that Breeze gives me to inspect changes before they are made, but as the architect I think I can make the decision to use a more complicated design to achieve flexible security.

I suppose I could add the following feature:

The software should remain flexible to changing security requirements.

Different project owners will have different opinions as to whether that is a requirement or a feature. Different project owners will also have different opinions as to whether requirements should be allowed in the feature list. I’ve sort of compromised by wording the requirement as a feature.

By adding the above features, it’s now evident that we need to add the following as well:

There is finally something to look at.

I wasn’t incorrect when I predicted that I might be working on this project in days or even hours per month. It’s been about two months and I think that’s an appropriate length for a “sprint” on this project.

The app is called Excido, which is Latin for “slip out or escape from memory”. The plan is still for the app to become a service where users can share content that automatically expires and then disappears. I’ve secured the domain names Excido.info and Excido.net, both of which I think are exceedingly appropriate for this app.

I set out to do the bare minimum to make the app work and be useful. Currently the app works, but I don’t know if it’s of any use just yet. I added four features to the backlog when I started:

Allow users to create a unit of shared content.

Allow users to specify an expiration on a unit of shared content.

Allow users to download a unit of shared content.

Do not allow users to download expired content.

Of these, only the first is complete. Right now the app will let a user add lines with two fields each to a list. The user can edit the two fields of text and the user can delete one of the lines. That’s it – like I said, “The bare minimum”.

However, a lot of work went into getting to that point. I had to build a basic architecture in order to get this far. The app is built upon an Azure SQL Database created and maintained using Entity Framework code-first database migrations. On top of Entity Framework, I’m using ASP.NET Web API and Breeze. Breeze creates an OData service on top of a Web API controller on the server side and a full-service data access layer on the client side with built-in caching and change tracking.

Typically, I would create a repository layer and a business layer between the Web API controllers and data access layer and segregate them using interfaces, but I’m allowing the Web API to reach directly into Entity Framework in order to take advantage of data access service built into Breeze. Currently the app doesn’t do very much. It shows a list of records and allows the user to edit and update those records. Right now, moving the repository and business layers to the client makes sense. If (when) the program gets more complicated, I can add some more layers on the server side and use asymmetric layering.

On the client, I’m using TypeScript and AngularJS. I decided a while ago that I was not going to use Knockout and Durandal despite the fact that I have a lot of experience with them. I want to learn something a little more modern and ubiquitous. I spent a lot of time trying out Aurelia and I like it very much. Aurelia is the new, next-gen JavaScript client framework from Durandal Inc. It’s definitely cutting edge, but it is far from ubiquitous. In the end, I chose AngularJS simply because of its popularity.

I definitely did not accomplish everything that I thought I would, but I think I’m still going to call this a successful “sprint” because I delivered something tangible.

I think I’ll be able to finish the rest of the current backlog in the next two months, so I’m just going to take everything I didn’t finish and move it into the next sprint. I have a lot of ideas for where to go after that. If I find some time to put them down, I’ll add them to the backlog for future sprints.

When we first started this project, Michael and I had recently become unemployed and the idea was to start a project where we could keep practicing our craft and put our skills on display.

Fortunately for both of us, we each found a new job fairly quickly, but that meant that neither of us had time to work on the project and we mostly abandoned it. However, after recently reading the book Adaptive Code via C#: Agile coding with design patterns and SOLID principles by Gary McLean Hall, I’ve decided that I’d like to give the project a go anyhow. The book reminded me how much I can enjoy software development and if I can’t find that enjoyment at the office, I’ll have to find it on my own.

Because we both have jobs and families and other responsibilities, this project is not going to move quickly, but I am determined to make it go somewhere. My goal is to use the principles of Agile and SOLID to run the project but with an unconventional time scale. Instead of hours per day that we can spend on the project, we’ll be working in days per month or maybe even hours per month. Instead of days or weeks, our sprints will probably be in months.

In any case, the first step is to define the project. In the business world, this task would fall to the client or stakeholder. This is the person for whom the project is being done. Since this is a personal project, I suppose that I am the stakeholder. In fact, I will be playing most of the roles in this project.

The purpose of this project is going to be to produce a web application and service that allows users to share content that expires and then disappears. The content can be links, files or graphics. Each share can be set to expire after a certain number of downloads or after a certain amount of time.

That is it. I have other ideas for other features (like password protected content), but those will be implemented later. The idea is to get something working and delivered as quickly as possible, so for this first release, we are only going to do the bare minimum to make the app work and be useful.

The next step is to turn this description into features, user stories (or backlog items) and tasks.

Features are high-level things that the software can do. For example, in this project one feature is that users can share content.

User stories are the things that a user does to use a feature. A product backlog item is something the software must do to allow a user to use a feature. These are two different ways of looking at the same thing. Some Agile methodologies, like Scrum, use “backlog item” while others use “user story”. I’m going to try to closely follow Scrum, so I’ll use “backlog item” from now on. An example of something the software will need to do to allow users to share content is to create a share.

Tasks are the individual things that the developers must do to implement the backlog items. An example of something a developer must do to implement the creation of shares is to create a table for storing shares.

A good way to start building these is to list the verbs and nouns:

Verbs: share, expire, disappear, access

Nouns: user, content, link, file, graphic, share, time

We can use these to help build lists of features and backlog items:

Features

Content can be shared.

Shared content can be set to expire after a certain amount of time.

Shared content can be set to expire after a certain number of accesses.

Shared content can be accessed by other users.

Backlog Items

Allow users to create a unit of shared content.

Allow users to specify an expiration on a unit of shared content.

Allow users to download a unit of shared content.

Do not allow users to download expired content.

For each of the backlog items, the developers must complete a number of tasks.

Tasks

Allow users to create a unit of shared content.

Create a table to store units of shared content.

Create a page on which users can add units of shared content.

Allow users to specify an expiration on a unit of shared content.

On the content creation page add controls for specifying an expiration.

Allow users to download a unit of shared content.

Create a Web API that serves previously created content.

When a user creates a unit of shared content, return a URL that can be used to retrieve the content through the Web API.

Do not allow users to download expired content.

When retrieving previously created content, do not serve content that has expired.

We now have our features, backlog items and tasks. The next step will be to estimate the amount of work each will take and then prioritize the tasks. I’ll be using Visual Studio Online to manage the project. We’ll take a look at that in the next installment.