September 22, 2011

After doing a few spikes, it’s time for some actual work on my task-based Silverlight application. In my previous post I indicated that I wanted to work in an ATDD and TDD fashion. The reason is the automated feedback you constantly get while implementing your code, plus the fact that you eventually end up with a big set of automated tests which helps prevent regression problems.

ATDD & TDD

If you know TDD then the phrase ‘Red, Green, Refactor’ will be very familiar to you. If you work with ATDD then you will have a very similar flow. I tried to indicate in the following picture how I see ATDD and TDD work together:

So you start by writing a failing acceptance test (or multiple, if this is handled by a product owner together with a tester). Since I will be doing this on my own, I will write those tests myself. I will use SpecFlow to write the acceptance tests, since it integrates nicely with Visual Studio and MSTest. Next I’ll create the code to make the acceptance tests run the actual tests. The failing acceptance test will describe the expected behavior of the piece of functionality I will be adding. Next I’ll implement the functionality in the familiar TDD fashion, using the ‘Red, Green, Refactor’ principle. When the acceptance test(s) eventually succeed, I’ve got some new functionality in my application which I know works, because I started out by describing the functionality using SpecFlow and those tests now pass. The beauty of this is that the automated tests also become a great way of preventing regression problems, because every feature you add is covered by automated tests.
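To give an idea of what such an acceptance test looks like, a SpecFlow feature could look something like this (the feature and steps are just an illustrative example, not from my actual application):

```gherkin
Feature: Add product to order
  As a customer
  I want to add a product to my order
  So that I can buy it

Scenario: Adding an available product
  Given a product "Widget" with a stock of 10
  When I add 2 "Widget" to my order
  Then my order should contain 2 "Widget"
  And the remaining stock should be 8
```

SpecFlow generates a test class from this file, and each Given/When/Then line is bound to a step definition method that drives the application code.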

Continuous integration

I’ve got a test TFS server running, so I will be using this to run a CI build on every check-in. This build will compile all code, run the unit tests, and I also want it to run all the acceptance tests. To be able to do this, I will have to make sure that all tests run very quickly. If the tests take several minutes or longer to execute, that will be a show stopper, since history has proved that when tests take too long, they get skipped and you lose the advantage that the tests give you. To keep the tests fast, they will have to run completely in-memory. I’ll get into the decisions I make related to that in a future blog post.

Continuous deployment

With all the testing in place, there is the possibility to try out continuous deployment. Lately there is a lot of talk about this and I really see the advantages of it. To get this working, one of the major prerequisites is to have a good automated testing strategy. I described some of that testing already: the unit tests and the acceptance tests. Next to that, I also want to add automated UI tests to test the behavior of the actual user interface. These tests will not test any business rules or logic, since those are tested by the acceptance tests. The last category of relevant tests would be integration and performance tests. These kinds of tests could cover the integration with a database, but also with other linked systems. I probably will not use these kinds of tests a lot (only for the integration between all the layers of the application, including the actual database), but whenever you’re facing a more distributed environment, including links to other systems, those links should also be covered by automated integration and performance tests.

The following picture gives an idea of how I would like to have the automated deployment working.

So the automated build consists of several steps. Whenever the solution compiles and all unit tests pass, it will be deployed to the development server. When all the acceptance tests also succeed, I want it to be deployed to the test environment. Then the automated UI tests will be run against the test environment. If they also pass, the solution will be deployed to the acceptance environment, where the performance and integration tests will be run. After that, the application should be ready for deployment to a production environment.

For my application, the deployment to the different environments will not be that different, since I don’t have several different servers to deploy the application to. However, in an enterprise scenario, there will be a lot more challenges to get your deployment completely automated.


September 5, 2011

I’ve been very interested in stuff like CQRS and task-based user interfaces lately, but unfortunately I can’t work with them on my current assignment. So I decided to create a private project in which I can try out all the related technologies, techniques and best practices I can find. This is the list of tools and techniques I’m going to use:

Techniques / methodologies

CQRS I do like the idea of Command Query Responsibility Segregation, which I first read about in posts by Greg Young on CodeBetter. If you want more info about it, read the posts by Greg Young; Udi Dahan also has some interesting blog posts about it. Other great sources of information are cqrsinfo.com and the DDD/CQRS Google group.

Task based UI The idea behind a task based user interface is of course the focus on the user: making the user interface in such a way that working with it is easy from the perspective of a user. It also works great together with CQRS and is almost a prerequisite to doing DDD all the way.

TDD Test driven development… what more is there to say about that? If you don’t know what it means, go read about it!

ATDD This one is not as well known as TDD, but it stands for Acceptance Test Driven Development. Unfortunately I haven’t had the chance to work on a project at a customer where it’s used, but I’ve been trying it out in an MVC project I’ve been doing and I’m convinced that it definitely helps create better software and prevent regression problems.

Tools

Caliburn.Micro I experimented with Prism in the past and it was pretty big to get started with. A colleague of mine talked about Caliburn.Micro several times and I wanted to give it a spin to see if it is easier.

AutoFac I need an IoC container in the Silverlight client and I know of two: Unity and Autofac. Some recent blog posts I’ve read about their performance made me decide to ditch Unity and give Autofac a spin.
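As a quick illustration of the Autofac style (the service types here are made up for the example):

```csharp
using Autofac;

public interface ICustomerService { }
public class CustomerService : ICustomerService { }

public static class Program
{
    public static void Main()
    {
        // Register the concrete type against its interface.
        var builder = new ContainerBuilder();
        builder.RegisterType<CustomerService>().As<ICustomerService>();

        using (IContainer container = builder.Build())
        {
            // Resolve the service through the interface.
            ICustomerService service = container.Resolve<ICustomerService>();
        }
    }
}
```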

StructureMap I worked with StructureMap on a previous project and I really liked it. Next to that, it looks pretty good in performance comparisons, so I want to use it as the IoC container in the web application project hosting the services.

Silverlight In my opinion Silverlight is a very logical choice when implementing a LOB application which should be available via the web, especially when there is no need to target a very wide range of users with all kinds of operating systems and browsers. Otherwise I would probably have chosen ASP.NET MVC with jQuery, but Silverlight is the best match for the application I’ll be building.

WCF I will be using web services built with WCF for the communication between the Silverlight client and the web server, to get the data from the SQL Server Express database. I will be looking at both plain WCF services and WCF Data Services.
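A plain WCF service contract would look something like this (the contract and the DTO are illustrative, not my actual service):

```csharp
using System.Collections.Generic;
using System.ServiceModel;

// DTO returned to the Silverlight client (made up for the example).
public class ProductDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

[ServiceContract]
public interface IProductService
{
    [OperationContract]
    IList<ProductDto> GetProducts();

    [OperationContract]
    ProductDto GetProduct(int id);
}
```

The Silverlight client would then call this through an async service reference, since Silverlight only supports asynchronous service calls.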

SpecFlow I’ve been trying out SpecFlow as the way to write my ATDD tests using the Gherkin syntax. It integrates nicely with Visual Studio, so I will be using that again.

When looking at the architecture of the application, it will have a lot of similarities with the Silverlight Cookbook application on CodePlex, which my colleague Dennis Doomen created.

I will update this post with links to other blog posts when I have some experiences worth sharing with any of these points.


July 25, 2011

Since it’s my vacation, I have some more time to work on my home project. I’m creating a web shop in ASP.NET MVC 3 and I’ve got a test TFS server running. I use this project to get some hands-on experience with ASP.NET MVC 3, since the project I’m working on is still WebForms but I see more future in MVC. I’ve implemented a basic site where you can view a list of products and some more details about them.

So my next step is that I want a CI build on the TFS server. I configured it and it seemed pretty easy, but then I got the following error in my CI build (which I did not get when building locally): “/temp/global.asax(1,0): error ASPPARSE: Could not load type …..”.

Eventually I found the problem. In the log file I saw the call to aspnet_compiler, which is triggered because I set the property to build the views (MvcBuildViews). The problem is with the path that is supplied to the aspnet_compiler command. I eventually fixed it by adding a condition to the AspNetCompiler build task in my web application project file. It now looks like this:
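A sketch of what the conditioned task could look like in the project file (the exact attributes and the IsDesktopBuild property name are illustrative):

```xml
<Target Name="MvcBuildViews" AfterTargets="AfterBuild"
        Condition="'$(MvcBuildViews)'=='true' And '$(IsDesktopBuild)'=='true'">
  <!-- Only precompile the views for local (desktop) builds; the CI
       configuration does not define IsDesktopBuild, so this is skipped. -->
  <AspNetCompiler VirtualPath="temp" PhysicalPath="$(WebProjectOutputDir)" />
</Target>
```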

To get this working I defined a build property IsDesktopBuild, which I set to true in the Debug build configuration (the configuration I build locally). I also defined a Debug CI configuration which does not contain this build property. Now I’ve got my CI build up and running. The next step is to add some SpecFlow acceptance tests to see if they help me with controlling the quality of the code.


June 28, 2011

We’ve been using the construction of a wrapper class around DateTime for unit testing, which Ayende described here.

Today one of my colleagues discovered that it is possible to run multiple unit tests in parallel. How to do that is described here. However, the approach we used with the SystemDateTime class no longer worked, since it uses a static field, which caused problems with two tests running at the same time. I tweaked it a bit to fix this, so I wanted to share the code we use for it.
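A sketch of the kind of tweak I mean (names and details are illustrative, not the exact shared code): store the override in a [ThreadStatic] field, so tests running in parallel on different threads no longer affect each other.

```csharp
using System;

public static class SystemDateTime
{
    // Each thread gets its own copy of this field, so parallel tests
    // can set different fake times without interfering.
    [ThreadStatic]
    private static Func<DateTime> now;

    // Falls back to the real clock when no override is set on this thread.
    public static DateTime Now
    {
        get { return now != null ? now() : DateTime.Now; }
    }

    public static void Set(DateTime fixedDateTime)
    {
        now = () => fixedDateTime;
    }

    public static void Reset()
    {
        now = null;
    }
}
```

A test would call SystemDateTime.Set(…) in its initialize and SystemDateTime.Reset() in its cleanup, while production code keeps reading SystemDateTime.Now.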


May 5, 2011

In the project I’ve been working on, we had to deal with a design agency which creates a lot of HTML markup that must stay exactly the way they created it. Next to that, the product should eventually work in SharePoint, so we have to use WebForms :-(. I’ve been developing a framework that looks very similar to the FrontController pattern, but with a few differences; for instance, the Commands (or ViewActions as we call them) will not return an ActionResult, but our controller will do this, since there is a lot of knowledge there around navigation between forms etcetera. But that’s not the subject of this post.

One of the things we’ve also been working on is making the transition from the designers’ deliverables to the ASP.NET code as small as possible, so that changes made by them can easily be merged into our controls. The designers have been working on control-like pieces of HTML which will be reused to compose pages. Nothing strange here from a developer’s point of view, so we made UserControls out of them, each the equivalent of a control-like piece of HTML markup. These UserControls had to be used in several portal web projects. So there were a few options for reusing the controls:

Create CustomControls for each control-like element of the designers. This was not an option because the HTML delivered by the designers was sometimes very large (over 300 lines in one control), which would make it a nightmare (and a laborious task) to keep the CustomControls in sync with their markup files.

Use the ‘Add as Link’ option to add the user controls to the portal projects. This option was killed by the specific software factory that we had to use, since it didn’t support this construction, which made several nightly builds fail.

Copy each UserControl to each portal project. Yeah right… not an option at all, of course.

Convert UserControls into CustomControls. OK, that was the only option left worth investigating.

So I had some work to do. This post by David Ebbo was a very good starting point on how to do it. So I started out with a small demo project where I did all the steps outlined by David, and I got that part working. However, there were some things that were not the way we wanted them.

The fixednames option was not handy for us, since we want to have one resulting dll; otherwise there would be way too many dlls to reference in the portal projects.

The name of the assembly of the controls was something like App_Web_BLABLA_ascx.dll, which just looks ugly and might change over time, making it useless for us in an automated scenario.

The functionality should be integrated in the build of the project, to make it work the way the other projects using the software factory work.

So I started out to fix these points. The first two points I wanted to fix were to merge the assemblies that were created and name the result the way I wanted. I started out using ILMerge to do this. In my demo I had created two very small user controls, and after the ILMerge it seemed to work. I then added a larger user control and tried that too, but that one just did not work correctly: only a small part of the markup was visible and that was it. I did a lot of googling and eventually found out that when you go over a specific number of characters in your user control (I can’t remember the actual amount anymore :-(), resources are used in the generated code, and then ILMerge breaks it. After being stuck there for a while, one of my colleagues mentioned that I could try aspnet_merge instead. I didn’t know it existed, but I tried it out and it worked like a charm.

So I only had to fix the 3rd point mentioned above. The software factory has a built-in mechanism to copy assemblies created by the project to a specific location every time you build the project. That led me to the idea of adding the aspnet_compiler and aspnet_merge calls as a post-build command, since the magic of the software factory was done using an AfterBuild MSBuild task, so I could not use that. Eventually my post-build command looked like this:

First I remove the directory where I’m going to publish the web app (using aspnet_compiler).
The next step is also related to the way the software factory works. The resulting assembly of the project has to have exactly the same name, otherwise it won’t be copied. So I delete the dll created by the project, otherwise my aspnet_merge step would create an assembly which does not have exactly the name I wanted. The consequence is that code-behind files can’t be used, but that was not a problem here.
Then I use aspnet_compiler to precompile the site and write the resulting output to a temporary folder.
Then aspnet_merge is executed to merge the created App_Web assemblies into one assembly with the same name as the assembly that would have been created by my original web application project. Note that I’m also re-signing the assembly to make sure it stays strong named.
The last step is to copy the merged dll and pdb files back to the original location.
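Sketched as post-build commands, the steps above could look roughly like this (the paths, the key file and the exact switches are illustrative, not my actual commands):

```bat
rem 1. Remove the publish directory from the previous build.
rmdir /s /q "$(ProjectDir)Precompiled"

rem 2. Delete the assembly the project just built, so the merged
rem    assembly can take over its exact name.
del "$(TargetPath)"

rem 3. Precompile the site to the temporary publish directory.
aspnet_compiler -v / -p "$(ProjectDir)." "$(ProjectDir)Precompiled"

rem 4. Merge the generated App_Web_* assemblies into a single assembly
rem    with the original name, re-signing it to keep it strong named.
aspnet_merge "$(ProjectDir)Precompiled" -o $(TargetName) -keyfile "$(ProjectDir)key.snk"

rem 5. Copy the merged dll and pdb back to the original output location.
copy "$(ProjectDir)Precompiled\bin\$(TargetName).*" "$(TargetDir)"
```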

With this small set of post-build commands I eventually got the solution working, and it is now being used in multiple solutions without problems. I hope some of you can also benefit from this. Please let me know if it helps.


May 2, 2011

Today was the second and last day of the DevDays. I’ve been attending some diverse sessions which I wanted to write something about.

A Developer’s Tour Around Expression Blend – by Mike Taulty

I haven’t had much time lately to do anything with Silverlight unfortunately, so I thought this would be a good opportunity to get a refresher on it. It became clear that you can do a lot more with Expression Blend than I thought. Mike made it clear that it is not just a tool for designers: there are parts that are very useful for a developer too. One other thing that became clear during the session is that you need a high-resolution screen when working with Blend, which was unfortunately not the case with the projector. Especially working with resources and animations is way easier in Blend than in Visual Studio, so I’ll definitely give it a try when I’m going to do some Silverlight work again.

I thought this would be an interesting talk, given the noise around HTML 5 and the browser plugins. What became clear in the talk is that HTML 5 can do pretty much everything, but that it’s still a matter of how browsers implement specific features, even when they follow the HTML 5 specification. What Jeff unfortunately did not talk about was the usage from a developer’s perspective, and especially the scenario of a business application. I believe that Silverlight has very big potential in LOB applications, so it was kind of a bummer that this was not covered. One thing I was not very aware of is the power of SVG, which Jeff showed a bit of. I guess it can be helpful, so one more thing to check out.

Since I’ve been working several months on identity-related stuff, I wanted to attend a session by Vittorio about claims-based authentication. I had heard a bit about ACS, so this looked like a good session to attend. It was a very interesting session and ACS delivers a very interesting set of functionality. I’ve been working on a web shop application lately and I wanted to use at least Live ID to authenticate, and maybe others like Google and Facebook as well. The functionality of ACS looks exactly like what I want; however, I’m not sure about the costs. At least it’s something very interesting, so I’ll definitely check it out.

Introduction to Razor – by Alex Thissen

I will definitely do some work on an ASP.NET MVC 3 project and I will use Razor as the view engine, so this was another session which could be interesting. The basics were already known to me from blogs by Scott Guthrie and others. The information about templates was interesting, so I’ll definitely check that out. At the end, Alex briefly mentioned WebPages as the base for pages with Razor, but unfortunately it was not clear to me what he wanted to point out, so I’ll have to dig into that myself.

Overall the DevDays were very interesting. There were several very good speakers, so that was great. At some points the location seemed a bit small for the vast amount of developers attending, so maybe a larger location would be better next year. I’ve had a few nice days again, so I hope I can attend the DevDays next year too.


April 28, 2011

Today was the first day of the DevDays. It’s been an interesting day and I got inspired again to try some stuff out.

Keynote

The keynote was a talk with some good speakers, with Scott Hanselman as one of them. He did a great talk about the evolution we’ve had in the Microsoft development platform and, as usual with a talk by Scott, it was great fun.
Next up was Wade Wegner with a talk about Azure. It’s clear that the cloud is a major topic right now, so it’s obvious that part of the keynote was dedicated to it. It was nice to see a demo, although it was too bad that the internet connection was poor, so there were several hiccups.
The keynote continued with Ben Riga, who showed several of the improvements to be expected in the new Mango update of Windows Phone 7. Since I’ve already watched the keynotes of Mix11, a lot of what was shown and demoed was pretty familiar.
Last but not least, it was Rob Miles’ turn. Unfortunately, there was not much time left for him, so he had to go into fast-forward mode. I did like the stuff he showed and the way he presented it, so it would have been good if he had had a bit more time.

What you as an ASP.NET developer should know about jQuery – by Gill Cleeren

Since the keynote took too long, I was a bit late for the session, so it had already started. It was a talk about the basics of jQuery and touched on a lot of the basic stuff. Gill showed a lot of demos of how to do this, so that was very helpful. After working a while with ASP.NET MVC and another framework I developed which abstracted the entire ASP.NET WebForms stuff, I was already convinced to use WebForms and ASP.NET AJAX as little as possible. The existence of jQuery and all the possibilities I saw today made me even more convinced. I had already done a few small things with jQuery, but after this session I like it even more.

Reactive Extensions for .NET for the rest of us – by Mike Taulty

After a nice lunch it was time for a talk by Mike Taulty about the Reactive Extensions for .NET. I had heard a bit about them but was not sure how I could use them. Mike is a good speaker and he made clear HOW you can work with the Reactive Extensions. The problem for me is that it’s still not clear to me WHEN I would use them. My head is still spinning a bit, so I’m not going to use them for now. I hope to have some time soon to dive a bit more into them and get a clear picture of the situations where I could use them. It seems to be a pretty powerful framework, but it’s not something I’ll be using soon, I guess.

NuGet in depth: Empowering Open Source on the .NET platform – by Scott Hanselman

Another talk by Scott, which I enjoyed a lot. The session was also very crowded and I’m curious whether that was because of Scott or because of the subject; I guess both. Scott showed very well how powerful NuGet is and how easily you can actually create a NuGet package and publish it for everyone to use. I already had some ideas in mind on how I could use it, and this talk gave me a better picture of how to do it.

This was the last session I visited today. I wanted to know a bit more about this subject, since I had been testing it a bit to compare it with Selenium. I had no real knowledge about it at that time, so that might have influenced the outcome a bit. I guess it can be useful, especially in combination with testers who use the Test Manager. The session made me think about it again, and I’ll have to spend some time on it to make up my mind about whether I want to use it or not.

It’s been a good first day and I’ve seen some stuff which gave me more things to do and check out, even though I still don’t have time for it. I’m looking forward to tomorrow, to get some more inspiration for some experimenting at home.


April 27, 2011

Today was the pre-conference day of the DevDays 2011. I’ve been attending the ALM track and have had some interesting insights.

Adapting SCRUM – by Rene van Osnabrugge

The first part of the day was a session by Rene van Osnabrugge. For me personally there was not much new information. It was a lot of information about SCRUM and a small bit on how to use it with TFS. Since I’m already very familiar with SCRUM, I would have liked to see some more demos on how to integrate this with TFS and the reports etc. that are delivered as part of the SCRUM template of Microsoft. Guess I’ll be doing that some evening when I’ve got some time left.

Improving the developer workflow – by Dennis Doomen

The second session was hosted by my colleague Dennis Doomen. Since I worked together with him on a project for quite a while until a year ago, and will soon be joining him again on another project, most of the things he mentioned were not new to me. He pointed out some adjustments he made to the team project template (you can find that info also here). He also took a quick look at code metrics and code analysis. I do agree with one thing he mentioned about those: you can force your team to do all these things before checking in using check-in policies, but it will result in fewer check-ins. The good thing about checking in regularly is that the amount of merging (hopefully) will be less. So I can only agree with him not to be too strict on all the check-in policies you set.
He also mentioned a lot of patterns he uses, which can be very helpful when developing software, but I stick to my opinion that you should be careful in the way you use them. I’ve seen software where a lot of patterns were used, but the software was almost unreadable because of that. In most cases I will prefer not following a pattern to the letter if that makes the code more readable. Of course the abbreviations like SOLID, DRY, etc. were also mentioned.

Adopting Continuous integration – by Ewald Hofman

After the lunch break Ewald Hofman did a talk about Team Build and continuous integration. This was the talk I was most interested in, since I have done several things with Team Build in the 2008 edition, but I haven’t played with the 2010 edition much. Of course gated check-in was one of the topics that passed by, since it was a new addition in 2010. One important thing he mentioned to keep in mind is that gated check-ins can be very useful, but there is a catch. When checking in, changes are not immediately in source control; they are shelved. If you continue working on another task which touches the same file and the build fails, it gets difficult to keep track of which changes were new (after the check-in) and which were part of the check-in. Especially when the build takes a long time this can get tricky, so it was good that he mentioned this. Two practices were mentioned, which are pretty logical:

Define build(s) for continuous integration or gated check-in, but make sure those builds are fast: they should only compile your code and run fast unit tests.

Define nightly build(s), which may take longer and be more extensive, for example running code analysis, integration tests, etc.

Another interesting part (at least in my opinion) was the part about customizing the build workflow and adding custom build activities. It was good to see that it’s pretty easy to customize the flow (since it’s WF, at least when you’re used to the WF designer) and also to add custom activities.

Agile testing with VS 2010 – by Marcel de Vries

Last but not least, Marcel de Vries did a talk about testing in an agile team and how VS 2010 can support this. I liked the fact that he showed a lot during demos, so I’ve got an idea of how to use the VS 2010 client for testers. I saw a few things about it last year too, but I haven’t had any more time to look at it, so it was a good refresher. During the talk it became clear that it’s a great tool for testers and developers, since it can support the developer in fixing bugs found by the testers. On the other hand, it was clear that there are a few things that don’t work as smoothly as you would like, so I’m very curious about the next version of this client for testers.

Altogether it was an interesting day. I had hoped that we would go a bit deeper into the capabilities of the VS 2010 ecosystem and how it can support agile teams, so I hope that if there is another pre-conference next year, it will be more about TFS and hopefully about some new features of the vNext of Visual Studio and Team System.

I’m also looking forward to tomorrow, day 1 of the DevDays. There are some interesting topics, like Reactive Extensions, jQuery and NuGet, so I hope to learn a lot tomorrow. I’ll try to write another post on my experiences of day 1.


March 22, 2011

A few days ago a colleague of mine was deploying a patch for an application to several servers. All assemblies are located in the GAC, so those assemblies had to be updated. The application runs on several frontend servers, so the patch had to be deployed to every server. The following picture shows how this was done.

The assemblies were copied to each of the Windows Explorer windows using drag & drop. However, the issue was still present on most of the servers. An application pool recycle did not help; even an IIS reset did not fix the problems.

So what was the problem? The files were copied to the other servers using Windows Explorer windows which were opened like this: \\SERVER\c$\windows\assembly. The problem, however, is that this is a shell extension which makes the GAC look nice in Windows Explorer. It does not show the actual directory structure. If you check the GAC using a dos prompt it looks more like this:

This Windows shell extension was the cause of the unexpected behavior. You would expect \\SERVER\c$\windows\assembly to point to the GAC of the remote server; however, this is not the case. It just displays the contents of the local assembly cache again. One of the other colleagues had run into this same issue once before, so he already knew about this gotcha and could point out the mistake.
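From a dos prompt, the real structure looks something like this (the exact folders vary per machine and per installed .NET version):

```
C:\Windows\assembly>dir /b
GAC
GAC_32
GAC_MSIL
NativeImages_v2.0.50727_32
temp
tmp
```

So instead of the flat list of assemblies the shell extension shows, the GAC on disk is a set of GAC_* folders with a subfolder per assembly name and version.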

Another lesson learned. When you are using the GAC (is it really, really necessary?), make sure that you install the assemblies on the machine itself, or use a tool that you’ve verified can install assemblies in the GAC on a remote server.


February 16, 2011

In my previous post I mentioned that I had been working on a C# formatter for Selenium which uses MSTest as the test framework and FluentAssertions to make the tests more readable. After a few hours last Saturday night I had something working which looks pretty OK for now. I guess I’m not the only one wanting to generate this kind of code, so I wanted to share what I’ve got.

I started off by taking the source of the C# formatter which is installed together with the Selenium IDE. Note that there is a bug (maybe more, but I ran into this one) in the source, which results in a lack of options to set in the IDE. The problem concerns the configForm setting, which is done inside the options. This is pretty strange if you look at it, because that configForm is used to generate the UI for the options (which options are available can be found here). The configForm should be set outside the options. I tried to modify the source, but that’s not possible; it’s read-only. If you still want to use the NUnit formatting as done in the formatter, you can add another formatter and add the modified source. You can download the modified source here.

Now on to the formatter I wrote. I wanted a few things:

MSTest as test framework

use FluentAssertions for readability

Start the Selenium server when starting a test run (and close it when the tests finish)

There is a description here on how to start creating a custom formatter. However, I was lazy, so I copied the source of the existing C# formatter and modified it to accommodate my needs. First of all I fixed the problem with the options, which I described above.
Next, I changed the assertion methods to use the FluentAssertions syntax.
I also updated the generated header and footer, since I don’t want all the initialization of a Selenium instance etc. to be done in every TestInitialize; it takes several seconds. I changed it to be done in a central location.

The biggest change I made was adding formatting for a test suite, which is not available in the default C# formatter. In the code for the test suite, I added methods for AssemblyInitialize and AssemblyCleanup. In these methods, I start and tear down the Selenium server process and the Selenium instance. This uses a static Selenium instance, so it will cause problems if you want to use it in a multi-threaded environment, but for now it matches my needs. The complete source of the formatter can be downloaded here.
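To give an idea of the generated output, a test produced in this style could look roughly like this (the page, locators and expected text are made up for the example):

```csharp
using FluentAssertions;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Selenium;

[TestClass]
public class ProductSearchTests
{
    // Shared static instance, started once in AssemblyInitialize instead
    // of in every TestInitialize (the field and page details are illustrative).
    private static ISelenium selenium;

    [TestMethod]
    public void Search_shows_matching_products()
    {
        selenium.Open("/products");
        selenium.Type("id=searchBox", "widget");
        selenium.Click("id=searchButton");
        selenium.WaitForPageToLoad("30000");

        // FluentAssertions replaces the Assert.AreEqual-style calls.
        selenium.GetText("id=resultCount").Should().Be("1 product found");
    }
}
```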

I hope some of you find this helpful also. If you have some good suggestions for the formatter, then please let me know.

Edit 22-02-2011: The links to the sources were incorrect. I have updated the links, so you can download the files from my SkyDrive.