A few days ago I was asked to review Spire.Doc from E-iceblue, which is a .Net component that lets you manipulate Word documents programmatically. Since I don't use Word documents in any of my projects, I was unable to test Spire.Doc in a real world situation. However, I did install it and looked at the samples.

The first pleasant surprise came right after the installation finished. The installer opened a window inviting me to take a look at the samples. I think this is very cool, since the first thing I usually do after installing something is look at sample code and copy it into my own project. The program that displays the samples (called SampleCenter, available from the Start menu) contains a brief list of the main features, each with a short description, a link to sample code (C# and VB.Net included, all VS versions from 2005 to 2012 supported), and a button to launch a demo (compiled from the sample code). The only thing I didn't like was that the link to the sample code opened the corresponding folder; I would prefer to see the relevant portion of the code open in the same window.

I didn't try to use it in my own project, but judging from the samples, the API is pretty straightforward. I don't know if they support some uber-professional Word features that nobody ever knows or cares about (like, automatically coloring stock symbols in red whenever they go down on the stock exchange), but they support all the good old stuff, including setting up the document properties, printing, and converting to PDF, RTF, and TIFF formats (and some others).

The main reason for using a tool like Spire.Doc, as far as I can see, is that you don't need Office installed; and even if you do have it, Spire.Doc is a pure .Net component that does not rely on that funny thing called Automation. Hence you can use it on, e.g., Asp.Net sites, or in other scenarios where performance and robustness are important (although, as I mentioned above, I didn't have a chance to test either).

I've been struggling for some time to make NuGet work for my online .Net IDE project, Chpokk. The good news is that there turned out to be a command line tool, NuGet.exe, which can be used on a build server, so I thought I'd just use its functionality. The bad news is that it only downloads packages, without modifying the project. Then the good news is that there's the ProjectManager class, which can handle the required modifications to the project. And of course there's the bad news -- while this guy downloads the packages, it doesn't copy the unzipped files from the temporary location to the package folder. Fortunately, the final news was good -- the ProjectManager class proved to be extensible enough to make it work. Although I haven't tested some advanced stuff, like ps scripts or config transforms, it works for most situations, and extending it would be pretty simple.

Most of the work is done by the ProjectManager class. I had to add three extensions, though. First, in the PackageReferenceAdded handler, I make sure that the assembly files are saved to the install path. Second, I extend the MSBuildProjectSystem class in order to add the content files to the project. Last, I subclass the LocalPackageRepository class. This class is used, in particular, for checking whether a package is already installed; the stock implementation just checks that either a *.nuspec or a *.nupkg file exists.
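To illustrate the third extension, a subclass might look something like the sketch below. This is not the actual Chpokk code: it assumes NuGet.Core's LocalPackageRepository exposes an overridable Exists(packageId, version), and the folder-checking logic is my own guess at the idea described above.

```csharp
using System.IO;
using NuGet;

// Sketch only. Assumption: LocalPackageRepository.Exists(string, SemanticVersion)
// can be overridden in NuGet.Core.
public class UnpackedAwarePackageRepository : LocalPackageRepository {
    public UnpackedAwarePackageRepository(string physicalPath) : base(physicalPath) { }

    public override bool Exists(string packageId, SemanticVersion version) {
        // The stock check only looks for *.nupkg/*.nuspec files; additionally
        // require that the unzipped package folder is actually present.
        var packageFolder = Path.Combine(Source, packageId + "." + version);
        return base.Exists(packageId, version) && Directory.Exists(packageFolder);
    }
}
```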

You can find a working project at https://github.com/uluhonolulu/BlogSamples/tree/master/NuGetSample. Download it and build the console executable; when run, it installs the NUnit package (or another one, if you pass an argument) to the "testPackages" folder and adds a reference to the project.

If you are a .Net developer, there's no way you've never used, or at least heard about, NuGet. Unless you use no external dependencies at all, this package manager is a must-have tool. And, if you want to learn it, you can spend a day or two reading various blogs, with lots of repetition and stale information, or you can buy this book, NuGet 2 Essentials (note: this is not an affiliate link), and have it all in one place, nicely organized.

Let me help you make the right choice then.

The book has it all: from the first click on the "Manage NuGet Packages" menu item, to writing a custom PowerShell installation script for your own package, to installing a private NuGet gallery on your own server. Sometimes you are led by the hand, with detailed step-by-step instructions, each step explained along with the motivation behind it. Sometimes you are presented with a choice, with the pros and cons of each option laid out. Because of this, the book never seems too advanced nor too primitive. The flow is also pretty natural: first we read about a particular problem a developer has, then the solution that NuGet offers, then we dive into the details. So, you can choose your level of involvement -- for example, read the overview now, leaving the implementation details for the time when you need them.

This is why, although the book is over 100 pages, you shouldn't be afraid that it will steal a lot of your precious time. It is very easy to pick just the stuff you need and leave the rest for when it's needed. "NuGet 2 Essentials" has something new for every developer, from a student learning an essential Visual Studio plugin, to an OSS developer preparing her own package for publishing, to a team manager setting up a private NuGet feed.

As I wanted to write a balanced review, I tried hard to find something negative about the book. The only thing I managed to come up with is that it could add a few words about NuGet's predecessors (OpenWrap, Nu, and Horn). However, I must admit that this would serve only a theoretical purpose, which is not essential for a practical book like this.

In other words, this book is perfect. If you need to learn NuGet on any level, go get it.

Whenever you need a user to upload something to your application, you usually put a big shiny "Upload" button right in the center of your Web page. We've all been trained to do it -- input type=file and all that, just like in the 80's.

Fine.

Except that it's not the glorious 80's anymore, and most users keep their stuff in the cloud, where our beautiful Upload button can't reach them. You have to do better than that.

Time to move on

This is why I decided to add Dropbox support to the online .Net IDE I'm building. Naturally, if you want to edit your code online, you're probably keeping it online somewhere. Dropbox was the first really cool cloud storage, so I decided to add support for it first.

Fortunately, there's a really cool .Net wrapper for it, so I just added a few things that made sense, and also created a graphic (this might be the very first post with a graphic on this blog), so that you get a clear picture of what's going on. The wrapper is called DropNet; you can get it from NuGet, and there's a brief how-to here. The only problem is that the main Client object should be somewhat "prepared" before it can be used -- namely, it should get an OK both from the end user to access her Dropbox folders, and from Dropbox itself. Having a human factor right in the middle of the process makes it essentially non-automatable (in my own tests I would put a breakpoint, perform the manual part, then continue running the test), and a little bit complicated to grasp. In addition, the DropNet wrapper doesn't "know" which stage of the process we are currently at. So [I heard you like wrappers, so] I wrapped this wrapper in my own wrapper, called it DropNetWrapper, and published it (together with a couple of extension methods that let you download/upload a file/folder more easily) on GitHub, just to boost my ego a little bit.

It's kinda hard to explain all the dancing that has to be done before you can actually use it, so I'm putting the above-mentioned picture here (you may laugh, but I actually installed PowerPoint in order to draw it).

Here's the legend:

1. The main page calls the GetAuthorizeUrl method of the DropNetWrapper instance via AJAX.

2. The return value of this method is used to open a popup window which lets the user log in to Dropbox and authorize your app to manage her Dropbox files.

3. After the user clicks the Allow button, the callback Url is opened in the same popup window. It's a page from your site. It's not supposed to display anything, just execute a script (see the next step).

4. The popup page informs the main page that the authorization was successful (or not) and closes itself.

5. The main page gets the DropNetClient instance from DropNetWrapper via its Client property. It is important that we use the same DropNetWrapper instance, so you might want to use an HttpSession lifestyle (or just keep it in your Session state).

6. The main (or any other) page uses the DropNetClient's methods (or the extensions provided by the DropNetExtensions class) to manipulate the user's files and do other interesting things.
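On the server side, steps 1-5 map onto DropNet roughly as follows. This is a sketch, not the actual DropNetWrapper source (see the GitHub repo for that); the API key, secret, and callback Url are placeholders.

```csharp
using DropNet;

// Rough server-side sketch of the authorization dance; the wrapper's real job
// is to also keep track of which of these stages has been completed.
public class DropNetWrapperSketch {
    private readonly DropNetClient _client = new DropNetClient("API_KEY", "APP_SECRET");

    // Step 1: called via AJAX from the main page
    public string GetAuthorizeUrl() {
        _client.GetToken();  // obtain a request token first
        // The user logs in and clicks Allow at this Url (steps 2-3)
        return _client.BuildAuthorizeUrl("http://example.com/DropboxCallback");
    }

    // Steps 3-4: the callback page calls this once the user has clicked Allow
    public void OnCallback() {
        _client.GetAccessToken();  // exchange the request token for an access token
    }

    // Step 5: from now on, the same client instance can manipulate the user's files
    public DropNetClient Client { get { return _client; } }
}
```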

Note that this is just one of the many possible ways of using it. For example, you might want to open the authorization page in the main window (actually it's quite big, so the main window might even be a better choice), which would simplify things a bit, although I'd hate to have the user leave my site.

We developers just love writing automated tests, don't we? (Except for the guys and gals who believe that tests are for testers only). We also love writing tests before we actually write code, because it is still a revolutionary concept for us, and we like doing absurd things. We also call it "TDD" in our blog posts.

While we love to call it an Art, actually we're quite happy that it is not. We're not artists, after all. Not those spaced-out weirdos always having late noisy guests and borrowing money from us the day after. We love it to be predictable.

And that's why we love programmer-centric tests.

Now, if somebody tells us about user-oriented tests, and maybe even mentions BDD casually, we immediately imagine that we'll have to write an end-to-end test involving a database, simulated user mouse clicks, and other scary stuff, including late-night debugging of test-ordering issues.

Fear not.

I'll give you a real world example of a user-oriented test that is going to change your life forever. Or maybe not. But it will surely be more robust, less complicated, and easier to support than its programmer-oriented version, and it will document the system's functionality better.

(On a side note, do I sound like I'm selling you something? Must be reading too many marketing blogs..)

Let's start.

Arrange

Recently I've been working on a big solo project of mine, a .Net code editor called Chpokk. Naturally, since the code is edited online, it must be stored somewhere on the server. My first idea was to put each user's files in a folder named after her username. But there's one tricky point. Since I'm using Janrain for social login, there can be different users with the same username. For example, my Twitter username is uluhonolulu, but somebody could register a WordPress account with the same name and get access to my files. So, I had to think of something unique.

Fortunately, Janrain gives me a lot of data, including something called uniqueIdentifier. Which is, well, unique, so it's a good candidate for a folder name. Since it can contain symbols that are unacceptable for a folder name, and since I wanted to keep the user's name in the beginning (so that I could quickly find the code and investigate any issues), I decided to go with the following: let the folder name be {username}_{hash of the unique identifier}.
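To make the scheme concrete, here's a minimal sketch. This is my illustration, not Chpokk's actual code; in particular, GetHashCode() stands in for "hash of the unique identifier" (and its value varies between .NET runtimes).

```csharp
using System;

public class FolderNameDemo {
    // {username}_{hash of the unique identifier}
    public static string GetFolderName(string userName, string uniqueIdentifier) {
        return userName + "_" + uniqueIdentifier.GetHashCode();
    }

    public static void Main() {
        // Prints "uluhonolulu_" followed by some integer hash
        Console.WriteLine(GetFolderName("uluhonolulu", "http://twitter.com/uluhonolulu"));
    }
}
```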

And now the testing fun begins.

Act

See, since I already know the implementation, there's a hard-to-resist temptation to write a test that assumes this implementation. A typical developer-centric test has implementation details exposed all over it. In our case, it would look like this:
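(The original snippet didn't survive in this extract; the following is my reconstruction of the kind of test being criticized. The FolderNameCalculator name and the method signature are illustrative; the expected value is the one discussed in the next paragraph.)

```csharp
using MbUnit.Framework;

[TestFixture]
public class FolderNameCalculatorTests {
    [Test]
    public void GetFolderName_ReturnsUserNamePlusHash() {
        // Implementation details hardcoded in both the inputs and the expected output
        var folderName = FolderNameCalculator.GetFolderName(
            "uluhonolulu", "http://twitter.com/uluhonolulu");
        Assert.AreEqual("uluhonolulu_-1493874745", folderName);
    }
}
```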

There are so many things wrong with this test. But the main problem is that it is hard to understand what it checks, and what part it plays in the application. Imagine that a year later we change some part of the code and the test breaks. Is that good or bad? Should we fix the test, or the production code? If folderName becomes something other than "uluhonolulu_-1493874745", would it break some of the app's functionality? Tests are meant to provide, among other things, a safety net, but in this case the net is so tight it acts like a straitjacket. Why? Because the test has my idea of the implementation hardcoded in both the inputs and the expected output; any time the implementation changes (and it will change), we risk breaking the test even though the functionality is OK.

My favorite technique for fixing the tests (or writing them the "right" way) is to start with the name. From the user's perspective, it is not important how we are constructing the folder name, what's important is that the folders are different for different users. So, let's change the name to:

public void DifferentUsersShouldGetDifferentFolderNames()

Now, we just have to code what this name suggests: take two different users and compare their folder names. Note that we are not writing an end-to-end test; we are still in the unit test category. So, our Assert stage should be something like this:

Assert.AreNotEqual(firstFolderName, secondFolderName);

Next, let's get to the Act stage. The signature of the GetFolderName method that we suggested before just doesn't fit here: it leaks implementation details. Meaning that just by looking at it, we already know something about the implementation (namely, that it is based on the username and the unique ID). And that's just as bad as making all your private functions public. Only worse, because once you make it public, it is very hard to change.

So, let's get back to our initial requirement and forget our idea of the implementation (that's the hardest part of TDD: even if you know how to implement something, you force your mind to shut up and listen to the tests). What we have to calculate is the name of the folder, unique to the user. What we know about the user is just the user profile data. So, let's use it as the single argument of our method, and the method itself will be responsible for getting the fields it needs.

See what has just happened here? They say TDD helps us write better code. We've just seen how it helped us find a better signature for our method.

Going backwards, we come to the Arrange stage of the test. It should be a little bit more complicated than the original version, but not much. We should create two sample profiles with the same username. I could just make two simple profiles, each having only the username and uniqueId fields, but I prefer taking two real-world profiles (say, two of my profiles from different identity providers). Since that needs a lot of code, I'm refactoring it into a separate function (actually, two functions for two profiles), but I make it clear that they have the same userName:
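Put together, the reshaped test reads roughly like this. It's a sketch with made-up helper names: a Janrain profile is reduced to a plain field dictionary, and the GetFolderName body is a naive stand-in, just enough to make the example self-contained.

```csharp
using System.Collections.Generic;
using MbUnit.Framework;

public class WhenTwoUsersShareAUsername {
    // Hypothetical stand-ins for the two profile-building helper functions
    public static IDictionary<string, string> TwitterProfile() {
        return new Dictionary<string, string> {
            { "preferredUsername", "uluhonolulu" },
            { "identifier", "http://twitter.com/uluhonolulu" }
        };
    }

    public static IDictionary<string, string> WordpressProfile() {
        return new Dictionary<string, string> {
            { "preferredUsername", "uluhonolulu" },  // same userName on purpose
            { "identifier", "http://uluhonolulu.wordpress.com/" }
        };
    }

    // Naive stand-in implementation, not the real Chpokk code
    public static string GetFolderName(IDictionary<string, string> profile) {
        return profile["preferredUsername"] + "_" + profile["identifier"].GetHashCode();
    }

    [Test]
    public void DifferentUsersShouldGetDifferentFolderNames() {
        // ARRANGE + ACT: the method takes the whole profile, not username/id pieces
        var firstFolderName = GetFolderName(TwitterProfile());
        var secondFolderName = GetFolderName(WordpressProfile());
        // ASSERT
        Assert.AreNotEqual(firstFolderName, secondFolderName);
    }
}
```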

It is still a unit test, no complicated setup, no testing several things in the same test, in other words, nothing to be afraid of. And yet, it is clearly a user-centric test.

Assert

Remember that our purpose was to make this test less fragile. Meaning that it should still pass when we add more requirements (but keep the original one). Let's see whether we have achieved this with our new test.

One thing I discovered is that, since some sites use the email for the username, there's no "preferredUsername" field in the corresponding profile. I decided to change my implementation and use "confirmedEmail" for these providers. The test still passes -- no change required. The method's signature didn't need to change, thanks to what our testing strategy suggested.

Second idea: what if I decide that users with the same email should be treated as one user? That means I would need to change my implementation again -- same emails would mean same folder names. Would our test stay green? Well, it depends on the input data. As I mentioned, I just took my own profiles from two different providers. Note that I cheated: since I was testing the situation of two different people with the same username, I should have taken the profiles of two different users. When I implement my second idea, I'll be punished for my cheating, because the implementation will break the test (since I used the same email in both profiles, they now should correspond to the same folder).

Here are our benefits from converting this test to a user-oriented one:

It is clear which part of the system functionality this test ensures

The test will neither break nor fail to compile if we introduce new specs

The test can serve as part of the executable documentation of the system

For those of you who can't wait to start testing async requests, here's the good news (and no, there is no bad news in this post): starting with version 3.1 of Ivonna, you can test them with the same syntax as before, i.e. using either session.Get(url) or session.GetPage(url).

However, this is not the end of the story. Most of the time you use the async pattern for a reason: you have a lengthy operation in your Web code. And that's the other good news: now you can mock async methods called from your controllers/codebehind/whatever, returning custom values. This functionality is part of the Ivonna.Framework.MVC assembly, but you can use it with any Web framework -- it's not really related to MVC, but I didn't put it into Ivonna itself because I didn't want to introduce a .Net 4.0 dependency (this might change in the future).

Suppose you have the following code that you want to test (taken from MSDN):
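(The listing itself is missing from this extract; the MSDN sample in question is an async MVC action along these lines, reproduced from memory, so treat the details as approximate.)

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class HomeController : Controller {
    public async Task<ActionResult> Index() {
        var client = new HttpClient();
        // The lengthy operation: an actual HTTP round-trip to the MSDN site
        var content = await client.GetStringAsync("http://msdn.microsoft.com");
        ViewBag.Length = content.Length;
        return View();
    }
}
```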

You can test the whole method, but the test would have to wait for the HttpClient to return the result, which can take a long time and be unpredictable. Besides, you want to test your controller's logic, not the code that runs the MSDN site. So, you want a predictable return value, and you want it fast. Here is how the new functionality helps you:

There's also a strongly typed lambda-based overload, but personally I find it less convenient and noisier. Note that a stub like this works on all requests from all instances of HttpClient, so if you want more fine-grained control, you'll have to implement your own CThru aspect.

One more gotcha here: some useful async methods are implemented as extensions, so you cannot use the syntax above. Use the more verbose session.AddAspect(new Stub(..)) instead.

Writing the first test for a Real System, Part II (21 Jul 2012)

Guilt. That's what I've been feeling all these days.

Ok, it's not that I spent the past month in a deep depression. I should confess that I'm not that kind of guy. But naming a post "blah blah part I" is kinda making a commitment, and the longer I kept postponing writing the second part, the worse I felt about it. So, I finally decided that I won't leave my workplace until I finish it.

In Part I, I defined some guidelines that would help me write my first test. What's missing is the requirements for this particular system (Chpokk, an online C# code editor) I'm building. So, let me remind you of these guidelines, this time with practical application.

I want to produce some business value as soon as possible

What I want my application to do is something that people will find actually useful. "As soon as possible" means a Minimal Viable Product, as in your marketing course. Perhaps it means that "at least one person might find it useful for at least a small something". In other words, it might be good for a demo. It is not big enough to be used in daily life, but at least it's better than having just an AbstractFactoryFactoryAdapter. So, instead of building Version One piece by piece, I want to build Version 0.01, not caring much about clean and maintainable code (I'm postponing that for the Refactoring phase); this version, although far from perfect in all aspects, will actually solve someone's problem. And of course, among all the problems that my application is going to solve, I'm choosing the most important one.

This is also very good in terms of getting user feedback as soon as possible. Since I'm solving a real world problem, there must be some user out there who has this exact problem, so if I provide a solution (for free), this user is going to be my best friend, providing valuable feedback for the rest of her life.

Back to the point

So, since I'm building an online code editor, the "something useful" is, well, being able to edit the code. What does "edit" mean? It means: load, change, save. In my case, "load" is "clone a git repository", since that's where most of the code is stored today (yes, I know there are other source control systems, and some people actually use the one named "file system", but I decided to start with Git), and display a file in the editor. "Change" is "change a file in an online editor", which involves intellisense, refactoring, autosaving and whatnot, but let's forget about that until later. "Save" involves... ok, let's not talk about it now.

This is the point where I'm going to contradict myself.

Theoretically, I should cover all three parts with one test. The problem is that this would involve multiple user interactions, meaning I should probably add UI testing (in which I'm not very proficient), meaning I should decide on the UI (which I don't want to do yet), meaning I'd still have to do a lot of things at once. Instead, I split my task into several smaller ones, but I'm not doing it from the developer perspective (I'm not test-driving specific classes); rather, I'm splitting based on user actions.

The first such action is cloning a remote repository. I agree that there is not much business value in it, so I'd rather not show it to end users (except maybe to my 5-year-old Alice), but still it's something that can be shown to somebody -- "hey, this thing can clone a repository!"

I don't want my tests to be fragile.

While I don't want certain parts to be fully implemented yet, I also don't want my tests to be dependent on them. These implementation details should be hidden from my tests, and never be tested themselves.

The main reason for a test to be fragile, meaning it can suddenly stop passing while the application behaves correctly, is that it depends on some implementation details. Whenever such a detail changes without affecting the application (for example, as a result of refactoring), the test breaks. Such a test brings more harm than benefit, since we cannot rely on it when checking our application's health. It becomes a burden -- we have to support it in addition to supporting our main application.

A typical example is a UI test that relies on element id's. If we are doing Asp.Net WebForms, we often let the framework generate these id's for us. As a result, each time we change our control structure without affecting its functionality (say, add a new NamingContainer), the id's (which are implementation details) change, and our test fails. Don't think that Asp.Net MVC is free from this -- a simple Action Method renaming would make a test that depends on it fail.

In our case, one of the implementation details is the physical location of the local repository. I can guess that I should do something to prevent different users or projects from using the same folder, but I don't want to care about that right now. So, what I'm doing is encapsulating it, making sure it doesn't leak. For now, I hardcode it to a particular value, but I don't use this value in my tests. So, my tests, just like my production code, use this value without actually knowing it. Later, when the implementation changes, the tests won't break, since they rely on the same implementation.

It also allows me, as the agile gurus advise, to delay the implementation until it's really necessary.

The tests should not use any knowledge from outside.

This one has been covered a bit in the previous post, but I'd like to elaborate on it. Sometimes you look at an assert, and you don't understand it at all. I mean, it doesn't have to be crystal clear (that's what the name of the test is for), but at least I should (eventually) be able to understand it, because if it's broken, I have to fix it somehow. Whenever a test depends on some external resource, it becomes totally unclear. In addition, it encourages reusing this external resource, making it hard to satisfy the needs of all tests (I'm looking at you, test repository for LibGit2Sharp).

Of course, this doesn't apply to the infrastructure. Even unit tests sometimes are better run with a real database, and we're talking integration tests here.

This is why, rather than taking a database that has some rows pre-inserted, I prefer taking a clean database and inserting the rows I need during the Arrange phase. Another example: when I need to parse a .sln file, rather than using an existing one, I create it in the test. This way I'm free to create as many versions as I need for testing various cases, and I have a unique resource for each test, containing only the stuff this particular test needs. But the main benefit is that everything my test needs is just around the corner.
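As an illustration of the .sln example, creating a minimal solution fixture inside the test is only a few lines. This helper is my own, not from Chpokk; the header lines follow the standard Visual Studio 2010 solution format.

```csharp
using System;
using System.IO;

public class SolutionFixture {
    // Writes a minimal one-project .sln file into the given folder and
    // returns its path; everything the test parses lives right here.
    public static string CreateMinimalSolution(string folder) {
        var projectGuid = Guid.NewGuid().ToString("B").ToUpper();
        var content =
            "Microsoft Visual Studio Solution File, Format Version 11.00\r\n" +
            "# Visual Studio 2010\r\n" +
            "Project(\"{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}\") = \"Sample\", " +
            "\"Sample\\Sample.csproj\", \"" + projectGuid + "\"\r\n" +
            "EndProject\r\n";
        var path = Path.Combine(folder, "Sample.sln");
        File.WriteAllText(path, content);
        return path;
    }
}
```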

The requirements

Before we start writing our test, we should write down the requirements. As I keep insisting, our requirements shouldn't be developer-centric, i.e. they shouldn't be, like, "method xxx should return an instance of yyy with such and such property values". They should be formulated in terms of user expectations.

So, my task for today is to start implementing the "clone" story. I'm coming up with the following requirements:

When a user clicks the "open project" button, a popup appears that invites her to enter the repository Url (we assume that the repository is publicly readable). After entering a Url and clicking "Ok", the user sees a progress indicator.

After the process is finished, the user is redirected to the "Project" page.

This page should display the list of the files.

Now, as I mentioned previously, I'd like to split this into several tests. For a particular test that I'll be writing now, which is going to be entirely server-side, I've got the following story:

If a user submits the Url of a remote publicly readable repository to our system, it should clone the repository to a folder where this user can access it later.

Note that I don't specify the exact location of the target folder here (see Rule #2). Instead, there's the rather vague "where.. can access it later", which is not obvious how to program. Let's say this means the result of some property or method which will be introduced by our code.

Finally, the code

I was delaying it as long as I could, but finally here it is:

using System;
using System.IO;
using Chpokk.Tests.GitHub.Infrastructure;
using ChpokkWeb.Features.Exploring;
using ChpokkWeb.Features.Remotes;
using LibGit2Sharp.Tests.TestHelpers;
using MbUnit.Framework;
using StructureMap;

// The namespace should correspond to the feature we're developing.
// Later it's going to help me find this particular test.
namespace Chpokk.Tests.Cloning {
    public class WhenYouSendACloneCommandToAServer {
        private string _fileName;
        private string _targetFolder;

        // First, prepare the context
        [FixtureSetUp]
        public void Setup() {
            const string repoUrl = "git://github.com/uluhonolulu/Chpokk-Scratchpad.git";

            // ARRANGE
            // Create a random filename
            _fileName = Guid.NewGuid().ToString();

            // Commit a new file to the remote repository
            var content = "stuff";
            Api.CommitFile(_fileName, content);

            // Prepare the target folder.
            // This is where we get the relative repository path
            // (see the discussion about Rule #2).
            var repositoryInfo = ObjectFactory.GetInstance<RepositoryInfo>();
            _targetFolder = Path.Combine(Path.GetFullPath(@".."), repositoryInfo.Path);

            // We cannot clone into a nonempty directory, so delete it
            if (Directory.Exists(_targetFolder))
                DirectoryHelper.DeleteDirectory(_targetFolder);

            // ACT
            // Get an instance of our controller.
            // I'm using a container so that I don't have to rewrite this line
            // each time I change the signature of the constructor.
            // For unit tests, use an automocking container.
            var controller = ObjectFactory.GetInstance<CloneController>();

            // Create a model for use with our Action Method.
            // PhysicalApplicationPath is databound automatically,
            // but in our test we need to submit it.
            var model = new CloneInputModel {
                PhysicalApplicationPath = Path.GetFullPath(".."),
                RepoUrl = repoUrl
            };

            // Finally, execute the Action method.
            controller.CloneRepository(model);
        }

        [Test]
        public void RepositoryFilesShouldAppearInTheDestinationFolder() {
            var expectedFile = Path.Combine(_targetFolder, _fileName);
            var existingFiles = Directory.GetFiles(_targetFolder);
            Assert.AreElementsEqual(new[] { expectedFile }, existingFiles);
        }
    }
}

Your posts are too long, and I don't understand the main idea

Ok, here's the summary:

1. Produce the business value as soon as possible. That means: identify the main benefit your product provides, and write a test for it, end to end. You may split it into several tests, but each one should represent a significant feature.

Benefits: When you make this test green, you actually have a working product. You can get early feedback on it, you can show it to your Mom, but what's most important, it feels great!

2. Don't let the implementation details leak into your tests. That means: whenever your test has to use some knowledge that's irrelevant to the end user, hide it behind a piece of production code implemented as simply as possible.

Benefits: You don't have to think about these details right now and can go straight to goal #1. Plus, and this is actually the most important thing, you get solid tests: one of the main reasons tests break over time is that implementation details change, and the tests depend on them.

3. Always prepare your context in the test code. That means: apart from infrastructure, everything your test uses should be created as part of the test.

Benefits: Such tests are much easier to maintain. If your test breaks, at least you can be sure it's not because some external resource has changed. In addition, it's much easier to figure out what's going on in the test.

Wrapping up

This post (both parts) took too long to write, probably much longer than it deserves. I think I should have written an ebook instead, sold it for $37, and become rich. Anyway, I hope it made somebody's life less boring.

Today I had some fun trying to figure out how to fix the height of the jQueryUI dialog. The client wanted it to be exactly 500px. Or something that resembled 500px. Anyway, it definitely shouldn't have been from the top to the bottom of the screen. Although I sure set it to 500.

The fun part is that it had to work in IE7-9, but in *quirks* mode. The client wouldn't switch to standards mode, since the site (made in the early 2000s, tables inside tables all the way down) would break apart.

After some debugging, I figured out that one line in a certain method would save me. Namely, the fix required adding a line at the beginning of the "_size" method. So, I could just leave it like that...

Except that I couldn't.

Doing so wouldn't just violate the Open/Closed principle; it would offend the shades of the Fathers of SOLIDity and the Alt.Net deities.

After all, JavaScript is a dynamic language, right? So, we can do whatever dirty trick we can think of, including messing with "private" methods.

Extending a jQueryUI Widget by rewriting a widget's private method? Nothing could be easier!

While I could easily put something like alert('OHMYGOSH!!!') in there (and have fun imagining my coworkers trying to figure out what's going on), what I'm actually doing here is just adding something to the beginning. So first I save a reference to the existing function, then I redefine it, adding the line I need and finally invoking the original function itself.
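The patch boils down to three steps. Here's a runnable sketch of the idea; since jQueryUI itself isn't loaded in this snippet, a stand-in object plays the role of the dialog prototype (with the real widget you'd reassign $.ui.dialog.prototype._size in exactly the same way), and the fix line itself is hypothetical:

```javascript
// Stand-in for $.ui.dialog.prototype; the real fix patches its _size
// method the same way, but jQueryUI isn't available in this sketch.
var dialogPrototype = {
  height: 'auto',
  _size: function () { return 'sized to ' + this.height; }
};

// First, save a reference to the existing function...
var originalSize = dialogPrototype._size;

// ...then redefine it, adding the line we need at the beginning...
dialogPrototype._size = function () {
  this.height = 500; // the hypothetical one-line quirks-mode fix
  // ...and finally invoke the original function itself.
  return originalSize.apply(this, arguments);
};

console.log(dialogPrototype._size()); // prints "sized to 500"
```

The same save-redefine-delegate trick works for any "private" widget method, since in JavaScript they're only private by naming convention.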

There are many binders in my big rusty toolchest. Some are good boys (and girls), others just like to misbehave. I mean, they do what they're destined for, and they do it really well, but in the process they break one or several Holy Laws that our Holy President wants us to abide by.

Nobody complains though.

This particular one saves me a lot of repetitive coding

Web requests tend to contain simple values. We developers like to work with objects. We like it so much that we are even willing to create objects from simple values. In particular, we often have to retrieve an object from the database (or from the cache) using its ID. Believe it or not, I was doing it at the beginning of almost every action method, sometimes two or three times. I felt so exhausted that I would postpone writing the rest of the method until after a lunch break.

And then I saw the light

A few years ago Scott Hanselman wrote an article about an IPrincipal model binder (I use it a lot as well). And I thought, hey, these binders are not just for slapping your form values together, they can do more than that!

And I wrote the EntityBinder.

This particular binder may be frowned upon by respectable developers. Binders should know their place, you know. They are meant to stay somewhere between your M, V, and C. Your database and the business layer should be a forbidden territory for them.

Ok guys, you may have your controller full of boring repetitive code. I'm done with this.

Another Bad Thing that this boy (or is it a girl?) does, is that it does two things instead of one (what?? you forgot about the Single Responsibility principle???). It serves both as a custom binder attribute and a Binder. I did it this way because I wanted to save several keystrokes writing [EntityBinder("projectId")] instead of [EntityModelBinder(typeof(EntityBinder), "projectId")]. While one can argue that the code for the binder became less maintainable, the code that used it became twice as maintainable, and that was a huge gain.

The downside is that I couldn't use Dependency Injection in that binder (at the time of writing I couldn't use it anyway, because it was the first version of Asp.Net MVC), so I had to resort to Service Location (and have never had any problem with that).

What exactly does this shiny binder do, anyway?

The binder that I'm going to show you looks at the parameter name and type and tries to guess the name of the field that holds the ID, and the type of the entity. So, given a declaration like this:

public ActionResult AnketaDefinition([EntityBinder] Project project)

it looks first for a request value named "projectId" and, if it cannot find one, for a value named "Id". Then it asks the ORM for an entity of type Project with that ID.

In case we don't want the defaults, we can provide our own, but that happens very rarely.

There is one question, though: what do we do if we don't find the ID in the request? It turns out there are cases when we want to return null, and cases when we want to throw an exception. There is an additional boolean parameter called "relaxed" that you can use for that. The default behavior is up to you; I'd recommend throwing an exception, just in case.

And finally the code that you can steal use

The code together with a sample application can be found at GitHub. The main part, however, is below:

First, I look for the field value using the rules I mentioned above. Next, I figure out the entity type: if not set explicitly in the attribute, the type is that of the parameter we're binding to. Then I use StructureMap.ObjectFactory to get an instance of NHibernate.ISession; you can use any container and ORM you like. The rest is simple. I have omitted the part where you handle array-valued parameters; you can see it in the original source.
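To make the shape of that main part concrete, here's a condensed sketch of such a binder. This is my reconstruction, not the original source: the member names (apart from EntityBinder itself), the int-typed ID, and the "relaxed" handling are simplifying assumptions, and the array handling mentioned above is omitted.

```csharp
// Both a custom binder attribute and a binder -- the double duty
// discussed above (sketch; names other than EntityBinder are assumed).
public class EntityBinder : CustomModelBinderAttribute, IModelBinder {
	private readonly string _fieldName; // optional override for the ID field name
	private readonly bool _relaxed;     // return null instead of throwing

	public EntityBinder(string fieldName = null, bool relaxed = false) {
		_fieldName = fieldName;
		_relaxed = relaxed;
	}

	// The attribute simply hands out itself as the binder.
	public override IModelBinder GetBinder() { return this; }

	public object BindModel(ControllerContext controllerContext,
	                        ModelBindingContext bindingContext) {
		// 1. Look for "{parameterName}Id" (or the explicit name), then fall back to "Id".
		var request = controllerContext.HttpContext.Request;
		var idValue = request[_fieldName ?? bindingContext.ModelName + "Id"]
		              ?? request["Id"];
		if (idValue == null) {
			if (_relaxed) return null;
			throw new InvalidOperationException("No ID found in the request.");
		}

		// 2. The entity type is the type of the parameter we're binding to.
		var entityType = bindingContext.ModelType;

		// 3. Service Location: grab the ISession and load the entity by ID.
		var session = ObjectFactory.GetInstance<ISession>();
		return session.Get(entityType, int.Parse(idValue));
	}
}
```

With this in place, the AnketaDefinition declaration shown earlier is all the controller needs; the repetitive lookup code disappears.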

Writing a test for our great binder

As always, I prefer writing an integration test, one that actually executes an Asp.Net request, because it lets me demonstrate how powerful Ivonna, my Asp.Net testing tool, is. This time, however, I'm adding a little bit of mocking (so it's not a 100% integration test). Because I don't want to set up NHibernate with all that mapping, bootstrapping, and stuff, I'm just stubbing the DB access using the new Ivonna/CThru Stub syntax:

session.Stub<ISession>("Get").Return(entity);

This is a kind of brute-force stubbing, for when you don't need much flexibility and don't want anything "forcing" you into a supposedly good design (which is close to impossible when writing integration tests anyway). It just makes the Get method on any ISession instance return this object, regardless of the arguments (strictly speaking, we should verify that the argument is as intended, but let's not overcomplicate our test). Here is the full test:

As you see, we prepare an Entity instance, have it returned from the stubbed ORM call, then execute our request and verify the parameter of the action call. Our web application should have a SampleController class with a Get method having the following signature:

public ActionResult Get([EntityBinder] Entity entity)

That's it for today. I do hope you'll find it useful, and please tell me that you like to break the rules as much as I do, whenever it makes your (and others') life happier. I do believe that by releasing this binder to the general public I'm doing a Good Thing, that the world has become a better place because of it, and maybe even a couple of whales will be saved from brutal killing. But maybe not.

When you have a complicated View, things can easily get messy. A View may have several Partials, each Partial being reused in several Views. Each Partial might require certain library scripts, which in turn might depend on other scripts.

(At this point, I stopped and thought, maybe I should really add images to my posts. Ayende does it, although he has clearly no time for finding an appropriate image. All the cool guys do it. Perhaps I should do it, too.)

<insert a messy picture here>

(By the way, if you know a place where I could quickly steal images for my posts, please tell me in the comments. I heard that it makes them more entertaining and gives a personal touch.)

It turns out FubuMVC provides a nice solution to this problem. You can have an arbitrary number of "config" files (the extension is misleading; they're plain text files) in arbitrary places, named either *.script.config or *.asset.config. Each file describes relationships between assets, or just groups them for reference. After that, your View references a file it needs, and your Master issues a directive to render all required files.

Applying a custom policy (should be a type implementing either IAssetPolicy or ICombinationPolicy):

apply policy {policy type}

Combining assets:

combine {name1}, {name2},.. as {set name}

Alias:

{alias} is {asset name}

e.g., jquery is lib/jquery.min.js

Extending (not sure what it means -- please enlighten me!):

{asset name} extends {another name}

In addition, you can have empty lines for readability, and lines starting with '#' for comments.
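Putting these pieces together, a small config file might look like the following. The file name and asset names here are made up, and the "requires" line is my assumption about how a dependency (demo.js needing jQuery, as in the example below) gets declared, since that verb isn't among the ones listed above:

```
# demo.script.config -- a hypothetical example

# give jQuery a friendly alias
jquery is lib/jquery.min.js

# demo.js needs jQuery (assumed dependency syntax)
demo.js requires jquery

# bundle related scripts under one name
combine demo.js, helpers.js as demopage
```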

So how do these config files work?

Suppose you have the pieces provided above in one of your config files. Whenever you need the demo.js script on your page, you just put

<Script src="demo.js" />

somewhere close to the top of your View (assuming you use the Spark view engine with the default bindings; this would work with other view engines too, of course, but with a slightly different syntax). This directive doesn't output anything, but it tells the Asset Pipeline that you'll need that script at some point. Later, presumably in your Master, you have the following:

<Scripts />

This should render all the required scripts for the page. In particular, although we have never mentioned we need jQuery, it's going to be referenced, since demo.js needs it. All scripts will be rendered in the correct order. In addition, there won't be any duplicates.

Another Good Thing is that the config files can easily be reused across different projects. After all, library dependencies are exactly the part that doesn't depend on a particular project.

This can't be so good. Now tell me the bad news!

The bad news is that the current implementation enforces rather strict rules about where your assets should live. Namely, everything should be placed in the Content folder, with hardcoded subfolder names for styles, scripts, and images. This doesn't work well with styles that require images in a subfolder, and it can also cause some friction, e.g. when updating jQuery via NuGet. While having such a rigid structure is a bit unusual for this otherwise very flexible framework, the benefits, in my opinion, outweigh the (I forgot the English word for the thing that gets outweighed by the benefits).