Øyvind Valland's Babel Lutefisk .net

Thursday, 23 January 2014

In the last two or three years I have been doing a lot of BDD, and I have really enjoyed it. Introducing BDD in a project for my latest client has been particularly rewarding. This company had previously relied heavily on manual regression testing, and the majority of its QA staff were not trained sufficiently in development to be able to write their own automated tests. As mentioned, this led to a heavy reliance on manual testing, but it also led to something else: the unit and integration tests that were written by developers were largely ignored by the QAs because they did not understand them. "Not understanding" meant more than that they couldn't read the test source code and work out what the tests did. The tests were inconsistently named and poorly organised, and the QAs felt that they could not assess whether the appropriate test coverage had been achieved (real coverage, not a percentage of lines covered). This led to unit and integration tests not being run as part of builds.

I've oversimplified this scenario - there are many reasons other than the QAs' inability to write and understand automated tests that led to this situation. The point, however, is that this was the state of affairs - and something needed to be done. The reliance on manual regression testing meant releases were incredibly infrequent, and a full regression test cycle could easily take six weeks. If that's not madness, I don't know what is.

The company had brought in a new head of QA to sort out this mess, and his directive was fairly simple. Automate all tests that can be automated without excessive cost, write all tests as specifications (preferably with a Gherkin-style language and/or SpecFlow), and prefer BDD over TDD for new development work. Following this directive many existing manual regression tests were created as UI automation tests using SpecFlow and Selenium, existing integration and unit tests were cleaned up and made part of the build, and the company as a whole (well, more or less) attempted to embrace the BDD style of specifying test cases.

However, there was still the problem of new unit and integration tests. Developers across the organisation were used to writing their tests with NUnit and Moq. A typical, contrived, test could look something like:
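A sketch of such a test might look like the following; the Order class, its Refund method, and the test name are all invented for illustration:

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class OrderTests
{
    [Test]
    public void RefundAmount_ThrowsWhenOrderNotDispatched()
    {
        // The order has not been dispatched, so issuing a refund
        // against it should throw.
        var order = new Order();

        Assert.Throws<InvalidOperationException>(() => order.Refund(20m));
    }
}
```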

This test is simple, and if you're a developer you can probably read it just fine. But if you're not a developer it can be hard to understand. What are we testing, exactly? There is a comment in the test that says the order hasn't been dispatched and that's why an exception will be thrown - but what if the comment wasn't there? Even if you are a developer and you come across this test you're likely to have to go digging to really understand what the test is all about. Whether or not you think this style of writing tests is problematic is a matter of taste and preference, but what we realised in our project was that the non-developer QAs who wanted to assess whether the right level of test coverage had been achieved would often be completely at a loss as to what had been tested.

In order to address this particular problem we decided that we should write all our unit and integration tests as specifications. We decided that the test we've already discussed should be written like this:

Given I have an order
And the order has not yet been dispatched
When I try to refund an amount against the order
Then an InvalidOperationException is thrown

This is a test that anybody can read and understand, regardless of technical ability. So we decided we'd write all our tests in this way, so that anyone could look at any test - be it a UI automation test, a database integration test or a unit test - and easily understand what it does, what it tests, and why it failed (if it failed).

The first, and rather obvious, choice of tooling for defining the tests was SpecFlow. However, we soon realised that it wasn't a good choice for this organisation. There was (not surprisingly) some resistance from the developers to having to use a new tool - why couldn't they just keep using NUnit like they had? Also, we quickly realised that although non-technical QAs now certainly could help define unit tests using Gherkin, they typically didn't. So we decided to take matters into our own hands and create our own tiny framework, one that would give the best of both worlds: specification-style tests written using nothing but NUnit and Moq. I've called this framework MicroSpec.

MicroSpec is essentially one code file (I've taken a leaf out of Dapper's book) containing a single class, the Specification class, which implements four interfaces. The Specification class is an abstract class which executes your tests and prints the specification to the console as the tests run. You write tests by specifying each step in the test as a separate method and passing these steps as delegates to the Specification class' Given-When-Then methods. The test we've already looked at, above, could be written like this:
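A sketch of what such a test might look like on top of MicroSpec; the fluent Given/And/When chain is inferred from the description and console output in this post, so the exact method signatures are assumptions:

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class RefundingAnUndispatchedOrder : Specification
{
    private Order _order;

    [Test]
    public void Specify()
    {
        // Each step is a private method passed as a delegate.
        Given(i_have_an_order)
            .And(the_order_has_not_yet_been_dispatched)
            .When(i_try_to_refund_an_amount_against_the_order)
            .ThenAn<InvalidOperationException>().IsThrown();
    }

    private void i_have_an_order()
    {
        _order = new Order();
    }

    private void the_order_has_not_yet_been_dispatched()
    {
        Assert.IsFalse(_order.Dispatched);
    }

    private void i_try_to_refund_an_amount_against_the_order()
    {
        _order.Refund(20m);
    }
}
```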

Here we have three methods which define the steps in our test definition. They are all called by the specification class:

i_have_an_order

creates a new order for use by our test

the_order_has_not_yet_been_dispatched

ensures that the order created in the previous step has not been dispatched. This might execute some logic on the order or just assert that the state of the order is correct.

i_try_to_refund_an_amount_against_the_order

would issue a refund against the order.

The last step in the test is a call to ThenAn, which is a method that's used to capture expected exceptions. Chaining ThenAn (or its brother ThenA, for those who care about grammar) with IsThrown() asserts that a specific exception has been thrown during the course of the test (if you want to make sure an exception was _not_ thrown, you'd end the chain with IsNotThrown() instead).

As the test runs the Specification class creates console output before each step is executed. I'll repeat the output of this test here so you don't have to scroll:

Given I have an order
And the order has not yet been dispatched
When I try to refund an amount against the order
Then an InvalidOperationException is thrown

As you can see, the console output takes the names of the delegates/steps and surrounds them with the type of step: "Given", "When", "Then", "And", "Then an" or "Then a", plus suffixes like "is thrown" or "is not thrown". But the console output does not have to be constrained to just the names of the steps within the test. You can also include test parameters by following these naming conventions:

integers surrounded by underscores, e.g. _0_, are replaced by the value of the corresponding parameter passed to the step.

integers preceded by the letter "g" and surrounded by underscores, e.g. "_g0_", are replaced by the name of the corresponding generic parameter defined by the step.

To illustrate, imagine that the value of the refund issued in the above test matters. You could then rewrite the test like this:
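Using the _0_ convention, the refund step could take the amount as a parameter; again, the exact way a parameter is passed alongside a step delegate here is an assumption:

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class RefundingAnUndispatchedOrder : Specification
{
    private Order _order;

    [Test]
    public void Specify()
    {
        Given(i_have_an_order)
            .And(the_order_has_not_yet_been_dispatched)
            .When(i_try_to_refund_0_GBP_against_the_order, 20)
            .ThenAn<InvalidOperationException>().IsThrown();
    }

    private void i_have_an_order()
    {
        _order = new Order();
    }

    private void the_order_has_not_yet_been_dispatched()
    {
        Assert.IsFalse(_order.Dispatched);
    }

    private void i_try_to_refund_0_GBP_against_the_order(int amount)
    {
        // The _0_ in the step name is replaced by the argument (20)
        // in the console output.
        _order.Refund(amount);
    }
}
```

This would produce the console output below.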

Given I have an order
And the order has not yet been dispatched
When I try to refund 20 GBP against the order
Then an InvalidOperationException is thrown

Writing unit tests in this manner takes a little getting used to, and I find that it generally takes a little longer to complete the first couple of tests than when following a more traditional approach. That's because you've got a little more "infrastructure" to create in order for your tests to run. Managing state between test runs can also be a little trickier sometimes. But the tests themselves become infinitely more understandable and readable than tests written in a traditional way, and I think that the extra effort you have to put into defining your steps clearly in English language not only helps make your tests better but also aids in designing the software you're building. Describing the pictures you have in your mind with words while you're in the early stages of coding up a solution is a good way of vetting your design ideas.

Anyway - MicroSpec works for me, and it's bridged the gap between non-technical QAs who want to read and understand low-level tests and the developers who write them. It's the way I currently like to write my unit tests. If you want to give it a try, you can download it from BitBucket. Any thoughts and comments are most welcome.

Thursday, 3 January 2013

Last year I was lucky enough to work on several projects with some very good UI developers. Doing so enabled me to greatly improve my own skills in that area, and one of the biggest 'take-aways' from these projects was the way in which these guys preferred to manage the JavaScript in a project. In this post I will outline how they did it, and why they did it this way.

Note: I really like this approach because it's solved (and neatly so) a problem I never seemed to sort out in a satisfactory manner on my own. I am sure there are many other good ways of doing this - I'm just particularly fond of this solution. I welcome discussion on the matter!

The basic idea behind how the JS should be organised can probably be summed up in three simple points:

Everything is namespaced (and thus organised accordingly).

Every page that requires JS has a single 'page class' that creates and initialises JS components that work on that particular page. A 'page class' should not do anything except initialisation.

All JS code that provides functionality to a page should be organised into logical components.

I'll address each bullet point in order, but before doing so let's have a quick look at how the JS files are organised in the web project (or on disk, for that matter). Note that the code examples and screen shots are from an ASP.NET MVC4 project in VS2012.

As you can see from the picture on the right I have modified the default project structure somewhat. In the "Content" folder I have created a "Scripts" folder (with multiple sub-folders) and also a "Styles" folder. In this post I'll only talk about the "~/Content/Scripts/MyApp" folder and its files/sub-folders. I'll mention here, though, that the "~/Content/Styles" folder contains the "site.css" stylesheet and the "themes" folder from the original project. The "~/Content/Scripts/Plugins" folder contains all the scripts that were originally placed in the "~/Scripts/" folder, and the "~/Content/Scripts/Jasmine/" folder will hold all Jasmine files and tests.

Now, back to the bullet points. We'll address them in order, starting with:

Namespaces

The "~/Content/Scripts/MyApp" folder should be named in accordance with whatever your application is called. Inside this folder, place a single JS file named in accordance with whatever the root namespace of your application's JS should be. I usually use the application name as the root namespace, hence I name the root namespace file MyApp.js.

Inside this file I simply declare the root namespace like this:

var MyApp = {};

You'll notice that inside the "~/Content/Scripts/MyApp/" folder there is also a "Pages" folder. This is the folder that will hold all the JS files that define the previously mentioned page classes. The "Pages" folder has two files: MyApp.Pages.js, which defines the MyApp.Pages namespace...

MyApp.Pages = {};

... and also MyApp.Pages.Home.js, which defines the "Home" page class. We will get onto the details of this particular file in a second.

Although we've only talked about two folders and three files, I've hinted at a pattern with regard to namespaces. Every folder under "~/Content/Scripts/MyApp/" represents and is named in accordance with a namespace. That namespace is defined in a JS file named exactly the same as the full namespace, e.g. MyApp.Pages.js or MyApp.MappingComponents.js.

In this example there are only two namespaces, and that's fine to start with. I keep all components in the root namespace unless some are completely page specific (thus not really reusable) or unless there are too many of them so that it becomes useful to group them.

Page classes

All page classes are defined under the "MyApp.Pages" namespace. The purpose of a page class is to create and initialise all JS (components) required by a specific page in your web application. In this example we have the MyApp.Pages.Home class. Let's take a closer look:
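A sketch of what MyApp.Pages.Home.js might contain; the coordinates are placeholders, and in the real project MyApp and MyApp.Pages are declared in their own namespace files (they are repeated here only so the snippet stands alone):

```javascript
var MyApp = MyApp || {};
MyApp.Pages = MyApp.Pages || {};

// The 'Home' page class: creates and initialises the components the
// home page needs - nothing more.
MyApp.Pages.Home = function () {
    // Hand the GoogleMap component the div it should render into...
    var map = new MyApp.GoogleMap(document.getElementById("map_canvas"));

    // ...then initialise it with latitude, longitude and zoom level.
    map.init(51.5072, -0.1276, 10);
};
```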

All this page class does is create a GoogleMap component, and initialise it. On creation the component is passed a reference to the div that we want to render the map within, and then it is initialised by calling the init method of the component and passing in the latitude, longitude, and map zoom level.

This is a contrived example, so I've hardcoded the initialisation of the map instance. In a 'real world' scenario you'd probably have your page class either read the lat/lng from hidden form fields on the page or perhaps request the lat/lng by making an ajax call. The point is, beyond doing the necessary work for creating and initialising components, the page class is pretty simple.

Components

All JavaScript for a page should be encapsulated in (reusable) components, and each component should be defined in its own, separate file. There are two reasons for this: It makes your JS more manageable because each component is defined in one place and has one purpose. It also makes your JS easier to test, because each component is discrete. The latter point is still subject to how you choose to build your components, though, and I'm not going to go into testing of JS here.

Let's have a quick look at the GoogleMap component that's used by the Home page class:
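A sketch of the component; the google.maps calls follow the Maps JavaScript API v3, and the init signature is an assumption:

```javascript
var MyApp = MyApp || {};

// A thin component wrapping a Google map. The constructor only remembers
// the div; the map itself is drawn when init() is called.
MyApp.GoogleMap = function (mapDiv) {
    this.mapDiv = mapDiv;
};

MyApp.GoogleMap.prototype.init = function (lat, lng, zoom) {
    this.map = new google.maps.Map(this.mapDiv, {
        center: new google.maps.LatLng(lat, lng),
        zoom: zoom,
        mapTypeId: google.maps.MapTypeId.ROADMAP
    });
};
```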

This is a very simple component that creates a Google map instance on a div in your HTML markup. As you can see, when an instance of this component is created you pass it a reference to the div that should be used as the canvas for the map. This reference is then stored so that when the init() method is called the map can be drawn within the given div.

Putting it together

In order to make all of this work we just have to do the following:

Add a reference to the Google Maps API within the <head> tag of _Layout.cshtml

Add references to "~/Content/Scripts/MyApp/MyApp.js" and "~/Content/Scripts/MyApp/Pages/MyApp.Pages.js" within _Layout.cshtml

Add references to "~/Content/Scripts/MyApp/MyApp.GoogleMap.js" and "~/Content/Scripts/MyApp/Pages/MyApp.Pages.Home.js" within the "~/Views/Home/Index.cshtml" view.

Add a div with the id "map_canvas" within the "~/Views/Home/Index.cshtml" view. Oh - and you should style it, too, to the desired size.

With all this in place, the home page should look something like this:

Points to note

While I really like this solution, there are a couple of things to beware of before you start using it yourself. The most important thing to realise is that as your site grows, so will the number of JavaScript files. If you reference each file separately the web browser will potentially have to make a lot of requests in order to render a single page, and this is bad for performance. Therefore you should utilise bundling so that multiple files can be combined into a single downloadable unit.

While bundling is really helpful for the above scenario, you should also exercise some restraint when defining bundles. Because of the number of JS files you might have in a large project it can be tempting to "bundle everything" into one big, eh, bundle and reference that everywhere. That's ultimately counterproductive because the web browser will have to download more than what's required for a single page, and it will probably lead to a lax attitude towards the boundaries that you've defined within your JS, too.

Monday, 7 November 2011

SEO guidelines usually recommend that a site's URLs should be kept in all lower-case. The reason for this is that search engines and web servers alike (IIS is a notable exception) will treat two differently cased URLs as two different resources. While the host name of a URL is case insensitive (i.e. there's no difference between http://www.mysite.com and http://www.MySite.com) the resource path is not. Therefore http://www.mysite.com/home and http://www.mysite.com/Home are considered different resources.

While this might not make much sense semantically, consider this: The world's most widely used web server, Apache, treats URLs as case sensitive. Therefore the above URLs do, in fact, represent two different pages. As such, search engines treat these two URLs as different pages, too - and if your website doesn't care about URL casing you might end up with a split index for your pages.

So - how do we ensure that your website generates only lower-case URLs? With ASP.NET MVC this is easy. All you need is:

A LowercaseRoute class

An extension method for RouteCollection

An extension method for AreaRegistrationContext

And yes - the solution I'm about to detail will work with MVC Areas.

The LowercaseRoute class extends the Route class and basically lets that class do all the work. LowercaseRoute just ensures that the host and path portions of the URL are turned to lower-case while the querystring portion is left alone:
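A sketch of what LowercaseRoute might look like; the constructor mirrors one of Route's overloads, and the lower-casing happens when the URL is generated:

```csharp
using System.Web.Routing;

public class LowercaseRoute : Route
{
    public LowercaseRoute(string url, RouteValueDictionary defaults, IRouteHandler routeHandler)
        : base(url, defaults, routeHandler)
    {
    }

    public override VirtualPathData GetVirtualPath(RequestContext requestContext,
                                                   RouteValueDictionary values)
    {
        var path = base.GetVirtualPath(requestContext, values);
        if (path == null)
            return null;

        // Lower-case the path portion only; leave any querystring untouched.
        var url = path.VirtualPath;
        var queryIndex = url.IndexOf('?');
        path.VirtualPath = queryIndex < 0
            ? url.ToLowerInvariant()
            : url.Substring(0, queryIndex).ToLowerInvariant() + url.Substring(queryIndex);

        return path;
    }
}
```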

The AreaRegistrationContext extension method calls the MapRouteLowercase extension method on RouteCollection and also adds the current context's AreaName property to the route's DataTokens collection. This second step is crucial for areas to work:
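Sketches of the two extension methods; the MapRouteLowercase name is from the text, while the exact signatures are assumptions:

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public static class LowercaseRouteExtensions
{
    public static Route MapRouteLowercase(this RouteCollection routes,
        string name, string url, object defaults)
    {
        var route = new LowercaseRoute(url,
            new RouteValueDictionary(defaults),
            new MvcRouteHandler());
        routes.Add(name, route);
        return route;
    }

    public static Route MapRouteLowercase(this AreaRegistrationContext context,
        string name, string url, object defaults)
    {
        var route = context.Routes.MapRouteLowercase(name, url, defaults);

        // Without the area name in DataTokens, controllers and views that
        // live inside the area will not resolve.
        route.DataTokens = route.DataTokens ?? new RouteValueDictionary();
        route.DataTokens["area"] = context.AreaName;
        return route;
    }
}
```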

And that's all you really need. Of course, you may want to add more extension methods of your own so that you can add route constraints or any other data that your route handler may need (such as namespace differentiators). But I'll leave that for you to flesh out on your own. Happy coding!

Tuesday, 1 November 2011

Routing is one of the things in the MVC framework that seems to just 'get me' every once in a while. Routing isn't that hard - it's pretty straightforward stuff. Routing is something you should, ideally, set up once - and then forget about. In order to do this, though, you need to have a plan up front - and you don't always have that (personally I think it's best if you've got a site map and a URL schema worked out before you start). If you add routes as you need them you can end up with more routes than you need and route conflicts that you didn't foresee.

This isn't intended as a post about MVC routing in general. Rather I wanted to post about a little piece of work I had to do in order to make use of two routes that conflict with one another. I think these routes highlight the most common problem people have when they're getting to grips with MVC routes; your URL is matched by the wrong route entry.

I like to keep things simple, and I don't like making extra work for myself. As such I like the default route that any Visual Studio MVC project comes set up with:
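That default route registration, as generated by the Visual Studio project template, looks like this:

```csharp
routes.MapRoute(
    "Default",                                            // Route name
    "{controller}/{action}/{id}",                         // URL with parameters
    new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Defaults
);
```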

This route entry is simple to understand and you can build large applications based entirely on this single route entry. Of course, you may want to add some "pretty" routes as well, such as "/login" or "/signup" - but that single route is really all you need. That is, until you decide that not all your URLs should be of the form "{controller}/{action}/{id}".
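The second route maps a URL straight onto a controller's Index action; the route name here is made up:

```csharp
routes.MapRoute(
    "EntityById",
    "{controller}/{id}",
    new { action = "Index" }   // always route to the Index action
);
```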

This second route is important to my application because it's used to display certain entities by their unique identifier. Such entities can be a member of the site, a club, or a store. Example URLs are:

As you can tell from the example URLs, several controllers can be mapped by this route and, if they are, the route should always map to the "index" action method and pass in the "id" parameter. But, if you add these routes to your application, you'll run into trouble because they conflict with each other. Let's look at why.

An incoming URL will be matched by one route and one route only. Routes are examined one by one and the first match is the one that will be used. This means that the order in which we add routes is important. We always want our most specific routes to be listed first. Let's apply this principle to our two routes:
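With the most specific route registered first, the two entries might look like this:

```csharp
// Most specific route first...
routes.MapRoute(
    "EntityById",
    "{controller}/{id}",
    new { action = "Index" }
);

// ...then the catch-all default route.
routes.MapRoute(
    "Default",
    "{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);
```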

Here I've added the "{controller}/{id}" route first because I feel this is the more specific of the two routes. Now let's look at a sample URL:

http://mywebsite.com/members/oyvind

If we break this URL down you'll see that the "/members" section of the URL will map to the "{controller}" portion of both routes. Also, the "/oyvind" section of the URL will map to the "{id}" portion of the first route. Happy days! We have a match! It looks like our two route entries might work after all.

But not so fast. What about this URL?

http://mywebsite.com/account/changepassword

This URL should look familiar to you as it's more of a traditional MVC route; in fact, it's a classic default route for any standard MVC project. But will this work with our two route entries? The "/account" section of the URL will map to the "{controller}" portion of both routes. The "/changepassword" section of the URL will map to both the "{id}" portion of the first route, and the "{action}" portion of the second route. However, because the first match wins, the first route is chosen and the request will end up being directed to the Index action method on the AccountController class, with an id parameter of "changepassword"... this isn't what we intended.

So what if we swap the routes and register the default route first? The "/account" section of the URL will map to the "{controller}" portion of both routes. The "/changepassword" section of the URL will map to both the "{action}" portion of the first route, and the "{id}" portion of the second route. In this case our first route will be selected - and the request will be directed to the ChangePassword action method on the AccountController (with an empty id). This is the desired result for this URL. But what about this URL?

http://mywebsite.com/members/oyvind

The "/members" section of the URL will map to the "{controller}" portion of both routes. The "/oyvind" section of the URL will map to the "{action}" portion of the first route, and the "{id}" portion of the second route. Because of the order of precedence, the first route will be selected and our request will be directed to the Oyvind action on the MembersController, with an empty id. Most likely we'll end up with a "404 - Not Found" because I doubt very much you'll have an action called Oyvind on any of your controllers.

I need both routes to work, but they clearly conflict with each other and changing the order of the routes doesn't actually help. What can I do? Somehow I need to help the MVC framework understand when to pick one route over the other. Thankfully there's a built-in mechanism we can leverage to help us: the route constraint.

When you add a route to the route table you can specify that this route has certain constraints. A constraint applies to a portion of the route (for example the "{id}" portion) and can set out that this portion has to match certain values, be of a certain format, or exclude specific values. When defining routes you pass the constraints as a third parameter to the MapRoute method:
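A contrived example of a route with an {action} constraint:

```csharp
routes.MapRoute(
    "ConstrainedRoute",
    "{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = UrlParameter.Optional },
    new { action = "MySpecialAction" }   // only match when {action} is MySpecialAction
);
```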

In this contrived example I've specified a route using the standard/default route pattern, but I've specified a constraint for the "{action}" portion of the route. The constraint states that unless "{action}" equals "MySpecialAction" the route will not be matched. This route constraint is actually a regular expression constraint, so if you want to allow "{action}" to include not only "MySpecialAction" but also "YourSpecialAction" you can alter the route entry as follows:
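Because the constraint is a regular expression, alternation does the trick:

```csharp
routes.MapRoute(
    "ConstrainedRoute",
    "{controller}/{action}/{id}",
    new { controller = "Home", action = "Index", id = UrlParameter.Optional },
    new { action = "MySpecialAction|YourSpecialAction" }   // either action matches
);
```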

You can read more about route constraints here (http://www.asp.net/mvc/tutorials/creating-a-route-constraint-cs) as I'm not going to dwell on the specifics here. Rather, I want to get on with the problem at hand. Let's break it down a little:

I want to use the default route "{controller}/{action}/{id}" as much as possible. This is the route I want to base my whole site on.

In some special cases I want the route "{controller}/{id}" to take precedence. At the moment I know that I want this route to take precedence for the MembersController, the ShopsController, and the ClubsController.

I want to be able to define action methods other than Index on the MembersController, ShopsController, and ClubsController - and I want these actions to be matched by the default route.

The problem we encountered with the routes in their raw, unconstrained form is that the "{id}" portion of one route will happily match the "{action}" portion of the other, and vice versa. Since we add the most specific route, "{controller}/{id}", first, we need to make sure that its "{id}" parameter does not match any action methods on the controller. Also, we don't want this route to apply to all controllers, so we need to constrain the "{controller}" portion of the route to the desired controllers. Let's start with the controller constraint. We want this route to apply only to the ClubsController, ShopsController, and MembersController:
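The controller constraint is, again, just a regular expression:

```csharp
routes.MapRoute(
    "EntityById",
    "{controller}/{id}",
    new { action = "Index" },
    new { controller = "Clubs|Members|Shops" }   // only these controllers
);
```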

The other constraint, however, is a little bit more involved. The route should not be matched if the "{id}" portion of the URL matches any of the action methods on any of the controllers that this route applies to. We can do this by applying another regex constraint which contains the names of all the action methods on these controllers:
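A contrived sketch using a negative-lookahead regex on "{id}" with the action names hard-coded; the action names listed here are invented:

```csharp
routes.MapRoute(
    "EntityById",
    "{controller}/{id}",
    new { action = "Index" },
    new
    {
        controller = "Clubs|Members|Shops",
        // Reject any {id} equal to a known action name
        // (route regex constraints are case-insensitive):
        id = @"^(?!(index|join|leave|search|create|edit|delete)$).*$"
    }
);
```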

The above example is contrived - but it attempts to highlight a problem with the approach. Not only will this list of action names grow very quickly, you will also have to remember to add action names to this list whenever you add an action on any of the controllers (or modify the list if you change the name of any of the action methods). This approach will work, but it's not a very maintainable solution.

A better approach would be to use a custom route constraint. A custom route constraint is a class which implements the IRouteConstraint interface. This interface defines a method called Match:
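The interface lives in System.Web.Routing and looks like this:

```csharp
public interface IRouteConstraint
{
    bool Match(HttpContextBase httpContext, Route route, string parameterName,
               RouteValueDictionary values, RouteDirection routeDirection);
}
```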

The job of the custom route constraint is to decide if a given route parameter is valid for the given route. We want to check a single parameter, "id" against a potentially large list of values. To this end, I've created a ValuesConstraint class:
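A sketch of the ValuesConstraint class as described:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Routing;

public class ValuesConstraint : IRouteConstraint
{
    private readonly IEnumerable<string> _values;
    private readonly bool _match;

    // match = true:  the parameter must be one of the given values.
    // match = false: the parameter must NOT be one of the given values.
    public ValuesConstraint(IEnumerable<string> values, bool match = true)
    {
        _values = values;
        _match = match;
    }

    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        var value = Convert.ToString(values[parameterName]);
        var found = _values.Contains(value, StringComparer.OrdinalIgnoreCase);
        return _match ? found : !found;
    }
}
```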

The ValuesConstraint class is instantiated by passing in the list of values and an optional flag which indicates whether the route parameter should match or not match these values. It can be used in the following fashion:
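In the route registration it might look like this; the action names are still hard-coded at this point:

```csharp
routes.MapRoute(
    "EntityById",
    "{controller}/{id}",
    new { action = "Index" },
    new
    {
        controller = "Clubs|Members|Shops",
        // {id} must not be the name of any action method:
        id = new ValuesConstraint(new[] { "Index", "Join", "Leave" }, match: false)
    }
);
```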

While the above would work, it doesn't actually solve the problem of maintainability, because we're still hard-coding a list of strings representing the action methods on our controllers. So, the final piece of the puzzle is to create a method that outputs a list of all the action methods on controllers of our choosing:
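A sketch of such a helper, using reflection to pick out public instance methods that return an ActionResult; combined with ValuesConstraint it removes the hard-coded list:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Web.Mvc;

public static class ControllerReflection
{
    public static IEnumerable<string> GetActionNames(params Type[] controllerTypes)
    {
        return controllerTypes
            .SelectMany(t => t.GetMethods(BindingFlags.Public | BindingFlags.Instance))
            .Where(m => typeof(ActionResult).IsAssignableFrom(m.ReturnType))
            .Select(m => m.Name)
            .Distinct(StringComparer.OrdinalIgnoreCase);
    }
}

// ...and in the route registration:
// id = new ValuesConstraint(
//     ControllerReflection.GetActionNames(
//         typeof(ClubsController), typeof(MembersController), typeof(ShopsController)),
//     match: false)
```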

Now our "{controller}/{id}" route will only be matched by URLs where "{controller}" equals "Clubs", "Members", or "Shops" and "{id}" does not equal any action method name on any of these controllers. Any URL that does not match this route will then fall through to our, uhm, default route. Ta-dah!

The whole point of using the DomainTime class is that it makes it easy to test other classes that have some kind of time dependency. To use a contrived example, imagine that you've got a class which will only do its work if the time is between 1am and 2am. It makes the decision (should I work or should I not?) by checking DateTime.Now.

How do you test this? You _could_ run your tests just before 1am, between 1am and 2am, and then again after 2am - but that's just stupid. You should be able to run your tests anywhere, any time, and as many times as you want.

The solution, here, is to make the class depend on DomainTime.Now instead. By doing so, you can override the current time during testing by setting it with the OverrideForTesting property.
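A sketch of what DomainTime might look like, pieced together from the members mentioned in this post (Now, OverrideForTesting, Reset); the implementation details are assumptions:

```csharp
using System;

public static class DomainTime
{
    private static DateTime? _override;

    // The application asks DomainTime for the time instead of DateTime.
    public static DateTime Now
    {
        get { return _override ?? DateTime.UtcNow; }
    }

    // Internal: only visible to test assemblies via InternalsVisibleTo.
    internal static DateTime OverrideForTesting
    {
        set { _override = value; }
    }

    // Call from SetUp/TearDown so one test's time doesn't bleed into the next.
    internal static void Reset()
    {
        _override = null;
    }
}
```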

Note that this property is an internal property. Expose it to your test assemblies by using the

[assembly: InternalsVisibleTo("Your.Test.Assembly")]

directive in the AssemblyInfo class in the assembly where DomainTime resides.

Now you can test to your heart's content. It's just a matter of setting the appropriate DateTime for each test.

One final note: The DomainTime.Reset() method is there for your SetUp() or TearDown() methods so that you can avoid having the DateTime set by one test bleed over into another test.

Last night I did a bit of work on how I bind views to navigation items. I have tended to include information about 'active tabs' as part of the view model (which fits well with the idea of having one model per view) - but I didn't like the hierarchy of view models that emerged from it.

UPDATE: After an anonymous comment on this post I have updated the implementation to use ViewData rather than TempData, as the commenter rightly pointed out that, while the TempData implementation will work, TempData is meant for redirects.

What I ended up doing was sticking a piece of data in the ViewData dictionary and pulling it out in the view to determine which tab should be rendered as 'active'. I created some extension methods for ViewData to do this:
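Sketches of those extension methods; the SetNavElement/GetNavElement names follow the text, while the dictionary key and the generic signatures are assumptions:

```csharp
using System.Web.Mvc;

public static class ViewDataExtensions
{
    private const string NavElementKey = "__NavElement";

    public static void SetNavElement<T>(this ViewDataDictionary viewData, T element)
    {
        viewData[NavElementKey] = element;
    }

    public static T GetNavElement<T>(this ViewDataDictionary viewData)
    {
        // Generics give you type safety even though ViewData stores objects.
        return viewData.ContainsKey(NavElementKey)
            ? (T)viewData[NavElementKey]
            : default(T);
    }
}
```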

Then I created an action filter which I can stick on a controller and/or an action. Notice the AttributeUsage which specifies the allowable targets and that the attribute can be applied more than once (this could be important if you've got more than one menu):
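A sketch of the action filter; the attribute name and the navigation enum are invented for illustration:

```csharp
using System;
using System.Web.Mvc;

public enum MainNav { Home, Profile, Clubs }   // example navigation enum

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]
public class NavElementAttribute : ActionFilterAttribute
{
    private readonly MainNav _element;

    public NavElementAttribute(MainNav element)
    {
        _element = element;
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Stick the active nav element in ViewData for the view to pick up.
        filterContext.Controller.ViewData.SetNavElement(_element);
    }
}
```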

As you can tell I am using enumerations for my different "types" of navigation (main header nav, left-hand nav for profile pages, left-hand nav for club management pages, etc). The thing is, though, you can use anything you like - because ViewData stores things as objects (you'll notice that my ViewData extension methods use generics so you get type safety as well). You could, for example, store an object that holds the state for several layers of navigation if that's what you need.

In the views you just do this:

@{var currentSection = ViewData.GetNavElement();}

and use 'currentSection' however you please. I just use it to determine if I should set a CSS "selected" class on my navigation items.

Wednesday, 5 October 2011

Lately I've been doing a good amount of work on a website feature that requires the output of a relative date and time. For example, an entry written on a user's wall (the website has social aspects) may be annotated with "written by John Smith about two hours ago." It is a relatively straightforward task to accomplish this - but I thought I'd write a few words about some pitfalls you may come across.

Even if you have only ever written the most trivial of applications you are likely to have used the DateTime structure in .NET (if you've written .NET apps, that is). And you're very likely to have used DateTime.Now. Let's take a closer look at the DateTime structure. According to Microsoft's documentation a DateTime "Represents an instant in time, typically expressed as a date and time of day." And DateTime.Now "Gets a DateTime object that is set to the current date and time on this computer, expressed as the local time."

Cool. DateTime.Now is really handy for getting a handle on the current time, and we've all used it. But now I'm going to tell you that you shouldn't.

DateTime.Now returns a DateTime object that represents "the current date and time on this computer, expressed as the local time." As convenient as this may be, it's a potential source of trouble. "This computer" is the computer where the code executes. It could be your desktop machine, a development server, a production server, or a mobile device. The time on that machine depends on where the machine is located (which timezone it is in) and whether or not the machine is affected by daylight savings. So why is this a problem?

The problem is that if you cannot guarantee that all the machines in your infrastructure are in the same timezone and are equally affected (or not affected) by daylight savings, using DateTime.Now in your code will potentially yield different timestamps on different machines even if the call to DateTime.Now was made at exactly the same time on the machines in question. 1pm in Oslo, Norway, on October 5th 2011 is not the same time as 1pm in London, UK, on the same date. "But that's just silly", I hear you say. "All our infrastructure is in the same data centre in one place." OK. Fine. That may very well be the case. But what about your users? Where are they? Are all of them in the same time zone as your servers? And is it likely that you'll never grow beyond having only local users and having only one data centre?

Even if your answer to the above questions is "we'll never scale beyond one data centre and all our users are in the same place and always will be" I think you should keep reading. It might just make your life a little simpler down the road. Just in case.

The problem with DateTime.Now is that it always represents the local time of the machine on which the code executes, and you don't really want to worry about where that machine is, because doing so makes life as a developer painful. What you want to do instead is use DateTime.UtcNow, which returns an instance of DateTime representing the Coordinated Universal Time (UTC) of now. UTC is the local time of the server with any time zone offset and any daylight savings adjustment removed. If you only ever store and use UTC DateTimes then none of your DateTime comparisons will ever have to take into consideration any time differences caused by time zones or daylight savings.

The only thing you now have to worry about is the thing you should worry about, which is displaying the correct date to your end users. You'll have to adjust for the timezone they're in because a date for an event at 12pm in London should be rendered as 1pm for a user in Oslo (they're always an hour ahead of London time).

Regardless of whether you use UTC or not, you'll always have to consider time zones and daylight savings when rendering dates for a user. Using UTC, that's all you'll have to worry about. If you use DateTime.Now, however, you'll also have to ensure that you know what the time offset of that DateTime instance is if you're going to compare it to another date, or if you're going to render it to a user. Pain in the arse (PITA).

I reckon that you should always use UTC times in your applications regardless of what your user base might look like. It makes life simpler from the start, and if you ever need to support users across different timezones you'll be a step ahead.

So, as a rule, this is what I do:

Always call DateTime.UtcNow and never DateTime.Now. (In fact, I don't use DateTime... I use DomainTime which is a wrapper I've created around DateTime. I'll write more about that in a later post).

Always treat DateTimes stored in a database as UTC. This means that when I read a DateTime out of the database I specify that it is a UTC DateTime using the DateTime.SpecifyKind() method. This is very important, because a DateTime materialised from the database has Kind Unspecified and will otherwise be treated as local time.
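For example, when reading dates back by hand (reader here stands for any IDataReader, and the column name is made up):

```csharp
// A DateTime materialised from the database has Kind == Unspecified;
// stamp it as UTC before using it.
var raw = (DateTime)reader["DateCreated"];
var createdUtc = DateTime.SpecifyKind(raw, DateTimeKind.Utc);
```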

Note that if you're using an ORM such as NHibernate you need to tell the ORM that the date should be treated as UTC. With FluentNHibernate this is really simple:

Map(x => x.DateCreated).CustomType<UtcDateTimeType>().Not.Nullable();

Lastly, before displaying the date to the user, I apply the time difference between UTC and the user's location. There are several ways of doing this. For example you can have your users tell you which time zone they're in and you can apply the offset. Or, if your users are web users you can use Javascript's Date.getTimezoneOffset() method and apply the difference (in minutes) to your UTC date. Check out this StackOverflow question for some specific pitfalls of that particular method.