Pete's Blog on .NET Development: Using object builder for creating domain objects in unit tests (26 May 2012)<h2>Background</h2>
<p>
When writing a unit test for a specific type, that type often
depends on other types, often domain objects themselves. I have
often seen unit tests construct these domain objects directly,
setting required properties as necessary to set up the fixture for
that specific test.
</p><p>
In some cases that can be fine, but if the required parameters of a
constructor change, you can end up having to modify many unit tests
to reflect this change. You can also be in a position where
some properties need to be set within a range of valid values in order
for the object itself to be in the valid state required for the test. As
the system is built upon, the criteria that make up a valid domain
object can change, and that can ripple through the entire test suite.
</p><p>
In a recent project I worked on, we had a very strict programming
model for the domain objects. All properties on a domain object that were either
required or read-only had to be passed as constructor parameters. This ensured
two things. When a new domain object had been constructed, we could be
sure that it was in a consistent state, i.e. no non-nullable values
would be null or unassigned. We could also be sure that read-only
properties could not be modified.
</p><p>
That programming model has both advantages and disadvantages. It
makes things a little less flexible for unit tests, and the
aforementioned problem can become a big one, as the
parameters of the constructors change regularly.
</p><p>
Another problem that could arise was that certain combinations
of properties would be invalid. For example, we had a class with a numeric Value
property, but also numeric MinValue and MaxValue properties that
specified the valid range for the Value property. Setting
the Value property to something outside the valid range would throw an exception.
</p><p>
Therefore, if one test requires a specific value for the Value
property, that test also has to set the MinValue and MaxValue
properties, even though the actual values of those properties have no
bearing on the outcome of the test.
</p><p>
This results in polluted tests, as the extra property initialization
code hides from the reader what is actually important for the
behaviour being tested, especially from the test-as-documentation
point of view.
</p><p>
Therefore, from very early on in the project, we relied on helper
classes to construct domain objects in the required state, ensuring
that the returned domain object was always valid. So when asking such
a helper class for an object with a specific Value, it would
automatically make sure the MinValue and MaxValue properties were set
to a range containing the required Value.
</p><p>
As the project progressed, we developed a builder pattern
relying on lambdas for constructing the domain objects. Although the
pattern required a little more code to begin with, it resulted in
clean and easily maintainable unit tests, and in a high level of code
reuse.
</p><p>
I will not be reusing code from that actual project, but I will give
some examples using easily understood domain objects.
</p>
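<p>
To make the pollution concrete, a test of the kind described above could look
like this. The Measurement class and its properties are hypothetical, purely
for illustration:
</p>
<pre name="code" class="csharp">
[Test]
public void ShouldFormatValue()
{
    // MinValue and MaxValue are only set to keep the object in a valid
    // state; they have no bearing on the behaviour under test.
    var measurement = new Measurement { MinValue = 0, MaxValue = 100, Value = 42 };
    Assert.That(measurement.FormattedValue, Is.EqualTo("42"));
}
</pre>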
<h2>The Pattern</h2>
<p>
A simple builder class that could build a User domain object could
look like this:
</p>
<pre name="code" class="csharp">
public class UserBuilder
{
    private string _firstName;
    private string _lastName;

    public void SetFirstName(string firstName)
    {
        _firstName = firstName;
    }

    public void SetLastName(string lastName)
    {
        _lastName = lastName;
    }

    public User GetUser()
    {
        return new User
        {
            FirstName = _firstName,
            LastName = _lastName
        };
    }

    public static User Build(Action&lt;UserBuilder&gt; buildAction = null)
    {
        var builder = new UserBuilder();
        if (buildAction != null)
            buildAction(builder);
        return builder.GetUser();
    }
}
</pre>
<p>
A simple unit test could look like this
</p>
<pre name="code" class="csharp">
[TestFixture]
public class UserFormTest
{
    [Test]
    public void ShouldInitializeFirstNameTextBox()
    {
        var user = UserBuilder.Build(x =&gt; x.SetFirstName("First"));
        var form = new UserForm(user);
        Assert.That(form.FirstName.Text, Is.EqualTo("First"));
    }

    [Test]
    public void FormShouldBeValidWhenValidUserEntered()
    {
        var user = UserBuilder.Build();
        Assume.That(user.Valid);
        var form = new UserForm(user);
        Assert.That(form.Valid, Is.True);
    }
}
</pre>
<p>
Let's look at what the first test communicates to the reader. It
communicates that a UserForm requires a user. It also communicates
that the first name of the user is of importance to the outcome of the test.
</p><p>
The last test is based on the assumption that the UserBuilder always
returns a user that is a valid object.
</p><p>
But let's say we add a new property,
Email, to the User class, and a value for this property is required
for the User to be valid. This addition will cause the last test
to fail.
</p><p>
So let's extend the user builder:
</p>
<pre name="code" class="csharp">
public class UserBuilder
{
    private string _email;
    ...

    public void SetEmail(string email)
    {
        _email = email;
    }

    public User GetUser()
    {
        return new User
        {
            FirstName = _firstName,
            LastName = _lastName,
            // Fall back to a dummy value so the user is always valid.
            Email = _email ?? "johndoe@example.com"
        };
    }
}
</pre>
<p>
Now, the UserBuilder will always return a User with an initialized
email address. If a test requires the email to have a specific value,
it can set the value, otherwise a dummy value will be used.
</p>
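<p>
As a sketch (the UserForm.Email control is assumed here, by analogy with the
FirstName control used earlier), a test that does care about the email can set
it explicitly, while all other tests keep relying on the default:
</p>
<pre name="code" class="csharp">
[Test]
public void ShouldInitializeEmailTextBox()
{
    // Only the email matters for this test; the builder supplies
    // valid defaults for every other required property.
    var user = UserBuilder.Build(x =&gt; x.SetEmail("jane@example.com"));
    var form = new UserForm(user);
    Assert.That(form.Email.Text, Is.EqualTo("jane@example.com"));
}
</pre>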
<h2>Reusability of build actions.</h2>
<p>
Let's look at a different type of system. Imagine that we are building
an ASP.NET MVC application and are writing the
UserController. In this case the controller returns a view model that
the view can render. Some prefer to always wrap the data the view
requires in a view model; others find that a waste of code. But here
the controller returns a view model simply to keep the demonstration
test small.
</p>
<pre name="code" class="csharp">
[TestFixture]
public class UserControllerTest
{
    Mock&lt;IUserRepository&gt; _repositoryMock;
    UserController _controller;

    [SetUp]
    public void Setup()
    {
        _repositoryMock = new Mock&lt;IUserRepository&gt;();
        _controller = new UserController(_repositoryMock.Object);
    }

    [Test]
    public void ShouldReturnViewModelWithCorrectFirstName()
    {
        var user = UserBuilder.Build(x =&gt; { x.SetId(42); x.SetFirstName("First"); });
        _repositoryMock.Setup(x =&gt; x.Get(42)).Returns(user);
        var viewModel = _controller.Detail(42).ViewData.Model as UserViewModel;
        Assert.That(viewModel.FirstName, Is.EqualTo("First"));
    }

    [Test]
    public void ShouldReturnViewModelWithCorrectLastName()
    {
        var user = UserBuilder.Build(x =&gt; { x.SetId(42); x.SetLastName("Last"); });
        _repositoryMock.Setup(x =&gt; x.Get(42)).Returns(user);
        var viewModel = _controller.Detail(42).ViewData.Model as UserViewModel;
        Assert.That(viewModel.LastName, Is.EqualTo("Last"));
    }
}
</pre>
<p>
One could say that having a separate test for each property is a
waste of code. You could just set up all required properties in one
go, and have multiple asserts at the end of the test. Personally, I
prefer to have a separate test for each assertion. But then a little
code duplication needs to be factored out.
</p>
<pre name="code" class="csharp">
[TestFixture]
public class UserControllerTest
{
    Mock&lt;IUserRepository&gt; _repositoryMock;
    UserController _controller;

    [SetUp]
    public void Setup()
    {
        _repositoryMock = new Mock&lt;IUserRepository&gt;();
        _controller = new UserController(_repositoryMock.Object);
    }

    public User SetupUserInMockRepository(Action&lt;UserBuilder&gt; buildAction)
    {
        var user = UserBuilder.Build(x =&gt;
        {
            x.SetId(42);
            buildAction(x);
        });
        _repositoryMock.Setup(x =&gt; x.Get(42)).Returns(user);
        return user;
    }

    [Test]
    public void ShouldReturnViewModelWithCorrectFirstName()
    {
        var user = SetupUserInMockRepository(x =&gt; x.SetFirstName("First"));
        var viewModel = _controller.Detail(user.Id).ViewData.Model as UserViewModel;
        Assert.That(viewModel.FirstName, Is.EqualTo("First"));
    }

    [Test]
    public void ShouldReturnViewModelWithCorrectLastName()
    {
        var user = SetupUserInMockRepository(x =&gt; x.SetLastName("Last"));
        var viewModel = _controller.Detail(user.Id).ViewData.Model as UserViewModel;
        Assert.That(viewModel.LastName, Is.EqualTo("Last"));
    }
}
</pre>
<p>
What is interesting here is that the helper function takes a
build action and adds to it. It becomes very
easy to create new helper functions that don't take a User as a
parameter, but instead take a much more general specification of how
the user should look.
</p><p>
Let's take another example, the repository itself. In this case the
repository is also the data access layer, so unit testing this
repository actually means storing and loading data from the database,
verifying that the SQL queries are correct.
</p>
<pre name="code" class="csharp">
[TestFixture]
public class UserRepositoryTest
{
    // It is assumed that the database is cleared between each test.
    public User CreateUserInDb(Action&lt;UserBuilder&gt; buildAction)
    {
        var user = UserBuilder.Build(buildAction);
        var repository = new UserRepository();
        repository.AddUser(user);
        repository.Commit();
        return user;
    }

    [Test]
    public void PartialSearchByFirstNameReturnsUser()
    {
        var user = CreateUserInDb(x =&gt; x.SetFirstName("First"));
        var repository = new UserRepository();
        var users = repository.FindByFirstName("F");
        Assert.That(users.Single().Id, Is.EqualTo(user.Id));
    }

    [Test]
    public void PartialSearchByFirstNameDoesntReturnUser()
    {
        var user = CreateUserInDb(x =&gt; x.SetFirstName("First"));
        var repository = new UserRepository();
        var users = repository.FindByFirstName("X");
        Assert.That(users.Count(), Is.EqualTo(0));
    }
}
</pre>
<h2>Nested builders</h2>
<p>
If your domain objects reference each other, you can let different
builders use each other. Imagine a Document class that takes the User
who created the document as constructor parameter, and the user cannot
be null.
</p>
<pre name="code" class="csharp">
public class DocumentBuilder
{
    List&lt;Action&lt;UserBuilder&gt;&gt; _userBuildActions = new List&lt;Action&lt;UserBuilder&gt;&gt;();

    public void BuildUser(Action&lt;UserBuilder&gt; buildAction)
    {
        _userBuildActions.Add(buildAction);
    }
    ...
    public Document GetDocument()
    {
        var userBuilder = new UserBuilder();
        _userBuildActions.ForEach(x =&gt; x(userBuilder));
        var user = userBuilder.GetUser();
        return new Document(user);
    }

    // Static entry point, analogous to UserBuilder.Build.
    public static Document Build(Action&lt;DocumentBuilder&gt; buildAction = null)
    {
        var builder = new DocumentBuilder();
        if (buildAction != null)
            buildAction(builder);
        return builder.GetDocument();
    }
}
</pre>
<p>
Using this builder we can create a document and specify what the
creator should look like, e.g. in this test for a document form:
</p>
<pre name="code" class="csharp">
[TestFixture]
public class DocumentFormTest
{
    [Test]
    public void ShouldDisplayOwnerName()
    {
        var document = DocumentBuilder.Build(x =&gt; x.BuildUser(y =&gt; y.SetFullName("John Doe")));
        var documentForm = new DocumentForm(document);
        Assert.That(documentForm.UserNameLabel.Text, Is.EqualTo("John Doe"));
    }
}
</pre>
<p>
We can create some extension methods to make this test easier to read.
</p>
<pre name="code" class="csharp">
public static class DocumentBuilderExtensions
{
    public static void SetCreatorFullName(this DocumentBuilder builder,
                                          string name)
    {
        builder.BuildUser(x =&gt; x.SetFullName(name));
    }
}

[TestFixture]
public class DocumentFormTest
{
    [Test]
    public void ShouldDisplayCreatorName()
    {
        var document = DocumentBuilder.Build(x =&gt; x.SetCreatorFullName("John Doe"));
        var documentForm = new DocumentForm(document);
        Assert.That(documentForm.UserNameLabel.Text, Is.EqualTo("John Doe"));
    }
}
</pre>
<p>
Not only have the helper functions reduced the code for the
individual test to a minimum, they also very clearly communicate its
intent: the name of the person who created the document should be
displayed on the form. And they abstract away from the test exactly
how the creator's name is stored.
</p><p>
But the most important quality of this pattern is that the unit tests
relying on these builders to construct the required domain objects
have proven to need very little maintenance as the system is expanded
and code is refactored. That is a huge gain.
</p>Implementing a simple preprocessor in F# using FsLex (17 Jul 2010)<h2>Introduction</h2>
<p>In this article, I will show how two different lexers can work together to create a tokenizer with preprocessing functionality.</p>
<p>This is not a perfect solution for implementing the preprocessor used in programming languages like C or C++, but if you have some simpler tasks, this can provide an elegant solution; much simpler than you can achieve by a single lexer/parser pair. In reality this will generate a preprocessing lexer->tokenizing lexer->parser pipeline.
</p>
<p>This article assumes that the reader is familiar with FsLex and FsYacc</p>
<h2>The problem domain</h2>
<p>First let me describe the problem that I have and the need for a preprocessor. My own project is a case of auto-generating code from a single definitions file describing the classes and attributes of the domain. Such a file could look like this:</p>
<pre>
entity User
field ID int
field FirstName string[250]
field LastName string[250]
field Email string[250]
generator DomainObject .\MyProject.Domain
generator DataAccess .\MyProject.DataAccess
generator TableScript .\DatabaseScripts\Schemas\dbo\Tables
end
</pre>
<p>This file defines that I have a User in my domain model; that I need a domain object (a C# class) generated implementing the user; that I need a data access object that can save and load a user to/from the database; and that I need a table script that can create a user table in the database. The line</p>
<pre>
generator DomainObject .\MyProject.Domain
</pre>
<p>defines a type of code generator (DomainObject) and a destination folder (.\MyProject.Domain) where the generated file will be saved.</p>
<p>The three different code generators take care of generating a great deal of trivial code for me, allowing me to concentrate on the important part, domain logic.</p>
<p>Now let's add another domain object to the script</p>
<pre>
entity User
field ID int
field FirstName string[250]
field LastName string[250]
field Email string[250]
generator DomainObject .\MyProject.Domain
generator DataAccess .\MyProject.DataAccess
generator TableScript .\DatabaseScripts\Schemas\dbo\Tables
end
entity UserLog
field ID int
field UserID int
field Action string[250]
field ActionDate datetime
generator DomainObject .\MyProject.Domain
generator DataAccess .\MyProject.DataAccess
generator TableScript .\DatabaseScripts\Schemas\dbo\Tables
end
</pre>
<p>Now we see some duplication appearing: the same destination folders are being used across entities. If the project is restructured so that files move to new folders, the same change will have to be carried out in many places. Adding the possibility to define a variable removes this duplication.</p>
<pre>
@DomainOutputFolder=.\MyProject.Domain
@DataAccessOutputFolder=.\MyProject.DataAccess
@TableOutputFolder= .\DatabaseScripts\Schemas\dbo\Tables
entity User
field ID int
field FirstName string[250]
field LastName string[250]
field Email string[250]
generator DomainObject $(DomainOutputFolder)
generator DataAccess $(DataAccessOutputFolder)
generator TableScript $(TableOutputFolder)
end
entity UserLog
field ID int
field UserID int
field Action string[250]
field ActionDate datetime
generator DomainObject $(DomainOutputFolder)
generator DataAccess $(DataAccessOutputFolder)
generator TableScript $(TableOutputFolder)
end
</pre>
<p>Now the output folders are stored in variables, so changing an output folder is a lot easier. It also gives a better overview of what is generated and where, since the destination folders are declared at the beginning of the file.</p>
<p>The domain specification script without variables is the format that my tokenizing lexer and parser wants, so the job of my preprocessor is to transform the latter domain specification script with variables to the former script without variables.</p>
<h2>Implementing the Preprocessor</h2>
<p>Normally when generating a lexer/parser pair, the lexer emits a series of tokens that the parser can recognize and use to build up an abstract syntax tree. But there is no specification on what type of data the lexer needs to return. It can return integers, strings, or whatever you desire. You can even get the lexer to return an abstract syntax tree directly (with the result that it is completely impossible to understand)</p>
<p>My solution revolves around a lexer that reads character data and returns character data. In order for the output of the preprocessing lexer to be used as input to the tokenizing lexer, the result must be wrapped in a package that the lexer understands. In this example, I have wrapped the lexer in a TextReader implementation.</p>
<p>I will start by showing how this is actually used in the project, as it will provide the context in order to more easily understand the solution. Here is first the parsing routine without the preprocessor (I have used the --unicode flag for FsLex, which generates a lexer that operates on char data, not byte data)</p>
<pre>
let reader = new System.IO.StreamReader (filename)
let lexBuffer = LexBuffer&lt;char&gt;.FromTextReader reader
Parser.start Lexer.token lexBuffer
</pre>
<p>Here is the parsing routine with the preprocessor added</p>
<pre>
let reader = new System.IO.StreamReader (filename)
let preprocessingReader = new PreprocessingTextReader(reader)
let lexBuffer = LexBuffer&lt;char&gt;.FromTextReader preprocessingReader
Parser.start Lexer.token lexBuffer
</pre>
<p>Notice that the PreprocessingTextReader is simply a decorator. This implies that the preprocessor is usable in other contexts; it simply transforms an input text stream.</p>
<p>The solution thus has two components, a preprocessing lexer, and a specialized TextReader class that uses the preprocessing lexer to deliver the processed result.</p>
<h3>Preprocessor.fsl</h3>
<p>Let's first look at the actual preprocessing lexer, which makes up the core of the preprocessor. This is the component that actually converts the input stream.</p>
<p>The lexer output is a char array, so every matched pattern should return a char array.</p>
<p>First I will show the file, and then go into details with each element. As I said, I assume that the reader is familiar with FsLex and FsYacc, so I will not describe FsLex basics here.</p>
<pre>
{
module Preprocessor

open System.Collections.Generic
open Microsoft.FSharp.Text.Lexing

let variables = new Dictionary&lt;string, string&gt;()

let lexeme = Lexing.LexBuffer&lt;_&gt;.LexemeString

let parseConstant lexbuf =
    let input = lexeme lexbuf
    // Extract "a=b" in "@a=b"
    let s = input.Substring(1)
    let parts = s.Split('=')
    variables.Add(parts.[0], parts.[1])

let resolveConstant lexbuf =
    let input = lexeme lexbuf
    // Extract "xyz" in "$(xyz)"
    let variable = input.Substring(2, input.Length - 3)
    variables.[variable].ToCharArray()
}

let char = ['a'-'z' 'A'-'Z']
let identifier = char*
let nonNewlines = [^ '\r' '\n']*

rule preProcess = parse
| eof { [||] }
| "@"identifier"="(nonNewlines) { parseConstant lexbuf; [||] }
| "$("identifier")" { resolveConstant lexbuf }
| _ { lexbuf.Lexeme }
</pre>
<p>In order to implement the variables, I must go down the imperative programming style and introduce static mutable data in the lexer in the form of a dictionary that can hold variable names and variable values.</p>
<pre>
let variables = new Dictionary&lt;string, string&gt;()
</pre>
<p>Followed by this are two helper functions. The first adds a variable to the dictionary. The second retrieves a variable from the dictionary, and returns it as a char array.</p>
<pre>
let parseConstant lexbuf =
    let input = lexeme lexbuf
    // Extract "a=b" in "@a=b"
    let s = input.Substring(1)
    let parts = s.Split('=')
    variables.Add(parts.[0], parts.[1])

let resolveConstant lexbuf =
    let input = lexeme lexbuf
    // Extract "xyz" in "$(xyz)"
    let variable = input.Substring(2, input.Length - 3)
    variables.[variable].ToCharArray()
</pre>
<p>Then come the lexer rules. The first rule is pretty simple: the end-of-file indicator simply returns an empty array.</p>
<pre>
rule preProcess = parse
| eof { [||] }
</pre>
<p>The second rule identifies a variable definition. Upon seeing this, the previously shown helper function parseConstant is called. Note that multiple lexer states could possibly have made this implementation clearer, e.g. eliminating the need for splitting strings and removing the @ character in the parseConstant function. But this implementation doesn't use that. The return value is an empty array.</p>
<pre>
| "@"identifier"="(nonNewlines) { parseConstant lexbuf; [||] }
</pre>
<p>The third rule is the one that actually looks up the variable, again using the previously defined helper function.</p>
<pre>
| "$("identifier")" { resolveConstant lexbuf }
</pre>
<p>And last, any other lexeme is returned as is. Because the lexer is generated with the --unicode flag, the lexeme is a char array, and can therefore be returned as is.</p>
<pre>
| _ { lexbuf.Lexeme }
</pre>
<h3>PreprocessingReader.fs</h3>
<p>The PreprocessingReader class is a specialization of the TextReader
class that uses the preprocessing lexer and exposes it as a
TextReader. Seeing how the preprocessing lexer was implemented, it is
clear that the reader should function like this.</p>
<ul>
<li>The reader has an internal buffer of char data read from the lexer</li>
<li>When the application reads from the reader, it will return data from the buffer</li>
<li>If there is no data in the buffer when the application reads from the reader, the reader will read data from the lexer into the buffer first.</li>
<li>When reading data from the preprocessing lexer into the buffer,
the function should be able to indicate if we have hit the end of
file.</li>
</ul>
<p>The buffer in this case is implemented by a Queue&lt;char&gt; (from the System.Collections.Generic namespace). The buffer is filled by the function checkQueue(). This function will put data in the queue if necessary. It returns true if there is more data to process, and false if the queue has been emptied and the preprocessing lexer has passed end of file.</p>
<p>Since performance is of no concern in this specific application, the implementation is a minimal one, one that simply implements the Read() and Peek() functions.</p>
<pre>
module PreprocessingReader

open System
open System.IO
open System.Collections.Generic
open Microsoft.FSharp.Text.Lexing

type PreprocessingTextReader (sourceReader: StreamReader) =
    inherit TextReader()
    let reader = sourceReader
    let lexBuffer = LexBuffer&lt;char&gt;.FromTextReader reader
    let queue = new Queue&lt;char&gt;()

    // Checks if there is more data to read. If the queue is empty, it
    // will check if there is more data in the input file, and add it to
    // the queue.
    // If there is more data to process, the function returns true. If
    // all the data in the input file has been processed, it returns false.
    let rec checkQueue () =
        if queue.Count > 0 then
            true
        elif lexBuffer.IsPastEndOfStream then
            false
        else
            let s = Preprocessor.preProcess lexBuffer
            if (s.Length = 0) then
                checkQueue()
            else
                for c in s do
                    queue.Enqueue c
                true

    override x.Dispose(disposing) =
        if disposing then reader.Dispose()

    override x.Peek() : int =
        if checkQueue() then
            queue.Peek() |> Convert.ToInt32
        else
            -1

    override x.Read() : int =
        if checkQueue() then
            queue.Dequeue() |> Convert.ToInt32
        else
            -1
</pre>
<h2>Conclusion</h2>
<p>I believe that this is an elegant solution to my problem in the context that it is given. But using it in a more complex context could cause problems.</p>
<p>The first problem with this code is the lack of performance optimization. This implementation reads only a single character at a time. Letting the preprocessor be able to read larger chunks of data at a time would probably yield better performance. But in my case it takes less than a second to execute at compile time if there are changes to the input file, so performance is not a problem here.</p>
<p>And second, the preprocessor messes up the line numbering. If you are building a compiler and want to show a line number for compilation errors, then a specific line returned by the preprocessor would not necessarily have the same line number in the original file, making it difficult for the user of that compiler to identify where in his/her source code there is a bug.</p>
<p>And third, there is no handling of a variable lookup when the variable is not defined. This implementation will simply fail with an exception.</p>
<p>So if you are building advanced parsers and compilers, or are using this to analyze text runtime in a production environment where performance is critical, then you need to resolve those issues before using this type of implementation.</p>
<p>But if these issues don't apply to you, then this is a very simple, clean implementation of a preprocessor, where the concerns have been clearly separated. This preprocessor can easily be extended to implement include files, conditional compilation, and much more. And the preprocessor can easily be used in a different context; it does not need to be used with a parser.</p>
/Pete
Generic Handlers and ASP.NET routing (14 Sep 2009)<p>
This article will discuss the use of generic handlers, such as .ashx files, in a web project using ASP.NET routing.
This article is based on experience with the .NET 4.0 Beta 1 framework. It will probably not apply to .NET 3.5 projects,
and although I believe the general concept will still be valid in future versions of ASP.NET, the next version may
contain a fix that makes a workaround described here obsolete.
</p>
<p>
Should any of the functionality described in here be affected by subsequent betas or the final version of .NET 4.0
I will keep this post updated.
</p>
<h2>The Generic Handler</h2>
<p>
ASP.NET has long had the option of creating a file that serves data other than HTML. This could be images
generated dynamically, files that come from a binary field in a database, runtime compression
of javascript files, etc. The uses are many.
</p>
<p>
The easy way to implement such a generic handler is to create an .ashx file and implement its
ProcessRequest method. You could also write a class directly that implements the IHttpHandler
interface, and then register it in web.config for a specific extension, for example.
</p>
<p>
In both cases, you need to implement the <tt>IHttpHandler</tt> interface.
</p>
<pre name="code" class="csharp">
public interface IHttpHandler
{
    void ProcessRequest(HttpContext context);
    bool IsReusable { get; }
}
</pre>
<p>
The interesting member is <tt>ProcessRequest</tt>, where the implementation writes the actual data,
text or binary, to the output.</p>
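<p>
For instance, a minimal handler serving plain text could look like this (a
sketch of my own, not taken from a real project):
</p>
<pre name="code" class="csharp">
public class HelloHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Set the content type before writing the response body.
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello from a generic handler");
    }

    // Returning true lets ASP.NET reuse this instance across requests,
    // which is safe because the handler holds no per-request state.
    public bool IsReusable
    {
        get { return true; }
    }
}
</pre>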
<p>
With the routing feature in .NET 4.0, it would be natural to map a route to either the .ashx file or just a class
that implements the <tt>IHttpHandler</tt> interface. The first case is unfortunately not directly possible, but the second
case is very easy to implement. And compared to pre-routing possibilities, you don't need to create mappings in web.config.
</p>
<p>
I have not really spent a long time investigating how to get .ashx files to work with routing,
because I'm much happier with just having a class implementing <tt>IHttpHandler</tt>. There may
therefore be easy solutions for this scenario. But even if there were, I wouldn't use them.
</p>
<h2>Why can't we use .ashx files?</h2>
<p>
In the .NET 4.0 Beta 1 framework you have to add routes to the route table, using</p>
<pre name="code" class="csharp">
Routes.Add(string routeName, RouteBase route)
</pre>
<p>
The <tt>RouteBase</tt> class is an abstract class in the framework, and there is only one concrete implementation, the
<tt>Route</tt> class. This class is constructed through one of the constructors:
</p>
<pre name="code" class="csharp">
public Route(string url, IRouteHandler routeHandler);
public Route(string url, RouteValueDictionary defaults, IRouteHandler routeHandler);
public Route(string url, RouteValueDictionary defaults, RouteValueDictionary constraints, IRouteHandler routeHandler);
public Route(string url, RouteValueDictionary defaults, RouteValueDictionary constraints, RouteValueDictionary dataTokens, IRouteHandler routeHandler);
</pre>
<p>
In every case you need an object that implements <tt>IRouteHandler</tt>. Again, only one implementation of this interface exists, the
<tt>PageRouteHandler</tt>, which easily routes to an .aspx file.
This handler does not accept an .ashx file, however, as it is not a <tt>Page</tt>.
</p>
<p>
So to use .ashx files, you'd need the non-existing <tt>AshxRouteHandler</tt>. But since I don't miss it, I haven't spent time
investigating how difficult it would be to write.
</p>
<h2>Implementing Routes for an IHttpHandler implementation</h2>
<p>
Since I have dismissed the usage of .ashx files, I will now go into the solution that I use and like:
creating a route for a class that implements the <tt>IHttpHandler</tt> interface. To accomplish this, all we need to do is create a class
that implements <tt>IRouteHandler</tt>. This interface only has a single function, which is quite easy to implement.
</p>
<pre name="code" class="csharp">
// This is the actual http handler, the class that should handle the actual requests.
public class MyHandler : IHttpHandler
{
    ...
}

public class MyHandlerRouteHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        return new MyHandler();
    }
}

// Map the route to extract an image with a specific ID.
// This code is added where all the other routes are mapped.
RouteTable.Routes.Add(new Route("Image/{ImageID}", new MyHandlerRouteHandler()));
</pre>
<p>
That is basically it. There is one problem, however: we cannot inspect the route values in our handler. Normally you
would retrieve the route values like this:</p>
<pre name="code" class="csharp">
public class MyHandler
{
    public void ProcessRequest(HttpContext context)
    {
        int imageID = int.Parse((string)context.Request.RequestContext.RouteData.Values["ImageID"]);
    }
}
</pre>
<p>
The <tt>Request.RequestContext</tt> property is not initialized however.
</p>
<p>If you use Reflector to reverse engineer and inspect the .NET framework, you can see that the <tt>PageRouteHandler</tt>
class injects this RequestContext itself:
</p>
<pre name="code" class="csharp">
public virtual IHttpHandler GetHttpHandler(RequestContext requestContext)
{
    ...
    page.Context.Request.RequestContext = requestContext;
    ...
}
</pre>
<p>
But because the <tt>RequestContext</tt> property setter is marked as internal, we cannot do this in our own implementation
of <tt>IHttpHandler</tt>, so we have to create a workaround for this (or maybe more correctly, a different workaround than the
MS one). I would argue that this behaviour is not logical, and the <tt>RequestContext</tt> property should be set automatically
by the framework. Hopefully this will be changed before the release. Here is a modified implementation of the above
code that correctly retrieves the route data.
</p>
<pre name="code" class="csharp">
public class MyHandler : IHttpHandler
{
    public RequestContext RequestContext { get; set; }

    public void ProcessRequest(HttpContext context)
    {
        int imageID = int.Parse((string)RequestContext.RouteData.Values["ImageID"]);
        ...
    }
}

public class MyHandlerRouteHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        return new MyHandler() { RequestContext = requestContext };
    }
}
</pre>
<h2>Generic Solution Without a DI Container</h2>
<p>
Here I will present a generic route handler so you don't need to create a new route handler class
for each HTTP handler class in the system. This solution has some requirements that are
unnecessary if you are using a DI (Dependency Injection) container to control instantiation of your classes.
</p>
<pre name="code" class="csharp">
public interface IHttpHandlerBase : IHttpHandler
{
RequestContext RequestContext { get; set; }
}
public class GenericRouteHandler&lt;T&gt; : IRouteHandler
where T : IHttpHandlerBase, new()
{
public IHttpHandler GetHttpHandler(RequestContext requestContext)
{
var retVal = new T();
retVal.RequestContext = requestContext;
return retVal;
}
}
public class ImageHandler : IHttpHandlerBase
{
...
}
// This goes into the route initialization
RouteTable.Routes.Add(new Route("Image/{ImageID}", new GenericRouteHandler&lt;ImageHandler&gt;()));
</pre>
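<p>
For completeness, here is a sketch (illustrative only; the image-loading details are left out) of how the
<tt>ImageHandler</tt> above could use the injected <tt>RequestContext</tt> to read the route value:
</p>
<pre name="code" class="csharp">
public class ImageHandler : IHttpHandlerBase
{
// Set by GenericRouteHandler&lt;T&gt; before ProcessRequest is called.
public RequestContext RequestContext { get; set; }
public bool IsReusable { get { return false; } }
public void ProcessRequest(HttpContext context)
{
// Read the {ImageID} segment captured by the route.
int imageID = int.Parse((string)RequestContext.RouteData.Values["ImageID"]);
// Look up the image and write it to context.Response here.
}
}
</pre>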
<h2>Generic Solution With a DI Container</h2>
<p>
If you are using a DI container, you can remove the need for the extra interface.
It also removes the need for the <tt>where T : new()</tt> generic constraint. I prefer
<a href="http://structuremap.sourceforge.net/">StructureMap</a>
as my DI container. Here is my implementation of a generic route handler using StructureMap.
</p>
<pre name="code" class="csharp">
public class MyImageHandler : IHttpHandler
{
RequestContext _requestContext;
public MyImageHandler(RequestContext requestContext
// , other dependencies that the class needs
)
{
_requestContext = requestContext;
// Store other dependencies
}
public void ProcessRequest(HttpContext context)
{
...
}
}
public class GenericRouteHandler&lt;T&gt; : IRouteHandler
where T : IHttpHandler
{
public IHttpHandler GetHttpHandler(RequestContext requestContext)
{
return ObjectFactory.With(requestContext).GetInstance&lt;T&gt;();
}
}
// Route initialization
RouteTable.Routes.Add(new Route("Image/{ImageID}", new GenericRouteHandler&lt;MyImageHandler&gt;()));
</pre>
<p>
The <tt>With()</tt> function tells StructureMap that during this specific
call, this particular instance of the RequestContext should be used for any
constructor parameter of that particular type. This avoids having to create plugin rules
for <tt>RequestContext</tt> in StructureMap.
</p>
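<p>
As a usage sketch, registering several handlers with the same generic route handler might look like this
(the document handler and its route pattern are made-up illustrations; only the image route exists in my system):
</p>
<pre name="code" class="csharp">
// Each route gets its own GenericRouteHandler&lt;T&gt; instance,
// parameterized with the handler type that should serve it.
RouteTable.Routes.Add(new Route("Image/{ImageID}",
new GenericRouteHandler&lt;MyImageHandler&gt;()));
RouteTable.Routes.Add(new Route("Document/{DocumentID}",
new GenericRouteHandler&lt;MyDocumentHandler&gt;()));
</pre>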
<p>
This class, as it is written here, is the one that I use in my own system.
</p>
<h2>Final Thoughts</h2>
<p>
I've been working with the ASP.NET 4.0 Beta 1 for some months now, and I really appreciate
the built-in routing feature. I find that it works more smoothly than the 3rd-party URL rewriting
modules that I've investigated, most likely because routing is
supported in the core of the framework. Some 3rd-party URL rewriting modules required
workarounds in order to handle postbacks correctly. With the built-in routing
this is never a problem.
</p>
<p>
And with very little work it is easy to extend this functionality to work not only with .aspx
pages, but with any HTTP handler in the system.
</p>
<p>
Pete
</p>
<h1>ASP.NET Web Forms - Receiving Events in a Repeater with ViewState Disabled (26 Aug 2009)</h1>
<p>
Welcome to my blog, and my first blog post, ever. I will use this
blog to post information, tips, tricks, patterns, etc. mainly for the
.NET platform.
</p>
<p>
My first blog post is about overcoming the problem that controls
that can perform postbacks stop working if you place them inside a
repeater and the repeater's viewstate is disabled. The page does
post back, but the event is lost; it never arrives at its
destination. A while ago, I created a workaround for this limitation.
It cannot handle all situations, but it can handle those that I find
to be the most common ones.
</p>
<h2>The Problem</h2>
<p>
Why is the ViewState important for the repeater?
</p>
<p>
When the browser requests a page, a new instance of the specific
page class is created; and every control that this page contains is
also instantiated, and placed in the page's control tree. When all
initialization logic, event handlers, etc. have been processed, the
control tree is rendered to HTML, and the instance that was created
is forgotten. When the same user makes a postback, a new fresh
instance is created for every control in the control tree.
</p>
<p>
The view state is basically a place where the control tree's state is
serialized to, and deserialized from on postback, allowing the page
to recreate its state after the postback without having
to reinitialize every control, thus avoiding expensive
database operations.
</p>
<p>
Say, for example, we have a page with a button and a repeater.
When you click the button, the page does a postback. By
deserializing its state from the view state, the repeater is capable
of recreating its control tree, and is therefore able to render the
same HTML again without having to rebind to the original data.
But the viewstate can take up quite a lot of space in your page.
And say, for example, that you don't have any buttons on the page, but
you have a button inside the repeater, and when you click the button
you modify the underlying data, so you have to rebind the repeater
anyway. In this case, you don't need the viewstate to render
the same HTML again.
</p>
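<p>
In that situation you can simply turn the viewstate off on the repeater itself in the markup
(as I do in the example later in this post):
</p>
<pre name="code" class="xml">
&lt;asp:Repeater runat="server" ID="DataRepeater" EnableViewState="false"&gt;
...
&lt;/asp:Repeater&gt;
</pre>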
<p>
But when you click the button, the ASP.NET framework looks at the
ID of the control that should receive the event, and calls a function
on this control, namely IPostBackEventHandler.RaisePostBackEvent. But
the target for the function is not the repeater itself, but the
button that is located in the control tree inside the repeater. And
because the control tree is not regenerated, there is no one to
receive this event.
</p>
<h2>The Workaround</h2>
<p>
On one project I worked on, we had a page with four repeaters,
each rendering a table with quite a lot of fields. This gave such a
huge viewstate that it was causing performance problems, so I found
this workaround.
</p>
<p>
There are two drawbacks to this solution. The first is that you have to place the
repeater inside its own user control. I personally don't find
this much of a drawback, because I always wrap my repeaters inside their
own user controls to better organize the code.
</p>
<p>
The second drawback is a bit more serious. You cannot just place
any control that has post back events inside the repeater. In fact,
you need to control the event generation yourself. But if your
repeater only does post backs from Button or LinkButton controls,
this should be fairly simple.
</p>
<p>
The reason why you have to wrap the repeater inside a user control
is that the controls inside the repeater are not capable of
receiving the event; we simply cannot change this fact. Therefore we
must direct the event to a control outside the repeater, and that control
happens to be the user control that we wrap the repeater in.
This leads to the
second consequence: that you cannot place just any control inside the
repeater. This is because controls that fire postback events have a
habit of routing the event to themselves. We therefore need to control
the postback javascript code from the user control implementation
itself.
</p>
<p>
That actually means that we cannot even use Button or LinkButton
controls inside the repeater. But their behavior is very easy to
reproduce, so they are easy to implement in your workaround.
But say that you placed a DatePicker control inside your repeater.
Then you would not be able to use this workaround.
Let's move on to the example:
</p>
<h2>The Example</h2>
<p>
This example is created for .NET 4.0 beta 1. I originally created
this pattern for a .NET 2.0 project, so I know that the pattern works
for earlier versions of the framework, but there might of course be
differences in the implementation.
As I have described, I have the repeater wrapped in a user
control. When you create a new user control, there are basically two
ways to implement the user control:
</p>
<ol>
<li>
The user control has the responsibility of getting data from
the data source, and updating data based on events.
</li>
<li>
The page (or a presenter in MVP) has the responsibility of
getting data from the data source, and sends the data to the user
control through a public method on this. Button clicks in the
repeater will cause it to raise an event that the page handles, and
updates the underlying data, and tells the repeater to update
itself.
</li>
</ol>
<p>
Option 2 has a more decoupled design, but also more code. As this
post is not about decoupling application logic, but about events in a
repeater, my example code will use option 1: the repeater loads data
in Page_Load, and updates data directly when the event is received.
</p>
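<p>
As a rough sketch of what option 2 could look like (the member names here are illustrative, not taken from
the sample project), the user control would expose a bind method and an event for the page to handle:
</p>
<pre name="code" class="csharp">
public partial class RepeaterControl : UserControl
{
// Raised when a delete link inside the repeater is clicked;
// carries the ID of the object to delete.
public event Action&lt;int&gt; DeleteRequested;
// The page pushes data into the control instead of the control
// fetching it itself.
public void Bind(IEnumerable&lt;DataClass&gt; items)
{
DataRepeater.DataSource = items;
DataRepeater.DataBind();
}
}
</pre>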
<p>
In my simple example, I load a bunch of objects from a data source
and place them in a repeater-generated table. Each row in the table
has a “delete”-link. Clicking the link will delete the
object and rebind the repeater.
</p>
<p>
Let's first take a look at my “data source”:
</p>
<pre name="code" class="csharp">
namespace RepeaterEvent
{
public class DataClass
{
public int ID { get; set; }
public string A { get; set; }
public string B { get; set; }
}
public static class DataClassContainer
{
public static List&lt;DataClass&gt; Objects;
public static void Init()
{
Objects = new List&lt;DataClass&gt;();
for (int i = 0; i &lt; 20; i++)
{
Objects.Add(new DataClass()
{
ID = i,
A = "A" + i,
B = "B" + i
});
}
}
}
}
</pre>
<p>
The Init() function simply reinitializes the static list, and I
call this function in the Page_Load function in my user control, if
it is not a postback:
</p>
<pre name="code" class="csharp">
protected void Page_Load(object sender, EventArgs e)
{
if (!IsPostBack)
{
DataClassContainer.Init();
DataRepeater.DataSource = DataClassContainer.Objects;
DataRepeater.DataBind();
}
}
</pre>
<p>
During postbacks, I then modify the Objects collection, and rebind
the repeater to the modified collection.
</p>
<p>
In my test project, my user control is named RepeaterControl. Here
is the class declaration:
</p>
<pre name="code" class="csharp">
namespace RepeaterEvent
{
public partial class RepeaterControl : UserControl, IPostBackEventHandler
{
public void RaisePostBackEvent(string eventArgument)
{
...
}
}
}
</pre>
<p>
The user control implements IPostBackEventHandler so that it may
receive postback events.
</p>
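<p>
For reference, <tt>IPostBackEventHandler</tt> (in <tt>System.Web.UI</tt>) consists of a single method:
</p>
<pre name="code" class="csharp">
public interface IPostBackEventHandler
{
// Called by the framework when this control is the target of a postback.
void RaisePostBackEvent(string eventArgument);
}
</pre>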
<p>
As I need to manually generate the event-firing code, the
“LinkButton” is replaced by a simple HTML hyperlink. I
use functionality in the .NET framework to generate the actual
javascript for me. Let's have a look at the .ascx file.
</p>
<pre name="code" class="xml">
&lt;%@ Control Language="C#" AutoEventWireup="true" CodeBehind="RepeaterControl.ascx.cs" Inherits="RepeaterEvent.RepeaterControl" %&gt;
&lt;asp:Repeater runat="server" ID="DataRepeater" EnableViewState="false"&gt;
&lt;HeaderTemplate&gt;
&lt;table&gt; &lt;tbody&gt;
&lt;/HeaderTemplate&gt;
&lt;FooterTemplate&gt;
&lt;/tbody&gt; &lt;/table&gt;
&lt;/FooterTemplate&gt;
&lt;ItemTemplate&gt;
&lt;tr&gt;
&lt;td&gt;&lt;%# GetA(Container.DataItem) %&gt;&lt;/td&gt;
&lt;td&gt;&lt;%# GetB(Container.DataItem) %&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="&lt;%# GetDeleteScript(Container.DataItem) %&gt;"&gt;Delete&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/ItemTemplate&gt;
&lt;/asp:Repeater&gt;
</pre>
<p>
The GetXYZ functions are functions that I have defined in my code-behind
file. I always do this instead of using the Eval function, as
it provides compile-time checking of the properties that I
access. If someone, for example, removed or renamed a property, they
would get a compile-time error instead of a runtime error.
</p>
<p>
Here are the three functions.
</p>
<pre name="code" class="csharp">
protected string GetA(object objDataClass)
{
var obj = (DataClass)objDataClass;
return obj.A;
}
protected string GetB(object objDataClass)
{
var obj = (DataClass)objDataClass;
return obj.B;
}
protected string GetDeleteScript(object objDataClass)
{
var obj = (DataClass)objDataClass;
string eventArgs = "delete:" + obj.ID;
return Page
.ClientScript
.GetPostBackClientHyperlink(
this, eventArgs);
}
</pre>
<p>
As you can see, I use the
Page.ClientScript.GetPostBackClientHyperlink function to generate a
javascript hyperlink, “javascript:__doPostBack(...)”,
that I can use as the URL for my hyperlink element. The first
parameter is the target for the event; the target is
the user control itself, so we pass “this”. The second
parameter is a string that will be sent to
RaisePostBackEvent when the link is clicked. I use the format
“delete:{id}” so that I can make something that resembles
the CommandName/CommandArgument behaviour of a normal repeater event.
</p>
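<p>
For a row with ID 5, the generated href looks roughly like this (the exact client ID depends on
where the user control sits in the page's control tree):
</p>
<pre name="code" class="html">
javascript:__doPostBack('RepeaterControl1','delete:5')
</pre>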
<p>
Let's have a look at the RaisePostBackEvent(...) function:
</p>
<pre name="code" class="csharp">
public void RaisePostBackEvent(string eventArgument)
{
string[] args = eventArgument.Split(':');
string command = args[0];
string argument = args[1];
if (command == "delete")
{
int id = int.Parse(argument);
DataClassContainer.Objects.Remove(
DataClassContainer.Objects.Find(x => x.ID == id));
DataRepeater.DataSource = DataClassContainer.Objects;
DataRepeater.DataBind();
}
}
</pre>
<p>
It simply decodes the argument, determines which operation
should be performed (delete), and which object is the target of that
operation. It then removes the object from the “database” and
rebinds the repeater to the updated data source.
</p>
<p>
So basically what we have here is something that reproduces the
behaviour of having a LinkButton with a CommandName and a
CommandArgument inside the repeater.
</p>
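<p>
For comparison, the viewstate-dependent equivalent that this replaces would be a LinkButton in the
ItemTemplate and an ItemCommand handler, roughly like this (a sketch only; this variant requires the
repeater's viewstate to be enabled):
</p>
<pre name="code" class="xml">
&lt;asp:LinkButton runat="server" CommandName="delete"
CommandArgument='&lt;%# Eval("ID") %&gt;'&gt;Delete&lt;/asp:LinkButton&gt;
</pre>
<pre name="code" class="csharp">
protected void DataRepeater_ItemCommand(object source, RepeaterCommandEventArgs e)
{
if (e.CommandName == "delete")
{
int id = int.Parse((string)e.CommandArgument);
// Delete the object and rebind, as in RaisePostBackEvent above.
}
}
</pre>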
<h2>Conclusion</h2>
<p>
I have shown that you can have events fired from inside a
Repeater control and handled, even though this is not natively supported
by the framework. There are some limitations, but this
pattern has proved valuable to me in the past, so if you can live
with the limitations, you can cut a great chunk off your view
state. And if you have as much view state as I have had, that can
seriously improve the user experience.
</p>
<p>
As this is my first blog post, I would appreciate comments on how
it is written. Is it too long, or too short? Is it clearly
understandable? Were there points I should have focused more on?
Should I have provided more example code?
</p>
<p>
I have a few ideas for new blog posts: one or two about using
StructureMap in ASP.NET projects, and then I have planned a long
post on unit testing ASP.NET Web Forms. And I'm talking about real
unit testing: stuff that you can run from the NUnit console without being
dependent on a web server. I'm currently writing that one, but
there is going to be a lot of text, so it will be at least 3
separate blog posts, maybe more.
</p>
<p>
I hope you find this useful.
</p>
<p>
Pete.
</p>