Saturday, September 01, 2007

Maybe it sounds silly, but I just discovered a way to let generics fit naturally into Factory code. Pair programming has its own benefits - I constantly overlooked this trick before and, as a result, was forced to give up some neat (from my point of view) generics designs.

Imagine you have started with this kind of design (I had to recall my first experiment, so I apologize if the code seems a little bit rough):
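The original code sample did not survive in this archive, so here is a minimal sketch of the kind of design I mean - the type names (`IRepository`, `RepositoryFactory`) are made up for illustration:

```csharp
using System;

// Hypothetical domain abstraction - a stand-in for whatever the factory produces.
public interface IRepository<T>
{
    string Describe();
}

public class Repository<T> : IRepository<T>
{
    public string Describe()
    {
        // Report which entity type this repository serves.
        return "Repository of " + typeof(T).Name;
    }
}

// A factory where the generic type parameter flows naturally through the
// creation call, instead of being buried under casts from object.
public static class RepositoryFactory
{
    public static IRepository<T> Create<T>()
    {
        // A real factory might consult configuration or a registration map;
        // here we just new up the default implementation.
        return new Repository<T>();
    }
}
```

The point of the trick is that the caller writes `RepositoryFactory.Create<Customer>()` and gets a strongly typed result - no downcasting at the call site.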

Friday, August 31, 2007

Damn, do I have a low tolerance to the Microsoft® marketing tricks nowadays. They sell you a notebook with pre-installed Vista Home Premium®, which is the crappiest® OS I've seen since Windows 95® pre-SE, and tell you that, of course, you can buy it clean, but it's not gonna affect the price. They evidently gave exactly that Membership Provider part, the one you're most interested in, to the stupidest® intern on the team, who forgot to un-comment a property, so providing True or False to the constructor doesn't make any difference. Of course, of course, they cut costs by cutting QAs, while project BAs have never been a Microsoft® top priority.

And finally, the ingenious piece of corporate brainwork - the Windows Genuine Software® checker. Great - it didn't work for my perfectly LEGITIMATE Windows Server 2003®. I couldn't download an important patch, because the checker kept failing and complaining that this system is not supported. I've seen this thing fly on pirated® systems (not mine - I've just cast a glance at the computers of some naughty, naughty people, who have long since been cured and own legitimate copies now), so why can't it reward me® - such a genuine LEGITIMATE user?!

OK, you don't like me - I don't like you: there is a way to avoid this nastiness (when you're rightly frustrated over your LEGITIMATE software), and here is the community's answer to corporate greed® and stupidity®: Greasemonkey + script.

I am confident that the information provided in this post will not be used for anything but Cultural Learnings of Legitimate Software for Make Benefit Glorious Nation of Microsoft®.

Saturday, August 25, 2007

Doing automated builds, you will inevitably encounter the problem of deploying an application to different environments. You can avoid it only if you live in the happy world of a single box, but then you most likely do not bother with continuous integration in the first place. The worst of deployment configuration is unleashed during the "death race", when all those nasty wrong-configuration bugs are discovered during the client presentation.

If an application is built the smart way, all environment-specific details are encapsulated. In the case of an ASP.NET application it is most likely the web.config file. No doubt the solution is to hand the deployment task over to an automated script. Technical details aside, the trick is to have some kind of configuration template in one hand and environment-dependent variables in the other, and smelt them together when the time comes - automatically rather than manually. What are our choices to settle this configuration mess once and live happily ever after?

The easiest way seems to be to keep multiple instances of the configuration file, so the relevant one is pulled to the deployment environment. There is a huge disadvantage, though, as you will become a victim of the main copy-paste curse - the synchronization problem. It will quickly get out of hand if you have more than one project and more than two developers to worry about. Changes have to be tracked and reproduced scrupulously - and that defeats the whole idea of laziness.

Another, slightly exotic, but viable method is to create a single config file with placeholders inside:
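The original snippet didn't survive the move, but the idea looks roughly like this - the token syntax and settings are made up for illustration:

```xml
<!-- web.config template: tokens are replaced by the build script per environment -->
<configuration>
  <connectionStrings>
    <add name="Main"
         connectionString="Data Source=@@DB_SERVER@@;Initial Catalog=@@DB_NAME@@;Integrated Security=SSPI" />
  </connectionStrings>
  <appSettings>
    <add key="ServiceUrl" value="@@SERVICE_URL@@" />
  </appSettings>
</configuration>
```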

Then the deployment relies heavily on the build script, which holds all the necessary variables for the different environments. The disadvantage is that the raw web.config file is unusable: you cannot run the application without deploying it properly (even to the local environment) or tweaking the project build events to run it from Visual Studio. On the bright side, the configuration stays in the hands of the Jedi build master, and the real production settings are hidden from the rest of the team (attention, Sarbanes-Oxley-compliant companies!).

The third choice is to create a custom configuration section controlled by a single key, changeable by the build script. It may be a full-scale class or something more lightweight. The first approach will give you all the flexibility you may need, but would require some kind of common library if you have multiple projects. The second approach would require developers to learn a new way of retrieving configuration values.
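A minimal sketch of the "full-scale class" flavor, using the standard .NET 2.0 System.Configuration model - the section name, property name and default are my own invention:

```csharp
using System.Configuration;

// Hypothetical custom section: a single "active" key selects which
// environment the application reads its settings for.
public class EnvironmentSection : ConfigurationSection
{
    // The build script flips this one attribute per deployment target.
    [ConfigurationProperty("active", DefaultValue = "dev", IsRequired = false)]
    public string Active
    {
        get { return (string)this["active"]; }
        set { this["active"] = value; }
    }
}
```

The section would be registered in web.config under `<configSections>` and read through `ConfigurationManager.GetSection("environment")`, so the access pattern stays customary.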

Whichever way we choose, we should keep in mind that the laziest way is the one that will be favored by your fellow developers. If we are able to access the configuration through the ConfigurationManager class (which is 100% customary) and to create the relevant configuration sections only slightly differently from what we are used to (let's say 80% customary) - that will be the preferred combination. After all, developers spend more time using the configuration than creating or changing it.

I would love to hear about the other ways to automate configuration deployment.

UPDATED: Another approach is to have the most environment-dependent sections (e.g. connectionStrings and appSettings) "outsourced" to satellite files grouped by environment, using the "file" attribute (note that "file" is supported by appSettings, while sections like connectionStrings use "configSource" instead):

<appSettings file="config\production\connectionStrings.config"/>

The build script is run over the web.config file and, depending on a parameter, simply replaces the middle part of the path - from "production" to "bat" or "dev". It is a relatively small amount of automatic changes, and a deployment error is visible right away. This is the approach we are using now, and it seems to work well. At least we can be sure that once we have perfected the web.config, it is unlikely that somebody will mess it up with environment changes, which are encapsulated and independent from each other. The production settings can be SOX-friendly: isolated and hidden. The downside - the good ol' synchronization problem.

The plot for a future Jason Statham movie bears the most similarity to the project stage which, I recently realized, is an unmistakable benefit of waterfall development. It occurs at the very end of the project and consists of spasmodic attempts to do last-minute (and often the only) QA, bug fixing, deployment and redeployment. Unlike the "Death March", when the bleeding troops are more or less steadily approaching a distant milestone, the Race is usually packed within a very tight timeframe, usually hours, when the client is arriving shortly or the CEO is about to drop by between golf games. Nobody can stay sane even if they want to. It is too late to do the right things; it is time for hacking, shortcutting, patching and failure acceptance. It is even too late for fresh cannon fodder, and asking "Didn't I tell you so?" will bring a deserved wrath on your head.

The worst residue of the Death Race (and the Death March) is the wrong idea that it actually worked - if the project was small enough to be wrestled into place with some degree of success. Good practices and patterns used during the whole project seem unnecessary, since they weren't needed for the last-second hacking, so the activists are often blamed for time-wasting (which led to the Race, of course). Next time the team will try to undertake a larger task with the same approach. That would be a good moment to shake the dust off your resume...

Friday, August 10, 2007

Provider-based authorization and authentication is definitely a huge step forward from the primitive forms authentication we got in ASP.NET 1.0. Unfortunately, Microsoft again didn't do a great job of unifying the approaches. It seems that the Membership, Role and Profile projects were handled by three geographically and developmentally separated teams who had no desire to communicate with each other. The Membership provider architecture and implementation is easily the best of the three, while Profile is the worst (it is noteworthy that even a quite thorough book on security from Microsoft itself ignores the Profile features). Despite the similarities in implementation, all three provider models are frustratingly different. As in the case with Team System, the problem seems to be a feeble BA job.

Out-of-the-box implementations work well (like Membership) or tolerably (like the others) for a standard project. But what if we need an admin-type application which can service a few client apps? Client membership databases can either be shared or separate, while the admin application will be able to access them all. It can give us great flexibility and cut the amount of code tremendously. And even apart from that cause, it is still nice to be able to handle multiple providers.

Membership works out of the box without a glitch (surprise, surprise) - it is a no-brainer to authenticate against an arbitrary provider using the Membership.Providers collection. Profile sucks as usual - from my point of view, it doesn't make any sense to use multiple providers if we cannot inherit the profile from different base classes. The Role provider imposes some challenge, though. Wouldn't it be nice to change the default provider programmatically, once for the whole application?

The static Roles class will serve you a RolePrincipal associated with the default Role provider, so Context.User.IsInRole() will not give us what we want. There are plenty of ways around it, but it is a good idea to let developers keep using this customary method. The following code is based on a snippet from "Professional ASP.NET 2.0 Security, Membership, and Role Management" - an excellent (but quite heavy, literally :) book by Stefan Schackow. The idea is to replace the IPrincipal Context.User object with a RolePrincipal one, whose constructor accepts a Role provider name - exactly what we need. Stefan proposes to hook the method to the GetRoles event in the RoleManagerModule. In that case you should consider the tricky business of passing along the desired provider name. The Session object is not accessible this early in the pipeline, and the other possible means - query string and cookie - still may not be the weapon of choice. A query string will add an extra headache if we use URL rewriting or serve an extensive amount of dynamic pages from a Content Management System. A cookie would have to be guaranteed to stay untouched for the whole user session, or something nasty could happen. The code can be placed in the application's page controller class. If you use a back door for a seamless login, the gateway and the front login page should run exactly the same logic.

Note that we manually synchronize Thread.CurrentPrincipal with Context.User - the job normally done by the DefaultAuthenticationModule when we log into the application. Dominick Baier had a great article about the differences and similarities of these two objects quite a while ago.
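The original snippet is gone from this archive, but the gist, under the assumptions above (how the provider name is resolved is left to your own logic), looks roughly like this:

```csharp
using System.Threading;
using System.Web;
using System.Web.Security;

// Sketch: swap the default-provider principal for one tied to a specific
// Role provider. Could live in the application's page controller class.
public static class RoleProviderSwitcher
{
    public static void UseRoleProvider(HttpContext context, string providerName)
    {
        if (context.User == null || !context.User.Identity.IsAuthenticated)
            return;

        // RolePrincipal's constructor takes the Role provider name, so
        // IsInRole() checks go against that provider, not the default one.
        RolePrincipal principal =
            new RolePrincipal(providerName, context.User.Identity);

        context.User = principal;

        // Keep Thread.CurrentPrincipal in sync with Context.User - the job
        // normally done by the DefaultAuthenticationModule.
        Thread.CurrentPrincipal = principal;
    }
}
```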

Somehow it seems that after installing Resharper 3.0 my computer started choking on multiple open instances of Visual Studio. I assume that VS accumulates some information while being used, but it was never a problem before - the memory usage seemed to be capped. Now, if not recycled routinely, memory utilization grows enormously - right now I have two VS instances open, they occupy more than 900 MB in total, and it's still increasing. Also, I noticed that after a few days of working with ASP, HTML layout becomes unbearably slow, and it definitely looks like Resharper is struggling to provide Intellisense. I won't give up on Resharper anyway, but is this really the case? Has someone experienced the same?

UPDATE: Another suspect is TestDriven.NET. So Resharper may be innocent after all... .NET Frameworks, their patches and Silverlight Betas are the usual suspects.

UPDATE: It is Resharper. With possible help from .NET components, but definitely not TestDriven.NET. It looks like there are some problems with the Intellisense which Resharper tries to provide for HTML layout...

Monday, August 06, 2007

The refactored and (hopefully) less confusing version of the page validation test fixture. Now there is no need to implement any abstract methods in your test fixture - just inherit the base class with the proper generic type. A much lazier way to do things...

UPDATE: We have to add validation controls to the Page's Validators collection manually. Also, it makes sense to expose control accessor methods, like void SetControlValue(string id, object value) and WebControl GetControlValue(string id), so we can access controls which are protected on the page.
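The fixture code itself did not survive here; a rough sketch of the generic base class shape described above (member and type names are my guesses, and the test-framework attributes would go on the derived class):

```csharp
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical generic base fixture: TPage is the code-behind class under test.
public abstract class PageValidationFixtureBase<TPage>
    where TPage : Page, new()
{
    protected TPage Page;

    public virtual void SetUp()
    {
        Page = new TPage();
        // At runtime ASP.NET populates Controls and Validators for us;
        // in a plain unit test the derived fixture wires them up manually.
    }

    // Accessors for controls that are protected on the page.
    protected void SetControlValue(string id, object value)
    {
        TextBox box = (TextBox)Page.FindControl(id);
        box.Text = value.ToString();
    }

    protected WebControl GetControlValue(string id)
    {
        return (WebControl)Page.FindControl(id);
    }
}
```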

Wednesday, August 01, 2007

Some time ago I decided to close the gaps in my multithreading knowledge once and for all (though I am still pretty sure I would stay away from multiple threads, like from regular expressions :). I looked for the ultimate book on multithreading and, to my surprise, there were not a lot of them around. Eventually I found what I was looking for, but the research gave me some interesting thoughts.

The graph (I like graphs!) represents the publishing dates of the first 20 books on multithreading from Amazon.com. There is an interesting tendency - the peak is in 1997-1999. Of course there are some distracting factors, like IDE standardization and the Internet - blogging seems to choke the life out of technology publishing.

So the conclusion (biased enough) seems to be this: when powerful means to build software complex enough to warrant multiple threads were unleashed upon the programming public, the interest in multithreading books rose. Operating systems with parallel processing abilities became more affordable, and more programmers were summoned to feed the software hunger. Multithreading ceased to be the sacred clandestine knowledge of the chosen few. Here is the essential timeline of operating system and language progress:

1991: Visual Basic / Linux, Macintosh OS 7

1992: Borland Pascal / Solaris 2.0, Windows 3.1

1993: Ruby / FreeBSD, Windows NT 3.1

1995: Borland Delphi, Java, Ruby / ColdFusion, Windows 95

1996: Mac OS 7.6

1997: PHP 3, JavaScript, J2SE 1.1 / Mac OS 8, Windows NT 4

1998: ANSI/ISO standard C++ / Solaris 7, Windows 98

1999: XSLT, GML, J2SE 1.2 / Mac OS 9, Windows 98 SE

2000: .NET 1.0 Beta, J2SE 1.3 / Windows 2000

2001: Ruby goes public / Mac OS X v10.0, Windows XP

2002: .NET 1.0 RTM, J2SE 1.4 / Mac OS X v10.2, Windows XP x64

2003: .NET 1.1 RTM / Mac OS X v10.3, Windows Server 2003

2004: Ruby on Rails, J2SE 5.0

2005: .NET 2.0 RTM / Mac OS X v10.4

2006: .NET 3.0 RTM, Java SE 6.0

2007: .NET 3.5 Beta 1 / Windows Vista

By 2000, IDEs seemed to make multithreading easy enough to implement without a fundamental understanding of the processes behind it, and provided enough guidance through their own help. And once again - blogging provides more timely information on the subject than any book.

Wednesday, July 18, 2007

Pretty neat flash game - you can create your own South Park character.

This one bears a pretty close resemblance to one guy I know. Just a little bit too athletic, maybe...

Dear lawyers: all legal rights, etc. belong to those South Park guys. I am not responsible for anything. Do not steal or get caught. Flash game results belong to the site owner, who knows what she is doing.

Monday, July 16, 2007

Regular expressions are a real pain for me. Somehow :) I use them so rarely that I have to learn them almost from scratch every time I need them. By the time I know enough to use them successfully, I can move on - and over the next few months my knowledge evaporates, till the next stop on this vicious circle.

Until recently I used a handy online regex tester. It chokes sometimes (certainly not due to my poor syntax ;) but it's very good for a quick test. Eventually smart people pointed me to the lovely Espresso utility. With the help of this great tool, regular expressions are not that excruciating. As of version 3.0 it packs a lot of features and is very user friendly. Just go there and get it. For free.

I have to apologize - I turned on comment verification and moderation. The comment process will be slightly more tedious and you won't be able to see your comment right away, but please do not feel frustrated - I would love to hear from you, just give me a few hours (more like 24 :) to publish it.

Friday, July 13, 2007

So far Vista is not very impressive. But it has one advantage (among very few :) - it comes with IIS 7, which can hold multiple web sites. There is no need to explain what convenience this provides for web application development and debugging. For those who work on Windows XP, there is a way to emulate (just emulate, folks!) multiple sites on their IIS. You won't be able to run more than one site at once (c'mon, it's not a server!) but it's still pretty handy - you can screw one up with SSL and another with crazy ISAPI filters, absolutely independently. So this is what you can do:

Stop the Default Web Site. Navigate to the admin scripts folder (normally c:\inetpub\adminscripts) and run the following command:

adsutil.vbs enum /p w3svc

If you really want to know how the administration scripts work, check here. This particular command will enumerate your default web site as number 1. The script may fail the first time and offer to register the CScript host or something like that, so just agree to all prompts and run the same command one more time.

Now you can create a copy of your default web site:

adsutil.vbs copy w3svc/1 w3svc/2

And you have yourself a nice second site. The IIS MMC most likely will not reflect the new site even after refreshing, so just close and reopen it. You can rename the second site any way you want.

Note: always copy from the default site (w3svc/1) and always keep your default web site untouched, as a template.

Again - unfortunately, you cannot have the two sites running simultaneously. You will have to stop one before starting the other.

Sunday, July 08, 2007

ASP.NET cannot be debugged properly under Vista (Home Premium?) IIS 7.

The bitching:

Microsoft rushes too much. Maybe it is all a result of the H1B visa situation (the US government was bribed by Apple!), which they are trying to resolve with the new Vancouver development center. In that case it had better be a new QA center, as they already have enough clumsy developers and not enough QA personnel. A Technical Writing center would be nice to have, too. An article which is supposed to help with your Vista-related problems recommends installing the "IIS 6 Compatibility Layer" eight times across 600 lines, without a single link to the software or instructions...

The solution:

Read this, this and this to educate yourself, or just install this patch (the related article is still not available, of course) and enjoy the long-missing debugging.

Saturday, July 07, 2007

I mentioned in Part I that svcutil.exe will generate nice client proxy code for your service. It appeared not so nice in the end, and I spent some time trying to figure out the correct contract type description for the client configuration. It ended up being a very small tweak - I just brought all the types under a single namespace umbrella. This code was generated (some attributes and code lines are removed for simplicity):

The contract interface IServiceContract here is one of the most important parts - before anything happens, the application should find the contract type to create an instance of the proxy class. As you can see, the interface is outside the namespace. This separation reflects the real order of things, but I would prefer convenience. So, to avoid confusion, let's bring all the types under the client's main namespace, ClientNamespace:
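The generated code blocks are missing from this archive, so here is an illustration of the tweak - the interface and namespace names come from the post, the operation is hypothetical, and the svcutil attributes are trimmed just as in the original:

```csharp
// Before: svcutil emits the contract interface OUTSIDE the namespace,
// while the proxy class lives inside it:
//
//   public interface IServiceContract { /* operations */ }
//
//   namespace ClientNamespace
//   {
//       public partial class ServiceContractClient { /* proxy */ }
//   }

// After the tweak: everything sits under the client's main namespace, so
// the contract type in the client config can be referenced consistently
// as ClientNamespace.IServiceContract.
namespace ClientNamespace
{
    public interface IServiceContract
    {
        string GetData(int value); // hypothetical operation
    }

    // The svcutil-generated proxy class is moved in here as well.
}
```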

Monday, July 02, 2007

Sometimes it is good to have distributable proxy code for a WCF service before it has been deployed, when it is not possible to generate a client proxy through the wizard. There is a nice little utility, SVCUTIL.EXE, which comes with WCF. I've seen it located under the C:\Program Files\Microsoft SDKs\Windows\v6.0\Bin and Program Files\Microsoft Visual Studio 8\Common7\IDE folders. This utility allows you to generate a client proxy class and a configuration template from a compiled WCF service library or executable.

First, run the utility against the compiled service code to generate metadata documents:

svcutil.exe MyService.dll

It will generate a whole bunch of XSD and WSDL files. The next command:

svcutil.exe *.wsdl *.xsd /language:c#

will generate a MyService.cs class - a strongly-typed client proxy (guaranteed against the weird WCF proxy generation in a Win App :) - and an output.config file with the client endpoint and binding configuration.

These tricks are better described in the utility help than on Internet and it took me a while to figure out, so I decided to share it.

Saturday, June 30, 2007

I was playing with Windows Communication Foundation over the weekend and came across one very interesting peculiarity.

My WCF service uses Basic Http Binding, is hosted by IIS, and has the following method:

[OperationContract]
public List<string> GetCustomerList() {...}

Let's use a Windows Application as the client (just note the fact that the Form1 class was created by default). We create a reference to the service through the "Add Service Reference" wizard. Nothing fancy. The generated reference consists of a .map XML file and a .cs proxy class - very similar to a Web Service reference.

Now the weird stuff - how, do you think, was our method mapped? Most likely your guess is wrong - the return type is BindingList (never worked with that one):

System.ComponentModel.BindingList<string> GetCustomerList();

Huh? Let's remove this reference and add it again, but prior to that get rid of all the Windows Forms classes in the application. Here is the newly generated mapping:

string[] GetCustomerList();

Ta-da! This is the more anticipated result, as a generic list and an array should be serialized exactly the same way.

I can't understand the possible logic behind that. Maybe it's just a bug, but it is still not clear to me what the connection could be. I am too lazy to try mapping the service with different bindings, but I encourage anybody to try.

Friday, June 29, 2007

I have a solid impression that I should have known this from a previous life. What is the difference between these pieces of code?

string sql = "SELECT SUM(Quantity)
    FROM Production.ProductInventory";

string sql = @"SELECT SUM(Quantity)
    FROM Production.ProductInventory";

The first will result in a compiler error, while the second one - a verbatim string literal - is completely legitimate! No more tedious and ugly concatenations. Remember moving SQL statements back and forth between the SQL Server console and Visual Studio? Ew!
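A minimal, self-contained demonstration (the table name is kept from the post):

```csharp
using System;

public static class VerbatimDemo
{
    // A regular string literal cannot span source lines; a verbatim (@)
    // literal can, and it keeps line breaks and indentation as-is.
    public static string GetSql()
    {
        string sql = @"SELECT SUM(Quantity)
    FROM Production.ProductInventory";
        return sql;
    }
}
```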

Wednesday, June 27, 2007

I can't believe it - Apple has missed the best spot for its money! Two Mac dudes - the permanently scratched McClane (Willis) and the inarticulate Justin "Mac" Long - are saving the free world from a high-tech terrorist (who, I bet, surfs the Internet on a Vista PC). But why, why, why is this formidable adversary not played by John Hodgman?! 20th Century Fox and Apple would save tons of money on their budgets, and impressed hordes would march straight from the theatres to the iMac stores!

Oh, and if you wonder why, despite buying a Mac, you are still not cool - there is always room for perfection :)

Tuesday, June 26, 2007

Do not forget to install Visual Web Developer along with Orcas. Otherwise Web projects will be unavailable (except for the Web Control project) and Silverlight 1.1 Tools for VS 2008 will refuse to install, because "You must install Microsoft Visual Studio codename "Orcas" Beta 1 before installing this product."

To fix the problem, you can re-run the Orcas installation in add/remove components mode.

Monday, June 25, 2007

A very well organized tutorial and reference book. John Sharp is a great writer and, I bet, a great instructor, so he knows the importance of a learning plan that depends on a student's knowledge level. You will be able to build decent WCF services after reading the first two chapters, while the rest of the book will guide you through the different aspects of this SOA breed. Better to try the examples on a capable environment, like Windows Server 2003.

Friday, June 22, 2007

It so happened that I had to buy a Vista laptop (tell me about the Microsoft Tax). To install Orcas (now VS 2008), one had better have a virtual machine (if you don't want to bother with a tedious upgrade process later on). There are three obvious choices: VMWare, Parallels and Virtual PC.

VMWare wasn't free and had a spooky 200 MB download. Also, for an inexperienced virtualist, the choices they give you with the free software are not really clear, so it ended up put aside for a while.

Parallels is highly recommended by people I respect, and it's widely praised all over the Internet... By the Mac community... Not Windows. Windows users are definitely second-class citizens there. I was ready to pay, but gave up after three days of tweaking it back and forth trying to make the Parallels virtual PC merely read from the CDROM - it invariably crashed. The next surprise was their non-existent support. On the forum, people claim to have waited for weeks for a callback promised within three days (for a paid product!). I expected something better from the flagship of the free-spirited Mac-Ruby-Open-Source community. Boo.

Virtual PC 2007 from the glorious Microsoft is free (hmm...) and promises no support for Vista Home Premium Edition (aha!). Anyway, I tried it in despair and it worked nicely! A tiny download, familiar and simple setup - good stuff! The only trick is that there will be a few glitches here and there until the VPC Additions are installed: the mouse will not be shared between the host and the guest, and the DVDROM, when mounted for the guest PC, can disappear from the host's sight. VPC Help has all the instructions.

Thursday, May 31, 2007

I wrote before about different ways to update assembly versions for multiple projects. The previous solution included an artificial shared state file which had to reside in the CruiseControl.NET folder. That is an additional deployment hassle, of course; a more natural approach is to use a Labeller task, specifically designed for this purpose.

Let's say we still have two projects configured in CC.NET, and the resulting DLLs should be marked with a shared build number. The following snippet illustrates the Labeller configuration:
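The snippet itself is missing from this archive; from memory of the CC.NET labeller blocks, the configuration plausibly looks like this (project names and prefix are illustrative):

```xml
<!-- ccnet.config sketch: ReleaseBuild owns the label... -->
<project name="ReleaseBuild">
  <labeller type="defaultlabeller">
    <prefix>1.2.</prefix>
    <incrementOnFailure>false</incrementOnFailure>
  </labeller>
  <!-- source control, tasks, etc. -->
</project>

<!-- ...and DailyBuild reuses it via the state-file labeller. -->
<project name="DailyBuild">
  <labeller type="stateFileLabeller">
    <project>ReleaseBuild</project>
  </labeller>
  <!-- source control, tasks, etc. -->
</project>
```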

Now the ReleaseBuild project will increment the build number, and DailyBuild will use it for its purposes. There is a downside - now only ReleaseBuild has control over the label - but if we create some rules around build use, we can easily find ways to use this feature. On the other hand, there is a huge advantage: the CruiseControl dashboard and the CCTray monitor show the label, which is now the real DLL version...

Tuesday, May 29, 2007

If you use Windows Live Writer to edit your blog offline (Beta 2.0 just came out - significantly improved), and if you have ever experienced an urge to share your beautiful code with the rest of the world - use the Insert Code for Windows Live Writer add-on. This nice little add-on will provide a nice layout for C#, VB, HTML and SQL syntax.

Just a small piece of advice: by default the add-on will try to embed its style in every post, so take your time and manually include this style in your main CSS. Do it once and always uncheck the "Embed StyleSheet" box.

Wednesday, May 23, 2007

What is the place of an Architect in the agile development process? The idea of design evolving from the development seemingly proclaims the architect position obsolete. Right - a group of smart and seasoned developers should definitely come up with an equally good design.

The position of an architect is often misunderstood in organizations: very often they are expected to produce UML diagrams and command developer herds coding to the specs. Some architects do not code at all, but just sit in a white castle up in the sky and craft divine designs.

So far those seditious thoughts seemed about right, but smart people from the XP Toronto user group gave me a better idea: an architect is The One who is looking out for the technical debt!

Developers, pumping up their velocity points for the current iteration, can easily lose sight of the horizon. Imagine some code which could use a 5-day refactoring. For a two-week iteration it is a tough decision - it won't add velocity points and it will not bring immediate business value. So naturally, developers and project managers cannot justify this refactoring from their points of view. And they shouldn't - having the Architect onboard! Then the team has a perfect balance of powers: QA vs. BA take care of the usability branch, and developers vs. architect - the technical branch.

Friday, May 18, 2007

A number of sessions, including ASP.NET AJAX, Orcas, Silverlight and Vista gadgets (Bring Your Own Laptop). Some new packages and new Microsoft initiatives on privacy-breaking protection. Hands-on labs and a keynote from David Crow. A free giveaway kit (Vista?). Hopefully - interesting people around.

Thursday, May 17, 2007

I think I am good at math (which is generally expected from a Computer Science graduate) and kind of good at geography (it's my hobby). I risk offending marketing and business professionals by assuming that basic calculation skills and common knowledge are not considered beneficial for a Master of Marketing or an MBA, but the Frequent Flyer Program managers seem to think differently. The Marketing Vice President of Deutsche Lufthansa AG, for example, signs as a "Dr" under a statement that proclaims Toronto and Frankfurt 60,000 miles apart.

It so happens that I am a member of the Lufthansa Miles & More program (and Air Miles, like you). Here is my last statement - 9,878 miles. Quite a lot, eh? Almost a round-the-world voyage. Let's get to the numbers: the distance between Fairbanks, AK and Guatemala City is 4,313 miles - the longest North American trip Google could calculate for me (yes, yes, Guatemala is a little bit outside the scope, but I couldn't get the distance between Cancun and a fishing camp on the Chukchi Sea coast). Looks like I've got myself a round trip from whatever to whatever in North America, whoo-hoo!

Now, what can I afford? Economy class within North America will cost me 30,000 miles; from Canada to Europe - 60,000 miles (and some pitfalls). Looks like for the non-pilot Lufthansa folks, the Earth's circumference is close to 200,000 miles. I am afraid to find out what Lufthansa pilots think (thank God they have autopilots!).

You can say: "Read the fine print - those are status miles, dumbo!" Right. Call them "status points" then! (Or call the program Inches & More - a pretty suitable name.) If I promised to pay Lufthansa 100 status dollars, which word would they consider the key - "dollar" or "status"? Wouldn't they be surprised to find out that my status dollar is actually worth 10 cents? Status cents, he-he.

Phil Haack had a great post on testing ASP.NET pages with the embedded Visual Studio Development Web Server. This is the cavalry and heavy artillery of ASP.NET testing. Once you have established a decent testing framework based on that code, you are covered for your UI part.

I've never gotten to that point (I have an excuse - I am not a UI guy!), but some requirements for UI implementation - like validation - are perfectly eligible candidates for unit testing. Chris Stevenson gave me the idea (which I finally tried) to test an ASP page with brute force - just like any other class. We'll start from two assumptions: the validation on the page is done by standard Validation controls, and the properties of all Validators (in particular, ControlToValidate) are set in the code-behind (in our example - in the SetupValidators() method).

Next we'll proceed in three steps:

Step 1. Outside the real runtime we do not have the Page.Controls collection populated for us, so we will have to do it manually. With the following code our controls will be set up and ready for the test.
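That code is missing from this archive; a rough sketch of what such manual wiring can look like, under the assumptions above (the control IDs and helper name are hypothetical):

```csharp
using System.Web.UI;
using System.Web.UI.WebControls;

// Sketch: build the control tree by hand, the way ASP.NET would at runtime.
public static class PageWiring
{
    public static void PopulateControls(Page page, TextBox nameBox,
                                        RequiredFieldValidator nameValidator)
    {
        // Add the input and its validator to the page's control tree...
        page.Controls.Add(nameBox);
        page.Controls.Add(nameValidator);

        // ...and register the validator with the page, which the runtime
        // normally does for us while building the control tree.
        page.Validators.Add(nameValidator);
    }
}
```

After this, calling the page's SetupValidators() and then Page.Validate() in the test exercises the same validation path the real page would run.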

Now developers can write unified, maintainable tests against the validation on every particular page.

As a further step, we can introduce different kinds of testable interfaces and decorate our base Page class with them. One example is the ITestableValidatorContainer interface (actually used in the TestFixture example above):
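The original interface definition didn't survive; a plausible shape for it, with member names that are my assumptions, could be:

```csharp
using System.Web.UI;

// Hypothetical sketch: an interface the base Page class implements so that
// tests can populate the control tree and trigger the code-behind wiring
// without the ASP.NET runtime.
public interface ITestableValidatorContainer
{
    // Lets a test push controls into the page before validation runs.
    void AddControl(Control control);

    // Exposes the code-behind wiring (ControlToValidate etc.) to the test.
    void SetupValidators();
}
```

With the base Page implementing this interface, the TestFixture above can work against any page polymorphically instead of knowing each code-behind class.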

Saturday, May 12, 2007

I am certainly not a wise person. Instead of learning from Dan Hounshell's struggle, I wasted tons of time trying to make the Microsoft Profile feature work for a Web Application Project. Dan gave a perfect description of how I felt after fruitlessly fighting with the MS Profile provider, until smart people (who read more than me) pointed me in the right direction.

To make a long story short, here is the recipe:

Decide to utilize Microsoft (widely advertised) Profile feature.

Use Web Application instead of the native Web Site.

Fail miserably and admit that you are stupid - er, doing something wrong.

When Microsoft patched Visual Studio to give an alternative to the hated Web Site application, their developers broke support for the Profile mechanism. The feature will refuse to function unless you download the ASP.NET Profile Generator, which replaces the Web Site's automatic CommonProfile generation (hurry up - Microsoft is abandoning GotDotNet by the summer of 2007).

Even after you have successfully generated the profile class, there are still some limitations. You are pretty much safe if the profile properties come from the web.config file:
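The original configuration sample is gone; a minimal example of profile properties declared in web.config (the property names here are invented) looks like this:

```xml
<!-- Fragment of web.config: profile properties the generator reads -->
<system.web>
  <profile>
    <properties>
      <add name="FirstName" type="System.String" />
      <add name="VisitCount" type="System.Int32" defaultValue="0" />
    </properties>
  </profile>
</system.web>
```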

and the other part of the partial class will contain the required plumbing provided by the generator.

And the last tip: the runtime will try to cast your profile class to the ProfileCommon type and will fail if you didn't inherit your class from System.Web.Profile.ProfileBase. Better to do it in the generated partial class. You can download the full example of the Web Application profile class here: CustomProfile.cs.
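A sketch of what the generated partial class might look like (the class and property names are illustrative, not the contents of the downloadable CustomProfile.cs):

```csharp
using System.Web.Profile;

// Generated part: carries the ProfileBase inheritance so the runtime's
// cast succeeds, plus strongly-typed wrappers over the profile store.
public partial class CustomProfile : ProfileBase
{
    public string FirstName
    {
        get { return (string)GetPropertyValue("FirstName"); }
        set { SetPropertyValue("FirstName", value); }
    }

    public int VisitCount
    {
        get { return (int)GetPropertyValue("VisitCount"); }
        set { SetPropertyValue("VisitCount", value); }
    }
}

// Custom part: your own members live in a separate file, safe from regeneration.
public partial class CustomProfile
{
    public bool IsReturningVisitor
    {
        get { return VisitCount > 0; }
    }
}
```

Keeping the two halves in separate files is what makes an accidental regeneration harmless.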

Again, it is a good idea to keep your custom part separated from the generated code, so nobody accidentally regenerates the profile. To make these sad accidents even less likely, do not install the ASP.NET Profile Generator at all and simply use the example code.

Friday, May 11, 2007

The quote from an Ottawa Business Journal: " Canadian companies are paying recruiters up to 20 per cent more than they were two years ago as firms struggle to find new talent, according to a new report ..."

I don't know about you, but I suddenly pictured a restaurant with a hungry visitor who is absolutely incompetent in cooking, a skillful waiter, and a well-done steak (which is just a piece of meat, obviously incapable of anything except being tasty).

Is it true that, a priori, a talent cannot get himself hired without an intermediary? It sounds about right for IT guys :) but what about artists or MBAs?

Monday, May 07, 2007

Have you ever felt underrated? The main reason is that people can understand what you are doing, and it gives them the feeling that you are replaceable. You can prove them wrong with a few simple techniques.

1. You should agree (and some managers may support you) that this code
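The side-by-side comparison the post originally showed is lost; an invented pair in the same satirical spirit might look like this:

```csharp
// "Job security" style: one dense, unnameable blob that only its author loves.
public static object Proc(object o, int t)
{
    return t == 1 ? (object)((decimal)o * 1.13m)
                  : (object)o.ToString().Trim().ToUpper();
}

// ...versus the readable version, which any next-desk guy could maintain:
public static decimal AddSalesTax(decimal price)
{
    const decimal TaxRate = 0.13m;      // 13% sales tax (illustrative value)
    return price * (1 + TaxRate);
}

public static string NormalizeName(string name)
{
    return name.Trim().ToUpper();
}
```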

Also, there is a pretty good chance that the former is spread all over the project, located in several key areas, and only Tom or Mike will agree to maintain it. So who will keep their jobs when the tough times come? Considering that the latter piece takes twice as much time to build (especially when those silly unit tests are around), the hardworking Tom and Mike look four times more productive than the always malcontent Chad and Martin.

2. It is tempting to give your classes and methods misleading names, but do not leave your opponents enough ground to accuse you of sabotage. Class names like Utilities or FlickrWCalcST will do. Method names shouldn't resemble verbs, and something like TotalRecall always adds a nice touch to ASP.NET code-behind.

3. People who operate with terms like cohesion are your enemies - they only want to make your classes more transparent, so the next-desk guy can easily replace you! It's an easy catch: if your classes are never less than 600 lines of code, even the toughest agile junkie will go looking for easier prey.

4. If you are tempted to add another #region block to your code because navigation through the class is getting tedious - you are on the right track. Classes with a dozen collapsed regions look exceptionally solid.

5. Beat them with their favorite weapon - innovation. Thoroughly spread generics and reflection throughout the places they least expect. With a method like DoLwBack<Dictionary<string,ConfCol<int>>>(input1, input2) you will live up to your reputation as a sophisticated programmer.

6. Work alone. Communication lets others spy on your plans and gives away your real productivity (which you most likely have to camouflage).

Some other useful techniques were discussed at Waterfall 2006, a conference in Niagara Falls, NY.

Just an update to the previous post (it's too long to publish in the same article). As I mentioned, using a C# script eliminates the disadvantage of the <asminfo> task and gives a cleaner and more robust solution. So here is the code:
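The original script is gone; a hedged reconstruction of the idea - a NAnt <script> block defining a C# function that patches only the AssemblyVersion line instead of regenerating AssemblyInfo.cs - could look like this (the function name and regex are my assumptions):

```xml
<!-- Sketch: registers a custom NAnt function, callable as
     ${build::set-assembly-version('Properties/AssemblyInfo.cs', buildnumber.version)} -->
<script language="C#" prefix="build">
  <code>
    <![CDATA[
      [Function("set-assembly-version")]
      public static string SetAssemblyVersion(string path, string version)
      {
          // Read the existing file and replace just the version attribute,
          // leaving company, copyright, etc. untouched.
          string text = System.IO.File.ReadAllText(path);
          text = System.Text.RegularExpressions.Regex.Replace(
              text,
              @"AssemblyVersion\(""[^""]*""\)",
              "AssemblyVersion(\"" + version + "\")");
          System.IO.File.WriteAllText(path, text);
          return version;
      }
    ]]>
  </code>
</script>
```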

Friday, May 04, 2007

If you use continuous integration builds, one of the steps you want to consider is to have the resulting assembly DLL marked with an incremental version number. This way QA people will be less confused about the version of the site they're testing. Among its numerous advantages, it gives you an additional selling point for an Agile development process. It doesn't matter whether you tag your build in source control or preserve it on the hard drive waiting to be burned to a deployment CD (BTW: the former is more Sarbanes-Oxley compliant).

You could use the CC.NET <labeller> task, which offers automatic increment, but I am not sure the number survives a restart of the CC server; and even if it does, you don't really have much control over it. Plain NAnt would require crazy workarounds, but the NAntContrib library contains a very useful task.

The main steps of the process are pretty standard: the Cruise Control project checks out the files and passes control to the NAnt build script, which builds the solution, runs the NUnit tests, and tags the result. It includes the following tasks:
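The original task listing is lost; a sketch of the NAnt side, under the assumption that the NAntContrib <version> task is used (target names, solution file, and test assembly are invented - only CCNetArtifactDirectory comes from the post):

```xml
<target name="increment-version">
  <!-- NAntContrib <version> task: reads, increments, and saves the build number;
       the result lands in ${buildnumber.version} -->
  <version path="${CCNetArtifactDirectory}/version.number"
           buildtype="Increment" />
</target>

<target name="build" depends="increment-version">
  <solution configuration="Release" solutionfile="MySolution.sln" />
  <nunit2>
    <formatter type="Xml" />
    <test assemblyname="bin/MyProject.Tests.dll" />
  </nunit2>
</target>
```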

You can notice that the first script relies on the CCNetArtifactDirectory variable, whose value is passed down from the Cruise Control script by default. The variable points to the <ProjectName>/Artifacts folder, which is created automatically under the Cruise Control root directory the first time the project is built.

Here is the trick: we keep the version.number file there (the increment-version task creates it if it can't find one). The content of the file is initially set to 1.0.0.0 (or another value if you wish); the task increments it according to the increment settings, and the result is stored in the ${buildnumber.version} variable. The advantage of this approach is that while the build script is part of the solution, the version.number file logically belongs at the CC.NET project level. At least it's kept away from developers, who could change it :). Another approach is to store version.number in source control (of course, outside the path observed by CC.NET), but then additional tasks would be required to check the file out, change it, and check it back in, without adding much protection.

It is worth mentioning that the <update-assembly> task regenerates the AssemblyInfo.cs file from scratch, and it's up to you to restore the required settings. If you prefer to avoid this, use <script> with a C# function that gently vivisects the AssemblyInfo.cs file and replaces just the version.

Tuesday, May 01, 2007

Do not believe your eyes - this book is pitch black in reality (no, it can't be a different book, can it?). Have you ever chosen between a property and a method, or wondered whether you should call your class WtfThngMgc? Here is your guide, with sound DOs and DON'Ts. The .NET Framework guys cover pretty much everything you need when designing your own framework. You can learn from all their breakthroughs, mistakes, and trade-offs, which they carefully analyze. As with all good books, it sounds like you've always known all this stuff but forgot. Great job highlighting the distinctive features of application versus framework code - worth reading.

Tuesday, April 24, 2007

Great, great, great book. Despite the title, it doesn't stop at security topics exclusively, but analyzes the important entrails of ASP.NET - pipelines, the meaning of events, workflows. It brings all the stuff together and organizes your ad-hoc knowledge.

All possible authorization and authentication scenarios are reviewed, including the very common mixed authentication with internal and external users, which I haven't seen described too often. Great tips and tricks with detailed instructions are mixed with high-level architecture overviews. Interestingly, the book avoids describing the Profile mechanism; Microsoft did a lousy job porting this functionality from the icky Web Site to the Web Application project type, and using Profile capabilities requires extra effort.

This book is strongly recommended for good .NET developers who want to become great .NET developers.

Tuesday, April 10, 2007

Inspired by Scott Hanselman's article about careful exception handling, I decided to investigate the price of catching cast errors. For comparison I chose the TryParse method, which .NET 2.0 implemented on all basic types (in version 1.1 only double had it).

I wrote a small test runner (and practiced generics and delegates along the way). The following two methods were tested:
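The original listing didn't survive, but the pair being compared presumably looked something like this (a reconstruction using int; the method names are mine):

```csharp
using System;

static class ParsingBenchmarkSubjects
{
    // Traditional approach: let Parse throw and swallow the exception.
    // On bad input the cost of constructing and unwinding the exception dominates.
    public static bool ParseWithTryCatch(string input, out int result)
    {
        try
        {
            result = int.Parse(input);   // throws FormatException on bad input
            return true;
        }
        catch (FormatException)
        {
            result = 0;
            return false;
        }
    }

    // .NET 2.0 approach: TryParse reports failure via the return value,
    // so no exception is ever thrown.
    public static bool ParseWithTryParse(string input, out int result)
    {
        return int.TryParse(input, out result);
    }
}
```

Timing each method over the same mix of valid and invalid strings is what produces the numbers discussed below.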

Successful operations, unsurprisingly, are pretty fast and equal in performance. But in the failure case, the traditional try-catch handling is much more expensive.

What would happen if there were more operations - say, 1,000 of them? The difference becomes more visible:

In database-related applications, 1,000 parse operations in a row are very common. While DateTime.Parse seems capable of better performance than Convert.ToDateTime (the latter uses Parse deep inside), the price of a try-catch eats up the advantage anyway.