Fog Creek's website has been redone to use our new Sam Sherwood-designed logo. The cutting-edge page design is thanks to superstar web designer Dave Shea, famous for the CSS Zen Garden and the eye-popping new Mozilla home page, with additional programming and graphics by Fog Creek's own Dmitri Kalmar. It's about 99% standards-compliant (with the exception of a couple of stray FONT tags left over from old content that hasn't been updated... oh the horror!).

Question one, for you telecom mavens out there. If you buy DSL service in New York from Covad, aren't they just going to get Verizon to install the actual DSL circuit? If so... why is it cheaper to get it from Covad?

Yes, we seem to be in the market for a new DSL provider. And I'm tired of playing the blame game where your DSL provider blames everything on Verizon and Verizon blames everything on the DSL provider, so I'd be willing to pay the monopoly tax if it meant when our DSL went down there was nobody left to blame. If you know whether Covad uses Verizon, post an answer here.

Question two, for you reliable SQL Server mavens out there. Suppose I wanted to build a Win2K-based web service using SQL Server to store the data. But I'm a reliability nut. So obviously I'll use industrial strength servers with RAID, two power supplies and network cards, etc, and they'll live in secure colocation facilities.

To further minimize failure points, I'll have a hot backup. But the twist is that I figured as long as I'm paying for a hot backup, it would be more reliable if it was somewhere else, say, on the other coast.

So here's the plan I'm working on. Server A in New York, with IIS and SQL Server. Server B in Vancouver, with IIS and SQL Server. Server A is somehow "writing through" any database changes to server B. I know I can do this with transaction log shipping; is this a good way to do it? Is there a better way?

Then if Server A blows up, I simply ask my ISP to route the packets intended for Server A to Server B. (I assume they can do this if it's their backbone).
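The mechanism underneath transaction log shipping is simple enough to sketch in plain T-SQL. This is just the core backup/copy/restore cycle, not Microsoft's packaged log-shipping jobs, and the database name, file paths, and schedule are made up for illustration:

```sql
-- On Server A (New York), back up the transaction log periodically:
BACKUP LOG MyAppDB
    TO DISK = 'D:\LogShip\MyAppDB_20031103_1200.trn';

-- Copy the .trn file to Vancouver by whatever means (robocopy, FTP),
-- then on Server B, restore it WITH STANDBY so the database stays
-- ready to accept the next log in the sequence (read-only meanwhile):
RESTORE LOG MyAppDB
    FROM DISK = 'D:\LogShip\MyAppDB_20031103_1200.trn'
    WITH STANDBY = 'D:\LogShip\undo_MyAppDB.dat';

-- At failover time, bring Server B fully online:
RESTORE DATABASE MyAppDB WITH RECOVERY;
```

The catch is that the standby only has data up to the last shipped log, so the backup/copy/restore interval is your window of potential data loss.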

Might I please kindly request in advance that you do not suggest using Linux instead of Windows 2003. Yes, I concede that Linux is "more secure," but not when I'm the one pushing the buttons. Last time a flaw was discovered in Windows, it took me two clicks to patch it. Last time a flaw was discovered in SSH, it took me four hours of compiling and messing around to patch it. I apologize but I don't have the skilz to keep a Linux box secure, so please, let's talk about how to make this particular configuration reliable, not about whether Linux is a better OS than Windows. Or, actually, if you do want to talk about whether Linux is more secure than Windows, do so here.

And a font

Back in the days when I did Mac development (System 6) the biggest monitors available for the Mac were maybe 9", and the only way to see a reasonable amount of code on screen was to use a tiny font. Now that I have two 18" LCD panels, the only way to see a reasonable amount of code on screen is to use a tiny font. The world is awash in lovely TrueType fonts but none of them are monospaced, which is a nuisance for programming because things which should line up won't.

Fortunately, I have found ProFont, and all is well again. For best results use the FON version, not the TTF version.

Almost any argument about managing the software development process inevitably deteriorates into anecdote-ping-pong. “We did wawa and everyone quit.”

“Oh yeah? Then how do you explain Company X? They wawa regularly and their stock is up 20%!”

If you have even the slightest bit of common sense, you should ask: “Where's the data? If I'm going to switch to Intense Programming I want to see proof that the extra money spent on dog kennels and bird cages is going to pay for itself in increased programmer self-esteem. Show me hard data!”

And, of course, we have none.

One set of people will tell you you gotta have private offices with walls and a door that closes. Another set of extremos will tell you everyone has to be in a room together, shoulder-to-shoulder. Neither of them has any hard data whatsoever, where by “hard data” I mean “data that wouldn't be laughed out of a sixth-grade science classroom.” The truth is, you can't honestly compare the productivity of two software teams unless they are trying to build exactly the same thing under exactly the same circumstances with the exact same human individuals, who have been somehow cloned so they don't learn anything the first time through the experiment.

Tom DeMarco was so frustrated at the inherent impossibility of providing any kind of hard data that he went so far as to write a novel in which he fantasizes about a bizarre land in which programmers are so cheap you actually can do experiments where, say, half the people have offices and half the people have cubicles.

But we don't have the data. We don't have any data. You can give us anecdotes left and right about how methodology X worked or didn't work, but you can't prove that when it worked it wasn't just because of one really, really good programmer on the team, and you can't prove that when it failed it wasn't just because the company was in the process of going bankrupt and everybody was too demoralized to do anything at all, Aeron chairs notwithstanding.

But don't give up hope. We do have the collective wisdom of fifty years of building software to draw from. Or at least, it's somewhere. Your typical startup with three pals from college may not exactly have the collective wisdom, so they're going to reinvent things from scratch that IBM figured out in 1961, or go bankrupt failing to reinvent them. Too bad, because they could have read Facts and Fallacies of Software Engineering, by Robert L. Glass, the best summary of what the software profession should have agreed upon by now. Here are just a few examples from the 55 facts and 10 fallacies in the book:

The most important factor in software work is not the tools and techniques used by the programmers, but rather the quality of the programmers themselves.

Adding people to a late project makes it later.

Reuse-in-the-small (libraries of subroutines) began nearly 50 years ago and is a well-solved problem.

Reuse-in-the-large (components) remains a mostly unsolved problem, even though everyone agrees it is important and desirable.

You can read the others in the table of contents on Amazon. One of the best things about the book is that it has sources for each fact and fallacy, so you can go back and figure out why we collectively believe that, say, code inspection is valuable but cannot and should not replace testing. This is bound to be particularly helpful when you need ammunition for your arguments with people in suits making absurd demands (“Can we make a baby in 1 month if we hire 9 mothers?”).

My incoming spam is running at over 200 junk emails a day, but SpamBayes is catching them all, with virtually no false positives. Bayesian filtering, popularized by Paul Graham and available in many open source implementations, is the best answer yet.
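The idea behind Bayesian filtering fits in a few lines: learn per-token spam probabilities from examples of spam and legitimate mail, then combine them to score a new message. Here's a minimal sketch in the spirit of Graham's approach; the tiny corpus, clamping constants, and the 0.4 default for unseen tokens are illustrative choices, not what SpamBayes actually uses:

```python
from collections import Counter

def train(spam_msgs, ham_msgs):
    """Return per-token P(spam | token) from two small corpora."""
    spam_counts = Counter(w for m in spam_msgs for w in m.split())
    ham_counts = Counter(w for m in ham_msgs for w in m.split())
    probs = {}
    for token in set(spam_counts) | set(ham_counts):
        s = spam_counts[token] / len(spam_msgs)   # rate in spam
        h = ham_counts[token] / len(ham_msgs)     # rate in ham
        # Clamp so no single token is absolute proof either way.
        probs[token] = max(0.01, min(0.99, s / (s + h)))
    return probs

def spam_probability(message, probs):
    """Combine token probabilities with the naive-Bayes product rule."""
    p_spam, p_ham = 1.0, 1.0
    for token in message.split():
        p = probs.get(token, 0.4)  # unseen tokens lean slightly ham
        p_spam *= p
        p_ham *= (1.0 - p)
    return p_spam / (p_spam + p_ham)
```

Because the product rule multiplies evidence together, even a handful of strongly spammy tokens pushes the score close to 1.0 while ordinary office vocabulary pushes it close to 0, which is why the false-positive rate can be so low.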

I spent the long weekend grinding through the backlog of Joel on Software translations. There are a bunch of new articles in various languages including new sections for Esperanto and Greek. All in all there are 264 translations in progress in 32 languages thanks to 242 volunteers around the world. 177 translations are complete and have already been posted.

There are a few articles, already translated, which just need copy editors before I can post them. If you read and write one of these languages fluently and are willing to help out, I'd really appreciate it! What's involved is just looking for typos and errors and improving the translation wherever possible. If I don't find anyone to edit the articles I will probably just go ahead and post them unedited but it would be nice to have a second set of eyes improving the quality of the translations.

A frequently asked question: why bother with these translations? Surely any real programmer knows English! And my frequently answered answer: First of all, not every programmer knows English, and if they do, they may not know it that well, so they may not really enjoy reading things written in English if they don't have to. Second, even if the programmers have learned enough English to decipher online documentation, their pointy-haired bosses may not have.

Another frequent question: why not just use Babelfish or Google Language Tools or another similar translation tool? Answer: They are seriously little. You cannot include/understand simply the exit. Er, what I meant to say was, they are seriously inadequate. The quality of translations produced by automatic software is so horrible that you really can't understand the output. Try asking Google to translate http://french.joelonsoftware.com from French to English for some real howlers. "Then why does nobody make planning? Two principal reasons. Firstly, it is really difficult. Secondly, nobody believes that that is worth the sorrow of it. Why give so much difficulty to be worked on a planning if it is known that it will not be correct?"