We’ve been going through beta testing at Strangeloop, which means I’ve had the chance to do some serious scaling of ASP.NET. One of the interesting experiences that keeps coming up in this process is the reaction we get from customers when we’re helping them do load testing.

One of the things we can offer our early beta test customers is the opportunity to load test their site, with and without Mobius in the loop. We need the test data anyway, and quite a few candidates don’t really have much in the way of load testing resources ready to go. And then we test their site in our lab with our Spirent Avalanche, and they go “Wow! I need one of those!”

So what’s a Spirent Avalanche, you ask? Funny you should ask… It’s 3Us of load testing love.

Josh Bixby, our VP of product development, first noticed it at trade shows. One of the benefits of having our feet in both the development camp and the networking camp is that we naturally see things on the network side that a lot of developers don’t. Josh pointed out that virtually every company making networking appliances had one of these 3U boxes in their demo racks. But I’d never heard of it before. So we checked it out, and realized it was the best answer I’ve ever seen to doing load testing. I know that load testing isn’t something people want to think about unless they HAVE to think about it. But if you do have to think about it, you have to check this out.

I don’t need to emphasize how much of a pain load testing is. Typically you have two options, both of which suck. If you’re doing it yourself, you may spend literally a week setting up a load test farm, and you’ll probably spend more energy making the configuration work than actually running the test. Which is no surprise, since most likely you’re using any piece of junk you can find, networking together a bunch of machines with different NICs, different performance, different speeds, etc., before you even begin to configure the test. I had one customer who bought me ten identical, dedicated servers for load testing - for about the same cost as an Avalanche - but that’s the exception, not the rule. And even then you get much less control, you have to do all your own analytics, and so on.
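To give a sense of what rolling your own looks like: here’s a minimal sketch in Python of the kind of load generator you end up hand-writing on that farm of junk machines. This is purely an illustration of the DIY approach - it’s not anything the Avalanche runs, and the function name and stats are my own invention.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def run_load_test(url, clients=10, requests_per_client=5):
    """Fire concurrent GET requests at url and collect per-request latencies."""
    def one_client(_):
        latencies = []
        for _ in range(requests_per_client):
            start = time.perf_counter()
            with urllib.request.urlopen(url) as resp:
                resp.read()  # drain the body so timing includes the transfer
            latencies.append(time.perf_counter() - start)
        return latencies

    # Each "virtual user" is just a thread hammering the URL in a loop.
    with ThreadPoolExecutor(max_workers=clients) as pool:
        results = pool.map(one_client, range(clients))

    all_latencies = [lat for client in results for lat in client]
    return {
        "requests": len(all_latencies),
        "mean_s": mean(all_latencies),
        "max_s": max(all_latencies),
    }
```

Even this toy version shows the problem: threads don’t scale to serious user counts, you get no per-transaction breakdown, and you still have to build all the analysis yourself.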

It’s easy to think “Oh, I’ll just use Mercury Interactive (sorry, HP Mercury) to do my load testing.” Easy until you see the price. Paying six digits for load testing with a 20% annual maintenance contract isn’t so easy. And that’s just for software – you still supply the hardware. I don’t think anyone told Mercury that the Dot Com Boom was over.

So taking a page from the network guys, there’s a third way to do load testing: You get a Spirent Avalanche, hook it up, and let it do the job. One 3U box with four gigabit Ethernet ports that can generate nearly two million users by itself. So you’ve got the hardware and the software all in one box.

Of course, the Avalanche isn’t cheap either, although they’ve nailed the gradually pregnant business model well – you can rent the gear, and those rental charges get applied to a purchase. We spent less than $100,000 on our 2700 with all the features we needed to do web testing. It also uses TCL-based scripting, which is usually the realm of networking guys, not developers, and can be difficult to understand. TCL provides the Avalanche with the flexibility to do load testing on a lot more than just web stuff.

However, bundled with the Avalanche is a product called TracePlus/Web Detective (Spirent Edition), made by System Software Technology (SST). SST makes a variety of different TracePlus products for networking and web, including this version specifically for working with the Avalanche. TracePlus provides the classic capture mechanisms that you see with most load generating tools, where the tool captures your navigation of the web pages and captures them as HTTP commands. The Avalanche internally converts this to its TCL commands.

The Avalanche has some ability to do reporting internally (pretty graphs), but the main way we’re using it is in “Excel mode”, where it generates CSV files that we can load into spreadsheets for analysis.
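For anyone curious what that spreadsheet work amounts to, here’s a small Python sketch of the kind of post-processing we do on those CSV files. The column names here are invented for illustration - the real Avalanche export schema is different - but the shape of the analysis (totals, error rates, worst-case response times) is the same.

```python
import csv
import io

# Hypothetical sample export -- NOT the actual Avalanche CSV schema.
SAMPLE = """elapsed_s,successful_txns,failed_txns,avg_response_ms
10,4980,20,45
20,4875,125,61
30,4610,390,88
"""

def summarize(csv_text):
    """Roll interval-by-interval load test rows up into run-level numbers."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    total_ok = sum(int(r["successful_txns"]) for r in rows)
    total_fail = sum(int(r["failed_txns"]) for r in rows)
    return {
        "ok": total_ok,
        "failed": total_fail,
        "error_rate": total_fail / (total_ok + total_fail),
        "worst_avg_ms": max(int(r["avg_response_ms"]) for r in rows),
    }
```

The interesting part is always the trend: in the sample above, failures and response times both climb as the run goes on, which is exactly the knee in the curve you’re load testing to find.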

We’re also finding that the Avalanche doesn’t understand ASP.NET things like viewstate very well, but then, neither does WAST. We’re using Visual Studio 2005 Team Edition for Testers to get really smart functional testing around specific ASP.NET features.
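The viewstate problem, for anyone who hasn’t hit it: ASP.NET emits a hidden __VIEWSTATE field (and __EVENTVALIDATION in 2.0) that changes on every render, so a naively recorded-and-replayed POST gets rejected by the server. Any smart script has to scrape those fields out of the page it just received and echo them back on the next request. Here’s a rough Python sketch of that technique - regex scraping of HTML is fragile, and this isn’t what any of the tools I mentioned actually do internally, it just shows the shape of the fix.

```python
import re
import urllib.parse

# Matches ASP.NET's hidden state fields in a rendered page.
HIDDEN_FIELD = re.compile(
    r'<input[^>]+name="(__VIEWSTATE|__EVENTVALIDATION)"[^>]+value="([^"]*)"')

def build_postback(page_html, user_fields):
    """Merge the hidden-state fields scraped from the rendered page into
    the form data for the next POST, so the server will accept it."""
    fields = dict(HIDDEN_FIELD.findall(page_html))
    fields.update(user_fields)  # the values the "user" is actually submitting
    return urllib.parse.urlencode(fields)
```

A capture-replay tool that skips this step is effectively testing your error page, not your application.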

Even with these complications, it’s a far better way to do load testing than setting up servers, and infinitely better than letting your paying customers do the testing. So if you’re doing load testing, why aren’t you using one of these? Why don’t more people know about this? This is pretty standard equipment if you build networking gear. It’s not like the Avalanche is some new, earth-shattering product. It’s not even mentioned on the main page of Spirent’s Web site?!?

I have yet to find anyone else in the ASP.NET world using a Spirent Avalanche. I really think it’s just a cultural issue, where great stuff is getting lost in translation between the networking world and the Web development world.

Important lesson: If you’re not paying attention to the networking space, you should be. You may just be wasting your time wrestling with a problem that other smart people have already solved. That’s one of the cool things about working with Strangeloop; we really get to straddle the line between those two worlds.

While recording .NET Rocks this week, Carl and I had a chance to talk to Thomas Lewis and Mike Swanson about Show Off at PDC 2005.

The idea is to show videos of developers showing off their favorite bits of code - some clever trick or idea that can be demonstrated in under five minutes. The concept is cool, but what really stokes me is that it's about the community, not about Microsoft.

Normally at the PDC you're watching Microsoft presenters showing off the future of Microsoft tools. But this is going to be the opposite - developers showing what they've done with Microsoft tools.

I'm encouraging Carl to "show off" this little tool he wrote for .NET Rocks. It takes the source version of the show and generates WMA, MP3 and AAC in different quality modes, creates the split versions for folks who want the show to fit on CD, and builds the torrent files for BitTorrent distribution. It saves a ton of time and is just the sort of thing I think would make a great five minute video.

We were all affected by the tsunami that struck southern Asia, but my friend Julie Lerman really did something about it - she dropped everything and started working with an aid organization in Indonesia called AcehAid. She's pretty much done nothing else since; I don't know if she's eating, much less working.

Julie had a brainstorm to get a bunch of us consulting types together and auction off an hour of our time on eBay, all proceeds donated to AcehAid. My buddy Stephen Forte put out the call, wrangled us in and got the auction posted today. We've already gotten some pretty good press even before the auction was up.

Actually, I wouldn't mind getting an hour of these folks' time myself. Many of them are friends, and they're all so busy that actually getting an hour just to talk to them is bloody difficult. For my part, I've been booked up with work for so many years that I haven't worked with a new client since 2001.

I'm not a huge Wiki fan, but Microsoft putting software into the Open Source domain is pretty cool.

If you've never heard of Wikis, you're not alone; they're kind of a weird concept (and product): a web site that's fully editable by virtually anyone, so you get this sort of disorganized, amorphous blob of potentially useful information that keeps moving and changing... wow, it's just like the Internet!

Anyway, Wikis were invented by Ward Cunningham, a terribly clever fellow who now works at Microsoft (which is, after all, the land of terribly clever people). FlexWiki was developed by David Ornstein, a Program Manager for Longhorn. Do these two facts have any relation? I dunno.

This is the third time Microsoft has posted software to SourceForge. The first, back in March of 2004, was the Windows Installer XML (WiX) toolset. The second was the Windows Template Library, released in May. Both of these projects are libraries, and so not of much interest to regular mortals; even so, they are some of the most popular projects on SourceForge, with 103,000 and 22,000 downloads respectively - and the page view counts are huge!

Man, I'm a huge fan of Jim Allchin. He's straight-talking, serious and kicks ass. My current favorite quote from Jim: “Malware. I want it dead.” He followed that by admitting he hadn't been able to deliver on that 100% for XP SP2. But they're still plugging away.

But the topic of the day was Longhorn. Most people know that the name comes from the Longhorn bar that sits between Whistler (the code name of Windows XP) and Blackcomb (which was supposed to be the next version of Windows). But the reality of Longhorn is that it has grown to be an amazing and complex version of Windows. The highlight pieces have been Avalon, Indigo and WinFS. Microsoft has promised a stunning amount of new functionality in Longhorn, and Jim is promising to deliver on it, just in a different form.

What's happened is that the Windows team is fixing the date of Longhorn - for “holiday time 2006.” To do that, they are breaking up the delivery of all these different features.

The exciting part is that versions of Avalon and Indigo are going to be made available for Windows XP and Windows Server 2003. This is great news for developers: we're going to get a chance to build software utilizing the capabilities of these subsystems without having to have Longhorn. We don't have to drive our customers to the latest OS to take advantage of this new technology.

WinFS is being pushed back, to be delivered after Longhorn. The way Jim talked about it, it sounds to me like WinFS is growing in scope - the more they realize the power of object-based data storage, the more development they need to do. Jim said they realized they don't want to ship the WinFS client component without the server component, and that means they need more time. It makes sense to me, and it sounds like it's going to be worth the wait. It doesn't sound like it's going to be long after, either. Jim says that WinFS will be in beta when Longhorn ships. That pretty much means WinFS must ship in 2007 - Microsoft rarely ever goes over a year in beta.

The obvious question is “what's left for Longhorn?” and the answer is plenty. Sure, Avalon, Indigo and WinFS have been the highlight elements, but there is plenty more in the plan. A vastly more advanced search system is key, along with better functionality all around. The new display driver model of Longhorn is going to make a huge difference, I don't think we'll see the full power of Avalon until that is in place. A vastly improved deployment engine is going to make a big difference to anyone handling more computers than they can reach easily in one room.

In the end, the room applauded Jim, not just for being forthright about the realities, but because I think everyone here realized that this new plan is a better plan. Waiting for a massive shipment of all-new code is not the best way to go - break the important bits down and get them out the door. That way we can kick the tires, explore the capabilities, and feed back into Microsoft to make them better. When the whole comes together, it'll be vastly superior to the original plan.

There's been plenty of kerfuffle lately over how development jobs are getting outsourced to other countries... but I see no downside to this, no matter which way it goes.

The reality of development, even now, is that the majority of software development projects fail. Back in 1994, the Standish Group wrote The Chaos Report, an evaluation of 365 groups of people covering 8,380 applications. Of those projects, 31.3% were cancelled before completion, 52.7% cost an average of 189% of their original estimates, and only 16.2% were completed on time and on budget. So depending on how you measure failure, you can say anywhere between 30% and 80% of software projects fail.

Now that was ten years ago, and the Standish Group continues to publish the Chaos Report - they just charge a bundle for it. But some folks who have paid the money say that in ten years, things have improved: outright failures (project cancellations) have dropped to 15%. Still, it's not a trivial failure rate. And there are plenty of other reports reflecting the ongoing problems with building software.

These reports all say the same thing: projects don't fail because of inadequate technology, or even inadequate programmers - they fail from bad planning. Lousy requirements, poor tracking methods, weak quality assurance, and so on... in the end, it all comes down to bad project management. Computers can do the work, and programmers can (usually) write half-decent code, but getting them to build the right things is the problem.

This issue only gets amplified when you go to offshore development. If you don't have a plan to handle the logistics of the project, you're going to have just as big a failure offshore as you did onshore. Maybe it'll cost you less, but it's still a failure.

Some folks talk about the need for architects, but I think the local role in an offshore project is bigger than that - the requirements gathering, project progress tracking and quality assurance evaluation represent a ton of work. And, as with all projects, as soon as something is built, it needs to be changed, so there's more work in dealing with the changes. And if these things aren't being handled well, you're going to fail.

But suppose (and this is a big supposition) that you do get your application successfully built using outsourced developers. In fact, suppose (and this is REALLY a big supposition) that all these applications get built perfectly. What then? Well, there's still plenty of work building better apps. It's not like there's any shortage of software to be built. Most companies I know are only willing to talk about the one application they need right now, because it's so hard to get anything finished. But when you drill deep into their plans, you see dozens of prospective applications.

Reducing the cost and increasing the speed in which applications can be built can only be good for our industry - it means MORE work, not less.

So, regardless of how the outsourcing movement works out, it can only be good: if it fails, we're back where we started, still struggling to build applications because it's hard. And if it succeeds, we're going to build more, better applications.

Of course, this is all roses and sunshine as long as you aren't the programmer getting laid off because your company is outsourcing development. There aren't any easy answers for you... including blaming the loss of your job on outsourcing. This isn't the first time jobs have been shuffled, and it won't be the last. And as for that “of course it's easy for you, you're not the one being laid off” argument... grow up. I'm not being laid off because I work for myself, and I stay focused on delivering an effective return on investment for my customers. If you did the same, you'd be fine too - self-employed or working for someone else. Valuable people stay busy; there's always more work than time.