Code Renaissance is about building great teams and great software. By exploring best practices, team interactions, design, testing, and related skills, Code Renaissance strives to help you create the team and codebase that you've always wanted.

Let's say you're about to bring on board a new customer that will grow your business significantly, say by forty to sixty percent. Is this a good thing or a bad thing? From some perspectives of course it's good, especially if your company needs the money to keep its head above water. Even if that's not the case, additional profits are always welcome.

Something to think about though... how will your current system hold up under the load? How maintainable is it? How scalable is it? What will happen when those nightly jobs that normally run into the wee hours of the morning come under double the load and no longer finish before users log in in the morning? What impact will those reports that hit production data directly have when they pull twice as much data?

Perhaps those aren't your problems; perhaps you run a good clean shop and there are no skeletons in your closet. Still, trust me, there will be problems. With this much growth it's all too likely that something won't scale under the additional load (even if you don't, and probably can't, know what it is yet).

Of course the work these problems require comes on top of the other work the client will generate. What new features and reports has the sales team promised? For a client this big I can almost guarantee there were some. If you're lucky, sales has had conversations with I.T. to verify that they can meet the demands. Still, what is I.T. going to say? "No, we can't do it; you'll just have to turn down the really big client"? Not likely.

Sadly though, these probably won't be the last of the new customer's demands. The customer knows how valuable they are to you, and they'll likely use this to their advantage. They're in the position of power. What happens if they keep making demands? What happens if they get frustrated and leave?

Worse yet, what happens if they stay? Will you burn out your I.T. staff? Your development team was likely at or just above capacity already. If you have a small team it will take at least two to three months to staff up if you want competent people, and then another two to three months for the new hires to become really productive. And that's if you're lucky; sometimes it can take a very long time to fill just one position. Sometimes, after a long search, you hire someone really good and you're feeling great... and then they leave after a week and you have to start all over.

Have you started looking yet? Don't wait too long or your development team may start to buckle under the demands, and then you'll have turnover to worry about too. Let's say your team manages to stay on top of things. Did they do it by cobbling together brittle, unmaintainable code under the pressure of unrealistic timelines?

What about the plans the business had for innovation and revolution... the kinds of things that brought in this big new client in the first place? How long are you delaying them? Will they ever be revived? What about the really talented developers you had working on those cool new projects? If you reallocate them to the new client work, will they get bored or frustrated and jump ship?

Slow, steady growth is best, but it may not be possible to turn down a big new client, even if you know you're not ready for them. So what do you do? If you go forward you'll need to mitigate the risks as quickly as possible.

I think the main problem is often just poor planning and communication. You can't blame the sales staff for aggressively pursuing new clients; that's their job. But someone has to be coordinating their efforts with preparation on the I.T. side well in advance of anything actually happening. Someone has to be looking at the big picture.

Someone has to say, "We're planning to double our business; what do we need to do to support that?" In general, big clients do not come on board quickly; there should be adequate time to prepare if only you're careful to plan for it. Just don't be blinded by the profits, and don't rush things.

Have you ever found yourself living in a Dilbert world at work? You know, when you're working in an otherwise sane company and someone does something so bizarrely wrong that you would have sworn it could never happen in real life, so you must be in a Dilbert comic?

At a previous job there was a problem with an integration server that was traced back to a USB flash drive that had been accidentally unplugged. Apparently the server was low on space, so someone plugged in a 2 GB drive and moved a critical database onto it. Then, when someone else saw the flash drive and removed it (likely thinking, "Hey, what the heck is this doing here?"), the integration environment went down.

This is yet another example of the short-term fix myth in action. In addition to the fact that short-term fixes are a maintenance disaster and dig you deep into technical debt, they are often really bad ideas in and of themselves for other, much more basic reasons (such as the fact that pulling out a flash drive can bring down the integration environment).

This is a great podcast on the risks of large redesigns and the benefits of incrementally redesigning one piece at a time. I've wondered before why this isn't done more often, as it seems to me it offers a lot less risk for only a little more effort. The audio is rather brief, but it covers the content well, and it was just great to find someone talking about this.

I was very interested in what Kent Alstad had to say about bottlenecks, so I pulled out what I could from his talk and tried to fill in the gaps. After a bit of study on my own, here's my take on it.

First, we know that there are always bottlenecks in distributed applications (web or otherwise), and though they may not be apparent at first, under enough load they will show themselves.

There are three main types of bottlenecks:

Memory bound – Too much data being cached or held in processes at one time for the amount of RAM on the system.

CPU bound – Too many processes, and/or one or more intensive processes, consuming all available CPU time.

IO bound – Processes fighting for their turn to read and write data as well as different processes contending for access to the same data.
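To make the taxonomy concrete, here's a small Python sketch that maps a set of measured utilization numbers to the likely bottleneck types. The metric names and the 85% threshold are my own illustrative assumptions, not anything from Kent Alstad's talk; real systems need profiling under load to pick meaningful cutoffs.

```python
def classify_bottleneck(cpu_pct, mem_pct, io_wait_pct, threshold=85.0):
    """Guess likely bottleneck types from utilization percentages.

    The 85% threshold is an illustrative assumption for this sketch.
    """
    suspects = []
    if mem_pct >= threshold:
        suspects.append("memory bound")   # too much cached/held data for RAM
    if cpu_pct >= threshold:
        suspects.append("CPU bound")      # processes eating all CPU time
    if io_wait_pct >= threshold:
        suspects.append("IO bound")       # processes contending for reads/writes
    return suspects or ["no obvious bottleneck"]

print(classify_bottleneck(cpu_pct=95.0, mem_pct=40.0, io_wait_pct=10.0))
# → ['CPU bound']
```

In practice you'd feed this from whatever your monitoring tools report; the point is simply that each bottleneck type has its own telltale metric.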

Once you hit your first bottleneck you can fix it in hardware, in software, or both.

Scaling out – Increasing performance with a distributed architecture (adding more servers); the servers share the load either by handling different pieces of a process or by duplicating a piece of the process across two or more servers. This approach often requires software changes to distribute the load across the servers. JavaScript and Ajax are also often used to scale out by pushing some processing to the client; remember to include client-side compute time in your performance analysis, not just server-side.

As I mentioned previously, now that I have decided that I really enjoy blogging and plan to stick with it, it's time that I start putting some work into my site. So how do I make (possibly) sweeping changes to my site without risking a misstep that will flush everything down the toilet?

Well, the first thing I need is some sort of revision control system. Revision/version/source control software lets you track every change you make to your files, compare different versions, and roll back to whichever version you choose. It's particularly useful when multiple people are working on a project, as it usually has mechanisms to help resolve changes made by different people, and it keeps everyone on the same version of the project.

The concept is simple. Change a file, check it in. Change something else, check it in. Think you made a mistake? Compare versions and roll back if need be. Roll out to test. Find a bug, fix it, and check it in. Once the release is solid, push it to production. If you then find a bug in production, you can pull the last release from source control and roll back to it.
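That cycle looks something like this on the Subversion command line (a sketch: it assumes the svn and svnadmin command-line tools are installed, and the paths and messages are made up for illustration):

```shell
# Create a local repository and a working copy of it.
mkdir -p /tmp/svn-demo && cd /tmp/svn-demo
svnadmin create repo
svn checkout file:///tmp/svn-demo/repo wc
cd wc
# Change a file, check it in.
echo "<h1>Hello</h1>" > index.html
svn add index.html
svn commit -m "Add index page"
# Change something else, check it in.
echo "<h1>Hello, world</h1>" > index.html
svn diff index.html                # think you made a mistake? compare
svn commit -m "Tweak heading"
svn update -r 1                    # roll the working copy back to revision 1
```

TortoiseSVN wraps these same operations in Explorer menu items, so you never have to touch the command line if you don't want to.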

Some common/popular open source solutions include Subversion, Git, and CVS. I had previously heard good things about TortoiseSVN (a Subversion client), which won the SourceForge.net 2007 Community Choice Award for Best Tool or Utility for Developers. It's a shell extension, so it integrates directly into Windows Explorer, which makes it very easy to use.

It took me about 15 minutes to create a repository and get a handle on the basic functionality. It was all fairly intuitive. Right-clicking on files brings up a context-sensitive menu. The "SVN Commit" command saves your changes. The "SVN Update" command pulls down the latest version of a file or directory. A submenu provides more advanced features.

It would be nice to have it accessible from the Visual Studio IDE, and I understand there is at least one Subversion client that lets you do just that for a price, but I wanted something free, and for the small scope of this and other projects I'll be working on, TortoiseSVN is more than enough. Also, because it inserts itself into Windows Explorer, it's not limited to software development; it can be used on any files whose versions you need to track.

Official stats put Internet Explorer well in the lead in the browser wars, but that's mainly because most computers are sold with Windows installed, which means they get IE by default. A large majority of these users never even know they have a choice of browsers. Among people who are tech savvy enough to make a choice, I believe the overwhelming pick is Firefox. In support of this, here are some stats provided by Google Analytics for my site.

Notice Firefox's dramatic lead. People reading technical blogs are people empowered to make a choice and those people are overwhelmingly choosing Firefox.

So why Firefox? In a word, extensibility. Firefox was built with user-added functionality in mind, and lots of very talented people are busy extending it. Firefox has hundreds of plugins for all sorts of things; one of them is bound to be right up your alley.

As a web developer, the Firebug and YSlow plug-ins are a must. The added productivity and quality these tools provide make the decision to design your site to support Firefox a no-brainer. With them you can:

Analyze your site's performance

Monitor Ajax HTTP requests

Debug JavaScript

Syntax-check your JavaScript

Drill into the DOM in real time

Make temporary changes to HTML, JavaScript, and CSS in real time to work out what-ifs

That last feature is really cool. Imagine sitting down with someone who is approving your site design and having them comment that they wish the font was a little bigger. You jump into the CSS and two seconds later it is. Image too small, spacing wrong? Zippity zip... "There, how's that?" Keep a notepad handy because the changes don't persist, but that also means you can freely tweak things without fear of messing something up (just refresh the page).

In junior high I'd ride my bike several miles to the library and ride back with a stack of books balanced on my handlebars. I read a lot of science fiction and studied everything from gemology and sailing to computers and lasers. In high school I started frequenting bookstores more and more because their materials were up to date, but I still made it to the library about once a quarter. After high school, though, I almost never went to the library; it was inconvenient and I seldom found what I wanted.

A few years ago I checked my local library's website out of curiosity and found that everything about libraries had changed. Welcome to what I'd like to call Public Library 2.0: the interactive online experience. Now you can:

Search the catalog of all libraries in your area and reserve books online.

Get an email notification when your book is available and pick it up at the location of your choice within 7 days.

Renew online if you need the book longer.

Get access to difficult-to-find books through the inter-library loan program. Library staff will locate the book by contacting college and private libraries, have it shipped to them, and call you so you can pick it up (this is great for technical books, even recently released ones).

Download popular audio books online and listen to them for up to two weeks.

View your account online (books / requests / fines).

Because of all this convenience, I now rely on the Jacksonville Public Library as a reliable source of current information. I can't guarantee that your library will live up to these high standards, but I'd encourage you to take a few minutes to check your local library's website and find out.

In this presentation Leisa talks about iterative versus waterfall development. She does not recommend a specific methodology, but instead elaborates on the failures of waterfall models and the benefits of iterative ones. If you have experienced the failures of waterfall and are looking for alternatives, then this talk is for you.