I am nearly positive I didn't come up with this analogy myself, but quick googling reveals no link. Anyway, a great deal of the software I use professionally on a daily basis has been grown organically. Frequently, it started from one person's itch, was released into the wild via a source control system, and was hardened over time by the contributions of many individuals into something more or less production-worthy. Sometimes it was sensibly designed at first by the original author, but many other times it was just a hack thrown together because it was cheap, and propagated because it was useful.

I suspect much software is written this way. I am a believer in worse-is-better design. I am pretty happy with this state.

All too frequently, the software proves unequal to the task at hand, however. In this situation one of two things happens. Sometimes, someone gets a bee in his bonnet and architects the "right" solution from scratch. All the previous work is thrown out. The first 90% of the effort is getting the new design implemented. The second 90% of the effort is patching all the little niggling fixes that make software usable, the fixes that the collective intelligence of the user community provided for the previous wattle-and-daub implementation.

Other times, nobody cares enough to fix the irritation. Workarounds and horrible kludges are developed to keep things limping along. Everyone can see the system is basically busted.

Almost never does someone refactor existing code to use a new algorithm or what-have-you. It is always either add another layer of whitewash onto the sagging wall or burn the wall down and start over.
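To make the "refactor to a new algorithm" option concrete, here is a tiny hypothetical sketch in Python (both function names are made up for illustration): the public interface and observable behavior stay the same, but the quadratic organically-grown scan is swapped for a linear-time algorithm, so callers never notice the change.

```python
def unique_kludge(items):
    """The organically grown version: O(n^2) nested scan.
    Works fine until the input gets large."""
    result = []
    for item in items:
        found = False
        for seen in result:
            if seen == item:
                found = True
                break
        if not found:
            result.append(item)
    return result


def unique_refactored(items):
    """The refactored version: same signature, same
    order-preserving behavior, but O(n) by using a set
    for membership tests instead of rescanning the list."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result
```

The point of a refactor like this is that every existing caller, and every little niggling fix the community contributed around the old function, keeps working unchanged.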

This state of affairs with broken legacy software bugs me. I think refactoring existing codebases is a good thing. I myself have never done so, almost always opting for the kludge route. I suspect this is fairly widespread. Is it?

Do you think refactoring should be occurring with the same regularity as redesigns? Greater frequency? Do you have a personal guideline or belief on when to give up and start over versus when to limp along with existing code?

It's situational; factor in feasibility and the expected return on your effort.

Imho, the time and effort saved by reining in complexity, minimizing redundancy, and preventing human error alone makes most refactoring tasks I've encountered worth it. Also, the longer one waits to refactor, the more difficult it is to do (for a variety of reasons, not the least of which is forgetfulness), which ultimately discourages refactoring later. Negative self-fulfilling prophecies suck.

I usually only refactor the sections that need to be more efficient or the sections that I need to build more stuff on top of. I try to leave everything else alone, since it doesn't make business sense to go back unless it's broken.