There ARE counter-examples of successful systems re-built from scratch.

The rewrite might have killed Netscape, for example, but without the rewrite from scratch, not only would Netscape have died, but there wouldn't be a Firefox or Mozilla Foundation now.

(And I'd argue that it wasn't the rewrite that killed Netscape: the rewrite was necessary. What killed Netscape, and would have killed it even without the rewrite, was that the then-almighty Microsoft decided to get into the internet game for real and build a good-enough browser. It's ironic that this good-enough browser was IE6, which we now view as the worst impediment to web progress.)

> FogBugz is written in Wasabi, a very advanced, functional-programming dialect of Basic with closures and lambdas and Rails-like active records that can be compiled down to VBScript, JavaScript, PHP4 or PHP5. Wasabi is a private, in-house language written by one of our best developers that is optimized specifically for developing FogBugz; the Wasabi compiler itself is written in C#.

So to avoid starting from scratch they introduced a new language/compiler? Hrm... I question the scalability of this solution: what if every company decided to do this instead of biting the bullet and doing a rewrite?

Look at history: the Digg v4 rewrite was catastrophic. Not in that it didn't work (it was quite elegant engineering-wise, e.g. the Cassandra-powered Digg buttons), but in that it took a really long time, and when it was finally ready it was already behind.

Funny, because the only "real" difference between reddit and Digg is that reddit has an unlimited number of categories. That, and people seem(ed) more interested in the reddit categories than the comparable Digg ones.

That article is probably the most frequently cited piece of bad engineering advice in the world. Bad and unneeded advice, since most corporate environments are already extremely hostile to any kind of rewrite.

Heck, there are companies that still run DOS "servers". Engineering black holes. They suck developer time in and bend all infrastructure around themselves.

Yep, often the "bug fixes" accumulated in the complicated solution are "fixes" for "bugs" like "Jim didn't know that the language's standard library already had a solution for this, so he made it himself, poorly" or "Pam, Bob, and Dave each had to come back and patch Jim's naive solution with specific logic for the corner case they each ran into".
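A toy illustration of that pattern (names and behavior invented, in Python for concreteness): Jim's hand-rolled helper, already patched once by Pam, versus the standard-library one-liner that should have been used from day one.

```python
# Hypothetical example of "Jim's solution": a hand-rolled order-preserving
# de-duplication, patched over time for corner cases it shouldn't have had.
def jims_unique(items):
    seen = []
    for item in items:
        # Pam's fix: the original version crashed on None
        if item is None:
            continue
        if item not in seen:  # O(n^2): linear scan for every element
            seen.append(item)
    return seen

# The stdlib equivalent: dict preserves insertion order (Python 3.7+),
# so this de-duplicates in O(n) while keeping first-seen order.
def unique(items):
    return list(dict.fromkeys(items))
```

The stdlib version is shorter, faster, and never needed Pam's, Bob's, or Dave's corner-case patches in the first place.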

When the new hire Steve shows up, deletes the whole thing, and uses the time-tested model of the wheel, he is not a villain, and any suggestion otherwise is poor management philosophy that will drive Steves away, leaving you with a bad project, bad management, and bad engineers (AKA 99% of every project ever). That holds even if Steve's rewrite exposes bugs in consuming code that only worked by two-wrongs-make-a-right (which is simply a bug), and he isn't lucky enough to discover this on his own. The flexibility and resources to make systematic improvements like these, which combat cruft and maintain or repair feature velocity, are why QA - automated or otherwise - exists.
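One concrete form of that safety net is a characterization test: pin down what the old code actually does before Steve deletes it, so the rewrite's regressions surface in CI rather than in production. A minimal sketch (both functions are invented for illustration):

```python
# Hypothetical characterization test: capture the observed behavior of the
# crufty old code, quirks included, then hold the rewrite to it.
def old_normalize(s):
    # Jim's original: trims, lowercases, and (surprisingly) collapses whitespace
    return " ".join(s.strip().lower().split())

def new_normalize(s):
    # Steve's rewrite must reproduce the observed behavior exactly
    return " ".join(s.split()).lower()

def test_rewrite_matches_old_behavior():
    cases = ["  Hello   World ", "MIXED case", "", "one"]
    for case in cases:
        assert new_normalize(case) == old_normalize(case)
```

If the rewrite breaks a consumer that depended on the whitespace-collapsing quirk, the pinned cases catch it; the quirk then becomes a conscious decision instead of a silent regression.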

Honestly though, I don't think the article really means to cover such a scenario. There are many times that engineers decide to replace rather than understand a system which is complex for a reason; the article is written specifically with that in mind.

I regularly see this as I use/develop against/contribute to Drupal - a marvelously (usually) complex-for-a-reason piece of engineering that happens to be written in PHP - a language with a community that still celebrates simplicity-at-whatever-cost. "Never mind that the theme/cache/localization/form/whathaveyou system accounts for what I need; it's more than what I need, and I don't trust code more complex than I (or any single person with a single use case, for that matter) could have written."

He doesn't make that clear, but it's the only sense I can make of the article given his degree of experience.

Easy to build, hard to scale (although far easier than when Digg was first built). Also getting the voting algorithm right, assuming they're going to continue with that model, takes a lot of refinement.
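For a sense of where that refinement goes, here's a minimal time-decay ranking sketch. The shape (log-scaled votes plus a recency term) is similar in spirit to reddit's published "hot" formula, but the constants and epoch here are arbitrary illustrations, not Digg's or reddit's actual numbers:

```python
import math
from datetime import datetime, timezone

# Arbitrary reference epoch for the recency term (illustrative only)
EPOCH = datetime(2012, 1, 1, tzinfo=timezone.utc)

def hot_score(upvotes, downvotes, posted_at, decay=45000):
    """Toy front-page score: log-scaled net votes plus a recency bonus.

    `decay` is an invented constant controlling how quickly newer
    stories outrank older ones; tuning it (and everything else here)
    is exactly the refinement that takes so long to get right.
    """
    net = upvotes - downvotes
    order = math.log10(max(abs(net), 1))   # 10 votes ~ as big a jump as 100
    sign = 1 if net > 0 else -1 if net < 0 else 0
    age_seconds = (posted_at - EPOCH).total_seconds()
    return sign * order + age_seconds / decay
```

Even this toy version immediately raises the real design questions: how fast should recency beat votes, should downvotes count fully, what stops early-vote pile-ons - which is why "getting the voting algorithm right" is iterative work, not a launch feature.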

But yeah, overall I agree. Why would they rebuild the same product? That's already failed.

The project I'm working on now is at a point where it's either spend a HUGE amount of time refactoring and fixing everything or just start over. We can't just continue adding on features with the state the code is in.

The question is, which will be faster?

Right now, it's written to run on an old OS, with old versions of old languages. We absolutely HAVE to update this stuff as it's just stupid to continue programming in the dark ages (no code completion, no jump to definition, printf debugging).

But is it better to refactor the code and rewrite it slowly, or just scrap it, lose all of the old terrible design decisions, and move on?

You ask the question "Why do you think you'd make the design decisions correctly this time around?" Well, we didn't write the first version. It was given to a vendor that totally botched the job and it's our responsibility to fix it.

"We absolutely HAVE to update this stuff as it's just stupid to continue programming in the dark ages"

There might also be security issues. I've found that's sometimes a good rationale - if I've got code that will only work in PHP 4.1, and it's gotta work for the next 5 years, probably rewriting ('wholesale refactoring') is a much more prudent choice, even though it's "a complete rewrite". There will be no security patches for the underlying language - building on a platform with known and 'never-will-be-fixed' security holes isn't a good long-term strategy.

My experience says that for real legacy code (deployed and actually used) it is almost always better to slice it up, then rewrite and deploy sections of it incrementally. Big-bang transitions with a "point of no return" are just so expensive and risky.

Yes, you lose "conceptual integrity" and your UX may be inconsistent in the transitional time, but those are small prices to pay.
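A minimal sketch of that incremental approach (everything named here is invented): a routing shim in front of both systems, with a per-slice flag so each section cuts over independently and any one of them can be rolled back - no single point of no return.

```python
# Hypothetical strangler-style routing shim: each slice of the legacy app
# is migrated and flipped independently via a rollout flag.
CUTOVER = {
    "search": True,    # rewritten and deployed
    "billing": False,  # still served by the legacy code
}

def legacy_handle(slice_name, request):
    # Stand-in for the old system's handler
    return f"legacy:{slice_name}:{request}"

def new_handle(slice_name, request):
    # Stand-in for the rewritten service
    return f"new:{slice_name}:{request}"

def route(slice_name, request):
    """Dispatch to the rewrite only for slices that have been cut over."""
    handler = new_handle if CUTOVER.get(slice_name) else legacy_handle
    return handler(slice_name, request)
```

Rolling a slice back is just flipping its flag, which is what makes each deploy cheap and low-risk compared to a big-bang switch.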

IMO, an even worse one is the MS OS/2 2.0 fiasco, where MS abandoned it and the MS/IBM JDA after an SDK with a beta of it had already been sent to developers. To be honest, the aforementioned JDA was not particularly good, but IBM ended up beating Chicago (Win95) by three years with OS/2 2.0, and during that gap MS used unethical tactics that were a lot worse than the JDA to attack OS/2.

The second thought was that ground-up rewrites are almost never a good idea to begin with. You end up throwing out whatever's good about the old property along with all the bad, and risk turning off the members of whatever user base you have left (who presumably stuck around because they saw something they liked in the old version).

I suppose the population of Digg users is probably small enough at this point that the new proprietors aren't worried too much about whether they stay or go, though...

Not really, though: everyone knew that v4 was a terrible idea, and they simply went forward without caring much for the community. I mean, what kind of marketing research could you do to prove that using promoted stories is a good way to build a community?

"Promoted Stories" myth needs to end. It wasn't the case, there was a severe bug whereby a Regular Expression only matched RSS content. The Regular Expression acted as a gateway into the Popular Algorithm. I worked at Digg and I fixed that bug.

It wasn't noticed before launch because we echoed the v3 popular stories into the beta version of v4.

Digg was never paid for stories hitting the frontpage. And for all the flak it gets for this myth, it should have been.
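As a toy illustration of that failure mode (not Digg's actual code - pattern and function names invented): an over-narrow regex guarding entry to the ranking step silently drops everything that doesn't look like feed content, so direct user submissions never reach the popularity algorithm at all.

```python
import re

# Hypothetical gateway check: meant to validate story sources, but the
# pattern only matches RSS-style feed URLs, so ordinary story URLs are
# silently filtered out before ranking ever sees them.
FEED_ONLY = re.compile(r"^https?://.+/(rss|feed)(/.*)?$")

def eligible_for_popular(story_url):
    return bool(FEED_ONLY.match(story_url))

# The fix widens the gate to any http(s) URL rather than feed paths only.
ANY_HTTP = re.compile(r"^https?://\S+$")

def eligible_for_popular_fixed(story_url):
    return bool(ANY_HTTP.match(story_url))
```

The nasty part of this class of bug is that nothing errors: the gateway just quietly answers "no" for an entire category of input, which is easy to miss if your test data (like echoed v3 stories) happens to be the kind the regex accepts.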

You can find a research firm to tell you anything you want. They may have to torture the numbers more in some scenarios than in others, but there's no shortage of people willing to take your money in exchange for telling you what you already want to hear.

Am I the only one without a smartphone? I'm still using some old, beaten-up Samsung crap which allows me to talk and SMS a little. I spend my whole life in front of a computer, so I am still always connected.

Clearly, they are taking the "mobile" approach; it just doesn't suit me very well.

Me as well - no mobile phone at all, actually. But I just made the decision to buy a Wi-Fi-only smartphone (meaning an iPod Touch or Galaxy Player) since I am connected 12 of my 16 waking hours. The rest of the time (driving, reading, relaxing) it's probably best I stay disconnected.

Please lose the Facebook/Twitter popup. Get rid of a lot of that ad space and social media buttons. "News" should not be a major focus. Programming, gaming, science, and the arts would be a good start.

Stop quoting Silicon Valley insiders and relegate them to a corner of the site if you must place any emphasis on "tech". They did a lot to ruin Digg with the endless self-promotion. Acknowledge Reddit like they never did and take inspiration from subreddits, but make them easier to discover. Reddit has yet to do this and it is a noticeable deficiency on their part.

They'll also get a modest press bump when they roll out the "new Digg," like AVOS did when they rolled out the "new Delicious." There'll be press interest in the potential revival of a once-huge property that there wouldn't necessarily be if they launched under a completely new brand.

I dunno if that's worth $500K ($500K can buy a fair bit of PR), but it's something.

Yep, (they paid a lot more than that) and it's probably worth it imo. Half the battle with web 2.0 sites is getting enough users to get the ball rolling. Digg already has that, and all they have to do is "reinvent" themselves with a great design and a focus on some set of users that aren't being well served by other community websites. Personally I'd try to target groups that reddit alienates.

You could try creating a conservative-bent social news site (as suggested here in 2008: http://mattmaroon.com/2008/10/11/conservative-social-news/). As the article mentions, it can be harder to do so since the Internet as a whole leans left, but as it continues to become more mainstream someone could maybe make it work.

Again, betaworks paid $500K for the domain, the code (which they're throwing away), and the data (and I'm not optimistic that they'll keep anything but the user accounts). What's left of the team went to WaPo, and LinkedIn bought some bullshit feature patents.

The brand is worth a fair amount; betaworks launching a new news site won't get much attention, but "DIGG REBUILT AND RELAUNCHED" could get some international press and at least convince old users to have a look at the new site.

It's not that much of a loss for Betaworks: if they do it right, they'll start building a huge company; if not, they've lost $500K and may make a fair amount of that back from the traffic.

So they have less than two weeks left; it sounds like they waited till now to post this to ensure they can hit that deadline. How much of the outcome can they really change now, based on the survey results?

This happened twice. Once at Yahoo, by engineers who really didn't want to understand product decisions, and then again at the new place, by people who didn't have access to the original product decisions.

Rebuilding is a lot more stimulating. I had an engineer quit on me when I convinced the team to do incremental updates instead of rolling forward with an over-engineered rebuild that would just trade one set of problems for another.

We have no idea what the code looks like. Maybe some knowledgeable engineers took a look at it and figured it would be better in the long run to just design and build something new themselves, instead of inheriting someone else's hairy, warty code. (I am just offering ideas here, I don't actually know how hairy the code is or how many warts it has.)