If you ask programmers why they should write clean code, the number one answer you get is maintainability. While that's on my list, my main reason is more immediate and less altruistic: I can't tell if my new code is correct if it's too dirty. I find that I focus so much on individual functions and lines of code that when I finish my first draft and step back to look at the big picture again, it sometimes doesn't fit together very neatly. Spending an hour or two refactoring for cleanliness frequently uncovers copy/paste errors or boundary conditions that were very difficult to detect in the rough draft.

However, some people feel it's occasionally okay to intentionally check in dirty code in the interest of shipping software, with a plan to "clean it up later." Is there some practicable technique that gives them confidence in the correctness of their code when its readability is less than ideal? Is it a skill worth trying to develop? Or is a lack of confidence in the code something some people just find easier to accept?

My theory is that all coders fall somewhere between "memorizers" and "understanders", and few can do both well. The more crap you can remember at once, the messier you can afford to make your code. Whether the code is clean or not, make it work, then test it!
– Job, Dec 13 '11 at 16:05

However, some people feel it's occasionally okay to intentionally check in dirty code in the interest of shipping software, with a plan to "clean it up later." heh... hell will freeze over before "later" ever comes...
– Carlos Campderrós, Dec 13 '11 at 17:10

Not all programmers think alike -- I've been given code to maintain that made no sense to me for months, until one day, it was like a light switch was flipped, as I realized what the overall organizing structure was, and it all made sense why they had done it how they did. Would I have done it that way? No, but it worked.
– Joe, Dec 13 '11 at 19:07

@joe -- +1 -- some programmers are too quick to dismiss code that does not fit their personal idea of "good code". You should always try to understand the thinking behind a body of code and its style; often you will learn something useful.
– James Anderson, Dec 14 '11 at 1:51

How do quick & dirty programmers know they got it right? Because it works :)
– Rachel, Dec 14 '11 at 15:21

20 Answers
The code has a short lifetime. For example, you're transforming a bunch of data into a standard format with an ad-hoc program.

The negative impact of failure is low:

- The data you're transforming is non-critical, and errors in it can be easily corrected.
- The end-user is a sympathetic programmer, who will reason about error messages and work around them by, say, massaging the input.

Sometimes it's not important that code is robust and handles every conceivable input. Sometimes it just needs to handle the known data you have on hand.

In that situation, if unit tests help you get the code written faster (this is the case for me), then use them. Otherwise, code quick and dirty, get the job done. Bugs that don't trigger don't matter. Bugs you fix or work around on-the-fly don't matter.
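As a sketch of that kind of throwaway job in Python (the row layout and field names here are invented), a quick-and-dirty transform plus one sanity check on known data can be all the testing the situation deserves:

```python
def to_standard_format(rows):
    """Quick and dirty: assumes every row is (name, ISO date, amount).
    Good enough for the known data on hand; not hardened for reuse."""
    return [
        {"name": name.strip(), "date": date, "amount": float(amount)}
        for name, date, amount in rows
    ]

# One known-good example in place of a test suite: if this holds,
# the script is good enough for today's batch.
sample = [("Alice ", "2011-12-13", "19.99"), ("Bob", "2011-12-14", "5")]
result = to_standard_format(sample)
assert result[0] == {"name": "Alice", "date": "2011-12-13", "amount": 19.99}
```

The single assertion is the "work around bugs on-the-fly" bargain made explicit: it proves nothing about inputs you haven't seen, and that's the point.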

What's absolutely vital is that you don't misdiagnose these situations. If you code quick-and-dirty because the code is only going to be used once, and someone then decides to reuse it in a project that deserves better, that code deserved more care from the start.

"The code has a short lifetime. For example, you're transforming a bunch of data into a standard format with an ad-hoc program." What if the transformation isn't done correctly, but discrepancies in the data aren't noticed until much later on?
– Joey Adams, Dec 15 '11 at 3:42

@Trav So, just to confirm, if the actual consequence of failure is massive, but my perceived consequence of failure is zero, there is no risk whatsoever?
– Christian Stewart, Apr 22 '14 at 19:27

@ChristianStewart From a purely mathematical standpoint, your assertion would be correct. In practice, however, a perception of zero consequence does not negate the weight of probability × actual consequence. Perception is placed into the formula to account for the organizational fears that often influence mitigation decisions. The lack of such fear does not lessen the actual probability or consequences. Thus, one should assume that perception is always at least 1, since it can magnify, but never negate, actual risk.
– Trav, May 7 '14 at 18:41

They don't. I'm currently working on a code base created by "quick and dirty" programmers who would "clean it up later." They're long gone, and the code lives on, creaking its way toward oblivion. Cowboy coders, in general, simply don't understand all of the potential failure modes that their software may have, and don't understand the risks to which they are exposing the company (and clients).

Whenever I hear the "clean it up later" or "we will do that when things slow down a little" I'm always tempted to start singing "Tomorrow, Tomorrow, I'll love ya tomorrow. It's always a dayyy awaayyyyy." That could be just me, though.
– JohnFx, Dec 13 '11 at 17:59

Many of us have been in that rather unfortunate position. It's pretty dispiriting being bequeathed other people's technical debt.
– Mark Booth, Dec 13 '11 at 19:39

The actual problem is classifying other programmers as cowboy, quick & dirty, or some other label. Every programmer has some failure modes, reading someone else's code is very difficult, and finding your own failures is very difficult too. All of this together means that people too easily label other programmers as bad ones while thinking their own code is perfect.
– tp1, Dec 14 '11 at 1:08

@JimThio, do you seriously think that any of the programmers referred to above have ever intentionally written bad code? Have you ever read code written by yourself a few years back? Did you find it good? Chances are, you did your best back then, and you still see an awful lot of things to be improved in that code now.
– Péter Török, Dec 14 '11 at 11:34

Okay, at the risk of being complete downvote-bait, I'm going to play devil's advocate for the opposing view.

I propose that we developers have a tendency to get overly concerned about things like proper practice and code cleanliness. I suggest that, while those things are important, none of it matters if you never ship.

Anyone who's been in this business a while would probably agree that it would be possible to fiddle with a piece of software more or less indefinitely. Duke Nukem Forever, my friends. There comes a time when that nice-to-have feature or that oh-so-urgent refactoring work should just be set aside and the thing should be called DONE.

I've fought my colleagues about this many times. There's ALWAYS one more tweak, something else that "should" be done for it to be "right". You can ALWAYS find that. At some point, in the real world, good enough just has to be good enough. No real-world, actual, shipping software is perfect. None. At best, it's good enough.

I think this is a reasonable position; one definitely has to be pragmatic. As someone who's done a LOT of code maintenance, I'm convinced that only crappy code lasts for decades; the clean stuff is abandoned when the next shiny thing comes along.
– Steve Jackson, Dec 13 '11 at 18:04

Or maybe if it's used, hard, for decades, it will likely end up looking like a mess. If it isn't used (at all, or for long), it won't have a chance to accumulate any cruft.
– Useless, Dec 13 '11 at 18:32

"Anyone who's been in this business a while would probably agree that it would be possible to fiddle with a piece of software more or less indefinitely." It would be possible, but why do it? Once you have set your quality standard, you design it, implement it, test it, fix bugs, test it again, and then do not touch it any more. It takes longer than just hacking it but once you have reached your goal (required functionality is implemented and tested) it is quite clear that you should not fiddle with the code any more. Just my 2 cents.
– Giorgio, Dec 13 '11 at 19:54

+1 -- in the real world there will always be a tradeoff between code quality and meeting deadlines. I would rather have a programmer who can produce reasonable code quickly than a perfectionist who spends months agonizing over whether he should call a method "serialize" or "writeToFile".
– James Anderson, Dec 14 '11 at 1:56

@Giorgio: I disagree with your "superstition" that quality work takes longer than just hacking it. That might be true if you equate programming with typing. Considering the whole software lifecycle things go much smoother and therefore quicker if you care about quality.
– ThomasX, Dec 14 '11 at 7:49

Such programmers almost never know they got it right, only believe so. And the difference may not be easy to perceive.

I remember how I used to program before I learned about unit testing. And I remember that feeling of confidence and trust on a wholly different level after I ran my first decent suite of unit tests. I hadn't known such a level of confidence in my code existed before.

For someone who lacks this experience, it is impossible to explain the difference. So they may even go on developing in code-and-pray mode throughout their life, benevolently (and ignorantly) believing that they are doing their best considering the circumstances.

That said, there can indeed be great programmers and exceptional cases, when one really manages to hold the whole problem space in his/her mind, in a complete state of flow. I have experienced rare moments like this, when I perfectly knew what to write, the code just flew out of me effortlessly, I could foresee all special cases and boundary conditions, and the resulting code just worked. I have no doubt there are programming geniuses out there who can stay in such state of flow for extended periods or even most of their time, and what they produce is beautiful code, seemingly without effort. I guess such persons might feel no need to write puny unit tests to verify what they already know. And if you really are such a genius, it may be OK (although even then, you won't be around that project forever, and you should think about your successors...). But if not...

And let's face it, chances are you aren't. I, for myself, know I am not. I have had some rare moments of flow, and countless hours of grief and sorrow, usually caused by my own mistakes. It's better to be honest and realistic. In fact, I believe the greatest programmers are fully aware of their own fallibility and past mistakes, so they have consciously developed the habit of double-checking their assumptions and writing those little unit tests to keep themselves on the safe side. ("I am not a great programmer - just a good programmer with great habits." - Kent Beck.)
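Those "little unit tests" need not be elaborate. A few plain assertions that pin down the boundary conditions you think you understand go a long way (the clamp function here is just an invented stand-in):

```python
def clamp(value, lo, hi):
    """Keep value within the closed range [lo, hi]."""
    return max(lo, min(hi, value))

# Each assumption gets its own check, so a failure points straight
# at the assumption that was wrong.
assert clamp(-5, 0, 10) == 0    # below the range
assert clamp(15, 0, 10) == 10   # above the range
assert clamp(7, 0, 10) == 7     # inside the range
assert clamp(10, 0, 10) == 10   # exactly on the boundary
```

The boundary cases are precisely the ones that "moments of flow" tend to skip, which is why writing them down beats trusting memory.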

"benevolently (and ignorantly) believing that they are doing their best considering the circumstances." Perfect summary of the problem. #1 rushed because of constraints and did the best he could. #2 comes along and has inherited chaos plus new deadlines and does his best too. All the way down to the 20th poor soul who couldn't do his best if he had years to undo the damage. That's why I practice the Boy Scout rule, "leave it cleaner than you found it." Give that next sap a fighting chance - it might be you.
– Steve Jackson, Dec 13 '11 at 17:41

Funny, I feel the opposite since I've started unit testing my code (at work). It's like getting lazy; there's no reason to really understand your code, since other code will catch mistakes for you.
– Izkata, Dec 13 '11 at 18:08

You write unit tests in part to prove that your own code works. More importantly, you write unit tests so that other developers can alter your code with confidence.
– Stephen Gross, Dec 13 '11 at 19:11

@Izkata - if you don't understand what you're doing, the unit tests are probably broken, and validating that the code has the same mistakes that the tests do. Also, even with 100% decision coverage and accurate unit tests, it's possible (though unusual) to have a bug that testing doesn't reveal.
– Steve314, Dec 14 '11 at 5:22

Good quote, but it wasn't Gandalf. It was Pippin, arguing why he, Frodo and Sam shouldn't cut across country to the Buckleberry Ferry, right in the initial journey from Hobbiton.
– Daniel Roseman, Dec 13 '11 at 16:48

Correction: "Unit tests. It's the only way to have a false sense of security in code (dirty or not)". Unit tests are good to have, but they don't guarantee anything.
– Coder, Dec 13 '11 at 19:12

When I want to uncover hidden bugs, I show the app to my boss. I call it the boss-test; it comes after unit testing. He has an aura of magnetism that attracts all kinds of weird bugs, as well as cosmic rays straight to the CPU registers.
– Mister Smith, Dec 13 '11 at 22:46

While we're quoting, you might also like "Testing shows the presence, not the absence of bugs" - Edsger Dijkstra
– Timothy Jones, Dec 14 '11 at 6:28

It’s good to learn to accept that no software system of reasonable complexity will be perfect no matter how much unit testing and code tweaking is done. Some degree of chaos and vulnerability to the unexpected will always lurk within the code. This doesn’t mean that one shouldn’t try to produce good code or conduct unit tests. These are, of course, important. There’s a balance that has to be sought and this will vary from project to project.

The skill to be developed is an understanding of what level of 'perfection' is needed for a particular project. For example, if you're writing an electronic medical records application with a 12-month project timeline, you'll want to devote a lot more time to testing and making sure your code is maintainable than you would for a one-off conference registration web app that has to be deployed by Friday. Problems arise when somebody doing the EMR app gets sloppy, or the registration app doesn't get deployed in time because the programmer is too busy tweaking code.

Quick and dirty is perfectly fine within a subsystem. If you have a well-defined interface between your crap and the rest of the system, and a good set of unit tests that verify that your ugly quick and dirty code does the right thing, it may be perfectly fine.

For example, maybe you have some hideous hack of regular expressions and byte offsets to parse some files coming from a third party. And assume you have a test saying that the result you get out of parsing example files is what you expect. You could clean this up so that you could ... I don't know, react more quickly when a third party changes a file format? That just doesn't happen often enough. More likely they'll change to a completely new API and you'll throw away the old parser and plug in a new one that conforms to the same API, and voila, you're done.
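A sketch of that arrangement in Python (the vendor line format, field offsets, and names are all invented): the parsing is as hacky as described, but a clean interface plus a test against a known example line keeps the mess contained.

```python
import re

def parse_vendor_record(line):
    """Hideous hack: fixed byte offsets plus a regex for the rest.
    Callers never see the mess -- only the returned dict."""
    ident = line[0:6].strip()                  # the ID lives in the first 6 bytes
    m = re.search(r"AMT=(\d+\.\d{2})", line)   # the amount appears somewhere later
    return {"id": ident, "amount": float(m.group(1)) if m else 0.0}

# The test that makes the hack acceptable: a line taken from a real
# example file must parse to exactly the expected result.
assert parse_vendor_record("AB1234 junk AMT=10.50 trailer") == {
    "id": "AB1234",
    "amount": 10.5,
}
```

If the third party ever switches formats, the parser body gets thrown away and replaced behind the same interface, exactly as the answer predicts.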

Where quick and dirty becomes a problem is when your architecture is quick and dirty. Your core domain objects and your interfaces need to be well thought out, but the edges of your system can usually be messy without your ever having to pay the piper.

I've known a person who regarded unit tests as a waste of time. After much argument, he finally wrote one. It consisted of one long method sprinkled with && and || that returned a boolean to assertTrue. The statement spanned 20 lines. He also wrote a class where every method had one line, and a main method that had over 1000 lines with no whitespace. It was a wall of text. When I reviewed his code and inserted some new lines, he asked 'why'. I said 'Because of readability'. He sighed and deleted them, then put a comment on top: "Don't touch it, it works!"
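For contrast, here is that style of test next to a readable one, sketched in Python (the function under test is invented):

```python
def make_user(name, age):
    """Toy function under test."""
    return {"name": name, "age": age, "active": True}

u = make_user("Ann", 30)

# The style described above: one giant boolean fed to a single assert.
# When it fails, the report says only that the whole thing was False --
# you learn nothing about which condition broke.
assert u["name"] == "Ann" and u["age"] == 30 and u["active"]

# Separate assertions: the first failing line names the broken fact.
assert u["name"] == "Ann"
assert u["age"] == 30
assert u["active"]
```

Both versions pass or fail on the same facts; the difference is entirely in what a failure tells you.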

The last time I talked to him, he was coding a website for a company and trying to find a bug. He had spent the last 3 days on it, 8 hours a day. When I talked to him a bit later, it turned out his teammate had changed the value of a literal and hadn't updated it elsewhere -- it wasn't a constant. So he changed the other literals too, and his bug was fixed. He complained about his teammate's spaghetti code and told me, "Haha, don't we all know what it's like to stay up all night with the debugger, not getting any sleep over one nasty bug?" He thinks this is something really good programmers do, and he actually feels good about it.

Also, he thinks reading programming books and blogs is useless. He says, 'just start programming'. He's done that for 12 years and he thinks he's an excellent programmer. /facepalm

Here's some more.

Another time we were writing a DatabaseManager class for our web app. He put all database calls in it. It was a God class with over 50 methods for every imaginable thing. I suggested we break it up into smaller classes, because not every controller needs to know about every database method. He disagreed, because it was 'easy' to just have one class for the whole database and 'fast' to add a new method whenever we needed one. In the end, DatabaseManager had over 100 public methods, from authenticating the user to sorting archaeological site locations.
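The split being argued for can be sketched like this (all class and method names are invented): each controller depends only on the handful of queries it actually needs, instead of the whole 100-method surface.

```python
class UserRepository:
    """Only the authentication-related queries."""
    def __init__(self, conn):
        self._conn = conn

    def find_by_login(self, login):
        return self._conn.query("SELECT * FROM users WHERE login = ?", (login,))

class SiteRepository:
    """Only the archaeological-site queries."""
    def __init__(self, conn):
        self._conn = conn

    def sorted_by_location(self):
        return self._conn.query("SELECT * FROM sites ORDER BY location", ())

class FakeConn:
    """Stand-in connection so the sketch runs without a real database."""
    def query(self, sql, params):
        return (sql, params)

# A login controller now sees UserRepository alone; it cannot even
# reach the site-sorting methods, let alone grow to depend on them.
users = UserRepository(FakeConn())
assert users.find_by_login("ann") == (
    "SELECT * FROM users WHERE login = ?", ("ann",))
```

Adding a new query is still "fast" under this layout; it just lands in the class whose callers need it.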

My lesson in avoiding quick and dirty was when I had six months to deliver what was estimated (under-estimated) to be a year's worth of work. I decided to research methodologies before starting work. In the end I invested three months of research and was able to deliver in the remaining three months.

We got big gains by identifying common functionality and building the required libraries to handle those requirements. I still see coders writing their own code when there are available library routines. These coders often rewrite, or at best cut and paste, the same code when they need to solve the same problem later. Bug fixes invariably only catch some of the code copies.

One developer gave a telling reply when I asked him to use library code: "Isn't that cheating? I had to write all my own code in school."

In some cases, I guess, there could be a large suite of regression tests that finds "all" bugs and verifies the behaviour, thereby allowing for a quick and dirty coding technique. But mostly it's just a matter of bad project planning and a manager who thinks it's more important to get it done than to get it done right.

And forget about "clean it up later"; that never happens. In the rare cases where it does, the programmer will have forgotten most of the code, making the job a lot more expensive than if he had done it right the first time.

I don't think they can honestly say they got it right if it's not easily maintainable. If they admit they have to "clean it up later," then there is likely something they haven't thought through enough. Only thorough testing will truly uncover the issues in dirty code.

I personally wouldn't aim to develop the skill of "writing dirty code" and being confident about its correctness. I would rather write proper code the first time around.

Code doesn't exist in a vacuum. I have suffered untold grief firefighting the consequences of quick and dirty and cowboy coding. But sometimes finishing the product is the priority, not figuring out how to write the best code. Ultimately, if the product ships and works well enough, the users and customers won't know or care how "bad" the code inside is, and I will admit there have been times when I didn't care at all about "getting it right" as long as I got it out the door.

Yes, this in an organizational issue and "should never happen." But if you happen to be writing code in an organization that is poorly managed and heavily deadline-driven, at the individual programmer level one's options are limited.

Sometimes you won't have any money if you don't ship now... but shipping now allows you to pay "ten-fold" to clean it up, and then some, because you beat your competitors to the market and got brand recognition first.
– CaffGeek, Dec 13 '11 at 15:53

I beg to differ. If money is tight already, you spend the income on the next product and you'll keep doing so until your company either dies or gets big enough. In the latter case and most likely, the original developers won't be there anymore to fix the old code.
– Raku, Dec 13 '11 at 15:57

Beg to differ all you like, but history is full of examples where being there at the right time, available to whoever wanted it, was more important than being the best possible product. There's always an opportunity cost, always always.
– Warren P, Dec 20 '11 at 20:06

Warren, that's basically what I was saying. In my eyes the opportunity cost of getting the code back to maintainable grows exponentially the longer you delay it. If your company is in a position where it can survive the unmaintainability disaster because sales went well and the code wasn't too dirty, good, but what if not?
– Raku, Dec 21 '11 at 9:27

In my opinion, learning to judge Q&D code for correctness is not a skill worth developing because it's just bad practice. Here's why:

I don't think "quick and dirty" and "best practice" go together at all. Many coders (myself included) have cranked out quick and dirty code as a result of a skew in the triple constraints. When I've had to do it, it was usually the result of scope creep combined with an ever-approaching deadline. I knew the code I was checking in sucked, but it spat out proper outputs given a set of inputs. Very importantly to our stakeholders, we shipped on time.

A look at the original CHAOS Report makes it pretty clear that Q&D is not a good idea and will kill the budget later (either in maintenance or during expansion). Learning how to judge if Q&D code is right is a waste of time. As Peter Drucker said, "There is nothing so useless as doing efficiently that which should not be done at all."

"Dirty" means different things to different people. To me, it mostly means relying on things that you know you probably shouldn't rely on, but which you also know that you can expect to work in the near term. Examples: assuming that a button is 20 pixels high instead of calculating the height; hard coding an IP address instead of resolving a name; counting on an array to be sorted because you know that it is, even though the method that provides the array doesn't guarantee it.

Dirty code is fragile -- you can test it and know that it works now, but it's a pretty good bet that it'll break at some point in the future (or else force everyone to walk on eggshells for fear of breaking it).
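The hard-coded-address and sorted-array examples, fragile next to sturdier, sketched in Python (the hostname and values are placeholders):

```python
import socket

# Fragile: a hard-coded address that breaks silently when the server moves.
DB_SERVER = "192.0.2.10"

# Sturdier: resolve the name at use time (the hostname is a placeholder).
def db_server_address(hostname="db.example.internal"):
    return socket.gethostbyname(hostname)

# Fragile: silently counts on the caller handing over a sorted list,
# even though nothing guarantees it.
def median_fragile(values_assumed_sorted):
    return values_assumed_sorted[len(values_assumed_sorted) // 2]

# Sturdier: enforce the assumption locally instead of trusting it.
def median(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

assert median([3, 1, 2]) == 2   # correct however the input arrives
```

Both fragile versions test fine today; they are the eggshells the next maintainer has to walk on.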

At the risk of sounding a little controversial, I'd argue that nobody truly KNOWS that their code is 100% correct and 100% without error. Even when you have really good test coverage and are rigorously applying good BDD/TDD practices, you can still develop code that contains errors, and yes, even unintended side effects!

Just writing code and assuming it works implies overconfidence in the developer's own abilities, and when problems arise (which they inevitably will), the effort to debug and maintain the code will be costly, especially if another developer needs to maintain it later on. The real difference is made by applying good software engineering practices, which ensure that you can have real confidence that your code is likely to work most of the time, and that if you do encounter an error, it will be easier to debug and much less costly to change and maintain, regardless of who works on that code later.

The salient point is that well factored and well tested code will allow OTHERS to have confidence in your code, which is in most cases more important than your own confidence.

If dirty code is well tested, it can be trusted. The problem is that unit testing dirty code is usually very hard and cumbersome. This is why TDD is so good; it reveals and removes dirt and smells. Also, unit testing is often the first thing to suffer from time pressure. So even if the cleanest guy ever wrote the cleanest code he'd ever done, I would still not trust it one bit if he omitted the unit tests due to time pressure.

Good programmers (Quick & Dirty and otherwise) do not have the hubris to assume they have got it right. They assume that all large systems have bugs and flaws, but that at some point might be well enough tested and reviewed to have a sufficiently low risk or low enough cost of failure that the code can ship.

So why do these so-called Quick & Dirty programmers exist? My hypothesis is Darwinian selection. Programmers who ship workable code fast occasionally ship before the competition ships, the budget runs out, or the company goes bankrupt. Therefore their companies are still in business, employing new programmers to complain about the mess that has to be cleaned up. So-called clean code ships as well, but not differentially well enough to drive Quick & Dirty coders into extinction.

If their code has been tested thoroughly by a good QA team and it passes, then I would say they got it right.

Writing quick and dirty code is not something that should be done as a habit, but at the same time there are occasions when you can spend 20 minutes writing code that may be classed as dirty, or 4 hours refactoring a lot of code to do it right. In the business world, sometimes 20 minutes is all that is available to do the job, and when you face deadlines, quick and dirty may be the only option.

I have been on both ends of this myself: I have had to fix dirty code, and I have had to write my own to work around the limitations of a system I was developing in. I would say I had confidence in the code I wrote because, although it was dirty and sometimes a bit of a hack, I made sure it was thoroughly tested and had a lot of built-in error handling, so if something did go wrong it wouldn't destroy the rest of the system.

When we look down on these quick and dirty programmers, we need to remember one thing: a customer generally doesn't pay until they have the product. If it ships and they find the bugs from quick and dirty code during UAT, they are much less likely to pull out when they have an almost-working product in front of them. But if they have nothing, and you are telling them "you will have it soon, we are just fixing x" or "it was delayed because we had to get y working just perfectly," they are more likely to give up and go with a competitor.

Of course, no one should underestimate the danger of quick and dirty code!

One may think that a suboptimal piece of code could make no difference, because of its short lifetime, its small impact on the business, or the little time available to get it done. The truth is, you don't really know. Every time I hear somebody say "this is a small feature" or "let's make it as fast and as simple as we can" and then spend insufficient time thinking about the right design, only two things actually end up occurring:

1) The project gets bigger and team motivation gets lower, as people work on code full of "stitches". In that case the project will probably take the fast lane toward chaos.

2) The project becomes known as a suboptimal solution, and its use starts to be discouraged in favor of a new solution or a refactoring that is as expensive as a new solution.

Always try to write the best code you can. If you don't have enough time, explain why you need more. Don't put yourself at risk with poorly made work. Keep becoming a better professional. No one can punish you for this if you are reasonable, and if they do, it's not a place you should want to work.