There is often a tension between "efficient" and "effective" in software development.

"Efficient" often means code that is "correct" in the sense of adhering to standards and using widely accepted patterns and structures, regardless of project size, budget, etc. "Effective" is not about being "right", but about getting things done; this often results in code that falls outside the bounds of commonly accepted "correct" standards and usage.

Usually the people paying for the development effort have dictated ahead of time what it is that they value more. An organization that lives in a technical space will tend towards the efficient end, others will tend towards the effective.

Developers often refuse to compromise their favored approach for the other. In my own experience, people with formal education in software development tend towards the Efficient camp, while those who picked up software development more or less as a tool to get things done tend towards the Effective camp. These camps don't get along very well, and managing a team of developers who are not all in one camp is challenging.

In your own experience, which camp do you land in, and do you find yourself having to justify your approach to others? To management? To other developers?

You are asking people to classify themselves according to a false dichotomy. If you don't understand why "efficient" code pays for itself over and over on long projects, you're incompetent. If you can't whip out a quick and dirty solution to a quick and dirty (one time) problem, you're limited.
–
btilly Feb 21 '11 at 17:39

@btilly: I agree that the form of the question is black/white, but in the context of the examples I think it is fair. I have had personal experience with people on either side. I have been on teams where ALL code written is fully architected, reviewed, etc. before anything gets near production, regardless of the situation. The inverse has also been true where code begins to fly before the initial conversation ends. As you say, a balanced developer knows when to take each approach, but I maintain that there are adherents in both camps who can defend their positions and discuss the issue.
–
Todd Williamson Feb 21 '11 at 20:18

11 Answers

The two extremes are about equally bad: on one side, the architecture astronauts/academics who can't even look at a class without defining two factories and a strategy pattern. On the other, the self-proclaimed "duct tape programmers", often powered at least in part by ignorance, who subscribe to YAGNI ("You ain't gonna need it") to the extreme.

Good programmers land somewhere in between. They don't overdesign or overcomplicate things, but they do add some flexibility and eliminate redundancies/dependencies where appropriate.

Personally, I always estimate for doing it correctly; I would rather delay than release rushed code.

My standard backup for that claim is that I cannot vouch for the overall quality or performance if we kludge it in, and that it will cost the PM more headaches in the long run.

From a development perspective, if you can't grab my code and immediately know what it's doing, that's a problem. If the structure doesn't make sense to begin with, it's less likely it will make any sense by the time you are done.

If your client/company does code reviews during the project/support phase it helps build assurance you knew what you were doing to begin with.

Kludging together stuff is fine for Proof of Concepts but never for anything production worthy.

Agreed. The question you have to ask yourself is: does it matter? How long is the code going to be in production? What kind of load is going to be placed on it? If you're making a simple brochure site for a restaurant or business, then you could just hand-write the thing in PHP with one giant function file. It won't have to stand up to a heavy load and will probably be replaced in a year or two anyway. On the other hand, if you're making enterprise-level web applications, you absolutely need documentation, deployment scripts, test suites, version control, load testing, etc.
–
Anthony Shull Feb 21 '11 at 18:01


@Anthony: it's going to be in production 3X longer than you think.
–
rox0r Feb 21 '11 at 18:11


My record is 17 years in production. It was a little alarming when they called me to help plan a conversion of stuff I wrote 17 years ago.
–
S.Lott Feb 21 '11 at 18:14


@rox0r: Many Y2K problems were from systems written in the 1960s and 1970s and still running. At that time, there was good reason not to expect code to last that long (anything written in 1975 would have to last longer than the history of commercial software to make it to Y2K). I'm a lot less sympathetic with people who make that mistake nowadays.
–
David Thornley Feb 21 '11 at 18:40


@roxOr It's hardly ever up to the developers how long code stays in production... that's a business decision. But we as developers have an idea about how long code SHOULD remain in production, and we should optimize to that level. When an engineer builds a bridge, they have an idea how long it will remain usable. If the city or whoever decides that it can last another decade after that because they don't want to pay for renovation or replacement, that's up to them.
–
Anthony Shull Feb 21 '11 at 22:54

This definition of effective doesn't account for the tail-end cost of maintenance, bug fixing, testing, and integration.

Initial coding is NEVER the most expensive part of a project. It only seems that way because people are very bad at measuring the TCO (total cost of ownership). QA and Operations are treated as unavoidable cost centers rather than as a direct result of the development process (or lack of process).

The people developing the code are judged on how much it costs to develop it and not on cost over the lifetime of the code; of course they optimize for the metric by which they are being judged.

I think far too many people expect code to be perfect when the QA releases happen. If QA isn't catching bugs, either 1) the test cases were not as detailed as they needed to be, or 2) the requirements were dead on. I have never seen #2.
–
DarkStar33 Feb 21 '11 at 23:11

That's a silly straw man. There are categories of errors that should rarely be seen by QA. There are far too many imperfections that QA sees that could otherwise have been prevented through good process and coding.
–
rox0r Feb 21 '11 at 23:56

I once sat at the office until 22:00 because I was supposed to find the cause of a SUPER-URGENT bug. It turned out that this bug was caused by a quick-and-dirty fix for another bug, which in turn was caused by another quick-and-dirty fix for another bug, which was caused by yet another quick-and-dirty fix for yet another bug (true story).

I know all of this because I had to track down the last bugs in this magical chain (unfortunately I wasn't there to actually fix them, and either way my managers preferred quick-and-dirty fixes over correct ones, so it's quite possible they wouldn't have allowed me to write proper code anyway).

The funny thing is that all the programmers who knew anything about the original bug (the one whose quick-and-dirty fix made all the trouble to begin with) had left, and everybody was so afraid to remove this buggy code that we couldn't do anything BUT fix it in quick-and-dirty ways.

I've been burned by developers taking a quick & dirty approach. All too often, it comes back as a bug which takes twice as long to fix as it would have taken to just do it right the first time. The small bit of time saved up front is almost always taken back many times over later.

I've seen the term technical debt thrown around before (unfortunately, I don't know where it originates). I think it applies well here. By cutting costs up front, one is just mortgaging future development time.

Efficient coding is another thing. Yes, maybe your code performs well and correctly, but needs expensive refactoring in a few weeks. Or you spend hour after hour on different looks and feels, which nobody cares about. That's not efficient programming, but the product might work efficiently nonetheless.

In my experience, quick & dirty is an oxymoron where software is concerned. The latter rarely leads to the former. I just redesigned and reimplemented a module with 100,000 lines of code. It took me a week and the result is 4,000 lines. The original "designer" probably outpaced me in lines of code per hour the first couple days while I was doing more thinking than typing, but there's no way he finished it in a week, and probably not a month. Not to mention all the extra hours the poor design has created in maintenance during its existence.

In the more technical, mathy areas I work in, the big issue is always getting the messy algebraic equations right. A lot of mistakes may have only a small, unnoticeable impact on the unit testing, and so can go unnoticed. And the real nightmare is having hundreds of lines of dense algebra that are "almost" right, and having to figure out where the defect is. The last time I had this problem, I restarted from scratch and never found out why/where the original implementation was wrong.
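One defense against "almost right" algebra is to test the hand-derived code against an independent numerical oracle. Here is a minimal Python sketch of that idea (the function `f` and all names are invented for illustration, not taken from the discussion): compare an analytic gradient against central finite differences, so a sign or index slip in the algebra shows up as a large mismatch instead of a silently wrong answer.

```python
def f(x, y):
    # hypothetical example function: f(x, y) = x^2 y + 3 x y^2
    return x * x * y + 3.0 * x * y * y

def grad_f(x, y):
    # hand-derived algebra: df/dx = 2xy + 3y^2, df/dy = x^2 + 6xy
    return (2.0 * x * y + 3.0 * y * y, x * x + 6.0 * x * y)

def numeric_grad(fn, x, y, h=1e-6):
    # central differences: O(h^2) error, accurate enough to flag wrong algebra
    dx = (fn(x + h, y) - fn(x - h, y)) / (2.0 * h)
    dy = (fn(x, y + h) - fn(x, y - h)) / (2.0 * h)
    return dx, dy

# cross-check the hand-derived gradient at several points
for (x, y) in [(1.0, 2.0), (-0.5, 3.0), (2.0, -1.5)]:
    a = grad_f(x, y)
    n = numeric_grad(f, x, y)
    assert abs(a[0] - n[0]) < 1e-4 and abs(a[1] - n[1]) < 1e-4
```

The finite-difference check is slow and approximate, but it is derived independently of the algebra being tested, which is exactly what makes it useful as an oracle.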

Last week I had a discussion with a developer who was filling a large matrix with numbers from formulas. Several pages of code... He was doing this by hand-modifying a similar existing solution. How do you know the original is right? How do you know you aren't introducing subtle errors? If you solve the resulting system of equations and the result comes out obviously wrong, how can you locate the defect? Are you using some sort of code generator to minimize the number of places human error can propagate into bad code?

I've been known to write 300 lines of code that generate 100 lines of actual code. Not because it was easier, but because there were fewer places where a subtle screwup would yield a subtly wrong result.
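As a hedged illustration of that trade-off (the Vandermonde-style matrix and all names here are invented for the sketch): a short Python generator that emits the repetitive source once, so the formula lives in exactly one place instead of being hand-typed into dozens of near-identical entries.

```python
def generate_fill(n):
    """Emit Python source that fills an n x n matrix with xs[i] ** j.

    Writing this small generator replaces hand-typing n*n near-identical
    expressions, each of which is a chance for a subtle transcription typo.
    """
    lines = ["def fill(xs):",
             "    m = [[0.0] * {0} for _ in range({0})]".format(n)]
    for i in range(n):
        for j in range(n):
            lines.append("    m[%d][%d] = xs[%d] ** %d" % (i, j, i, j))
    lines.append("    return m")
    return "\n".join(lines)

src = generate_fill(3)
namespace = {}
exec(src, namespace)              # compile and load the generated function
m = namespace["fill"]([2.0, 3.0, 4.0])
print(m[1][2])                    # xs[1] ** 2 = 9.0
```

A real use would write `src` to a file for review and version control rather than `exec`-ing it; the point is only that a bug fix in the loop fixes every generated entry at once.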

And then there is quick and dirty in terms of solution method. The O(N^2) algorithm is braindead easy to write and verify, but the O(N log N) method is much trickier. Better to start with the braindead method and refactor in the efficient method if/when needed, or at least in a manner that lets you use the simple version to check the correctness of the tricky but efficient one.
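That cross-checking idea can be sketched like this (counting inversions is just a stand-in example; the function names are hypothetical): the trivially correct O(N^2) version serves as an oracle for the trickier O(N log N) one on random inputs.

```python
import random

def inversions_naive(a):
    """O(n^2): trivially correct, easy to review by eye."""
    return sum(1 for i in range(len(a))
                 for j in range(i + 1, len(a)) if a[i] > a[j])

def inversions_fast(a):
    """O(n log n) via merge sort: trickier, so verify it against the naive one."""
    def sort_count(xs):
        if len(xs) <= 1:
            return xs, 0
        mid = len(xs) // 2
        left, cl = sort_count(xs[:mid])
        right, cr = sort_count(xs[mid:])
        merged, count, i, j = [], cl + cr, 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
                count += len(left) - i   # right[j] jumps past all remaining left items
        merged.extend(left[i:]); merged.extend(right[j:])
        return merged, count
    return sort_count(list(a))[1]

# the braindead version as an oracle for the clever one
for _ in range(100):
    a = [random.randrange(50) for _ in range(random.randrange(30))]
    assert inversions_fast(a) == inversions_naive(a)
```

Even if the fast version eventually replaces the slow one in production, keeping the naive version around as a test oracle costs almost nothing.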

Mind elaborating on where writing a code generator is less brittle than writing the actual code?
–
user1249 Feb 22 '11 at 1:24

Thorbjorn: A code generator is useful when a relatively compact formulation defines the needed operation, but the volume of compilable source code is many times larger than the defining symbolism. Sometimes this is because the compact formulation may be inefficient, but can be used to spit out algebraic formulas (in the form of compilable code fragments), or sometimes it is just because a very simple formula can generate a lot of algebra. The gradient of a function of several variables is one such example.
–
Omega Centauri Feb 25 '11 at 3:53

During scheduled development time, I try to be as 'efficient' as possible. But if it's a production bug that has to be fixed YESTERDAY, then I'll implement the fastest dirtiest hack that doesn't break other functionality.

It also happens when requirements change (like they always do) late in the development process for a particular project, and the new features require major rework to be done properly, but can be done 'dirtily' without too much development cost, enabling the project to be shipped earlier.

I think the important thing to do is to go back and modify your design to take into account this new 'feature' and fix it properly once the crisis has been averted, or the product has been shipped. If you do that the dirty hacks don't build up and cause more problems down the road.

How do you determine if other functionality has been broken?
–
user1249 Feb 22 '11 at 1:24

@Thorbjorn, regression testing? Hopefully unit tests, but code coverage isn't always that great. In an ideal world 'efficient' coding would always be done, with tests covering 100% of functionality being done before any work. But we don't live in a perfect world...
–
DominicMcDonnell Feb 22 '11 at 3:20

and for an "Efficient" programmer regression testing happens where exactly?
–
user1249 Feb 22 '11 at 4:26

@Thorbjorn, in the places I've worked, regression testing was an external pass with the testing team, where they went through all functionality and made sure everything was working before going out to production. Of course it's not foolproof, as then there wouldn't be production bugs, but it's fairly good for verifying previous behaviour.
–
DominicMcDonnell Feb 22 '11 at 4:33

Edit: I just noticed I missed a word. Added but to the second sentence of the second paragraph. @Thorbjorn, is that why you seemed so antagonistic?
–
DominicMcDonnell Feb 22 '11 at 4:34

"Quick and dirty" typically means implementing a new feature by duplicating and then modifying the implementation of a similar feature. This is almost always quicker than writing a unit test for the new feature, refactoring the existing code for reuse, and then implementing the new feature. At least it is quicker today. Eventually, cut-and-paste programming results in an application that costs more to modify than the value of any new feature.

I saw this happen with a large phone company's online ordering system. It had a middle layer of maybe 500,000 lines of C++, cut and pasted ad nauseam. It could only handle accounts of up to 12 phone lines. Management asked for an estimate of the cost to modify the application to handle large business accounts. After five years of cut-and-paste programming, the 12-line limit was baked into the program in hundreds of places. After a few weeks of multi-hour meetings, the software team came back with an estimate of over $10 million. Management was not happy. The entire team was laid off and the application was turned over to an offshore team for minimal maintenance.
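A minimal sketch of how that baked-in limit could have been avoided (the constant and function names are hypothetical, not from the actual system): name the limit once, so every check reads from a single source of truth and raising the limit is a one-line change instead of a hundred-site hunt.

```python
# Single source of truth for the account-size limit (hypothetical name).
# Cut-and-paste programming instead scatters the literal 12 across the
# codebase, where no search can reliably find every copy.
MAX_LINES_PER_ACCOUNT = 12

def can_add_line(account_lines):
    # Every call site reads the named constant, never a pasted literal.
    return len(account_lines) < MAX_LINES_PER_ACCOUNT

def validate_order(order_lines):
    if len(order_lines) > MAX_LINES_PER_ACCOUNT:
        raise ValueError("too many lines: %d" % len(order_lines))
    return True
```

The sketch is trivial on purpose: the anecdote's $10 million estimate came from exactly this kind of literal being duplicated until no one could change it safely.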

Cut-and-paste programming is like running a bus company and never changing the engine oil. Every day, it's faster and cheaper not to change the oil. Eventually the engine stops running. So the company hires a skilled mechanic. The mechanic says the engine needs a complete overhaul. The company replies that there's no time for that; customers are on the bus. Then the company tells the mechanic to get behind the bus and help push.

You must be very new to development. There is no correlation between code that does not meet standards and effectiveness. If you can guarantee that no other developer will ever have to understand--let alone modify--your code, then you can skip the guidelines and write it in straight 0s and 1s for all I care. But if you are wrong and I get stuck with y