Bah! Swallowed my carriage returns. I just put two declarations into one line.
–
Kent Boogaart Oct 8 '08 at 18:34


This. For example, refactoring some legacy code for a client, I was able to cut the number of lines in their app's main form in half and that includes adding comment blocks to the refactored methods. I guess that also counts as bragging.
–
Rob Allen Oct 8 '08 at 18:35


This answer is obviously better than the sum of all the other answers on this page. Thanks for using boldface.
–
dlamblin Oct 9 '08 at 19:24


Number of lines removed seems only marginally more useful than lines added. It is still trivial to manipulate and therefore is not a very useful metric.
–
Eli Dec 4 '08 at 13:39


I once removed 2100 lines from a 2200 line module without any functionality change. (tidying up after some really sloppy copy+paste programmer... grrrrr)
–
Spudley Dec 23 '10 at 15:56

My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.

It's a terrible metric, but as other people have noted, it gives you a (very) rough idea of the overall complexity of a system. If you're comparing two projects, A and B, and A is 10,000 lines of code, and B is 20,000, that doesn't tell you much - project B could be excessively verbose, or A could be super-compressed.

On the other hand, if one project is 10,000 lines of code, and the other is 1,000,000 lines, the second project is significantly more complex, in general.

The problems with this metric come in when it's used to evaluate productivity or level of contribution to some project. If programmer "X" writes 2x the number of lines as programmer "Y", he might or might not be contributing more - maybe "Y" is working on a harder problem...

There is one particular case when I find it invaluable. When you are in an interview and they tell you that part of your job will be to maintain an existing C++/Perl/Java/etc. legacy project. Asking the interviewer how many KLOC (approx.) are involved in the legacy project will give you a better idea as to whether you want their job or not.

Like most metrics, lines of code mean very little without context. So the short answer is: never (except for the line printer, that's funny! Who prints out programs these days?)

An example:

Imagine that you're unit-testing and refactoring legacy code. It starts out with 50,000 lines of code (50 KLOC) and 1,000 demonstrable bugs (failed unit tests). The ratio is 1K/50KLOC = 1 bug per 50 lines of code. Clearly this is terrible code!

Now, several iterations later, you have reduced the known bugs by half (and the unknown bugs by more than that most likely) and the code base by a factor of five through exemplary refactoring. The ratio is now 500/10000 = 1 bug per 20 lines of code. Which is apparently even worse!

Depending on what impression you want to make, this can be presented as one or more of the following:

50% fewer bugs

one-fifth the code

80% less code

a 150% increase in the bugs-to-code ratio (lines per bug fell from 50 to 20, a 60% drop)

All of these are true (assuming I didn't screw up the math), and they all suck at summarizing the vast improvement that such a refactoring effort must have achieved.
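The arithmetic above can be checked with a short script; the bug and line counts are the hypothetical figures from the example, not real project data:

```python
# Before refactoring: 1,000 known bugs in 50,000 lines of code.
bugs_before, loc_before = 1_000, 50_000
# After: bugs halved, code base cut to one-fifth.
bugs_after, loc_after = bugs_before // 2, loc_before // 5

print(f"before: 1 bug per {loc_before // bugs_before} lines")  # 1 per 50
print(f"after:  1 bug per {loc_after // bugs_after} lines")    # 1 per 20
print(f"bug reduction:  {1 - bugs_after / bugs_before:.0%}")   # 50%
print(f"code reduction: {1 - loc_after / loc_before:.0%}")     # 80%
ratio_change = (bugs_after / loc_after) / (bugs_before / loc_before) - 1
print(f"bugs-per-line change: {ratio_change:+.0%}")            # +150%
```

The point survives the check: every single number is accurate, yet each one in isolation paints a wildly different picture of the same refactoring effort.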

I don't remember the exact numbers, but Microsoft had a webcast claiming that for every X lines of code there are, on average, y bugs. You can use that statement as a baseline for several things:

How well a code reviewer is doing their job.

Judging the skill levels of two employees by comparing their bug ratios across several projects.

Another thing we look at is: why are there so many lines? Often, when a new programmer is put in a jam, they will just copy and paste chunks of code instead of creating functions and encapsulating.

I think that "I wrote x lines of code in a day" is a terrible measure. It takes no account of the difficulty of the problem, the language you're writing in, and so on.

The statistic was published in the Software Engineering Institute's Process Maturity Profile of the Software Community: 1998 Year End Update. A survey of about 800 software development teams (or shops, I don't remember) led to a finding that there are, on average, 12 defects per 1000 lines of code.
–
Thomas Owens Oct 9 '09 at 15:45

There are a lot of different Software Metrics. Lines of code is the most used and is the easiest to understand.

I am surprised how often the lines of code metric correlates with the other metrics. Instead of buying a tool that can calculate cyclomatic complexity to discover code smells, I just look for the methods with many lines; they tend to have high complexity as well.

A good example of use of lines of code is the metric bugs per lines of code. It can give you a gut feel for how many bugs you should expect to find in your project. In my organization we are usually around 20 bugs per 1000 lines of code. This means that if we are ready to ship a product that has 100,000 lines of code, and our bug database shows that we have found 50 bugs, then we should probably do some more testing. If we have found about 20 bugs per 1000 lines of code, then we are probably approaching the quality level we usually ship at.
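That ship-readiness check is easy to sketch. The 20-bugs-per-KLOC rate, the 100,000-line product, and the 50 found bugs are the figures from the paragraph above; `expected_bugs` is a made-up helper name:

```python
def expected_bugs(loc: int, bugs_per_kloc: float = 20.0) -> float:
    """Bugs we'd expect to have found, given a historical defect rate."""
    return loc / 1000 * bugs_per_kloc

found = 50                         # bugs in the database so far
expected = expected_bugs(100_000)  # 2000.0 at 20 bugs per KLOC
if found < expected:
    print(f"only {found} of ~{expected:.0f} expected bugs found; keep testing")
```

The historical rate is of course organization-specific; the sketch only formalizes "compare found bugs against what your past defect density predicts."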

A bad example of use is to measure developer productivity. If you measure developer productivity by lines of code, then people tend to use more lines to deliver less.

It seems to me that there's a finite limit of how many lines of code I can refer to off the top of my head from any given project. The limit is probably very similar for the average programmer. Therefore, if you know your project has 2 million lines of code, and your programmers can be expected to be able to understand whether or not a bug is related to the 5K lines of code they know well, then you know you need to hire 400 programmers for your code base to be well covered from someone's memory.

This will also make you think twice about growing your code base too fast and might get you thinking about refactoring it to make it more understandable.

The Software Engineering Institute's Process Maturity Profile of the Software Community: 1998 Year End Update (which I could not find a link to, unfortunately) discusses a survey of around 800 software development teams (or perhaps it was shops). The average defect density was 12 defects per 1000 LOC.

If you had an application with 0 defects (which doesn't exist in reality, but let's suppose) and wrote 1000 LOC, then on average you can assume you just introduced 12 defects into the system. If QA finds 1 or 2 defects and that's it, they need to do more testing, as there are probably 10 or more still lurking.

Lines of code isn't very useful as a metric, and if management uses it, programmers will game it: padding their code to boost their scores, and never replacing poor algorithms with neat, short ones, because that produces a negative LOC count that counts against them. To be honest, just don't work for a company that uses LOC/day as a productivity metric: management there clearly doesn't have any clue about software development, and you'll be on the back foot from day one.

It is a very useful idea when it is associated with the number of defects. Defects give you a measure of code quality: the fewer defects, the better the software. It is nearly impossible to remove all defects, and on many occasions a single defect can be harmful, even fatal.

First of all, I would exclude generated code and add the code of the generator input and the generator itself.

I would then say (with some irony), that every line of code may contain a bug and needs to be maintained. To maintain more code you need more developers. In that sense more code generates more employment.

I would like to exclude unit tests from the statement above, as fewer unit tests generally do not improve maintainability :)

The number of lines added for a given task largely depends on who is writing the code, so it shouldn't be used as a measure of productivity. One individual can produce 1000 lines of redundant, convoluted crap, while another solves the same problem in 10 concise lines. When trying to use LOC added as a metric, the "who" factor should also be taken into account.

An actually useful metric would be "the number of defects found against number of lines added". That would give you an indication of the coding and test coverage capabilities of a given team or individual.

As others have also pointed out, LOC removed has better bragging rights than LOC added :)