In many companies there is a formal procedure for reviewing employees' work.

For example, a salesperson can say at the beginning of the year that she'll sell one million units. When she comes up for review a year later, she says she's sold two million units. Thus, her manager decides to promote her.

But what should a developer say? "I'll fix a million bugs" or "I'll write a hundred unit tests"? I can't imagine many things that can be measured here, especially if you don't have a roadmap for the year and you're working on maintenance.

My question is not about competence. What you provided is very useful when you hire a programmer, to find out his skills. But what about when he already works for you, and every half-year you have to measure his productivity along with everyone else's to decide who gets promoted and who doesn't?
– user1449 Mar 28 '11 at 8:54

Many answers seem to suggest that there are no real metrics for productivity. So how can proponents of a certain software methodology claim that their methodology boosts productivity? How can you boost something that you cannot measure?
– Giorgio Apr 11 '13 at 8:31

5 Answers

Does anybody know of non-stupid objectives for programmers, and how you can explain progress on them?

No.

Or is a performance-measurement procedure not applicable to developers at all?

What does a programmer produce? Really, what do they produce? Anything you think a programmer produces can be "gamed".

"Code"? Copy and paste will run up the numbers.

"Bug Fixes"? Easy to introduce and fix bugs just to run up the numbers.

"Meeting the schedule"? Easy to over-estimate and always be early.

"Meeting the budget"? Hard for a programmer to control. But if you insist on it, they simply stop working when the run out of money and leave you with a product that might not work well, but will have the exact cost.

"Few defects"? Easy. Write very little code. Do lots of analysis and design and planning.

There are two kinds of metrics here. "Do More" and "Do Less". More code. Fewer defects.

Any "more" metric is gamed by copy and paste techniques to simply make the numbers.

An "less" metric is gamed by simply doing less of everything.

If you think a programmer produces "intellectual property" or "value", you'll find that these are very, very hard to measure.

For example, value should be measured by the dollar value to the business. Since every dollar the business makes is touched by software, 100% of revenue is created by programmers. That doesn't work out well, because you can't easily separate software from the rest of the business processes.

Intellectual Property (the knowledge embedded in software) is even harder to quantify.

You count the number of lines of code a programmer writes, of course. ;)

Seriously though, there are two aspects to a developer's performance: quality and quantity. As far as quality is concerned, you can roughly determine it by the occurrence of bugs that show up in code written by a developer (though keep in mind that there will be bugs even from the best of developers), and the time it takes to fix those bugs (though again, you'd have to look at averages, since some bugs take longer to fix than others).
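As a minimal sketch of that averaging idea (the developers, bug records, and numbers here are entirely made up for illustration):

```python
from statistics import mean

# Hypothetical bug log: (developer, hours it took to fix).
bug_log = [
    ("dave", 3.5), ("dave", 12.0), ("erin", 1.0),
    ("erin", 2.5), ("dave", 0.5), ("erin", 40.0),
]

def avg_fix_time(log):
    """Average fix time per developer; averaging smooths out the
    occasional bug that takes far longer than the rest."""
    per_dev = {}
    for dev, hours in log:
        per_dev.setdefault(dev, []).append(hours)
    return {dev: mean(times) for dev, times in per_dev.items()}

print(avg_fix_time(bug_log))  # {'dave': 5.33..., 'erin': 14.5}
```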

As far as quantity is concerned, tracking a programmer's progress is notoriously difficult, if only for the simple reason that it's difficult to gauge progress in the creation of a program. Traditional thinking would suggest that if it takes one programmer a month to perform a task, two programmers would take half a month to perform that same task (see The Mythical Man-Month). That's been proven again and again to be a completely inaccurate judge of progress.

Though it's clear what isn't a proper judge of progress, it's not clear what is. At this point you enter the realm of hotly debated ideas about best practices for judging progress. Perhaps the best idea I've heard is to hold a meeting with a group of developers and ask each one to estimate how long a specific task would take to develop. Take the average of all estimates and double it (yes, double it), and you have a roughly safe estimate of when it should be done by an accomplished developer, bugs and all. However, there are always exceptions to the rule, so it's better to evaluate a programmer's performance from a statistical standpoint rather than by ability on specific tasks.
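In code, that heuristic is trivial; a minimal sketch (the estimates are made-up numbers):

```python
from statistics import mean

# Hypothetical estimates (in days) gathered from the team for one task.
estimates = [3, 5, 4, 8, 5]

# Average the estimates, then double the result as a safety margin
# for bugs, integration, and the things nobody thought of.
safe_estimate = 2 * mean(estimates)
print(safe_estimate)  # 10 days
```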

My advice would be to break the tasks of a project down into well-defined, bite-sized pieces and give each one its own estimated deadline. After a few such tasks, you can begin to get a feel for how developers perform with respect to their estimated deadlines.
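One way to turn that feel into a number is to track the ratio of actual to estimated time per developer; a sketch with invented data (a ratio near 1.0 means estimates are being met, and the trend matters more than any single task):

```python
# Hypothetical task history per developer: (estimated_days, actual_days).
history = {
    "dave": [(2, 3), (1, 1), (3, 5)],
    "erin": [(2, 2), (4, 3), (1, 2)],
}

for dev, tasks in history.items():
    ratio = sum(actual for _, actual in tasks) / sum(est for est, _ in tasks)
    print(f"{dev}: actual/estimate = {ratio:.2f}")
# dave: actual/estimate = 1.50
# erin: actual/estimate = 1.00
```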

If you really want to measure with numbers, you should probably count the number of tasks completed and the function points associated with them. You need this combination because a developer can complete hundreds of tasks worth just 1 FP each, but probably couldn't do that if the FP counts were higher.
Sum this up and you know how "productive" the developer was.
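A minimal sketch of that sum (the task names and FP values are invented for illustration):

```python
# Hypothetical tasks completed by one developer over a review period,
# each weighted by its function points, so a hundred 1-FP fixes don't
# automatically beat a handful of large features.
completed = [
    {"task": "fix login typo", "fp": 1},
    {"task": "CSV export", "fp": 8},
    {"task": "payment integration", "fp": 21},
]

productivity = sum(t["fp"] for t in completed)
print(f"{len(completed)} tasks, {productivity} FPs")  # 3 tasks, 30 FPs
```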

There should of course be some kind of quality control. That's where pair programming and code reviews come in. Fellow developers should be asked how "Dave" has been doing over the past year.

I think it's not really fair to base the productivity level on how many bugs/issues a developer has introduced. Most of the time something is a team effort, which means you aren't the only one who failed to notice a bug/issue. Also, if I recall correctly, there's at least one bug per ten lines of code written.
The person who writes the most code (and probably got the most tasks/FPs done) has probably also introduced the most bugs.