What investigations would be useful to understand the claims about the productivity of software developers? The existing studies are now old and come from an era when the technology was completely different from that available now — an era when one of the significant studies examined the difference between online and offline programming and debugging. McConnell’s 10X Software Development article refers to a study titled “Exploratory Experimental Studies Comparing Online and Offline Programming Performance” (emphasis mine):

The original study that found huge variations in individual programming productivity was conducted in the late 1960s by Sackman, Erikson, and Grant (1968). They studied professional programmers with an average of 7 years’ experience and found that the ratio of initial coding time between the best and worst programmers was about 20 to 1; the ratio of debugging times over 25 to 1; of program size 5 to 1; and of program execution speed about 10 to 1. They found no relationship between a programmer’s amount of experience and code quality or productivity.

An interesting place to start a modern investigation would be that last observation: they found no relationship between a programmer’s amount of experience and code quality or productivity. As a developer with 27+ years of experience in the field I have a vested interest in that observation being incorrect, but it would be interesting to see if it can be repeated. One reason for trying to repeat the experiment is that back in 1968 people with more than 10 years’ experience would have been exceedingly rare, but now, 40+ years later, it should be easy to find people with 10, 20, 30 or more years’ experience and see if there is any trend over the longer term.
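To make the proposed trend analysis concrete, here is a minimal sketch of the kind of check such a study might run: given survey rows pairing years of experience with some productivity measure, compute the correlation between the two. The data below is entirely invented for illustration, and the productivity measure (tasks completed per week) is just a placeholder for whatever metric a real study would define.

```python
# Hypothetical sketch: does productivity trend with years of experience?
# All sample data here is invented; a real study would substitute survey results.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented sample: (years of experience, tasks completed per week)
sample = [(1, 4.0), (3, 5.5), (5, 5.0), (10, 6.5), (20, 6.0), (30, 7.0)]
xs, ys = zip(*sample)
r = pearson(xs, ys)
print(f"correlation between experience and productivity: {r:.2f}")
```

A value near zero would be consistent with the 1968 finding of no relationship; a strong positive value would suggest experience does matter over the longer term. A real analysis would of course need far more data and controls than this sketch.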

Interestingly, employers seem to have attached themselves to the idea that productivity is not closely related to experience when they advertise for team leads with 3 years of experience and consider developers with 5 years of experience to be senior.

Other factors worth investigating, for which there is some anecdotal evidence that they may affect productivity:

Breadth of experience — number of different programming languages that a developer has worked in.

Cross paradigm experience — does it make a difference how many different paradigms the developer has worked in?

Specific paradigms — is there a particular paradigm that makes a difference? Are the claims that functional programming improves general programming ability supportable from the data?

Specific experience — does it make a difference if a developer has spent a lot of time focused on one particular language? This might seem obvious, but we have been surprised by obvious ideas that turn out not to be true.

Similar experience — does it make a difference if the developer has experience in similar but not quite the same languages? Moving between C++, Java and C# could make a developer more aware of the subtle syntax differences between these languages and hence less likely to make mistakes that will slow down development.

Toolset experience — does the amount of time working in the toolset matter, independent of the experience with the particular language? There are several multi-language development environments that have been around for enough time for this to be a feasible investigation.

Domain experience — does experience in the problem domain make a difference? Employers seem to think so, based on job postings, but does the data back the idea up, and how significant is the difference?

There are probably more factors that could be investigated, but these will make for a good starting point.