Debunking Software Engineering Myths: Does the organization matter more than the programming?

I’ve been wanting to post a link to this article for a while, but ever since I discovered it, research.microsoft.com has been unreachable for me, so I’ll post a small summary:

Microsoft has done research on some popular conceptions about software engineering and come up with hard numbers on some of the factors affecting code quality. Here are the main findings reported in the article, with links to the research papers, in case the original is lost forever:

More test coverage does not equal better code quality, as measured by the number of post-release fixes. Usage patterns and code complexity are the main reasons test coverage is a poor predictor of quality.

Organizational metrics, which are not related to the code, can predict software failure-proneness with a precision and recall of 85 percent. Not only that, but organizational structure was by far the best predictor of code quality, at least 8 percent better than the best predictor the researchers could derive from code-based measurements. (The influence of organizational structure on software quality: an empirical case study)
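For readers unfamiliar with the precision and recall figures quoted above, here is a minimal sketch of how those two numbers are computed for a failure-proneness predictor. The component names and labels are entirely made up for illustration and have nothing to do with the Microsoft study's data:

```python
def precision_recall(predicted, actual):
    """Precision and recall for a predictor that flags failure-prone
    components. `predicted` and `actual` are sets of component names."""
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

# Hypothetical data: components flagged by the model vs. those that
# actually had post-release failures.
predicted_faulty = {"parser", "renderer", "cache", "net"}
actually_faulty = {"parser", "renderer", "cache", "logger"}

p, r = precision_recall(predicted_faulty, actually_faulty)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```

A score of 85 percent on both axes means the model rarely flags healthy components (precision) while still catching most of the truly failure-prone ones (recall).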

One drawback is that this research is primarily based on case studies, a method generally poorly suited to drawing general conclusions. How valid are these observations for organizations outside Microsoft? Is the organizational structure of your project or company actually more decisive than your programming methodology?

Also, how transferable is this to other programming frameworks? In dynamically typed languages like Perl, is test coverage more important? I often find that a subset of my tests do what a compiler could have done in a statically typed language, or even in Perl if I just had a more automatic testing tool. So maybe coverage would be more predictive of bugs when the compiler catches fewer mistakes? That would be a good candidate for further research.
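To illustrate the kind of test I mean, here is a small sketch (in Python rather than Perl, but the point is the same; all names are hypothetical). One assertion exercises real business logic; the other merely re-checks something a statically typed compiler would have verified at compile time for free:

```python
import math

def total_price(quantity, unit_price):
    """Naive price calculation -- no type declarations, as is typical
    in dynamically typed code."""
    return quantity * unit_price

# A test of actual business logic: this catches real bugs.
assert math.isclose(total_price(3, 2.5), 7.5)

# A "type" test: it only confirms that passing a string blows up.
# A static type checker would rule this out before the code ever ran.
def test_rejects_string_quantity():
    try:
        total_price("3", 2.5)
    except TypeError:
        return True
    return False

assert test_rejects_string_quantity()
```

In a statically typed language the second test would be dead weight, which is one plausible mechanism for why coverage might predict quality differently across language families.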