Re: [Ksummit-2012-discuss] [ATTEND or not ATTEND] That's the question!

On 06/20/2012 11:51 PM, J. Bruce Fields wrote:
> On Sat, Jun 16, 2012 at 07:29:06AM -0600, Jonathan Corbet wrote:
>> On Sat, 16 Jun 2012 12:50:05 +0200 (CEST)
>> Thomas Gleixner <tglx@linutronix.de> wrote:
>>
>>> A good start would be if you could convert your kernel statistics into
>>> accounting the consolidation effects of contributions instead of
>>> fostering the idiocy that corporates have started to measure themself
>>> and the performance of their employees (I'm not kidding, it's the sad
>>> reality) with line and commit count statistics.
>>
>> I would dearly love to come up with a way to measure "real work" in
>> some fashion; I've just not, yet, figured out how to do that.  I do
>> fear that the simple numbers we're able to generate end up creating the
>> wrong kinds of incentives.
>
> I can't see any alternative to explaining what somebody did and why it
> was important.
>
> To that end, the best resource for understanding the value of somebody's
> work is the lwn.net kernel page--if their work has been discussed there.
>
> So, all you need to do is to hire a dozen more of you, and we're
> covered!
>
> --b.
>
>> Any thoughts on how to measure "consolidation effects"?  I toss out
>> numbers on code removal sometimes, but that turns out to not be a whole
>> lot more useful than anything else on its own.
>>
>> Thanks,

Resurrecting this one.

So something just came across my mind: When I first read this thread, my
inner reaction was: "People will find ways to bypass and ill-optimize
their workflow for whatever measure we come up with".

That is pure human nature. Whenever we set up a metric, it becomes a
goal, and a bunch of people - not all - will deviate from their expected
workflow to maximize that number. This happens with paper count in the
scientific community, for the Higgs Boson's sake! Why wouldn't it happen
with *any* metric we set for ourselves?

So per se, the fact that we have a lot of people trying to find out what
our metrics are, and to look good in the face of them, is just a testament
to the success of Linux - but we know that already.

The summary here is that I don't think patch count *per se* is a bad
metric. Maybe we should just tweak the way we measure a bit to steer
people towards doing more useful work, and that would aid our review.

The same way we have checkpatch, we can have something automated that
will attempt to rule out some trivial patches in the counting process.
We can scan a patch, and easily determine if each part of it is:

* pure whitespace
* pure Documentation change
* comment fix

And if a patch is 100% composed of those, we simply don't count it.
People who just want to increase their numbers - they will always
exist - will tend to stop doing that, simply because doing it will not
help them at all.
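To make the idea concrete, here is a rough sketch in Python of what such
a filter could look like. This is purely hypothetical - the function
names and the heuristics (path-based Documentation/ detection, C-style
comment matching) are my own, not part of checkpatch or any existing
tool, and a real implementation would need to pair up '-'/'+' lines to
catch whitespace-only rewrites:

```python
import re

def classify_line(path, line):
    """Classify one added/removed line of a unified diff.

    Crude heuristics for illustration only: does not handle
    trailing comments on code lines, or non-C file types.
    """
    body = line[1:]  # strip the leading '+' or '-'
    if path is not None and path.startswith("Documentation/"):
        return "documentation"
    if body.strip() == "":
        return "whitespace"
    # Lines that look like C-style comments: /* ... */, * ..., //
    if re.match(r"\s*(/\*|\*/|\*|//)", body):
        return "comment"
    return "code"

def is_trivial_patch(patch_text):
    """Return True if every changed line in the diff is trivial,
    i.e. the patch would not be counted."""
    path = None
    for line in patch_text.splitlines():
        if line.startswith("+++ "):
            # '+++ b/Documentation/foo.txt' -> 'Documentation/foo.txt'
            path = line[4:].split("/", 1)[-1]
        elif line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
            if classify_line(path, line) == "code":
                return False
    return True
```

A patch touching only Documentation/ would come back trivial and be
skipped by the counter, while a one-character fix in kernel/ code would
still count - which is exactly the incentive we want.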