Lots of people I know love estimation and scheduling patterns and search for the One True Estimation Technique. I'm kinda going off mega-accurate ones and thinking of writing TaskSchedulingUsingZipfsLaw.

LoadFactor boils three concepts down into one number: programmer
availability (the ratio of time actually spent on the job, as distinct
from on holidays or at the dentist), programmer efficiency (the ratio
of job time actually spent programming, as distinct from other
development tasks or PMing the computer), and estimation error.
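A minimal sketch of how those three ratios might roll up into a single load factor. The multiplicative composition is my assumption, not something defined on this page, and the figures are invented for illustration:

 import math

 def load_factor(availability, efficiency, estimation_error):
     """Elapsed time per unit of estimated programming time (assumed form).

     availability:      fraction of work time actually spent on the job
     efficiency:        fraction of job time actually spent programming
     estimation_error:  actual programming time / estimated programming time
     """
     return estimation_error / (availability * efficiency)

 # Example: available 80% of the time, programming 50% of that, and
 # estimates running 20% low -> each estimated day costs 3 elapsed days.
 print(load_factor(availability=0.8, efficiency=0.5, estimation_error=1.2))  # 3.0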

The only "real" measure of programming time is actual programming
time. The usefulness of RealProgrammingTime? is limited to determining
deployment schedules. Highly regarded by business, but that doesn't
make it a true measure of work/effort.

Estimates should be compared to actuals. AIM Software are now logging
programmer time so we can determine actual programming time. I can't
yet say I recommend it, but it's the only method I can think of.

The "gut-feel" or "one-minute" estimation technique used in
IdealProgrammingTime may be good enough for prioritising, but the lack
of an algorithm/audit trail leaves the result a little vaporous /
unsubstantiated.

FunctionPointAnalysis is a little whiffy too. Where did all those
factors come from? Show me their derivations.

Surely it can be simpler than that. Take an engineering task and write down:

The number of classes added

The number of classes modified

The number of classes deleted

The number of methods added

The number of methods modified

The number of methods deleted

The LOC added (it shouldn't matter how you measure them as long as it's consistent)

The LOC deleted

Write down the actual programming time.
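One way to capture those figures per task is a small record like the sketch below; the field names, units, and the choice of Python are just an illustration, nothing prescribed here:

 from dataclasses import dataclass

 @dataclass
 class TaskRecord:
     """One engineering task's counts plus its actual programming time."""
     classes_added: int
     classes_modified: int
     classes_deleted: int
     methods_added: int
     methods_modified: int
     methods_deleted: int
     loc_added: int
     loc_deleted: int
     programming_hours: float  # actual programming time, however you log it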

Collect this data for a number of engineering tasks. With luck, some
coherence will emerge.
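Building on the TaskRecord sketch above, and assuming NumPy is available, one way to look for that coherence is an ordinary least-squares fit of actual programming time against the eight counts. This is a sketch of the exercise, not a recommended model:

 import numpy as np

 def fit_hours(records):
     """Least-squares coefficients mapping the counts (plus a constant
     term) to programming hours."""
     X = np.array([[1,  # constant term
                    r.classes_added, r.classes_modified, r.classes_deleted,
                    r.methods_added, r.methods_modified, r.methods_deleted,
                    r.loc_added, r.loc_deleted]
                   for r in records], dtype=float)
     y = np.array([r.programming_hours for r in records], dtype=float)
     coef, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
     # residuals is the sum of squared errors (empty if underdetermined);
     # a large value suggests the counts alone don't explain the hours.
     return coef, residuals

Once enough tasks are logged, compare the predicted hours (X @ coef) against the recorded ones; if the fit is poor, these counts aren't the whole story.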