I've made a chart of how long each task actually takes me versus how long I estimated it would take, based on a dataset of 42 tasks. For each task, I have a specific unit test in mind that needs to pass before I consider the task complete. Sometimes I end up doing more than I originally intended, sometimes less, and sometimes I abandon the task and move on to something new. Abandoned tasks are not factored into my calculations.

Here's how I compute my metric. If a task takes longer than expected, I divide the actual length by the estimated length and multiply the result by -1. If a task takes less time than expected, I do the reverse and divide the estimated length by the actual length. So if I estimate that a task will take 10 hours and I finish in 5 hours, the score is 2; the other way around, it's -2.

The chart below shows a moving average of the last 5 tasks. On average, for every 1 hour I predict a task will take, it takes 1 hour and 40 minutes to complete. There was one stretch where two tasks in a row took 16 and 18 times longer to complete than I thought, so I really got slammed there.
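The scoring rule and the 5-task moving average could be sketched like this (a rough Python sketch of my own; the function and variable names are mine, and a tie is scored as estimated/actual = 1):

```python
def score(estimated_hours, actual_hours):
    """Signed over/under-estimate ratio: positive when the task finished
    in less time than estimated, negative when it ran long."""
    if actual_hours > estimated_hours:
        # Ran long: e.g. estimated 5h, actual 10h -> -2.0
        return -actual_hours / estimated_hours
    # Finished early (or exactly on time): e.g. estimated 10h, actual 5h -> 2.0
    return estimated_hours / actual_hours

def moving_average(scores, window=5):
    """Trailing moving average: each point averages up to `window` prior scores."""
    out = []
    for i in range(len(scores)):
        chunk = scores[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

For example, `score(10, 5)` returns `2.0` and `score(5, 10)` returns `-2.0`, matching the worked example above; feeding a list of per-task scores to `moving_average` gives the series plotted in the chart.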