Over the past few days, I’ve had a breakthrough in understanding why firms won’t move away from the timesheet model.

It’s really quite simple – people feel they need to “measure” their performance in some quantitative manner. This means they prefer to let an inherently subjective measurement form the basis of their perception of how they have “gone” in doing their work. This was sheeted home to me the other day when I was having a chat with one of my gurus at work. They wanted to know whether they had progressed over the past year and how successful they had been in delivering outcomes.
The discussion turned to the methods we could use to assess how they had performed. All good, but the conversation then progressed to the difference between qualitative and quantitative measures. Now, being accountants, we inherently prefer quantitative measures – things like gross margin, profitability, ROI, efficiency and the like. All good and useful in some respects, but the measurement is usually only an indicator of something else – something qualitative.

Let me explain. In our conversation, we talked about the things that were really important to our firm – things like developing each other in technical and non-technical ways (communication, customer relations and so on). These things are incredibly difficult to measure – so difficult that I am not aware of any way of objectively assessing them. As I pointed out to my team member, they had contributed enormously to the development of one of their support people over the past 12 months. They had led by example and created a more rapid pathway for the person concerned to develop their career, personally and professionally. The leadership, coaching and development they provided have formed a platform that will carry that support team member through their career. So I asked: “How do you value that?” What method do we use to assess this level of contribution? In my view, such an assessment is going to be subjective, and no two people would come to the same conclusion about the “value” of that work.

In assessing this type of contribution, if we were using timesheets, we could point to the estimated time (do we record it in 6-minute or 10-minute increments?) that might have been allocated to “development” or “training”. But much of the training related to customer work, so should we allocate it to the customer? If we did allocate it to the customer, they would have every right to get pissed off that they were being “charged” for training. So many decisions!
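Even the increment question is arbitrary. A minimal sketch (the session durations and increment sizes here are hypothetical, not from any real firm’s system) shows how the same actual work “records” as different totals depending purely on the increment chosen:

```python
import math

def round_up(minutes, increment):
    """Round raw minutes up to the next billing increment,
    as timesheet systems commonly do."""
    return math.ceil(minutes / increment) * increment

# Hypothetical coaching sessions, in raw minutes (46 minutes in total)
sessions = [7, 22, 13, 4]

for inc in (6, 10):
    recorded = sum(round_up(m, inc) for m in sessions)
    print(f"{inc}-minute increments: {recorded} minutes recorded "
          f"for {sum(sessions)} actual")
```

Under these assumed numbers, 46 actual minutes becomes 60 recorded minutes in 6-minute increments and 70 in 10-minute increments – the “measurement” shifts before we’ve even decided who to allocate it to.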

How much time should we spend on this work? Is there a benchmark or average (you know – where the best of the worst meets the worst of the best) that we should use to determine the input required? No. Everyone is unique and learns in their own particular style. There is no single over-arching approach that works for everyone, so the time spent tailoring the training approach to achieve the best outcome is of incredible value – as is the trial-and-error process undertaken to work out the most effective approach. The value my resident guru added to their team member was the combination of a range of skills, talents and abilities developed over many years. And then, how do we attach an “hourly rate” to that? At the end of the day, does the arduous quantitative process we would need to go through to get a number add anything valuable to our analysis or inform our decision making?

The thing that really matters is that the outcome is effective. The process itself is inefficient until the trainer and trainee have worked out what works for them. A “one size fits all” approach will only create average outcomes, and no-one wants to be average! Spending the time to work out effective outcomes is far better than recording the time spent. For example, if we knew it took Manager A and Team Member B 20 hours to work out the most effective training method for Team Member B, could we use that to estimate the training time needed for Team Member C? Of course not. B and C are different people with different learning styles. Using the metrics from B to design the process for C stands a wonderfully unlikely chance of being useful to anyone for anything.

The measure of effectiveness should not be based merely on subjective assessments that inform us of little and guide us nowhere. The effectiveness of good training lasts a long time (a lifetime?), and trying to reduce it to a number devalues the contribution that has been made.

And because the analysis and assessment of the effectiveness of what you do is so inherently subjective, most firms cling to timesheets. They know they’re not right. They know they’re subjective. They know they’re a pain in the you-know-what. But they’re too scared to let them go, because they believe it’s all they have. Sad, really.