How much detail makes for a good competency framework?

Much of the work we do with clients involves, at some level, job analysis or competency frameworks. Working with large numbers of these, we begin to see patterns and consistent themes, but we also see great variation in the depth, breadth and structure of competency models. When developing a framework to underpin your talent management efforts, it can be fiendishly difficult to strike a balance between something broad enough to apply to most roles and something so generalised it ceases to be useful. The intended application, and therefore the level of detail and specificity required, is well worth debating before you embark on a competency development project.

Research by James Meachin and Stephan Lucks (reported in the BPS’s Assessment and Development Matters, Vol. 2 No. 3, 2010) explored the optimal level of ‘granularity’ for competency frameworks when used as predictor measures and assessment criteria. Research into how well various personality constructs predict job performance suggests that some of the broad measures, such as the Big Five, have limited predictive validity, but that this improves when job performance is correlated with finer-grain sub-traits, such as ‘dependability’. This would suggest that better predictions of job performance come from fine-grain, or more specific, behavioural criteria.

Based on the literature, Meachin and Lucks hypothesised that assessment centre ratings based on a fine-grain competency framework would correlate more strongly with conceptually matched job performance measures (line manager ratings). In other words, they would give a more accurate prediction of high performance on the job. Interestingly, what they actually found was that the predictor measures showed stronger correlations with line manager performance ratings as they became broader, not narrower. Aggregating the competency scores into a general, overall measure of performance seemed to be a more reliable way of predicting high-performing individuals than focusing on their performance in specific competency areas.
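There is a familiar statistical mechanism that can produce this pattern: if each competency rating is a noisy view of the same underlying performance, averaging several ratings cancels out independent rating error, so the broad score tracks an external criterion more closely than any single competency does. The simulation below sketches that mechanism with entirely synthetic data (the candidate numbers, noise levels, and variable names are illustrative assumptions, not figures from the study):

```python
import numpy as np

rng = np.random.default_rng(42)
n_candidates, n_competencies = 500, 8

# Latent "true performance" for each candidate (unobservable in practice).
true_performance = rng.normal(0, 1, n_candidates)

# Each fine-grain competency rating = true performance + independent rating noise.
competency_ratings = (
    true_performance[:, None] + rng.normal(0, 1.5, (n_candidates, n_competencies))
)

# Line manager ratings: the criterion, itself a noisy view of true performance.
manager_ratings = true_performance + rng.normal(0, 1.0, n_candidates)

# Correlation of each individual competency with the manager ratings.
individual_r = [
    np.corrcoef(competency_ratings[:, j], manager_ratings)[0, 1]
    for j in range(n_competencies)
]

# Correlation of the aggregated (broad) score with the manager ratings.
aggregate_r = np.corrcoef(competency_ratings.mean(axis=1), manager_ratings)[0, 1]

print(f"individual r: {min(individual_r):.2f} to {max(individual_r):.2f}")
print(f"aggregate  r: {aggregate_r:.2f}")
```

Because the rating noise is independent across competencies, the mean score is a more reliable measure of the latent construct, and its correlation with the criterion rises accordingly, which is the same logic the Spearman-Brown formula captures for test lengthening.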

For practitioners, this is useful information. To create robust assessment processes which differentiate between higher- and lower-performing candidates, a job analysis or competency framework has to provide depth and a level of detail that makes explicit the behaviours and competencies which are important to success or which demonstrate effectiveness. In assessment for development purposes, the value is undoubtedly in the detail: in helping people understand the specific aspects of their performance or behaviour which make them more or less effective. But in recruitment, by relying too heavily on the detail and homing in on one or two areas deemed crucial to the job, we could be missing the bigger picture.

So, perhaps the optimal situation is to have a detailed, granular competency framework which sets out the specific behavioural indicators across a number of competencies (no fewer than 6, and no more than 12?). By collecting assessment data against your framework (through performance appraisal, assessment processes, or 360 degree feedback), you can then perform a factor analysis on your competencies to determine whether any higher-order (or coarse-grain) factors underlie them (this may result in a general, overall performance construct, or perhaps two or three clusters of competencies). Aggregating competency scores in line with these underlying factors, and making decisions based on these broader measures, is likely to improve the reliability of the framework and ensure that you’re not letting potentially good candidates slip through the net.