The mysterious case of the assessment centre

To predict how people are going to perform in a new role takes a bit of detective work. You design an assessment centre around a series of tests and exercises, you compute the results, and the best candidates turn out to be the most suitable... don't they?

Unfortunately, this is not always how assessment centres pan out. The individual assessment methods might be good in themselves, and yet all too often the whole turns out to be less than the sum of its parts.

So what’s going on? How can a combination of good tests and exercises produce a bad result?

As long as you are using valid and reliable methods of gathering information, the assessment itself is probably not to blame. The culprit must be sought elsewhere.

In the search for answers, the assessment centre sleuth must turn the spotlight on the competence of the assessors. Are they suitably trained and able to administer their chosen assessments properly? Unless they follow best practice throughout, assessors might be tempted to stray from the correct process.

The biased assessor, closely related to our first suspect, is the classic wolf in sheep’s clothing. Subjective judgments can make them reach false conclusions. It’s human nature to warm to some people and not others, and once you allow yourself to be dazzled by the charm of an intelligent and affable Extravert (for example), you might start to ignore the actual assessment results. At which point you might as well declare: “We’ll choose the two tallest candidates and flip a coin to see which one we go for”.

Which leads us to the third suspect – the process itself. The assessment centre can appear to be thorough and fair, churning out stats for the assessors and their computers to mull over; but if the process is badly designed, it’s a waste of everyone’s time. For example, it’s critical to be clear about the competencies you need, and to what level. Selecting the right assessment methods for a particular role and situation, while at the same time avoiding discrimination, requires an expert touch. Even when you’ve achieved this, the scoring of exercises can still be subjective if not carefully designed; and sometimes the link between assessments and competencies might not have been clearly defined. It’s also worth paring things down. Exercises might be fun, challenging, ingenious, etc, but do two actually tell you more than one?

And this leads us to point our finger at suspect number four – the results. You might think they’re beyond suspicion, coming as they do from valid and reliable assessment methods. However, some measures, notably group exercises, can be highly situational and favour extraverts over introverts. And some groups may be at a disadvantage if, for example, the assessments are aimed at a default Anglo-American-West European mindset. It’s important that candidates, and not just assessors, are supported through the process and helped to understand as much as possible, so that the responses they give are typical of how they would actually behave. You need to help candidates embrace the spirit of the process and accept its validity.

Which leads us to the last of our suspects, the post-mortem. Exercises that have been assessed by several people can assume undue importance in the assessor discussion. This creates an imbalance in your efforts to capture data from different sources in order to create a reliable ‘final’ assessment against each competency. The group dynamics of the assessors’ post-mortem can also impact on objective decision-making – again, trained assessors and a clear process for integrating data can help with the quality of those decisions.
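To make the idea of "a clear process for integrating data" concrete, here is a minimal sketch of what a structured score-integration step might look like. Everything in it is illustrative: the 1–5 rating scale, the competency and exercise names, and the disagreement threshold are all assumptions, not a prescribed method.

```python
from statistics import mean, stdev

# Hypothetical ratings: competency -> list of (exercise, assessor, score on a 1-5 scale).
ratings = {
    "communication": [("role-play", "A1", 4), ("presentation", "A2", 3), ("interview", "A3", 4)],
    "planning":      [("in-tray", "A1", 2), ("case-study", "A2", 5), ("interview", "A3", 3)],
}

def integrate(ratings, disagreement_threshold=1.0):
    """Average each competency's scores across exercises and assessors,
    and flag competencies where assessors disagree enough to need discussion."""
    summary = {}
    for competency, entries in ratings.items():
        scores = [score for _, _, score in entries]
        summary[competency] = {
            "final": round(mean(scores), 2),
            # High spread means the post-mortem should examine this competency,
            # rather than let one loud voice or one exercise settle it.
            "needs_discussion": stdev(scores) > disagreement_threshold,
        }
    return summary

print(integrate(ratings))
# e.g. "planning" is flagged because its scores (2, 5, 3) diverge widely.
```

The point of a step like this is not the arithmetic but the discipline: every competency gets a final figure derived the same way from all sources, and disagreements are surfaced explicitly instead of being resolved by whoever talks loudest in the assessor discussion.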

To catch the culprit and give this particular detective story a happy ending, jot down the following in your casebook:

Make sure you know what ‘good’ looks like in this role

Pick assessment methods that allow you to measure this effectively

Ensure that assessors are expertly trained

Structure assessments so that they can be objectively scored and clearly linked to competencies

Prevent one exercise from dominating proceedings (or any from introducing unfair bias)

Keep a tight rein on candidate-crunching assessor discussions

Document everything so that you can sniff out possible biases later

Know your law – you need to be able to justify all of the above (if challenged) to show you are doing the right thing

Track the people you recruit over time and check to see whether your predictions came true!

In brief, keep it professional and objective, and know where to draw the line. Creating a thorough list of evidence in your case is more important than having a whale of a time with the potential candidate. Above all, retain your Holmes-like professional aloofness and clear-sightedness throughout, and the case will be closed.