Assessments: The Art of Measurement of Performance: Lessons from the Medical Profession

Submitted by Tanya Gupta
On Wed, 06/04/2014

Co-author: Abir Qasem

In our last blog [1] we talked about why measuring competency and performance is vital for development professionals. In this blog, we discuss some takeaways from the medical profession on measuring competencies and performance.

The medical profession, by necessity, has hard requirements (inflexible and critical requirements) for measuring competencies and performance; in fact, such measurement is mission critical. While the development profession does not have such “hard” requirements, we can learn from medicine’s rigorous approach. Here are a few principles and rules that we could borrow:

Focus on competencies, performance and the space in between: In medical professional practice, the “does” level [1] of Miller’s model refers to “performance in context” and has been a challenge to measure. The problem is that what doctors do in controlled assessment situations (competencies) correlates poorly [2] with their actual performance in professional practice (performance). This is also a problem in non-medical workforce assessments. A great deal of work in the medical measurement field has gone into bridging the gap between competencies (what doctors can do under ideal conditions) and performance (what doctors do in real life). In development, too, we need to measure competencies as well as performance, but not mistake one for the other. A development staff member’s competencies may be quite clear and well established; however, they are not the same as performance. Performance means using those competencies effectively in real life, and we need to measure how staff do this. Having the right degree and having worked in the right countries is not the same as closing a project effectively and to the satisfaction of a client.

Competence is specific to situations and is not an absolute but a continuum: Medical assessments used to assume that certain areas of medical competence were stable and generic. Now there is acknowledgement that competence is specific to particular situations or contexts, not generic, and wider assessment is conducted across the curriculum. Competency models have been widely used in the medical profession with an emphasis on a continuum, so that progression is established, as opposed to a binary pass-fail standard. In the development world, too, evaluation of performance and competence should be seen as a continuum that changes over time and is specific to situations. Working with fragile states, for example, requires different competencies than working with middle-income countries. Likewise, working in development today requires a different set of skills than what was needed twenty years ago: the development professional today needs the skills and networks to work with civil society and other stakeholders, and must also be conversant with social media and the importance of big data and analytics.

Assessment is a program of activity that uses multi-source qualitative and quantitative information: Assessment is now thought of as a “program of activity” that collects and integrates quantitative and qualitative information from different sources. Assessing doctors in the workplace adds to the richness of assessment. There is an acknowledgement that learning should be “authentic”, that is, strongly linked to the understanding and solution of real-world issues. However, due to implementation challenges, workplace assessments have not fully replaced other assessments. 360-degree feedback systems are becoming popular [3] in the development sector (e.g. the UN) but are used mostly for senior staff. What is often missing is a set of purely quantitative measures. We recognize that these measures are quite hard to develop, but this does not lessen their importance: such quantitative measures are key to making the assessment process transparent. Very often, management in development organizations recognizes that quantitative assessment is important but fails to invest the resources needed to achieve this goal.
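To make the idea of combining multi-source information concrete, here is a minimal sketch. The source names, weights, and rating scales below are purely illustrative assumptions on our part, not an established methodology: the point is only that once each source is put on a common scale, quantitative and qualitative inputs can be aggregated transparently.

```python
# Illustrative sketch: combining multi-source assessment data into one
# composite score. Source names, weights, and scales are hypothetical.

def composite_score(sources, weights):
    """Weighted average of per-source scores, each normalized to 0-1."""
    total_weight = sum(weights[name] for name in sources)
    return sum(
        weights[name] * (score / max_score)
        for name, (score, max_score) in sources.items()
    ) / total_weight

# Scores recorded as (raw value, maximum possible) per source.
sources = {
    "self_assessment": (7, 10),          # qualitative, self-reported
    "peer_feedback": (32, 40),           # aggregated 360-degree ratings
    "supervisor_review": (4, 5),         # annual review rating
    "projects_closed_on_time": (8, 10),  # quantitative delivery metric
}
weights = {
    "self_assessment": 0.1,
    "peer_feedback": 0.3,
    "supervisor_review": 0.3,
    "projects_closed_on_time": 0.3,
}

print(round(composite_score(sources, weights), 3))
```

Publishing the weights alongside the score is what gives the process the transparency argued for above: staff can see exactly how each source contributed.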

Reproducibility of assessments is key: Being able to reproduce an assessment is considered a strength. Ceteris paribus, if five people rate someone 10 out of 10, we can have more confidence in the quality of that rating. This confidence can be improved through “triangulation within the workplace assessment”. Triangulation involves using a variety of methods to collect evidence and using multiple raters over time. There are also established and recognized methods of gathering evidence, and there has been an accompanying evolution from reliability toward maximizing “consistency and comparability”. Development organizations could also benefit from ensuring that assessments are reproducible to the extent possible.
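As a toy illustration of the multiple-raters idea above (the agreement threshold and the sample ratings are hypothetical), one can check how tightly several raters agree before trusting their average:

```python
# Illustrative sketch of multi-rater triangulation: the same piece of
# work is rated by several raters, and we only flag the average as
# trustworthy when the ratings cluster tightly. Threshold is hypothetical.
from statistics import mean, stdev

def triangulate(ratings, max_spread=1.5):
    """Return (average, trusted): trusted is True when the raters'
    scores are spread narrowly enough to have confidence in the average."""
    spread = stdev(ratings) if len(ratings) > 1 else 0.0
    return mean(ratings), spread <= max_spread

# Five raters score the same work out of 10.
consistent = [10, 10, 10, 10, 10]
divergent = [10, 4, 9, 3, 8]

print(triangulate(consistent))  # identical ratings: average is trusted
print(triangulate(divergent))   # wide spread: average is less trustworthy
```

The same average can hide very different levels of agreement, which is why reproducibility, and not just the score itself, matters.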

Encouraging the use of a portfolio: Including a portfolio in assessments is seen as useful. In the medical profession, a portfolio can be seen as a “dossier of evidence” collected over time that illustrates and documents a doctor’s education and practice achievements. Development portfolios could be incredibly important both for individual development professionals and for specific sector groupings and practices within a development network.

These are some useful lessons that can be viewed in a multi-disciplinary context. In our next blog, we will take this further and answer the “so what?” question in more detail. We will talk about how we can extend these lessons to further the measurement of performance and competencies in the development world.