Manager-as-coach

Three-quarters of workers regularly make decisions they don't feel they are trained or qualified for, and nearly two-thirds feel their managers are unapproachable, reveals a survey from the Chartered Management Institute (CMI).
The survey highlights the issues of what the CMI calls a "stressed out, unfulfilled workforce". Some 23 per cent worry about making decisions, 32 per cent have lost respect for their manager and 10 per cent admit covering up mistakes.
Coaching at Work, Volume 6, Issue 4

How should we measure performance in the current climate? By potential, not results, says Paul Stokes, deputy director of the Coaching & Mentoring Research Unit at Sheffield Hallam University
In the current economic climate, particularly in the UK public sector, we are hearing more and more about value for money, effectiveness and efficiency (see Van Doren et al, 2010¹). In short, many are feeling pressured to justify their own performance and that of those they manage.
People are concerned with how their performance is measured. As coaches, we are also concerned with performance. For some, however, its management has a different meaning. In his book, Coaching for Performance, Sir John Whitmore² argues the manager "must think of his people in terms of their potential, not in terms of their performance".
Making assumptions
He goes on: "The majority of appraisal systems are flawed for this reason. People are put in performance boxes from which it is hard for them to escape either in their own eyes or their manager's" (Whitmore, 2009, p14).
At a recent coaching and mentoring research day at Sheffield Hallam University, a group of participants examined the idea of performance management and its relationship with coaching. Our conclusion was that, when you talk about performance management, people associate this with performance measurement and a deficit model of organisations and people.
Rightly or wrongly, we tend to associate performance management with a bundle of tools and techniques such as 360-degree feedback, competency frameworks and appraisals, which are focused on dealing with under-performance and what people are not doing well. We also debated whether the label/brand of 'performance coaching' tapped into the same assumptions.
Glass half empty
We concluded that there was scope for this to happen. My concern is that by deliberately using the performance coaching label and working with such performance measurement tools we, as coaches, might be colluding with a glass-half-empty view.
As I have pointed out, this is quite different from what many coaches, like Whitmore, intend. To borrow a phrase from the organisational theorist, Tony Watson (2006)³, such coaches would argue that people are always in "a process of becoming" rather than being fixed in terms of their competencies.
I am not saying that managers or coaches should duck the idea of managing performance. But perhaps, as so often in coaching, it comes down to the questions we ask. Hence, it may be more important to ask: "What does good performance look like here?", rather than: "What has prevented you from delivering on this?"
My work as a coach, coaching researcher and manager-coach has led me to believe that we often assume people understand what good performance looks like, and that the challenge is one of motivation or commitment. This is one possibility.
Another is that they do not yet understand what this looks like and that they need help in understanding good performance better. This seems to be the case, particularly with my public sector coaching clients in the current climate of cuts.
Of course, a focus on good/excellent performance is embedded in process coaching models like GROW and OSKAR, with their emphasis on success and what already works well.
Nevertheless, in our efforts to work effectively with our sponsor clients, we need to be aware of the dangers of the deficit model of performance and of being drawn into perpetuating this in our approaches to coaching.
References
1 W Van Doren, G Bouckaert and J Halligan, Performance Management in the Public Sector, London: Routledge, 2010
2 J Whitmore, Coaching for Performance: GROWing Human Potential and Purpose - the Principles and Practice of Coaching and Leadership (4th ed), London: Nicholas Brealey Publishing, 2009
3 T Watson, Organising & Managing Work, (2nd ed), London: Prentice-Hall, 2006

The number of organisations using coaching is steadily rising, yet its true value is still not being assessed. The CIPD's John McGurk shares his practitioner guide to real-world coaching evaluation
I was amazed when a colleague told me that the energy company she works for doesn't use coaching. After all, it's now part of normal management practices for most organisations, as a string of Chartered Institute of Personnel and Development (CIPD) surveys have shown.
Coaching and mentoring are powerful and enabling tools for raising performance, aligning people and their goals to the organisation, and cementing learning and skills¹. Coaching is also a powerful agent for driving cultural change and agility, and organisations use it in tandem with organisational development².
However, coaching has an Achilles heel. Evaluation is largely neglected and this mustn't continue, particularly in the current climate when every item and line of expenditure is being acutely scrutinised.
While the number of people who report the use of coaching is steadily rising, its real value is still not being captured, as we established in our 2010 Learning & Talent Development survey.
Make the link
Well under half of respondents used approaches such as linking coaching outcomes to key performance indicators (KPIs) or more quantitative and mixed-method approaches, such as return on expectation (ROE) and return on investment (ROI). Few linked coaching evaluation to performance and only 13 per cent frequently discussed evaluation at management meetings. Only about 20 per cent frequently collected and analysed data on the impact of coaching.
Clear and present danger
In difficult times, anything that cannot prove its value will be increasingly vulnerable. Coaching cannot claim a unique contribution to organisational performance and impact if its practitioners and champions assume its value rather than prove it. We need to build a convincing evaluation narrative, yet many organisations are failing to do this.
We need to move towards a systematic approach based on a thorough review of the coaching process. The CIPD sees evaluation as a cornerstone of effective coaching and we want to assist practitioners in developing best evaluation practice. So what's getting in the way? Perhaps it's our focus on delivery.
Delivery focus
Many practitioners think that developing and delivering coaching is what they are there to do. This can lead people to believe that simply introducing coaching is enough. As Jarvis et al pointed out in The Case For Coaching (2006), there is often an assumption that time spent in any learning activity such as coaching always has a positive payback. The authors also suggest that evaluation may not be addressed because we might uncover negative results that could threaten coaching. Although we know from our surveys that coaching is being used primarily for performance management and leadership development, we know less about its impact on those areas.
There are other issues too:

An overuse of the Kirkpatrick model, even the augmented versions. Despite valiant attempts by Kirkpatrick and his successors to update the model, many use the least sophisticated version based on reactions and anecdote. But we need to look for broader and richer approaches to evaluation.

An obsession with a very narrow view of ROI. This generally subtracts the costs of a coaching assignment from the benefits and expresses the result as a percentage. This is meaningless without the context of the coaching and evaluation of other activities. Philips and Philips (2008) provide a much more robust and systematic ROI approach, which is detailed in the report. As evaluation expert Paul Kearns argues, ROI without a baseline is next to useless.

Concern that evaluation is not a favourite activity of L&TD practitioners. We explain in the report that MBTI type ENFP is well over-represented in the coaching and L&TD community, so we have to be more mindful about developing evaluation. L&TD practitioners work best with delivery and collaboration over learning issues. Getting down to the data may not be their favourite task.

Our use of the softer data around coaching is not systematic. For example, coaching conversations are a source of rich data about the progress of coaching. With an appropriate and proportionate approach to confidentiality we can use basic tools to capture the nature of the conversation.

A lack of systematic collation of the data sources that inform coaching. For example, pre-employment psychometric tests, manager reports, 360s and employee engagement scores all provide valuable data.
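The narrow ROI view criticised above boils down to simple arithmetic. A minimal sketch, with invented figures purely for illustration (not drawn from the CIPD research):

```python
def roi_percent(benefits: float, costs: float) -> float:
    """Narrow ROI: net benefit expressed as a percentage of cost."""
    return (benefits - costs) / costs * 100

# Hypothetical figures for illustration only
estimated_benefit = 60_000  # e.g. value attributed to performance gains
programme_cost = 40_000     # coach fees, participant time, administration

print(roi_percent(estimated_benefit, programme_cost))  # → 50.0
```

The arithmetic is trivial; the problem, as Kearns argues, is that without a baseline the benefit figure is little more than a guess, so the resulting percentage carries no evidential weight on its own.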

Knowing and doing
So what should we do? Evaluation starts with delivery and what people want from us.
Stakeholders won't expect us to produce a spreadsheet with scenario forecasts for coaching and ROI. They are more likely to be convinced if we can tell them how many people are coached, how much we spend on external coaches, the length of assignments and data on impact: perhaps engagement scores before and after coaching, or anonymous 360-degree feedback on people's ability to complete projects.
If we are also generous about interventions introduced by other departments and we can apportion some of the effect to coaching, we will have compelling evidence. That's not happening enough.
We use the lift conversation with the finance director to illustrate how poor evaluation can undermine the resources available for coaching in testing times. According to our survey, roughly seven out of every ten lift conversations would not go well.
What next?
The CIPD believes that the best method of evaluation for any L&TD intervention is to take a holistic approach - the Value of Learning approach we refer to in our research with Portsmouth University (2007). We should shift from this narrow ROI approach to an ROE approach. What did we expect the coaching intervention to deliver? Which behaviours or skills do we wish to see? What improvements?
This raises the issue of alignment. Aligning coaching interventions to the goals of the business is key. The basics are simple. Make it relevant, align what you are doing and measure it.
We provide a simplified graphic model of the approach in Figure 2 above. The RAM (relevance, alignment and measurement) approach is useful for all learning and talent interventions and keeps us focused on the outcome, not the process.
Finally, an integrated approach is vital. If, for example, we are unaware of the sponsorship and ownership issues within the organisation, we won't get a clear view. If we are not conversant with the positioning and purpose of coaching we cannot design evaluation effectively. If we haven't given a great deal of thought to how coaching is resourced and paid for, including issues like the role of external support and consultancy, the use of internal coaches and the training of line managers, we will not be able to evaluate effectively from the start.
The CIPD has developed the OPRA model (Figure 1), which helps practitioners think about coaching from the point of view of Ownership, Positioning, Resourcing, Procurement, Assessment and Evaluation. This thinking tool can help us to provide an effective space for evaluation.
What the CIPD surveys revealed

Coaching is not being effectively evaluated (CIPD Learning & Talent Development surveys)

Liz Hall
There is a creativity crisis but, fear not, creativity can be trained, creativity expert Dr Mark Batey told delegates at the Chartered Institute of Personnel and Development's HRD conference in London on 6-7 April.
Creative thinking skills have been in steady decline since 1990 and the problem is getting worse, according to an analysis of 300,000 children reported in Newsweek in July 2010, cited by Manchester Business School's Dr Batey.
“Creativity is a fundamental skill in individuals, organisations and economies but it does seem to be an issue with people leaving the school system. But we can train creativity,” he said.
Dr Batey defined creativity as "the capacity within individuals to develop ideas for the purpose of solving problems and exploiting opportunities". Creativity has been the number one strategic priority for businesses in seven of the last eight years, according to research by the Boston Consulting Group, he said. It is the primary focus for entrepreneurial, agile and high-performing companies and is vital for growth out of the downturn. He added that creativity predicts life success, achievement and health, helps teams be more efficient, helps control costs and improves customer service.
He shared one easy exercise for developing creativity: writing a list of "what we do/know already" on the left-hand side and "what we could do" on the other. He said the most obvious answers come first; these need to be exhausted before the exercise throws up potentially promising alternatives.
See also http://www.coaching-at-work.com/2011/01/09/ask-yourself-impossible-questions-thats-the-answer/
See also http://www.coaching-at-work.com/2007/05/04/open-your-mind/