“I think I’ve been in the top 5% of my age cohort all my life in understanding the power of incentives, and all my life I’ve underestimated it. And never a year passes but I get some surprise that pushes my limit a little further.” — Charlie Munger

Smart managers never underestimate the power of an incentive!

We like a good story. Here’s one all the way from Ancient Greece, from the time of Socrates and Plato: the fable of The Ring of Gyges.

Gyges was a shepherd in Ancient Greece. One day, while out shepherding, he stumbled across the lost tomb of a mysterious corpse. Being the curious sort, Gyges investigated the tomb and found a ring on the dead body. He was also the sort who didn’t like to see waste, so he took the ring and put it on. Upon putting on the ring, he found it made him invisible to other people. This power led Gyges to do many things he wouldn’t dare do if people could witness his actions: spying on naked women, seducing the queen, murdering the king, and so on. Basically, all the things any youthful and high-spirited shepherd lad might do in similar circumstances…or not.

If you’d like to hear more about this ancient fable, you can watch this fun animated video of it here:


The moral of this fable is the belief that people only behave in a good and respectable way when they have to: because they are held to it by the opinion of other people and by the law (and the consequences of breaking the law, when caught). If anyone discovered a way to escape the consequences of their actions and never get caught (in the fable’s case, because Gyges could become invisible), then they certainly would not behave in a moral or good way. Basically, people are only good because they have to be. And, when they don’t have to be, people are innately selfish and amoral, as shown by Gyges.

So what does any of this have to do with employee engagement and incentives, you may ask?

Well, it has everything to do with employee engagement and incentives. Businesses and managers care about employee engagement because it’s commonly believed that a more engaged employee is a more productive employee (and an engaged employee is one who is more likely to be concerned with the good of their work, their team and the company). In other words, employee engagement is concerned with the motives, motivation, and behaviour of the employee. And incentives are all about trying to direct an employee’s motives, motivation and behaviour in a way that benefits the organisation (and, hopefully, that benefits the employee too — otherwise it won’t be a very effective incentive).

If you agree with the above summary of employee engagement and incentives, then it’s obvious that incentives really matter for employee engagement! So it’s worth us taking a minute to discuss what we mean by an ‘incentive’ in this context. We’ll do so with some help from Steven Levitt and Stephen Dubner in their hugely popular book, Freakonomics: A Rogue Economist Explores The Hidden Side Of Everything:

“Economics is, at root, the study of incentives: how people get what they want, or need, especially when other people want or need the same thing. Economists love incentives. The typical economist believes the world has not yet invented a problem that he cannot fix if given a free hand to design the proper incentive scheme. [To an economist] an incentive is a bullet, a lever, a key: an often tiny object with astonishing power to change a situation. An incentive is simply a means of urging people to do more of a good thing and less of a bad thing. There are three basic flavours of incentive: economic, social, and moral. Very often a single incentive scheme will include all three varieties. We all learn to respond to incentives, negative and positive, from the outset of life. But most incentives don’t come about organically: someone — an economist or a politician or a parent — has to invent them.” (pp.16–17)

In the workplace, everything can be an incentive or a disincentive! An employee is incentivised (or disincentivised) by many things in an organisation: the processes, systems, workspace and tools for work; management; colleagues; the organisational culture and values; communications; salary and benefits; career growth opportunities; office environment and facilities; the location of the office and the commute; and so on. Incentives can be formal, like earning a performance bonus, or informal, like earning the respect of one’s colleagues. Commonly, incentives are used either to drive exceptional performance or to encourage some form of change management in operations and employee behaviour; but incentives can also be used to reinforce normative traits and behaviours (such as the company’s values and standard operating procedures).

When seen in this light, it’s obvious that incentives really matter! They can contribute to an organisation achieving greater productivity and profitability. And, when incentives go wrong, they really can go wrong and risk an organisational train smash.

All too often, when designing and implementing a new organisational policy or process, too little thought is given to the employee’s experience of working with it and how it may affect their engagement with their work. Worse still, the managers who design or sign off on the new policy, process or incentive can become irrationally wedded to their idea, despite all evidence that it might not actually work or drive the right behaviour!

A great example of how incentives matter and how managers can go the wrong way about them is a telling anecdote by Charlie Munger on Federal Express’ night shift:

“One of my favourite cases about the power of incentives is the Federal Express case. The heart and soul of the integrity of the system is that all the packages have to be shifted rapidly in one central location each night. And the system has no integrity if the whole shift can’t be done fast. And Federal Express had one hell of a time getting the thing to work. And they tried moral suasion, they tried everything in the world, and finally somebody got the happy thought that they were paying the night shift by the hour, and that maybe if they paid them by the shift, the system would work better. And lo and behold, that solution worked!”

As managers (and as HR professionals), we constantly try to address and improve employee engagement at our organisations. Often, we need to think about ways to incentivise (or avoid disincentivising) employees. Often, we are faced with questions about how employee behaviour might change as a result of changes we make to their work and to the organisational systems they interact with in the course of it. When faced with such conceptual management challenges, it’s tempting and easy to fall into a simplistically binary paradigm about human nature and how it relates to managing our people. Namely, we end up assuming that people are either inherently good or inherently bad. Then we design something to suit whichever of these alternative views (people are all good, or people are mostly untrustworthy) we decide to take of the staff we employ. Basically, this becomes a question of which is more effective: carrot or stick? In response, we build processes and incentives that assume employees either are inherently engaged in their work or will behave like Gyges if they think they can get away with it! The authors of Freakonomics put it this way:

“For every clever person who goes to the trouble of creating an incentive scheme, there is an army of people, clever and otherwise, who will inevitably spend even more time trying to beat it.” (pp. 20–21)

The problem with the fable of The Ring Of Gyges is that it makes the same binary assumption about human nature, setting up the moral of the fable as the question of whether people are all good or all bad. This paradigm, and this phrasing of the question about human nature, is a fallacy. In actual fact, the more prosaic and common-sense truth is that people are a bit of both: good and bad. Humans are naturally ambivalent animals. Isn’t that why we all love an anti-hero?

Each one of us is a mix of rational agent and emotional being (and sometimes we can even be highly irrational beings). Each of us has certain character traits, some similar to those around us and some not. How an individual’s character influences their behaviour at any given time will vary with their circumstances and environment, to a greater or lesser degree depending on the particular individual. Context is everything. Internal factors (their character, mood or health, etc.) will affect the decisions they make and their subsequent behaviour. These ‘internal factors’ will, in turn, be influenced, directly or indirectly, by external factors (the political, social and cultural context they live in, as well as things like where they live, who they sit next to at work, or even the weather that day). This makes people complicated, and it makes it challenging to predict how employees may react to any particular aspect of their work, an incentive, the organisation, or any change to any of these. Yet, if we are to find the right models and solutions for improving employee engagement and creating incentives that succeed in motivating people, then we must embrace the true nature of people and employees.

Our natural tendency is to perceive things in binary paradigms (i.e. as an ‘either/or’ choice), when in fact the world is somewhat more nuanced and complex than this. We are often tempted to take a binary perspective of people’s morals and motives when we are designing incentives to drive the right employee behaviour and outputs. This error in our thinking about human nature can be a significant part of the problem when incentives don’t work and when employee engagement is low: a faulty understanding of what people are actually like and what actually motivates them. At other times, low employee engagement can be due to a complete lack of empathy for the employee’s end-user experience, especially when organisations design new processes, systems or incentives.

If you disagree with this view of human nature and of people/employees (and how it affects the quality of employee engagement and the success of incentives), then here are two case studies from Freakonomics, along with some supporting statistical evidence, to persuade you otherwise:

1. The Honest Bagel Businessman:

In 1984, Paul Feldman, the successful head of a public research group in Washington, quit his prestigious job to start an office bagel-delivery business. Every weekday morning, he’d drive around the business district and deliver bagels (and cream cheese) to the break rooms of each of his clients’ offices. Along with the bagels, he’d leave a cash box, because payment for the bagels ran on an ‘honour system’: if people took bagels (and cream cheese), it was up to them to be honest and put the correct payment into the cash box. He was leaving about 7,000 cash boxes per year in hundreds of offices across Washington. Feldman returned later each day and collected the remaining bagels, crockery and, most importantly, the cash box. Within a few years, his bagel business had grown to a weekly delivery of 8,400 bagels to 140 businesses, and he was earning as much as he had in his previous prestigious job!

But what’s really important about Feldman’s bagel business is that he kept a precise record of all his outgoings and incomings, including which client businesses paid for their bagels and which didn’t (as well as the daily and seasonal variations in their payments). In other words, he was measuring their ‘honesty’. Basically, he had inadvertently created an experiment to see how honest white-collar and service workers are (in Washington, USA, at least). What makes this ‘experiment’ even more interesting is that there is very little reliable or detailed evidence about white-collar honesty and crime, because by its very nature it rarely generates data. It also relates nicely to the question at the heart of the fable of The Ring Of Gyges: when unobserved, are people more likely to behave in an amoral or immoral manner? Or, put another way, can you trust your employees?

The findings from Feldman’s ‘experiment’ show us a number of interesting insights into human nature and behaviour, as well as how people respond in different circumstances and to different ‘incentives’. His findings also demonstrate the importance of an organisation’s values and culture. Here is what approximately 20 years of data showed:

Naturally, different organisations had different rates of honesty amongst their employees. Feldman ‘rated’ a company’s culture as ‘honest’ if payment rates averaged 90% or more, as ‘tolerable but annoying’ if payment rates were 80–90%, and as ‘dishonest’ if payment rates fell below 80%.

The ease of doing wrong and getting away with it, along with the nature of the ‘crime’, affects people’s honesty. In other words, when faced with disproportionate temptation, people are more likely to be dishonest. When Feldman started his business, he left an open basket at each organisation for bagel payments to be left in. But too often the cash ‘went missing’ from the open baskets, so he changed them to sealed plywood boxes that took some effort to break into (though they were not fixed down, so someone inclined to steal could have carried away the box itself, cash and all). Each year he left 7,000 cash boxes at his clients’ offices, but lost only about one box per year to theft. So even at companies where people consistently stole 10% or more of his bagels, they still mostly didn’t stoop to stealing his cash box.

Smaller offices and organisations tend to be more honest than bigger ones. An office with fewer than 50 employees consistently tended to outpay offices with a few hundred or more employees, by 3–5%. (Notably, this trend mirrors street-crime statistics, where there is less crime in rural communities than in cities. This is thought to be because rural criminals are more likely to be known in the community, which creates a greater social disincentive in smaller, closer communities: namely, the disincentive of shame and guilt.)

Employees further up the corporate ladder cheat more than those lower down it. (Feldman drew this conclusion from one particular company whose offices covered three floors of a building: junior staff were on the bottom floor, specialist staff and middle management on the middle floor, and executives on the top floor. The executives consistently paid far less often than the other two floors, while the bottom floor consistently paid by far the most. However, as this was just one company, it isn’t statistically valid — although it might be indicative of some grain of truth for a certain type of organisational culture?)

Personal mood and external factors seem to affect people’s honesty. In unseasonably good weather, more people than average paid; whereas in unseasonably bad weather, far fewer people than usual paid (and days with heavy wind or rain had a similar effect). Payment was always worst during the holiday seasons: the week of Christmas regularly saw an overall 2% drop in payments across all organisations [so not so much the season for giving, then?], while Easter, Thanksgiving and Valentine’s Day were also very bad. [These trends probably chime with HR professionals, given the spikes in staff absenteeism they see over seasonal holiday periods.]

BUT, strangely, there were ‘good’ holiday days too, when the rates of payment increased (and therefore people’s honesty did too). These were the Fourth Of July, Labour Day and Columbus Day. The difference between these holidays and the ‘bad’ ones? The ‘good’ holidays were only for one day off work, while the ‘bad holidays’ were for more than one day off.

Between 1984 and 1992, payment (and honesty) rates were fairly consistent. From 1992 to 2001, however, there was a long, slow, continuous decline in overall payment rates, slipping from above 90% to 87%. Then, straight after 9/11 in 2001, the payment rate jumped back a full 2% and didn’t change much thereafter.

Based on his experience and interactions with the different companies he delivered bagels to over so many years, he drew the following conclusions about the difference between clients with consistently ‘honest’ employees and those with a consistent number of ‘dishonest’ ones:

i. The organisations with ‘honest’ employees had better morale, meaning that the employees liked their management and their work.

ii. In other words, an organisation’s culture and its employee engagement levels really matter to the behaviour (and honesty) of the people at the company! And they matter in a measurable way, as well as an ethical one.

iii. At offices where Feldman was present in person more of the time, and where the people buying the bagels knew him, payment rates were consistently 95% or higher. This seems to support Richard Thaler’s famous 1985 ‘Beer on the Beach’ case study: what people are willing to pay is affected by their perception of the source of the good or service being purchased. Thaler is an economist famous for his ‘Nudge Theory’ view of incentivising human behaviour. For more about Thaler’s Nudge Theory and the ‘Beer on the Beach’ case study see:
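As an aside, Feldman’s three-band culture ‘rating’ (90+% honest, 80–90% tolerable but annoying, below 80% dishonest) is simple enough to sketch as a tiny function. The thresholds come from his data as described earlier; the function and its name are our own illustration, not anything Feldman actually wrote:

```python
# A minimal sketch of Feldman's culture 'rating' thresholds, as described above.
# payment_rate is the fraction of bagels actually paid for (0.0 to 1.0).
def rate_culture(payment_rate: float) -> str:
    if payment_rate >= 0.90:
        return "honest"
    if payment_rate >= 0.80:
        return "tolerable but annoying"
    return "dishonest"

# For example, an office that pays for 87% of the bagels it takes:
print(rate_culture(0.87))  # tolerable but annoying
```

The point of the sketch is simply that Feldman turned a fuzzy moral question (“can you trust these people?”) into a measurable, repeatable classification.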

2. When & How Do Teachers Cheat?

In 2002, President Bush signed the No Child Left Behind legislation into law, which included ‘high-stakes’ testing of all students in education through national and state-wide examinations. Many US states created incentives to support the work of improving student test scores: successful schools and teachers received more state funding, pay rises, praise and more promotion opportunities; while unsuccessful schools and teachers had funding withheld, pay frozen, criticism, and were passed over for promotion. For the teachers, the stakes were very high indeed! Yet when we think about cheating, we rarely think of it applying to the teaching profession; even less often is it looked for in the teaching profession, very rarely is it detected, and almost never is it punished when it is discovered! Over the decades, several academic studies of the US teaching system had found that a significant percentage of teachers do indeed cheat, but these findings were never actioned by the relevant education authorities.

In 1996, the Chicago Public Schools system was one of the school districts that embraced high-stakes testing, with financial incentives for teachers tied to good or bad results. In 2002, it aligned this with Bush’s new legislation. Also in 2002, the newly appointed CEO of Chicago Public Schools decided to investigate whether the teachers in his district’s schools were cheating and, if they were, to do something about it. In looking for the worst teachers, the same study could also detect the (genuinely) best ones, so that they could be identified and modelled (as well as given the recognition they deserved).

This new investigation in Chicago found that teachers indeed were cheating on these national and state-wide tests, usually by modifying their students’ test papers (after the test had ended and before submitting them to the official examiner). The cheating was detected by an algorithm that looked for a number of patterns, including: students who had not answered questions in the middle of the test (because they hadn’t reached that point) but had somehow answered all the questions in the final section of the paper (and answered more of them right than wrong); students who had got many of the early questions wrong (where the questions in a test paper tend to be easiest) but then got most of the final questions correct (where the questions tend to be hardest); and students who had done poorly in class tests all year but suddenly got results in their national test noticeably above their usual classroom scores. In contrast, the students of genuinely good teachers showed improvement by answering more of the easier questions correctly, and their gains tended to carry over into their performance in the next year of schooling (while, not surprisingly, the performance of students taught by the cheating teachers regressed in the following academic year).
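To make the logic of those tell-tale patterns concrete, here is a deliberately simplified, hypothetical sketch of them. This is not the actual Chicago algorithm; the section boundaries and thresholds are invented for illustration only:

```python
# A hypothetical, simplified sketch of the cheating 'tells' described above.
# Each answer is 1 (correct), 0 (wrong) or None (left blank).
def suspicious_patterns(answers, classroom_avg, test_score):
    n = len(answers)
    first = answers[: n // 3]               # easiest questions
    middle = answers[n // 3 : 2 * n // 3]   # where a slow student runs out of time
    last = answers[2 * n // 3 :]            # hardest questions

    def accuracy(section):
        marked = [a for a in section if a is not None]
        return sum(marked) / len(marked) if marked else 0.0

    flags = []
    # Tell 1: blank answers mid-test, yet a mostly correct final section.
    if middle.count(None) > len(middle) // 2 and accuracy(last) > 0.5:
        flags.append("blank middle, strong finish")
    # Tell 2: easy early questions mostly wrong, hard late questions mostly right.
    if accuracy(first) < 0.4 and accuracy(last) > 0.7:
        flags.append("weak start, strong finish")
    # Tell 3: a test score far above the student's classroom record for the year.
    if test_score > classroom_avg + 0.25:
        flags.append("score well above classroom average")
    return flags
```

A real detector would, of course, aggregate such signals across a whole classroom, and across years, before pointing a finger at the teacher, since any single student can produce a fluke.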

The findings showed that the profile of which teachers tended to cheat and why they cheated were the following:

Approximately 5% of all teachers cheat in these tests per year, although this is a conservative estimate and it’s likely that the real number is double that or more (i.e. 10–15%).

Gender didn’t seem to be a factor. Cheating teachers were as likely to be either male or female.

A cheating teacher was more likely to be younger and less qualified than the average teacher.

A cheating teacher was more likely to cheat after the incentive system had been changed (and the stakes/consequences of poor performance were raised for them).

Cheating is linked to general poor performance. Cheating teachers were more likely to be found in the lowest performing classrooms during the course of the academic year.


Then the CEO of the Chicago Public Schools system did something about the cheating teachers the investigation had uncovered. He began dismissing cheating teachers where the evidence allowed, and issuing warnings where the evidence didn’t permit dismissal. The result: the following year, cheating by teachers fell by 30%!

There are two pertinent questions that should occur to us in the light of the two case studies above:

What sort of person is likely to cheat or might commit a crime? The answer is: in the right set of circumstances, pretty much everyone and anyone might cheat at one time or another.

Why isn’t there more cheating or crime in the world than there actually is? In the bagel case study, between 5–20% of people were ‘dishonest’, and among Chicago teachers 5% (or perhaps 15% or more) were cheats. That means 80–95% of people were consistently honest (though not necessarily the same 80–95% every time). Given the apparent ‘incentives’ and opportunities to cheat in these two case studies, the actual levels of dishonesty and cheating seem surprisingly low — especially since, in both cases, most of the time the ‘dishonest’ people or ‘cheats’ were never caught, faced little risk of being caught, and therefore faced no consequences for their dishonesty! So, why aren’t more people more dishonest more of the time than they seem to be in these two case studies? Why weren’t more people behaving like Gyges?


These two questions and their answers, in relation to the two case studies above, tell us a lot about human nature, what motivates people, and how people tend to behave most of the time. They also tell us something about the power of incentives, and about ways to think about employee engagement and incentives. In short, we can draw the following conclusions about human nature from these two case studies:

People are not either all good or all bad. In the ‘right’ circumstance and with the ‘right’ incentives, they can be one or other or both. [Remember, we all love Han Solo and as kids playing ‘Star Wars’ we all wanted to be him and not Luke Skywalker.]

Most of the time, in normal circumstances, most people are honest (or, at the least, not overtly dishonest). People tend to follow the path of least resistance: where the effort or risk of dishonest behaviour becomes high, most people (most of the time) tend not to go there.

Employees are people and people are subject to their moods, emotions, irrationality and peer pressure. They are not always ‘rational agents’ so they don’t always respond to incentives that speak only to their rational side.

People can be incentivised (or ‘nudged’) to behave in certain ways — for good and for bad.

An incentive scheme properly aligned to the intended output and to the reality of human nature can have dramatic results! This can be in the form of a carrot [Remember, Federal Express’ night shift] or a stick [Remember, Chicago Public Schools firing and warnings to cheating teachers]. What’s important is that the incentive is well designed, follow-up action is consistently taken and that it’s based on meaningful and accurate data. Remember, action and follow-up action can be incremental, so that you iterate your incentive to become the right model. You don’t have to get it all 100% right upfront. In other words, always test a new incentive or system by piloting it to ensure you can correct any unintended consequences and save yourself an expensive and complicated mess (were you to roll it out company wide without first testing it)!

A badly designed incentive scheme can have a disastrous result and lead to an epic fail. [Remember The Big Short and the 2008 financial meltdown!]

People are more likely to cheat when under stress and pressure. Some scenarios where people are under such stress and pressure are in the control of the organisation to fix, like where an employee is underperforming because of their youth, inexperience or lack of qualifications. Some scenarios where people are under such stress and pressure are NOT under the control of the organisation so are harder to address, like where an employee’s behaviour is affected by their mood, the weather, problems at home, or the holiday season (especially the potentially stressful holidays, like Christmas).

People are more likely to cheat or be dishonest if: it’s very easy to do so and the temptation is made greater due to carelessness (like walking through a ghetto with a wheel-barrow full of cash, as a silly example to make the point); no one is watching; they work in a very large department or organisation; there’s no consequence for it; they feel they have no relationship with their manager, company or supplier.

People who do enjoy their work and like their managers, as well as employees who work in organisations with a positive morale and culture, are all more likely to be engaged and honest employees.


So how do these conclusions about human nature and people’s behaviour help us to implement practical interventions for improving employee engagement and to design incentives that motivate the right employee behaviour? We suggest the following steps:

You can’t design a system or incentive scheme that prevents any and all bad behaviour. So don’t even try. If you do, you’ll end up creating a system so complex that it’ll disincentivise more people than it incentivises! Also, if you treat everyone like they can’t be trusted, they will resent you for it and are more likely to behave in just such a way. Rather, focus on the aggregate results of the system, as there will always be anomalies and outliers: “In any big business, you don’t worry whether someone is doing something wrong, you worry about whether it’s big and whether it’s material. You can do a lot to mitigate bad behaviour, but you simply can’t prevent it altogether.” Charlie Munger.

Fair, consistent, transparent, and easy-to-understand (and easy-to-use) systems and incentives are more likely to win your employees’ buy-in and engender the intended behaviour from them. Overly complex, unfair or inconsistent systems, incentives, or organisational culture are likely to disengage employees, engender their mistrust and resentment, and lead them to behave in ways you’d prefer they didn’t. Clear and accessible communication, and communication systems, are an important factor in this. Your management team needs to live these principles and values in their management of, and interaction with, employees, and to execute them well and without fail, all the time.

Remove any pressure points or bottlenecks in your organisation that place unnecessary pressure and stress on employees, and always ensure that they have the aptitude and competence they need to succeed in their work. Stress combined with incompetence is a strong driver of bad or dishonest employee behaviour. To avoid this, you can make smart use of integrated performance reviews (to spot gaps in an employee’s competence), combined with automated LMS delivery of targeted training interventions (to address the skills gap for any particular employee).

Make sure you have strong feedback loops for employees. These must address employee performance needs and gaps, while providing recognition for good work and behaviour by employees. And always ensure that there is a system for consistently following up on an employee’s performance needs and recognition, so that employees know that they are ‘seen’ and feel that the organisation cares and has their best interests at heart.

When designing a new policy, process, system or app, never forget to consider the employee’s end-user experience of it. Always try to make it easy, helpful and fun for them to use!

Never underestimate the value of your organisation’s culture and how it lives the brand’s values. Done well it massively improves your employee engagement and ensures that their behaviour is the sort you want — more often than it’s not!


We’d like to leave you with a quote by a great economist who knew a thing or two about human nature (and economic incentives), which we hope is helpful for you in managing your team and keeping faith in your people:

“How selfish soever Man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it, except the pleasure of seeing it.” Adam Smith, The Theory Of Moral Sentiments.

Thanks for reading this blog. We request for you the Highest of Fives!