Traditionally, people--especially economists--thought that human behaviour was dictated by outcomes. That is, we seek to maximize our outcomes, like earning a large profit. Consequently, most incentives and disincentives in business are outcome-centred, like bonuses or suspensions. Judging by outcomes in this way is called distributive justice.

In the mid-seventies, the social scientists John W. Thibaut and Laurens Walker combined their research on the psychology of justice with the study of process to investigate what makes people trust a legal system enough to follow its laws voluntarily. They discovered that people care as much about the fairness of the process as about the outcome it generates. Simply put, people want to be treated like people and not numbers.

FairProcess, or procedural justice, universally requires adherence to three principles:

Engagement. Involve individuals in the decisions that affect them. Get their input, and allow them to actively PeerReview the ideas on the table. Respect individuals for their ideas.

Explanation. Everyone involved and affected must understand why the decisions were made. Demonstrating the rationale behind decisions shows people that you have considered their opinions thoughtfully and impartially. Not only will this make people trust the decision maker, but it will also help them learn.

Expectation clarity. Once a decision is made, clearly specify what is expected of the people involved and what responsibilities they have. Even if the expectations are demanding, people want to know by what standards they will be judged and what penalties there will be for failure. Understanding what to do reduces useless political manoeuvring and allows people to focus on the task at hand.

By no means does fair process imply consensus. In fact, people are more than happy to let someone make the final decision, provided they understand why that decision was made and that it was the best decision for the best reasons. And of course, if you try to ForceConsensus?, you will experience many failure modes, especially the ConflictParadox, where we get buried staring at the trees instead of seeing the forest.

Because fair process builds trust and commitment, people will go above and beyond the call of duty, volunteering where before they would have had to be coerced. Moreover, an organization clearly does better when its people contribute both mind and body rather than body alone. This builds on the Group Dynamics theory of Kurt Lewin (1947), which states that people learn and adopt newly imposed behavioural methods faster if they have a chance to influence the methods' introduction and application through consultation (Coch and French Jr., 1948).

On the other hand, once fair process has been violated, the victims often demand far more compensation than the original slight would warrant. They tend towards retributive justice, trying to punish those who have harmed them and to ensure it never happens again. The resulting red tape can cripple the organization.

FairProcess is one of the fundamentals of this site, so it appears in many places. Click on the title of this page to see the backlinks. Alternatively, some important related pages are OpenProcess and, more practically, OnlineDiary.

After quite a bit of reading, I think one should be aware that many "academic" authors (especially economists and philosophers) use common words (like "fair") with a specific technical meaning. When this is done openly (as apparently done above), it can provoke interesting conversation. When the technical use is hidden or obscured, it can lead to many pointless arguments over peripheral disagreements (while ignoring more important conflicts).

Personally, after studying philosophy for a few years (including a couple years as a philosophy major in college), I decided that "Philosophy is the politics of pure ideas" (optional quotes around "pure"). At its best and worst, formal philosophy is largely a system of convincing people to adopt certain ideas, just as politics convinces people to adopt certain actions.

In the case above, it seems that they have added "engagement" as a part of "Fair Process". It's an interesting idea, and I would certainly agree that engagement tends to lead toward more "fairness". I don't believe engagement is necessary for people to agree on the fairness of a process, however. Explanation is in a sense part of engagement, but I think it is also useful for "classic" meanings of fairness. (One could consider explanation to be one's engagement in the clarity of a decision.) Clarity (possibly "transparency"?) is probably the most convincing part of the argument--it is critical that people believe that they are judged by relevant, rational, and clear criteria.

I think you discount engagement too much. I can't think of very many situations where engagement is not vital to FairProcess. Perhaps the military? -- SunirShah

Certainly engagement is vital to "Fair Process", since it is part of the definition of their technical term. However, I would say that most people accept many systems/processes as "fair", even if they have essentially no engagement in the process. Indeed, I can't think of a single process that involves me which meets the terms of "engagement", yet I consider many of the processes to be fair. For instance, I was not engaged in the processes that set prices at the stores I visit, or determined the salaries at my current employer--yet I consider each of these to be "fair".

[Actually, I was very slightly engaged by my employer recently, when I received a shareholder invitation to vote on a merger proposal. My vote didn't make much difference, since I own less than 1/1,000,000 of the company (through 401(k) retirement contributions).]

Engagement is certainly good. In fact, it is one of the best ways to convince people that a process is "fair", even if engagement doesn't affect the results. Many people have experienced corporate "feedback" processes that are simply a method of containing dissatisfaction.

For another example, voting is often a powerful tool for managing minority dissent. In the upcoming US presidential elections, thousands of US citizens will vote for minor third parties, even though those parties have no reasonable chance of winning. The amazing thing about Western democracies is that they convince the losers to accept defeat gracefully. (In many US elections, 40% or more of the voters accept their candidate's defeat.)

On further thought, I may have been too harsh in my previous criticism. As long as the authors define their new terms clearly, ideas like "Fair Process" might become useful terms in serious discussions. It would be quite awkward to carry on a discussion about "Engaged/Explained/Clear Process", so some degree of jargon is useful. --CliffordAdams

In a shop, you are engaged in setting prices - you can choose to buy or not to buy, and you can choose to buy at that store or at a competing store. You can also complain to management. Note that in a monopoly situation you can't engage in setting prices by voting with your pounds (/dollars/euros/...). That's why monopolies either have some kind of public process for setting prices (e.g., the BBC) or suffer severe negative public reactions (e.g., Microsoft). You could view this from a FairProcess perspective...

Simply put, people want to be treated like people and not numbers.

Ironically, one of the most "fair" ways to treat people is to treat them only by the numbers. Consider two employees (Dave and Frank) competing for a bonus. Both have generally equal records, but Dave created 50% more profit for the company last year. Fairness "by the numbers" would indicate that Dave should get the award, even if Frank is a more popular/funny/interesting person, and/or Dave belongs to an ethnic/religious/racial minority. (This assumes company-level profit is somehow a metric of individual performance and contribution. Maybe Dave would go insane under the performance pressure if he didn't have a clown like Frank to laugh with; systems need many types of people to function, and engagement validates this reality by encompassing the individual details and the human process.)

One interesting related idea appeared in an article about possible risks in "neural net"-based programs for evaluating loan requests. The goal is to feed the program information about the applicants and train it to make low/medium/high-risk decisions similar to those of an experienced loan-processing manager. Some people are concerned that such a neural net may learn to mimic human discrimination (called "redlining" in the loan industry). See [1] for the article, and [2] for several replies.

A similar case apparently occurred around 1987 in a system used to screen applicants to a medical school. When researchers studied the program, they found that it attached some weight to the applicant's name: the program had detected a correlation between the names of ethnic minorities and the results typically given by a human decision-maker. In some cases, simply changing the name of an applicant could change the results. See [2] for the story (near the end).
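The medical-school story is easy to reproduce in miniature. Below is a toy sketch (all data, thresholds, and helper names like make_applicant are invented for illustration, not taken from the 1987 system): a one-neuron logistic-regression "net" is trained to mimic a biased human screener who demands higher scores from applicants with minority-sounding names. The model learns a negative weight on the name feature, and changing only the name of a borderline applicant flips its verdict.

```python
import math
import random

random.seed(0)

# Invented synthetic data: each applicant has a qualification score and a
# flag derived from their name (1.0 if the name resembles a minority
# group's). The historical human screener was biased: minority-named
# applicants needed a much higher score to be accepted.
def make_applicant():
    minority_name = 1.0 if random.random() < 0.5 else 0.0
    score = random.random()                      # qualification, 0..1
    threshold = 0.8 if minority_name else 0.4    # the biased human rule
    accepted = 1 if score > threshold else 0
    return [1.0, score, minority_name], accepted  # leading 1.0 is the bias input

data = [make_applicant() for _ in range(2000)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train the model by batch gradient descent to mimic the human decisions.
# Note the training code never mentions names or minorities explicitly.
w = [0.0, 0.0, 0.0]                              # weights: bias, score, name flag
for _ in range(500):
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for i in range(3):
            grad[i] += (y - p) * x[i]
    for i in range(3):
        w[i] += 2.0 * grad[i] / len(data)

def accept(score, minority_name):
    return sigmoid(w[0] + w[1] * score + w[2] * minority_name) > 0.5

# The model has absorbed the bias from its training labels: the name
# feature carries negative weight, and renaming a borderline applicant
# (score 0.6, between the two human thresholds) changes the outcome.
print("weight on name feature:", round(w[2], 2))
print(accept(0.6, 0.0), accept(0.6, 1.0))
```

The bias appears nowhere in the model's code; it is inferred entirely from the training labels, which is why audits like the one described above have to probe the trained weights and try counterfactual inputs such as a changed name.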

See [3] for a reasonably-recent (1996) overview of neural networks in similar applications, and [4] for a concise introduction to neural networks. [Google is so good it's scary.]