Fighting Clean and Fighting Dirty, Part 2: How to Handle Being Fallible

June 3, 2009 — Alderson Warm-Fork

My last post set up the problem of justifying a ‘shared framework’, within which people can disagree profoundly (so that, for instance, pro- and anti-abortion groups can ‘fight clean’ by handing out leaflets, instead of ‘fighting dirty’ by shooting each other). It ended on a rather unsatisfied note: the task seems impossible, because what people disagree on will include issues more important than the values aimed at by the shared framework.

And this post isn’t going to be enormously more optimistic than that. I do think the task is, in a sense, impossible, though it’s also unavoidable. This is part of the messiness of reality. That said, I think it is certainly possible to seek greater clarity and offer practical reasoning for how to operate in this area.

(I should clarify that this is meant to apply to a situation of ‘peace’ – if the fight has started, the barricades are up, and you can hear the people sing a song of angry men, then matters change considerably.)

The clue I want to take as my (fairly obvious) first point is this: the problem would not arise if we were gods. If we were infinitely rational beings, morally perfect and immune to error, this problem would not exist. That should point us towards epistemic issues – issues about knowledge – as the origin of the problem.

The basic fact is that all people are finite intelligences, and are prone to get things wrong. They are wrong both about grand questions of ‘what is the correct theory of gravity’ and ‘why does this water fall on my head’, and also about petty questions of ‘what is behind that rock’ and ‘where should I go for dinner tonight’. They are wrong, and profoundly wrong, even about things that they think they are most sure of.

Now, that’s not a very troubling thought when it’s about other people. It’s also not a very troubling thought about me, when I’m sitting down and blogging. But when I have to act, including in matters that are important and make a difference to other people, the fact that pretty much everything I think has a good chance of being the opposite of true is very troubling indeed.

Note that we can’t get around this by simply saying “well, we estimate the probability of being wrong, and then multiply that by the expected result, and then subtract the…” I may be able to get a little more confident by qualifying beliefs with ‘probably’ or ‘about 90% likely’, but the basic fact remains unchanged, since our estimates of probability are themselves likely to be quite wrong. (To appreciate this, consider a 14th-century French peasant trying to put a ‘probability estimate’ on her belief that Jesus was the Christ.)

One partial solution comes from logical fallacies. That is, while what the pretentious call ‘argumentum ad populum’ (everyone thinks this, so it’s true) and ‘argument from authority’ (smart person X thinks this, so it’s true) are logical fallacies in the sense that, within reasoning, they are of no value, they are still of some value outside of reasoning.

What I’m basically saying is that one fairly reasonable response to knowing how wrong you probably are about most things is to provisionally abide by what ‘others’ believe (others could mean a number of things – people you respect, most people, some particular person, etc). Doing so doesn’t necessarily make you more likely to be right, but it ‘disperses responsibility’. If you are wrong, it’s less your fault.

I know this may sound remarkably conformist, but it’s just meant as an expression of obvious fact. For example, take it as obvious that at some point in the future meat-production will have stopped. If one person goes out of their way to imprison, torture, and then kill and eat some pigs, they would be somewhat monstrous. But people who passively participate in buying and eating meat, even if they are, through demand, causally responsible for the same number of dead pigs, are not monstrous. Vegans may criticise them, but it would be stupid to say that 90%+ of the world are moral monsters.

By contrast then, what is it when people ‘break the rules’? In particular, what is it when they do so violently, since (for reasons I may go into more fully another time) I think violence is a sort of ‘extreme case’ of ‘breaking away’ from what’s accepted by/acceptable to the other person?

I don’t think that they act wrongly simply because they break the rules (the world’s not simple enough for that). Rather, by breaking the rules, especially violently, people like Scott Roeder concentrate responsibility in themselves. They in a sense ‘claim’ something: I am so right, and so not wrong, that I have no need to stay within the bounds of what others think appropriate. They are making, in fact, an almost epistemic claim, a claim to special knowledge. They say, in essence, ‘this is justified, trust me’.

And so naturally we judge them as strictly and harshly as possible. Any little failing, any odd leap of logic, any indulgence of prejudice, any possibility that they believed what they wanted to believe, or that their belief was related to their own anger or insecurities or desires, prompts the harshest condemnation, while in another person, it might have been indulged (since, after all, we all do that).

But this still leaves us the basic problem. In fact, it shows us that the basic problem is essentially insoluble. Because sometimes, sometimes, I really am sure of something. And sometimes, not breaking the rules would be a worse crime and a greater betrayal than sticking to them, and I can’t simply ‘write off’ what I believe, because what I believe is, in fact, true. But equally, I can’t write off the possibility that I’m wrong.

So a full-strength solution, which would tell us what rules for defining a shared framework are right, isn’t available, I think. What’s second-best?

I think that once we recognise that the issue is one of individuals having to make unprescribable choices, the best way to approach it is to consider biases – that is, what are the standard ways that people tend to go wrong? Perhaps the best advice we can offer is to try to counteract those biases. This discussion will make reference to the brief array of possible rules set out towards the end of the post linked at the start.

I’m going to suggest two common biases. The first is that people have a habit of over-estimating how right they are, because it feels good to think you’re right. So if people tend to neglect how fallible they are, then we might formulate the advice ‘assume you’re more fallible than you think’, i.e. err in favour of a stricter rule.

But this doesn’t apply equally to all groups. Groups that are vested with authority and taught to find their identity therein are much more liable to over-estimate their rightness than oppressed groups taught to accept the judgement of their superiors. This manifests in what I previously called ‘rule 6’: those in authority can kill and torture, with however many thousands of corpses’ worth of ‘collateral damage’ they think necessary, but everyone else must be quiet and pacifistic and stick within the law. I won’t bother to explain why I think this is a pernicious bias.

So the best ‘answer’ that can be arrived at is to try to counter this by looking for something ‘in between’ the authoritative and the oppressed, erring on the side of strictness.

In the list I used last post, 2. (minimise but accept ‘collateral damage’) or 3. (kill, but only the guilty/combatants, not civilians) seem to be the rules usually applied to authority-holders, while 4. (non-violent violence, i.e. harassments, lawbreaking, attacks on property) or 5. (only what’s legal, and definitely no violence) seem to be those usually applied to others.

So, going for a middle path, erring on the side of strictness, leads me to think 4. is probably most sensible. That is – I would be willing to use vandalism, threats, phone harassment, breaking and entering, destruction of property, and other forms of ‘direct action’ to impair and undermine violent systems, even in a situation of ‘peace’, if I thought it was strategically justified. I also think that the government should use nothing stronger than those in its similar efforts, but then of course it wouldn’t be a government.

This post has not really been a single argument so much as a ramble with about two-thirds of a point behind it.