Notes on instrumentalist reasoning

Warning: David is doing philosophy without proper training. Please be alert for signs of confusion and keep your hands inside the vehicle at all times.

I was having some interesting discussions with Paul Crowley yesterday evening. He and I have a large enough overlap between our beliefs and interests that we can disagree vehemently about almost every philosophical issue ever.

The particular point of disagreement here was about epistemic vs instrumental reasoning. For the sake of this post I’m going to oversimplify these to mean “I want to believe things that are true” vs “I want to believe things that are useful”.

I’m very much in the instrumentalist camp.

He, meanwhile, wasn’t convinced that this was an interesting distinction and asked me for an example of a false belief that it was interesting to hold. I gave one, but it wasn’t very satisfying.

On my cycle home afterwards I thought about it some more and realised that part of why I struggled to give a coherent answer here is that the framing is wrong. Instrumental reasoning, at least as I practice it, doesn’t really work like that.

The purpose of my instrumentalism is not that I give different answers to questions, it’s that I ask different questions in the first place. Sometimes the result is effectively still a distortion of reality, but it’s one that I’m sufficiently self-aware about doing that I can later come back and ask the question if it then proves useful to do so.

The example I cited, reframed so it makes sense in this context, is one I find quite socially useful: I do not, in general, allow myself to consider the possibility that my friends are being dishonest with me.

It’s not that I think all my friends are honest with me 100% of the time. I think I have a relatively good assessment of individual friends’ honesty, though it’s certainly biased in the direction of thinking they’re all more honest than they are (this is partly because I think it’s important to extend that benefit of the doubt to everyone, partly because I think it’s important to trust my friends, and partly survivorship bias – I probably don’t remain friends with people I think are less honest than they actually are). The forbidden question is not “Do my friends ever lie?” but “Is this particular thing they are saying a lie?”

Why do I do this? The answer is simple, really: worrying about whether my friends are lying stresses me out. I am aware that when the question is emotionally significant and difficult to test, I will be able to find convincing arguments for both sides. I think if I allow myself to think about this I will usually come up with the right answer, but there will be enough evidence both ways that even asking the question will mean I have a constant low-grade doubt that my friends are lying to me. This does not make me a happy David.

So I don’t do it. Easy. I decide that I don’t care about the true answer here and my life will be improved by not asking the question.

This isn’t true in all cases. Sometimes the context is significant enough that I realise I actually have to evaluate this and come up with an answer even if it makes me unhappy. That’s fine. The world does not always give you easy choices. But it’s true most of the time, and it makes my life significantly better.

Essentially the idea is that knowledge isn’t free. There is a cost to obtaining it, and the benefit to obtaining it is highly variable. When the cost to asking the question is higher than any conceivable benefit from doing so, why would we bother?

Another example of instrumental reasoning is something I wrote about previously: Problem solving when you can’t handle the truth. This is a different scenario. Basically you’re in a situation where one of two things is true, and one of them is vastly more likely than the other. In the likely scenario, you will die horribly (well, not especially horribly, but you will die, and that’s pretty horrible). In the other you’ve got a decent chance of survival. A reasoning strategy which assumes that you are in the scenario which involves you not dying is unlikely to produce the true answer, but it significantly increases your chances of not dying.

These two examples seem quite different, but what they have in common is the implementation strategy.

I am essentially reasoning in restricted universes. When making decisions, there is a list of assumptions that I’m basically taking as read, not because I think they are necessarily true but because I think I will make better decisions if I don’t spend time and energy worrying about the fact that they might be false.

This doesn’t always work. Sometimes the real world intervenes and says “Hey, evidence. You’re going to have to question this premise because it’s obviously wrong”. That’s fine. Being able to take premises on board as useful without believing them to be true is the whole point of this philosophy of knowledge. I now have to pay the cost of asking the question, but the cost is no higher than it would otherwise have been – I was attached to this premise not because I believed it was a fact about the universe but because it was making my life easier. Once it stops making my life easier by causing me to come into conflict with observed reality it’s time for it to take a hike.

Looking back at this, it seems obvious that this is the philosophy of knowledge I would have settled on. Back at university I did a lot of maths (it was a very experimental time of my life), but I was also a rabid materialist (in the sense of only believing in concrete things, rather than wanting lots of shiny stuff). Given that, the dominant philosophy of mathematics amongst practising mathematicians (insofar as they cared about philosophy of mathematics at all) was basically Platonism, which seemed utterly silly to me. So I settled on an extremely formalist point of view and spent a lot of time obsessing over axioms (ask me about the axiom of choice some time) – what happens when you drop them, what happens when you add new ones, etc. It seemed perfectly natural to me that it could be interesting to study different types of mathematics with different assumptions.

So I guess my instrumentalism is in many ways just me going “Oh, hey, all those reasoning skills I developed during university are actually useful”. My extra assumptions to make my life easier are axioms I’ve chosen to temporarily adopt. When it turns out that the things I want to model (stuff that’s actually happening) don’t work out properly in the model I’m currently playing with, I back off and try a new one.

Which, I suppose, is how I started off from a position of strong materialism and ended up in a situation where I’m not really interested in silly things like objective truth, only whether believing certain things is useful or not. Life is funny like that.