I try hard to be open-minded. I think I succeed at that reasonably well, but I still regularly get surprised at the discovery of a prejudice I didn’t know I had.

I don’t know whether it’s possible to rid oneself of all prejudice – I suspect it’s not. If so, the best I can aim for is to be on the alert for prejudices, try to rid myself of them when I discover them, and try always to remember that any opinion I have – regardless of how carefully thought out it may seem – may be inextricably tied up with some prejudice I don’t yet realise I have.

The Wikipedia article on Cognitive Biases has a very long list of them. With so many opportunities to go wrong, it’s hard to imagine one can escape all of them.

There’s a popular phrase: ‘It’s good to have an open mind, but not so open that your brain falls out’. I don’t like that phrase at all. It is most commonly used by bigots in an attempt to defend their bigotry while at the same time appearing rational. Nobody’s brain has ever fallen out from being open-minded, either literally or metaphorically. However, the rankest nonsense phrases often have a grain of truth in them, and there is a grain of truth even in that one. It is that, in order to achieve anything with our thoughts, we need a framework within which they can operate, and that framework will be made of rules and suppositions that are accepted without evidence. I agree that we need such a framework, but what is crucial is that we acknowledge the existence of the framework, acknowledge that it has no supporting evidence, and acknowledge that we hence have no basis on which to claim it is better than any other framework. That doesn’t mean we should refuse to act on conclusions drawn within our framework. But it does mean that (in my opinion, which was derived within my mental framework!) it is a good idea to regularly examine and challenge our framework, and consider alternatives. Sometimes that may lead to a radical change in worldview, which opens up whole new vistas.

It may lead to a Christian becoming a Buddhist, or vice versa. It may lead to a Socialist becoming a Libertarian, or vice versa. It may even (heaven forfend!) lead to personnel exchanges between Platonism and Existentialism. I may have my own preferences about which of those and other sets of ideas most people align themselves with but, regardless of the outcome of any migrations of beliefs, I see it as good that people regularly examine their beliefs, so that belief migration becomes a commonplace possibility. If we know what our prejudices are, we have the power to change them. But we cannot change a prejudice we don’t even know we have.

Here are two of my prejudices. The first is that it is preferable for there to be less suffering in the world. I know it’s a prejudice. I know I can’t prove it. But I’m going to hang onto it, for now at least.

The second prejudice is that if I have observed two phenomena to occur in close conjunction many, many times then, in the absence of strong reasons to the contrary, I should expect them to continue to occur in conjunction in future. Every self-supporting person on Earth has this prejudice. But nobody even realised it was a prejudice until David Hume pointed it out in the eighteenth century – his famous ‘Problem of Induction’. If you don’t believe me, think of how you use language. You speak English to somebody – say it’s Bertha – expecting her to understand it, because she has understood English when you have spoken it to her in the past. But why should the fact that Bertha has always understood spoken English in the past indicate anything at all about whether she will understand it in the future? You might object that you know that Bertha learnt English as a child, so you know she knows English. But then you are relying on the association between the events ‘X has learned English’ and ‘X understands English’, which has been reliably observed in the past – and why should that tell us anything about whether it will be observed in the future? Whatever objection is raised, I (or rather David Hume) can find an answer to it. But I’m still going to hang on to this prejudice.

Prejudice in Music

I had been thinking over this in the context of musical styles. It’s hard to think of any other human activity, the study of whose history is so riddled with the use of the word ‘shocking‘. The most casual observer probably knows about how Rap was considered shocking when it emerged in the eighties, ditto Punk in the seventies, how Rock n Roll was considered shocking when it emerged in the fifties, and how Jazz was considered shocking in the early twentieth century.

But the history of people being shocked by music goes back much farther than that. The history of classical music in particular is regularly punctuated by shocks when some innovator broke hallowed rules. Working back in time we have Schoenberg, Stravinsky, Debussy, Wagner, Beethoven, Haydn and Monteverdi as major disruptors of established musical conventions.

The following story from a radio music presenter made a big impression on me. They told of how they had been working in the archives of a classical music operation, listening to, classifying and cataloguing recordings. After doing this for a few weeks they walked past a studio where music was playing over a loudspeaker. Appalled at the terrible, disorganised racket they were hearing, they asked somebody what the noise was. It was JS Bach! [For the non-classical music buff: JS Bach was a genius who lived from 1685 to 1750, in the ‘Baroque’ period, and is as revered a part of the musical establishment as it is possible to be.] The reason it sounded so terrible and formless was that the music the presenter had been listening to non-stop for the last few weeks was all pre-Baroque, and hence operated within a framework of rules and norms that Bach’s music ‘broke’. If they had heard it a few weeks earlier they would likely have thought ‘how lovely!’ or maybe even ‘that’s a bit old-fashioned!’

I want to pick on Schoenberg, because on the face of it he might seem to go as far as one can go in breaking rules. The Austrian composer Arnold Schoenberg rebelled against the tyranny of tunes having to be in a musical key, like G major or A minor. Although only classical pieces tend to state their key, with names like ‘String Quartet in E flat Major‘, nearly all pieces have one. Perhaps the most famous song of all, Lennon and McCartney’s ‘Yesterday‘, could have been called ‘Sad song in F major‘. Key changes do occur within a piece, but they have a big effect, because we become attached to the key in which the tune is set. That’s why key changes are often used towards the end of a song to build up the levels of excitement and energy towards a final climax.

Schoenberg’s project was to refuse to use any key at all, not even for one phrase at a time. To do that he invented his ‘Twelve-tone system’, in which each short section of the piece is built on an ordered series (called a ‘tone row’) in which every one of the twelve notes of the chromatic scale must be used exactly once. By giving every one of the twelve possible notes equal status, he prevented any note gaining prominence as the ‘Tonic’, the home note of a key. Unlike another famous Austrian, Schoenberg was very anti-racist: he wanted the black piano keys to get as much opportunity as the white piano keys in his pieces (note that’s a different use of the word ‘key’).
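For the programmatically inclined, the rule itself is easy to state precisely: a tone row is simply a permutation of the twelve pitch classes. Here is a minimal sketch in Python – my illustration only, since Schoenberg composed his rows with great craft rather than by shuffling:

```python
import random

# The twelve pitch classes of the chromatic scale, numbered 0 to 11
# (0 = C, 1 = C sharp, and so on up to 11 = B).
CHROMATIC = list(range(12))

def random_tone_row(seed=None):
    """Return a tone row: an ordering of the twelve pitch classes in
    which each appears exactly once, so no note can dominate as a tonic."""
    row = CHROMATIC.copy()
    random.Random(seed).shuffle(row)
    return row

row = random_tone_row(seed=1)
# Every one of the twelve notes is used exactly once.
assert sorted(row) == CHROMATIC
```

The assertion at the end is the whole point: whatever order the shuffle produces, each note gets equal status.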

Here, from YouTube, is a Schoenberg piano piece using his twelve-tone system, for you to enjoy.

As I was musing over whether Schoenberg had achieved the ultimate in open-mindedness, I suddenly realised a hidden prejudice. Sure, he had proclaimed equality between all twelve notes. But a note is defined by a frequency – vibrations per second. So the number of possible notes is infinite, not only twelve per octave. Between any two different notes there lie an infinite number of other frequencies. In Western music, which is descended from Ancient Greek music, the smallest interval between two notes is a semitone, which means the ratio of the two frequencies is 2^(1/12), or about 1.06.
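To make that arithmetic concrete, here is a small Python sketch of the equal-temperament tuning just described (the 440 Hz reference for concert A is the modern convention, my assumption rather than a detail from the text):

```python
# In equal temperament the octave (a doubling of frequency) is divided
# into twelve equal steps, so one semitone multiplies the frequency by
# the twelfth root of two.
SEMITONE = 2 ** (1 / 12)   # about 1.0595

def note_frequency(semitones_above_a4, a4=440.0):
    """Frequency in Hz of the note a given number of semitones above
    (or, if negative, below) concert A at 440 Hz."""
    return a4 * SEMITONE ** semitones_above_a4

print(round(SEMITONE, 4))             # 1.0595
print(round(note_frequency(12), 2))   # 880.0 - an octave up doubles the frequency
print(round(note_frequency(-12), 2))  # 220.0 - an octave down halves it
```

Twelve semitone steps multiply the frequency by (2^(1/12))^12 = 2, which is exactly the octave.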

In classical Indian music, the twelve Western notes are used, plus ten others, called ‘nadas’, giving twenty-two altogether, so that the average gap between adjacent permissible notes is just over a quarter-tone. A piece containing nadas sounds to a Western ear – one not trained to understand those extra notes – as if it were being performed on an out-of-tune instrument.

Here is a scale that goes up an octave in twenty-four quarter-tone steps (not exactly the same as an Indian scale, but closer to that than to a Western scale), then walks back down again. What does it sound like to you?
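Such a scale is easy to compute: each quarter-tone step multiplies the frequency by the twenty-fourth root of two. A sketch (the starting note of middle C, roughly 261.63 Hz, is my arbitrary choice):

```python
# A quarter-tone is half a semitone, so its frequency ratio is the
# twenty-fourth root of two.
QUARTER_TONE = 2 ** (1 / 24)

def quarter_tone_scale(start=261.63, steps=24):
    """Frequencies for a scale that climbs one octave in twenty-four
    quarter-tone steps and then walks back down again."""
    up = [start * QUARTER_TONE ** i for i in range(steps + 1)]
    return up + up[-2::-1]   # back down, without repeating the top note

scale = quarter_tone_scale()
# The top of the climb is one octave (double the frequency) above the start.
assert abs(max(scale) - 2 * 261.63) < 1e-6
```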

Try to sing or hum along to first the chromatic scale, and then the quarter-tone scale. I’m a reasonably accurate singer and can do the first, but can’t even get started on the second.

But even writing music that includes nadas or quarter tones still involves a prejudice against the in-between notes. It’s just a smaller prejudice than Westerners like me have. I expect a piece involving eighth-tone intervals would sound just as weird to an Indian as one using quarter-tones does to us.

If we want to write music that is free from all prejudice, we need to go beyond Schoenberg, beyond Indian music, beyond even eighth tones, and write music in which each note can be any frequency at all, without limiting the choice to notes that are certain multiples and ratios of others.

I wrote a piece of such music. To be precise, I programmed a computer to randomly generate a series of frequencies and note-lengths and produce notes using those. I then produced another version of it, in which each note was rounded to the nearest semitone, so that only the twelve notes in the Western scale were used.
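The procedure can be sketched in a few lines of Python. The frequency range, durations and note count below are illustrative assumptions of mine, not details from the text; the essential points are the unconstrained random pitches and the rounding to the nearest semitone for the second version:

```python
import math
import random

def random_piece(n_notes=40, low=110.0, high=1760.0, seed=None):
    """Randomly generated 'music': a list of (frequency_hz, duration_s)
    pairs in which a pitch can be any frequency at all in the range."""
    rng = random.Random(seed)
    piece = []
    for _ in range(n_notes):
        # Sample pitch uniformly in log-frequency, so every octave is
        # equally likely; keep durations neither very short nor very long.
        freq = math.exp(rng.uniform(math.log(low), math.log(high)))
        piece.append((freq, rng.uniform(0.2, 1.0)))
    return piece

def snap_to_semitone(freq, a4=440.0):
    """Round a frequency to the nearest note of the Western
    equal-temperament scale."""
    semitones = round(12 * math.log2(freq / a4))
    return a4 * 2 ** (semitones / 12)

free_version = random_piece(seed=7)
western_version = [(snap_to_semitone(f), d) for f, d in free_version]
```

The second version keeps the rhythm of the first; only the pitches are herded back into the twelve-note pen.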

Can you tell which is which? They are both weird. Both break most of the rules we are used to. But one is a bit weirder, a bit more free, than the other.

The above is a long way from JS Bach, but is it free from any form of musical prejudice (aka structure)? No. For a start I have constrained the notes to be within the audible frequency range, even though it is entirely conceivable that notes that we cannot consciously hear may still have an effect on our body and thereby alter the sensory experience. I have also constrained the notes to not be very short or very long, in order not to frighten or bore the listener. The volume is also constant, rather than varying between notes, or even within notes. The shape of each sound wave is a perfect sine curve, whereas the wave shape could be allowed to change between and within notes too. That would not change the ‘tune’ but it would change the texture (‘timbre’ in musician-speak). I expect there are other prejudices in there that I have not yet realised.
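The ‘perfect sine curve’ is the easiest of those prejudices to exhibit in code. A minimal sketch of how such a tone is rendered digitally (the 44100 Hz sample rate is the CD convention, my assumption rather than a detail from the text):

```python
import math

SAMPLE_RATE = 44100  # samples per second - the CD convention

def sine_tone(freq, duration, amplitude=0.5):
    """Samples of a pure sine wave: constant volume and the simplest
    possible timbre, with none of the texture a changing wave shape
    would give."""
    n_samples = int(SAMPLE_RATE * duration)
    return [amplitude * math.sin(2 * math.pi * freq * t / SAMPLE_RATE)
            for t in range(n_samples)]

samples = sine_tone(440.0, 0.5)
assert len(samples) == 22050
```

Varying the amplitude or the wave shape over time would be how one starts to dismantle the remaining constraints.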

I like most of my prejudices. I prefer Bach to Schoenberg most days of the week. But it’s good to challenge oneself every now and again with a bit of Schoenberg (or its equivalent), and then to occasionally challenge the Schoenberg with something even more radical.

I like to surprise myself. Sometimes life is a mystery novel, in which we wait with bated breath to see what it is that we will do. This only seems to apply in certain particular situations – mostly where the decision really doesn’t matter.

I live near the top of a hill and, approaching my house from the west or the south, one has to climb that hill. There are several ways of doing that, but two main ones. As I climb the road next to the park on my bicycle I have the option of continuing up that road until I come to my street, and turning up that. Or I can turn earlier and climb up a side street instead. The climb along the side street is shorter but steeper. So when I’m coming home I have the choice of the shorter, steeper hill, or the longer, shallower – but also duller – route.

As I climb towards the turn I wonder – which one shall I take? I could try to reason my way about it, making up lists of pros and cons in my head and weighing them up. But I don’t. Instead I wait to find out which I will do. The secret answer to the question is already there – perhaps buried deep inside my nervous system, or perhaps still awaiting some final external stimulus to tip the decision one way or the other – a breath of wind, a car passing by, a creak of the cranks. Who knows what it is that determines the final choice?

All I know is that, as I approach within about ten metres of the possible turn, I will start to know what has been decided. Sometimes I will know straightaway, and be able to revel in, or marvel at, the certainty of the decision for a second or so before the actual turn. Sometimes I trick myself though – I find myself thinking it has been decided that there will be no turn today, when all of a sudden, at the very last second, there I am turning the corner.

One might say that I like to keep myself guessing.

There is a strange feeling of irresponsibility about it, as if the decision were nothing to do with me. It happens sometimes too with little quips that I think of possibly contributing to a conversation. I see the sentence fully-formed in my mind, but do not know whether I will say it, because I do not know whether it will amuse my interlocutors. So I wait and see whether it gets said, by me. Sometimes it does, and sometimes it doesn’t. I never know which until I’ve already started talking, or the opportunity has passed. Why the difference? Nobody knows.

This may be all very interesting, but it would be less fun if it applied to decisions of significant moral importance. Fortunately, for me and, apparently, for most other people, it doesn’t. I think of Camus’ outsider, Meursault, looking at the man before him on the beach, at whom he is pointing a loaded, cocked pistol (for no good reason), and wondering whether he, Meursault, will pull the trigger. I have never been in a similar situation – one of the many advantages of not owning a gun – but I would hate to think that I could look on the question of which way I would decide an issue of such significance as something amusing, diverting, outside my control.

While one can make theoretical arguments that whether or not I will choose to do X is purely determined by pre-existing circumstances, there is nevertheless a powerful, nay dominant, feeling of being involved in the decision, of being in control of one’s actions, when the stakes are high. So it is for me at least. Perhaps that’s what was so disconcerting about Meursault – that he had no greater feeling of involvement in, or responsibility for, the decision to shoot the man than he would for a decision to take the shorter, steeper path up the hill, or to risk telling a lame joke.

I cannot like Meursault, but I do feel sorry for him. There but for the grace of God and all that. Perhaps that’s part of what Camus was trying to do. He dissolved the black and white certainties of good and evil that plagued the world at the time of his writing.

Last night I dreamed about Voldemort. There’s nothing so strange about that – he’s a memorable character. What makes it worthy of comment is that I realised this morning, for the first time, that I regularly have dreams about Voldemort; until now I have always forgotten them, so I never noticed that they are a recurring phenomenon.

They are fairly dramatic dreams. It’s a classic tale of the good (presumably that’s me, and my companions if I have any) trying to find the courage to face up to evil, to confront it, struggle against it – and the fear it evokes – and, one hopes, to vanquish it. Or at least to banish it until the next time it shows up.

Details are sketchy, and would be boring to relate. But the recurring scenario seems to be that, like Harry Potter, I need to venture into Voldemort’s lair (like Frodo going into Mordor) in order to try to bring his plans undone.

There is no absolute need for me to fight Voldemort – no duel with wands at twelve paces or anything like that. But I need to sneak into his headquarters like a secret agent, perhaps to steal some plans or sabotage some special evil-doing equipment he has constructed. I can’t remember the reasons why I need to go into his headquarters, but I do remember that the mission is essential if evil is not to triumph, and that I am very afraid that he will detect my presence and fling the full weight of his malevolent powers at me. And he does – every time. No matter how quietly I creep about, Voldemort always detects my presence and suddenly leaps out of a wardrobe or some such to attack me with a splendid and terrifying roar.

What happens next I cannot remember. But something extended happens, because he doesn’t win instantly, killing me stone dead on the spot. Maybe some sort of supernatural scuffle, and/or a flight and pursuit, ensues, and sooner or later I wake up out of it on account of all the excitement.

I don’t want to get too Freudian, but I can’t help feeling that these dreams tell me something. The idea of confronting one’s fears and deliberately going into danger, because it is the right thing to do, may have a strong emotional pull on me. I am, at heart, a romantic, notwithstanding my obsession with mathematics and the correct use of grammar.

A rather more surprising aspect is that the dream involves imagining a character that is supposed to be pure evil. It surprised me because I believe the idea of ‘pure evil’ is dangerous, hyperbolic nonsense. I don’t believe anybody is purely evil[i]. We all do some good things and some bad things. Some people – serial killers, dictators, rednecked talkback radio hosts – do lots of extremely bad things, but I expect even they are not purely evil. I expect they are sometimes kind – to family, to friends, even to strangers that manage to excite their interest or compassion – in those occasional lulls of peace between slaughtering hitchhikers, invading neutral countries and stoking up hatred in resentful white heterosexuals for Muslims, gays or environmental activists.

I don’t believe that evil can be personified – that people like Sauron, Satan, Voldemort or The Penguin are possible. Although I then ask myself ‘Are we really supposed to see the mythological figure of Satan as pure evil?‘. Satan is actually a very interesting fictional character. Some of his complexity may stem from the delightfully baroque Roman Catholic teaching on evil – first cooked up by St Augustine in the fourth century. It says that evil is not a ‘thing’, ie it is not a substance or spirit or anything like that. It is just an absence of another thing that is a thing, which is the ‘good‘. It’s an interesting position, and quite appeals to me, up until the bit where it suggests that the ‘good‘ is a thing. That’s a bit too ectoplasmic for me – the idea that there’s some sort of invisible, nonphysical substance called ‘good’ that floats about and goes here but not there (one wonders, can it be hoovered up by those ectoplasm suction guns that the Ghostbusters use?). It’s needlessly multiplying entities, I reckon. Much easier to just say that people sometimes do kind things and sometimes do mean things, and some people do more of one than the other. William of Ockham would not approve of ‘goodness as a thing‘ (although, being RC, maybe he pretended to, in order to avoid being burnt).

Back to Satan, then: the interesting thing about him is that he isn’t portrayed even in orthodox Christian texts as being pure evil. His story is just that of an angel that didn’t want to serve as an angel any more and so – in what appears to me to be an admirable display of honesty and integrity – resigned. Some bits of the Bible such as the book of Job portray Satan as pretty nasty (but then Yahweh doesn’t come out of Job looking very nice either) but there seems room to view him as a complex, conflicted, multi-faceted figure. Certainly not the sort of person you’d want your daughter to marry, or that you’d trust to do your tax accounts, but not bad enough to deserve exile to an eternity of torment either. I haven’t read Paradise Lost but, by eavesdropping on more literate people that have, I have gained the impression that maybe what Milton was trying to do there was investigate that complexity: Satan as exile, as rebel, as lonely iconoclast.

I digress. Sorry about that. Yes, well I don’t believe in evil as freestanding substance, and I certainly don’t believe in entities that personify evil. So it’s interesting that I dream regularly about battling a character who was created to represent pure evil. Does it mean that my disbelief in evil is purely intellectual, and that deep down I am as credulous and fearful of evil spirits as a Neolithic cave-dweller? Perhaps. Who knows?

Or perhaps even Voldemort is not pure evil. After all, JK Rowling does give him an unhappy childhood, to hint at the idea that maybe he was not always that way – that he was as much a product of his environment as anybody else.

But then I can’t be 100% sure that the terrifying Dark Lord in my dream is always Voldemort. All I know for sure is that in the most recent dream it was Voldemort, and that the dream series in general is about a stupendously powerful being (much more powerful than me) that wishes harm to all sentient beings in the universe. Perhaps other dreams are about Sauron, the Wicked Witch of the West, Darth Vader, or John Le Carré’s Soviet spymaster Karla[ii].

Thank goodness my dream self has enough courage to go through with the daring mission each time. It would be mortifying if the last scene of the dream, instead of a big fight-or-flight with a terrifying Dark Lord, saw me skulking about at home in shame and humiliation, having realised that I was too scared to go on the mission that was the free world’s last chance.

I think I can say, without fear of contradiction, that I have one of the bravest dream selves in the observable universe. Now there’s a boast to conjure with! Who else can claim as much?

Andrew Kirk

Bondi Junction, March 2016

[i] And no, Tim Minchin – much as I love most of your work and, like you, detest the power structures and many of the teachings of the RC church – not even George Pell.

When I first read Les Misérables, I was miffed to find that the first one hundred or so pages were taken up with a character that does not even appear in the musical – Monseigneur Myriel, the saintly bishop of Digne (saintly as in incredibly kind, not as in pious). That hundred pages is basically devoted to painting a picture of just how saintly Mgr Myriel is.

When you know you have 1800 pages ahead of you and are impatient for Jean Valjean (the hero) or Javert (his primary antagonist) to appear, you don’t have much patience for detailed portraits of peripheral characters, however saintly. Mgr Myriel’s sole role in the story is to be the first person that shows the cold, starving, exhausted Jean Valjean some compassion, as Jean makes his way on foot from the prison galleys in Toulon, where he was finally released after nineteen years’ penal servitude, to Pontarlier in Central Eastern France, which is several hundred kilometres to the north. Valjean’s attempts to buy food or shelter along his way are rejected by innkeepers, peasants and even local jail-keepers who distrust and fear him because they know he is a former convict. Valjean seems destined to starve or freeze to death until the bishop takes him in and treats him like an honoured guest. Despite that, Valjean sneaks out of the bishop’s house in the middle of the night, stealing away most of the bishop’s silverware with him – the bishop’s only possessions of any value. When the police arrest Valjean next morning and bring him to the bishop, expecting the bishop to accuse him and thus complete an easy arrest for them, the bishop instead says ‘No, I gave all that to M. Valjean, and also, you silly sausage, you forgot to take these that I gave you as well’ (and hands over to the astonished Valjean the few remaining pieces of silverware). This act of unfathomable kindness stuns Valjean, gives him much to think about, and changes his life (but not instantly: he still manages to steal a shilling off a small kid later that day before he finally ‘sees the light’ – a baroque flourish that is omitted from the musical).

There you have it – one hundred and fifty pages summarised in a paragraph!

Victor Hugo is given to these long diversions. Later in the book there is a very long, technical diversion about the topography of the field on which the Battle of Waterloo was fought – apparently just to show what a villainous knave the innkeeper Thénardier (‘Master of the ’ouse’) is. And another later on, almost one hundred pages long, describing the construction and layout of the sewers of Paris – just because Valjean will escape the police by going through them, carrying the half-dead body of Marius, his daughter’s boyfriend.

In most cases these interpolations are irritating. They subtract momentum from one’s reading and cause one to lose interest. That’s how I felt on my first reading of Les Mis. There was no momentum to lose, because Mgr Myriel is introduced on page 1, but one is beset by impatience to meet Jean Valjean and come to grips with the famous story. ‘Why are we wasting time on this bloody bishop?’ the impatient reader (me) asks themselves, and ‘We get it already, he’s a very kind person, can we move on now?’

But on the second reading it was different. I already knew the story. I knew when JvJ would enter, and why, and I knew what role the bishop would play. So, the impatience having been neutralised, I was alert for little details, items of colour and feeling, that were not essential to the plot, but instead artistic features of what is better considered as a vast tapestry.

And on that second occasion, I found myself entranced and inspired by Mgr Myriel. Unlike cardboard cut-out goodies like Dickens’s Little Nell or Little Dorrit (with Dickens, you always know you’re in for some insufferable Victorian sentimentality when somebody appears with the word Little prefixed to their name), Mgr Myriel seems real. One can imagine that there really are such people – rare, yes, but not extinct. I heard the retired heretical bishop Richard Holloway interviewed on ABC radio a couple of years ago and he sounded a little like what one imagined Mgr Myriel might be like.

How was it inspirational? Basically, it just made me want to be like Mgr Myriel. I am sadly aware that my troubled, deeply flawed character is a million miles away from that of Mgr Myriel – a ridiculous seething mass of passionate good intentions with very little in the way of good actions to match. But just observing at first hand the operation of Mgr Myriel’s apparently bottomless well of compassion made me want to be more like him – even if it meant travelling only a few small steps along the way between where I am and where he is. And in addition, Hugo managed to make it seem possible that one could be at least a little bit like that.

It’s hard to put a finger on what it is that makes Hugo’s presentation of Myriel so inspirational and believable and so different from the goody two-shoes vaunted by other Victorian-era authors. Being honest, I have to concede it’s possible that it’s just a consequence of the frame of mind one has when one reads about them. Maybe if I’d read about Little Nell in the right time and place she would be my inspiration. I doubt it, but one must always remain open to the possibility of being mistaken.

One key difference is that Hugo doesn’t content himself with telling us how kind Mgr Myriel is, or with quoting dialogue in which Myriel says pleasant, amiable things. Talk, after all, is cheap. No, what we see beyond his gentle, friendly speech is a long string of tremendously kind actions. Myriel, piece by piece, gives away almost everything he has to those less fortunate than him. Since he is a bishop, and bishops in those days were very wealthy, with palaces, coaches, large incomes and expense allowances, there is an awful lot to give. Having given away almost everything he has, he then researches what other allowances and claims he can make from the church by virtue of his office, does the paperwork to claim whichever ones he can, and then gives those away too.

But never does Myriel congratulate himself. He seems to subscribe to Pierre-Joseph Proudhon’s ‘property is theft’ adage. When asked why he gave this or that thing away, he replies to the effect that he was never entitled to possess it in the first place. But Myriel is no anarchist. His comments are not generalised philosophical points about the nature of private property, but about the specific treatment by society of the people to whom he gives these things. They have been dispossessed, by the operation of law, of privilege, of capitalism, of raw temporal power. As his employer’s policy manual says ‘Whoever has will be given more; whoever does not have, even what they have will be taken from them’. Bishop Myriel does his humble best to redress the imbalance created by the church and state by returning some of the world’s good things – those that he has in his power – to those from whom they have been taken (whether directly or indirectly).

Hugo writes Myriel’s dialogue in such a way that one can imagine doing and saying such things. His lines are not ethereal or sanctimonious, but practical and down-to-Earth. After giving the last remaining silver to Valjean, as well as saving him from a return to penal servitude (this time for life), he professes relief, telling his sister and housekeeper that he was embarrassed to be dining off silver when others in the village had no utensils at all, and that he feels much more relaxed eating his soup out of a wooden bowl.

Here’s a sample. Mgr Myriel is talking to the director of the small, overcrowded church hospital that is attached to his large, luxurious bishop’s palace, and has learned that they have too many people crammed in, in unbearably uncomfortable conditions. After a series of probing questions about conditions in the hospital, Myriel comes out with:

‘Look, Mister Hospital Director, this is what I reckon. There’s obviously been a mistake. You have twenty-six people in five or six little rooms. We have only three people in here [in the palace], where there is room for sixty. It’s a mistake I tell you. You have my lodgings and I’ll have yours. Give me my house [meaning the little hospital]. This one here is your house.’

No moralising, no sermons, no verbal niceties, just ‘Look – this is what we need to do‘.

He even has a sense of humour – a quality nearly always lacking in nineteenth century heroes. When the housekeeper discovers that Valjean has disappeared overnight and so has the silverware, the following dialogue ensues:

Housekeeper: Your excellency, your excellency, do you know where the basket of silverware is?

Bishop: Yes.

HK: Jesus-God be praised! I didn’t know what had become of it.

Bish: [Picks up and presents to the housekeeper the empty basket that he had spotted lying under a hedge, where Valjean had jettisoned it last night] Here it is!

HK: What!? There’s nothing in it! Where’s the silverware?

Bish: Ah, so it’s the silverware you were worried about. I don’t know where that is.

One might be tempted to think that Myriel is a Marxist in disguise – a fifth-columnist subverting the rich, corrupt church from the inside by giving away whatever of its wealth he can lay his hands on. But that is not the case. For instance, he does not give away the (very valuable) robes and ornaments of the cathedral – presumably because he feels that they belong to his congregation, who enjoy seeing them as part of their religious rituals every week. He even believes in a good God – quite an achievement given the corruption and cruelty of those around him who claim to represent that God. He holds fast to a humble, optimistic spiritualism in which God is identified with Love – the value that guides his life in every waking moment.

But he has no time for theology. He has no interest in doctrinal favourites like the trinity, the resurrection, sexual purity, salvation by faith or grace, or the damnation of sinners and unbelievers. When his ecclesiastical colleagues discuss such things he does not criticise them for wasting their time on meaningless arcana. He just shrugs his shoulders as if to say ‘They must be terribly clever to understand such things, but it’s much too complicated for a simple man like me’. If he has a theological position, it is something like this: that everybody is worthy of salvation, and will ultimately be saved. He never quite articulates it though. If he did, he’d be at risk of punishment as a heretic. But all his actions seem to me to suggest such a belief. He expresses no theological opinions except for the primacy of love. He judges nobody, and is happy to admit his ignorance and uncertainty on all ‘ultimate questions’.

In general I am not a fan of clergy. But I make an exception for Monseigneur Myriel, even if he is fictional. He is an inspiration. I could never be anything like him. But if reading those 150 pages again, without the impatience this time, has motivated me to move even a little bit more from where I am towards where he is on the spectrum of compassion, it will have been worth it.

I don’t believe in reincarnation in the sense that I could be (unwittingly) the reincarnated soul of Marie Antoinette, but I think that there may be a germ of insight, perhaps even wisdom, in reincarnation myths.

There, I’ve said it. I’ve probably lost half my small readership right there. Let me try to explain, before I lose the other half. It’s not as bad as you think.

‘Here’s the thing’, as I am told young people say these days:

I am very taken by David Hume’s views on the self (as I am by many of Hume’s ideas). He was unable to find that he had any persistent self, no matter how hard he introspected (is that a word?). All he could find was ‘bundles of perceptions’. There is no perceptible separate watcher – a homunculus sitting in an armchair, as it were – watching those perceptions on a High Definition screen with SurroundSound. The perceptions just happen. And they are tied together – identifiable as the perceptions of David Hume – by occurring in the presence of the memories of the physical human body that bears that name.

There is a continuity to the stream of perceptions. They succeed one another, blend together and overlap. But that lasts only for as long as consciousness does. It is interrupted, usually at least once a day, by sleep, anaesthesia, concussion.

We say that we ‘return to consciousness’ but really it is not a return but rather a completely new stream of consciousness. The only connection to the previous one is that it occurs in association with the same human body, and hence that it has essentially the same set of memories.

We do not remember returning to consciousness. Or at least I don’t. Daniel Dennett explains this nicely in relation to peripheral vision. He says that we can’t perceive the boundary of our visual field (try it!) because to perceive a boundary we need to be able to see both sides of the boundary and, by definition, we can’t see the far side of the boundary of our visual field. Similarly, we cannot perceive the instant of regaining consciousness because to do so would require our being conscious of not being conscious immediately before waking up, and that is a contradiction. This only applies to dreamless sleep because when we wake from a dream we were conscious on both sides of the boundary, and we quickly realise that what went before was a dream.

So in a sense, the world is just full of streams of consciousness, each made up of a series of overlapping sensations and thoughts, with most streams lasting no longer than about sixteen hours. We can, if we wish, group those streams of consciousness based on the human body with which the stream is associated, but that grouping is fairly arbitrary. We could just as well have grouped them by the day on which they commenced, by length, or by mood.

Well, perhaps it’s not entirely arbitrary. Apart from memory and a shared body, there is one other thing tying a body’s streams of consciousness together, and that is that each stream cares very much about future streams that will be associated with that body. So Tom, as he goes to bed, cares more that tomorrow he has to wake up 15 minutes earlier to get to an 8:30 meeting at work than he does that Rajesh in Mumbai is going into hospital for a triple bypass operation, even though the stream of consciousness that is Tom-today is as distinct from Tom-tomorrow as it is from Rajesh-tomorrow. This chauvinistic, body-centric caring is easily explicable by evolution. Animals that cared about their future states of consciousness – particularly about whether the animal would be healthy and happy in future – survived better than animals that did not. We can’t fight it. That’s just the way our nervous systems are configured. But neither can we draw any metaphysical conclusions about the existence of some spooky continuous self or ‘soul’ from it.

If one is a Cartesian Dualist, one believes that there is a ‘soul’ attached to a body, that is non-physical – whatever that means. Although Dualism was the predominant metaphysical view for the last few millennia, it appears to be a minority view now. One can be an Immaterialist – denying the existence of matter and asserting that everything is mental, or one can be a Materialist – asserting that minds are just physical phenomena that we don’t properly understand yet. But either way, most people are Monists – meaning that they believe the world is basically only made of one fundamental kind of ‘stuff’. I feel quite fond of Dualism, if only because it is quaint, old-fashioned and a minority view – which is always attractive to me (which is why I’m typing this with a non-Microsoft word processor on a non-Microsoft, non-Apple operating system). But try as I might I just can’t believe it, so I’m afraid I’ll have to leave it aside and plough on with my Monist biases.

What about before we were conceived then? Nobody seems to feel any big deal about the fact that there are no streams of consciousness associated with their body before they were conceived. I wasn’t conscious then, so I wasn’t around to notice the fact that I wasn’t conscious. Nor can I identify my first conscious moment, probably because of the Dennettian boundary problem already mentioned. I suspect that ‘my’ body gradually attained consciousness, and gradually attained memory, over the first months or years of ‘its’ life.

I feel similarly about what will happen when this body dies. Since I don’t believe in a Christian, Islamic, Valhallian or Olympian after-life, I think that there will simply be no subsequent streams of consciousness associated with this body, and no streams of consciousness that share memories with streams of consciousness of this body. It’s Just As Well really, because after a few years, the body will have been gobbled up by worms and/or fish and/or bacteria and there will be no body left with which streams of consciousness could associate themselves.

And yet….

And yet…. there is something in being human that makes it almost impossible to comprehend that the consciousness of this body will cease forever. Perhaps it’s an evolutionary advantage to feel that, or maybe it’s just random. But it’s there, and I think that that feeling accounts for why nearly all cultures have developed some sort of after-life mythology.

Some deny the cessation by believing in an after-life – a continuation of the ‘same’ consciousness. It’s by no means obvious what ‘the same’ means here. My guess is that it means there will be future streams of consciousness that share memories with the body’s pre-death streams of consciousness. Some deny the cessation of consciousness, or at least mortality, by considering their children or grandchildren to be continuations of themselves. Others deny it by looking at their achievements – their legacy to the human race.

Here’s my answer:

After the death of this body, ‘I’ will still be conscious because every consciousness is an ‘I’. In other words, ‘my’ consciousness won’t cease because at any point in time, all those that are conscious will be conscious, and all those consciousnesses are ‘mine’ because every stream of consciousness is of a ‘me’.

‘My’ streams of consciousness don’t stop happening. All that stops is that there are no more streams of consciousness associated with this particular body, and this set of memories. So – and here’s the wibbly-woo, new-agey bit – ‘I’ become those other streams of consciousness, because they are all ‘I’. We were never really separate, it’s just that each individual stream of consciousness is locked in its own perspective for as long as it lasts – sixteen hours or so.

There’s all sorts of metaphors one could use for this, and they’re all wacky, but they have to be, since we are dealing with the indescribable. One I like is the idea of consciousness as some sort of fluid that is subject to conservation laws in the same way as energy, momentum, angular momentum, electric charge and matter. So whenever a stream of consciousness ends, because of sleep, death or whatever, the amount of consciousness it contains is released and flows into other streams. It’s a metaphor, alright (!?!), so don’t go reaching for those scientific instruments or ectoplasm-detectors or whatever they had in Ghostbusters to try to catch and measure this fluid.

Another metaphor is that in a sense ‘I’ am imprisoned in my own consciousness, unable to perceive what another perceives, no matter how close I am to them. When my stream of consciousness ends – usually around 11:15pm – ‘I’ am set free and can become someone else – another ‘I’. For some reason I visualise a bird – probably a dove (how twee) flying out from a cage whose door has been opened.

It is key to this perspective that consciousness is fungible, not hypothecated (after all what’s the earthly use of studying finance if you can’t insert technical financial terms at strategic points in a philosophical discourse, just to show off). In other words it’s like money. We can no more say that the consciousness from my stream of 29 May 2015 became that of Elton John on 30 May 2015 than we can say that my deposit in the bank paid for part of a particular customer’s home loan. That dismisses the possibility of my being Marie Antoinette right off the bat.

But just as all of a bank’s liabilities fund all of its assets, the consciousness that is liberated when I go to sleep tonight will replenish the consciousness of all streams that are going at that time. So I am connected to Marie Antoinette not because her consciousness – as a discrete entity – became specifically mine (with many other users in the 200 years between), but because we all share in the same cosmic pool of consciousness, that is particular to no body, and is drawn upon and supplemented billions of times per day as streams commence and end, be it by sleep, waking, death, birth, fainting, or other cause.

Arguably, a problem with this perspective is that consciousness will not persist indefinitely – at least not in this universe. We can be pretty confident that when the universe finally approaches heat death, no life will remain. So where does the consciousness go then? Well, that’s where the fact that the whole idea is a metaphor comes in handy. One great thing about metaphors is that you can drop them when something doesn’t fit, and pick them up again a little later. No metaphor fits every situation, because if it did, it wouldn’t be a metaphor (it would be the thing itself). So we drop it and think of something else, just as Shakespeare did when he realised that seas don’t generally fire arrows at you. Oh, no wait….

But why bother with a metaphor at all?

One might object that it’s silly to use a metaphor to orient oneself towards experience, especially when one knows that the metaphor will fail in some instances. My response to that is that every single one of our beliefs is a metaphor, and fails in some instances.

I tell myself I am sitting on a stool in front of a table to type this. The stool is solid and brown and the table is solid and purple. Yet that’s all metaphor too. The atomic theory tells us that what I’m sitting on is mostly empty space, and has no intrinsic colour. It has no integrity either, as it is constantly exchanging particles with its surroundings. But that too is only a metaphor, as quantum mechanics casts doubt on the whole notion of persistent particles, and who knows what even weirder theory will replace quantum mechanics and reveal it to be the crude metaphor that it undoubtedly is. It’s turtles all the way down, and there’s no reason to suppose that there’s a bottom.

Metaphors are neither true nor false, but they can be useful. We are story-telling animals, and stories – aka Metaphors – are the only way we can make any sense of life. They give it a shape that we can handle. Quantum mechanics is a useful metaphor if we want to make a laser (but not if we want to explain a black hole), and my metaphorical idea of this stool is useful if I want to have the experience that I call ‘sitting down’. So my metaphor of consciousness as a shared, universal, substance is useful to me if I want to think about inconceivable issues such as the non-existence of a persistent self, the lack of any conscious processes of this body before it was conceived and after it dies, and the relationship of all we people, and other animals, to one another.

Metaphors are also sometimes called myths, and they are just as good when they have that name.

Is this all just avoidance?

I can’t help pre-empting criticisms. It’s a vicious habit I picked up, I don’t know when, but a long time ago. The wisdom of the ages says don’t bother, because it makes one’s writing longer, more complex, disjointed, ugly and harder to read. And critics rarely pay attention to one’s pre-emptions anyway. I can write “most dogs have fur that causes allergies in some people, but poodles don’t”, and some eager person will still respond “aha, but what about poodles? Got you there!”.

But since, like many people, I am my own worst critic I can’t help the odd pre-emption (of my own self-criticism), so I’ll allow myself one (or is it two? Did I already do one? We addicts are hopeless). Here it is.

Isn’t this all just some pathetic attempt to rationalise one’s way out of a fear of death by postulating some ridiculous Universal Consciousness? Why not just admit that when a body dies, it has no more conscious experiences, and that’s that?

Well Andrew (I reply), I’m glad you asked that question. Firstly I’d just like to observe that I did already say that (I believe) a dead body has no more conscious experiences, and there will be no more conscious experiences that have any memories of experiences that the body had. So this myth/metaphor doesn’t seek to deny or avoid that.

Nor is the myth relevant to fear of death, at least not for me. I used to fear death when I believed in a personal after-life, because I feared the punishments that had been threatened in that after-life if I didn’t conform to the strict expectations laid out in a rather large book of unrealistic rules. In fact I even feared the alternative of being ‘rewarded’ with eternal happiness, because I was convinced that no matter what treats and delights that reward comprised, I would be excruciatingly and agonisingly bored within a few billion years. But once I ceased to believe in an after-life, I ceased to believe in the possibility of such punishments, and hence I ceased to fear death. That is different of course from the fear of how one gets there (‘dying’), as I imagine that being squashed under the wheels of a Land Rover or being eaten by enraged Koalas is rather uncomfortable, albeit only for a short while.

No, the purpose of the myth, as far as I understand it, is twofold: first to escape the niggardly narrowness of the first-person perspective that is imposed on us by our bodily structure; second to open up possibilities for contemplating the mystery of consciousness, a phenomenon that no amount of scientific investigation seems ever likely to be able to explain. Given how mysterious and indefinable consciousness is (as opposed to mere brain activity that interprets sensory data, processes information and generates physical actions including speech), how unnecessary to the evolutionary account of the human brain it is, and how we (ie David Hume and I) are unable to detect any subject (‘self’) of this consciousness, it appears less ridiculous to me to regard consciousness as something primal, something universal that transcends individual bodies, than as an inexplicable phenomenon that arises in association with lumps of meat that are configured in just the right way.

Does that sound like a Humph! ? It wasn’t meant to. Ah well, if it is so, let it be so.

When I was young, I believed my body to be functionally perfect. I wished at the time that it were better-looking, and especially for it to have fewer pimples, but I thought it was functionally fine, most of the time.

I suppose I thought it perfect in the sense that it could do anything that I could reasonably expect it to do. It couldn’t attract girls like Robert Redford, run as fast as Sebastian Coe or be as muscly as Arnold Schwarzenegger, but it could run, jump, talk, dance, sing, read, write and do all the things seen by many as prerequisites to a full life.

Whenever I incurred an injury, there was always a lurking horror that it might lead to a permanent degradation of my body’s capability. One could tolerate a temporary loss of use of a limb, courtesy of a sprain or cut, but the idea of permanent loss of use was too horrific to contemplate. When I injured myself, the first thing that occurred to me was ‘I hope it’s nothing permanent’.

Although I might have been less than Robert Redford in some schoolboyish calculus of value, I didn’t want to be reduced to less than I was at present.

Indeed, I expected my body to get better – stronger, faster, fitter, taller, less pimpled – and it generally did, up to the mid-twenties, maybe even longer than that for some activities, like long-distance running.

It was an obsessive attitude – like somebody that has a new car and is terrified that it might get a tiny scratch. It’s not that I spent time worrying about it. It was just that whenever an injury happened there was that sudden concern lest there be a permanent scratch.

The analogy isn’t a good one though, because scars were one change that I was perfectly prepared to accept. As long as they didn’t affect my body’s ability to function, and weren’t on my face, I was happy to accumulate scars. I suppose I imagined that they made me seem more manly. I have a dent in my right quadricep from running into a wire fence in the dark in 1987, a dent in my scalp from ducking insufficiently as I ran through a gap in a wrought-iron fence in 1996, a gouge in my left shin from slamming it into the iron footrest on the milk truck I worked on in 1979, when I slipped as I ran up to it, and from 1980 a scar across my right palm where I slipped with a milk bottle that then broke and stuck into my hand (Don’t ever let anyone tell you that running is a safe sport). I have many more scars, but those are the most noteworthy ones. And they’re only visual blemishes. They have no impact on function.

So it came as a shock the first time I had to accept a permanent loss of capacity. I’m not exactly sure when this shock was. It may have been in 2000 after an operation that was less successful than I would have hoped. That was like the first crack in the dyke, and the flood soon started coming through. A few years later I obtained my first pair of reading glasses and now I can barely read anything less than two metres away.

Fortunately, one’s concern about losing function seems to diminish in inverse proportion to the rate of loss of function. Once the new vehicle is a little scratched, one doesn’t worry so much about subsequent scratches.

Now of course, the vehicle is slowing down as well. A hard half-hour time-trial run for me now is much slower than a casual, conversational one-hour cruise jog was twenty years ago. But after a while, one comes to terms with it. It doesn’t worry me (I do wonder how I’ll cope when one of the wheels falls off though, or the transmission breaks).

And with the diminishing concern comes the realisation that my body was never functionally perfect anyway. It seems less dire to deteriorate from an already imperfect position than to suddenly lose perfection.

I wonder if there is anyone in this world of seven billion people that is entirely happy with how their body works – the digestion perfect, teeth uncavitied, sleep easily achievable each night, anxiety, shyness or embarrassment never a problem; and for females: a regular, painless and easily manageable menstrual cycle. Out of seven billion bodies there must be one that has fewer problems than anybody else. But I imagine that even that one has some small challenges.

And all bodies will deteriorate as they age. Even if biologists eventually find the answer to programmed cell death, so that there’s no longer such a malady as ‘just getting old’, we’ll all still gradually accumulate permanent damage from our interactions with rocks, roads, fences and milk bottles. Fortunately, it seems that as we age we also learn better how to accept at least some damage with equanimity.

I’ve been reflecting on the notion of property recently – that is, on the idea of owning things.

Owning things used to be very important to me. When I didn’t have much, I longed to own all sorts of things. I particularly remember saving up for a Texas Instruments TI-58 programmable calculator, and for a custom-made racing bike frame that used Reynolds 531 double-butted tubing. In each case, as I saved, I spent much time dreaming over what it would be like to own it, reading whatever I could about the product and wishing the time away until I could afford to buy one.

Once I managed to buy them, I loved to sit and admire my acquisition. I remember being enchanted at the effect of the tiny gold specks in the deep green paint on the bike frame. I felt like I could gaze at it for hours. With the calculator, as well as making up lots of little programs to run on it, I would carefully polish it, wiping away the slightest speck of dust, and sometimes just press the buttons to hear and feel that satisfying click they made.

I feel a little ashamed to say that I also coveted ownership in the area of relationships. The reigning paradigm in those days seemed to be that a girlfriend or boyfriend was in some sense an acquisition or a possession. Somebody with a beautiful petit(e)-ami(e) was seen as possessing great wealth, at least in regard to that relationship. There is more to the phrase ‘I wish she was mine’ – uttered in so many popular songs – than a meaningless conventional grouping of words. This being the seventies, I suppose the sense of ownership may not have been symmetric between the sexes. Fortunately, we seem to have moved beyond that now.

Being a very sport-oriented boy in my teens, notions of ownership and coveting also infused my ideas of body. I would look at the rolls of muscle flowing over the knee of one of my cyclist friends and think ‘I wish that I had [owned] thigh muscles like that – it’s not really fair that he has them and I don’t‘. I did weights in the school gym twice a week and, once I developed a fairly muscular physique, became a little proud of it, as a possession. A friend that I did the weights with was very strong and more muscly than me. But his calves were not as prominent as his other muscles and he would often bemoan the fact that, no matter what exercises he did, he was unable to persuade his calves to jut out in the manly way that he would have liked.

I think the young may be especially literal-minded about ownership. I remember being intrigued by the phrases ‘my doctor’ and ‘my lawyer’ the first several times I came across them. I wondered how immensely wealthy and powerful somebody must be to have their very own doctor or lawyer. I interpreted it as meaning that the doctor or lawyer worked exclusively for their ‘owner’. It took me a long time to realise that the phrases did not imply exclusivity – that all they meant was ‘the doctor I go to when I’m sick’ (although I still thought you had to be rich to use a lawyer). I think that, at least, was a sensible attitude of my young self. In most instances the ownership implied by the word ‘my’ does confer an exclusive right to use whatever is owned. Cases like these, where it doesn’t, are exceptions to the general rule.

Now that I am older, and more affluent, I find that ownership is increasingly unimportant to me. I don’t deny that I enjoy owning a comfortable house in a convenient neighbourhood, and that I would find the transition difficult and distressing if I were evicted and sent to live in a slum. I also find it pleasant to earn and own enough money that I can buy whatever amenities of life and small whims I desire (a new hat, a book – I seem to have no expensive habits). But I no longer wish to gaze lovingly at my possessions as I did before. Also, and I think this is much more significant, I no longer react to the observation of something beautiful, powerful or otherwise impressive by wishing that I could own it.

For example, if I see a very lithe, fit, fast runner bounding along in the park, I don’t wish that I had the talents and capabilities that they have. I am content to observe them and to be glad that there are lithe, fit, fast runners in the world. If I see a beautiful painting or a sleek, super-light, aerodynamic bicycle, I am now able to admire them, appreciate their beauty, and be glad that I had the opportunity to observe them, rather than wishing that I owned them. And if I see an amazingly beautiful, funny, clever or graceful woman, I do not wish that she were my partner – which is just as well as I already have a life partner that I dearly love. I simply feel glad that there are such people in the world and that others can take pleasure in being around them and interacting with them.

In other words, ownership for me is now almost completely unrelated to whether I can be glad about something or someone.

I think part of the reason for this softening of attitude is having meditated on the nature of ownership. I started by wondering what ownership means. We tend to accept it as if it were a fundamental concept, with some sort of invisible, metaphysical existence of its own. As if it were, in some deep sense, ‘important’. Actually though, it is nothing of the sort. ‘Ownership’ is just the name we give to a situation when somebody has a certain amount of control over something. If they have the power to use that thing when they want, for what purposes they want, to prevent others from using it, and also the power to pass control of that thing to someone else either in exchange for something of value (a sale or trade) or in exchange for nothing (a gift), then we typically say the person ‘owns’ that thing.

Note that the power to use the thing, and to prevent others from using it, is simply a social convention. Given certain preconditions – public purchase, say, or presentation of receipts, or habitually carrying the thing around – we tend to attribute ownership of a thing to the person who bought it, holds the receipts, or carries it around. Different societies and social groups set different preconditions for ownership, and those preconditions can change over time. There is nothing metaphysical or Platonic about ownership. There are no graven tablets at the end of the universe that record who ‘owns’ what in an ultimate, objective, society-independent fashion.

Nor is the power granted by society to owners ever absolute. There is nothing that we ‘own’ over which our rights to use it are unlimited. When we own land there will be rules about the uses to which we may put it and the conditions under which we may sell it. For instance one is typically not allowed to subdivide land and sell it in parts without first obtaining permission from a suitable level of government. There are limits on how high one can build and how deep one can dig on one’s own plot of land. It appears from Pride and Prejudice that in some cases there are even constraints on the people to whom one can bequeath land – as Mr Bennet was not entitled to bequeath the house and land which he ‘owned’, and in which he lived with his wife and daughters, to them, because they were all female.

Viewed in this light, we realise that there is nothing that special about ownership, compared to say, just borrowing or hiring. Life is temporary, and ownership is really just an extended lease.

Of course, if one is not subject to a Bennettian entail, one can sometimes bequeath that which one owns to one’s heirs – something we cannot do when we rent. But even that only grants an extension of our temporary control. Generally, in the history of the world, few items of land or moveables have been handed down intact for more than a few generations; most are eventually lost through war, natural disaster, theft or economic catastrophe. Far better, surely, to enjoy the use of things while we can, whether we own, rent or borrow them, than to quest after the illusory and impossible permanence of ultimate ownership.

I concede that there are some practical advantages of ownership. One of the great motivators for young people to buy a house is that they will finally be able to settle in one place, without the likelihood of having to move from one rented apartment to another as leases end and rents go up. Once one owns a house, one can decorate it to one’s taste and modify it to suit one’s lifestyle – which one can’t do with rented accommodation. I think it is in these practical advantages that the true benefit of societal conventions of ‘ownership’ lies. They allow people to attain a greater degree of stability and security. As I have argued, such security and stability are not permanent, but they last longer than is generally achievable with renting or borrowing, and hence can provide greater comfort.

But I return to the observation – that came to me late in life – that one does not need to own something to enjoy it. Perhaps I was peculiar, gazing at the deep, speckled green of my Abeni bicycle frame and thinking contentedly ‘this is mine now’. Was I unusual as a young person, in obtaining pleasure from the sheer fact of ownership? I don’t know. But I am glad that now I do not feel like that – that I can take as much pleasure from seeing a beautiful bicycle, pair of jutting calves or Georgian townhouse belonging to somebody else, as I would if it belonged to me.