
Follow the reluctant adventures in the life of a Welsh astrophysicist sent around the world for some reason, wherein I photograph potatoes and destroy galaxies in the name of science. And don't forget about my website, www.rhysy.net

Friday, 26 May 2017

Absolutes are very comforting things : this is right, that is wrong. Once established, no further thought is required - indeed, an absolute fact can be used to refute anything that disagrees with it. This is extremely powerful, but also potentially very dangerous. What if you're just plain wrong about those facts ?

For some time now I've been trying to convince y'all that science is rarely - but not quite never - about these black-and-white absolute facts. Granted, it has a strong component of factual elements : I measured this result with this instrument using this method. Perhaps the measurements were wrong or the method was not appropriate, but that measurement was taken in that way even so. I even contend that it's possible to establish not merely that things happened but even to prove their underlying causes - though this does require a carefully stated (but by no means unusual or watered-down) definition of proof.

But the scientific method is truly a messy, complex thing. Working out the relations that explain all those disparate facts is seldom as clear as the measurements themselves. And so while I really do like the opening quote very much, I take strong issue with the final statement. That's all there is to it ? Is it really so simple to judge whether something is right or wrong ? Well, not exactly.

Falsifiability Is Nice, Though

At least it's undeniably easier to disprove than to prove something. Whether you can really do this with a single experiment is something we'll come back to later.

Falsifiability is important. You can't just go around saying things like, "I think gravity happens because of Dave the magic farting fox", because that isn't helpful. By what mechanism does this explain gravity ? Does it have something to do with Dave's farts or was this an unrelated additional description ? How would we test for the existence of Dave and his potentially magical farts, or are they so magical that they defy rational analysis ? Perhaps your idea is right, but we can't understand it using science, so it's useless for anyone trying to establish objective truth.

It's obviously much better if it's possible to conduct some experiment which will either confirm the predictions of your theory or find some different result. Now, confirming its predictions doesn't automatically prove your theory. It's still possible that another model would do an equally good job of coming to the same answer by a completely different mechanism - and of course it's possible that new observations would reveal something that your theory didn't predict at all, in addition to what it got right. Don't take the latter case lightly - as the link explains, models can make extremely specific, accurate predictions but still be fundamentally wrong. I call this the "more than one way to skin a cat" principle.

So proving models to be correct is extremely difficult. Not impossible, for reasons I go into at length here - but in short you have to be careful about your definitions. "Proof" must happen within the assumption of an objective, measurable reality in which we're not being fundamentally deceived by our poor senses / Dave's farts / evil demons / Theresa May's witchy powers. Your theory must be carefully stated so that it's as specific as possible - otherwise a single error could be unfairly held to disprove it, when in fact it just needed a very small adjustment*. Properly framed though, the line between theory and fact can indeed become blurred with sufficient evidence.

* Immediately we see that "that's all there is to it" is looking decidedly shaky. Great care must be taken to ensure that the result really does disagree with the fundamental nature of the theory and not some more legitimately tweakable aspect.

But surely disproving a theory is much easier ? It's surely much more decisive when an observation disagrees with prediction, right ? And isn't it absolutely essential to a scientific theory that it should at least be possible to disprove it with enough effort ?

Here's Where Things Get Tricksy

With wonderful irony, this mock horoscope is completely and utterly wrong for astronomers. If you felt like really trolling people, you could also argue that because astrologers have told them what's coming based on the positions of the stars and planets and people believe them, then it's surely a self-fulfilling prophecy...

Well, for starters, being falsifiable is clearly, at best, a necessary but not sufficient condition for a scientific theory. Astrology is falsifiable and is falsified continuously, but that doesn't make it scientific. Without wishing to spend ages on what science actually is, astrology clearly isn't it. Hell, it probably wouldn't even be scientific if it got its predictions right.

But is it even true that it's necessary for an idea to be testable to make it scientific ? If you can't even know if it disagrees with experiment, how can it possibly be scientific ? Doesn't that equate it to the intellectual level of Dave the magical farting fox ?

There are two aspects to this. Sometimes ideas are very difficult to test, and sometimes testing them is fundamentally impossible. Again the boundary between the two can be blurred : if your theory can be tested in principle but only using, say, energy levels comparable to the entire output of the Sun in a million billion squillion years, that is certainly practically impossible. It can't be done in your lifetime. Philosophically though, this is clearly different to the case where no-one will ever be able to test it. Dave's magical farts, your theory might say, might be so damn magical that they defy rational analysis and simply can't be used to make any kind of prediction at all.

Clearly we need some examples from the real world, to show just how messy this can really get.

Yet More Shades Of Grey In the World Of Science. Quelle Surprise.

Ensemble models - false but not false but still useful

Here's a nice one from the world of meteorology. Because the equations of fluid dynamics are incredibly difficult to solve, they have to be simplified. Solving all of the necessary billions of equations perfectly is completely impractical, and it's completely impossible to know absolutely everything about the state of the weather system at any given time. Both theory and observation have intrinsic errors that can't be avoided. So the equations have to be reduced and simplified - it's that or stop making weather forecasts altogether.

This means that instead of running a nice simple single computer simulation, meteorologists run many - each with slightly different equations and conditions based on observations. Predictions are based on what the majority of models say will happen, but of course sometimes only a few models get it right while most get it wrong. The thing is that this doesn't mean those models are flawed and can be thrown on the fire and spat on in disgust - they're still perfectly valid approximations and in other circumstances may actually give more accurate results than the others. So while both the overall prediction and individual models are falsifiable, that falsifiability doesn't even mean you can say they're fundamentally wrong. It's much more complicated than that. It would be better to think of them only as testable : they might be falsified in this one particular case, but not in all.

Example of ensemble models for the path of a hurricane. Most agree very well up to a point : after that they predict radically different paths. Small differences in the models and initial conditions lead to big changes in the end result.
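That sensitivity to initial conditions can be sketched in a few lines of Python. Here I use the logistic map as a stand-in for the real fluid equations - purely a toy illustration of the principle, not an actual weather model - running an ensemble from minutely perturbed starting points and measuring how far the members have spread apart :

```python
import random

def logistic_step(x, r=3.9):
    # One step of the logistic map, a standard toy chaotic system
    # (standing in here for the vastly more complex fluid equations).
    return r * x * (1 - x)

def ensemble_spread(x0, n_members, n_steps, perturbation=1e-6, seed=0):
    """Run an ensemble of members from slightly perturbed initial
    conditions and return the spread (max - min) at the final step."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_members):
        # Each member starts almost, but not quite, at the same point -
        # mimicking the unavoidable uncertainty in the observations.
        x = x0 + rng.uniform(-perturbation, perturbation)
        for _ in range(n_steps):
            x = logistic_step(x)
        finals.append(x)
    return max(finals) - min(finals)

# A few steps ahead the members still agree closely; far enough ahead
# they scatter across the whole range of possible states.
early = ensemble_spread(0.5, n_members=20, n_steps=5)
late = ensemble_spread(0.5, n_members=20, n_steps=50)
print(early, late)
```

The early spread stays tiny while the late spread grows enormously, which is exactly why the hurricane tracks above agree well up to a point and then diverge radically.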

What if you extrapolate falsifiable phenomena to untestable scales ?

Astronomy is awash with examples of ideas that are both difficult and impossible to test. Predictions are often made as to what will happen billions of years in the future, when it's entirely possible the human race will be extinct. And they're also made about things happening billions of years in the past or beyond the observable Universe - things we'll never be able to test. Does this really make them unscientific ? After all, they're based on logic, observations, and testable phenomena on laboratory scales - or indeed on smaller but still astronomical scales and timescales where observations can verify them.

Measurable, useful comparisons do not require falsification

Astronomy also presents good examples of the ambiguity of what "testing" and "falsifiability" mean, further emphasising that they are not the same thing. First, we can't observe the behaviour of individual galactic systems on the necessary timescales. Second, observations generally have large errors in the measurements, because the systems we examine are so faint and distant. So we can test our predictions statistically, e.g. we can see if the population of galaxies in the distant past does what we predict it should have done - we can see how well it agrees with our models. But we can't truly falsify it by checking if individual galaxies behave as we think - there are usually just too many parameters and too many sources of error, so it's often possible to easily explain away a few weird outliers from the overall trend. We can often only determine which model does best rather than which ones are completely impossible.

At least the "weird outliers" in astronomy are often nice to look at.

A simplified example : galaxies in very dense regions tend to have smooth, elliptical shapes, while those in less dense regions tend to be spirals and irregulars. We know there are varying processes which can act to change a galaxy's shape, but which one dominates ? We don't even want to try to falsify which ones happen - because we know they all do - it's just a case of establishing which one has the biggest effect. The effects of the different mechanisms are so complex (and observational errors so large) it's possible we could make any of them work, with enough effort. So which method gives the results closest to reality with the least amount of tweaking ? That's the question we try to answer, which has little or nothing to do with falsifying anything.

Theories can be only partially testable

Just to muddy the waters even more, some theories consist of a mixture of testable and untestable aspects. Inflation predicts that the observable part of the Universe is just a small part of a much larger whole, the majority of which can never have any influence on us at all. It predicts some signatures we can search for, but this most fundamental, dramatic aspect is thoroughly untestable*. And it uses a mixture of complicated mathematical models and hard-nosed observations. So is it science ? Similarly dark matter makes some testable, falsifiable predictions about galaxy cluster interactions, but could rightly be described as unfalsifiable in terms of directly detecting the actual particle in a laboratory - one can always say, "we didn't find anything, so the particle must be harder to detect than that."

* That is, we can't test it directly by flying off to a region of the Universe the theory predicts are forever inaccessible, by definition. More on the standards required for testing/falsifiability later.

Proofs you can't check - who watches the watchers ?

Here's another example - a computer claims to have proved an obscure mathematical theorem but its proof is far too long for any human to ever read. By necessity, this proof must be based on logical deductions, but if it's too long to check then is it really a proof ? This isn't really all that novel either - throughout history, stupid people have stubbornly refused to accept the proofs that cleverer people have come up with. Does that mean that clever people aren't being scientific if they can't explain their ideas to the mentally deficient ? With science becoming increasingly complex and requiring increasing amounts of time to fully understand, this is a real problem. And if scientists don't even fully understand their results, well...

Wiggle room is not cheating, but it can be problematic

And here's another extremely common issue : elephants ! I mean, models which depend on values which can't be determined by observation or experiment. These "free parameters" can be adjusted to get whatever result you like. In severe cases, even when you do have parameters that let you make a testable prediction which is found to be wrong, you might still be able to tweak the parameters so that everything's tickety-boo once again. This is similar to the nature of both dark matter and its alternatives : you can't falsify every possible dark matter particle and you can't falsify every theory of modified gravity. Are these ideas unscientific ?

Doing better than Dave but not very much

What about aliens ? The existence of aliens isn't yet established observationally, but the prospect of their existence is based entirely on scientific findings. Then there's the notion of godlike aliens, chronically invoked as explanations for anything ordinary theories can't account for. Such aliens are consistent with established scientific findings, but in terms of testability they're scarcely better than Dave the magical farting fox. On the other hand, most of the other examples I've just given are clearly much more scientific than Dave the magical farting fox, despite their lack of falsifiability in some areas.

Newton's theory of gravity offers a very nice combination of many of the above problems. It gave quantifiable predictions of the behaviour of objects falling under gravity and apparently explained both the small-scale stuff (apples falling from trees) and the very large-scale stuff (planets orbiting the Sun). Apples falling from trees are easy to measure very precisely, and what's more Newton could - if he wanted - change just a single parameter at a time to see what effect it had (e.g. the height of the tree, the mass of the apples).

Such control was trivial on the scale of apple trees, but absolutely impossible in Newton's day on the scale of planetary distances. It remained impossible for another 270 years, when Sputnik 1 became the first artificial satellite. It still isn't possible on planetary mass scales because we can't appreciably change the orbit of a planetary-mass body. And we can't do controlled tests on galactic distance scales either : like Newton, we can only interpret observations. Which brings us back to dark matter, with some people holding that there must be missing mass to explain the observations (assuming we know the theory is correct, it makes a prediction we can test to some limited extent) and others claiming that the theory of gravity breaks down on really large scales (assuming the theory has been falsified by observations). It's all rather messy, isn't it ?

One final example to provide another perspective : string theory. Simplifying hugely (it's the principle that matters here more than details), this doesn't provide any testable results that would distinguish it from rival models, but philosophically it solves a lot of problems with standard models. For example, removing the infinite density inside black holes prevents the equations from screaming in agony and gives us meaningful results where previously there were none... but we can't test them. Is it really fair to claim this isn't science solely on the basis of lack of current testability ? It too is a highly sophisticated idea based on other, testable ideas. It solves problems with the existing models. Must we really demand falsifiability as an absolute requirement ?

But We Can At Least Falsify Some Things, Right ?

Of course. But to return briefly to the notion that "if it disagrees with experiment, it's wrong", it's worth noting that even here things aren't always as clear as we might like. Models are often complex, not only because of their fundamental nature but because (as mentioned) they may have many different parameters. Now if, given the known values of the parameters, a model makes a prediction which is then found to be wrong, does that automatically mean the model is wrong ? No - it could be that those known parameter values were simply in error. This does in fact happen sometimes, on those rare occasions when theory proves superior to observational "facts". It's a sort of generalised case of the anthropic principle : accepting a theory as true, you can use it to predict observational values.

It could also be that the model itself needs adjusting. Grey areas rapidly emerge when multiple adjustments are needed : at what point do you say, "we've made so many adjustments that this model is unsalvageable, we should just chuck it right out" ? The notion of crystal spheres and epicycles comes to mind, with early astronomers inventing more and more elaborate, unwieldy theories until eventually they gave way to something very much simpler. So while you often can totally falsify many models, in other cases it's far less clear cut.

But let's not go nuts : it's worth repeating that you can completely falsify some models. If your model predicts the existence of a planet around a star that should be detected by some telescope and no planet is found, then your model is wrong. The key is not to get carried away. Maybe your observations are good enough to rule out the existence of any planet around the star, but often they won't be - they will just put some limit on how massive a planet might be there. That may or may not be sufficient to rule out your model, depending on the details. Your model as originally and precisely stated is wrong, but that doesn't necessarily mean that every single aspect of it is completely bunk.

Conclusion : This Is All Deeply Unsatisfying

Indeed. To recap, falsifiability isn't an absolute necessity :

Not all aspects of every theory are falsifiable or even testable, but that doesn't make them useless.

Theories can consist of testable elements which can be extrapolated to scales where they cannot be tested.

We often can't do controlled testing but are limited to purely observational interpretation, which is subject to some amount of unknown error - so our falsification is far less rigorous.

Sometimes when we can't falsify a theory, we can at least say if it makes better or worse predictions than its rivals.

Theories which make no new predictions are still arguably better if they avoid philosophical conundrums or logical paradoxes that plague their rivals.

Bizarre though it may seem, falsifying a model doesn't necessarily mean that it's been disproven.

Models can often be saved - after seemingly being falsified - by honestly legitimate modifications; it's almost as rare to be able to declare a model truly dead as it is to say it's been proven.

There are always grey areas rather than strict boundaries. You can't rigorously define what counts as a legitimate modification to a theory, nor is it sensible to set a time limit within which it should be possible to test your theory if it requires technological advancement to test it.

How do you actually define falsifiability anyway ? What if a theorem is so complex that no-one else can even understand it ?

Falsifiability is, however, always a bonus. A theory never becomes worse by being falsifiable. But the demand for falsifiability, like so many other things, is highly beneficial in moderation but can actually be damaging if taken to extremes.

Indeed, really extreme proponents of falsification often tend to be those of the anti-science ilk. Geology, astronomy and anything else which involves deep time, they say, are not really sciences because we can't actually prove anything - no-one left records for billions of years ago for us to check, and we can't wait around to see how galaxies evolve. In a very strict sense, the evolutionary history of life on Earth and the behaviour of stars over cosmic time really can't be falsified.

Such a way of thinking has many parallels with conspiracy theories. It's not that everyone is lying, exactly, it's just that they are demanding impossibly high standards from the evidence which can never be met. By demanding ludicrously high levels of confidence, by refusing to make even the most basic assumptions and give the data some rudimentary level of trust, in short by refusing to even entertain a hypothesis for the sake of it, they prevent themselves from learning anything. And they rarely say why they have such confidence in their own senses, which is bizarre given the complexities and many, many demonstrable fallibilities of the human brain.

Science, in a very crude sense, requires a sort of leap of faith - just enough to let you play with the data for the sake of it, just enough to trust that extrapolations are not totally outlandish - but not in such a way that your preferred conclusion becomes sacred and inviolable. Newton applied laws testable in Cambridge to the scale of the Solar System, an act that could be described as one of audacious faith, but he surely wouldn't have defended his ideas if the evidence had gone against them*. While science can often have elements similar to that of faith, it's actually more of a sort of highly elaborate play.

* If the evidence had gone against Newton, he wouldn't have rushed to proclaim himself wrong - not (just) because he was a jerk, but because he'd want to be sure which way the evidence was really pointing overall. He'd have known that disproof is a far more subtle concept than a single observation disagreeing with a prediction.

This playful element is sometimes less obvious in the applied sciences like engineering, chemistry and medicine, where a very much higher degree of control and rigour is possible. There, falsification is not only necessary but largely unavoidable. But while it might be nice to hold falsification as a general overarching principle of science, it's a mistake to try and apply this to all sciences in the same way. The standards of falsification possible in engineering are simply impossible to meet in geology, archaeology, astronomy or quantum mechanics.

Yet while the latter disciplines cannot be distinguished from other pursuits by virtue of their falsifiability - which as we've seen is a thoroughly murky area - they are clearly of a different nature to the prospect of Dave the magical farting fox. So if we can't use falsifiability to set them apart, what should we use instead ? I prefer to abandon rigorous absolutes altogether, but there are at least some useful guidelines :

Is the theory based on falsifiable components ? Does it at least make some falsifiable predictions even if not all of them can even be tested yet ? Can it be falsified on some scales even if it's impossible to test on others ?

If you can't falsify that theory, does it at least make predictions which distinguish it from its rivals so that one can determine which one is more successful ?

Does the theory have mathematical rigour even if its observational predictions are untestable ?

If the theory offers no new predictions to distinguish it from existing models, does it at least do as well as those models ? And does it improve on any philosophical difficulties of the current ideas ?

The Universe is, of course, under no compulsion to be testable by a bunch of hairless monkeys on an unremarkable rock floating through the cosmic void who think that digital watches are a pretty neat idea. Consequently, falsifiability is always nice to have if you can get it, but if you insist upon it in all circumstances, then you're hindering scientific advancement - not helping it. A theory that isn't falsifiable doesn't become uninteresting; not being able to "solve mysteries" (to use the journalistic vernacular) doesn't mean you can't ask increasingly interesting questions. Right and wrong answers are only a small component of science - for the most part, it's far more interesting than that.

Feynman, of course, understood all this very well. The opening quote is merely a simplification, a lie to children that's a useful introductory teaching aid rather than a fundamental truth. It's a great principle to aspire to, but the reality is much more subtle. So just to prove I'm not out to attack Feynman, let me give him the final word with something I think is much closer to the truth :

Saturday, 13 May 2017

Consider, if you will, two men each running for office in two modern Western democracies. Let's follow their paths to leadership and see what they have in common and what sets them apart. Just for fun, you understand.

Don and Jim start off being very different even to a keen observer. Don is fat, loud and brash, extremely rich and somewhat influential with the upper political echelons. He doesn't care a jot if what he says is offensive or stupid and no-one would call him a great orator. "Intellectual" is probably the last thing you'd call him, although "boring" would be another very strong contender.

Jim, on the other hand, is somewhat more slender, generally quiet, reserved and very serious. He's no pauper, but neither is he anywhere near the financial elite. His style of speechmaking is calm and measured, perhaps a tad on the dull side. "Intellectual" might be a bit of a stretch, but he seems infinitely more well-informed and rational than Don.

They are not quite entirely dissimilar though. Like Don, Jim too doesn't care very much if his statements are offensive. Both are either extremely thick-skinned or literally don't care what their opponents think of their policies. Both appeal to their supporters by virtue of their apparent honesty, and both are derided by their detractors as morons. Both make statements which their supporters defend in the name of free speech. Both throw the odd tantrum, though while Don genuinely doesn't care what people think of his policies, he can't tolerate any personal criticism whatsoever. Both could be described as inveterately smug.

Now Don and Jim are at opposite ends of the political spectrum. Don is far right, pro-business and anti-government. Jim is far left, pro-regulation and pro-government. Don begins outside the political mainstream whereas Jim is a career politician. Don's policies are often accused of being both stupid and cruel, whereas although Jim's are often touted as stupid they are seldom if ever accused of being cruel. Both, however, are very much at the ends of the political spectrum.

Both men languish in political obscurity, and both have remarkably unsuccessful careers. Jim's biggest success is being elected for 30-odd years, but he's never given a single speech anyone can remember, held office, or even been recognised as the instigator of a single major policy. In fact he's been an activist in a political group for 50 years which has utterly failed to accomplish its objective because no-one cares about it very much. He is moral, perhaps, but not effective. He does, though, often like to point out how consistent he is, forgetting that this constancy hasn't actually achieved much of anything.

Don is unsuccessful for the opposite reason - he is effective but not moral. That is, his business empire is effective enough at making him lots of money (though exactly how much is disputed), but he has a longer than usual record of bankruptcies and a reputation for screwing people over. He pays millions of dollars out of court in order to avoid lawsuits. Making money for Don is what matters to Don; whether his businesses can be trusted to deliver quality goods and services doesn't much enter into the equation for him. So he too is unsuccessful, though he often likes to loudly proclaim what a good businessman he is, just as Jim likes to proclaim what a good campaigner he is even though he clearly isn't.

After decades of remarkably unsuccessful careers, these two old men (both, incidentally, also have a string of failed marriages) suddenly find themselves contesting the leadership of major political parties. Both are regarded by the establishment as no-hopers that no-one in their right mind would vote for. And yet, using their very unconventional, unorthodox and very much anti-establishment styles (albeit styles which are totally different to each other), both go on to stunning, rapid election victories.

Both men are hounded by their detractors throughout their campaigns - and importantly, both greatly exacerbate the usual level of hyperbole. It's something like, "not only do I think this man will ruin the country, but this time I really mean it !". Don is derided as a misogynist, a racist, a sexual predator, a neo-Nazi who hates anyone different from him, accused of colluding with the enemy, an idiot whose policies were tried and failed. Jim is portrayed as a Communist, an unpatriotic terrorist-sympathiser who hates his own country, an idiot whose policies were tried and failed. The accusations toward both men are not entirely the same, but they are extremely similar - especially in their extreme nature. They are both viewed as conservatives, in the sense of harking back to a largely mythical age when things were better for those of their respective (though diametrically opposed) ideologies.

To their supporters, both men are portrayed as suffering from incredible media bias. "You wouldn't be reacting to them like this if the other side had said it," they say, ignoring that both Don and Jim represent a radical departure from the norm. Don's apparent racism is ludicrously deemed to be "crying wolf", which makes absolutely no sense to anyone else, and his frequent changes of policy are excused because "you obviously weren't meant to take it literally" or other nonsense.

Jim's purported antisemitism is a charge which never really takes hold but never entirely goes away either. In contrast to Don, Jim's stubborn refusal to ever change his mind about anything (except under extreme necessity) is seen as laudable moral conviction rather than pig-headed idiocy. He apparently worked out all the best policies 40 years ago and more, so there's obviously no need for him to alter them now.

And to their really extreme supporters, allegations of Don and Jim's worst attributes are not excused or brushed away at all - they are defended as virtuous. Don's a racist ? Well then maybe racism is just true. Heil Don ! Jim's a Communist ? Good, because capitalist pigs will be first up against the wall come the revolution. "You just can't tolerate anyone whose views are different from your own", they say, as though that automatically validates their position. Neither Jim nor Don really do very much to distance themselves from these extremists.

The wider perception (beyond their power base) of both men follows a similar though not identical trajectory both during and beyond their leadership campaigns. Both start out as outsiders, virtually joke candidates. As the leadership campaign progresses and Don begins to look like a real contender, he becomes more and more vilified. He says things so disgusting that even his own party start to turn against him - but too late. They have no better candidate; all their other options were simply less charismatic lunatics. Not once does the man ever seem to have the wider support of the country. Breaking with convention, after winning party leadership he doesn't tone himself down in the slightest in his campaign to win over the nation. Although he wins the premiership position, he loses the popular vote and begins with pathetic approval ratings that only get worse. He only ever appeals, and tries to appeal, to his core base.

Jim's path is a bit harder to gauge since his country has a different system. But he too is always immensely popular with his core supporters but never really wins the approval of the wider populace, though he's not as hated as Don and is perhaps more widely respected initially. He too suffers attacks from within his own party - more severe than Don's, being almost ousted by direct action on two occasions. He too seeks the approval of his core supporters far more than the rest of the country, using that to make his position unassailable despite being hugely unpopular outside his own cult. He too doesn't really care about persuading people, only achieving power - despite, just like Don, never having made any previous attempt at it, though he's not a young man. And when he wins party leadership, he more or less entirely and instantly shuts up. A major political crisis develops - he says next to nothing. Why should he ? He's already leader of his party. And he continues saying nothing until a chance of the premiership itself appears, at which point he goes back on the offensive.

Both men do not respect evidence, though this is manifested in very different ways : Don says whatever happens to pop into his head without regard for the facts; Jim says the same thing he's always been saying as though nothing has changed in the world in decades. Don actively dismisses and attacks experts, selectively choosing which ones are wonderful and which ones are satanic monsters. Jim is more subtle. During the crisis entire legions of political, economic and academic experts loudly and clearly explain why there's only one sensible choice. Jim doesn't attack them - he simply ignores them. Evidence matters no more to him than it does to Don; he's just better at disguising it.

Both of them bullshit, at least on occasion, in remarkably similar ways. When two separate crises turn violent, both of them comment that there is "violence on both sides", despite the fact that the rest of the world places the blame firmly on one side. Don denies collusion with the Russians on the grounds that they weren't able to provide useful information; Jim's supporters say it doesn't matter that he cooperated with the Soviets because he wasn't able to provide them with any useful information. The point that both of them were actively trying to work with foreign enemies is apparently irrelevant and mere attempted treason is somehow nothing to be concerned about. Both of them strenuously deny collusion with the Russians only to make remarkably sympathetic overtures to a nation everyone else views with justifiable disgust.

We should pause, though, to note that it's not all Don and Jim's fault. The environment in which they operate does not favour those who favour evidence - it favours those who have enemies. The system is predicated on weighing opinion, not facts, on creatively interpreting facts to fit existing views rather than altering views to fit the facts. It isn't Don or Jim's fault the system is this way - and the system does also have some powerful advantages, but that is another story. It's very, very hard in this system to change your stance without being viciously attacked by your enemies.

Don and Jim might be victims of the system, but they don't even try and fight it. Both are unpopular populists. Both say what they need to say to their electorate to win the vote, regardless of whether this appeals to the wider community or not. They care about power, not persuasion, and if (as happens often for both men) they fail to implement a policy, they blame this on the establishment. Both say the system is rigged against them but vow to win anyway. They are only ever interested in appealing to their existing supporters, with little or no interest in persuading others to follow them. Neither of them has any scruples about removing those around them from office if they disagree; in Don's case with utmost impunity and to an unprecedented degree, while Jim is more cautious, but only when his position is less secure.

It's very easy to see the loud, obnoxious, hate-filled Don as a villain and a tyrant. It's less obvious that Jim may well be cut from the same cloth. "What, Jim ?" say even many of his opponents. "Nice fluffy quiet Jim ? He might not be very good at his job, but surely he's no despot." Yet beneath the trappings of their various styles, are these two men really so different ? Nice fluffy Jim who hasn't changed his mind in 40 years. Nice fluffy Jim who doesn't care about anyone except his core supporters. Nice fluffy Jim who does nothing in the biggest political crisis in living memory but goes on the warpath come any chance to win power. Nice fluffy Jim who doesn't care about evidence. Nice fluffy Jim who's incapable of compromise. Nice fluffy Jim who vows to win a rigged election. Nice fluffy Jim who refuses to quit after massively losing a vote of no confidence. Nice fluffy Jim who vows to remain as leader even if he loses the election and sends his party into the abyss. And nice, "principled" Jim who puts his own leadership and "morals" ahead of the good of the party and the chance of ever actually implementing any of his apparently beloved policies. What in the world would nice fluffy Jim do if he ever did achieve real power ?!?

Consider Jim carefully. Stripped of his (admittedly massive) differences in personal style and professed ideologies to Don, is his path to power really so different ? His words are certainly different, but are his actions ? And which is the greater danger : the candidate you trust or the one you don't ?

Thursday, 4 May 2017

How do you decide who to vote for ? I've no idea, because I don't know who you are. So I shall tell you how I decide who to vote for, and then you can decide if this is sensible or if you have a better system.

Astronomy has this famous thing called the Drake Equation, which is a way of estimating how many intelligent alien civilisations might be around for us to talk to. As equations go, it's tremendously simple - nothing more than multiplication. It looks like this :

N = R* × fp × ne × fl × fi × fc × L

That is, the number of detectable civilisations (N) is the rate of star formation (R*), times the fraction of stars with planets (fp), the average number of potentially habitable planets per such star (ne), the fractions of those that develop life (fl), intelligence (fi) and detectable technology (fc), and finally the length of time a civilisation remains detectable (L).

It's literally just multiplying a bunch of numbers together - say there are a million planets in our Galaxy but only a tenth are like Earth and only a tenth of those actually have life, then that's 10,000 planets with life. Easy peasy. Of course, working out what those numbers are is much, much more difficult.
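That toy calculation is trivially easy to sketch in code (a Python illustration; the numbers are the made-up ones from the example above, not real astronomical estimates) :

```python
# Toy Drake-style multiplication using the made-up numbers above.
n_planets = 1_000_000    # planets in our Galaxy (illustrative only)
f_earthlike = 0.1        # fraction that are Earth-like
f_life = 0.1             # fraction of those that actually have life

n_with_life = n_planets * f_earthlike * f_life
print(n_with_life)  # 10000.0 - easy peasy; the hard part is the inputs
```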

A modified version of the Drake Equation makes for a pretty good way of explaining how I decide who to vote for. I don't normally actually set about plugging in numbers on a calculator, but this pretty well approximates (I think) what I'm doing unconsciously... and I suspect this is true in general as well. It might also be a handy way of explaining to people how you made your decision in (if we're very lucky) a less chest-thumping way. Maybe it'll even help you analyse your own thinking...

V is a number which determines how much credibility you should give to the prospect of voting for any given party; in terms of the parameters defined below, V = I × P × A × T × B × E × R. The higher the number the better, but it all depends on how each party fares - you have to evaluate this equation for all parties and compare their relative scores. Only if they all get an equal voting score should you consider not voting at all (let's ignore tactical voting considerations for the moment).

Each of the other parameters represents some reason you have to vote for or against that party - low numbers mean you shouldn't vote for them, high numbers mean you should. To keep things simple let's let those numbers all run from 0 to 10. This makes the minimum overall V score zero and the maximum ten million. But a score in the millions would be very rare, and the score unfortunately doesn't vary in a nice proportional way - but absolute values are far less important than the ranks here.

Still, to get a feel for the numbers, if you assigned equal values to each parameter, V would just be x^7, where x is the value of each parameter. You can see an interactive plot of this here (or make do with the static one below, if you want). A totally useless party with all parameters of 2.5 would get a score of about 600, a mediocre one with all parameters equal to 5 gets about 80,000, and a really good one with 7.5 in each category gets about 1.3 million. The values differ dramatically, but, to emphasise this point, even if all parties score badly this doesn't mean you can't pick the lesser of evils - unless their scores are all very similar.
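Those quoted values are easy to verify (a quick Python check of the equal-parameter cases from the paragraph above) :

```python
# With all seven parameters equal to x, V = x**7.
for x in (2.5, 5, 7.5):
    print(x, "->", round(x ** 7))  # roughly 610, 78125 and 1.33 million
```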

Because everything is multiplied, a value of zero for any reason means you definitely shouldn't vote for that party regardless of any other considerations. This is because I consider all these parameters to be absolutely essential. In practice, though, it should be very rare that you ever actually give a party a score of 0 or 10 for anything (but you can give very extreme fractional values, of course, like 0.001 or 9.999). Of course, it's open to debate whether each category should really allow the same maximum value, but this'll do as a start.
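As a minimal sketch of how the whole thing behaves (in Python; the function name and range checks are my own invention, not part of the original equation), note how a single zero vetoes a party outright :

```python
# Sketch of the voting equation: V is the product of the seven scores,
# each of which must lie between 0 and 10.
def voting_score(i, p, a, t, b, e, r):
    scores = (i, p, a, t, b, e, r)
    if not all(0 <= s <= 10 for s in scores):
        raise ValueError("each score must be between 0 and 10")
    v = 1.0
    for s in scores:
        v *= s
    return v

# One zero wipes out an otherwise excellent party:
print(voting_score(9, 9, 9, 9, 9, 9, 0))  # 0.0
print(voting_score(5, 5, 5, 5, 5, 5, 5))  # 78125.0
```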

What I like about this is that it accounts for much more than just stated policies. Policies are irrelevant if you don't trust the party or if you think they don't have the ability to enact them. Of course, the downside is that this is necessarily a self-analysis : no-one can objectively measure how much you trust a party. But self-analyses are extremely useful as long as you make a sincere effort to understand why you really came to a conclusion, and this equation has the additional advantage of being flexible and easily modified.

The criteria I've chosen for it are as follows. When evaluating them, keep in mind precisely who each party would be likely to put in power. For example, don't give a party a high Ability score just because you think they have lots of talented people - rate this value according to who you think they're actually likely to put in charge. And remember, you're not trying to rate each party by some absolute system, but only in terms of how closely their ideas and values match your own - or even just by how well they match the characteristics of a party you want in government (for example you might be a personal pacifist, but accept that political leaders generally have to act less perfectly). Anyway, here are the assessment criteria :

I = Idealism
Do the party's ideals match your own ? This is more fundamental than current stated policies; it's more about the soul of the party - if it has one. Do you believe they are, overall, trying to do things you basically agree with, even if you don't accept some individual policies ? An extreme example might be agreeing with the policies of an avowedly religious party while not being part of that religion yourself. You could also think of this as the long-term "climate" of a party, as opposed to the short-term "weather" of its specific policies.
You'll want to weight this one appropriately. For example, if you don't agree with their ideology but only care about specific policy, increase this score to compensate.

P = Policies
Do you agree with the party's policies - are they practical, sensible ways of implementing their ideals ? Unlike many of the other parameters you can objectively measure how much you agree with their policies, including the weighting for the importance of each policy (this is very important !), through simple testing. This one (click the country at the top right) gives you a score as a percentage, so just divide it by ten to use it in this equation. This should also give you at least a handle on ideology, though ultimately only you can decide if you truly agree with the party's ideals or not.

A = Ability
Does this party have the necessary skills in order to implement their stated policies ? If you don't think they're capable of fulfilling their grandiose promises then there's no point voting for them.

T = Trust
You may think that your party has the right ideology, its policies are correct and its members are highly intelligent... but this doesn't mean you trust them. Maybe they could do what they say, but you don't think they actually will. Especially for smaller parties, this can be more complex than whether you think party members are decent people - you might think that when they got into government they might be forced to make compromises they didn't want to make.

B = Behaviour
Does the party behave in a sensible way in other aspects not related to its stated principles and governmental policies ? For example, does it select its representatives for government in a way you agree with ? Does it allow members to vote freely or does it enforce votes based on party policy, and do you agree with this ? Does it manage itself well ? Do they try and convince undecided people using well-reasoned arguments or by appeals to base emotion ? If they adopt a, "the beatings will continue until morale improves" approach, then for me there'd be no point voting for such a party.

E = Evidence
Will this party act appropriately as new evidence is presented ? That is, will it change policy if the evidence goes against it or devise new policies accordingly, and will it do so because of the evidence rather than to satisfy voters ? Another term might be "sincerity". Because, you see, the evidence on at least some policies most certainly will change, so I consider it essential that a party has the flexibility to be able to deal with that.

R = Respect
Will this party treat its opponents with the appropriate degree of respect ? Will it seek to gain unfair advantage to crush its rivals or will it try and build a genuine consensus - insofar as that's possible - through persuasion, negotiation and compromise ? In your judgement, does it act correctly to deal with those who continue to disagree with party policy after all attempts at persuasion have been exhausted ? Of course, this doesn't necessarily preclude being extremely harsh to its detractors - as with all the parameters, the question is whether you think this is a good thing or not.

A Worked Example

So, let's apply this to the real world. Here are my own numbers for the major political parties in the UK. I used numbers from isidewith for the Policies index. The rest are of course my own judgement - part of the usefulness here is to be able to communicate reasons as well as self-analyse.

EDIT : I considered a way to account for the importance users attach to each parameter in the discussion here. I'm not entirely happy with the result - it still needs a bit of work, though it's probably close to what I was aiming at. Weighting each parameter makes things more complicated since it makes the maximum possible value unavoidably variable. A better approach for the end result, which probably needs to be used anyway, is to make the final score a percentage of the maximum possible, or better yet a percentage of the maximum score actually achieved. Should I ever discover a way to create interactive spreadsheets for multiple users, I'll try and solve this and update this post accordingly.

Now of course you have only my word for it, but while I've already decided to vote for the Liberal Democrats, I was surprised at the incredibly decisive result in their favour. Especially so since Labour do better on policy according to this. I was expecting to have to artificially reduce the policy scores of both the Tories and Labour because I don't think the importance of Brexit is adequately utilised in the isidewith measurements; I'd probably opt to roughly halve the policy scores of every party apart from the Lib Dems. In my view, trying to get a good deal on Brexit won't work and so all other parties are advocating for madness. This more-or-less wipes out all of their other policies since we'd be spending years up the proverbial creek - it doesn't really matter if you've got a paddle or not if your boat is leaking and the piranhas are circling.

What you can also see here is that some parties do badly (in my estimation) on a range of factors, while some have just a few critical weaknesses. UKIP are irredeemable - I hate pretty nearly everything about them. Plaid Cymru do well in many areas but fail largely on evidence and ability : in my opinion, Welsh independence/nationalism is utterly barking mad, and they haven't convinced me they know anything much about their other policies either. A subtle point that the numbers can't show is that (especially for the Greens) sometimes I don't necessarily disagree with their conclusions, I just think that they're far more ideologically driven than evidence-based. Had the Greens persuaded me that their policies really were driven by the evidence, they'd be competitive with the Lib Dems.

Labour fail for me this time round based largely on their "leader". He has destroyed my trust that the party will do what they say, never mind whether their policies are a good thing. I don't think he's an intelligent man - he doesn't seem to have changed his views in his life - and I think he deals incredibly poorly with people who disagree with him. For me, the "I'm not leaving" after losing the vote of no confidence was a point of no return : I really just do not - indeed, cannot - understand how anyone can think such a man is trustworthy after that. But then, one man's democratic vote is another man's coup... Anyway, that's why Labour do badly in so many parameters.

The Liberal Democrats do surprisingly well here because they have no major weak points. This raises the obvious question : have I over-estimated any of these points, giving them an unfairly high score ? Or indeed have I under-estimated any points from the other parties ?

Of course there's a margin of error on all of these. I won't go through them all because that'd take all day, but a few points are worth mentioning :

I'd consider reducing the P value for all parties except the Liberal Democrats and UKIP based on their stated Brexit stance and the exceptionally high importance I place on this.

I could perhaps increase my ratings for the Tories' ability and trust scores with a new leader and cabinet, but I doubt I'd ever change any of their other scores much, so they'd always get a low overall result.

Labour could improve all of their weak points with a better leader and shadow cabinet - they have by far the most to gain here.

The two points of the Lib Dems I'm most flexible on are ability and trust - I could knock them down to 5 and 4 respectively, but that still gives them a whopping 367,696.

The Greens have very plausible scope for improvement - they need leaders more skilled in rhetoric to convince me of their abilities and critical thinking skills.

The SNP and Plaid Cymru are unsalvageable because I believe independence is in both cases a mad idea, just madder for Wales because we're a silly place.

UKIP are the sort of "party" at which everyone gets drunk and goes home with loss of limb and several terminal sexually transmitted diseases, or in other words oh God no never.

In short, this time round there's no point me considering anyone besides the Liberal Democrats, because all the other parties fall far short. Next time, Labour could plausibly be re-aligned with my own views, as could the Greens. Whether that will actually happen or if the parties all just collapse entirely remains to be seen.

Tactical
What I haven't considered here is tactical voting : should you vote for the party you agree with based on the party as a whole - the combination of policy, idealism, and pragmatism described above - or on the basis of which one you think has more of a chance of getting elected ?

That's much harder. The simplest way to modify the equation would be to add an extra parameter - call it W, say, for winnability, since T is already taken by Trust - which describes how likely it is you think your vote will actually help them get elected. Then if you find that this reduces V for a party to a level below that of others you also find acceptable, you should consider voting for one of those instead. But I dislike this kind of logic - it promotes groupthink, so that the smaller parties get much less of a chance for breakthroughs.

It's also not easy to factor in the nationwide voting probabilities versus the vote in a particular constituency. For instance, say you hate the Green party and you're in a Labour-Green marginal. You really want to vote Tory but they only get 2% of the vote in that constituency. Labour, the polls indicate, have a real shot at winning the whole election but you hate them too, though not as much as the Greens. Should you vote for Labour (to prevent the loathed Greens winning) or the Greens (to help deny Labour a government) or stick with your principles and vote Tory ? Not an easy decision.

One way to proceed might be to construct three lists : the parties as ranked by the PDE, the opinion polls for your constituency, and the nationwide opinion polls. Then you have to choose the party which, if it wins, will a) produce a nationwide result you'd find acceptable, or least awful; b) has the greatest chance of winning in your constituency, or is the least unlikely of the parties you'd consider voting for; and c) is the "best of the rest" according to the PDE. Satisfying all three criteria gets tricky, and is highly vulnerable to errors in local opinion polls. Still, in principle the PDE can also be used to help you choose the least bad option, though personally if I think every party is genuinely and similarly shite I'd advocate not voting for any of them.
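That procedure can be sketched roughly as follows (in Python; every party name, poll figure and PDE score here is invented purely for illustration) :

```python
# Hypothetical example data: PDE scores, local constituency polls, and
# which parties you'd find acceptable as a nationwide result.
pde = {"LibDem": 500_000, "Labour": 80_000, "Green": 40_000, "Tory": 600}
local_poll = {"Labour": 0.40, "Green": 0.38, "LibDem": 0.15, "Tory": 0.02}
acceptable_nationally = {"LibDem", "Labour"}

# Restrict to nationally acceptable parties, then prefer the one most
# likely to win locally, breaking ties by PDE score.
candidates = [p for p in local_poll if p in acceptable_nationally]
vote = max(candidates, key=lambda p: (local_poll[p], pde[p]))
print(vote)  # Labour - tactically best here, even though the PDE prefers LibDem
```

This makes the tension explicit : the PDE alone would say Liberal Democrat, but the local numbers push the tactical choice elsewhere.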

To my mind, tactical voting often makes good sense, but not always. If the political situation is relatively stable - no unusually large issues dominating the scene, only small changes in Parliament predicted - then it probably does make sense to vote tactically - either to deny the government you don't want a seat, or to elect the lesser of two evils. But if it isn't stable, if there's a pivotal issue in the air, then personally I think tactical voting is less sensible. Only one party is saying what I support on the most important political issue right now - none of the others come close. The political scene, I believe, is not at all stable. And if I were to adjust the P index to account more accurately for the importance of Brexit, I believe the Labour and Tory votes would be far closer to each other. So voting for Labour barely gets me a party that's any better than the Tories, overall - even though the majority of Labour policies are far superior to Tory ones, in my view.

Conclusion
Is this, then, the one objectively correct way to choose your vote ? Good God no ! You must decide for yourself if any of this has been useful or merely a way to justify my decision. That's the risk of self-analyses, of course : if you do them right you can realise that you've come to a stupid conclusion and change your mind, but if you do them wrong you just make your own filter bubble stronger.

If you want to try this exercise for yourself, you shouldn't be at all worried if you get a result which disagrees with what you expect. If that happens, what you should try to do is to consider each parameter again and try and decide if you've rated it correctly. It's absolutely fine to tweak parameters to get the result you want... provided you think carefully about why you're doing so. The main goal is to get you to carefully consider how you've formed your own conclusions. And of course, if you do think you've set the numbers correctly, maybe it's time to re-evaluate who you'll be voting for.

A caveat that I've only mentioned in passing is the relative importance of each parameter. You might think that policies are absolutely paramount and the others count for nothing, but other people might think that all the criteria have roughly equal value. Currently the PDE doesn't account for this, but I'm working on a way to implement it. You should be able to effectively remove certain parameters if you want - you'll be able to rank parties just by ability and policies if you so desire. This is in development, so watch this space.

While of course the qualities I've suggested reflect my own views on how to vote, weighting would be able to compensate for that (do please suggest any other fundamental properties you think I've missed). For example, which part of "representative democracy" do you prefer ?
Those who favour democracy believe in direct rule by the people, with politicians as empty vessels that serve only to enact the will of the people. These people would rate Policies very highly while giving very low or zero scores to Idealism and Evidence, as these two qualities are decided by the people, not the politicians. They would probably rate Respect and Behaviour rather poorly as well - they want politicians to do as they're told, not think for themselves.
Conversely, those who favour representation want the politicians to speak for their values rather than specific policies, so they would weight Policies very low while giving Idealism the highest rating, and probably giving Respect, Evidence and Behaviour high scores. They want politicians who are more expert than themselves at dealing with the specifics but who fundamentally have their best interests and ideologies at heart. Most people, of course, are probably somewhere between the two extremes.
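One possible implementation of that weighting - purely my own sketch, treating each weight as an exponent and normalising against the maximum possible score, in the spirit of the percentage approach mentioned in the EDIT earlier - would be :

```python
# Sketch of a weighted PDE: raise each 0-10 score to its weight exponent,
# then express the result as a percentage of the maximum possible value
# (all scores at 10). A weight of 0 removes a parameter entirely.
def weighted_pde(scores, weights):
    v, v_max = 1.0, 1.0
    for name, w in weights.items():
        v *= scores[name] ** w
        v_max *= 10.0 ** w
    return 100.0 * v / v_max

# Example: a voter who cares only about Policies and Ability.
weights = {"I": 0, "P": 1, "A": 1, "T": 0, "B": 0, "E": 0, "R": 0}
scores = {"I": 3, "P": 8, "A": 6, "T": 5, "B": 5, "E": 5, "R": 5}
print(weighted_pde(scores, weights))  # 48.0 - percent of the maximum
```

The percentage normalisation keeps results comparable even when different voters use different weights, which is exactly the problem the variable maximum otherwise causes.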

As to my own numbers you, dear reader, have basically three possible responses. The first is to examine the method and suggest improvements or dismiss it entirely. If you find that it's basically sound, your second option (if you disagree with my voting choice) is to persuade me why I'm wrong about the individual criteria for the other parties. If you agree with both the method and (roughly) the numerical values I've assigned, then your only remaining option to change my voting preference is to explain the flaw in my tactical voting argument. So at the very least, what this does is show as clearly as possible how I've reached my decision and lays out precisely what you can do to change my mind. Good luck.