
In a relatively recent discussion, the topic of applied probability entered the fray, so I am taking the time to discuss the issue. The discussion centres on a reflection of the debacle that was the 2016 US presidential election. Vitriolic politics aside, one notes that, objectively, the prediction of the election’s outcome was an abject failure. Nearly all news outlets and pundits gave the Democrats a better chance of winning, and of course, they were all ‘wrong’.

There is a lot of contention over the use of the word ‘wrong’ in this context. After all, just because there is a 95% chance that something will happen doesn’t mean it’s guaranteed to happen. However, I am claiming that assigning a ‘probability’ to something like a presidential election fundamentally misapplies the principles of probability to a real-world phenomenon.

Probability has been an absolutely invaluable tool in the sciences and economics, because it helps us understand phenomena at all scales and interpret problems where we are given only partial information. But the underlying principle of probability is that events are similar, in the following sense: each time you flip a coin, there is no reason to believe it’s different from any other time you flip a coin, so the outcomes should behave similarly. Therefore, one can concretely say that if you flip a coin a large number of times, you should expect a certain proportion of the flips to come up heads, and this proportion is the ‘probability’ that a single coin flip will give heads. This reasoning applies in any setting where similar, indistinguishable events happen in large numbers. Even for coin flips we know that no two flips are exactly the same (for example, the temperature of the room may differ slightly, the fatigue level of your hand may differ slightly, etc.), but we understand that these extra factors should be negligible. This is what allows us to apply, for example, probabilistic models to consumer behaviour even though we know that each person is different.
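This frequentist picture can be sketched in a few lines of Python: simulate many fair coin flips and watch the observed proportion of heads settle near 1/2 (the sample sizes here are purely illustrative).

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def heads_proportion(n):
    """Flip a fair coin n times and return the observed proportion of heads."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# The proportion wanders for small n and stabilizes near 0.5 for large n.
for n in (100, 10_000, 1_000_000):
    print(n, heads_proportion(n))
```

The stabilization of the printed proportions is exactly the sense in which repeated, similar events ground a probability.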

However, this basic principle is violated in models trying to predict large, singular events like elections. There is no reason to expect that a particular presidential election (the 2016 election of all elections) has any parallel in history or in the future. The candidates are completely different, the general political atmosphere is completely different, demographics are changing, etc. Thus the very notion of ‘probability’ is bunk here: you can’t repeat the event to check whether your guess is correct, and models that cannot be tested by repetition usually aren’t considered valid in science. Thus, one should step away from the pseudo-mathematical (mis)application of probability to this subject and instead focus on whether one can actually predict the outcome of an election. On that note, this guy seems to have the right idea (see this article).

In a recent discussion with a friend, I talked about why the ability of polls and markets to predict political outcomes seems to have taken a beating as of late. I described an effect whereby the more faith people place in a certain predictive tool, and the more knowledge of it they gain, the less reliable it becomes. I vaguely remember hearing about it somewhere, but when pressed for an academic source, I came up short. I couldn’t even give a name for it, which leads to the disturbing possibility that I simply made it up.

However, weeks later, I found a name for it in a rather unorthodox source: this video on the video game Hearthstone. At around the 11-minute mark, Trump talks about something he names the “professional mage effect”, which I have shortened to simply the professional effect, and which I believe is one of the main causes of the crumbling accuracy of polls and other statistical tools as of late. Trump summarizes the effect fairly succinctly in the video, but for those of you who don’t speak the language of Hearthstone, I will give an explanation.

In Hearthstone there are nine playable classes, each with a distinct set of accessible cards and hero abilities. In the current incarnation of the game, the mage class was given several new cards that gave it a significant advantage over the other classes. This meant that whenever it was possible to select mage, people tended to select mage; the difference in win rate between mage and the other eight classes was staggering. As a consequence, Blizzard introduced changes to try to balance the game, most notably removing several problematic mage cards. However, a month after the changes, mage still maintained a sizeable lead. This is explained by the ‘professional mage effect’: strong players, who tend to read the statistics rather than choose on a whim or by word of mouth, still prefer mage because of its previously high win rate; and since these players tend to be the strongest, their superior skill allows them to win more, which is conflated with the class itself being strong.

It appears that a similar phenomenon is at play in high-frequency trading. Whenever too many algorithms use similar methods to choose stocks, a slight advantage is exaggerated, causing many buyers to pile in and significantly overvaluing the stock. This creates a lot of uncertainty in the valuation of stocks and makes the market ripe for bubbles.

This seems to give a rigorous justification that a diversity of ideas is quantitatively valuable.

Ethics questions often give a scenario (often unrealistic and missing key details) where you have to choose between a set of difficult options. For example: “an old lady with no family and a young man with a promising career who is loved by many people are both admitted to the ER. Whom would you prioritize treating if their probabilities of survival are the same?”

However, sometimes the answer is quite a bit more straightforward. The question I saw is the following: Suppose that an intelligent machine has the ability to predict, with 99.99% accuracy, whether someone will commit murder. Would it be permissible to arrest people based on the predictions of the machine?

There seems to be no ‘correct’ way to answer this question, but that’s because the person who asked it doesn’t understand statistics. There is a phenomenon in statistics called the false positive paradox: when the base rate of true positives is very low, even a very accurate test will produce many more false positives than true positives.

Here is an example. You are a unique person, one of say 7 billion. Suppose that there is a machine that can identify someone with 99.9999% accuracy. A person gets scanned by the machine, and the machine says it’s you. What is the probability that the machine is right?

There are two ways for the machine to give that reading: either the person scanned really is you and the machine is accurate, or the person is not you and the machine malfunctioned. The machine is accurate with probability 99.9999%, and the probability that the person scanned is you is one in 7 billion; the machine is wrong with probability 0.0001%, and the probability that the person is not you is 6,999,999,999 out of 7 billion. Thus, by Bayes’ rule, the probability that the person actually is you, given that the machine says it scanned you, is given by the equation

$$\mathbb{P}(\text{you}\mid\text{match}) = \frac{0.999999\cdot\frac{1}{7\times 10^{9}}}{0.999999\cdot\frac{1}{7\times 10^{9}} + 0.000001\cdot\frac{6{,}999{,}999{,}999}{7\times 10^{9}}}.$$

Evaluating, the probability that the machine is right is less than 0.015%. The paradox arises because the machine’s error rate, small as it is, is still enormous compared to the astronomically small prior probability that a given person is you.
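As a quick sanity check, here is the same Bayes’ rule computation in a few lines of Python (the numbers are exactly those from the example above):

```python
accuracy = 0.999999             # machine gives the correct answer with this probability
prior = 1 / 7_000_000_000       # chance that a randomly scanned person is you

# P(actually you | machine says it's you), by Bayes' rule
numerator = accuracy * prior
denominator = accuracy * prior + (1 - accuracy) * (1 - prior)
p_you = numerator / denominator
print(f"{p_you:.6%}")  # well below 0.015%
```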

Now back to the question. Whether it is ethical or not (that is, whether the machine produces a desirable result an acceptable proportion of the time) depends on the actual murder rate of a place. The global average is currently 6.2 murderers per 100,000 people, or 31 per half a million. Assuming the machine has 99.99% accuracy, the probability that a person identified by the machine as a murderer actually is one is given by

$$\mathbb{P}(\text{murderer}\mid\text{flagged}) = \frac{0.9999\cdot\frac{6.2}{100{,}000}}{0.9999\cdot\frac{6.2}{100{,}000} + 0.0001\cdot\left(1-\frac{6.2}{100{,}000}\right)},$$

or roughly 38.3%. Hence the machine is right far less than half the time, well below what could be considered reasonable. This question therefore has a concrete answer and is not really debatable.
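To see how strongly the answer depends on the base rate, here is a small Python sketch that evaluates the posterior probability for the global average and for a couple of hypothetical murder rates (the rates other than 6.2 per 100,000 are made up for illustration):

```python
def posterior(accuracy, base_rate):
    """P(true positive | test says positive), by Bayes' rule."""
    tp = accuracy * base_rate
    fp = (1 - accuracy) * (1 - base_rate)
    return tp / (tp + fp)

# Global average murder rate: 6.2 per 100,000 -> posterior roughly 0.383
print(posterior(0.9999, 6.2 / 100_000))

# Hypothetical lower and higher rates, for comparison
for rate in (1 / 100_000, 50 / 100_000):
    print(rate, posterior(0.9999, rate))
```

The posterior rises with the base rate, which is exactly why the ethics of the machine depends on the murder rate of the place where it is deployed.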

In Hearthstone, a popular online card game, there is a minion called C’Thun which, when played, deals damage equal to its attack value, one point at a time, to randomly chosen enemies. This prompted a popular streamer named Trump (not the insufferable presidential candidate) to ask the following question on his channel: when playing a C’Thun with 12 attack (so it will deal 12 damage, one point at a time), what is the probability that it kills a 6-health minion, with no other minions on the board?

For those who don’t know how Hearthstone works, the question can be rephrased as follows: you flip a coin 12 times in a row, recording heads or tails each time. As long as you have fewer than 6 heads, the coin is fair and heads and tails are equally likely on the next flip. But as soon as you hit 6 heads, the coin becomes unfair and can only come up tails from then on. (Each head is a point of damage dealt to the minion, each tail a point dealt to the enemy hero; once the minion is dead, all remaining damage must go to the hero.) The question Trump (real name Jeffrey Shih) asks is then equivalent to: what is the probability that you get 6 heads?

To calculate this probability, we can ask the reverse question: what is the probability that we never reach six heads? If we never reach 6 heads, the flips behave exactly as if the coin were always fair, so this is the same as the probability of getting at most five heads in 12 fair coin flips:

$$\frac{1}{2^{12}}\sum_{k=0}^{5}\binom{12}{k} = \frac{1586}{4096} \approx 38.7\%.$$

The probability that C’Thun kills the minion is therefore $1 - \frac{1586}{4096} = \frac{2510}{4096} \approx 61.3\%$.
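The answer can be checked both exactly and by simulation; here is a minimal Python sketch, with the game mechanics reduced to the coin model described above:

```python
import random
from math import comb

# Exact: P(at most 5 heads in 12 fair flips) -- the minion survives.
p_survive = sum(comb(12, k) for k in range(6)) / 2**12
p_kill = 1 - p_survive
print(p_kill)  # 2510/4096, about 0.613

# Monte Carlo check: deal 12 points of damage, each going to the minion
# with probability 1/2 until it has taken 6, after which every remaining
# point must hit the hero.
def cthun_kills(attack=12, health=6):
    hits = 0
    for _ in range(attack):
        if hits < health and random.random() < 0.5:
            hits += 1
    return hits >= health

random.seed(1)
trials = 100_000
estimate = sum(cthun_kills() for _ in range(trials)) / trials
print(estimate)
```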

Recently the following article appeared on my Facebook feed: https://www.quantamagazine.org/20160628-peter-scholze-arithmetic-geometry-profile/. Aside from the usual bland praise of an archetypal ‘genius’, the article contains some genuine insights. The most striking is Scholze’s description of how he learned the proof of Fermat’s Last Theorem by Sir Andrew Wiles: he “worked backward, figuring out what he needed to learn to make sense of the proof”. Later he adds things like “I never really learned the basic things like linear algebra, actually – I only assimilated it through learning some other stuff.”

If you have experience learning mathematics at the senior undergraduate or postgraduate level, you will likely find that his experience is orthogonal to your own. We spend an inordinate amount of time learning the ‘basics’ of various subjects, linear algebra very much included, in order to do research… or so we are told. If you have passed the part of your career where coursework dominates self-learning, then you have likely reached the epiphany that it is usually not efficient to learn everything there is to know about a subject before actually doing work on it.

Some of you have had advisors tell you things like “read these five books (each 300+ pages) before you attempt any research work in the area”. Sometimes this advice comes from highly proficient researchers, which seems odd: if the above ethos is ubiquitous among researchers, why tell your students to do something totally different and entirely more dreadful? I am not sure what the right answer is, but part of the reason is probably a misguided attempt to make research ‘easier’ for students. Perhaps many advisors recall struggling to understand ‘simple’ phenomena in their own research careers, struggles that would have been trivial to overcome if someone had just told them to read a certain book, or if they had been better prepared. Perhaps they wish to save their students some time by pointing out the shortcut. However, the struggle to understand phenomena on your own is part of what makes research rewarding, and more importantly, it is critical in forging a mind suited to making discoveries.

Of course, I am but a pebble to the avalanche that is Peter Scholze, so my advice may not be worth much. Nevertheless, I feel I should say this to all prospective and current graduate students: be bold, and give every difficult paper in your field a read. Don’t be intimidated. If you don’t understand something, google it until you find what you need to learn the language of the subject. Don’t feel like you need to understand all of Hartshorne before you can read any research papers related to algebraic geometry. Your future self will thank you.

At the recent CNTA conference, Professor Joe Silverman gave an explicit homomorphism of into which he jokingly called a great question to ask undergraduates to work out explicitly… if you don’t like them very much. In a similar vein, here is a question that one might ask undergraduates one doesn’t particularly like:

Let be a triple of co-prime integers which are not all zero and such that . Prove that the ternary quadratic form given by:

takes on integer values whenever the triplet lies in the lattice defined by the congruence conditions

and

I will reveal the solution in due time (and it does not involve explicit computation of congruences), but if you come up with a solution, let me know!